Test Report: KVM_Linux_crio 19008

                    
a618818e4540e3b7209a51bdf46a3b81113887e7:2024-06-03:34738

Failed tests (30/318)

Order  Failed test  Duration (s)
30 TestAddons/parallel/Ingress 152.29
32 TestAddons/parallel/MetricsServer 349.03
45 TestAddons/StoppedEnableDisable 154.34
164 TestMultiControlPlane/serial/StopSecondaryNode 141.92
166 TestMultiControlPlane/serial/RestartSecondaryNode 62.48
168 TestMultiControlPlane/serial/RestartClusterKeepsNodes 383.75
169 TestMultiControlPlane/serial/DeleteSecondaryNode 19.4
171 TestMultiControlPlane/serial/StopCluster 172.4
231 TestMultiNode/serial/RestartKeepsNodes 311.88
233 TestMultiNode/serial/StopMultiNode 141.19
240 TestPreload 265.99
248 TestKubernetesUpgrade 390.11
327 TestStartStop/group/old-k8s-version/serial/FirstStart 273.52
345 TestStartStop/group/embed-certs/serial/Stop 139.1
349 TestStartStop/group/no-preload/serial/Stop 138.99
351 TestStartStop/group/default-k8s-diff-port/serial/Stop 139.04
352 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 12.39
353 TestStartStop/group/old-k8s-version/serial/DeployApp 0.48
354 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 83.52
356 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 12.38
357 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 12.38
362 TestStartStop/group/old-k8s-version/serial/SecondStart 753.07
363 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 544.94
364 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 546.09
365 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 544.23
366 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 543.46
367 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 345.38
368 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 426.84
369 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 316.61
370 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 146.72
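To reproduce one of these failures outside of CI, the usual approach is to run that test by name with the Go test runner. A minimal sketch, assuming a checkout of the minikube source tree with its integration tests under ./test/integration and a prebuilt out/minikube-linux-amd64 binary; the package path and the timeout below are assumptions, not taken from this report:

	# Re-run only the first failed case; -run takes a regexp matched against the test/subtest names.
	go test -v -timeout 90m ./test/integration -run 'TestAddons/parallel/Ingress'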
TestAddons/parallel/Ingress (152.29s)
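The failing step in the log below is the HTTP probe of the deployed nginx ingress: addons_test.go:264 runs curl inside the VM via minikube ssh, and the remote command exits with status 28, which corresponds to curl's operation-timed-out error. A hedged sketch of the same probe, built only from the command that appears in the log (profile name and Host header are taken from this run):

	# Same check the test performs; exit status 28 from curl indicates the request timed out.
	out/minikube-linux-amd64 -p addons-926744 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"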

                                                
                                                
=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-926744 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-926744 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-926744 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [2491ce04-859e-4df5-a082-1f95450cf4b1] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [2491ce04-859e-4df5-a082-1f95450cf4b1] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 11.003856067s
addons_test.go:264: (dbg) Run:  out/minikube-linux-amd64 -p addons-926744 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:264: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-926744 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m8.65433633s)

** stderr **
	ssh: Process exited with status 28

** /stderr **
addons_test.go:280: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:288: (dbg) Run:  kubectl --context addons-926744 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-linux-amd64 -p addons-926744 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.168.39.188
addons_test.go:308: (dbg) Run:  out/minikube-linux-amd64 -p addons-926744 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:308: (dbg) Done: out/minikube-linux-amd64 -p addons-926744 addons disable ingress-dns --alsologtostderr -v=1: (1.601593944s)
addons_test.go:313: (dbg) Run:  out/minikube-linux-amd64 -p addons-926744 addons disable ingress --alsologtostderr -v=1
addons_test.go:313: (dbg) Done: out/minikube-linux-amd64 -p addons-926744 addons disable ingress --alsologtostderr -v=1: (7.945306698s)
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-926744 -n addons-926744
helpers_test.go:244: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-926744 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-926744 logs -n 25: (1.256412143s)
helpers_test.go:252: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only                                                                     | download-only-238243 | jenkins | v1.33.1 | 03 Jun 24 10:38 UTC |                     |
	|         | -p download-only-238243                                                                     |                      |         |         |                     |                     |
	|         | --force --alsologtostderr                                                                   |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.1                                                                |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	| delete  | --all                                                                                       | minikube             | jenkins | v1.33.1 | 03 Jun 24 10:38 UTC | 03 Jun 24 10:38 UTC |
	| delete  | -p download-only-238243                                                                     | download-only-238243 | jenkins | v1.33.1 | 03 Jun 24 10:38 UTC | 03 Jun 24 10:38 UTC |
	| delete  | -p download-only-730853                                                                     | download-only-730853 | jenkins | v1.33.1 | 03 Jun 24 10:38 UTC | 03 Jun 24 10:38 UTC |
	| delete  | -p download-only-238243                                                                     | download-only-238243 | jenkins | v1.33.1 | 03 Jun 24 10:38 UTC | 03 Jun 24 10:39 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-373654 | jenkins | v1.33.1 | 03 Jun 24 10:39 UTC |                     |
	|         | binary-mirror-373654                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                      |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                      |         |         |                     |                     |
	|         | http://127.0.0.1:46559                                                                      |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-373654                                                                     | binary-mirror-373654 | jenkins | v1.33.1 | 03 Jun 24 10:39 UTC | 03 Jun 24 10:39 UTC |
	| addons  | enable dashboard -p                                                                         | addons-926744        | jenkins | v1.33.1 | 03 Jun 24 10:39 UTC |                     |
	|         | addons-926744                                                                               |                      |         |         |                     |                     |
	| addons  | disable dashboard -p                                                                        | addons-926744        | jenkins | v1.33.1 | 03 Jun 24 10:39 UTC |                     |
	|         | addons-926744                                                                               |                      |         |         |                     |                     |
	| start   | -p addons-926744 --wait=true                                                                | addons-926744        | jenkins | v1.33.1 | 03 Jun 24 10:39 UTC | 03 Jun 24 10:42 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                      |         |         |                     |                     |
	|         | --addons=registry                                                                           |                      |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                      |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                      |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                      |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                                              |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                      |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                      |         |         |                     |                     |
	|         | --addons=helm-tiller                                                                        |                      |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-926744        | jenkins | v1.33.1 | 03 Jun 24 10:42 UTC | 03 Jun 24 10:42 UTC |
	|         | -p addons-926744                                                                            |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-926744 addons disable                                                                | addons-926744        | jenkins | v1.33.1 | 03 Jun 24 10:42 UTC | 03 Jun 24 10:42 UTC |
	|         | helm-tiller --alsologtostderr                                                               |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| ip      | addons-926744 ip                                                                            | addons-926744        | jenkins | v1.33.1 | 03 Jun 24 10:42 UTC | 03 Jun 24 10:42 UTC |
	| addons  | addons-926744 addons disable                                                                | addons-926744        | jenkins | v1.33.1 | 03 Jun 24 10:42 UTC | 03 Jun 24 10:42 UTC |
	|         | registry --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p                                                                 | addons-926744        | jenkins | v1.33.1 | 03 Jun 24 10:42 UTC | 03 Jun 24 10:42 UTC |
	|         | addons-926744                                                                               |                      |         |         |                     |                     |
	| addons  | disable cloud-spanner -p                                                                    | addons-926744        | jenkins | v1.33.1 | 03 Jun 24 10:42 UTC | 03 Jun 24 10:42 UTC |
	|         | addons-926744                                                                               |                      |         |         |                     |                     |
	| ssh     | addons-926744 ssh curl -s                                                                   | addons-926744        | jenkins | v1.33.1 | 03 Jun 24 10:42 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                      |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                      |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-926744        | jenkins | v1.33.1 | 03 Jun 24 10:42 UTC | 03 Jun 24 10:42 UTC |
	|         | -p addons-926744                                                                            |                      |         |         |                     |                     |
	| ssh     | addons-926744 ssh cat                                                                       | addons-926744        | jenkins | v1.33.1 | 03 Jun 24 10:42 UTC | 03 Jun 24 10:42 UTC |
	|         | /opt/local-path-provisioner/pvc-c91d9397-ba00-4758-81d9-86e4e7e60cde_default_test-pvc/file1 |                      |         |         |                     |                     |
	| addons  | addons-926744 addons disable                                                                | addons-926744        | jenkins | v1.33.1 | 03 Jun 24 10:42 UTC | 03 Jun 24 10:42 UTC |
	|         | storage-provisioner-rancher                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-926744 addons                                                                        | addons-926744        | jenkins | v1.33.1 | 03 Jun 24 10:43 UTC | 03 Jun 24 10:43 UTC |
	|         | disable csi-hostpath-driver                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-926744 addons                                                                        | addons-926744        | jenkins | v1.33.1 | 03 Jun 24 10:43 UTC | 03 Jun 24 10:43 UTC |
	|         | disable volumesnapshots                                                                     |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ip      | addons-926744 ip                                                                            | addons-926744        | jenkins | v1.33.1 | 03 Jun 24 10:44 UTC | 03 Jun 24 10:44 UTC |
	| addons  | addons-926744 addons disable                                                                | addons-926744        | jenkins | v1.33.1 | 03 Jun 24 10:44 UTC | 03 Jun 24 10:44 UTC |
	|         | ingress-dns --alsologtostderr                                                               |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-926744 addons disable                                                                | addons-926744        | jenkins | v1.33.1 | 03 Jun 24 10:44 UTC | 03 Jun 24 10:44 UTC |
	|         | ingress --alsologtostderr -v=1                                                              |                      |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/06/03 10:39:00
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0603 10:39:00.680880   15688 out.go:291] Setting OutFile to fd 1 ...
	I0603 10:39:00.681090   15688 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0603 10:39:00.681098   15688 out.go:304] Setting ErrFile to fd 2...
	I0603 10:39:00.681102   15688 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0603 10:39:00.681270   15688 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19008-7755/.minikube/bin
	I0603 10:39:00.681788   15688 out.go:298] Setting JSON to false
	I0603 10:39:00.682562   15688 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":1286,"bootTime":1717409855,"procs":172,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1060-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0603 10:39:00.682614   15688 start.go:139] virtualization: kvm guest
	I0603 10:39:00.684530   15688 out.go:177] * [addons-926744] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0603 10:39:00.685815   15688 out.go:177]   - MINIKUBE_LOCATION=19008
	I0603 10:39:00.685810   15688 notify.go:220] Checking for updates...
	I0603 10:39:00.687177   15688 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0603 10:39:00.688398   15688 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19008-7755/kubeconfig
	I0603 10:39:00.689627   15688 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19008-7755/.minikube
	I0603 10:39:00.691446   15688 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0603 10:39:00.692818   15688 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0603 10:39:00.694278   15688 driver.go:392] Setting default libvirt URI to qemu:///system
	I0603 10:39:00.724066   15688 out.go:177] * Using the kvm2 driver based on user configuration
	I0603 10:39:00.725240   15688 start.go:297] selected driver: kvm2
	I0603 10:39:00.725261   15688 start.go:901] validating driver "kvm2" against <nil>
	I0603 10:39:00.725275   15688 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0603 10:39:00.725948   15688 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0603 10:39:00.726022   15688 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19008-7755/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0603 10:39:00.739965   15688 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0603 10:39:00.740003   15688 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0603 10:39:00.740180   15688 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0603 10:39:00.740228   15688 cni.go:84] Creating CNI manager for ""
	I0603 10:39:00.740239   15688 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0603 10:39:00.740250   15688 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0603 10:39:00.740291   15688 start.go:340] cluster config:
	{Name:addons-926744 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:addons-926744 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:c
rio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAg
entPID:0 GPUs: AutoPauseInterval:1m0s}
	I0603 10:39:00.740376   15688 iso.go:125] acquiring lock: {Name:mkdc8e745fc6a0fd8e502f6ad2510510ae9abf27 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0603 10:39:00.742007   15688 out.go:177] * Starting "addons-926744" primary control-plane node in "addons-926744" cluster
	I0603 10:39:00.743216   15688 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime crio
	I0603 10:39:00.743243   15688 preload.go:147] Found local preload: /home/jenkins/minikube-integration/19008-7755/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4
	I0603 10:39:00.743250   15688 cache.go:56] Caching tarball of preloaded images
	I0603 10:39:00.743338   15688 preload.go:173] Found /home/jenkins/minikube-integration/19008-7755/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0603 10:39:00.743348   15688 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on crio
	I0603 10:39:00.743606   15688 profile.go:143] Saving config to /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/addons-926744/config.json ...
	I0603 10:39:00.743624   15688 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/addons-926744/config.json: {Name:mk9141239b37afe7f92d08173cacd42a85c219d7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 10:39:00.743740   15688 start.go:360] acquireMachinesLock for addons-926744: {Name:mk7389fd163f37e28f7d4d842bab151f7e27bc7c Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0603 10:39:00.743778   15688 start.go:364] duration metric: took 26.149µs to acquireMachinesLock for "addons-926744"
	I0603 10:39:00.743793   15688 start.go:93] Provisioning new machine with config: &{Name:addons-926744 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.30.1 ClusterName:addons-926744 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 M
ountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0603 10:39:00.743843   15688 start.go:125] createHost starting for "" (driver="kvm2")
	I0603 10:39:00.745351   15688 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0603 10:39:00.745461   15688 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 10:39:00.745501   15688 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 10:39:00.758808   15688 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41691
	I0603 10:39:00.759203   15688 main.go:141] libmachine: () Calling .GetVersion
	I0603 10:39:00.759710   15688 main.go:141] libmachine: Using API Version  1
	I0603 10:39:00.759729   15688 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 10:39:00.760030   15688 main.go:141] libmachine: () Calling .GetMachineName
	I0603 10:39:00.760257   15688 main.go:141] libmachine: (addons-926744) Calling .GetMachineName
	I0603 10:39:00.760393   15688 main.go:141] libmachine: (addons-926744) Calling .DriverName
	I0603 10:39:00.760554   15688 start.go:159] libmachine.API.Create for "addons-926744" (driver="kvm2")
	I0603 10:39:00.760577   15688 client.go:168] LocalClient.Create starting
	I0603 10:39:00.760607   15688 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/19008-7755/.minikube/certs/ca.pem
	I0603 10:39:00.930483   15688 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/19008-7755/.minikube/certs/cert.pem
	I0603 10:39:01.301606   15688 main.go:141] libmachine: Running pre-create checks...
	I0603 10:39:01.301633   15688 main.go:141] libmachine: (addons-926744) Calling .PreCreateCheck
	I0603 10:39:01.302136   15688 main.go:141] libmachine: (addons-926744) Calling .GetConfigRaw
	I0603 10:39:01.302543   15688 main.go:141] libmachine: Creating machine...
	I0603 10:39:01.302557   15688 main.go:141] libmachine: (addons-926744) Calling .Create
	I0603 10:39:01.302708   15688 main.go:141] libmachine: (addons-926744) Creating KVM machine...
	I0603 10:39:01.303852   15688 main.go:141] libmachine: (addons-926744) DBG | found existing default KVM network
	I0603 10:39:01.304537   15688 main.go:141] libmachine: (addons-926744) DBG | I0603 10:39:01.304381   15710 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00012f990}
	I0603 10:39:01.304564   15688 main.go:141] libmachine: (addons-926744) DBG | created network xml: 
	I0603 10:39:01.304585   15688 main.go:141] libmachine: (addons-926744) DBG | <network>
	I0603 10:39:01.304594   15688 main.go:141] libmachine: (addons-926744) DBG |   <name>mk-addons-926744</name>
	I0603 10:39:01.304607   15688 main.go:141] libmachine: (addons-926744) DBG |   <dns enable='no'/>
	I0603 10:39:01.304617   15688 main.go:141] libmachine: (addons-926744) DBG |   
	I0603 10:39:01.304628   15688 main.go:141] libmachine: (addons-926744) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0603 10:39:01.304639   15688 main.go:141] libmachine: (addons-926744) DBG |     <dhcp>
	I0603 10:39:01.304691   15688 main.go:141] libmachine: (addons-926744) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0603 10:39:01.304717   15688 main.go:141] libmachine: (addons-926744) DBG |     </dhcp>
	I0603 10:39:01.304732   15688 main.go:141] libmachine: (addons-926744) DBG |   </ip>
	I0603 10:39:01.304747   15688 main.go:141] libmachine: (addons-926744) DBG |   
	I0603 10:39:01.304773   15688 main.go:141] libmachine: (addons-926744) DBG | </network>
	I0603 10:39:01.304795   15688 main.go:141] libmachine: (addons-926744) DBG | 
	I0603 10:39:01.309683   15688 main.go:141] libmachine: (addons-926744) DBG | trying to create private KVM network mk-addons-926744 192.168.39.0/24...
	I0603 10:39:01.370727   15688 main.go:141] libmachine: (addons-926744) Setting up store path in /home/jenkins/minikube-integration/19008-7755/.minikube/machines/addons-926744 ...
	I0603 10:39:01.370755   15688 main.go:141] libmachine: (addons-926744) Building disk image from file:///home/jenkins/minikube-integration/19008-7755/.minikube/cache/iso/amd64/minikube-v1.33.1-1716398070-18934-amd64.iso
	I0603 10:39:01.370766   15688 main.go:141] libmachine: (addons-926744) DBG | private KVM network mk-addons-926744 192.168.39.0/24 created
	I0603 10:39:01.370784   15688 main.go:141] libmachine: (addons-926744) DBG | I0603 10:39:01.370668   15710 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19008-7755/.minikube
	I0603 10:39:01.370926   15688 main.go:141] libmachine: (addons-926744) Downloading /home/jenkins/minikube-integration/19008-7755/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19008-7755/.minikube/cache/iso/amd64/minikube-v1.33.1-1716398070-18934-amd64.iso...
	I0603 10:39:01.615063   15688 main.go:141] libmachine: (addons-926744) DBG | I0603 10:39:01.614922   15710 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19008-7755/.minikube/machines/addons-926744/id_rsa...
	I0603 10:39:01.689453   15688 main.go:141] libmachine: (addons-926744) DBG | I0603 10:39:01.689334   15710 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19008-7755/.minikube/machines/addons-926744/addons-926744.rawdisk...
	I0603 10:39:01.689484   15688 main.go:141] libmachine: (addons-926744) DBG | Writing magic tar header
	I0603 10:39:01.689522   15688 main.go:141] libmachine: (addons-926744) DBG | Writing SSH key tar header
	I0603 10:39:01.689544   15688 main.go:141] libmachine: (addons-926744) DBG | I0603 10:39:01.689462   15710 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19008-7755/.minikube/machines/addons-926744 ...
	I0603 10:39:01.689569   15688 main.go:141] libmachine: (addons-926744) Setting executable bit set on /home/jenkins/minikube-integration/19008-7755/.minikube/machines/addons-926744 (perms=drwx------)
	I0603 10:39:01.689578   15688 main.go:141] libmachine: (addons-926744) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19008-7755/.minikube/machines/addons-926744
	I0603 10:39:01.689585   15688 main.go:141] libmachine: (addons-926744) Setting executable bit set on /home/jenkins/minikube-integration/19008-7755/.minikube/machines (perms=drwxr-xr-x)
	I0603 10:39:01.689592   15688 main.go:141] libmachine: (addons-926744) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19008-7755/.minikube/machines
	I0603 10:39:01.689602   15688 main.go:141] libmachine: (addons-926744) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19008-7755/.minikube
	I0603 10:39:01.689607   15688 main.go:141] libmachine: (addons-926744) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19008-7755
	I0603 10:39:01.689616   15688 main.go:141] libmachine: (addons-926744) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0603 10:39:01.689620   15688 main.go:141] libmachine: (addons-926744) DBG | Checking permissions on dir: /home/jenkins
	I0603 10:39:01.689632   15688 main.go:141] libmachine: (addons-926744) Setting executable bit set on /home/jenkins/minikube-integration/19008-7755/.minikube (perms=drwxr-xr-x)
	I0603 10:39:01.689641   15688 main.go:141] libmachine: (addons-926744) DBG | Checking permissions on dir: /home
	I0603 10:39:01.689654   15688 main.go:141] libmachine: (addons-926744) DBG | Skipping /home - not owner
	I0603 10:39:01.689666   15688 main.go:141] libmachine: (addons-926744) Setting executable bit set on /home/jenkins/minikube-integration/19008-7755 (perms=drwxrwxr-x)
	I0603 10:39:01.689674   15688 main.go:141] libmachine: (addons-926744) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0603 10:39:01.689679   15688 main.go:141] libmachine: (addons-926744) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0603 10:39:01.689686   15688 main.go:141] libmachine: (addons-926744) Creating domain...
	I0603 10:39:01.690709   15688 main.go:141] libmachine: (addons-926744) define libvirt domain using xml: 
	I0603 10:39:01.690723   15688 main.go:141] libmachine: (addons-926744) <domain type='kvm'>
	I0603 10:39:01.690729   15688 main.go:141] libmachine: (addons-926744)   <name>addons-926744</name>
	I0603 10:39:01.690735   15688 main.go:141] libmachine: (addons-926744)   <memory unit='MiB'>4000</memory>
	I0603 10:39:01.690745   15688 main.go:141] libmachine: (addons-926744)   <vcpu>2</vcpu>
	I0603 10:39:01.690756   15688 main.go:141] libmachine: (addons-926744)   <features>
	I0603 10:39:01.690769   15688 main.go:141] libmachine: (addons-926744)     <acpi/>
	I0603 10:39:01.690779   15688 main.go:141] libmachine: (addons-926744)     <apic/>
	I0603 10:39:01.690790   15688 main.go:141] libmachine: (addons-926744)     <pae/>
	I0603 10:39:01.690800   15688 main.go:141] libmachine: (addons-926744)     
	I0603 10:39:01.690812   15688 main.go:141] libmachine: (addons-926744)   </features>
	I0603 10:39:01.690823   15688 main.go:141] libmachine: (addons-926744)   <cpu mode='host-passthrough'>
	I0603 10:39:01.690847   15688 main.go:141] libmachine: (addons-926744)   
	I0603 10:39:01.690873   15688 main.go:141] libmachine: (addons-926744)   </cpu>
	I0603 10:39:01.690886   15688 main.go:141] libmachine: (addons-926744)   <os>
	I0603 10:39:01.690895   15688 main.go:141] libmachine: (addons-926744)     <type>hvm</type>
	I0603 10:39:01.690906   15688 main.go:141] libmachine: (addons-926744)     <boot dev='cdrom'/>
	I0603 10:39:01.690916   15688 main.go:141] libmachine: (addons-926744)     <boot dev='hd'/>
	I0603 10:39:01.690929   15688 main.go:141] libmachine: (addons-926744)     <bootmenu enable='no'/>
	I0603 10:39:01.690939   15688 main.go:141] libmachine: (addons-926744)   </os>
	I0603 10:39:01.690947   15688 main.go:141] libmachine: (addons-926744)   <devices>
	I0603 10:39:01.690955   15688 main.go:141] libmachine: (addons-926744)     <disk type='file' device='cdrom'>
	I0603 10:39:01.690968   15688 main.go:141] libmachine: (addons-926744)       <source file='/home/jenkins/minikube-integration/19008-7755/.minikube/machines/addons-926744/boot2docker.iso'/>
	I0603 10:39:01.690980   15688 main.go:141] libmachine: (addons-926744)       <target dev='hdc' bus='scsi'/>
	I0603 10:39:01.690991   15688 main.go:141] libmachine: (addons-926744)       <readonly/>
	I0603 10:39:01.691001   15688 main.go:141] libmachine: (addons-926744)     </disk>
	I0603 10:39:01.691027   15688 main.go:141] libmachine: (addons-926744)     <disk type='file' device='disk'>
	I0603 10:39:01.691068   15688 main.go:141] libmachine: (addons-926744)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0603 10:39:01.691088   15688 main.go:141] libmachine: (addons-926744)       <source file='/home/jenkins/minikube-integration/19008-7755/.minikube/machines/addons-926744/addons-926744.rawdisk'/>
	I0603 10:39:01.691105   15688 main.go:141] libmachine: (addons-926744)       <target dev='hda' bus='virtio'/>
	I0603 10:39:01.691119   15688 main.go:141] libmachine: (addons-926744)     </disk>
	I0603 10:39:01.691131   15688 main.go:141] libmachine: (addons-926744)     <interface type='network'>
	I0603 10:39:01.691145   15688 main.go:141] libmachine: (addons-926744)       <source network='mk-addons-926744'/>
	I0603 10:39:01.691156   15688 main.go:141] libmachine: (addons-926744)       <model type='virtio'/>
	I0603 10:39:01.691167   15688 main.go:141] libmachine: (addons-926744)     </interface>
	I0603 10:39:01.691178   15688 main.go:141] libmachine: (addons-926744)     <interface type='network'>
	I0603 10:39:01.691198   15688 main.go:141] libmachine: (addons-926744)       <source network='default'/>
	I0603 10:39:01.691213   15688 main.go:141] libmachine: (addons-926744)       <model type='virtio'/>
	I0603 10:39:01.691221   15688 main.go:141] libmachine: (addons-926744)     </interface>
	I0603 10:39:01.691226   15688 main.go:141] libmachine: (addons-926744)     <serial type='pty'>
	I0603 10:39:01.691234   15688 main.go:141] libmachine: (addons-926744)       <target port='0'/>
	I0603 10:39:01.691245   15688 main.go:141] libmachine: (addons-926744)     </serial>
	I0603 10:39:01.691255   15688 main.go:141] libmachine: (addons-926744)     <console type='pty'>
	I0603 10:39:01.691266   15688 main.go:141] libmachine: (addons-926744)       <target type='serial' port='0'/>
	I0603 10:39:01.691278   15688 main.go:141] libmachine: (addons-926744)     </console>
	I0603 10:39:01.691288   15688 main.go:141] libmachine: (addons-926744)     <rng model='virtio'>
	I0603 10:39:01.691302   15688 main.go:141] libmachine: (addons-926744)       <backend model='random'>/dev/random</backend>
	I0603 10:39:01.691315   15688 main.go:141] libmachine: (addons-926744)     </rng>
	I0603 10:39:01.691332   15688 main.go:141] libmachine: (addons-926744)     
	I0603 10:39:01.691348   15688 main.go:141] libmachine: (addons-926744)     
	I0603 10:39:01.691362   15688 main.go:141] libmachine: (addons-926744)   </devices>
	I0603 10:39:01.691373   15688 main.go:141] libmachine: (addons-926744) </domain>
	I0603 10:39:01.691387   15688 main.go:141] libmachine: (addons-926744) 
	I0603 10:39:01.696881   15688 main.go:141] libmachine: (addons-926744) DBG | domain addons-926744 has defined MAC address 52:54:00:7a:35:b1 in network default
	I0603 10:39:01.697393   15688 main.go:141] libmachine: (addons-926744) Ensuring networks are active...
	I0603 10:39:01.697413   15688 main.go:141] libmachine: (addons-926744) DBG | domain addons-926744 has defined MAC address 52:54:00:ef:0f:40 in network mk-addons-926744
	I0603 10:39:01.697999   15688 main.go:141] libmachine: (addons-926744) Ensuring network default is active
	I0603 10:39:01.698314   15688 main.go:141] libmachine: (addons-926744) Ensuring network mk-addons-926744 is active
	I0603 10:39:01.698718   15688 main.go:141] libmachine: (addons-926744) Getting domain xml...
	I0603 10:39:01.699324   15688 main.go:141] libmachine: (addons-926744) Creating domain...
	I0603 10:39:03.047775   15688 main.go:141] libmachine: (addons-926744) Waiting to get IP...
	I0603 10:39:03.048591   15688 main.go:141] libmachine: (addons-926744) DBG | domain addons-926744 has defined MAC address 52:54:00:ef:0f:40 in network mk-addons-926744
	I0603 10:39:03.048982   15688 main.go:141] libmachine: (addons-926744) DBG | unable to find current IP address of domain addons-926744 in network mk-addons-926744
	I0603 10:39:03.049012   15688 main.go:141] libmachine: (addons-926744) DBG | I0603 10:39:03.048933   15710 retry.go:31] will retry after 234.406372ms: waiting for machine to come up
	I0603 10:39:03.285437   15688 main.go:141] libmachine: (addons-926744) DBG | domain addons-926744 has defined MAC address 52:54:00:ef:0f:40 in network mk-addons-926744
	I0603 10:39:03.285802   15688 main.go:141] libmachine: (addons-926744) DBG | unable to find current IP address of domain addons-926744 in network mk-addons-926744
	I0603 10:39:03.285830   15688 main.go:141] libmachine: (addons-926744) DBG | I0603 10:39:03.285760   15710 retry.go:31] will retry after 368.775764ms: waiting for machine to come up
	I0603 10:39:03.656294   15688 main.go:141] libmachine: (addons-926744) DBG | domain addons-926744 has defined MAC address 52:54:00:ef:0f:40 in network mk-addons-926744
	I0603 10:39:03.656800   15688 main.go:141] libmachine: (addons-926744) DBG | unable to find current IP address of domain addons-926744 in network mk-addons-926744
	I0603 10:39:03.656831   15688 main.go:141] libmachine: (addons-926744) DBG | I0603 10:39:03.656749   15710 retry.go:31] will retry after 327.819161ms: waiting for machine to come up
	I0603 10:39:03.986447   15688 main.go:141] libmachine: (addons-926744) DBG | domain addons-926744 has defined MAC address 52:54:00:ef:0f:40 in network mk-addons-926744
	I0603 10:39:03.986867   15688 main.go:141] libmachine: (addons-926744) DBG | unable to find current IP address of domain addons-926744 in network mk-addons-926744
	I0603 10:39:03.986904   15688 main.go:141] libmachine: (addons-926744) DBG | I0603 10:39:03.986850   15710 retry.go:31] will retry after 516.803871ms: waiting for machine to come up
	I0603 10:39:04.505163   15688 main.go:141] libmachine: (addons-926744) DBG | domain addons-926744 has defined MAC address 52:54:00:ef:0f:40 in network mk-addons-926744
	I0603 10:39:04.505606   15688 main.go:141] libmachine: (addons-926744) DBG | unable to find current IP address of domain addons-926744 in network mk-addons-926744
	I0603 10:39:04.505644   15688 main.go:141] libmachine: (addons-926744) DBG | I0603 10:39:04.505577   15710 retry.go:31] will retry after 538.847196ms: waiting for machine to come up
	I0603 10:39:05.046513   15688 main.go:141] libmachine: (addons-926744) DBG | domain addons-926744 has defined MAC address 52:54:00:ef:0f:40 in network mk-addons-926744
	I0603 10:39:05.046959   15688 main.go:141] libmachine: (addons-926744) DBG | unable to find current IP address of domain addons-926744 in network mk-addons-926744
	I0603 10:39:05.046978   15688 main.go:141] libmachine: (addons-926744) DBG | I0603 10:39:05.046922   15710 retry.go:31] will retry after 794.327963ms: waiting for machine to come up
	I0603 10:39:05.842621   15688 main.go:141] libmachine: (addons-926744) DBG | domain addons-926744 has defined MAC address 52:54:00:ef:0f:40 in network mk-addons-926744
	I0603 10:39:05.843055   15688 main.go:141] libmachine: (addons-926744) DBG | unable to find current IP address of domain addons-926744 in network mk-addons-926744
	I0603 10:39:05.843226   15688 main.go:141] libmachine: (addons-926744) DBG | I0603 10:39:05.843016   15710 retry.go:31] will retry after 789.369654ms: waiting for machine to come up
	I0603 10:39:06.634041   15688 main.go:141] libmachine: (addons-926744) DBG | domain addons-926744 has defined MAC address 52:54:00:ef:0f:40 in network mk-addons-926744
	I0603 10:39:06.634422   15688 main.go:141] libmachine: (addons-926744) DBG | unable to find current IP address of domain addons-926744 in network mk-addons-926744
	I0603 10:39:06.634449   15688 main.go:141] libmachine: (addons-926744) DBG | I0603 10:39:06.634390   15710 retry.go:31] will retry after 1.140360619s: waiting for machine to come up
	I0603 10:39:07.776668   15688 main.go:141] libmachine: (addons-926744) DBG | domain addons-926744 has defined MAC address 52:54:00:ef:0f:40 in network mk-addons-926744
	I0603 10:39:07.777069   15688 main.go:141] libmachine: (addons-926744) DBG | unable to find current IP address of domain addons-926744 in network mk-addons-926744
	I0603 10:39:07.777100   15688 main.go:141] libmachine: (addons-926744) DBG | I0603 10:39:07.777000   15710 retry.go:31] will retry after 1.192415957s: waiting for machine to come up
	I0603 10:39:08.971405   15688 main.go:141] libmachine: (addons-926744) DBG | domain addons-926744 has defined MAC address 52:54:00:ef:0f:40 in network mk-addons-926744
	I0603 10:39:08.971747   15688 main.go:141] libmachine: (addons-926744) DBG | unable to find current IP address of domain addons-926744 in network mk-addons-926744
	I0603 10:39:08.971780   15688 main.go:141] libmachine: (addons-926744) DBG | I0603 10:39:08.971725   15710 retry.go:31] will retry after 2.110243957s: waiting for machine to come up
	I0603 10:39:11.083591   15688 main.go:141] libmachine: (addons-926744) DBG | domain addons-926744 has defined MAC address 52:54:00:ef:0f:40 in network mk-addons-926744
	I0603 10:39:11.083990   15688 main.go:141] libmachine: (addons-926744) DBG | unable to find current IP address of domain addons-926744 in network mk-addons-926744
	I0603 10:39:11.084020   15688 main.go:141] libmachine: (addons-926744) DBG | I0603 10:39:11.083958   15710 retry.go:31] will retry after 2.197882657s: waiting for machine to come up
	I0603 10:39:13.284444   15688 main.go:141] libmachine: (addons-926744) DBG | domain addons-926744 has defined MAC address 52:54:00:ef:0f:40 in network mk-addons-926744
	I0603 10:39:13.284919   15688 main.go:141] libmachine: (addons-926744) DBG | unable to find current IP address of domain addons-926744 in network mk-addons-926744
	I0603 10:39:13.284947   15688 main.go:141] libmachine: (addons-926744) DBG | I0603 10:39:13.284869   15710 retry.go:31] will retry after 3.328032381s: waiting for machine to come up
	I0603 10:39:16.614700   15688 main.go:141] libmachine: (addons-926744) DBG | domain addons-926744 has defined MAC address 52:54:00:ef:0f:40 in network mk-addons-926744
	I0603 10:39:16.615094   15688 main.go:141] libmachine: (addons-926744) DBG | unable to find current IP address of domain addons-926744 in network mk-addons-926744
	I0603 10:39:16.615116   15688 main.go:141] libmachine: (addons-926744) DBG | I0603 10:39:16.615075   15710 retry.go:31] will retry after 4.426262831s: waiting for machine to come up
	I0603 10:39:21.042222   15688 main.go:141] libmachine: (addons-926744) DBG | domain addons-926744 has defined MAC address 52:54:00:ef:0f:40 in network mk-addons-926744
	I0603 10:39:21.042761   15688 main.go:141] libmachine: (addons-926744) Found IP for machine: 192.168.39.188
	I0603 10:39:21.042779   15688 main.go:141] libmachine: (addons-926744) Reserving static IP address...
	I0603 10:39:21.042788   15688 main.go:141] libmachine: (addons-926744) DBG | domain addons-926744 has current primary IP address 192.168.39.188 and MAC address 52:54:00:ef:0f:40 in network mk-addons-926744
	I0603 10:39:21.043326   15688 main.go:141] libmachine: (addons-926744) DBG | unable to find host DHCP lease matching {name: "addons-926744", mac: "52:54:00:ef:0f:40", ip: "192.168.39.188"} in network mk-addons-926744
	I0603 10:39:21.109987   15688 main.go:141] libmachine: (addons-926744) DBG | Getting to WaitForSSH function...
	I0603 10:39:21.110019   15688 main.go:141] libmachine: (addons-926744) Reserved static IP address: 192.168.39.188
	I0603 10:39:21.110043   15688 main.go:141] libmachine: (addons-926744) Waiting for SSH to be available...
	I0603 10:39:21.112366   15688 main.go:141] libmachine: (addons-926744) DBG | domain addons-926744 has defined MAC address 52:54:00:ef:0f:40 in network mk-addons-926744
	I0603 10:39:21.112809   15688 main.go:141] libmachine: (addons-926744) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ef:0f:40", ip: ""} in network mk-addons-926744: {Iface:virbr1 ExpiryTime:2024-06-03 11:39:15 +0000 UTC Type:0 Mac:52:54:00:ef:0f:40 Iaid: IPaddr:192.168.39.188 Prefix:24 Hostname:minikube Clientid:01:52:54:00:ef:0f:40}
	I0603 10:39:21.112844   15688 main.go:141] libmachine: (addons-926744) DBG | domain addons-926744 has defined IP address 192.168.39.188 and MAC address 52:54:00:ef:0f:40 in network mk-addons-926744
	I0603 10:39:21.113103   15688 main.go:141] libmachine: (addons-926744) DBG | Using SSH client type: external
	I0603 10:39:21.113126   15688 main.go:141] libmachine: (addons-926744) DBG | Using SSH private key: /home/jenkins/minikube-integration/19008-7755/.minikube/machines/addons-926744/id_rsa (-rw-------)
	I0603 10:39:21.113154   15688 main.go:141] libmachine: (addons-926744) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.188 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19008-7755/.minikube/machines/addons-926744/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0603 10:39:21.113171   15688 main.go:141] libmachine: (addons-926744) DBG | About to run SSH command:
	I0603 10:39:21.113200   15688 main.go:141] libmachine: (addons-926744) DBG | exit 0
	I0603 10:39:21.242959   15688 main.go:141] libmachine: (addons-926744) DBG | SSH cmd err, output: <nil>: 
	I0603 10:39:21.243259   15688 main.go:141] libmachine: (addons-926744) KVM machine creation complete!
	I0603 10:39:21.243542   15688 main.go:141] libmachine: (addons-926744) Calling .GetConfigRaw
	I0603 10:39:21.244049   15688 main.go:141] libmachine: (addons-926744) Calling .DriverName
	I0603 10:39:21.244263   15688 main.go:141] libmachine: (addons-926744) Calling .DriverName
	I0603 10:39:21.244427   15688 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0603 10:39:21.244438   15688 main.go:141] libmachine: (addons-926744) Calling .GetState
	I0603 10:39:21.245663   15688 main.go:141] libmachine: Detecting operating system of created instance...
	I0603 10:39:21.245674   15688 main.go:141] libmachine: Waiting for SSH to be available...
	I0603 10:39:21.245680   15688 main.go:141] libmachine: Getting to WaitForSSH function...
	I0603 10:39:21.245688   15688 main.go:141] libmachine: (addons-926744) Calling .GetSSHHostname
	I0603 10:39:21.247729   15688 main.go:141] libmachine: (addons-926744) DBG | domain addons-926744 has defined MAC address 52:54:00:ef:0f:40 in network mk-addons-926744
	I0603 10:39:21.248018   15688 main.go:141] libmachine: (addons-926744) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ef:0f:40", ip: ""} in network mk-addons-926744: {Iface:virbr1 ExpiryTime:2024-06-03 11:39:15 +0000 UTC Type:0 Mac:52:54:00:ef:0f:40 Iaid: IPaddr:192.168.39.188 Prefix:24 Hostname:addons-926744 Clientid:01:52:54:00:ef:0f:40}
	I0603 10:39:21.248048   15688 main.go:141] libmachine: (addons-926744) DBG | domain addons-926744 has defined IP address 192.168.39.188 and MAC address 52:54:00:ef:0f:40 in network mk-addons-926744
	I0603 10:39:21.248218   15688 main.go:141] libmachine: (addons-926744) Calling .GetSSHPort
	I0603 10:39:21.248379   15688 main.go:141] libmachine: (addons-926744) Calling .GetSSHKeyPath
	I0603 10:39:21.248536   15688 main.go:141] libmachine: (addons-926744) Calling .GetSSHKeyPath
	I0603 10:39:21.248654   15688 main.go:141] libmachine: (addons-926744) Calling .GetSSHUsername
	I0603 10:39:21.248809   15688 main.go:141] libmachine: Using SSH client type: native
	I0603 10:39:21.249030   15688 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.188 22 <nil> <nil>}
	I0603 10:39:21.249046   15688 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0603 10:39:21.354004   15688 main.go:141] libmachine: SSH cmd err, output: <nil>: 
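
The exchange above is the provisioner's SSH liveness probe: it repeatedly runs a no-op "exit 0" over SSH until the command returns cleanly. Below is a minimal sketch of such a probe in Go using golang.org/x/crypto/ssh; the address, user, key path and timeouts are illustrative assumptions, not minikube's actual implementation.

package main

import (
	"fmt"
	"os"
	"time"

	"golang.org/x/crypto/ssh"
)

// waitForSSH polls addr until a trivial "exit 0" succeeds, mirroring the
// WaitForSSH step in the log above. Key path and timeout are assumptions.
func waitForSSH(addr, user, keyPath string, timeout time.Duration) error {
	keyBytes, err := os.ReadFile(keyPath)
	if err != nil {
		return err
	}
	signer, err := ssh.ParsePrivateKey(keyBytes)
	if err != nil {
		return err
	}
	cfg := &ssh.ClientConfig{
		User:            user,
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // same effect as StrictHostKeyChecking=no above
		Timeout:         10 * time.Second,
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if client, err := ssh.Dial("tcp", addr, cfg); err == nil {
			sess, serr := client.NewSession()
			if serr == nil {
				runErr := sess.Run("exit 0")
				sess.Close()
				client.Close()
				if runErr == nil {
					return nil // SSH is up
				}
			} else {
				client.Close()
			}
		}
		time.Sleep(3 * time.Second)
	}
	return fmt.Errorf("timed out waiting for SSH on %s", addr)
}

func main() {
	// Placeholder key path; the log uses the machine's id_rsa under .minikube.
	if err := waitForSSH("192.168.39.188:22", "docker", "/path/to/id_rsa", 2*time.Minute); err != nil {
		fmt.Println(err)
	}
}
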
	I0603 10:39:21.354031   15688 main.go:141] libmachine: Detecting the provisioner...
	I0603 10:39:21.354041   15688 main.go:141] libmachine: (addons-926744) Calling .GetSSHHostname
	I0603 10:39:21.356660   15688 main.go:141] libmachine: (addons-926744) DBG | domain addons-926744 has defined MAC address 52:54:00:ef:0f:40 in network mk-addons-926744
	I0603 10:39:21.357019   15688 main.go:141] libmachine: (addons-926744) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ef:0f:40", ip: ""} in network mk-addons-926744: {Iface:virbr1 ExpiryTime:2024-06-03 11:39:15 +0000 UTC Type:0 Mac:52:54:00:ef:0f:40 Iaid: IPaddr:192.168.39.188 Prefix:24 Hostname:addons-926744 Clientid:01:52:54:00:ef:0f:40}
	I0603 10:39:21.357038   15688 main.go:141] libmachine: (addons-926744) DBG | domain addons-926744 has defined IP address 192.168.39.188 and MAC address 52:54:00:ef:0f:40 in network mk-addons-926744
	I0603 10:39:21.357190   15688 main.go:141] libmachine: (addons-926744) Calling .GetSSHPort
	I0603 10:39:21.357377   15688 main.go:141] libmachine: (addons-926744) Calling .GetSSHKeyPath
	I0603 10:39:21.357519   15688 main.go:141] libmachine: (addons-926744) Calling .GetSSHKeyPath
	I0603 10:39:21.357678   15688 main.go:141] libmachine: (addons-926744) Calling .GetSSHUsername
	I0603 10:39:21.357821   15688 main.go:141] libmachine: Using SSH client type: native
	I0603 10:39:21.357982   15688 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.188 22 <nil> <nil>}
	I0603 10:39:21.357998   15688 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0603 10:39:21.467553   15688 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0603 10:39:21.467624   15688 main.go:141] libmachine: found compatible host: buildroot
	I0603 10:39:21.467634   15688 main.go:141] libmachine: Provisioning with buildroot...
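
The "found compatible host: buildroot" decision above is driven by the ID field of the /etc/os-release output captured a few lines earlier. A rough sketch of that mapping, assuming the standard os-release(5) key=value format rather than minikube's exact parser:

package main

import (
	"fmt"
	"strings"
)

// osReleaseID extracts the ID field from /etc/os-release contents,
// e.g. "buildroot" for the output shown in the log above.
func osReleaseID(contents string) string {
	for _, line := range strings.Split(contents, "\n") {
		if value, ok := strings.CutPrefix(line, "ID="); ok {
			return strings.Trim(strings.TrimSpace(value), `"`)
		}
	}
	return ""
}

func main() {
	sample := "NAME=Buildroot\nVERSION=2023.02.9-dirty\nID=buildroot\nVERSION_ID=2023.02.9\n"
	fmt.Println(osReleaseID(sample)) // buildroot
}
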
	I0603 10:39:21.467644   15688 main.go:141] libmachine: (addons-926744) Calling .GetMachineName
	I0603 10:39:21.467884   15688 buildroot.go:166] provisioning hostname "addons-926744"
	I0603 10:39:21.467914   15688 main.go:141] libmachine: (addons-926744) Calling .GetMachineName
	I0603 10:39:21.468067   15688 main.go:141] libmachine: (addons-926744) Calling .GetSSHHostname
	I0603 10:39:21.470868   15688 main.go:141] libmachine: (addons-926744) DBG | domain addons-926744 has defined MAC address 52:54:00:ef:0f:40 in network mk-addons-926744
	I0603 10:39:21.471261   15688 main.go:141] libmachine: (addons-926744) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ef:0f:40", ip: ""} in network mk-addons-926744: {Iface:virbr1 ExpiryTime:2024-06-03 11:39:15 +0000 UTC Type:0 Mac:52:54:00:ef:0f:40 Iaid: IPaddr:192.168.39.188 Prefix:24 Hostname:addons-926744 Clientid:01:52:54:00:ef:0f:40}
	I0603 10:39:21.471289   15688 main.go:141] libmachine: (addons-926744) DBG | domain addons-926744 has defined IP address 192.168.39.188 and MAC address 52:54:00:ef:0f:40 in network mk-addons-926744
	I0603 10:39:21.471392   15688 main.go:141] libmachine: (addons-926744) Calling .GetSSHPort
	I0603 10:39:21.471547   15688 main.go:141] libmachine: (addons-926744) Calling .GetSSHKeyPath
	I0603 10:39:21.471689   15688 main.go:141] libmachine: (addons-926744) Calling .GetSSHKeyPath
	I0603 10:39:21.471828   15688 main.go:141] libmachine: (addons-926744) Calling .GetSSHUsername
	I0603 10:39:21.472139   15688 main.go:141] libmachine: Using SSH client type: native
	I0603 10:39:21.472301   15688 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.188 22 <nil> <nil>}
	I0603 10:39:21.472313   15688 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-926744 && echo "addons-926744" | sudo tee /etc/hostname
	I0603 10:39:21.599198   15688 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-926744
	
	I0603 10:39:21.599239   15688 main.go:141] libmachine: (addons-926744) Calling .GetSSHHostname
	I0603 10:39:21.601905   15688 main.go:141] libmachine: (addons-926744) DBG | domain addons-926744 has defined MAC address 52:54:00:ef:0f:40 in network mk-addons-926744
	I0603 10:39:21.602278   15688 main.go:141] libmachine: (addons-926744) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ef:0f:40", ip: ""} in network mk-addons-926744: {Iface:virbr1 ExpiryTime:2024-06-03 11:39:15 +0000 UTC Type:0 Mac:52:54:00:ef:0f:40 Iaid: IPaddr:192.168.39.188 Prefix:24 Hostname:addons-926744 Clientid:01:52:54:00:ef:0f:40}
	I0603 10:39:21.602298   15688 main.go:141] libmachine: (addons-926744) DBG | domain addons-926744 has defined IP address 192.168.39.188 and MAC address 52:54:00:ef:0f:40 in network mk-addons-926744
	I0603 10:39:21.602506   15688 main.go:141] libmachine: (addons-926744) Calling .GetSSHPort
	I0603 10:39:21.602710   15688 main.go:141] libmachine: (addons-926744) Calling .GetSSHKeyPath
	I0603 10:39:21.602879   15688 main.go:141] libmachine: (addons-926744) Calling .GetSSHKeyPath
	I0603 10:39:21.603048   15688 main.go:141] libmachine: (addons-926744) Calling .GetSSHUsername
	I0603 10:39:21.603212   15688 main.go:141] libmachine: Using SSH client type: native
	I0603 10:39:21.603445   15688 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.188 22 <nil> <nil>}
	I0603 10:39:21.603470   15688 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-926744' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-926744/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-926744' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0603 10:39:21.725573   15688 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0603 10:39:21.725597   15688 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19008-7755/.minikube CaCertPath:/home/jenkins/minikube-integration/19008-7755/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19008-7755/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19008-7755/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19008-7755/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19008-7755/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19008-7755/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19008-7755/.minikube}
	I0603 10:39:21.725637   15688 buildroot.go:174] setting up certificates
	I0603 10:39:21.725654   15688 provision.go:84] configureAuth start
	I0603 10:39:21.725672   15688 main.go:141] libmachine: (addons-926744) Calling .GetMachineName
	I0603 10:39:21.725914   15688 main.go:141] libmachine: (addons-926744) Calling .GetIP
	I0603 10:39:21.728329   15688 main.go:141] libmachine: (addons-926744) DBG | domain addons-926744 has defined MAC address 52:54:00:ef:0f:40 in network mk-addons-926744
	I0603 10:39:21.728687   15688 main.go:141] libmachine: (addons-926744) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ef:0f:40", ip: ""} in network mk-addons-926744: {Iface:virbr1 ExpiryTime:2024-06-03 11:39:15 +0000 UTC Type:0 Mac:52:54:00:ef:0f:40 Iaid: IPaddr:192.168.39.188 Prefix:24 Hostname:addons-926744 Clientid:01:52:54:00:ef:0f:40}
	I0603 10:39:21.728716   15688 main.go:141] libmachine: (addons-926744) DBG | domain addons-926744 has defined IP address 192.168.39.188 and MAC address 52:54:00:ef:0f:40 in network mk-addons-926744
	I0603 10:39:21.728800   15688 main.go:141] libmachine: (addons-926744) Calling .GetSSHHostname
	I0603 10:39:21.730953   15688 main.go:141] libmachine: (addons-926744) DBG | domain addons-926744 has defined MAC address 52:54:00:ef:0f:40 in network mk-addons-926744
	I0603 10:39:21.731239   15688 main.go:141] libmachine: (addons-926744) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ef:0f:40", ip: ""} in network mk-addons-926744: {Iface:virbr1 ExpiryTime:2024-06-03 11:39:15 +0000 UTC Type:0 Mac:52:54:00:ef:0f:40 Iaid: IPaddr:192.168.39.188 Prefix:24 Hostname:addons-926744 Clientid:01:52:54:00:ef:0f:40}
	I0603 10:39:21.731268   15688 main.go:141] libmachine: (addons-926744) DBG | domain addons-926744 has defined IP address 192.168.39.188 and MAC address 52:54:00:ef:0f:40 in network mk-addons-926744
	I0603 10:39:21.731370   15688 provision.go:143] copyHostCerts
	I0603 10:39:21.731449   15688 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19008-7755/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19008-7755/.minikube/key.pem (1679 bytes)
	I0603 10:39:21.731560   15688 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19008-7755/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19008-7755/.minikube/ca.pem (1082 bytes)
	I0603 10:39:21.731636   15688 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19008-7755/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19008-7755/.minikube/cert.pem (1123 bytes)
	I0603 10:39:21.731695   15688 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19008-7755/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19008-7755/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19008-7755/.minikube/certs/ca-key.pem org=jenkins.addons-926744 san=[127.0.0.1 192.168.39.188 addons-926744 localhost minikube]
	I0603 10:39:22.097550   15688 provision.go:177] copyRemoteCerts
	I0603 10:39:22.097612   15688 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0603 10:39:22.097643   15688 main.go:141] libmachine: (addons-926744) Calling .GetSSHHostname
	I0603 10:39:22.101431   15688 main.go:141] libmachine: (addons-926744) DBG | domain addons-926744 has defined MAC address 52:54:00:ef:0f:40 in network mk-addons-926744
	I0603 10:39:22.101796   15688 main.go:141] libmachine: (addons-926744) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ef:0f:40", ip: ""} in network mk-addons-926744: {Iface:virbr1 ExpiryTime:2024-06-03 11:39:15 +0000 UTC Type:0 Mac:52:54:00:ef:0f:40 Iaid: IPaddr:192.168.39.188 Prefix:24 Hostname:addons-926744 Clientid:01:52:54:00:ef:0f:40}
	I0603 10:39:22.101825   15688 main.go:141] libmachine: (addons-926744) DBG | domain addons-926744 has defined IP address 192.168.39.188 and MAC address 52:54:00:ef:0f:40 in network mk-addons-926744
	I0603 10:39:22.101952   15688 main.go:141] libmachine: (addons-926744) Calling .GetSSHPort
	I0603 10:39:22.102210   15688 main.go:141] libmachine: (addons-926744) Calling .GetSSHKeyPath
	I0603 10:39:22.102350   15688 main.go:141] libmachine: (addons-926744) Calling .GetSSHUsername
	I0603 10:39:22.102549   15688 sshutil.go:53] new ssh client: &{IP:192.168.39.188 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19008-7755/.minikube/machines/addons-926744/id_rsa Username:docker}
	I0603 10:39:22.187551   15688 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0603 10:39:22.210910   15688 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0603 10:39:22.233749   15688 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0603 10:39:22.256094   15688 provision.go:87] duration metric: took 530.42487ms to configureAuth
	I0603 10:39:22.256116   15688 buildroot.go:189] setting minikube options for container-runtime
	I0603 10:39:22.256278   15688 config.go:182] Loaded profile config "addons-926744": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0603 10:39:22.256344   15688 main.go:141] libmachine: (addons-926744) Calling .GetSSHHostname
	I0603 10:39:22.259055   15688 main.go:141] libmachine: (addons-926744) DBG | domain addons-926744 has defined MAC address 52:54:00:ef:0f:40 in network mk-addons-926744
	I0603 10:39:22.259485   15688 main.go:141] libmachine: (addons-926744) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ef:0f:40", ip: ""} in network mk-addons-926744: {Iface:virbr1 ExpiryTime:2024-06-03 11:39:15 +0000 UTC Type:0 Mac:52:54:00:ef:0f:40 Iaid: IPaddr:192.168.39.188 Prefix:24 Hostname:addons-926744 Clientid:01:52:54:00:ef:0f:40}
	I0603 10:39:22.259513   15688 main.go:141] libmachine: (addons-926744) DBG | domain addons-926744 has defined IP address 192.168.39.188 and MAC address 52:54:00:ef:0f:40 in network mk-addons-926744
	I0603 10:39:22.259672   15688 main.go:141] libmachine: (addons-926744) Calling .GetSSHPort
	I0603 10:39:22.259874   15688 main.go:141] libmachine: (addons-926744) Calling .GetSSHKeyPath
	I0603 10:39:22.260041   15688 main.go:141] libmachine: (addons-926744) Calling .GetSSHKeyPath
	I0603 10:39:22.260241   15688 main.go:141] libmachine: (addons-926744) Calling .GetSSHUsername
	I0603 10:39:22.260422   15688 main.go:141] libmachine: Using SSH client type: native
	I0603 10:39:22.260595   15688 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.188 22 <nil> <nil>}
	I0603 10:39:22.260611   15688 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0603 10:39:22.523377   15688 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0603 10:39:22.523406   15688 main.go:141] libmachine: Checking connection to Docker...
	I0603 10:39:22.523416   15688 main.go:141] libmachine: (addons-926744) Calling .GetURL
	I0603 10:39:22.524650   15688 main.go:141] libmachine: (addons-926744) DBG | Using libvirt version 6000000
	I0603 10:39:22.527101   15688 main.go:141] libmachine: (addons-926744) DBG | domain addons-926744 has defined MAC address 52:54:00:ef:0f:40 in network mk-addons-926744
	I0603 10:39:22.527501   15688 main.go:141] libmachine: (addons-926744) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ef:0f:40", ip: ""} in network mk-addons-926744: {Iface:virbr1 ExpiryTime:2024-06-03 11:39:15 +0000 UTC Type:0 Mac:52:54:00:ef:0f:40 Iaid: IPaddr:192.168.39.188 Prefix:24 Hostname:addons-926744 Clientid:01:52:54:00:ef:0f:40}
	I0603 10:39:22.527523   15688 main.go:141] libmachine: (addons-926744) DBG | domain addons-926744 has defined IP address 192.168.39.188 and MAC address 52:54:00:ef:0f:40 in network mk-addons-926744
	I0603 10:39:22.527705   15688 main.go:141] libmachine: Docker is up and running!
	I0603 10:39:22.527719   15688 main.go:141] libmachine: Reticulating splines...
	I0603 10:39:22.527725   15688 client.go:171] duration metric: took 21.767140775s to LocalClient.Create
	I0603 10:39:22.527743   15688 start.go:167] duration metric: took 21.767190617s to libmachine.API.Create "addons-926744"
	I0603 10:39:22.527753   15688 start.go:293] postStartSetup for "addons-926744" (driver="kvm2")
	I0603 10:39:22.527761   15688 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0603 10:39:22.527776   15688 main.go:141] libmachine: (addons-926744) Calling .DriverName
	I0603 10:39:22.527996   15688 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0603 10:39:22.528020   15688 main.go:141] libmachine: (addons-926744) Calling .GetSSHHostname
	I0603 10:39:22.530310   15688 main.go:141] libmachine: (addons-926744) DBG | domain addons-926744 has defined MAC address 52:54:00:ef:0f:40 in network mk-addons-926744
	I0603 10:39:22.530683   15688 main.go:141] libmachine: (addons-926744) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ef:0f:40", ip: ""} in network mk-addons-926744: {Iface:virbr1 ExpiryTime:2024-06-03 11:39:15 +0000 UTC Type:0 Mac:52:54:00:ef:0f:40 Iaid: IPaddr:192.168.39.188 Prefix:24 Hostname:addons-926744 Clientid:01:52:54:00:ef:0f:40}
	I0603 10:39:22.530702   15688 main.go:141] libmachine: (addons-926744) DBG | domain addons-926744 has defined IP address 192.168.39.188 and MAC address 52:54:00:ef:0f:40 in network mk-addons-926744
	I0603 10:39:22.530829   15688 main.go:141] libmachine: (addons-926744) Calling .GetSSHPort
	I0603 10:39:22.531004   15688 main.go:141] libmachine: (addons-926744) Calling .GetSSHKeyPath
	I0603 10:39:22.531190   15688 main.go:141] libmachine: (addons-926744) Calling .GetSSHUsername
	I0603 10:39:22.531336   15688 sshutil.go:53] new ssh client: &{IP:192.168.39.188 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19008-7755/.minikube/machines/addons-926744/id_rsa Username:docker}
	I0603 10:39:22.612774   15688 ssh_runner.go:195] Run: cat /etc/os-release
	I0603 10:39:22.616746   15688 info.go:137] Remote host: Buildroot 2023.02.9
	I0603 10:39:22.616768   15688 filesync.go:126] Scanning /home/jenkins/minikube-integration/19008-7755/.minikube/addons for local assets ...
	I0603 10:39:22.616830   15688 filesync.go:126] Scanning /home/jenkins/minikube-integration/19008-7755/.minikube/files for local assets ...
	I0603 10:39:22.616864   15688 start.go:296] duration metric: took 89.105826ms for postStartSetup
	I0603 10:39:22.616902   15688 main.go:141] libmachine: (addons-926744) Calling .GetConfigRaw
	I0603 10:39:22.617395   15688 main.go:141] libmachine: (addons-926744) Calling .GetIP
	I0603 10:39:22.620127   15688 main.go:141] libmachine: (addons-926744) DBG | domain addons-926744 has defined MAC address 52:54:00:ef:0f:40 in network mk-addons-926744
	I0603 10:39:22.620475   15688 main.go:141] libmachine: (addons-926744) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ef:0f:40", ip: ""} in network mk-addons-926744: {Iface:virbr1 ExpiryTime:2024-06-03 11:39:15 +0000 UTC Type:0 Mac:52:54:00:ef:0f:40 Iaid: IPaddr:192.168.39.188 Prefix:24 Hostname:addons-926744 Clientid:01:52:54:00:ef:0f:40}
	I0603 10:39:22.620504   15688 main.go:141] libmachine: (addons-926744) DBG | domain addons-926744 has defined IP address 192.168.39.188 and MAC address 52:54:00:ef:0f:40 in network mk-addons-926744
	I0603 10:39:22.620740   15688 profile.go:143] Saving config to /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/addons-926744/config.json ...
	I0603 10:39:22.620893   15688 start.go:128] duration metric: took 21.877040801s to createHost
	I0603 10:39:22.620914   15688 main.go:141] libmachine: (addons-926744) Calling .GetSSHHostname
	I0603 10:39:22.622879   15688 main.go:141] libmachine: (addons-926744) DBG | domain addons-926744 has defined MAC address 52:54:00:ef:0f:40 in network mk-addons-926744
	I0603 10:39:22.623185   15688 main.go:141] libmachine: (addons-926744) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ef:0f:40", ip: ""} in network mk-addons-926744: {Iface:virbr1 ExpiryTime:2024-06-03 11:39:15 +0000 UTC Type:0 Mac:52:54:00:ef:0f:40 Iaid: IPaddr:192.168.39.188 Prefix:24 Hostname:addons-926744 Clientid:01:52:54:00:ef:0f:40}
	I0603 10:39:22.623214   15688 main.go:141] libmachine: (addons-926744) DBG | domain addons-926744 has defined IP address 192.168.39.188 and MAC address 52:54:00:ef:0f:40 in network mk-addons-926744
	I0603 10:39:22.623315   15688 main.go:141] libmachine: (addons-926744) Calling .GetSSHPort
	I0603 10:39:22.623489   15688 main.go:141] libmachine: (addons-926744) Calling .GetSSHKeyPath
	I0603 10:39:22.623632   15688 main.go:141] libmachine: (addons-926744) Calling .GetSSHKeyPath
	I0603 10:39:22.623749   15688 main.go:141] libmachine: (addons-926744) Calling .GetSSHUsername
	I0603 10:39:22.623881   15688 main.go:141] libmachine: Using SSH client type: native
	I0603 10:39:22.624088   15688 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.188 22 <nil> <nil>}
	I0603 10:39:22.624103   15688 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0603 10:39:22.735554   15688 main.go:141] libmachine: SSH cmd err, output: <nil>: 1717411162.708587879
	
	I0603 10:39:22.735574   15688 fix.go:216] guest clock: 1717411162.708587879
	I0603 10:39:22.735581   15688 fix.go:229] Guest: 2024-06-03 10:39:22.708587879 +0000 UTC Remote: 2024-06-03 10:39:22.620903621 +0000 UTC m=+21.971514084 (delta=87.684258ms)
	I0603 10:39:22.735612   15688 fix.go:200] guest clock delta is within tolerance: 87.684258ms
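
The fix.go lines above parse the guest's date output (Unix seconds with nanoseconds), compare it with the host clock, and accept the roughly 87.68ms drift as within tolerance. A minimal sketch of that comparison; the 2-second tolerance used in main is an assumed value for illustration only:

package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// clockDelta parses the guest's "seconds.nanoseconds" date output and returns
// the absolute drift from the host reference time.
func clockDelta(guestOutput string, host time.Time) (time.Duration, error) {
	secs, err := strconv.ParseFloat(strings.TrimSpace(guestOutput), 64)
	if err != nil {
		return 0, err
	}
	guest := time.Unix(0, int64(secs*float64(time.Second)))
	delta := guest.Sub(host)
	if delta < 0 {
		delta = -delta
	}
	return delta, nil
}

func main() {
	// Values taken from the log lines above.
	host := time.Date(2024, time.June, 3, 10, 39, 22, 620903621, time.UTC)
	delta, _ := clockDelta("1717411162.708587879", host)
	fmt.Println(delta, delta <= 2*time.Second) // ~87.68ms true
}
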
	I0603 10:39:22.735617   15688 start.go:83] releasing machines lock for "addons-926744", held for 21.991830654s
	I0603 10:39:22.735640   15688 main.go:141] libmachine: (addons-926744) Calling .DriverName
	I0603 10:39:22.735892   15688 main.go:141] libmachine: (addons-926744) Calling .GetIP
	I0603 10:39:22.738244   15688 main.go:141] libmachine: (addons-926744) DBG | domain addons-926744 has defined MAC address 52:54:00:ef:0f:40 in network mk-addons-926744
	I0603 10:39:22.738492   15688 main.go:141] libmachine: (addons-926744) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ef:0f:40", ip: ""} in network mk-addons-926744: {Iface:virbr1 ExpiryTime:2024-06-03 11:39:15 +0000 UTC Type:0 Mac:52:54:00:ef:0f:40 Iaid: IPaddr:192.168.39.188 Prefix:24 Hostname:addons-926744 Clientid:01:52:54:00:ef:0f:40}
	I0603 10:39:22.738519   15688 main.go:141] libmachine: (addons-926744) DBG | domain addons-926744 has defined IP address 192.168.39.188 and MAC address 52:54:00:ef:0f:40 in network mk-addons-926744
	I0603 10:39:22.738657   15688 main.go:141] libmachine: (addons-926744) Calling .DriverName
	I0603 10:39:22.739074   15688 main.go:141] libmachine: (addons-926744) Calling .DriverName
	I0603 10:39:22.739222   15688 main.go:141] libmachine: (addons-926744) Calling .DriverName
	I0603 10:39:22.739337   15688 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0603 10:39:22.739389   15688 main.go:141] libmachine: (addons-926744) Calling .GetSSHHostname
	I0603 10:39:22.739447   15688 ssh_runner.go:195] Run: cat /version.json
	I0603 10:39:22.739469   15688 main.go:141] libmachine: (addons-926744) Calling .GetSSHHostname
	I0603 10:39:22.741962   15688 main.go:141] libmachine: (addons-926744) DBG | domain addons-926744 has defined MAC address 52:54:00:ef:0f:40 in network mk-addons-926744
	I0603 10:39:22.742092   15688 main.go:141] libmachine: (addons-926744) DBG | domain addons-926744 has defined MAC address 52:54:00:ef:0f:40 in network mk-addons-926744
	I0603 10:39:22.742332   15688 main.go:141] libmachine: (addons-926744) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ef:0f:40", ip: ""} in network mk-addons-926744: {Iface:virbr1 ExpiryTime:2024-06-03 11:39:15 +0000 UTC Type:0 Mac:52:54:00:ef:0f:40 Iaid: IPaddr:192.168.39.188 Prefix:24 Hostname:addons-926744 Clientid:01:52:54:00:ef:0f:40}
	I0603 10:39:22.742356   15688 main.go:141] libmachine: (addons-926744) DBG | domain addons-926744 has defined IP address 192.168.39.188 and MAC address 52:54:00:ef:0f:40 in network mk-addons-926744
	I0603 10:39:22.742492   15688 main.go:141] libmachine: (addons-926744) Calling .GetSSHPort
	I0603 10:39:22.742598   15688 main.go:141] libmachine: (addons-926744) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ef:0f:40", ip: ""} in network mk-addons-926744: {Iface:virbr1 ExpiryTime:2024-06-03 11:39:15 +0000 UTC Type:0 Mac:52:54:00:ef:0f:40 Iaid: IPaddr:192.168.39.188 Prefix:24 Hostname:addons-926744 Clientid:01:52:54:00:ef:0f:40}
	I0603 10:39:22.742623   15688 main.go:141] libmachine: (addons-926744) DBG | domain addons-926744 has defined IP address 192.168.39.188 and MAC address 52:54:00:ef:0f:40 in network mk-addons-926744
	I0603 10:39:22.742667   15688 main.go:141] libmachine: (addons-926744) Calling .GetSSHKeyPath
	I0603 10:39:22.742817   15688 main.go:141] libmachine: (addons-926744) Calling .GetSSHPort
	I0603 10:39:22.742826   15688 main.go:141] libmachine: (addons-926744) Calling .GetSSHUsername
	I0603 10:39:22.742988   15688 main.go:141] libmachine: (addons-926744) Calling .GetSSHKeyPath
	I0603 10:39:22.742996   15688 sshutil.go:53] new ssh client: &{IP:192.168.39.188 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19008-7755/.minikube/machines/addons-926744/id_rsa Username:docker}
	I0603 10:39:22.743120   15688 main.go:141] libmachine: (addons-926744) Calling .GetSSHUsername
	I0603 10:39:22.743234   15688 sshutil.go:53] new ssh client: &{IP:192.168.39.188 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19008-7755/.minikube/machines/addons-926744/id_rsa Username:docker}
	I0603 10:39:22.849068   15688 ssh_runner.go:195] Run: systemctl --version
	I0603 10:39:22.855018   15688 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0603 10:39:23.012170   15688 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0603 10:39:23.018709   15688 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0603 10:39:23.018758   15688 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0603 10:39:23.034253   15688 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0603 10:39:23.034271   15688 start.go:494] detecting cgroup driver to use...
	I0603 10:39:23.034321   15688 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0603 10:39:23.050406   15688 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0603 10:39:23.062935   15688 docker.go:217] disabling cri-docker service (if available) ...
	I0603 10:39:23.062972   15688 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0603 10:39:23.075521   15688 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0603 10:39:23.088043   15688 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0603 10:39:23.197029   15688 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0603 10:39:23.344391   15688 docker.go:233] disabling docker service ...
	I0603 10:39:23.344453   15688 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0603 10:39:23.359448   15688 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0603 10:39:23.371431   15688 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0603 10:39:23.510818   15688 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0603 10:39:23.635139   15688 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0603 10:39:23.648933   15688 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0603 10:39:23.666622   15688 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0603 10:39:23.666672   15688 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 10:39:23.676952   15688 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0603 10:39:23.676996   15688 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 10:39:23.686714   15688 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 10:39:23.696483   15688 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 10:39:23.706367   15688 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0603 10:39:23.716174   15688 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 10:39:23.726218   15688 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 10:39:23.742841   15688 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
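
The run of sed commands above rewrites individual keys in /etc/crio/crio.conf.d/02-crio.conf (pause image, cgroup manager, conmon cgroup, unprivileged-port sysctl). The sketch below only illustrates the general replace-or-append pattern those edits follow; minikube itself shells out to sed as logged, and the key names in main are taken from the commands above.

package main

import (
	"fmt"
	"regexp"
)

// setConfigValue rewrites any existing `key = ...` line to the quoted value,
// appending the line if the key is absent.
func setConfigValue(contents, key, value string) string {
	re := regexp.MustCompile(`(?m)^.*` + regexp.QuoteMeta(key) + ` = .*$`)
	line := fmt.Sprintf("%s = %q", key, value)
	if re.MatchString(contents) {
		return re.ReplaceAllString(contents, line)
	}
	return contents + line + "\n"
}

func main() {
	conf := "pause_image = \"registry.k8s.io/pause:3.8\"\n"
	conf = setConfigValue(conf, "pause_image", "registry.k8s.io/pause:3.9")
	conf = setConfigValue(conf, "cgroup_manager", "cgroupfs")
	fmt.Print(conf)
}
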
	I0603 10:39:23.752762   15688 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0603 10:39:23.761523   15688 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0603 10:39:23.761566   15688 ssh_runner.go:195] Run: sudo modprobe br_netfilter
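
Here the sysctl probe fails because br_netfilter is not loaded yet, so the module is loaded as a fallback; only then does the bridge-nf-call-iptables key exist and bridged pod traffic become visible to iptables. A compact sketch of that check-then-load sequence, run locally with os/exec rather than through the ssh_runner in the log:

package main

import (
	"fmt"
	"os/exec"
)

// ensureBrNetfilter mirrors the fallback above: if the bridge sysctl key is
// missing, load br_netfilter so the key appears.
func ensureBrNetfilter() error {
	if err := exec.Command("sudo", "sysctl", "net.bridge.bridge-nf-call-iptables").Run(); err == nil {
		return nil
	}
	return exec.Command("sudo", "modprobe", "br_netfilter").Run()
}

func main() {
	if err := ensureBrNetfilter(); err != nil {
		fmt.Println("could not enable br_netfilter:", err)
	}
}
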
	I0603 10:39:23.774573   15688 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0603 10:39:23.783462   15688 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0603 10:39:23.899169   15688 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0603 10:39:24.033431   15688 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0603 10:39:24.033510   15688 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0603 10:39:24.038630   15688 start.go:562] Will wait 60s for crictl version
	I0603 10:39:24.038688   15688 ssh_runner.go:195] Run: which crictl
	I0603 10:39:24.042629   15688 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0603 10:39:24.083375   15688 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0603 10:39:24.083470   15688 ssh_runner.go:195] Run: crio --version
	I0603 10:39:24.111320   15688 ssh_runner.go:195] Run: crio --version
	I0603 10:39:24.141167   15688 out.go:177] * Preparing Kubernetes v1.30.1 on CRI-O 1.29.1 ...
	I0603 10:39:24.142262   15688 main.go:141] libmachine: (addons-926744) Calling .GetIP
	I0603 10:39:24.144907   15688 main.go:141] libmachine: (addons-926744) DBG | domain addons-926744 has defined MAC address 52:54:00:ef:0f:40 in network mk-addons-926744
	I0603 10:39:24.145228   15688 main.go:141] libmachine: (addons-926744) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ef:0f:40", ip: ""} in network mk-addons-926744: {Iface:virbr1 ExpiryTime:2024-06-03 11:39:15 +0000 UTC Type:0 Mac:52:54:00:ef:0f:40 Iaid: IPaddr:192.168.39.188 Prefix:24 Hostname:addons-926744 Clientid:01:52:54:00:ef:0f:40}
	I0603 10:39:24.145256   15688 main.go:141] libmachine: (addons-926744) DBG | domain addons-926744 has defined IP address 192.168.39.188 and MAC address 52:54:00:ef:0f:40 in network mk-addons-926744
	I0603 10:39:24.145431   15688 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0603 10:39:24.149666   15688 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0603 10:39:24.162397   15688 kubeadm.go:877] updating cluster {Name:addons-926744 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:addons-926744 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.188 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0603 10:39:24.162528   15688 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime crio
	I0603 10:39:24.162578   15688 ssh_runner.go:195] Run: sudo crictl images --output json
	I0603 10:39:24.195782   15688 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.1". assuming images are not preloaded.
	I0603 10:39:24.195851   15688 ssh_runner.go:195] Run: which lz4
	I0603 10:39:24.199796   15688 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0603 10:39:24.203991   15688 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0603 10:39:24.204014   15688 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (394537501 bytes)
	I0603 10:39:25.473804   15688 crio.go:462] duration metric: took 1.274045772s to copy over tarball
	I0603 10:39:25.473876   15688 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0603 10:39:27.732622   15688 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.258712261s)
	I0603 10:39:27.732659   15688 crio.go:469] duration metric: took 2.258825539s to extract the tarball
	I0603 10:39:27.732670   15688 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0603 10:39:27.770169   15688 ssh_runner.go:195] Run: sudo crictl images --output json
	I0603 10:39:27.818097   15688 crio.go:514] all images are preloaded for cri-o runtime.
	I0603 10:39:27.818123   15688 cache_images.go:84] Images are preloaded, skipping loading
	I0603 10:39:27.818133   15688 kubeadm.go:928] updating node { 192.168.39.188 8443 v1.30.1 crio true true} ...
	I0603 10:39:27.818241   15688 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-926744 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.188
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.1 ClusterName:addons-926744 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0603 10:39:27.818315   15688 ssh_runner.go:195] Run: crio config
	I0603 10:39:27.859809   15688 cni.go:84] Creating CNI manager for ""
	I0603 10:39:27.859828   15688 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0603 10:39:27.859837   15688 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0603 10:39:27.859858   15688 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.188 APIServerPort:8443 KubernetesVersion:v1.30.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-926744 NodeName:addons-926744 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.188"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.188 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0603 10:39:27.859975   15688 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.188
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-926744"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.188
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.188"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0603 10:39:27.860029   15688 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.1
	I0603 10:39:27.870310   15688 binaries.go:44] Found k8s binaries, skipping transfer
	I0603 10:39:27.870387   15688 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0603 10:39:27.879945   15688 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0603 10:39:27.895930   15688 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0603 10:39:27.911575   15688 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2157 bytes)
	I0603 10:39:27.927182   15688 ssh_runner.go:195] Run: grep 192.168.39.188	control-plane.minikube.internal$ /etc/hosts
	I0603 10:39:27.930892   15688 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.188	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0603 10:39:27.942855   15688 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0603 10:39:28.049910   15688 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0603 10:39:28.066146   15688 certs.go:68] Setting up /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/addons-926744 for IP: 192.168.39.188
	I0603 10:39:28.066167   15688 certs.go:194] generating shared ca certs ...
	I0603 10:39:28.066179   15688 certs.go:226] acquiring lock for ca certs: {Name:mk50092df3bdecbf959926adc377ee06b75aacd9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 10:39:28.066327   15688 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/19008-7755/.minikube/ca.key
	I0603 10:39:28.307328   15688 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19008-7755/.minikube/ca.crt ...
	I0603 10:39:28.307353   15688 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19008-7755/.minikube/ca.crt: {Name:mk984ed7a059f1be0c7e39f38d2e6183de9bbdff Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 10:39:28.307510   15688 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19008-7755/.minikube/ca.key ...
	I0603 10:39:28.307520   15688 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19008-7755/.minikube/ca.key: {Name:mk82c7e7b22a8dabc509ee5632c503ace457f1ad Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 10:39:28.307594   15688 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19008-7755/.minikube/proxy-client-ca.key
	I0603 10:39:28.423209   15688 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19008-7755/.minikube/proxy-client-ca.crt ...
	I0603 10:39:28.423237   15688 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19008-7755/.minikube/proxy-client-ca.crt: {Name:mka881b38c9e88d6c084321a1bfb3b4e4074f25f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 10:39:28.423393   15688 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19008-7755/.minikube/proxy-client-ca.key ...
	I0603 10:39:28.423405   15688 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19008-7755/.minikube/proxy-client-ca.key: {Name:mk349452fbb1bf63c9303e0d2bae66707b31ec88 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 10:39:28.423470   15688 certs.go:256] generating profile certs ...
	I0603 10:39:28.423517   15688 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/addons-926744/client.key
	I0603 10:39:28.423531   15688 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/addons-926744/client.crt with IP's: []
	I0603 10:39:28.686409   15688 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/addons-926744/client.crt ...
	I0603 10:39:28.686435   15688 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/addons-926744/client.crt: {Name:mke7d608cc02f6475b5fad9c4d3da0b5cbfee0a9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 10:39:28.686576   15688 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/addons-926744/client.key ...
	I0603 10:39:28.686586   15688 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/addons-926744/client.key: {Name:mk5be8a532a5d7bb239b3a45c6c370a2517cd8d6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 10:39:28.686647   15688 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/addons-926744/apiserver.key.ced1ab57
	I0603 10:39:28.686663   15688 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/addons-926744/apiserver.crt.ced1ab57 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.188]
	I0603 10:39:28.892305   15688 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/addons-926744/apiserver.crt.ced1ab57 ...
	I0603 10:39:28.892337   15688 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/addons-926744/apiserver.crt.ced1ab57: {Name:mk23ceeaaa209592cdc8986d5b781decf2eb3719 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 10:39:28.892523   15688 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/addons-926744/apiserver.key.ced1ab57 ...
	I0603 10:39:28.892541   15688 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/addons-926744/apiserver.key.ced1ab57: {Name:mkf7feeae242468b71e875a4f34e0d9e741c0102 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 10:39:28.892637   15688 certs.go:381] copying /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/addons-926744/apiserver.crt.ced1ab57 -> /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/addons-926744/apiserver.crt
	I0603 10:39:28.892727   15688 certs.go:385] copying /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/addons-926744/apiserver.key.ced1ab57 -> /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/addons-926744/apiserver.key
	I0603 10:39:28.892777   15688 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/addons-926744/proxy-client.key
	I0603 10:39:28.892792   15688 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/addons-926744/proxy-client.crt with IP's: []
	I0603 10:39:28.991268   15688 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/addons-926744/proxy-client.crt ...
	I0603 10:39:28.991295   15688 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/addons-926744/proxy-client.crt: {Name:mk91b1b04c07e0abd5edeb22741cb687164322a5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 10:39:28.991465   15688 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/addons-926744/proxy-client.key ...
	I0603 10:39:28.991477   15688 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/addons-926744/proxy-client.key: {Name:mk35e81d34e79efaa7ae4abefa0f7bbf60b8ccf2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
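
The certs.go/crypto.go lines above create the shared CAs and then sign the per-profile certificates, including the apiserver certificate with the SAN list [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.188]. Below is a condensed sketch of such a signing step with Go's crypto/x509; the subject, key size and exact fields are illustrative assumptions, not minikube's actual parameters.

package certs

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"time"
)

// signServerCert signs a server certificate for the given IP SANs with an
// existing CA, roughly what the "apiserver" profile cert step above does.
func signServerCert(caCert *x509.Certificate, caKey *rsa.PrivateKey, ips []net.IP) ([]byte, *rsa.PrivateKey, error) {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		return nil, nil, err
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "minikube", Organization: []string{"system:masters"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // matches the CertExpiration value in the config dump
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses:  ips, // e.g. 10.96.0.1, 127.0.0.1, 10.0.0.1, 192.168.39.188
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
	if err != nil {
		return nil, nil, err
	}
	certPEM := pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der})
	return certPEM, key, nil
}
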
	I0603 10:39:28.991675   15688 certs.go:484] found cert: /home/jenkins/minikube-integration/19008-7755/.minikube/certs/ca-key.pem (1679 bytes)
	I0603 10:39:28.991707   15688 certs.go:484] found cert: /home/jenkins/minikube-integration/19008-7755/.minikube/certs/ca.pem (1082 bytes)
	I0603 10:39:28.991730   15688 certs.go:484] found cert: /home/jenkins/minikube-integration/19008-7755/.minikube/certs/cert.pem (1123 bytes)
	I0603 10:39:28.991759   15688 certs.go:484] found cert: /home/jenkins/minikube-integration/19008-7755/.minikube/certs/key.pem (1679 bytes)
	I0603 10:39:28.992270   15688 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0603 10:39:29.040169   15688 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0603 10:39:29.067982   15688 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0603 10:39:29.090567   15688 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0603 10:39:29.113082   15688 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/addons-926744/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0603 10:39:29.135913   15688 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/addons-926744/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0603 10:39:29.158512   15688 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/addons-926744/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0603 10:39:29.181346   15688 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/addons-926744/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0603 10:39:29.203449   15688 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0603 10:39:29.225764   15688 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0603 10:39:29.241671   15688 ssh_runner.go:195] Run: openssl version
	I0603 10:39:29.247206   15688 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0603 10:39:29.257858   15688 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0603 10:39:29.262385   15688 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jun  3 10:39 /usr/share/ca-certificates/minikubeCA.pem
	I0603 10:39:29.262434   15688 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0603 10:39:29.268350   15688 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
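
The symlink step above uses OpenSSL's subject-hash naming: TLS libraries look up CAs in /etc/ssl/certs as "<subject-hash>.0", and "openssl x509 -hash -noout" prints that hash (b5213941 for the minikubeCA here). A small sketch of the same step driven from Go; the paths in main mirror the ones in the log:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkCACert symlinks certPath into certsDir under its OpenSSL subject hash,
// e.g. /etc/ssl/certs/b5213941.0 -> minikubeCA.pem as in the log above.
func linkCACert(certPath, certsDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return err
	}
	link := filepath.Join(certsDir, strings.TrimSpace(string(out))+".0")
	_ = os.Remove(link) // drop any stale link first
	return os.Symlink(certPath, link)
}

func main() {
	if err := linkCACert("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		fmt.Println(err)
	}
}
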
	I0603 10:39:29.279498   15688 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0603 10:39:29.283652   15688 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0603 10:39:29.283707   15688 kubeadm.go:391] StartCluster: {Name:addons-926744 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:addons-926744 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.188 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0603 10:39:29.283785   15688 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0603 10:39:29.283823   15688 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0603 10:39:29.318028   15688 cri.go:89] found id: ""
	I0603 10:39:29.318101   15688 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0603 10:39:29.328068   15688 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0603 10:39:29.337625   15688 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0603 10:39:29.347197   15688 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0603 10:39:29.347215   15688 kubeadm.go:156] found existing configuration files:
	
	I0603 10:39:29.347247   15688 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0603 10:39:29.356339   15688 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0603 10:39:29.356378   15688 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0603 10:39:29.365771   15688 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0603 10:39:29.374519   15688 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0603 10:39:29.374566   15688 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0603 10:39:29.383968   15688 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0603 10:39:29.393211   15688 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0603 10:39:29.393253   15688 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0603 10:39:29.402717   15688 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0603 10:39:29.411721   15688 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0603 10:39:29.411754   15688 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0603 10:39:29.421177   15688 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0603 10:39:29.476425   15688 kubeadm.go:309] [init] Using Kubernetes version: v1.30.1
	I0603 10:39:29.476503   15688 kubeadm.go:309] [preflight] Running pre-flight checks
	I0603 10:39:29.597177   15688 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0603 10:39:29.597267   15688 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0603 10:39:29.597417   15688 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0603 10:39:29.806890   15688 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0603 10:39:30.066194   15688 out.go:204]   - Generating certificates and keys ...
	I0603 10:39:30.066354   15688 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0603 10:39:30.066445   15688 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0603 10:39:30.066546   15688 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0603 10:39:30.255981   15688 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0603 10:39:30.525682   15688 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0603 10:39:30.648978   15688 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0603 10:39:31.113794   15688 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0603 10:39:31.113969   15688 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [addons-926744 localhost] and IPs [192.168.39.188 127.0.0.1 ::1]
	I0603 10:39:31.502754   15688 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0603 10:39:31.502942   15688 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [addons-926744 localhost] and IPs [192.168.39.188 127.0.0.1 ::1]
	I0603 10:39:31.743899   15688 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0603 10:39:32.091205   15688 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0603 10:39:32.449506   15688 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0603 10:39:32.449754   15688 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0603 10:39:32.606839   15688 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0603 10:39:32.728286   15688 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0603 10:39:32.875918   15688 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0603 10:39:33.068539   15688 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0603 10:39:33.126307   15688 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0603 10:39:33.126978   15688 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0603 10:39:33.129307   15688 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0603 10:39:33.130943   15688 out.go:204]   - Booting up control plane ...
	I0603 10:39:33.131074   15688 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0603 10:39:33.131194   15688 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0603 10:39:33.132324   15688 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0603 10:39:33.150895   15688 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0603 10:39:33.151824   15688 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0603 10:39:33.151953   15688 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0603 10:39:33.276217   15688 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0603 10:39:33.276320   15688 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0603 10:39:33.777537   15688 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 501.486512ms
	I0603 10:39:33.777645   15688 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0603 10:39:38.776396   15688 kubeadm.go:309] [api-check] The API server is healthy after 5.002184862s
	I0603 10:39:38.789344   15688 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0603 10:39:38.802566   15688 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0603 10:39:38.826419   15688 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0603 10:39:38.826699   15688 kubeadm.go:309] [mark-control-plane] Marking the node addons-926744 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0603 10:39:38.836893   15688 kubeadm.go:309] [bootstrap-token] Using token: 9hbrg0.lmmhr5ylciaequvw
	I0603 10:39:38.838140   15688 out.go:204]   - Configuring RBAC rules ...
	I0603 10:39:38.838229   15688 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0603 10:39:38.841539   15688 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0603 10:39:38.850513   15688 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0603 10:39:38.853597   15688 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0603 10:39:38.856557   15688 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0603 10:39:38.859328   15688 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0603 10:39:39.185416   15688 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0603 10:39:39.617733   15688 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0603 10:39:40.185892   15688 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0603 10:39:40.185931   15688 kubeadm.go:309] 
	I0603 10:39:40.186031   15688 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0603 10:39:40.186046   15688 kubeadm.go:309] 
	I0603 10:39:40.186262   15688 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0603 10:39:40.186279   15688 kubeadm.go:309] 
	I0603 10:39:40.186311   15688 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0603 10:39:40.186360   15688 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0603 10:39:40.186416   15688 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0603 10:39:40.186425   15688 kubeadm.go:309] 
	I0603 10:39:40.186502   15688 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0603 10:39:40.186511   15688 kubeadm.go:309] 
	I0603 10:39:40.186549   15688 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0603 10:39:40.186567   15688 kubeadm.go:309] 
	I0603 10:39:40.186648   15688 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0603 10:39:40.186745   15688 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0603 10:39:40.186844   15688 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0603 10:39:40.186855   15688 kubeadm.go:309] 
	I0603 10:39:40.186981   15688 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0603 10:39:40.187110   15688 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0603 10:39:40.187129   15688 kubeadm.go:309] 
	I0603 10:39:40.187243   15688 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token 9hbrg0.lmmhr5ylciaequvw \
	I0603 10:39:40.187381   15688 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:efe6cbdd58a590fb6c3b56a05a1648145bfb13b8a8cd2383ea34b710fa987e0b \
	I0603 10:39:40.187403   15688 kubeadm.go:309] 	--control-plane 
	I0603 10:39:40.187407   15688 kubeadm.go:309] 
	I0603 10:39:40.187487   15688 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0603 10:39:40.187494   15688 kubeadm.go:309] 
	I0603 10:39:40.187590   15688 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token 9hbrg0.lmmhr5ylciaequvw \
	I0603 10:39:40.187731   15688 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:efe6cbdd58a590fb6c3b56a05a1648145bfb13b8a8cd2383ea34b710fa987e0b 
	I0603 10:39:40.188165   15688 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0603 10:39:40.188195   15688 cni.go:84] Creating CNI manager for ""
	I0603 10:39:40.188206   15688 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0603 10:39:40.189790   15688 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0603 10:39:40.190941   15688 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0603 10:39:40.201559   15688 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0603 10:39:40.221034   15688 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0603 10:39:40.221129   15688 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 10:39:40.221136   15688 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-926744 minikube.k8s.io/updated_at=2024_06_03T10_39_40_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=599070631c2216ebc936292d491e4fe10e15b9d8 minikube.k8s.io/name=addons-926744 minikube.k8s.io/primary=true
	I0603 10:39:40.360045   15688 ops.go:34] apiserver oom_adj: -16
	I0603 10:39:40.360133   15688 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 10:39:40.860813   15688 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 10:39:41.360423   15688 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 10:39:41.860916   15688 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 10:39:42.360194   15688 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 10:39:42.860438   15688 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 10:39:43.360500   15688 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 10:39:43.860260   15688 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 10:39:44.360675   15688 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 10:39:44.860710   15688 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 10:39:45.360191   15688 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 10:39:45.860896   15688 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 10:39:46.361031   15688 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 10:39:46.860193   15688 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 10:39:47.360946   15688 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 10:39:47.860306   15688 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 10:39:48.360590   15688 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 10:39:48.860941   15688 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 10:39:49.360407   15688 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 10:39:49.860893   15688 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 10:39:50.360464   15688 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 10:39:50.860825   15688 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 10:39:51.360932   15688 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 10:39:51.861021   15688 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 10:39:52.360313   15688 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 10:39:52.860689   15688 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 10:39:52.945051   15688 kubeadm.go:1107] duration metric: took 12.723998121s to wait for elevateKubeSystemPrivileges
	W0603 10:39:52.945090   15688 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0603 10:39:52.945097   15688 kubeadm.go:393] duration metric: took 23.661395353s to StartCluster
	I0603 10:39:52.945113   15688 settings.go:142] acquiring lock: {Name:mkda1bdbbfe91266270f1d999e6d56fc2830d6f8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 10:39:52.945246   15688 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19008-7755/kubeconfig
	I0603 10:39:52.945592   15688 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19008-7755/kubeconfig: {Name:mk2f45bf5da44c5570e14124ae482cac309d97b6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 10:39:52.945785   15688 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0603 10:39:52.945808   15688 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.39.188 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0603 10:39:52.947615   15688 out.go:177] * Verifying Kubernetes components...
	I0603 10:39:52.945867   15688 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:true inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0603 10:39:52.946064   15688 config.go:182] Loaded profile config "addons-926744": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0603 10:39:52.948920   15688 addons.go:69] Setting cloud-spanner=true in profile "addons-926744"
	I0603 10:39:52.948934   15688 addons.go:69] Setting helm-tiller=true in profile "addons-926744"
	I0603 10:39:52.948939   15688 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0603 10:39:52.948948   15688 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-926744"
	I0603 10:39:52.948953   15688 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-926744"
	I0603 10:39:52.948965   15688 addons.go:234] Setting addon helm-tiller=true in "addons-926744"
	I0603 10:39:52.948976   15688 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-926744"
	I0603 10:39:52.948982   15688 addons.go:69] Setting registry=true in profile "addons-926744"
	I0603 10:39:52.948977   15688 addons.go:69] Setting default-storageclass=true in profile "addons-926744"
	I0603 10:39:52.948934   15688 addons.go:69] Setting gcp-auth=true in profile "addons-926744"
	I0603 10:39:52.949005   15688 addons.go:69] Setting ingress=true in profile "addons-926744"
	I0603 10:39:52.949010   15688 addons.go:69] Setting storage-provisioner=true in profile "addons-926744"
	I0603 10:39:52.949012   15688 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-926744"
	I0603 10:39:52.949012   15688 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-926744"
	I0603 10:39:52.949021   15688 addons.go:69] Setting inspektor-gadget=true in profile "addons-926744"
	I0603 10:39:52.949074   15688 addons.go:234] Setting addon inspektor-gadget=true in "addons-926744"
	I0603 10:39:52.949097   15688 host.go:66] Checking if "addons-926744" exists ...
	I0603 10:39:52.949011   15688 addons.go:69] Setting volumesnapshots=true in profile "addons-926744"
	I0603 10:39:52.949172   15688 addons.go:234] Setting addon volumesnapshots=true in "addons-926744"
	I0603 10:39:52.949193   15688 host.go:66] Checking if "addons-926744" exists ...
	I0603 10:39:52.948966   15688 addons.go:234] Setting addon cloud-spanner=true in "addons-926744"
	I0603 10:39:52.949297   15688 host.go:66] Checking if "addons-926744" exists ...
	I0603 10:39:52.949006   15688 addons.go:69] Setting volcano=true in profile "addons-926744"
	I0603 10:39:52.949379   15688 addons.go:234] Setting addon volcano=true in "addons-926744"
	I0603 10:39:52.949419   15688 host.go:66] Checking if "addons-926744" exists ...
	I0603 10:39:52.949022   15688 addons.go:234] Setting addon ingress=true in "addons-926744"
	I0603 10:39:52.949470   15688 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 10:39:52.949476   15688 host.go:66] Checking if "addons-926744" exists ...
	I0603 10:39:52.949507   15688 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 10:39:52.949527   15688 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 10:39:52.949534   15688 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 10:39:52.949548   15688 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 10:39:52.949565   15688 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 10:39:52.948973   15688 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-926744"
	I0603 10:39:52.949731   15688 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-926744"
	I0603 10:39:52.949806   15688 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 10:39:52.949838   15688 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 10:39:52.949852   15688 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 10:39:52.949877   15688 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 10:39:52.949925   15688 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 10:39:52.948928   15688 addons.go:69] Setting yakd=true in profile "addons-926744"
	I0603 10:39:52.949954   15688 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 10:39:52.949980   15688 addons.go:234] Setting addon yakd=true in "addons-926744"
	I0603 10:39:52.948999   15688 host.go:66] Checking if "addons-926744" exists ...
	I0603 10:39:52.948999   15688 host.go:66] Checking if "addons-926744" exists ...
	I0603 10:39:52.949005   15688 addons.go:234] Setting addon registry=true in "addons-926744"
	I0603 10:39:52.950058   15688 host.go:66] Checking if "addons-926744" exists ...
	I0603 10:39:52.950091   15688 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 10:39:52.950124   15688 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 10:39:52.949027   15688 addons.go:69] Setting metrics-server=true in profile "addons-926744"
	I0603 10:39:52.950351   15688 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 10:39:52.950368   15688 addons.go:234] Setting addon metrics-server=true in "addons-926744"
	I0603 10:39:52.950372   15688 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 10:39:52.949034   15688 addons.go:69] Setting ingress-dns=true in profile "addons-926744"
	I0603 10:39:52.950392   15688 addons.go:234] Setting addon ingress-dns=true in "addons-926744"
	I0603 10:39:52.949030   15688 mustload.go:65] Loading cluster: addons-926744
	I0603 10:39:52.949044   15688 host.go:66] Checking if "addons-926744" exists ...
	I0603 10:39:52.950508   15688 host.go:66] Checking if "addons-926744" exists ...
	I0603 10:39:52.950583   15688 config.go:182] Loaded profile config "addons-926744": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0603 10:39:52.950609   15688 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 10:39:52.950628   15688 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 10:39:52.950672   15688 host.go:66] Checking if "addons-926744" exists ...
	I0603 10:39:52.950706   15688 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 10:39:52.950721   15688 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 10:39:52.949036   15688 addons.go:234] Setting addon storage-provisioner=true in "addons-926744"
	I0603 10:39:52.950819   15688 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 10:39:52.950840   15688 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 10:39:52.950884   15688 host.go:66] Checking if "addons-926744" exists ...
	I0603 10:39:52.950912   15688 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 10:39:52.950929   15688 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 10:39:52.950990   15688 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 10:39:52.951033   15688 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 10:39:52.951161   15688 host.go:66] Checking if "addons-926744" exists ...
	I0603 10:39:52.970563   15688 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41479
	I0603 10:39:52.970639   15688 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39871
	I0603 10:39:52.970999   15688 main.go:141] libmachine: () Calling .GetVersion
	I0603 10:39:52.971114   15688 main.go:141] libmachine: () Calling .GetVersion
	I0603 10:39:52.971640   15688 main.go:141] libmachine: Using API Version  1
	I0603 10:39:52.971662   15688 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 10:39:52.971790   15688 main.go:141] libmachine: Using API Version  1
	I0603 10:39:52.971805   15688 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 10:39:52.972098   15688 main.go:141] libmachine: () Calling .GetMachineName
	I0603 10:39:52.972153   15688 main.go:141] libmachine: () Calling .GetMachineName
	I0603 10:39:52.972372   15688 main.go:141] libmachine: (addons-926744) Calling .GetState
	I0603 10:39:52.972438   15688 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46487
	I0603 10:39:52.972745   15688 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 10:39:52.972764   15688 main.go:141] libmachine: () Calling .GetVersion
	I0603 10:39:52.972778   15688 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 10:39:52.973208   15688 main.go:141] libmachine: Using API Version  1
	I0603 10:39:52.973228   15688 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 10:39:52.973529   15688 main.go:141] libmachine: () Calling .GetMachineName
	I0603 10:39:52.974098   15688 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 10:39:52.974138   15688 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 10:39:52.974366   15688 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39493
	I0603 10:39:52.976847   15688 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-926744"
	I0603 10:39:52.976890   15688 host.go:66] Checking if "addons-926744" exists ...
	I0603 10:39:52.977273   15688 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 10:39:52.977300   15688 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 10:39:52.979512   15688 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 10:39:52.979547   15688 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 10:39:52.979807   15688 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 10:39:52.979824   15688 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 10:39:52.980430   15688 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 10:39:52.980463   15688 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 10:39:52.981875   15688 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35111
	I0603 10:39:52.982330   15688 main.go:141] libmachine: () Calling .GetVersion
	I0603 10:39:52.983218   15688 main.go:141] libmachine: Using API Version  1
	I0603 10:39:52.983241   15688 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 10:39:52.983669   15688 main.go:141] libmachine: () Calling .GetVersion
	I0603 10:39:52.984025   15688 main.go:141] libmachine: () Calling .GetMachineName
	I0603 10:39:52.984510   15688 main.go:141] libmachine: Using API Version  1
	I0603 10:39:52.984534   15688 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 10:39:52.985046   15688 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 10:39:52.985085   15688 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 10:39:52.992533   15688 main.go:141] libmachine: () Calling .GetMachineName
	I0603 10:39:52.993131   15688 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 10:39:52.993169   15688 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 10:39:52.993422   15688 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34655
	I0603 10:39:52.993863   15688 main.go:141] libmachine: () Calling .GetVersion
	I0603 10:39:52.995565   15688 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35215
	I0603 10:39:52.995773   15688 main.go:141] libmachine: Using API Version  1
	I0603 10:39:52.995785   15688 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 10:39:52.996166   15688 main.go:141] libmachine: () Calling .GetVersion
	I0603 10:39:52.996645   15688 main.go:141] libmachine: Using API Version  1
	I0603 10:39:52.996662   15688 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 10:39:52.997009   15688 main.go:141] libmachine: () Calling .GetMachineName
	I0603 10:39:52.997221   15688 main.go:141] libmachine: (addons-926744) Calling .GetState
	I0603 10:39:52.998575   15688 main.go:141] libmachine: () Calling .GetMachineName
	I0603 10:39:52.999596   15688 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 10:39:52.999644   15688 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 10:39:53.001119   15688 addons.go:234] Setting addon default-storageclass=true in "addons-926744"
	I0603 10:39:53.001160   15688 host.go:66] Checking if "addons-926744" exists ...
	I0603 10:39:53.001516   15688 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 10:39:53.001547   15688 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 10:39:53.001717   15688 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43597
	I0603 10:39:53.002104   15688 main.go:141] libmachine: () Calling .GetVersion
	I0603 10:39:53.002628   15688 main.go:141] libmachine: Using API Version  1
	I0603 10:39:53.002646   15688 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 10:39:53.003080   15688 main.go:141] libmachine: () Calling .GetMachineName
	I0603 10:39:53.003641   15688 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 10:39:53.003675   15688 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 10:39:53.008851   15688 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45963
	I0603 10:39:53.009301   15688 main.go:141] libmachine: () Calling .GetVersion
	I0603 10:39:53.009828   15688 main.go:141] libmachine: Using API Version  1
	I0603 10:39:53.009845   15688 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 10:39:53.010204   15688 main.go:141] libmachine: () Calling .GetMachineName
	I0603 10:39:53.010400   15688 main.go:141] libmachine: (addons-926744) Calling .GetState
	I0603 10:39:53.012150   15688 main.go:141] libmachine: (addons-926744) Calling .DriverName
	I0603 10:39:53.014624   15688 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0603 10:39:53.016068   15688 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0603 10:39:53.016085   15688 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0603 10:39:53.016116   15688 main.go:141] libmachine: (addons-926744) Calling .GetSSHHostname
	I0603 10:39:53.020339   15688 main.go:141] libmachine: (addons-926744) DBG | domain addons-926744 has defined MAC address 52:54:00:ef:0f:40 in network mk-addons-926744
	I0603 10:39:53.020725   15688 main.go:141] libmachine: (addons-926744) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ef:0f:40", ip: ""} in network mk-addons-926744: {Iface:virbr1 ExpiryTime:2024-06-03 11:39:15 +0000 UTC Type:0 Mac:52:54:00:ef:0f:40 Iaid: IPaddr:192.168.39.188 Prefix:24 Hostname:addons-926744 Clientid:01:52:54:00:ef:0f:40}
	I0603 10:39:53.021026   15688 main.go:141] libmachine: (addons-926744) DBG | domain addons-926744 has defined IP address 192.168.39.188 and MAC address 52:54:00:ef:0f:40 in network mk-addons-926744
	I0603 10:39:53.020970   15688 main.go:141] libmachine: (addons-926744) Calling .GetSSHPort
	I0603 10:39:53.021205   15688 main.go:141] libmachine: (addons-926744) Calling .GetSSHKeyPath
	I0603 10:39:53.021342   15688 main.go:141] libmachine: (addons-926744) Calling .GetSSHUsername
	I0603 10:39:53.021511   15688 sshutil.go:53] new ssh client: &{IP:192.168.39.188 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19008-7755/.minikube/machines/addons-926744/id_rsa Username:docker}
	I0603 10:39:53.021837   15688 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39647
	I0603 10:39:53.022185   15688 main.go:141] libmachine: () Calling .GetVersion
	I0603 10:39:53.022796   15688 main.go:141] libmachine: Using API Version  1
	I0603 10:39:53.022814   15688 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 10:39:53.023291   15688 main.go:141] libmachine: () Calling .GetMachineName
	I0603 10:39:53.023918   15688 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 10:39:53.023958   15688 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 10:39:53.024205   15688 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42115
	I0603 10:39:53.024652   15688 main.go:141] libmachine: () Calling .GetVersion
	I0603 10:39:53.025198   15688 main.go:141] libmachine: Using API Version  1
	I0603 10:39:53.025217   15688 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 10:39:53.025282   15688 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37603
	I0603 10:39:53.025616   15688 main.go:141] libmachine: () Calling .GetMachineName
	I0603 10:39:53.026246   15688 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 10:39:53.026288   15688 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 10:39:53.030426   15688 main.go:141] libmachine: () Calling .GetVersion
	I0603 10:39:53.031073   15688 main.go:141] libmachine: Using API Version  1
	I0603 10:39:53.031102   15688 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 10:39:53.032416   15688 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34591
	I0603 10:39:53.032850   15688 main.go:141] libmachine: () Calling .GetVersion
	I0603 10:39:53.032924   15688 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41325
	I0603 10:39:53.033393   15688 main.go:141] libmachine: Using API Version  1
	I0603 10:39:53.033410   15688 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 10:39:53.033522   15688 main.go:141] libmachine: () Calling .GetVersion
	I0603 10:39:53.033875   15688 main.go:141] libmachine: () Calling .GetMachineName
	I0603 10:39:53.033994   15688 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40065
	I0603 10:39:53.034019   15688 main.go:141] libmachine: () Calling .GetMachineName
	I0603 10:39:53.034069   15688 main.go:141] libmachine: (addons-926744) Calling .GetState
	I0603 10:39:53.034614   15688 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 10:39:53.034654   15688 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 10:39:53.034861   15688 main.go:141] libmachine: () Calling .GetVersion
	I0603 10:39:53.035021   15688 main.go:141] libmachine: Using API Version  1
	I0603 10:39:53.035056   15688 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 10:39:53.035444   15688 main.go:141] libmachine: Using API Version  1
	I0603 10:39:53.035464   15688 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 10:39:53.035652   15688 main.go:141] libmachine: () Calling .GetMachineName
	I0603 10:39:53.036124   15688 main.go:141] libmachine: () Calling .GetMachineName
	I0603 10:39:53.036166   15688 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 10:39:53.036198   15688 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 10:39:53.037213   15688 main.go:141] libmachine: (addons-926744) Calling .DriverName
	I0603 10:39:53.039232   15688 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0603 10:39:53.038052   15688 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 10:39:53.038082   15688 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40293
	I0603 10:39:53.039729   15688 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41565
	I0603 10:39:53.041948   15688 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.10.1
	I0603 10:39:53.040652   15688 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 10:39:53.040000   15688 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36691
	I0603 10:39:53.039794   15688 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35701
	I0603 10:39:53.041067   15688 main.go:141] libmachine: () Calling .GetVersion
	I0603 10:39:53.044452   15688 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0603 10:39:53.045836   15688 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0603 10:39:53.045853   15688 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0603 10:39:53.045870   15688 main.go:141] libmachine: (addons-926744) Calling .GetSSHHostname
	I0603 10:39:53.043775   15688 main.go:141] libmachine: () Calling .GetVersion
	I0603 10:39:53.043846   15688 main.go:141] libmachine: () Calling .GetVersion
	I0603 10:39:53.044738   15688 main.go:141] libmachine: Using API Version  1
	I0603 10:39:53.045984   15688 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 10:39:53.046671   15688 main.go:141] libmachine: Using API Version  1
	I0603 10:39:53.046689   15688 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 10:39:53.047093   15688 main.go:141] libmachine: () Calling .GetMachineName
	I0603 10:39:53.047637   15688 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 10:39:53.047673   15688 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 10:39:53.047950   15688 main.go:141] libmachine: Using API Version  1
	I0603 10:39:53.047964   15688 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 10:39:53.048026   15688 main.go:141] libmachine: () Calling .GetVersion
	I0603 10:39:53.048091   15688 main.go:141] libmachine: () Calling .GetMachineName
	I0603 10:39:53.048530   15688 main.go:141] libmachine: Using API Version  1
	I0603 10:39:53.048546   15688 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 10:39:53.048607   15688 main.go:141] libmachine: (addons-926744) Calling .GetState
	I0603 10:39:53.048645   15688 main.go:141] libmachine: () Calling .GetMachineName
	I0603 10:39:53.048825   15688 main.go:141] libmachine: () Calling .GetMachineName
	I0603 10:39:53.049308   15688 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 10:39:53.049350   15688 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 10:39:53.049553   15688 main.go:141] libmachine: (addons-926744) DBG | domain addons-926744 has defined MAC address 52:54:00:ef:0f:40 in network mk-addons-926744
	I0603 10:39:53.049575   15688 main.go:141] libmachine: (addons-926744) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ef:0f:40", ip: ""} in network mk-addons-926744: {Iface:virbr1 ExpiryTime:2024-06-03 11:39:15 +0000 UTC Type:0 Mac:52:54:00:ef:0f:40 Iaid: IPaddr:192.168.39.188 Prefix:24 Hostname:addons-926744 Clientid:01:52:54:00:ef:0f:40}
	I0603 10:39:53.049601   15688 main.go:141] libmachine: (addons-926744) DBG | domain addons-926744 has defined IP address 192.168.39.188 and MAC address 52:54:00:ef:0f:40 in network mk-addons-926744
	I0603 10:39:53.049991   15688 main.go:141] libmachine: (addons-926744) Calling .GetSSHPort
	I0603 10:39:53.050169   15688 main.go:141] libmachine: (addons-926744) Calling .GetSSHKeyPath
	I0603 10:39:53.050356   15688 main.go:141] libmachine: (addons-926744) Calling .GetSSHUsername
	I0603 10:39:53.050564   15688 sshutil.go:53] new ssh client: &{IP:192.168.39.188 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19008-7755/.minikube/machines/addons-926744/id_rsa Username:docker}
	I0603 10:39:53.051172   15688 main.go:141] libmachine: (addons-926744) Calling .DriverName
	I0603 10:39:53.052852   15688 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.28.1
	I0603 10:39:53.053974   15688 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0603 10:39:53.053990   15688 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0603 10:39:53.054008   15688 main.go:141] libmachine: (addons-926744) Calling .GetSSHHostname
	I0603 10:39:53.052746   15688 main.go:141] libmachine: (addons-926744) Calling .GetState
	I0603 10:39:53.052965   15688 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32875
	I0603 10:39:53.055290   15688 main.go:141] libmachine: () Calling .GetVersion
	I0603 10:39:53.055792   15688 main.go:141] libmachine: Using API Version  1
	I0603 10:39:53.055807   15688 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 10:39:53.056165   15688 main.go:141] libmachine: () Calling .GetMachineName
	I0603 10:39:53.056334   15688 main.go:141] libmachine: (addons-926744) Calling .GetState
	I0603 10:39:53.056927   15688 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41579
	I0603 10:39:53.057072   15688 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33749
	I0603 10:39:53.057362   15688 host.go:66] Checking if "addons-926744" exists ...
	I0603 10:39:53.057724   15688 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 10:39:53.057756   15688 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 10:39:53.057844   15688 main.go:141] libmachine: (addons-926744) DBG | domain addons-926744 has defined MAC address 52:54:00:ef:0f:40 in network mk-addons-926744
	I0603 10:39:53.058240   15688 main.go:141] libmachine: () Calling .GetVersion
	I0603 10:39:53.058696   15688 main.go:141] libmachine: (addons-926744) Calling .DriverName
	I0603 10:39:53.058845   15688 main.go:141] libmachine: Using API Version  1
	I0603 10:39:53.058856   15688 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 10:39:53.058910   15688 main.go:141] libmachine: (addons-926744) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ef:0f:40", ip: ""} in network mk-addons-926744: {Iface:virbr1 ExpiryTime:2024-06-03 11:39:15 +0000 UTC Type:0 Mac:52:54:00:ef:0f:40 Iaid: IPaddr:192.168.39.188 Prefix:24 Hostname:addons-926744 Clientid:01:52:54:00:ef:0f:40}
	I0603 10:39:53.058927   15688 main.go:141] libmachine: (addons-926744) DBG | domain addons-926744 has defined IP address 192.168.39.188 and MAC address 52:54:00:ef:0f:40 in network mk-addons-926744
	I0603 10:39:53.060316   15688 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.15.0
	I0603 10:39:53.059222   15688 main.go:141] libmachine: (addons-926744) Calling .GetSSHPort
	I0603 10:39:53.059270   15688 main.go:141] libmachine: () Calling .GetVersion
	I0603 10:39:53.059313   15688 main.go:141] libmachine: () Calling .GetMachineName
	I0603 10:39:53.062075   15688 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0603 10:39:53.062093   15688 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0603 10:39:53.062110   15688 main.go:141] libmachine: (addons-926744) Calling .GetSSHHostname
	I0603 10:39:53.062161   15688 main.go:141] libmachine: (addons-926744) Calling .GetSSHKeyPath
	I0603 10:39:53.062326   15688 main.go:141] libmachine: (addons-926744) Calling .GetSSHUsername
	I0603 10:39:53.062455   15688 main.go:141] libmachine: Using API Version  1
	I0603 10:39:53.062469   15688 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 10:39:53.062517   15688 main.go:141] libmachine: (addons-926744) Calling .GetState
	I0603 10:39:53.063125   15688 sshutil.go:53] new ssh client: &{IP:192.168.39.188 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19008-7755/.minikube/machines/addons-926744/id_rsa Username:docker}
	I0603 10:39:53.063692   15688 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45559
	I0603 10:39:53.063936   15688 main.go:141] libmachine: () Calling .GetMachineName
	I0603 10:39:53.064106   15688 main.go:141] libmachine: (addons-926744) Calling .GetState
	I0603 10:39:53.064515   15688 main.go:141] libmachine: (addons-926744) Calling .DriverName
	I0603 10:39:53.064827   15688 main.go:141] libmachine: Making call to close driver server
	I0603 10:39:53.064839   15688 main.go:141] libmachine: (addons-926744) Calling .Close
	I0603 10:39:53.066768   15688 main.go:141] libmachine: (addons-926744) Calling .DriverName
	I0603 10:39:53.066777   15688 main.go:141] libmachine: () Calling .GetVersion
	I0603 10:39:53.066829   15688 main.go:141] libmachine: (addons-926744) DBG | Closing plugin on server side
	I0603 10:39:53.066851   15688 main.go:141] libmachine: Successfully made call to close driver server
	I0603 10:39:53.066861   15688 main.go:141] libmachine: Making call to close connection to plugin binary
	I0603 10:39:53.066876   15688 main.go:141] libmachine: Making call to close driver server
	I0603 10:39:53.066887   15688 main.go:141] libmachine: (addons-926744) Calling .Close
	I0603 10:39:53.067269   15688 main.go:141] libmachine: (addons-926744) DBG | Closing plugin on server side
	I0603 10:39:53.067304   15688 main.go:141] libmachine: Successfully made call to close driver server
	I0603 10:39:53.067313   15688 main.go:141] libmachine: Making call to close connection to plugin binary
	W0603 10:39:53.067403   15688 out.go:239] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I0603 10:39:53.068979   15688 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.1
	I0603 10:39:53.068987   15688 main.go:141] libmachine: (addons-926744) Calling .GetSSHPort
	I0603 10:39:53.070227   15688 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0603 10:39:53.068473   15688 main.go:141] libmachine: Using API Version  1
	I0603 10:39:53.068357   15688 main.go:141] libmachine: (addons-926744) DBG | domain addons-926744 has defined MAC address 52:54:00:ef:0f:40 in network mk-addons-926744
	I0603 10:39:53.070264   15688 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 10:39:53.070291   15688 main.go:141] libmachine: (addons-926744) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ef:0f:40", ip: ""} in network mk-addons-926744: {Iface:virbr1 ExpiryTime:2024-06-03 11:39:15 +0000 UTC Type:0 Mac:52:54:00:ef:0f:40 Iaid: IPaddr:192.168.39.188 Prefix:24 Hostname:addons-926744 Clientid:01:52:54:00:ef:0f:40}
	I0603 10:39:53.069143   15688 main.go:141] libmachine: (addons-926744) Calling .GetSSHKeyPath
	I0603 10:39:53.070329   15688 main.go:141] libmachine: (addons-926744) DBG | domain addons-926744 has defined IP address 192.168.39.188 and MAC address 52:54:00:ef:0f:40 in network mk-addons-926744
	I0603 10:39:53.069875   15688 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40945
	I0603 10:39:53.070245   15688 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0603 10:39:53.070384   15688 main.go:141] libmachine: (addons-926744) Calling .GetSSHHostname
	I0603 10:39:53.070575   15688 main.go:141] libmachine: (addons-926744) Calling .GetSSHUsername
	I0603 10:39:53.070719   15688 sshutil.go:53] new ssh client: &{IP:192.168.39.188 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19008-7755/.minikube/machines/addons-926744/id_rsa Username:docker}
	I0603 10:39:53.071073   15688 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44275
	I0603 10:39:53.071500   15688 main.go:141] libmachine: () Calling .GetVersion
	I0603 10:39:53.072068   15688 main.go:141] libmachine: Using API Version  1
	I0603 10:39:53.072083   15688 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 10:39:53.072462   15688 main.go:141] libmachine: () Calling .GetMachineName
	I0603 10:39:53.072641   15688 main.go:141] libmachine: (addons-926744) Calling .GetState
	I0603 10:39:53.073007   15688 main.go:141] libmachine: () Calling .GetMachineName
	I0603 10:39:53.073589   15688 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 10:39:53.073609   15688 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 10:39:53.073774   15688 main.go:141] libmachine: (addons-926744) DBG | domain addons-926744 has defined MAC address 52:54:00:ef:0f:40 in network mk-addons-926744
	I0603 10:39:53.074030   15688 main.go:141] libmachine: (addons-926744) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ef:0f:40", ip: ""} in network mk-addons-926744: {Iface:virbr1 ExpiryTime:2024-06-03 11:39:15 +0000 UTC Type:0 Mac:52:54:00:ef:0f:40 Iaid: IPaddr:192.168.39.188 Prefix:24 Hostname:addons-926744 Clientid:01:52:54:00:ef:0f:40}
	I0603 10:39:53.074053   15688 main.go:141] libmachine: (addons-926744) DBG | domain addons-926744 has defined IP address 192.168.39.188 and MAC address 52:54:00:ef:0f:40 in network mk-addons-926744
	I0603 10:39:53.074214   15688 main.go:141] libmachine: (addons-926744) Calling .GetSSHPort
	I0603 10:39:53.074373   15688 main.go:141] libmachine: (addons-926744) Calling .GetSSHKeyPath
	I0603 10:39:53.074562   15688 main.go:141] libmachine: (addons-926744) Calling .GetSSHUsername
	I0603 10:39:53.074695   15688 sshutil.go:53] new ssh client: &{IP:192.168.39.188 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19008-7755/.minikube/machines/addons-926744/id_rsa Username:docker}
	I0603 10:39:53.075425   15688 main.go:141] libmachine: (addons-926744) Calling .DriverName
	I0603 10:39:53.075490   15688 main.go:141] libmachine: () Calling .GetVersion
	I0603 10:39:53.077383   15688 out.go:177]   - Using image ghcr.io/helm/tiller:v2.17.0
	I0603 10:39:53.075843   15688 main.go:141] libmachine: Using API Version  1
	I0603 10:39:53.076295   15688 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42731
	I0603 10:39:53.077059   15688 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46549
	I0603 10:39:53.078780   15688 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 10:39:53.078799   15688 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-dp.yaml
	I0603 10:39:53.078814   15688 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-dp.yaml (2422 bytes)
	I0603 10:39:53.078831   15688 main.go:141] libmachine: (addons-926744) Calling .GetSSHHostname
	I0603 10:39:53.078894   15688 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39045
	I0603 10:39:53.079256   15688 main.go:141] libmachine: () Calling .GetVersion
	I0603 10:39:53.079349   15688 main.go:141] libmachine: () Calling .GetVersion
	I0603 10:39:53.079408   15688 main.go:141] libmachine: () Calling .GetVersion
	I0603 10:39:53.079849   15688 main.go:141] libmachine: Using API Version  1
	I0603 10:39:53.079866   15688 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 10:39:53.079987   15688 main.go:141] libmachine: Using API Version  1
	I0603 10:39:53.080001   15688 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 10:39:53.080315   15688 main.go:141] libmachine: () Calling .GetMachineName
	I0603 10:39:53.080502   15688 main.go:141] libmachine: (addons-926744) Calling .GetState
	I0603 10:39:53.080542   15688 main.go:141] libmachine: () Calling .GetMachineName
	I0603 10:39:53.080630   15688 main.go:141] libmachine: Using API Version  1
	I0603 10:39:53.080648   15688 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 10:39:53.080994   15688 main.go:141] libmachine: () Calling .GetMachineName
	I0603 10:39:53.081177   15688 main.go:141] libmachine: (addons-926744) Calling .GetState
	I0603 10:39:53.081518   15688 main.go:141] libmachine: () Calling .GetMachineName
	I0603 10:39:53.081842   15688 main.go:141] libmachine: (addons-926744) Calling .GetState
	I0603 10:39:53.083148   15688 main.go:141] libmachine: (addons-926744) Calling .DriverName
	I0603 10:39:53.085228   15688 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.2
	I0603 10:39:53.084277   15688 main.go:141] libmachine: (addons-926744) Calling .DriverName
	I0603 10:39:53.084619   15688 main.go:141] libmachine: (addons-926744) Calling .DriverName
	I0603 10:39:53.085021   15688 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 10:39:53.085057   15688 main.go:141] libmachine: (addons-926744) DBG | domain addons-926744 has defined MAC address 52:54:00:ef:0f:40 in network mk-addons-926744
	I0603 10:39:53.085742   15688 main.go:141] libmachine: (addons-926744) Calling .GetSSHPort
	I0603 10:39:53.086311   15688 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 10:39:53.086352   15688 main.go:141] libmachine: (addons-926744) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ef:0f:40", ip: ""} in network mk-addons-926744: {Iface:virbr1 ExpiryTime:2024-06-03 11:39:15 +0000 UTC Type:0 Mac:52:54:00:ef:0f:40 Iaid: IPaddr:192.168.39.188 Prefix:24 Hostname:addons-926744 Clientid:01:52:54:00:ef:0f:40}
	I0603 10:39:53.086367   15688 main.go:141] libmachine: (addons-926744) DBG | domain addons-926744 has defined IP address 192.168.39.188 and MAC address 52:54:00:ef:0f:40 in network mk-addons-926744
	I0603 10:39:53.086379   15688 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0603 10:39:53.086393   15688 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0603 10:39:53.086406   15688 main.go:141] libmachine: (addons-926744) Calling .GetSSHHostname
	I0603 10:39:53.086616   15688 main.go:141] libmachine: (addons-926744) Calling .GetSSHKeyPath
	I0603 10:39:53.087918   15688 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.17
	I0603 10:39:53.086954   15688 main.go:141] libmachine: (addons-926744) Calling .GetSSHUsername
	I0603 10:39:53.087116   15688 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45847
	I0603 10:39:53.088326   15688 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37485
	I0603 10:39:53.089157   15688 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0603 10:39:53.089169   15688 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0603 10:39:53.089186   15688 main.go:141] libmachine: (addons-926744) Calling .GetSSHHostname
	I0603 10:39:53.089210   15688 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0603 10:39:53.089364   15688 sshutil.go:53] new ssh client: &{IP:192.168.39.188 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19008-7755/.minikube/machines/addons-926744/id_rsa Username:docker}
	I0603 10:39:53.089479   15688 main.go:141] libmachine: (addons-926744) DBG | domain addons-926744 has defined MAC address 52:54:00:ef:0f:40 in network mk-addons-926744
	I0603 10:39:53.090576   15688 main.go:141] libmachine: (addons-926744) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ef:0f:40", ip: ""} in network mk-addons-926744: {Iface:virbr1 ExpiryTime:2024-06-03 11:39:15 +0000 UTC Type:0 Mac:52:54:00:ef:0f:40 Iaid: IPaddr:192.168.39.188 Prefix:24 Hostname:addons-926744 Clientid:01:52:54:00:ef:0f:40}
	I0603 10:39:53.090606   15688 main.go:141] libmachine: (addons-926744) DBG | domain addons-926744 has defined IP address 192.168.39.188 and MAC address 52:54:00:ef:0f:40 in network mk-addons-926744
	I0603 10:39:53.090233   15688 main.go:141] libmachine: (addons-926744) Calling .GetSSHPort
	I0603 10:39:53.090258   15688 main.go:141] libmachine: () Calling .GetVersion
	I0603 10:39:53.090305   15688 main.go:141] libmachine: () Calling .GetVersion
	I0603 10:39:53.090420   15688 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0603 10:39:53.090737   15688 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0603 10:39:53.090759   15688 main.go:141] libmachine: (addons-926744) Calling .GetSSHHostname
	I0603 10:39:53.090957   15688 main.go:141] libmachine: (addons-926744) Calling .GetSSHKeyPath
	I0603 10:39:53.091174   15688 main.go:141] libmachine: Using API Version  1
	I0603 10:39:53.091193   15688 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 10:39:53.091242   15688 main.go:141] libmachine: (addons-926744) Calling .GetSSHUsername
	I0603 10:39:53.091846   15688 sshutil.go:53] new ssh client: &{IP:192.168.39.188 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19008-7755/.minikube/machines/addons-926744/id_rsa Username:docker}
	I0603 10:39:53.091943   15688 main.go:141] libmachine: Using API Version  1
	I0603 10:39:53.091957   15688 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 10:39:53.092298   15688 main.go:141] libmachine: () Calling .GetMachineName
	I0603 10:39:53.092499   15688 main.go:141] libmachine: (addons-926744) Calling .GetState
	I0603 10:39:53.093097   15688 main.go:141] libmachine: () Calling .GetMachineName
	I0603 10:39:53.093408   15688 main.go:141] libmachine: (addons-926744) Calling .GetState
	I0603 10:39:53.094481   15688 main.go:141] libmachine: (addons-926744) DBG | domain addons-926744 has defined MAC address 52:54:00:ef:0f:40 in network mk-addons-926744
	I0603 10:39:53.094886   15688 main.go:141] libmachine: (addons-926744) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ef:0f:40", ip: ""} in network mk-addons-926744: {Iface:virbr1 ExpiryTime:2024-06-03 11:39:15 +0000 UTC Type:0 Mac:52:54:00:ef:0f:40 Iaid: IPaddr:192.168.39.188 Prefix:24 Hostname:addons-926744 Clientid:01:52:54:00:ef:0f:40}
	I0603 10:39:53.094912   15688 main.go:141] libmachine: (addons-926744) DBG | domain addons-926744 has defined IP address 192.168.39.188 and MAC address 52:54:00:ef:0f:40 in network mk-addons-926744
	I0603 10:39:53.095141   15688 main.go:141] libmachine: (addons-926744) Calling .GetSSHPort
	I0603 10:39:53.095313   15688 main.go:141] libmachine: (addons-926744) Calling .GetSSHKeyPath
	I0603 10:39:53.095455   15688 main.go:141] libmachine: (addons-926744) Calling .DriverName
	I0603 10:39:53.095662   15688 main.go:141] libmachine: (addons-926744) Calling .DriverName
	I0603 10:39:53.095750   15688 main.go:141] libmachine: (addons-926744) Calling .GetSSHUsername
	I0603 10:39:53.097526   15688 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0603 10:39:53.095863   15688 sshutil.go:53] new ssh client: &{IP:192.168.39.188 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19008-7755/.minikube/machines/addons-926744/id_rsa Username:docker}
	I0603 10:39:53.096678   15688 main.go:141] libmachine: (addons-926744) DBG | domain addons-926744 has defined MAC address 52:54:00:ef:0f:40 in network mk-addons-926744
	I0603 10:39:53.096729   15688 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36909
	I0603 10:39:53.097275   15688 main.go:141] libmachine: (addons-926744) Calling .GetSSHPort
	I0603 10:39:53.098697   15688 main.go:141] libmachine: (addons-926744) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ef:0f:40", ip: ""} in network mk-addons-926744: {Iface:virbr1 ExpiryTime:2024-06-03 11:39:15 +0000 UTC Type:0 Mac:52:54:00:ef:0f:40 Iaid: IPaddr:192.168.39.188 Prefix:24 Hostname:addons-926744 Clientid:01:52:54:00:ef:0f:40}
	I0603 10:39:53.099128   15688 main.go:141] libmachine: () Calling .GetVersion
	I0603 10:39:53.099837   15688 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0603 10:39:53.100810   15688 main.go:141] libmachine: (addons-926744) DBG | domain addons-926744 has defined IP address 192.168.39.188 and MAC address 52:54:00:ef:0f:40 in network mk-addons-926744
	I0603 10:39:53.100798   15688 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0603 10:39:53.099979   15688 main.go:141] libmachine: (addons-926744) Calling .GetSSHKeyPath
	I0603 10:39:53.101241   15688 main.go:141] libmachine: Using API Version  1
	I0603 10:39:53.101942   15688 out.go:177]   - Using image docker.io/busybox:stable
	I0603 10:39:53.103239   15688 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0603 10:39:53.103257   15688 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0603 10:39:53.103272   15688 main.go:141] libmachine: (addons-926744) Calling .GetSSHHostname
	I0603 10:39:53.101957   15688 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 10:39:53.102144   15688 main.go:141] libmachine: (addons-926744) Calling .GetSSHUsername
	I0603 10:39:53.103583   15688 main.go:141] libmachine: () Calling .GetMachineName
	I0603 10:39:53.104164   15688 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34221
	I0603 10:39:53.104716   15688 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0603 10:39:53.104837   15688 main.go:141] libmachine: (addons-926744) Calling .GetState
	I0603 10:39:53.104983   15688 sshutil.go:53] new ssh client: &{IP:192.168.39.188 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19008-7755/.minikube/machines/addons-926744/id_rsa Username:docker}
	I0603 10:39:53.105289   15688 main.go:141] libmachine: () Calling .GetVersion
	I0603 10:39:53.106056   15688 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0603 10:39:53.106377   15688 main.go:141] libmachine: (addons-926744) DBG | domain addons-926744 has defined MAC address 52:54:00:ef:0f:40 in network mk-addons-926744
	I0603 10:39:53.106882   15688 main.go:141] libmachine: Using API Version  1
	I0603 10:39:53.107472   15688 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 10:39:53.107514   15688 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0603 10:39:53.107277   15688 main.go:141] libmachine: (addons-926744) Calling .GetSSHPort
	I0603 10:39:53.107582   15688 main.go:141] libmachine: (addons-926744) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ef:0f:40", ip: ""} in network mk-addons-926744: {Iface:virbr1 ExpiryTime:2024-06-03 11:39:15 +0000 UTC Type:0 Mac:52:54:00:ef:0f:40 Iaid: IPaddr:192.168.39.188 Prefix:24 Hostname:addons-926744 Clientid:01:52:54:00:ef:0f:40}
	I0603 10:39:53.108447   15688 main.go:141] libmachine: (addons-926744) Calling .DriverName
	I0603 10:39:53.108562   15688 main.go:141] libmachine: () Calling .GetMachineName
	I0603 10:39:53.109179   15688 main.go:141] libmachine: (addons-926744) DBG | domain addons-926744 has defined IP address 192.168.39.188 and MAC address 52:54:00:ef:0f:40 in network mk-addons-926744
	I0603 10:39:53.109186   15688 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0603 10:39:53.109373   15688 main.go:141] libmachine: (addons-926744) Calling .GetSSHKeyPath
	I0603 10:39:53.109469   15688 main.go:141] libmachine: (addons-926744) Calling .GetState
	I0603 10:39:53.110486   15688 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0603 10:39:53.110647   15688 main.go:141] libmachine: (addons-926744) Calling .GetSSHUsername
	I0603 10:39:53.111765   15688 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.4
	I0603 10:39:53.113194   15688 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0603 10:39:53.113212   15688 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0603 10:39:53.113229   15688 main.go:141] libmachine: (addons-926744) Calling .GetSSHHostname
	I0603 10:39:53.111944   15688 sshutil.go:53] new ssh client: &{IP:192.168.39.188 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19008-7755/.minikube/machines/addons-926744/id_rsa Username:docker}
	I0603 10:39:53.113354   15688 main.go:141] libmachine: (addons-926744) Calling .DriverName
	I0603 10:39:53.113365   15688 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37585
	I0603 10:39:53.113786   15688 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33145
	I0603 10:39:53.115002   15688 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0603 10:39:53.116620   15688 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0603 10:39:53.116636   15688 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0603 10:39:53.116653   15688 main.go:141] libmachine: (addons-926744) Calling .GetSSHHostname
	I0603 10:39:53.117939   15688 out.go:177]   - Using image docker.io/registry:2.8.3
	I0603 10:39:53.116005   15688 main.go:141] libmachine: () Calling .GetVersion
	I0603 10:39:53.116055   15688 main.go:141] libmachine: () Calling .GetVersion
	I0603 10:39:53.116535   15688 main.go:141] libmachine: (addons-926744) DBG | domain addons-926744 has defined MAC address 52:54:00:ef:0f:40 in network mk-addons-926744
	I0603 10:39:53.117073   15688 main.go:141] libmachine: (addons-926744) Calling .GetSSHPort
	I0603 10:39:53.120316   15688 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0603 10:39:53.119253   15688 main.go:141] libmachine: (addons-926744) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ef:0f:40", ip: ""} in network mk-addons-926744: {Iface:virbr1 ExpiryTime:2024-06-03 11:39:15 +0000 UTC Type:0 Mac:52:54:00:ef:0f:40 Iaid: IPaddr:192.168.39.188 Prefix:24 Hostname:addons-926744 Clientid:01:52:54:00:ef:0f:40}
	I0603 10:39:53.119363   15688 main.go:141] libmachine: (addons-926744) Calling .GetSSHKeyPath
	I0603 10:39:53.119726   15688 main.go:141] libmachine: (addons-926744) DBG | domain addons-926744 has defined MAC address 52:54:00:ef:0f:40 in network mk-addons-926744
	I0603 10:39:53.119749   15688 main.go:141] libmachine: Using API Version  1
	I0603 10:39:53.120214   15688 main.go:141] libmachine: Using API Version  1
	I0603 10:39:53.120468   15688 main.go:141] libmachine: (addons-926744) Calling .GetSSHPort
	I0603 10:39:53.121506   15688 main.go:141] libmachine: (addons-926744) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ef:0f:40", ip: ""} in network mk-addons-926744: {Iface:virbr1 ExpiryTime:2024-06-03 11:39:15 +0000 UTC Type:0 Mac:52:54:00:ef:0f:40 Iaid: IPaddr:192.168.39.188 Prefix:24 Hostname:addons-926744 Clientid:01:52:54:00:ef:0f:40}
	I0603 10:39:53.121527   15688 main.go:141] libmachine: (addons-926744) DBG | domain addons-926744 has defined IP address 192.168.39.188 and MAC address 52:54:00:ef:0f:40 in network mk-addons-926744
	I0603 10:39:53.121534   15688 main.go:141] libmachine: (addons-926744) DBG | domain addons-926744 has defined IP address 192.168.39.188 and MAC address 52:54:00:ef:0f:40 in network mk-addons-926744
	I0603 10:39:53.121544   15688 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 10:39:53.121549   15688 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0603 10:39:53.121560   15688 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 10:39:53.121563   15688 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (798 bytes)
	I0603 10:39:53.121576   15688 main.go:141] libmachine: (addons-926744) Calling .GetSSHHostname
	I0603 10:39:53.121608   15688 main.go:141] libmachine: (addons-926744) Calling .GetSSHKeyPath
	I0603 10:39:53.121715   15688 main.go:141] libmachine: (addons-926744) Calling .GetSSHUsername
	I0603 10:39:53.121788   15688 main.go:141] libmachine: (addons-926744) Calling .GetSSHUsername
	I0603 10:39:53.121831   15688 sshutil.go:53] new ssh client: &{IP:192.168.39.188 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19008-7755/.minikube/machines/addons-926744/id_rsa Username:docker}
	I0603 10:39:53.121851   15688 main.go:141] libmachine: () Calling .GetMachineName
	I0603 10:39:53.121954   15688 sshutil.go:53] new ssh client: &{IP:192.168.39.188 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19008-7755/.minikube/machines/addons-926744/id_rsa Username:docker}
	I0603 10:39:53.122181   15688 main.go:141] libmachine: () Calling .GetMachineName
	I0603 10:39:53.122198   15688 main.go:141] libmachine: (addons-926744) Calling .GetState
	I0603 10:39:53.122330   15688 main.go:141] libmachine: (addons-926744) Calling .DriverName
	W0603 10:39:53.123247   15688 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:34510->192.168.39.188:22: read: connection reset by peer
	I0603 10:39:53.123275   15688 retry.go:31] will retry after 321.308159ms: ssh: handshake failed: read tcp 192.168.39.1:34510->192.168.39.188:22: read: connection reset by peer
	I0603 10:39:53.123863   15688 main.go:141] libmachine: (addons-926744) Calling .DriverName
	I0603 10:39:53.124154   15688 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0603 10:39:53.124167   15688 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0603 10:39:53.124414   15688 main.go:141] libmachine: (addons-926744) Calling .GetSSHHostname
	I0603 10:39:53.125074   15688 main.go:141] libmachine: (addons-926744) DBG | domain addons-926744 has defined MAC address 52:54:00:ef:0f:40 in network mk-addons-926744
	I0603 10:39:53.125433   15688 main.go:141] libmachine: (addons-926744) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ef:0f:40", ip: ""} in network mk-addons-926744: {Iface:virbr1 ExpiryTime:2024-06-03 11:39:15 +0000 UTC Type:0 Mac:52:54:00:ef:0f:40 Iaid: IPaddr:192.168.39.188 Prefix:24 Hostname:addons-926744 Clientid:01:52:54:00:ef:0f:40}
	I0603 10:39:53.125453   15688 main.go:141] libmachine: (addons-926744) DBG | domain addons-926744 has defined IP address 192.168.39.188 and MAC address 52:54:00:ef:0f:40 in network mk-addons-926744
	I0603 10:39:53.125583   15688 main.go:141] libmachine: (addons-926744) Calling .GetSSHPort
	I0603 10:39:53.125767   15688 main.go:141] libmachine: (addons-926744) Calling .GetSSHKeyPath
	I0603 10:39:53.125932   15688 main.go:141] libmachine: (addons-926744) Calling .GetSSHUsername
	I0603 10:39:53.126108   15688 sshutil.go:53] new ssh client: &{IP:192.168.39.188 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19008-7755/.minikube/machines/addons-926744/id_rsa Username:docker}
	I0603 10:39:53.126814   15688 main.go:141] libmachine: (addons-926744) DBG | domain addons-926744 has defined MAC address 52:54:00:ef:0f:40 in network mk-addons-926744
	I0603 10:39:53.127151   15688 main.go:141] libmachine: (addons-926744) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ef:0f:40", ip: ""} in network mk-addons-926744: {Iface:virbr1 ExpiryTime:2024-06-03 11:39:15 +0000 UTC Type:0 Mac:52:54:00:ef:0f:40 Iaid: IPaddr:192.168.39.188 Prefix:24 Hostname:addons-926744 Clientid:01:52:54:00:ef:0f:40}
	I0603 10:39:53.127177   15688 main.go:141] libmachine: (addons-926744) DBG | domain addons-926744 has defined IP address 192.168.39.188 and MAC address 52:54:00:ef:0f:40 in network mk-addons-926744
	I0603 10:39:53.127248   15688 main.go:141] libmachine: (addons-926744) Calling .GetSSHPort
	I0603 10:39:53.127421   15688 main.go:141] libmachine: (addons-926744) Calling .GetSSHKeyPath
	I0603 10:39:53.127576   15688 main.go:141] libmachine: (addons-926744) Calling .GetSSHUsername
	I0603 10:39:53.127707   15688 sshutil.go:53] new ssh client: &{IP:192.168.39.188 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19008-7755/.minikube/machines/addons-926744/id_rsa Username:docker}
	W0603 10:39:53.128663   15688 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:34524->192.168.39.188:22: read: connection reset by peer
	I0603 10:39:53.128685   15688 retry.go:31] will retry after 217.648399ms: ssh: handshake failed: read tcp 192.168.39.1:34524->192.168.39.188:22: read: connection reset by peer
	W0603 10:39:53.128736   15688 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:34536->192.168.39.188:22: read: connection reset by peer
	I0603 10:39:53.128756   15688 retry.go:31] will retry after 284.924422ms: ssh: handshake failed: read tcp 192.168.39.1:34536->192.168.39.188:22: read: connection reset by peer
	I0603 10:39:53.355149   15688 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0603 10:39:53.391925   15688 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0603 10:39:53.391944   15688 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
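The bash pipeline in the line above is minikube's CoreDNS patch: it dumps the coredns ConfigMap, uses sed to splice a hosts block in front of the forward plugin and a log directive in front of errors, then pushes the result back with kubectl replace. Reconstructed purely from those sed expressions (a sketch, not output captured from this cluster), the patched Corefile should end up containing roughly the fragment shown in the comments below.

    # inspect the patched Corefile the same way the test harness would
    sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl \
      -n kube-system get configmap coredns -o jsonpath='{.data.Corefile}'
    # expected fragment (sketch reconstructed from the sed expressions above):
    #     log
    #     errors
    #     ...
    #     hosts {
    #        192.168.39.1 host.minikube.internal
    #        fallthrough
    #     }
    #     forward . /etc/resolv.conf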
	I0603 10:39:53.459021   15688 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0603 10:39:53.459069   15688 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0603 10:39:53.484893   15688 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0603 10:39:53.512706   15688 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0603 10:39:53.532991   15688 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0603 10:39:53.533011   15688 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0603 10:39:53.560989   15688 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0603 10:39:53.596754   15688 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0603 10:39:53.605998   15688 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0603 10:39:53.606026   15688 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0603 10:39:53.607123   15688 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0603 10:39:53.607145   15688 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0603 10:39:53.621083   15688 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I0603 10:39:53.621099   15688 ssh_runner.go:362] scp helm-tiller/helm-tiller-rbac.yaml --> /etc/kubernetes/addons/helm-tiller-rbac.yaml (1188 bytes)
	I0603 10:39:53.647250   15688 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0603 10:39:53.720437   15688 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0603 10:39:53.720460   15688 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0603 10:39:53.807224   15688 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0603 10:39:53.807250   15688 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0603 10:39:53.809821   15688 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0603 10:39:53.809842   15688 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0603 10:39:53.835979   15688 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0603 10:39:53.836004   15688 ssh_runner.go:362] scp helm-tiller/helm-tiller-svc.yaml --> /etc/kubernetes/addons/helm-tiller-svc.yaml (951 bytes)
	I0603 10:39:53.838441   15688 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0603 10:39:53.838458   15688 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0603 10:39:53.866695   15688 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0603 10:39:53.866724   15688 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0603 10:39:53.966602   15688 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0603 10:39:53.966623   15688 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0603 10:39:53.986392   15688 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0603 10:39:53.986411   15688 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0603 10:39:54.015572   15688 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0603 10:39:54.056850   15688 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0603 10:39:54.056877   15688 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0603 10:39:54.089130   15688 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0603 10:39:54.095056   15688 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0603 10:39:54.095081   15688 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0603 10:39:54.107897   15688 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0603 10:39:54.107918   15688 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0603 10:39:54.117686   15688 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0603 10:39:54.117706   15688 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0603 10:39:54.154954   15688 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0603 10:39:54.154980   15688 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0603 10:39:54.180072   15688 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0603 10:39:54.349928   15688 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0603 10:39:54.349960   15688 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0603 10:39:54.352758   15688 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0603 10:39:54.352777   15688 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0603 10:39:54.403949   15688 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0603 10:39:54.403973   15688 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0603 10:39:54.406064   15688 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0603 10:39:54.406084   15688 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0603 10:39:54.412621   15688 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0603 10:39:54.559915   15688 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0603 10:39:54.559944   15688 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0603 10:39:54.577552   15688 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0603 10:39:54.637612   15688 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0603 10:39:54.637638   15688 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0603 10:39:54.736770   15688 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0603 10:39:54.736795   15688 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0603 10:39:54.747427   15688 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0603 10:39:54.747442   15688 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0603 10:39:54.802922   15688 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0603 10:39:54.802944   15688 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0603 10:39:54.952521   15688 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0603 10:39:54.952549   15688 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0603 10:39:55.012578   15688 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0603 10:39:55.012602   15688 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0603 10:39:55.201048   15688 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0603 10:39:55.213440   15688 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0603 10:39:55.213460   15688 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0603 10:39:55.215104   15688 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0603 10:39:55.215125   15688 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0603 10:39:55.482880   15688 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0603 10:39:55.558646   15688 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0603 10:39:55.558680   15688 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0603 10:39:55.848622   15688 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0603 10:39:55.848651   15688 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0603 10:39:56.205830   15688 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0603 10:40:00.183700   15688 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0603 10:40:00.183741   15688 main.go:141] libmachine: (addons-926744) Calling .GetSSHHostname
	I0603 10:40:00.187707   15688 main.go:141] libmachine: (addons-926744) DBG | domain addons-926744 has defined MAC address 52:54:00:ef:0f:40 in network mk-addons-926744
	I0603 10:40:00.188167   15688 main.go:141] libmachine: (addons-926744) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ef:0f:40", ip: ""} in network mk-addons-926744: {Iface:virbr1 ExpiryTime:2024-06-03 11:39:15 +0000 UTC Type:0 Mac:52:54:00:ef:0f:40 Iaid: IPaddr:192.168.39.188 Prefix:24 Hostname:addons-926744 Clientid:01:52:54:00:ef:0f:40}
	I0603 10:40:00.188198   15688 main.go:141] libmachine: (addons-926744) DBG | domain addons-926744 has defined IP address 192.168.39.188 and MAC address 52:54:00:ef:0f:40 in network mk-addons-926744
	I0603 10:40:00.188458   15688 main.go:141] libmachine: (addons-926744) Calling .GetSSHPort
	I0603 10:40:00.188691   15688 main.go:141] libmachine: (addons-926744) Calling .GetSSHKeyPath
	I0603 10:40:00.188879   15688 main.go:141] libmachine: (addons-926744) Calling .GetSSHUsername
	I0603 10:40:00.189074   15688 sshutil.go:53] new ssh client: &{IP:192.168.39.188 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19008-7755/.minikube/machines/addons-926744/id_rsa Username:docker}
	I0603 10:40:00.496234   15688 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0603 10:40:00.607236   15688 addons.go:234] Setting addon gcp-auth=true in "addons-926744"
	I0603 10:40:00.607284   15688 host.go:66] Checking if "addons-926744" exists ...
	I0603 10:40:00.607584   15688 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 10:40:00.607614   15688 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 10:40:00.622715   15688 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35839
	I0603 10:40:00.623087   15688 main.go:141] libmachine: () Calling .GetVersion
	I0603 10:40:00.623553   15688 main.go:141] libmachine: Using API Version  1
	I0603 10:40:00.623575   15688 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 10:40:00.623887   15688 main.go:141] libmachine: () Calling .GetMachineName
	I0603 10:40:00.624477   15688 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 10:40:00.624534   15688 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 10:40:00.638893   15688 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38783
	I0603 10:40:00.639307   15688 main.go:141] libmachine: () Calling .GetVersion
	I0603 10:40:00.639775   15688 main.go:141] libmachine: Using API Version  1
	I0603 10:40:00.639799   15688 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 10:40:00.640135   15688 main.go:141] libmachine: () Calling .GetMachineName
	I0603 10:40:00.640361   15688 main.go:141] libmachine: (addons-926744) Calling .GetState
	I0603 10:40:00.642093   15688 main.go:141] libmachine: (addons-926744) Calling .DriverName
	I0603 10:40:00.642293   15688 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0603 10:40:00.642313   15688 main.go:141] libmachine: (addons-926744) Calling .GetSSHHostname
	I0603 10:40:00.644946   15688 main.go:141] libmachine: (addons-926744) DBG | domain addons-926744 has defined MAC address 52:54:00:ef:0f:40 in network mk-addons-926744
	I0603 10:40:00.645338   15688 main.go:141] libmachine: (addons-926744) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ef:0f:40", ip: ""} in network mk-addons-926744: {Iface:virbr1 ExpiryTime:2024-06-03 11:39:15 +0000 UTC Type:0 Mac:52:54:00:ef:0f:40 Iaid: IPaddr:192.168.39.188 Prefix:24 Hostname:addons-926744 Clientid:01:52:54:00:ef:0f:40}
	I0603 10:40:00.645377   15688 main.go:141] libmachine: (addons-926744) DBG | domain addons-926744 has defined IP address 192.168.39.188 and MAC address 52:54:00:ef:0f:40 in network mk-addons-926744
	I0603 10:40:00.645556   15688 main.go:141] libmachine: (addons-926744) Calling .GetSSHPort
	I0603 10:40:00.645735   15688 main.go:141] libmachine: (addons-926744) Calling .GetSSHKeyPath
	I0603 10:40:00.645919   15688 main.go:141] libmachine: (addons-926744) Calling .GetSSHUsername
	I0603 10:40:00.646060   15688 sshutil.go:53] new ssh client: &{IP:192.168.39.188 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19008-7755/.minikube/machines/addons-926744/id_rsa Username:docker}
	I0603 10:40:01.615832   15688 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (8.260649484s)
	I0603 10:40:01.615878   15688 main.go:141] libmachine: Making call to close driver server
	I0603 10:40:01.615892   15688 main.go:141] libmachine: (addons-926744) Calling .Close
	I0603 10:40:01.615930   15688 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (8.223951245s)
	I0603 10:40:01.615986   15688 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (8.131063292s)
	I0603 10:40:01.616002   15688 start.go:946] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I0603 10:40:01.616021   15688 main.go:141] libmachine: Making call to close driver server
	I0603 10:40:01.616033   15688 main.go:141] libmachine: (addons-926744) Calling .Close
	I0603 10:40:01.616043   15688 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (8.103309213s)
	I0603 10:40:01.615945   15688 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (8.223983684s)
	I0603 10:40:01.616137   15688 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (8.055128439s)
	I0603 10:40:01.616162   15688 main.go:141] libmachine: Making call to close driver server
	I0603 10:40:01.616173   15688 main.go:141] libmachine: (addons-926744) Calling .Close
	I0603 10:40:01.616197   15688 main.go:141] libmachine: (addons-926744) DBG | Closing plugin on server side
	I0603 10:40:01.616245   15688 main.go:141] libmachine: Successfully made call to close driver server
	I0603 10:40:01.616250   15688 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (8.019471736s)
	I0603 10:40:01.616254   15688 main.go:141] libmachine: Making call to close connection to plugin binary
	I0603 10:40:01.616262   15688 main.go:141] libmachine: Making call to close driver server
	I0603 10:40:01.616266   15688 main.go:141] libmachine: Making call to close driver server
	I0603 10:40:01.616269   15688 main.go:141] libmachine: (addons-926744) Calling .Close
	I0603 10:40:01.616275   15688 main.go:141] libmachine: (addons-926744) Calling .Close
	I0603 10:40:01.616330   15688 main.go:141] libmachine: Successfully made call to close driver server
	I0603 10:40:01.616341   15688 main.go:141] libmachine: Making call to close connection to plugin binary
	I0603 10:40:01.616344   15688 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (7.969061716s)
	I0603 10:40:01.616377   15688 main.go:141] libmachine: Making call to close driver server
	I0603 10:40:01.616391   15688 main.go:141] libmachine: (addons-926744) Calling .Close
	I0603 10:40:01.616351   15688 main.go:141] libmachine: Making call to close driver server
	I0603 10:40:01.616437   15688 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (7.600842817s)
	I0603 10:40:01.616446   15688 main.go:141] libmachine: (addons-926744) Calling .Close
	I0603 10:40:01.616457   15688 main.go:141] libmachine: Making call to close driver server
	I0603 10:40:01.616467   15688 main.go:141] libmachine: (addons-926744) Calling .Close
	I0603 10:40:01.616501   15688 main.go:141] libmachine: Successfully made call to close driver server
	I0603 10:40:01.616509   15688 main.go:141] libmachine: Making call to close connection to plugin binary
	I0603 10:40:01.616517   15688 main.go:141] libmachine: Making call to close driver server
	I0603 10:40:01.616524   15688 main.go:141] libmachine: (addons-926744) Calling .Close
	I0603 10:40:01.616535   15688 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml: (7.527380307s)
	I0603 10:40:01.616551   15688 main.go:141] libmachine: Making call to close driver server
	I0603 10:40:01.616560   15688 main.go:141] libmachine: (addons-926744) Calling .Close
	I0603 10:40:01.616627   15688 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (7.436530108s)
	I0603 10:40:01.616641   15688 main.go:141] libmachine: Making call to close driver server
	I0603 10:40:01.616650   15688 main.go:141] libmachine: (addons-926744) Calling .Close
	I0603 10:40:01.616734   15688 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (7.204085993s)
	I0603 10:40:01.616748   15688 main.go:141] libmachine: Making call to close driver server
	I0603 10:40:01.616757   15688 main.go:141] libmachine: (addons-926744) Calling .Close
	I0603 10:40:01.616868   15688 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (7.039282818s)
	W0603 10:40:01.616888   15688 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0603 10:40:01.616082   15688 main.go:141] libmachine: Making call to close driver server
	I0603 10:40:01.616933   15688 main.go:141] libmachine: (addons-926744) Calling .Close
	I0603 10:40:01.616950   15688 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (6.415875301s)
	I0603 10:40:01.616965   15688 main.go:141] libmachine: Making call to close driver server
	I0603 10:40:01.616972   15688 main.go:141] libmachine: (addons-926744) Calling .Close
	I0603 10:40:01.616987   15688 main.go:141] libmachine: (addons-926744) DBG | Closing plugin on server side
	I0603 10:40:01.616909   15688 retry.go:31] will retry after 294.17749ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
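The failure above is a CRD ordering race: csi-hostpath-snapshotclass.yaml defines a VolumeSnapshotClass, but it is applied in the same kubectl invocation that creates the snapshot.storage.k8s.io CRDs, and the API server has not finished registering the new kinds, hence "no matches for kind VolumeSnapshotClass". minikube handles this by retrying the whole apply, as the retry.go line above shows. A manual equivalent, sketched here under the assumption of direct access to the same addon files on the node and not taken from the log, is to apply the CRDs first, wait for them to be established, and only then apply the snapshot class.

    # 1. create the CRDs on their own
    sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply \
      -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml \
      -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml \
      -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
    # 2. wait until the API server has registered the new kinds
    sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl wait \
      --for=condition=established --timeout=60s crd/volumesnapshotclasses.snapshot.storage.k8s.io
    # 3. the VolumeSnapshotClass now applies without the "no matches for kind" error
    sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply \
      -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml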
	I0603 10:40:01.617013   15688 main.go:141] libmachine: (addons-926744) DBG | Closing plugin on server side
	I0603 10:40:01.617007   15688 node_ready.go:35] waiting up to 6m0s for node "addons-926744" to be "Ready" ...
	I0603 10:40:01.617034   15688 main.go:141] libmachine: Successfully made call to close driver server
	I0603 10:40:01.617041   15688 main.go:141] libmachine: Making call to close connection to plugin binary
	I0603 10:40:01.617048   15688 main.go:141] libmachine: Making call to close driver server
	I0603 10:40:01.617055   15688 main.go:141] libmachine: (addons-926744) Calling .Close
	I0603 10:40:01.617060   15688 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (6.134137683s)
	I0603 10:40:01.617073   15688 main.go:141] libmachine: Making call to close driver server
	I0603 10:40:01.617080   15688 main.go:141] libmachine: (addons-926744) Calling .Close
	I0603 10:40:01.617094   15688 main.go:141] libmachine: (addons-926744) DBG | Closing plugin on server side
	I0603 10:40:01.617111   15688 main.go:141] libmachine: Successfully made call to close driver server
	I0603 10:40:01.617118   15688 main.go:141] libmachine: Making call to close connection to plugin binary
	I0603 10:40:01.617126   15688 main.go:141] libmachine: Making call to close driver server
	I0603 10:40:01.617135   15688 main.go:141] libmachine: (addons-926744) Calling .Close
	I0603 10:40:01.617201   15688 main.go:141] libmachine: Successfully made call to close driver server
	I0603 10:40:01.617214   15688 main.go:141] libmachine: Making call to close connection to plugin binary
	I0603 10:40:01.617229   15688 addons.go:475] Verifying addon ingress=true in "addons-926744"
	I0603 10:40:01.621964   15688 out.go:177] * Verifying ingress addon...
	I0603 10:40:01.619967   15688 main.go:141] libmachine: (addons-926744) DBG | Closing plugin on server side
	I0603 10:40:01.619992   15688 main.go:141] libmachine: Successfully made call to close driver server
	I0603 10:40:01.620031   15688 main.go:141] libmachine: (addons-926744) DBG | Closing plugin on server side
	I0603 10:40:01.620048   15688 main.go:141] libmachine: Successfully made call to close driver server
	I0603 10:40:01.620071   15688 main.go:141] libmachine: (addons-926744) DBG | Closing plugin on server side
	I0603 10:40:01.620087   15688 main.go:141] libmachine: Successfully made call to close driver server
	I0603 10:40:01.620102   15688 main.go:141] libmachine: (addons-926744) DBG | Closing plugin on server side
	I0603 10:40:01.620118   15688 main.go:141] libmachine: Successfully made call to close driver server
	I0603 10:40:01.620132   15688 main.go:141] libmachine: (addons-926744) DBG | Closing plugin on server side
	I0603 10:40:01.620149   15688 main.go:141] libmachine: Successfully made call to close driver server
	I0603 10:40:01.620162   15688 main.go:141] libmachine: (addons-926744) DBG | Closing plugin on server side
	I0603 10:40:01.620174   15688 main.go:141] libmachine: (addons-926744) DBG | Closing plugin on server side
	I0603 10:40:01.620190   15688 main.go:141] libmachine: Successfully made call to close driver server
	I0603 10:40:01.620209   15688 main.go:141] libmachine: Successfully made call to close driver server
	I0603 10:40:01.620225   15688 main.go:141] libmachine: (addons-926744) DBG | Closing plugin on server side
	I0603 10:40:01.620238   15688 main.go:141] libmachine: (addons-926744) DBG | Closing plugin on server side
	I0603 10:40:01.620254   15688 main.go:141] libmachine: Successfully made call to close driver server
	I0603 10:40:01.620270   15688 main.go:141] libmachine: Successfully made call to close driver server
	I0603 10:40:01.620279   15688 main.go:141] libmachine: (addons-926744) DBG | Closing plugin on server side
	I0603 10:40:01.620286   15688 main.go:141] libmachine: Successfully made call to close driver server
	I0603 10:40:01.620524   15688 main.go:141] libmachine: Successfully made call to close driver server
	I0603 10:40:01.620559   15688 main.go:141] libmachine: (addons-926744) DBG | Closing plugin on server side
	I0603 10:40:01.623612   15688 main.go:141] libmachine: Making call to close connection to plugin binary
	I0603 10:40:01.623627   15688 main.go:141] libmachine: Making call to close connection to plugin binary
	I0603 10:40:01.623630   15688 main.go:141] libmachine: Making call to close connection to plugin binary
	I0603 10:40:01.623635   15688 main.go:141] libmachine: Making call to close driver server
	I0603 10:40:01.623638   15688 main.go:141] libmachine: Making call to close connection to plugin binary
	I0603 10:40:01.623643   15688 main.go:141] libmachine: Making call to close driver server
	I0603 10:40:01.623646   15688 main.go:141] libmachine: Making call to close connection to plugin binary
	I0603 10:40:01.623649   15688 main.go:141] libmachine: Making call to close driver server
	I0603 10:40:01.623654   15688 main.go:141] libmachine: (addons-926744) Calling .Close
	I0603 10:40:01.623657   15688 main.go:141] libmachine: Making call to close driver server
	I0603 10:40:01.623659   15688 main.go:141] libmachine: (addons-926744) Calling .Close
	I0603 10:40:01.623617   15688 main.go:141] libmachine: Making call to close connection to plugin binary
	I0603 10:40:01.623664   15688 main.go:141] libmachine: (addons-926744) Calling .Close
	I0603 10:40:01.623659   15688 main.go:141] libmachine: Making call to close connection to plugin binary
	I0603 10:40:01.623756   15688 main.go:141] libmachine: Making call to close connection to plugin binary
	I0603 10:40:01.623766   15688 main.go:141] libmachine: Making call to close driver server
	I0603 10:40:01.623769   15688 main.go:141] libmachine: Making call to close connection to plugin binary
	I0603 10:40:01.623774   15688 main.go:141] libmachine: (addons-926744) Calling .Close
	I0603 10:40:01.623778   15688 main.go:141] libmachine: Making call to close driver server
	I0603 10:40:01.623787   15688 main.go:141] libmachine: (addons-926744) Calling .Close
	I0603 10:40:01.623811   15688 main.go:141] libmachine: Making call to close connection to plugin binary
	I0603 10:40:01.623823   15688 main.go:141] libmachine: Making call to close connection to plugin binary
	I0603 10:40:01.623839   15688 node_ready.go:49] node "addons-926744" has status "Ready":"True"
	I0603 10:40:01.623637   15688 main.go:141] libmachine: Making call to close driver server
	I0603 10:40:01.623870   15688 main.go:141] libmachine: (addons-926744) Calling .Close
	I0603 10:40:01.623649   15688 main.go:141] libmachine: (addons-926744) Calling .Close
	I0603 10:40:01.623852   15688 node_ready.go:38] duration metric: took 6.828738ms for node "addons-926744" to be "Ready" ...
	I0603 10:40:01.623913   15688 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0603 10:40:01.624543   15688 main.go:141] libmachine: (addons-926744) DBG | Closing plugin on server side
	I0603 10:40:01.624547   15688 main.go:141] libmachine: (addons-926744) DBG | Closing plugin on server side
	I0603 10:40:01.624564   15688 main.go:141] libmachine: (addons-926744) DBG | Closing plugin on server side
	I0603 10:40:01.624589   15688 main.go:141] libmachine: Successfully made call to close driver server
	I0603 10:40:01.624596   15688 main.go:141] libmachine: Making call to close connection to plugin binary
	I0603 10:40:01.624599   15688 main.go:141] libmachine: Successfully made call to close driver server
	I0603 10:40:01.624611   15688 main.go:141] libmachine: Making call to close connection to plugin binary
	I0603 10:40:01.624619   15688 addons.go:475] Verifying addon registry=true in "addons-926744"
	I0603 10:40:01.624633   15688 main.go:141] libmachine: (addons-926744) DBG | Closing plugin on server side
	I0603 10:40:01.624655   15688 main.go:141] libmachine: Successfully made call to close driver server
	I0603 10:40:01.624661   15688 main.go:141] libmachine: Making call to close connection to plugin binary
	I0603 10:40:01.624662   15688 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0603 10:40:01.626417   15688 out.go:177] * Verifying registry addon...
	I0603 10:40:01.624704   15688 main.go:141] libmachine: (addons-926744) DBG | Closing plugin on server side
	I0603 10:40:01.624720   15688 main.go:141] libmachine: Successfully made call to close driver server
	I0603 10:40:01.624739   15688 main.go:141] libmachine: Successfully made call to close driver server
	I0603 10:40:01.626148   15688 main.go:141] libmachine: Successfully made call to close driver server
	I0603 10:40:01.626360   15688 main.go:141] libmachine: Successfully made call to close driver server
	I0603 10:40:01.626386   15688 main.go:141] libmachine: (addons-926744) DBG | Closing plugin on server side
	I0603 10:40:01.627757   15688 main.go:141] libmachine: Making call to close connection to plugin binary
	I0603 10:40:01.627775   15688 main.go:141] libmachine: Making call to close connection to plugin binary
	I0603 10:40:01.627794   15688 addons.go:475] Verifying addon metrics-server=true in "addons-926744"
	I0603 10:40:01.627801   15688 main.go:141] libmachine: Making call to close connection to plugin binary
	I0603 10:40:01.627824   15688 main.go:141] libmachine: Making call to close connection to plugin binary
	I0603 10:40:01.629201   15688 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-926744 service yakd-dashboard -n yakd-dashboard
	
	I0603 10:40:01.628687   15688 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0603 10:40:01.665890   15688 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-tq56p" in "kube-system" namespace to be "Ready" ...
	I0603 10:40:01.703186   15688 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0603 10:40:01.703216   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 10:40:01.703930   15688 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0603 10:40:01.703957   15688 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 10:40:01.727637   15688 main.go:141] libmachine: Making call to close driver server
	I0603 10:40:01.727665   15688 main.go:141] libmachine: (addons-926744) Calling .Close
	I0603 10:40:01.727945   15688 main.go:141] libmachine: Successfully made call to close driver server
	I0603 10:40:01.727962   15688 main.go:141] libmachine: Making call to close connection to plugin binary
	W0603 10:40:01.728041   15688 out.go:239] ! Enabling 'storage-provisioner-rancher' returned an error: running callbacks: [Error making local-path the default storage class: Error while marking storage class local-path as default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
	I0603 10:40:01.760352   15688 main.go:141] libmachine: Making call to close driver server
	I0603 10:40:01.760373   15688 main.go:141] libmachine: (addons-926744) Calling .Close
	I0603 10:40:01.760748   15688 main.go:141] libmachine: Successfully made call to close driver server
	I0603 10:40:01.760767   15688 main.go:141] libmachine: Making call to close connection to plugin binary
	I0603 10:40:01.911744   15688 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0603 10:40:02.124685   15688 kapi.go:248] "coredns" deployment in "kube-system" namespace and "addons-926744" context rescaled to 1 replicas
	I0603 10:40:02.128168   15688 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 10:40:02.135485   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 10:40:02.634185   15688 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 10:40:02.644010   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 10:40:03.150486   15688 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 10:40:03.183085   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 10:40:03.310046   15688 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (7.104170666s)
	I0603 10:40:03.310123   15688 main.go:141] libmachine: Making call to close driver server
	I0603 10:40:03.310138   15688 main.go:141] libmachine: (addons-926744) Calling .Close
	I0603 10:40:03.310136   15688 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (2.667819695s)
	I0603 10:40:03.312144   15688 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0603 10:40:03.310431   15688 main.go:141] libmachine: Successfully made call to close driver server
	I0603 10:40:03.310461   15688 main.go:141] libmachine: (addons-926744) DBG | Closing plugin on server side
	I0603 10:40:03.313709   15688 main.go:141] libmachine: Making call to close connection to plugin binary
	I0603 10:40:03.313729   15688 main.go:141] libmachine: Making call to close driver server
	I0603 10:40:03.313748   15688 main.go:141] libmachine: (addons-926744) Calling .Close
	I0603 10:40:03.315246   15688 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0603 10:40:03.314091   15688 main.go:141] libmachine: Successfully made call to close driver server
	I0603 10:40:03.314123   15688 main.go:141] libmachine: (addons-926744) DBG | Closing plugin on server side
	I0603 10:40:03.316651   15688 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0603 10:40:03.316662   15688 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0603 10:40:03.316667   15688 main.go:141] libmachine: Making call to close connection to plugin binary
	I0603 10:40:03.316686   15688 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-926744"
	I0603 10:40:03.318257   15688 out.go:177] * Verifying csi-hostpath-driver addon...
	I0603 10:40:03.320349   15688 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0603 10:40:03.373928   15688 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0603 10:40:03.373959   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 10:40:03.411546   15688 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0603 10:40:03.411581   15688 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0603 10:40:03.555366   15688 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0603 10:40:03.555394   15688 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0603 10:40:03.620263   15688 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0603 10:40:03.629888   15688 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 10:40:03.641432   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 10:40:03.696242   15688 pod_ready.go:102] pod "coredns-7db6d8ff4d-tq56p" in "kube-system" namespace has status "Ready":"False"
	I0603 10:40:03.862266   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 10:40:04.129927   15688 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 10:40:04.134763   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 10:40:04.328658   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 10:40:04.601877   15688 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.690066487s)
	I0603 10:40:04.601930   15688 main.go:141] libmachine: Making call to close driver server
	I0603 10:40:04.601944   15688 main.go:141] libmachine: (addons-926744) Calling .Close
	I0603 10:40:04.602320   15688 main.go:141] libmachine: (addons-926744) DBG | Closing plugin on server side
	I0603 10:40:04.602384   15688 main.go:141] libmachine: Successfully made call to close driver server
	I0603 10:40:04.602413   15688 main.go:141] libmachine: Making call to close connection to plugin binary
	I0603 10:40:04.602430   15688 main.go:141] libmachine: Making call to close driver server
	I0603 10:40:04.602442   15688 main.go:141] libmachine: (addons-926744) Calling .Close
	I0603 10:40:04.602723   15688 main.go:141] libmachine: Successfully made call to close driver server
	I0603 10:40:04.602739   15688 main.go:141] libmachine: Making call to close connection to plugin binary
	I0603 10:40:04.602760   15688 main.go:141] libmachine: (addons-926744) DBG | Closing plugin on server side
	I0603 10:40:04.629212   15688 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 10:40:04.635368   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 10:40:04.828506   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 10:40:05.136297   15688 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 10:40:05.148306   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 10:40:05.330123   15688 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.709773748s)
	I0603 10:40:05.330171   15688 main.go:141] libmachine: Making call to close driver server
	I0603 10:40:05.330183   15688 main.go:141] libmachine: (addons-926744) Calling .Close
	I0603 10:40:05.330465   15688 main.go:141] libmachine: Successfully made call to close driver server
	I0603 10:40:05.330480   15688 main.go:141] libmachine: (addons-926744) DBG | Closing plugin on server side
	I0603 10:40:05.330484   15688 main.go:141] libmachine: Making call to close connection to plugin binary
	I0603 10:40:05.330494   15688 main.go:141] libmachine: Making call to close driver server
	I0603 10:40:05.330503   15688 main.go:141] libmachine: (addons-926744) Calling .Close
	I0603 10:40:05.331059   15688 main.go:141] libmachine: Successfully made call to close driver server
	I0603 10:40:05.331077   15688 main.go:141] libmachine: (addons-926744) DBG | Closing plugin on server side
	I0603 10:40:05.331081   15688 main.go:141] libmachine: Making call to close connection to plugin binary
	I0603 10:40:05.332835   15688 addons.go:475] Verifying addon gcp-auth=true in "addons-926744"
	I0603 10:40:05.334688   15688 out.go:177] * Verifying gcp-auth addon...
	I0603 10:40:05.337167   15688 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0603 10:40:05.363481   15688 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0603 10:40:05.363509   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 10:40:05.374866   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 10:40:05.630097   15688 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 10:40:05.635426   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 10:40:05.829211   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 10:40:05.841230   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 10:40:06.130033   15688 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 10:40:06.136346   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 10:40:06.180170   15688 pod_ready.go:102] pod "coredns-7db6d8ff4d-tq56p" in "kube-system" namespace has status "Ready":"False"
	I0603 10:40:06.326718   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 10:40:06.340674   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 10:40:06.628595   15688 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 10:40:06.634938   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 10:40:06.826372   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 10:40:06.840443   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 10:40:07.132270   15688 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 10:40:07.135483   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 10:40:07.326067   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 10:40:07.341102   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 10:40:07.631054   15688 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 10:40:07.642137   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 10:40:07.826543   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 10:40:07.844067   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 10:40:08.128816   15688 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 10:40:08.135663   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 10:40:08.326284   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 10:40:08.340963   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 10:40:08.629306   15688 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 10:40:08.635590   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 10:40:08.671262   15688 pod_ready.go:102] pod "coredns-7db6d8ff4d-tq56p" in "kube-system" namespace has status "Ready":"False"
	I0603 10:40:08.826064   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 10:40:08.840446   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 10:40:09.145318   15688 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 10:40:09.147698   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 10:40:09.326715   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 10:40:09.340658   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 10:40:09.631103   15688 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 10:40:09.636087   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 10:40:09.825527   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 10:40:09.840886   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 10:40:10.129875   15688 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 10:40:10.135472   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 10:40:10.326000   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 10:40:10.340340   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 10:40:10.630564   15688 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 10:40:10.635504   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 10:40:10.672419   15688 pod_ready.go:97] pod "coredns-7db6d8ff4d-tq56p" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:PodReadyToStartContainers Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-06-03 10:40:10 +0000 UTC Reason: Message:} {Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-06-03 10:39:53 +0000 UTC Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-06-03 10:39:53 +0000 UTC Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-06-03 10:39:53 +0000 UTC Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-06-03 10:39:53 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.39.188 HostIPs:[{IP:192.168.39.188}] PodIP:10.244.0.2 PodIPs:[{IP:10.244.0.2}] StartTime:2024-06-03 10:39:53 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2024-06-03 10:39:57 +0000 UTC,FinishedAt:2024-06-03 10:40:09 +0000 UTC,ContainerID:cri-o://c1415dec3b0fda1bf1788f751be03f19dcb79bd765d56be5eb6284f6d12bd2a9,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.11.1 ImageID:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4 ContainerID:cri-o://c1415dec3b0fda1bf1788f751be03f19dcb79bd765d56be5eb6284f6d12bd2a9 Started:0xc000656fe0 AllocatedResources:map[] Resources:nil VolumeMounts:[]}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I0603 10:40:10.672457   15688 pod_ready.go:81] duration metric: took 9.006533452s for pod "coredns-7db6d8ff4d-tq56p" in "kube-system" namespace to be "Ready" ...
	E0603 10:40:10.672472   15688 pod_ready.go:66] WaitExtra: waitPodCondition: pod "coredns-7db6d8ff4d-tq56p" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:PodReadyToStartContainers Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-06-03 10:40:10 +0000 UTC Reason: Message:} {Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-06-03 10:39:53 +0000 UTC Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-06-03 10:39:53 +0000 UTC Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-06-03 10:39:53 +0000 UTC Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-06-03 10:39:53 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.39.188 HostIPs:[{IP:192.168.39.188}] PodIP:10.244.0.2 PodIPs:[{IP:10.244.0.2}] StartTime:2024-06-03 10:39:53 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2024-06-03 10:39:57 +0000 UTC,FinishedAt:2024-06-03 10:40:09 +0000 UTC,ContainerID:cri-o://c1415dec3b0fda1bf1788f751be03f19dcb79bd765d56be5eb6284f6d12bd2a9,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.11.1 ImageID:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4 ContainerID:cri-o://c1415dec3b0fda1bf1788f751be03f19dcb79bd765d56be5eb6284f6d12bd2a9 Started:0xc000656fe0 AllocatedResources:map[] Resources:nil VolumeMounts:[]}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I0603 10:40:10.672481   15688 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-x6wn8" in "kube-system" namespace to be "Ready" ...
	I0603 10:40:10.683729   15688 pod_ready.go:92] pod "coredns-7db6d8ff4d-x6wn8" in "kube-system" namespace has status "Ready":"True"
	I0603 10:40:10.684037   15688 pod_ready.go:81] duration metric: took 11.540399ms for pod "coredns-7db6d8ff4d-x6wn8" in "kube-system" namespace to be "Ready" ...
	I0603 10:40:10.684054   15688 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-926744" in "kube-system" namespace to be "Ready" ...
	I0603 10:40:10.692604   15688 pod_ready.go:92] pod "etcd-addons-926744" in "kube-system" namespace has status "Ready":"True"
	I0603 10:40:10.692627   15688 pod_ready.go:81] duration metric: took 8.564911ms for pod "etcd-addons-926744" in "kube-system" namespace to be "Ready" ...
	I0603 10:40:10.692638   15688 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-926744" in "kube-system" namespace to be "Ready" ...
	I0603 10:40:10.699066   15688 pod_ready.go:92] pod "kube-apiserver-addons-926744" in "kube-system" namespace has status "Ready":"True"
	I0603 10:40:10.699088   15688 pod_ready.go:81] duration metric: took 6.441568ms for pod "kube-apiserver-addons-926744" in "kube-system" namespace to be "Ready" ...
	I0603 10:40:10.699099   15688 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-926744" in "kube-system" namespace to be "Ready" ...
	I0603 10:40:10.710383   15688 pod_ready.go:92] pod "kube-controller-manager-addons-926744" in "kube-system" namespace has status "Ready":"True"
	I0603 10:40:10.710400   15688 pod_ready.go:81] duration metric: took 11.29407ms for pod "kube-controller-manager-addons-926744" in "kube-system" namespace to be "Ready" ...
	I0603 10:40:10.710409   15688 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-wc47p" in "kube-system" namespace to be "Ready" ...
	I0603 10:40:10.825623   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 10:40:10.840505   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 10:40:11.070437   15688 pod_ready.go:92] pod "kube-proxy-wc47p" in "kube-system" namespace has status "Ready":"True"
	I0603 10:40:11.070460   15688 pod_ready.go:81] duration metric: took 360.044521ms for pod "kube-proxy-wc47p" in "kube-system" namespace to be "Ready" ...
	I0603 10:40:11.070469   15688 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-926744" in "kube-system" namespace to be "Ready" ...
	I0603 10:40:11.129429   15688 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 10:40:11.134543   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 10:40:11.325359   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 10:40:11.340881   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 10:40:11.469648   15688 pod_ready.go:92] pod "kube-scheduler-addons-926744" in "kube-system" namespace has status "Ready":"True"
	I0603 10:40:11.469670   15688 pod_ready.go:81] duration metric: took 399.194726ms for pod "kube-scheduler-addons-926744" in "kube-system" namespace to be "Ready" ...
	I0603 10:40:11.469678   15688 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-c59844bb4-gsd5w" in "kube-system" namespace to be "Ready" ...
	I0603 10:40:11.629616   15688 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 10:40:11.634580   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 10:40:11.826206   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 10:40:11.842414   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 10:40:12.128819   15688 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 10:40:12.135371   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 10:40:12.325990   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 10:40:12.340191   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 10:40:12.629551   15688 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 10:40:12.637148   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 10:40:12.825982   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 10:40:12.840395   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 10:40:13.131192   15688 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 10:40:13.134901   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 10:40:13.327437   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 10:40:13.341316   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 10:40:13.475585   15688 pod_ready.go:102] pod "metrics-server-c59844bb4-gsd5w" in "kube-system" namespace has status "Ready":"False"
	I0603 10:40:13.629909   15688 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 10:40:13.634713   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 10:40:13.827481   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 10:40:13.840638   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 10:40:14.130436   15688 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 10:40:14.135213   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 10:40:14.327750   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 10:40:14.342578   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 10:40:14.628282   15688 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 10:40:14.636261   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 10:40:14.827157   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 10:40:14.841004   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 10:40:15.129666   15688 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 10:40:15.135484   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 10:40:15.327012   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 10:40:15.340817   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 10:40:15.480271   15688 pod_ready.go:102] pod "metrics-server-c59844bb4-gsd5w" in "kube-system" namespace has status "Ready":"False"
	I0603 10:40:15.630553   15688 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 10:40:15.642396   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 10:40:15.825563   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 10:40:15.840840   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 10:40:16.129494   15688 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 10:40:16.134862   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 10:40:16.328145   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 10:40:16.341239   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 10:40:16.629033   15688 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 10:40:16.635666   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 10:40:16.826137   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 10:40:16.842413   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 10:40:17.128510   15688 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 10:40:17.134310   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 10:40:17.639330   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 10:40:17.641936   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 10:40:17.644549   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 10:40:17.644668   15688 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 10:40:17.646452   15688 pod_ready.go:102] pod "metrics-server-c59844bb4-gsd5w" in "kube-system" namespace has status "Ready":"False"
	I0603 10:40:17.827109   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 10:40:17.839906   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 10:40:18.131250   15688 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 10:40:18.136095   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 10:40:18.325536   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 10:40:18.340803   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 10:40:18.630421   15688 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 10:40:18.641150   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 10:40:18.826391   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 10:40:18.841024   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 10:40:19.130520   15688 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 10:40:19.135491   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 10:40:19.326800   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 10:40:19.340947   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 10:40:19.629697   15688 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 10:40:19.634495   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 10:40:19.826880   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 10:40:19.840954   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 10:40:19.975444   15688 pod_ready.go:102] pod "metrics-server-c59844bb4-gsd5w" in "kube-system" namespace has status "Ready":"False"
	I0603 10:40:20.129189   15688 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 10:40:20.135023   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 10:40:20.327596   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 10:40:20.341178   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 10:40:20.632277   15688 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 10:40:20.640854   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 10:40:20.829833   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 10:40:20.841120   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 10:40:21.129312   15688 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 10:40:21.135146   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 10:40:21.325994   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 10:40:21.340037   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 10:40:21.638751   15688 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 10:40:21.640930   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 10:40:21.825519   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 10:40:21.841394   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 10:40:22.129089   15688 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 10:40:22.135230   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 10:40:22.327888   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 10:40:22.340872   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 10:40:22.477257   15688 pod_ready.go:102] pod "metrics-server-c59844bb4-gsd5w" in "kube-system" namespace has status "Ready":"False"
	I0603 10:40:22.629938   15688 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 10:40:22.637762   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 10:40:22.825272   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 10:40:22.840658   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 10:40:23.130748   15688 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 10:40:23.135616   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 10:40:23.327124   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 10:40:23.340567   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 10:40:23.629702   15688 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 10:40:23.635908   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 10:40:23.825894   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 10:40:23.841191   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 10:40:24.128746   15688 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 10:40:24.136152   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 10:40:24.327291   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 10:40:24.340673   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 10:40:24.629773   15688 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 10:40:24.635444   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 10:40:24.827411   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 10:40:24.840534   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 10:40:24.976228   15688 pod_ready.go:102] pod "metrics-server-c59844bb4-gsd5w" in "kube-system" namespace has status "Ready":"False"
	I0603 10:40:25.129524   15688 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 10:40:25.134893   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 10:40:25.326446   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 10:40:25.340796   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 10:40:25.629365   15688 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 10:40:25.637501   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 10:40:26.402192   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 10:40:26.406597   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 10:40:26.407792   15688 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 10:40:26.408915   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 10:40:26.411677   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 10:40:26.415652   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 10:40:26.629288   15688 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 10:40:26.636419   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 10:40:26.830129   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 10:40:26.841005   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 10:40:27.128996   15688 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 10:40:27.135247   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 10:40:27.326005   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 10:40:27.340449   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 10:40:27.476896   15688 pod_ready.go:102] pod "metrics-server-c59844bb4-gsd5w" in "kube-system" namespace has status "Ready":"False"
	I0603 10:40:27.630680   15688 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 10:40:27.634738   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 10:40:27.826937   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 10:40:27.840335   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 10:40:28.129121   15688 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 10:40:28.136173   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 10:40:28.328310   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 10:40:28.341964   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 10:40:28.631168   15688 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 10:40:28.634825   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 10:40:28.826349   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 10:40:28.842499   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 10:40:29.129599   15688 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 10:40:29.135004   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 10:40:29.328498   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 10:40:29.341148   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 10:40:29.629606   15688 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 10:40:29.635069   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 10:40:29.826876   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 10:40:29.841991   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 10:40:29.974719   15688 pod_ready.go:102] pod "metrics-server-c59844bb4-gsd5w" in "kube-system" namespace has status "Ready":"False"
	I0603 10:40:30.130126   15688 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 10:40:30.140275   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 10:40:30.325918   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 10:40:30.341379   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 10:40:30.628915   15688 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 10:40:30.635247   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 10:40:30.825972   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 10:40:30.841008   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 10:40:31.129679   15688 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 10:40:31.134765   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 10:40:31.326852   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 10:40:31.341006   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 10:40:31.629386   15688 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 10:40:31.635007   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 10:40:31.826472   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 10:40:31.842581   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 10:40:31.981443   15688 pod_ready.go:102] pod "metrics-server-c59844bb4-gsd5w" in "kube-system" namespace has status "Ready":"False"
	I0603 10:40:32.129242   15688 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 10:40:32.135769   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 10:40:32.325505   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 10:40:32.340407   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 10:40:32.629524   15688 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 10:40:32.634756   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 10:40:32.824742   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 10:40:32.840440   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 10:40:33.128924   15688 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 10:40:33.134758   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 10:40:33.325481   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 10:40:33.340100   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 10:40:33.629037   15688 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 10:40:33.635190   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 10:40:33.826052   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 10:40:33.841685   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 10:40:34.128359   15688 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 10:40:34.135283   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 10:40:34.327775   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 10:40:34.341317   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 10:40:34.475564   15688 pod_ready.go:102] pod "metrics-server-c59844bb4-gsd5w" in "kube-system" namespace has status "Ready":"False"
	I0603 10:40:34.629621   15688 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 10:40:34.635383   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 10:40:34.826833   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 10:40:34.842712   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 10:40:35.128988   15688 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 10:40:35.134900   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 10:40:35.325499   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 10:40:35.340319   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 10:40:35.629419   15688 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 10:40:35.635252   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 10:40:35.825864   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 10:40:35.841635   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 10:40:36.128448   15688 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 10:40:36.134236   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 10:40:36.326392   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 10:40:36.340896   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 10:40:36.629496   15688 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 10:40:36.634506   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 10:40:36.827828   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 10:40:36.849174   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 10:40:36.975547   15688 pod_ready.go:102] pod "metrics-server-c59844bb4-gsd5w" in "kube-system" namespace has status "Ready":"False"
	I0603 10:40:37.130540   15688 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 10:40:37.134825   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 10:40:37.327677   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 10:40:37.340937   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 10:40:37.629546   15688 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 10:40:37.635213   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 10:40:37.825812   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 10:40:37.841384   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 10:40:38.130158   15688 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 10:40:38.141363   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 10:40:38.326495   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 10:40:38.342320   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 10:40:38.629165   15688 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 10:40:38.635462   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 10:40:38.827806   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 10:40:38.840411   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 10:40:39.128833   15688 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 10:40:39.135888   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 10:40:39.325539   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 10:40:39.340789   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 10:40:39.478099   15688 pod_ready.go:102] pod "metrics-server-c59844bb4-gsd5w" in "kube-system" namespace has status "Ready":"False"
	I0603 10:40:39.628973   15688 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 10:40:39.634808   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 10:40:39.828979   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 10:40:39.841217   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 10:40:40.129266   15688 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 10:40:40.135431   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 10:40:40.326112   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 10:40:40.342062   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 10:40:40.628890   15688 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 10:40:40.635691   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 10:40:40.829492   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 10:40:40.841415   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 10:40:41.129636   15688 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 10:40:41.135254   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 10:40:41.327158   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 10:40:41.343083   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 10:40:41.480582   15688 pod_ready.go:102] pod "metrics-server-c59844bb4-gsd5w" in "kube-system" namespace has status "Ready":"False"
	I0603 10:40:41.629429   15688 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 10:40:41.634488   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 10:40:41.825415   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 10:40:41.845010   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 10:40:42.128445   15688 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 10:40:42.136433   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 10:40:42.326560   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 10:40:42.340936   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 10:40:42.628624   15688 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 10:40:42.634483   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 10:40:42.828415   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 10:40:42.842167   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 10:40:43.128983   15688 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 10:40:43.134991   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 10:40:43.325774   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 10:40:43.343629   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 10:40:43.630166   15688 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 10:40:43.642203   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 10:40:43.826455   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 10:40:43.842780   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 10:40:43.975623   15688 pod_ready.go:102] pod "metrics-server-c59844bb4-gsd5w" in "kube-system" namespace has status "Ready":"False"
	I0603 10:40:44.130773   15688 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 10:40:44.136753   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 10:40:44.325363   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 10:40:44.340172   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 10:40:44.628197   15688 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 10:40:44.635016   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 10:40:44.825430   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 10:40:44.841480   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 10:40:45.128813   15688 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 10:40:45.134615   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 10:40:45.325119   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 10:40:45.341332   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 10:40:45.630019   15688 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 10:40:45.634645   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 10:40:45.827281   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 10:40:45.844284   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 10:40:46.123181   15688 pod_ready.go:102] pod "metrics-server-c59844bb4-gsd5w" in "kube-system" namespace has status "Ready":"False"
	I0603 10:40:46.128042   15688 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 10:40:46.136021   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 10:40:46.326013   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 10:40:46.340854   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 10:40:46.629077   15688 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 10:40:46.635411   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 10:40:46.827647   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 10:40:46.840477   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 10:40:47.129277   15688 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 10:40:47.136137   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 10:40:47.327790   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 10:40:47.343425   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 10:40:47.629574   15688 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 10:40:47.635443   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 10:40:47.826178   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 10:40:47.840781   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 10:40:48.132576   15688 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 10:40:48.139351   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 10:40:48.326805   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 10:40:48.341385   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 10:40:48.475655   15688 pod_ready.go:102] pod "metrics-server-c59844bb4-gsd5w" in "kube-system" namespace has status "Ready":"False"
	I0603 10:40:48.629433   15688 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 10:40:48.634673   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 10:40:48.887233   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 10:40:48.896745   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 10:40:49.129278   15688 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 10:40:49.141382   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 10:40:49.326073   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 10:40:49.345417   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 10:40:49.628578   15688 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 10:40:49.636377   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 10:40:49.829768   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 10:40:49.840646   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 10:40:50.132725   15688 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 10:40:50.145717   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 10:40:50.344159   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 10:40:50.346772   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 10:40:50.481980   15688 pod_ready.go:102] pod "metrics-server-c59844bb4-gsd5w" in "kube-system" namespace has status "Ready":"False"
	I0603 10:40:50.628717   15688 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 10:40:50.637554   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 10:40:50.826283   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 10:40:50.839945   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 10:40:51.129008   15688 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 10:40:51.134814   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 10:40:51.325947   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 10:40:51.340011   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 10:40:51.628353   15688 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 10:40:51.635095   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 10:40:51.825694   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 10:40:51.840445   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 10:40:52.129368   15688 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 10:40:52.137157   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 10:40:52.326504   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 10:40:52.340540   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 10:40:52.628391   15688 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 10:40:52.635161   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 10:40:52.825928   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 10:40:52.840911   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 10:40:52.975304   15688 pod_ready.go:102] pod "metrics-server-c59844bb4-gsd5w" in "kube-system" namespace has status "Ready":"False"
	I0603 10:40:53.129276   15688 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 10:40:53.134387   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 10:40:53.327275   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 10:40:53.341067   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 10:40:53.629664   15688 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 10:40:53.635059   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 10:40:53.826103   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 10:40:53.840089   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 10:40:54.129403   15688 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 10:40:54.135339   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 10:40:54.326160   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 10:40:54.341098   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 10:40:54.628626   15688 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 10:40:54.635110   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 10:40:54.825638   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 10:40:54.840920   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 10:40:54.976126   15688 pod_ready.go:102] pod "metrics-server-c59844bb4-gsd5w" in "kube-system" namespace has status "Ready":"False"
	I0603 10:40:55.128855   15688 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 10:40:55.135486   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 10:40:55.326302   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 10:40:55.354256   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 10:40:55.629568   15688 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 10:40:55.635361   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 10:40:55.825695   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 10:40:55.840548   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 10:40:56.129253   15688 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 10:40:56.135445   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 10:40:56.328838   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 10:40:56.340869   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 10:40:56.629079   15688 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 10:40:56.634732   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 10:40:56.825627   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 10:40:56.841266   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 10:40:57.129251   15688 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 10:40:57.135876   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 10:40:57.327844   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 10:40:57.340616   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 10:40:57.475925   15688 pod_ready.go:102] pod "metrics-server-c59844bb4-gsd5w" in "kube-system" namespace has status "Ready":"False"
	I0603 10:40:57.629230   15688 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 10:40:57.635319   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 10:40:57.825964   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 10:40:57.840573   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 10:40:58.132600   15688 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 10:40:58.142104   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 10:40:58.326143   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 10:40:58.340419   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 10:40:58.629573   15688 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 10:40:58.635694   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 10:40:58.826307   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 10:40:58.841345   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 10:40:59.129395   15688 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 10:40:59.135165   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 10:40:59.332535   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 10:40:59.352873   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 10:40:59.796052   15688 kapi.go:107] duration metric: took 58.167361199s to wait for kubernetes.io/minikube-addons=registry ...
	I0603 10:40:59.796748   15688 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 10:40:59.800176   15688 pod_ready.go:102] pod "metrics-server-c59844bb4-gsd5w" in "kube-system" namespace has status "Ready":"False"
	I0603 10:40:59.827984   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 10:40:59.841880   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 10:41:00.129686   15688 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 10:41:00.338490   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 10:41:00.339869   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 10:41:00.629037   15688 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 10:41:00.828205   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 10:41:00.843203   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 10:41:01.129448   15688 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 10:41:01.325824   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 10:41:01.341038   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 10:41:01.628663   15688 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 10:41:01.825801   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 10:41:01.841104   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 10:41:01.976084   15688 pod_ready.go:102] pod "metrics-server-c59844bb4-gsd5w" in "kube-system" namespace has status "Ready":"False"
	I0603 10:41:02.129188   15688 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 10:41:02.326246   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 10:41:02.341316   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 10:41:02.633992   15688 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 10:41:02.826176   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 10:41:02.841389   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 10:41:03.129858   15688 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 10:41:03.325032   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 10:41:03.340708   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 10:41:03.629836   15688 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 10:41:03.825797   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 10:41:03.840983   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 10:41:04.129495   15688 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 10:41:04.330876   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 10:41:04.350196   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 10:41:04.475180   15688 pod_ready.go:102] pod "metrics-server-c59844bb4-gsd5w" in "kube-system" namespace has status "Ready":"False"
	I0603 10:41:04.629092   15688 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 10:41:04.825338   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 10:41:04.840538   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 10:41:05.129237   15688 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 10:41:05.325247   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 10:41:05.340168   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 10:41:05.631124   15688 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 10:41:05.826821   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 10:41:05.841336   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 10:41:06.131560   15688 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 10:41:06.337084   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 10:41:06.351680   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 10:41:06.481809   15688 pod_ready.go:102] pod "metrics-server-c59844bb4-gsd5w" in "kube-system" namespace has status "Ready":"False"
	I0603 10:41:06.628040   15688 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 10:41:06.830200   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 10:41:06.840323   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 10:41:07.128334   15688 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 10:41:07.325678   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 10:41:07.341177   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 10:41:07.628810   15688 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 10:41:07.827618   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 10:41:07.841002   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 10:41:08.131072   15688 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 10:41:08.326603   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 10:41:08.341237   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 10:41:08.631059   15688 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 10:41:08.832500   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 10:41:08.840434   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 10:41:08.977459   15688 pod_ready.go:102] pod "metrics-server-c59844bb4-gsd5w" in "kube-system" namespace has status "Ready":"False"
	I0603 10:41:09.130397   15688 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 10:41:09.326201   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 10:41:09.342569   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 10:41:09.629886   15688 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 10:41:09.826229   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 10:41:09.840149   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 10:41:10.128759   15688 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 10:41:10.325673   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 10:41:10.340707   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 10:41:10.628816   15688 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 10:41:10.826811   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 10:41:10.841339   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 10:41:10.977780   15688 pod_ready.go:102] pod "metrics-server-c59844bb4-gsd5w" in "kube-system" namespace has status "Ready":"False"
	I0603 10:41:11.128454   15688 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 10:41:11.327375   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 10:41:11.340593   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 10:41:11.628668   15688 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 10:41:11.825820   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 10:41:11.840516   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 10:41:12.130906   15688 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 10:41:12.741967   15688 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 10:41:12.742034   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 10:41:12.745893   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 10:41:12.826186   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 10:41:12.840445   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 10:41:13.128987   15688 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 10:41:13.326928   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 10:41:13.340345   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 10:41:13.475518   15688 pod_ready.go:102] pod "metrics-server-c59844bb4-gsd5w" in "kube-system" namespace has status "Ready":"False"
	I0603 10:41:13.629130   15688 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 10:41:13.825690   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 10:41:13.841268   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 10:41:14.128939   15688 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 10:41:14.325361   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 10:41:14.340314   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 10:41:14.629239   15688 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 10:41:14.828987   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 10:41:14.846710   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 10:41:15.129087   15688 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 10:41:15.325909   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 10:41:15.341413   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 10:41:15.478787   15688 pod_ready.go:102] pod "metrics-server-c59844bb4-gsd5w" in "kube-system" namespace has status "Ready":"False"
	I0603 10:41:15.629448   15688 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 10:41:15.826033   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 10:41:15.842079   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 10:41:16.149641   15688 kapi.go:107] duration metric: took 1m14.524974589s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0603 10:41:16.329501   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 10:41:16.341596   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 10:41:16.826047   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 10:41:16.840410   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 10:41:17.325501   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 10:41:17.340621   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 10:41:17.825646   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 10:41:17.840635   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 10:41:17.976578   15688 pod_ready.go:102] pod "metrics-server-c59844bb4-gsd5w" in "kube-system" namespace has status "Ready":"False"
	I0603 10:41:18.325555   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 10:41:18.340684   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 10:41:18.825925   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 10:41:18.839718   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 10:41:19.325964   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 10:41:19.339834   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 10:41:19.825766   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 10:41:19.840578   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 10:41:20.326143   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 10:41:20.340592   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 10:41:20.475743   15688 pod_ready.go:102] pod "metrics-server-c59844bb4-gsd5w" in "kube-system" namespace has status "Ready":"False"
	I0603 10:41:20.825935   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 10:41:20.840210   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 10:41:21.332189   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 10:41:21.340994   15688 kapi.go:107] duration metric: took 1m16.003829036s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0603 10:41:21.342646   15688 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-926744 cluster.
	I0603 10:41:21.344099   15688 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0603 10:41:21.345330   15688 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0603 10:41:21.825816   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 10:41:22.326505   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 10:41:22.483600   15688 pod_ready.go:102] pod "metrics-server-c59844bb4-gsd5w" in "kube-system" namespace has status "Ready":"False"
	I0603 10:41:22.825431   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 10:41:23.327541   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 10:41:23.826799   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 10:41:24.339560   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 10:41:24.831668   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 10:41:24.984779   15688 pod_ready.go:102] pod "metrics-server-c59844bb4-gsd5w" in "kube-system" namespace has status "Ready":"False"
	I0603 10:41:25.326088   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 10:41:25.998518   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 10:41:26.326335   15688 kapi.go:107] duration metric: took 1m23.005982669s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0603 10:41:26.328081   15688 out.go:177] * Enabled addons: nvidia-device-plugin, cloud-spanner, inspektor-gadget, storage-provisioner, metrics-server, ingress-dns, helm-tiller, yakd, default-storageclass, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
	I0603 10:41:26.329460   15688 addons.go:510] duration metric: took 1m33.383589678s for enable addons: enabled=[nvidia-device-plugin cloud-spanner inspektor-gadget storage-provisioner metrics-server ingress-dns helm-tiller yakd default-storageclass volumesnapshots registry ingress gcp-auth csi-hostpath-driver]
	I0603 10:41:27.478177   15688 pod_ready.go:102] pod "metrics-server-c59844bb4-gsd5w" in "kube-system" namespace has status "Ready":"False"
	I0603 10:41:29.478428   15688 pod_ready.go:102] pod "metrics-server-c59844bb4-gsd5w" in "kube-system" namespace has status "Ready":"False"
	I0603 10:41:31.483747   15688 pod_ready.go:102] pod "metrics-server-c59844bb4-gsd5w" in "kube-system" namespace has status "Ready":"False"
	I0603 10:41:33.977262   15688 pod_ready.go:102] pod "metrics-server-c59844bb4-gsd5w" in "kube-system" namespace has status "Ready":"False"
	I0603 10:41:36.476890   15688 pod_ready.go:102] pod "metrics-server-c59844bb4-gsd5w" in "kube-system" namespace has status "Ready":"False"
	I0603 10:41:38.975796   15688 pod_ready.go:102] pod "metrics-server-c59844bb4-gsd5w" in "kube-system" namespace has status "Ready":"False"
	I0603 10:41:41.476456   15688 pod_ready.go:102] pod "metrics-server-c59844bb4-gsd5w" in "kube-system" namespace has status "Ready":"False"
	I0603 10:41:43.484228   15688 pod_ready.go:102] pod "metrics-server-c59844bb4-gsd5w" in "kube-system" namespace has status "Ready":"False"
	I0603 10:41:45.976154   15688 pod_ready.go:102] pod "metrics-server-c59844bb4-gsd5w" in "kube-system" namespace has status "Ready":"False"
	I0603 10:41:48.476134   15688 pod_ready.go:102] pod "metrics-server-c59844bb4-gsd5w" in "kube-system" namespace has status "Ready":"False"
	I0603 10:41:50.975934   15688 pod_ready.go:102] pod "metrics-server-c59844bb4-gsd5w" in "kube-system" namespace has status "Ready":"False"
	I0603 10:41:53.478367   15688 pod_ready.go:102] pod "metrics-server-c59844bb4-gsd5w" in "kube-system" namespace has status "Ready":"False"
	I0603 10:41:55.976681   15688 pod_ready.go:102] pod "metrics-server-c59844bb4-gsd5w" in "kube-system" namespace has status "Ready":"False"
	I0603 10:41:58.476504   15688 pod_ready.go:102] pod "metrics-server-c59844bb4-gsd5w" in "kube-system" namespace has status "Ready":"False"
	I0603 10:41:59.477100   15688 pod_ready.go:92] pod "metrics-server-c59844bb4-gsd5w" in "kube-system" namespace has status "Ready":"True"
	I0603 10:41:59.477121   15688 pod_ready.go:81] duration metric: took 1m48.007436943s for pod "metrics-server-c59844bb4-gsd5w" in "kube-system" namespace to be "Ready" ...
	I0603 10:41:59.477131   15688 pod_ready.go:78] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-xsjk2" in "kube-system" namespace to be "Ready" ...
	I0603 10:41:59.481313   15688 pod_ready.go:92] pod "nvidia-device-plugin-daemonset-xsjk2" in "kube-system" namespace has status "Ready":"True"
	I0603 10:41:59.481330   15688 pod_ready.go:81] duration metric: took 4.193913ms for pod "nvidia-device-plugin-daemonset-xsjk2" in "kube-system" namespace to be "Ready" ...
	I0603 10:41:59.481350   15688 pod_ready.go:38] duration metric: took 1m57.857362978s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0603 10:41:59.481364   15688 api_server.go:52] waiting for apiserver process to appear ...
	I0603 10:41:59.481405   15688 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 10:41:59.481454   15688 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 10:41:59.576247   15688 cri.go:89] found id: "d1b4710df7b696f0a0182430163fe8bc85b767ca0338a1f340a6c3676f9c4b5d"
	I0603 10:41:59.576271   15688 cri.go:89] found id: ""
	I0603 10:41:59.576280   15688 logs.go:276] 1 containers: [d1b4710df7b696f0a0182430163fe8bc85b767ca0338a1f340a6c3676f9c4b5d]
	I0603 10:41:59.576338   15688 ssh_runner.go:195] Run: which crictl
	I0603 10:41:59.580743   15688 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 10:41:59.580799   15688 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 10:41:59.629996   15688 cri.go:89] found id: "0ffe7b014e84dd57d7b5b9f2a08c606558cf0cbfb76ea9854262db7db62378b0"
	I0603 10:41:59.630019   15688 cri.go:89] found id: ""
	I0603 10:41:59.630027   15688 logs.go:276] 1 containers: [0ffe7b014e84dd57d7b5b9f2a08c606558cf0cbfb76ea9854262db7db62378b0]
	I0603 10:41:59.630080   15688 ssh_runner.go:195] Run: which crictl
	I0603 10:41:59.637789   15688 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 10:41:59.637854   15688 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 10:41:59.680848   15688 cri.go:89] found id: "3491475c959d7d0afa4ffb099d959f5d3faf2ec3077b63f3cb85867da842f846"
	I0603 10:41:59.680872   15688 cri.go:89] found id: ""
	I0603 10:41:59.680880   15688 logs.go:276] 1 containers: [3491475c959d7d0afa4ffb099d959f5d3faf2ec3077b63f3cb85867da842f846]
	I0603 10:41:59.680931   15688 ssh_runner.go:195] Run: which crictl
	I0603 10:41:59.685110   15688 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 10:41:59.685161   15688 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 10:41:59.724139   15688 cri.go:89] found id: "b62b083e9d8cd1f295f9048fe1ad2f9efbcbb77ae70ee9e952bc18d63662d708"
	I0603 10:41:59.724164   15688 cri.go:89] found id: ""
	I0603 10:41:59.724175   15688 logs.go:276] 1 containers: [b62b083e9d8cd1f295f9048fe1ad2f9efbcbb77ae70ee9e952bc18d63662d708]
	I0603 10:41:59.724228   15688 ssh_runner.go:195] Run: which crictl
	I0603 10:41:59.728217   15688 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 10:41:59.728274   15688 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 10:41:59.766380   15688 cri.go:89] found id: "3e2009a9b8f4ff5b5edff04f22d53bb8c740dd0efe3373e23157d24615e24e56"
	I0603 10:41:59.766401   15688 cri.go:89] found id: ""
	I0603 10:41:59.766408   15688 logs.go:276] 1 containers: [3e2009a9b8f4ff5b5edff04f22d53bb8c740dd0efe3373e23157d24615e24e56]
	I0603 10:41:59.766451   15688 ssh_runner.go:195] Run: which crictl
	I0603 10:41:59.772251   15688 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 10:41:59.772318   15688 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 10:41:59.819207   15688 cri.go:89] found id: "5b548a14c5e6442eac3e36a22be12380807f80f6e2c4c320c60a68b4d7002d5a"
	I0603 10:41:59.819231   15688 cri.go:89] found id: ""
	I0603 10:41:59.819240   15688 logs.go:276] 1 containers: [5b548a14c5e6442eac3e36a22be12380807f80f6e2c4c320c60a68b4d7002d5a]
	I0603 10:41:59.819292   15688 ssh_runner.go:195] Run: which crictl
	I0603 10:41:59.823970   15688 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 10:41:59.824026   15688 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 10:41:59.861614   15688 cri.go:89] found id: ""
	I0603 10:41:59.861645   15688 logs.go:276] 0 containers: []
	W0603 10:41:59.861655   15688 logs.go:278] No container was found matching "kindnet"
	I0603 10:41:59.861666   15688 logs.go:123] Gathering logs for kube-controller-manager [5b548a14c5e6442eac3e36a22be12380807f80f6e2c4c320c60a68b4d7002d5a] ...
	I0603 10:41:59.861685   15688 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5b548a14c5e6442eac3e36a22be12380807f80f6e2c4c320c60a68b4d7002d5a"
	I0603 10:41:59.921615   15688 logs.go:123] Gathering logs for container status ...
	I0603 10:41:59.921647   15688 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 10:41:59.969437   15688 logs.go:123] Gathering logs for kubelet ...
	I0603 10:41:59.969472   15688 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 10:42:00.051950   15688 logs.go:123] Gathering logs for dmesg ...
	I0603 10:42:00.051984   15688 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 10:42:00.068389   15688 logs.go:123] Gathering logs for coredns [3491475c959d7d0afa4ffb099d959f5d3faf2ec3077b63f3cb85867da842f846] ...
	I0603 10:42:00.068425   15688 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3491475c959d7d0afa4ffb099d959f5d3faf2ec3077b63f3cb85867da842f846"
	I0603 10:42:00.107686   15688 logs.go:123] Gathering logs for kube-scheduler [b62b083e9d8cd1f295f9048fe1ad2f9efbcbb77ae70ee9e952bc18d63662d708] ...
	I0603 10:42:00.107715   15688 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b62b083e9d8cd1f295f9048fe1ad2f9efbcbb77ae70ee9e952bc18d63662d708"
	I0603 10:42:00.148614   15688 logs.go:123] Gathering logs for kube-proxy [3e2009a9b8f4ff5b5edff04f22d53bb8c740dd0efe3373e23157d24615e24e56] ...
	I0603 10:42:00.148644   15688 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3e2009a9b8f4ff5b5edff04f22d53bb8c740dd0efe3373e23157d24615e24e56"
	I0603 10:42:00.194053   15688 logs.go:123] Gathering logs for describe nodes ...
	I0603 10:42:00.194083   15688 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0603 10:42:00.320720   15688 logs.go:123] Gathering logs for kube-apiserver [d1b4710df7b696f0a0182430163fe8bc85b767ca0338a1f340a6c3676f9c4b5d] ...
	I0603 10:42:00.320746   15688 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d1b4710df7b696f0a0182430163fe8bc85b767ca0338a1f340a6c3676f9c4b5d"
	I0603 10:42:00.370323   15688 logs.go:123] Gathering logs for etcd [0ffe7b014e84dd57d7b5b9f2a08c606558cf0cbfb76ea9854262db7db62378b0] ...
	I0603 10:42:00.370351   15688 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0ffe7b014e84dd57d7b5b9f2a08c606558cf0cbfb76ea9854262db7db62378b0"
	I0603 10:42:00.422239   15688 logs.go:123] Gathering logs for CRI-O ...
	I0603 10:42:00.422271   15688 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 10:42:03.562052   15688 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 10:42:03.581879   15688 api_server.go:72] duration metric: took 2m10.636034699s to wait for apiserver process to appear ...
	I0603 10:42:03.581909   15688 api_server.go:88] waiting for apiserver healthz status ...
	I0603 10:42:03.581944   15688 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 10:42:03.582007   15688 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 10:42:03.621522   15688 cri.go:89] found id: "d1b4710df7b696f0a0182430163fe8bc85b767ca0338a1f340a6c3676f9c4b5d"
	I0603 10:42:03.621555   15688 cri.go:89] found id: ""
	I0603 10:42:03.621565   15688 logs.go:276] 1 containers: [d1b4710df7b696f0a0182430163fe8bc85b767ca0338a1f340a6c3676f9c4b5d]
	I0603 10:42:03.621625   15688 ssh_runner.go:195] Run: which crictl
	I0603 10:42:03.626419   15688 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 10:42:03.626485   15688 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 10:42:03.664347   15688 cri.go:89] found id: "0ffe7b014e84dd57d7b5b9f2a08c606558cf0cbfb76ea9854262db7db62378b0"
	I0603 10:42:03.664371   15688 cri.go:89] found id: ""
	I0603 10:42:03.664379   15688 logs.go:276] 1 containers: [0ffe7b014e84dd57d7b5b9f2a08c606558cf0cbfb76ea9854262db7db62378b0]
	I0603 10:42:03.664430   15688 ssh_runner.go:195] Run: which crictl
	I0603 10:42:03.668277   15688 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 10:42:03.668334   15688 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 10:42:03.706121   15688 cri.go:89] found id: "3491475c959d7d0afa4ffb099d959f5d3faf2ec3077b63f3cb85867da842f846"
	I0603 10:42:03.706142   15688 cri.go:89] found id: ""
	I0603 10:42:03.706151   15688 logs.go:276] 1 containers: [3491475c959d7d0afa4ffb099d959f5d3faf2ec3077b63f3cb85867da842f846]
	I0603 10:42:03.706199   15688 ssh_runner.go:195] Run: which crictl
	I0603 10:42:03.709966   15688 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 10:42:03.710012   15688 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 10:42:03.747510   15688 cri.go:89] found id: "b62b083e9d8cd1f295f9048fe1ad2f9efbcbb77ae70ee9e952bc18d63662d708"
	I0603 10:42:03.747534   15688 cri.go:89] found id: ""
	I0603 10:42:03.747541   15688 logs.go:276] 1 containers: [b62b083e9d8cd1f295f9048fe1ad2f9efbcbb77ae70ee9e952bc18d63662d708]
	I0603 10:42:03.747579   15688 ssh_runner.go:195] Run: which crictl
	I0603 10:42:03.751534   15688 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 10:42:03.751590   15688 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 10:42:03.797244   15688 cri.go:89] found id: "3e2009a9b8f4ff5b5edff04f22d53bb8c740dd0efe3373e23157d24615e24e56"
	I0603 10:42:03.797271   15688 cri.go:89] found id: ""
	I0603 10:42:03.797281   15688 logs.go:276] 1 containers: [3e2009a9b8f4ff5b5edff04f22d53bb8c740dd0efe3373e23157d24615e24e56]
	I0603 10:42:03.797340   15688 ssh_runner.go:195] Run: which crictl
	I0603 10:42:03.801747   15688 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 10:42:03.801810   15688 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 10:42:03.838254   15688 cri.go:89] found id: "5b548a14c5e6442eac3e36a22be12380807f80f6e2c4c320c60a68b4d7002d5a"
	I0603 10:42:03.838281   15688 cri.go:89] found id: ""
	I0603 10:42:03.838290   15688 logs.go:276] 1 containers: [5b548a14c5e6442eac3e36a22be12380807f80f6e2c4c320c60a68b4d7002d5a]
	I0603 10:42:03.838339   15688 ssh_runner.go:195] Run: which crictl
	I0603 10:42:03.842636   15688 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 10:42:03.842702   15688 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 10:42:03.880139   15688 cri.go:89] found id: ""
	I0603 10:42:03.880167   15688 logs.go:276] 0 containers: []
	W0603 10:42:03.880177   15688 logs.go:278] No container was found matching "kindnet"
	I0603 10:42:03.880187   15688 logs.go:123] Gathering logs for kubelet ...
	I0603 10:42:03.880199   15688 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 10:42:03.968375   15688 logs.go:123] Gathering logs for dmesg ...
	I0603 10:42:03.968417   15688 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 10:42:03.983683   15688 logs.go:123] Gathering logs for kube-scheduler [b62b083e9d8cd1f295f9048fe1ad2f9efbcbb77ae70ee9e952bc18d63662d708] ...
	I0603 10:42:03.983718   15688 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b62b083e9d8cd1f295f9048fe1ad2f9efbcbb77ae70ee9e952bc18d63662d708"
	I0603 10:42:04.031500   15688 logs.go:123] Gathering logs for kube-proxy [3e2009a9b8f4ff5b5edff04f22d53bb8c740dd0efe3373e23157d24615e24e56] ...
	I0603 10:42:04.031532   15688 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3e2009a9b8f4ff5b5edff04f22d53bb8c740dd0efe3373e23157d24615e24e56"
	I0603 10:42:04.069096   15688 logs.go:123] Gathering logs for CRI-O ...
	I0603 10:42:04.069133   15688 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 10:42:04.803447   15688 logs.go:123] Gathering logs for describe nodes ...
	I0603 10:42:04.803494   15688 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0603 10:42:04.916452   15688 logs.go:123] Gathering logs for kube-apiserver [d1b4710df7b696f0a0182430163fe8bc85b767ca0338a1f340a6c3676f9c4b5d] ...
	I0603 10:42:04.916491   15688 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d1b4710df7b696f0a0182430163fe8bc85b767ca0338a1f340a6c3676f9c4b5d"
	I0603 10:42:04.968508   15688 logs.go:123] Gathering logs for etcd [0ffe7b014e84dd57d7b5b9f2a08c606558cf0cbfb76ea9854262db7db62378b0] ...
	I0603 10:42:04.968533   15688 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0ffe7b014e84dd57d7b5b9f2a08c606558cf0cbfb76ea9854262db7db62378b0"
	I0603 10:42:05.028125   15688 logs.go:123] Gathering logs for coredns [3491475c959d7d0afa4ffb099d959f5d3faf2ec3077b63f3cb85867da842f846] ...
	I0603 10:42:05.028155   15688 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3491475c959d7d0afa4ffb099d959f5d3faf2ec3077b63f3cb85867da842f846"
	I0603 10:42:05.067620   15688 logs.go:123] Gathering logs for kube-controller-manager [5b548a14c5e6442eac3e36a22be12380807f80f6e2c4c320c60a68b4d7002d5a] ...
	I0603 10:42:05.067646   15688 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5b548a14c5e6442eac3e36a22be12380807f80f6e2c4c320c60a68b4d7002d5a"
	I0603 10:42:05.135035   15688 logs.go:123] Gathering logs for container status ...
	I0603 10:42:05.135072   15688 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 10:42:07.693030   15688 api_server.go:253] Checking apiserver healthz at https://192.168.39.188:8443/healthz ...
	I0603 10:42:07.698222   15688 api_server.go:279] https://192.168.39.188:8443/healthz returned 200:
	ok
	I0603 10:42:07.699367   15688 api_server.go:141] control plane version: v1.30.1
	I0603 10:42:07.699389   15688 api_server.go:131] duration metric: took 4.117473615s to wait for apiserver health ...
	I0603 10:42:07.699396   15688 system_pods.go:43] waiting for kube-system pods to appear ...
	I0603 10:42:07.699415   15688 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 10:42:07.699457   15688 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 10:42:07.743217   15688 cri.go:89] found id: "d1b4710df7b696f0a0182430163fe8bc85b767ca0338a1f340a6c3676f9c4b5d"
	I0603 10:42:07.743243   15688 cri.go:89] found id: ""
	I0603 10:42:07.743251   15688 logs.go:276] 1 containers: [d1b4710df7b696f0a0182430163fe8bc85b767ca0338a1f340a6c3676f9c4b5d]
	I0603 10:42:07.743291   15688 ssh_runner.go:195] Run: which crictl
	I0603 10:42:07.747333   15688 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 10:42:07.747379   15688 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 10:42:07.785630   15688 cri.go:89] found id: "0ffe7b014e84dd57d7b5b9f2a08c606558cf0cbfb76ea9854262db7db62378b0"
	I0603 10:42:07.785648   15688 cri.go:89] found id: ""
	I0603 10:42:07.785654   15688 logs.go:276] 1 containers: [0ffe7b014e84dd57d7b5b9f2a08c606558cf0cbfb76ea9854262db7db62378b0]
	I0603 10:42:07.785693   15688 ssh_runner.go:195] Run: which crictl
	I0603 10:42:07.790018   15688 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 10:42:07.790067   15688 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 10:42:07.831428   15688 cri.go:89] found id: "3491475c959d7d0afa4ffb099d959f5d3faf2ec3077b63f3cb85867da842f846"
	I0603 10:42:07.831448   15688 cri.go:89] found id: ""
	I0603 10:42:07.831455   15688 logs.go:276] 1 containers: [3491475c959d7d0afa4ffb099d959f5d3faf2ec3077b63f3cb85867da842f846]
	I0603 10:42:07.831495   15688 ssh_runner.go:195] Run: which crictl
	I0603 10:42:07.835559   15688 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 10:42:07.835610   15688 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 10:42:07.884774   15688 cri.go:89] found id: "b62b083e9d8cd1f295f9048fe1ad2f9efbcbb77ae70ee9e952bc18d63662d708"
	I0603 10:42:07.884801   15688 cri.go:89] found id: ""
	I0603 10:42:07.884811   15688 logs.go:276] 1 containers: [b62b083e9d8cd1f295f9048fe1ad2f9efbcbb77ae70ee9e952bc18d63662d708]
	I0603 10:42:07.884858   15688 ssh_runner.go:195] Run: which crictl
	I0603 10:42:07.889297   15688 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 10:42:07.889345   15688 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 10:42:07.926750   15688 cri.go:89] found id: "3e2009a9b8f4ff5b5edff04f22d53bb8c740dd0efe3373e23157d24615e24e56"
	I0603 10:42:07.926772   15688 cri.go:89] found id: ""
	I0603 10:42:07.926781   15688 logs.go:276] 1 containers: [3e2009a9b8f4ff5b5edff04f22d53bb8c740dd0efe3373e23157d24615e24e56]
	I0603 10:42:07.926825   15688 ssh_runner.go:195] Run: which crictl
	I0603 10:42:07.930781   15688 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 10:42:07.930837   15688 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 10:42:07.968207   15688 cri.go:89] found id: "5b548a14c5e6442eac3e36a22be12380807f80f6e2c4c320c60a68b4d7002d5a"
	I0603 10:42:07.968227   15688 cri.go:89] found id: ""
	I0603 10:42:07.968234   15688 logs.go:276] 1 containers: [5b548a14c5e6442eac3e36a22be12380807f80f6e2c4c320c60a68b4d7002d5a]
	I0603 10:42:07.968288   15688 ssh_runner.go:195] Run: which crictl
	I0603 10:42:07.973021   15688 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 10:42:07.973088   15688 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 10:42:08.012257   15688 cri.go:89] found id: ""
	I0603 10:42:08.012280   15688 logs.go:276] 0 containers: []
	W0603 10:42:08.012287   15688 logs.go:278] No container was found matching "kindnet"
	I0603 10:42:08.012296   15688 logs.go:123] Gathering logs for describe nodes ...
	I0603 10:42:08.012312   15688 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0603 10:42:08.140174   15688 logs.go:123] Gathering logs for coredns [3491475c959d7d0afa4ffb099d959f5d3faf2ec3077b63f3cb85867da842f846] ...
	I0603 10:42:08.140197   15688 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3491475c959d7d0afa4ffb099d959f5d3faf2ec3077b63f3cb85867da842f846"
	I0603 10:42:08.179355   15688 logs.go:123] Gathering logs for etcd [0ffe7b014e84dd57d7b5b9f2a08c606558cf0cbfb76ea9854262db7db62378b0] ...
	I0603 10:42:08.179393   15688 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0ffe7b014e84dd57d7b5b9f2a08c606558cf0cbfb76ea9854262db7db62378b0"
	I0603 10:42:08.228118   15688 logs.go:123] Gathering logs for kube-scheduler [b62b083e9d8cd1f295f9048fe1ad2f9efbcbb77ae70ee9e952bc18d63662d708] ...
	I0603 10:42:08.228146   15688 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b62b083e9d8cd1f295f9048fe1ad2f9efbcbb77ae70ee9e952bc18d63662d708"
	I0603 10:42:08.272516   15688 logs.go:123] Gathering logs for kube-proxy [3e2009a9b8f4ff5b5edff04f22d53bb8c740dd0efe3373e23157d24615e24e56] ...
	I0603 10:42:08.272544   15688 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3e2009a9b8f4ff5b5edff04f22d53bb8c740dd0efe3373e23157d24615e24e56"
	I0603 10:42:08.308487   15688 logs.go:123] Gathering logs for kube-controller-manager [5b548a14c5e6442eac3e36a22be12380807f80f6e2c4c320c60a68b4d7002d5a] ...
	I0603 10:42:08.308511   15688 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5b548a14c5e6442eac3e36a22be12380807f80f6e2c4c320c60a68b4d7002d5a"
	I0603 10:42:08.373386   15688 logs.go:123] Gathering logs for CRI-O ...
	I0603 10:42:08.373414   15688 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 10:42:09.218176   15688 logs.go:123] Gathering logs for kubelet ...
	I0603 10:42:09.218213   15688 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 10:42:09.301241   15688 logs.go:123] Gathering logs for dmesg ...
	I0603 10:42:09.301288   15688 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 10:42:09.321049   15688 logs.go:123] Gathering logs for kube-apiserver [d1b4710df7b696f0a0182430163fe8bc85b767ca0338a1f340a6c3676f9c4b5d] ...
	I0603 10:42:09.321083   15688 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d1b4710df7b696f0a0182430163fe8bc85b767ca0338a1f340a6c3676f9c4b5d"
	I0603 10:42:09.380064   15688 logs.go:123] Gathering logs for container status ...
	I0603 10:42:09.380094   15688 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 10:42:11.947596   15688 system_pods.go:59] 18 kube-system pods found
	I0603 10:42:11.947626   15688 system_pods.go:61] "coredns-7db6d8ff4d-x6wn8" [92e13ca5-45f1-4604-a816-b890269a86e9] Running
	I0603 10:42:11.947631   15688 system_pods.go:61] "csi-hostpath-attacher-0" [6f6ae728-2676-48ad-a8bb-c277fafb0fc5] Running
	I0603 10:42:11.947636   15688 system_pods.go:61] "csi-hostpath-resizer-0" [241ed1e6-7eea-41e5-a1f5-df7de8ba25ba] Running
	I0603 10:42:11.947639   15688 system_pods.go:61] "csi-hostpathplugin-rkcvf" [5bc77713-f4d6-478a-bce8-b0197f258ad0] Running
	I0603 10:42:11.947643   15688 system_pods.go:61] "etcd-addons-926744" [556219f4-7461-4935-abf0-a63c9923ca5c] Running
	I0603 10:42:11.947646   15688 system_pods.go:61] "kube-apiserver-addons-926744" [977aebf3-f958-46ef-bee0-014cecbb238f] Running
	I0603 10:42:11.947649   15688 system_pods.go:61] "kube-controller-manager-addons-926744" [566a0d21-83dd-4c4e-9ac0-461af574eb5f] Running
	I0603 10:42:11.947651   15688 system_pods.go:61] "kube-ingress-dns-minikube" [b2df4538-5f55-4952-9579-2cf3d39182c2] Running
	I0603 10:42:11.947654   15688 system_pods.go:61] "kube-proxy-wc47p" [a4052b1a-d14e-4679-8c52-6ebf348b3900] Running
	I0603 10:42:11.947657   15688 system_pods.go:61] "kube-scheduler-addons-926744" [c84ac4e4-3010-4816-9b5a-3cf331ca3f19] Running
	I0603 10:42:11.947660   15688 system_pods.go:61] "metrics-server-c59844bb4-gsd5w" [23f016d5-3265-4e2c-abb2-940fc0259aab] Running
	I0603 10:42:11.947663   15688 system_pods.go:61] "nvidia-device-plugin-daemonset-xsjk2" [6e714474-e47d-438a-8c5f-6f4fc07169af] Running
	I0603 10:42:11.947666   15688 system_pods.go:61] "registry-proxy-mhm9h" [28fbb401-9bee-4e8b-98e2-67e9fbcc54d4] Running
	I0603 10:42:11.947669   15688 system_pods.go:61] "registry-v8sfs" [ae4c2ffe-ab57-4327-a6c0-25504bcd327b] Running
	I0603 10:42:11.947673   15688 system_pods.go:61] "snapshot-controller-745499f584-vbr9k" [f9cdeeee-e6e5-448b-b16f-2672c1794671] Running
	I0603 10:42:11.947676   15688 system_pods.go:61] "snapshot-controller-745499f584-zjct2" [9ad43034-4603-4931-b3c3-fcbe981ba9fa] Running
	I0603 10:42:11.947681   15688 system_pods.go:61] "storage-provisioner" [6d7d74e2-9171-42f1-8cc1-f1708d0d6470] Running
	I0603 10:42:11.947684   15688 system_pods.go:61] "tiller-deploy-6677d64bcd-9kcxj" [fc636068-af58-4546-9600-7cee9712ca32] Running
	I0603 10:42:11.947690   15688 system_pods.go:74] duration metric: took 4.248289497s to wait for pod list to return data ...
	I0603 10:42:11.947700   15688 default_sa.go:34] waiting for default service account to be created ...
	I0603 10:42:11.950145   15688 default_sa.go:45] found service account: "default"
	I0603 10:42:11.950167   15688 default_sa.go:55] duration metric: took 2.458234ms for default service account to be created ...
	I0603 10:42:11.950174   15688 system_pods.go:116] waiting for k8s-apps to be running ...
	I0603 10:42:11.957587   15688 system_pods.go:86] 18 kube-system pods found
	I0603 10:42:11.957614   15688 system_pods.go:89] "coredns-7db6d8ff4d-x6wn8" [92e13ca5-45f1-4604-a816-b890269a86e9] Running
	I0603 10:42:11.957620   15688 system_pods.go:89] "csi-hostpath-attacher-0" [6f6ae728-2676-48ad-a8bb-c277fafb0fc5] Running
	I0603 10:42:11.957625   15688 system_pods.go:89] "csi-hostpath-resizer-0" [241ed1e6-7eea-41e5-a1f5-df7de8ba25ba] Running
	I0603 10:42:11.957628   15688 system_pods.go:89] "csi-hostpathplugin-rkcvf" [5bc77713-f4d6-478a-bce8-b0197f258ad0] Running
	I0603 10:42:11.957633   15688 system_pods.go:89] "etcd-addons-926744" [556219f4-7461-4935-abf0-a63c9923ca5c] Running
	I0603 10:42:11.957637   15688 system_pods.go:89] "kube-apiserver-addons-926744" [977aebf3-f958-46ef-bee0-014cecbb238f] Running
	I0603 10:42:11.957641   15688 system_pods.go:89] "kube-controller-manager-addons-926744" [566a0d21-83dd-4c4e-9ac0-461af574eb5f] Running
	I0603 10:42:11.957650   15688 system_pods.go:89] "kube-ingress-dns-minikube" [b2df4538-5f55-4952-9579-2cf3d39182c2] Running
	I0603 10:42:11.957654   15688 system_pods.go:89] "kube-proxy-wc47p" [a4052b1a-d14e-4679-8c52-6ebf348b3900] Running
	I0603 10:42:11.957658   15688 system_pods.go:89] "kube-scheduler-addons-926744" [c84ac4e4-3010-4816-9b5a-3cf331ca3f19] Running
	I0603 10:42:11.957662   15688 system_pods.go:89] "metrics-server-c59844bb4-gsd5w" [23f016d5-3265-4e2c-abb2-940fc0259aab] Running
	I0603 10:42:11.957667   15688 system_pods.go:89] "nvidia-device-plugin-daemonset-xsjk2" [6e714474-e47d-438a-8c5f-6f4fc07169af] Running
	I0603 10:42:11.957671   15688 system_pods.go:89] "registry-proxy-mhm9h" [28fbb401-9bee-4e8b-98e2-67e9fbcc54d4] Running
	I0603 10:42:11.957676   15688 system_pods.go:89] "registry-v8sfs" [ae4c2ffe-ab57-4327-a6c0-25504bcd327b] Running
	I0603 10:42:11.957683   15688 system_pods.go:89] "snapshot-controller-745499f584-vbr9k" [f9cdeeee-e6e5-448b-b16f-2672c1794671] Running
	I0603 10:42:11.957687   15688 system_pods.go:89] "snapshot-controller-745499f584-zjct2" [9ad43034-4603-4931-b3c3-fcbe981ba9fa] Running
	I0603 10:42:11.957695   15688 system_pods.go:89] "storage-provisioner" [6d7d74e2-9171-42f1-8cc1-f1708d0d6470] Running
	I0603 10:42:11.957699   15688 system_pods.go:89] "tiller-deploy-6677d64bcd-9kcxj" [fc636068-af58-4546-9600-7cee9712ca32] Running
	I0603 10:42:11.957704   15688 system_pods.go:126] duration metric: took 7.525434ms to wait for k8s-apps to be running ...
	I0603 10:42:11.957709   15688 system_svc.go:44] waiting for kubelet service to be running ....
	I0603 10:42:11.957752   15688 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0603 10:42:11.973332   15688 system_svc.go:56] duration metric: took 15.613734ms WaitForService to wait for kubelet
	I0603 10:42:11.973362   15688 kubeadm.go:576] duration metric: took 2m19.027522983s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0603 10:42:11.973385   15688 node_conditions.go:102] verifying NodePressure condition ...
	I0603 10:42:11.976121   15688 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0603 10:42:11.976145   15688 node_conditions.go:123] node cpu capacity is 2
	I0603 10:42:11.976157   15688 node_conditions.go:105] duration metric: took 2.766495ms to run NodePressure ...
	I0603 10:42:11.976167   15688 start.go:240] waiting for startup goroutines ...
	I0603 10:42:11.976174   15688 start.go:245] waiting for cluster config update ...
	I0603 10:42:11.976188   15688 start.go:254] writing updated cluster config ...
	I0603 10:42:11.976441   15688 ssh_runner.go:195] Run: rm -f paused
	I0603 10:42:12.026495   15688 start.go:600] kubectl: 1.30.1, cluster: 1.30.1 (minor skew: 0)
	I0603 10:42:12.028649   15688 out.go:177] * Done! kubectl is now configured to use "addons-926744" cluster and "default" namespace by default
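
	The "Checking apiserver healthz at ... returned 200: ok" lines above reflect a simple poll-until-healthy pattern against the apiserver's /healthz endpoint. The following is a minimal illustrative sketch of that pattern only, not minikube's actual implementation; the endpoint address is copied from the log above, and TLS verification is skipped purely to keep the sketch self-contained (a real client would trust the cluster CA instead).

	// healthzpoll.go - hedged sketch of polling an apiserver /healthz endpoint until it returns 200.
	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	func waitForHealthz(url string, timeout time.Duration) error {
		client := &http.Client{
			Timeout: 5 * time.Second,
			// Simplification for the sketch: skip certificate verification instead of
			// loading the cluster CA bundle.
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil // corresponds to the "returned 200: ok" log line
				}
			}
			time.Sleep(2 * time.Second) // back off between probes
		}
		return fmt.Errorf("apiserver did not become healthy within %s", timeout)
	}

	func main() {
		// Address taken from the log above; 4 minutes is an arbitrary example timeout.
		if err := waitForHealthz("https://192.168.39.188:8443/healthz", 4*time.Minute); err != nil {
			fmt.Println(err)
			return
		}
		fmt.Println("apiserver healthy")
	}
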
	
	
	==> CRI-O <==
	Jun 03 10:44:56 addons-926744 crio[677]: time="2024-06-03 10:44:56.960776194Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1717411496960750472,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:584738,},InodesUsed:&UInt64Value{Value:203,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=7e4cb992-b926-40bf-94f5-e2cbe9610441 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 03 10:44:56 addons-926744 crio[677]: time="2024-06-03 10:44:56.961521707Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ec2e8c17-354d-4cad-8e3e-ed7ee52c2c37 name=/runtime.v1.RuntimeService/ListContainers
	Jun 03 10:44:56 addons-926744 crio[677]: time="2024-06-03 10:44:56.961577420Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ec2e8c17-354d-4cad-8e3e-ed7ee52c2c37 name=/runtime.v1.RuntimeService/ListContainers
	Jun 03 10:44:56 addons-926744 crio[677]: time="2024-06-03 10:44:56.961900752Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:74b8be293d0ebe7b326246e1997cbb4359f15be3c3d8c483aedad2a18e553f70,PodSandboxId:3525cac8b28dfb7dd9134eafac5800fe9650ace5531d9d16330c90e9745527ff,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:dd1b12fcb60978ac32686ef6732d56f612c8636ef86693c09613946a54c69d79,State:CONTAINER_RUNNING,CreatedAt:1717411490087680332,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-86c47465fc-ksqv6,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 3832537b-81cc-4b24-a14b-af5ebcdbf83d,},Annotations:map[string]string{io.kubernetes.container.hash: 3b3a229a,io.kubernetes.containe
r.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1ad5d525fcd5d35fd513815d573661881e61d333b6223ffbf64accb1140d9f08,PodSandboxId:5c5a819be7c6f1bc4227b11997dc6e1c8612b484ebfe56b8dca3f6ce2d6b5af3,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:059cdcc5de66cd4e588b5f416b98a7af82e75413e2bf275f1e673c5d7d4b1afa,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:70ea0d8cc5300acde42073a2fbc0d28964ddb6e3c31263d92589c2320c3ccba4,State:CONTAINER_RUNNING,CreatedAt:1717411350997643791,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 2491ce04-859e-4df5-a082-1f95450cf4b1,},Annotations:map[string]string{io.kubern
etes.container.hash: 14ce3732,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:042cb7022a28f047a74fe701a2dbf071db18b0b177620077e90cbe0344c9f23f,PodSandboxId:d782ee88808174b9d8e2a596c1b93e97ce9d6304537b537569d54e99e1a50608,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:34d59bf120f98415e3a69401f6636032a0dc39e1dbfcff149c09591de0fad474,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:bd42824d488ce58074f6d54cb051437d0dc2669f3f96a4d9b3b72a8d7ddda679,State:CONTAINER_RUNNING,CreatedAt:1717411339421952638,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-68456f997b-7jxcw,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.
uid: 61e5ce61-19bd-4190-a787-83d69ca4a957,},Annotations:map[string]string{io.kubernetes.container.hash: c01c48c0,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2ca4f0b5927cee02233231241f88745c5e55ce32bb447642834460dc9dc4ddd3,PodSandboxId:d60392e614597f121fdfd72812a5d531145a1ace2b5aa35f9462ec9f3e4a953f,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1717411280607764906,Labels:map[string]string{io.kubernetes.container.name: gcp-a
uth,io.kubernetes.pod.name: gcp-auth-5db96cd9b4-zspc9,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: aa8cd347-96fe-4345-85ba-fa78e3b4f117,},Annotations:map[string]string{io.kubernetes.container.hash: c9b3f2b0,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b59b71a1d26052c219e71e24e321e98f0ac5b95a20562143df8a91fb69e2eeb2,PodSandboxId:4ed88311dfb80de60179b69025aecd80eba1a00b3f94f0d678d0c59d08832d97,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:35379defc3e7025b1c00d37092f560ce87d06ea5ab35d04ff8a0cf22d316bcf2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:684c5ea3b61b299cd4e713c10bfd8989341da91f6175e2e6e502869c0781fb66,State:CONTA
INER_EXITED,CreatedAt:1717411260810432279,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-qtlc7,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 4c1dc452-a980-4790-9ee7-7ab2ed6abd02,},Annotations:map[string]string{io.kubernetes.container.hash: b33eaec,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6587ac85191b0c6c3ffb405d2196476db864e96635b8871d5c9f8dcca04fe28c,PodSandboxId:dcdf3f5dffd0fbe75d8c7abd810d3bb1bd729bcfd00be2c8d4dab5b5b619c105,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:35379defc3e7025b1c00d37092f560ce87d06ea5ab35d04ff8a0cf22d316bcf2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:684c5ea3b61b299cd4e713c10bfd8989341da91f6
175e2e6e502869c0781fb66,State:CONTAINER_EXITED,CreatedAt:1717411260693820291,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-dndnc,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 10576e58-28de-4816-8ecb-ee3277edc1c9,},Annotations:map[string]string{io.kubernetes.container.hash: c6f2f59e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9712bf2d29de56f3c2dc6a1cf3109331f414452f77e6a0598140b229a1470303,PodSandboxId:af68176d576dcd65559b079e03da774eb06414e37b4f95c7e16053760fcb8a7e,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:
a24c7c057ec8730aaa152f77366454835a46dc699fcf243698a622788fd48d62,State:CONTAINER_RUNNING,CreatedAt:1717411249295285213,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-c59844bb4-gsd5w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 23f016d5-3265-4e2c-abb2-940fc0259aab,},Annotations:map[string]string{io.kubernetes.container.hash: e617ab66,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:47c3ea3f2517eb7d3a8c1d0ed3865f50eb49ba33713308c32297a0cef952c65f,PodSandboxId:779b9dd0801e20a2199f5814bff59a8ebac15c572ff13bf4dc5121fa7fd62608,Metadata:&ContainerMetadata{Name:yakd,Attempt:0,},Image:&ImageSpec{Image:docker.io/marcnuri/yakd@sha256:a3f540278e4c11373e15605311851dd9c64d
208f4d63e727bccc0e39f9329310,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:31de47c733c918d8371361afabd259bfb18f75409c61d94dce8151a83ee615a5,State:CONTAINER_RUNNING,CreatedAt:1717411246837618579,Labels:map[string]string{io.kubernetes.container.name: yakd,io.kubernetes.pod.name: yakd-dashboard-5ddbf7d777-ljsqm,io.kubernetes.pod.namespace: yakd-dashboard,io.kubernetes.pod.uid: 4efec2ba-9b7e-4693-984d-3f075be141e3,},Annotations:map[string]string{io.kubernetes.container.hash: 90cf0271,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:63c5c0dcb78f9c12afffd5a7364774452b71496f55fe04599af158f992fb6cab,PodSandboxId:626e221da353770ee980b3da595cd5e319bb40528dc9d1bdc1f761a83a73ac9d,Metadata:&ContainerMetadata{Name:local-
path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1717411238079768404,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-8d985888d-hkptp,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: f20ebd96-b074-4a16-b696-94a3d971de4b,},Annotations:map[string]string{io.kubernetes.container.hash: 37e016a7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f9e5a5b781a69a3b32df93f84eb0fc18139e277d5ad479870e300572b5f172bf,PodSandboxId:d7d998ba525a63885e34b947e49940ef2b5b
be66d22f4f29d21543519707e398,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1717411200016963811,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6d7d74e2-9171-42f1-8cc1-f1708d0d6470,},Annotations:map[string]string{io.kubernetes.container.hash: 29338b53,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3491475c959d7d0afa4ffb099d959f5d3faf2ec3077b63f3cb85867da842f846,PodSandboxId:687420cda82fff91f2c6c5947d206467a859933aeb08033d
b3bed8c5130205c0,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1717411197138076414,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-x6wn8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 92e13ca5-45f1-4604-a816-b890269a86e9,},Annotations:map[string]string{io.kubernetes.container.hash: 846d61ca,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy:
File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3e2009a9b8f4ff5b5edff04f22d53bb8c740dd0efe3373e23157d24615e24e56,PodSandboxId:b800364548168441ce7d1381dea23d4f26404d124526cf8184c9be3e0a025fce,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:1717411195169402413,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wc47p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a4052b1a-d14e-4679-8c52-6ebf348b3900,},Annotations:map[string]string{io.kubernetes.container.hash: 433006be,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminati
onGracePeriod: 30,},},&Container{Id:b62b083e9d8cd1f295f9048fe1ad2f9efbcbb77ae70ee9e952bc18d63662d708,PodSandboxId:76e012f1fc0de5a204e7ecf78b2c36a5483aa77220f0a27666f563997324a38e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1717411174589005104,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-926744,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d0ec6fddebeecbd8ac05ced6d1be357f,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30
,},},&Container{Id:5b548a14c5e6442eac3e36a22be12380807f80f6e2c4c320c60a68b4d7002d5a,PodSandboxId:5243863130f20b12212355440498ce6305e444cd5110a0b278640e080ec5eab8,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1717411174603647339,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-926744,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ac084f99407b2fade6f72f20a876eab3,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGrace
Period: 30,},},&Container{Id:0ffe7b014e84dd57d7b5b9f2a08c606558cf0cbfb76ea9854262db7db62378b0,PodSandboxId:3ef4f4f68cc5b4edf84662960e2845c7056338fa8ddf10cfafa77900bce9b860,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1717411174597777121,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-926744,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5c9e7fbd45f6c9334ede7759d0d4e3fe,},Annotations:map[string]string{io.kubernetes.container.hash: ca1efab6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d1b4710df7b696f0a0
182430163fe8bc85b767ca0338a1f340a6c3676f9c4b5d,PodSandboxId:8b0af8494513cf70c01d4d594f12219425b380fc8ff3d9e58506338c03731983,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1717411174338711993,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-926744,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ab8c600eb564693e5b6fd70209818264,},Annotations:map[string]string{io.kubernetes.container.hash: e75f1474,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=
ec2e8c17-354d-4cad-8e3e-ed7ee52c2c37 name=/runtime.v1.RuntimeService/ListContainers
	Jun 03 10:44:56 addons-926744 crio[677]: time="2024-06-03 10:44:56.999833933Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=5fd9ab18-f490-479e-9348-dbcfbb900c4b name=/runtime.v1.RuntimeService/Version
	Jun 03 10:44:57 addons-926744 crio[677]: time="2024-06-03 10:44:57.000001772Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=5fd9ab18-f490-479e-9348-dbcfbb900c4b name=/runtime.v1.RuntimeService/Version
	Jun 03 10:44:57 addons-926744 crio[677]: time="2024-06-03 10:44:57.001442252Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=dc86ddc4-b8d1-4fa7-bfbb-a16f660df165 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 03 10:44:57 addons-926744 crio[677]: time="2024-06-03 10:44:57.002764653Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1717411497002736989,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:584738,},InodesUsed:&UInt64Value{Value:203,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=dc86ddc4-b8d1-4fa7-bfbb-a16f660df165 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 03 10:44:57 addons-926744 crio[677]: time="2024-06-03 10:44:57.003269000Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b2523342-d102-42b0-bac9-41169eb6767f name=/runtime.v1.RuntimeService/ListContainers
	Jun 03 10:44:57 addons-926744 crio[677]: time="2024-06-03 10:44:57.003325598Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b2523342-d102-42b0-bac9-41169eb6767f name=/runtime.v1.RuntimeService/ListContainers
	Jun 03 10:44:57 addons-926744 crio[677]: time="2024-06-03 10:44:57.003846636Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:74b8be293d0ebe7b326246e1997cbb4359f15be3c3d8c483aedad2a18e553f70,PodSandboxId:3525cac8b28dfb7dd9134eafac5800fe9650ace5531d9d16330c90e9745527ff,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:dd1b12fcb60978ac32686ef6732d56f612c8636ef86693c09613946a54c69d79,State:CONTAINER_RUNNING,CreatedAt:1717411490087680332,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-86c47465fc-ksqv6,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 3832537b-81cc-4b24-a14b-af5ebcdbf83d,},Annotations:map[string]string{io.kubernetes.container.hash: 3b3a229a,io.kubernetes.containe
r.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1ad5d525fcd5d35fd513815d573661881e61d333b6223ffbf64accb1140d9f08,PodSandboxId:5c5a819be7c6f1bc4227b11997dc6e1c8612b484ebfe56b8dca3f6ce2d6b5af3,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:059cdcc5de66cd4e588b5f416b98a7af82e75413e2bf275f1e673c5d7d4b1afa,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:70ea0d8cc5300acde42073a2fbc0d28964ddb6e3c31263d92589c2320c3ccba4,State:CONTAINER_RUNNING,CreatedAt:1717411350997643791,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 2491ce04-859e-4df5-a082-1f95450cf4b1,},Annotations:map[string]string{io.kubern
etes.container.hash: 14ce3732,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:042cb7022a28f047a74fe701a2dbf071db18b0b177620077e90cbe0344c9f23f,PodSandboxId:d782ee88808174b9d8e2a596c1b93e97ce9d6304537b537569d54e99e1a50608,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:34d59bf120f98415e3a69401f6636032a0dc39e1dbfcff149c09591de0fad474,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:bd42824d488ce58074f6d54cb051437d0dc2669f3f96a4d9b3b72a8d7ddda679,State:CONTAINER_RUNNING,CreatedAt:1717411339421952638,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-68456f997b-7jxcw,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.
uid: 61e5ce61-19bd-4190-a787-83d69ca4a957,},Annotations:map[string]string{io.kubernetes.container.hash: c01c48c0,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2ca4f0b5927cee02233231241f88745c5e55ce32bb447642834460dc9dc4ddd3,PodSandboxId:d60392e614597f121fdfd72812a5d531145a1ace2b5aa35f9462ec9f3e4a953f,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1717411280607764906,Labels:map[string]string{io.kubernetes.container.name: gcp-a
uth,io.kubernetes.pod.name: gcp-auth-5db96cd9b4-zspc9,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: aa8cd347-96fe-4345-85ba-fa78e3b4f117,},Annotations:map[string]string{io.kubernetes.container.hash: c9b3f2b0,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b59b71a1d26052c219e71e24e321e98f0ac5b95a20562143df8a91fb69e2eeb2,PodSandboxId:4ed88311dfb80de60179b69025aecd80eba1a00b3f94f0d678d0c59d08832d97,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:35379defc3e7025b1c00d37092f560ce87d06ea5ab35d04ff8a0cf22d316bcf2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:684c5ea3b61b299cd4e713c10bfd8989341da91f6175e2e6e502869c0781fb66,State:CONTA
INER_EXITED,CreatedAt:1717411260810432279,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-qtlc7,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 4c1dc452-a980-4790-9ee7-7ab2ed6abd02,},Annotations:map[string]string{io.kubernetes.container.hash: b33eaec,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6587ac85191b0c6c3ffb405d2196476db864e96635b8871d5c9f8dcca04fe28c,PodSandboxId:dcdf3f5dffd0fbe75d8c7abd810d3bb1bd729bcfd00be2c8d4dab5b5b619c105,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:35379defc3e7025b1c00d37092f560ce87d06ea5ab35d04ff8a0cf22d316bcf2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:684c5ea3b61b299cd4e713c10bfd8989341da91f6
175e2e6e502869c0781fb66,State:CONTAINER_EXITED,CreatedAt:1717411260693820291,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-dndnc,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 10576e58-28de-4816-8ecb-ee3277edc1c9,},Annotations:map[string]string{io.kubernetes.container.hash: c6f2f59e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9712bf2d29de56f3c2dc6a1cf3109331f414452f77e6a0598140b229a1470303,PodSandboxId:af68176d576dcd65559b079e03da774eb06414e37b4f95c7e16053760fcb8a7e,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:
a24c7c057ec8730aaa152f77366454835a46dc699fcf243698a622788fd48d62,State:CONTAINER_RUNNING,CreatedAt:1717411249295285213,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-c59844bb4-gsd5w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 23f016d5-3265-4e2c-abb2-940fc0259aab,},Annotations:map[string]string{io.kubernetes.container.hash: e617ab66,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:47c3ea3f2517eb7d3a8c1d0ed3865f50eb49ba33713308c32297a0cef952c65f,PodSandboxId:779b9dd0801e20a2199f5814bff59a8ebac15c572ff13bf4dc5121fa7fd62608,Metadata:&ContainerMetadata{Name:yakd,Attempt:0,},Image:&ImageSpec{Image:docker.io/marcnuri/yakd@sha256:a3f540278e4c11373e15605311851dd9c64d
208f4d63e727bccc0e39f9329310,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:31de47c733c918d8371361afabd259bfb18f75409c61d94dce8151a83ee615a5,State:CONTAINER_RUNNING,CreatedAt:1717411246837618579,Labels:map[string]string{io.kubernetes.container.name: yakd,io.kubernetes.pod.name: yakd-dashboard-5ddbf7d777-ljsqm,io.kubernetes.pod.namespace: yakd-dashboard,io.kubernetes.pod.uid: 4efec2ba-9b7e-4693-984d-3f075be141e3,},Annotations:map[string]string{io.kubernetes.container.hash: 90cf0271,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:63c5c0dcb78f9c12afffd5a7364774452b71496f55fe04599af158f992fb6cab,PodSandboxId:626e221da353770ee980b3da595cd5e319bb40528dc9d1bdc1f761a83a73ac9d,Metadata:&ContainerMetadata{Name:local-
path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1717411238079768404,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-8d985888d-hkptp,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: f20ebd96-b074-4a16-b696-94a3d971de4b,},Annotations:map[string]string{io.kubernetes.container.hash: 37e016a7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f9e5a5b781a69a3b32df93f84eb0fc18139e277d5ad479870e300572b5f172bf,PodSandboxId:d7d998ba525a63885e34b947e49940ef2b5b
be66d22f4f29d21543519707e398,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1717411200016963811,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6d7d74e2-9171-42f1-8cc1-f1708d0d6470,},Annotations:map[string]string{io.kubernetes.container.hash: 29338b53,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3491475c959d7d0afa4ffb099d959f5d3faf2ec3077b63f3cb85867da842f846,PodSandboxId:687420cda82fff91f2c6c5947d206467a859933aeb08033d
b3bed8c5130205c0,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1717411197138076414,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-x6wn8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 92e13ca5-45f1-4604-a816-b890269a86e9,},Annotations:map[string]string{io.kubernetes.container.hash: 846d61ca,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy:
File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3e2009a9b8f4ff5b5edff04f22d53bb8c740dd0efe3373e23157d24615e24e56,PodSandboxId:b800364548168441ce7d1381dea23d4f26404d124526cf8184c9be3e0a025fce,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:1717411195169402413,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wc47p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a4052b1a-d14e-4679-8c52-6ebf348b3900,},Annotations:map[string]string{io.kubernetes.container.hash: 433006be,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminati
onGracePeriod: 30,},},&Container{Id:b62b083e9d8cd1f295f9048fe1ad2f9efbcbb77ae70ee9e952bc18d63662d708,PodSandboxId:76e012f1fc0de5a204e7ecf78b2c36a5483aa77220f0a27666f563997324a38e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1717411174589005104,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-926744,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d0ec6fddebeecbd8ac05ced6d1be357f,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30
,},},&Container{Id:5b548a14c5e6442eac3e36a22be12380807f80f6e2c4c320c60a68b4d7002d5a,PodSandboxId:5243863130f20b12212355440498ce6305e444cd5110a0b278640e080ec5eab8,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1717411174603647339,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-926744,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ac084f99407b2fade6f72f20a876eab3,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGrace
Period: 30,},},&Container{Id:0ffe7b014e84dd57d7b5b9f2a08c606558cf0cbfb76ea9854262db7db62378b0,PodSandboxId:3ef4f4f68cc5b4edf84662960e2845c7056338fa8ddf10cfafa77900bce9b860,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1717411174597777121,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-926744,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5c9e7fbd45f6c9334ede7759d0d4e3fe,},Annotations:map[string]string{io.kubernetes.container.hash: ca1efab6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d1b4710df7b696f0a0
182430163fe8bc85b767ca0338a1f340a6c3676f9c4b5d,PodSandboxId:8b0af8494513cf70c01d4d594f12219425b380fc8ff3d9e58506338c03731983,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1717411174338711993,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-926744,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ab8c600eb564693e5b6fd70209818264,},Annotations:map[string]string{io.kubernetes.container.hash: e75f1474,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=
b2523342-d102-42b0-bac9-41169eb6767f name=/runtime.v1.RuntimeService/ListContainers
	Jun 03 10:44:57 addons-926744 crio[677]: time="2024-06-03 10:44:57.036201508Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=c1be3234-f21a-4d03-be32-e98ee56e4d65 name=/runtime.v1.RuntimeService/Version
	Jun 03 10:44:57 addons-926744 crio[677]: time="2024-06-03 10:44:57.036272780Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=c1be3234-f21a-4d03-be32-e98ee56e4d65 name=/runtime.v1.RuntimeService/Version
	Jun 03 10:44:57 addons-926744 crio[677]: time="2024-06-03 10:44:57.038165621Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=3e046ca0-1e05-4c4e-8666-e1fa16691a41 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 03 10:44:57 addons-926744 crio[677]: time="2024-06-03 10:44:57.039432089Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1717411497039408577,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:584738,},InodesUsed:&UInt64Value{Value:203,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=3e046ca0-1e05-4c4e-8666-e1fa16691a41 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 03 10:44:57 addons-926744 crio[677]: time="2024-06-03 10:44:57.040011968Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a4b2ecc5-42f1-4e10-8aca-6ca7c589407b name=/runtime.v1.RuntimeService/ListContainers
	Jun 03 10:44:57 addons-926744 crio[677]: time="2024-06-03 10:44:57.040111769Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a4b2ecc5-42f1-4e10-8aca-6ca7c589407b name=/runtime.v1.RuntimeService/ListContainers
	Jun 03 10:44:57 addons-926744 crio[677]: time="2024-06-03 10:44:57.040444020Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:74b8be293d0ebe7b326246e1997cbb4359f15be3c3d8c483aedad2a18e553f70,PodSandboxId:3525cac8b28dfb7dd9134eafac5800fe9650ace5531d9d16330c90e9745527ff,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:dd1b12fcb60978ac32686ef6732d56f612c8636ef86693c09613946a54c69d79,State:CONTAINER_RUNNING,CreatedAt:1717411490087680332,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-86c47465fc-ksqv6,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 3832537b-81cc-4b24-a14b-af5ebcdbf83d,},Annotations:map[string]string{io.kubernetes.container.hash: 3b3a229a,io.kubernetes.containe
r.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1ad5d525fcd5d35fd513815d573661881e61d333b6223ffbf64accb1140d9f08,PodSandboxId:5c5a819be7c6f1bc4227b11997dc6e1c8612b484ebfe56b8dca3f6ce2d6b5af3,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:059cdcc5de66cd4e588b5f416b98a7af82e75413e2bf275f1e673c5d7d4b1afa,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:70ea0d8cc5300acde42073a2fbc0d28964ddb6e3c31263d92589c2320c3ccba4,State:CONTAINER_RUNNING,CreatedAt:1717411350997643791,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 2491ce04-859e-4df5-a082-1f95450cf4b1,},Annotations:map[string]string{io.kubern
etes.container.hash: 14ce3732,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:042cb7022a28f047a74fe701a2dbf071db18b0b177620077e90cbe0344c9f23f,PodSandboxId:d782ee88808174b9d8e2a596c1b93e97ce9d6304537b537569d54e99e1a50608,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:34d59bf120f98415e3a69401f6636032a0dc39e1dbfcff149c09591de0fad474,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:bd42824d488ce58074f6d54cb051437d0dc2669f3f96a4d9b3b72a8d7ddda679,State:CONTAINER_RUNNING,CreatedAt:1717411339421952638,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-68456f997b-7jxcw,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.
uid: 61e5ce61-19bd-4190-a787-83d69ca4a957,},Annotations:map[string]string{io.kubernetes.container.hash: c01c48c0,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2ca4f0b5927cee02233231241f88745c5e55ce32bb447642834460dc9dc4ddd3,PodSandboxId:d60392e614597f121fdfd72812a5d531145a1ace2b5aa35f9462ec9f3e4a953f,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1717411280607764906,Labels:map[string]string{io.kubernetes.container.name: gcp-a
uth,io.kubernetes.pod.name: gcp-auth-5db96cd9b4-zspc9,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: aa8cd347-96fe-4345-85ba-fa78e3b4f117,},Annotations:map[string]string{io.kubernetes.container.hash: c9b3f2b0,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b59b71a1d26052c219e71e24e321e98f0ac5b95a20562143df8a91fb69e2eeb2,PodSandboxId:4ed88311dfb80de60179b69025aecd80eba1a00b3f94f0d678d0c59d08832d97,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:35379defc3e7025b1c00d37092f560ce87d06ea5ab35d04ff8a0cf22d316bcf2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:684c5ea3b61b299cd4e713c10bfd8989341da91f6175e2e6e502869c0781fb66,State:CONTA
INER_EXITED,CreatedAt:1717411260810432279,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-qtlc7,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 4c1dc452-a980-4790-9ee7-7ab2ed6abd02,},Annotations:map[string]string{io.kubernetes.container.hash: b33eaec,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6587ac85191b0c6c3ffb405d2196476db864e96635b8871d5c9f8dcca04fe28c,PodSandboxId:dcdf3f5dffd0fbe75d8c7abd810d3bb1bd729bcfd00be2c8d4dab5b5b619c105,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:35379defc3e7025b1c00d37092f560ce87d06ea5ab35d04ff8a0cf22d316bcf2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:684c5ea3b61b299cd4e713c10bfd8989341da91f6
175e2e6e502869c0781fb66,State:CONTAINER_EXITED,CreatedAt:1717411260693820291,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-dndnc,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 10576e58-28de-4816-8ecb-ee3277edc1c9,},Annotations:map[string]string{io.kubernetes.container.hash: c6f2f59e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9712bf2d29de56f3c2dc6a1cf3109331f414452f77e6a0598140b229a1470303,PodSandboxId:af68176d576dcd65559b079e03da774eb06414e37b4f95c7e16053760fcb8a7e,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:
a24c7c057ec8730aaa152f77366454835a46dc699fcf243698a622788fd48d62,State:CONTAINER_RUNNING,CreatedAt:1717411249295285213,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-c59844bb4-gsd5w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 23f016d5-3265-4e2c-abb2-940fc0259aab,},Annotations:map[string]string{io.kubernetes.container.hash: e617ab66,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:47c3ea3f2517eb7d3a8c1d0ed3865f50eb49ba33713308c32297a0cef952c65f,PodSandboxId:779b9dd0801e20a2199f5814bff59a8ebac15c572ff13bf4dc5121fa7fd62608,Metadata:&ContainerMetadata{Name:yakd,Attempt:0,},Image:&ImageSpec{Image:docker.io/marcnuri/yakd@sha256:a3f540278e4c11373e15605311851dd9c64d
208f4d63e727bccc0e39f9329310,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:31de47c733c918d8371361afabd259bfb18f75409c61d94dce8151a83ee615a5,State:CONTAINER_RUNNING,CreatedAt:1717411246837618579,Labels:map[string]string{io.kubernetes.container.name: yakd,io.kubernetes.pod.name: yakd-dashboard-5ddbf7d777-ljsqm,io.kubernetes.pod.namespace: yakd-dashboard,io.kubernetes.pod.uid: 4efec2ba-9b7e-4693-984d-3f075be141e3,},Annotations:map[string]string{io.kubernetes.container.hash: 90cf0271,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:63c5c0dcb78f9c12afffd5a7364774452b71496f55fe04599af158f992fb6cab,PodSandboxId:626e221da353770ee980b3da595cd5e319bb40528dc9d1bdc1f761a83a73ac9d,Metadata:&ContainerMetadata{Name:local-
path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1717411238079768404,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-8d985888d-hkptp,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: f20ebd96-b074-4a16-b696-94a3d971de4b,},Annotations:map[string]string{io.kubernetes.container.hash: 37e016a7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f9e5a5b781a69a3b32df93f84eb0fc18139e277d5ad479870e300572b5f172bf,PodSandboxId:d7d998ba525a63885e34b947e49940ef2b5b
be66d22f4f29d21543519707e398,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1717411200016963811,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6d7d74e2-9171-42f1-8cc1-f1708d0d6470,},Annotations:map[string]string{io.kubernetes.container.hash: 29338b53,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3491475c959d7d0afa4ffb099d959f5d3faf2ec3077b63f3cb85867da842f846,PodSandboxId:687420cda82fff91f2c6c5947d206467a859933aeb08033d
b3bed8c5130205c0,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1717411197138076414,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-x6wn8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 92e13ca5-45f1-4604-a816-b890269a86e9,},Annotations:map[string]string{io.kubernetes.container.hash: 846d61ca,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy:
File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3e2009a9b8f4ff5b5edff04f22d53bb8c740dd0efe3373e23157d24615e24e56,PodSandboxId:b800364548168441ce7d1381dea23d4f26404d124526cf8184c9be3e0a025fce,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:1717411195169402413,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wc47p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a4052b1a-d14e-4679-8c52-6ebf348b3900,},Annotations:map[string]string{io.kubernetes.container.hash: 433006be,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminati
onGracePeriod: 30,},},&Container{Id:b62b083e9d8cd1f295f9048fe1ad2f9efbcbb77ae70ee9e952bc18d63662d708,PodSandboxId:76e012f1fc0de5a204e7ecf78b2c36a5483aa77220f0a27666f563997324a38e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1717411174589005104,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-926744,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d0ec6fddebeecbd8ac05ced6d1be357f,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30
,},},&Container{Id:5b548a14c5e6442eac3e36a22be12380807f80f6e2c4c320c60a68b4d7002d5a,PodSandboxId:5243863130f20b12212355440498ce6305e444cd5110a0b278640e080ec5eab8,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1717411174603647339,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-926744,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ac084f99407b2fade6f72f20a876eab3,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGrace
Period: 30,},},&Container{Id:0ffe7b014e84dd57d7b5b9f2a08c606558cf0cbfb76ea9854262db7db62378b0,PodSandboxId:3ef4f4f68cc5b4edf84662960e2845c7056338fa8ddf10cfafa77900bce9b860,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1717411174597777121,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-926744,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5c9e7fbd45f6c9334ede7759d0d4e3fe,},Annotations:map[string]string{io.kubernetes.container.hash: ca1efab6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d1b4710df7b696f0a0
182430163fe8bc85b767ca0338a1f340a6c3676f9c4b5d,PodSandboxId:8b0af8494513cf70c01d4d594f12219425b380fc8ff3d9e58506338c03731983,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1717411174338711993,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-926744,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ab8c600eb564693e5b6fd70209818264,},Annotations:map[string]string{io.kubernetes.container.hash: e75f1474,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=
a4b2ecc5-42f1-4e10-8aca-6ca7c589407b name=/runtime.v1.RuntimeService/ListContainers
	Jun 03 10:44:57 addons-926744 crio[677]: time="2024-06-03 10:44:57.078521153Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=c15caf93-2852-43d5-9d42-f8607d9ca46d name=/runtime.v1.RuntimeService/Version
	Jun 03 10:44:57 addons-926744 crio[677]: time="2024-06-03 10:44:57.078593176Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=c15caf93-2852-43d5-9d42-f8607d9ca46d name=/runtime.v1.RuntimeService/Version
	Jun 03 10:44:57 addons-926744 crio[677]: time="2024-06-03 10:44:57.079404480Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=5bd1c533-006d-4efa-a047-b8c95140c2b2 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 03 10:44:57 addons-926744 crio[677]: time="2024-06-03 10:44:57.080726028Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1717411497080702077,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:584738,},InodesUsed:&UInt64Value{Value:203,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=5bd1c533-006d-4efa-a047-b8c95140c2b2 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 03 10:44:57 addons-926744 crio[677]: time="2024-06-03 10:44:57.081312034Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=64ae5b01-fb88-417e-ab22-2829b026df7d name=/runtime.v1.RuntimeService/ListContainers
	Jun 03 10:44:57 addons-926744 crio[677]: time="2024-06-03 10:44:57.081370255Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=64ae5b01-fb88-417e-ab22-2829b026df7d name=/runtime.v1.RuntimeService/ListContainers
	Jun 03 10:44:57 addons-926744 crio[677]: time="2024-06-03 10:44:57.081681829Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:74b8be293d0ebe7b326246e1997cbb4359f15be3c3d8c483aedad2a18e553f70,PodSandboxId:3525cac8b28dfb7dd9134eafac5800fe9650ace5531d9d16330c90e9745527ff,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:dd1b12fcb60978ac32686ef6732d56f612c8636ef86693c09613946a54c69d79,State:CONTAINER_RUNNING,CreatedAt:1717411490087680332,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-86c47465fc-ksqv6,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 3832537b-81cc-4b24-a14b-af5ebcdbf83d,},Annotations:map[string]string{io.kubernetes.container.hash: 3b3a229a,io.kubernetes.containe
r.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1ad5d525fcd5d35fd513815d573661881e61d333b6223ffbf64accb1140d9f08,PodSandboxId:5c5a819be7c6f1bc4227b11997dc6e1c8612b484ebfe56b8dca3f6ce2d6b5af3,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:059cdcc5de66cd4e588b5f416b98a7af82e75413e2bf275f1e673c5d7d4b1afa,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:70ea0d8cc5300acde42073a2fbc0d28964ddb6e3c31263d92589c2320c3ccba4,State:CONTAINER_RUNNING,CreatedAt:1717411350997643791,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 2491ce04-859e-4df5-a082-1f95450cf4b1,},Annotations:map[string]string{io.kubern
etes.container.hash: 14ce3732,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:042cb7022a28f047a74fe701a2dbf071db18b0b177620077e90cbe0344c9f23f,PodSandboxId:d782ee88808174b9d8e2a596c1b93e97ce9d6304537b537569d54e99e1a50608,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:34d59bf120f98415e3a69401f6636032a0dc39e1dbfcff149c09591de0fad474,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:bd42824d488ce58074f6d54cb051437d0dc2669f3f96a4d9b3b72a8d7ddda679,State:CONTAINER_RUNNING,CreatedAt:1717411339421952638,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-68456f997b-7jxcw,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.
uid: 61e5ce61-19bd-4190-a787-83d69ca4a957,},Annotations:map[string]string{io.kubernetes.container.hash: c01c48c0,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2ca4f0b5927cee02233231241f88745c5e55ce32bb447642834460dc9dc4ddd3,PodSandboxId:d60392e614597f121fdfd72812a5d531145a1ace2b5aa35f9462ec9f3e4a953f,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1717411280607764906,Labels:map[string]string{io.kubernetes.container.name: gcp-a
uth,io.kubernetes.pod.name: gcp-auth-5db96cd9b4-zspc9,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: aa8cd347-96fe-4345-85ba-fa78e3b4f117,},Annotations:map[string]string{io.kubernetes.container.hash: c9b3f2b0,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b59b71a1d26052c219e71e24e321e98f0ac5b95a20562143df8a91fb69e2eeb2,PodSandboxId:4ed88311dfb80de60179b69025aecd80eba1a00b3f94f0d678d0c59d08832d97,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:35379defc3e7025b1c00d37092f560ce87d06ea5ab35d04ff8a0cf22d316bcf2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:684c5ea3b61b299cd4e713c10bfd8989341da91f6175e2e6e502869c0781fb66,State:CONTA
INER_EXITED,CreatedAt:1717411260810432279,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-qtlc7,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 4c1dc452-a980-4790-9ee7-7ab2ed6abd02,},Annotations:map[string]string{io.kubernetes.container.hash: b33eaec,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6587ac85191b0c6c3ffb405d2196476db864e96635b8871d5c9f8dcca04fe28c,PodSandboxId:dcdf3f5dffd0fbe75d8c7abd810d3bb1bd729bcfd00be2c8d4dab5b5b619c105,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:35379defc3e7025b1c00d37092f560ce87d06ea5ab35d04ff8a0cf22d316bcf2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:684c5ea3b61b299cd4e713c10bfd8989341da91f6
175e2e6e502869c0781fb66,State:CONTAINER_EXITED,CreatedAt:1717411260693820291,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-dndnc,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 10576e58-28de-4816-8ecb-ee3277edc1c9,},Annotations:map[string]string{io.kubernetes.container.hash: c6f2f59e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9712bf2d29de56f3c2dc6a1cf3109331f414452f77e6a0598140b229a1470303,PodSandboxId:af68176d576dcd65559b079e03da774eb06414e37b4f95c7e16053760fcb8a7e,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:
a24c7c057ec8730aaa152f77366454835a46dc699fcf243698a622788fd48d62,State:CONTAINER_RUNNING,CreatedAt:1717411249295285213,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-c59844bb4-gsd5w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 23f016d5-3265-4e2c-abb2-940fc0259aab,},Annotations:map[string]string{io.kubernetes.container.hash: e617ab66,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:47c3ea3f2517eb7d3a8c1d0ed3865f50eb49ba33713308c32297a0cef952c65f,PodSandboxId:779b9dd0801e20a2199f5814bff59a8ebac15c572ff13bf4dc5121fa7fd62608,Metadata:&ContainerMetadata{Name:yakd,Attempt:0,},Image:&ImageSpec{Image:docker.io/marcnuri/yakd@sha256:a3f540278e4c11373e15605311851dd9c64d
208f4d63e727bccc0e39f9329310,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:31de47c733c918d8371361afabd259bfb18f75409c61d94dce8151a83ee615a5,State:CONTAINER_RUNNING,CreatedAt:1717411246837618579,Labels:map[string]string{io.kubernetes.container.name: yakd,io.kubernetes.pod.name: yakd-dashboard-5ddbf7d777-ljsqm,io.kubernetes.pod.namespace: yakd-dashboard,io.kubernetes.pod.uid: 4efec2ba-9b7e-4693-984d-3f075be141e3,},Annotations:map[string]string{io.kubernetes.container.hash: 90cf0271,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:63c5c0dcb78f9c12afffd5a7364774452b71496f55fe04599af158f992fb6cab,PodSandboxId:626e221da353770ee980b3da595cd5e319bb40528dc9d1bdc1f761a83a73ac9d,Metadata:&ContainerMetadata{Name:local-
path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1717411238079768404,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-8d985888d-hkptp,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: f20ebd96-b074-4a16-b696-94a3d971de4b,},Annotations:map[string]string{io.kubernetes.container.hash: 37e016a7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f9e5a5b781a69a3b32df93f84eb0fc18139e277d5ad479870e300572b5f172bf,PodSandboxId:d7d998ba525a63885e34b947e49940ef2b5b
be66d22f4f29d21543519707e398,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1717411200016963811,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6d7d74e2-9171-42f1-8cc1-f1708d0d6470,},Annotations:map[string]string{io.kubernetes.container.hash: 29338b53,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3491475c959d7d0afa4ffb099d959f5d3faf2ec3077b63f3cb85867da842f846,PodSandboxId:687420cda82fff91f2c6c5947d206467a859933aeb08033d
b3bed8c5130205c0,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1717411197138076414,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-x6wn8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 92e13ca5-45f1-4604-a816-b890269a86e9,},Annotations:map[string]string{io.kubernetes.container.hash: 846d61ca,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy:
File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3e2009a9b8f4ff5b5edff04f22d53bb8c740dd0efe3373e23157d24615e24e56,PodSandboxId:b800364548168441ce7d1381dea23d4f26404d124526cf8184c9be3e0a025fce,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:1717411195169402413,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wc47p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a4052b1a-d14e-4679-8c52-6ebf348b3900,},Annotations:map[string]string{io.kubernetes.container.hash: 433006be,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminati
onGracePeriod: 30,},},&Container{Id:b62b083e9d8cd1f295f9048fe1ad2f9efbcbb77ae70ee9e952bc18d63662d708,PodSandboxId:76e012f1fc0de5a204e7ecf78b2c36a5483aa77220f0a27666f563997324a38e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1717411174589005104,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-926744,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d0ec6fddebeecbd8ac05ced6d1be357f,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30
,},},&Container{Id:5b548a14c5e6442eac3e36a22be12380807f80f6e2c4c320c60a68b4d7002d5a,PodSandboxId:5243863130f20b12212355440498ce6305e444cd5110a0b278640e080ec5eab8,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1717411174603647339,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-926744,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ac084f99407b2fade6f72f20a876eab3,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGrace
Period: 30,},},&Container{Id:0ffe7b014e84dd57d7b5b9f2a08c606558cf0cbfb76ea9854262db7db62378b0,PodSandboxId:3ef4f4f68cc5b4edf84662960e2845c7056338fa8ddf10cfafa77900bce9b860,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1717411174597777121,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-926744,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5c9e7fbd45f6c9334ede7759d0d4e3fe,},Annotations:map[string]string{io.kubernetes.container.hash: ca1efab6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d1b4710df7b696f0a0
182430163fe8bc85b767ca0338a1f340a6c3676f9c4b5d,PodSandboxId:8b0af8494513cf70c01d4d594f12219425b380fc8ff3d9e58506338c03731983,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1717411174338711993,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-926744,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ab8c600eb564693e5b6fd70209818264,},Annotations:map[string]string{io.kubernetes.container.hash: e75f1474,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=
64ae5b01-fb88-417e-ab22-2829b026df7d name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	74b8be293d0eb       gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7                      7 seconds ago       Running             hello-world-app           0                   3525cac8b28df       hello-world-app-86c47465fc-ksqv6
	1ad5d525fcd5d       docker.io/library/nginx@sha256:059cdcc5de66cd4e588b5f416b98a7af82e75413e2bf275f1e673c5d7d4b1afa                              2 minutes ago       Running             nginx                     0                   5c5a819be7c6f       nginx
	042cb7022a28f       ghcr.io/headlamp-k8s/headlamp@sha256:34d59bf120f98415e3a69401f6636032a0dc39e1dbfcff149c09591de0fad474                        2 minutes ago       Running             headlamp                  0                   d782ee8880817       headlamp-68456f997b-7jxcw
	2ca4f0b5927ce       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b                 3 minutes ago       Running             gcp-auth                  0                   d60392e614597       gcp-auth-5db96cd9b4-zspc9
	b59b71a1d2605       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:35379defc3e7025b1c00d37092f560ce87d06ea5ab35d04ff8a0cf22d316bcf2   3 minutes ago       Exited              patch                     0                   4ed88311dfb80       ingress-nginx-admission-patch-qtlc7
	6587ac85191b0       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:35379defc3e7025b1c00d37092f560ce87d06ea5ab35d04ff8a0cf22d316bcf2   3 minutes ago       Exited              create                    0                   dcdf3f5dffd0f       ingress-nginx-admission-create-dndnc
	9712bf2d29de5       registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872        4 minutes ago       Running             metrics-server            0                   af68176d576dc       metrics-server-c59844bb4-gsd5w
	47c3ea3f2517e       docker.io/marcnuri/yakd@sha256:a3f540278e4c11373e15605311851dd9c64d208f4d63e727bccc0e39f9329310                              4 minutes ago       Running             yakd                      0                   779b9dd0801e2       yakd-dashboard-5ddbf7d777-ljsqm
	63c5c0dcb78f9       docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef             4 minutes ago       Running             local-path-provisioner    0                   626e221da3537       local-path-provisioner-8d985888d-hkptp
	f9e5a5b781a69       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                             4 minutes ago       Running             storage-provisioner       0                   d7d998ba525a6       storage-provisioner
	3491475c959d7       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                                             5 minutes ago       Running             coredns                   0                   687420cda82ff       coredns-7db6d8ff4d-x6wn8
	3e2009a9b8f4f       747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd                                                             5 minutes ago       Running             kube-proxy                0                   b800364548168       kube-proxy-wc47p
	5b548a14c5e64       25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c                                                             5 minutes ago       Running             kube-controller-manager   0                   5243863130f20       kube-controller-manager-addons-926744
	0ffe7b014e84d       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                                             5 minutes ago       Running             etcd                      0                   3ef4f4f68cc5b       etcd-addons-926744
	b62b083e9d8cd       a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035                                                             5 minutes ago       Running             kube-scheduler            0                   76e012f1fc0de       kube-scheduler-addons-926744
	d1b4710df7b69       91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a                                                             5 minutes ago       Running             kube-apiserver            0                   8b0af8494513c       kube-apiserver-addons-926744
	
	
	==> coredns [3491475c959d7d0afa4ffb099d959f5d3faf2ec3077b63f3cb85867da842f846] <==
	[INFO] 10.244.0.8:46405 - 2599 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.00069742s
	[INFO] 10.244.0.8:59632 - 55811 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.00008446s
	[INFO] 10.244.0.8:59632 - 16389 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000046278s
	[INFO] 10.244.0.8:41084 - 40728 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000103929s
	[INFO] 10.244.0.8:41084 - 25369 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000054601s
	[INFO] 10.244.0.8:51579 - 59806 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000097293s
	[INFO] 10.244.0.8:51579 - 39320 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000053938s
	[INFO] 10.244.0.8:47113 - 63010 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000072883s
	[INFO] 10.244.0.8:47113 - 38438 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000040249s
	[INFO] 10.244.0.8:44584 - 27734 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000082578s
	[INFO] 10.244.0.8:44584 - 3416 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000073322s
	[INFO] 10.244.0.8:48219 - 62114 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000030171s
	[INFO] 10.244.0.8:48219 - 38572 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000026203s
	[INFO] 10.244.0.8:41993 - 56150 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000029028s
	[INFO] 10.244.0.8:41993 - 17496 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000031722s
	[INFO] 10.244.0.22:43589 - 37743 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000220325s
	[INFO] 10.244.0.22:55505 - 8939 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000181994s
	[INFO] 10.244.0.22:51042 - 65201 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000159073s
	[INFO] 10.244.0.22:57129 - 19016 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000159411s
	[INFO] 10.244.0.22:39124 - 18032 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000090881s
	[INFO] 10.244.0.22:56252 - 24240 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000067047s
	[INFO] 10.244.0.22:51039 - 44305 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 496 0.000476548s
	[INFO] 10.244.0.22:38956 - 52100 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.000605244s
	[INFO] 10.244.0.25:60113 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.00157132s
	[INFO] 10.244.0.25:51687 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000097573s
	
	
	==> describe nodes <==
	Name:               addons-926744
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-926744
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=599070631c2216ebc936292d491e4fe10e15b9d8
	                    minikube.k8s.io/name=addons-926744
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_06_03T10_39_40_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-926744
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 03 Jun 2024 10:39:37 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-926744
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 03 Jun 2024 10:44:55 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 03 Jun 2024 10:43:14 +0000   Mon, 03 Jun 2024 10:39:35 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 03 Jun 2024 10:43:14 +0000   Mon, 03 Jun 2024 10:39:35 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 03 Jun 2024 10:43:14 +0000   Mon, 03 Jun 2024 10:39:35 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 03 Jun 2024 10:43:14 +0000   Mon, 03 Jun 2024 10:39:40 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.188
	  Hostname:    addons-926744
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	System Info:
	  Machine ID:                 6c36fa0042ae4fdaaa827e1bb0dda654
	  System UUID:                6c36fa00-42ae-4fda-aa82-7e1bb0dda654
	  Boot ID:                    76ff8c64-2020-47c1-945c-0f6fed458973
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.1
	  Kube-Proxy Version:         v1.30.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (14 in total)
	  Namespace                   Name                                      CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                      ------------  ----------  ---------------  -------------  ---
	  default                     hello-world-app-86c47465fc-ksqv6          0 (0%)        0 (0%)      0 (0%)           0 (0%)         11s
	  default                     nginx                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m31s
	  gcp-auth                    gcp-auth-5db96cd9b4-zspc9                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m52s
	  headlamp                    headlamp-68456f997b-7jxcw                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m45s
	  kube-system                 coredns-7db6d8ff4d-x6wn8                  100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     5m4s
	  kube-system                 etcd-addons-926744                        100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         5m18s
	  kube-system                 kube-apiserver-addons-926744              250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m18s
	  kube-system                 kube-controller-manager-addons-926744     200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m18s
	  kube-system                 kube-proxy-wc47p                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m5s
	  kube-system                 kube-scheduler-addons-926744              100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m18s
	  kube-system                 metrics-server-c59844bb4-gsd5w            100m (5%)     0 (0%)      200Mi (5%)       0 (0%)         4m59s
	  kube-system                 storage-provisioner                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m
	  local-path-storage          local-path-provisioner-8d985888d-hkptp    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m59s
	  yakd-dashboard              yakd-dashboard-5ddbf7d777-ljsqm           0 (0%)        0 (0%)      128Mi (3%)       256Mi (6%)     4m58s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             498Mi (13%)  426Mi (11%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 4m59s  kube-proxy       
	  Normal  Starting                 5m18s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  5m18s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  5m18s  kubelet          Node addons-926744 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m18s  kubelet          Node addons-926744 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m18s  kubelet          Node addons-926744 status is now: NodeHasSufficientPID
	  Normal  NodeReady                5m17s  kubelet          Node addons-926744 status is now: NodeReady
	  Normal  RegisteredNode           5m5s   node-controller  Node addons-926744 event: Registered Node addons-926744 in Controller
	
	
	==> dmesg <==
	[Jun 3 10:40] kauditd_printk_skb: 117 callbacks suppressed
	[  +6.699088] kauditd_printk_skb: 90 callbacks suppressed
	[ +10.962568] kauditd_printk_skb: 5 callbacks suppressed
	[ +10.875540] kauditd_printk_skb: 2 callbacks suppressed
	[  +6.181729] kauditd_printk_skb: 27 callbacks suppressed
	[  +8.770466] kauditd_printk_skb: 2 callbacks suppressed
	[ +11.146041] kauditd_printk_skb: 9 callbacks suppressed
	[Jun 3 10:41] kauditd_printk_skb: 29 callbacks suppressed
	[  +5.032288] kauditd_printk_skb: 56 callbacks suppressed
	[  +7.415865] kauditd_printk_skb: 28 callbacks suppressed
	[  +5.561939] kauditd_printk_skb: 11 callbacks suppressed
	[ +36.747947] kauditd_printk_skb: 45 callbacks suppressed
	[Jun 3 10:42] kauditd_printk_skb: 26 callbacks suppressed
	[  +6.402673] kauditd_printk_skb: 6 callbacks suppressed
	[  +5.676627] kauditd_printk_skb: 28 callbacks suppressed
	[  +6.101924] kauditd_printk_skb: 58 callbacks suppressed
	[  +7.081833] kauditd_printk_skb: 11 callbacks suppressed
	[  +5.193017] kauditd_printk_skb: 15 callbacks suppressed
	[  +6.039997] kauditd_printk_skb: 15 callbacks suppressed
	[  +8.121660] kauditd_printk_skb: 23 callbacks suppressed
	[Jun 3 10:43] kauditd_printk_skb: 2 callbacks suppressed
	[ +24.291568] kauditd_printk_skb: 7 callbacks suppressed
	[  +9.225720] kauditd_printk_skb: 33 callbacks suppressed
	[Jun 3 10:44] kauditd_printk_skb: 6 callbacks suppressed
	[  +5.901298] kauditd_printk_skb: 21 callbacks suppressed
	
	
	==> etcd [0ffe7b014e84dd57d7b5b9f2a08c606558cf0cbfb76ea9854262db7db62378b0] <==
	{"level":"warn","ts":"2024-06-03T10:41:25.948571Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"136.417871ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/\" range_end:\"/registry/pods/kube-system0\" ","response":"range_response_count:18 size:85553"}
	{"level":"info","ts":"2024-06-03T10:41:25.948696Z","caller":"traceutil/trace.go:171","msg":"trace[795847321] range","detail":"{range_begin:/registry/pods/kube-system/; range_end:/registry/pods/kube-system0; response_count:18; response_revision:1178; }","duration":"136.798984ms","start":"2024-06-03T10:41:25.811888Z","end":"2024-06-03T10:41:25.948687Z","steps":["trace[795847321] 'agreement among raft nodes before linearized reading'  (duration: 131.11305ms)"],"step_count":1}
	{"level":"warn","ts":"2024-06-03T10:42:19.234173Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"171.227279ms","expected-duration":"100ms","prefix":"","request":"header:<ID:3735894073187871367 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/masterleases/192.168.39.188\" mod_revision:1274 > success:<request_put:<key:\"/registry/masterleases/192.168.39.188\" value_size:67 lease:3735894073187871365 >> failure:<request_range:<key:\"/registry/masterleases/192.168.39.188\" > >>","response":"size:16"}
	{"level":"info","ts":"2024-06-03T10:42:19.23434Z","caller":"traceutil/trace.go:171","msg":"trace[55051779] linearizableReadLoop","detail":"{readStateIndex:1371; appliedIndex:1370; }","duration":"361.367807ms","start":"2024-06-03T10:42:18.872959Z","end":"2024-06-03T10:42:19.234327Z","steps":["trace[55051779] 'read index received'  (duration: 186.25985ms)","trace[55051779] 'applied index is now lower than readState.Index'  (duration: 175.106743ms)"],"step_count":2}
	{"level":"info","ts":"2024-06-03T10:42:19.234413Z","caller":"traceutil/trace.go:171","msg":"trace[383761922] transaction","detail":"{read_only:false; response_revision:1322; number_of_response:1; }","duration":"478.029051ms","start":"2024-06-03T10:42:18.756377Z","end":"2024-06-03T10:42:19.234406Z","steps":["trace[383761922] 'process raft request'  (duration: 302.879922ms)","trace[383761922] 'compare'  (duration: 171.101588ms)"],"step_count":2}
	{"level":"warn","ts":"2024-06-03T10:42:19.234449Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-06-03T10:42:18.756356Z","time spent":"478.072045ms","remote":"127.0.0.1:38444","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":120,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/masterleases/192.168.39.188\" mod_revision:1274 > success:<request_put:<key:\"/registry/masterleases/192.168.39.188\" value_size:67 lease:3735894073187871365 >> failure:<request_range:<key:\"/registry/masterleases/192.168.39.188\" > >"}
	{"level":"warn","ts":"2024-06-03T10:42:19.234893Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"361.918572ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/endpointslices/\" range_end:\"/registry/endpointslices0\" count_only:true ","response":"range_response_count:0 size:7"}
	{"level":"info","ts":"2024-06-03T10:42:19.234938Z","caller":"traceutil/trace.go:171","msg":"trace[1147656126] range","detail":"{range_begin:/registry/endpointslices/; range_end:/registry/endpointslices0; response_count:0; response_revision:1322; }","duration":"362.019511ms","start":"2024-06-03T10:42:18.872911Z","end":"2024-06-03T10:42:19.234931Z","steps":["trace[1147656126] 'agreement among raft nodes before linearized reading'  (duration: 361.789897ms)"],"step_count":1}
	{"level":"warn","ts":"2024-06-03T10:42:19.234963Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-06-03T10:42:18.872898Z","time spent":"362.058529ms","remote":"127.0.0.1:38664","response type":"/etcdserverpb.KV/Range","request count":0,"request size":56,"response count":13,"response size":29,"request content":"key:\"/registry/endpointslices/\" range_end:\"/registry/endpointslices0\" count_only:true "}
	{"level":"warn","ts":"2024-06-03T10:42:19.235146Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"330.801009ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/headlamp/\" range_end:\"/registry/pods/headlamp0\" ","response":"range_response_count:1 size:3966"}
	{"level":"info","ts":"2024-06-03T10:42:19.23526Z","caller":"traceutil/trace.go:171","msg":"trace[1351488970] range","detail":"{range_begin:/registry/pods/headlamp/; range_end:/registry/pods/headlamp0; response_count:1; response_revision:1322; }","duration":"330.938755ms","start":"2024-06-03T10:42:18.90431Z","end":"2024-06-03T10:42:19.235249Z","steps":["trace[1351488970] 'agreement among raft nodes before linearized reading'  (duration: 330.752892ms)"],"step_count":1}
	{"level":"warn","ts":"2024-06-03T10:42:19.237557Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-06-03T10:42:18.904297Z","time spent":"333.249021ms","remote":"127.0.0.1:38586","response type":"/etcdserverpb.KV/Range","request count":0,"request size":52,"response count":1,"response size":3988,"request content":"key:\"/registry/pods/headlamp/\" range_end:\"/registry/pods/headlamp0\" "}
	{"level":"warn","ts":"2024-06-03T10:42:19.237166Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"179.536055ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/\" range_end:\"/registry/pods/kube-system0\" ","response":"range_response_count:19 size:88197"}
	{"level":"info","ts":"2024-06-03T10:42:19.237649Z","caller":"traceutil/trace.go:171","msg":"trace[920582780] range","detail":"{range_begin:/registry/pods/kube-system/; range_end:/registry/pods/kube-system0; response_count:19; response_revision:1322; }","duration":"180.042208ms","start":"2024-06-03T10:42:19.057597Z","end":"2024-06-03T10:42:19.23764Z","steps":["trace[920582780] 'agreement among raft nodes before linearized reading'  (duration: 177.591445ms)"],"step_count":1}
	{"level":"warn","ts":"2024-06-03T10:42:19.237262Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"167.875085ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-06-03T10:42:19.237748Z","caller":"traceutil/trace.go:171","msg":"trace[1755845588] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1322; }","duration":"168.377536ms","start":"2024-06-03T10:42:19.069364Z","end":"2024-06-03T10:42:19.237741Z","steps":["trace[1755845588] 'agreement among raft nodes before linearized reading'  (duration: 167.883806ms)"],"step_count":1}
	{"level":"warn","ts":"2024-06-03T10:42:19.237314Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"192.346201ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" ","response":"range_response_count:1 size:1113"}
	{"level":"info","ts":"2024-06-03T10:42:19.23783Z","caller":"traceutil/trace.go:171","msg":"trace[2031929711] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:1322; }","duration":"192.880008ms","start":"2024-06-03T10:42:19.044942Z","end":"2024-06-03T10:42:19.237822Z","steps":["trace[2031929711] 'agreement among raft nodes before linearized reading'  (duration: 192.331536ms)"],"step_count":1}
	{"level":"warn","ts":"2024-06-03T10:42:19.237452Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"201.967805ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/\" range_end:\"/registry/pods/kube-system0\" ","response":"range_response_count:19 size:88197"}
	{"level":"info","ts":"2024-06-03T10:42:19.237912Z","caller":"traceutil/trace.go:171","msg":"trace[995480990] range","detail":"{range_begin:/registry/pods/kube-system/; range_end:/registry/pods/kube-system0; response_count:19; response_revision:1322; }","duration":"202.476713ms","start":"2024-06-03T10:42:19.03543Z","end":"2024-06-03T10:42:19.237907Z","steps":["trace[995480990] 'agreement among raft nodes before linearized reading'  (duration: 201.893407ms)"],"step_count":1}
	{"level":"info","ts":"2024-06-03T10:43:01.609591Z","caller":"traceutil/trace.go:171","msg":"trace[1466340048] transaction","detail":"{read_only:false; response_revision:1582; number_of_response:1; }","duration":"100.128518ms","start":"2024-06-03T10:43:01.50939Z","end":"2024-06-03T10:43:01.609518Z","steps":["trace[1466340048] 'process raft request'  (duration: 100.001045ms)"],"step_count":1}
	{"level":"info","ts":"2024-06-03T10:43:04.376641Z","caller":"traceutil/trace.go:171","msg":"trace[30401139] linearizableReadLoop","detail":"{readStateIndex:1652; appliedIndex:1651; }","duration":"242.924429ms","start":"2024-06-03T10:43:04.133701Z","end":"2024-06-03T10:43:04.376626Z","steps":["trace[30401139] 'read index received'  (duration: 240.969761ms)","trace[30401139] 'applied index is now lower than readState.Index'  (duration: 1.953384ms)"],"step_count":2}
	{"level":"warn","ts":"2024-06-03T10:43:04.376838Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"243.109231ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/default/\" range_end:\"/registry/pods/default0\" ","response":"range_response_count:2 size:6032"}
	{"level":"info","ts":"2024-06-03T10:43:04.376875Z","caller":"traceutil/trace.go:171","msg":"trace[1584292141] range","detail":"{range_begin:/registry/pods/default/; range_end:/registry/pods/default0; response_count:2; response_revision:1587; }","duration":"243.200249ms","start":"2024-06-03T10:43:04.133668Z","end":"2024-06-03T10:43:04.376869Z","steps":["trace[1584292141] 'agreement among raft nodes before linearized reading'  (duration: 243.033556ms)"],"step_count":1}
	{"level":"info","ts":"2024-06-03T10:43:09.785731Z","caller":"traceutil/trace.go:171","msg":"trace[762528521] transaction","detail":"{read_only:false; response_revision:1597; number_of_response:1; }","duration":"143.161071ms","start":"2024-06-03T10:43:09.642553Z","end":"2024-06-03T10:43:09.785714Z","steps":["trace[762528521] 'process raft request'  (duration: 143.0512ms)"],"step_count":1}
	
	
	==> gcp-auth [2ca4f0b5927cee02233231241f88745c5e55ce32bb447642834460dc9dc4ddd3] <==
	2024/06/03 10:41:20 GCP Auth Webhook started!
	2024/06/03 10:42:12 Ready to marshal response ...
	2024/06/03 10:42:12 Ready to write response ...
	2024/06/03 10:42:12 Ready to marshal response ...
	2024/06/03 10:42:12 Ready to write response ...
	2024/06/03 10:42:12 Ready to marshal response ...
	2024/06/03 10:42:12 Ready to write response ...
	2024/06/03 10:42:17 Ready to marshal response ...
	2024/06/03 10:42:17 Ready to write response ...
	2024/06/03 10:42:22 Ready to marshal response ...
	2024/06/03 10:42:22 Ready to write response ...
	2024/06/03 10:42:26 Ready to marshal response ...
	2024/06/03 10:42:26 Ready to write response ...
	2024/06/03 10:42:37 Ready to marshal response ...
	2024/06/03 10:42:37 Ready to write response ...
	2024/06/03 10:42:37 Ready to marshal response ...
	2024/06/03 10:42:37 Ready to write response ...
	2024/06/03 10:42:50 Ready to marshal response ...
	2024/06/03 10:42:50 Ready to write response ...
	2024/06/03 10:42:57 Ready to marshal response ...
	2024/06/03 10:42:57 Ready to write response ...
	2024/06/03 10:43:21 Ready to marshal response ...
	2024/06/03 10:43:21 Ready to write response ...
	2024/06/03 10:44:46 Ready to marshal response ...
	2024/06/03 10:44:46 Ready to write response ...
	
	
	==> kernel <==
	 10:44:57 up 5 min,  0 users,  load average: 1.22, 1.46, 0.77
	Linux addons-926744 5.10.207 #1 SMP Wed May 22 22:17:16 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [d1b4710df7b696f0a0182430163fe8bc85b767ca0338a1f340a6c3676f9c4b5d] <==
	E0603 10:41:59.335492       1 available_controller.go:460] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.108.152.243:443/apis/metrics.k8s.io/v1beta1: Get "https://10.108.152.243:443/apis/metrics.k8s.io/v1beta1": dial tcp 10.108.152.243:443: connect: connection refused
	I0603 10:41:59.399709       1 handler.go:286] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0603 10:42:12.833769       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.102.220.86"}
	I0603 10:42:19.239008       1 trace.go:236] Trace[1772787794]: "GuaranteedUpdate etcd3" audit-id:,key:/masterleases/192.168.39.188,type:*v1.Endpoints,resource:apiServerIPInfo (03-Jun-2024 10:42:18.727) (total time: 511ms):
	Trace[1772787794]: ---"Txn call completed" 482ms (10:42:19.238)
	Trace[1772787794]: [511.825955ms] [511.825955ms] END
	I0603 10:42:26.202443       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I0603 10:42:26.467009       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.96.99.57"}
	I0603 10:42:32.281625       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W0603 10:42:33.315245       1 cacher.go:168] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I0603 10:43:12.196124       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I0603 10:43:38.252881       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0603 10:43:38.252939       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0603 10:43:38.278762       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0603 10:43:38.278824       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0603 10:43:38.288219       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0603 10:43:38.288544       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0603 10:43:38.302199       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0603 10:43:38.302983       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0603 10:43:38.324138       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0603 10:43:38.324181       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0603 10:43:39.301750       1 cacher.go:168] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0603 10:43:39.324780       1 cacher.go:168] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W0603 10:43:39.328496       1 cacher.go:168] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I0603 10:44:46.434734       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.97.179.199"}
	
	
	==> kube-controller-manager [5b548a14c5e6442eac3e36a22be12380807f80f6e2c4c320c60a68b4d7002d5a] <==
	W0603 10:43:58.621971       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0603 10:43:58.622056       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0603 10:44:02.413696       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0603 10:44:02.413833       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0603 10:44:14.547124       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0603 10:44:14.547275       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0603 10:44:15.258837       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0603 10:44:15.258933       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0603 10:44:20.685939       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0603 10:44:20.686364       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0603 10:44:41.247578       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0603 10:44:41.247815       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0603 10:44:46.277345       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-86c47465fc" duration="33.268669ms"
	I0603 10:44:46.304218       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-86c47465fc" duration="26.472745ms"
	I0603 10:44:46.317793       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-86c47465fc" duration="13.399593ms"
	I0603 10:44:46.317978       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-86c47465fc" duration="32.089µs"
	W0603 10:44:48.623970       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0603 10:44:48.624051       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0603 10:44:49.271439       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-768f948f8f" duration="4.264µs"
	I0603 10:44:49.271865       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="ingress-nginx/ingress-nginx-admission-create"
	I0603 10:44:49.305396       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="ingress-nginx/ingress-nginx-admission-patch"
	I0603 10:44:50.243916       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-86c47465fc" duration="11.620773ms"
	I0603 10:44:50.244589       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-86c47465fc" duration="27.428µs"
	W0603 10:44:56.105536       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0603 10:44:56.105636       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	
	
	==> kube-proxy [3e2009a9b8f4ff5b5edff04f22d53bb8c740dd0efe3373e23157d24615e24e56] <==
	I0603 10:39:57.159594       1 server_linux.go:69] "Using iptables proxy"
	I0603 10:39:57.218281       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.188"]
	I0603 10:39:57.342906       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0603 10:39:57.342963       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0603 10:39:57.342982       1 server_linux.go:165] "Using iptables Proxier"
	I0603 10:39:57.346598       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0603 10:39:57.346751       1 server.go:872] "Version info" version="v1.30.1"
	I0603 10:39:57.346782       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0603 10:39:57.348476       1 config.go:192] "Starting service config controller"
	I0603 10:39:57.348510       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0603 10:39:57.348529       1 config.go:101] "Starting endpoint slice config controller"
	I0603 10:39:57.348533       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0603 10:39:57.348925       1 config.go:319] "Starting node config controller"
	I0603 10:39:57.348949       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0603 10:39:57.449606       1 shared_informer.go:320] Caches are synced for node config
	I0603 10:39:57.449650       1 shared_informer.go:320] Caches are synced for service config
	I0603 10:39:57.449676       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [b62b083e9d8cd1f295f9048fe1ad2f9efbcbb77ae70ee9e952bc18d63662d708] <==
	W0603 10:39:37.138101       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0603 10:39:37.138131       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0603 10:39:37.970105       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0603 10:39:37.970216       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0603 10:39:37.989275       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0603 10:39:37.989368       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0603 10:39:38.013775       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0603 10:39:38.013860       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0603 10:39:38.045268       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0603 10:39:38.045425       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0603 10:39:38.057787       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0603 10:39:38.057830       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0603 10:39:38.211494       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0603 10:39:38.211539       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0603 10:39:38.233824       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0603 10:39:38.233848       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0603 10:39:38.235427       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0603 10:39:38.235464       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0603 10:39:38.314538       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0603 10:39:38.314656       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0603 10:39:38.419256       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0603 10:39:38.419305       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0603 10:39:38.431515       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0603 10:39:38.431560       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	I0603 10:39:41.011852       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jun 03 10:44:46 addons-926744 kubelet[1270]: I0603 10:44:46.269295    1270 memory_manager.go:354] "RemoveStaleState removing state" podUID="6f6ae728-2676-48ad-a8bb-c277fafb0fc5" containerName="csi-attacher"
	Jun 03 10:44:46 addons-926744 kubelet[1270]: I0603 10:44:46.269302    1270 memory_manager.go:354] "RemoveStaleState removing state" podUID="c7e8ebfd-6ec0-46ce-9c28-04b41a1fb4be" containerName="task-pv-container"
	Jun 03 10:44:46 addons-926744 kubelet[1270]: I0603 10:44:46.360538    1270 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hl82f\" (UniqueName: \"kubernetes.io/projected/3832537b-81cc-4b24-a14b-af5ebcdbf83d-kube-api-access-hl82f\") pod \"hello-world-app-86c47465fc-ksqv6\" (UID: \"3832537b-81cc-4b24-a14b-af5ebcdbf83d\") " pod="default/hello-world-app-86c47465fc-ksqv6"
	Jun 03 10:44:46 addons-926744 kubelet[1270]: I0603 10:44:46.360896    1270 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/3832537b-81cc-4b24-a14b-af5ebcdbf83d-gcp-creds\") pod \"hello-world-app-86c47465fc-ksqv6\" (UID: \"3832537b-81cc-4b24-a14b-af5ebcdbf83d\") " pod="default/hello-world-app-86c47465fc-ksqv6"
	Jun 03 10:44:47 addons-926744 kubelet[1270]: I0603 10:44:47.469870    1270 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jdbf7\" (UniqueName: \"kubernetes.io/projected/b2df4538-5f55-4952-9579-2cf3d39182c2-kube-api-access-jdbf7\") pod \"b2df4538-5f55-4952-9579-2cf3d39182c2\" (UID: \"b2df4538-5f55-4952-9579-2cf3d39182c2\") "
	Jun 03 10:44:47 addons-926744 kubelet[1270]: I0603 10:44:47.472295    1270 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b2df4538-5f55-4952-9579-2cf3d39182c2-kube-api-access-jdbf7" (OuterVolumeSpecName: "kube-api-access-jdbf7") pod "b2df4538-5f55-4952-9579-2cf3d39182c2" (UID: "b2df4538-5f55-4952-9579-2cf3d39182c2"). InnerVolumeSpecName "kube-api-access-jdbf7". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Jun 03 10:44:47 addons-926744 kubelet[1270]: I0603 10:44:47.570967    1270 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-jdbf7\" (UniqueName: \"kubernetes.io/projected/b2df4538-5f55-4952-9579-2cf3d39182c2-kube-api-access-jdbf7\") on node \"addons-926744\" DevicePath \"\""
	Jun 03 10:44:48 addons-926744 kubelet[1270]: I0603 10:44:48.207930    1270 scope.go:117] "RemoveContainer" containerID="f901fcc3139ff7cff8654367307386af58cc38fc6ee152eec6adcbde51048260"
	Jun 03 10:44:48 addons-926744 kubelet[1270]: I0603 10:44:48.242604    1270 scope.go:117] "RemoveContainer" containerID="f901fcc3139ff7cff8654367307386af58cc38fc6ee152eec6adcbde51048260"
	Jun 03 10:44:48 addons-926744 kubelet[1270]: E0603 10:44:48.243607    1270 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f901fcc3139ff7cff8654367307386af58cc38fc6ee152eec6adcbde51048260\": container with ID starting with f901fcc3139ff7cff8654367307386af58cc38fc6ee152eec6adcbde51048260 not found: ID does not exist" containerID="f901fcc3139ff7cff8654367307386af58cc38fc6ee152eec6adcbde51048260"
	Jun 03 10:44:48 addons-926744 kubelet[1270]: I0603 10:44:48.243635    1270 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f901fcc3139ff7cff8654367307386af58cc38fc6ee152eec6adcbde51048260"} err="failed to get container status \"f901fcc3139ff7cff8654367307386af58cc38fc6ee152eec6adcbde51048260\": rpc error: code = NotFound desc = could not find container \"f901fcc3139ff7cff8654367307386af58cc38fc6ee152eec6adcbde51048260\": container with ID starting with f901fcc3139ff7cff8654367307386af58cc38fc6ee152eec6adcbde51048260 not found: ID does not exist"
	Jun 03 10:44:49 addons-926744 kubelet[1270]: I0603 10:44:49.462958    1270 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="10576e58-28de-4816-8ecb-ee3277edc1c9" path="/var/lib/kubelet/pods/10576e58-28de-4816-8ecb-ee3277edc1c9/volumes"
	Jun 03 10:44:49 addons-926744 kubelet[1270]: I0603 10:44:49.463428    1270 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4c1dc452-a980-4790-9ee7-7ab2ed6abd02" path="/var/lib/kubelet/pods/4c1dc452-a980-4790-9ee7-7ab2ed6abd02/volumes"
	Jun 03 10:44:49 addons-926744 kubelet[1270]: I0603 10:44:49.463788    1270 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b2df4538-5f55-4952-9579-2cf3d39182c2" path="/var/lib/kubelet/pods/b2df4538-5f55-4952-9579-2cf3d39182c2/volumes"
	Jun 03 10:44:52 addons-926744 kubelet[1270]: I0603 10:44:52.610283    1270 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/3a014780-a43d-46a5-9cca-7929e5385a64-webhook-cert\") pod \"3a014780-a43d-46a5-9cca-7929e5385a64\" (UID: \"3a014780-a43d-46a5-9cca-7929e5385a64\") "
	Jun 03 10:44:52 addons-926744 kubelet[1270]: I0603 10:44:52.610319    1270 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gcmz5\" (UniqueName: \"kubernetes.io/projected/3a014780-a43d-46a5-9cca-7929e5385a64-kube-api-access-gcmz5\") pod \"3a014780-a43d-46a5-9cca-7929e5385a64\" (UID: \"3a014780-a43d-46a5-9cca-7929e5385a64\") "
	Jun 03 10:44:52 addons-926744 kubelet[1270]: I0603 10:44:52.613380    1270 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3a014780-a43d-46a5-9cca-7929e5385a64-kube-api-access-gcmz5" (OuterVolumeSpecName: "kube-api-access-gcmz5") pod "3a014780-a43d-46a5-9cca-7929e5385a64" (UID: "3a014780-a43d-46a5-9cca-7929e5385a64"). InnerVolumeSpecName "kube-api-access-gcmz5". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Jun 03 10:44:52 addons-926744 kubelet[1270]: I0603 10:44:52.614323    1270 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3a014780-a43d-46a5-9cca-7929e5385a64-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "3a014780-a43d-46a5-9cca-7929e5385a64" (UID: "3a014780-a43d-46a5-9cca-7929e5385a64"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Jun 03 10:44:52 addons-926744 kubelet[1270]: I0603 10:44:52.711111    1270 reconciler_common.go:289] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/3a014780-a43d-46a5-9cca-7929e5385a64-webhook-cert\") on node \"addons-926744\" DevicePath \"\""
	Jun 03 10:44:52 addons-926744 kubelet[1270]: I0603 10:44:52.711143    1270 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-gcmz5\" (UniqueName: \"kubernetes.io/projected/3a014780-a43d-46a5-9cca-7929e5385a64-kube-api-access-gcmz5\") on node \"addons-926744\" DevicePath \"\""
	Jun 03 10:44:53 addons-926744 kubelet[1270]: I0603 10:44:53.236460    1270 scope.go:117] "RemoveContainer" containerID="35345e275f0ae7e745249f98d864fd521603e3515e80b3dec175f9e65303923e"
	Jun 03 10:44:53 addons-926744 kubelet[1270]: I0603 10:44:53.255947    1270 scope.go:117] "RemoveContainer" containerID="35345e275f0ae7e745249f98d864fd521603e3515e80b3dec175f9e65303923e"
	Jun 03 10:44:53 addons-926744 kubelet[1270]: E0603 10:44:53.256310    1270 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"35345e275f0ae7e745249f98d864fd521603e3515e80b3dec175f9e65303923e\": container with ID starting with 35345e275f0ae7e745249f98d864fd521603e3515e80b3dec175f9e65303923e not found: ID does not exist" containerID="35345e275f0ae7e745249f98d864fd521603e3515e80b3dec175f9e65303923e"
	Jun 03 10:44:53 addons-926744 kubelet[1270]: I0603 10:44:53.256401    1270 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"35345e275f0ae7e745249f98d864fd521603e3515e80b3dec175f9e65303923e"} err="failed to get container status \"35345e275f0ae7e745249f98d864fd521603e3515e80b3dec175f9e65303923e\": rpc error: code = NotFound desc = could not find container \"35345e275f0ae7e745249f98d864fd521603e3515e80b3dec175f9e65303923e\": container with ID starting with 35345e275f0ae7e745249f98d864fd521603e3515e80b3dec175f9e65303923e not found: ID does not exist"
	Jun 03 10:44:53 addons-926744 kubelet[1270]: I0603 10:44:53.462125    1270 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3a014780-a43d-46a5-9cca-7929e5385a64" path="/var/lib/kubelet/pods/3a014780-a43d-46a5-9cca-7929e5385a64/volumes"
	
	
	==> storage-provisioner [f9e5a5b781a69a3b32df93f84eb0fc18139e277d5ad479870e300572b5f172bf] <==
	I0603 10:40:00.886486       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0603 10:40:00.957891       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0603 10:40:00.957949       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0603 10:40:01.007501       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0603 10:40:01.010283       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-926744_be180db3-27f5-4c78-9b94-ecce56b7f69d!
	I0603 10:40:01.011519       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"b4090779-c75e-41e7-abc0-7ccc633724ea", APIVersion:"v1", ResourceVersion:"653", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-926744_be180db3-27f5-4c78-9b94-ecce56b7f69d became leader
	I0603 10:40:01.110809       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-926744_be180db3-27f5-4c78-9b94-ecce56b7f69d!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-926744 -n addons-926744
helpers_test.go:261: (dbg) Run:  kubectl --context addons-926744 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestAddons/parallel/Ingress FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/Ingress (152.29s)

                                                
                                    
x
+
TestAddons/parallel/MetricsServer (349.03s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:409: metrics-server stabilized in 2.200095ms
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-c59844bb4-gsd5w" [23f016d5-3265-4e2c-abb2-940fc0259aab] Running
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.005794357s
addons_test.go:417: (dbg) Run:  kubectl --context addons-926744 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-926744 top pods -n kube-system: exit status 1 (69.509271ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-x6wn8, age: 2m30.119584746s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-926744 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-926744 top pods -n kube-system: exit status 1 (99.692135ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-x6wn8, age: 2m33.205271101s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-926744 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-926744 top pods -n kube-system: exit status 1 (71.254984ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-x6wn8, age: 2m39.065575752s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-926744 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-926744 top pods -n kube-system: exit status 1 (75.296005ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-x6wn8, age: 2m44.45006704s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-926744 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-926744 top pods -n kube-system: exit status 1 (67.4656ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-x6wn8, age: 2m57.836385081s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-926744 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-926744 top pods -n kube-system: exit status 1 (66.417926ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-x6wn8, age: 3m14.013527565s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-926744 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-926744 top pods -n kube-system: exit status 1 (65.117362ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-x6wn8, age: 3m41.855865137s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-926744 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-926744 top pods -n kube-system: exit status 1 (58.2482ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-x6wn8, age: 4m2.332538712s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-926744 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-926744 top pods -n kube-system: exit status 1 (61.357ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-x6wn8, age: 5m2.987136427s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-926744 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-926744 top pods -n kube-system: exit status 1 (63.424611ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-x6wn8, age: 5m45.168644626s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-926744 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-926744 top pods -n kube-system: exit status 1 (63.429059ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-x6wn8, age: 6m43.430309605s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-926744 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-926744 top pods -n kube-system: exit status 1 (63.167212ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-x6wn8, age: 8m11.305293546s

                                                
                                                
** /stderr **
addons_test.go:431: failed checking metric server: exit status 1
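The retries above are the test polling "kubectl top pods", which can only succeed once metrics-server is serving the aggregated metrics.k8s.io API. Below is a minimal manual sketch of an equivalent check, not the test's actual code; the context name matches this profile and the metrics-server deployment name is inferred from the pod name earlier in this report:

	# retry "kubectl top pods" for roughly six minutes (24 attempts, 15s apart)
	for i in $(seq 1 24); do
	  kubectl --context addons-926744 top pods -n kube-system && break
	  sleep 15
	done
	# if it never succeeds, inspect the aggregated APIService and metrics-server itself
	kubectl --context addons-926744 get apiservice v1beta1.metrics.k8s.io
	kubectl --context addons-926744 -n kube-system logs deploy/metrics-server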
addons_test.go:434: (dbg) Run:  out/minikube-linux-amd64 -p addons-926744 addons disable metrics-server --alsologtostderr -v=1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-926744 -n addons-926744
helpers_test.go:244: <<< TestAddons/parallel/MetricsServer FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/MetricsServer]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-926744 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-926744 logs -n 25: (1.365727584s)
helpers_test.go:252: TestAddons/parallel/MetricsServer logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| delete  | --all                                                                                       | minikube             | jenkins | v1.33.1 | 03 Jun 24 10:38 UTC | 03 Jun 24 10:38 UTC |
	| delete  | -p download-only-238243                                                                     | download-only-238243 | jenkins | v1.33.1 | 03 Jun 24 10:38 UTC | 03 Jun 24 10:38 UTC |
	| delete  | -p download-only-730853                                                                     | download-only-730853 | jenkins | v1.33.1 | 03 Jun 24 10:38 UTC | 03 Jun 24 10:38 UTC |
	| delete  | -p download-only-238243                                                                     | download-only-238243 | jenkins | v1.33.1 | 03 Jun 24 10:38 UTC | 03 Jun 24 10:39 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-373654 | jenkins | v1.33.1 | 03 Jun 24 10:39 UTC |                     |
	|         | binary-mirror-373654                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                      |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                      |         |         |                     |                     |
	|         | http://127.0.0.1:46559                                                                      |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-373654                                                                     | binary-mirror-373654 | jenkins | v1.33.1 | 03 Jun 24 10:39 UTC | 03 Jun 24 10:39 UTC |
	| addons  | enable dashboard -p                                                                         | addons-926744        | jenkins | v1.33.1 | 03 Jun 24 10:39 UTC |                     |
	|         | addons-926744                                                                               |                      |         |         |                     |                     |
	| addons  | disable dashboard -p                                                                        | addons-926744        | jenkins | v1.33.1 | 03 Jun 24 10:39 UTC |                     |
	|         | addons-926744                                                                               |                      |         |         |                     |                     |
	| start   | -p addons-926744 --wait=true                                                                | addons-926744        | jenkins | v1.33.1 | 03 Jun 24 10:39 UTC | 03 Jun 24 10:42 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                      |         |         |                     |                     |
	|         | --addons=registry                                                                           |                      |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                      |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                      |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                      |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                                              |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                      |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                      |         |         |                     |                     |
	|         | --addons=helm-tiller                                                                        |                      |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-926744        | jenkins | v1.33.1 | 03 Jun 24 10:42 UTC | 03 Jun 24 10:42 UTC |
	|         | -p addons-926744                                                                            |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-926744 addons disable                                                                | addons-926744        | jenkins | v1.33.1 | 03 Jun 24 10:42 UTC | 03 Jun 24 10:42 UTC |
	|         | helm-tiller --alsologtostderr                                                               |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| ip      | addons-926744 ip                                                                            | addons-926744        | jenkins | v1.33.1 | 03 Jun 24 10:42 UTC | 03 Jun 24 10:42 UTC |
	| addons  | addons-926744 addons disable                                                                | addons-926744        | jenkins | v1.33.1 | 03 Jun 24 10:42 UTC | 03 Jun 24 10:42 UTC |
	|         | registry --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p                                                                 | addons-926744        | jenkins | v1.33.1 | 03 Jun 24 10:42 UTC | 03 Jun 24 10:42 UTC |
	|         | addons-926744                                                                               |                      |         |         |                     |                     |
	| addons  | disable cloud-spanner -p                                                                    | addons-926744        | jenkins | v1.33.1 | 03 Jun 24 10:42 UTC | 03 Jun 24 10:42 UTC |
	|         | addons-926744                                                                               |                      |         |         |                     |                     |
	| ssh     | addons-926744 ssh curl -s                                                                   | addons-926744        | jenkins | v1.33.1 | 03 Jun 24 10:42 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                      |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                      |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-926744        | jenkins | v1.33.1 | 03 Jun 24 10:42 UTC | 03 Jun 24 10:42 UTC |
	|         | -p addons-926744                                                                            |                      |         |         |                     |                     |
	| ssh     | addons-926744 ssh cat                                                                       | addons-926744        | jenkins | v1.33.1 | 03 Jun 24 10:42 UTC | 03 Jun 24 10:42 UTC |
	|         | /opt/local-path-provisioner/pvc-c91d9397-ba00-4758-81d9-86e4e7e60cde_default_test-pvc/file1 |                      |         |         |                     |                     |
	| addons  | addons-926744 addons disable                                                                | addons-926744        | jenkins | v1.33.1 | 03 Jun 24 10:42 UTC | 03 Jun 24 10:42 UTC |
	|         | storage-provisioner-rancher                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-926744 addons                                                                        | addons-926744        | jenkins | v1.33.1 | 03 Jun 24 10:43 UTC | 03 Jun 24 10:43 UTC |
	|         | disable csi-hostpath-driver                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-926744 addons                                                                        | addons-926744        | jenkins | v1.33.1 | 03 Jun 24 10:43 UTC | 03 Jun 24 10:43 UTC |
	|         | disable volumesnapshots                                                                     |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ip      | addons-926744 ip                                                                            | addons-926744        | jenkins | v1.33.1 | 03 Jun 24 10:44 UTC | 03 Jun 24 10:44 UTC |
	| addons  | addons-926744 addons disable                                                                | addons-926744        | jenkins | v1.33.1 | 03 Jun 24 10:44 UTC | 03 Jun 24 10:44 UTC |
	|         | ingress-dns --alsologtostderr                                                               |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-926744 addons disable                                                                | addons-926744        | jenkins | v1.33.1 | 03 Jun 24 10:44 UTC | 03 Jun 24 10:44 UTC |
	|         | ingress --alsologtostderr -v=1                                                              |                      |         |         |                     |                     |
	| addons  | addons-926744 addons                                                                        | addons-926744        | jenkins | v1.33.1 | 03 Jun 24 10:48 UTC | 03 Jun 24 10:48 UTC |
	|         | disable metrics-server                                                                      |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/06/03 10:39:00
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0603 10:39:00.680880   15688 out.go:291] Setting OutFile to fd 1 ...
	I0603 10:39:00.681090   15688 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0603 10:39:00.681098   15688 out.go:304] Setting ErrFile to fd 2...
	I0603 10:39:00.681102   15688 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0603 10:39:00.681270   15688 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19008-7755/.minikube/bin
	I0603 10:39:00.681788   15688 out.go:298] Setting JSON to false
	I0603 10:39:00.682562   15688 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":1286,"bootTime":1717409855,"procs":172,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1060-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0603 10:39:00.682614   15688 start.go:139] virtualization: kvm guest
	I0603 10:39:00.684530   15688 out.go:177] * [addons-926744] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0603 10:39:00.685815   15688 out.go:177]   - MINIKUBE_LOCATION=19008
	I0603 10:39:00.685810   15688 notify.go:220] Checking for updates...
	I0603 10:39:00.687177   15688 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0603 10:39:00.688398   15688 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19008-7755/kubeconfig
	I0603 10:39:00.689627   15688 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19008-7755/.minikube
	I0603 10:39:00.691446   15688 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0603 10:39:00.692818   15688 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0603 10:39:00.694278   15688 driver.go:392] Setting default libvirt URI to qemu:///system
	I0603 10:39:00.724066   15688 out.go:177] * Using the kvm2 driver based on user configuration
	I0603 10:39:00.725240   15688 start.go:297] selected driver: kvm2
	I0603 10:39:00.725261   15688 start.go:901] validating driver "kvm2" against <nil>
	I0603 10:39:00.725275   15688 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0603 10:39:00.725948   15688 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0603 10:39:00.726022   15688 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19008-7755/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0603 10:39:00.739965   15688 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0603 10:39:00.740003   15688 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0603 10:39:00.740180   15688 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0603 10:39:00.740228   15688 cni.go:84] Creating CNI manager for ""
	I0603 10:39:00.740239   15688 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0603 10:39:00.740250   15688 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0603 10:39:00.740291   15688 start.go:340] cluster config:
	{Name:addons-926744 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:addons-926744 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:c
rio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAg
entPID:0 GPUs: AutoPauseInterval:1m0s}
	I0603 10:39:00.740376   15688 iso.go:125] acquiring lock: {Name:mkdc8e745fc6a0fd8e502f6ad2510510ae9abf27 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0603 10:39:00.742007   15688 out.go:177] * Starting "addons-926744" primary control-plane node in "addons-926744" cluster
	I0603 10:39:00.743216   15688 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime crio
	I0603 10:39:00.743243   15688 preload.go:147] Found local preload: /home/jenkins/minikube-integration/19008-7755/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4
	I0603 10:39:00.743250   15688 cache.go:56] Caching tarball of preloaded images
	I0603 10:39:00.743338   15688 preload.go:173] Found /home/jenkins/minikube-integration/19008-7755/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0603 10:39:00.743348   15688 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on crio
	I0603 10:39:00.743606   15688 profile.go:143] Saving config to /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/addons-926744/config.json ...
	I0603 10:39:00.743624   15688 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/addons-926744/config.json: {Name:mk9141239b37afe7f92d08173cacd42a85c219d7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
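
	The cluster config dumped a few lines above is persisted verbatim into that profile config.json, so it can be inspected offline when a run fails; a minimal sketch, assuming jq is installed on the Jenkins host and using the field names exactly as they appear in the dump:

	    jq '.KubernetesConfig.KubernetesVersion, .KubernetesConfig.ContainerRuntime, .Memory, .CPUs' \
	      /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/addons-926744/config.json
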
	I0603 10:39:00.743740   15688 start.go:360] acquireMachinesLock for addons-926744: {Name:mk7389fd163f37e28f7d4d842bab151f7e27bc7c Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0603 10:39:00.743778   15688 start.go:364] duration metric: took 26.149µs to acquireMachinesLock for "addons-926744"
	I0603 10:39:00.743793   15688 start.go:93] Provisioning new machine with config: &{Name:addons-926744 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.30.1 ClusterName:addons-926744 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 M
ountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0603 10:39:00.743843   15688 start.go:125] createHost starting for "" (driver="kvm2")
	I0603 10:39:00.745351   15688 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0603 10:39:00.745461   15688 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 10:39:00.745501   15688 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 10:39:00.758808   15688 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41691
	I0603 10:39:00.759203   15688 main.go:141] libmachine: () Calling .GetVersion
	I0603 10:39:00.759710   15688 main.go:141] libmachine: Using API Version  1
	I0603 10:39:00.759729   15688 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 10:39:00.760030   15688 main.go:141] libmachine: () Calling .GetMachineName
	I0603 10:39:00.760257   15688 main.go:141] libmachine: (addons-926744) Calling .GetMachineName
	I0603 10:39:00.760393   15688 main.go:141] libmachine: (addons-926744) Calling .DriverName
	I0603 10:39:00.760554   15688 start.go:159] libmachine.API.Create for "addons-926744" (driver="kvm2")
	I0603 10:39:00.760577   15688 client.go:168] LocalClient.Create starting
	I0603 10:39:00.760607   15688 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/19008-7755/.minikube/certs/ca.pem
	I0603 10:39:00.930483   15688 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/19008-7755/.minikube/certs/cert.pem
	I0603 10:39:01.301606   15688 main.go:141] libmachine: Running pre-create checks...
	I0603 10:39:01.301633   15688 main.go:141] libmachine: (addons-926744) Calling .PreCreateCheck
	I0603 10:39:01.302136   15688 main.go:141] libmachine: (addons-926744) Calling .GetConfigRaw
	I0603 10:39:01.302543   15688 main.go:141] libmachine: Creating machine...
	I0603 10:39:01.302557   15688 main.go:141] libmachine: (addons-926744) Calling .Create
	I0603 10:39:01.302708   15688 main.go:141] libmachine: (addons-926744) Creating KVM machine...
	I0603 10:39:01.303852   15688 main.go:141] libmachine: (addons-926744) DBG | found existing default KVM network
	I0603 10:39:01.304537   15688 main.go:141] libmachine: (addons-926744) DBG | I0603 10:39:01.304381   15710 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00012f990}
	I0603 10:39:01.304564   15688 main.go:141] libmachine: (addons-926744) DBG | created network xml: 
	I0603 10:39:01.304585   15688 main.go:141] libmachine: (addons-926744) DBG | <network>
	I0603 10:39:01.304594   15688 main.go:141] libmachine: (addons-926744) DBG |   <name>mk-addons-926744</name>
	I0603 10:39:01.304607   15688 main.go:141] libmachine: (addons-926744) DBG |   <dns enable='no'/>
	I0603 10:39:01.304617   15688 main.go:141] libmachine: (addons-926744) DBG |   
	I0603 10:39:01.304628   15688 main.go:141] libmachine: (addons-926744) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0603 10:39:01.304639   15688 main.go:141] libmachine: (addons-926744) DBG |     <dhcp>
	I0603 10:39:01.304691   15688 main.go:141] libmachine: (addons-926744) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0603 10:39:01.304717   15688 main.go:141] libmachine: (addons-926744) DBG |     </dhcp>
	I0603 10:39:01.304732   15688 main.go:141] libmachine: (addons-926744) DBG |   </ip>
	I0603 10:39:01.304747   15688 main.go:141] libmachine: (addons-926744) DBG |   
	I0603 10:39:01.304773   15688 main.go:141] libmachine: (addons-926744) DBG | </network>
	I0603 10:39:01.304795   15688 main.go:141] libmachine: (addons-926744) DBG | 
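
	The network XML in the debug lines above is what the kvm2 driver hands to libvirt before it reports "trying to create private KVM network". For reference, the manual equivalent with stock virsh would look roughly like this; a sketch only, assuming the same XML were saved to a file named mk-addons-926744.xml:

	    virsh net-define mk-addons-926744.xml   # register the persistent network definition
	    virsh net-start mk-addons-926744        # bring the bridge and its dnsmasq instance up
	    virsh net-autostart mk-addons-926744    # start it automatically when libvirtd starts
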
	I0603 10:39:01.309683   15688 main.go:141] libmachine: (addons-926744) DBG | trying to create private KVM network mk-addons-926744 192.168.39.0/24...
	I0603 10:39:01.370727   15688 main.go:141] libmachine: (addons-926744) Setting up store path in /home/jenkins/minikube-integration/19008-7755/.minikube/machines/addons-926744 ...
	I0603 10:39:01.370755   15688 main.go:141] libmachine: (addons-926744) Building disk image from file:///home/jenkins/minikube-integration/19008-7755/.minikube/cache/iso/amd64/minikube-v1.33.1-1716398070-18934-amd64.iso
	I0603 10:39:01.370766   15688 main.go:141] libmachine: (addons-926744) DBG | private KVM network mk-addons-926744 192.168.39.0/24 created
	I0603 10:39:01.370784   15688 main.go:141] libmachine: (addons-926744) DBG | I0603 10:39:01.370668   15710 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19008-7755/.minikube
	I0603 10:39:01.370926   15688 main.go:141] libmachine: (addons-926744) Downloading /home/jenkins/minikube-integration/19008-7755/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19008-7755/.minikube/cache/iso/amd64/minikube-v1.33.1-1716398070-18934-amd64.iso...
	I0603 10:39:01.615063   15688 main.go:141] libmachine: (addons-926744) DBG | I0603 10:39:01.614922   15710 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19008-7755/.minikube/machines/addons-926744/id_rsa...
	I0603 10:39:01.689453   15688 main.go:141] libmachine: (addons-926744) DBG | I0603 10:39:01.689334   15710 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19008-7755/.minikube/machines/addons-926744/addons-926744.rawdisk...
	I0603 10:39:01.689484   15688 main.go:141] libmachine: (addons-926744) DBG | Writing magic tar header
	I0603 10:39:01.689522   15688 main.go:141] libmachine: (addons-926744) DBG | Writing SSH key tar header
	I0603 10:39:01.689544   15688 main.go:141] libmachine: (addons-926744) DBG | I0603 10:39:01.689462   15710 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19008-7755/.minikube/machines/addons-926744 ...
	I0603 10:39:01.689569   15688 main.go:141] libmachine: (addons-926744) Setting executable bit set on /home/jenkins/minikube-integration/19008-7755/.minikube/machines/addons-926744 (perms=drwx------)
	I0603 10:39:01.689578   15688 main.go:141] libmachine: (addons-926744) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19008-7755/.minikube/machines/addons-926744
	I0603 10:39:01.689585   15688 main.go:141] libmachine: (addons-926744) Setting executable bit set on /home/jenkins/minikube-integration/19008-7755/.minikube/machines (perms=drwxr-xr-x)
	I0603 10:39:01.689592   15688 main.go:141] libmachine: (addons-926744) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19008-7755/.minikube/machines
	I0603 10:39:01.689602   15688 main.go:141] libmachine: (addons-926744) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19008-7755/.minikube
	I0603 10:39:01.689607   15688 main.go:141] libmachine: (addons-926744) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19008-7755
	I0603 10:39:01.689616   15688 main.go:141] libmachine: (addons-926744) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0603 10:39:01.689620   15688 main.go:141] libmachine: (addons-926744) DBG | Checking permissions on dir: /home/jenkins
	I0603 10:39:01.689632   15688 main.go:141] libmachine: (addons-926744) Setting executable bit set on /home/jenkins/minikube-integration/19008-7755/.minikube (perms=drwxr-xr-x)
	I0603 10:39:01.689641   15688 main.go:141] libmachine: (addons-926744) DBG | Checking permissions on dir: /home
	I0603 10:39:01.689654   15688 main.go:141] libmachine: (addons-926744) DBG | Skipping /home - not owner
	I0603 10:39:01.689666   15688 main.go:141] libmachine: (addons-926744) Setting executable bit set on /home/jenkins/minikube-integration/19008-7755 (perms=drwxrwxr-x)
	I0603 10:39:01.689674   15688 main.go:141] libmachine: (addons-926744) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0603 10:39:01.689679   15688 main.go:141] libmachine: (addons-926744) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0603 10:39:01.689686   15688 main.go:141] libmachine: (addons-926744) Creating domain...
	I0603 10:39:01.690709   15688 main.go:141] libmachine: (addons-926744) define libvirt domain using xml: 
	I0603 10:39:01.690723   15688 main.go:141] libmachine: (addons-926744) <domain type='kvm'>
	I0603 10:39:01.690729   15688 main.go:141] libmachine: (addons-926744)   <name>addons-926744</name>
	I0603 10:39:01.690735   15688 main.go:141] libmachine: (addons-926744)   <memory unit='MiB'>4000</memory>
	I0603 10:39:01.690745   15688 main.go:141] libmachine: (addons-926744)   <vcpu>2</vcpu>
	I0603 10:39:01.690756   15688 main.go:141] libmachine: (addons-926744)   <features>
	I0603 10:39:01.690769   15688 main.go:141] libmachine: (addons-926744)     <acpi/>
	I0603 10:39:01.690779   15688 main.go:141] libmachine: (addons-926744)     <apic/>
	I0603 10:39:01.690790   15688 main.go:141] libmachine: (addons-926744)     <pae/>
	I0603 10:39:01.690800   15688 main.go:141] libmachine: (addons-926744)     
	I0603 10:39:01.690812   15688 main.go:141] libmachine: (addons-926744)   </features>
	I0603 10:39:01.690823   15688 main.go:141] libmachine: (addons-926744)   <cpu mode='host-passthrough'>
	I0603 10:39:01.690847   15688 main.go:141] libmachine: (addons-926744)   
	I0603 10:39:01.690873   15688 main.go:141] libmachine: (addons-926744)   </cpu>
	I0603 10:39:01.690886   15688 main.go:141] libmachine: (addons-926744)   <os>
	I0603 10:39:01.690895   15688 main.go:141] libmachine: (addons-926744)     <type>hvm</type>
	I0603 10:39:01.690906   15688 main.go:141] libmachine: (addons-926744)     <boot dev='cdrom'/>
	I0603 10:39:01.690916   15688 main.go:141] libmachine: (addons-926744)     <boot dev='hd'/>
	I0603 10:39:01.690929   15688 main.go:141] libmachine: (addons-926744)     <bootmenu enable='no'/>
	I0603 10:39:01.690939   15688 main.go:141] libmachine: (addons-926744)   </os>
	I0603 10:39:01.690947   15688 main.go:141] libmachine: (addons-926744)   <devices>
	I0603 10:39:01.690955   15688 main.go:141] libmachine: (addons-926744)     <disk type='file' device='cdrom'>
	I0603 10:39:01.690968   15688 main.go:141] libmachine: (addons-926744)       <source file='/home/jenkins/minikube-integration/19008-7755/.minikube/machines/addons-926744/boot2docker.iso'/>
	I0603 10:39:01.690980   15688 main.go:141] libmachine: (addons-926744)       <target dev='hdc' bus='scsi'/>
	I0603 10:39:01.690991   15688 main.go:141] libmachine: (addons-926744)       <readonly/>
	I0603 10:39:01.691001   15688 main.go:141] libmachine: (addons-926744)     </disk>
	I0603 10:39:01.691027   15688 main.go:141] libmachine: (addons-926744)     <disk type='file' device='disk'>
	I0603 10:39:01.691068   15688 main.go:141] libmachine: (addons-926744)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0603 10:39:01.691088   15688 main.go:141] libmachine: (addons-926744)       <source file='/home/jenkins/minikube-integration/19008-7755/.minikube/machines/addons-926744/addons-926744.rawdisk'/>
	I0603 10:39:01.691105   15688 main.go:141] libmachine: (addons-926744)       <target dev='hda' bus='virtio'/>
	I0603 10:39:01.691119   15688 main.go:141] libmachine: (addons-926744)     </disk>
	I0603 10:39:01.691131   15688 main.go:141] libmachine: (addons-926744)     <interface type='network'>
	I0603 10:39:01.691145   15688 main.go:141] libmachine: (addons-926744)       <source network='mk-addons-926744'/>
	I0603 10:39:01.691156   15688 main.go:141] libmachine: (addons-926744)       <model type='virtio'/>
	I0603 10:39:01.691167   15688 main.go:141] libmachine: (addons-926744)     </interface>
	I0603 10:39:01.691178   15688 main.go:141] libmachine: (addons-926744)     <interface type='network'>
	I0603 10:39:01.691198   15688 main.go:141] libmachine: (addons-926744)       <source network='default'/>
	I0603 10:39:01.691213   15688 main.go:141] libmachine: (addons-926744)       <model type='virtio'/>
	I0603 10:39:01.691221   15688 main.go:141] libmachine: (addons-926744)     </interface>
	I0603 10:39:01.691226   15688 main.go:141] libmachine: (addons-926744)     <serial type='pty'>
	I0603 10:39:01.691234   15688 main.go:141] libmachine: (addons-926744)       <target port='0'/>
	I0603 10:39:01.691245   15688 main.go:141] libmachine: (addons-926744)     </serial>
	I0603 10:39:01.691255   15688 main.go:141] libmachine: (addons-926744)     <console type='pty'>
	I0603 10:39:01.691266   15688 main.go:141] libmachine: (addons-926744)       <target type='serial' port='0'/>
	I0603 10:39:01.691278   15688 main.go:141] libmachine: (addons-926744)     </console>
	I0603 10:39:01.691288   15688 main.go:141] libmachine: (addons-926744)     <rng model='virtio'>
	I0603 10:39:01.691302   15688 main.go:141] libmachine: (addons-926744)       <backend model='random'>/dev/random</backend>
	I0603 10:39:01.691315   15688 main.go:141] libmachine: (addons-926744)     </rng>
	I0603 10:39:01.691332   15688 main.go:141] libmachine: (addons-926744)     
	I0603 10:39:01.691348   15688 main.go:141] libmachine: (addons-926744)     
	I0603 10:39:01.691362   15688 main.go:141] libmachine: (addons-926744)   </devices>
	I0603 10:39:01.691373   15688 main.go:141] libmachine: (addons-926744) </domain>
	I0603 10:39:01.691387   15688 main.go:141] libmachine: (addons-926744) 
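
	The domain XML above is likewise passed straight to libvirt ("define libvirt domain using xml"). The hand-rolled equivalent, sketched with plain virsh and assuming the XML were written out to a file named addons-926744.xml:

	    virsh define addons-926744.xml   # create the persistent domain from the XML
	    virsh start addons-926744        # boot it, which the driver does in the "Creating domain..." step
	    virsh dominfo addons-926744      # confirm the state, vCPU count and memory match the request
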
	I0603 10:39:01.696881   15688 main.go:141] libmachine: (addons-926744) DBG | domain addons-926744 has defined MAC address 52:54:00:7a:35:b1 in network default
	I0603 10:39:01.697393   15688 main.go:141] libmachine: (addons-926744) Ensuring networks are active...
	I0603 10:39:01.697413   15688 main.go:141] libmachine: (addons-926744) DBG | domain addons-926744 has defined MAC address 52:54:00:ef:0f:40 in network mk-addons-926744
	I0603 10:39:01.697999   15688 main.go:141] libmachine: (addons-926744) Ensuring network default is active
	I0603 10:39:01.698314   15688 main.go:141] libmachine: (addons-926744) Ensuring network mk-addons-926744 is active
	I0603 10:39:01.698718   15688 main.go:141] libmachine: (addons-926744) Getting domain xml...
	I0603 10:39:01.699324   15688 main.go:141] libmachine: (addons-926744) Creating domain...
	I0603 10:39:03.047775   15688 main.go:141] libmachine: (addons-926744) Waiting to get IP...
	I0603 10:39:03.048591   15688 main.go:141] libmachine: (addons-926744) DBG | domain addons-926744 has defined MAC address 52:54:00:ef:0f:40 in network mk-addons-926744
	I0603 10:39:03.048982   15688 main.go:141] libmachine: (addons-926744) DBG | unable to find current IP address of domain addons-926744 in network mk-addons-926744
	I0603 10:39:03.049012   15688 main.go:141] libmachine: (addons-926744) DBG | I0603 10:39:03.048933   15710 retry.go:31] will retry after 234.406372ms: waiting for machine to come up
	I0603 10:39:03.285437   15688 main.go:141] libmachine: (addons-926744) DBG | domain addons-926744 has defined MAC address 52:54:00:ef:0f:40 in network mk-addons-926744
	I0603 10:39:03.285802   15688 main.go:141] libmachine: (addons-926744) DBG | unable to find current IP address of domain addons-926744 in network mk-addons-926744
	I0603 10:39:03.285830   15688 main.go:141] libmachine: (addons-926744) DBG | I0603 10:39:03.285760   15710 retry.go:31] will retry after 368.775764ms: waiting for machine to come up
	I0603 10:39:03.656294   15688 main.go:141] libmachine: (addons-926744) DBG | domain addons-926744 has defined MAC address 52:54:00:ef:0f:40 in network mk-addons-926744
	I0603 10:39:03.656800   15688 main.go:141] libmachine: (addons-926744) DBG | unable to find current IP address of domain addons-926744 in network mk-addons-926744
	I0603 10:39:03.656831   15688 main.go:141] libmachine: (addons-926744) DBG | I0603 10:39:03.656749   15710 retry.go:31] will retry after 327.819161ms: waiting for machine to come up
	I0603 10:39:03.986447   15688 main.go:141] libmachine: (addons-926744) DBG | domain addons-926744 has defined MAC address 52:54:00:ef:0f:40 in network mk-addons-926744
	I0603 10:39:03.986867   15688 main.go:141] libmachine: (addons-926744) DBG | unable to find current IP address of domain addons-926744 in network mk-addons-926744
	I0603 10:39:03.986904   15688 main.go:141] libmachine: (addons-926744) DBG | I0603 10:39:03.986850   15710 retry.go:31] will retry after 516.803871ms: waiting for machine to come up
	I0603 10:39:04.505163   15688 main.go:141] libmachine: (addons-926744) DBG | domain addons-926744 has defined MAC address 52:54:00:ef:0f:40 in network mk-addons-926744
	I0603 10:39:04.505606   15688 main.go:141] libmachine: (addons-926744) DBG | unable to find current IP address of domain addons-926744 in network mk-addons-926744
	I0603 10:39:04.505644   15688 main.go:141] libmachine: (addons-926744) DBG | I0603 10:39:04.505577   15710 retry.go:31] will retry after 538.847196ms: waiting for machine to come up
	I0603 10:39:05.046513   15688 main.go:141] libmachine: (addons-926744) DBG | domain addons-926744 has defined MAC address 52:54:00:ef:0f:40 in network mk-addons-926744
	I0603 10:39:05.046959   15688 main.go:141] libmachine: (addons-926744) DBG | unable to find current IP address of domain addons-926744 in network mk-addons-926744
	I0603 10:39:05.046978   15688 main.go:141] libmachine: (addons-926744) DBG | I0603 10:39:05.046922   15710 retry.go:31] will retry after 794.327963ms: waiting for machine to come up
	I0603 10:39:05.842621   15688 main.go:141] libmachine: (addons-926744) DBG | domain addons-926744 has defined MAC address 52:54:00:ef:0f:40 in network mk-addons-926744
	I0603 10:39:05.843055   15688 main.go:141] libmachine: (addons-926744) DBG | unable to find current IP address of domain addons-926744 in network mk-addons-926744
	I0603 10:39:05.843226   15688 main.go:141] libmachine: (addons-926744) DBG | I0603 10:39:05.843016   15710 retry.go:31] will retry after 789.369654ms: waiting for machine to come up
	I0603 10:39:06.634041   15688 main.go:141] libmachine: (addons-926744) DBG | domain addons-926744 has defined MAC address 52:54:00:ef:0f:40 in network mk-addons-926744
	I0603 10:39:06.634422   15688 main.go:141] libmachine: (addons-926744) DBG | unable to find current IP address of domain addons-926744 in network mk-addons-926744
	I0603 10:39:06.634449   15688 main.go:141] libmachine: (addons-926744) DBG | I0603 10:39:06.634390   15710 retry.go:31] will retry after 1.140360619s: waiting for machine to come up
	I0603 10:39:07.776668   15688 main.go:141] libmachine: (addons-926744) DBG | domain addons-926744 has defined MAC address 52:54:00:ef:0f:40 in network mk-addons-926744
	I0603 10:39:07.777069   15688 main.go:141] libmachine: (addons-926744) DBG | unable to find current IP address of domain addons-926744 in network mk-addons-926744
	I0603 10:39:07.777100   15688 main.go:141] libmachine: (addons-926744) DBG | I0603 10:39:07.777000   15710 retry.go:31] will retry after 1.192415957s: waiting for machine to come up
	I0603 10:39:08.971405   15688 main.go:141] libmachine: (addons-926744) DBG | domain addons-926744 has defined MAC address 52:54:00:ef:0f:40 in network mk-addons-926744
	I0603 10:39:08.971747   15688 main.go:141] libmachine: (addons-926744) DBG | unable to find current IP address of domain addons-926744 in network mk-addons-926744
	I0603 10:39:08.971780   15688 main.go:141] libmachine: (addons-926744) DBG | I0603 10:39:08.971725   15710 retry.go:31] will retry after 2.110243957s: waiting for machine to come up
	I0603 10:39:11.083591   15688 main.go:141] libmachine: (addons-926744) DBG | domain addons-926744 has defined MAC address 52:54:00:ef:0f:40 in network mk-addons-926744
	I0603 10:39:11.083990   15688 main.go:141] libmachine: (addons-926744) DBG | unable to find current IP address of domain addons-926744 in network mk-addons-926744
	I0603 10:39:11.084020   15688 main.go:141] libmachine: (addons-926744) DBG | I0603 10:39:11.083958   15710 retry.go:31] will retry after 2.197882657s: waiting for machine to come up
	I0603 10:39:13.284444   15688 main.go:141] libmachine: (addons-926744) DBG | domain addons-926744 has defined MAC address 52:54:00:ef:0f:40 in network mk-addons-926744
	I0603 10:39:13.284919   15688 main.go:141] libmachine: (addons-926744) DBG | unable to find current IP address of domain addons-926744 in network mk-addons-926744
	I0603 10:39:13.284947   15688 main.go:141] libmachine: (addons-926744) DBG | I0603 10:39:13.284869   15710 retry.go:31] will retry after 3.328032381s: waiting for machine to come up
	I0603 10:39:16.614700   15688 main.go:141] libmachine: (addons-926744) DBG | domain addons-926744 has defined MAC address 52:54:00:ef:0f:40 in network mk-addons-926744
	I0603 10:39:16.615094   15688 main.go:141] libmachine: (addons-926744) DBG | unable to find current IP address of domain addons-926744 in network mk-addons-926744
	I0603 10:39:16.615116   15688 main.go:141] libmachine: (addons-926744) DBG | I0603 10:39:16.615075   15710 retry.go:31] will retry after 4.426262831s: waiting for machine to come up
	I0603 10:39:21.042222   15688 main.go:141] libmachine: (addons-926744) DBG | domain addons-926744 has defined MAC address 52:54:00:ef:0f:40 in network mk-addons-926744
	I0603 10:39:21.042761   15688 main.go:141] libmachine: (addons-926744) Found IP for machine: 192.168.39.188
	I0603 10:39:21.042779   15688 main.go:141] libmachine: (addons-926744) Reserving static IP address...
	I0603 10:39:21.042788   15688 main.go:141] libmachine: (addons-926744) DBG | domain addons-926744 has current primary IP address 192.168.39.188 and MAC address 52:54:00:ef:0f:40 in network mk-addons-926744
	I0603 10:39:21.043326   15688 main.go:141] libmachine: (addons-926744) DBG | unable to find host DHCP lease matching {name: "addons-926744", mac: "52:54:00:ef:0f:40", ip: "192.168.39.188"} in network mk-addons-926744
	I0603 10:39:21.109987   15688 main.go:141] libmachine: (addons-926744) DBG | Getting to WaitForSSH function...
	I0603 10:39:21.110019   15688 main.go:141] libmachine: (addons-926744) Reserved static IP address: 192.168.39.188
	I0603 10:39:21.110043   15688 main.go:141] libmachine: (addons-926744) Waiting for SSH to be available...
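
	The "host DHCP lease matching" entries in this log come straight from libvirt's lease table for the mk-addons-926744 network, and the same data can be read directly when the "waiting for machine to come up" retries drag on; a minimal check with stock virsh:

	    virsh net-dhcp-leases mk-addons-926744   # the lease for MAC 52:54:00:ef:0f:40 should list 192.168.39.188
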
	I0603 10:39:21.112366   15688 main.go:141] libmachine: (addons-926744) DBG | domain addons-926744 has defined MAC address 52:54:00:ef:0f:40 in network mk-addons-926744
	I0603 10:39:21.112809   15688 main.go:141] libmachine: (addons-926744) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ef:0f:40", ip: ""} in network mk-addons-926744: {Iface:virbr1 ExpiryTime:2024-06-03 11:39:15 +0000 UTC Type:0 Mac:52:54:00:ef:0f:40 Iaid: IPaddr:192.168.39.188 Prefix:24 Hostname:minikube Clientid:01:52:54:00:ef:0f:40}
	I0603 10:39:21.112844   15688 main.go:141] libmachine: (addons-926744) DBG | domain addons-926744 has defined IP address 192.168.39.188 and MAC address 52:54:00:ef:0f:40 in network mk-addons-926744
	I0603 10:39:21.113103   15688 main.go:141] libmachine: (addons-926744) DBG | Using SSH client type: external
	I0603 10:39:21.113126   15688 main.go:141] libmachine: (addons-926744) DBG | Using SSH private key: /home/jenkins/minikube-integration/19008-7755/.minikube/machines/addons-926744/id_rsa (-rw-------)
	I0603 10:39:21.113154   15688 main.go:141] libmachine: (addons-926744) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.188 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19008-7755/.minikube/machines/addons-926744/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0603 10:39:21.113171   15688 main.go:141] libmachine: (addons-926744) DBG | About to run SSH command:
	I0603 10:39:21.113200   15688 main.go:141] libmachine: (addons-926744) DBG | exit 0
	I0603 10:39:21.242959   15688 main.go:141] libmachine: (addons-926744) DBG | SSH cmd err, output: <nil>: 
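
	The external SSH probe above expands to an ordinary ssh invocation; reassembled from the arguments in the log (nothing added beyond the quoting and line breaks), it is roughly:

	    ssh -F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no \
	        -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no \
	        -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null \
	        -o IdentitiesOnly=yes -p 22 \
	        -i /home/jenkins/minikube-integration/19008-7755/.minikube/machines/addons-926744/id_rsa \
	        docker@192.168.39.188 'exit 0'
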
	I0603 10:39:21.243259   15688 main.go:141] libmachine: (addons-926744) KVM machine creation complete!
	I0603 10:39:21.243542   15688 main.go:141] libmachine: (addons-926744) Calling .GetConfigRaw
	I0603 10:39:21.244049   15688 main.go:141] libmachine: (addons-926744) Calling .DriverName
	I0603 10:39:21.244263   15688 main.go:141] libmachine: (addons-926744) Calling .DriverName
	I0603 10:39:21.244427   15688 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0603 10:39:21.244438   15688 main.go:141] libmachine: (addons-926744) Calling .GetState
	I0603 10:39:21.245663   15688 main.go:141] libmachine: Detecting operating system of created instance...
	I0603 10:39:21.245674   15688 main.go:141] libmachine: Waiting for SSH to be available...
	I0603 10:39:21.245680   15688 main.go:141] libmachine: Getting to WaitForSSH function...
	I0603 10:39:21.245688   15688 main.go:141] libmachine: (addons-926744) Calling .GetSSHHostname
	I0603 10:39:21.247729   15688 main.go:141] libmachine: (addons-926744) DBG | domain addons-926744 has defined MAC address 52:54:00:ef:0f:40 in network mk-addons-926744
	I0603 10:39:21.248018   15688 main.go:141] libmachine: (addons-926744) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ef:0f:40", ip: ""} in network mk-addons-926744: {Iface:virbr1 ExpiryTime:2024-06-03 11:39:15 +0000 UTC Type:0 Mac:52:54:00:ef:0f:40 Iaid: IPaddr:192.168.39.188 Prefix:24 Hostname:addons-926744 Clientid:01:52:54:00:ef:0f:40}
	I0603 10:39:21.248048   15688 main.go:141] libmachine: (addons-926744) DBG | domain addons-926744 has defined IP address 192.168.39.188 and MAC address 52:54:00:ef:0f:40 in network mk-addons-926744
	I0603 10:39:21.248218   15688 main.go:141] libmachine: (addons-926744) Calling .GetSSHPort
	I0603 10:39:21.248379   15688 main.go:141] libmachine: (addons-926744) Calling .GetSSHKeyPath
	I0603 10:39:21.248536   15688 main.go:141] libmachine: (addons-926744) Calling .GetSSHKeyPath
	I0603 10:39:21.248654   15688 main.go:141] libmachine: (addons-926744) Calling .GetSSHUsername
	I0603 10:39:21.248809   15688 main.go:141] libmachine: Using SSH client type: native
	I0603 10:39:21.249030   15688 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.188 22 <nil> <nil>}
	I0603 10:39:21.249046   15688 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0603 10:39:21.354004   15688 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0603 10:39:21.354031   15688 main.go:141] libmachine: Detecting the provisioner...
	I0603 10:39:21.354041   15688 main.go:141] libmachine: (addons-926744) Calling .GetSSHHostname
	I0603 10:39:21.356660   15688 main.go:141] libmachine: (addons-926744) DBG | domain addons-926744 has defined MAC address 52:54:00:ef:0f:40 in network mk-addons-926744
	I0603 10:39:21.357019   15688 main.go:141] libmachine: (addons-926744) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ef:0f:40", ip: ""} in network mk-addons-926744: {Iface:virbr1 ExpiryTime:2024-06-03 11:39:15 +0000 UTC Type:0 Mac:52:54:00:ef:0f:40 Iaid: IPaddr:192.168.39.188 Prefix:24 Hostname:addons-926744 Clientid:01:52:54:00:ef:0f:40}
	I0603 10:39:21.357038   15688 main.go:141] libmachine: (addons-926744) DBG | domain addons-926744 has defined IP address 192.168.39.188 and MAC address 52:54:00:ef:0f:40 in network mk-addons-926744
	I0603 10:39:21.357190   15688 main.go:141] libmachine: (addons-926744) Calling .GetSSHPort
	I0603 10:39:21.357377   15688 main.go:141] libmachine: (addons-926744) Calling .GetSSHKeyPath
	I0603 10:39:21.357519   15688 main.go:141] libmachine: (addons-926744) Calling .GetSSHKeyPath
	I0603 10:39:21.357678   15688 main.go:141] libmachine: (addons-926744) Calling .GetSSHUsername
	I0603 10:39:21.357821   15688 main.go:141] libmachine: Using SSH client type: native
	I0603 10:39:21.357982   15688 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.188 22 <nil> <nil>}
	I0603 10:39:21.357998   15688 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0603 10:39:21.467553   15688 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0603 10:39:21.467624   15688 main.go:141] libmachine: found compatible host: buildroot
	I0603 10:39:21.467634   15688 main.go:141] libmachine: Provisioning with buildroot...
	I0603 10:39:21.467644   15688 main.go:141] libmachine: (addons-926744) Calling .GetMachineName
	I0603 10:39:21.467884   15688 buildroot.go:166] provisioning hostname "addons-926744"
	I0603 10:39:21.467914   15688 main.go:141] libmachine: (addons-926744) Calling .GetMachineName
	I0603 10:39:21.468067   15688 main.go:141] libmachine: (addons-926744) Calling .GetSSHHostname
	I0603 10:39:21.470868   15688 main.go:141] libmachine: (addons-926744) DBG | domain addons-926744 has defined MAC address 52:54:00:ef:0f:40 in network mk-addons-926744
	I0603 10:39:21.471261   15688 main.go:141] libmachine: (addons-926744) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ef:0f:40", ip: ""} in network mk-addons-926744: {Iface:virbr1 ExpiryTime:2024-06-03 11:39:15 +0000 UTC Type:0 Mac:52:54:00:ef:0f:40 Iaid: IPaddr:192.168.39.188 Prefix:24 Hostname:addons-926744 Clientid:01:52:54:00:ef:0f:40}
	I0603 10:39:21.471289   15688 main.go:141] libmachine: (addons-926744) DBG | domain addons-926744 has defined IP address 192.168.39.188 and MAC address 52:54:00:ef:0f:40 in network mk-addons-926744
	I0603 10:39:21.471392   15688 main.go:141] libmachine: (addons-926744) Calling .GetSSHPort
	I0603 10:39:21.471547   15688 main.go:141] libmachine: (addons-926744) Calling .GetSSHKeyPath
	I0603 10:39:21.471689   15688 main.go:141] libmachine: (addons-926744) Calling .GetSSHKeyPath
	I0603 10:39:21.471828   15688 main.go:141] libmachine: (addons-926744) Calling .GetSSHUsername
	I0603 10:39:21.472139   15688 main.go:141] libmachine: Using SSH client type: native
	I0603 10:39:21.472301   15688 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.188 22 <nil> <nil>}
	I0603 10:39:21.472313   15688 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-926744 && echo "addons-926744" | sudo tee /etc/hostname
	I0603 10:39:21.599198   15688 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-926744
	
	I0603 10:39:21.599239   15688 main.go:141] libmachine: (addons-926744) Calling .GetSSHHostname
	I0603 10:39:21.601905   15688 main.go:141] libmachine: (addons-926744) DBG | domain addons-926744 has defined MAC address 52:54:00:ef:0f:40 in network mk-addons-926744
	I0603 10:39:21.602278   15688 main.go:141] libmachine: (addons-926744) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ef:0f:40", ip: ""} in network mk-addons-926744: {Iface:virbr1 ExpiryTime:2024-06-03 11:39:15 +0000 UTC Type:0 Mac:52:54:00:ef:0f:40 Iaid: IPaddr:192.168.39.188 Prefix:24 Hostname:addons-926744 Clientid:01:52:54:00:ef:0f:40}
	I0603 10:39:21.602298   15688 main.go:141] libmachine: (addons-926744) DBG | domain addons-926744 has defined IP address 192.168.39.188 and MAC address 52:54:00:ef:0f:40 in network mk-addons-926744
	I0603 10:39:21.602506   15688 main.go:141] libmachine: (addons-926744) Calling .GetSSHPort
	I0603 10:39:21.602710   15688 main.go:141] libmachine: (addons-926744) Calling .GetSSHKeyPath
	I0603 10:39:21.602879   15688 main.go:141] libmachine: (addons-926744) Calling .GetSSHKeyPath
	I0603 10:39:21.603048   15688 main.go:141] libmachine: (addons-926744) Calling .GetSSHUsername
	I0603 10:39:21.603212   15688 main.go:141] libmachine: Using SSH client type: native
	I0603 10:39:21.603445   15688 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.188 22 <nil> <nil>}
	I0603 10:39:21.603470   15688 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-926744' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-926744/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-926744' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0603 10:39:21.725573   15688 main.go:141] libmachine: SSH cmd err, output: <nil>: 
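
	After that script runs, the transient hostname and /etc/hosts should both agree on addons-926744; a quick spot-check through the same minikube ssh path the rest of this report uses (binary and profile name as above):

	    out/minikube-linux-amd64 -p addons-926744 ssh "hostname && grep addons-926744 /etc/hosts"
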
	I0603 10:39:21.725597   15688 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19008-7755/.minikube CaCertPath:/home/jenkins/minikube-integration/19008-7755/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19008-7755/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19008-7755/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19008-7755/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19008-7755/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19008-7755/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19008-7755/.minikube}
	I0603 10:39:21.725637   15688 buildroot.go:174] setting up certificates
	I0603 10:39:21.725654   15688 provision.go:84] configureAuth start
	I0603 10:39:21.725672   15688 main.go:141] libmachine: (addons-926744) Calling .GetMachineName
	I0603 10:39:21.725914   15688 main.go:141] libmachine: (addons-926744) Calling .GetIP
	I0603 10:39:21.728329   15688 main.go:141] libmachine: (addons-926744) DBG | domain addons-926744 has defined MAC address 52:54:00:ef:0f:40 in network mk-addons-926744
	I0603 10:39:21.728687   15688 main.go:141] libmachine: (addons-926744) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ef:0f:40", ip: ""} in network mk-addons-926744: {Iface:virbr1 ExpiryTime:2024-06-03 11:39:15 +0000 UTC Type:0 Mac:52:54:00:ef:0f:40 Iaid: IPaddr:192.168.39.188 Prefix:24 Hostname:addons-926744 Clientid:01:52:54:00:ef:0f:40}
	I0603 10:39:21.728716   15688 main.go:141] libmachine: (addons-926744) DBG | domain addons-926744 has defined IP address 192.168.39.188 and MAC address 52:54:00:ef:0f:40 in network mk-addons-926744
	I0603 10:39:21.728800   15688 main.go:141] libmachine: (addons-926744) Calling .GetSSHHostname
	I0603 10:39:21.730953   15688 main.go:141] libmachine: (addons-926744) DBG | domain addons-926744 has defined MAC address 52:54:00:ef:0f:40 in network mk-addons-926744
	I0603 10:39:21.731239   15688 main.go:141] libmachine: (addons-926744) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ef:0f:40", ip: ""} in network mk-addons-926744: {Iface:virbr1 ExpiryTime:2024-06-03 11:39:15 +0000 UTC Type:0 Mac:52:54:00:ef:0f:40 Iaid: IPaddr:192.168.39.188 Prefix:24 Hostname:addons-926744 Clientid:01:52:54:00:ef:0f:40}
	I0603 10:39:21.731268   15688 main.go:141] libmachine: (addons-926744) DBG | domain addons-926744 has defined IP address 192.168.39.188 and MAC address 52:54:00:ef:0f:40 in network mk-addons-926744
	I0603 10:39:21.731370   15688 provision.go:143] copyHostCerts
	I0603 10:39:21.731449   15688 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19008-7755/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19008-7755/.minikube/key.pem (1679 bytes)
	I0603 10:39:21.731560   15688 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19008-7755/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19008-7755/.minikube/ca.pem (1082 bytes)
	I0603 10:39:21.731636   15688 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19008-7755/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19008-7755/.minikube/cert.pem (1123 bytes)
	I0603 10:39:21.731695   15688 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19008-7755/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19008-7755/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19008-7755/.minikube/certs/ca-key.pem org=jenkins.addons-926744 san=[127.0.0.1 192.168.39.188 addons-926744 localhost minikube]
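
	The san=[...] list in the line above is what ends up in the generated server.pem; if a TLS failure is suspected later in the run, the certificate can be dumped on the host with openssl (a sketch, assuming openssl is available):

	    openssl x509 -noout -text \
	      -in /home/jenkins/minikube-integration/19008-7755/.minikube/machines/server.pem \
	      | grep -A1 'Subject Alternative Name'
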
	I0603 10:39:22.097550   15688 provision.go:177] copyRemoteCerts
	I0603 10:39:22.097612   15688 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0603 10:39:22.097643   15688 main.go:141] libmachine: (addons-926744) Calling .GetSSHHostname
	I0603 10:39:22.101431   15688 main.go:141] libmachine: (addons-926744) DBG | domain addons-926744 has defined MAC address 52:54:00:ef:0f:40 in network mk-addons-926744
	I0603 10:39:22.101796   15688 main.go:141] libmachine: (addons-926744) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ef:0f:40", ip: ""} in network mk-addons-926744: {Iface:virbr1 ExpiryTime:2024-06-03 11:39:15 +0000 UTC Type:0 Mac:52:54:00:ef:0f:40 Iaid: IPaddr:192.168.39.188 Prefix:24 Hostname:addons-926744 Clientid:01:52:54:00:ef:0f:40}
	I0603 10:39:22.101825   15688 main.go:141] libmachine: (addons-926744) DBG | domain addons-926744 has defined IP address 192.168.39.188 and MAC address 52:54:00:ef:0f:40 in network mk-addons-926744
	I0603 10:39:22.101952   15688 main.go:141] libmachine: (addons-926744) Calling .GetSSHPort
	I0603 10:39:22.102210   15688 main.go:141] libmachine: (addons-926744) Calling .GetSSHKeyPath
	I0603 10:39:22.102350   15688 main.go:141] libmachine: (addons-926744) Calling .GetSSHUsername
	I0603 10:39:22.102549   15688 sshutil.go:53] new ssh client: &{IP:192.168.39.188 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19008-7755/.minikube/machines/addons-926744/id_rsa Username:docker}
	I0603 10:39:22.187551   15688 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0603 10:39:22.210910   15688 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0603 10:39:22.233749   15688 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0603 10:39:22.256094   15688 provision.go:87] duration metric: took 530.42487ms to configureAuth
	I0603 10:39:22.256116   15688 buildroot.go:189] setting minikube options for container-runtime
	I0603 10:39:22.256278   15688 config.go:182] Loaded profile config "addons-926744": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0603 10:39:22.256344   15688 main.go:141] libmachine: (addons-926744) Calling .GetSSHHostname
	I0603 10:39:22.259055   15688 main.go:141] libmachine: (addons-926744) DBG | domain addons-926744 has defined MAC address 52:54:00:ef:0f:40 in network mk-addons-926744
	I0603 10:39:22.259485   15688 main.go:141] libmachine: (addons-926744) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ef:0f:40", ip: ""} in network mk-addons-926744: {Iface:virbr1 ExpiryTime:2024-06-03 11:39:15 +0000 UTC Type:0 Mac:52:54:00:ef:0f:40 Iaid: IPaddr:192.168.39.188 Prefix:24 Hostname:addons-926744 Clientid:01:52:54:00:ef:0f:40}
	I0603 10:39:22.259513   15688 main.go:141] libmachine: (addons-926744) DBG | domain addons-926744 has defined IP address 192.168.39.188 and MAC address 52:54:00:ef:0f:40 in network mk-addons-926744
	I0603 10:39:22.259672   15688 main.go:141] libmachine: (addons-926744) Calling .GetSSHPort
	I0603 10:39:22.259874   15688 main.go:141] libmachine: (addons-926744) Calling .GetSSHKeyPath
	I0603 10:39:22.260041   15688 main.go:141] libmachine: (addons-926744) Calling .GetSSHKeyPath
	I0603 10:39:22.260241   15688 main.go:141] libmachine: (addons-926744) Calling .GetSSHUsername
	I0603 10:39:22.260422   15688 main.go:141] libmachine: Using SSH client type: native
	I0603 10:39:22.260595   15688 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.188 22 <nil> <nil>}
	I0603 10:39:22.260611   15688 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0603 10:39:22.523377   15688 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0603 10:39:22.523406   15688 main.go:141] libmachine: Checking connection to Docker...
	I0603 10:39:22.523416   15688 main.go:141] libmachine: (addons-926744) Calling .GetURL
	I0603 10:39:22.524650   15688 main.go:141] libmachine: (addons-926744) DBG | Using libvirt version 6000000
	I0603 10:39:22.527101   15688 main.go:141] libmachine: (addons-926744) DBG | domain addons-926744 has defined MAC address 52:54:00:ef:0f:40 in network mk-addons-926744
	I0603 10:39:22.527501   15688 main.go:141] libmachine: (addons-926744) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ef:0f:40", ip: ""} in network mk-addons-926744: {Iface:virbr1 ExpiryTime:2024-06-03 11:39:15 +0000 UTC Type:0 Mac:52:54:00:ef:0f:40 Iaid: IPaddr:192.168.39.188 Prefix:24 Hostname:addons-926744 Clientid:01:52:54:00:ef:0f:40}
	I0603 10:39:22.527523   15688 main.go:141] libmachine: (addons-926744) DBG | domain addons-926744 has defined IP address 192.168.39.188 and MAC address 52:54:00:ef:0f:40 in network mk-addons-926744
	I0603 10:39:22.527705   15688 main.go:141] libmachine: Docker is up and running!
	I0603 10:39:22.527719   15688 main.go:141] libmachine: Reticulating splines...
	I0603 10:39:22.527725   15688 client.go:171] duration metric: took 21.767140775s to LocalClient.Create
	I0603 10:39:22.527743   15688 start.go:167] duration metric: took 21.767190617s to libmachine.API.Create "addons-926744"
	I0603 10:39:22.527753   15688 start.go:293] postStartSetup for "addons-926744" (driver="kvm2")
	I0603 10:39:22.527761   15688 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0603 10:39:22.527776   15688 main.go:141] libmachine: (addons-926744) Calling .DriverName
	I0603 10:39:22.527996   15688 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0603 10:39:22.528020   15688 main.go:141] libmachine: (addons-926744) Calling .GetSSHHostname
	I0603 10:39:22.530310   15688 main.go:141] libmachine: (addons-926744) DBG | domain addons-926744 has defined MAC address 52:54:00:ef:0f:40 in network mk-addons-926744
	I0603 10:39:22.530683   15688 main.go:141] libmachine: (addons-926744) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ef:0f:40", ip: ""} in network mk-addons-926744: {Iface:virbr1 ExpiryTime:2024-06-03 11:39:15 +0000 UTC Type:0 Mac:52:54:00:ef:0f:40 Iaid: IPaddr:192.168.39.188 Prefix:24 Hostname:addons-926744 Clientid:01:52:54:00:ef:0f:40}
	I0603 10:39:22.530702   15688 main.go:141] libmachine: (addons-926744) DBG | domain addons-926744 has defined IP address 192.168.39.188 and MAC address 52:54:00:ef:0f:40 in network mk-addons-926744
	I0603 10:39:22.530829   15688 main.go:141] libmachine: (addons-926744) Calling .GetSSHPort
	I0603 10:39:22.531004   15688 main.go:141] libmachine: (addons-926744) Calling .GetSSHKeyPath
	I0603 10:39:22.531190   15688 main.go:141] libmachine: (addons-926744) Calling .GetSSHUsername
	I0603 10:39:22.531336   15688 sshutil.go:53] new ssh client: &{IP:192.168.39.188 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19008-7755/.minikube/machines/addons-926744/id_rsa Username:docker}
	I0603 10:39:22.612774   15688 ssh_runner.go:195] Run: cat /etc/os-release
	I0603 10:39:22.616746   15688 info.go:137] Remote host: Buildroot 2023.02.9
	I0603 10:39:22.616768   15688 filesync.go:126] Scanning /home/jenkins/minikube-integration/19008-7755/.minikube/addons for local assets ...
	I0603 10:39:22.616830   15688 filesync.go:126] Scanning /home/jenkins/minikube-integration/19008-7755/.minikube/files for local assets ...
	I0603 10:39:22.616864   15688 start.go:296] duration metric: took 89.105826ms for postStartSetup
	I0603 10:39:22.616902   15688 main.go:141] libmachine: (addons-926744) Calling .GetConfigRaw
	I0603 10:39:22.617395   15688 main.go:141] libmachine: (addons-926744) Calling .GetIP
	I0603 10:39:22.620127   15688 main.go:141] libmachine: (addons-926744) DBG | domain addons-926744 has defined MAC address 52:54:00:ef:0f:40 in network mk-addons-926744
	I0603 10:39:22.620475   15688 main.go:141] libmachine: (addons-926744) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ef:0f:40", ip: ""} in network mk-addons-926744: {Iface:virbr1 ExpiryTime:2024-06-03 11:39:15 +0000 UTC Type:0 Mac:52:54:00:ef:0f:40 Iaid: IPaddr:192.168.39.188 Prefix:24 Hostname:addons-926744 Clientid:01:52:54:00:ef:0f:40}
	I0603 10:39:22.620504   15688 main.go:141] libmachine: (addons-926744) DBG | domain addons-926744 has defined IP address 192.168.39.188 and MAC address 52:54:00:ef:0f:40 in network mk-addons-926744
	I0603 10:39:22.620740   15688 profile.go:143] Saving config to /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/addons-926744/config.json ...
	I0603 10:39:22.620893   15688 start.go:128] duration metric: took 21.877040801s to createHost
	I0603 10:39:22.620914   15688 main.go:141] libmachine: (addons-926744) Calling .GetSSHHostname
	I0603 10:39:22.622879   15688 main.go:141] libmachine: (addons-926744) DBG | domain addons-926744 has defined MAC address 52:54:00:ef:0f:40 in network mk-addons-926744
	I0603 10:39:22.623185   15688 main.go:141] libmachine: (addons-926744) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ef:0f:40", ip: ""} in network mk-addons-926744: {Iface:virbr1 ExpiryTime:2024-06-03 11:39:15 +0000 UTC Type:0 Mac:52:54:00:ef:0f:40 Iaid: IPaddr:192.168.39.188 Prefix:24 Hostname:addons-926744 Clientid:01:52:54:00:ef:0f:40}
	I0603 10:39:22.623214   15688 main.go:141] libmachine: (addons-926744) DBG | domain addons-926744 has defined IP address 192.168.39.188 and MAC address 52:54:00:ef:0f:40 in network mk-addons-926744
	I0603 10:39:22.623315   15688 main.go:141] libmachine: (addons-926744) Calling .GetSSHPort
	I0603 10:39:22.623489   15688 main.go:141] libmachine: (addons-926744) Calling .GetSSHKeyPath
	I0603 10:39:22.623632   15688 main.go:141] libmachine: (addons-926744) Calling .GetSSHKeyPath
	I0603 10:39:22.623749   15688 main.go:141] libmachine: (addons-926744) Calling .GetSSHUsername
	I0603 10:39:22.623881   15688 main.go:141] libmachine: Using SSH client type: native
	I0603 10:39:22.624088   15688 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.188 22 <nil> <nil>}
	I0603 10:39:22.624103   15688 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0603 10:39:22.735554   15688 main.go:141] libmachine: SSH cmd err, output: <nil>: 1717411162.708587879
	
	I0603 10:39:22.735574   15688 fix.go:216] guest clock: 1717411162.708587879
	I0603 10:39:22.735581   15688 fix.go:229] Guest: 2024-06-03 10:39:22.708587879 +0000 UTC Remote: 2024-06-03 10:39:22.620903621 +0000 UTC m=+21.971514084 (delta=87.684258ms)
	I0603 10:39:22.735612   15688 fix.go:200] guest clock delta is within tolerance: 87.684258ms
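The three fix.go lines above compare the guest clock against the host clock and accept the boot only if the skew is small. Below is a minimal Go sketch of that kind of check, reusing the two timestamps reported in the log; the one-second tolerance is an assumption for illustration, not the value minikube actually uses.

    package main

    import (
        "fmt"
        "time"
    )

    // clockDelta returns the absolute skew between the guest and host clocks.
    func clockDelta(guest, host time.Time) time.Duration {
        d := host.Sub(guest)
        if d < 0 {
            d = -d
        }
        return d
    }

    func main() {
        // Values copied from the log lines above.
        guest := time.Unix(1717411162, 708587879).UTC()
        host := time.Date(2024, 6, 3, 10, 39, 22, 620903621, time.UTC)

        tolerance := time.Second // assumed tolerance for this sketch
        d := clockDelta(guest, host)
        fmt.Printf("delta=%v withinTolerance=%v\n", d, d <= tolerance)
    }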
	I0603 10:39:22.735617   15688 start.go:83] releasing machines lock for "addons-926744", held for 21.991830654s
	I0603 10:39:22.735640   15688 main.go:141] libmachine: (addons-926744) Calling .DriverName
	I0603 10:39:22.735892   15688 main.go:141] libmachine: (addons-926744) Calling .GetIP
	I0603 10:39:22.738244   15688 main.go:141] libmachine: (addons-926744) DBG | domain addons-926744 has defined MAC address 52:54:00:ef:0f:40 in network mk-addons-926744
	I0603 10:39:22.738492   15688 main.go:141] libmachine: (addons-926744) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ef:0f:40", ip: ""} in network mk-addons-926744: {Iface:virbr1 ExpiryTime:2024-06-03 11:39:15 +0000 UTC Type:0 Mac:52:54:00:ef:0f:40 Iaid: IPaddr:192.168.39.188 Prefix:24 Hostname:addons-926744 Clientid:01:52:54:00:ef:0f:40}
	I0603 10:39:22.738519   15688 main.go:141] libmachine: (addons-926744) DBG | domain addons-926744 has defined IP address 192.168.39.188 and MAC address 52:54:00:ef:0f:40 in network mk-addons-926744
	I0603 10:39:22.738657   15688 main.go:141] libmachine: (addons-926744) Calling .DriverName
	I0603 10:39:22.739074   15688 main.go:141] libmachine: (addons-926744) Calling .DriverName
	I0603 10:39:22.739222   15688 main.go:141] libmachine: (addons-926744) Calling .DriverName
	I0603 10:39:22.739337   15688 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0603 10:39:22.739389   15688 main.go:141] libmachine: (addons-926744) Calling .GetSSHHostname
	I0603 10:39:22.739447   15688 ssh_runner.go:195] Run: cat /version.json
	I0603 10:39:22.739469   15688 main.go:141] libmachine: (addons-926744) Calling .GetSSHHostname
	I0603 10:39:22.741962   15688 main.go:141] libmachine: (addons-926744) DBG | domain addons-926744 has defined MAC address 52:54:00:ef:0f:40 in network mk-addons-926744
	I0603 10:39:22.742092   15688 main.go:141] libmachine: (addons-926744) DBG | domain addons-926744 has defined MAC address 52:54:00:ef:0f:40 in network mk-addons-926744
	I0603 10:39:22.742332   15688 main.go:141] libmachine: (addons-926744) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ef:0f:40", ip: ""} in network mk-addons-926744: {Iface:virbr1 ExpiryTime:2024-06-03 11:39:15 +0000 UTC Type:0 Mac:52:54:00:ef:0f:40 Iaid: IPaddr:192.168.39.188 Prefix:24 Hostname:addons-926744 Clientid:01:52:54:00:ef:0f:40}
	I0603 10:39:22.742356   15688 main.go:141] libmachine: (addons-926744) DBG | domain addons-926744 has defined IP address 192.168.39.188 and MAC address 52:54:00:ef:0f:40 in network mk-addons-926744
	I0603 10:39:22.742492   15688 main.go:141] libmachine: (addons-926744) Calling .GetSSHPort
	I0603 10:39:22.742598   15688 main.go:141] libmachine: (addons-926744) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ef:0f:40", ip: ""} in network mk-addons-926744: {Iface:virbr1 ExpiryTime:2024-06-03 11:39:15 +0000 UTC Type:0 Mac:52:54:00:ef:0f:40 Iaid: IPaddr:192.168.39.188 Prefix:24 Hostname:addons-926744 Clientid:01:52:54:00:ef:0f:40}
	I0603 10:39:22.742623   15688 main.go:141] libmachine: (addons-926744) DBG | domain addons-926744 has defined IP address 192.168.39.188 and MAC address 52:54:00:ef:0f:40 in network mk-addons-926744
	I0603 10:39:22.742667   15688 main.go:141] libmachine: (addons-926744) Calling .GetSSHKeyPath
	I0603 10:39:22.742817   15688 main.go:141] libmachine: (addons-926744) Calling .GetSSHPort
	I0603 10:39:22.742826   15688 main.go:141] libmachine: (addons-926744) Calling .GetSSHUsername
	I0603 10:39:22.742988   15688 main.go:141] libmachine: (addons-926744) Calling .GetSSHKeyPath
	I0603 10:39:22.742996   15688 sshutil.go:53] new ssh client: &{IP:192.168.39.188 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19008-7755/.minikube/machines/addons-926744/id_rsa Username:docker}
	I0603 10:39:22.743120   15688 main.go:141] libmachine: (addons-926744) Calling .GetSSHUsername
	I0603 10:39:22.743234   15688 sshutil.go:53] new ssh client: &{IP:192.168.39.188 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19008-7755/.minikube/machines/addons-926744/id_rsa Username:docker}
	I0603 10:39:22.849068   15688 ssh_runner.go:195] Run: systemctl --version
	I0603 10:39:22.855018   15688 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0603 10:39:23.012170   15688 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0603 10:39:23.018709   15688 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0603 10:39:23.018758   15688 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0603 10:39:23.034253   15688 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0603 10:39:23.034271   15688 start.go:494] detecting cgroup driver to use...
	I0603 10:39:23.034321   15688 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0603 10:39:23.050406   15688 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0603 10:39:23.062935   15688 docker.go:217] disabling cri-docker service (if available) ...
	I0603 10:39:23.062972   15688 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0603 10:39:23.075521   15688 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0603 10:39:23.088043   15688 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0603 10:39:23.197029   15688 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0603 10:39:23.344391   15688 docker.go:233] disabling docker service ...
	I0603 10:39:23.344453   15688 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0603 10:39:23.359448   15688 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0603 10:39:23.371431   15688 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0603 10:39:23.510818   15688 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0603 10:39:23.635139   15688 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0603 10:39:23.648933   15688 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0603 10:39:23.666622   15688 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0603 10:39:23.666672   15688 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 10:39:23.676952   15688 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0603 10:39:23.676996   15688 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 10:39:23.686714   15688 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 10:39:23.696483   15688 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 10:39:23.706367   15688 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0603 10:39:23.716174   15688 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 10:39:23.726218   15688 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 10:39:23.742841   15688 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
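Taken together, the sed edits above leave /etc/crio/crio.conf.d/02-crio.conf with the pause image, cgroup manager, conmon cgroup and unprivileged-port sysctl named in the log. Roughly, and only as an illustrative sketch (the section headers and any other pre-existing keys in that drop-in are not shown in the log), the relevant part of the file ends up looking like:

    [crio.image]
    pause_image = "registry.k8s.io/pause:3.9"

    [crio.runtime]
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]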
	I0603 10:39:23.752762   15688 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0603 10:39:23.761523   15688 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0603 10:39:23.761566   15688 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0603 10:39:23.774573   15688 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0603 10:39:23.783462   15688 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0603 10:39:23.899169   15688 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0603 10:39:24.033431   15688 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0603 10:39:24.033510   15688 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0603 10:39:24.038630   15688 start.go:562] Will wait 60s for crictl version
	I0603 10:39:24.038688   15688 ssh_runner.go:195] Run: which crictl
	I0603 10:39:24.042629   15688 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0603 10:39:24.083375   15688 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0603 10:39:24.083470   15688 ssh_runner.go:195] Run: crio --version
	I0603 10:39:24.111320   15688 ssh_runner.go:195] Run: crio --version
	I0603 10:39:24.141167   15688 out.go:177] * Preparing Kubernetes v1.30.1 on CRI-O 1.29.1 ...
	I0603 10:39:24.142262   15688 main.go:141] libmachine: (addons-926744) Calling .GetIP
	I0603 10:39:24.144907   15688 main.go:141] libmachine: (addons-926744) DBG | domain addons-926744 has defined MAC address 52:54:00:ef:0f:40 in network mk-addons-926744
	I0603 10:39:24.145228   15688 main.go:141] libmachine: (addons-926744) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ef:0f:40", ip: ""} in network mk-addons-926744: {Iface:virbr1 ExpiryTime:2024-06-03 11:39:15 +0000 UTC Type:0 Mac:52:54:00:ef:0f:40 Iaid: IPaddr:192.168.39.188 Prefix:24 Hostname:addons-926744 Clientid:01:52:54:00:ef:0f:40}
	I0603 10:39:24.145256   15688 main.go:141] libmachine: (addons-926744) DBG | domain addons-926744 has defined IP address 192.168.39.188 and MAC address 52:54:00:ef:0f:40 in network mk-addons-926744
	I0603 10:39:24.145431   15688 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0603 10:39:24.149666   15688 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0603 10:39:24.162397   15688 kubeadm.go:877] updating cluster {Name:addons-926744 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:addons-926744 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.188 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...

	I0603 10:39:24.162528   15688 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime crio
	I0603 10:39:24.162578   15688 ssh_runner.go:195] Run: sudo crictl images --output json
	I0603 10:39:24.195782   15688 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.1". assuming images are not preloaded.
	I0603 10:39:24.195851   15688 ssh_runner.go:195] Run: which lz4
	I0603 10:39:24.199796   15688 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0603 10:39:24.203991   15688 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0603 10:39:24.204014   15688 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (394537501 bytes)
	I0603 10:39:25.473804   15688 crio.go:462] duration metric: took 1.274045772s to copy over tarball
	I0603 10:39:25.473876   15688 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0603 10:39:27.732622   15688 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.258712261s)
	I0603 10:39:27.732659   15688 crio.go:469] duration metric: took 2.258825539s to extract the tarball
	I0603 10:39:27.732670   15688 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0603 10:39:27.770169   15688 ssh_runner.go:195] Run: sudo crictl images --output json
	I0603 10:39:27.818097   15688 crio.go:514] all images are preloaded for cri-o runtime.
	I0603 10:39:27.818123   15688 cache_images.go:84] Images are preloaded, skipping loading
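The preload check above is just `sudo crictl images --output json` plus a lookup for the expected tag (registry.k8s.io/kube-apiserver:v1.30.1). A minimal Go sketch of that lookup follows, assuming crictl's JSON output keeps the image list under images[].repoTags; the sample payload in main is hypothetical.

    package main

    import (
        "encoding/json"
        "fmt"
    )

    type imageList struct {
        Images []struct {
            RepoTags []string `json:"repoTags"`
        } `json:"images"`
    }

    // hasImage reports whether the crictl JSON output lists the wanted tag.
    func hasImage(crictlJSON []byte, want string) (bool, error) {
        var list imageList
        if err := json.Unmarshal(crictlJSON, &list); err != nil {
            return false, err
        }
        for _, img := range list.Images {
            for _, tag := range img.RepoTags {
                if tag == want {
                    return true, nil
                }
            }
        }
        return false, nil
    }

    func main() {
        // Hypothetical sample standing in for the real crictl response.
        sample := []byte(`{"images":[{"repoTags":["registry.k8s.io/kube-apiserver:v1.30.1"]}]}`)
        ok, _ := hasImage(sample, "registry.k8s.io/kube-apiserver:v1.30.1")
        fmt.Println("preloaded:", ok)
    }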
	I0603 10:39:27.818133   15688 kubeadm.go:928] updating node { 192.168.39.188 8443 v1.30.1 crio true true} ...
	I0603 10:39:27.818241   15688 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-926744 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.188
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.1 ClusterName:addons-926744 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0603 10:39:27.818315   15688 ssh_runner.go:195] Run: crio config
	I0603 10:39:27.859809   15688 cni.go:84] Creating CNI manager for ""
	I0603 10:39:27.859828   15688 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0603 10:39:27.859837   15688 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0603 10:39:27.859858   15688 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.188 APIServerPort:8443 KubernetesVersion:v1.30.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-926744 NodeName:addons-926744 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.188"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.188 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0603 10:39:27.859975   15688 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.188
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-926744"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.188
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.188"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0603 10:39:27.860029   15688 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.1
	I0603 10:39:27.870310   15688 binaries.go:44] Found k8s binaries, skipping transfer
	I0603 10:39:27.870387   15688 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0603 10:39:27.879945   15688 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0603 10:39:27.895930   15688 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0603 10:39:27.911575   15688 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2157 bytes)
	I0603 10:39:27.927182   15688 ssh_runner.go:195] Run: grep 192.168.39.188	control-plane.minikube.internal$ /etc/hosts
	I0603 10:39:27.930892   15688 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.188	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0603 10:39:27.942855   15688 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0603 10:39:28.049910   15688 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0603 10:39:28.066146   15688 certs.go:68] Setting up /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/addons-926744 for IP: 192.168.39.188
	I0603 10:39:28.066167   15688 certs.go:194] generating shared ca certs ...
	I0603 10:39:28.066179   15688 certs.go:226] acquiring lock for ca certs: {Name:mk50092df3bdecbf959926adc377ee06b75aacd9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 10:39:28.066327   15688 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/19008-7755/.minikube/ca.key
	I0603 10:39:28.307328   15688 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19008-7755/.minikube/ca.crt ...
	I0603 10:39:28.307353   15688 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19008-7755/.minikube/ca.crt: {Name:mk984ed7a059f1be0c7e39f38d2e6183de9bbdff Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 10:39:28.307510   15688 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19008-7755/.minikube/ca.key ...
	I0603 10:39:28.307520   15688 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19008-7755/.minikube/ca.key: {Name:mk82c7e7b22a8dabc509ee5632c503ace457f1ad Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 10:39:28.307594   15688 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19008-7755/.minikube/proxy-client-ca.key
	I0603 10:39:28.423209   15688 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19008-7755/.minikube/proxy-client-ca.crt ...
	I0603 10:39:28.423237   15688 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19008-7755/.minikube/proxy-client-ca.crt: {Name:mka881b38c9e88d6c084321a1bfb3b4e4074f25f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 10:39:28.423393   15688 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19008-7755/.minikube/proxy-client-ca.key ...
	I0603 10:39:28.423405   15688 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19008-7755/.minikube/proxy-client-ca.key: {Name:mk349452fbb1bf63c9303e0d2bae66707b31ec88 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 10:39:28.423470   15688 certs.go:256] generating profile certs ...
	I0603 10:39:28.423517   15688 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/addons-926744/client.key
	I0603 10:39:28.423531   15688 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/addons-926744/client.crt with IP's: []
	I0603 10:39:28.686409   15688 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/addons-926744/client.crt ...
	I0603 10:39:28.686435   15688 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/addons-926744/client.crt: {Name:mke7d608cc02f6475b5fad9c4d3da0b5cbfee0a9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 10:39:28.686576   15688 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/addons-926744/client.key ...
	I0603 10:39:28.686586   15688 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/addons-926744/client.key: {Name:mk5be8a532a5d7bb239b3a45c6c370a2517cd8d6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 10:39:28.686647   15688 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/addons-926744/apiserver.key.ced1ab57
	I0603 10:39:28.686663   15688 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/addons-926744/apiserver.crt.ced1ab57 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.188]
	I0603 10:39:28.892305   15688 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/addons-926744/apiserver.crt.ced1ab57 ...
	I0603 10:39:28.892337   15688 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/addons-926744/apiserver.crt.ced1ab57: {Name:mk23ceeaaa209592cdc8986d5b781decf2eb3719 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 10:39:28.892523   15688 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/addons-926744/apiserver.key.ced1ab57 ...
	I0603 10:39:28.892541   15688 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/addons-926744/apiserver.key.ced1ab57: {Name:mkf7feeae242468b71e875a4f34e0d9e741c0102 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 10:39:28.892637   15688 certs.go:381] copying /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/addons-926744/apiserver.crt.ced1ab57 -> /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/addons-926744/apiserver.crt
	I0603 10:39:28.892727   15688 certs.go:385] copying /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/addons-926744/apiserver.key.ced1ab57 -> /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/addons-926744/apiserver.key
	I0603 10:39:28.892777   15688 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/addons-926744/proxy-client.key
	I0603 10:39:28.892792   15688 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/addons-926744/proxy-client.crt with IP's: []
	I0603 10:39:28.991268   15688 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/addons-926744/proxy-client.crt ...
	I0603 10:39:28.991295   15688 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/addons-926744/proxy-client.crt: {Name:mk91b1b04c07e0abd5edeb22741cb687164322a5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 10:39:28.991465   15688 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/addons-926744/proxy-client.key ...
	I0603 10:39:28.991477   15688 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/addons-926744/proxy-client.key: {Name:mk35e81d34e79efaa7ae4abefa0f7bbf60b8ccf2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 10:39:28.991675   15688 certs.go:484] found cert: /home/jenkins/minikube-integration/19008-7755/.minikube/certs/ca-key.pem (1679 bytes)
	I0603 10:39:28.991707   15688 certs.go:484] found cert: /home/jenkins/minikube-integration/19008-7755/.minikube/certs/ca.pem (1082 bytes)
	I0603 10:39:28.991730   15688 certs.go:484] found cert: /home/jenkins/minikube-integration/19008-7755/.minikube/certs/cert.pem (1123 bytes)
	I0603 10:39:28.991759   15688 certs.go:484] found cert: /home/jenkins/minikube-integration/19008-7755/.minikube/certs/key.pem (1679 bytes)
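The profile certs generated above are ordinary CA-signed x509 certificates; the apiserver one carries the IP SANs listed in the log (10.96.0.1, 127.0.0.1, 10.0.0.1, 192.168.39.188). Below is a self-contained Go sketch of that pattern using a throwaway CA and RSA keys rather than the real files under .minikube; subject names and the validity period are illustrative only.

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        // Throwaway CA standing in for minikubeCA from the log.
        caKey, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            panic(err)
        }
        ca := &x509.Certificate{
            SerialNumber:          big.NewInt(1),
            Subject:               pkix.Name{CommonName: "minikubeCA"},
            NotBefore:             time.Now(),
            NotAfter:              time.Now().Add(24 * time.Hour),
            IsCA:                  true,
            KeyUsage:              x509.KeyUsageCertSign,
            BasicConstraintsValid: true,
        }
        caDER, err := x509.CreateCertificate(rand.Reader, ca, ca, &caKey.PublicKey, caKey)
        if err != nil {
            panic(err)
        }
        caCert, err := x509.ParseCertificate(caDER)
        if err != nil {
            panic(err)
        }

        // Serving cert carrying the IP SANs listed in the log above.
        srvKey, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            panic(err)
        }
        srv := &x509.Certificate{
            SerialNumber: big.NewInt(2),
            Subject:      pkix.Name{CommonName: "minikube"},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(24 * time.Hour),
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            IPAddresses: []net.IP{
                net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
                net.ParseIP("10.0.0.1"), net.ParseIP("192.168.39.188"),
            },
        }
        der, err := x509.CreateCertificate(rand.Reader, srv, caCert, &srvKey.PublicKey, caKey)
        if err != nil {
            panic(err)
        }
        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }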
	I0603 10:39:28.992270   15688 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0603 10:39:29.040169   15688 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0603 10:39:29.067982   15688 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0603 10:39:29.090567   15688 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0603 10:39:29.113082   15688 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/addons-926744/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0603 10:39:29.135913   15688 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/addons-926744/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0603 10:39:29.158512   15688 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/addons-926744/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0603 10:39:29.181346   15688 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/addons-926744/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0603 10:39:29.203449   15688 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0603 10:39:29.225764   15688 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0603 10:39:29.241671   15688 ssh_runner.go:195] Run: openssl version
	I0603 10:39:29.247206   15688 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0603 10:39:29.257858   15688 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0603 10:39:29.262385   15688 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jun  3 10:39 /usr/share/ca-certificates/minikubeCA.pem
	I0603 10:39:29.262434   15688 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0603 10:39:29.268350   15688 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
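The two commands above register minikubeCA in the system trust store by symlinking it under its OpenSSL subject hash (b5213941.0 in this run). Below is a small Go sketch of the same step, shelling out to the same openssl invocation the log uses; it must run as root on the node and the paths are the ones the log shows.

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "path/filepath"
        "strings"
    )

    func main() {
        pemPath := "/usr/share/ca-certificates/minikubeCA.pem"

        // Same invocation as in the log: print the subject hash of the CA cert.
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
        if err != nil {
            panic(err)
        }
        hash := strings.TrimSpace(string(out))

        // Link /etc/ssl/certs/<hash>.0 at the cert, as the ln -fs above does.
        link := filepath.Join("/etc/ssl/certs", hash+".0")
        _ = os.Remove(link) // ignore "does not exist"; mirrors ln -f
        if err := os.Symlink("/etc/ssl/certs/minikubeCA.pem", link); err != nil {
            panic(err)
        }
        fmt.Println("linked", link)
    }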
	I0603 10:39:29.279498   15688 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0603 10:39:29.283652   15688 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0603 10:39:29.283707   15688 kubeadm.go:391] StartCluster: {Name:addons-926744 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:addons-926744 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.188 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0603 10:39:29.283785   15688 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0603 10:39:29.283823   15688 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0603 10:39:29.318028   15688 cri.go:89] found id: ""
	I0603 10:39:29.318101   15688 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0603 10:39:29.328068   15688 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0603 10:39:29.337625   15688 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0603 10:39:29.347197   15688 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0603 10:39:29.347215   15688 kubeadm.go:156] found existing configuration files:
	
	I0603 10:39:29.347247   15688 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0603 10:39:29.356339   15688 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0603 10:39:29.356378   15688 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0603 10:39:29.365771   15688 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0603 10:39:29.374519   15688 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0603 10:39:29.374566   15688 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0603 10:39:29.383968   15688 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0603 10:39:29.393211   15688 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0603 10:39:29.393253   15688 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0603 10:39:29.402717   15688 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0603 10:39:29.411721   15688 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0603 10:39:29.411754   15688 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0603 10:39:29.421177   15688 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0603 10:39:29.476425   15688 kubeadm.go:309] [init] Using Kubernetes version: v1.30.1
	I0603 10:39:29.476503   15688 kubeadm.go:309] [preflight] Running pre-flight checks
	I0603 10:39:29.597177   15688 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0603 10:39:29.597267   15688 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0603 10:39:29.597417   15688 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0603 10:39:29.806890   15688 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0603 10:39:30.066194   15688 out.go:204]   - Generating certificates and keys ...
	I0603 10:39:30.066354   15688 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0603 10:39:30.066445   15688 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0603 10:39:30.066546   15688 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0603 10:39:30.255981   15688 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0603 10:39:30.525682   15688 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0603 10:39:30.648978   15688 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0603 10:39:31.113794   15688 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0603 10:39:31.113969   15688 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [addons-926744 localhost] and IPs [192.168.39.188 127.0.0.1 ::1]
	I0603 10:39:31.502754   15688 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0603 10:39:31.502942   15688 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [addons-926744 localhost] and IPs [192.168.39.188 127.0.0.1 ::1]
	I0603 10:39:31.743899   15688 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0603 10:39:32.091205   15688 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0603 10:39:32.449506   15688 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0603 10:39:32.449754   15688 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0603 10:39:32.606839   15688 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0603 10:39:32.728286   15688 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0603 10:39:32.875918   15688 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0603 10:39:33.068539   15688 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0603 10:39:33.126307   15688 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0603 10:39:33.126978   15688 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0603 10:39:33.129307   15688 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0603 10:39:33.130943   15688 out.go:204]   - Booting up control plane ...
	I0603 10:39:33.131074   15688 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0603 10:39:33.131194   15688 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0603 10:39:33.132324   15688 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0603 10:39:33.150895   15688 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0603 10:39:33.151824   15688 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0603 10:39:33.151953   15688 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0603 10:39:33.276217   15688 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0603 10:39:33.276320   15688 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0603 10:39:33.777537   15688 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 501.486512ms
	I0603 10:39:33.777645   15688 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0603 10:39:38.776396   15688 kubeadm.go:309] [api-check] The API server is healthy after 5.002184862s
	I0603 10:39:38.789344   15688 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0603 10:39:38.802566   15688 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0603 10:39:38.826419   15688 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0603 10:39:38.826699   15688 kubeadm.go:309] [mark-control-plane] Marking the node addons-926744 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0603 10:39:38.836893   15688 kubeadm.go:309] [bootstrap-token] Using token: 9hbrg0.lmmhr5ylciaequvw
	I0603 10:39:38.838140   15688 out.go:204]   - Configuring RBAC rules ...
	I0603 10:39:38.838229   15688 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0603 10:39:38.841539   15688 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0603 10:39:38.850513   15688 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0603 10:39:38.853597   15688 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0603 10:39:38.856557   15688 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0603 10:39:38.859328   15688 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0603 10:39:39.185416   15688 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0603 10:39:39.617733   15688 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0603 10:39:40.185892   15688 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0603 10:39:40.185931   15688 kubeadm.go:309] 
	I0603 10:39:40.186031   15688 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0603 10:39:40.186046   15688 kubeadm.go:309] 
	I0603 10:39:40.186262   15688 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0603 10:39:40.186279   15688 kubeadm.go:309] 
	I0603 10:39:40.186311   15688 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0603 10:39:40.186360   15688 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0603 10:39:40.186416   15688 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0603 10:39:40.186425   15688 kubeadm.go:309] 
	I0603 10:39:40.186502   15688 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0603 10:39:40.186511   15688 kubeadm.go:309] 
	I0603 10:39:40.186549   15688 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0603 10:39:40.186567   15688 kubeadm.go:309] 
	I0603 10:39:40.186648   15688 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0603 10:39:40.186745   15688 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0603 10:39:40.186844   15688 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0603 10:39:40.186855   15688 kubeadm.go:309] 
	I0603 10:39:40.186981   15688 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0603 10:39:40.187110   15688 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0603 10:39:40.187129   15688 kubeadm.go:309] 
	I0603 10:39:40.187243   15688 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token 9hbrg0.lmmhr5ylciaequvw \
	I0603 10:39:40.187381   15688 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:efe6cbdd58a590fb6c3b56a05a1648145bfb13b8a8cd2383ea34b710fa987e0b \
	I0603 10:39:40.187403   15688 kubeadm.go:309] 	--control-plane 
	I0603 10:39:40.187407   15688 kubeadm.go:309] 
	I0603 10:39:40.187487   15688 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0603 10:39:40.187494   15688 kubeadm.go:309] 
	I0603 10:39:40.187590   15688 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token 9hbrg0.lmmhr5ylciaequvw \
	I0603 10:39:40.187731   15688 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:efe6cbdd58a590fb6c3b56a05a1648145bfb13b8a8cd2383ea34b710fa987e0b 
	I0603 10:39:40.188165   15688 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
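The --discovery-token-ca-cert-hash printed in the join commands above is a sha256 pin of the cluster CA's Subject Public Key Info. The short Go sketch below reproduces that value from a CA certificate; the path is the one the log copies the CA to, and the program would have to run on the node itself.

    package main

    import (
        "crypto/sha256"
        "crypto/x509"
        "encoding/hex"
        "encoding/pem"
        "fmt"
        "os"
    )

    func main() {
        pemBytes, err := os.ReadFile("/var/lib/minikube/certs/ca.crt")
        if err != nil {
            panic(err)
        }
        block, _ := pem.Decode(pemBytes)
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            panic(err)
        }
        // kubeadm's hash format: sha256 over the DER-encoded SubjectPublicKeyInfo.
        sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
        fmt.Println("sha256:" + hex.EncodeToString(sum[:]))
    }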
	I0603 10:39:40.188195   15688 cni.go:84] Creating CNI manager for ""
	I0603 10:39:40.188206   15688 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0603 10:39:40.189790   15688 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0603 10:39:40.190941   15688 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0603 10:39:40.201559   15688 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
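The 496-byte file copied to /etc/cni/net.d/1-k8s.conflist above is the bridge CNI configuration. Its exact contents are not shown in the log; as a hypothetical sketch only, a bridge conflist for the 10.244.0.0/16 pod CIDR named earlier generally looks roughly like:

    {
      "cniVersion": "0.4.0",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "hairpinMode": true,
          "ipam": {
            "type": "host-local",
            "subnet": "10.244.0.0/16"
          }
        },
        {
          "type": "portmap",
          "capabilities": { "portMappings": true }
        }
      ]
    }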
	I0603 10:39:40.221034   15688 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0603 10:39:40.221129   15688 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 10:39:40.221136   15688 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-926744 minikube.k8s.io/updated_at=2024_06_03T10_39_40_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=599070631c2216ebc936292d491e4fe10e15b9d8 minikube.k8s.io/name=addons-926744 minikube.k8s.io/primary=true
	I0603 10:39:40.360045   15688 ops.go:34] apiserver oom_adj: -16
	I0603 10:39:40.360133   15688 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 10:39:40.860813   15688 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 10:39:41.360423   15688 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 10:39:41.860916   15688 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 10:39:42.360194   15688 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 10:39:42.860438   15688 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 10:39:43.360500   15688 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 10:39:43.860260   15688 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 10:39:44.360675   15688 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 10:39:44.860710   15688 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 10:39:45.360191   15688 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 10:39:45.860896   15688 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 10:39:46.361031   15688 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 10:39:46.860193   15688 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 10:39:47.360946   15688 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 10:39:47.860306   15688 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 10:39:48.360590   15688 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 10:39:48.860941   15688 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 10:39:49.360407   15688 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 10:39:49.860893   15688 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 10:39:50.360464   15688 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 10:39:50.860825   15688 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 10:39:51.360932   15688 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 10:39:51.861021   15688 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 10:39:52.360313   15688 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 10:39:52.860689   15688 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 10:39:52.945051   15688 kubeadm.go:1107] duration metric: took 12.723998121s to wait for elevateKubeSystemPrivileges
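The repeated `kubectl get sa default` calls above are a poll loop: the command is re-run roughly every 500ms until the `default` service account exists, which is what the 12.7s `elevateKubeSystemPrivileges` duration measures. A minimal sketch of that polling pattern, with the binary path and kubeconfig flag taken from the log and the loop structure assumed rather than copied from minikube:

```go
// Sketch: poll for the default service account at ~500ms intervals, as the log suggests.
package main

import (
	"log"
	"os/exec"
	"time"
)

func main() {
	kubectl := "/var/lib/minikube/binaries/v1.30.1/kubectl"
	kubeconfig := "--kubeconfig=/var/lib/minikube/kubeconfig"

	deadline := time.Now().Add(2 * time.Minute) // assumed overall timeout
	for {
		// Succeeds only once kube-controller-manager has created the default service account.
		if err := exec.Command("sudo", kubectl, "get", "sa", "default", kubeconfig).Run(); err == nil {
			break
		}
		if time.Now().After(deadline) {
			log.Fatal("timed out waiting for the default service account")
		}
		time.Sleep(500 * time.Millisecond)
	}
	log.Println("default service account is present")
}
```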
	W0603 10:39:52.945090   15688 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0603 10:39:52.945097   15688 kubeadm.go:393] duration metric: took 23.661395353s to StartCluster
	I0603 10:39:52.945113   15688 settings.go:142] acquiring lock: {Name:mkda1bdbbfe91266270f1d999e6d56fc2830d6f8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 10:39:52.945246   15688 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19008-7755/kubeconfig
	I0603 10:39:52.945592   15688 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19008-7755/kubeconfig: {Name:mk2f45bf5da44c5570e14124ae482cac309d97b6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 10:39:52.945785   15688 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0603 10:39:52.945808   15688 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.39.188 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0603 10:39:52.947615   15688 out.go:177] * Verifying Kubernetes components...
	I0603 10:39:52.945867   15688 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:true inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0603 10:39:52.946064   15688 config.go:182] Loaded profile config "addons-926744": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0603 10:39:52.948920   15688 addons.go:69] Setting cloud-spanner=true in profile "addons-926744"
	I0603 10:39:52.948934   15688 addons.go:69] Setting helm-tiller=true in profile "addons-926744"
	I0603 10:39:52.948939   15688 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0603 10:39:52.948948   15688 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-926744"
	I0603 10:39:52.948953   15688 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-926744"
	I0603 10:39:52.948965   15688 addons.go:234] Setting addon helm-tiller=true in "addons-926744"
	I0603 10:39:52.948976   15688 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-926744"
	I0603 10:39:52.948982   15688 addons.go:69] Setting registry=true in profile "addons-926744"
	I0603 10:39:52.948977   15688 addons.go:69] Setting default-storageclass=true in profile "addons-926744"
	I0603 10:39:52.948934   15688 addons.go:69] Setting gcp-auth=true in profile "addons-926744"
	I0603 10:39:52.949005   15688 addons.go:69] Setting ingress=true in profile "addons-926744"
	I0603 10:39:52.949010   15688 addons.go:69] Setting storage-provisioner=true in profile "addons-926744"
	I0603 10:39:52.949012   15688 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-926744"
	I0603 10:39:52.949012   15688 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-926744"
	I0603 10:39:52.949021   15688 addons.go:69] Setting inspektor-gadget=true in profile "addons-926744"
	I0603 10:39:52.949074   15688 addons.go:234] Setting addon inspektor-gadget=true in "addons-926744"
	I0603 10:39:52.949097   15688 host.go:66] Checking if "addons-926744" exists ...
	I0603 10:39:52.949011   15688 addons.go:69] Setting volumesnapshots=true in profile "addons-926744"
	I0603 10:39:52.949172   15688 addons.go:234] Setting addon volumesnapshots=true in "addons-926744"
	I0603 10:39:52.949193   15688 host.go:66] Checking if "addons-926744" exists ...
	I0603 10:39:52.948966   15688 addons.go:234] Setting addon cloud-spanner=true in "addons-926744"
	I0603 10:39:52.949297   15688 host.go:66] Checking if "addons-926744" exists ...
	I0603 10:39:52.949006   15688 addons.go:69] Setting volcano=true in profile "addons-926744"
	I0603 10:39:52.949379   15688 addons.go:234] Setting addon volcano=true in "addons-926744"
	I0603 10:39:52.949419   15688 host.go:66] Checking if "addons-926744" exists ...
	I0603 10:39:52.949022   15688 addons.go:234] Setting addon ingress=true in "addons-926744"
	I0603 10:39:52.949470   15688 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 10:39:52.949476   15688 host.go:66] Checking if "addons-926744" exists ...
	I0603 10:39:52.949507   15688 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 10:39:52.949527   15688 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 10:39:52.949534   15688 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 10:39:52.949548   15688 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 10:39:52.949565   15688 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 10:39:52.948973   15688 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-926744"
	I0603 10:39:52.949731   15688 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-926744"
	I0603 10:39:52.949806   15688 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 10:39:52.949838   15688 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 10:39:52.949852   15688 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 10:39:52.949877   15688 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 10:39:52.949925   15688 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 10:39:52.948928   15688 addons.go:69] Setting yakd=true in profile "addons-926744"
	I0603 10:39:52.949954   15688 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 10:39:52.949980   15688 addons.go:234] Setting addon yakd=true in "addons-926744"
	I0603 10:39:52.948999   15688 host.go:66] Checking if "addons-926744" exists ...
	I0603 10:39:52.948999   15688 host.go:66] Checking if "addons-926744" exists ...
	I0603 10:39:52.949005   15688 addons.go:234] Setting addon registry=true in "addons-926744"
	I0603 10:39:52.950058   15688 host.go:66] Checking if "addons-926744" exists ...
	I0603 10:39:52.950091   15688 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 10:39:52.950124   15688 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 10:39:52.949027   15688 addons.go:69] Setting metrics-server=true in profile "addons-926744"
	I0603 10:39:52.950351   15688 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 10:39:52.950368   15688 addons.go:234] Setting addon metrics-server=true in "addons-926744"
	I0603 10:39:52.950372   15688 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 10:39:52.949034   15688 addons.go:69] Setting ingress-dns=true in profile "addons-926744"
	I0603 10:39:52.950392   15688 addons.go:234] Setting addon ingress-dns=true in "addons-926744"
	I0603 10:39:52.949030   15688 mustload.go:65] Loading cluster: addons-926744
	I0603 10:39:52.949044   15688 host.go:66] Checking if "addons-926744" exists ...
	I0603 10:39:52.950508   15688 host.go:66] Checking if "addons-926744" exists ...
	I0603 10:39:52.950583   15688 config.go:182] Loaded profile config "addons-926744": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0603 10:39:52.950609   15688 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 10:39:52.950628   15688 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 10:39:52.950672   15688 host.go:66] Checking if "addons-926744" exists ...
	I0603 10:39:52.950706   15688 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 10:39:52.950721   15688 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 10:39:52.949036   15688 addons.go:234] Setting addon storage-provisioner=true in "addons-926744"
	I0603 10:39:52.950819   15688 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 10:39:52.950840   15688 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 10:39:52.950884   15688 host.go:66] Checking if "addons-926744" exists ...
	I0603 10:39:52.950912   15688 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 10:39:52.950929   15688 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 10:39:52.950990   15688 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 10:39:52.951033   15688 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 10:39:52.951161   15688 host.go:66] Checking if "addons-926744" exists ...
	I0603 10:39:52.970563   15688 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41479
	I0603 10:39:52.970639   15688 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39871
	I0603 10:39:52.970999   15688 main.go:141] libmachine: () Calling .GetVersion
	I0603 10:39:52.971114   15688 main.go:141] libmachine: () Calling .GetVersion
	I0603 10:39:52.971640   15688 main.go:141] libmachine: Using API Version  1
	I0603 10:39:52.971662   15688 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 10:39:52.971790   15688 main.go:141] libmachine: Using API Version  1
	I0603 10:39:52.971805   15688 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 10:39:52.972098   15688 main.go:141] libmachine: () Calling .GetMachineName
	I0603 10:39:52.972153   15688 main.go:141] libmachine: () Calling .GetMachineName
	I0603 10:39:52.972372   15688 main.go:141] libmachine: (addons-926744) Calling .GetState
	I0603 10:39:52.972438   15688 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46487
	I0603 10:39:52.972745   15688 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 10:39:52.972764   15688 main.go:141] libmachine: () Calling .GetVersion
	I0603 10:39:52.972778   15688 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 10:39:52.973208   15688 main.go:141] libmachine: Using API Version  1
	I0603 10:39:52.973228   15688 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 10:39:52.973529   15688 main.go:141] libmachine: () Calling .GetMachineName
	I0603 10:39:52.974098   15688 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 10:39:52.974138   15688 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 10:39:52.974366   15688 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39493
	I0603 10:39:52.976847   15688 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-926744"
	I0603 10:39:52.976890   15688 host.go:66] Checking if "addons-926744" exists ...
	I0603 10:39:52.977273   15688 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 10:39:52.977300   15688 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 10:39:52.979512   15688 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 10:39:52.979547   15688 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 10:39:52.979807   15688 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 10:39:52.979824   15688 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 10:39:52.980430   15688 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 10:39:52.980463   15688 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 10:39:52.981875   15688 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35111
	I0603 10:39:52.982330   15688 main.go:141] libmachine: () Calling .GetVersion
	I0603 10:39:52.983218   15688 main.go:141] libmachine: Using API Version  1
	I0603 10:39:52.983241   15688 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 10:39:52.983669   15688 main.go:141] libmachine: () Calling .GetVersion
	I0603 10:39:52.984025   15688 main.go:141] libmachine: () Calling .GetMachineName
	I0603 10:39:52.984510   15688 main.go:141] libmachine: Using API Version  1
	I0603 10:39:52.984534   15688 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 10:39:52.985046   15688 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 10:39:52.985085   15688 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 10:39:52.992533   15688 main.go:141] libmachine: () Calling .GetMachineName
	I0603 10:39:52.993131   15688 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 10:39:52.993169   15688 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 10:39:52.993422   15688 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34655
	I0603 10:39:52.993863   15688 main.go:141] libmachine: () Calling .GetVersion
	I0603 10:39:52.995565   15688 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35215
	I0603 10:39:52.995773   15688 main.go:141] libmachine: Using API Version  1
	I0603 10:39:52.995785   15688 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 10:39:52.996166   15688 main.go:141] libmachine: () Calling .GetVersion
	I0603 10:39:52.996645   15688 main.go:141] libmachine: Using API Version  1
	I0603 10:39:52.996662   15688 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 10:39:52.997009   15688 main.go:141] libmachine: () Calling .GetMachineName
	I0603 10:39:52.997221   15688 main.go:141] libmachine: (addons-926744) Calling .GetState
	I0603 10:39:52.998575   15688 main.go:141] libmachine: () Calling .GetMachineName
	I0603 10:39:52.999596   15688 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 10:39:52.999644   15688 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 10:39:53.001119   15688 addons.go:234] Setting addon default-storageclass=true in "addons-926744"
	I0603 10:39:53.001160   15688 host.go:66] Checking if "addons-926744" exists ...
	I0603 10:39:53.001516   15688 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 10:39:53.001547   15688 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 10:39:53.001717   15688 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43597
	I0603 10:39:53.002104   15688 main.go:141] libmachine: () Calling .GetVersion
	I0603 10:39:53.002628   15688 main.go:141] libmachine: Using API Version  1
	I0603 10:39:53.002646   15688 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 10:39:53.003080   15688 main.go:141] libmachine: () Calling .GetMachineName
	I0603 10:39:53.003641   15688 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 10:39:53.003675   15688 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 10:39:53.008851   15688 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45963
	I0603 10:39:53.009301   15688 main.go:141] libmachine: () Calling .GetVersion
	I0603 10:39:53.009828   15688 main.go:141] libmachine: Using API Version  1
	I0603 10:39:53.009845   15688 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 10:39:53.010204   15688 main.go:141] libmachine: () Calling .GetMachineName
	I0603 10:39:53.010400   15688 main.go:141] libmachine: (addons-926744) Calling .GetState
	I0603 10:39:53.012150   15688 main.go:141] libmachine: (addons-926744) Calling .DriverName
	I0603 10:39:53.014624   15688 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0603 10:39:53.016068   15688 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0603 10:39:53.016085   15688 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0603 10:39:53.016116   15688 main.go:141] libmachine: (addons-926744) Calling .GetSSHHostname
	I0603 10:39:53.020339   15688 main.go:141] libmachine: (addons-926744) DBG | domain addons-926744 has defined MAC address 52:54:00:ef:0f:40 in network mk-addons-926744
	I0603 10:39:53.020725   15688 main.go:141] libmachine: (addons-926744) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ef:0f:40", ip: ""} in network mk-addons-926744: {Iface:virbr1 ExpiryTime:2024-06-03 11:39:15 +0000 UTC Type:0 Mac:52:54:00:ef:0f:40 Iaid: IPaddr:192.168.39.188 Prefix:24 Hostname:addons-926744 Clientid:01:52:54:00:ef:0f:40}
	I0603 10:39:53.021026   15688 main.go:141] libmachine: (addons-926744) DBG | domain addons-926744 has defined IP address 192.168.39.188 and MAC address 52:54:00:ef:0f:40 in network mk-addons-926744
	I0603 10:39:53.020970   15688 main.go:141] libmachine: (addons-926744) Calling .GetSSHPort
	I0603 10:39:53.021205   15688 main.go:141] libmachine: (addons-926744) Calling .GetSSHKeyPath
	I0603 10:39:53.021342   15688 main.go:141] libmachine: (addons-926744) Calling .GetSSHUsername
	I0603 10:39:53.021511   15688 sshutil.go:53] new ssh client: &{IP:192.168.39.188 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19008-7755/.minikube/machines/addons-926744/id_rsa Username:docker}
	I0603 10:39:53.021837   15688 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39647
	I0603 10:39:53.022185   15688 main.go:141] libmachine: () Calling .GetVersion
	I0603 10:39:53.022796   15688 main.go:141] libmachine: Using API Version  1
	I0603 10:39:53.022814   15688 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 10:39:53.023291   15688 main.go:141] libmachine: () Calling .GetMachineName
	I0603 10:39:53.023918   15688 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 10:39:53.023958   15688 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 10:39:53.024205   15688 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42115
	I0603 10:39:53.024652   15688 main.go:141] libmachine: () Calling .GetVersion
	I0603 10:39:53.025198   15688 main.go:141] libmachine: Using API Version  1
	I0603 10:39:53.025217   15688 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 10:39:53.025282   15688 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37603
	I0603 10:39:53.025616   15688 main.go:141] libmachine: () Calling .GetMachineName
	I0603 10:39:53.026246   15688 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 10:39:53.026288   15688 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 10:39:53.030426   15688 main.go:141] libmachine: () Calling .GetVersion
	I0603 10:39:53.031073   15688 main.go:141] libmachine: Using API Version  1
	I0603 10:39:53.031102   15688 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 10:39:53.032416   15688 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34591
	I0603 10:39:53.032850   15688 main.go:141] libmachine: () Calling .GetVersion
	I0603 10:39:53.032924   15688 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41325
	I0603 10:39:53.033393   15688 main.go:141] libmachine: Using API Version  1
	I0603 10:39:53.033410   15688 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 10:39:53.033522   15688 main.go:141] libmachine: () Calling .GetVersion
	I0603 10:39:53.033875   15688 main.go:141] libmachine: () Calling .GetMachineName
	I0603 10:39:53.033994   15688 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40065
	I0603 10:39:53.034019   15688 main.go:141] libmachine: () Calling .GetMachineName
	I0603 10:39:53.034069   15688 main.go:141] libmachine: (addons-926744) Calling .GetState
	I0603 10:39:53.034614   15688 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 10:39:53.034654   15688 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 10:39:53.034861   15688 main.go:141] libmachine: () Calling .GetVersion
	I0603 10:39:53.035021   15688 main.go:141] libmachine: Using API Version  1
	I0603 10:39:53.035056   15688 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 10:39:53.035444   15688 main.go:141] libmachine: Using API Version  1
	I0603 10:39:53.035464   15688 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 10:39:53.035652   15688 main.go:141] libmachine: () Calling .GetMachineName
	I0603 10:39:53.036124   15688 main.go:141] libmachine: () Calling .GetMachineName
	I0603 10:39:53.036166   15688 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 10:39:53.036198   15688 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 10:39:53.037213   15688 main.go:141] libmachine: (addons-926744) Calling .DriverName
	I0603 10:39:53.039232   15688 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0603 10:39:53.038052   15688 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 10:39:53.038082   15688 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40293
	I0603 10:39:53.039729   15688 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41565
	I0603 10:39:53.041948   15688 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.10.1
	I0603 10:39:53.040652   15688 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 10:39:53.040000   15688 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36691
	I0603 10:39:53.039794   15688 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35701
	I0603 10:39:53.041067   15688 main.go:141] libmachine: () Calling .GetVersion
	I0603 10:39:53.044452   15688 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0603 10:39:53.045836   15688 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0603 10:39:53.045853   15688 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0603 10:39:53.045870   15688 main.go:141] libmachine: (addons-926744) Calling .GetSSHHostname
	I0603 10:39:53.043775   15688 main.go:141] libmachine: () Calling .GetVersion
	I0603 10:39:53.043846   15688 main.go:141] libmachine: () Calling .GetVersion
	I0603 10:39:53.044738   15688 main.go:141] libmachine: Using API Version  1
	I0603 10:39:53.045984   15688 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 10:39:53.046671   15688 main.go:141] libmachine: Using API Version  1
	I0603 10:39:53.046689   15688 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 10:39:53.047093   15688 main.go:141] libmachine: () Calling .GetMachineName
	I0603 10:39:53.047637   15688 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 10:39:53.047673   15688 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 10:39:53.047950   15688 main.go:141] libmachine: Using API Version  1
	I0603 10:39:53.047964   15688 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 10:39:53.048026   15688 main.go:141] libmachine: () Calling .GetVersion
	I0603 10:39:53.048091   15688 main.go:141] libmachine: () Calling .GetMachineName
	I0603 10:39:53.048530   15688 main.go:141] libmachine: Using API Version  1
	I0603 10:39:53.048546   15688 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 10:39:53.048607   15688 main.go:141] libmachine: (addons-926744) Calling .GetState
	I0603 10:39:53.048645   15688 main.go:141] libmachine: () Calling .GetMachineName
	I0603 10:39:53.048825   15688 main.go:141] libmachine: () Calling .GetMachineName
	I0603 10:39:53.049308   15688 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 10:39:53.049350   15688 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 10:39:53.049553   15688 main.go:141] libmachine: (addons-926744) DBG | domain addons-926744 has defined MAC address 52:54:00:ef:0f:40 in network mk-addons-926744
	I0603 10:39:53.049575   15688 main.go:141] libmachine: (addons-926744) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ef:0f:40", ip: ""} in network mk-addons-926744: {Iface:virbr1 ExpiryTime:2024-06-03 11:39:15 +0000 UTC Type:0 Mac:52:54:00:ef:0f:40 Iaid: IPaddr:192.168.39.188 Prefix:24 Hostname:addons-926744 Clientid:01:52:54:00:ef:0f:40}
	I0603 10:39:53.049601   15688 main.go:141] libmachine: (addons-926744) DBG | domain addons-926744 has defined IP address 192.168.39.188 and MAC address 52:54:00:ef:0f:40 in network mk-addons-926744
	I0603 10:39:53.049991   15688 main.go:141] libmachine: (addons-926744) Calling .GetSSHPort
	I0603 10:39:53.050169   15688 main.go:141] libmachine: (addons-926744) Calling .GetSSHKeyPath
	I0603 10:39:53.050356   15688 main.go:141] libmachine: (addons-926744) Calling .GetSSHUsername
	I0603 10:39:53.050564   15688 sshutil.go:53] new ssh client: &{IP:192.168.39.188 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19008-7755/.minikube/machines/addons-926744/id_rsa Username:docker}
	I0603 10:39:53.051172   15688 main.go:141] libmachine: (addons-926744) Calling .DriverName
	I0603 10:39:53.052852   15688 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.28.1
	I0603 10:39:53.053974   15688 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0603 10:39:53.053990   15688 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0603 10:39:53.054008   15688 main.go:141] libmachine: (addons-926744) Calling .GetSSHHostname
	I0603 10:39:53.052746   15688 main.go:141] libmachine: (addons-926744) Calling .GetState
	I0603 10:39:53.052965   15688 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32875
	I0603 10:39:53.055290   15688 main.go:141] libmachine: () Calling .GetVersion
	I0603 10:39:53.055792   15688 main.go:141] libmachine: Using API Version  1
	I0603 10:39:53.055807   15688 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 10:39:53.056165   15688 main.go:141] libmachine: () Calling .GetMachineName
	I0603 10:39:53.056334   15688 main.go:141] libmachine: (addons-926744) Calling .GetState
	I0603 10:39:53.056927   15688 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41579
	I0603 10:39:53.057072   15688 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33749
	I0603 10:39:53.057362   15688 host.go:66] Checking if "addons-926744" exists ...
	I0603 10:39:53.057724   15688 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 10:39:53.057756   15688 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 10:39:53.057844   15688 main.go:141] libmachine: (addons-926744) DBG | domain addons-926744 has defined MAC address 52:54:00:ef:0f:40 in network mk-addons-926744
	I0603 10:39:53.058240   15688 main.go:141] libmachine: () Calling .GetVersion
	I0603 10:39:53.058696   15688 main.go:141] libmachine: (addons-926744) Calling .DriverName
	I0603 10:39:53.058845   15688 main.go:141] libmachine: Using API Version  1
	I0603 10:39:53.058856   15688 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 10:39:53.058910   15688 main.go:141] libmachine: (addons-926744) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ef:0f:40", ip: ""} in network mk-addons-926744: {Iface:virbr1 ExpiryTime:2024-06-03 11:39:15 +0000 UTC Type:0 Mac:52:54:00:ef:0f:40 Iaid: IPaddr:192.168.39.188 Prefix:24 Hostname:addons-926744 Clientid:01:52:54:00:ef:0f:40}
	I0603 10:39:53.058927   15688 main.go:141] libmachine: (addons-926744) DBG | domain addons-926744 has defined IP address 192.168.39.188 and MAC address 52:54:00:ef:0f:40 in network mk-addons-926744
	I0603 10:39:53.060316   15688 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.15.0
	I0603 10:39:53.059222   15688 main.go:141] libmachine: (addons-926744) Calling .GetSSHPort
	I0603 10:39:53.059270   15688 main.go:141] libmachine: () Calling .GetVersion
	I0603 10:39:53.059313   15688 main.go:141] libmachine: () Calling .GetMachineName
	I0603 10:39:53.062075   15688 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0603 10:39:53.062093   15688 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0603 10:39:53.062110   15688 main.go:141] libmachine: (addons-926744) Calling .GetSSHHostname
	I0603 10:39:53.062161   15688 main.go:141] libmachine: (addons-926744) Calling .GetSSHKeyPath
	I0603 10:39:53.062326   15688 main.go:141] libmachine: (addons-926744) Calling .GetSSHUsername
	I0603 10:39:53.062455   15688 main.go:141] libmachine: Using API Version  1
	I0603 10:39:53.062469   15688 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 10:39:53.062517   15688 main.go:141] libmachine: (addons-926744) Calling .GetState
	I0603 10:39:53.063125   15688 sshutil.go:53] new ssh client: &{IP:192.168.39.188 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19008-7755/.minikube/machines/addons-926744/id_rsa Username:docker}
	I0603 10:39:53.063692   15688 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45559
	I0603 10:39:53.063936   15688 main.go:141] libmachine: () Calling .GetMachineName
	I0603 10:39:53.064106   15688 main.go:141] libmachine: (addons-926744) Calling .GetState
	I0603 10:39:53.064515   15688 main.go:141] libmachine: (addons-926744) Calling .DriverName
	I0603 10:39:53.064827   15688 main.go:141] libmachine: Making call to close driver server
	I0603 10:39:53.064839   15688 main.go:141] libmachine: (addons-926744) Calling .Close
	I0603 10:39:53.066768   15688 main.go:141] libmachine: (addons-926744) Calling .DriverName
	I0603 10:39:53.066777   15688 main.go:141] libmachine: () Calling .GetVersion
	I0603 10:39:53.066829   15688 main.go:141] libmachine: (addons-926744) DBG | Closing plugin on server side
	I0603 10:39:53.066851   15688 main.go:141] libmachine: Successfully made call to close driver server
	I0603 10:39:53.066861   15688 main.go:141] libmachine: Making call to close connection to plugin binary
	I0603 10:39:53.066876   15688 main.go:141] libmachine: Making call to close driver server
	I0603 10:39:53.066887   15688 main.go:141] libmachine: (addons-926744) Calling .Close
	I0603 10:39:53.067269   15688 main.go:141] libmachine: (addons-926744) DBG | Closing plugin on server side
	I0603 10:39:53.067304   15688 main.go:141] libmachine: Successfully made call to close driver server
	I0603 10:39:53.067313   15688 main.go:141] libmachine: Making call to close connection to plugin binary
	W0603 10:39:53.067403   15688 out.go:239] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I0603 10:39:53.068979   15688 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.1
	I0603 10:39:53.068987   15688 main.go:141] libmachine: (addons-926744) Calling .GetSSHPort
	I0603 10:39:53.070227   15688 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0603 10:39:53.068473   15688 main.go:141] libmachine: Using API Version  1
	I0603 10:39:53.068357   15688 main.go:141] libmachine: (addons-926744) DBG | domain addons-926744 has defined MAC address 52:54:00:ef:0f:40 in network mk-addons-926744
	I0603 10:39:53.070264   15688 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 10:39:53.070291   15688 main.go:141] libmachine: (addons-926744) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ef:0f:40", ip: ""} in network mk-addons-926744: {Iface:virbr1 ExpiryTime:2024-06-03 11:39:15 +0000 UTC Type:0 Mac:52:54:00:ef:0f:40 Iaid: IPaddr:192.168.39.188 Prefix:24 Hostname:addons-926744 Clientid:01:52:54:00:ef:0f:40}
	I0603 10:39:53.069143   15688 main.go:141] libmachine: (addons-926744) Calling .GetSSHKeyPath
	I0603 10:39:53.070329   15688 main.go:141] libmachine: (addons-926744) DBG | domain addons-926744 has defined IP address 192.168.39.188 and MAC address 52:54:00:ef:0f:40 in network mk-addons-926744
	I0603 10:39:53.069875   15688 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40945
	I0603 10:39:53.070245   15688 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0603 10:39:53.070384   15688 main.go:141] libmachine: (addons-926744) Calling .GetSSHHostname
	I0603 10:39:53.070575   15688 main.go:141] libmachine: (addons-926744) Calling .GetSSHUsername
	I0603 10:39:53.070719   15688 sshutil.go:53] new ssh client: &{IP:192.168.39.188 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19008-7755/.minikube/machines/addons-926744/id_rsa Username:docker}
	I0603 10:39:53.071073   15688 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44275
	I0603 10:39:53.071500   15688 main.go:141] libmachine: () Calling .GetVersion
	I0603 10:39:53.072068   15688 main.go:141] libmachine: Using API Version  1
	I0603 10:39:53.072083   15688 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 10:39:53.072462   15688 main.go:141] libmachine: () Calling .GetMachineName
	I0603 10:39:53.072641   15688 main.go:141] libmachine: (addons-926744) Calling .GetState
	I0603 10:39:53.073007   15688 main.go:141] libmachine: () Calling .GetMachineName
	I0603 10:39:53.073589   15688 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 10:39:53.073609   15688 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 10:39:53.073774   15688 main.go:141] libmachine: (addons-926744) DBG | domain addons-926744 has defined MAC address 52:54:00:ef:0f:40 in network mk-addons-926744
	I0603 10:39:53.074030   15688 main.go:141] libmachine: (addons-926744) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ef:0f:40", ip: ""} in network mk-addons-926744: {Iface:virbr1 ExpiryTime:2024-06-03 11:39:15 +0000 UTC Type:0 Mac:52:54:00:ef:0f:40 Iaid: IPaddr:192.168.39.188 Prefix:24 Hostname:addons-926744 Clientid:01:52:54:00:ef:0f:40}
	I0603 10:39:53.074053   15688 main.go:141] libmachine: (addons-926744) DBG | domain addons-926744 has defined IP address 192.168.39.188 and MAC address 52:54:00:ef:0f:40 in network mk-addons-926744
	I0603 10:39:53.074214   15688 main.go:141] libmachine: (addons-926744) Calling .GetSSHPort
	I0603 10:39:53.074373   15688 main.go:141] libmachine: (addons-926744) Calling .GetSSHKeyPath
	I0603 10:39:53.074562   15688 main.go:141] libmachine: (addons-926744) Calling .GetSSHUsername
	I0603 10:39:53.074695   15688 sshutil.go:53] new ssh client: &{IP:192.168.39.188 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19008-7755/.minikube/machines/addons-926744/id_rsa Username:docker}
	I0603 10:39:53.075425   15688 main.go:141] libmachine: (addons-926744) Calling .DriverName
	I0603 10:39:53.075490   15688 main.go:141] libmachine: () Calling .GetVersion
	I0603 10:39:53.077383   15688 out.go:177]   - Using image ghcr.io/helm/tiller:v2.17.0
	I0603 10:39:53.075843   15688 main.go:141] libmachine: Using API Version  1
	I0603 10:39:53.076295   15688 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42731
	I0603 10:39:53.077059   15688 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46549
	I0603 10:39:53.078780   15688 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 10:39:53.078799   15688 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-dp.yaml
	I0603 10:39:53.078814   15688 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-dp.yaml (2422 bytes)
	I0603 10:39:53.078831   15688 main.go:141] libmachine: (addons-926744) Calling .GetSSHHostname
	I0603 10:39:53.078894   15688 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39045
	I0603 10:39:53.079256   15688 main.go:141] libmachine: () Calling .GetVersion
	I0603 10:39:53.079349   15688 main.go:141] libmachine: () Calling .GetVersion
	I0603 10:39:53.079408   15688 main.go:141] libmachine: () Calling .GetVersion
	I0603 10:39:53.079849   15688 main.go:141] libmachine: Using API Version  1
	I0603 10:39:53.079866   15688 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 10:39:53.079987   15688 main.go:141] libmachine: Using API Version  1
	I0603 10:39:53.080001   15688 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 10:39:53.080315   15688 main.go:141] libmachine: () Calling .GetMachineName
	I0603 10:39:53.080502   15688 main.go:141] libmachine: (addons-926744) Calling .GetState
	I0603 10:39:53.080542   15688 main.go:141] libmachine: () Calling .GetMachineName
	I0603 10:39:53.080630   15688 main.go:141] libmachine: Using API Version  1
	I0603 10:39:53.080648   15688 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 10:39:53.080994   15688 main.go:141] libmachine: () Calling .GetMachineName
	I0603 10:39:53.081177   15688 main.go:141] libmachine: (addons-926744) Calling .GetState
	I0603 10:39:53.081518   15688 main.go:141] libmachine: () Calling .GetMachineName
	I0603 10:39:53.081842   15688 main.go:141] libmachine: (addons-926744) Calling .GetState
	I0603 10:39:53.083148   15688 main.go:141] libmachine: (addons-926744) Calling .DriverName
	I0603 10:39:53.085228   15688 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.2
	I0603 10:39:53.084277   15688 main.go:141] libmachine: (addons-926744) Calling .DriverName
	I0603 10:39:53.084619   15688 main.go:141] libmachine: (addons-926744) Calling .DriverName
	I0603 10:39:53.085021   15688 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 10:39:53.085057   15688 main.go:141] libmachine: (addons-926744) DBG | domain addons-926744 has defined MAC address 52:54:00:ef:0f:40 in network mk-addons-926744
	I0603 10:39:53.085742   15688 main.go:141] libmachine: (addons-926744) Calling .GetSSHPort
	I0603 10:39:53.086311   15688 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 10:39:53.086352   15688 main.go:141] libmachine: (addons-926744) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ef:0f:40", ip: ""} in network mk-addons-926744: {Iface:virbr1 ExpiryTime:2024-06-03 11:39:15 +0000 UTC Type:0 Mac:52:54:00:ef:0f:40 Iaid: IPaddr:192.168.39.188 Prefix:24 Hostname:addons-926744 Clientid:01:52:54:00:ef:0f:40}
	I0603 10:39:53.086367   15688 main.go:141] libmachine: (addons-926744) DBG | domain addons-926744 has defined IP address 192.168.39.188 and MAC address 52:54:00:ef:0f:40 in network mk-addons-926744
	I0603 10:39:53.086379   15688 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0603 10:39:53.086393   15688 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0603 10:39:53.086406   15688 main.go:141] libmachine: (addons-926744) Calling .GetSSHHostname
	I0603 10:39:53.086616   15688 main.go:141] libmachine: (addons-926744) Calling .GetSSHKeyPath
	I0603 10:39:53.087918   15688 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.17
	I0603 10:39:53.086954   15688 main.go:141] libmachine: (addons-926744) Calling .GetSSHUsername
	I0603 10:39:53.087116   15688 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45847
	I0603 10:39:53.088326   15688 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37485
	I0603 10:39:53.089157   15688 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0603 10:39:53.089169   15688 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0603 10:39:53.089186   15688 main.go:141] libmachine: (addons-926744) Calling .GetSSHHostname
	I0603 10:39:53.089210   15688 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0603 10:39:53.089364   15688 sshutil.go:53] new ssh client: &{IP:192.168.39.188 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19008-7755/.minikube/machines/addons-926744/id_rsa Username:docker}
	I0603 10:39:53.089479   15688 main.go:141] libmachine: (addons-926744) DBG | domain addons-926744 has defined MAC address 52:54:00:ef:0f:40 in network mk-addons-926744
	I0603 10:39:53.090576   15688 main.go:141] libmachine: (addons-926744) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ef:0f:40", ip: ""} in network mk-addons-926744: {Iface:virbr1 ExpiryTime:2024-06-03 11:39:15 +0000 UTC Type:0 Mac:52:54:00:ef:0f:40 Iaid: IPaddr:192.168.39.188 Prefix:24 Hostname:addons-926744 Clientid:01:52:54:00:ef:0f:40}
	I0603 10:39:53.090606   15688 main.go:141] libmachine: (addons-926744) DBG | domain addons-926744 has defined IP address 192.168.39.188 and MAC address 52:54:00:ef:0f:40 in network mk-addons-926744
	I0603 10:39:53.090233   15688 main.go:141] libmachine: (addons-926744) Calling .GetSSHPort
	I0603 10:39:53.090258   15688 main.go:141] libmachine: () Calling .GetVersion
	I0603 10:39:53.090305   15688 main.go:141] libmachine: () Calling .GetVersion
	I0603 10:39:53.090420   15688 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0603 10:39:53.090737   15688 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0603 10:39:53.090759   15688 main.go:141] libmachine: (addons-926744) Calling .GetSSHHostname
	I0603 10:39:53.090957   15688 main.go:141] libmachine: (addons-926744) Calling .GetSSHKeyPath
	I0603 10:39:53.091174   15688 main.go:141] libmachine: Using API Version  1
	I0603 10:39:53.091193   15688 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 10:39:53.091242   15688 main.go:141] libmachine: (addons-926744) Calling .GetSSHUsername
	I0603 10:39:53.091846   15688 sshutil.go:53] new ssh client: &{IP:192.168.39.188 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19008-7755/.minikube/machines/addons-926744/id_rsa Username:docker}
	I0603 10:39:53.091943   15688 main.go:141] libmachine: Using API Version  1
	I0603 10:39:53.091957   15688 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 10:39:53.092298   15688 main.go:141] libmachine: () Calling .GetMachineName
	I0603 10:39:53.092499   15688 main.go:141] libmachine: (addons-926744) Calling .GetState
	I0603 10:39:53.093097   15688 main.go:141] libmachine: () Calling .GetMachineName
	I0603 10:39:53.093408   15688 main.go:141] libmachine: (addons-926744) Calling .GetState
	I0603 10:39:53.094481   15688 main.go:141] libmachine: (addons-926744) DBG | domain addons-926744 has defined MAC address 52:54:00:ef:0f:40 in network mk-addons-926744
	I0603 10:39:53.094886   15688 main.go:141] libmachine: (addons-926744) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ef:0f:40", ip: ""} in network mk-addons-926744: {Iface:virbr1 ExpiryTime:2024-06-03 11:39:15 +0000 UTC Type:0 Mac:52:54:00:ef:0f:40 Iaid: IPaddr:192.168.39.188 Prefix:24 Hostname:addons-926744 Clientid:01:52:54:00:ef:0f:40}
	I0603 10:39:53.094912   15688 main.go:141] libmachine: (addons-926744) DBG | domain addons-926744 has defined IP address 192.168.39.188 and MAC address 52:54:00:ef:0f:40 in network mk-addons-926744
	I0603 10:39:53.095141   15688 main.go:141] libmachine: (addons-926744) Calling .GetSSHPort
	I0603 10:39:53.095313   15688 main.go:141] libmachine: (addons-926744) Calling .GetSSHKeyPath
	I0603 10:39:53.095455   15688 main.go:141] libmachine: (addons-926744) Calling .DriverName
	I0603 10:39:53.095662   15688 main.go:141] libmachine: (addons-926744) Calling .DriverName
	I0603 10:39:53.095750   15688 main.go:141] libmachine: (addons-926744) Calling .GetSSHUsername
	I0603 10:39:53.097526   15688 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0603 10:39:53.095863   15688 sshutil.go:53] new ssh client: &{IP:192.168.39.188 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19008-7755/.minikube/machines/addons-926744/id_rsa Username:docker}
	I0603 10:39:53.096678   15688 main.go:141] libmachine: (addons-926744) DBG | domain addons-926744 has defined MAC address 52:54:00:ef:0f:40 in network mk-addons-926744
	I0603 10:39:53.096729   15688 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36909
	I0603 10:39:53.097275   15688 main.go:141] libmachine: (addons-926744) Calling .GetSSHPort
	I0603 10:39:53.098697   15688 main.go:141] libmachine: (addons-926744) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ef:0f:40", ip: ""} in network mk-addons-926744: {Iface:virbr1 ExpiryTime:2024-06-03 11:39:15 +0000 UTC Type:0 Mac:52:54:00:ef:0f:40 Iaid: IPaddr:192.168.39.188 Prefix:24 Hostname:addons-926744 Clientid:01:52:54:00:ef:0f:40}
	I0603 10:39:53.099128   15688 main.go:141] libmachine: () Calling .GetVersion
	I0603 10:39:53.099837   15688 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0603 10:39:53.100810   15688 main.go:141] libmachine: (addons-926744) DBG | domain addons-926744 has defined IP address 192.168.39.188 and MAC address 52:54:00:ef:0f:40 in network mk-addons-926744
	I0603 10:39:53.100798   15688 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0603 10:39:53.099979   15688 main.go:141] libmachine: (addons-926744) Calling .GetSSHKeyPath
	I0603 10:39:53.101241   15688 main.go:141] libmachine: Using API Version  1
	I0603 10:39:53.101942   15688 out.go:177]   - Using image docker.io/busybox:stable
	I0603 10:39:53.103239   15688 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0603 10:39:53.103257   15688 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0603 10:39:53.103272   15688 main.go:141] libmachine: (addons-926744) Calling .GetSSHHostname
	I0603 10:39:53.101957   15688 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 10:39:53.102144   15688 main.go:141] libmachine: (addons-926744) Calling .GetSSHUsername
	I0603 10:39:53.103583   15688 main.go:141] libmachine: () Calling .GetMachineName
	I0603 10:39:53.104164   15688 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34221
	I0603 10:39:53.104716   15688 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0603 10:39:53.104837   15688 main.go:141] libmachine: (addons-926744) Calling .GetState
	I0603 10:39:53.104983   15688 sshutil.go:53] new ssh client: &{IP:192.168.39.188 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19008-7755/.minikube/machines/addons-926744/id_rsa Username:docker}
	I0603 10:39:53.105289   15688 main.go:141] libmachine: () Calling .GetVersion
	I0603 10:39:53.106056   15688 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0603 10:39:53.106377   15688 main.go:141] libmachine: (addons-926744) DBG | domain addons-926744 has defined MAC address 52:54:00:ef:0f:40 in network mk-addons-926744
	I0603 10:39:53.106882   15688 main.go:141] libmachine: Using API Version  1
	I0603 10:39:53.107472   15688 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 10:39:53.107514   15688 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0603 10:39:53.107277   15688 main.go:141] libmachine: (addons-926744) Calling .GetSSHPort
	I0603 10:39:53.107582   15688 main.go:141] libmachine: (addons-926744) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ef:0f:40", ip: ""} in network mk-addons-926744: {Iface:virbr1 ExpiryTime:2024-06-03 11:39:15 +0000 UTC Type:0 Mac:52:54:00:ef:0f:40 Iaid: IPaddr:192.168.39.188 Prefix:24 Hostname:addons-926744 Clientid:01:52:54:00:ef:0f:40}
	I0603 10:39:53.108447   15688 main.go:141] libmachine: (addons-926744) Calling .DriverName
	I0603 10:39:53.108562   15688 main.go:141] libmachine: () Calling .GetMachineName
	I0603 10:39:53.109179   15688 main.go:141] libmachine: (addons-926744) DBG | domain addons-926744 has defined IP address 192.168.39.188 and MAC address 52:54:00:ef:0f:40 in network mk-addons-926744
	I0603 10:39:53.109186   15688 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0603 10:39:53.109373   15688 main.go:141] libmachine: (addons-926744) Calling .GetSSHKeyPath
	I0603 10:39:53.109469   15688 main.go:141] libmachine: (addons-926744) Calling .GetState
	I0603 10:39:53.110486   15688 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0603 10:39:53.110647   15688 main.go:141] libmachine: (addons-926744) Calling .GetSSHUsername
	I0603 10:39:53.111765   15688 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.4
	I0603 10:39:53.113194   15688 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0603 10:39:53.113212   15688 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0603 10:39:53.113229   15688 main.go:141] libmachine: (addons-926744) Calling .GetSSHHostname
	I0603 10:39:53.111944   15688 sshutil.go:53] new ssh client: &{IP:192.168.39.188 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19008-7755/.minikube/machines/addons-926744/id_rsa Username:docker}
	I0603 10:39:53.113354   15688 main.go:141] libmachine: (addons-926744) Calling .DriverName
	I0603 10:39:53.113365   15688 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37585
	I0603 10:39:53.113786   15688 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33145
	I0603 10:39:53.115002   15688 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0603 10:39:53.116620   15688 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0603 10:39:53.116636   15688 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0603 10:39:53.116653   15688 main.go:141] libmachine: (addons-926744) Calling .GetSSHHostname
	I0603 10:39:53.117939   15688 out.go:177]   - Using image docker.io/registry:2.8.3
	I0603 10:39:53.116005   15688 main.go:141] libmachine: () Calling .GetVersion
	I0603 10:39:53.116055   15688 main.go:141] libmachine: () Calling .GetVersion
	I0603 10:39:53.116535   15688 main.go:141] libmachine: (addons-926744) DBG | domain addons-926744 has defined MAC address 52:54:00:ef:0f:40 in network mk-addons-926744
	I0603 10:39:53.117073   15688 main.go:141] libmachine: (addons-926744) Calling .GetSSHPort
	I0603 10:39:53.120316   15688 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0603 10:39:53.119253   15688 main.go:141] libmachine: (addons-926744) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ef:0f:40", ip: ""} in network mk-addons-926744: {Iface:virbr1 ExpiryTime:2024-06-03 11:39:15 +0000 UTC Type:0 Mac:52:54:00:ef:0f:40 Iaid: IPaddr:192.168.39.188 Prefix:24 Hostname:addons-926744 Clientid:01:52:54:00:ef:0f:40}
	I0603 10:39:53.119363   15688 main.go:141] libmachine: (addons-926744) Calling .GetSSHKeyPath
	I0603 10:39:53.119726   15688 main.go:141] libmachine: (addons-926744) DBG | domain addons-926744 has defined MAC address 52:54:00:ef:0f:40 in network mk-addons-926744
	I0603 10:39:53.119749   15688 main.go:141] libmachine: Using API Version  1
	I0603 10:39:53.120214   15688 main.go:141] libmachine: Using API Version  1
	I0603 10:39:53.120468   15688 main.go:141] libmachine: (addons-926744) Calling .GetSSHPort
	I0603 10:39:53.121506   15688 main.go:141] libmachine: (addons-926744) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ef:0f:40", ip: ""} in network mk-addons-926744: {Iface:virbr1 ExpiryTime:2024-06-03 11:39:15 +0000 UTC Type:0 Mac:52:54:00:ef:0f:40 Iaid: IPaddr:192.168.39.188 Prefix:24 Hostname:addons-926744 Clientid:01:52:54:00:ef:0f:40}
	I0603 10:39:53.121527   15688 main.go:141] libmachine: (addons-926744) DBG | domain addons-926744 has defined IP address 192.168.39.188 and MAC address 52:54:00:ef:0f:40 in network mk-addons-926744
	I0603 10:39:53.121534   15688 main.go:141] libmachine: (addons-926744) DBG | domain addons-926744 has defined IP address 192.168.39.188 and MAC address 52:54:00:ef:0f:40 in network mk-addons-926744
	I0603 10:39:53.121544   15688 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 10:39:53.121549   15688 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0603 10:39:53.121560   15688 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 10:39:53.121563   15688 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (798 bytes)
	I0603 10:39:53.121576   15688 main.go:141] libmachine: (addons-926744) Calling .GetSSHHostname
	I0603 10:39:53.121608   15688 main.go:141] libmachine: (addons-926744) Calling .GetSSHKeyPath
	I0603 10:39:53.121715   15688 main.go:141] libmachine: (addons-926744) Calling .GetSSHUsername
	I0603 10:39:53.121788   15688 main.go:141] libmachine: (addons-926744) Calling .GetSSHUsername
	I0603 10:39:53.121831   15688 sshutil.go:53] new ssh client: &{IP:192.168.39.188 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19008-7755/.minikube/machines/addons-926744/id_rsa Username:docker}
	I0603 10:39:53.121851   15688 main.go:141] libmachine: () Calling .GetMachineName
	I0603 10:39:53.121954   15688 sshutil.go:53] new ssh client: &{IP:192.168.39.188 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19008-7755/.minikube/machines/addons-926744/id_rsa Username:docker}
	I0603 10:39:53.122181   15688 main.go:141] libmachine: () Calling .GetMachineName
	I0603 10:39:53.122198   15688 main.go:141] libmachine: (addons-926744) Calling .GetState
	I0603 10:39:53.122330   15688 main.go:141] libmachine: (addons-926744) Calling .DriverName
	W0603 10:39:53.123247   15688 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:34510->192.168.39.188:22: read: connection reset by peer
	I0603 10:39:53.123275   15688 retry.go:31] will retry after 321.308159ms: ssh: handshake failed: read tcp 192.168.39.1:34510->192.168.39.188:22: read: connection reset by peer
	I0603 10:39:53.123863   15688 main.go:141] libmachine: (addons-926744) Calling .DriverName
	I0603 10:39:53.124154   15688 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0603 10:39:53.124167   15688 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0603 10:39:53.124414   15688 main.go:141] libmachine: (addons-926744) Calling .GetSSHHostname
	I0603 10:39:53.125074   15688 main.go:141] libmachine: (addons-926744) DBG | domain addons-926744 has defined MAC address 52:54:00:ef:0f:40 in network mk-addons-926744
	I0603 10:39:53.125433   15688 main.go:141] libmachine: (addons-926744) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ef:0f:40", ip: ""} in network mk-addons-926744: {Iface:virbr1 ExpiryTime:2024-06-03 11:39:15 +0000 UTC Type:0 Mac:52:54:00:ef:0f:40 Iaid: IPaddr:192.168.39.188 Prefix:24 Hostname:addons-926744 Clientid:01:52:54:00:ef:0f:40}
	I0603 10:39:53.125453   15688 main.go:141] libmachine: (addons-926744) DBG | domain addons-926744 has defined IP address 192.168.39.188 and MAC address 52:54:00:ef:0f:40 in network mk-addons-926744
	I0603 10:39:53.125583   15688 main.go:141] libmachine: (addons-926744) Calling .GetSSHPort
	I0603 10:39:53.125767   15688 main.go:141] libmachine: (addons-926744) Calling .GetSSHKeyPath
	I0603 10:39:53.125932   15688 main.go:141] libmachine: (addons-926744) Calling .GetSSHUsername
	I0603 10:39:53.126108   15688 sshutil.go:53] new ssh client: &{IP:192.168.39.188 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19008-7755/.minikube/machines/addons-926744/id_rsa Username:docker}
	I0603 10:39:53.126814   15688 main.go:141] libmachine: (addons-926744) DBG | domain addons-926744 has defined MAC address 52:54:00:ef:0f:40 in network mk-addons-926744
	I0603 10:39:53.127151   15688 main.go:141] libmachine: (addons-926744) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ef:0f:40", ip: ""} in network mk-addons-926744: {Iface:virbr1 ExpiryTime:2024-06-03 11:39:15 +0000 UTC Type:0 Mac:52:54:00:ef:0f:40 Iaid: IPaddr:192.168.39.188 Prefix:24 Hostname:addons-926744 Clientid:01:52:54:00:ef:0f:40}
	I0603 10:39:53.127177   15688 main.go:141] libmachine: (addons-926744) DBG | domain addons-926744 has defined IP address 192.168.39.188 and MAC address 52:54:00:ef:0f:40 in network mk-addons-926744
	I0603 10:39:53.127248   15688 main.go:141] libmachine: (addons-926744) Calling .GetSSHPort
	I0603 10:39:53.127421   15688 main.go:141] libmachine: (addons-926744) Calling .GetSSHKeyPath
	I0603 10:39:53.127576   15688 main.go:141] libmachine: (addons-926744) Calling .GetSSHUsername
	I0603 10:39:53.127707   15688 sshutil.go:53] new ssh client: &{IP:192.168.39.188 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19008-7755/.minikube/machines/addons-926744/id_rsa Username:docker}
	W0603 10:39:53.128663   15688 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:34524->192.168.39.188:22: read: connection reset by peer
	I0603 10:39:53.128685   15688 retry.go:31] will retry after 217.648399ms: ssh: handshake failed: read tcp 192.168.39.1:34524->192.168.39.188:22: read: connection reset by peer
	W0603 10:39:53.128736   15688 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:34536->192.168.39.188:22: read: connection reset by peer
	I0603 10:39:53.128756   15688 retry.go:31] will retry after 284.924422ms: ssh: handshake failed: read tcp 192.168.39.1:34536->192.168.39.188:22: read: connection reset by peer
	I0603 10:39:53.355149   15688 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0603 10:39:53.391925   15688 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0603 10:39:53.391944   15688 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0603 10:39:53.459021   15688 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0603 10:39:53.459069   15688 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0603 10:39:53.484893   15688 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0603 10:39:53.512706   15688 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0603 10:39:53.532991   15688 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0603 10:39:53.533011   15688 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0603 10:39:53.560989   15688 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0603 10:39:53.596754   15688 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0603 10:39:53.605998   15688 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0603 10:39:53.606026   15688 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0603 10:39:53.607123   15688 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0603 10:39:53.607145   15688 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0603 10:39:53.621083   15688 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I0603 10:39:53.621099   15688 ssh_runner.go:362] scp helm-tiller/helm-tiller-rbac.yaml --> /etc/kubernetes/addons/helm-tiller-rbac.yaml (1188 bytes)
	I0603 10:39:53.647250   15688 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0603 10:39:53.720437   15688 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0603 10:39:53.720460   15688 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0603 10:39:53.807224   15688 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0603 10:39:53.807250   15688 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0603 10:39:53.809821   15688 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0603 10:39:53.809842   15688 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0603 10:39:53.835979   15688 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0603 10:39:53.836004   15688 ssh_runner.go:362] scp helm-tiller/helm-tiller-svc.yaml --> /etc/kubernetes/addons/helm-tiller-svc.yaml (951 bytes)
	I0603 10:39:53.838441   15688 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0603 10:39:53.838458   15688 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0603 10:39:53.866695   15688 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0603 10:39:53.866724   15688 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0603 10:39:53.966602   15688 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0603 10:39:53.966623   15688 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0603 10:39:53.986392   15688 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0603 10:39:53.986411   15688 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0603 10:39:54.015572   15688 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0603 10:39:54.056850   15688 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0603 10:39:54.056877   15688 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0603 10:39:54.089130   15688 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0603 10:39:54.095056   15688 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0603 10:39:54.095081   15688 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0603 10:39:54.107897   15688 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0603 10:39:54.107918   15688 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0603 10:39:54.117686   15688 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0603 10:39:54.117706   15688 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0603 10:39:54.154954   15688 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0603 10:39:54.154980   15688 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0603 10:39:54.180072   15688 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0603 10:39:54.349928   15688 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0603 10:39:54.349960   15688 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0603 10:39:54.352758   15688 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0603 10:39:54.352777   15688 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0603 10:39:54.403949   15688 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0603 10:39:54.403973   15688 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0603 10:39:54.406064   15688 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0603 10:39:54.406084   15688 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0603 10:39:54.412621   15688 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0603 10:39:54.559915   15688 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0603 10:39:54.559944   15688 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0603 10:39:54.577552   15688 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0603 10:39:54.637612   15688 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0603 10:39:54.637638   15688 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0603 10:39:54.736770   15688 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0603 10:39:54.736795   15688 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0603 10:39:54.747427   15688 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0603 10:39:54.747442   15688 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0603 10:39:54.802922   15688 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0603 10:39:54.802944   15688 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0603 10:39:54.952521   15688 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0603 10:39:54.952549   15688 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0603 10:39:55.012578   15688 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0603 10:39:55.012602   15688 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0603 10:39:55.201048   15688 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0603 10:39:55.213440   15688 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0603 10:39:55.213460   15688 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0603 10:39:55.215104   15688 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0603 10:39:55.215125   15688 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0603 10:39:55.482880   15688 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0603 10:39:55.558646   15688 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0603 10:39:55.558680   15688 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0603 10:39:55.848622   15688 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0603 10:39:55.848651   15688 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0603 10:39:56.205830   15688 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0603 10:40:00.183700   15688 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0603 10:40:00.183741   15688 main.go:141] libmachine: (addons-926744) Calling .GetSSHHostname
	I0603 10:40:00.187707   15688 main.go:141] libmachine: (addons-926744) DBG | domain addons-926744 has defined MAC address 52:54:00:ef:0f:40 in network mk-addons-926744
	I0603 10:40:00.188167   15688 main.go:141] libmachine: (addons-926744) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ef:0f:40", ip: ""} in network mk-addons-926744: {Iface:virbr1 ExpiryTime:2024-06-03 11:39:15 +0000 UTC Type:0 Mac:52:54:00:ef:0f:40 Iaid: IPaddr:192.168.39.188 Prefix:24 Hostname:addons-926744 Clientid:01:52:54:00:ef:0f:40}
	I0603 10:40:00.188198   15688 main.go:141] libmachine: (addons-926744) DBG | domain addons-926744 has defined IP address 192.168.39.188 and MAC address 52:54:00:ef:0f:40 in network mk-addons-926744
	I0603 10:40:00.188458   15688 main.go:141] libmachine: (addons-926744) Calling .GetSSHPort
	I0603 10:40:00.188691   15688 main.go:141] libmachine: (addons-926744) Calling .GetSSHKeyPath
	I0603 10:40:00.188879   15688 main.go:141] libmachine: (addons-926744) Calling .GetSSHUsername
	I0603 10:40:00.189074   15688 sshutil.go:53] new ssh client: &{IP:192.168.39.188 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19008-7755/.minikube/machines/addons-926744/id_rsa Username:docker}
	I0603 10:40:00.496234   15688 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0603 10:40:00.607236   15688 addons.go:234] Setting addon gcp-auth=true in "addons-926744"
	I0603 10:40:00.607284   15688 host.go:66] Checking if "addons-926744" exists ...
	I0603 10:40:00.607584   15688 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 10:40:00.607614   15688 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 10:40:00.622715   15688 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35839
	I0603 10:40:00.623087   15688 main.go:141] libmachine: () Calling .GetVersion
	I0603 10:40:00.623553   15688 main.go:141] libmachine: Using API Version  1
	I0603 10:40:00.623575   15688 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 10:40:00.623887   15688 main.go:141] libmachine: () Calling .GetMachineName
	I0603 10:40:00.624477   15688 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 10:40:00.624534   15688 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 10:40:00.638893   15688 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38783
	I0603 10:40:00.639307   15688 main.go:141] libmachine: () Calling .GetVersion
	I0603 10:40:00.639775   15688 main.go:141] libmachine: Using API Version  1
	I0603 10:40:00.639799   15688 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 10:40:00.640135   15688 main.go:141] libmachine: () Calling .GetMachineName
	I0603 10:40:00.640361   15688 main.go:141] libmachine: (addons-926744) Calling .GetState
	I0603 10:40:00.642093   15688 main.go:141] libmachine: (addons-926744) Calling .DriverName
	I0603 10:40:00.642293   15688 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0603 10:40:00.642313   15688 main.go:141] libmachine: (addons-926744) Calling .GetSSHHostname
	I0603 10:40:00.644946   15688 main.go:141] libmachine: (addons-926744) DBG | domain addons-926744 has defined MAC address 52:54:00:ef:0f:40 in network mk-addons-926744
	I0603 10:40:00.645338   15688 main.go:141] libmachine: (addons-926744) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ef:0f:40", ip: ""} in network mk-addons-926744: {Iface:virbr1 ExpiryTime:2024-06-03 11:39:15 +0000 UTC Type:0 Mac:52:54:00:ef:0f:40 Iaid: IPaddr:192.168.39.188 Prefix:24 Hostname:addons-926744 Clientid:01:52:54:00:ef:0f:40}
	I0603 10:40:00.645377   15688 main.go:141] libmachine: (addons-926744) DBG | domain addons-926744 has defined IP address 192.168.39.188 and MAC address 52:54:00:ef:0f:40 in network mk-addons-926744
	I0603 10:40:00.645556   15688 main.go:141] libmachine: (addons-926744) Calling .GetSSHPort
	I0603 10:40:00.645735   15688 main.go:141] libmachine: (addons-926744) Calling .GetSSHKeyPath
	I0603 10:40:00.645919   15688 main.go:141] libmachine: (addons-926744) Calling .GetSSHUsername
	I0603 10:40:00.646060   15688 sshutil.go:53] new ssh client: &{IP:192.168.39.188 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19008-7755/.minikube/machines/addons-926744/id_rsa Username:docker}
	I0603 10:40:01.615832   15688 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (8.260649484s)
	I0603 10:40:01.615878   15688 main.go:141] libmachine: Making call to close driver server
	I0603 10:40:01.615892   15688 main.go:141] libmachine: (addons-926744) Calling .Close
	I0603 10:40:01.615930   15688 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (8.223951245s)
	I0603 10:40:01.615986   15688 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (8.131063292s)
	I0603 10:40:01.616002   15688 start.go:946] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I0603 10:40:01.616021   15688 main.go:141] libmachine: Making call to close driver server
	I0603 10:40:01.616033   15688 main.go:141] libmachine: (addons-926744) Calling .Close
	I0603 10:40:01.616043   15688 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (8.103309213s)
	I0603 10:40:01.615945   15688 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (8.223983684s)
	I0603 10:40:01.616137   15688 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (8.055128439s)
	I0603 10:40:01.616162   15688 main.go:141] libmachine: Making call to close driver server
	I0603 10:40:01.616173   15688 main.go:141] libmachine: (addons-926744) Calling .Close
	I0603 10:40:01.616197   15688 main.go:141] libmachine: (addons-926744) DBG | Closing plugin on server side
	I0603 10:40:01.616245   15688 main.go:141] libmachine: Successfully made call to close driver server
	I0603 10:40:01.616250   15688 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (8.019471736s)
	I0603 10:40:01.616254   15688 main.go:141] libmachine: Making call to close connection to plugin binary
	I0603 10:40:01.616262   15688 main.go:141] libmachine: Making call to close driver server
	I0603 10:40:01.616266   15688 main.go:141] libmachine: Making call to close driver server
	I0603 10:40:01.616269   15688 main.go:141] libmachine: (addons-926744) Calling .Close
	I0603 10:40:01.616275   15688 main.go:141] libmachine: (addons-926744) Calling .Close
	I0603 10:40:01.616330   15688 main.go:141] libmachine: Successfully made call to close driver server
	I0603 10:40:01.616341   15688 main.go:141] libmachine: Making call to close connection to plugin binary
	I0603 10:40:01.616344   15688 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (7.969061716s)
	I0603 10:40:01.616377   15688 main.go:141] libmachine: Making call to close driver server
	I0603 10:40:01.616391   15688 main.go:141] libmachine: (addons-926744) Calling .Close
	I0603 10:40:01.616351   15688 main.go:141] libmachine: Making call to close driver server
	I0603 10:40:01.616437   15688 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (7.600842817s)
	I0603 10:40:01.616446   15688 main.go:141] libmachine: (addons-926744) Calling .Close
	I0603 10:40:01.616457   15688 main.go:141] libmachine: Making call to close driver server
	I0603 10:40:01.616467   15688 main.go:141] libmachine: (addons-926744) Calling .Close
	I0603 10:40:01.616501   15688 main.go:141] libmachine: Successfully made call to close driver server
	I0603 10:40:01.616509   15688 main.go:141] libmachine: Making call to close connection to plugin binary
	I0603 10:40:01.616517   15688 main.go:141] libmachine: Making call to close driver server
	I0603 10:40:01.616524   15688 main.go:141] libmachine: (addons-926744) Calling .Close
	I0603 10:40:01.616535   15688 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml: (7.527380307s)
	I0603 10:40:01.616551   15688 main.go:141] libmachine: Making call to close driver server
	I0603 10:40:01.616560   15688 main.go:141] libmachine: (addons-926744) Calling .Close
	I0603 10:40:01.616627   15688 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (7.436530108s)
	I0603 10:40:01.616641   15688 main.go:141] libmachine: Making call to close driver server
	I0603 10:40:01.616650   15688 main.go:141] libmachine: (addons-926744) Calling .Close
	I0603 10:40:01.616734   15688 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (7.204085993s)
	I0603 10:40:01.616748   15688 main.go:141] libmachine: Making call to close driver server
	I0603 10:40:01.616757   15688 main.go:141] libmachine: (addons-926744) Calling .Close
	I0603 10:40:01.616868   15688 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (7.039282818s)
	W0603 10:40:01.616888   15688 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0603 10:40:01.616082   15688 main.go:141] libmachine: Making call to close driver server
	I0603 10:40:01.616933   15688 main.go:141] libmachine: (addons-926744) Calling .Close
	I0603 10:40:01.616950   15688 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (6.415875301s)
	I0603 10:40:01.616965   15688 main.go:141] libmachine: Making call to close driver server
	I0603 10:40:01.616972   15688 main.go:141] libmachine: (addons-926744) Calling .Close
	I0603 10:40:01.616987   15688 main.go:141] libmachine: (addons-926744) DBG | Closing plugin on server side
	I0603 10:40:01.616909   15688 retry.go:31] will retry after 294.17749ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0603 10:40:01.617013   15688 main.go:141] libmachine: (addons-926744) DBG | Closing plugin on server side
	I0603 10:40:01.617007   15688 node_ready.go:35] waiting up to 6m0s for node "addons-926744" to be "Ready" ...
	I0603 10:40:01.617034   15688 main.go:141] libmachine: Successfully made call to close driver server
	I0603 10:40:01.617041   15688 main.go:141] libmachine: Making call to close connection to plugin binary
	I0603 10:40:01.617048   15688 main.go:141] libmachine: Making call to close driver server
	I0603 10:40:01.617055   15688 main.go:141] libmachine: (addons-926744) Calling .Close
	I0603 10:40:01.617060   15688 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (6.134137683s)
	I0603 10:40:01.617073   15688 main.go:141] libmachine: Making call to close driver server
	I0603 10:40:01.617080   15688 main.go:141] libmachine: (addons-926744) Calling .Close
	I0603 10:40:01.617094   15688 main.go:141] libmachine: (addons-926744) DBG | Closing plugin on server side
	I0603 10:40:01.617111   15688 main.go:141] libmachine: Successfully made call to close driver server
	I0603 10:40:01.617118   15688 main.go:141] libmachine: Making call to close connection to plugin binary
	I0603 10:40:01.617126   15688 main.go:141] libmachine: Making call to close driver server
	I0603 10:40:01.617135   15688 main.go:141] libmachine: (addons-926744) Calling .Close
	I0603 10:40:01.617201   15688 main.go:141] libmachine: Successfully made call to close driver server
	I0603 10:40:01.617214   15688 main.go:141] libmachine: Making call to close connection to plugin binary
	I0603 10:40:01.617229   15688 addons.go:475] Verifying addon ingress=true in "addons-926744"
	I0603 10:40:01.621964   15688 out.go:177] * Verifying ingress addon...
	I0603 10:40:01.619967   15688 main.go:141] libmachine: (addons-926744) DBG | Closing plugin on server side
	I0603 10:40:01.619992   15688 main.go:141] libmachine: Successfully made call to close driver server
	I0603 10:40:01.620031   15688 main.go:141] libmachine: (addons-926744) DBG | Closing plugin on server side
	I0603 10:40:01.620048   15688 main.go:141] libmachine: Successfully made call to close driver server
	I0603 10:40:01.620071   15688 main.go:141] libmachine: (addons-926744) DBG | Closing plugin on server side
	I0603 10:40:01.620087   15688 main.go:141] libmachine: Successfully made call to close driver server
	I0603 10:40:01.620102   15688 main.go:141] libmachine: (addons-926744) DBG | Closing plugin on server side
	I0603 10:40:01.620118   15688 main.go:141] libmachine: Successfully made call to close driver server
	I0603 10:40:01.620132   15688 main.go:141] libmachine: (addons-926744) DBG | Closing plugin on server side
	I0603 10:40:01.620149   15688 main.go:141] libmachine: Successfully made call to close driver server
	I0603 10:40:01.620162   15688 main.go:141] libmachine: (addons-926744) DBG | Closing plugin on server side
	I0603 10:40:01.620174   15688 main.go:141] libmachine: (addons-926744) DBG | Closing plugin on server side
	I0603 10:40:01.620190   15688 main.go:141] libmachine: Successfully made call to close driver server
	I0603 10:40:01.620209   15688 main.go:141] libmachine: Successfully made call to close driver server
	I0603 10:40:01.620225   15688 main.go:141] libmachine: (addons-926744) DBG | Closing plugin on server side
	I0603 10:40:01.620238   15688 main.go:141] libmachine: (addons-926744) DBG | Closing plugin on server side
	I0603 10:40:01.620254   15688 main.go:141] libmachine: Successfully made call to close driver server
	I0603 10:40:01.620270   15688 main.go:141] libmachine: Successfully made call to close driver server
	I0603 10:40:01.620279   15688 main.go:141] libmachine: (addons-926744) DBG | Closing plugin on server side
	I0603 10:40:01.620286   15688 main.go:141] libmachine: Successfully made call to close driver server
	I0603 10:40:01.620524   15688 main.go:141] libmachine: Successfully made call to close driver server
	I0603 10:40:01.620559   15688 main.go:141] libmachine: (addons-926744) DBG | Closing plugin on server side
	I0603 10:40:01.623612   15688 main.go:141] libmachine: Making call to close connection to plugin binary
	I0603 10:40:01.623627   15688 main.go:141] libmachine: Making call to close connection to plugin binary
	I0603 10:40:01.623630   15688 main.go:141] libmachine: Making call to close connection to plugin binary
	I0603 10:40:01.623635   15688 main.go:141] libmachine: Making call to close driver server
	I0603 10:40:01.623638   15688 main.go:141] libmachine: Making call to close connection to plugin binary
	I0603 10:40:01.623643   15688 main.go:141] libmachine: Making call to close driver server
	I0603 10:40:01.623646   15688 main.go:141] libmachine: Making call to close connection to plugin binary
	I0603 10:40:01.623649   15688 main.go:141] libmachine: Making call to close driver server
	I0603 10:40:01.623654   15688 main.go:141] libmachine: (addons-926744) Calling .Close
	I0603 10:40:01.623657   15688 main.go:141] libmachine: Making call to close driver server
	I0603 10:40:01.623659   15688 main.go:141] libmachine: (addons-926744) Calling .Close
	I0603 10:40:01.623617   15688 main.go:141] libmachine: Making call to close connection to plugin binary
	I0603 10:40:01.623664   15688 main.go:141] libmachine: (addons-926744) Calling .Close
	I0603 10:40:01.623659   15688 main.go:141] libmachine: Making call to close connection to plugin binary
	I0603 10:40:01.623756   15688 main.go:141] libmachine: Making call to close connection to plugin binary
	I0603 10:40:01.623766   15688 main.go:141] libmachine: Making call to close driver server
	I0603 10:40:01.623769   15688 main.go:141] libmachine: Making call to close connection to plugin binary
	I0603 10:40:01.623774   15688 main.go:141] libmachine: (addons-926744) Calling .Close
	I0603 10:40:01.623778   15688 main.go:141] libmachine: Making call to close driver server
	I0603 10:40:01.623787   15688 main.go:141] libmachine: (addons-926744) Calling .Close
	I0603 10:40:01.623811   15688 main.go:141] libmachine: Making call to close connection to plugin binary
	I0603 10:40:01.623823   15688 main.go:141] libmachine: Making call to close connection to plugin binary
	I0603 10:40:01.623839   15688 node_ready.go:49] node "addons-926744" has status "Ready":"True"
	I0603 10:40:01.623637   15688 main.go:141] libmachine: Making call to close driver server
	I0603 10:40:01.623870   15688 main.go:141] libmachine: (addons-926744) Calling .Close
	I0603 10:40:01.623649   15688 main.go:141] libmachine: (addons-926744) Calling .Close
	I0603 10:40:01.623852   15688 node_ready.go:38] duration metric: took 6.828738ms for node "addons-926744" to be "Ready" ...
	I0603 10:40:01.623913   15688 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0603 10:40:01.624543   15688 main.go:141] libmachine: (addons-926744) DBG | Closing plugin on server side
	I0603 10:40:01.624547   15688 main.go:141] libmachine: (addons-926744) DBG | Closing plugin on server side
	I0603 10:40:01.624564   15688 main.go:141] libmachine: (addons-926744) DBG | Closing plugin on server side
	I0603 10:40:01.624589   15688 main.go:141] libmachine: Successfully made call to close driver server
	I0603 10:40:01.624596   15688 main.go:141] libmachine: Making call to close connection to plugin binary
	I0603 10:40:01.624599   15688 main.go:141] libmachine: Successfully made call to close driver server
	I0603 10:40:01.624611   15688 main.go:141] libmachine: Making call to close connection to plugin binary
	I0603 10:40:01.624619   15688 addons.go:475] Verifying addon registry=true in "addons-926744"
	I0603 10:40:01.624633   15688 main.go:141] libmachine: (addons-926744) DBG | Closing plugin on server side
	I0603 10:40:01.624655   15688 main.go:141] libmachine: Successfully made call to close driver server
	I0603 10:40:01.624661   15688 main.go:141] libmachine: Making call to close connection to plugin binary
	I0603 10:40:01.624662   15688 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0603 10:40:01.626417   15688 out.go:177] * Verifying registry addon...
	I0603 10:40:01.624704   15688 main.go:141] libmachine: (addons-926744) DBG | Closing plugin on server side
	I0603 10:40:01.624720   15688 main.go:141] libmachine: Successfully made call to close driver server
	I0603 10:40:01.624739   15688 main.go:141] libmachine: Successfully made call to close driver server
	I0603 10:40:01.626148   15688 main.go:141] libmachine: Successfully made call to close driver server
	I0603 10:40:01.626360   15688 main.go:141] libmachine: Successfully made call to close driver server
	I0603 10:40:01.626386   15688 main.go:141] libmachine: (addons-926744) DBG | Closing plugin on server side
	I0603 10:40:01.627757   15688 main.go:141] libmachine: Making call to close connection to plugin binary
	I0603 10:40:01.627775   15688 main.go:141] libmachine: Making call to close connection to plugin binary
	I0603 10:40:01.627794   15688 addons.go:475] Verifying addon metrics-server=true in "addons-926744"
	I0603 10:40:01.627801   15688 main.go:141] libmachine: Making call to close connection to plugin binary
	I0603 10:40:01.627824   15688 main.go:141] libmachine: Making call to close connection to plugin binary
	I0603 10:40:01.629201   15688 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-926744 service yakd-dashboard -n yakd-dashboard
	
	I0603 10:40:01.628687   15688 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0603 10:40:01.665890   15688 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-tq56p" in "kube-system" namespace to be "Ready" ...
	I0603 10:40:01.703186   15688 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0603 10:40:01.703216   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 10:40:01.703930   15688 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0603 10:40:01.703957   15688 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 10:40:01.727637   15688 main.go:141] libmachine: Making call to close driver server
	I0603 10:40:01.727665   15688 main.go:141] libmachine: (addons-926744) Calling .Close
	I0603 10:40:01.727945   15688 main.go:141] libmachine: Successfully made call to close driver server
	I0603 10:40:01.727962   15688 main.go:141] libmachine: Making call to close connection to plugin binary
	W0603 10:40:01.728041   15688 out.go:239] ! Enabling 'storage-provisioner-rancher' returned an error: running callbacks: [Error making local-path the default storage class: Error while marking storage class local-path as default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
	I0603 10:40:01.760352   15688 main.go:141] libmachine: Making call to close driver server
	I0603 10:40:01.760373   15688 main.go:141] libmachine: (addons-926744) Calling .Close
	I0603 10:40:01.760748   15688 main.go:141] libmachine: Successfully made call to close driver server
	I0603 10:40:01.760767   15688 main.go:141] libmachine: Making call to close connection to plugin binary
	I0603 10:40:01.911744   15688 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0603 10:40:02.124685   15688 kapi.go:248] "coredns" deployment in "kube-system" namespace and "addons-926744" context rescaled to 1 replicas
	I0603 10:40:02.128168   15688 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 10:40:02.135485   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 10:40:02.634185   15688 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 10:40:02.644010   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 10:40:03.150486   15688 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 10:40:03.183085   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 10:40:03.310046   15688 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (7.104170666s)
	I0603 10:40:03.310123   15688 main.go:141] libmachine: Making call to close driver server
	I0603 10:40:03.310138   15688 main.go:141] libmachine: (addons-926744) Calling .Close
	I0603 10:40:03.310136   15688 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (2.667819695s)
	I0603 10:40:03.312144   15688 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0603 10:40:03.310431   15688 main.go:141] libmachine: Successfully made call to close driver server
	I0603 10:40:03.310461   15688 main.go:141] libmachine: (addons-926744) DBG | Closing plugin on server side
	I0603 10:40:03.313709   15688 main.go:141] libmachine: Making call to close connection to plugin binary
	I0603 10:40:03.313729   15688 main.go:141] libmachine: Making call to close driver server
	I0603 10:40:03.313748   15688 main.go:141] libmachine: (addons-926744) Calling .Close
	I0603 10:40:03.315246   15688 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0603 10:40:03.314091   15688 main.go:141] libmachine: Successfully made call to close driver server
	I0603 10:40:03.314123   15688 main.go:141] libmachine: (addons-926744) DBG | Closing plugin on server side
	I0603 10:40:03.316651   15688 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0603 10:40:03.316662   15688 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0603 10:40:03.316667   15688 main.go:141] libmachine: Making call to close connection to plugin binary
	I0603 10:40:03.316686   15688 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-926744"
	I0603 10:40:03.318257   15688 out.go:177] * Verifying csi-hostpath-driver addon...
	I0603 10:40:03.320349   15688 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0603 10:40:03.373928   15688 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0603 10:40:03.373959   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 10:40:03.411546   15688 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0603 10:40:03.411581   15688 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0603 10:40:03.555366   15688 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0603 10:40:03.555394   15688 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0603 10:40:03.620263   15688 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0603 10:40:03.629888   15688 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 10:40:03.641432   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 10:40:03.696242   15688 pod_ready.go:102] pod "coredns-7db6d8ff4d-tq56p" in "kube-system" namespace has status "Ready":"False"
	I0603 10:40:03.862266   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 10:40:04.129927   15688 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 10:40:04.134763   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 10:40:04.328658   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 10:40:04.601877   15688 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.690066487s)
	I0603 10:40:04.601930   15688 main.go:141] libmachine: Making call to close driver server
	I0603 10:40:04.601944   15688 main.go:141] libmachine: (addons-926744) Calling .Close
	I0603 10:40:04.602320   15688 main.go:141] libmachine: (addons-926744) DBG | Closing plugin on server side
	I0603 10:40:04.602384   15688 main.go:141] libmachine: Successfully made call to close driver server
	I0603 10:40:04.602413   15688 main.go:141] libmachine: Making call to close connection to plugin binary
	I0603 10:40:04.602430   15688 main.go:141] libmachine: Making call to close driver server
	I0603 10:40:04.602442   15688 main.go:141] libmachine: (addons-926744) Calling .Close
	I0603 10:40:04.602723   15688 main.go:141] libmachine: Successfully made call to close driver server
	I0603 10:40:04.602739   15688 main.go:141] libmachine: Making call to close connection to plugin binary
	I0603 10:40:04.602760   15688 main.go:141] libmachine: (addons-926744) DBG | Closing plugin on server side
	I0603 10:40:04.629212   15688 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 10:40:04.635368   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 10:40:04.828506   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 10:40:05.136297   15688 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 10:40:05.148306   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 10:40:05.330123   15688 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.709773748s)
	I0603 10:40:05.330171   15688 main.go:141] libmachine: Making call to close driver server
	I0603 10:40:05.330183   15688 main.go:141] libmachine: (addons-926744) Calling .Close
	I0603 10:40:05.330465   15688 main.go:141] libmachine: Successfully made call to close driver server
	I0603 10:40:05.330480   15688 main.go:141] libmachine: (addons-926744) DBG | Closing plugin on server side
	I0603 10:40:05.330484   15688 main.go:141] libmachine: Making call to close connection to plugin binary
	I0603 10:40:05.330494   15688 main.go:141] libmachine: Making call to close driver server
	I0603 10:40:05.330503   15688 main.go:141] libmachine: (addons-926744) Calling .Close
	I0603 10:40:05.331059   15688 main.go:141] libmachine: Successfully made call to close driver server
	I0603 10:40:05.331077   15688 main.go:141] libmachine: (addons-926744) DBG | Closing plugin on server side
	I0603 10:40:05.331081   15688 main.go:141] libmachine: Making call to close connection to plugin binary
	I0603 10:40:05.332835   15688 addons.go:475] Verifying addon gcp-auth=true in "addons-926744"
	I0603 10:40:05.334688   15688 out.go:177] * Verifying gcp-auth addon...
	I0603 10:40:05.337167   15688 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0603 10:40:05.363481   15688 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0603 10:40:05.363509   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 10:40:05.374866   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 10:40:05.630097   15688 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 10:40:05.635426   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 10:40:05.829211   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 10:40:05.841230   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 10:40:06.130033   15688 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 10:40:06.136346   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 10:40:06.180170   15688 pod_ready.go:102] pod "coredns-7db6d8ff4d-tq56p" in "kube-system" namespace has status "Ready":"False"
	I0603 10:40:06.326718   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 10:40:06.340674   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 10:40:06.628595   15688 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 10:40:06.634938   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 10:40:06.826372   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 10:40:06.840443   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 10:40:07.132270   15688 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 10:40:07.135483   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 10:40:07.326067   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 10:40:07.341102   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 10:40:07.631054   15688 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 10:40:07.642137   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 10:40:07.826543   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 10:40:07.844067   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 10:40:08.128816   15688 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 10:40:08.135663   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 10:40:08.326284   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 10:40:08.340963   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 10:40:08.629306   15688 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 10:40:08.635590   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 10:40:08.671262   15688 pod_ready.go:102] pod "coredns-7db6d8ff4d-tq56p" in "kube-system" namespace has status "Ready":"False"
	I0603 10:40:08.826064   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 10:40:08.840446   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 10:40:09.145318   15688 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 10:40:09.147698   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 10:40:09.326715   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 10:40:09.340658   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 10:40:09.631103   15688 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 10:40:09.636087   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 10:40:09.825527   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 10:40:09.840886   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 10:40:10.129875   15688 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 10:40:10.135472   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 10:40:10.326000   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 10:40:10.340340   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 10:40:10.630564   15688 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 10:40:10.635504   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 10:40:10.672419   15688 pod_ready.go:97] pod "coredns-7db6d8ff4d-tq56p" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:PodReadyToStartContainers Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-06-03 10:40:10 +0000 UTC Reason: Message:} {Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-06-03 10:39:53 +0000 UTC Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-06-03 10:39:53 +0000 UTC Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-06-03 10:39:53 +0000 UTC Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-06-03 10:39:53 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.39.188 HostIPs:[{IP:192.168.39.188}] PodIP:10.244.0.2 PodIPs:[{IP:10.244.0.2}] StartTime:2024-06-03 10:39:53 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2024-06-03 10:39:57 +0000 UTC,FinishedAt:2024-06-03 10:40:09 +0000 UTC,ContainerID:cri-o://c1415dec3b0fda1bf1788f751be03f19dcb79bd765d56be5eb6284f6d12bd2a9,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.11.1 ImageID:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4 ContainerID:cri-o://c1415dec3b0fda1bf1788f751be03f19dcb79bd765d56be5eb6284f6d12bd2a9 Started:0xc000656fe0 AllocatedResources:map[] Resources:nil VolumeMounts:[]}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I0603 10:40:10.672457   15688 pod_ready.go:81] duration metric: took 9.006533452s for pod "coredns-7db6d8ff4d-tq56p" in "kube-system" namespace to be "Ready" ...
	E0603 10:40:10.672472   15688 pod_ready.go:66] WaitExtra: waitPodCondition: pod "coredns-7db6d8ff4d-tq56p" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:PodReadyToStartContainers Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-06-03 10:40:10 +0000 UTC Reason: Message:} {Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-06-03 10:39:53 +0000 UTC Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-06-03 10:39:53 +0000 UTC Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-06-03 10:39:53 +0000 UTC Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-06-03 10:39:53 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.39.188 HostIPs:[{IP:192.168.39.188}] PodIP:10.244.0.2 PodIPs:[{IP:10.244.0.2}] StartTime:2024-06-03 10:39:53 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2024-06-03 10:39:57 +0000 UTC,FinishedAt:2024-06-03 10:40:09 +0000 UTC,ContainerID:cri-o://c1415dec3b0fda1bf1788f751be03f19dcb79bd765d56be5eb6284f6d12bd2a9,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.11.1 ImageID:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4 ContainerID:cri-o://c1415dec3b0fda1bf1788f751be03f19dcb79bd765d56be5eb6284f6d12bd2a9 Started:0xc000656fe0 AllocatedResources:map[] Resources:nil VolumeMounts:[]}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I0603 10:40:10.672481   15688 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-x6wn8" in "kube-system" namespace to be "Ready" ...
	I0603 10:40:10.683729   15688 pod_ready.go:92] pod "coredns-7db6d8ff4d-x6wn8" in "kube-system" namespace has status "Ready":"True"
	I0603 10:40:10.684037   15688 pod_ready.go:81] duration metric: took 11.540399ms for pod "coredns-7db6d8ff4d-x6wn8" in "kube-system" namespace to be "Ready" ...
	I0603 10:40:10.684054   15688 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-926744" in "kube-system" namespace to be "Ready" ...
	I0603 10:40:10.692604   15688 pod_ready.go:92] pod "etcd-addons-926744" in "kube-system" namespace has status "Ready":"True"
	I0603 10:40:10.692627   15688 pod_ready.go:81] duration metric: took 8.564911ms for pod "etcd-addons-926744" in "kube-system" namespace to be "Ready" ...
	I0603 10:40:10.692638   15688 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-926744" in "kube-system" namespace to be "Ready" ...
	I0603 10:40:10.699066   15688 pod_ready.go:92] pod "kube-apiserver-addons-926744" in "kube-system" namespace has status "Ready":"True"
	I0603 10:40:10.699088   15688 pod_ready.go:81] duration metric: took 6.441568ms for pod "kube-apiserver-addons-926744" in "kube-system" namespace to be "Ready" ...
	I0603 10:40:10.699099   15688 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-926744" in "kube-system" namespace to be "Ready" ...
	I0603 10:40:10.710383   15688 pod_ready.go:92] pod "kube-controller-manager-addons-926744" in "kube-system" namespace has status "Ready":"True"
	I0603 10:40:10.710400   15688 pod_ready.go:81] duration metric: took 11.29407ms for pod "kube-controller-manager-addons-926744" in "kube-system" namespace to be "Ready" ...
	I0603 10:40:10.710409   15688 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-wc47p" in "kube-system" namespace to be "Ready" ...
	I0603 10:40:10.825623   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 10:40:10.840505   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 10:40:11.070437   15688 pod_ready.go:92] pod "kube-proxy-wc47p" in "kube-system" namespace has status "Ready":"True"
	I0603 10:40:11.070460   15688 pod_ready.go:81] duration metric: took 360.044521ms for pod "kube-proxy-wc47p" in "kube-system" namespace to be "Ready" ...
	I0603 10:40:11.070469   15688 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-926744" in "kube-system" namespace to be "Ready" ...
	I0603 10:40:11.129429   15688 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 10:40:11.134543   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 10:40:11.325359   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 10:40:11.340881   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 10:40:11.469648   15688 pod_ready.go:92] pod "kube-scheduler-addons-926744" in "kube-system" namespace has status "Ready":"True"
	I0603 10:40:11.469670   15688 pod_ready.go:81] duration metric: took 399.194726ms for pod "kube-scheduler-addons-926744" in "kube-system" namespace to be "Ready" ...
	I0603 10:40:11.469678   15688 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-c59844bb4-gsd5w" in "kube-system" namespace to be "Ready" ...
	I0603 10:40:11.629616   15688 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 10:40:11.634580   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 10:40:11.826206   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 10:40:11.842414   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 10:40:12.128819   15688 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 10:40:12.135371   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 10:40:12.325990   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 10:40:12.340191   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 10:40:12.629551   15688 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 10:40:12.637148   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 10:40:12.825982   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 10:40:12.840395   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 10:40:13.131192   15688 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 10:40:13.134901   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 10:40:13.327437   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 10:40:13.341316   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 10:40:13.475585   15688 pod_ready.go:102] pod "metrics-server-c59844bb4-gsd5w" in "kube-system" namespace has status "Ready":"False"
	I0603 10:40:13.629909   15688 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 10:40:13.634713   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 10:40:13.827481   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 10:40:13.840638   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 10:40:14.130436   15688 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 10:40:14.135213   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 10:40:14.327750   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 10:40:14.342578   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 10:40:14.628282   15688 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 10:40:14.636261   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 10:40:14.827157   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 10:40:14.841004   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 10:40:15.129666   15688 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 10:40:15.135484   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 10:40:15.327012   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 10:40:15.340817   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 10:40:15.480271   15688 pod_ready.go:102] pod "metrics-server-c59844bb4-gsd5w" in "kube-system" namespace has status "Ready":"False"
	I0603 10:40:15.630553   15688 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 10:40:15.642396   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 10:40:15.825563   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 10:40:15.840840   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 10:40:16.129494   15688 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 10:40:16.134862   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 10:40:16.328145   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 10:40:16.341239   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 10:40:16.629033   15688 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 10:40:16.635666   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 10:40:16.826137   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 10:40:16.842413   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 10:40:17.128510   15688 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 10:40:17.134310   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 10:40:17.639330   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 10:40:17.641936   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 10:40:17.644549   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 10:40:17.644668   15688 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 10:40:17.646452   15688 pod_ready.go:102] pod "metrics-server-c59844bb4-gsd5w" in "kube-system" namespace has status "Ready":"False"
	I0603 10:40:17.827109   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 10:40:17.839906   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 10:40:18.131250   15688 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 10:40:18.136095   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 10:40:18.325536   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 10:40:18.340803   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 10:40:18.630421   15688 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 10:40:18.641150   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 10:40:18.826391   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 10:40:18.841024   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 10:40:19.130520   15688 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 10:40:19.135491   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 10:40:19.326800   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 10:40:19.340947   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 10:40:19.629697   15688 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 10:40:19.634495   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 10:40:19.826880   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 10:40:19.840954   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 10:40:19.975444   15688 pod_ready.go:102] pod "metrics-server-c59844bb4-gsd5w" in "kube-system" namespace has status "Ready":"False"
	I0603 10:40:20.129189   15688 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 10:40:20.135023   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 10:40:20.327596   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 10:40:20.341178   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 10:40:20.632277   15688 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 10:40:20.640854   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 10:40:20.829833   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 10:40:20.841120   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 10:40:21.129312   15688 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 10:40:21.135146   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 10:40:21.325994   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 10:40:21.340037   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 10:40:21.638751   15688 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 10:40:21.640930   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 10:40:21.825519   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 10:40:21.841394   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 10:40:22.129089   15688 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 10:40:22.135230   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 10:40:22.327888   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 10:40:22.340872   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 10:40:22.477257   15688 pod_ready.go:102] pod "metrics-server-c59844bb4-gsd5w" in "kube-system" namespace has status "Ready":"False"
	I0603 10:40:22.629938   15688 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 10:40:22.637762   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 10:40:22.825272   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 10:40:22.840658   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 10:40:23.130748   15688 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 10:40:23.135616   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 10:40:23.327124   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 10:40:23.340567   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 10:40:23.629702   15688 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 10:40:23.635908   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 10:40:23.825894   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 10:40:23.841191   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 10:40:24.128746   15688 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 10:40:24.136152   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 10:40:24.327291   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 10:40:24.340673   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 10:40:24.629773   15688 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 10:40:24.635444   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 10:40:24.827411   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 10:40:24.840534   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 10:40:24.976228   15688 pod_ready.go:102] pod "metrics-server-c59844bb4-gsd5w" in "kube-system" namespace has status "Ready":"False"
	I0603 10:40:25.129524   15688 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 10:40:25.134893   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 10:40:25.326446   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 10:40:25.340796   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 10:40:25.629365   15688 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 10:40:25.637501   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 10:40:26.402192   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 10:40:26.406597   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 10:40:26.407792   15688 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 10:40:26.408915   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 10:40:26.411677   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 10:40:26.415652   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 10:40:26.629288   15688 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 10:40:26.636419   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 10:40:26.830129   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 10:40:26.841005   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 10:40:27.128996   15688 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 10:40:27.135247   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 10:40:27.326005   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 10:40:27.340449   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 10:40:27.476896   15688 pod_ready.go:102] pod "metrics-server-c59844bb4-gsd5w" in "kube-system" namespace has status "Ready":"False"
	I0603 10:40:27.630680   15688 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 10:40:27.634738   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 10:40:27.826937   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 10:40:27.840335   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 10:40:28.129121   15688 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 10:40:28.136173   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 10:40:28.328310   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 10:40:28.341964   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 10:40:28.631168   15688 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 10:40:28.634825   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 10:40:28.826349   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 10:40:28.842499   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 10:40:29.129599   15688 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 10:40:29.135004   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 10:40:29.328498   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 10:40:29.341148   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 10:40:29.629606   15688 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 10:40:29.635069   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 10:40:29.826876   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 10:40:29.841991   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 10:40:29.974719   15688 pod_ready.go:102] pod "metrics-server-c59844bb4-gsd5w" in "kube-system" namespace has status "Ready":"False"
	I0603 10:40:30.130126   15688 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 10:40:30.140275   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 10:40:30.325918   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 10:40:30.341379   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 10:40:30.628915   15688 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 10:40:30.635247   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 10:40:30.825972   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 10:40:30.841008   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 10:40:31.129679   15688 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 10:40:31.134765   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 10:40:31.326852   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 10:40:31.341006   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 10:40:31.629386   15688 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 10:40:31.635007   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 10:40:31.826472   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 10:40:31.842581   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 10:40:31.981443   15688 pod_ready.go:102] pod "metrics-server-c59844bb4-gsd5w" in "kube-system" namespace has status "Ready":"False"
	I0603 10:40:32.129242   15688 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 10:40:32.135769   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 10:40:32.325505   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 10:40:32.340407   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 10:40:32.629524   15688 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 10:40:32.634756   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 10:40:32.824742   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 10:40:32.840440   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 10:40:33.128924   15688 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 10:40:33.134758   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 10:40:33.325481   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 10:40:33.340100   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 10:40:33.629037   15688 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 10:40:33.635190   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 10:40:33.826052   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 10:40:33.841685   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 10:40:34.128359   15688 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 10:40:34.135283   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 10:40:34.327775   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 10:40:34.341317   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 10:40:34.475564   15688 pod_ready.go:102] pod "metrics-server-c59844bb4-gsd5w" in "kube-system" namespace has status "Ready":"False"
	I0603 10:40:34.629621   15688 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 10:40:34.635383   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 10:40:34.826833   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 10:40:34.842712   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 10:40:35.128988   15688 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 10:40:35.134900   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 10:40:35.325499   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 10:40:35.340319   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 10:40:35.629419   15688 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 10:40:35.635252   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 10:40:35.825864   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 10:40:35.841635   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 10:40:36.128448   15688 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 10:40:36.134236   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 10:40:36.326392   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 10:40:36.340896   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 10:40:36.629496   15688 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 10:40:36.634506   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 10:40:36.827828   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 10:40:36.849174   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 10:40:36.975547   15688 pod_ready.go:102] pod "metrics-server-c59844bb4-gsd5w" in "kube-system" namespace has status "Ready":"False"
	I0603 10:40:37.130540   15688 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 10:40:37.134825   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 10:40:37.327677   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 10:40:37.340937   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 10:40:37.629546   15688 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 10:40:37.635213   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 10:40:37.825812   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 10:40:37.841384   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 10:40:38.130158   15688 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 10:40:38.141363   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 10:40:38.326495   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 10:40:38.342320   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 10:40:38.629165   15688 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 10:40:38.635462   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 10:40:38.827806   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 10:40:38.840411   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 10:40:39.128833   15688 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 10:40:39.135888   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 10:40:39.325539   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 10:40:39.340789   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 10:40:39.478099   15688 pod_ready.go:102] pod "metrics-server-c59844bb4-gsd5w" in "kube-system" namespace has status "Ready":"False"
	I0603 10:40:39.628973   15688 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 10:40:39.634808   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 10:40:39.828979   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 10:40:39.841217   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 10:40:40.129266   15688 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 10:40:40.135431   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 10:40:40.326112   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 10:40:40.342062   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 10:40:40.628890   15688 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 10:40:40.635691   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 10:40:40.829492   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 10:40:40.841415   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 10:40:41.129636   15688 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 10:40:41.135254   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 10:40:41.327158   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 10:40:41.343083   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 10:40:41.480582   15688 pod_ready.go:102] pod "metrics-server-c59844bb4-gsd5w" in "kube-system" namespace has status "Ready":"False"
	I0603 10:40:41.629429   15688 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 10:40:41.634488   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 10:40:41.825415   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 10:40:41.845010   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 10:40:42.128445   15688 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 10:40:42.136433   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 10:40:42.326560   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 10:40:42.340936   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 10:40:42.628624   15688 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 10:40:42.634483   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 10:40:42.828415   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 10:40:42.842167   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 10:40:43.128983   15688 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 10:40:43.134991   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 10:40:43.325774   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 10:40:43.343629   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 10:40:43.630166   15688 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 10:40:43.642203   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 10:40:43.826455   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 10:40:43.842780   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 10:40:43.975623   15688 pod_ready.go:102] pod "metrics-server-c59844bb4-gsd5w" in "kube-system" namespace has status "Ready":"False"
	I0603 10:40:44.130773   15688 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 10:40:44.136753   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 10:40:44.325363   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 10:40:44.340172   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 10:40:44.628197   15688 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 10:40:44.635016   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 10:40:44.825430   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 10:40:44.841480   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 10:40:45.128813   15688 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 10:40:45.134615   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 10:40:45.325119   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 10:40:45.341332   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 10:40:45.630019   15688 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 10:40:45.634645   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 10:40:45.827281   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 10:40:45.844284   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 10:40:46.123181   15688 pod_ready.go:102] pod "metrics-server-c59844bb4-gsd5w" in "kube-system" namespace has status "Ready":"False"
	I0603 10:40:46.128042   15688 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 10:40:46.136021   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 10:40:46.326013   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 10:40:46.340854   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 10:40:46.629077   15688 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 10:40:46.635411   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 10:40:46.827647   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 10:40:46.840477   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 10:40:47.129277   15688 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 10:40:47.136137   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 10:40:47.327790   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 10:40:47.343425   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 10:40:47.629574   15688 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 10:40:47.635443   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 10:40:47.826178   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 10:40:47.840781   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 10:40:48.132576   15688 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 10:40:48.139351   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 10:40:48.326805   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 10:40:48.341385   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 10:40:48.475655   15688 pod_ready.go:102] pod "metrics-server-c59844bb4-gsd5w" in "kube-system" namespace has status "Ready":"False"
	I0603 10:40:48.629433   15688 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 10:40:48.634673   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 10:40:48.887233   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 10:40:48.896745   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 10:40:49.129278   15688 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 10:40:49.141382   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 10:40:49.326073   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 10:40:49.345417   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 10:40:49.628578   15688 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 10:40:49.636377   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 10:40:49.829768   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 10:40:49.840646   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 10:40:50.132725   15688 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 10:40:50.145717   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 10:40:50.344159   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 10:40:50.346772   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 10:40:50.481980   15688 pod_ready.go:102] pod "metrics-server-c59844bb4-gsd5w" in "kube-system" namespace has status "Ready":"False"
	I0603 10:40:50.628717   15688 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 10:40:50.637554   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 10:40:50.826283   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 10:40:50.839945   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 10:40:51.129008   15688 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 10:40:51.134814   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 10:40:51.325947   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 10:40:51.340011   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 10:40:51.628353   15688 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 10:40:51.635095   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 10:40:51.825694   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 10:40:51.840445   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 10:40:52.129368   15688 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 10:40:52.137157   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 10:40:52.326504   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 10:40:52.340540   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 10:40:52.628391   15688 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 10:40:52.635161   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 10:40:52.825928   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 10:40:52.840911   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 10:40:52.975304   15688 pod_ready.go:102] pod "metrics-server-c59844bb4-gsd5w" in "kube-system" namespace has status "Ready":"False"
	I0603 10:40:53.129276   15688 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 10:40:53.134387   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 10:40:53.327275   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 10:40:53.341067   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 10:40:53.629664   15688 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 10:40:53.635059   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 10:40:53.826103   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 10:40:53.840089   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 10:40:54.129403   15688 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 10:40:54.135339   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 10:40:54.326160   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 10:40:54.341098   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 10:40:54.628626   15688 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 10:40:54.635110   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 10:40:54.825638   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 10:40:54.840920   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 10:40:54.976126   15688 pod_ready.go:102] pod "metrics-server-c59844bb4-gsd5w" in "kube-system" namespace has status "Ready":"False"
	I0603 10:40:55.128855   15688 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 10:40:55.135486   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 10:40:55.326302   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 10:40:55.354256   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 10:40:55.629568   15688 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 10:40:55.635361   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 10:40:55.825695   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 10:40:55.840548   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 10:40:56.129253   15688 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 10:40:56.135445   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 10:40:56.328838   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 10:40:56.340869   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 10:40:56.629079   15688 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 10:40:56.634732   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 10:40:56.825627   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 10:40:56.841266   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 10:40:57.129251   15688 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 10:40:57.135876   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 10:40:57.327844   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 10:40:57.340616   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 10:40:57.475925   15688 pod_ready.go:102] pod "metrics-server-c59844bb4-gsd5w" in "kube-system" namespace has status "Ready":"False"
	I0603 10:40:57.629230   15688 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 10:40:57.635319   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 10:40:57.825964   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 10:40:57.840573   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 10:40:58.132600   15688 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 10:40:58.142104   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 10:40:58.326143   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 10:40:58.340419   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 10:40:58.629573   15688 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 10:40:58.635694   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 10:40:58.826307   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 10:40:58.841345   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 10:40:59.129395   15688 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 10:40:59.135165   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 10:40:59.332535   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 10:40:59.352873   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 10:40:59.796052   15688 kapi.go:107] duration metric: took 58.167361199s to wait for kubernetes.io/minikube-addons=registry ...
	I0603 10:40:59.796748   15688 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 10:40:59.800176   15688 pod_ready.go:102] pod "metrics-server-c59844bb4-gsd5w" in "kube-system" namespace has status "Ready":"False"
	I0603 10:40:59.827984   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 10:40:59.841880   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 10:41:00.129686   15688 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 10:41:00.338490   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 10:41:00.339869   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 10:41:00.629037   15688 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 10:41:00.828205   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 10:41:00.843203   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 10:41:01.129448   15688 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 10:41:01.325824   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 10:41:01.341038   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 10:41:01.628663   15688 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 10:41:01.825801   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 10:41:01.841104   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 10:41:01.976084   15688 pod_ready.go:102] pod "metrics-server-c59844bb4-gsd5w" in "kube-system" namespace has status "Ready":"False"
	I0603 10:41:02.129188   15688 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 10:41:02.326246   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 10:41:02.341316   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 10:41:02.633992   15688 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 10:41:02.826176   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 10:41:02.841389   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 10:41:03.129858   15688 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 10:41:03.325032   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 10:41:03.340708   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 10:41:03.629836   15688 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 10:41:03.825797   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 10:41:03.840983   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 10:41:04.129495   15688 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 10:41:04.330876   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 10:41:04.350196   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 10:41:04.475180   15688 pod_ready.go:102] pod "metrics-server-c59844bb4-gsd5w" in "kube-system" namespace has status "Ready":"False"
	I0603 10:41:04.629092   15688 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 10:41:04.825338   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 10:41:04.840538   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 10:41:05.129237   15688 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 10:41:05.325247   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 10:41:05.340168   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 10:41:05.631124   15688 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 10:41:05.826821   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 10:41:05.841336   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 10:41:06.131560   15688 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 10:41:06.337084   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 10:41:06.351680   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 10:41:06.481809   15688 pod_ready.go:102] pod "metrics-server-c59844bb4-gsd5w" in "kube-system" namespace has status "Ready":"False"
	I0603 10:41:06.628040   15688 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 10:41:06.830200   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 10:41:06.840323   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 10:41:07.128334   15688 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 10:41:07.325678   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 10:41:07.341177   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 10:41:07.628810   15688 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 10:41:07.827618   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 10:41:07.841002   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 10:41:08.131072   15688 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 10:41:08.326603   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 10:41:08.341237   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 10:41:08.631059   15688 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 10:41:08.832500   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 10:41:08.840434   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 10:41:08.977459   15688 pod_ready.go:102] pod "metrics-server-c59844bb4-gsd5w" in "kube-system" namespace has status "Ready":"False"
	I0603 10:41:09.130397   15688 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 10:41:09.326201   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 10:41:09.342569   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 10:41:09.629886   15688 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 10:41:09.826229   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 10:41:09.840149   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 10:41:10.128759   15688 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 10:41:10.325673   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 10:41:10.340707   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 10:41:10.628816   15688 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 10:41:10.826811   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 10:41:10.841339   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 10:41:10.977780   15688 pod_ready.go:102] pod "metrics-server-c59844bb4-gsd5w" in "kube-system" namespace has status "Ready":"False"
	I0603 10:41:11.128454   15688 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 10:41:11.327375   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 10:41:11.340593   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 10:41:11.628668   15688 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 10:41:11.825820   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 10:41:11.840516   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 10:41:12.130906   15688 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 10:41:12.741967   15688 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 10:41:12.742034   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 10:41:12.745893   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 10:41:12.826186   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 10:41:12.840445   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 10:41:13.128987   15688 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 10:41:13.326928   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 10:41:13.340345   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 10:41:13.475518   15688 pod_ready.go:102] pod "metrics-server-c59844bb4-gsd5w" in "kube-system" namespace has status "Ready":"False"
	I0603 10:41:13.629130   15688 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 10:41:13.825690   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 10:41:13.841268   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 10:41:14.128939   15688 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 10:41:14.325361   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 10:41:14.340314   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 10:41:14.629239   15688 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 10:41:14.828987   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 10:41:14.846710   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 10:41:15.129087   15688 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 10:41:15.325909   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 10:41:15.341413   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 10:41:15.478787   15688 pod_ready.go:102] pod "metrics-server-c59844bb4-gsd5w" in "kube-system" namespace has status "Ready":"False"
	I0603 10:41:15.629448   15688 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 10:41:15.826033   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 10:41:15.842079   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 10:41:16.149641   15688 kapi.go:107] duration metric: took 1m14.524974589s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0603 10:41:16.329501   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 10:41:16.341596   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 10:41:16.826047   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 10:41:16.840410   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 10:41:17.325501   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 10:41:17.340621   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 10:41:17.825646   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 10:41:17.840635   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 10:41:17.976578   15688 pod_ready.go:102] pod "metrics-server-c59844bb4-gsd5w" in "kube-system" namespace has status "Ready":"False"
	I0603 10:41:18.325555   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 10:41:18.340684   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 10:41:18.825925   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 10:41:18.839718   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 10:41:19.325964   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 10:41:19.339834   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 10:41:19.825766   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 10:41:19.840578   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 10:41:20.326143   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 10:41:20.340592   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 10:41:20.475743   15688 pod_ready.go:102] pod "metrics-server-c59844bb4-gsd5w" in "kube-system" namespace has status "Ready":"False"
	I0603 10:41:20.825935   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 10:41:20.840210   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 10:41:21.332189   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 10:41:21.340994   15688 kapi.go:107] duration metric: took 1m16.003829036s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0603 10:41:21.342646   15688 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-926744 cluster.
	I0603 10:41:21.344099   15688 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0603 10:41:21.345330   15688 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
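	As a concrete illustration of the gcp-auth hint above: the addon skips pods whose configuration carries the `gcp-auth-skip-secret` label at admission time. A minimal sketch, assuming the label value "true" and the nginx image (the log only names the key, so both are assumptions, not the harness's own call):
	  # hypothetical example: create a pod whose spec carries the skip label when it is admitted
	  kubectl --context addons-926744 run skip-demo --image=nginx --labels="gcp-auth-skip-secret=true"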
	I0603 10:41:21.825816   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 10:41:22.326505   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 10:41:22.483600   15688 pod_ready.go:102] pod "metrics-server-c59844bb4-gsd5w" in "kube-system" namespace has status "Ready":"False"
	I0603 10:41:22.825431   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 10:41:23.327541   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 10:41:23.826799   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 10:41:24.339560   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 10:41:24.831668   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 10:41:24.984779   15688 pod_ready.go:102] pod "metrics-server-c59844bb4-gsd5w" in "kube-system" namespace has status "Ready":"False"
	I0603 10:41:25.326088   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 10:41:25.998518   15688 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 10:41:26.326335   15688 kapi.go:107] duration metric: took 1m23.005982669s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0603 10:41:26.328081   15688 out.go:177] * Enabled addons: nvidia-device-plugin, cloud-spanner, inspektor-gadget, storage-provisioner, metrics-server, ingress-dns, helm-tiller, yakd, default-storageclass, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
	I0603 10:41:26.329460   15688 addons.go:510] duration metric: took 1m33.383589678s for enable addons: enabled=[nvidia-device-plugin cloud-spanner inspektor-gadget storage-provisioner metrics-server ingress-dns helm-tiller yakd default-storageclass volumesnapshots registry ingress gcp-auth csi-hostpath-driver]
	I0603 10:41:27.478177   15688 pod_ready.go:102] pod "metrics-server-c59844bb4-gsd5w" in "kube-system" namespace has status "Ready":"False"
	I0603 10:41:29.478428   15688 pod_ready.go:102] pod "metrics-server-c59844bb4-gsd5w" in "kube-system" namespace has status "Ready":"False"
	I0603 10:41:31.483747   15688 pod_ready.go:102] pod "metrics-server-c59844bb4-gsd5w" in "kube-system" namespace has status "Ready":"False"
	I0603 10:41:33.977262   15688 pod_ready.go:102] pod "metrics-server-c59844bb4-gsd5w" in "kube-system" namespace has status "Ready":"False"
	I0603 10:41:36.476890   15688 pod_ready.go:102] pod "metrics-server-c59844bb4-gsd5w" in "kube-system" namespace has status "Ready":"False"
	I0603 10:41:38.975796   15688 pod_ready.go:102] pod "metrics-server-c59844bb4-gsd5w" in "kube-system" namespace has status "Ready":"False"
	I0603 10:41:41.476456   15688 pod_ready.go:102] pod "metrics-server-c59844bb4-gsd5w" in "kube-system" namespace has status "Ready":"False"
	I0603 10:41:43.484228   15688 pod_ready.go:102] pod "metrics-server-c59844bb4-gsd5w" in "kube-system" namespace has status "Ready":"False"
	I0603 10:41:45.976154   15688 pod_ready.go:102] pod "metrics-server-c59844bb4-gsd5w" in "kube-system" namespace has status "Ready":"False"
	I0603 10:41:48.476134   15688 pod_ready.go:102] pod "metrics-server-c59844bb4-gsd5w" in "kube-system" namespace has status "Ready":"False"
	I0603 10:41:50.975934   15688 pod_ready.go:102] pod "metrics-server-c59844bb4-gsd5w" in "kube-system" namespace has status "Ready":"False"
	I0603 10:41:53.478367   15688 pod_ready.go:102] pod "metrics-server-c59844bb4-gsd5w" in "kube-system" namespace has status "Ready":"False"
	I0603 10:41:55.976681   15688 pod_ready.go:102] pod "metrics-server-c59844bb4-gsd5w" in "kube-system" namespace has status "Ready":"False"
	I0603 10:41:58.476504   15688 pod_ready.go:102] pod "metrics-server-c59844bb4-gsd5w" in "kube-system" namespace has status "Ready":"False"
	I0603 10:41:59.477100   15688 pod_ready.go:92] pod "metrics-server-c59844bb4-gsd5w" in "kube-system" namespace has status "Ready":"True"
	I0603 10:41:59.477121   15688 pod_ready.go:81] duration metric: took 1m48.007436943s for pod "metrics-server-c59844bb4-gsd5w" in "kube-system" namespace to be "Ready" ...
	I0603 10:41:59.477131   15688 pod_ready.go:78] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-xsjk2" in "kube-system" namespace to be "Ready" ...
	I0603 10:41:59.481313   15688 pod_ready.go:92] pod "nvidia-device-plugin-daemonset-xsjk2" in "kube-system" namespace has status "Ready":"True"
	I0603 10:41:59.481330   15688 pod_ready.go:81] duration metric: took 4.193913ms for pod "nvidia-device-plugin-daemonset-xsjk2" in "kube-system" namespace to be "Ready" ...
	I0603 10:41:59.481350   15688 pod_ready.go:38] duration metric: took 1m57.857362978s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
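	The readiness waits summarized above can be spot-checked against the same cluster with kubectl directly; a sketch using the pod name reported in the log (the 6m timeout is arbitrary, and this is an equivalent check, not the harness's own implementation):
	  # wait for the metrics-server pod named in the log to report Ready
	  kubectl --context addons-926744 -n kube-system wait --for=condition=Ready pod/metrics-server-c59844bb4-gsd5w --timeout=6m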
	I0603 10:41:59.481364   15688 api_server.go:52] waiting for apiserver process to appear ...
	I0603 10:41:59.481405   15688 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 10:41:59.481454   15688 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 10:41:59.576247   15688 cri.go:89] found id: "d1b4710df7b696f0a0182430163fe8bc85b767ca0338a1f340a6c3676f9c4b5d"
	I0603 10:41:59.576271   15688 cri.go:89] found id: ""
	I0603 10:41:59.576280   15688 logs.go:276] 1 containers: [d1b4710df7b696f0a0182430163fe8bc85b767ca0338a1f340a6c3676f9c4b5d]
	I0603 10:41:59.576338   15688 ssh_runner.go:195] Run: which crictl
	I0603 10:41:59.580743   15688 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 10:41:59.580799   15688 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 10:41:59.629996   15688 cri.go:89] found id: "0ffe7b014e84dd57d7b5b9f2a08c606558cf0cbfb76ea9854262db7db62378b0"
	I0603 10:41:59.630019   15688 cri.go:89] found id: ""
	I0603 10:41:59.630027   15688 logs.go:276] 1 containers: [0ffe7b014e84dd57d7b5b9f2a08c606558cf0cbfb76ea9854262db7db62378b0]
	I0603 10:41:59.630080   15688 ssh_runner.go:195] Run: which crictl
	I0603 10:41:59.637789   15688 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 10:41:59.637854   15688 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 10:41:59.680848   15688 cri.go:89] found id: "3491475c959d7d0afa4ffb099d959f5d3faf2ec3077b63f3cb85867da842f846"
	I0603 10:41:59.680872   15688 cri.go:89] found id: ""
	I0603 10:41:59.680880   15688 logs.go:276] 1 containers: [3491475c959d7d0afa4ffb099d959f5d3faf2ec3077b63f3cb85867da842f846]
	I0603 10:41:59.680931   15688 ssh_runner.go:195] Run: which crictl
	I0603 10:41:59.685110   15688 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 10:41:59.685161   15688 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 10:41:59.724139   15688 cri.go:89] found id: "b62b083e9d8cd1f295f9048fe1ad2f9efbcbb77ae70ee9e952bc18d63662d708"
	I0603 10:41:59.724164   15688 cri.go:89] found id: ""
	I0603 10:41:59.724175   15688 logs.go:276] 1 containers: [b62b083e9d8cd1f295f9048fe1ad2f9efbcbb77ae70ee9e952bc18d63662d708]
	I0603 10:41:59.724228   15688 ssh_runner.go:195] Run: which crictl
	I0603 10:41:59.728217   15688 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 10:41:59.728274   15688 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 10:41:59.766380   15688 cri.go:89] found id: "3e2009a9b8f4ff5b5edff04f22d53bb8c740dd0efe3373e23157d24615e24e56"
	I0603 10:41:59.766401   15688 cri.go:89] found id: ""
	I0603 10:41:59.766408   15688 logs.go:276] 1 containers: [3e2009a9b8f4ff5b5edff04f22d53bb8c740dd0efe3373e23157d24615e24e56]
	I0603 10:41:59.766451   15688 ssh_runner.go:195] Run: which crictl
	I0603 10:41:59.772251   15688 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 10:41:59.772318   15688 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 10:41:59.819207   15688 cri.go:89] found id: "5b548a14c5e6442eac3e36a22be12380807f80f6e2c4c320c60a68b4d7002d5a"
	I0603 10:41:59.819231   15688 cri.go:89] found id: ""
	I0603 10:41:59.819240   15688 logs.go:276] 1 containers: [5b548a14c5e6442eac3e36a22be12380807f80f6e2c4c320c60a68b4d7002d5a]
	I0603 10:41:59.819292   15688 ssh_runner.go:195] Run: which crictl
	I0603 10:41:59.823970   15688 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 10:41:59.824026   15688 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 10:41:59.861614   15688 cri.go:89] found id: ""
	I0603 10:41:59.861645   15688 logs.go:276] 0 containers: []
	W0603 10:41:59.861655   15688 logs.go:278] No container was found matching "kindnet"
	I0603 10:41:59.861666   15688 logs.go:123] Gathering logs for kube-controller-manager [5b548a14c5e6442eac3e36a22be12380807f80f6e2c4c320c60a68b4d7002d5a] ...
	I0603 10:41:59.861685   15688 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5b548a14c5e6442eac3e36a22be12380807f80f6e2c4c320c60a68b4d7002d5a"
	I0603 10:41:59.921615   15688 logs.go:123] Gathering logs for container status ...
	I0603 10:41:59.921647   15688 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 10:41:59.969437   15688 logs.go:123] Gathering logs for kubelet ...
	I0603 10:41:59.969472   15688 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 10:42:00.051950   15688 logs.go:123] Gathering logs for dmesg ...
	I0603 10:42:00.051984   15688 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 10:42:00.068389   15688 logs.go:123] Gathering logs for coredns [3491475c959d7d0afa4ffb099d959f5d3faf2ec3077b63f3cb85867da842f846] ...
	I0603 10:42:00.068425   15688 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3491475c959d7d0afa4ffb099d959f5d3faf2ec3077b63f3cb85867da842f846"
	I0603 10:42:00.107686   15688 logs.go:123] Gathering logs for kube-scheduler [b62b083e9d8cd1f295f9048fe1ad2f9efbcbb77ae70ee9e952bc18d63662d708] ...
	I0603 10:42:00.107715   15688 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b62b083e9d8cd1f295f9048fe1ad2f9efbcbb77ae70ee9e952bc18d63662d708"
	I0603 10:42:00.148614   15688 logs.go:123] Gathering logs for kube-proxy [3e2009a9b8f4ff5b5edff04f22d53bb8c740dd0efe3373e23157d24615e24e56] ...
	I0603 10:42:00.148644   15688 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3e2009a9b8f4ff5b5edff04f22d53bb8c740dd0efe3373e23157d24615e24e56"
	I0603 10:42:00.194053   15688 logs.go:123] Gathering logs for describe nodes ...
	I0603 10:42:00.194083   15688 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0603 10:42:00.320720   15688 logs.go:123] Gathering logs for kube-apiserver [d1b4710df7b696f0a0182430163fe8bc85b767ca0338a1f340a6c3676f9c4b5d] ...
	I0603 10:42:00.320746   15688 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d1b4710df7b696f0a0182430163fe8bc85b767ca0338a1f340a6c3676f9c4b5d"
	I0603 10:42:00.370323   15688 logs.go:123] Gathering logs for etcd [0ffe7b014e84dd57d7b5b9f2a08c606558cf0cbfb76ea9854262db7db62378b0] ...
	I0603 10:42:00.370351   15688 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0ffe7b014e84dd57d7b5b9f2a08c606558cf0cbfb76ea9854262db7db62378b0"
	I0603 10:42:00.422239   15688 logs.go:123] Gathering logs for CRI-O ...
	I0603 10:42:00.422271   15688 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
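	The log-gathering pass above follows a fixed pattern: resolve each control-plane container ID with crictl, then tail its last 400 log lines. A small shell sketch of the same pattern, assuming crictl is available on the node and the same component names (the head -n1 guard is an addition for the case of multiple matches):
	  # sketch: for each component, find a container ID and tail its recent logs
	  for name in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager; do
	    id=$(sudo crictl ps -a --quiet --name="$name" | head -n1)
	    [ -n "$id" ] && sudo crictl logs --tail 400 "$id"
	  done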
	I0603 10:42:03.562052   15688 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 10:42:03.581879   15688 api_server.go:72] duration metric: took 2m10.636034699s to wait for apiserver process to appear ...
	I0603 10:42:03.581909   15688 api_server.go:88] waiting for apiserver healthz status ...
	I0603 10:42:03.581944   15688 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 10:42:03.582007   15688 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 10:42:03.621522   15688 cri.go:89] found id: "d1b4710df7b696f0a0182430163fe8bc85b767ca0338a1f340a6c3676f9c4b5d"
	I0603 10:42:03.621555   15688 cri.go:89] found id: ""
	I0603 10:42:03.621565   15688 logs.go:276] 1 containers: [d1b4710df7b696f0a0182430163fe8bc85b767ca0338a1f340a6c3676f9c4b5d]
	I0603 10:42:03.621625   15688 ssh_runner.go:195] Run: which crictl
	I0603 10:42:03.626419   15688 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 10:42:03.626485   15688 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 10:42:03.664347   15688 cri.go:89] found id: "0ffe7b014e84dd57d7b5b9f2a08c606558cf0cbfb76ea9854262db7db62378b0"
	I0603 10:42:03.664371   15688 cri.go:89] found id: ""
	I0603 10:42:03.664379   15688 logs.go:276] 1 containers: [0ffe7b014e84dd57d7b5b9f2a08c606558cf0cbfb76ea9854262db7db62378b0]
	I0603 10:42:03.664430   15688 ssh_runner.go:195] Run: which crictl
	I0603 10:42:03.668277   15688 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 10:42:03.668334   15688 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 10:42:03.706121   15688 cri.go:89] found id: "3491475c959d7d0afa4ffb099d959f5d3faf2ec3077b63f3cb85867da842f846"
	I0603 10:42:03.706142   15688 cri.go:89] found id: ""
	I0603 10:42:03.706151   15688 logs.go:276] 1 containers: [3491475c959d7d0afa4ffb099d959f5d3faf2ec3077b63f3cb85867da842f846]
	I0603 10:42:03.706199   15688 ssh_runner.go:195] Run: which crictl
	I0603 10:42:03.709966   15688 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 10:42:03.710012   15688 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 10:42:03.747510   15688 cri.go:89] found id: "b62b083e9d8cd1f295f9048fe1ad2f9efbcbb77ae70ee9e952bc18d63662d708"
	I0603 10:42:03.747534   15688 cri.go:89] found id: ""
	I0603 10:42:03.747541   15688 logs.go:276] 1 containers: [b62b083e9d8cd1f295f9048fe1ad2f9efbcbb77ae70ee9e952bc18d63662d708]
	I0603 10:42:03.747579   15688 ssh_runner.go:195] Run: which crictl
	I0603 10:42:03.751534   15688 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 10:42:03.751590   15688 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 10:42:03.797244   15688 cri.go:89] found id: "3e2009a9b8f4ff5b5edff04f22d53bb8c740dd0efe3373e23157d24615e24e56"
	I0603 10:42:03.797271   15688 cri.go:89] found id: ""
	I0603 10:42:03.797281   15688 logs.go:276] 1 containers: [3e2009a9b8f4ff5b5edff04f22d53bb8c740dd0efe3373e23157d24615e24e56]
	I0603 10:42:03.797340   15688 ssh_runner.go:195] Run: which crictl
	I0603 10:42:03.801747   15688 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 10:42:03.801810   15688 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 10:42:03.838254   15688 cri.go:89] found id: "5b548a14c5e6442eac3e36a22be12380807f80f6e2c4c320c60a68b4d7002d5a"
	I0603 10:42:03.838281   15688 cri.go:89] found id: ""
	I0603 10:42:03.838290   15688 logs.go:276] 1 containers: [5b548a14c5e6442eac3e36a22be12380807f80f6e2c4c320c60a68b4d7002d5a]
	I0603 10:42:03.838339   15688 ssh_runner.go:195] Run: which crictl
	I0603 10:42:03.842636   15688 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 10:42:03.842702   15688 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 10:42:03.880139   15688 cri.go:89] found id: ""
	I0603 10:42:03.880167   15688 logs.go:276] 0 containers: []
	W0603 10:42:03.880177   15688 logs.go:278] No container was found matching "kindnet"
	I0603 10:42:03.880187   15688 logs.go:123] Gathering logs for kubelet ...
	I0603 10:42:03.880199   15688 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 10:42:03.968375   15688 logs.go:123] Gathering logs for dmesg ...
	I0603 10:42:03.968417   15688 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 10:42:03.983683   15688 logs.go:123] Gathering logs for kube-scheduler [b62b083e9d8cd1f295f9048fe1ad2f9efbcbb77ae70ee9e952bc18d63662d708] ...
	I0603 10:42:03.983718   15688 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b62b083e9d8cd1f295f9048fe1ad2f9efbcbb77ae70ee9e952bc18d63662d708"
	I0603 10:42:04.031500   15688 logs.go:123] Gathering logs for kube-proxy [3e2009a9b8f4ff5b5edff04f22d53bb8c740dd0efe3373e23157d24615e24e56] ...
	I0603 10:42:04.031532   15688 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3e2009a9b8f4ff5b5edff04f22d53bb8c740dd0efe3373e23157d24615e24e56"
	I0603 10:42:04.069096   15688 logs.go:123] Gathering logs for CRI-O ...
	I0603 10:42:04.069133   15688 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 10:42:04.803447   15688 logs.go:123] Gathering logs for describe nodes ...
	I0603 10:42:04.803494   15688 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0603 10:42:04.916452   15688 logs.go:123] Gathering logs for kube-apiserver [d1b4710df7b696f0a0182430163fe8bc85b767ca0338a1f340a6c3676f9c4b5d] ...
	I0603 10:42:04.916491   15688 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d1b4710df7b696f0a0182430163fe8bc85b767ca0338a1f340a6c3676f9c4b5d"
	I0603 10:42:04.968508   15688 logs.go:123] Gathering logs for etcd [0ffe7b014e84dd57d7b5b9f2a08c606558cf0cbfb76ea9854262db7db62378b0] ...
	I0603 10:42:04.968533   15688 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0ffe7b014e84dd57d7b5b9f2a08c606558cf0cbfb76ea9854262db7db62378b0"
	I0603 10:42:05.028125   15688 logs.go:123] Gathering logs for coredns [3491475c959d7d0afa4ffb099d959f5d3faf2ec3077b63f3cb85867da842f846] ...
	I0603 10:42:05.028155   15688 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3491475c959d7d0afa4ffb099d959f5d3faf2ec3077b63f3cb85867da842f846"
	I0603 10:42:05.067620   15688 logs.go:123] Gathering logs for kube-controller-manager [5b548a14c5e6442eac3e36a22be12380807f80f6e2c4c320c60a68b4d7002d5a] ...
	I0603 10:42:05.067646   15688 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5b548a14c5e6442eac3e36a22be12380807f80f6e2c4c320c60a68b4d7002d5a"
	I0603 10:42:05.135035   15688 logs.go:123] Gathering logs for container status ...
	I0603 10:42:05.135072   15688 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 10:42:07.693030   15688 api_server.go:253] Checking apiserver healthz at https://192.168.39.188:8443/healthz ...
	I0603 10:42:07.698222   15688 api_server.go:279] https://192.168.39.188:8443/healthz returned 200:
	ok
	I0603 10:42:07.699367   15688 api_server.go:141] control plane version: v1.30.1
	I0603 10:42:07.699389   15688 api_server.go:131] duration metric: took 4.117473615s to wait for apiserver health ...
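	The health check above probes the apiserver's /healthz endpoint directly. A hedged equivalent from the host, assuming anonymous access to /healthz is permitted (the Kubernetes default) and using -k to skip TLS verification; a healthy apiserver returns the body "ok" as seen in the log:
	  # sketch: probe the same endpoint the harness checked
	  curl -k https://192.168.39.188:8443/healthz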
	I0603 10:42:07.699396   15688 system_pods.go:43] waiting for kube-system pods to appear ...
	I0603 10:42:07.699415   15688 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 10:42:07.699457   15688 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 10:42:07.743217   15688 cri.go:89] found id: "d1b4710df7b696f0a0182430163fe8bc85b767ca0338a1f340a6c3676f9c4b5d"
	I0603 10:42:07.743243   15688 cri.go:89] found id: ""
	I0603 10:42:07.743251   15688 logs.go:276] 1 containers: [d1b4710df7b696f0a0182430163fe8bc85b767ca0338a1f340a6c3676f9c4b5d]
	I0603 10:42:07.743291   15688 ssh_runner.go:195] Run: which crictl
	I0603 10:42:07.747333   15688 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 10:42:07.747379   15688 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 10:42:07.785630   15688 cri.go:89] found id: "0ffe7b014e84dd57d7b5b9f2a08c606558cf0cbfb76ea9854262db7db62378b0"
	I0603 10:42:07.785648   15688 cri.go:89] found id: ""
	I0603 10:42:07.785654   15688 logs.go:276] 1 containers: [0ffe7b014e84dd57d7b5b9f2a08c606558cf0cbfb76ea9854262db7db62378b0]
	I0603 10:42:07.785693   15688 ssh_runner.go:195] Run: which crictl
	I0603 10:42:07.790018   15688 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 10:42:07.790067   15688 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 10:42:07.831428   15688 cri.go:89] found id: "3491475c959d7d0afa4ffb099d959f5d3faf2ec3077b63f3cb85867da842f846"
	I0603 10:42:07.831448   15688 cri.go:89] found id: ""
	I0603 10:42:07.831455   15688 logs.go:276] 1 containers: [3491475c959d7d0afa4ffb099d959f5d3faf2ec3077b63f3cb85867da842f846]
	I0603 10:42:07.831495   15688 ssh_runner.go:195] Run: which crictl
	I0603 10:42:07.835559   15688 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 10:42:07.835610   15688 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 10:42:07.884774   15688 cri.go:89] found id: "b62b083e9d8cd1f295f9048fe1ad2f9efbcbb77ae70ee9e952bc18d63662d708"
	I0603 10:42:07.884801   15688 cri.go:89] found id: ""
	I0603 10:42:07.884811   15688 logs.go:276] 1 containers: [b62b083e9d8cd1f295f9048fe1ad2f9efbcbb77ae70ee9e952bc18d63662d708]
	I0603 10:42:07.884858   15688 ssh_runner.go:195] Run: which crictl
	I0603 10:42:07.889297   15688 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 10:42:07.889345   15688 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 10:42:07.926750   15688 cri.go:89] found id: "3e2009a9b8f4ff5b5edff04f22d53bb8c740dd0efe3373e23157d24615e24e56"
	I0603 10:42:07.926772   15688 cri.go:89] found id: ""
	I0603 10:42:07.926781   15688 logs.go:276] 1 containers: [3e2009a9b8f4ff5b5edff04f22d53bb8c740dd0efe3373e23157d24615e24e56]
	I0603 10:42:07.926825   15688 ssh_runner.go:195] Run: which crictl
	I0603 10:42:07.930781   15688 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 10:42:07.930837   15688 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 10:42:07.968207   15688 cri.go:89] found id: "5b548a14c5e6442eac3e36a22be12380807f80f6e2c4c320c60a68b4d7002d5a"
	I0603 10:42:07.968227   15688 cri.go:89] found id: ""
	I0603 10:42:07.968234   15688 logs.go:276] 1 containers: [5b548a14c5e6442eac3e36a22be12380807f80f6e2c4c320c60a68b4d7002d5a]
	I0603 10:42:07.968288   15688 ssh_runner.go:195] Run: which crictl
	I0603 10:42:07.973021   15688 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 10:42:07.973088   15688 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 10:42:08.012257   15688 cri.go:89] found id: ""
	I0603 10:42:08.012280   15688 logs.go:276] 0 containers: []
	W0603 10:42:08.012287   15688 logs.go:278] No container was found matching "kindnet"
	I0603 10:42:08.012296   15688 logs.go:123] Gathering logs for describe nodes ...
	I0603 10:42:08.012312   15688 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0603 10:42:08.140174   15688 logs.go:123] Gathering logs for coredns [3491475c959d7d0afa4ffb099d959f5d3faf2ec3077b63f3cb85867da842f846] ...
	I0603 10:42:08.140197   15688 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3491475c959d7d0afa4ffb099d959f5d3faf2ec3077b63f3cb85867da842f846"
	I0603 10:42:08.179355   15688 logs.go:123] Gathering logs for etcd [0ffe7b014e84dd57d7b5b9f2a08c606558cf0cbfb76ea9854262db7db62378b0] ...
	I0603 10:42:08.179393   15688 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0ffe7b014e84dd57d7b5b9f2a08c606558cf0cbfb76ea9854262db7db62378b0"
	I0603 10:42:08.228118   15688 logs.go:123] Gathering logs for kube-scheduler [b62b083e9d8cd1f295f9048fe1ad2f9efbcbb77ae70ee9e952bc18d63662d708] ...
	I0603 10:42:08.228146   15688 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b62b083e9d8cd1f295f9048fe1ad2f9efbcbb77ae70ee9e952bc18d63662d708"
	I0603 10:42:08.272516   15688 logs.go:123] Gathering logs for kube-proxy [3e2009a9b8f4ff5b5edff04f22d53bb8c740dd0efe3373e23157d24615e24e56] ...
	I0603 10:42:08.272544   15688 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3e2009a9b8f4ff5b5edff04f22d53bb8c740dd0efe3373e23157d24615e24e56"
	I0603 10:42:08.308487   15688 logs.go:123] Gathering logs for kube-controller-manager [5b548a14c5e6442eac3e36a22be12380807f80f6e2c4c320c60a68b4d7002d5a] ...
	I0603 10:42:08.308511   15688 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5b548a14c5e6442eac3e36a22be12380807f80f6e2c4c320c60a68b4d7002d5a"
	I0603 10:42:08.373386   15688 logs.go:123] Gathering logs for CRI-O ...
	I0603 10:42:08.373414   15688 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 10:42:09.218176   15688 logs.go:123] Gathering logs for kubelet ...
	I0603 10:42:09.218213   15688 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 10:42:09.301241   15688 logs.go:123] Gathering logs for dmesg ...
	I0603 10:42:09.301288   15688 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 10:42:09.321049   15688 logs.go:123] Gathering logs for kube-apiserver [d1b4710df7b696f0a0182430163fe8bc85b767ca0338a1f340a6c3676f9c4b5d] ...
	I0603 10:42:09.321083   15688 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d1b4710df7b696f0a0182430163fe8bc85b767ca0338a1f340a6c3676f9c4b5d"
	I0603 10:42:09.380064   15688 logs.go:123] Gathering logs for container status ...
	I0603 10:42:09.380094   15688 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 10:42:11.947596   15688 system_pods.go:59] 18 kube-system pods found
	I0603 10:42:11.947626   15688 system_pods.go:61] "coredns-7db6d8ff4d-x6wn8" [92e13ca5-45f1-4604-a816-b890269a86e9] Running
	I0603 10:42:11.947631   15688 system_pods.go:61] "csi-hostpath-attacher-0" [6f6ae728-2676-48ad-a8bb-c277fafb0fc5] Running
	I0603 10:42:11.947636   15688 system_pods.go:61] "csi-hostpath-resizer-0" [241ed1e6-7eea-41e5-a1f5-df7de8ba25ba] Running
	I0603 10:42:11.947639   15688 system_pods.go:61] "csi-hostpathplugin-rkcvf" [5bc77713-f4d6-478a-bce8-b0197f258ad0] Running
	I0603 10:42:11.947643   15688 system_pods.go:61] "etcd-addons-926744" [556219f4-7461-4935-abf0-a63c9923ca5c] Running
	I0603 10:42:11.947646   15688 system_pods.go:61] "kube-apiserver-addons-926744" [977aebf3-f958-46ef-bee0-014cecbb238f] Running
	I0603 10:42:11.947649   15688 system_pods.go:61] "kube-controller-manager-addons-926744" [566a0d21-83dd-4c4e-9ac0-461af574eb5f] Running
	I0603 10:42:11.947651   15688 system_pods.go:61] "kube-ingress-dns-minikube" [b2df4538-5f55-4952-9579-2cf3d39182c2] Running
	I0603 10:42:11.947654   15688 system_pods.go:61] "kube-proxy-wc47p" [a4052b1a-d14e-4679-8c52-6ebf348b3900] Running
	I0603 10:42:11.947657   15688 system_pods.go:61] "kube-scheduler-addons-926744" [c84ac4e4-3010-4816-9b5a-3cf331ca3f19] Running
	I0603 10:42:11.947660   15688 system_pods.go:61] "metrics-server-c59844bb4-gsd5w" [23f016d5-3265-4e2c-abb2-940fc0259aab] Running
	I0603 10:42:11.947663   15688 system_pods.go:61] "nvidia-device-plugin-daemonset-xsjk2" [6e714474-e47d-438a-8c5f-6f4fc07169af] Running
	I0603 10:42:11.947666   15688 system_pods.go:61] "registry-proxy-mhm9h" [28fbb401-9bee-4e8b-98e2-67e9fbcc54d4] Running
	I0603 10:42:11.947669   15688 system_pods.go:61] "registry-v8sfs" [ae4c2ffe-ab57-4327-a6c0-25504bcd327b] Running
	I0603 10:42:11.947673   15688 system_pods.go:61] "snapshot-controller-745499f584-vbr9k" [f9cdeeee-e6e5-448b-b16f-2672c1794671] Running
	I0603 10:42:11.947676   15688 system_pods.go:61] "snapshot-controller-745499f584-zjct2" [9ad43034-4603-4931-b3c3-fcbe981ba9fa] Running
	I0603 10:42:11.947681   15688 system_pods.go:61] "storage-provisioner" [6d7d74e2-9171-42f1-8cc1-f1708d0d6470] Running
	I0603 10:42:11.947684   15688 system_pods.go:61] "tiller-deploy-6677d64bcd-9kcxj" [fc636068-af58-4546-9600-7cee9712ca32] Running
	I0603 10:42:11.947690   15688 system_pods.go:74] duration metric: took 4.248289497s to wait for pod list to return data ...
	I0603 10:42:11.947700   15688 default_sa.go:34] waiting for default service account to be created ...
	I0603 10:42:11.950145   15688 default_sa.go:45] found service account: "default"
	I0603 10:42:11.950167   15688 default_sa.go:55] duration metric: took 2.458234ms for default service account to be created ...
	I0603 10:42:11.950174   15688 system_pods.go:116] waiting for k8s-apps to be running ...
	I0603 10:42:11.957587   15688 system_pods.go:86] 18 kube-system pods found
	I0603 10:42:11.957614   15688 system_pods.go:89] "coredns-7db6d8ff4d-x6wn8" [92e13ca5-45f1-4604-a816-b890269a86e9] Running
	I0603 10:42:11.957620   15688 system_pods.go:89] "csi-hostpath-attacher-0" [6f6ae728-2676-48ad-a8bb-c277fafb0fc5] Running
	I0603 10:42:11.957625   15688 system_pods.go:89] "csi-hostpath-resizer-0" [241ed1e6-7eea-41e5-a1f5-df7de8ba25ba] Running
	I0603 10:42:11.957628   15688 system_pods.go:89] "csi-hostpathplugin-rkcvf" [5bc77713-f4d6-478a-bce8-b0197f258ad0] Running
	I0603 10:42:11.957633   15688 system_pods.go:89] "etcd-addons-926744" [556219f4-7461-4935-abf0-a63c9923ca5c] Running
	I0603 10:42:11.957637   15688 system_pods.go:89] "kube-apiserver-addons-926744" [977aebf3-f958-46ef-bee0-014cecbb238f] Running
	I0603 10:42:11.957641   15688 system_pods.go:89] "kube-controller-manager-addons-926744" [566a0d21-83dd-4c4e-9ac0-461af574eb5f] Running
	I0603 10:42:11.957650   15688 system_pods.go:89] "kube-ingress-dns-minikube" [b2df4538-5f55-4952-9579-2cf3d39182c2] Running
	I0603 10:42:11.957654   15688 system_pods.go:89] "kube-proxy-wc47p" [a4052b1a-d14e-4679-8c52-6ebf348b3900] Running
	I0603 10:42:11.957658   15688 system_pods.go:89] "kube-scheduler-addons-926744" [c84ac4e4-3010-4816-9b5a-3cf331ca3f19] Running
	I0603 10:42:11.957662   15688 system_pods.go:89] "metrics-server-c59844bb4-gsd5w" [23f016d5-3265-4e2c-abb2-940fc0259aab] Running
	I0603 10:42:11.957667   15688 system_pods.go:89] "nvidia-device-plugin-daemonset-xsjk2" [6e714474-e47d-438a-8c5f-6f4fc07169af] Running
	I0603 10:42:11.957671   15688 system_pods.go:89] "registry-proxy-mhm9h" [28fbb401-9bee-4e8b-98e2-67e9fbcc54d4] Running
	I0603 10:42:11.957676   15688 system_pods.go:89] "registry-v8sfs" [ae4c2ffe-ab57-4327-a6c0-25504bcd327b] Running
	I0603 10:42:11.957683   15688 system_pods.go:89] "snapshot-controller-745499f584-vbr9k" [f9cdeeee-e6e5-448b-b16f-2672c1794671] Running
	I0603 10:42:11.957687   15688 system_pods.go:89] "snapshot-controller-745499f584-zjct2" [9ad43034-4603-4931-b3c3-fcbe981ba9fa] Running
	I0603 10:42:11.957695   15688 system_pods.go:89] "storage-provisioner" [6d7d74e2-9171-42f1-8cc1-f1708d0d6470] Running
	I0603 10:42:11.957699   15688 system_pods.go:89] "tiller-deploy-6677d64bcd-9kcxj" [fc636068-af58-4546-9600-7cee9712ca32] Running
	I0603 10:42:11.957704   15688 system_pods.go:126] duration metric: took 7.525434ms to wait for k8s-apps to be running ...
	I0603 10:42:11.957709   15688 system_svc.go:44] waiting for kubelet service to be running ....
	I0603 10:42:11.957752   15688 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0603 10:42:11.973332   15688 system_svc.go:56] duration metric: took 15.613734ms WaitForService to wait for kubelet
	I0603 10:42:11.973362   15688 kubeadm.go:576] duration metric: took 2m19.027522983s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0603 10:42:11.973385   15688 node_conditions.go:102] verifying NodePressure condition ...
	I0603 10:42:11.976121   15688 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0603 10:42:11.976145   15688 node_conditions.go:123] node cpu capacity is 2
	I0603 10:42:11.976157   15688 node_conditions.go:105] duration metric: took 2.766495ms to run NodePressure ...
	I0603 10:42:11.976167   15688 start.go:240] waiting for startup goroutines ...
	I0603 10:42:11.976174   15688 start.go:245] waiting for cluster config update ...
	I0603 10:42:11.976188   15688 start.go:254] writing updated cluster config ...
	I0603 10:42:11.976441   15688 ssh_runner.go:195] Run: rm -f paused
	I0603 10:42:12.026495   15688 start.go:600] kubectl: 1.30.1, cluster: 1.30.1 (minor skew: 0)
	I0603 10:42:12.028649   15688 out.go:177] * Done! kubectl is now configured to use "addons-926744" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Jun 03 10:48:05 addons-926744 crio[677]: time="2024-06-03 10:48:05.730753642Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1717411685730728175,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:584738,},InodesUsed:&UInt64Value{Value:203,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=246d2287-8624-4fc9-bdd1-1c41169cdd75 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 03 10:48:05 addons-926744 crio[677]: time="2024-06-03 10:48:05.731388202Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=55ae467a-aaa9-41f9-8d91-048c4507d1e5 name=/runtime.v1.RuntimeService/ListContainers
	Jun 03 10:48:05 addons-926744 crio[677]: time="2024-06-03 10:48:05.731512565Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=55ae467a-aaa9-41f9-8d91-048c4507d1e5 name=/runtime.v1.RuntimeService/ListContainers
	Jun 03 10:48:05 addons-926744 crio[677]: time="2024-06-03 10:48:05.731817021Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:74b8be293d0ebe7b326246e1997cbb4359f15be3c3d8c483aedad2a18e553f70,PodSandboxId:3525cac8b28dfb7dd9134eafac5800fe9650ace5531d9d16330c90e9745527ff,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:dd1b12fcb60978ac32686ef6732d56f612c8636ef86693c09613946a54c69d79,State:CONTAINER_RUNNING,CreatedAt:1717411490087680332,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-86c47465fc-ksqv6,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 3832537b-81cc-4b24-a14b-af5ebcdbf83d,},Annotations:map[string]string{io.kubernetes.container.hash: 3b3a229a,io.kubernetes.containe
r.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1ad5d525fcd5d35fd513815d573661881e61d333b6223ffbf64accb1140d9f08,PodSandboxId:5c5a819be7c6f1bc4227b11997dc6e1c8612b484ebfe56b8dca3f6ce2d6b5af3,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:059cdcc5de66cd4e588b5f416b98a7af82e75413e2bf275f1e673c5d7d4b1afa,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:70ea0d8cc5300acde42073a2fbc0d28964ddb6e3c31263d92589c2320c3ccba4,State:CONTAINER_RUNNING,CreatedAt:1717411350997643791,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 2491ce04-859e-4df5-a082-1f95450cf4b1,},Annotations:map[string]string{io.kubern
etes.container.hash: 14ce3732,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:042cb7022a28f047a74fe701a2dbf071db18b0b177620077e90cbe0344c9f23f,PodSandboxId:d782ee88808174b9d8e2a596c1b93e97ce9d6304537b537569d54e99e1a50608,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:34d59bf120f98415e3a69401f6636032a0dc39e1dbfcff149c09591de0fad474,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:bd42824d488ce58074f6d54cb051437d0dc2669f3f96a4d9b3b72a8d7ddda679,State:CONTAINER_RUNNING,CreatedAt:1717411339421952638,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-68456f997b-7jxcw,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.
uid: 61e5ce61-19bd-4190-a787-83d69ca4a957,},Annotations:map[string]string{io.kubernetes.container.hash: c01c48c0,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2ca4f0b5927cee02233231241f88745c5e55ce32bb447642834460dc9dc4ddd3,PodSandboxId:d60392e614597f121fdfd72812a5d531145a1ace2b5aa35f9462ec9f3e4a953f,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1717411280607764906,Labels:map[string]string{io.kubernetes.container.name: gcp-a
uth,io.kubernetes.pod.name: gcp-auth-5db96cd9b4-zspc9,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: aa8cd347-96fe-4345-85ba-fa78e3b4f117,},Annotations:map[string]string{io.kubernetes.container.hash: c9b3f2b0,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9712bf2d29de56f3c2dc6a1cf3109331f414452f77e6a0598140b229a1470303,PodSandboxId:af68176d576dcd65559b079e03da774eb06414e37b4f95c7e16053760fcb8a7e,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a24c7c057ec8730aaa152f77366454835a46dc699fcf243698a622788fd48d62,State:C
ONTAINER_RUNNING,CreatedAt:1717411249295285213,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-c59844bb4-gsd5w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 23f016d5-3265-4e2c-abb2-940fc0259aab,},Annotations:map[string]string{io.kubernetes.container.hash: e617ab66,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:47c3ea3f2517eb7d3a8c1d0ed3865f50eb49ba33713308c32297a0cef952c65f,PodSandboxId:779b9dd0801e20a2199f5814bff59a8ebac15c572ff13bf4dc5121fa7fd62608,Metadata:&ContainerMetadata{Name:yakd,Attempt:0,},Image:&ImageSpec{Image:docker.io/marcnuri/yakd@sha256:a3f540278e4c11373e15605311851dd9c64d208f4d63e727bccc0e39f9329310,Annotations:map[string]string{},UserSpecifi
edImage:,RuntimeHandler:,},ImageRef:31de47c733c918d8371361afabd259bfb18f75409c61d94dce8151a83ee615a5,State:CONTAINER_RUNNING,CreatedAt:1717411246837618579,Labels:map[string]string{io.kubernetes.container.name: yakd,io.kubernetes.pod.name: yakd-dashboard-5ddbf7d777-ljsqm,io.kubernetes.pod.namespace: yakd-dashboard,io.kubernetes.pod.uid: 4efec2ba-9b7e-4693-984d-3f075be141e3,},Annotations:map[string]string{io.kubernetes.container.hash: 90cf0271,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:63c5c0dcb78f9c12afffd5a7364774452b71496f55fe04599af158f992fb6cab,PodSandboxId:626e221da353770ee980b3da595cd5e319bb40528dc9d1bdc1f761a83a73ac9d,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/lo
cal-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1717411238079768404,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-8d985888d-hkptp,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: f20ebd96-b074-4a16-b696-94a3d971de4b,},Annotations:map[string]string{io.kubernetes.container.hash: 37e016a7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f9e5a5b781a69a3b32df93f84eb0fc18139e277d5ad479870e300572b5f172bf,PodSandboxId:d7d998ba525a63885e34b947e49940ef2b5bbe66d22f4f29d21543519707e398,Metadata:&ContainerMetadata{Name:storage-pr
ovisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1717411200016963811,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6d7d74e2-9171-42f1-8cc1-f1708d0d6470,},Annotations:map[string]string{io.kubernetes.container.hash: 29338b53,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3491475c959d7d0afa4ffb099d959f5d3faf2ec3077b63f3cb85867da842f846,PodSandboxId:687420cda82fff91f2c6c5947d206467a859933aeb08033db3bed8c5130205c0,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Im
age:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1717411197138076414,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-x6wn8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 92e13ca5-45f1-4604-a816-b890269a86e9,},Annotations:map[string]string{io.kubernetes.container.hash: 846d61ca,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3e2
009a9b8f4ff5b5edff04f22d53bb8c740dd0efe3373e23157d24615e24e56,PodSandboxId:b800364548168441ce7d1381dea23d4f26404d124526cf8184c9be3e0a025fce,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:1717411195169402413,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wc47p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a4052b1a-d14e-4679-8c52-6ebf348b3900,},Annotations:map[string]string{io.kubernetes.container.hash: 433006be,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b62b083e9d8cd1f295f9048fe1ad2f9efbcb
b77ae70ee9e952bc18d63662d708,PodSandboxId:76e012f1fc0de5a204e7ecf78b2c36a5483aa77220f0a27666f563997324a38e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1717411174589005104,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-926744,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d0ec6fddebeecbd8ac05ced6d1be357f,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5b548a14c5e6442eac3e36a22be12380807f80f6e2c4c320c60a6
8b4d7002d5a,PodSandboxId:5243863130f20b12212355440498ce6305e444cd5110a0b278640e080ec5eab8,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1717411174603647339,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-926744,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ac084f99407b2fade6f72f20a876eab3,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0ffe7b014e84dd57d7b5b9f2a08c606558cf0cbfb76
ea9854262db7db62378b0,PodSandboxId:3ef4f4f68cc5b4edf84662960e2845c7056338fa8ddf10cfafa77900bce9b860,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1717411174597777121,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-926744,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5c9e7fbd45f6c9334ede7759d0d4e3fe,},Annotations:map[string]string{io.kubernetes.container.hash: ca1efab6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d1b4710df7b696f0a0182430163fe8bc85b767ca0338a1f340a6c3676f9c4b5d,PodSandboxId:8b0af8494513
cf70c01d4d594f12219425b380fc8ff3d9e58506338c03731983,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1717411174338711993,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-926744,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ab8c600eb564693e5b6fd70209818264,},Annotations:map[string]string{io.kubernetes.container.hash: e75f1474,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=55ae467a-aaa9-41f9-8d91-048c4507d1e5 name=/runtime.v1.RuntimeService/Lis
tContainers
	Jun 03 10:48:05 addons-926744 crio[677]: time="2024-06-03 10:48:05.765736890Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=5c53e912-a4d6-41f4-867b-2a17cc4aed6c name=/runtime.v1.RuntimeService/Version
	Jun 03 10:48:05 addons-926744 crio[677]: time="2024-06-03 10:48:05.765807534Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=5c53e912-a4d6-41f4-867b-2a17cc4aed6c name=/runtime.v1.RuntimeService/Version
	Jun 03 10:48:05 addons-926744 crio[677]: time="2024-06-03 10:48:05.767124408Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=e594bea1-8211-4b2c-9f9f-f599691985d3 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 03 10:48:05 addons-926744 crio[677]: time="2024-06-03 10:48:05.768306266Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1717411685768283594,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:584738,},InodesUsed:&UInt64Value{Value:203,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=e594bea1-8211-4b2c-9f9f-f599691985d3 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 03 10:48:05 addons-926744 crio[677]: time="2024-06-03 10:48:05.768892252Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ff20fd13-223e-42ab-91fd-3856206748a6 name=/runtime.v1.RuntimeService/ListContainers
	Jun 03 10:48:05 addons-926744 crio[677]: time="2024-06-03 10:48:05.768943771Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ff20fd13-223e-42ab-91fd-3856206748a6 name=/runtime.v1.RuntimeService/ListContainers
	Jun 03 10:48:05 addons-926744 crio[677]: time="2024-06-03 10:48:05.769353769Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:74b8be293d0ebe7b326246e1997cbb4359f15be3c3d8c483aedad2a18e553f70,PodSandboxId:3525cac8b28dfb7dd9134eafac5800fe9650ace5531d9d16330c90e9745527ff,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:dd1b12fcb60978ac32686ef6732d56f612c8636ef86693c09613946a54c69d79,State:CONTAINER_RUNNING,CreatedAt:1717411490087680332,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-86c47465fc-ksqv6,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 3832537b-81cc-4b24-a14b-af5ebcdbf83d,},Annotations:map[string]string{io.kubernetes.container.hash: 3b3a229a,io.kubernetes.containe
r.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1ad5d525fcd5d35fd513815d573661881e61d333b6223ffbf64accb1140d9f08,PodSandboxId:5c5a819be7c6f1bc4227b11997dc6e1c8612b484ebfe56b8dca3f6ce2d6b5af3,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:059cdcc5de66cd4e588b5f416b98a7af82e75413e2bf275f1e673c5d7d4b1afa,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:70ea0d8cc5300acde42073a2fbc0d28964ddb6e3c31263d92589c2320c3ccba4,State:CONTAINER_RUNNING,CreatedAt:1717411350997643791,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 2491ce04-859e-4df5-a082-1f95450cf4b1,},Annotations:map[string]string{io.kubern
etes.container.hash: 14ce3732,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:042cb7022a28f047a74fe701a2dbf071db18b0b177620077e90cbe0344c9f23f,PodSandboxId:d782ee88808174b9d8e2a596c1b93e97ce9d6304537b537569d54e99e1a50608,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:34d59bf120f98415e3a69401f6636032a0dc39e1dbfcff149c09591de0fad474,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:bd42824d488ce58074f6d54cb051437d0dc2669f3f96a4d9b3b72a8d7ddda679,State:CONTAINER_RUNNING,CreatedAt:1717411339421952638,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-68456f997b-7jxcw,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.
uid: 61e5ce61-19bd-4190-a787-83d69ca4a957,},Annotations:map[string]string{io.kubernetes.container.hash: c01c48c0,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2ca4f0b5927cee02233231241f88745c5e55ce32bb447642834460dc9dc4ddd3,PodSandboxId:d60392e614597f121fdfd72812a5d531145a1ace2b5aa35f9462ec9f3e4a953f,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1717411280607764906,Labels:map[string]string{io.kubernetes.container.name: gcp-a
uth,io.kubernetes.pod.name: gcp-auth-5db96cd9b4-zspc9,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: aa8cd347-96fe-4345-85ba-fa78e3b4f117,},Annotations:map[string]string{io.kubernetes.container.hash: c9b3f2b0,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9712bf2d29de56f3c2dc6a1cf3109331f414452f77e6a0598140b229a1470303,PodSandboxId:af68176d576dcd65559b079e03da774eb06414e37b4f95c7e16053760fcb8a7e,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a24c7c057ec8730aaa152f77366454835a46dc699fcf243698a622788fd48d62,State:C
ONTAINER_RUNNING,CreatedAt:1717411249295285213,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-c59844bb4-gsd5w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 23f016d5-3265-4e2c-abb2-940fc0259aab,},Annotations:map[string]string{io.kubernetes.container.hash: e617ab66,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:47c3ea3f2517eb7d3a8c1d0ed3865f50eb49ba33713308c32297a0cef952c65f,PodSandboxId:779b9dd0801e20a2199f5814bff59a8ebac15c572ff13bf4dc5121fa7fd62608,Metadata:&ContainerMetadata{Name:yakd,Attempt:0,},Image:&ImageSpec{Image:docker.io/marcnuri/yakd@sha256:a3f540278e4c11373e15605311851dd9c64d208f4d63e727bccc0e39f9329310,Annotations:map[string]string{},UserSpecifi
edImage:,RuntimeHandler:,},ImageRef:31de47c733c918d8371361afabd259bfb18f75409c61d94dce8151a83ee615a5,State:CONTAINER_RUNNING,CreatedAt:1717411246837618579,Labels:map[string]string{io.kubernetes.container.name: yakd,io.kubernetes.pod.name: yakd-dashboard-5ddbf7d777-ljsqm,io.kubernetes.pod.namespace: yakd-dashboard,io.kubernetes.pod.uid: 4efec2ba-9b7e-4693-984d-3f075be141e3,},Annotations:map[string]string{io.kubernetes.container.hash: 90cf0271,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:63c5c0dcb78f9c12afffd5a7364774452b71496f55fe04599af158f992fb6cab,PodSandboxId:626e221da353770ee980b3da595cd5e319bb40528dc9d1bdc1f761a83a73ac9d,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/lo
cal-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1717411238079768404,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-8d985888d-hkptp,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: f20ebd96-b074-4a16-b696-94a3d971de4b,},Annotations:map[string]string{io.kubernetes.container.hash: 37e016a7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f9e5a5b781a69a3b32df93f84eb0fc18139e277d5ad479870e300572b5f172bf,PodSandboxId:d7d998ba525a63885e34b947e49940ef2b5bbe66d22f4f29d21543519707e398,Metadata:&ContainerMetadata{Name:storage-pr
ovisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1717411200016963811,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6d7d74e2-9171-42f1-8cc1-f1708d0d6470,},Annotations:map[string]string{io.kubernetes.container.hash: 29338b53,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3491475c959d7d0afa4ffb099d959f5d3faf2ec3077b63f3cb85867da842f846,PodSandboxId:687420cda82fff91f2c6c5947d206467a859933aeb08033db3bed8c5130205c0,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Im
age:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1717411197138076414,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-x6wn8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 92e13ca5-45f1-4604-a816-b890269a86e9,},Annotations:map[string]string{io.kubernetes.container.hash: 846d61ca,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3e2
009a9b8f4ff5b5edff04f22d53bb8c740dd0efe3373e23157d24615e24e56,PodSandboxId:b800364548168441ce7d1381dea23d4f26404d124526cf8184c9be3e0a025fce,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:1717411195169402413,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wc47p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a4052b1a-d14e-4679-8c52-6ebf348b3900,},Annotations:map[string]string{io.kubernetes.container.hash: 433006be,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b62b083e9d8cd1f295f9048fe1ad2f9efbcb
b77ae70ee9e952bc18d63662d708,PodSandboxId:76e012f1fc0de5a204e7ecf78b2c36a5483aa77220f0a27666f563997324a38e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1717411174589005104,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-926744,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d0ec6fddebeecbd8ac05ced6d1be357f,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5b548a14c5e6442eac3e36a22be12380807f80f6e2c4c320c60a6
8b4d7002d5a,PodSandboxId:5243863130f20b12212355440498ce6305e444cd5110a0b278640e080ec5eab8,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1717411174603647339,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-926744,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ac084f99407b2fade6f72f20a876eab3,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0ffe7b014e84dd57d7b5b9f2a08c606558cf0cbfb76
ea9854262db7db62378b0,PodSandboxId:3ef4f4f68cc5b4edf84662960e2845c7056338fa8ddf10cfafa77900bce9b860,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1717411174597777121,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-926744,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5c9e7fbd45f6c9334ede7759d0d4e3fe,},Annotations:map[string]string{io.kubernetes.container.hash: ca1efab6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d1b4710df7b696f0a0182430163fe8bc85b767ca0338a1f340a6c3676f9c4b5d,PodSandboxId:8b0af8494513
cf70c01d4d594f12219425b380fc8ff3d9e58506338c03731983,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1717411174338711993,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-926744,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ab8c600eb564693e5b6fd70209818264,},Annotations:map[string]string{io.kubernetes.container.hash: e75f1474,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=ff20fd13-223e-42ab-91fd-3856206748a6 name=/runtime.v1.RuntimeService/Lis
tContainers
	Jun 03 10:48:05 addons-926744 crio[677]: time="2024-06-03 10:48:05.809733050Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=0f5af54d-e809-480f-a6cc-a73cb31d1181 name=/runtime.v1.RuntimeService/Version
	Jun 03 10:48:05 addons-926744 crio[677]: time="2024-06-03 10:48:05.809795403Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=0f5af54d-e809-480f-a6cc-a73cb31d1181 name=/runtime.v1.RuntimeService/Version
	Jun 03 10:48:05 addons-926744 crio[677]: time="2024-06-03 10:48:05.811273025Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=ce284d62-723e-4731-af14-834798ef2a95 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 03 10:48:05 addons-926744 crio[677]: time="2024-06-03 10:48:05.812618339Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1717411685812588047,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:584738,},InodesUsed:&UInt64Value{Value:203,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=ce284d62-723e-4731-af14-834798ef2a95 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 03 10:48:05 addons-926744 crio[677]: time="2024-06-03 10:48:05.813504708Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=cbdb47a4-9423-4a78-8983-2b5ad235577d name=/runtime.v1.RuntimeService/ListContainers
	Jun 03 10:48:05 addons-926744 crio[677]: time="2024-06-03 10:48:05.813574564Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=cbdb47a4-9423-4a78-8983-2b5ad235577d name=/runtime.v1.RuntimeService/ListContainers
	Jun 03 10:48:05 addons-926744 crio[677]: time="2024-06-03 10:48:05.813855930Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:74b8be293d0ebe7b326246e1997cbb4359f15be3c3d8c483aedad2a18e553f70,PodSandboxId:3525cac8b28dfb7dd9134eafac5800fe9650ace5531d9d16330c90e9745527ff,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:dd1b12fcb60978ac32686ef6732d56f612c8636ef86693c09613946a54c69d79,State:CONTAINER_RUNNING,CreatedAt:1717411490087680332,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-86c47465fc-ksqv6,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 3832537b-81cc-4b24-a14b-af5ebcdbf83d,},Annotations:map[string]string{io.kubernetes.container.hash: 3b3a229a,io.kubernetes.containe
r.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1ad5d525fcd5d35fd513815d573661881e61d333b6223ffbf64accb1140d9f08,PodSandboxId:5c5a819be7c6f1bc4227b11997dc6e1c8612b484ebfe56b8dca3f6ce2d6b5af3,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:059cdcc5de66cd4e588b5f416b98a7af82e75413e2bf275f1e673c5d7d4b1afa,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:70ea0d8cc5300acde42073a2fbc0d28964ddb6e3c31263d92589c2320c3ccba4,State:CONTAINER_RUNNING,CreatedAt:1717411350997643791,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 2491ce04-859e-4df5-a082-1f95450cf4b1,},Annotations:map[string]string{io.kubern
etes.container.hash: 14ce3732,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:042cb7022a28f047a74fe701a2dbf071db18b0b177620077e90cbe0344c9f23f,PodSandboxId:d782ee88808174b9d8e2a596c1b93e97ce9d6304537b537569d54e99e1a50608,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:34d59bf120f98415e3a69401f6636032a0dc39e1dbfcff149c09591de0fad474,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:bd42824d488ce58074f6d54cb051437d0dc2669f3f96a4d9b3b72a8d7ddda679,State:CONTAINER_RUNNING,CreatedAt:1717411339421952638,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-68456f997b-7jxcw,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.
uid: 61e5ce61-19bd-4190-a787-83d69ca4a957,},Annotations:map[string]string{io.kubernetes.container.hash: c01c48c0,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2ca4f0b5927cee02233231241f88745c5e55ce32bb447642834460dc9dc4ddd3,PodSandboxId:d60392e614597f121fdfd72812a5d531145a1ace2b5aa35f9462ec9f3e4a953f,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1717411280607764906,Labels:map[string]string{io.kubernetes.container.name: gcp-a
uth,io.kubernetes.pod.name: gcp-auth-5db96cd9b4-zspc9,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: aa8cd347-96fe-4345-85ba-fa78e3b4f117,},Annotations:map[string]string{io.kubernetes.container.hash: c9b3f2b0,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9712bf2d29de56f3c2dc6a1cf3109331f414452f77e6a0598140b229a1470303,PodSandboxId:af68176d576dcd65559b079e03da774eb06414e37b4f95c7e16053760fcb8a7e,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a24c7c057ec8730aaa152f77366454835a46dc699fcf243698a622788fd48d62,State:C
ONTAINER_RUNNING,CreatedAt:1717411249295285213,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-c59844bb4-gsd5w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 23f016d5-3265-4e2c-abb2-940fc0259aab,},Annotations:map[string]string{io.kubernetes.container.hash: e617ab66,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:47c3ea3f2517eb7d3a8c1d0ed3865f50eb49ba33713308c32297a0cef952c65f,PodSandboxId:779b9dd0801e20a2199f5814bff59a8ebac15c572ff13bf4dc5121fa7fd62608,Metadata:&ContainerMetadata{Name:yakd,Attempt:0,},Image:&ImageSpec{Image:docker.io/marcnuri/yakd@sha256:a3f540278e4c11373e15605311851dd9c64d208f4d63e727bccc0e39f9329310,Annotations:map[string]string{},UserSpecifi
edImage:,RuntimeHandler:,},ImageRef:31de47c733c918d8371361afabd259bfb18f75409c61d94dce8151a83ee615a5,State:CONTAINER_RUNNING,CreatedAt:1717411246837618579,Labels:map[string]string{io.kubernetes.container.name: yakd,io.kubernetes.pod.name: yakd-dashboard-5ddbf7d777-ljsqm,io.kubernetes.pod.namespace: yakd-dashboard,io.kubernetes.pod.uid: 4efec2ba-9b7e-4693-984d-3f075be141e3,},Annotations:map[string]string{io.kubernetes.container.hash: 90cf0271,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:63c5c0dcb78f9c12afffd5a7364774452b71496f55fe04599af158f992fb6cab,PodSandboxId:626e221da353770ee980b3da595cd5e319bb40528dc9d1bdc1f761a83a73ac9d,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/lo
cal-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1717411238079768404,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-8d985888d-hkptp,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: f20ebd96-b074-4a16-b696-94a3d971de4b,},Annotations:map[string]string{io.kubernetes.container.hash: 37e016a7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f9e5a5b781a69a3b32df93f84eb0fc18139e277d5ad479870e300572b5f172bf,PodSandboxId:d7d998ba525a63885e34b947e49940ef2b5bbe66d22f4f29d21543519707e398,Metadata:&ContainerMetadata{Name:storage-pr
ovisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1717411200016963811,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6d7d74e2-9171-42f1-8cc1-f1708d0d6470,},Annotations:map[string]string{io.kubernetes.container.hash: 29338b53,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3491475c959d7d0afa4ffb099d959f5d3faf2ec3077b63f3cb85867da842f846,PodSandboxId:687420cda82fff91f2c6c5947d206467a859933aeb08033db3bed8c5130205c0,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Im
age:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1717411197138076414,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-x6wn8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 92e13ca5-45f1-4604-a816-b890269a86e9,},Annotations:map[string]string{io.kubernetes.container.hash: 846d61ca,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3e2
009a9b8f4ff5b5edff04f22d53bb8c740dd0efe3373e23157d24615e24e56,PodSandboxId:b800364548168441ce7d1381dea23d4f26404d124526cf8184c9be3e0a025fce,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:1717411195169402413,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wc47p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a4052b1a-d14e-4679-8c52-6ebf348b3900,},Annotations:map[string]string{io.kubernetes.container.hash: 433006be,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b62b083e9d8cd1f295f9048fe1ad2f9efbcb
b77ae70ee9e952bc18d63662d708,PodSandboxId:76e012f1fc0de5a204e7ecf78b2c36a5483aa77220f0a27666f563997324a38e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1717411174589005104,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-926744,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d0ec6fddebeecbd8ac05ced6d1be357f,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5b548a14c5e6442eac3e36a22be12380807f80f6e2c4c320c60a6
8b4d7002d5a,PodSandboxId:5243863130f20b12212355440498ce6305e444cd5110a0b278640e080ec5eab8,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1717411174603647339,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-926744,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ac084f99407b2fade6f72f20a876eab3,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0ffe7b014e84dd57d7b5b9f2a08c606558cf0cbfb76
ea9854262db7db62378b0,PodSandboxId:3ef4f4f68cc5b4edf84662960e2845c7056338fa8ddf10cfafa77900bce9b860,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1717411174597777121,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-926744,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5c9e7fbd45f6c9334ede7759d0d4e3fe,},Annotations:map[string]string{io.kubernetes.container.hash: ca1efab6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d1b4710df7b696f0a0182430163fe8bc85b767ca0338a1f340a6c3676f9c4b5d,PodSandboxId:8b0af8494513
cf70c01d4d594f12219425b380fc8ff3d9e58506338c03731983,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1717411174338711993,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-926744,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ab8c600eb564693e5b6fd70209818264,},Annotations:map[string]string{io.kubernetes.container.hash: e75f1474,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=cbdb47a4-9423-4a78-8983-2b5ad235577d name=/runtime.v1.RuntimeService/Lis
tContainers
	Jun 03 10:48:05 addons-926744 crio[677]: time="2024-06-03 10:48:05.837004129Z" level=debug msg="Event: WRITE         \"/var/run/crio/exits/9712bf2d29de56f3c2dc6a1cf3109331f414452f77e6a0598140b229a1470303.DH5AO2\"" file="server/server.go:805"
	Jun 03 10:48:05 addons-926744 crio[677]: time="2024-06-03 10:48:05.837331181Z" level=debug msg="Event: CREATE        \"/var/run/crio/exits/9712bf2d29de56f3c2dc6a1cf3109331f414452f77e6a0598140b229a1470303.DH5AO2\"" file="server/server.go:805"
	Jun 03 10:48:05 addons-926744 crio[677]: time="2024-06-03 10:48:05.840842845Z" level=debug msg="Container or sandbox exited: 9712bf2d29de56f3c2dc6a1cf3109331f414452f77e6a0598140b229a1470303.DH5AO2" file="server/server.go:810"
	Jun 03 10:48:05 addons-926744 crio[677]: time="2024-06-03 10:48:05.837353742Z" level=debug msg="Event: CREATE        \"/var/run/crio/exits/9712bf2d29de56f3c2dc6a1cf3109331f414452f77e6a0598140b229a1470303\"" file="server/server.go:805"
	Jun 03 10:48:05 addons-926744 crio[677]: time="2024-06-03 10:48:05.840886516Z" level=debug msg="Container or sandbox exited: 9712bf2d29de56f3c2dc6a1cf3109331f414452f77e6a0598140b229a1470303" file="server/server.go:810"
	Jun 03 10:48:05 addons-926744 crio[677]: time="2024-06-03 10:48:05.840906083Z" level=debug msg="container exited and found: 9712bf2d29de56f3c2dc6a1cf3109331f414452f77e6a0598140b229a1470303" file="server/server.go:825"
	Jun 03 10:48:05 addons-926744 crio[677]: time="2024-06-03 10:48:05.837358973Z" level=debug msg="Event: RENAME        \"/var/run/crio/exits/9712bf2d29de56f3c2dc6a1cf3109331f414452f77e6a0598140b229a1470303.DH5AO2\"" file="server/server.go:805"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                   CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	74b8be293d0eb       gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7                 3 minutes ago       Running             hello-world-app           0                   3525cac8b28df       hello-world-app-86c47465fc-ksqv6
	1ad5d525fcd5d       docker.io/library/nginx@sha256:059cdcc5de66cd4e588b5f416b98a7af82e75413e2bf275f1e673c5d7d4b1afa                         5 minutes ago       Running             nginx                     0                   5c5a819be7c6f       nginx
	042cb7022a28f       ghcr.io/headlamp-k8s/headlamp@sha256:34d59bf120f98415e3a69401f6636032a0dc39e1dbfcff149c09591de0fad474                   5 minutes ago       Running             headlamp                  0                   d782ee8880817       headlamp-68456f997b-7jxcw
	2ca4f0b5927ce       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b            6 minutes ago       Running             gcp-auth                  0                   d60392e614597       gcp-auth-5db96cd9b4-zspc9
	9712bf2d29de5       registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872   7 minutes ago       Running             metrics-server            0                   af68176d576dc       metrics-server-c59844bb4-gsd5w
	47c3ea3f2517e       docker.io/marcnuri/yakd@sha256:a3f540278e4c11373e15605311851dd9c64d208f4d63e727bccc0e39f9329310                         7 minutes ago       Running             yakd                      0                   779b9dd0801e2       yakd-dashboard-5ddbf7d777-ljsqm
	63c5c0dcb78f9       docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef        7 minutes ago       Running             local-path-provisioner    0                   626e221da3537       local-path-provisioner-8d985888d-hkptp
	f9e5a5b781a69       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                        8 minutes ago       Running             storage-provisioner       0                   d7d998ba525a6       storage-provisioner
	3491475c959d7       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                                        8 minutes ago       Running             coredns                   0                   687420cda82ff       coredns-7db6d8ff4d-x6wn8
	3e2009a9b8f4f       747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd                                                        8 minutes ago       Running             kube-proxy                0                   b800364548168       kube-proxy-wc47p
	5b548a14c5e64       25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c                                                        8 minutes ago       Running             kube-controller-manager   0                   5243863130f20       kube-controller-manager-addons-926744
	0ffe7b014e84d       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                                        8 minutes ago       Running             etcd                      0                   3ef4f4f68cc5b       etcd-addons-926744
	b62b083e9d8cd       a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035                                                        8 minutes ago       Running             kube-scheduler            0                   76e012f1fc0de       kube-scheduler-addons-926744
	d1b4710df7b69       91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a                                                        8 minutes ago       Running             kube-apiserver            0                   8b0af8494513c       kube-apiserver-addons-926744
	
	
	==> coredns [3491475c959d7d0afa4ffb099d959f5d3faf2ec3077b63f3cb85867da842f846] <==
	[INFO] 10.244.0.8:46405 - 2599 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.00069742s
	[INFO] 10.244.0.8:59632 - 55811 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.00008446s
	[INFO] 10.244.0.8:59632 - 16389 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000046278s
	[INFO] 10.244.0.8:41084 - 40728 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000103929s
	[INFO] 10.244.0.8:41084 - 25369 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000054601s
	[INFO] 10.244.0.8:51579 - 59806 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000097293s
	[INFO] 10.244.0.8:51579 - 39320 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000053938s
	[INFO] 10.244.0.8:47113 - 63010 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000072883s
	[INFO] 10.244.0.8:47113 - 38438 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000040249s
	[INFO] 10.244.0.8:44584 - 27734 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000082578s
	[INFO] 10.244.0.8:44584 - 3416 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000073322s
	[INFO] 10.244.0.8:48219 - 62114 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000030171s
	[INFO] 10.244.0.8:48219 - 38572 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000026203s
	[INFO] 10.244.0.8:41993 - 56150 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000029028s
	[INFO] 10.244.0.8:41993 - 17496 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000031722s
	[INFO] 10.244.0.22:43589 - 37743 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000220325s
	[INFO] 10.244.0.22:55505 - 8939 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000181994s
	[INFO] 10.244.0.22:51042 - 65201 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000159073s
	[INFO] 10.244.0.22:57129 - 19016 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000159411s
	[INFO] 10.244.0.22:39124 - 18032 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000090881s
	[INFO] 10.244.0.22:56252 - 24240 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000067047s
	[INFO] 10.244.0.22:51039 - 44305 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 496 0.000476548s
	[INFO] 10.244.0.22:38956 - 52100 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.000605244s
	[INFO] 10.244.0.25:60113 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.00157132s
	[INFO] 10.244.0.25:51687 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000097573s
	
	
	==> describe nodes <==
	Name:               addons-926744
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-926744
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=599070631c2216ebc936292d491e4fe10e15b9d8
	                    minikube.k8s.io/name=addons-926744
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_06_03T10_39_40_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-926744
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 03 Jun 2024 10:39:37 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-926744
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 03 Jun 2024 10:47:58 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 03 Jun 2024 10:45:16 +0000   Mon, 03 Jun 2024 10:39:35 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 03 Jun 2024 10:45:16 +0000   Mon, 03 Jun 2024 10:39:35 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 03 Jun 2024 10:45:16 +0000   Mon, 03 Jun 2024 10:39:35 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 03 Jun 2024 10:45:16 +0000   Mon, 03 Jun 2024 10:39:40 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.188
	  Hostname:    addons-926744
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	System Info:
	  Machine ID:                 6c36fa0042ae4fdaaa827e1bb0dda654
	  System UUID:                6c36fa00-42ae-4fda-aa82-7e1bb0dda654
	  Boot ID:                    76ff8c64-2020-47c1-945c-0f6fed458973
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.1
	  Kube-Proxy Version:         v1.30.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (13 in total)
	  Namespace                   Name                                      CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                      ------------  ----------  ---------------  -------------  ---
	  default                     hello-world-app-86c47465fc-ksqv6          0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m20s
	  default                     nginx                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m40s
	  gcp-auth                    gcp-auth-5db96cd9b4-zspc9                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m1s
	  headlamp                    headlamp-68456f997b-7jxcw                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m54s
	  kube-system                 coredns-7db6d8ff4d-x6wn8                  100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     8m13s
	  kube-system                 etcd-addons-926744                        100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         8m27s
	  kube-system                 kube-apiserver-addons-926744              250m (12%)    0 (0%)      0 (0%)           0 (0%)         8m27s
	  kube-system                 kube-controller-manager-addons-926744     200m (10%)    0 (0%)      0 (0%)           0 (0%)         8m27s
	  kube-system                 kube-proxy-wc47p                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m14s
	  kube-system                 kube-scheduler-addons-926744              100m (5%)     0 (0%)      0 (0%)           0 (0%)         8m27s
	  kube-system                 storage-provisioner                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m9s
	  local-path-storage          local-path-provisioner-8d985888d-hkptp    0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m8s
	  yakd-dashboard              yakd-dashboard-5ddbf7d777-ljsqm           0 (0%)        0 (0%)      128Mi (3%)       256Mi (6%)     8m7s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             298Mi (7%)  426Mi (11%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 8m8s   kube-proxy       
	  Normal  Starting                 8m27s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  8m27s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  8m27s  kubelet          Node addons-926744 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8m27s  kubelet          Node addons-926744 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8m27s  kubelet          Node addons-926744 status is now: NodeHasSufficientPID
	  Normal  NodeReady                8m26s  kubelet          Node addons-926744 status is now: NodeReady
	  Normal  RegisteredNode           8m14s  node-controller  Node addons-926744 event: Registered Node addons-926744 in Controller
	
	
	==> dmesg <==
	[Jun 3 10:40] kauditd_printk_skb: 117 callbacks suppressed
	[  +6.699088] kauditd_printk_skb: 90 callbacks suppressed
	[ +10.962568] kauditd_printk_skb: 5 callbacks suppressed
	[ +10.875540] kauditd_printk_skb: 2 callbacks suppressed
	[  +6.181729] kauditd_printk_skb: 27 callbacks suppressed
	[  +8.770466] kauditd_printk_skb: 2 callbacks suppressed
	[ +11.146041] kauditd_printk_skb: 9 callbacks suppressed
	[Jun 3 10:41] kauditd_printk_skb: 29 callbacks suppressed
	[  +5.032288] kauditd_printk_skb: 56 callbacks suppressed
	[  +7.415865] kauditd_printk_skb: 28 callbacks suppressed
	[  +5.561939] kauditd_printk_skb: 11 callbacks suppressed
	[ +36.747947] kauditd_printk_skb: 45 callbacks suppressed
	[Jun 3 10:42] kauditd_printk_skb: 26 callbacks suppressed
	[  +6.402673] kauditd_printk_skb: 6 callbacks suppressed
	[  +5.676627] kauditd_printk_skb: 28 callbacks suppressed
	[  +6.101924] kauditd_printk_skb: 58 callbacks suppressed
	[  +7.081833] kauditd_printk_skb: 11 callbacks suppressed
	[  +5.193017] kauditd_printk_skb: 15 callbacks suppressed
	[  +6.039997] kauditd_printk_skb: 15 callbacks suppressed
	[  +8.121660] kauditd_printk_skb: 23 callbacks suppressed
	[Jun 3 10:43] kauditd_printk_skb: 2 callbacks suppressed
	[ +24.291568] kauditd_printk_skb: 7 callbacks suppressed
	[  +9.225720] kauditd_printk_skb: 33 callbacks suppressed
	[Jun 3 10:44] kauditd_printk_skb: 6 callbacks suppressed
	[  +5.901298] kauditd_printk_skb: 21 callbacks suppressed
	
	
	==> etcd [0ffe7b014e84dd57d7b5b9f2a08c606558cf0cbfb76ea9854262db7db62378b0] <==
	{"level":"warn","ts":"2024-06-03T10:41:25.948571Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"136.417871ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/\" range_end:\"/registry/pods/kube-system0\" ","response":"range_response_count:18 size:85553"}
	{"level":"info","ts":"2024-06-03T10:41:25.948696Z","caller":"traceutil/trace.go:171","msg":"trace[795847321] range","detail":"{range_begin:/registry/pods/kube-system/; range_end:/registry/pods/kube-system0; response_count:18; response_revision:1178; }","duration":"136.798984ms","start":"2024-06-03T10:41:25.811888Z","end":"2024-06-03T10:41:25.948687Z","steps":["trace[795847321] 'agreement among raft nodes before linearized reading'  (duration: 131.11305ms)"],"step_count":1}
	{"level":"warn","ts":"2024-06-03T10:42:19.234173Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"171.227279ms","expected-duration":"100ms","prefix":"","request":"header:<ID:3735894073187871367 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/masterleases/192.168.39.188\" mod_revision:1274 > success:<request_put:<key:\"/registry/masterleases/192.168.39.188\" value_size:67 lease:3735894073187871365 >> failure:<request_range:<key:\"/registry/masterleases/192.168.39.188\" > >>","response":"size:16"}
	{"level":"info","ts":"2024-06-03T10:42:19.23434Z","caller":"traceutil/trace.go:171","msg":"trace[55051779] linearizableReadLoop","detail":"{readStateIndex:1371; appliedIndex:1370; }","duration":"361.367807ms","start":"2024-06-03T10:42:18.872959Z","end":"2024-06-03T10:42:19.234327Z","steps":["trace[55051779] 'read index received'  (duration: 186.25985ms)","trace[55051779] 'applied index is now lower than readState.Index'  (duration: 175.106743ms)"],"step_count":2}
	{"level":"info","ts":"2024-06-03T10:42:19.234413Z","caller":"traceutil/trace.go:171","msg":"trace[383761922] transaction","detail":"{read_only:false; response_revision:1322; number_of_response:1; }","duration":"478.029051ms","start":"2024-06-03T10:42:18.756377Z","end":"2024-06-03T10:42:19.234406Z","steps":["trace[383761922] 'process raft request'  (duration: 302.879922ms)","trace[383761922] 'compare'  (duration: 171.101588ms)"],"step_count":2}
	{"level":"warn","ts":"2024-06-03T10:42:19.234449Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-06-03T10:42:18.756356Z","time spent":"478.072045ms","remote":"127.0.0.1:38444","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":120,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/masterleases/192.168.39.188\" mod_revision:1274 > success:<request_put:<key:\"/registry/masterleases/192.168.39.188\" value_size:67 lease:3735894073187871365 >> failure:<request_range:<key:\"/registry/masterleases/192.168.39.188\" > >"}
	{"level":"warn","ts":"2024-06-03T10:42:19.234893Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"361.918572ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/endpointslices/\" range_end:\"/registry/endpointslices0\" count_only:true ","response":"range_response_count:0 size:7"}
	{"level":"info","ts":"2024-06-03T10:42:19.234938Z","caller":"traceutil/trace.go:171","msg":"trace[1147656126] range","detail":"{range_begin:/registry/endpointslices/; range_end:/registry/endpointslices0; response_count:0; response_revision:1322; }","duration":"362.019511ms","start":"2024-06-03T10:42:18.872911Z","end":"2024-06-03T10:42:19.234931Z","steps":["trace[1147656126] 'agreement among raft nodes before linearized reading'  (duration: 361.789897ms)"],"step_count":1}
	{"level":"warn","ts":"2024-06-03T10:42:19.234963Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-06-03T10:42:18.872898Z","time spent":"362.058529ms","remote":"127.0.0.1:38664","response type":"/etcdserverpb.KV/Range","request count":0,"request size":56,"response count":13,"response size":29,"request content":"key:\"/registry/endpointslices/\" range_end:\"/registry/endpointslices0\" count_only:true "}
	{"level":"warn","ts":"2024-06-03T10:42:19.235146Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"330.801009ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/headlamp/\" range_end:\"/registry/pods/headlamp0\" ","response":"range_response_count:1 size:3966"}
	{"level":"info","ts":"2024-06-03T10:42:19.23526Z","caller":"traceutil/trace.go:171","msg":"trace[1351488970] range","detail":"{range_begin:/registry/pods/headlamp/; range_end:/registry/pods/headlamp0; response_count:1; response_revision:1322; }","duration":"330.938755ms","start":"2024-06-03T10:42:18.90431Z","end":"2024-06-03T10:42:19.235249Z","steps":["trace[1351488970] 'agreement among raft nodes before linearized reading'  (duration: 330.752892ms)"],"step_count":1}
	{"level":"warn","ts":"2024-06-03T10:42:19.237557Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-06-03T10:42:18.904297Z","time spent":"333.249021ms","remote":"127.0.0.1:38586","response type":"/etcdserverpb.KV/Range","request count":0,"request size":52,"response count":1,"response size":3988,"request content":"key:\"/registry/pods/headlamp/\" range_end:\"/registry/pods/headlamp0\" "}
	{"level":"warn","ts":"2024-06-03T10:42:19.237166Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"179.536055ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/\" range_end:\"/registry/pods/kube-system0\" ","response":"range_response_count:19 size:88197"}
	{"level":"info","ts":"2024-06-03T10:42:19.237649Z","caller":"traceutil/trace.go:171","msg":"trace[920582780] range","detail":"{range_begin:/registry/pods/kube-system/; range_end:/registry/pods/kube-system0; response_count:19; response_revision:1322; }","duration":"180.042208ms","start":"2024-06-03T10:42:19.057597Z","end":"2024-06-03T10:42:19.23764Z","steps":["trace[920582780] 'agreement among raft nodes before linearized reading'  (duration: 177.591445ms)"],"step_count":1}
	{"level":"warn","ts":"2024-06-03T10:42:19.237262Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"167.875085ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-06-03T10:42:19.237748Z","caller":"traceutil/trace.go:171","msg":"trace[1755845588] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1322; }","duration":"168.377536ms","start":"2024-06-03T10:42:19.069364Z","end":"2024-06-03T10:42:19.237741Z","steps":["trace[1755845588] 'agreement among raft nodes before linearized reading'  (duration: 167.883806ms)"],"step_count":1}
	{"level":"warn","ts":"2024-06-03T10:42:19.237314Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"192.346201ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" ","response":"range_response_count:1 size:1113"}
	{"level":"info","ts":"2024-06-03T10:42:19.23783Z","caller":"traceutil/trace.go:171","msg":"trace[2031929711] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:1322; }","duration":"192.880008ms","start":"2024-06-03T10:42:19.044942Z","end":"2024-06-03T10:42:19.237822Z","steps":["trace[2031929711] 'agreement among raft nodes before linearized reading'  (duration: 192.331536ms)"],"step_count":1}
	{"level":"warn","ts":"2024-06-03T10:42:19.237452Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"201.967805ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/\" range_end:\"/registry/pods/kube-system0\" ","response":"range_response_count:19 size:88197"}
	{"level":"info","ts":"2024-06-03T10:42:19.237912Z","caller":"traceutil/trace.go:171","msg":"trace[995480990] range","detail":"{range_begin:/registry/pods/kube-system/; range_end:/registry/pods/kube-system0; response_count:19; response_revision:1322; }","duration":"202.476713ms","start":"2024-06-03T10:42:19.03543Z","end":"2024-06-03T10:42:19.237907Z","steps":["trace[995480990] 'agreement among raft nodes before linearized reading'  (duration: 201.893407ms)"],"step_count":1}
	{"level":"info","ts":"2024-06-03T10:43:01.609591Z","caller":"traceutil/trace.go:171","msg":"trace[1466340048] transaction","detail":"{read_only:false; response_revision:1582; number_of_response:1; }","duration":"100.128518ms","start":"2024-06-03T10:43:01.50939Z","end":"2024-06-03T10:43:01.609518Z","steps":["trace[1466340048] 'process raft request'  (duration: 100.001045ms)"],"step_count":1}
	{"level":"info","ts":"2024-06-03T10:43:04.376641Z","caller":"traceutil/trace.go:171","msg":"trace[30401139] linearizableReadLoop","detail":"{readStateIndex:1652; appliedIndex:1651; }","duration":"242.924429ms","start":"2024-06-03T10:43:04.133701Z","end":"2024-06-03T10:43:04.376626Z","steps":["trace[30401139] 'read index received'  (duration: 240.969761ms)","trace[30401139] 'applied index is now lower than readState.Index'  (duration: 1.953384ms)"],"step_count":2}
	{"level":"warn","ts":"2024-06-03T10:43:04.376838Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"243.109231ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/default/\" range_end:\"/registry/pods/default0\" ","response":"range_response_count:2 size:6032"}
	{"level":"info","ts":"2024-06-03T10:43:04.376875Z","caller":"traceutil/trace.go:171","msg":"trace[1584292141] range","detail":"{range_begin:/registry/pods/default/; range_end:/registry/pods/default0; response_count:2; response_revision:1587; }","duration":"243.200249ms","start":"2024-06-03T10:43:04.133668Z","end":"2024-06-03T10:43:04.376869Z","steps":["trace[1584292141] 'agreement among raft nodes before linearized reading'  (duration: 243.033556ms)"],"step_count":1}
	{"level":"info","ts":"2024-06-03T10:43:09.785731Z","caller":"traceutil/trace.go:171","msg":"trace[762528521] transaction","detail":"{read_only:false; response_revision:1597; number_of_response:1; }","duration":"143.161071ms","start":"2024-06-03T10:43:09.642553Z","end":"2024-06-03T10:43:09.785714Z","steps":["trace[762528521] 'process raft request'  (duration: 143.0512ms)"],"step_count":1}
	
	
	==> gcp-auth [2ca4f0b5927cee02233231241f88745c5e55ce32bb447642834460dc9dc4ddd3] <==
	2024/06/03 10:41:20 GCP Auth Webhook started!
	2024/06/03 10:42:12 Ready to marshal response ...
	2024/06/03 10:42:12 Ready to write response ...
	2024/06/03 10:42:12 Ready to marshal response ...
	2024/06/03 10:42:12 Ready to write response ...
	2024/06/03 10:42:12 Ready to marshal response ...
	2024/06/03 10:42:12 Ready to write response ...
	2024/06/03 10:42:17 Ready to marshal response ...
	2024/06/03 10:42:17 Ready to write response ...
	2024/06/03 10:42:22 Ready to marshal response ...
	2024/06/03 10:42:22 Ready to write response ...
	2024/06/03 10:42:26 Ready to marshal response ...
	2024/06/03 10:42:26 Ready to write response ...
	2024/06/03 10:42:37 Ready to marshal response ...
	2024/06/03 10:42:37 Ready to write response ...
	2024/06/03 10:42:37 Ready to marshal response ...
	2024/06/03 10:42:37 Ready to write response ...
	2024/06/03 10:42:50 Ready to marshal response ...
	2024/06/03 10:42:50 Ready to write response ...
	2024/06/03 10:42:57 Ready to marshal response ...
	2024/06/03 10:42:57 Ready to write response ...
	2024/06/03 10:43:21 Ready to marshal response ...
	2024/06/03 10:43:21 Ready to write response ...
	2024/06/03 10:44:46 Ready to marshal response ...
	2024/06/03 10:44:46 Ready to write response ...
	
	
	==> kernel <==
	 10:48:06 up 9 min,  0 users,  load average: 0.18, 0.81, 0.63
	Linux addons-926744 5.10.207 #1 SMP Wed May 22 22:17:16 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [d1b4710df7b696f0a0182430163fe8bc85b767ca0338a1f340a6c3676f9c4b5d] <==
	E0603 10:41:59.335492       1 available_controller.go:460] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.108.152.243:443/apis/metrics.k8s.io/v1beta1: Get "https://10.108.152.243:443/apis/metrics.k8s.io/v1beta1": dial tcp 10.108.152.243:443: connect: connection refused
	I0603 10:41:59.399709       1 handler.go:286] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0603 10:42:12.833769       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.102.220.86"}
	I0603 10:42:19.239008       1 trace.go:236] Trace[1772787794]: "GuaranteedUpdate etcd3" audit-id:,key:/masterleases/192.168.39.188,type:*v1.Endpoints,resource:apiServerIPInfo (03-Jun-2024 10:42:18.727) (total time: 511ms):
	Trace[1772787794]: ---"Txn call completed" 482ms (10:42:19.238)
	Trace[1772787794]: [511.825955ms] [511.825955ms] END
	I0603 10:42:26.202443       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I0603 10:42:26.467009       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.96.99.57"}
	I0603 10:42:32.281625       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W0603 10:42:33.315245       1 cacher.go:168] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I0603 10:43:12.196124       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I0603 10:43:38.252881       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0603 10:43:38.252939       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0603 10:43:38.278762       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0603 10:43:38.278824       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0603 10:43:38.288219       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0603 10:43:38.288544       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0603 10:43:38.302199       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0603 10:43:38.302983       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0603 10:43:38.324138       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0603 10:43:38.324181       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0603 10:43:39.301750       1 cacher.go:168] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0603 10:43:39.324780       1 cacher.go:168] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W0603 10:43:39.328496       1 cacher.go:168] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I0603 10:44:46.434734       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.97.179.199"}
	
	
	==> kube-controller-manager [5b548a14c5e6442eac3e36a22be12380807f80f6e2c4c320c60a68b4d7002d5a] <==
	W0603 10:46:03.804800       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0603 10:46:03.804829       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0603 10:46:12.550385       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0603 10:46:12.550451       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0603 10:46:22.134934       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0603 10:46:22.134999       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0603 10:46:44.894604       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0603 10:46:44.894686       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0603 10:46:52.655848       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0603 10:46:52.655946       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0603 10:46:53.472999       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0603 10:46:53.473076       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0603 10:46:57.059791       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0603 10:46:57.059879       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0603 10:47:30.209675       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0603 10:47:30.209780       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0603 10:47:34.556681       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0603 10:47:34.556712       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0603 10:47:36.947682       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0603 10:47:36.947727       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0603 10:47:42.953391       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0603 10:47:42.953492       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0603 10:48:04.753482       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-c59844bb4" duration="8.749µs"
	W0603 10:48:05.521877       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0603 10:48:05.521911       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	
	
	==> kube-proxy [3e2009a9b8f4ff5b5edff04f22d53bb8c740dd0efe3373e23157d24615e24e56] <==
	I0603 10:39:57.159594       1 server_linux.go:69] "Using iptables proxy"
	I0603 10:39:57.218281       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.188"]
	I0603 10:39:57.342906       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0603 10:39:57.342963       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0603 10:39:57.342982       1 server_linux.go:165] "Using iptables Proxier"
	I0603 10:39:57.346598       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0603 10:39:57.346751       1 server.go:872] "Version info" version="v1.30.1"
	I0603 10:39:57.346782       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0603 10:39:57.348476       1 config.go:192] "Starting service config controller"
	I0603 10:39:57.348510       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0603 10:39:57.348529       1 config.go:101] "Starting endpoint slice config controller"
	I0603 10:39:57.348533       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0603 10:39:57.348925       1 config.go:319] "Starting node config controller"
	I0603 10:39:57.348949       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0603 10:39:57.449606       1 shared_informer.go:320] Caches are synced for node config
	I0603 10:39:57.449650       1 shared_informer.go:320] Caches are synced for service config
	I0603 10:39:57.449676       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [b62b083e9d8cd1f295f9048fe1ad2f9efbcbb77ae70ee9e952bc18d63662d708] <==
	W0603 10:39:37.138101       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0603 10:39:37.138131       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0603 10:39:37.970105       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0603 10:39:37.970216       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0603 10:39:37.989275       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0603 10:39:37.989368       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0603 10:39:38.013775       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0603 10:39:38.013860       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0603 10:39:38.045268       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0603 10:39:38.045425       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0603 10:39:38.057787       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0603 10:39:38.057830       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0603 10:39:38.211494       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0603 10:39:38.211539       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0603 10:39:38.233824       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0603 10:39:38.233848       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0603 10:39:38.235427       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0603 10:39:38.235464       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0603 10:39:38.314538       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0603 10:39:38.314656       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0603 10:39:38.419256       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0603 10:39:38.419305       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0603 10:39:38.431515       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0603 10:39:38.431560       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	I0603 10:39:41.011852       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jun 03 10:44:53 addons-926744 kubelet[1270]: I0603 10:44:53.462125    1270 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3a014780-a43d-46a5-9cca-7929e5385a64" path="/var/lib/kubelet/pods/3a014780-a43d-46a5-9cca-7929e5385a64/volumes"
	Jun 03 10:45:39 addons-926744 kubelet[1270]: E0603 10:45:39.470749    1270 iptables.go:577] "Could not set up iptables canary" err=<
	Jun 03 10:45:39 addons-926744 kubelet[1270]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jun 03 10:45:39 addons-926744 kubelet[1270]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jun 03 10:45:39 addons-926744 kubelet[1270]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 03 10:45:39 addons-926744 kubelet[1270]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jun 03 10:45:39 addons-926744 kubelet[1270]: I0603 10:45:39.996233    1270 scope.go:117] "RemoveContainer" containerID="b59b71a1d26052c219e71e24e321e98f0ac5b95a20562143df8a91fb69e2eeb2"
	Jun 03 10:45:40 addons-926744 kubelet[1270]: I0603 10:45:40.015567    1270 scope.go:117] "RemoveContainer" containerID="6587ac85191b0c6c3ffb405d2196476db864e96635b8871d5c9f8dcca04fe28c"
	Jun 03 10:46:39 addons-926744 kubelet[1270]: E0603 10:46:39.471666    1270 iptables.go:577] "Could not set up iptables canary" err=<
	Jun 03 10:46:39 addons-926744 kubelet[1270]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jun 03 10:46:39 addons-926744 kubelet[1270]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jun 03 10:46:39 addons-926744 kubelet[1270]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 03 10:46:39 addons-926744 kubelet[1270]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jun 03 10:47:39 addons-926744 kubelet[1270]: E0603 10:47:39.470524    1270 iptables.go:577] "Could not set up iptables canary" err=<
	Jun 03 10:47:39 addons-926744 kubelet[1270]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jun 03 10:47:39 addons-926744 kubelet[1270]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jun 03 10:47:39 addons-926744 kubelet[1270]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 03 10:47:39 addons-926744 kubelet[1270]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jun 03 10:48:04 addons-926744 kubelet[1270]: I0603 10:48:04.774943    1270 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/hello-world-app-86c47465fc-ksqv6" podStartSLOduration=195.542960456 podStartE2EDuration="3m18.774909702s" podCreationTimestamp="2024-06-03 10:44:46 +0000 UTC" firstStartedPulling="2024-06-03 10:44:46.841688099 +0000 UTC m=+307.496545527" lastFinishedPulling="2024-06-03 10:44:50.073637347 +0000 UTC m=+310.728494773" observedRunningTime="2024-06-03 10:44:50.231502582 +0000 UTC m=+310.886360029" watchObservedRunningTime="2024-06-03 10:48:04.774909702 +0000 UTC m=+505.429767149"
	Jun 03 10:48:06 addons-926744 kubelet[1270]: I0603 10:48:06.142500    1270 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/23f016d5-3265-4e2c-abb2-940fc0259aab-tmp-dir\") pod \"23f016d5-3265-4e2c-abb2-940fc0259aab\" (UID: \"23f016d5-3265-4e2c-abb2-940fc0259aab\") "
	Jun 03 10:48:06 addons-926744 kubelet[1270]: I0603 10:48:06.142541    1270 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nwdcb\" (UniqueName: \"kubernetes.io/projected/23f016d5-3265-4e2c-abb2-940fc0259aab-kube-api-access-nwdcb\") pod \"23f016d5-3265-4e2c-abb2-940fc0259aab\" (UID: \"23f016d5-3265-4e2c-abb2-940fc0259aab\") "
	Jun 03 10:48:06 addons-926744 kubelet[1270]: I0603 10:48:06.143251    1270 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/23f016d5-3265-4e2c-abb2-940fc0259aab-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "23f016d5-3265-4e2c-abb2-940fc0259aab" (UID: "23f016d5-3265-4e2c-abb2-940fc0259aab"). InnerVolumeSpecName "tmp-dir". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
	Jun 03 10:48:06 addons-926744 kubelet[1270]: I0603 10:48:06.146247    1270 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/23f016d5-3265-4e2c-abb2-940fc0259aab-kube-api-access-nwdcb" (OuterVolumeSpecName: "kube-api-access-nwdcb") pod "23f016d5-3265-4e2c-abb2-940fc0259aab" (UID: "23f016d5-3265-4e2c-abb2-940fc0259aab"). InnerVolumeSpecName "kube-api-access-nwdcb". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Jun 03 10:48:06 addons-926744 kubelet[1270]: I0603 10:48:06.243181    1270 reconciler_common.go:289] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/23f016d5-3265-4e2c-abb2-940fc0259aab-tmp-dir\") on node \"addons-926744\" DevicePath \"\""
	Jun 03 10:48:06 addons-926744 kubelet[1270]: I0603 10:48:06.243333    1270 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-nwdcb\" (UniqueName: \"kubernetes.io/projected/23f016d5-3265-4e2c-abb2-940fc0259aab-kube-api-access-nwdcb\") on node \"addons-926744\" DevicePath \"\""
	
	
	==> storage-provisioner [f9e5a5b781a69a3b32df93f84eb0fc18139e277d5ad479870e300572b5f172bf] <==
	I0603 10:40:00.886486       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0603 10:40:00.957891       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0603 10:40:00.957949       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0603 10:40:01.007501       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0603 10:40:01.010283       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-926744_be180db3-27f5-4c78-9b94-ecce56b7f69d!
	I0603 10:40:01.011519       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"b4090779-c75e-41e7-abc0-7ccc633724ea", APIVersion:"v1", ResourceVersion:"653", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-926744_be180db3-27f5-4c78-9b94-ecce56b7f69d became leader
	I0603 10:40:01.110809       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-926744_be180db3-27f5-4c78-9b94-ecce56b7f69d!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-926744 -n addons-926744
helpers_test.go:261: (dbg) Run:  kubectl --context addons-926744 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-c59844bb4-gsd5w
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/MetricsServer]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-926744 describe pod metrics-server-c59844bb4-gsd5w
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context addons-926744 describe pod metrics-server-c59844bb4-gsd5w: exit status 1 (60.609928ms)

** stderr ** 
	Error from server (NotFound): pods "metrics-server-c59844bb4-gsd5w" not found

** /stderr **
helpers_test.go:279: kubectl --context addons-926744 describe pod metrics-server-c59844bb4-gsd5w: exit status 1
--- FAIL: TestAddons/parallel/MetricsServer (349.03s)

x
+
TestAddons/StoppedEnableDisable (154.34s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:174: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-926744
addons_test.go:174: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p addons-926744: exit status 82 (2m0.457646636s)

-- stdout --
	* Stopping node "addons-926744"  ...
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
addons_test.go:176: failed to stop minikube. args "out/minikube-linux-amd64 stop -p addons-926744" : exit status 82
addons_test.go:178: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-926744
addons_test.go:178: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-926744: exit status 11 (21.592134937s)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.188:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
addons_test.go:180: failed to enable dashboard addon: args "out/minikube-linux-amd64 addons enable dashboard -p addons-926744" : exit status 11
addons_test.go:182: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-926744
addons_test.go:182: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-926744: exit status 11 (6.144503822s)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.188:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_7b2045b3edf32de99b3c34afdc43bfaabe8aa3c2_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
addons_test.go:184: failed to disable dashboard addon: args "out/minikube-linux-amd64 addons disable dashboard -p addons-926744" : exit status 11
addons_test.go:187: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-926744
addons_test.go:187: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable gvisor -p addons-926744: exit status 11 (6.142208338s)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.188:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_8dd43b2cee45a94e37dbac1dd983966d1c97e7d4_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
addons_test.go:189: failed to disable non-enabled addon: args "out/minikube-linux-amd64 addons disable gvisor -p addons-926744" : exit status 11
--- FAIL: TestAddons/StoppedEnableDisable (154.34s)

x
+
TestMultiControlPlane/serial/StopSecondaryNode (141.92s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-linux-amd64 -p ha-683480 node stop m02 -v=7 --alsologtostderr
E0603 11:02:12.037427   15028 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/addons-926744/client.crt: no such file or directory
E0603 11:03:03.058076   15028 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/functional-835483/client.crt: no such file or directory
ha_test.go:363: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-683480 node stop m02 -v=7 --alsologtostderr: exit status 30 (2m0.466560511s)

-- stdout --
	* Stopping node "ha-683480-m02"  ...

-- /stdout --
** stderr ** 
	I0603 11:01:55.164835   29819 out.go:291] Setting OutFile to fd 1 ...
	I0603 11:01:55.165075   29819 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0603 11:01:55.165083   29819 out.go:304] Setting ErrFile to fd 2...
	I0603 11:01:55.165087   29819 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0603 11:01:55.165248   29819 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19008-7755/.minikube/bin
	I0603 11:01:55.165476   29819 mustload.go:65] Loading cluster: ha-683480
	I0603 11:01:55.165816   29819 config.go:182] Loaded profile config "ha-683480": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0603 11:01:55.165830   29819 stop.go:39] StopHost: ha-683480-m02
	I0603 11:01:55.166163   29819 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 11:01:55.166207   29819 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 11:01:55.183089   29819 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35037
	I0603 11:01:55.183517   29819 main.go:141] libmachine: () Calling .GetVersion
	I0603 11:01:55.184027   29819 main.go:141] libmachine: Using API Version  1
	I0603 11:01:55.184051   29819 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 11:01:55.184419   29819 main.go:141] libmachine: () Calling .GetMachineName
	I0603 11:01:55.186746   29819 out.go:177] * Stopping node "ha-683480-m02"  ...
	I0603 11:01:55.187906   29819 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0603 11:01:55.187941   29819 main.go:141] libmachine: (ha-683480-m02) Calling .DriverName
	I0603 11:01:55.188173   29819 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0603 11:01:55.188213   29819 main.go:141] libmachine: (ha-683480-m02) Calling .GetSSHHostname
	I0603 11:01:55.191178   29819 main.go:141] libmachine: (ha-683480-m02) DBG | domain ha-683480-m02 has defined MAC address 52:54:00:00:55:50 in network mk-ha-683480
	I0603 11:01:55.191612   29819 main.go:141] libmachine: (ha-683480-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:55:50", ip: ""} in network mk-ha-683480: {Iface:virbr1 ExpiryTime:2024-06-03 11:57:29 +0000 UTC Type:0 Mac:52:54:00:00:55:50 Iaid: IPaddr:192.168.39.127 Prefix:24 Hostname:ha-683480-m02 Clientid:01:52:54:00:00:55:50}
	I0603 11:01:55.191652   29819 main.go:141] libmachine: (ha-683480-m02) DBG | domain ha-683480-m02 has defined IP address 192.168.39.127 and MAC address 52:54:00:00:55:50 in network mk-ha-683480
	I0603 11:01:55.191782   29819 main.go:141] libmachine: (ha-683480-m02) Calling .GetSSHPort
	I0603 11:01:55.192103   29819 main.go:141] libmachine: (ha-683480-m02) Calling .GetSSHKeyPath
	I0603 11:01:55.192278   29819 main.go:141] libmachine: (ha-683480-m02) Calling .GetSSHUsername
	I0603 11:01:55.192432   29819 sshutil.go:53] new ssh client: &{IP:192.168.39.127 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19008-7755/.minikube/machines/ha-683480-m02/id_rsa Username:docker}
	I0603 11:01:55.280223   29819 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0603 11:01:55.335284   29819 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0603 11:01:55.390046   29819 main.go:141] libmachine: Stopping "ha-683480-m02"...
	I0603 11:01:55.390101   29819 main.go:141] libmachine: (ha-683480-m02) Calling .GetState
	I0603 11:01:55.391679   29819 main.go:141] libmachine: (ha-683480-m02) Calling .Stop
	I0603 11:01:55.395218   29819 main.go:141] libmachine: (ha-683480-m02) Waiting for machine to stop 0/120
	I0603 11:01:56.396551   29819 main.go:141] libmachine: (ha-683480-m02) Waiting for machine to stop 1/120
	I0603 11:01:57.398541   29819 main.go:141] libmachine: (ha-683480-m02) Waiting for machine to stop 2/120
	I0603 11:01:58.399841   29819 main.go:141] libmachine: (ha-683480-m02) Waiting for machine to stop 3/120
	I0603 11:01:59.401473   29819 main.go:141] libmachine: (ha-683480-m02) Waiting for machine to stop 4/120
	I0603 11:02:00.403284   29819 main.go:141] libmachine: (ha-683480-m02) Waiting for machine to stop 5/120
	I0603 11:02:01.405467   29819 main.go:141] libmachine: (ha-683480-m02) Waiting for machine to stop 6/120
	I0603 11:02:02.407027   29819 main.go:141] libmachine: (ha-683480-m02) Waiting for machine to stop 7/120
	I0603 11:02:03.408402   29819 main.go:141] libmachine: (ha-683480-m02) Waiting for machine to stop 8/120
	I0603 11:02:04.409553   29819 main.go:141] libmachine: (ha-683480-m02) Waiting for machine to stop 9/120
	I0603 11:02:05.411736   29819 main.go:141] libmachine: (ha-683480-m02) Waiting for machine to stop 10/120
	I0603 11:02:06.413057   29819 main.go:141] libmachine: (ha-683480-m02) Waiting for machine to stop 11/120
	I0603 11:02:07.414700   29819 main.go:141] libmachine: (ha-683480-m02) Waiting for machine to stop 12/120
	I0603 11:02:08.416624   29819 main.go:141] libmachine: (ha-683480-m02) Waiting for machine to stop 13/120
	I0603 11:02:09.417910   29819 main.go:141] libmachine: (ha-683480-m02) Waiting for machine to stop 14/120
	I0603 11:02:10.419881   29819 main.go:141] libmachine: (ha-683480-m02) Waiting for machine to stop 15/120
	I0603 11:02:11.421567   29819 main.go:141] libmachine: (ha-683480-m02) Waiting for machine to stop 16/120
	I0603 11:02:12.422867   29819 main.go:141] libmachine: (ha-683480-m02) Waiting for machine to stop 17/120
	I0603 11:02:13.424034   29819 main.go:141] libmachine: (ha-683480-m02) Waiting for machine to stop 18/120
	I0603 11:02:14.425405   29819 main.go:141] libmachine: (ha-683480-m02) Waiting for machine to stop 19/120
	I0603 11:02:15.427572   29819 main.go:141] libmachine: (ha-683480-m02) Waiting for machine to stop 20/120
	I0603 11:02:16.429514   29819 main.go:141] libmachine: (ha-683480-m02) Waiting for machine to stop 21/120
	I0603 11:02:17.430662   29819 main.go:141] libmachine: (ha-683480-m02) Waiting for machine to stop 22/120
	I0603 11:02:18.431922   29819 main.go:141] libmachine: (ha-683480-m02) Waiting for machine to stop 23/120
	I0603 11:02:19.433487   29819 main.go:141] libmachine: (ha-683480-m02) Waiting for machine to stop 24/120
	I0603 11:02:20.435066   29819 main.go:141] libmachine: (ha-683480-m02) Waiting for machine to stop 25/120
	I0603 11:02:21.436433   29819 main.go:141] libmachine: (ha-683480-m02) Waiting for machine to stop 26/120
	I0603 11:02:22.437802   29819 main.go:141] libmachine: (ha-683480-m02) Waiting for machine to stop 27/120
	I0603 11:02:23.439195   29819 main.go:141] libmachine: (ha-683480-m02) Waiting for machine to stop 28/120
	I0603 11:02:24.441504   29819 main.go:141] libmachine: (ha-683480-m02) Waiting for machine to stop 29/120
	I0603 11:02:25.442978   29819 main.go:141] libmachine: (ha-683480-m02) Waiting for machine to stop 30/120
	I0603 11:02:26.444369   29819 main.go:141] libmachine: (ha-683480-m02) Waiting for machine to stop 31/120
	I0603 11:02:27.445912   29819 main.go:141] libmachine: (ha-683480-m02) Waiting for machine to stop 32/120
	I0603 11:02:28.447324   29819 main.go:141] libmachine: (ha-683480-m02) Waiting for machine to stop 33/120
	I0603 11:02:29.449480   29819 main.go:141] libmachine: (ha-683480-m02) Waiting for machine to stop 34/120
	I0603 11:02:30.450925   29819 main.go:141] libmachine: (ha-683480-m02) Waiting for machine to stop 35/120
	I0603 11:02:31.452053   29819 main.go:141] libmachine: (ha-683480-m02) Waiting for machine to stop 36/120
	I0603 11:02:32.453172   29819 main.go:141] libmachine: (ha-683480-m02) Waiting for machine to stop 37/120
	I0603 11:02:33.454471   29819 main.go:141] libmachine: (ha-683480-m02) Waiting for machine to stop 38/120
	I0603 11:02:34.455680   29819 main.go:141] libmachine: (ha-683480-m02) Waiting for machine to stop 39/120
	I0603 11:02:35.457578   29819 main.go:141] libmachine: (ha-683480-m02) Waiting for machine to stop 40/120
	I0603 11:02:36.459253   29819 main.go:141] libmachine: (ha-683480-m02) Waiting for machine to stop 41/120
	I0603 11:02:37.460605   29819 main.go:141] libmachine: (ha-683480-m02) Waiting for machine to stop 42/120
	I0603 11:02:38.461933   29819 main.go:141] libmachine: (ha-683480-m02) Waiting for machine to stop 43/120
	I0603 11:02:39.463373   29819 main.go:141] libmachine: (ha-683480-m02) Waiting for machine to stop 44/120
	I0603 11:02:40.465159   29819 main.go:141] libmachine: (ha-683480-m02) Waiting for machine to stop 45/120
	I0603 11:02:41.466453   29819 main.go:141] libmachine: (ha-683480-m02) Waiting for machine to stop 46/120
	I0603 11:02:42.468681   29819 main.go:141] libmachine: (ha-683480-m02) Waiting for machine to stop 47/120
	I0603 11:02:43.469856   29819 main.go:141] libmachine: (ha-683480-m02) Waiting for machine to stop 48/120
	I0603 11:02:44.471198   29819 main.go:141] libmachine: (ha-683480-m02) Waiting for machine to stop 49/120
	I0603 11:02:45.473299   29819 main.go:141] libmachine: (ha-683480-m02) Waiting for machine to stop 50/120
	I0603 11:02:46.474528   29819 main.go:141] libmachine: (ha-683480-m02) Waiting for machine to stop 51/120
	I0603 11:02:47.475932   29819 main.go:141] libmachine: (ha-683480-m02) Waiting for machine to stop 52/120
	I0603 11:02:48.477337   29819 main.go:141] libmachine: (ha-683480-m02) Waiting for machine to stop 53/120
	I0603 11:02:49.479126   29819 main.go:141] libmachine: (ha-683480-m02) Waiting for machine to stop 54/120
	I0603 11:02:50.481043   29819 main.go:141] libmachine: (ha-683480-m02) Waiting for machine to stop 55/120
	I0603 11:02:51.482430   29819 main.go:141] libmachine: (ha-683480-m02) Waiting for machine to stop 56/120
	I0603 11:02:52.484318   29819 main.go:141] libmachine: (ha-683480-m02) Waiting for machine to stop 57/120
	I0603 11:02:53.486403   29819 main.go:141] libmachine: (ha-683480-m02) Waiting for machine to stop 58/120
	I0603 11:02:54.488604   29819 main.go:141] libmachine: (ha-683480-m02) Waiting for machine to stop 59/120
	I0603 11:02:55.490696   29819 main.go:141] libmachine: (ha-683480-m02) Waiting for machine to stop 60/120
	I0603 11:02:56.491808   29819 main.go:141] libmachine: (ha-683480-m02) Waiting for machine to stop 61/120
	I0603 11:02:57.493409   29819 main.go:141] libmachine: (ha-683480-m02) Waiting for machine to stop 62/120
	I0603 11:02:58.494763   29819 main.go:141] libmachine: (ha-683480-m02) Waiting for machine to stop 63/120
	I0603 11:02:59.496798   29819 main.go:141] libmachine: (ha-683480-m02) Waiting for machine to stop 64/120
	I0603 11:03:00.498634   29819 main.go:141] libmachine: (ha-683480-m02) Waiting for machine to stop 65/120
	I0603 11:03:01.500166   29819 main.go:141] libmachine: (ha-683480-m02) Waiting for machine to stop 66/120
	I0603 11:03:02.502230   29819 main.go:141] libmachine: (ha-683480-m02) Waiting for machine to stop 67/120
	I0603 11:03:03.503525   29819 main.go:141] libmachine: (ha-683480-m02) Waiting for machine to stop 68/120
	I0603 11:03:04.505333   29819 main.go:141] libmachine: (ha-683480-m02) Waiting for machine to stop 69/120
	I0603 11:03:05.507265   29819 main.go:141] libmachine: (ha-683480-m02) Waiting for machine to stop 70/120
	I0603 11:03:06.509517   29819 main.go:141] libmachine: (ha-683480-m02) Waiting for machine to stop 71/120
	I0603 11:03:07.511418   29819 main.go:141] libmachine: (ha-683480-m02) Waiting for machine to stop 72/120
	I0603 11:03:08.513520   29819 main.go:141] libmachine: (ha-683480-m02) Waiting for machine to stop 73/120
	I0603 11:03:09.514765   29819 main.go:141] libmachine: (ha-683480-m02) Waiting for machine to stop 74/120
	I0603 11:03:10.516524   29819 main.go:141] libmachine: (ha-683480-m02) Waiting for machine to stop 75/120
	I0603 11:03:11.517832   29819 main.go:141] libmachine: (ha-683480-m02) Waiting for machine to stop 76/120
	I0603 11:03:12.519090   29819 main.go:141] libmachine: (ha-683480-m02) Waiting for machine to stop 77/120
	I0603 11:03:13.521252   29819 main.go:141] libmachine: (ha-683480-m02) Waiting for machine to stop 78/120
	I0603 11:03:14.522765   29819 main.go:141] libmachine: (ha-683480-m02) Waiting for machine to stop 79/120
	I0603 11:03:15.524981   29819 main.go:141] libmachine: (ha-683480-m02) Waiting for machine to stop 80/120
	I0603 11:03:16.526460   29819 main.go:141] libmachine: (ha-683480-m02) Waiting for machine to stop 81/120
	I0603 11:03:17.527789   29819 main.go:141] libmachine: (ha-683480-m02) Waiting for machine to stop 82/120
	I0603 11:03:18.529428   29819 main.go:141] libmachine: (ha-683480-m02) Waiting for machine to stop 83/120
	I0603 11:03:19.531696   29819 main.go:141] libmachine: (ha-683480-m02) Waiting for machine to stop 84/120
	I0603 11:03:20.533703   29819 main.go:141] libmachine: (ha-683480-m02) Waiting for machine to stop 85/120
	I0603 11:03:21.535006   29819 main.go:141] libmachine: (ha-683480-m02) Waiting for machine to stop 86/120
	I0603 11:03:22.536314   29819 main.go:141] libmachine: (ha-683480-m02) Waiting for machine to stop 87/120
	I0603 11:03:23.537656   29819 main.go:141] libmachine: (ha-683480-m02) Waiting for machine to stop 88/120
	I0603 11:03:24.538939   29819 main.go:141] libmachine: (ha-683480-m02) Waiting for machine to stop 89/120
	I0603 11:03:25.540988   29819 main.go:141] libmachine: (ha-683480-m02) Waiting for machine to stop 90/120
	I0603 11:03:26.542393   29819 main.go:141] libmachine: (ha-683480-m02) Waiting for machine to stop 91/120
	I0603 11:03:27.543867   29819 main.go:141] libmachine: (ha-683480-m02) Waiting for machine to stop 92/120
	I0603 11:03:28.545200   29819 main.go:141] libmachine: (ha-683480-m02) Waiting for machine to stop 93/120
	I0603 11:03:29.546772   29819 main.go:141] libmachine: (ha-683480-m02) Waiting for machine to stop 94/120
	I0603 11:03:30.548329   29819 main.go:141] libmachine: (ha-683480-m02) Waiting for machine to stop 95/120
	I0603 11:03:31.549616   29819 main.go:141] libmachine: (ha-683480-m02) Waiting for machine to stop 96/120
	I0603 11:03:32.551989   29819 main.go:141] libmachine: (ha-683480-m02) Waiting for machine to stop 97/120
	I0603 11:03:33.554283   29819 main.go:141] libmachine: (ha-683480-m02) Waiting for machine to stop 98/120
	I0603 11:03:34.556312   29819 main.go:141] libmachine: (ha-683480-m02) Waiting for machine to stop 99/120
	I0603 11:03:35.558467   29819 main.go:141] libmachine: (ha-683480-m02) Waiting for machine to stop 100/120
	I0603 11:03:36.559740   29819 main.go:141] libmachine: (ha-683480-m02) Waiting for machine to stop 101/120
	I0603 11:03:37.561576   29819 main.go:141] libmachine: (ha-683480-m02) Waiting for machine to stop 102/120
	I0603 11:03:38.563716   29819 main.go:141] libmachine: (ha-683480-m02) Waiting for machine to stop 103/120
	I0603 11:03:39.565564   29819 main.go:141] libmachine: (ha-683480-m02) Waiting for machine to stop 104/120
	I0603 11:03:40.567098   29819 main.go:141] libmachine: (ha-683480-m02) Waiting for machine to stop 105/120
	I0603 11:03:41.568542   29819 main.go:141] libmachine: (ha-683480-m02) Waiting for machine to stop 106/120
	I0603 11:03:42.569821   29819 main.go:141] libmachine: (ha-683480-m02) Waiting for machine to stop 107/120
	I0603 11:03:43.571238   29819 main.go:141] libmachine: (ha-683480-m02) Waiting for machine to stop 108/120
	I0603 11:03:44.572642   29819 main.go:141] libmachine: (ha-683480-m02) Waiting for machine to stop 109/120
	I0603 11:03:45.574679   29819 main.go:141] libmachine: (ha-683480-m02) Waiting for machine to stop 110/120
	I0603 11:03:46.576379   29819 main.go:141] libmachine: (ha-683480-m02) Waiting for machine to stop 111/120
	I0603 11:03:47.577811   29819 main.go:141] libmachine: (ha-683480-m02) Waiting for machine to stop 112/120
	I0603 11:03:48.578997   29819 main.go:141] libmachine: (ha-683480-m02) Waiting for machine to stop 113/120
	I0603 11:03:49.580258   29819 main.go:141] libmachine: (ha-683480-m02) Waiting for machine to stop 114/120
	I0603 11:03:50.581464   29819 main.go:141] libmachine: (ha-683480-m02) Waiting for machine to stop 115/120
	I0603 11:03:51.583392   29819 main.go:141] libmachine: (ha-683480-m02) Waiting for machine to stop 116/120
	I0603 11:03:52.585341   29819 main.go:141] libmachine: (ha-683480-m02) Waiting for machine to stop 117/120
	I0603 11:03:53.586690   29819 main.go:141] libmachine: (ha-683480-m02) Waiting for machine to stop 118/120
	I0603 11:03:54.588735   29819 main.go:141] libmachine: (ha-683480-m02) Waiting for machine to stop 119/120
	I0603 11:03:55.589396   29819 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0603 11:03:55.589540   29819 out.go:239] X Failed to stop node m02: Temporary Error: stop: unable to stop vm, current state "Running"
	X Failed to stop node m02: Temporary Error: stop: unable to stop vm, current state "Running"

** /stderr **
ha_test.go:365: secondary control-plane node stop returned an error. args "out/minikube-linux-amd64 -p ha-683480 node stop m02 -v=7 --alsologtostderr": exit status 30
ha_test.go:369: (dbg) Run:  out/minikube-linux-amd64 -p ha-683480 status -v=7 --alsologtostderr
ha_test.go:369: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-683480 status -v=7 --alsologtostderr: exit status 3 (19.149953068s)

-- stdout --
	ha-683480
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-683480-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-683480-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-683480-m04
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr ** 
	I0603 11:03:55.632840   30250 out.go:291] Setting OutFile to fd 1 ...
	I0603 11:03:55.632966   30250 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0603 11:03:55.632985   30250 out.go:304] Setting ErrFile to fd 2...
	I0603 11:03:55.632991   30250 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0603 11:03:55.633335   30250 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19008-7755/.minikube/bin
	I0603 11:03:55.633567   30250 out.go:298] Setting JSON to false
	I0603 11:03:55.633585   30250 mustload.go:65] Loading cluster: ha-683480
	I0603 11:03:55.633699   30250 notify.go:220] Checking for updates...
	I0603 11:03:55.634386   30250 config.go:182] Loaded profile config "ha-683480": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0603 11:03:55.634407   30250 status.go:255] checking status of ha-683480 ...
	I0603 11:03:55.634793   30250 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 11:03:55.634846   30250 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 11:03:55.654192   30250 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40175
	I0603 11:03:55.654615   30250 main.go:141] libmachine: () Calling .GetVersion
	I0603 11:03:55.655256   30250 main.go:141] libmachine: Using API Version  1
	I0603 11:03:55.655277   30250 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 11:03:55.655629   30250 main.go:141] libmachine: () Calling .GetMachineName
	I0603 11:03:55.655863   30250 main.go:141] libmachine: (ha-683480) Calling .GetState
	I0603 11:03:55.657374   30250 status.go:330] ha-683480 host status = "Running" (err=<nil>)
	I0603 11:03:55.657387   30250 host.go:66] Checking if "ha-683480" exists ...
	I0603 11:03:55.657654   30250 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 11:03:55.657691   30250 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 11:03:55.671875   30250 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34455
	I0603 11:03:55.672281   30250 main.go:141] libmachine: () Calling .GetVersion
	I0603 11:03:55.672725   30250 main.go:141] libmachine: Using API Version  1
	I0603 11:03:55.672747   30250 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 11:03:55.673072   30250 main.go:141] libmachine: () Calling .GetMachineName
	I0603 11:03:55.673246   30250 main.go:141] libmachine: (ha-683480) Calling .GetIP
	I0603 11:03:55.676331   30250 main.go:141] libmachine: (ha-683480) DBG | domain ha-683480 has defined MAC address 52:54:00:e5:3f:6a in network mk-ha-683480
	I0603 11:03:55.676798   30250 main.go:141] libmachine: (ha-683480) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:3f:6a", ip: ""} in network mk-ha-683480: {Iface:virbr1 ExpiryTime:2024-06-03 11:56:28 +0000 UTC Type:0 Mac:52:54:00:e5:3f:6a Iaid: IPaddr:192.168.39.116 Prefix:24 Hostname:ha-683480 Clientid:01:52:54:00:e5:3f:6a}
	I0603 11:03:55.676829   30250 main.go:141] libmachine: (ha-683480) DBG | domain ha-683480 has defined IP address 192.168.39.116 and MAC address 52:54:00:e5:3f:6a in network mk-ha-683480
	I0603 11:03:55.676982   30250 host.go:66] Checking if "ha-683480" exists ...
	I0603 11:03:55.677311   30250 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 11:03:55.677353   30250 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 11:03:55.691653   30250 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41389
	I0603 11:03:55.692155   30250 main.go:141] libmachine: () Calling .GetVersion
	I0603 11:03:55.692662   30250 main.go:141] libmachine: Using API Version  1
	I0603 11:03:55.692695   30250 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 11:03:55.692983   30250 main.go:141] libmachine: () Calling .GetMachineName
	I0603 11:03:55.693155   30250 main.go:141] libmachine: (ha-683480) Calling .DriverName
	I0603 11:03:55.693371   30250 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0603 11:03:55.693408   30250 main.go:141] libmachine: (ha-683480) Calling .GetSSHHostname
	I0603 11:03:55.696118   30250 main.go:141] libmachine: (ha-683480) DBG | domain ha-683480 has defined MAC address 52:54:00:e5:3f:6a in network mk-ha-683480
	I0603 11:03:55.696490   30250 main.go:141] libmachine: (ha-683480) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:3f:6a", ip: ""} in network mk-ha-683480: {Iface:virbr1 ExpiryTime:2024-06-03 11:56:28 +0000 UTC Type:0 Mac:52:54:00:e5:3f:6a Iaid: IPaddr:192.168.39.116 Prefix:24 Hostname:ha-683480 Clientid:01:52:54:00:e5:3f:6a}
	I0603 11:03:55.696515   30250 main.go:141] libmachine: (ha-683480) DBG | domain ha-683480 has defined IP address 192.168.39.116 and MAC address 52:54:00:e5:3f:6a in network mk-ha-683480
	I0603 11:03:55.696654   30250 main.go:141] libmachine: (ha-683480) Calling .GetSSHPort
	I0603 11:03:55.696794   30250 main.go:141] libmachine: (ha-683480) Calling .GetSSHKeyPath
	I0603 11:03:55.696975   30250 main.go:141] libmachine: (ha-683480) Calling .GetSSHUsername
	I0603 11:03:55.697123   30250 sshutil.go:53] new ssh client: &{IP:192.168.39.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19008-7755/.minikube/machines/ha-683480/id_rsa Username:docker}
	I0603 11:03:55.776999   30250 ssh_runner.go:195] Run: systemctl --version
	I0603 11:03:55.784252   30250 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0603 11:03:55.807446   30250 kubeconfig.go:125] found "ha-683480" server: "https://192.168.39.254:8443"
	I0603 11:03:55.807480   30250 api_server.go:166] Checking apiserver status ...
	I0603 11:03:55.807518   30250 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 11:03:55.824252   30250 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1163/cgroup
	W0603 11:03:55.834623   30250 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1163/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0603 11:03:55.834670   30250 ssh_runner.go:195] Run: ls
	I0603 11:03:55.839565   30250 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0603 11:03:55.843724   30250 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0603 11:03:55.843741   30250 status.go:422] ha-683480 apiserver status = Running (err=<nil>)
	I0603 11:03:55.843750   30250 status.go:257] ha-683480 status: &{Name:ha-683480 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0603 11:03:55.843764   30250 status.go:255] checking status of ha-683480-m02 ...
	I0603 11:03:55.844021   30250 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 11:03:55.844057   30250 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 11:03:55.859548   30250 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36153
	I0603 11:03:55.859993   30250 main.go:141] libmachine: () Calling .GetVersion
	I0603 11:03:55.860511   30250 main.go:141] libmachine: Using API Version  1
	I0603 11:03:55.860538   30250 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 11:03:55.860827   30250 main.go:141] libmachine: () Calling .GetMachineName
	I0603 11:03:55.860998   30250 main.go:141] libmachine: (ha-683480-m02) Calling .GetState
	I0603 11:03:55.862492   30250 status.go:330] ha-683480-m02 host status = "Running" (err=<nil>)
	I0603 11:03:55.862509   30250 host.go:66] Checking if "ha-683480-m02" exists ...
	I0603 11:03:55.862807   30250 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 11:03:55.862839   30250 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 11:03:55.877022   30250 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39421
	I0603 11:03:55.877368   30250 main.go:141] libmachine: () Calling .GetVersion
	I0603 11:03:55.877739   30250 main.go:141] libmachine: Using API Version  1
	I0603 11:03:55.877766   30250 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 11:03:55.878116   30250 main.go:141] libmachine: () Calling .GetMachineName
	I0603 11:03:55.878297   30250 main.go:141] libmachine: (ha-683480-m02) Calling .GetIP
	I0603 11:03:55.881065   30250 main.go:141] libmachine: (ha-683480-m02) DBG | domain ha-683480-m02 has defined MAC address 52:54:00:00:55:50 in network mk-ha-683480
	I0603 11:03:55.881468   30250 main.go:141] libmachine: (ha-683480-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:55:50", ip: ""} in network mk-ha-683480: {Iface:virbr1 ExpiryTime:2024-06-03 11:57:29 +0000 UTC Type:0 Mac:52:54:00:00:55:50 Iaid: IPaddr:192.168.39.127 Prefix:24 Hostname:ha-683480-m02 Clientid:01:52:54:00:00:55:50}
	I0603 11:03:55.881500   30250 main.go:141] libmachine: (ha-683480-m02) DBG | domain ha-683480-m02 has defined IP address 192.168.39.127 and MAC address 52:54:00:00:55:50 in network mk-ha-683480
	I0603 11:03:55.881639   30250 host.go:66] Checking if "ha-683480-m02" exists ...
	I0603 11:03:55.881934   30250 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 11:03:55.881967   30250 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 11:03:55.896597   30250 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32907
	I0603 11:03:55.896999   30250 main.go:141] libmachine: () Calling .GetVersion
	I0603 11:03:55.897444   30250 main.go:141] libmachine: Using API Version  1
	I0603 11:03:55.897463   30250 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 11:03:55.897778   30250 main.go:141] libmachine: () Calling .GetMachineName
	I0603 11:03:55.897950   30250 main.go:141] libmachine: (ha-683480-m02) Calling .DriverName
	I0603 11:03:55.898119   30250 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0603 11:03:55.898137   30250 main.go:141] libmachine: (ha-683480-m02) Calling .GetSSHHostname
	I0603 11:03:55.900918   30250 main.go:141] libmachine: (ha-683480-m02) DBG | domain ha-683480-m02 has defined MAC address 52:54:00:00:55:50 in network mk-ha-683480
	I0603 11:03:55.901349   30250 main.go:141] libmachine: (ha-683480-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:55:50", ip: ""} in network mk-ha-683480: {Iface:virbr1 ExpiryTime:2024-06-03 11:57:29 +0000 UTC Type:0 Mac:52:54:00:00:55:50 Iaid: IPaddr:192.168.39.127 Prefix:24 Hostname:ha-683480-m02 Clientid:01:52:54:00:00:55:50}
	I0603 11:03:55.901376   30250 main.go:141] libmachine: (ha-683480-m02) DBG | domain ha-683480-m02 has defined IP address 192.168.39.127 and MAC address 52:54:00:00:55:50 in network mk-ha-683480
	I0603 11:03:55.901508   30250 main.go:141] libmachine: (ha-683480-m02) Calling .GetSSHPort
	I0603 11:03:55.901634   30250 main.go:141] libmachine: (ha-683480-m02) Calling .GetSSHKeyPath
	I0603 11:03:55.901780   30250 main.go:141] libmachine: (ha-683480-m02) Calling .GetSSHUsername
	I0603 11:03:55.901934   30250 sshutil.go:53] new ssh client: &{IP:192.168.39.127 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19008-7755/.minikube/machines/ha-683480-m02/id_rsa Username:docker}
	W0603 11:04:14.383299   30250 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.127:22: connect: no route to host
	W0603 11:04:14.383404   30250 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.127:22: connect: no route to host
	E0603 11:04:14.383423   30250 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.127:22: connect: no route to host
	I0603 11:04:14.383434   30250 status.go:257] ha-683480-m02 status: &{Name:ha-683480-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0603 11:04:14.383456   30250 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.127:22: connect: no route to host
	I0603 11:04:14.383465   30250 status.go:255] checking status of ha-683480-m03 ...
	I0603 11:04:14.383912   30250 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 11:04:14.383979   30250 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 11:04:14.399703   30250 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42631
	I0603 11:04:14.400133   30250 main.go:141] libmachine: () Calling .GetVersion
	I0603 11:04:14.400635   30250 main.go:141] libmachine: Using API Version  1
	I0603 11:04:14.400658   30250 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 11:04:14.400995   30250 main.go:141] libmachine: () Calling .GetMachineName
	I0603 11:04:14.401191   30250 main.go:141] libmachine: (ha-683480-m03) Calling .GetState
	I0603 11:04:14.402685   30250 status.go:330] ha-683480-m03 host status = "Running" (err=<nil>)
	I0603 11:04:14.402700   30250 host.go:66] Checking if "ha-683480-m03" exists ...
	I0603 11:04:14.402984   30250 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 11:04:14.403014   30250 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 11:04:14.417049   30250 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45481
	I0603 11:04:14.417479   30250 main.go:141] libmachine: () Calling .GetVersion
	I0603 11:04:14.418000   30250 main.go:141] libmachine: Using API Version  1
	I0603 11:04:14.418024   30250 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 11:04:14.418353   30250 main.go:141] libmachine: () Calling .GetMachineName
	I0603 11:04:14.418537   30250 main.go:141] libmachine: (ha-683480-m03) Calling .GetIP
	I0603 11:04:14.421347   30250 main.go:141] libmachine: (ha-683480-m03) DBG | domain ha-683480-m03 has defined MAC address 52:54:00:b4:3e:89 in network mk-ha-683480
	I0603 11:04:14.421835   30250 main.go:141] libmachine: (ha-683480-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:3e:89", ip: ""} in network mk-ha-683480: {Iface:virbr1 ExpiryTime:2024-06-03 11:59:47 +0000 UTC Type:0 Mac:52:54:00:b4:3e:89 Iaid: IPaddr:192.168.39.131 Prefix:24 Hostname:ha-683480-m03 Clientid:01:52:54:00:b4:3e:89}
	I0603 11:04:14.421866   30250 main.go:141] libmachine: (ha-683480-m03) DBG | domain ha-683480-m03 has defined IP address 192.168.39.131 and MAC address 52:54:00:b4:3e:89 in network mk-ha-683480
	I0603 11:04:14.422158   30250 host.go:66] Checking if "ha-683480-m03" exists ...
	I0603 11:04:14.422571   30250 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 11:04:14.422618   30250 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 11:04:14.438869   30250 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39287
	I0603 11:04:14.439333   30250 main.go:141] libmachine: () Calling .GetVersion
	I0603 11:04:14.440020   30250 main.go:141] libmachine: Using API Version  1
	I0603 11:04:14.440048   30250 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 11:04:14.440371   30250 main.go:141] libmachine: () Calling .GetMachineName
	I0603 11:04:14.440733   30250 main.go:141] libmachine: (ha-683480-m03) Calling .DriverName
	I0603 11:04:14.440952   30250 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0603 11:04:14.440980   30250 main.go:141] libmachine: (ha-683480-m03) Calling .GetSSHHostname
	I0603 11:04:14.444100   30250 main.go:141] libmachine: (ha-683480-m03) DBG | domain ha-683480-m03 has defined MAC address 52:54:00:b4:3e:89 in network mk-ha-683480
	I0603 11:04:14.444556   30250 main.go:141] libmachine: (ha-683480-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:3e:89", ip: ""} in network mk-ha-683480: {Iface:virbr1 ExpiryTime:2024-06-03 11:59:47 +0000 UTC Type:0 Mac:52:54:00:b4:3e:89 Iaid: IPaddr:192.168.39.131 Prefix:24 Hostname:ha-683480-m03 Clientid:01:52:54:00:b4:3e:89}
	I0603 11:04:14.444590   30250 main.go:141] libmachine: (ha-683480-m03) DBG | domain ha-683480-m03 has defined IP address 192.168.39.131 and MAC address 52:54:00:b4:3e:89 in network mk-ha-683480
	I0603 11:04:14.444727   30250 main.go:141] libmachine: (ha-683480-m03) Calling .GetSSHPort
	I0603 11:04:14.444908   30250 main.go:141] libmachine: (ha-683480-m03) Calling .GetSSHKeyPath
	I0603 11:04:14.445054   30250 main.go:141] libmachine: (ha-683480-m03) Calling .GetSSHUsername
	I0603 11:04:14.445206   30250 sshutil.go:53] new ssh client: &{IP:192.168.39.131 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19008-7755/.minikube/machines/ha-683480-m03/id_rsa Username:docker}
	I0603 11:04:14.528722   30250 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0603 11:04:14.545765   30250 kubeconfig.go:125] found "ha-683480" server: "https://192.168.39.254:8443"
	I0603 11:04:14.545789   30250 api_server.go:166] Checking apiserver status ...
	I0603 11:04:14.545821   30250 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 11:04:14.560336   30250 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1522/cgroup
	W0603 11:04:14.569232   30250 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1522/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0603 11:04:14.569280   30250 ssh_runner.go:195] Run: ls
	I0603 11:04:14.574178   30250 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0603 11:04:14.580935   30250 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0603 11:04:14.580958   30250 status.go:422] ha-683480-m03 apiserver status = Running (err=<nil>)
	I0603 11:04:14.580969   30250 status.go:257] ha-683480-m03 status: &{Name:ha-683480-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0603 11:04:14.580992   30250 status.go:255] checking status of ha-683480-m04 ...
	I0603 11:04:14.581402   30250 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 11:04:14.581443   30250 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 11:04:14.595792   30250 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41783
	I0603 11:04:14.596160   30250 main.go:141] libmachine: () Calling .GetVersion
	I0603 11:04:14.596647   30250 main.go:141] libmachine: Using API Version  1
	I0603 11:04:14.596667   30250 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 11:04:14.596932   30250 main.go:141] libmachine: () Calling .GetMachineName
	I0603 11:04:14.597131   30250 main.go:141] libmachine: (ha-683480-m04) Calling .GetState
	I0603 11:04:14.598485   30250 status.go:330] ha-683480-m04 host status = "Running" (err=<nil>)
	I0603 11:04:14.598499   30250 host.go:66] Checking if "ha-683480-m04" exists ...
	I0603 11:04:14.598779   30250 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 11:04:14.598808   30250 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 11:04:14.614365   30250 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35197
	I0603 11:04:14.614717   30250 main.go:141] libmachine: () Calling .GetVersion
	I0603 11:04:14.615177   30250 main.go:141] libmachine: Using API Version  1
	I0603 11:04:14.615195   30250 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 11:04:14.615546   30250 main.go:141] libmachine: () Calling .GetMachineName
	I0603 11:04:14.615728   30250 main.go:141] libmachine: (ha-683480-m04) Calling .GetIP
	I0603 11:04:14.618042   30250 main.go:141] libmachine: (ha-683480-m04) DBG | domain ha-683480-m04 has defined MAC address 52:54:00:ed:4a:53 in network mk-ha-683480
	I0603 11:04:14.618482   30250 main.go:141] libmachine: (ha-683480-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ed:4a:53", ip: ""} in network mk-ha-683480: {Iface:virbr1 ExpiryTime:2024-06-03 12:01:12 +0000 UTC Type:0 Mac:52:54:00:ed:4a:53 Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:ha-683480-m04 Clientid:01:52:54:00:ed:4a:53}
	I0603 11:04:14.618502   30250 main.go:141] libmachine: (ha-683480-m04) DBG | domain ha-683480-m04 has defined IP address 192.168.39.206 and MAC address 52:54:00:ed:4a:53 in network mk-ha-683480
	I0603 11:04:14.618657   30250 host.go:66] Checking if "ha-683480-m04" exists ...
	I0603 11:04:14.618972   30250 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 11:04:14.619010   30250 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 11:04:14.633911   30250 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33885
	I0603 11:04:14.634248   30250 main.go:141] libmachine: () Calling .GetVersion
	I0603 11:04:14.634716   30250 main.go:141] libmachine: Using API Version  1
	I0603 11:04:14.634736   30250 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 11:04:14.635064   30250 main.go:141] libmachine: () Calling .GetMachineName
	I0603 11:04:14.635230   30250 main.go:141] libmachine: (ha-683480-m04) Calling .DriverName
	I0603 11:04:14.635465   30250 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0603 11:04:14.635484   30250 main.go:141] libmachine: (ha-683480-m04) Calling .GetSSHHostname
	I0603 11:04:14.637780   30250 main.go:141] libmachine: (ha-683480-m04) DBG | domain ha-683480-m04 has defined MAC address 52:54:00:ed:4a:53 in network mk-ha-683480
	I0603 11:04:14.638170   30250 main.go:141] libmachine: (ha-683480-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ed:4a:53", ip: ""} in network mk-ha-683480: {Iface:virbr1 ExpiryTime:2024-06-03 12:01:12 +0000 UTC Type:0 Mac:52:54:00:ed:4a:53 Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:ha-683480-m04 Clientid:01:52:54:00:ed:4a:53}
	I0603 11:04:14.638191   30250 main.go:141] libmachine: (ha-683480-m04) DBG | domain ha-683480-m04 has defined IP address 192.168.39.206 and MAC address 52:54:00:ed:4a:53 in network mk-ha-683480
	I0603 11:04:14.638365   30250 main.go:141] libmachine: (ha-683480-m04) Calling .GetSSHPort
	I0603 11:04:14.638536   30250 main.go:141] libmachine: (ha-683480-m04) Calling .GetSSHKeyPath
	I0603 11:04:14.638669   30250 main.go:141] libmachine: (ha-683480-m04) Calling .GetSSHUsername
	I0603 11:04:14.638797   30250 sshutil.go:53] new ssh client: &{IP:192.168.39.206 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19008-7755/.minikube/machines/ha-683480-m04/id_rsa Username:docker}
	I0603 11:04:14.724287   30250 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0603 11:04:14.740072   30250 status.go:257] ha-683480-m04 status: &{Name:ha-683480-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:372: failed to run minikube status. args "out/minikube-linux-amd64 -p ha-683480 status -v=7 --alsologtostderr" : exit status 3
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-683480 -n ha-683480
helpers_test.go:244: <<< TestMultiControlPlane/serial/StopSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/StopSecondaryNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-683480 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-683480 logs -n 25: (1.433237837s)
helpers_test.go:252: TestMultiControlPlane/serial/StopSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| cp      | ha-683480 cp ha-683480-m03:/home/docker/cp-test.txt                              | ha-683480 | jenkins | v1.33.1 | 03 Jun 24 11:01 UTC | 03 Jun 24 11:01 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile1985816295/001/cp-test_ha-683480-m03.txt |           |         |         |                     |                     |
	| ssh     | ha-683480 ssh -n                                                                 | ha-683480 | jenkins | v1.33.1 | 03 Jun 24 11:01 UTC | 03 Jun 24 11:01 UTC |
	|         | ha-683480-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-683480 cp ha-683480-m03:/home/docker/cp-test.txt                              | ha-683480 | jenkins | v1.33.1 | 03 Jun 24 11:01 UTC | 03 Jun 24 11:01 UTC |
	|         | ha-683480:/home/docker/cp-test_ha-683480-m03_ha-683480.txt                       |           |         |         |                     |                     |
	| ssh     | ha-683480 ssh -n                                                                 | ha-683480 | jenkins | v1.33.1 | 03 Jun 24 11:01 UTC | 03 Jun 24 11:01 UTC |
	|         | ha-683480-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-683480 ssh -n ha-683480 sudo cat                                              | ha-683480 | jenkins | v1.33.1 | 03 Jun 24 11:01 UTC | 03 Jun 24 11:01 UTC |
	|         | /home/docker/cp-test_ha-683480-m03_ha-683480.txt                                 |           |         |         |                     |                     |
	| cp      | ha-683480 cp ha-683480-m03:/home/docker/cp-test.txt                              | ha-683480 | jenkins | v1.33.1 | 03 Jun 24 11:01 UTC | 03 Jun 24 11:01 UTC |
	|         | ha-683480-m02:/home/docker/cp-test_ha-683480-m03_ha-683480-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-683480 ssh -n                                                                 | ha-683480 | jenkins | v1.33.1 | 03 Jun 24 11:01 UTC | 03 Jun 24 11:01 UTC |
	|         | ha-683480-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-683480 ssh -n ha-683480-m02 sudo cat                                          | ha-683480 | jenkins | v1.33.1 | 03 Jun 24 11:01 UTC | 03 Jun 24 11:01 UTC |
	|         | /home/docker/cp-test_ha-683480-m03_ha-683480-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-683480 cp ha-683480-m03:/home/docker/cp-test.txt                              | ha-683480 | jenkins | v1.33.1 | 03 Jun 24 11:01 UTC | 03 Jun 24 11:01 UTC |
	|         | ha-683480-m04:/home/docker/cp-test_ha-683480-m03_ha-683480-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-683480 ssh -n                                                                 | ha-683480 | jenkins | v1.33.1 | 03 Jun 24 11:01 UTC | 03 Jun 24 11:01 UTC |
	|         | ha-683480-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-683480 ssh -n ha-683480-m04 sudo cat                                          | ha-683480 | jenkins | v1.33.1 | 03 Jun 24 11:01 UTC | 03 Jun 24 11:01 UTC |
	|         | /home/docker/cp-test_ha-683480-m03_ha-683480-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-683480 cp testdata/cp-test.txt                                                | ha-683480 | jenkins | v1.33.1 | 03 Jun 24 11:01 UTC | 03 Jun 24 11:01 UTC |
	|         | ha-683480-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-683480 ssh -n                                                                 | ha-683480 | jenkins | v1.33.1 | 03 Jun 24 11:01 UTC | 03 Jun 24 11:01 UTC |
	|         | ha-683480-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-683480 cp ha-683480-m04:/home/docker/cp-test.txt                              | ha-683480 | jenkins | v1.33.1 | 03 Jun 24 11:01 UTC | 03 Jun 24 11:01 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile1985816295/001/cp-test_ha-683480-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-683480 ssh -n                                                                 | ha-683480 | jenkins | v1.33.1 | 03 Jun 24 11:01 UTC | 03 Jun 24 11:01 UTC |
	|         | ha-683480-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-683480 cp ha-683480-m04:/home/docker/cp-test.txt                              | ha-683480 | jenkins | v1.33.1 | 03 Jun 24 11:01 UTC | 03 Jun 24 11:01 UTC |
	|         | ha-683480:/home/docker/cp-test_ha-683480-m04_ha-683480.txt                       |           |         |         |                     |                     |
	| ssh     | ha-683480 ssh -n                                                                 | ha-683480 | jenkins | v1.33.1 | 03 Jun 24 11:01 UTC | 03 Jun 24 11:01 UTC |
	|         | ha-683480-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-683480 ssh -n ha-683480 sudo cat                                              | ha-683480 | jenkins | v1.33.1 | 03 Jun 24 11:01 UTC | 03 Jun 24 11:01 UTC |
	|         | /home/docker/cp-test_ha-683480-m04_ha-683480.txt                                 |           |         |         |                     |                     |
	| cp      | ha-683480 cp ha-683480-m04:/home/docker/cp-test.txt                              | ha-683480 | jenkins | v1.33.1 | 03 Jun 24 11:01 UTC | 03 Jun 24 11:01 UTC |
	|         | ha-683480-m02:/home/docker/cp-test_ha-683480-m04_ha-683480-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-683480 ssh -n                                                                 | ha-683480 | jenkins | v1.33.1 | 03 Jun 24 11:01 UTC | 03 Jun 24 11:01 UTC |
	|         | ha-683480-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-683480 ssh -n ha-683480-m02 sudo cat                                          | ha-683480 | jenkins | v1.33.1 | 03 Jun 24 11:01 UTC | 03 Jun 24 11:01 UTC |
	|         | /home/docker/cp-test_ha-683480-m04_ha-683480-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-683480 cp ha-683480-m04:/home/docker/cp-test.txt                              | ha-683480 | jenkins | v1.33.1 | 03 Jun 24 11:01 UTC | 03 Jun 24 11:01 UTC |
	|         | ha-683480-m03:/home/docker/cp-test_ha-683480-m04_ha-683480-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-683480 ssh -n                                                                 | ha-683480 | jenkins | v1.33.1 | 03 Jun 24 11:01 UTC | 03 Jun 24 11:01 UTC |
	|         | ha-683480-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-683480 ssh -n ha-683480-m03 sudo cat                                          | ha-683480 | jenkins | v1.33.1 | 03 Jun 24 11:01 UTC | 03 Jun 24 11:01 UTC |
	|         | /home/docker/cp-test_ha-683480-m04_ha-683480-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-683480 node stop m02 -v=7                                                     | ha-683480 | jenkins | v1.33.1 | 03 Jun 24 11:01 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/06/03 10:56:14
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0603 10:56:14.465928   25542 out.go:291] Setting OutFile to fd 1 ...
	I0603 10:56:14.466039   25542 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0603 10:56:14.466047   25542 out.go:304] Setting ErrFile to fd 2...
	I0603 10:56:14.466051   25542 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0603 10:56:14.466228   25542 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19008-7755/.minikube/bin
	I0603 10:56:14.466732   25542 out.go:298] Setting JSON to false
	I0603 10:56:14.467577   25542 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":2319,"bootTime":1717409855,"procs":176,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1060-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0603 10:56:14.467637   25542 start.go:139] virtualization: kvm guest
	I0603 10:56:14.469737   25542 out.go:177] * [ha-683480] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0603 10:56:14.471061   25542 out.go:177]   - MINIKUBE_LOCATION=19008
	I0603 10:56:14.471060   25542 notify.go:220] Checking for updates...
	I0603 10:56:14.472444   25542 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0603 10:56:14.473752   25542 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19008-7755/kubeconfig
	I0603 10:56:14.474992   25542 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19008-7755/.minikube
	I0603 10:56:14.476189   25542 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0603 10:56:14.477378   25542 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0603 10:56:14.478650   25542 driver.go:392] Setting default libvirt URI to qemu:///system
	I0603 10:56:14.511229   25542 out.go:177] * Using the kvm2 driver based on user configuration
	I0603 10:56:14.512537   25542 start.go:297] selected driver: kvm2
	I0603 10:56:14.512557   25542 start.go:901] validating driver "kvm2" against <nil>
	I0603 10:56:14.512567   25542 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0603 10:56:14.513197   25542 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0603 10:56:14.513254   25542 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19008-7755/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0603 10:56:14.526768   25542 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0603 10:56:14.526811   25542 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0603 10:56:14.526980   25542 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0603 10:56:14.527003   25542 cni.go:84] Creating CNI manager for ""
	I0603 10:56:14.527009   25542 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0603 10:56:14.527018   25542 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0603 10:56:14.527108   25542 start.go:340] cluster config:
	{Name:ha-683480 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:ha-683480 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0603 10:56:14.527207   25542 iso.go:125] acquiring lock: {Name:mkdc8e745fc6a0fd8e502f6ad2510510ae9abf27 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0603 10:56:14.528755   25542 out.go:177] * Starting "ha-683480" primary control-plane node in "ha-683480" cluster
	I0603 10:56:14.529843   25542 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime crio
	I0603 10:56:14.529864   25542 preload.go:147] Found local preload: /home/jenkins/minikube-integration/19008-7755/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4
	I0603 10:56:14.529872   25542 cache.go:56] Caching tarball of preloaded images
	I0603 10:56:14.529928   25542 preload.go:173] Found /home/jenkins/minikube-integration/19008-7755/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0603 10:56:14.529938   25542 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on crio
	I0603 10:56:14.530229   25542 profile.go:143] Saving config to /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/ha-683480/config.json ...
	I0603 10:56:14.530249   25542 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/ha-683480/config.json: {Name:mk0c15c4828c27d5c6cc73cead395c2c3f3ae011 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 10:56:14.530365   25542 start.go:360] acquireMachinesLock for ha-683480: {Name:mk7389fd163f37e28f7d4d842bab151f7e27bc7c Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0603 10:56:14.530391   25542 start.go:364] duration metric: took 14.272µs to acquireMachinesLock for "ha-683480"
	I0603 10:56:14.530403   25542 start.go:93] Provisioning new machine with config: &{Name:ha-683480 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:ha-683480 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0603 10:56:14.530452   25542 start.go:125] createHost starting for "" (driver="kvm2")
	I0603 10:56:14.531975   25542 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0603 10:56:14.532077   25542 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 10:56:14.532120   25542 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 10:56:14.545552   25542 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38521
	I0603 10:56:14.545888   25542 main.go:141] libmachine: () Calling .GetVersion
	I0603 10:56:14.546453   25542 main.go:141] libmachine: Using API Version  1
	I0603 10:56:14.546473   25542 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 10:56:14.546759   25542 main.go:141] libmachine: () Calling .GetMachineName
	I0603 10:56:14.546938   25542 main.go:141] libmachine: (ha-683480) Calling .GetMachineName
	I0603 10:56:14.547090   25542 main.go:141] libmachine: (ha-683480) Calling .DriverName
	I0603 10:56:14.547240   25542 start.go:159] libmachine.API.Create for "ha-683480" (driver="kvm2")
	I0603 10:56:14.547263   25542 client.go:168] LocalClient.Create starting
	I0603 10:56:14.547290   25542 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19008-7755/.minikube/certs/ca.pem
	I0603 10:56:14.547315   25542 main.go:141] libmachine: Decoding PEM data...
	I0603 10:56:14.547328   25542 main.go:141] libmachine: Parsing certificate...
	I0603 10:56:14.547370   25542 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19008-7755/.minikube/certs/cert.pem
	I0603 10:56:14.547386   25542 main.go:141] libmachine: Decoding PEM data...
	I0603 10:56:14.547395   25542 main.go:141] libmachine: Parsing certificate...
	I0603 10:56:14.547418   25542 main.go:141] libmachine: Running pre-create checks...
	I0603 10:56:14.547428   25542 main.go:141] libmachine: (ha-683480) Calling .PreCreateCheck
	I0603 10:56:14.547774   25542 main.go:141] libmachine: (ha-683480) Calling .GetConfigRaw
	I0603 10:56:14.548111   25542 main.go:141] libmachine: Creating machine...
	I0603 10:56:14.548122   25542 main.go:141] libmachine: (ha-683480) Calling .Create
	I0603 10:56:14.548229   25542 main.go:141] libmachine: (ha-683480) Creating KVM machine...
	I0603 10:56:14.549209   25542 main.go:141] libmachine: (ha-683480) DBG | found existing default KVM network
	I0603 10:56:14.549753   25542 main.go:141] libmachine: (ha-683480) DBG | I0603 10:56:14.549647   25565 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000015ac0}
	I0603 10:56:14.549774   25542 main.go:141] libmachine: (ha-683480) DBG | created network xml: 
	I0603 10:56:14.549787   25542 main.go:141] libmachine: (ha-683480) DBG | <network>
	I0603 10:56:14.549793   25542 main.go:141] libmachine: (ha-683480) DBG |   <name>mk-ha-683480</name>
	I0603 10:56:14.549797   25542 main.go:141] libmachine: (ha-683480) DBG |   <dns enable='no'/>
	I0603 10:56:14.549802   25542 main.go:141] libmachine: (ha-683480) DBG |   
	I0603 10:56:14.549807   25542 main.go:141] libmachine: (ha-683480) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0603 10:56:14.549813   25542 main.go:141] libmachine: (ha-683480) DBG |     <dhcp>
	I0603 10:56:14.549818   25542 main.go:141] libmachine: (ha-683480) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0603 10:56:14.549834   25542 main.go:141] libmachine: (ha-683480) DBG |     </dhcp>
	I0603 10:56:14.549843   25542 main.go:141] libmachine: (ha-683480) DBG |   </ip>
	I0603 10:56:14.549849   25542 main.go:141] libmachine: (ha-683480) DBG |   
	I0603 10:56:14.549863   25542 main.go:141] libmachine: (ha-683480) DBG | </network>
	I0603 10:56:14.549872   25542 main.go:141] libmachine: (ha-683480) DBG | 
	I0603 10:56:14.554395   25542 main.go:141] libmachine: (ha-683480) DBG | trying to create private KVM network mk-ha-683480 192.168.39.0/24...
	I0603 10:56:14.616093   25542 main.go:141] libmachine: (ha-683480) DBG | private KVM network mk-ha-683480 192.168.39.0/24 created
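
For context on what the driver just did: it rendered the <network> XML shown above and had libvirt create the private network mk-ha-683480 on 192.168.39.0/24. A minimal Go sketch for inspecting that network from the host follows; it assumes virsh is on PATH and that qemu:///system is reachable, and it is illustrative tooling rather than minikube code.

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Ask libvirt (via virsh) for the XML of the network the log says was created.
	out, err := exec.Command("virsh", "--connect", "qemu:///system",
		"net-dumpxml", "mk-ha-683480").CombinedOutput()
	if err != nil {
		fmt.Printf("virsh failed: %v\n%s", err, out)
		return
	}
	fmt.Printf("%s", out) // should echo a <network> definition like the one in the log
}
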
	I0603 10:56:14.616122   25542 main.go:141] libmachine: (ha-683480) DBG | I0603 10:56:14.616072   25565 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19008-7755/.minikube
	I0603 10:56:14.616133   25542 main.go:141] libmachine: (ha-683480) Setting up store path in /home/jenkins/minikube-integration/19008-7755/.minikube/machines/ha-683480 ...
	I0603 10:56:14.616148   25542 main.go:141] libmachine: (ha-683480) Building disk image from file:///home/jenkins/minikube-integration/19008-7755/.minikube/cache/iso/amd64/minikube-v1.33.1-1716398070-18934-amd64.iso
	I0603 10:56:14.616279   25542 main.go:141] libmachine: (ha-683480) Downloading /home/jenkins/minikube-integration/19008-7755/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19008-7755/.minikube/cache/iso/amd64/minikube-v1.33.1-1716398070-18934-amd64.iso...
	I0603 10:56:14.843163   25542 main.go:141] libmachine: (ha-683480) DBG | I0603 10:56:14.843021   25565 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19008-7755/.minikube/machines/ha-683480/id_rsa...
	I0603 10:56:14.951771   25542 main.go:141] libmachine: (ha-683480) DBG | I0603 10:56:14.951623   25565 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19008-7755/.minikube/machines/ha-683480/ha-683480.rawdisk...
	I0603 10:56:14.951807   25542 main.go:141] libmachine: (ha-683480) DBG | Writing magic tar header
	I0603 10:56:14.951822   25542 main.go:141] libmachine: (ha-683480) DBG | Writing SSH key tar header
	I0603 10:56:14.951834   25542 main.go:141] libmachine: (ha-683480) DBG | I0603 10:56:14.951781   25565 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19008-7755/.minikube/machines/ha-683480 ...
	I0603 10:56:14.951866   25542 main.go:141] libmachine: (ha-683480) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19008-7755/.minikube/machines/ha-683480
	I0603 10:56:14.951892   25542 main.go:141] libmachine: (ha-683480) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19008-7755/.minikube/machines
	I0603 10:56:14.951909   25542 main.go:141] libmachine: (ha-683480) Setting executable bit set on /home/jenkins/minikube-integration/19008-7755/.minikube/machines/ha-683480 (perms=drwx------)
	I0603 10:56:14.951919   25542 main.go:141] libmachine: (ha-683480) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19008-7755/.minikube
	I0603 10:56:14.951933   25542 main.go:141] libmachine: (ha-683480) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19008-7755
	I0603 10:56:14.951942   25542 main.go:141] libmachine: (ha-683480) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0603 10:56:14.951952   25542 main.go:141] libmachine: (ha-683480) DBG | Checking permissions on dir: /home/jenkins
	I0603 10:56:14.951963   25542 main.go:141] libmachine: (ha-683480) DBG | Checking permissions on dir: /home
	I0603 10:56:14.951975   25542 main.go:141] libmachine: (ha-683480) DBG | Skipping /home - not owner
	I0603 10:56:14.951990   25542 main.go:141] libmachine: (ha-683480) Setting executable bit set on /home/jenkins/minikube-integration/19008-7755/.minikube/machines (perms=drwxr-xr-x)
	I0603 10:56:14.952050   25542 main.go:141] libmachine: (ha-683480) Setting executable bit set on /home/jenkins/minikube-integration/19008-7755/.minikube (perms=drwxr-xr-x)
	I0603 10:56:14.952070   25542 main.go:141] libmachine: (ha-683480) Setting executable bit set on /home/jenkins/minikube-integration/19008-7755 (perms=drwxrwxr-x)
	I0603 10:56:14.952082   25542 main.go:141] libmachine: (ha-683480) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0603 10:56:14.952095   25542 main.go:141] libmachine: (ha-683480) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0603 10:56:14.952107   25542 main.go:141] libmachine: (ha-683480) Creating domain...
	I0603 10:56:14.953008   25542 main.go:141] libmachine: (ha-683480) define libvirt domain using xml: 
	I0603 10:56:14.953038   25542 main.go:141] libmachine: (ha-683480) <domain type='kvm'>
	I0603 10:56:14.953047   25542 main.go:141] libmachine: (ha-683480)   <name>ha-683480</name>
	I0603 10:56:14.953058   25542 main.go:141] libmachine: (ha-683480)   <memory unit='MiB'>2200</memory>
	I0603 10:56:14.953066   25542 main.go:141] libmachine: (ha-683480)   <vcpu>2</vcpu>
	I0603 10:56:14.953070   25542 main.go:141] libmachine: (ha-683480)   <features>
	I0603 10:56:14.953075   25542 main.go:141] libmachine: (ha-683480)     <acpi/>
	I0603 10:56:14.953079   25542 main.go:141] libmachine: (ha-683480)     <apic/>
	I0603 10:56:14.953084   25542 main.go:141] libmachine: (ha-683480)     <pae/>
	I0603 10:56:14.953092   25542 main.go:141] libmachine: (ha-683480)     
	I0603 10:56:14.953099   25542 main.go:141] libmachine: (ha-683480)   </features>
	I0603 10:56:14.953103   25542 main.go:141] libmachine: (ha-683480)   <cpu mode='host-passthrough'>
	I0603 10:56:14.953107   25542 main.go:141] libmachine: (ha-683480)   
	I0603 10:56:14.953111   25542 main.go:141] libmachine: (ha-683480)   </cpu>
	I0603 10:56:14.953116   25542 main.go:141] libmachine: (ha-683480)   <os>
	I0603 10:56:14.953120   25542 main.go:141] libmachine: (ha-683480)     <type>hvm</type>
	I0603 10:56:14.953127   25542 main.go:141] libmachine: (ha-683480)     <boot dev='cdrom'/>
	I0603 10:56:14.953131   25542 main.go:141] libmachine: (ha-683480)     <boot dev='hd'/>
	I0603 10:56:14.953138   25542 main.go:141] libmachine: (ha-683480)     <bootmenu enable='no'/>
	I0603 10:56:14.953142   25542 main.go:141] libmachine: (ha-683480)   </os>
	I0603 10:56:14.953147   25542 main.go:141] libmachine: (ha-683480)   <devices>
	I0603 10:56:14.953158   25542 main.go:141] libmachine: (ha-683480)     <disk type='file' device='cdrom'>
	I0603 10:56:14.953165   25542 main.go:141] libmachine: (ha-683480)       <source file='/home/jenkins/minikube-integration/19008-7755/.minikube/machines/ha-683480/boot2docker.iso'/>
	I0603 10:56:14.953171   25542 main.go:141] libmachine: (ha-683480)       <target dev='hdc' bus='scsi'/>
	I0603 10:56:14.953177   25542 main.go:141] libmachine: (ha-683480)       <readonly/>
	I0603 10:56:14.953183   25542 main.go:141] libmachine: (ha-683480)     </disk>
	I0603 10:56:14.953189   25542 main.go:141] libmachine: (ha-683480)     <disk type='file' device='disk'>
	I0603 10:56:14.953199   25542 main.go:141] libmachine: (ha-683480)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0603 10:56:14.953208   25542 main.go:141] libmachine: (ha-683480)       <source file='/home/jenkins/minikube-integration/19008-7755/.minikube/machines/ha-683480/ha-683480.rawdisk'/>
	I0603 10:56:14.953215   25542 main.go:141] libmachine: (ha-683480)       <target dev='hda' bus='virtio'/>
	I0603 10:56:14.953220   25542 main.go:141] libmachine: (ha-683480)     </disk>
	I0603 10:56:14.953225   25542 main.go:141] libmachine: (ha-683480)     <interface type='network'>
	I0603 10:56:14.953231   25542 main.go:141] libmachine: (ha-683480)       <source network='mk-ha-683480'/>
	I0603 10:56:14.953241   25542 main.go:141] libmachine: (ha-683480)       <model type='virtio'/>
	I0603 10:56:14.953246   25542 main.go:141] libmachine: (ha-683480)     </interface>
	I0603 10:56:14.953255   25542 main.go:141] libmachine: (ha-683480)     <interface type='network'>
	I0603 10:56:14.953261   25542 main.go:141] libmachine: (ha-683480)       <source network='default'/>
	I0603 10:56:14.953273   25542 main.go:141] libmachine: (ha-683480)       <model type='virtio'/>
	I0603 10:56:14.953281   25542 main.go:141] libmachine: (ha-683480)     </interface>
	I0603 10:56:14.953285   25542 main.go:141] libmachine: (ha-683480)     <serial type='pty'>
	I0603 10:56:14.953297   25542 main.go:141] libmachine: (ha-683480)       <target port='0'/>
	I0603 10:56:14.953302   25542 main.go:141] libmachine: (ha-683480)     </serial>
	I0603 10:56:14.953307   25542 main.go:141] libmachine: (ha-683480)     <console type='pty'>
	I0603 10:56:14.953314   25542 main.go:141] libmachine: (ha-683480)       <target type='serial' port='0'/>
	I0603 10:56:14.953321   25542 main.go:141] libmachine: (ha-683480)     </console>
	I0603 10:56:14.953328   25542 main.go:141] libmachine: (ha-683480)     <rng model='virtio'>
	I0603 10:56:14.953334   25542 main.go:141] libmachine: (ha-683480)       <backend model='random'>/dev/random</backend>
	I0603 10:56:14.953339   25542 main.go:141] libmachine: (ha-683480)     </rng>
	I0603 10:56:14.953345   25542 main.go:141] libmachine: (ha-683480)     
	I0603 10:56:14.953354   25542 main.go:141] libmachine: (ha-683480)     
	I0603 10:56:14.953359   25542 main.go:141] libmachine: (ha-683480)   </devices>
	I0603 10:56:14.953368   25542 main.go:141] libmachine: (ha-683480) </domain>
	I0603 10:56:14.953380   25542 main.go:141] libmachine: (ha-683480) 
	I0603 10:56:14.957670   25542 main.go:141] libmachine: (ha-683480) DBG | domain ha-683480 has defined MAC address 52:54:00:2d:ce:50 in network default
	I0603 10:56:14.958244   25542 main.go:141] libmachine: (ha-683480) Ensuring networks are active...
	I0603 10:56:14.958260   25542 main.go:141] libmachine: (ha-683480) DBG | domain ha-683480 has defined MAC address 52:54:00:e5:3f:6a in network mk-ha-683480
	I0603 10:56:14.958904   25542 main.go:141] libmachine: (ha-683480) Ensuring network default is active
	I0603 10:56:14.959395   25542 main.go:141] libmachine: (ha-683480) Ensuring network mk-ha-683480 is active
	I0603 10:56:14.959879   25542 main.go:141] libmachine: (ha-683480) Getting domain xml...
	I0603 10:56:14.960577   25542 main.go:141] libmachine: (ha-683480) Creating domain...
	I0603 10:56:16.122048   25542 main.go:141] libmachine: (ha-683480) Waiting to get IP...
	I0603 10:56:16.122806   25542 main.go:141] libmachine: (ha-683480) DBG | domain ha-683480 has defined MAC address 52:54:00:e5:3f:6a in network mk-ha-683480
	I0603 10:56:16.123253   25542 main.go:141] libmachine: (ha-683480) DBG | unable to find current IP address of domain ha-683480 in network mk-ha-683480
	I0603 10:56:16.123298   25542 main.go:141] libmachine: (ha-683480) DBG | I0603 10:56:16.123236   25565 retry.go:31] will retry after 285.048907ms: waiting for machine to come up
	I0603 10:56:16.409805   25542 main.go:141] libmachine: (ha-683480) DBG | domain ha-683480 has defined MAC address 52:54:00:e5:3f:6a in network mk-ha-683480
	I0603 10:56:16.410165   25542 main.go:141] libmachine: (ha-683480) DBG | unable to find current IP address of domain ha-683480 in network mk-ha-683480
	I0603 10:56:16.410203   25542 main.go:141] libmachine: (ha-683480) DBG | I0603 10:56:16.410143   25565 retry.go:31] will retry after 257.029676ms: waiting for machine to come up
	I0603 10:56:16.668480   25542 main.go:141] libmachine: (ha-683480) DBG | domain ha-683480 has defined MAC address 52:54:00:e5:3f:6a in network mk-ha-683480
	I0603 10:56:16.668955   25542 main.go:141] libmachine: (ha-683480) DBG | unable to find current IP address of domain ha-683480 in network mk-ha-683480
	I0603 10:56:16.668994   25542 main.go:141] libmachine: (ha-683480) DBG | I0603 10:56:16.668907   25565 retry.go:31] will retry after 364.079168ms: waiting for machine to come up
	I0603 10:56:17.034445   25542 main.go:141] libmachine: (ha-683480) DBG | domain ha-683480 has defined MAC address 52:54:00:e5:3f:6a in network mk-ha-683480
	I0603 10:56:17.034807   25542 main.go:141] libmachine: (ha-683480) DBG | unable to find current IP address of domain ha-683480 in network mk-ha-683480
	I0603 10:56:17.034831   25542 main.go:141] libmachine: (ha-683480) DBG | I0603 10:56:17.034761   25565 retry.go:31] will retry after 368.572252ms: waiting for machine to come up
	I0603 10:56:17.405421   25542 main.go:141] libmachine: (ha-683480) DBG | domain ha-683480 has defined MAC address 52:54:00:e5:3f:6a in network mk-ha-683480
	I0603 10:56:17.405973   25542 main.go:141] libmachine: (ha-683480) DBG | unable to find current IP address of domain ha-683480 in network mk-ha-683480
	I0603 10:56:17.406014   25542 main.go:141] libmachine: (ha-683480) DBG | I0603 10:56:17.405926   25565 retry.go:31] will retry after 654.377154ms: waiting for machine to come up
	I0603 10:56:18.062010   25542 main.go:141] libmachine: (ha-683480) DBG | domain ha-683480 has defined MAC address 52:54:00:e5:3f:6a in network mk-ha-683480
	I0603 10:56:18.062406   25542 main.go:141] libmachine: (ha-683480) DBG | unable to find current IP address of domain ha-683480 in network mk-ha-683480
	I0603 10:56:18.062443   25542 main.go:141] libmachine: (ha-683480) DBG | I0603 10:56:18.062376   25565 retry.go:31] will retry after 945.231342ms: waiting for machine to come up
	I0603 10:56:19.009418   25542 main.go:141] libmachine: (ha-683480) DBG | domain ha-683480 has defined MAC address 52:54:00:e5:3f:6a in network mk-ha-683480
	I0603 10:56:19.009809   25542 main.go:141] libmachine: (ha-683480) DBG | unable to find current IP address of domain ha-683480 in network mk-ha-683480
	I0603 10:56:19.009856   25542 main.go:141] libmachine: (ha-683480) DBG | I0603 10:56:19.009762   25565 retry.go:31] will retry after 950.938623ms: waiting for machine to come up
	I0603 10:56:19.962347   25542 main.go:141] libmachine: (ha-683480) DBG | domain ha-683480 has defined MAC address 52:54:00:e5:3f:6a in network mk-ha-683480
	I0603 10:56:19.962771   25542 main.go:141] libmachine: (ha-683480) DBG | unable to find current IP address of domain ha-683480 in network mk-ha-683480
	I0603 10:56:19.962792   25542 main.go:141] libmachine: (ha-683480) DBG | I0603 10:56:19.962742   25565 retry.go:31] will retry after 926.994312ms: waiting for machine to come up
	I0603 10:56:20.891027   25542 main.go:141] libmachine: (ha-683480) DBG | domain ha-683480 has defined MAC address 52:54:00:e5:3f:6a in network mk-ha-683480
	I0603 10:56:20.891482   25542 main.go:141] libmachine: (ha-683480) DBG | unable to find current IP address of domain ha-683480 in network mk-ha-683480
	I0603 10:56:20.891503   25542 main.go:141] libmachine: (ha-683480) DBG | I0603 10:56:20.891434   25565 retry.go:31] will retry after 1.168197229s: waiting for machine to come up
	I0603 10:56:22.061741   25542 main.go:141] libmachine: (ha-683480) DBG | domain ha-683480 has defined MAC address 52:54:00:e5:3f:6a in network mk-ha-683480
	I0603 10:56:22.062117   25542 main.go:141] libmachine: (ha-683480) DBG | unable to find current IP address of domain ha-683480 in network mk-ha-683480
	I0603 10:56:22.062148   25542 main.go:141] libmachine: (ha-683480) DBG | I0603 10:56:22.062087   25565 retry.go:31] will retry after 2.194197242s: waiting for machine to come up
	I0603 10:56:24.259388   25542 main.go:141] libmachine: (ha-683480) DBG | domain ha-683480 has defined MAC address 52:54:00:e5:3f:6a in network mk-ha-683480
	I0603 10:56:24.259830   25542 main.go:141] libmachine: (ha-683480) DBG | unable to find current IP address of domain ha-683480 in network mk-ha-683480
	I0603 10:56:24.259845   25542 main.go:141] libmachine: (ha-683480) DBG | I0603 10:56:24.259810   25565 retry.go:31] will retry after 2.004867849s: waiting for machine to come up
	I0603 10:56:26.266608   25542 main.go:141] libmachine: (ha-683480) DBG | domain ha-683480 has defined MAC address 52:54:00:e5:3f:6a in network mk-ha-683480
	I0603 10:56:26.266992   25542 main.go:141] libmachine: (ha-683480) DBG | unable to find current IP address of domain ha-683480 in network mk-ha-683480
	I0603 10:56:26.267013   25542 main.go:141] libmachine: (ha-683480) DBG | I0603 10:56:26.266945   25565 retry.go:31] will retry after 2.227676044s: waiting for machine to come up
	I0603 10:56:28.497291   25542 main.go:141] libmachine: (ha-683480) DBG | domain ha-683480 has defined MAC address 52:54:00:e5:3f:6a in network mk-ha-683480
	I0603 10:56:28.497708   25542 main.go:141] libmachine: (ha-683480) DBG | unable to find current IP address of domain ha-683480 in network mk-ha-683480
	I0603 10:56:28.497730   25542 main.go:141] libmachine: (ha-683480) DBG | I0603 10:56:28.497657   25565 retry.go:31] will retry after 4.28187111s: waiting for machine to come up
	I0603 10:56:32.783402   25542 main.go:141] libmachine: (ha-683480) DBG | domain ha-683480 has defined MAC address 52:54:00:e5:3f:6a in network mk-ha-683480
	I0603 10:56:32.783871   25542 main.go:141] libmachine: (ha-683480) DBG | unable to find current IP address of domain ha-683480 in network mk-ha-683480
	I0603 10:56:32.783891   25542 main.go:141] libmachine: (ha-683480) DBG | I0603 10:56:32.783837   25565 retry.go:31] will retry after 5.257653046s: waiting for machine to come up
	I0603 10:56:38.047163   25542 main.go:141] libmachine: (ha-683480) DBG | domain ha-683480 has defined MAC address 52:54:00:e5:3f:6a in network mk-ha-683480
	I0603 10:56:38.047562   25542 main.go:141] libmachine: (ha-683480) Found IP for machine: 192.168.39.116
	I0603 10:56:38.047579   25542 main.go:141] libmachine: (ha-683480) DBG | domain ha-683480 has current primary IP address 192.168.39.116 and MAC address 52:54:00:e5:3f:6a in network mk-ha-683480
	I0603 10:56:38.047585   25542 main.go:141] libmachine: (ha-683480) Reserving static IP address...
	I0603 10:56:38.047902   25542 main.go:141] libmachine: (ha-683480) DBG | unable to find host DHCP lease matching {name: "ha-683480", mac: "52:54:00:e5:3f:6a", ip: "192.168.39.116"} in network mk-ha-683480
	I0603 10:56:38.115112   25542 main.go:141] libmachine: (ha-683480) Reserved static IP address: 192.168.39.116
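
The "will retry after ..." lines above reflect a poll loop with a roughly increasing, jittered delay: the driver repeatedly asks libvirt for the domain's IP (via the DHCP leases for its MAC) until an address appears, here 192.168.39.116. Below is a self-contained Go sketch of that pattern; lookupIP is a hypothetical stand-in for the lease query, not minikube's real function, and the delays are arbitrary.

package main

import (
	"errors"
	"fmt"
	"time"
)

// waitForIP polls lookupIP with a growing delay, loosely mirroring the
// "will retry after ..." intervals in the log.
func waitForIP(lookupIP func() (string, error), attempts int) (string, error) {
	delay := 250 * time.Millisecond
	for i := 0; i < attempts; i++ {
		if ip, err := lookupIP(); err == nil {
			return ip, nil
		}
		fmt.Printf("will retry after %v: waiting for machine to come up\n", delay)
		time.Sleep(delay)
		delay += delay / 2 // back off a little more each attempt
	}
	return "", errors.New("machine never reported an IP")
}

func main() {
	calls := 0
	ip, err := waitForIP(func() (string, error) {
		if calls++; calls < 3 {
			return "", errors.New("no lease yet")
		}
		return "192.168.39.116", nil
	}, 10)
	fmt.Println(ip, err)
}
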
	I0603 10:56:38.115143   25542 main.go:141] libmachine: (ha-683480) Waiting for SSH to be available...
	I0603 10:56:38.115167   25542 main.go:141] libmachine: (ha-683480) DBG | Getting to WaitForSSH function...
	I0603 10:56:38.117475   25542 main.go:141] libmachine: (ha-683480) DBG | domain ha-683480 has defined MAC address 52:54:00:e5:3f:6a in network mk-ha-683480
	I0603 10:56:38.117779   25542 main.go:141] libmachine: (ha-683480) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:e5:3f:6a", ip: ""} in network mk-ha-683480
	I0603 10:56:38.117810   25542 main.go:141] libmachine: (ha-683480) DBG | unable to find defined IP address of network mk-ha-683480 interface with MAC address 52:54:00:e5:3f:6a
	I0603 10:56:38.117870   25542 main.go:141] libmachine: (ha-683480) DBG | Using SSH client type: external
	I0603 10:56:38.117896   25542 main.go:141] libmachine: (ha-683480) DBG | Using SSH private key: /home/jenkins/minikube-integration/19008-7755/.minikube/machines/ha-683480/id_rsa (-rw-------)
	I0603 10:56:38.117945   25542 main.go:141] libmachine: (ha-683480) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19008-7755/.minikube/machines/ha-683480/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0603 10:56:38.117960   25542 main.go:141] libmachine: (ha-683480) DBG | About to run SSH command:
	I0603 10:56:38.117983   25542 main.go:141] libmachine: (ha-683480) DBG | exit 0
	I0603 10:56:38.121504   25542 main.go:141] libmachine: (ha-683480) DBG | SSH cmd err, output: exit status 255: 
	I0603 10:56:38.121525   25542 main.go:141] libmachine: (ha-683480) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I0603 10:56:38.121532   25542 main.go:141] libmachine: (ha-683480) DBG | command : exit 0
	I0603 10:56:38.121541   25542 main.go:141] libmachine: (ha-683480) DBG | err     : exit status 255
	I0603 10:56:38.121547   25542 main.go:141] libmachine: (ha-683480) DBG | output  : 
	I0603 10:56:41.123142   25542 main.go:141] libmachine: (ha-683480) DBG | Getting to WaitForSSH function...
	I0603 10:56:41.125379   25542 main.go:141] libmachine: (ha-683480) DBG | domain ha-683480 has defined MAC address 52:54:00:e5:3f:6a in network mk-ha-683480
	I0603 10:56:41.125739   25542 main.go:141] libmachine: (ha-683480) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:3f:6a", ip: ""} in network mk-ha-683480: {Iface:virbr1 ExpiryTime:2024-06-03 11:56:28 +0000 UTC Type:0 Mac:52:54:00:e5:3f:6a Iaid: IPaddr:192.168.39.116 Prefix:24 Hostname:ha-683480 Clientid:01:52:54:00:e5:3f:6a}
	I0603 10:56:41.125762   25542 main.go:141] libmachine: (ha-683480) DBG | domain ha-683480 has defined IP address 192.168.39.116 and MAC address 52:54:00:e5:3f:6a in network mk-ha-683480
	I0603 10:56:41.125889   25542 main.go:141] libmachine: (ha-683480) DBG | Using SSH client type: external
	I0603 10:56:41.125919   25542 main.go:141] libmachine: (ha-683480) DBG | Using SSH private key: /home/jenkins/minikube-integration/19008-7755/.minikube/machines/ha-683480/id_rsa (-rw-------)
	I0603 10:56:41.125959   25542 main.go:141] libmachine: (ha-683480) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.116 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19008-7755/.minikube/machines/ha-683480/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0603 10:56:41.125973   25542 main.go:141] libmachine: (ha-683480) DBG | About to run SSH command:
	I0603 10:56:41.125987   25542 main.go:141] libmachine: (ha-683480) DBG | exit 0
	I0603 10:56:41.246929   25542 main.go:141] libmachine: (ha-683480) DBG | SSH cmd err, output: <nil>: 
	I0603 10:56:41.247185   25542 main.go:141] libmachine: (ha-683480) KVM machine creation complete!
	I0603 10:56:41.247555   25542 main.go:141] libmachine: (ha-683480) Calling .GetConfigRaw
	I0603 10:56:41.248120   25542 main.go:141] libmachine: (ha-683480) Calling .DriverName
	I0603 10:56:41.248311   25542 main.go:141] libmachine: (ha-683480) Calling .DriverName
	I0603 10:56:41.248472   25542 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0603 10:56:41.248487   25542 main.go:141] libmachine: (ha-683480) Calling .GetState
	I0603 10:56:41.249731   25542 main.go:141] libmachine: Detecting operating system of created instance...
	I0603 10:56:41.249747   25542 main.go:141] libmachine: Waiting for SSH to be available...
	I0603 10:56:41.249755   25542 main.go:141] libmachine: Getting to WaitForSSH function...
	I0603 10:56:41.249761   25542 main.go:141] libmachine: (ha-683480) Calling .GetSSHHostname
	I0603 10:56:41.251822   25542 main.go:141] libmachine: (ha-683480) DBG | domain ha-683480 has defined MAC address 52:54:00:e5:3f:6a in network mk-ha-683480
	I0603 10:56:41.252116   25542 main.go:141] libmachine: (ha-683480) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:3f:6a", ip: ""} in network mk-ha-683480: {Iface:virbr1 ExpiryTime:2024-06-03 11:56:28 +0000 UTC Type:0 Mac:52:54:00:e5:3f:6a Iaid: IPaddr:192.168.39.116 Prefix:24 Hostname:ha-683480 Clientid:01:52:54:00:e5:3f:6a}
	I0603 10:56:41.252144   25542 main.go:141] libmachine: (ha-683480) DBG | domain ha-683480 has defined IP address 192.168.39.116 and MAC address 52:54:00:e5:3f:6a in network mk-ha-683480
	I0603 10:56:41.252271   25542 main.go:141] libmachine: (ha-683480) Calling .GetSSHPort
	I0603 10:56:41.252422   25542 main.go:141] libmachine: (ha-683480) Calling .GetSSHKeyPath
	I0603 10:56:41.252565   25542 main.go:141] libmachine: (ha-683480) Calling .GetSSHKeyPath
	I0603 10:56:41.252668   25542 main.go:141] libmachine: (ha-683480) Calling .GetSSHUsername
	I0603 10:56:41.252813   25542 main.go:141] libmachine: Using SSH client type: native
	I0603 10:56:41.253034   25542 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.116 22 <nil> <nil>}
	I0603 10:56:41.253046   25542 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0603 10:56:41.350001   25542 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0603 10:56:41.350019   25542 main.go:141] libmachine: Detecting the provisioner...
	I0603 10:56:41.350025   25542 main.go:141] libmachine: (ha-683480) Calling .GetSSHHostname
	I0603 10:56:41.352309   25542 main.go:141] libmachine: (ha-683480) DBG | domain ha-683480 has defined MAC address 52:54:00:e5:3f:6a in network mk-ha-683480
	I0603 10:56:41.352690   25542 main.go:141] libmachine: (ha-683480) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:3f:6a", ip: ""} in network mk-ha-683480: {Iface:virbr1 ExpiryTime:2024-06-03 11:56:28 +0000 UTC Type:0 Mac:52:54:00:e5:3f:6a Iaid: IPaddr:192.168.39.116 Prefix:24 Hostname:ha-683480 Clientid:01:52:54:00:e5:3f:6a}
	I0603 10:56:41.352716   25542 main.go:141] libmachine: (ha-683480) DBG | domain ha-683480 has defined IP address 192.168.39.116 and MAC address 52:54:00:e5:3f:6a in network mk-ha-683480
	I0603 10:56:41.352889   25542 main.go:141] libmachine: (ha-683480) Calling .GetSSHPort
	I0603 10:56:41.353078   25542 main.go:141] libmachine: (ha-683480) Calling .GetSSHKeyPath
	I0603 10:56:41.353219   25542 main.go:141] libmachine: (ha-683480) Calling .GetSSHKeyPath
	I0603 10:56:41.353356   25542 main.go:141] libmachine: (ha-683480) Calling .GetSSHUsername
	I0603 10:56:41.353537   25542 main.go:141] libmachine: Using SSH client type: native
	I0603 10:56:41.353715   25542 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.116 22 <nil> <nil>}
	I0603 10:56:41.353730   25542 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0603 10:56:41.451228   25542 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0603 10:56:41.451285   25542 main.go:141] libmachine: found compatible host: buildroot
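
Provisioner detection above amounts to running `cat /etc/os-release` over SSH and matching on the reported NAME (Buildroot in this guest). A rough local sketch of that parsing step in Go; run on the host it reads the host's own os-release, and the SSH transport is omitted.

package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

func main() {
	// Read /etc/os-release (the file the provisioner cats over SSH) and pull out NAME.
	f, err := os.Open("/etc/os-release")
	if err != nil {
		fmt.Println("cannot read os-release:", err)
		return
	}
	defer f.Close()

	sc := bufio.NewScanner(f)
	for sc.Scan() {
		line := sc.Text()
		if strings.HasPrefix(line, "NAME=") {
			name := strings.Trim(strings.TrimPrefix(line, "NAME="), `"`)
			fmt.Println("detected distribution:", name) // reports "Buildroot" inside the guest
		}
	}
}
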
	I0603 10:56:41.451295   25542 main.go:141] libmachine: Provisioning with buildroot...
	I0603 10:56:41.451302   25542 main.go:141] libmachine: (ha-683480) Calling .GetMachineName
	I0603 10:56:41.451520   25542 buildroot.go:166] provisioning hostname "ha-683480"
	I0603 10:56:41.451534   25542 main.go:141] libmachine: (ha-683480) Calling .GetMachineName
	I0603 10:56:41.451680   25542 main.go:141] libmachine: (ha-683480) Calling .GetSSHHostname
	I0603 10:56:41.454319   25542 main.go:141] libmachine: (ha-683480) DBG | domain ha-683480 has defined MAC address 52:54:00:e5:3f:6a in network mk-ha-683480
	I0603 10:56:41.454628   25542 main.go:141] libmachine: (ha-683480) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:3f:6a", ip: ""} in network mk-ha-683480: {Iface:virbr1 ExpiryTime:2024-06-03 11:56:28 +0000 UTC Type:0 Mac:52:54:00:e5:3f:6a Iaid: IPaddr:192.168.39.116 Prefix:24 Hostname:ha-683480 Clientid:01:52:54:00:e5:3f:6a}
	I0603 10:56:41.454654   25542 main.go:141] libmachine: (ha-683480) DBG | domain ha-683480 has defined IP address 192.168.39.116 and MAC address 52:54:00:e5:3f:6a in network mk-ha-683480
	I0603 10:56:41.454777   25542 main.go:141] libmachine: (ha-683480) Calling .GetSSHPort
	I0603 10:56:41.454925   25542 main.go:141] libmachine: (ha-683480) Calling .GetSSHKeyPath
	I0603 10:56:41.455082   25542 main.go:141] libmachine: (ha-683480) Calling .GetSSHKeyPath
	I0603 10:56:41.455211   25542 main.go:141] libmachine: (ha-683480) Calling .GetSSHUsername
	I0603 10:56:41.455344   25542 main.go:141] libmachine: Using SSH client type: native
	I0603 10:56:41.455505   25542 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.116 22 <nil> <nil>}
	I0603 10:56:41.455516   25542 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-683480 && echo "ha-683480" | sudo tee /etc/hostname
	I0603 10:56:41.564791   25542 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-683480
	
	I0603 10:56:41.564821   25542 main.go:141] libmachine: (ha-683480) Calling .GetSSHHostname
	I0603 10:56:41.567404   25542 main.go:141] libmachine: (ha-683480) DBG | domain ha-683480 has defined MAC address 52:54:00:e5:3f:6a in network mk-ha-683480
	I0603 10:56:41.567738   25542 main.go:141] libmachine: (ha-683480) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:3f:6a", ip: ""} in network mk-ha-683480: {Iface:virbr1 ExpiryTime:2024-06-03 11:56:28 +0000 UTC Type:0 Mac:52:54:00:e5:3f:6a Iaid: IPaddr:192.168.39.116 Prefix:24 Hostname:ha-683480 Clientid:01:52:54:00:e5:3f:6a}
	I0603 10:56:41.567766   25542 main.go:141] libmachine: (ha-683480) DBG | domain ha-683480 has defined IP address 192.168.39.116 and MAC address 52:54:00:e5:3f:6a in network mk-ha-683480
	I0603 10:56:41.567905   25542 main.go:141] libmachine: (ha-683480) Calling .GetSSHPort
	I0603 10:56:41.568088   25542 main.go:141] libmachine: (ha-683480) Calling .GetSSHKeyPath
	I0603 10:56:41.568238   25542 main.go:141] libmachine: (ha-683480) Calling .GetSSHKeyPath
	I0603 10:56:41.568414   25542 main.go:141] libmachine: (ha-683480) Calling .GetSSHUsername
	I0603 10:56:41.568578   25542 main.go:141] libmachine: Using SSH client type: native
	I0603 10:56:41.568771   25542 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.116 22 <nil> <nil>}
	I0603 10:56:41.568787   25542 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-683480' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-683480/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-683480' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0603 10:56:41.675419   25542 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0603 10:56:41.675449   25542 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19008-7755/.minikube CaCertPath:/home/jenkins/minikube-integration/19008-7755/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19008-7755/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19008-7755/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19008-7755/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19008-7755/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19008-7755/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19008-7755/.minikube}
	I0603 10:56:41.675467   25542 buildroot.go:174] setting up certificates
	I0603 10:56:41.675476   25542 provision.go:84] configureAuth start
	I0603 10:56:41.675484   25542 main.go:141] libmachine: (ha-683480) Calling .GetMachineName
	I0603 10:56:41.675773   25542 main.go:141] libmachine: (ha-683480) Calling .GetIP
	I0603 10:56:41.677879   25542 main.go:141] libmachine: (ha-683480) DBG | domain ha-683480 has defined MAC address 52:54:00:e5:3f:6a in network mk-ha-683480
	I0603 10:56:41.678224   25542 main.go:141] libmachine: (ha-683480) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:3f:6a", ip: ""} in network mk-ha-683480: {Iface:virbr1 ExpiryTime:2024-06-03 11:56:28 +0000 UTC Type:0 Mac:52:54:00:e5:3f:6a Iaid: IPaddr:192.168.39.116 Prefix:24 Hostname:ha-683480 Clientid:01:52:54:00:e5:3f:6a}
	I0603 10:56:41.678246   25542 main.go:141] libmachine: (ha-683480) DBG | domain ha-683480 has defined IP address 192.168.39.116 and MAC address 52:54:00:e5:3f:6a in network mk-ha-683480
	I0603 10:56:41.678378   25542 main.go:141] libmachine: (ha-683480) Calling .GetSSHHostname
	I0603 10:56:41.680284   25542 main.go:141] libmachine: (ha-683480) DBG | domain ha-683480 has defined MAC address 52:54:00:e5:3f:6a in network mk-ha-683480
	I0603 10:56:41.680554   25542 main.go:141] libmachine: (ha-683480) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:3f:6a", ip: ""} in network mk-ha-683480: {Iface:virbr1 ExpiryTime:2024-06-03 11:56:28 +0000 UTC Type:0 Mac:52:54:00:e5:3f:6a Iaid: IPaddr:192.168.39.116 Prefix:24 Hostname:ha-683480 Clientid:01:52:54:00:e5:3f:6a}
	I0603 10:56:41.680585   25542 main.go:141] libmachine: (ha-683480) DBG | domain ha-683480 has defined IP address 192.168.39.116 and MAC address 52:54:00:e5:3f:6a in network mk-ha-683480
	I0603 10:56:41.680698   25542 provision.go:143] copyHostCerts
	I0603 10:56:41.680736   25542 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19008-7755/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19008-7755/.minikube/key.pem
	I0603 10:56:41.680773   25542 exec_runner.go:144] found /home/jenkins/minikube-integration/19008-7755/.minikube/key.pem, removing ...
	I0603 10:56:41.680784   25542 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19008-7755/.minikube/key.pem
	I0603 10:56:41.680849   25542 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19008-7755/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19008-7755/.minikube/key.pem (1679 bytes)
	I0603 10:56:41.680942   25542 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19008-7755/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19008-7755/.minikube/ca.pem
	I0603 10:56:41.680960   25542 exec_runner.go:144] found /home/jenkins/minikube-integration/19008-7755/.minikube/ca.pem, removing ...
	I0603 10:56:41.680966   25542 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19008-7755/.minikube/ca.pem
	I0603 10:56:41.680995   25542 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19008-7755/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19008-7755/.minikube/ca.pem (1082 bytes)
	I0603 10:56:41.681033   25542 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19008-7755/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19008-7755/.minikube/cert.pem
	I0603 10:56:41.681048   25542 exec_runner.go:144] found /home/jenkins/minikube-integration/19008-7755/.minikube/cert.pem, removing ...
	I0603 10:56:41.681054   25542 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19008-7755/.minikube/cert.pem
	I0603 10:56:41.681073   25542 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19008-7755/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19008-7755/.minikube/cert.pem (1123 bytes)
	I0603 10:56:41.681122   25542 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19008-7755/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19008-7755/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19008-7755/.minikube/certs/ca-key.pem org=jenkins.ha-683480 san=[127.0.0.1 192.168.39.116 ha-683480 localhost minikube]
	I0603 10:56:41.980610   25542 provision.go:177] copyRemoteCerts
	I0603 10:56:41.980666   25542 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0603 10:56:41.980691   25542 main.go:141] libmachine: (ha-683480) Calling .GetSSHHostname
	I0603 10:56:41.983250   25542 main.go:141] libmachine: (ha-683480) DBG | domain ha-683480 has defined MAC address 52:54:00:e5:3f:6a in network mk-ha-683480
	I0603 10:56:41.983579   25542 main.go:141] libmachine: (ha-683480) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:3f:6a", ip: ""} in network mk-ha-683480: {Iface:virbr1 ExpiryTime:2024-06-03 11:56:28 +0000 UTC Type:0 Mac:52:54:00:e5:3f:6a Iaid: IPaddr:192.168.39.116 Prefix:24 Hostname:ha-683480 Clientid:01:52:54:00:e5:3f:6a}
	I0603 10:56:41.983610   25542 main.go:141] libmachine: (ha-683480) DBG | domain ha-683480 has defined IP address 192.168.39.116 and MAC address 52:54:00:e5:3f:6a in network mk-ha-683480
	I0603 10:56:41.983713   25542 main.go:141] libmachine: (ha-683480) Calling .GetSSHPort
	I0603 10:56:41.983900   25542 main.go:141] libmachine: (ha-683480) Calling .GetSSHKeyPath
	I0603 10:56:41.984059   25542 main.go:141] libmachine: (ha-683480) Calling .GetSSHUsername
	I0603 10:56:41.984174   25542 sshutil.go:53] new ssh client: &{IP:192.168.39.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19008-7755/.minikube/machines/ha-683480/id_rsa Username:docker}
	I0603 10:56:42.065833   25542 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19008-7755/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0603 10:56:42.065930   25542 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0603 10:56:42.090238   25542 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19008-7755/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0603 10:56:42.090310   25542 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I0603 10:56:42.113467   25542 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19008-7755/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0603 10:56:42.113526   25542 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
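The server certificate copied above was generated (10:56:41.681122) with the SAN set [127.0.0.1 192.168.39.116 ha-683480 localhost minikube]. A minimal spot-check, not part of the test flow and assuming openssl is available on the guest, would confirm those SANs landed in the copied server.pem:

	sudo openssl x509 -in /etc/docker/server.pem -noout -text | grep -A1 'Subject Alternative Name'
	# should list DNS:ha-683480, DNS:localhost, DNS:minikube and IPs 127.0.0.1, 192.168.39.116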
	I0603 10:56:42.135642   25542 provision.go:87] duration metric: took 460.154058ms to configureAuth
	I0603 10:56:42.135662   25542 buildroot.go:189] setting minikube options for container-runtime
	I0603 10:56:42.135827   25542 config.go:182] Loaded profile config "ha-683480": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0603 10:56:42.135907   25542 main.go:141] libmachine: (ha-683480) Calling .GetSSHHostname
	I0603 10:56:42.138641   25542 main.go:141] libmachine: (ha-683480) DBG | domain ha-683480 has defined MAC address 52:54:00:e5:3f:6a in network mk-ha-683480
	I0603 10:56:42.138939   25542 main.go:141] libmachine: (ha-683480) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:3f:6a", ip: ""} in network mk-ha-683480: {Iface:virbr1 ExpiryTime:2024-06-03 11:56:28 +0000 UTC Type:0 Mac:52:54:00:e5:3f:6a Iaid: IPaddr:192.168.39.116 Prefix:24 Hostname:ha-683480 Clientid:01:52:54:00:e5:3f:6a}
	I0603 10:56:42.138965   25542 main.go:141] libmachine: (ha-683480) DBG | domain ha-683480 has defined IP address 192.168.39.116 and MAC address 52:54:00:e5:3f:6a in network mk-ha-683480
	I0603 10:56:42.139114   25542 main.go:141] libmachine: (ha-683480) Calling .GetSSHPort
	I0603 10:56:42.139297   25542 main.go:141] libmachine: (ha-683480) Calling .GetSSHKeyPath
	I0603 10:56:42.139464   25542 main.go:141] libmachine: (ha-683480) Calling .GetSSHKeyPath
	I0603 10:56:42.139623   25542 main.go:141] libmachine: (ha-683480) Calling .GetSSHUsername
	I0603 10:56:42.139801   25542 main.go:141] libmachine: Using SSH client type: native
	I0603 10:56:42.139952   25542 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.116 22 <nil> <nil>}
	I0603 10:56:42.139966   25542 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0603 10:56:42.399570   25542 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0603 10:56:42.399613   25542 main.go:141] libmachine: Checking connection to Docker...
	I0603 10:56:42.399623   25542 main.go:141] libmachine: (ha-683480) Calling .GetURL
	I0603 10:56:42.400966   25542 main.go:141] libmachine: (ha-683480) DBG | Using libvirt version 6000000
	I0603 10:56:42.403271   25542 main.go:141] libmachine: (ha-683480) DBG | domain ha-683480 has defined MAC address 52:54:00:e5:3f:6a in network mk-ha-683480
	I0603 10:56:42.403596   25542 main.go:141] libmachine: (ha-683480) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:3f:6a", ip: ""} in network mk-ha-683480: {Iface:virbr1 ExpiryTime:2024-06-03 11:56:28 +0000 UTC Type:0 Mac:52:54:00:e5:3f:6a Iaid: IPaddr:192.168.39.116 Prefix:24 Hostname:ha-683480 Clientid:01:52:54:00:e5:3f:6a}
	I0603 10:56:42.403617   25542 main.go:141] libmachine: (ha-683480) DBG | domain ha-683480 has defined IP address 192.168.39.116 and MAC address 52:54:00:e5:3f:6a in network mk-ha-683480
	I0603 10:56:42.403772   25542 main.go:141] libmachine: Docker is up and running!
	I0603 10:56:42.403788   25542 main.go:141] libmachine: Reticulating splines...
	I0603 10:56:42.403808   25542 client.go:171] duration metric: took 27.856538118s to LocalClient.Create
	I0603 10:56:42.403836   25542 start.go:167] duration metric: took 27.856596844s to libmachine.API.Create "ha-683480"
	I0603 10:56:42.403848   25542 start.go:293] postStartSetup for "ha-683480" (driver="kvm2")
	I0603 10:56:42.403865   25542 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0603 10:56:42.403886   25542 main.go:141] libmachine: (ha-683480) Calling .DriverName
	I0603 10:56:42.404121   25542 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0603 10:56:42.404141   25542 main.go:141] libmachine: (ha-683480) Calling .GetSSHHostname
	I0603 10:56:42.406277   25542 main.go:141] libmachine: (ha-683480) DBG | domain ha-683480 has defined MAC address 52:54:00:e5:3f:6a in network mk-ha-683480
	I0603 10:56:42.406605   25542 main.go:141] libmachine: (ha-683480) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:3f:6a", ip: ""} in network mk-ha-683480: {Iface:virbr1 ExpiryTime:2024-06-03 11:56:28 +0000 UTC Type:0 Mac:52:54:00:e5:3f:6a Iaid: IPaddr:192.168.39.116 Prefix:24 Hostname:ha-683480 Clientid:01:52:54:00:e5:3f:6a}
	I0603 10:56:42.406632   25542 main.go:141] libmachine: (ha-683480) DBG | domain ha-683480 has defined IP address 192.168.39.116 and MAC address 52:54:00:e5:3f:6a in network mk-ha-683480
	I0603 10:56:42.406743   25542 main.go:141] libmachine: (ha-683480) Calling .GetSSHPort
	I0603 10:56:42.406911   25542 main.go:141] libmachine: (ha-683480) Calling .GetSSHKeyPath
	I0603 10:56:42.407079   25542 main.go:141] libmachine: (ha-683480) Calling .GetSSHUsername
	I0603 10:56:42.407248   25542 sshutil.go:53] new ssh client: &{IP:192.168.39.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19008-7755/.minikube/machines/ha-683480/id_rsa Username:docker}
	I0603 10:56:42.485188   25542 ssh_runner.go:195] Run: cat /etc/os-release
	I0603 10:56:42.489159   25542 info.go:137] Remote host: Buildroot 2023.02.9
	I0603 10:56:42.489184   25542 filesync.go:126] Scanning /home/jenkins/minikube-integration/19008-7755/.minikube/addons for local assets ...
	I0603 10:56:42.489244   25542 filesync.go:126] Scanning /home/jenkins/minikube-integration/19008-7755/.minikube/files for local assets ...
	I0603 10:56:42.489327   25542 filesync.go:149] local asset: /home/jenkins/minikube-integration/19008-7755/.minikube/files/etc/ssl/certs/150282.pem -> 150282.pem in /etc/ssl/certs
	I0603 10:56:42.489337   25542 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19008-7755/.minikube/files/etc/ssl/certs/150282.pem -> /etc/ssl/certs/150282.pem
	I0603 10:56:42.489433   25542 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0603 10:56:42.498654   25542 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/files/etc/ssl/certs/150282.pem --> /etc/ssl/certs/150282.pem (1708 bytes)
	I0603 10:56:42.521037   25542 start.go:296] duration metric: took 117.175393ms for postStartSetup
	I0603 10:56:42.521088   25542 main.go:141] libmachine: (ha-683480) Calling .GetConfigRaw
	I0603 10:56:42.521611   25542 main.go:141] libmachine: (ha-683480) Calling .GetIP
	I0603 10:56:42.524045   25542 main.go:141] libmachine: (ha-683480) DBG | domain ha-683480 has defined MAC address 52:54:00:e5:3f:6a in network mk-ha-683480
	I0603 10:56:42.524380   25542 main.go:141] libmachine: (ha-683480) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:3f:6a", ip: ""} in network mk-ha-683480: {Iface:virbr1 ExpiryTime:2024-06-03 11:56:28 +0000 UTC Type:0 Mac:52:54:00:e5:3f:6a Iaid: IPaddr:192.168.39.116 Prefix:24 Hostname:ha-683480 Clientid:01:52:54:00:e5:3f:6a}
	I0603 10:56:42.524406   25542 main.go:141] libmachine: (ha-683480) DBG | domain ha-683480 has defined IP address 192.168.39.116 and MAC address 52:54:00:e5:3f:6a in network mk-ha-683480
	I0603 10:56:42.524583   25542 profile.go:143] Saving config to /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/ha-683480/config.json ...
	I0603 10:56:42.524766   25542 start.go:128] duration metric: took 27.994305593s to createHost
	I0603 10:56:42.524788   25542 main.go:141] libmachine: (ha-683480) Calling .GetSSHHostname
	I0603 10:56:42.526735   25542 main.go:141] libmachine: (ha-683480) DBG | domain ha-683480 has defined MAC address 52:54:00:e5:3f:6a in network mk-ha-683480
	I0603 10:56:42.527027   25542 main.go:141] libmachine: (ha-683480) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:3f:6a", ip: ""} in network mk-ha-683480: {Iface:virbr1 ExpiryTime:2024-06-03 11:56:28 +0000 UTC Type:0 Mac:52:54:00:e5:3f:6a Iaid: IPaddr:192.168.39.116 Prefix:24 Hostname:ha-683480 Clientid:01:52:54:00:e5:3f:6a}
	I0603 10:56:42.527068   25542 main.go:141] libmachine: (ha-683480) DBG | domain ha-683480 has defined IP address 192.168.39.116 and MAC address 52:54:00:e5:3f:6a in network mk-ha-683480
	I0603 10:56:42.527199   25542 main.go:141] libmachine: (ha-683480) Calling .GetSSHPort
	I0603 10:56:42.527344   25542 main.go:141] libmachine: (ha-683480) Calling .GetSSHKeyPath
	I0603 10:56:42.527477   25542 main.go:141] libmachine: (ha-683480) Calling .GetSSHKeyPath
	I0603 10:56:42.527654   25542 main.go:141] libmachine: (ha-683480) Calling .GetSSHUsername
	I0603 10:56:42.527807   25542 main.go:141] libmachine: Using SSH client type: native
	I0603 10:56:42.528002   25542 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.116 22 <nil> <nil>}
	I0603 10:56:42.528013   25542 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0603 10:56:42.627415   25542 main.go:141] libmachine: SSH cmd err, output: <nil>: 1717412202.609097595
	
	I0603 10:56:42.627433   25542 fix.go:216] guest clock: 1717412202.609097595
	I0603 10:56:42.627441   25542 fix.go:229] Guest: 2024-06-03 10:56:42.609097595 +0000 UTC Remote: 2024-06-03 10:56:42.524778402 +0000 UTC m=+28.091417474 (delta=84.319193ms)
	I0603 10:56:42.627483   25542 fix.go:200] guest clock delta is within tolerance: 84.319193ms
	I0603 10:56:42.627491   25542 start.go:83] releasing machines lock for "ha-683480", held for 28.097092936s
	I0603 10:56:42.627516   25542 main.go:141] libmachine: (ha-683480) Calling .DriverName
	I0603 10:56:42.627736   25542 main.go:141] libmachine: (ha-683480) Calling .GetIP
	I0603 10:56:42.630073   25542 main.go:141] libmachine: (ha-683480) DBG | domain ha-683480 has defined MAC address 52:54:00:e5:3f:6a in network mk-ha-683480
	I0603 10:56:42.630422   25542 main.go:141] libmachine: (ha-683480) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:3f:6a", ip: ""} in network mk-ha-683480: {Iface:virbr1 ExpiryTime:2024-06-03 11:56:28 +0000 UTC Type:0 Mac:52:54:00:e5:3f:6a Iaid: IPaddr:192.168.39.116 Prefix:24 Hostname:ha-683480 Clientid:01:52:54:00:e5:3f:6a}
	I0603 10:56:42.630450   25542 main.go:141] libmachine: (ha-683480) DBG | domain ha-683480 has defined IP address 192.168.39.116 and MAC address 52:54:00:e5:3f:6a in network mk-ha-683480
	I0603 10:56:42.630554   25542 main.go:141] libmachine: (ha-683480) Calling .DriverName
	I0603 10:56:42.630954   25542 main.go:141] libmachine: (ha-683480) Calling .DriverName
	I0603 10:56:42.631128   25542 main.go:141] libmachine: (ha-683480) Calling .DriverName
	I0603 10:56:42.631209   25542 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0603 10:56:42.631265   25542 main.go:141] libmachine: (ha-683480) Calling .GetSSHHostname
	I0603 10:56:42.631291   25542 ssh_runner.go:195] Run: cat /version.json
	I0603 10:56:42.631310   25542 main.go:141] libmachine: (ha-683480) Calling .GetSSHHostname
	I0603 10:56:42.633628   25542 main.go:141] libmachine: (ha-683480) DBG | domain ha-683480 has defined MAC address 52:54:00:e5:3f:6a in network mk-ha-683480
	I0603 10:56:42.633946   25542 main.go:141] libmachine: (ha-683480) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:3f:6a", ip: ""} in network mk-ha-683480: {Iface:virbr1 ExpiryTime:2024-06-03 11:56:28 +0000 UTC Type:0 Mac:52:54:00:e5:3f:6a Iaid: IPaddr:192.168.39.116 Prefix:24 Hostname:ha-683480 Clientid:01:52:54:00:e5:3f:6a}
	I0603 10:56:42.633979   25542 main.go:141] libmachine: (ha-683480) DBG | domain ha-683480 has defined IP address 192.168.39.116 and MAC address 52:54:00:e5:3f:6a in network mk-ha-683480
	I0603 10:56:42.634006   25542 main.go:141] libmachine: (ha-683480) DBG | domain ha-683480 has defined MAC address 52:54:00:e5:3f:6a in network mk-ha-683480
	I0603 10:56:42.634228   25542 main.go:141] libmachine: (ha-683480) Calling .GetSSHPort
	I0603 10:56:42.634347   25542 main.go:141] libmachine: (ha-683480) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:3f:6a", ip: ""} in network mk-ha-683480: {Iface:virbr1 ExpiryTime:2024-06-03 11:56:28 +0000 UTC Type:0 Mac:52:54:00:e5:3f:6a Iaid: IPaddr:192.168.39.116 Prefix:24 Hostname:ha-683480 Clientid:01:52:54:00:e5:3f:6a}
	I0603 10:56:42.634373   25542 main.go:141] libmachine: (ha-683480) DBG | domain ha-683480 has defined IP address 192.168.39.116 and MAC address 52:54:00:e5:3f:6a in network mk-ha-683480
	I0603 10:56:42.634398   25542 main.go:141] libmachine: (ha-683480) Calling .GetSSHKeyPath
	I0603 10:56:42.634545   25542 main.go:141] libmachine: (ha-683480) Calling .GetSSHPort
	I0603 10:56:42.634554   25542 main.go:141] libmachine: (ha-683480) Calling .GetSSHUsername
	I0603 10:56:42.634708   25542 main.go:141] libmachine: (ha-683480) Calling .GetSSHKeyPath
	I0603 10:56:42.634705   25542 sshutil.go:53] new ssh client: &{IP:192.168.39.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19008-7755/.minikube/machines/ha-683480/id_rsa Username:docker}
	I0603 10:56:42.634860   25542 main.go:141] libmachine: (ha-683480) Calling .GetSSHUsername
	I0603 10:56:42.634993   25542 sshutil.go:53] new ssh client: &{IP:192.168.39.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19008-7755/.minikube/machines/ha-683480/id_rsa Username:docker}
	I0603 10:56:42.708170   25542 ssh_runner.go:195] Run: systemctl --version
	I0603 10:56:42.731833   25542 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0603 10:56:42.889105   25542 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0603 10:56:42.895506   25542 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0603 10:56:42.895572   25542 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0603 10:56:42.912227   25542 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0603 10:56:42.912245   25542 start.go:494] detecting cgroup driver to use...
	I0603 10:56:42.912303   25542 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0603 10:56:42.927958   25542 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0603 10:56:42.940924   25542 docker.go:217] disabling cri-docker service (if available) ...
	I0603 10:56:42.940963   25542 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0603 10:56:42.953568   25542 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0603 10:56:42.966535   25542 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0603 10:56:43.079194   25542 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0603 10:56:43.239076   25542 docker.go:233] disabling docker service ...
	I0603 10:56:43.239138   25542 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0603 10:56:43.253472   25542 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0603 10:56:43.265915   25542 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0603 10:56:43.378615   25542 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0603 10:56:43.489311   25542 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0603 10:56:43.503088   25542 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0603 10:56:43.520846   25542 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0603 10:56:43.520913   25542 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 10:56:43.531032   25542 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0603 10:56:43.531111   25542 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 10:56:43.541395   25542 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 10:56:43.551658   25542 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 10:56:43.561729   25542 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0603 10:56:43.572178   25542 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 10:56:43.582365   25542 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 10:56:43.598904   25542 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 10:56:43.609044   25542 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0603 10:56:43.618167   25542 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0603 10:56:43.618204   25542 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0603 10:56:43.630645   25542 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0603 10:56:43.639855   25542 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0603 10:56:43.747331   25542 ssh_runner.go:195] Run: sudo systemctl restart crio
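Condensed, the CRI-O preparation above (10:56:43.520846 onward) amounts to a handful of in-place edits to /etc/crio/crio.conf.d/02-crio.conf followed by a restart. A rough shell sketch of the same steps, assuming the same file layout as on this guest:

	CONF=/etc/crio/crio.conf.d/02-crio.conf
	sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' "$CONF"
	sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' "$CONF"
	sudo sed -i '/conmon_cgroup = .*/d' "$CONF"                          # drop any existing conmon_cgroup
	sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' "$CONF"   # and re-add it as "pod"
	sudo modprobe br_netfilter                                           # the sysctl probe above failed until this module loaded
	sudo sh -c 'echo 1 > /proc/sys/net/ipv4/ip_forward'
	sudo systemctl daemon-reload && sudo systemctl restart crio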
	I0603 10:56:43.878164   25542 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0603 10:56:43.878224   25542 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0603 10:56:43.882915   25542 start.go:562] Will wait 60s for crictl version
	I0603 10:56:43.882965   25542 ssh_runner.go:195] Run: which crictl
	I0603 10:56:43.886667   25542 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0603 10:56:43.931515   25542 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0603 10:56:43.931597   25542 ssh_runner.go:195] Run: crio --version
	I0603 10:56:43.958565   25542 ssh_runner.go:195] Run: crio --version
	I0603 10:56:43.988172   25542 out.go:177] * Preparing Kubernetes v1.30.1 on CRI-O 1.29.1 ...
	I0603 10:56:43.989315   25542 main.go:141] libmachine: (ha-683480) Calling .GetIP
	I0603 10:56:43.991640   25542 main.go:141] libmachine: (ha-683480) DBG | domain ha-683480 has defined MAC address 52:54:00:e5:3f:6a in network mk-ha-683480
	I0603 10:56:43.991964   25542 main.go:141] libmachine: (ha-683480) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:3f:6a", ip: ""} in network mk-ha-683480: {Iface:virbr1 ExpiryTime:2024-06-03 11:56:28 +0000 UTC Type:0 Mac:52:54:00:e5:3f:6a Iaid: IPaddr:192.168.39.116 Prefix:24 Hostname:ha-683480 Clientid:01:52:54:00:e5:3f:6a}
	I0603 10:56:43.991990   25542 main.go:141] libmachine: (ha-683480) DBG | domain ha-683480 has defined IP address 192.168.39.116 and MAC address 52:54:00:e5:3f:6a in network mk-ha-683480
	I0603 10:56:43.992200   25542 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0603 10:56:43.996256   25542 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0603 10:56:44.008861   25542 kubeadm.go:877] updating cluster {Name:ha-683480 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 Cl
usterName:ha-683480 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.116 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 M
ountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0603 10:56:44.008953   25542 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime crio
	I0603 10:56:44.008997   25542 ssh_runner.go:195] Run: sudo crictl images --output json
	I0603 10:56:44.041143   25542 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.1". assuming images are not preloaded.
	I0603 10:56:44.041209   25542 ssh_runner.go:195] Run: which lz4
	I0603 10:56:44.045081   25542 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19008-7755/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0603 10:56:44.045170   25542 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0603 10:56:44.049359   25542 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0603 10:56:44.049383   25542 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (394537501 bytes)
	I0603 10:56:45.405635   25542 crio.go:462] duration metric: took 1.360493385s to copy over tarball
	I0603 10:56:45.405698   25542 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0603 10:56:47.458922   25542 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.05319829s)
	I0603 10:56:47.458954   25542 crio.go:469] duration metric: took 2.053292515s to extract the tarball
	I0603 10:56:47.458963   25542 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0603 10:56:47.498260   25542 ssh_runner.go:195] Run: sudo crictl images --output json
	I0603 10:56:47.541753   25542 crio.go:514] all images are preloaded for cri-o runtime.
	I0603 10:56:47.541779   25542 cache_images.go:84] Images are preloaded, skipping loading
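In short, the preload path above is: scp the lz4 tarball onto the guest, unpack it into /var, then re-run crictl to confirm the images are present. A hedged equivalent of the extract-and-verify step, assuming the tarball is already at /preloaded.tar.lz4:

	sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	sudo crictl images --output json   # should now list registry.k8s.io/kube-apiserver:v1.30.1 and the rest of the preload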
	I0603 10:56:47.541788   25542 kubeadm.go:928] updating node { 192.168.39.116 8443 v1.30.1 crio true true} ...
	I0603 10:56:47.541906   25542 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-683480 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.116
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.1 ClusterName:ha-683480 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0603 10:56:47.541983   25542 ssh_runner.go:195] Run: crio config
	I0603 10:56:47.593386   25542 cni.go:84] Creating CNI manager for ""
	I0603 10:56:47.593406   25542 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0603 10:56:47.593414   25542 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0603 10:56:47.593436   25542 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.116 APIServerPort:8443 KubernetesVersion:v1.30.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-683480 NodeName:ha-683480 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.116"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.116 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernete
s/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0603 10:56:47.593585   25542 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.116
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-683480"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.116
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.116"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0603 10:56:47.593611   25542 kube-vip.go:115] generating kube-vip config ...
	I0603 10:56:47.593646   25542 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0603 10:56:47.612578   25542 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0603 10:56:47.612679   25542 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
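The static Pod above runs kube-vip with ARP leader election (vip_leaderelection / plndr-cp-lock) and binds the control-plane VIP 192.168.39.254 on eth0 of whichever node currently holds the lease. A couple of hedged spot-checks, not part of the test flow:

	ip addr show eth0 | grep 192.168.39.254         # the VIP should appear only on the current leader
	kubectl -n kube-system get lease plndr-cp-lock   # the HOLDER column shows which node kube-vip elected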
	I0603 10:56:47.612738   25542 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.1
	I0603 10:56:47.622669   25542 binaries.go:44] Found k8s binaries, skipping transfer
	I0603 10:56:47.622725   25542 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0603 10:56:47.632141   25542 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I0603 10:56:47.647848   25542 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0603 10:56:47.663454   25542 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2153 bytes)
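The 2153-byte kubeadm.yaml.new written above is the config rendered earlier in this log. If one wanted to sanity-check such a file by hand before init, recent kubeadm releases offer a validate subcommand and a dry-run mode; neither step is part of minikube's own flow, so this is only a sketch:

	sudo /var/lib/minikube/binaries/v1.30.1/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new
	sudo /var/lib/minikube/binaries/v1.30.1/kubeadm init --config /var/tmp/minikube/kubeadm.yaml.new --dry-run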
	I0603 10:56:47.679259   25542 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1447 bytes)
	I0603 10:56:47.694988   25542 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0603 10:56:47.698620   25542 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0603 10:56:47.710448   25542 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0603 10:56:47.828098   25542 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0603 10:56:47.844245   25542 certs.go:68] Setting up /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/ha-683480 for IP: 192.168.39.116
	I0603 10:56:47.844270   25542 certs.go:194] generating shared ca certs ...
	I0603 10:56:47.844291   25542 certs.go:226] acquiring lock for ca certs: {Name:mk50092df3bdecbf959926adc377ee06b75aacd9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 10:56:47.844468   25542 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19008-7755/.minikube/ca.key
	I0603 10:56:47.844521   25542 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19008-7755/.minikube/proxy-client-ca.key
	I0603 10:56:47.844534   25542 certs.go:256] generating profile certs ...
	I0603 10:56:47.844599   25542 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/ha-683480/client.key
	I0603 10:56:47.844618   25542 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/ha-683480/client.crt with IP's: []
	I0603 10:56:48.062533   25542 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/ha-683480/client.crt ...
	I0603 10:56:48.062560   25542 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/ha-683480/client.crt: {Name:mk5567ccfc9c4b9fcf1085bdad543fc3e68e1772 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 10:56:48.062722   25542 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/ha-683480/client.key ...
	I0603 10:56:48.062733   25542 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/ha-683480/client.key: {Name:mkb56f24577c32390a1bb550ce6a067617b186f8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 10:56:48.062809   25542 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/ha-683480/apiserver.key.8fa2ae60
	I0603 10:56:48.062824   25542 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/ha-683480/apiserver.crt.8fa2ae60 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.116 192.168.39.254]
	I0603 10:56:48.520493   25542 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/ha-683480/apiserver.crt.8fa2ae60 ...
	I0603 10:56:48.520521   25542 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/ha-683480/apiserver.crt.8fa2ae60: {Name:mk9f3c195de608bf5816447c8c67f7100921af0e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 10:56:48.520665   25542 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/ha-683480/apiserver.key.8fa2ae60 ...
	I0603 10:56:48.520677   25542 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/ha-683480/apiserver.key.8fa2ae60: {Name:mk1907058b2f028047f581cac4eeb38e528fcfc4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 10:56:48.520745   25542 certs.go:381] copying /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/ha-683480/apiserver.crt.8fa2ae60 -> /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/ha-683480/apiserver.crt
	I0603 10:56:48.520826   25542 certs.go:385] copying /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/ha-683480/apiserver.key.8fa2ae60 -> /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/ha-683480/apiserver.key
	I0603 10:56:48.520881   25542 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/ha-683480/proxy-client.key
	I0603 10:56:48.520895   25542 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/ha-683480/proxy-client.crt with IP's: []
	I0603 10:56:48.845023   25542 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/ha-683480/proxy-client.crt ...
	I0603 10:56:48.845051   25542 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/ha-683480/proxy-client.crt: {Name:mk6bc5663a3284bfe966796c7ffb8b75d9f5a053 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 10:56:48.845203   25542 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/ha-683480/proxy-client.key ...
	I0603 10:56:48.845214   25542 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/ha-683480/proxy-client.key: {Name:mkc971faeb60f06145787f9880e809afdc0bbafe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 10:56:48.845276   25542 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19008-7755/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0603 10:56:48.845292   25542 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19008-7755/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0603 10:56:48.845301   25542 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19008-7755/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0603 10:56:48.845314   25542 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19008-7755/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0603 10:56:48.845323   25542 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/ha-683480/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0603 10:56:48.845336   25542 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/ha-683480/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0603 10:56:48.845345   25542 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/ha-683480/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0603 10:56:48.845354   25542 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/ha-683480/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0603 10:56:48.845398   25542 certs.go:484] found cert: /home/jenkins/minikube-integration/19008-7755/.minikube/certs/15028.pem (1338 bytes)
	W0603 10:56:48.845430   25542 certs.go:480] ignoring /home/jenkins/minikube-integration/19008-7755/.minikube/certs/15028_empty.pem, impossibly tiny 0 bytes
	I0603 10:56:48.845439   25542 certs.go:484] found cert: /home/jenkins/minikube-integration/19008-7755/.minikube/certs/ca-key.pem (1679 bytes)
	I0603 10:56:48.845460   25542 certs.go:484] found cert: /home/jenkins/minikube-integration/19008-7755/.minikube/certs/ca.pem (1082 bytes)
	I0603 10:56:48.845482   25542 certs.go:484] found cert: /home/jenkins/minikube-integration/19008-7755/.minikube/certs/cert.pem (1123 bytes)
	I0603 10:56:48.845502   25542 certs.go:484] found cert: /home/jenkins/minikube-integration/19008-7755/.minikube/certs/key.pem (1679 bytes)
	I0603 10:56:48.845536   25542 certs.go:484] found cert: /home/jenkins/minikube-integration/19008-7755/.minikube/files/etc/ssl/certs/150282.pem (1708 bytes)
	I0603 10:56:48.845562   25542 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19008-7755/.minikube/files/etc/ssl/certs/150282.pem -> /usr/share/ca-certificates/150282.pem
	I0603 10:56:48.845576   25542 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19008-7755/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0603 10:56:48.845588   25542 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19008-7755/.minikube/certs/15028.pem -> /usr/share/ca-certificates/15028.pem
	I0603 10:56:48.846089   25542 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0603 10:56:48.881541   25542 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0603 10:56:48.907819   25542 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0603 10:56:48.930607   25542 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0603 10:56:48.954060   25542 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/ha-683480/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0603 10:56:48.977147   25542 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/ha-683480/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0603 10:56:49.000037   25542 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/ha-683480/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0603 10:56:49.023238   25542 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/ha-683480/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0603 10:56:49.046136   25542 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/files/etc/ssl/certs/150282.pem --> /usr/share/ca-certificates/150282.pem (1708 bytes)
	I0603 10:56:49.069087   25542 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0603 10:56:49.091687   25542 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/certs/15028.pem --> /usr/share/ca-certificates/15028.pem (1338 bytes)
	I0603 10:56:49.117389   25542 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0603 10:56:49.133745   25542 ssh_runner.go:195] Run: openssl version
	I0603 10:56:49.139785   25542 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0603 10:56:49.151514   25542 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0603 10:56:49.156083   25542 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jun  3 10:39 /usr/share/ca-certificates/minikubeCA.pem
	I0603 10:56:49.156126   25542 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0603 10:56:49.162060   25542 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0603 10:56:49.173439   25542 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15028.pem && ln -fs /usr/share/ca-certificates/15028.pem /etc/ssl/certs/15028.pem"
	I0603 10:56:49.184803   25542 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15028.pem
	I0603 10:56:49.189274   25542 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jun  3 10:51 /usr/share/ca-certificates/15028.pem
	I0603 10:56:49.189318   25542 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15028.pem
	I0603 10:56:49.195094   25542 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/15028.pem /etc/ssl/certs/51391683.0"
	I0603 10:56:49.206208   25542 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/150282.pem && ln -fs /usr/share/ca-certificates/150282.pem /etc/ssl/certs/150282.pem"
	I0603 10:56:49.217268   25542 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/150282.pem
	I0603 10:56:49.221759   25542 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jun  3 10:51 /usr/share/ca-certificates/150282.pem
	I0603 10:56:49.221805   25542 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/150282.pem
	I0603 10:56:49.227503   25542 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/150282.pem /etc/ssl/certs/3ec20f2e.0"
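The link names used above (b5213941.0, 51391683.0, 3ec20f2e.0) are OpenSSL subject-hash names, which is how TLS clients scanning /etc/ssl/certs locate a CA. They come straight from the hash command already shown in the log; generalized into a small sketch for any CA file:

	h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"   # yields /etc/ssl/certs/b5213941.0 here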
	I0603 10:56:49.238402   25542 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0603 10:56:49.242543   25542 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0603 10:56:49.242598   25542 kubeadm.go:391] StartCluster: {Name:ha-683480 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 Clust
erName:ha-683480 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.116 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Moun
tType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0603 10:56:49.242699   25542 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0603 10:56:49.242738   25542 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0603 10:56:49.285323   25542 cri.go:89] found id: ""
	I0603 10:56:49.285398   25542 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0603 10:56:49.297631   25542 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0603 10:56:49.307821   25542 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0603 10:56:49.317548   25542 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0603 10:56:49.317562   25542 kubeadm.go:156] found existing configuration files:
	
	I0603 10:56:49.317599   25542 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0603 10:56:49.326953   25542 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0603 10:56:49.327005   25542 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0603 10:56:49.336406   25542 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0603 10:56:49.345346   25542 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0603 10:56:49.345395   25542 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0603 10:56:49.355318   25542 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0603 10:56:49.365001   25542 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0603 10:56:49.365052   25542 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0603 10:56:49.375216   25542 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0603 10:56:49.384102   25542 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0603 10:56:49.384141   25542 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0603 10:56:49.393669   25542 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0603 10:56:49.632642   25542 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0603 10:57:00.771093   25542 kubeadm.go:309] [init] Using Kubernetes version: v1.30.1
	I0603 10:57:00.771149   25542 kubeadm.go:309] [preflight] Running pre-flight checks
	I0603 10:57:00.771258   25542 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0603 10:57:00.771398   25542 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0603 10:57:00.771535   25542 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0603 10:57:00.771614   25542 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0603 10:57:00.773037   25542 out.go:204]   - Generating certificates and keys ...
	I0603 10:57:00.773119   25542 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0603 10:57:00.773207   25542 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0603 10:57:00.773281   25542 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0603 10:57:00.773342   25542 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0603 10:57:00.773426   25542 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0603 10:57:00.773492   25542 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0603 10:57:00.773566   25542 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0603 10:57:00.773692   25542 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [ha-683480 localhost] and IPs [192.168.39.116 127.0.0.1 ::1]
	I0603 10:57:00.773766   25542 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0603 10:57:00.773896   25542 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [ha-683480 localhost] and IPs [192.168.39.116 127.0.0.1 ::1]
	I0603 10:57:00.773990   25542 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0603 10:57:00.774108   25542 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0603 10:57:00.774158   25542 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0603 10:57:00.774228   25542 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0603 10:57:00.774289   25542 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0603 10:57:00.774367   25542 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0603 10:57:00.774456   25542 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0603 10:57:00.774508   25542 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0603 10:57:00.774554   25542 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0603 10:57:00.774619   25542 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0603 10:57:00.774675   25542 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0603 10:57:00.775900   25542 out.go:204]   - Booting up control plane ...
	I0603 10:57:00.775991   25542 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0603 10:57:00.776054   25542 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0603 10:57:00.776132   25542 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0603 10:57:00.776240   25542 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0603 10:57:00.776342   25542 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0603 10:57:00.776410   25542 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0603 10:57:00.776547   25542 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0603 10:57:00.776632   25542 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0603 10:57:00.776717   25542 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 501.393485ms
	I0603 10:57:00.776804   25542 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0603 10:57:00.776888   25542 kubeadm.go:309] [api-check] The API server is healthy after 6.006452478s
	I0603 10:57:00.777028   25542 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0603 10:57:00.777187   25542 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0603 10:57:00.777266   25542 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0603 10:57:00.777492   25542 kubeadm.go:309] [mark-control-plane] Marking the node ha-683480 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0603 10:57:00.777559   25542 kubeadm.go:309] [bootstrap-token] Using token: q8elef.uwid3umlrwl04c9q
	I0603 10:57:00.778892   25542 out.go:204]   - Configuring RBAC rules ...
	I0603 10:57:00.778977   25542 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0603 10:57:00.779065   25542 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0603 10:57:00.779221   25542 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0603 10:57:00.779348   25542 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0603 10:57:00.779489   25542 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0603 10:57:00.779590   25542 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0603 10:57:00.779731   25542 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0603 10:57:00.779774   25542 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0603 10:57:00.779817   25542 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0603 10:57:00.779823   25542 kubeadm.go:309] 
	I0603 10:57:00.779889   25542 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0603 10:57:00.779898   25542 kubeadm.go:309] 
	I0603 10:57:00.780017   25542 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0603 10:57:00.780026   25542 kubeadm.go:309] 
	I0603 10:57:00.780068   25542 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0603 10:57:00.780156   25542 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0603 10:57:00.780232   25542 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0603 10:57:00.780244   25542 kubeadm.go:309] 
	I0603 10:57:00.780332   25542 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0603 10:57:00.780351   25542 kubeadm.go:309] 
	I0603 10:57:00.780389   25542 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0603 10:57:00.780395   25542 kubeadm.go:309] 
	I0603 10:57:00.780437   25542 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0603 10:57:00.780498   25542 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0603 10:57:00.780557   25542 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0603 10:57:00.780563   25542 kubeadm.go:309] 
	I0603 10:57:00.780627   25542 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0603 10:57:00.780692   25542 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0603 10:57:00.780698   25542 kubeadm.go:309] 
	I0603 10:57:00.780763   25542 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token q8elef.uwid3umlrwl04c9q \
	I0603 10:57:00.780860   25542 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:efe6cbdd58a590fb6c3b56a05a1648145bfb13b8a8cd2383ea34b710fa987e0b \
	I0603 10:57:00.780904   25542 kubeadm.go:309] 	--control-plane 
	I0603 10:57:00.780918   25542 kubeadm.go:309] 
	I0603 10:57:00.781031   25542 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0603 10:57:00.781038   25542 kubeadm.go:309] 
	I0603 10:57:00.781120   25542 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token q8elef.uwid3umlrwl04c9q \
	I0603 10:57:00.781237   25542 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:efe6cbdd58a590fb6c3b56a05a1648145bfb13b8a8cd2383ea34b710fa987e0b 
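	The join commands above embed the bootstrap token and the CA certificate hash. The hash value can be recomputed from the cluster CA with the pipeline documented for kubeadm, assuming the default /etc/kubernetes/pki/ca.crt path on the primary node (a sketch, not taken from this log):

	    # recompute the value passed to --discovery-token-ca-cert-hash (run on ha-683480)
	    openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt \
	      | openssl rsa -pubin -outform der 2>/dev/null \
	      | openssl dgst -sha256 -hex | sed 's/^.* //'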
	I0603 10:57:00.781248   25542 cni.go:84] Creating CNI manager for ""
	I0603 10:57:00.781253   25542 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0603 10:57:00.782548   25542 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0603 10:57:00.783673   25542 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0603 10:57:00.789298   25542 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.30.1/kubectl ...
	I0603 10:57:00.789311   25542 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0603 10:57:00.807718   25542 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
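	Here minikube has detected a multi-node profile, recommended kindnet, verified /opt/cni/bin/portmap, and applied the CNI manifest with the bundled kubectl. A minimal way to double-check the result by hand, assuming the manifest creates a DaemonSet named kindnet in kube-system and that the profile's kubeconfig context is ha-683480 (both names are assumptions, not shown in this log):

	    # illustrative checks only; resource and context names are assumed
	    minikube -p ha-683480 ssh -- ls /opt/cni/bin
	    kubectl --context ha-683480 -n kube-system get daemonset kindnet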
	I0603 10:57:01.177801   25542 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0603 10:57:01.177898   25542 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 10:57:01.177928   25542 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-683480 minikube.k8s.io/updated_at=2024_06_03T10_57_01_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=599070631c2216ebc936292d491e4fe10e15b9d8 minikube.k8s.io/name=ha-683480 minikube.k8s.io/primary=true
	I0603 10:57:01.244001   25542 ops.go:34] apiserver oom_adj: -16
	I0603 10:57:01.350406   25542 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 10:57:01.851349   25542 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 10:57:02.351098   25542 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 10:57:02.851145   25542 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 10:57:03.350923   25542 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 10:57:03.850929   25542 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 10:57:04.350931   25542 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 10:57:04.851245   25542 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 10:57:05.351031   25542 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 10:57:05.850412   25542 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 10:57:06.350628   25542 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 10:57:06.851025   25542 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 10:57:07.350651   25542 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 10:57:07.850624   25542 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 10:57:08.350938   25542 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 10:57:08.850778   25542 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 10:57:09.350723   25542 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 10:57:09.850455   25542 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 10:57:10.351365   25542 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 10:57:10.851028   25542 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 10:57:11.351134   25542 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 10:57:11.850482   25542 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 10:57:12.351346   25542 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 10:57:12.850434   25542 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 10:57:13.350776   25542 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 10:57:13.450935   25542 kubeadm.go:1107] duration metric: took 12.273102783s to wait for elevateKubeSystemPrivileges
	W0603 10:57:13.450966   25542 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0603 10:57:13.450975   25542 kubeadm.go:393] duration metric: took 24.208380078s to StartCluster
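	The block of repeated "kubectl get sa default" runs above is minikube polling until the default ServiceAccount exists before it grants kube-system privileges (the elevateKubeSystemPrivileges step timed at 12.27s). A minimal shell sketch of the same readiness poll, using the binary and kubeconfig paths shown in the log:

	    # poll until the "default" ServiceAccount exists, as the loop above does
	    until sudo /var/lib/minikube/binaries/v1.30.1/kubectl \
	          --kubeconfig=/var/lib/minikube/kubeconfig get sa default >/dev/null 2>&1; do
	      sleep 0.5
	    done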
	I0603 10:57:13.450993   25542 settings.go:142] acquiring lock: {Name:mkda1bdbbfe91266270f1d999e6d56fc2830d6f8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 10:57:13.451092   25542 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19008-7755/kubeconfig
	I0603 10:57:13.451638   25542 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19008-7755/kubeconfig: {Name:mk2f45bf5da44c5570e14124ae482cac309d97b6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 10:57:13.451815   25542 start.go:232] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.39.116 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0603 10:57:13.451834   25542 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0603 10:57:13.451848   25542 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0603 10:57:13.451905   25542 addons.go:69] Setting storage-provisioner=true in profile "ha-683480"
	I0603 10:57:13.451842   25542 start.go:240] waiting for startup goroutines ...
	I0603 10:57:13.451938   25542 addons.go:234] Setting addon storage-provisioner=true in "ha-683480"
	I0603 10:57:13.451943   25542 addons.go:69] Setting default-storageclass=true in profile "ha-683480"
	I0603 10:57:13.451965   25542 host.go:66] Checking if "ha-683480" exists ...
	I0603 10:57:13.451972   25542 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-683480"
	I0603 10:57:13.452025   25542 config.go:182] Loaded profile config "ha-683480": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0603 10:57:13.452281   25542 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 10:57:13.452308   25542 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 10:57:13.452315   25542 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 10:57:13.452348   25542 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 10:57:13.466989   25542 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36615
	I0603 10:57:13.467031   25542 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37511
	I0603 10:57:13.467385   25542 main.go:141] libmachine: () Calling .GetVersion
	I0603 10:57:13.467466   25542 main.go:141] libmachine: () Calling .GetVersion
	I0603 10:57:13.467917   25542 main.go:141] libmachine: Using API Version  1
	I0603 10:57:13.467945   25542 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 10:57:13.468018   25542 main.go:141] libmachine: Using API Version  1
	I0603 10:57:13.468039   25542 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 10:57:13.468288   25542 main.go:141] libmachine: () Calling .GetMachineName
	I0603 10:57:13.468484   25542 main.go:141] libmachine: () Calling .GetMachineName
	I0603 10:57:13.468512   25542 main.go:141] libmachine: (ha-683480) Calling .GetState
	I0603 10:57:13.468982   25542 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 10:57:13.469010   25542 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 10:57:13.470567   25542 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19008-7755/kubeconfig
	I0603 10:57:13.470813   25542 kapi.go:59] client config for ha-683480: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19008-7755/.minikube/profiles/ha-683480/client.crt", KeyFile:"/home/jenkins/minikube-integration/19008-7755/.minikube/profiles/ha-683480/client.key", CAFile:"/home/jenkins/minikube-integration/19008-7755/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1cfa500), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0603 10:57:13.471258   25542 cert_rotation.go:137] Starting client certificate rotation controller
	I0603 10:57:13.471439   25542 addons.go:234] Setting addon default-storageclass=true in "ha-683480"
	I0603 10:57:13.471468   25542 host.go:66] Checking if "ha-683480" exists ...
	I0603 10:57:13.471711   25542 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 10:57:13.471745   25542 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 10:57:13.483869   25542 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40089
	I0603 10:57:13.484336   25542 main.go:141] libmachine: () Calling .GetVersion
	I0603 10:57:13.484815   25542 main.go:141] libmachine: Using API Version  1
	I0603 10:57:13.484843   25542 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 10:57:13.485225   25542 main.go:141] libmachine: () Calling .GetMachineName
	I0603 10:57:13.485430   25542 main.go:141] libmachine: (ha-683480) Calling .GetState
	I0603 10:57:13.486116   25542 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40315
	I0603 10:57:13.486483   25542 main.go:141] libmachine: () Calling .GetVersion
	I0603 10:57:13.486955   25542 main.go:141] libmachine: Using API Version  1
	I0603 10:57:13.486977   25542 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 10:57:13.487258   25542 main.go:141] libmachine: (ha-683480) Calling .DriverName
	I0603 10:57:13.487324   25542 main.go:141] libmachine: () Calling .GetMachineName
	I0603 10:57:13.489101   25542 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0603 10:57:13.487795   25542 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 10:57:13.490256   25542 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0603 10:57:13.490269   25542 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0603 10:57:13.490281   25542 main.go:141] libmachine: (ha-683480) Calling .GetSSHHostname
	I0603 10:57:13.489136   25542 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 10:57:13.493061   25542 main.go:141] libmachine: (ha-683480) DBG | domain ha-683480 has defined MAC address 52:54:00:e5:3f:6a in network mk-ha-683480
	I0603 10:57:13.493530   25542 main.go:141] libmachine: (ha-683480) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:3f:6a", ip: ""} in network mk-ha-683480: {Iface:virbr1 ExpiryTime:2024-06-03 11:56:28 +0000 UTC Type:0 Mac:52:54:00:e5:3f:6a Iaid: IPaddr:192.168.39.116 Prefix:24 Hostname:ha-683480 Clientid:01:52:54:00:e5:3f:6a}
	I0603 10:57:13.493558   25542 main.go:141] libmachine: (ha-683480) DBG | domain ha-683480 has defined IP address 192.168.39.116 and MAC address 52:54:00:e5:3f:6a in network mk-ha-683480
	I0603 10:57:13.493702   25542 main.go:141] libmachine: (ha-683480) Calling .GetSSHPort
	I0603 10:57:13.493854   25542 main.go:141] libmachine: (ha-683480) Calling .GetSSHKeyPath
	I0603 10:57:13.493983   25542 main.go:141] libmachine: (ha-683480) Calling .GetSSHUsername
	I0603 10:57:13.494126   25542 sshutil.go:53] new ssh client: &{IP:192.168.39.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19008-7755/.minikube/machines/ha-683480/id_rsa Username:docker}
	I0603 10:57:13.505344   25542 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34295
	I0603 10:57:13.505727   25542 main.go:141] libmachine: () Calling .GetVersion
	I0603 10:57:13.506272   25542 main.go:141] libmachine: Using API Version  1
	I0603 10:57:13.506290   25542 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 10:57:13.506663   25542 main.go:141] libmachine: () Calling .GetMachineName
	I0603 10:57:13.506840   25542 main.go:141] libmachine: (ha-683480) Calling .GetState
	I0603 10:57:13.508131   25542 main.go:141] libmachine: (ha-683480) Calling .DriverName
	I0603 10:57:13.508300   25542 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0603 10:57:13.508313   25542 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0603 10:57:13.508326   25542 main.go:141] libmachine: (ha-683480) Calling .GetSSHHostname
	I0603 10:57:13.511018   25542 main.go:141] libmachine: (ha-683480) DBG | domain ha-683480 has defined MAC address 52:54:00:e5:3f:6a in network mk-ha-683480
	I0603 10:57:13.511464   25542 main.go:141] libmachine: (ha-683480) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:3f:6a", ip: ""} in network mk-ha-683480: {Iface:virbr1 ExpiryTime:2024-06-03 11:56:28 +0000 UTC Type:0 Mac:52:54:00:e5:3f:6a Iaid: IPaddr:192.168.39.116 Prefix:24 Hostname:ha-683480 Clientid:01:52:54:00:e5:3f:6a}
	I0603 10:57:13.511490   25542 main.go:141] libmachine: (ha-683480) DBG | domain ha-683480 has defined IP address 192.168.39.116 and MAC address 52:54:00:e5:3f:6a in network mk-ha-683480
	I0603 10:57:13.511609   25542 main.go:141] libmachine: (ha-683480) Calling .GetSSHPort
	I0603 10:57:13.511742   25542 main.go:141] libmachine: (ha-683480) Calling .GetSSHKeyPath
	I0603 10:57:13.511872   25542 main.go:141] libmachine: (ha-683480) Calling .GetSSHUsername
	I0603 10:57:13.512027   25542 sshutil.go:53] new ssh client: &{IP:192.168.39.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19008-7755/.minikube/machines/ha-683480/id_rsa Username:docker}
	I0603 10:57:13.577094   25542 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0603 10:57:13.650514   25542 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0603 10:57:13.694970   25542 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0603 10:57:14.112758   25542 start.go:946] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
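	The sed-rewritten ConfigMap above injects a hosts block so that host.minikube.internal resolves to the host gateway 192.168.39.1, and adds the log plugin ahead of errors. One way to inspect the injected stanza afterwards, assuming the kubeconfig context is named ha-683480:

	    # dump the Corefile and look for the injected "hosts { 192.168.39.1 host.minikube.internal ... }" block
	    kubectl --context ha-683480 -n kube-system get configmap coredns \
	      -o jsonpath='{.data.Corefile}'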
	I0603 10:57:14.389503   25542 main.go:141] libmachine: Making call to close driver server
	I0603 10:57:14.389530   25542 main.go:141] libmachine: (ha-683480) Calling .Close
	I0603 10:57:14.389534   25542 main.go:141] libmachine: Making call to close driver server
	I0603 10:57:14.389545   25542 main.go:141] libmachine: (ha-683480) Calling .Close
	I0603 10:57:14.389827   25542 main.go:141] libmachine: Successfully made call to close driver server
	I0603 10:57:14.389845   25542 main.go:141] libmachine: Making call to close connection to plugin binary
	I0603 10:57:14.389853   25542 main.go:141] libmachine: Making call to close driver server
	I0603 10:57:14.389860   25542 main.go:141] libmachine: (ha-683480) Calling .Close
	I0603 10:57:14.389866   25542 main.go:141] libmachine: (ha-683480) DBG | Closing plugin on server side
	I0603 10:57:14.389904   25542 main.go:141] libmachine: Successfully made call to close driver server
	I0603 10:57:14.389906   25542 main.go:141] libmachine: (ha-683480) DBG | Closing plugin on server side
	I0603 10:57:14.389921   25542 main.go:141] libmachine: Making call to close connection to plugin binary
	I0603 10:57:14.389931   25542 main.go:141] libmachine: Making call to close driver server
	I0603 10:57:14.389941   25542 main.go:141] libmachine: (ha-683480) Calling .Close
	I0603 10:57:14.391412   25542 main.go:141] libmachine: (ha-683480) DBG | Closing plugin on server side
	I0603 10:57:14.391429   25542 main.go:141] libmachine: Successfully made call to close driver server
	I0603 10:57:14.391416   25542 main.go:141] libmachine: Successfully made call to close driver server
	I0603 10:57:14.391444   25542 main.go:141] libmachine: Making call to close connection to plugin binary
	I0603 10:57:14.391450   25542 main.go:141] libmachine: Making call to close connection to plugin binary
	I0603 10:57:14.391575   25542 round_trippers.go:463] GET https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses
	I0603 10:57:14.391591   25542 round_trippers.go:469] Request Headers:
	I0603 10:57:14.391602   25542 round_trippers.go:473]     Accept: application/json, */*
	I0603 10:57:14.391608   25542 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 10:57:14.405075   25542 round_trippers.go:574] Response Status: 200 OK in 13 milliseconds
	I0603 10:57:14.405550   25542 round_trippers.go:463] PUT https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0603 10:57:14.405564   25542 round_trippers.go:469] Request Headers:
	I0603 10:57:14.405571   25542 round_trippers.go:473]     Accept: application/json, */*
	I0603 10:57:14.405575   25542 round_trippers.go:473]     Content-Type: application/json
	I0603 10:57:14.405579   25542 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 10:57:14.408132   25542 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0603 10:57:14.408343   25542 main.go:141] libmachine: Making call to close driver server
	I0603 10:57:14.408358   25542 main.go:141] libmachine: (ha-683480) Calling .Close
	I0603 10:57:14.408582   25542 main.go:141] libmachine: Successfully made call to close driver server
	I0603 10:57:14.408604   25542 main.go:141] libmachine: Making call to close connection to plugin binary
	I0603 10:57:14.408636   25542 main.go:141] libmachine: (ha-683480) DBG | Closing plugin on server side
	I0603 10:57:14.410150   25542 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0603 10:57:14.411371   25542 addons.go:510] duration metric: took 959.517318ms for enable addons: enabled=[storage-provisioner default-storageclass]
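	The GET and PUT against /apis/storage.k8s.io/v1/storageclasses/standard above are the default-storageclass addon marking minikube's "standard" class as the cluster default. A quick external check (illustrative; the hostpath provisioner name is assumed, not confirmed by this log):

	    kubectl --context ha-683480 get storageclass
	    # expected line (assumed): standard (default)   k8s.io/minikube-hostpath   ...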
	I0603 10:57:14.411421   25542 start.go:245] waiting for cluster config update ...
	I0603 10:57:14.411435   25542 start.go:254] writing updated cluster config ...
	I0603 10:57:14.412828   25542 out.go:177] 
	I0603 10:57:14.413974   25542 config.go:182] Loaded profile config "ha-683480": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0603 10:57:14.414032   25542 profile.go:143] Saving config to /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/ha-683480/config.json ...
	I0603 10:57:14.415360   25542 out.go:177] * Starting "ha-683480-m02" control-plane node in "ha-683480" cluster
	I0603 10:57:14.416242   25542 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime crio
	I0603 10:57:14.416260   25542 cache.go:56] Caching tarball of preloaded images
	I0603 10:57:14.416323   25542 preload.go:173] Found /home/jenkins/minikube-integration/19008-7755/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0603 10:57:14.416334   25542 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on crio
	I0603 10:57:14.416393   25542 profile.go:143] Saving config to /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/ha-683480/config.json ...
	I0603 10:57:14.416567   25542 start.go:360] acquireMachinesLock for ha-683480-m02: {Name:mk7389fd163f37e28f7d4d842bab151f7e27bc7c Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0603 10:57:14.416614   25542 start.go:364] duration metric: took 26.687µs to acquireMachinesLock for "ha-683480-m02"
	I0603 10:57:14.416639   25542 start.go:93] Provisioning new machine with config: &{Name:ha-683480 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:ha-683480 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.116 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0603 10:57:14.416727   25542 start.go:125] createHost starting for "m02" (driver="kvm2")
	I0603 10:57:14.418166   25542 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0603 10:57:14.418228   25542 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 10:57:14.418250   25542 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 10:57:14.432219   25542 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34223
	I0603 10:57:14.432598   25542 main.go:141] libmachine: () Calling .GetVersion
	I0603 10:57:14.433005   25542 main.go:141] libmachine: Using API Version  1
	I0603 10:57:14.433027   25542 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 10:57:14.433373   25542 main.go:141] libmachine: () Calling .GetMachineName
	I0603 10:57:14.433550   25542 main.go:141] libmachine: (ha-683480-m02) Calling .GetMachineName
	I0603 10:57:14.433658   25542 main.go:141] libmachine: (ha-683480-m02) Calling .DriverName
	I0603 10:57:14.433802   25542 start.go:159] libmachine.API.Create for "ha-683480" (driver="kvm2")
	I0603 10:57:14.433830   25542 client.go:168] LocalClient.Create starting
	I0603 10:57:14.433860   25542 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19008-7755/.minikube/certs/ca.pem
	I0603 10:57:14.433896   25542 main.go:141] libmachine: Decoding PEM data...
	I0603 10:57:14.433916   25542 main.go:141] libmachine: Parsing certificate...
	I0603 10:57:14.433978   25542 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19008-7755/.minikube/certs/cert.pem
	I0603 10:57:14.434007   25542 main.go:141] libmachine: Decoding PEM data...
	I0603 10:57:14.434024   25542 main.go:141] libmachine: Parsing certificate...
	I0603 10:57:14.434048   25542 main.go:141] libmachine: Running pre-create checks...
	I0603 10:57:14.434060   25542 main.go:141] libmachine: (ha-683480-m02) Calling .PreCreateCheck
	I0603 10:57:14.434215   25542 main.go:141] libmachine: (ha-683480-m02) Calling .GetConfigRaw
	I0603 10:57:14.434590   25542 main.go:141] libmachine: Creating machine...
	I0603 10:57:14.434604   25542 main.go:141] libmachine: (ha-683480-m02) Calling .Create
	I0603 10:57:14.434715   25542 main.go:141] libmachine: (ha-683480-m02) Creating KVM machine...
	I0603 10:57:14.435989   25542 main.go:141] libmachine: (ha-683480-m02) DBG | found existing default KVM network
	I0603 10:57:14.436155   25542 main.go:141] libmachine: (ha-683480-m02) DBG | found existing private KVM network mk-ha-683480
	I0603 10:57:14.436266   25542 main.go:141] libmachine: (ha-683480-m02) Setting up store path in /home/jenkins/minikube-integration/19008-7755/.minikube/machines/ha-683480-m02 ...
	I0603 10:57:14.436285   25542 main.go:141] libmachine: (ha-683480-m02) Building disk image from file:///home/jenkins/minikube-integration/19008-7755/.minikube/cache/iso/amd64/minikube-v1.33.1-1716398070-18934-amd64.iso
	I0603 10:57:14.436354   25542 main.go:141] libmachine: (ha-683480-m02) DBG | I0603 10:57:14.436263   25955 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19008-7755/.minikube
	I0603 10:57:14.436467   25542 main.go:141] libmachine: (ha-683480-m02) Downloading /home/jenkins/minikube-integration/19008-7755/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19008-7755/.minikube/cache/iso/amd64/minikube-v1.33.1-1716398070-18934-amd64.iso...
	I0603 10:57:14.655395   25542 main.go:141] libmachine: (ha-683480-m02) DBG | I0603 10:57:14.655264   25955 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19008-7755/.minikube/machines/ha-683480-m02/id_rsa...
	I0603 10:57:15.185299   25542 main.go:141] libmachine: (ha-683480-m02) DBG | I0603 10:57:15.185194   25955 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19008-7755/.minikube/machines/ha-683480-m02/ha-683480-m02.rawdisk...
	I0603 10:57:15.185331   25542 main.go:141] libmachine: (ha-683480-m02) DBG | Writing magic tar header
	I0603 10:57:15.185347   25542 main.go:141] libmachine: (ha-683480-m02) DBG | Writing SSH key tar header
	I0603 10:57:15.185360   25542 main.go:141] libmachine: (ha-683480-m02) DBG | I0603 10:57:15.185297   25955 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19008-7755/.minikube/machines/ha-683480-m02 ...
	I0603 10:57:15.185455   25542 main.go:141] libmachine: (ha-683480-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19008-7755/.minikube/machines/ha-683480-m02
	I0603 10:57:15.185489   25542 main.go:141] libmachine: (ha-683480-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19008-7755/.minikube/machines
	I0603 10:57:15.185506   25542 main.go:141] libmachine: (ha-683480-m02) Setting executable bit set on /home/jenkins/minikube-integration/19008-7755/.minikube/machines/ha-683480-m02 (perms=drwx------)
	I0603 10:57:15.185525   25542 main.go:141] libmachine: (ha-683480-m02) Setting executable bit set on /home/jenkins/minikube-integration/19008-7755/.minikube/machines (perms=drwxr-xr-x)
	I0603 10:57:15.185540   25542 main.go:141] libmachine: (ha-683480-m02) Setting executable bit set on /home/jenkins/minikube-integration/19008-7755/.minikube (perms=drwxr-xr-x)
	I0603 10:57:15.185554   25542 main.go:141] libmachine: (ha-683480-m02) Setting executable bit set on /home/jenkins/minikube-integration/19008-7755 (perms=drwxrwxr-x)
	I0603 10:57:15.185569   25542 main.go:141] libmachine: (ha-683480-m02) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0603 10:57:15.185582   25542 main.go:141] libmachine: (ha-683480-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19008-7755/.minikube
	I0603 10:57:15.185594   25542 main.go:141] libmachine: (ha-683480-m02) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0603 10:57:15.185610   25542 main.go:141] libmachine: (ha-683480-m02) Creating domain...
	I0603 10:57:15.185627   25542 main.go:141] libmachine: (ha-683480-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19008-7755
	I0603 10:57:15.185640   25542 main.go:141] libmachine: (ha-683480-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0603 10:57:15.185654   25542 main.go:141] libmachine: (ha-683480-m02) DBG | Checking permissions on dir: /home/jenkins
	I0603 10:57:15.185670   25542 main.go:141] libmachine: (ha-683480-m02) DBG | Checking permissions on dir: /home
	I0603 10:57:15.185685   25542 main.go:141] libmachine: (ha-683480-m02) DBG | Skipping /home - not owner
	I0603 10:57:15.186408   25542 main.go:141] libmachine: (ha-683480-m02) define libvirt domain using xml: 
	I0603 10:57:15.186431   25542 main.go:141] libmachine: (ha-683480-m02) <domain type='kvm'>
	I0603 10:57:15.186442   25542 main.go:141] libmachine: (ha-683480-m02)   <name>ha-683480-m02</name>
	I0603 10:57:15.186450   25542 main.go:141] libmachine: (ha-683480-m02)   <memory unit='MiB'>2200</memory>
	I0603 10:57:15.186458   25542 main.go:141] libmachine: (ha-683480-m02)   <vcpu>2</vcpu>
	I0603 10:57:15.186465   25542 main.go:141] libmachine: (ha-683480-m02)   <features>
	I0603 10:57:15.186473   25542 main.go:141] libmachine: (ha-683480-m02)     <acpi/>
	I0603 10:57:15.186479   25542 main.go:141] libmachine: (ha-683480-m02)     <apic/>
	I0603 10:57:15.186484   25542 main.go:141] libmachine: (ha-683480-m02)     <pae/>
	I0603 10:57:15.186488   25542 main.go:141] libmachine: (ha-683480-m02)     
	I0603 10:57:15.186495   25542 main.go:141] libmachine: (ha-683480-m02)   </features>
	I0603 10:57:15.186499   25542 main.go:141] libmachine: (ha-683480-m02)   <cpu mode='host-passthrough'>
	I0603 10:57:15.186504   25542 main.go:141] libmachine: (ha-683480-m02)   
	I0603 10:57:15.186512   25542 main.go:141] libmachine: (ha-683480-m02)   </cpu>
	I0603 10:57:15.186517   25542 main.go:141] libmachine: (ha-683480-m02)   <os>
	I0603 10:57:15.186522   25542 main.go:141] libmachine: (ha-683480-m02)     <type>hvm</type>
	I0603 10:57:15.186527   25542 main.go:141] libmachine: (ha-683480-m02)     <boot dev='cdrom'/>
	I0603 10:57:15.186533   25542 main.go:141] libmachine: (ha-683480-m02)     <boot dev='hd'/>
	I0603 10:57:15.186542   25542 main.go:141] libmachine: (ha-683480-m02)     <bootmenu enable='no'/>
	I0603 10:57:15.186546   25542 main.go:141] libmachine: (ha-683480-m02)   </os>
	I0603 10:57:15.186551   25542 main.go:141] libmachine: (ha-683480-m02)   <devices>
	I0603 10:57:15.186558   25542 main.go:141] libmachine: (ha-683480-m02)     <disk type='file' device='cdrom'>
	I0603 10:57:15.186565   25542 main.go:141] libmachine: (ha-683480-m02)       <source file='/home/jenkins/minikube-integration/19008-7755/.minikube/machines/ha-683480-m02/boot2docker.iso'/>
	I0603 10:57:15.186570   25542 main.go:141] libmachine: (ha-683480-m02)       <target dev='hdc' bus='scsi'/>
	I0603 10:57:15.186578   25542 main.go:141] libmachine: (ha-683480-m02)       <readonly/>
	I0603 10:57:15.186585   25542 main.go:141] libmachine: (ha-683480-m02)     </disk>
	I0603 10:57:15.186591   25542 main.go:141] libmachine: (ha-683480-m02)     <disk type='file' device='disk'>
	I0603 10:57:15.186600   25542 main.go:141] libmachine: (ha-683480-m02)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0603 10:57:15.186629   25542 main.go:141] libmachine: (ha-683480-m02)       <source file='/home/jenkins/minikube-integration/19008-7755/.minikube/machines/ha-683480-m02/ha-683480-m02.rawdisk'/>
	I0603 10:57:15.186651   25542 main.go:141] libmachine: (ha-683480-m02)       <target dev='hda' bus='virtio'/>
	I0603 10:57:15.186665   25542 main.go:141] libmachine: (ha-683480-m02)     </disk>
	I0603 10:57:15.186677   25542 main.go:141] libmachine: (ha-683480-m02)     <interface type='network'>
	I0603 10:57:15.186689   25542 main.go:141] libmachine: (ha-683480-m02)       <source network='mk-ha-683480'/>
	I0603 10:57:15.186701   25542 main.go:141] libmachine: (ha-683480-m02)       <model type='virtio'/>
	I0603 10:57:15.186711   25542 main.go:141] libmachine: (ha-683480-m02)     </interface>
	I0603 10:57:15.186727   25542 main.go:141] libmachine: (ha-683480-m02)     <interface type='network'>
	I0603 10:57:15.186741   25542 main.go:141] libmachine: (ha-683480-m02)       <source network='default'/>
	I0603 10:57:15.186753   25542 main.go:141] libmachine: (ha-683480-m02)       <model type='virtio'/>
	I0603 10:57:15.186766   25542 main.go:141] libmachine: (ha-683480-m02)     </interface>
	I0603 10:57:15.186777   25542 main.go:141] libmachine: (ha-683480-m02)     <serial type='pty'>
	I0603 10:57:15.186790   25542 main.go:141] libmachine: (ha-683480-m02)       <target port='0'/>
	I0603 10:57:15.186806   25542 main.go:141] libmachine: (ha-683480-m02)     </serial>
	I0603 10:57:15.186820   25542 main.go:141] libmachine: (ha-683480-m02)     <console type='pty'>
	I0603 10:57:15.186831   25542 main.go:141] libmachine: (ha-683480-m02)       <target type='serial' port='0'/>
	I0603 10:57:15.186842   25542 main.go:141] libmachine: (ha-683480-m02)     </console>
	I0603 10:57:15.186856   25542 main.go:141] libmachine: (ha-683480-m02)     <rng model='virtio'>
	I0603 10:57:15.186871   25542 main.go:141] libmachine: (ha-683480-m02)       <backend model='random'>/dev/random</backend>
	I0603 10:57:15.186881   25542 main.go:141] libmachine: (ha-683480-m02)     </rng>
	I0603 10:57:15.186897   25542 main.go:141] libmachine: (ha-683480-m02)     
	I0603 10:57:15.186916   25542 main.go:141] libmachine: (ha-683480-m02)     
	I0603 10:57:15.186926   25542 main.go:141] libmachine: (ha-683480-m02)   </devices>
	I0603 10:57:15.186939   25542 main.go:141] libmachine: (ha-683480-m02) </domain>
	I0603 10:57:15.186953   25542 main.go:141] libmachine: (ha-683480-m02) 
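	The domain XML above is what the kvm2 driver defines for the second control-plane VM: 2 vCPUs, 2200 MiB of memory, the boot2docker ISO attached as a bootable CD-ROM, the raw disk image, and two virtio NICs (the private mk-ha-683480 network and libvirt's default network). The same definition and its DHCP lease can be inspected with virsh against the qemu:///system URI from the profile config (illustrative commands, not part of the test run):

	    virsh -c qemu:///system dumpxml ha-683480-m02
	    virsh -c qemu:///system net-dhcp-leases mk-ha-683480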
	I0603 10:57:15.193041   25542 main.go:141] libmachine: (ha-683480-m02) DBG | domain ha-683480-m02 has defined MAC address 52:54:00:3a:60:13 in network default
	I0603 10:57:15.193546   25542 main.go:141] libmachine: (ha-683480-m02) Ensuring networks are active...
	I0603 10:57:15.193566   25542 main.go:141] libmachine: (ha-683480-m02) DBG | domain ha-683480-m02 has defined MAC address 52:54:00:00:55:50 in network mk-ha-683480
	I0603 10:57:15.194084   25542 main.go:141] libmachine: (ha-683480-m02) Ensuring network default is active
	I0603 10:57:15.194316   25542 main.go:141] libmachine: (ha-683480-m02) Ensuring network mk-ha-683480 is active
	I0603 10:57:15.194621   25542 main.go:141] libmachine: (ha-683480-m02) Getting domain xml...
	I0603 10:57:15.195250   25542 main.go:141] libmachine: (ha-683480-m02) Creating domain...
	I0603 10:57:16.367029   25542 main.go:141] libmachine: (ha-683480-m02) Waiting to get IP...
	I0603 10:57:16.367813   25542 main.go:141] libmachine: (ha-683480-m02) DBG | domain ha-683480-m02 has defined MAC address 52:54:00:00:55:50 in network mk-ha-683480
	I0603 10:57:16.368282   25542 main.go:141] libmachine: (ha-683480-m02) DBG | unable to find current IP address of domain ha-683480-m02 in network mk-ha-683480
	I0603 10:57:16.368378   25542 main.go:141] libmachine: (ha-683480-m02) DBG | I0603 10:57:16.368290   25955 retry.go:31] will retry after 193.520583ms: waiting for machine to come up
	I0603 10:57:16.563737   25542 main.go:141] libmachine: (ha-683480-m02) DBG | domain ha-683480-m02 has defined MAC address 52:54:00:00:55:50 in network mk-ha-683480
	I0603 10:57:16.564186   25542 main.go:141] libmachine: (ha-683480-m02) DBG | unable to find current IP address of domain ha-683480-m02 in network mk-ha-683480
	I0603 10:57:16.564211   25542 main.go:141] libmachine: (ha-683480-m02) DBG | I0603 10:57:16.564136   25955 retry.go:31] will retry after 307.356676ms: waiting for machine to come up
	I0603 10:57:16.873758   25542 main.go:141] libmachine: (ha-683480-m02) DBG | domain ha-683480-m02 has defined MAC address 52:54:00:00:55:50 in network mk-ha-683480
	I0603 10:57:16.874264   25542 main.go:141] libmachine: (ha-683480-m02) DBG | unable to find current IP address of domain ha-683480-m02 in network mk-ha-683480
	I0603 10:57:16.874284   25542 main.go:141] libmachine: (ha-683480-m02) DBG | I0603 10:57:16.874225   25955 retry.go:31] will retry after 472.611486ms: waiting for machine to come up
	I0603 10:57:17.349612   25542 main.go:141] libmachine: (ha-683480-m02) DBG | domain ha-683480-m02 has defined MAC address 52:54:00:00:55:50 in network mk-ha-683480
	I0603 10:57:17.350085   25542 main.go:141] libmachine: (ha-683480-m02) DBG | unable to find current IP address of domain ha-683480-m02 in network mk-ha-683480
	I0603 10:57:17.350120   25542 main.go:141] libmachine: (ha-683480-m02) DBG | I0603 10:57:17.350025   25955 retry.go:31] will retry after 591.878376ms: waiting for machine to come up
	I0603 10:57:17.943698   25542 main.go:141] libmachine: (ha-683480-m02) DBG | domain ha-683480-m02 has defined MAC address 52:54:00:00:55:50 in network mk-ha-683480
	I0603 10:57:17.944257   25542 main.go:141] libmachine: (ha-683480-m02) DBG | unable to find current IP address of domain ha-683480-m02 in network mk-ha-683480
	I0603 10:57:17.944284   25542 main.go:141] libmachine: (ha-683480-m02) DBG | I0603 10:57:17.944211   25955 retry.go:31] will retry after 519.190327ms: waiting for machine to come up
	I0603 10:57:18.464918   25542 main.go:141] libmachine: (ha-683480-m02) DBG | domain ha-683480-m02 has defined MAC address 52:54:00:00:55:50 in network mk-ha-683480
	I0603 10:57:18.465352   25542 main.go:141] libmachine: (ha-683480-m02) DBG | unable to find current IP address of domain ha-683480-m02 in network mk-ha-683480
	I0603 10:57:18.465378   25542 main.go:141] libmachine: (ha-683480-m02) DBG | I0603 10:57:18.465309   25955 retry.go:31] will retry after 731.947356ms: waiting for machine to come up
	I0603 10:57:19.199086   25542 main.go:141] libmachine: (ha-683480-m02) DBG | domain ha-683480-m02 has defined MAC address 52:54:00:00:55:50 in network mk-ha-683480
	I0603 10:57:19.199606   25542 main.go:141] libmachine: (ha-683480-m02) DBG | unable to find current IP address of domain ha-683480-m02 in network mk-ha-683480
	I0603 10:57:19.199663   25542 main.go:141] libmachine: (ha-683480-m02) DBG | I0603 10:57:19.199578   25955 retry.go:31] will retry after 811.745735ms: waiting for machine to come up
	I0603 10:57:20.012877   25542 main.go:141] libmachine: (ha-683480-m02) DBG | domain ha-683480-m02 has defined MAC address 52:54:00:00:55:50 in network mk-ha-683480
	I0603 10:57:20.013282   25542 main.go:141] libmachine: (ha-683480-m02) DBG | unable to find current IP address of domain ha-683480-m02 in network mk-ha-683480
	I0603 10:57:20.013311   25542 main.go:141] libmachine: (ha-683480-m02) DBG | I0603 10:57:20.013223   25955 retry.go:31] will retry after 1.069722903s: waiting for machine to come up
	I0603 10:57:21.084068   25542 main.go:141] libmachine: (ha-683480-m02) DBG | domain ha-683480-m02 has defined MAC address 52:54:00:00:55:50 in network mk-ha-683480
	I0603 10:57:21.084430   25542 main.go:141] libmachine: (ha-683480-m02) DBG | unable to find current IP address of domain ha-683480-m02 in network mk-ha-683480
	I0603 10:57:21.084455   25542 main.go:141] libmachine: (ha-683480-m02) DBG | I0603 10:57:21.084391   25955 retry.go:31] will retry after 1.701630144s: waiting for machine to come up
	I0603 10:57:22.788183   25542 main.go:141] libmachine: (ha-683480-m02) DBG | domain ha-683480-m02 has defined MAC address 52:54:00:00:55:50 in network mk-ha-683480
	I0603 10:57:22.788532   25542 main.go:141] libmachine: (ha-683480-m02) DBG | unable to find current IP address of domain ha-683480-m02 in network mk-ha-683480
	I0603 10:57:22.788560   25542 main.go:141] libmachine: (ha-683480-m02) DBG | I0603 10:57:22.788496   25955 retry.go:31] will retry after 2.200034704s: waiting for machine to come up
	I0603 10:57:24.990706   25542 main.go:141] libmachine: (ha-683480-m02) DBG | domain ha-683480-m02 has defined MAC address 52:54:00:00:55:50 in network mk-ha-683480
	I0603 10:57:24.991153   25542 main.go:141] libmachine: (ha-683480-m02) DBG | unable to find current IP address of domain ha-683480-m02 in network mk-ha-683480
	I0603 10:57:24.991180   25542 main.go:141] libmachine: (ha-683480-m02) DBG | I0603 10:57:24.991102   25955 retry.go:31] will retry after 2.006922002s: waiting for machine to come up
	I0603 10:57:27.000099   25542 main.go:141] libmachine: (ha-683480-m02) DBG | domain ha-683480-m02 has defined MAC address 52:54:00:00:55:50 in network mk-ha-683480
	I0603 10:57:27.000520   25542 main.go:141] libmachine: (ha-683480-m02) DBG | unable to find current IP address of domain ha-683480-m02 in network mk-ha-683480
	I0603 10:57:27.000551   25542 main.go:141] libmachine: (ha-683480-m02) DBG | I0603 10:57:27.000478   25955 retry.go:31] will retry after 3.012739848s: waiting for machine to come up
	I0603 10:57:30.014260   25542 main.go:141] libmachine: (ha-683480-m02) DBG | domain ha-683480-m02 has defined MAC address 52:54:00:00:55:50 in network mk-ha-683480
	I0603 10:57:30.014617   25542 main.go:141] libmachine: (ha-683480-m02) DBG | unable to find current IP address of domain ha-683480-m02 in network mk-ha-683480
	I0603 10:57:30.014645   25542 main.go:141] libmachine: (ha-683480-m02) DBG | I0603 10:57:30.014569   25955 retry.go:31] will retry after 3.749957057s: waiting for machine to come up
	I0603 10:57:33.768377   25542 main.go:141] libmachine: (ha-683480-m02) DBG | domain ha-683480-m02 has defined MAC address 52:54:00:00:55:50 in network mk-ha-683480
	I0603 10:57:33.768786   25542 main.go:141] libmachine: (ha-683480-m02) DBG | unable to find current IP address of domain ha-683480-m02 in network mk-ha-683480
	I0603 10:57:33.768814   25542 main.go:141] libmachine: (ha-683480-m02) DBG | I0603 10:57:33.768748   25955 retry.go:31] will retry after 4.367337728s: waiting for machine to come up
	I0603 10:57:38.140449   25542 main.go:141] libmachine: (ha-683480-m02) DBG | domain ha-683480-m02 has defined MAC address 52:54:00:00:55:50 in network mk-ha-683480
	I0603 10:57:38.140780   25542 main.go:141] libmachine: (ha-683480-m02) DBG | domain ha-683480-m02 has current primary IP address 192.168.39.127 and MAC address 52:54:00:00:55:50 in network mk-ha-683480
	I0603 10:57:38.140812   25542 main.go:141] libmachine: (ha-683480-m02) Found IP for machine: 192.168.39.127
	I0603 10:57:38.140826   25542 main.go:141] libmachine: (ha-683480-m02) Reserving static IP address...
	I0603 10:57:38.141205   25542 main.go:141] libmachine: (ha-683480-m02) DBG | unable to find host DHCP lease matching {name: "ha-683480-m02", mac: "52:54:00:00:55:50", ip: "192.168.39.127"} in network mk-ha-683480
	I0603 10:57:38.210897   25542 main.go:141] libmachine: (ha-683480-m02) DBG | Getting to WaitForSSH function...
	I0603 10:57:38.210931   25542 main.go:141] libmachine: (ha-683480-m02) Reserved static IP address: 192.168.39.127
	I0603 10:57:38.210944   25542 main.go:141] libmachine: (ha-683480-m02) Waiting for SSH to be available...
	I0603 10:57:38.213534   25542 main.go:141] libmachine: (ha-683480-m02) DBG | domain ha-683480-m02 has defined MAC address 52:54:00:00:55:50 in network mk-ha-683480
	I0603 10:57:38.213888   25542 main.go:141] libmachine: (ha-683480-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:55:50", ip: ""} in network mk-ha-683480: {Iface:virbr1 ExpiryTime:2024-06-03 11:57:29 +0000 UTC Type:0 Mac:52:54:00:00:55:50 Iaid: IPaddr:192.168.39.127 Prefix:24 Hostname:minikube Clientid:01:52:54:00:00:55:50}
	I0603 10:57:38.213910   25542 main.go:141] libmachine: (ha-683480-m02) DBG | domain ha-683480-m02 has defined IP address 192.168.39.127 and MAC address 52:54:00:00:55:50 in network mk-ha-683480
	I0603 10:57:38.214073   25542 main.go:141] libmachine: (ha-683480-m02) DBG | Using SSH client type: external
	I0603 10:57:38.214097   25542 main.go:141] libmachine: (ha-683480-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/19008-7755/.minikube/machines/ha-683480-m02/id_rsa (-rw-------)
	I0603 10:57:38.214129   25542 main.go:141] libmachine: (ha-683480-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.127 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19008-7755/.minikube/machines/ha-683480-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0603 10:57:38.214139   25542 main.go:141] libmachine: (ha-683480-m02) DBG | About to run SSH command:
	I0603 10:57:38.214149   25542 main.go:141] libmachine: (ha-683480-m02) DBG | exit 0
	I0603 10:57:38.339014   25542 main.go:141] libmachine: (ha-683480-m02) DBG | SSH cmd err, output: <nil>: 
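	SSH readiness is probed by running "exit 0" through an external ssh client with host-key checking disabled and the machine's generated key, as dumped above. The equivalent manual invocation, trimmed to the significant options from that dump:

	    ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null \
	        -o IdentitiesOnly=yes -o PasswordAuthentication=no \
	        -i /home/jenkins/minikube-integration/19008-7755/.minikube/machines/ha-683480-m02/id_rsa \
	        docker@192.168.39.127 'exit 0'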
	I0603 10:57:38.339297   25542 main.go:141] libmachine: (ha-683480-m02) KVM machine creation complete!
	I0603 10:57:38.339651   25542 main.go:141] libmachine: (ha-683480-m02) Calling .GetConfigRaw
	I0603 10:57:38.340266   25542 main.go:141] libmachine: (ha-683480-m02) Calling .DriverName
	I0603 10:57:38.340453   25542 main.go:141] libmachine: (ha-683480-m02) Calling .DriverName
	I0603 10:57:38.340608   25542 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0603 10:57:38.340624   25542 main.go:141] libmachine: (ha-683480-m02) Calling .GetState
	I0603 10:57:38.341870   25542 main.go:141] libmachine: Detecting operating system of created instance...
	I0603 10:57:38.341886   25542 main.go:141] libmachine: Waiting for SSH to be available...
	I0603 10:57:38.341897   25542 main.go:141] libmachine: Getting to WaitForSSH function...
	I0603 10:57:38.341907   25542 main.go:141] libmachine: (ha-683480-m02) Calling .GetSSHHostname
	I0603 10:57:38.344129   25542 main.go:141] libmachine: (ha-683480-m02) DBG | domain ha-683480-m02 has defined MAC address 52:54:00:00:55:50 in network mk-ha-683480
	I0603 10:57:38.344460   25542 main.go:141] libmachine: (ha-683480-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:55:50", ip: ""} in network mk-ha-683480: {Iface:virbr1 ExpiryTime:2024-06-03 11:57:29 +0000 UTC Type:0 Mac:52:54:00:00:55:50 Iaid: IPaddr:192.168.39.127 Prefix:24 Hostname:ha-683480-m02 Clientid:01:52:54:00:00:55:50}
	I0603 10:57:38.344484   25542 main.go:141] libmachine: (ha-683480-m02) DBG | domain ha-683480-m02 has defined IP address 192.168.39.127 and MAC address 52:54:00:00:55:50 in network mk-ha-683480
	I0603 10:57:38.344614   25542 main.go:141] libmachine: (ha-683480-m02) Calling .GetSSHPort
	I0603 10:57:38.344772   25542 main.go:141] libmachine: (ha-683480-m02) Calling .GetSSHKeyPath
	I0603 10:57:38.344907   25542 main.go:141] libmachine: (ha-683480-m02) Calling .GetSSHKeyPath
	I0603 10:57:38.345048   25542 main.go:141] libmachine: (ha-683480-m02) Calling .GetSSHUsername
	I0603 10:57:38.345204   25542 main.go:141] libmachine: Using SSH client type: native
	I0603 10:57:38.345429   25542 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.127 22 <nil> <nil>}
	I0603 10:57:38.345447   25542 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0603 10:57:38.454187   25542 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0603 10:57:38.454213   25542 main.go:141] libmachine: Detecting the provisioner...
	I0603 10:57:38.454226   25542 main.go:141] libmachine: (ha-683480-m02) Calling .GetSSHHostname
	I0603 10:57:38.457069   25542 main.go:141] libmachine: (ha-683480-m02) DBG | domain ha-683480-m02 has defined MAC address 52:54:00:00:55:50 in network mk-ha-683480
	I0603 10:57:38.457474   25542 main.go:141] libmachine: (ha-683480-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:55:50", ip: ""} in network mk-ha-683480: {Iface:virbr1 ExpiryTime:2024-06-03 11:57:29 +0000 UTC Type:0 Mac:52:54:00:00:55:50 Iaid: IPaddr:192.168.39.127 Prefix:24 Hostname:ha-683480-m02 Clientid:01:52:54:00:00:55:50}
	I0603 10:57:38.457502   25542 main.go:141] libmachine: (ha-683480-m02) DBG | domain ha-683480-m02 has defined IP address 192.168.39.127 and MAC address 52:54:00:00:55:50 in network mk-ha-683480
	I0603 10:57:38.457644   25542 main.go:141] libmachine: (ha-683480-m02) Calling .GetSSHPort
	I0603 10:57:38.457862   25542 main.go:141] libmachine: (ha-683480-m02) Calling .GetSSHKeyPath
	I0603 10:57:38.458059   25542 main.go:141] libmachine: (ha-683480-m02) Calling .GetSSHKeyPath
	I0603 10:57:38.458221   25542 main.go:141] libmachine: (ha-683480-m02) Calling .GetSSHUsername
	I0603 10:57:38.458416   25542 main.go:141] libmachine: Using SSH client type: native
	I0603 10:57:38.458568   25542 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.127 22 <nil> <nil>}
	I0603 10:57:38.458578   25542 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0603 10:57:38.567543   25542 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0603 10:57:38.567592   25542 main.go:141] libmachine: found compatible host: buildroot
	I0603 10:57:38.567598   25542 main.go:141] libmachine: Provisioning with buildroot...
	I0603 10:57:38.567605   25542 main.go:141] libmachine: (ha-683480-m02) Calling .GetMachineName
	I0603 10:57:38.567885   25542 buildroot.go:166] provisioning hostname "ha-683480-m02"
	I0603 10:57:38.567912   25542 main.go:141] libmachine: (ha-683480-m02) Calling .GetMachineName
	I0603 10:57:38.568110   25542 main.go:141] libmachine: (ha-683480-m02) Calling .GetSSHHostname
	I0603 10:57:38.570679   25542 main.go:141] libmachine: (ha-683480-m02) DBG | domain ha-683480-m02 has defined MAC address 52:54:00:00:55:50 in network mk-ha-683480
	I0603 10:57:38.571058   25542 main.go:141] libmachine: (ha-683480-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:55:50", ip: ""} in network mk-ha-683480: {Iface:virbr1 ExpiryTime:2024-06-03 11:57:29 +0000 UTC Type:0 Mac:52:54:00:00:55:50 Iaid: IPaddr:192.168.39.127 Prefix:24 Hostname:ha-683480-m02 Clientid:01:52:54:00:00:55:50}
	I0603 10:57:38.571091   25542 main.go:141] libmachine: (ha-683480-m02) DBG | domain ha-683480-m02 has defined IP address 192.168.39.127 and MAC address 52:54:00:00:55:50 in network mk-ha-683480
	I0603 10:57:38.571206   25542 main.go:141] libmachine: (ha-683480-m02) Calling .GetSSHPort
	I0603 10:57:38.571372   25542 main.go:141] libmachine: (ha-683480-m02) Calling .GetSSHKeyPath
	I0603 10:57:38.571513   25542 main.go:141] libmachine: (ha-683480-m02) Calling .GetSSHKeyPath
	I0603 10:57:38.571611   25542 main.go:141] libmachine: (ha-683480-m02) Calling .GetSSHUsername
	I0603 10:57:38.571766   25542 main.go:141] libmachine: Using SSH client type: native
	I0603 10:57:38.571925   25542 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.127 22 <nil> <nil>}
	I0603 10:57:38.571938   25542 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-683480-m02 && echo "ha-683480-m02" | sudo tee /etc/hostname
	I0603 10:57:38.699854   25542 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-683480-m02
	
	I0603 10:57:38.699883   25542 main.go:141] libmachine: (ha-683480-m02) Calling .GetSSHHostname
	I0603 10:57:38.702515   25542 main.go:141] libmachine: (ha-683480-m02) DBG | domain ha-683480-m02 has defined MAC address 52:54:00:00:55:50 in network mk-ha-683480
	I0603 10:57:38.702888   25542 main.go:141] libmachine: (ha-683480-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:55:50", ip: ""} in network mk-ha-683480: {Iface:virbr1 ExpiryTime:2024-06-03 11:57:29 +0000 UTC Type:0 Mac:52:54:00:00:55:50 Iaid: IPaddr:192.168.39.127 Prefix:24 Hostname:ha-683480-m02 Clientid:01:52:54:00:00:55:50}
	I0603 10:57:38.702914   25542 main.go:141] libmachine: (ha-683480-m02) DBG | domain ha-683480-m02 has defined IP address 192.168.39.127 and MAC address 52:54:00:00:55:50 in network mk-ha-683480
	I0603 10:57:38.703106   25542 main.go:141] libmachine: (ha-683480-m02) Calling .GetSSHPort
	I0603 10:57:38.703302   25542 main.go:141] libmachine: (ha-683480-m02) Calling .GetSSHKeyPath
	I0603 10:57:38.703430   25542 main.go:141] libmachine: (ha-683480-m02) Calling .GetSSHKeyPath
	I0603 10:57:38.703574   25542 main.go:141] libmachine: (ha-683480-m02) Calling .GetSSHUsername
	I0603 10:57:38.703754   25542 main.go:141] libmachine: Using SSH client type: native
	I0603 10:57:38.703899   25542 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.127 22 <nil> <nil>}
	I0603 10:57:38.703914   25542 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-683480-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-683480-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-683480-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0603 10:57:38.825075   25542 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0603 10:57:38.825101   25542 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19008-7755/.minikube CaCertPath:/home/jenkins/minikube-integration/19008-7755/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19008-7755/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19008-7755/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19008-7755/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19008-7755/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19008-7755/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19008-7755/.minikube}
	I0603 10:57:38.825120   25542 buildroot.go:174] setting up certificates
	I0603 10:57:38.825130   25542 provision.go:84] configureAuth start
	I0603 10:57:38.825142   25542 main.go:141] libmachine: (ha-683480-m02) Calling .GetMachineName
	I0603 10:57:38.825434   25542 main.go:141] libmachine: (ha-683480-m02) Calling .GetIP
	I0603 10:57:38.827697   25542 main.go:141] libmachine: (ha-683480-m02) DBG | domain ha-683480-m02 has defined MAC address 52:54:00:00:55:50 in network mk-ha-683480
	I0603 10:57:38.828103   25542 main.go:141] libmachine: (ha-683480-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:55:50", ip: ""} in network mk-ha-683480: {Iface:virbr1 ExpiryTime:2024-06-03 11:57:29 +0000 UTC Type:0 Mac:52:54:00:00:55:50 Iaid: IPaddr:192.168.39.127 Prefix:24 Hostname:ha-683480-m02 Clientid:01:52:54:00:00:55:50}
	I0603 10:57:38.828124   25542 main.go:141] libmachine: (ha-683480-m02) DBG | domain ha-683480-m02 has defined IP address 192.168.39.127 and MAC address 52:54:00:00:55:50 in network mk-ha-683480
	I0603 10:57:38.828205   25542 main.go:141] libmachine: (ha-683480-m02) Calling .GetSSHHostname
	I0603 10:57:38.830403   25542 main.go:141] libmachine: (ha-683480-m02) DBG | domain ha-683480-m02 has defined MAC address 52:54:00:00:55:50 in network mk-ha-683480
	I0603 10:57:38.830720   25542 main.go:141] libmachine: (ha-683480-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:55:50", ip: ""} in network mk-ha-683480: {Iface:virbr1 ExpiryTime:2024-06-03 11:57:29 +0000 UTC Type:0 Mac:52:54:00:00:55:50 Iaid: IPaddr:192.168.39.127 Prefix:24 Hostname:ha-683480-m02 Clientid:01:52:54:00:00:55:50}
	I0603 10:57:38.830747   25542 main.go:141] libmachine: (ha-683480-m02) DBG | domain ha-683480-m02 has defined IP address 192.168.39.127 and MAC address 52:54:00:00:55:50 in network mk-ha-683480
	I0603 10:57:38.830907   25542 provision.go:143] copyHostCerts
	I0603 10:57:38.830943   25542 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19008-7755/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19008-7755/.minikube/ca.pem
	I0603 10:57:38.830981   25542 exec_runner.go:144] found /home/jenkins/minikube-integration/19008-7755/.minikube/ca.pem, removing ...
	I0603 10:57:38.830993   25542 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19008-7755/.minikube/ca.pem
	I0603 10:57:38.831090   25542 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19008-7755/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19008-7755/.minikube/ca.pem (1082 bytes)
	I0603 10:57:38.831210   25542 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19008-7755/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19008-7755/.minikube/cert.pem
	I0603 10:57:38.831236   25542 exec_runner.go:144] found /home/jenkins/minikube-integration/19008-7755/.minikube/cert.pem, removing ...
	I0603 10:57:38.831243   25542 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19008-7755/.minikube/cert.pem
	I0603 10:57:38.831285   25542 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19008-7755/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19008-7755/.minikube/cert.pem (1123 bytes)
	I0603 10:57:38.831358   25542 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19008-7755/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19008-7755/.minikube/key.pem
	I0603 10:57:38.831381   25542 exec_runner.go:144] found /home/jenkins/minikube-integration/19008-7755/.minikube/key.pem, removing ...
	I0603 10:57:38.831390   25542 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19008-7755/.minikube/key.pem
	I0603 10:57:38.831423   25542 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19008-7755/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19008-7755/.minikube/key.pem (1679 bytes)
	I0603 10:57:38.831488   25542 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19008-7755/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19008-7755/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19008-7755/.minikube/certs/ca-key.pem org=jenkins.ha-683480-m02 san=[127.0.0.1 192.168.39.127 ha-683480-m02 localhost minikube]
	I0603 10:57:39.107965   25542 provision.go:177] copyRemoteCerts
	I0603 10:57:39.108014   25542 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0603 10:57:39.108035   25542 main.go:141] libmachine: (ha-683480-m02) Calling .GetSSHHostname
	I0603 10:57:39.110672   25542 main.go:141] libmachine: (ha-683480-m02) DBG | domain ha-683480-m02 has defined MAC address 52:54:00:00:55:50 in network mk-ha-683480
	I0603 10:57:39.111004   25542 main.go:141] libmachine: (ha-683480-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:55:50", ip: ""} in network mk-ha-683480: {Iface:virbr1 ExpiryTime:2024-06-03 11:57:29 +0000 UTC Type:0 Mac:52:54:00:00:55:50 Iaid: IPaddr:192.168.39.127 Prefix:24 Hostname:ha-683480-m02 Clientid:01:52:54:00:00:55:50}
	I0603 10:57:39.111027   25542 main.go:141] libmachine: (ha-683480-m02) DBG | domain ha-683480-m02 has defined IP address 192.168.39.127 and MAC address 52:54:00:00:55:50 in network mk-ha-683480
	I0603 10:57:39.111216   25542 main.go:141] libmachine: (ha-683480-m02) Calling .GetSSHPort
	I0603 10:57:39.111402   25542 main.go:141] libmachine: (ha-683480-m02) Calling .GetSSHKeyPath
	I0603 10:57:39.111574   25542 main.go:141] libmachine: (ha-683480-m02) Calling .GetSSHUsername
	I0603 10:57:39.111710   25542 sshutil.go:53] new ssh client: &{IP:192.168.39.127 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19008-7755/.minikube/machines/ha-683480-m02/id_rsa Username:docker}
	I0603 10:57:39.197801   25542 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19008-7755/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0603 10:57:39.197910   25542 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0603 10:57:39.222353   25542 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19008-7755/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0603 10:57:39.222414   25542 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0603 10:57:39.245793   25542 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19008-7755/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0603 10:57:39.245849   25542 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0603 10:57:39.269140   25542 provision.go:87] duration metric: took 443.997515ms to configureAuth
	I0603 10:57:39.269166   25542 buildroot.go:189] setting minikube options for container-runtime
	I0603 10:57:39.269358   25542 config.go:182] Loaded profile config "ha-683480": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0603 10:57:39.269435   25542 main.go:141] libmachine: (ha-683480-m02) Calling .GetSSHHostname
	I0603 10:57:39.271993   25542 main.go:141] libmachine: (ha-683480-m02) DBG | domain ha-683480-m02 has defined MAC address 52:54:00:00:55:50 in network mk-ha-683480
	I0603 10:57:39.272380   25542 main.go:141] libmachine: (ha-683480-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:55:50", ip: ""} in network mk-ha-683480: {Iface:virbr1 ExpiryTime:2024-06-03 11:57:29 +0000 UTC Type:0 Mac:52:54:00:00:55:50 Iaid: IPaddr:192.168.39.127 Prefix:24 Hostname:ha-683480-m02 Clientid:01:52:54:00:00:55:50}
	I0603 10:57:39.272405   25542 main.go:141] libmachine: (ha-683480-m02) DBG | domain ha-683480-m02 has defined IP address 192.168.39.127 and MAC address 52:54:00:00:55:50 in network mk-ha-683480
	I0603 10:57:39.272569   25542 main.go:141] libmachine: (ha-683480-m02) Calling .GetSSHPort
	I0603 10:57:39.272752   25542 main.go:141] libmachine: (ha-683480-m02) Calling .GetSSHKeyPath
	I0603 10:57:39.272923   25542 main.go:141] libmachine: (ha-683480-m02) Calling .GetSSHKeyPath
	I0603 10:57:39.273026   25542 main.go:141] libmachine: (ha-683480-m02) Calling .GetSSHUsername
	I0603 10:57:39.273151   25542 main.go:141] libmachine: Using SSH client type: native
	I0603 10:57:39.273294   25542 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.127 22 <nil> <nil>}
	I0603 10:57:39.273307   25542 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0603 10:57:39.545206   25542 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0603 10:57:39.545231   25542 main.go:141] libmachine: Checking connection to Docker...
	I0603 10:57:39.545241   25542 main.go:141] libmachine: (ha-683480-m02) Calling .GetURL
	I0603 10:57:39.546449   25542 main.go:141] libmachine: (ha-683480-m02) DBG | Using libvirt version 6000000
	I0603 10:57:39.548732   25542 main.go:141] libmachine: (ha-683480-m02) DBG | domain ha-683480-m02 has defined MAC address 52:54:00:00:55:50 in network mk-ha-683480
	I0603 10:57:39.548969   25542 main.go:141] libmachine: (ha-683480-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:55:50", ip: ""} in network mk-ha-683480: {Iface:virbr1 ExpiryTime:2024-06-03 11:57:29 +0000 UTC Type:0 Mac:52:54:00:00:55:50 Iaid: IPaddr:192.168.39.127 Prefix:24 Hostname:ha-683480-m02 Clientid:01:52:54:00:00:55:50}
	I0603 10:57:39.548992   25542 main.go:141] libmachine: (ha-683480-m02) DBG | domain ha-683480-m02 has defined IP address 192.168.39.127 and MAC address 52:54:00:00:55:50 in network mk-ha-683480
	I0603 10:57:39.549112   25542 main.go:141] libmachine: Docker is up and running!
	I0603 10:57:39.549133   25542 main.go:141] libmachine: Reticulating splines...
	I0603 10:57:39.549141   25542 client.go:171] duration metric: took 25.115303041s to LocalClient.Create
	I0603 10:57:39.549168   25542 start.go:167] duration metric: took 25.115364199s to libmachine.API.Create "ha-683480"
	I0603 10:57:39.549180   25542 start.go:293] postStartSetup for "ha-683480-m02" (driver="kvm2")
	I0603 10:57:39.549189   25542 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0603 10:57:39.549214   25542 main.go:141] libmachine: (ha-683480-m02) Calling .DriverName
	I0603 10:57:39.549468   25542 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0603 10:57:39.549497   25542 main.go:141] libmachine: (ha-683480-m02) Calling .GetSSHHostname
	I0603 10:57:39.551413   25542 main.go:141] libmachine: (ha-683480-m02) DBG | domain ha-683480-m02 has defined MAC address 52:54:00:00:55:50 in network mk-ha-683480
	I0603 10:57:39.551696   25542 main.go:141] libmachine: (ha-683480-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:55:50", ip: ""} in network mk-ha-683480: {Iface:virbr1 ExpiryTime:2024-06-03 11:57:29 +0000 UTC Type:0 Mac:52:54:00:00:55:50 Iaid: IPaddr:192.168.39.127 Prefix:24 Hostname:ha-683480-m02 Clientid:01:52:54:00:00:55:50}
	I0603 10:57:39.551724   25542 main.go:141] libmachine: (ha-683480-m02) DBG | domain ha-683480-m02 has defined IP address 192.168.39.127 and MAC address 52:54:00:00:55:50 in network mk-ha-683480
	I0603 10:57:39.551851   25542 main.go:141] libmachine: (ha-683480-m02) Calling .GetSSHPort
	I0603 10:57:39.552006   25542 main.go:141] libmachine: (ha-683480-m02) Calling .GetSSHKeyPath
	I0603 10:57:39.552150   25542 main.go:141] libmachine: (ha-683480-m02) Calling .GetSSHUsername
	I0603 10:57:39.552272   25542 sshutil.go:53] new ssh client: &{IP:192.168.39.127 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19008-7755/.minikube/machines/ha-683480-m02/id_rsa Username:docker}
	I0603 10:57:39.637726   25542 ssh_runner.go:195] Run: cat /etc/os-release
	I0603 10:57:39.641942   25542 info.go:137] Remote host: Buildroot 2023.02.9
	I0603 10:57:39.641968   25542 filesync.go:126] Scanning /home/jenkins/minikube-integration/19008-7755/.minikube/addons for local assets ...
	I0603 10:57:39.642050   25542 filesync.go:126] Scanning /home/jenkins/minikube-integration/19008-7755/.minikube/files for local assets ...
	I0603 10:57:39.642123   25542 filesync.go:149] local asset: /home/jenkins/minikube-integration/19008-7755/.minikube/files/etc/ssl/certs/150282.pem -> 150282.pem in /etc/ssl/certs
	I0603 10:57:39.642134   25542 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19008-7755/.minikube/files/etc/ssl/certs/150282.pem -> /etc/ssl/certs/150282.pem
	I0603 10:57:39.642213   25542 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0603 10:57:39.653742   25542 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/files/etc/ssl/certs/150282.pem --> /etc/ssl/certs/150282.pem (1708 bytes)
	I0603 10:57:39.678630   25542 start.go:296] duration metric: took 129.438249ms for postStartSetup
	I0603 10:57:39.678681   25542 main.go:141] libmachine: (ha-683480-m02) Calling .GetConfigRaw
	I0603 10:57:39.679251   25542 main.go:141] libmachine: (ha-683480-m02) Calling .GetIP
	I0603 10:57:39.681795   25542 main.go:141] libmachine: (ha-683480-m02) DBG | domain ha-683480-m02 has defined MAC address 52:54:00:00:55:50 in network mk-ha-683480
	I0603 10:57:39.682126   25542 main.go:141] libmachine: (ha-683480-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:55:50", ip: ""} in network mk-ha-683480: {Iface:virbr1 ExpiryTime:2024-06-03 11:57:29 +0000 UTC Type:0 Mac:52:54:00:00:55:50 Iaid: IPaddr:192.168.39.127 Prefix:24 Hostname:ha-683480-m02 Clientid:01:52:54:00:00:55:50}
	I0603 10:57:39.682154   25542 main.go:141] libmachine: (ha-683480-m02) DBG | domain ha-683480-m02 has defined IP address 192.168.39.127 and MAC address 52:54:00:00:55:50 in network mk-ha-683480
	I0603 10:57:39.682417   25542 profile.go:143] Saving config to /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/ha-683480/config.json ...
	I0603 10:57:39.682615   25542 start.go:128] duration metric: took 25.265871916s to createHost
	I0603 10:57:39.682648   25542 main.go:141] libmachine: (ha-683480-m02) Calling .GetSSHHostname
	I0603 10:57:39.684431   25542 main.go:141] libmachine: (ha-683480-m02) DBG | domain ha-683480-m02 has defined MAC address 52:54:00:00:55:50 in network mk-ha-683480
	I0603 10:57:39.684696   25542 main.go:141] libmachine: (ha-683480-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:55:50", ip: ""} in network mk-ha-683480: {Iface:virbr1 ExpiryTime:2024-06-03 11:57:29 +0000 UTC Type:0 Mac:52:54:00:00:55:50 Iaid: IPaddr:192.168.39.127 Prefix:24 Hostname:ha-683480-m02 Clientid:01:52:54:00:00:55:50}
	I0603 10:57:39.684718   25542 main.go:141] libmachine: (ha-683480-m02) DBG | domain ha-683480-m02 has defined IP address 192.168.39.127 and MAC address 52:54:00:00:55:50 in network mk-ha-683480
	I0603 10:57:39.684848   25542 main.go:141] libmachine: (ha-683480-m02) Calling .GetSSHPort
	I0603 10:57:39.685001   25542 main.go:141] libmachine: (ha-683480-m02) Calling .GetSSHKeyPath
	I0603 10:57:39.685174   25542 main.go:141] libmachine: (ha-683480-m02) Calling .GetSSHKeyPath
	I0603 10:57:39.685302   25542 main.go:141] libmachine: (ha-683480-m02) Calling .GetSSHUsername
	I0603 10:57:39.685451   25542 main.go:141] libmachine: Using SSH client type: native
	I0603 10:57:39.685594   25542 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.127 22 <nil> <nil>}
	I0603 10:57:39.685603   25542 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0603 10:57:39.795626   25542 main.go:141] libmachine: SSH cmd err, output: <nil>: 1717412259.774276019
	
	I0603 10:57:39.795648   25542 fix.go:216] guest clock: 1717412259.774276019
	I0603 10:57:39.795657   25542 fix.go:229] Guest: 2024-06-03 10:57:39.774276019 +0000 UTC Remote: 2024-06-03 10:57:39.682626665 +0000 UTC m=+85.249265737 (delta=91.649354ms)
	I0603 10:57:39.795677   25542 fix.go:200] guest clock delta is within tolerance: 91.649354ms
	I0603 10:57:39.795683   25542 start.go:83] releasing machines lock for "ha-683480-m02", held for 25.379057048s
	I0603 10:57:39.795701   25542 main.go:141] libmachine: (ha-683480-m02) Calling .DriverName
	I0603 10:57:39.795919   25542 main.go:141] libmachine: (ha-683480-m02) Calling .GetIP
	I0603 10:57:39.798489   25542 main.go:141] libmachine: (ha-683480-m02) DBG | domain ha-683480-m02 has defined MAC address 52:54:00:00:55:50 in network mk-ha-683480
	I0603 10:57:39.798870   25542 main.go:141] libmachine: (ha-683480-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:55:50", ip: ""} in network mk-ha-683480: {Iface:virbr1 ExpiryTime:2024-06-03 11:57:29 +0000 UTC Type:0 Mac:52:54:00:00:55:50 Iaid: IPaddr:192.168.39.127 Prefix:24 Hostname:ha-683480-m02 Clientid:01:52:54:00:00:55:50}
	I0603 10:57:39.798900   25542 main.go:141] libmachine: (ha-683480-m02) DBG | domain ha-683480-m02 has defined IP address 192.168.39.127 and MAC address 52:54:00:00:55:50 in network mk-ha-683480
	I0603 10:57:39.801119   25542 out.go:177] * Found network options:
	I0603 10:57:39.802274   25542 out.go:177]   - NO_PROXY=192.168.39.116
	W0603 10:57:39.803350   25542 proxy.go:119] fail to check proxy env: Error ip not in block
	I0603 10:57:39.803374   25542 main.go:141] libmachine: (ha-683480-m02) Calling .DriverName
	I0603 10:57:39.803860   25542 main.go:141] libmachine: (ha-683480-m02) Calling .DriverName
	I0603 10:57:39.804050   25542 main.go:141] libmachine: (ha-683480-m02) Calling .DriverName
	I0603 10:57:39.804125   25542 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0603 10:57:39.804165   25542 main.go:141] libmachine: (ha-683480-m02) Calling .GetSSHHostname
	W0603 10:57:39.804248   25542 proxy.go:119] fail to check proxy env: Error ip not in block
	I0603 10:57:39.804302   25542 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0603 10:57:39.804316   25542 main.go:141] libmachine: (ha-683480-m02) Calling .GetSSHHostname
	I0603 10:57:39.806531   25542 main.go:141] libmachine: (ha-683480-m02) DBG | domain ha-683480-m02 has defined MAC address 52:54:00:00:55:50 in network mk-ha-683480
	I0603 10:57:39.806823   25542 main.go:141] libmachine: (ha-683480-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:55:50", ip: ""} in network mk-ha-683480: {Iface:virbr1 ExpiryTime:2024-06-03 11:57:29 +0000 UTC Type:0 Mac:52:54:00:00:55:50 Iaid: IPaddr:192.168.39.127 Prefix:24 Hostname:ha-683480-m02 Clientid:01:52:54:00:00:55:50}
	I0603 10:57:39.806852   25542 main.go:141] libmachine: (ha-683480-m02) DBG | domain ha-683480-m02 has defined IP address 192.168.39.127 and MAC address 52:54:00:00:55:50 in network mk-ha-683480
	I0603 10:57:39.806870   25542 main.go:141] libmachine: (ha-683480-m02) DBG | domain ha-683480-m02 has defined MAC address 52:54:00:00:55:50 in network mk-ha-683480
	I0603 10:57:39.806942   25542 main.go:141] libmachine: (ha-683480-m02) Calling .GetSSHPort
	I0603 10:57:39.807110   25542 main.go:141] libmachine: (ha-683480-m02) Calling .GetSSHKeyPath
	I0603 10:57:39.807247   25542 main.go:141] libmachine: (ha-683480-m02) Calling .GetSSHUsername
	I0603 10:57:39.807348   25542 main.go:141] libmachine: (ha-683480-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:55:50", ip: ""} in network mk-ha-683480: {Iface:virbr1 ExpiryTime:2024-06-03 11:57:29 +0000 UTC Type:0 Mac:52:54:00:00:55:50 Iaid: IPaddr:192.168.39.127 Prefix:24 Hostname:ha-683480-m02 Clientid:01:52:54:00:00:55:50}
	I0603 10:57:39.807371   25542 main.go:141] libmachine: (ha-683480-m02) DBG | domain ha-683480-m02 has defined IP address 192.168.39.127 and MAC address 52:54:00:00:55:50 in network mk-ha-683480
	I0603 10:57:39.807368   25542 sshutil.go:53] new ssh client: &{IP:192.168.39.127 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19008-7755/.minikube/machines/ha-683480-m02/id_rsa Username:docker}
	I0603 10:57:39.807512   25542 main.go:141] libmachine: (ha-683480-m02) Calling .GetSSHPort
	I0603 10:57:39.807626   25542 main.go:141] libmachine: (ha-683480-m02) Calling .GetSSHKeyPath
	I0603 10:57:39.807761   25542 main.go:141] libmachine: (ha-683480-m02) Calling .GetSSHUsername
	I0603 10:57:39.807895   25542 sshutil.go:53] new ssh client: &{IP:192.168.39.127 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19008-7755/.minikube/machines/ha-683480-m02/id_rsa Username:docker}
	I0603 10:57:40.039708   25542 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0603 10:57:40.046153   25542 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0603 10:57:40.046209   25542 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0603 10:57:40.061772   25542 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0603 10:57:40.061788   25542 start.go:494] detecting cgroup driver to use...
	I0603 10:57:40.061842   25542 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0603 10:57:40.076598   25542 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0603 10:57:40.089894   25542 docker.go:217] disabling cri-docker service (if available) ...
	I0603 10:57:40.089939   25542 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0603 10:57:40.102789   25542 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0603 10:57:40.115706   25542 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0603 10:57:40.225777   25542 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0603 10:57:40.385561   25542 docker.go:233] disabling docker service ...
	I0603 10:57:40.385622   25542 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0603 10:57:40.399183   25542 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0603 10:57:40.411841   25542 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0603 10:57:40.523097   25542 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0603 10:57:40.637561   25542 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0603 10:57:40.652100   25542 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0603 10:57:40.670295   25542 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0603 10:57:40.670367   25542 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 10:57:40.680163   25542 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0603 10:57:40.680221   25542 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 10:57:40.690290   25542 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 10:57:40.700046   25542 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 10:57:40.709768   25542 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0603 10:57:40.719767   25542 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 10:57:40.729463   25542 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 10:57:40.746472   25542 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 10:57:40.756435   25542 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0603 10:57:40.765231   25542 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0603 10:57:40.765319   25542 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0603 10:57:40.777951   25542 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0603 10:57:40.788285   25542 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0603 10:57:40.905692   25542 ssh_runner.go:195] Run: sudo systemctl restart crio
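Note: the sed edits above leave /etc/crio/crio.conf.d/02-crio.conf with roughly the following keys before crio is restarted (a sketch reconstructed from the commands in this log; the TOML table headers are assumed, since the log only shows in-place key edits):

	[crio.image]
	pause_image = "registry.k8s.io/pause:3.9"

	[crio.runtime]
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]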
	I0603 10:57:41.042950   25542 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0603 10:57:41.043016   25542 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0603 10:57:41.048023   25542 start.go:562] Will wait 60s for crictl version
	I0603 10:57:41.048076   25542 ssh_runner.go:195] Run: which crictl
	I0603 10:57:41.052322   25542 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0603 10:57:41.096016   25542 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0603 10:57:41.096103   25542 ssh_runner.go:195] Run: crio --version
	I0603 10:57:41.124778   25542 ssh_runner.go:195] Run: crio --version
	I0603 10:57:41.155389   25542 out.go:177] * Preparing Kubernetes v1.30.1 on CRI-O 1.29.1 ...
	I0603 10:57:41.156685   25542 out.go:177]   - env NO_PROXY=192.168.39.116
	I0603 10:57:41.157904   25542 main.go:141] libmachine: (ha-683480-m02) Calling .GetIP
	I0603 10:57:41.160497   25542 main.go:141] libmachine: (ha-683480-m02) DBG | domain ha-683480-m02 has defined MAC address 52:54:00:00:55:50 in network mk-ha-683480
	I0603 10:57:41.160893   25542 main.go:141] libmachine: (ha-683480-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:55:50", ip: ""} in network mk-ha-683480: {Iface:virbr1 ExpiryTime:2024-06-03 11:57:29 +0000 UTC Type:0 Mac:52:54:00:00:55:50 Iaid: IPaddr:192.168.39.127 Prefix:24 Hostname:ha-683480-m02 Clientid:01:52:54:00:00:55:50}
	I0603 10:57:41.160920   25542 main.go:141] libmachine: (ha-683480-m02) DBG | domain ha-683480-m02 has defined IP address 192.168.39.127 and MAC address 52:54:00:00:55:50 in network mk-ha-683480
	I0603 10:57:41.161055   25542 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0603 10:57:41.165366   25542 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0603 10:57:41.178874   25542 mustload.go:65] Loading cluster: ha-683480
	I0603 10:57:41.179097   25542 config.go:182] Loaded profile config "ha-683480": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0603 10:57:41.179344   25542 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 10:57:41.179376   25542 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 10:57:41.193764   25542 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39411
	I0603 10:57:41.194198   25542 main.go:141] libmachine: () Calling .GetVersion
	I0603 10:57:41.194655   25542 main.go:141] libmachine: Using API Version  1
	I0603 10:57:41.194675   25542 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 10:57:41.194972   25542 main.go:141] libmachine: () Calling .GetMachineName
	I0603 10:57:41.195171   25542 main.go:141] libmachine: (ha-683480) Calling .GetState
	I0603 10:57:41.196477   25542 host.go:66] Checking if "ha-683480" exists ...
	I0603 10:57:41.196781   25542 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 10:57:41.196804   25542 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 10:57:41.210511   25542 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37781
	I0603 10:57:41.210826   25542 main.go:141] libmachine: () Calling .GetVersion
	I0603 10:57:41.211266   25542 main.go:141] libmachine: Using API Version  1
	I0603 10:57:41.211285   25542 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 10:57:41.211553   25542 main.go:141] libmachine: () Calling .GetMachineName
	I0603 10:57:41.211720   25542 main.go:141] libmachine: (ha-683480) Calling .DriverName
	I0603 10:57:41.211879   25542 certs.go:68] Setting up /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/ha-683480 for IP: 192.168.39.127
	I0603 10:57:41.211890   25542 certs.go:194] generating shared ca certs ...
	I0603 10:57:41.211904   25542 certs.go:226] acquiring lock for ca certs: {Name:mk50092df3bdecbf959926adc377ee06b75aacd9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 10:57:41.212011   25542 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19008-7755/.minikube/ca.key
	I0603 10:57:41.212045   25542 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19008-7755/.minikube/proxy-client-ca.key
	I0603 10:57:41.212054   25542 certs.go:256] generating profile certs ...
	I0603 10:57:41.212127   25542 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/ha-683480/client.key
	I0603 10:57:41.212151   25542 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/ha-683480/apiserver.key.0337487a
	I0603 10:57:41.212161   25542 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/ha-683480/apiserver.crt.0337487a with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.116 192.168.39.127 192.168.39.254]
	I0603 10:57:41.313930   25542 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/ha-683480/apiserver.crt.0337487a ...
	I0603 10:57:41.313956   25542 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/ha-683480/apiserver.crt.0337487a: {Name:mk82fc865ddfb68fa754de6f4eba20c9bc7c6964 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 10:57:41.314111   25542 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/ha-683480/apiserver.key.0337487a ...
	I0603 10:57:41.314124   25542 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/ha-683480/apiserver.key.0337487a: {Name:mkd0f087221cc24ed79a087b514f4c1dd28e3227 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 10:57:41.314194   25542 certs.go:381] copying /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/ha-683480/apiserver.crt.0337487a -> /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/ha-683480/apiserver.crt
	I0603 10:57:41.314317   25542 certs.go:385] copying /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/ha-683480/apiserver.key.0337487a -> /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/ha-683480/apiserver.key
	I0603 10:57:41.314442   25542 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/ha-683480/proxy-client.key
	I0603 10:57:41.314456   25542 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19008-7755/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0603 10:57:41.314469   25542 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19008-7755/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0603 10:57:41.314481   25542 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19008-7755/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0603 10:57:41.314494   25542 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19008-7755/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0603 10:57:41.314506   25542 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/ha-683480/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0603 10:57:41.314518   25542 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/ha-683480/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0603 10:57:41.314531   25542 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/ha-683480/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0603 10:57:41.314542   25542 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/ha-683480/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0603 10:57:41.314587   25542 certs.go:484] found cert: /home/jenkins/minikube-integration/19008-7755/.minikube/certs/15028.pem (1338 bytes)
	W0603 10:57:41.314614   25542 certs.go:480] ignoring /home/jenkins/minikube-integration/19008-7755/.minikube/certs/15028_empty.pem, impossibly tiny 0 bytes
	I0603 10:57:41.314622   25542 certs.go:484] found cert: /home/jenkins/minikube-integration/19008-7755/.minikube/certs/ca-key.pem (1679 bytes)
	I0603 10:57:41.314644   25542 certs.go:484] found cert: /home/jenkins/minikube-integration/19008-7755/.minikube/certs/ca.pem (1082 bytes)
	I0603 10:57:41.314664   25542 certs.go:484] found cert: /home/jenkins/minikube-integration/19008-7755/.minikube/certs/cert.pem (1123 bytes)
	I0603 10:57:41.314686   25542 certs.go:484] found cert: /home/jenkins/minikube-integration/19008-7755/.minikube/certs/key.pem (1679 bytes)
	I0603 10:57:41.314723   25542 certs.go:484] found cert: /home/jenkins/minikube-integration/19008-7755/.minikube/files/etc/ssl/certs/150282.pem (1708 bytes)
	I0603 10:57:41.314748   25542 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19008-7755/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0603 10:57:41.314761   25542 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19008-7755/.minikube/certs/15028.pem -> /usr/share/ca-certificates/15028.pem
	I0603 10:57:41.314773   25542 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19008-7755/.minikube/files/etc/ssl/certs/150282.pem -> /usr/share/ca-certificates/150282.pem
	I0603 10:57:41.314801   25542 main.go:141] libmachine: (ha-683480) Calling .GetSSHHostname
	I0603 10:57:41.317527   25542 main.go:141] libmachine: (ha-683480) DBG | domain ha-683480 has defined MAC address 52:54:00:e5:3f:6a in network mk-ha-683480
	I0603 10:57:41.317887   25542 main.go:141] libmachine: (ha-683480) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:3f:6a", ip: ""} in network mk-ha-683480: {Iface:virbr1 ExpiryTime:2024-06-03 11:56:28 +0000 UTC Type:0 Mac:52:54:00:e5:3f:6a Iaid: IPaddr:192.168.39.116 Prefix:24 Hostname:ha-683480 Clientid:01:52:54:00:e5:3f:6a}
	I0603 10:57:41.317911   25542 main.go:141] libmachine: (ha-683480) DBG | domain ha-683480 has defined IP address 192.168.39.116 and MAC address 52:54:00:e5:3f:6a in network mk-ha-683480
	I0603 10:57:41.318055   25542 main.go:141] libmachine: (ha-683480) Calling .GetSSHPort
	I0603 10:57:41.318249   25542 main.go:141] libmachine: (ha-683480) Calling .GetSSHKeyPath
	I0603 10:57:41.318401   25542 main.go:141] libmachine: (ha-683480) Calling .GetSSHUsername
	I0603 10:57:41.318542   25542 sshutil.go:53] new ssh client: &{IP:192.168.39.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19008-7755/.minikube/machines/ha-683480/id_rsa Username:docker}
	I0603 10:57:41.387316   25542 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.pub
	I0603 10:57:41.392115   25542 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0603 10:57:41.403105   25542 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.key
	I0603 10:57:41.407356   25542 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I0603 10:57:41.417439   25542 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.crt
	I0603 10:57:41.422132   25542 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0603 10:57:41.432447   25542 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.key
	I0603 10:57:41.436665   25542 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0603 10:57:41.446768   25542 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.crt
	I0603 10:57:41.450991   25542 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0603 10:57:41.461057   25542 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.key
	I0603 10:57:41.465043   25542 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I0603 10:57:41.474845   25542 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0603 10:57:41.498864   25542 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0603 10:57:41.521581   25542 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0603 10:57:41.543831   25542 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0603 10:57:41.566471   25542 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/ha-683480/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0603 10:57:41.589239   25542 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/ha-683480/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0603 10:57:41.611473   25542 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/ha-683480/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0603 10:57:41.634713   25542 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/ha-683480/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0603 10:57:41.657358   25542 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0603 10:57:41.682172   25542 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/certs/15028.pem --> /usr/share/ca-certificates/15028.pem (1338 bytes)
	I0603 10:57:41.705886   25542 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/files/etc/ssl/certs/150282.pem --> /usr/share/ca-certificates/150282.pem (1708 bytes)
	I0603 10:57:41.729122   25542 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0603 10:57:41.745160   25542 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I0603 10:57:41.761102   25542 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0603 10:57:41.778261   25542 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0603 10:57:41.795474   25542 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0603 10:57:41.811159   25542 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I0603 10:57:41.826896   25542 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0603 10:57:41.842563   25542 ssh_runner.go:195] Run: openssl version
	I0603 10:57:41.848107   25542 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/150282.pem && ln -fs /usr/share/ca-certificates/150282.pem /etc/ssl/certs/150282.pem"
	I0603 10:57:41.859616   25542 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/150282.pem
	I0603 10:57:41.864229   25542 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jun  3 10:51 /usr/share/ca-certificates/150282.pem
	I0603 10:57:41.864272   25542 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/150282.pem
	I0603 10:57:41.870198   25542 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/150282.pem /etc/ssl/certs/3ec20f2e.0"
	I0603 10:57:41.881198   25542 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0603 10:57:41.891913   25542 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0603 10:57:41.896305   25542 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jun  3 10:39 /usr/share/ca-certificates/minikubeCA.pem
	I0603 10:57:41.896342   25542 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0603 10:57:41.901756   25542 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0603 10:57:41.914028   25542 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15028.pem && ln -fs /usr/share/ca-certificates/15028.pem /etc/ssl/certs/15028.pem"
	I0603 10:57:41.925655   25542 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15028.pem
	I0603 10:57:41.931132   25542 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jun  3 10:51 /usr/share/ca-certificates/15028.pem
	I0603 10:57:41.931180   25542 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15028.pem
	I0603 10:57:41.938375   25542 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/15028.pem /etc/ssl/certs/51391683.0"
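Note: each "openssl x509 -hash -noout" call above prints the certificate's subject hash, which then names the symlink created in the next step; this is the standard OpenSSL hashed-directory lookup used under /etc/ssl/certs. For example, matching the minikubeCA entry in this log (the ls output shape is illustrative):

	$ openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	b5213941
	$ ls -l /etc/ssl/certs/b5213941.0
	/etc/ssl/certs/b5213941.0 -> /usr/share/ca-certificates/minikubeCA.pem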
	I0603 10:57:41.949776   25542 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0603 10:57:41.953872   25542 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0603 10:57:41.953915   25542 kubeadm.go:928] updating node {m02 192.168.39.127 8443 v1.30.1 crio true true} ...
	I0603 10:57:41.953983   25542 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-683480-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.127
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.1 ClusterName:ha-683480 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0603 10:57:41.954006   25542 kube-vip.go:115] generating kube-vip config ...
	I0603 10:57:41.954038   25542 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0603 10:57:41.971236   25542 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0603 10:57:41.971313   25542 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
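Note: this generated manifest is later copied to /etc/kubernetes/manifests/kube-vip.yaml (see the scp step further down), so the kubelet runs kube-vip as a static pod and the 192.168.39.254 VIP is announced from whichever control-plane node currently holds the plndr-cp-lock lease. A quick manual check from a control-plane node would be (a sketch, assuming SSH access to the node):

	$ ip addr show eth0 | grep 192.168.39.254
	$ sudo crictl ps --name kube-vip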
	I0603 10:57:41.971374   25542 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.1
	I0603 10:57:41.982282   25542 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.30.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.30.1': No such file or directory
	
	Initiating transfer...
	I0603 10:57:41.982344   25542 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.30.1
	I0603 10:57:41.993108   25542 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubectl.sha256
	I0603 10:57:41.993121   25542 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/19008-7755/.minikube/cache/linux/amd64/v1.30.1/kubelet
	I0603 10:57:41.993133   25542 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19008-7755/.minikube/cache/linux/amd64/v1.30.1/kubectl -> /var/lib/minikube/binaries/v1.30.1/kubectl
	I0603 10:57:41.993134   25542 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/19008-7755/.minikube/cache/linux/amd64/v1.30.1/kubeadm
	I0603 10:57:41.993202   25542 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.1/kubectl
	I0603 10:57:41.997423   25542 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.1/kubectl: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.1/kubectl': No such file or directory
	I0603 10:57:41.997444   25542 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/cache/linux/amd64/v1.30.1/kubectl --> /var/lib/minikube/binaries/v1.30.1/kubectl (51454104 bytes)
	I0603 10:58:18.123910   25542 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19008-7755/.minikube/cache/linux/amd64/v1.30.1/kubeadm -> /var/lib/minikube/binaries/v1.30.1/kubeadm
	I0603 10:58:18.123984   25542 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.1/kubeadm
	I0603 10:58:18.129999   25542 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.1/kubeadm: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.1/kubeadm': No such file or directory
	I0603 10:58:18.130031   25542 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/cache/linux/amd64/v1.30.1/kubeadm --> /var/lib/minikube/binaries/v1.30.1/kubeadm (50249880 bytes)
	I0603 10:58:51.620104   25542 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0603 10:58:51.638343   25542 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19008-7755/.minikube/cache/linux/amd64/v1.30.1/kubelet -> /var/lib/minikube/binaries/v1.30.1/kubelet
	I0603 10:58:51.638421   25542 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.1/kubelet
	I0603 10:58:51.642676   25542 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.1/kubelet: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.1/kubelet': No such file or directory
	I0603 10:58:51.642719   25542 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/cache/linux/amd64/v1.30.1/kubelet --> /var/lib/minikube/binaries/v1.30.1/kubelet (100100024 bytes)
	I0603 10:58:52.033556   25542 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0603 10:58:52.043954   25542 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0603 10:58:52.060699   25542 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0603 10:58:52.076907   25542 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0603 10:58:52.092925   25542 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0603 10:58:52.096916   25542 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0603 10:58:52.109095   25542 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0603 10:58:52.216097   25542 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0603 10:58:52.233785   25542 host.go:66] Checking if "ha-683480" exists ...
	I0603 10:58:52.234283   25542 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 10:58:52.234322   25542 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 10:58:52.249709   25542 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42185
	I0603 10:58:52.250178   25542 main.go:141] libmachine: () Calling .GetVersion
	I0603 10:58:52.250679   25542 main.go:141] libmachine: Using API Version  1
	I0603 10:58:52.250701   25542 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 10:58:52.251084   25542 main.go:141] libmachine: () Calling .GetMachineName
	I0603 10:58:52.251257   25542 main.go:141] libmachine: (ha-683480) Calling .DriverName
	I0603 10:58:52.251417   25542 start.go:316] joinCluster: &{Name:ha-683480 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:ha-683480 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.116 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.127 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0603 10:58:52.251534   25542 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0603 10:58:52.251552   25542 main.go:141] libmachine: (ha-683480) Calling .GetSSHHostname
	I0603 10:58:52.254492   25542 main.go:141] libmachine: (ha-683480) DBG | domain ha-683480 has defined MAC address 52:54:00:e5:3f:6a in network mk-ha-683480
	I0603 10:58:52.254927   25542 main.go:141] libmachine: (ha-683480) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:3f:6a", ip: ""} in network mk-ha-683480: {Iface:virbr1 ExpiryTime:2024-06-03 11:56:28 +0000 UTC Type:0 Mac:52:54:00:e5:3f:6a Iaid: IPaddr:192.168.39.116 Prefix:24 Hostname:ha-683480 Clientid:01:52:54:00:e5:3f:6a}
	I0603 10:58:52.254957   25542 main.go:141] libmachine: (ha-683480) DBG | domain ha-683480 has defined IP address 192.168.39.116 and MAC address 52:54:00:e5:3f:6a in network mk-ha-683480
	I0603 10:58:52.255120   25542 main.go:141] libmachine: (ha-683480) Calling .GetSSHPort
	I0603 10:58:52.255297   25542 main.go:141] libmachine: (ha-683480) Calling .GetSSHKeyPath
	I0603 10:58:52.255457   25542 main.go:141] libmachine: (ha-683480) Calling .GetSSHUsername
	I0603 10:58:52.255630   25542 sshutil.go:53] new ssh client: &{IP:192.168.39.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19008-7755/.minikube/machines/ha-683480/id_rsa Username:docker}
	I0603 10:58:52.445781   25542 start.go:342] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:192.168.39.127 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0603 10:58:52.445821   25542 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 45r2ge.vg4p3ogqd7rtd0j6 --discovery-token-ca-cert-hash sha256:efe6cbdd58a590fb6c3b56a05a1648145bfb13b8a8cd2383ea34b710fa987e0b --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-683480-m02 --control-plane --apiserver-advertise-address=192.168.39.127 --apiserver-bind-port=8443"
	I0603 10:59:13.728022   25542 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 45r2ge.vg4p3ogqd7rtd0j6 --discovery-token-ca-cert-hash sha256:efe6cbdd58a590fb6c3b56a05a1648145bfb13b8a8cd2383ea34b710fa987e0b --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-683480-m02 --control-plane --apiserver-advertise-address=192.168.39.127 --apiserver-bind-port=8443": (21.282174918s)
	I0603 10:59:13.728057   25542 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0603 10:59:14.206114   25542 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-683480-m02 minikube.k8s.io/updated_at=2024_06_03T10_59_14_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=599070631c2216ebc936292d491e4fe10e15b9d8 minikube.k8s.io/name=ha-683480 minikube.k8s.io/primary=false
	I0603 10:59:14.348255   25542 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-683480-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I0603 10:59:14.471943   25542 start.go:318] duration metric: took 22.22052051s to joinCluster
	I0603 10:59:14.472020   25542 start.go:234] Will wait 6m0s for node &{Name:m02 IP:192.168.39.127 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0603 10:59:14.473439   25542 out.go:177] * Verifying Kubernetes components...
	I0603 10:59:14.472321   25542 config.go:182] Loaded profile config "ha-683480": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0603 10:59:14.474846   25542 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0603 10:59:14.721737   25542 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0603 10:59:14.795065   25542 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19008-7755/kubeconfig
	I0603 10:59:14.795409   25542 kapi.go:59] client config for ha-683480: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19008-7755/.minikube/profiles/ha-683480/client.crt", KeyFile:"/home/jenkins/minikube-integration/19008-7755/.minikube/profiles/ha-683480/client.key", CAFile:"/home/jenkins/minikube-integration/19008-7755/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1cfa500), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0603 10:59:14.795496   25542 kubeadm.go:477] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.116:8443
	I0603 10:59:14.795764   25542 node_ready.go:35] waiting up to 6m0s for node "ha-683480-m02" to be "Ready" ...
	I0603 10:59:14.795867   25542 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/nodes/ha-683480-m02
	I0603 10:59:14.795879   25542 round_trippers.go:469] Request Headers:
	I0603 10:59:14.795890   25542 round_trippers.go:473]     Accept: application/json, */*
	I0603 10:59:14.795899   25542 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 10:59:14.805050   25542 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0603 10:59:15.295982   25542 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/nodes/ha-683480-m02
	I0603 10:59:15.296009   25542 round_trippers.go:469] Request Headers:
	I0603 10:59:15.296022   25542 round_trippers.go:473]     Accept: application/json, */*
	I0603 10:59:15.296027   25542 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 10:59:15.299923   25542 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 10:59:15.796809   25542 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/nodes/ha-683480-m02
	I0603 10:59:15.796827   25542 round_trippers.go:469] Request Headers:
	I0603 10:59:15.796835   25542 round_trippers.go:473]     Accept: application/json, */*
	I0603 10:59:15.796839   25542 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 10:59:15.800424   25542 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 10:59:16.296386   25542 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/nodes/ha-683480-m02
	I0603 10:59:16.296411   25542 round_trippers.go:469] Request Headers:
	I0603 10:59:16.296423   25542 round_trippers.go:473]     Accept: application/json, */*
	I0603 10:59:16.296431   25542 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 10:59:16.303226   25542 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0603 10:59:16.796380   25542 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/nodes/ha-683480-m02
	I0603 10:59:16.796405   25542 round_trippers.go:469] Request Headers:
	I0603 10:59:16.796415   25542 round_trippers.go:473]     Accept: application/json, */*
	I0603 10:59:16.796420   25542 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 10:59:16.799970   25542 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 10:59:16.800631   25542 node_ready.go:53] node "ha-683480-m02" has status "Ready":"False"
	I0603 10:59:17.296138   25542 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/nodes/ha-683480-m02
	I0603 10:59:17.296158   25542 round_trippers.go:469] Request Headers:
	I0603 10:59:17.296165   25542 round_trippers.go:473]     Accept: application/json, */*
	I0603 10:59:17.296169   25542 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 10:59:17.299083   25542 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0603 10:59:17.796246   25542 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/nodes/ha-683480-m02
	I0603 10:59:17.796278   25542 round_trippers.go:469] Request Headers:
	I0603 10:59:17.796286   25542 round_trippers.go:473]     Accept: application/json, */*
	I0603 10:59:17.796291   25542 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 10:59:17.799561   25542 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 10:59:18.296734   25542 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/nodes/ha-683480-m02
	I0603 10:59:18.296758   25542 round_trippers.go:469] Request Headers:
	I0603 10:59:18.296772   25542 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 10:59:18.296777   25542 round_trippers.go:473]     Accept: application/json, */*
	I0603 10:59:18.299797   25542 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 10:59:18.796932   25542 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/nodes/ha-683480-m02
	I0603 10:59:18.796952   25542 round_trippers.go:469] Request Headers:
	I0603 10:59:18.796960   25542 round_trippers.go:473]     Accept: application/json, */*
	I0603 10:59:18.796965   25542 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 10:59:18.800420   25542 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 10:59:18.801298   25542 node_ready.go:53] node "ha-683480-m02" has status "Ready":"False"
	I0603 10:59:19.296296   25542 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/nodes/ha-683480-m02
	I0603 10:59:19.296324   25542 round_trippers.go:469] Request Headers:
	I0603 10:59:19.296341   25542 round_trippers.go:473]     Accept: application/json, */*
	I0603 10:59:19.296349   25542 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 10:59:19.299486   25542 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 10:59:19.796320   25542 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/nodes/ha-683480-m02
	I0603 10:59:19.796343   25542 round_trippers.go:469] Request Headers:
	I0603 10:59:19.796358   25542 round_trippers.go:473]     Accept: application/json, */*
	I0603 10:59:19.796363   25542 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 10:59:19.799580   25542 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 10:59:20.296268   25542 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/nodes/ha-683480-m02
	I0603 10:59:20.296287   25542 round_trippers.go:469] Request Headers:
	I0603 10:59:20.296294   25542 round_trippers.go:473]     Accept: application/json, */*
	I0603 10:59:20.296299   25542 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 10:59:20.300385   25542 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 10:59:20.795953   25542 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/nodes/ha-683480-m02
	I0603 10:59:20.795973   25542 round_trippers.go:469] Request Headers:
	I0603 10:59:20.795980   25542 round_trippers.go:473]     Accept: application/json, */*
	I0603 10:59:20.795986   25542 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 10:59:20.799915   25542 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 10:59:21.296836   25542 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/nodes/ha-683480-m02
	I0603 10:59:21.296860   25542 round_trippers.go:469] Request Headers:
	I0603 10:59:21.296871   25542 round_trippers.go:473]     Accept: application/json, */*
	I0603 10:59:21.296876   25542 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 10:59:21.300005   25542 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 10:59:21.300674   25542 node_ready.go:53] node "ha-683480-m02" has status "Ready":"False"
	I0603 10:59:21.796704   25542 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/nodes/ha-683480-m02
	I0603 10:59:21.796744   25542 round_trippers.go:469] Request Headers:
	I0603 10:59:21.796755   25542 round_trippers.go:473]     Accept: application/json, */*
	I0603 10:59:21.796759   25542 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 10:59:21.801287   25542 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 10:59:22.296935   25542 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/nodes/ha-683480-m02
	I0603 10:59:22.296961   25542 round_trippers.go:469] Request Headers:
	I0603 10:59:22.296971   25542 round_trippers.go:473]     Accept: application/json, */*
	I0603 10:59:22.296976   25542 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 10:59:22.300249   25542 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 10:59:22.796320   25542 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/nodes/ha-683480-m02
	I0603 10:59:22.796345   25542 round_trippers.go:469] Request Headers:
	I0603 10:59:22.796355   25542 round_trippers.go:473]     Accept: application/json, */*
	I0603 10:59:22.796361   25542 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 10:59:22.799775   25542 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 10:59:23.296004   25542 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/nodes/ha-683480-m02
	I0603 10:59:23.296040   25542 round_trippers.go:469] Request Headers:
	I0603 10:59:23.296059   25542 round_trippers.go:473]     Accept: application/json, */*
	I0603 10:59:23.296070   25542 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 10:59:23.298896   25542 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0603 10:59:23.299718   25542 node_ready.go:49] node "ha-683480-m02" has status "Ready":"True"
	I0603 10:59:23.299738   25542 node_ready.go:38] duration metric: took 8.503950937s for node "ha-683480-m02" to be "Ready" ...
	I0603 10:59:23.299746   25542 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0603 10:59:23.299819   25542 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/namespaces/kube-system/pods
	I0603 10:59:23.299828   25542 round_trippers.go:469] Request Headers:
	I0603 10:59:23.299835   25542 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 10:59:23.299839   25542 round_trippers.go:473]     Accept: application/json, */*
	I0603 10:59:23.304439   25542 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 10:59:23.314315   25542 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-8tqf9" in "kube-system" namespace to be "Ready" ...
	I0603 10:59:23.314394   25542 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-8tqf9
	I0603 10:59:23.314405   25542 round_trippers.go:469] Request Headers:
	I0603 10:59:23.314415   25542 round_trippers.go:473]     Accept: application/json, */*
	I0603 10:59:23.314420   25542 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 10:59:23.317907   25542 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 10:59:23.318966   25542 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/nodes/ha-683480
	I0603 10:59:23.318984   25542 round_trippers.go:469] Request Headers:
	I0603 10:59:23.318994   25542 round_trippers.go:473]     Accept: application/json, */*
	I0603 10:59:23.319001   25542 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 10:59:23.323496   25542 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 10:59:23.324549   25542 pod_ready.go:92] pod "coredns-7db6d8ff4d-8tqf9" in "kube-system" namespace has status "Ready":"True"
	I0603 10:59:23.324564   25542 pod_ready.go:81] duration metric: took 10.228856ms for pod "coredns-7db6d8ff4d-8tqf9" in "kube-system" namespace to be "Ready" ...
	I0603 10:59:23.324572   25542 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-nff86" in "kube-system" namespace to be "Ready" ...
	I0603 10:59:23.324631   25542 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-nff86
	I0603 10:59:23.324643   25542 round_trippers.go:469] Request Headers:
	I0603 10:59:23.324652   25542 round_trippers.go:473]     Accept: application/json, */*
	I0603 10:59:23.324662   25542 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 10:59:23.328454   25542 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 10:59:23.329513   25542 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/nodes/ha-683480
	I0603 10:59:23.329529   25542 round_trippers.go:469] Request Headers:
	I0603 10:59:23.329536   25542 round_trippers.go:473]     Accept: application/json, */*
	I0603 10:59:23.329538   25542 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 10:59:23.331852   25542 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0603 10:59:23.332369   25542 pod_ready.go:92] pod "coredns-7db6d8ff4d-nff86" in "kube-system" namespace has status "Ready":"True"
	I0603 10:59:23.332388   25542 pod_ready.go:81] duration metric: took 7.810532ms for pod "coredns-7db6d8ff4d-nff86" in "kube-system" namespace to be "Ready" ...
	I0603 10:59:23.332396   25542 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-683480" in "kube-system" namespace to be "Ready" ...
	I0603 10:59:23.332446   25542 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/namespaces/kube-system/pods/etcd-ha-683480
	I0603 10:59:23.332461   25542 round_trippers.go:469] Request Headers:
	I0603 10:59:23.332468   25542 round_trippers.go:473]     Accept: application/json, */*
	I0603 10:59:23.332471   25542 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 10:59:23.335249   25542 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0603 10:59:23.336130   25542 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/nodes/ha-683480
	I0603 10:59:23.336145   25542 round_trippers.go:469] Request Headers:
	I0603 10:59:23.336153   25542 round_trippers.go:473]     Accept: application/json, */*
	I0603 10:59:23.336157   25542 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 10:59:23.338251   25542 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0603 10:59:23.339238   25542 pod_ready.go:92] pod "etcd-ha-683480" in "kube-system" namespace has status "Ready":"True"
	I0603 10:59:23.339253   25542 pod_ready.go:81] duration metric: took 6.850947ms for pod "etcd-ha-683480" in "kube-system" namespace to be "Ready" ...
	I0603 10:59:23.339260   25542 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-683480-m02" in "kube-system" namespace to be "Ready" ...
	I0603 10:59:23.339296   25542 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/namespaces/kube-system/pods/etcd-ha-683480-m02
	I0603 10:59:23.339303   25542 round_trippers.go:469] Request Headers:
	I0603 10:59:23.339310   25542 round_trippers.go:473]     Accept: application/json, */*
	I0603 10:59:23.339315   25542 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 10:59:23.341437   25542 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0603 10:59:23.341999   25542 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/nodes/ha-683480-m02
	I0603 10:59:23.342014   25542 round_trippers.go:469] Request Headers:
	I0603 10:59:23.342023   25542 round_trippers.go:473]     Accept: application/json, */*
	I0603 10:59:23.342028   25542 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 10:59:23.344086   25542 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0603 10:59:23.840344   25542 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/namespaces/kube-system/pods/etcd-ha-683480-m02
	I0603 10:59:23.840366   25542 round_trippers.go:469] Request Headers:
	I0603 10:59:23.840373   25542 round_trippers.go:473]     Accept: application/json, */*
	I0603 10:59:23.840379   25542 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 10:59:23.844013   25542 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 10:59:23.844649   25542 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/nodes/ha-683480-m02
	I0603 10:59:23.844666   25542 round_trippers.go:469] Request Headers:
	I0603 10:59:23.844675   25542 round_trippers.go:473]     Accept: application/json, */*
	I0603 10:59:23.844679   25542 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 10:59:23.847109   25542 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0603 10:59:24.340086   25542 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/namespaces/kube-system/pods/etcd-ha-683480-m02
	I0603 10:59:24.340119   25542 round_trippers.go:469] Request Headers:
	I0603 10:59:24.340130   25542 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 10:59:24.340136   25542 round_trippers.go:473]     Accept: application/json, */*
	I0603 10:59:24.343747   25542 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 10:59:24.344583   25542 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/nodes/ha-683480-m02
	I0603 10:59:24.344640   25542 round_trippers.go:469] Request Headers:
	I0603 10:59:24.344656   25542 round_trippers.go:473]     Accept: application/json, */*
	I0603 10:59:24.344662   25542 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 10:59:24.348368   25542 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 10:59:24.839425   25542 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/namespaces/kube-system/pods/etcd-ha-683480-m02
	I0603 10:59:24.839446   25542 round_trippers.go:469] Request Headers:
	I0603 10:59:24.839454   25542 round_trippers.go:473]     Accept: application/json, */*
	I0603 10:59:24.839457   25542 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 10:59:24.842962   25542 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 10:59:24.843818   25542 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/nodes/ha-683480-m02
	I0603 10:59:24.843834   25542 round_trippers.go:469] Request Headers:
	I0603 10:59:24.843841   25542 round_trippers.go:473]     Accept: application/json, */*
	I0603 10:59:24.843845   25542 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 10:59:24.845967   25542 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0603 10:59:25.339767   25542 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/namespaces/kube-system/pods/etcd-ha-683480-m02
	I0603 10:59:25.339788   25542 round_trippers.go:469] Request Headers:
	I0603 10:59:25.339795   25542 round_trippers.go:473]     Accept: application/json, */*
	I0603 10:59:25.339799   25542 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 10:59:25.343451   25542 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 10:59:25.344145   25542 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/nodes/ha-683480-m02
	I0603 10:59:25.344159   25542 round_trippers.go:469] Request Headers:
	I0603 10:59:25.344166   25542 round_trippers.go:473]     Accept: application/json, */*
	I0603 10:59:25.344171   25542 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 10:59:25.346649   25542 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0603 10:59:25.347192   25542 pod_ready.go:102] pod "etcd-ha-683480-m02" in "kube-system" namespace has status "Ready":"False"
	I0603 10:59:25.840345   25542 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/namespaces/kube-system/pods/etcd-ha-683480-m02
	I0603 10:59:25.840365   25542 round_trippers.go:469] Request Headers:
	I0603 10:59:25.840373   25542 round_trippers.go:473]     Accept: application/json, */*
	I0603 10:59:25.840379   25542 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 10:59:25.843818   25542 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 10:59:25.844551   25542 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/nodes/ha-683480-m02
	I0603 10:59:25.844564   25542 round_trippers.go:469] Request Headers:
	I0603 10:59:25.844571   25542 round_trippers.go:473]     Accept: application/json, */*
	I0603 10:59:25.844575   25542 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 10:59:25.847117   25542 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0603 10:59:26.339821   25542 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/namespaces/kube-system/pods/etcd-ha-683480-m02
	I0603 10:59:26.339842   25542 round_trippers.go:469] Request Headers:
	I0603 10:59:26.339851   25542 round_trippers.go:473]     Accept: application/json, */*
	I0603 10:59:26.339858   25542 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 10:59:26.343170   25542 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 10:59:26.344188   25542 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/nodes/ha-683480-m02
	I0603 10:59:26.344202   25542 round_trippers.go:469] Request Headers:
	I0603 10:59:26.344209   25542 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 10:59:26.344212   25542 round_trippers.go:473]     Accept: application/json, */*
	I0603 10:59:26.346674   25542 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0603 10:59:26.839724   25542 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/namespaces/kube-system/pods/etcd-ha-683480-m02
	I0603 10:59:26.839749   25542 round_trippers.go:469] Request Headers:
	I0603 10:59:26.839761   25542 round_trippers.go:473]     Accept: application/json, */*
	I0603 10:59:26.839767   25542 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 10:59:26.843870   25542 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 10:59:26.844499   25542 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/nodes/ha-683480-m02
	I0603 10:59:26.844516   25542 round_trippers.go:469] Request Headers:
	I0603 10:59:26.844525   25542 round_trippers.go:473]     Accept: application/json, */*
	I0603 10:59:26.844529   25542 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 10:59:26.847150   25542 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0603 10:59:27.339548   25542 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/namespaces/kube-system/pods/etcd-ha-683480-m02
	I0603 10:59:27.339576   25542 round_trippers.go:469] Request Headers:
	I0603 10:59:27.339588   25542 round_trippers.go:473]     Accept: application/json, */*
	I0603 10:59:27.339594   25542 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 10:59:27.343134   25542 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 10:59:27.343904   25542 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/nodes/ha-683480-m02
	I0603 10:59:27.343919   25542 round_trippers.go:469] Request Headers:
	I0603 10:59:27.343925   25542 round_trippers.go:473]     Accept: application/json, */*
	I0603 10:59:27.343928   25542 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 10:59:27.346294   25542 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0603 10:59:27.840207   25542 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/namespaces/kube-system/pods/etcd-ha-683480-m02
	I0603 10:59:27.840227   25542 round_trippers.go:469] Request Headers:
	I0603 10:59:27.840236   25542 round_trippers.go:473]     Accept: application/json, */*
	I0603 10:59:27.840240   25542 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 10:59:27.843264   25542 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 10:59:27.844032   25542 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/nodes/ha-683480-m02
	I0603 10:59:27.844045   25542 round_trippers.go:469] Request Headers:
	I0603 10:59:27.844052   25542 round_trippers.go:473]     Accept: application/json, */*
	I0603 10:59:27.844055   25542 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 10:59:27.846553   25542 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0603 10:59:27.847404   25542 pod_ready.go:102] pod "etcd-ha-683480-m02" in "kube-system" namespace has status "Ready":"False"
	I0603 10:59:28.339812   25542 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/namespaces/kube-system/pods/etcd-ha-683480-m02
	I0603 10:59:28.339835   25542 round_trippers.go:469] Request Headers:
	I0603 10:59:28.339845   25542 round_trippers.go:473]     Accept: application/json, */*
	I0603 10:59:28.339849   25542 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 10:59:28.343744   25542 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 10:59:28.344554   25542 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/nodes/ha-683480-m02
	I0603 10:59:28.344569   25542 round_trippers.go:469] Request Headers:
	I0603 10:59:28.344578   25542 round_trippers.go:473]     Accept: application/json, */*
	I0603 10:59:28.344584   25542 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 10:59:28.347597   25542 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 10:59:28.348487   25542 pod_ready.go:92] pod "etcd-ha-683480-m02" in "kube-system" namespace has status "Ready":"True"
	I0603 10:59:28.348506   25542 pod_ready.go:81] duration metric: took 5.009239248s for pod "etcd-ha-683480-m02" in "kube-system" namespace to be "Ready" ...
	I0603 10:59:28.348519   25542 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-683480" in "kube-system" namespace to be "Ready" ...
	I0603 10:59:28.348594   25542 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-683480
	I0603 10:59:28.348604   25542 round_trippers.go:469] Request Headers:
	I0603 10:59:28.348612   25542 round_trippers.go:473]     Accept: application/json, */*
	I0603 10:59:28.348622   25542 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 10:59:28.351554   25542 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0603 10:59:28.352108   25542 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/nodes/ha-683480
	I0603 10:59:28.352119   25542 round_trippers.go:469] Request Headers:
	I0603 10:59:28.352126   25542 round_trippers.go:473]     Accept: application/json, */*
	I0603 10:59:28.352130   25542 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 10:59:28.354481   25542 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0603 10:59:28.354952   25542 pod_ready.go:92] pod "kube-apiserver-ha-683480" in "kube-system" namespace has status "Ready":"True"
	I0603 10:59:28.354967   25542 pod_ready.go:81] duration metric: took 6.4382ms for pod "kube-apiserver-ha-683480" in "kube-system" namespace to be "Ready" ...
	I0603 10:59:28.354978   25542 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-683480-m02" in "kube-system" namespace to be "Ready" ...
	I0603 10:59:28.355025   25542 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-683480-m02
	I0603 10:59:28.355053   25542 round_trippers.go:469] Request Headers:
	I0603 10:59:28.355064   25542 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 10:59:28.355077   25542 round_trippers.go:473]     Accept: application/json, */*
	I0603 10:59:28.357702   25542 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0603 10:59:28.358628   25542 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/nodes/ha-683480-m02
	I0603 10:59:28.358642   25542 round_trippers.go:469] Request Headers:
	I0603 10:59:28.358648   25542 round_trippers.go:473]     Accept: application/json, */*
	I0603 10:59:28.358651   25542 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 10:59:28.361294   25542 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0603 10:59:28.855299   25542 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-683480-m02
	I0603 10:59:28.855320   25542 round_trippers.go:469] Request Headers:
	I0603 10:59:28.855326   25542 round_trippers.go:473]     Accept: application/json, */*
	I0603 10:59:28.855332   25542 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 10:59:28.860770   25542 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0603 10:59:28.861457   25542 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/nodes/ha-683480-m02
	I0603 10:59:28.861473   25542 round_trippers.go:469] Request Headers:
	I0603 10:59:28.861483   25542 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 10:59:28.861488   25542 round_trippers.go:473]     Accept: application/json, */*
	I0603 10:59:28.863987   25542 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0603 10:59:29.355965   25542 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-683480-m02
	I0603 10:59:29.355990   25542 round_trippers.go:469] Request Headers:
	I0603 10:59:29.355998   25542 round_trippers.go:473]     Accept: application/json, */*
	I0603 10:59:29.356002   25542 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 10:59:29.359321   25542 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 10:59:29.360078   25542 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/nodes/ha-683480-m02
	I0603 10:59:29.360095   25542 round_trippers.go:469] Request Headers:
	I0603 10:59:29.360102   25542 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 10:59:29.360108   25542 round_trippers.go:473]     Accept: application/json, */*
	I0603 10:59:29.362872   25542 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0603 10:59:29.855871   25542 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-683480-m02
	I0603 10:59:29.855890   25542 round_trippers.go:469] Request Headers:
	I0603 10:59:29.855897   25542 round_trippers.go:473]     Accept: application/json, */*
	I0603 10:59:29.855902   25542 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 10:59:29.859738   25542 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 10:59:29.860540   25542 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/nodes/ha-683480-m02
	I0603 10:59:29.860562   25542 round_trippers.go:469] Request Headers:
	I0603 10:59:29.860571   25542 round_trippers.go:473]     Accept: application/json, */*
	I0603 10:59:29.860575   25542 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 10:59:29.863400   25542 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0603 10:59:30.355224   25542 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-683480-m02
	I0603 10:59:30.355244   25542 round_trippers.go:469] Request Headers:
	I0603 10:59:30.355254   25542 round_trippers.go:473]     Accept: application/json, */*
	I0603 10:59:30.355261   25542 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 10:59:30.358494   25542 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 10:59:30.359284   25542 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/nodes/ha-683480-m02
	I0603 10:59:30.359298   25542 round_trippers.go:469] Request Headers:
	I0603 10:59:30.359307   25542 round_trippers.go:473]     Accept: application/json, */*
	I0603 10:59:30.359314   25542 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 10:59:30.361972   25542 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0603 10:59:30.362888   25542 pod_ready.go:102] pod "kube-apiserver-ha-683480-m02" in "kube-system" namespace has status "Ready":"False"
	I0603 10:59:30.855221   25542 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-683480-m02
	I0603 10:59:30.855243   25542 round_trippers.go:469] Request Headers:
	I0603 10:59:30.855249   25542 round_trippers.go:473]     Accept: application/json, */*
	I0603 10:59:30.855251   25542 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 10:59:30.858127   25542 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0603 10:59:30.859190   25542 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/nodes/ha-683480-m02
	I0603 10:59:30.859207   25542 round_trippers.go:469] Request Headers:
	I0603 10:59:30.859216   25542 round_trippers.go:473]     Accept: application/json, */*
	I0603 10:59:30.859221   25542 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 10:59:30.861744   25542 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0603 10:59:31.355705   25542 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-683480-m02
	I0603 10:59:31.355725   25542 round_trippers.go:469] Request Headers:
	I0603 10:59:31.355731   25542 round_trippers.go:473]     Accept: application/json, */*
	I0603 10:59:31.355735   25542 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 10:59:31.358713   25542 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0603 10:59:31.359512   25542 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/nodes/ha-683480-m02
	I0603 10:59:31.359528   25542 round_trippers.go:469] Request Headers:
	I0603 10:59:31.359535   25542 round_trippers.go:473]     Accept: application/json, */*
	I0603 10:59:31.359540   25542 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 10:59:31.362020   25542 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0603 10:59:31.362615   25542 pod_ready.go:92] pod "kube-apiserver-ha-683480-m02" in "kube-system" namespace has status "Ready":"True"
	I0603 10:59:31.362632   25542 pod_ready.go:81] duration metric: took 3.007647373s for pod "kube-apiserver-ha-683480-m02" in "kube-system" namespace to be "Ready" ...
	I0603 10:59:31.362651   25542 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-683480" in "kube-system" namespace to be "Ready" ...
	I0603 10:59:31.362697   25542 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-683480
	I0603 10:59:31.362705   25542 round_trippers.go:469] Request Headers:
	I0603 10:59:31.362712   25542 round_trippers.go:473]     Accept: application/json, */*
	I0603 10:59:31.362716   25542 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 10:59:31.365357   25542 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0603 10:59:31.365921   25542 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/nodes/ha-683480
	I0603 10:59:31.365935   25542 round_trippers.go:469] Request Headers:
	I0603 10:59:31.365943   25542 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 10:59:31.365946   25542 round_trippers.go:473]     Accept: application/json, */*
	I0603 10:59:31.368764   25542 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0603 10:59:31.369301   25542 pod_ready.go:92] pod "kube-controller-manager-ha-683480" in "kube-system" namespace has status "Ready":"True"
	I0603 10:59:31.369321   25542 pod_ready.go:81] duration metric: took 6.664259ms for pod "kube-controller-manager-ha-683480" in "kube-system" namespace to be "Ready" ...
	I0603 10:59:31.369332   25542 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-683480-m02" in "kube-system" namespace to be "Ready" ...
	I0603 10:59:31.369396   25542 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-683480-m02
	I0603 10:59:31.369408   25542 round_trippers.go:469] Request Headers:
	I0603 10:59:31.369418   25542 round_trippers.go:473]     Accept: application/json, */*
	I0603 10:59:31.369426   25542 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 10:59:31.371750   25542 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0603 10:59:31.372315   25542 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/nodes/ha-683480-m02
	I0603 10:59:31.372329   25542 round_trippers.go:469] Request Headers:
	I0603 10:59:31.372336   25542 round_trippers.go:473]     Accept: application/json, */*
	I0603 10:59:31.372344   25542 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 10:59:31.374211   25542 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0603 10:59:31.374569   25542 pod_ready.go:92] pod "kube-controller-manager-ha-683480-m02" in "kube-system" namespace has status "Ready":"True"
	I0603 10:59:31.374583   25542 pod_ready.go:81] duration metric: took 5.245252ms for pod "kube-controller-manager-ha-683480-m02" in "kube-system" namespace to be "Ready" ...
	I0603 10:59:31.374591   25542 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-4d9w5" in "kube-system" namespace to be "Ready" ...
	I0603 10:59:31.496926   25542 request.go:629] Waited for 122.280858ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.116:8443/api/v1/namespaces/kube-system/pods/kube-proxy-4d9w5
	I0603 10:59:31.497012   25542 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/namespaces/kube-system/pods/kube-proxy-4d9w5
	I0603 10:59:31.497023   25542 round_trippers.go:469] Request Headers:
	I0603 10:59:31.497033   25542 round_trippers.go:473]     Accept: application/json, */*
	I0603 10:59:31.497041   25542 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 10:59:31.500710   25542 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 10:59:31.696745   25542 request.go:629] Waited for 195.346971ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.116:8443/api/v1/nodes/ha-683480
	I0603 10:59:31.696802   25542 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/nodes/ha-683480
	I0603 10:59:31.696809   25542 round_trippers.go:469] Request Headers:
	I0603 10:59:31.696819   25542 round_trippers.go:473]     Accept: application/json, */*
	I0603 10:59:31.696825   25542 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 10:59:31.701940   25542 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0603 10:59:31.702579   25542 pod_ready.go:92] pod "kube-proxy-4d9w5" in "kube-system" namespace has status "Ready":"True"
	I0603 10:59:31.702597   25542 pod_ready.go:81] duration metric: took 327.998436ms for pod "kube-proxy-4d9w5" in "kube-system" namespace to be "Ready" ...
	I0603 10:59:31.702606   25542 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-q2xfn" in "kube-system" namespace to be "Ready" ...
	I0603 10:59:31.896850   25542 request.go:629] Waited for 194.166571ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.116:8443/api/v1/namespaces/kube-system/pods/kube-proxy-q2xfn
	I0603 10:59:31.896920   25542 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/namespaces/kube-system/pods/kube-proxy-q2xfn
	I0603 10:59:31.896927   25542 round_trippers.go:469] Request Headers:
	I0603 10:59:31.896937   25542 round_trippers.go:473]     Accept: application/json, */*
	I0603 10:59:31.896944   25542 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 10:59:31.900546   25542 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 10:59:32.096495   25542 request.go:629] Waited for 195.389074ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.116:8443/api/v1/nodes/ha-683480-m02
	I0603 10:59:32.096572   25542 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/nodes/ha-683480-m02
	I0603 10:59:32.096579   25542 round_trippers.go:469] Request Headers:
	I0603 10:59:32.096589   25542 round_trippers.go:473]     Accept: application/json, */*
	I0603 10:59:32.096598   25542 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 10:59:32.100482   25542 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 10:59:32.101116   25542 pod_ready.go:92] pod "kube-proxy-q2xfn" in "kube-system" namespace has status "Ready":"True"
	I0603 10:59:32.101134   25542 pod_ready.go:81] duration metric: took 398.517707ms for pod "kube-proxy-q2xfn" in "kube-system" namespace to be "Ready" ...
	I0603 10:59:32.101143   25542 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-683480" in "kube-system" namespace to be "Ready" ...
	I0603 10:59:32.296718   25542 request.go:629] Waited for 195.519197ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.116:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-683480
	I0603 10:59:32.296800   25542 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-683480
	I0603 10:59:32.296808   25542 round_trippers.go:469] Request Headers:
	I0603 10:59:32.296816   25542 round_trippers.go:473]     Accept: application/json, */*
	I0603 10:59:32.296819   25542 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 10:59:32.300389   25542 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 10:59:32.496149   25542 request.go:629] Waited for 195.276948ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.116:8443/api/v1/nodes/ha-683480
	I0603 10:59:32.496209   25542 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/nodes/ha-683480
	I0603 10:59:32.496214   25542 round_trippers.go:469] Request Headers:
	I0603 10:59:32.496221   25542 round_trippers.go:473]     Accept: application/json, */*
	I0603 10:59:32.496228   25542 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 10:59:32.499362   25542 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 10:59:32.500060   25542 pod_ready.go:92] pod "kube-scheduler-ha-683480" in "kube-system" namespace has status "Ready":"True"
	I0603 10:59:32.500079   25542 pod_ready.go:81] duration metric: took 398.928589ms for pod "kube-scheduler-ha-683480" in "kube-system" namespace to be "Ready" ...
	I0603 10:59:32.500089   25542 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-683480-m02" in "kube-system" namespace to be "Ready" ...
	I0603 10:59:32.696071   25542 request.go:629] Waited for 195.918544ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.116:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-683480-m02
	I0603 10:59:32.696124   25542 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-683480-m02
	I0603 10:59:32.696129   25542 round_trippers.go:469] Request Headers:
	I0603 10:59:32.696143   25542 round_trippers.go:473]     Accept: application/json, */*
	I0603 10:59:32.696162   25542 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 10:59:32.698879   25542 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0603 10:59:32.896789   25542 request.go:629] Waited for 197.360174ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.116:8443/api/v1/nodes/ha-683480-m02
	I0603 10:59:32.896868   25542 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/nodes/ha-683480-m02
	I0603 10:59:32.896873   25542 round_trippers.go:469] Request Headers:
	I0603 10:59:32.896880   25542 round_trippers.go:473]     Accept: application/json, */*
	I0603 10:59:32.896884   25542 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 10:59:32.900609   25542 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 10:59:32.901147   25542 pod_ready.go:92] pod "kube-scheduler-ha-683480-m02" in "kube-system" namespace has status "Ready":"True"
	I0603 10:59:32.901165   25542 pod_ready.go:81] duration metric: took 401.068545ms for pod "kube-scheduler-ha-683480-m02" in "kube-system" namespace to be "Ready" ...
	I0603 10:59:32.901174   25542 pod_ready.go:38] duration metric: took 9.601397971s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0603 10:59:32.901187   25542 api_server.go:52] waiting for apiserver process to appear ...
	I0603 10:59:32.901243   25542 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 10:59:32.916356   25542 api_server.go:72] duration metric: took 18.444296631s to wait for apiserver process to appear ...
	I0603 10:59:32.916376   25542 api_server.go:88] waiting for apiserver healthz status ...
	I0603 10:59:32.916395   25542 api_server.go:253] Checking apiserver healthz at https://192.168.39.116:8443/healthz ...
	I0603 10:59:32.922078   25542 api_server.go:279] https://192.168.39.116:8443/healthz returned 200:
	ok
	I0603 10:59:32.922151   25542 round_trippers.go:463] GET https://192.168.39.116:8443/version
	I0603 10:59:32.922162   25542 round_trippers.go:469] Request Headers:
	I0603 10:59:32.922169   25542 round_trippers.go:473]     Accept: application/json, */*
	I0603 10:59:32.922175   25542 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 10:59:32.922893   25542 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0603 10:59:32.922973   25542 api_server.go:141] control plane version: v1.30.1
	I0603 10:59:32.922987   25542 api_server.go:131] duration metric: took 6.604807ms to wait for apiserver health ...
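The healthz wait above boils down to an HTTPS GET against the apiserver's /healthz endpoint that must return 200 with the body "ok". A rough standalone equivalent is sketched below; note it skips TLS verification for brevity, whereas minikube trusts the cluster CA:

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func main() {
		// Sketch only: InsecureSkipVerify is a shortcut, not what minikube does.
		client := &http.Client{
			Timeout:   5 * time.Second,
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		resp, err := client.Get("https://192.168.39.116:8443/healthz")
		if err != nil {
			fmt.Println("healthz unreachable:", err)
			return
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		fmt.Printf("%d %s\n", resp.StatusCode, body) // expect: 200 ok
	}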
	I0603 10:59:32.922995   25542 system_pods.go:43] waiting for kube-system pods to appear ...
	I0603 10:59:33.096395   25542 request.go:629] Waited for 173.338882ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.116:8443/api/v1/namespaces/kube-system/pods
	I0603 10:59:33.096465   25542 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/namespaces/kube-system/pods
	I0603 10:59:33.096473   25542 round_trippers.go:469] Request Headers:
	I0603 10:59:33.096480   25542 round_trippers.go:473]     Accept: application/json, */*
	I0603 10:59:33.096485   25542 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 10:59:33.105526   25542 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0603 10:59:33.112685   25542 system_pods.go:59] 17 kube-system pods found
	I0603 10:59:33.112712   25542 system_pods.go:61] "coredns-7db6d8ff4d-8tqf9" [8eab910a-98ed-43db-ac16-d53beb6b7ee4] Running
	I0603 10:59:33.112717   25542 system_pods.go:61] "coredns-7db6d8ff4d-nff86" [02320e91-17ab-4120-b8b9-dcc08234f180] Running
	I0603 10:59:33.112721   25542 system_pods.go:61] "etcd-ha-683480" [b0a866b1-e56e-4c99-90d1-b96b08dc814f] Running
	I0603 10:59:33.112724   25542 system_pods.go:61] "etcd-ha-683480-m02" [ae0c631b-f1b7-4f97-a112-82115e2e3a26] Running
	I0603 10:59:33.112727   25542 system_pods.go:61] "kindnet-t6fxj" [a1edfc5d-477d-40ed-8702-4916d1e9fcb1] Running
	I0603 10:59:33.112731   25542 system_pods.go:61] "kindnet-zxhbp" [320e315b-e189-4358-9e56-a4be7d944fae] Running
	I0603 10:59:33.112736   25542 system_pods.go:61] "kube-apiserver-ha-683480" [383ca38e-6dea-45d2-8874-f8f7478b889d] Running
	I0603 10:59:33.112741   25542 system_pods.go:61] "kube-apiserver-ha-683480-m02" [b1fadbf7-5046-4762-928e-d0a86b2c333a] Running
	I0603 10:59:33.112745   25542 system_pods.go:61] "kube-controller-manager-ha-683480" [3ba095b7-0e4d-41b9-af2d-12d4ce4ae004] Running
	I0603 10:59:33.112755   25542 system_pods.go:61] "kube-controller-manager-ha-683480-m02" [fe54bb1f-7320-40dd-a8a9-f7d1c5d793fe] Running
	I0603 10:59:33.112760   25542 system_pods.go:61] "kube-proxy-4d9w5" [708e060d-115a-4b74-bc66-138d62796b50] Running
	I0603 10:59:33.112768   25542 system_pods.go:61] "kube-proxy-q2xfn" [af8c691a-3316-4e6d-8feb-b306d6d5d2f1] Running
	I0603 10:59:33.112773   25542 system_pods.go:61] "kube-scheduler-ha-683480" [c57edb18-cdff-4548-acc4-1abbbd906fc5] Running
	I0603 10:59:33.112779   25542 system_pods.go:61] "kube-scheduler-ha-683480-m02" [ce81b254-4edc-425a-8489-14c71f56d7de] Running
	I0603 10:59:33.112783   25542 system_pods.go:61] "kube-vip-ha-683480" [aa6a05c5-446e-4179-be45-0f8d33631c89] Running
	I0603 10:59:33.112790   25542 system_pods.go:61] "kube-vip-ha-683480-m02" [5679c930-02ab-4784-8bf1-7e477719a5a6] Running
	I0603 10:59:33.112793   25542 system_pods.go:61] "storage-provisioner" [a410a98d-73a7-434b-88ce-575c300b2807] Running
	I0603 10:59:33.112798   25542 system_pods.go:74] duration metric: took 189.797613ms to wait for pod list to return data ...
	I0603 10:59:33.112808   25542 default_sa.go:34] waiting for default service account to be created ...
	I0603 10:59:33.296188   25542 request.go:629] Waited for 183.314921ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.116:8443/api/v1/namespaces/default/serviceaccounts
	I0603 10:59:33.296246   25542 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/namespaces/default/serviceaccounts
	I0603 10:59:33.296252   25542 round_trippers.go:469] Request Headers:
	I0603 10:59:33.296259   25542 round_trippers.go:473]     Accept: application/json, */*
	I0603 10:59:33.296263   25542 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 10:59:33.299696   25542 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 10:59:33.299902   25542 default_sa.go:45] found service account: "default"
	I0603 10:59:33.299918   25542 default_sa.go:55] duration metric: took 187.10456ms for default service account to be created ...
	I0603 10:59:33.299926   25542 system_pods.go:116] waiting for k8s-apps to be running ...
	I0603 10:59:33.496401   25542 request.go:629] Waited for 196.414711ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.116:8443/api/v1/namespaces/kube-system/pods
	I0603 10:59:33.496476   25542 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/namespaces/kube-system/pods
	I0603 10:59:33.496484   25542 round_trippers.go:469] Request Headers:
	I0603 10:59:33.496493   25542 round_trippers.go:473]     Accept: application/json, */*
	I0603 10:59:33.496503   25542 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 10:59:33.501752   25542 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0603 10:59:33.506723   25542 system_pods.go:86] 17 kube-system pods found
	I0603 10:59:33.506744   25542 system_pods.go:89] "coredns-7db6d8ff4d-8tqf9" [8eab910a-98ed-43db-ac16-d53beb6b7ee4] Running
	I0603 10:59:33.506750   25542 system_pods.go:89] "coredns-7db6d8ff4d-nff86" [02320e91-17ab-4120-b8b9-dcc08234f180] Running
	I0603 10:59:33.506754   25542 system_pods.go:89] "etcd-ha-683480" [b0a866b1-e56e-4c99-90d1-b96b08dc814f] Running
	I0603 10:59:33.506758   25542 system_pods.go:89] "etcd-ha-683480-m02" [ae0c631b-f1b7-4f97-a112-82115e2e3a26] Running
	I0603 10:59:33.506762   25542 system_pods.go:89] "kindnet-t6fxj" [a1edfc5d-477d-40ed-8702-4916d1e9fcb1] Running
	I0603 10:59:33.506766   25542 system_pods.go:89] "kindnet-zxhbp" [320e315b-e189-4358-9e56-a4be7d944fae] Running
	I0603 10:59:33.506770   25542 system_pods.go:89] "kube-apiserver-ha-683480" [383ca38e-6dea-45d2-8874-f8f7478b889d] Running
	I0603 10:59:33.506774   25542 system_pods.go:89] "kube-apiserver-ha-683480-m02" [b1fadbf7-5046-4762-928e-d0a86b2c333a] Running
	I0603 10:59:33.506778   25542 system_pods.go:89] "kube-controller-manager-ha-683480" [3ba095b7-0e4d-41b9-af2d-12d4ce4ae004] Running
	I0603 10:59:33.506783   25542 system_pods.go:89] "kube-controller-manager-ha-683480-m02" [fe54bb1f-7320-40dd-a8a9-f7d1c5d793fe] Running
	I0603 10:59:33.506790   25542 system_pods.go:89] "kube-proxy-4d9w5" [708e060d-115a-4b74-bc66-138d62796b50] Running
	I0603 10:59:33.506793   25542 system_pods.go:89] "kube-proxy-q2xfn" [af8c691a-3316-4e6d-8feb-b306d6d5d2f1] Running
	I0603 10:59:33.506800   25542 system_pods.go:89] "kube-scheduler-ha-683480" [c57edb18-cdff-4548-acc4-1abbbd906fc5] Running
	I0603 10:59:33.506804   25542 system_pods.go:89] "kube-scheduler-ha-683480-m02" [ce81b254-4edc-425a-8489-14c71f56d7de] Running
	I0603 10:59:33.506808   25542 system_pods.go:89] "kube-vip-ha-683480" [aa6a05c5-446e-4179-be45-0f8d33631c89] Running
	I0603 10:59:33.506812   25542 system_pods.go:89] "kube-vip-ha-683480-m02" [5679c930-02ab-4784-8bf1-7e477719a5a6] Running
	I0603 10:59:33.506818   25542 system_pods.go:89] "storage-provisioner" [a410a98d-73a7-434b-88ce-575c300b2807] Running
	I0603 10:59:33.506824   25542 system_pods.go:126] duration metric: took 206.893332ms to wait for k8s-apps to be running ...
	I0603 10:59:33.506833   25542 system_svc.go:44] waiting for kubelet service to be running ....
	I0603 10:59:33.506874   25542 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0603 10:59:33.522011   25542 system_svc.go:56] duration metric: took 15.172648ms WaitForService to wait for kubelet
	I0603 10:59:33.522034   25542 kubeadm.go:576] duration metric: took 19.049980276s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
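The kubelet check logged above is a single systemctl is-active invocation whose exit status is all that matters. A local sketch of the same idea (run directly, rather than over SSH as minikube does):

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// Exit status 0 means the unit is active; anything else means not running.
		err := exec.Command("systemctl", "is-active", "--quiet", "kubelet").Run()
		fmt.Println("kubelet active:", err == nil)
	}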
	I0603 10:59:33.522051   25542 node_conditions.go:102] verifying NodePressure condition ...
	I0603 10:59:33.696426   25542 request.go:629] Waited for 174.313958ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.116:8443/api/v1/nodes
	I0603 10:59:33.696475   25542 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/nodes
	I0603 10:59:33.696482   25542 round_trippers.go:469] Request Headers:
	I0603 10:59:33.696491   25542 round_trippers.go:473]     Accept: application/json, */*
	I0603 10:59:33.696498   25542 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 10:59:33.699582   25542 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 10:59:33.700490   25542 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0603 10:59:33.700515   25542 node_conditions.go:123] node cpu capacity is 2
	I0603 10:59:33.700528   25542 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0603 10:59:33.700534   25542 node_conditions.go:123] node cpu capacity is 2
	I0603 10:59:33.700540   25542 node_conditions.go:105] duration metric: took 178.484212ms to run NodePressure ...
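The NodePressure step lists the nodes and records each one's ephemeral-storage and CPU capacity (17734596Ki and 2 CPUs here). A small client-go sketch that prints the same capacities, again assuming a hypothetical kubeconfig path:

	package main

	import (
		"context"
		"fmt"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config") // assumed path
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		nodes, err := cs.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
		if err != nil {
			panic(err)
		}
		for _, n := range nodes.Items {
			cpu := n.Status.Capacity[corev1.ResourceCPU]
			storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
			fmt.Printf("%s: cpu=%s ephemeral-storage=%s\n", n.Name, cpu.String(), storage.String())
		}
	}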
	I0603 10:59:33.700555   25542 start.go:240] waiting for startup goroutines ...
	I0603 10:59:33.700584   25542 start.go:254] writing updated cluster config ...
	I0603 10:59:33.702645   25542 out.go:177] 
	I0603 10:59:33.704059   25542 config.go:182] Loaded profile config "ha-683480": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0603 10:59:33.704171   25542 profile.go:143] Saving config to /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/ha-683480/config.json ...
	I0603 10:59:33.705860   25542 out.go:177] * Starting "ha-683480-m03" control-plane node in "ha-683480" cluster
	I0603 10:59:33.706933   25542 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime crio
	I0603 10:59:33.706955   25542 cache.go:56] Caching tarball of preloaded images
	I0603 10:59:33.707063   25542 preload.go:173] Found /home/jenkins/minikube-integration/19008-7755/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0603 10:59:33.707077   25542 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on crio
	I0603 10:59:33.707166   25542 profile.go:143] Saving config to /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/ha-683480/config.json ...
	I0603 10:59:33.707319   25542 start.go:360] acquireMachinesLock for ha-683480-m03: {Name:mk7389fd163f37e28f7d4d842bab151f7e27bc7c Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0603 10:59:33.707367   25542 start.go:364] duration metric: took 30.727µs to acquireMachinesLock for "ha-683480-m03"
	I0603 10:59:33.707384   25542 start.go:93] Provisioning new machine with config: &{Name:ha-683480 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:ha-683480 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.116 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.127 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0603 10:59:33.707462   25542 start.go:125] createHost starting for "m03" (driver="kvm2")
	I0603 10:59:33.709131   25542 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0603 10:59:33.709200   25542 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 10:59:33.709230   25542 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 10:59:33.723856   25542 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35267
	I0603 10:59:33.724226   25542 main.go:141] libmachine: () Calling .GetVersion
	I0603 10:59:33.724678   25542 main.go:141] libmachine: Using API Version  1
	I0603 10:59:33.724698   25542 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 10:59:33.724986   25542 main.go:141] libmachine: () Calling .GetMachineName
	I0603 10:59:33.725174   25542 main.go:141] libmachine: (ha-683480-m03) Calling .GetMachineName
	I0603 10:59:33.725311   25542 main.go:141] libmachine: (ha-683480-m03) Calling .DriverName
	I0603 10:59:33.725473   25542 start.go:159] libmachine.API.Create for "ha-683480" (driver="kvm2")
	I0603 10:59:33.725503   25542 client.go:168] LocalClient.Create starting
	I0603 10:59:33.725540   25542 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19008-7755/.minikube/certs/ca.pem
	I0603 10:59:33.725581   25542 main.go:141] libmachine: Decoding PEM data...
	I0603 10:59:33.725603   25542 main.go:141] libmachine: Parsing certificate...
	I0603 10:59:33.725673   25542 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19008-7755/.minikube/certs/cert.pem
	I0603 10:59:33.725701   25542 main.go:141] libmachine: Decoding PEM data...
	I0603 10:59:33.725715   25542 main.go:141] libmachine: Parsing certificate...
	I0603 10:59:33.725741   25542 main.go:141] libmachine: Running pre-create checks...
	I0603 10:59:33.725750   25542 main.go:141] libmachine: (ha-683480-m03) Calling .PreCreateCheck
	I0603 10:59:33.725911   25542 main.go:141] libmachine: (ha-683480-m03) Calling .GetConfigRaw
	I0603 10:59:33.726288   25542 main.go:141] libmachine: Creating machine...
	I0603 10:59:33.726302   25542 main.go:141] libmachine: (ha-683480-m03) Calling .Create
	I0603 10:59:33.726418   25542 main.go:141] libmachine: (ha-683480-m03) Creating KVM machine...
	I0603 10:59:33.727558   25542 main.go:141] libmachine: (ha-683480-m03) DBG | found existing default KVM network
	I0603 10:59:33.727701   25542 main.go:141] libmachine: (ha-683480-m03) DBG | found existing private KVM network mk-ha-683480
	I0603 10:59:33.727806   25542 main.go:141] libmachine: (ha-683480-m03) Setting up store path in /home/jenkins/minikube-integration/19008-7755/.minikube/machines/ha-683480-m03 ...
	I0603 10:59:33.727829   25542 main.go:141] libmachine: (ha-683480-m03) Building disk image from file:///home/jenkins/minikube-integration/19008-7755/.minikube/cache/iso/amd64/minikube-v1.33.1-1716398070-18934-amd64.iso
	I0603 10:59:33.727889   25542 main.go:141] libmachine: (ha-683480-m03) DBG | I0603 10:59:33.727795   26612 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19008-7755/.minikube
	I0603 10:59:33.727994   25542 main.go:141] libmachine: (ha-683480-m03) Downloading /home/jenkins/minikube-integration/19008-7755/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19008-7755/.minikube/cache/iso/amd64/minikube-v1.33.1-1716398070-18934-amd64.iso...
	I0603 10:59:33.940122   25542 main.go:141] libmachine: (ha-683480-m03) DBG | I0603 10:59:33.939987   26612 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19008-7755/.minikube/machines/ha-683480-m03/id_rsa...
	I0603 10:59:34.047316   25542 main.go:141] libmachine: (ha-683480-m03) DBG | I0603 10:59:34.047212   26612 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19008-7755/.minikube/machines/ha-683480-m03/ha-683480-m03.rawdisk...
	I0603 10:59:34.047349   25542 main.go:141] libmachine: (ha-683480-m03) DBG | Writing magic tar header
	I0603 10:59:34.047365   25542 main.go:141] libmachine: (ha-683480-m03) DBG | Writing SSH key tar header
	I0603 10:59:34.047377   25542 main.go:141] libmachine: (ha-683480-m03) DBG | I0603 10:59:34.047339   26612 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19008-7755/.minikube/machines/ha-683480-m03 ...
	I0603 10:59:34.047477   25542 main.go:141] libmachine: (ha-683480-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19008-7755/.minikube/machines/ha-683480-m03
	I0603 10:59:34.047502   25542 main.go:141] libmachine: (ha-683480-m03) Setting executable bit set on /home/jenkins/minikube-integration/19008-7755/.minikube/machines/ha-683480-m03 (perms=drwx------)
	I0603 10:59:34.047514   25542 main.go:141] libmachine: (ha-683480-m03) Setting executable bit set on /home/jenkins/minikube-integration/19008-7755/.minikube/machines (perms=drwxr-xr-x)
	I0603 10:59:34.047526   25542 main.go:141] libmachine: (ha-683480-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19008-7755/.minikube/machines
	I0603 10:59:34.047543   25542 main.go:141] libmachine: (ha-683480-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19008-7755/.minikube
	I0603 10:59:34.047558   25542 main.go:141] libmachine: (ha-683480-m03) Setting executable bit set on /home/jenkins/minikube-integration/19008-7755/.minikube (perms=drwxr-xr-x)
	I0603 10:59:34.047570   25542 main.go:141] libmachine: (ha-683480-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19008-7755
	I0603 10:59:34.047584   25542 main.go:141] libmachine: (ha-683480-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0603 10:59:34.047596   25542 main.go:141] libmachine: (ha-683480-m03) DBG | Checking permissions on dir: /home/jenkins
	I0603 10:59:34.047606   25542 main.go:141] libmachine: (ha-683480-m03) Setting executable bit set on /home/jenkins/minikube-integration/19008-7755 (perms=drwxrwxr-x)
	I0603 10:59:34.047650   25542 main.go:141] libmachine: (ha-683480-m03) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0603 10:59:34.047675   25542 main.go:141] libmachine: (ha-683480-m03) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0603 10:59:34.047688   25542 main.go:141] libmachine: (ha-683480-m03) DBG | Checking permissions on dir: /home
	I0603 10:59:34.047707   25542 main.go:141] libmachine: (ha-683480-m03) DBG | Skipping /home - not owner
	I0603 10:59:34.047722   25542 main.go:141] libmachine: (ha-683480-m03) Creating domain...
	I0603 10:59:34.048481   25542 main.go:141] libmachine: (ha-683480-m03) define libvirt domain using xml: 
	I0603 10:59:34.048503   25542 main.go:141] libmachine: (ha-683480-m03) <domain type='kvm'>
	I0603 10:59:34.048513   25542 main.go:141] libmachine: (ha-683480-m03)   <name>ha-683480-m03</name>
	I0603 10:59:34.048520   25542 main.go:141] libmachine: (ha-683480-m03)   <memory unit='MiB'>2200</memory>
	I0603 10:59:34.048532   25542 main.go:141] libmachine: (ha-683480-m03)   <vcpu>2</vcpu>
	I0603 10:59:34.048543   25542 main.go:141] libmachine: (ha-683480-m03)   <features>
	I0603 10:59:34.048551   25542 main.go:141] libmachine: (ha-683480-m03)     <acpi/>
	I0603 10:59:34.048561   25542 main.go:141] libmachine: (ha-683480-m03)     <apic/>
	I0603 10:59:34.048571   25542 main.go:141] libmachine: (ha-683480-m03)     <pae/>
	I0603 10:59:34.048581   25542 main.go:141] libmachine: (ha-683480-m03)     
	I0603 10:59:34.048609   25542 main.go:141] libmachine: (ha-683480-m03)   </features>
	I0603 10:59:34.048633   25542 main.go:141] libmachine: (ha-683480-m03)   <cpu mode='host-passthrough'>
	I0603 10:59:34.048644   25542 main.go:141] libmachine: (ha-683480-m03)   
	I0603 10:59:34.048654   25542 main.go:141] libmachine: (ha-683480-m03)   </cpu>
	I0603 10:59:34.048664   25542 main.go:141] libmachine: (ha-683480-m03)   <os>
	I0603 10:59:34.048669   25542 main.go:141] libmachine: (ha-683480-m03)     <type>hvm</type>
	I0603 10:59:34.048677   25542 main.go:141] libmachine: (ha-683480-m03)     <boot dev='cdrom'/>
	I0603 10:59:34.048683   25542 main.go:141] libmachine: (ha-683480-m03)     <boot dev='hd'/>
	I0603 10:59:34.048692   25542 main.go:141] libmachine: (ha-683480-m03)     <bootmenu enable='no'/>
	I0603 10:59:34.048703   25542 main.go:141] libmachine: (ha-683480-m03)   </os>
	I0603 10:59:34.048737   25542 main.go:141] libmachine: (ha-683480-m03)   <devices>
	I0603 10:59:34.048754   25542 main.go:141] libmachine: (ha-683480-m03)     <disk type='file' device='cdrom'>
	I0603 10:59:34.048763   25542 main.go:141] libmachine: (ha-683480-m03)       <source file='/home/jenkins/minikube-integration/19008-7755/.minikube/machines/ha-683480-m03/boot2docker.iso'/>
	I0603 10:59:34.048775   25542 main.go:141] libmachine: (ha-683480-m03)       <target dev='hdc' bus='scsi'/>
	I0603 10:59:34.048790   25542 main.go:141] libmachine: (ha-683480-m03)       <readonly/>
	I0603 10:59:34.048801   25542 main.go:141] libmachine: (ha-683480-m03)     </disk>
	I0603 10:59:34.048814   25542 main.go:141] libmachine: (ha-683480-m03)     <disk type='file' device='disk'>
	I0603 10:59:34.048831   25542 main.go:141] libmachine: (ha-683480-m03)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0603 10:59:34.048847   25542 main.go:141] libmachine: (ha-683480-m03)       <source file='/home/jenkins/minikube-integration/19008-7755/.minikube/machines/ha-683480-m03/ha-683480-m03.rawdisk'/>
	I0603 10:59:34.048857   25542 main.go:141] libmachine: (ha-683480-m03)       <target dev='hda' bus='virtio'/>
	I0603 10:59:34.048866   25542 main.go:141] libmachine: (ha-683480-m03)     </disk>
	I0603 10:59:34.048876   25542 main.go:141] libmachine: (ha-683480-m03)     <interface type='network'>
	I0603 10:59:34.048886   25542 main.go:141] libmachine: (ha-683480-m03)       <source network='mk-ha-683480'/>
	I0603 10:59:34.048894   25542 main.go:141] libmachine: (ha-683480-m03)       <model type='virtio'/>
	I0603 10:59:34.048903   25542 main.go:141] libmachine: (ha-683480-m03)     </interface>
	I0603 10:59:34.048913   25542 main.go:141] libmachine: (ha-683480-m03)     <interface type='network'>
	I0603 10:59:34.048921   25542 main.go:141] libmachine: (ha-683480-m03)       <source network='default'/>
	I0603 10:59:34.048931   25542 main.go:141] libmachine: (ha-683480-m03)       <model type='virtio'/>
	I0603 10:59:34.048943   25542 main.go:141] libmachine: (ha-683480-m03)     </interface>
	I0603 10:59:34.048952   25542 main.go:141] libmachine: (ha-683480-m03)     <serial type='pty'>
	I0603 10:59:34.048970   25542 main.go:141] libmachine: (ha-683480-m03)       <target port='0'/>
	I0603 10:59:34.048983   25542 main.go:141] libmachine: (ha-683480-m03)     </serial>
	I0603 10:59:34.048996   25542 main.go:141] libmachine: (ha-683480-m03)     <console type='pty'>
	I0603 10:59:34.049007   25542 main.go:141] libmachine: (ha-683480-m03)       <target type='serial' port='0'/>
	I0603 10:59:34.049018   25542 main.go:141] libmachine: (ha-683480-m03)     </console>
	I0603 10:59:34.049028   25542 main.go:141] libmachine: (ha-683480-m03)     <rng model='virtio'>
	I0603 10:59:34.049038   25542 main.go:141] libmachine: (ha-683480-m03)       <backend model='random'>/dev/random</backend>
	I0603 10:59:34.049048   25542 main.go:141] libmachine: (ha-683480-m03)     </rng>
	I0603 10:59:34.049064   25542 main.go:141] libmachine: (ha-683480-m03)     
	I0603 10:59:34.049079   25542 main.go:141] libmachine: (ha-683480-m03)     
	I0603 10:59:34.049090   25542 main.go:141] libmachine: (ha-683480-m03)   </devices>
	I0603 10:59:34.049101   25542 main.go:141] libmachine: (ha-683480-m03) </domain>
	I0603 10:59:34.049110   25542 main.go:141] libmachine: (ha-683480-m03) 
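The XML above is the libvirt domain definition for the new m03 guest: 2 vCPUs, 2200 MiB of RAM, a CD-ROM device for boot2docker.iso, the raw disk, and two virtio NICs (the mk-ha-683480 network plus the default network). As a rough illustration of what "define libvirt domain using xml" amounts to, the snippet below feeds such an XML file to virsh; the kvm2 driver talks to libvirt through its API rather than shelling out, so treat this only as a sketch:

	package main

	import (
		"fmt"
		"os"
		"os/exec"
	)

	// Hypothetical helper: given a domain XML like the one logged above
	// (saved to a file), define and start the guest via virsh.
	func main() {
		if len(os.Args) != 3 {
			fmt.Fprintln(os.Stderr, "usage: definevm <domain.xml> <name>")
			os.Exit(1)
		}
		xmlPath, name := os.Args[1], os.Args[2]
		for _, args := range [][]string{
			{"define", xmlPath},
			{"start", name},
		} {
			cmd := exec.Command("virsh", append([]string{"--connect", "qemu:///system"}, args...)...)
			cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
			if err := cmd.Run(); err != nil {
				fmt.Fprintf(os.Stderr, "virsh %v: %v\n", args, err)
				os.Exit(1)
			}
		}
	}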
	I0603 10:59:34.055631   25542 main.go:141] libmachine: (ha-683480-m03) DBG | domain ha-683480-m03 has defined MAC address 52:54:00:e4:91:52 in network default
	I0603 10:59:34.056194   25542 main.go:141] libmachine: (ha-683480-m03) Ensuring networks are active...
	I0603 10:59:34.056219   25542 main.go:141] libmachine: (ha-683480-m03) DBG | domain ha-683480-m03 has defined MAC address 52:54:00:b4:3e:89 in network mk-ha-683480
	I0603 10:59:34.056816   25542 main.go:141] libmachine: (ha-683480-m03) Ensuring network default is active
	I0603 10:59:34.057061   25542 main.go:141] libmachine: (ha-683480-m03) Ensuring network mk-ha-683480 is active
	I0603 10:59:34.057457   25542 main.go:141] libmachine: (ha-683480-m03) Getting domain xml...
	I0603 10:59:34.058139   25542 main.go:141] libmachine: (ha-683480-m03) Creating domain...
	I0603 10:59:35.261242   25542 main.go:141] libmachine: (ha-683480-m03) Waiting to get IP...
	I0603 10:59:35.262263   25542 main.go:141] libmachine: (ha-683480-m03) DBG | domain ha-683480-m03 has defined MAC address 52:54:00:b4:3e:89 in network mk-ha-683480
	I0603 10:59:35.262686   25542 main.go:141] libmachine: (ha-683480-m03) DBG | unable to find current IP address of domain ha-683480-m03 in network mk-ha-683480
	I0603 10:59:35.262723   25542 main.go:141] libmachine: (ha-683480-m03) DBG | I0603 10:59:35.262671   26612 retry.go:31] will retry after 270.466843ms: waiting for machine to come up
	I0603 10:59:35.535155   25542 main.go:141] libmachine: (ha-683480-m03) DBG | domain ha-683480-m03 has defined MAC address 52:54:00:b4:3e:89 in network mk-ha-683480
	I0603 10:59:35.535612   25542 main.go:141] libmachine: (ha-683480-m03) DBG | unable to find current IP address of domain ha-683480-m03 in network mk-ha-683480
	I0603 10:59:35.535640   25542 main.go:141] libmachine: (ha-683480-m03) DBG | I0603 10:59:35.535568   26612 retry.go:31] will retry after 381.295501ms: waiting for machine to come up
	I0603 10:59:35.918833   25542 main.go:141] libmachine: (ha-683480-m03) DBG | domain ha-683480-m03 has defined MAC address 52:54:00:b4:3e:89 in network mk-ha-683480
	I0603 10:59:35.919263   25542 main.go:141] libmachine: (ha-683480-m03) DBG | unable to find current IP address of domain ha-683480-m03 in network mk-ha-683480
	I0603 10:59:35.919291   25542 main.go:141] libmachine: (ha-683480-m03) DBG | I0603 10:59:35.919226   26612 retry.go:31] will retry after 451.72106ms: waiting for machine to come up
	I0603 10:59:36.372620   25542 main.go:141] libmachine: (ha-683480-m03) DBG | domain ha-683480-m03 has defined MAC address 52:54:00:b4:3e:89 in network mk-ha-683480
	I0603 10:59:36.373072   25542 main.go:141] libmachine: (ha-683480-m03) DBG | unable to find current IP address of domain ha-683480-m03 in network mk-ha-683480
	I0603 10:59:36.373095   25542 main.go:141] libmachine: (ha-683480-m03) DBG | I0603 10:59:36.373006   26612 retry.go:31] will retry after 446.571176ms: waiting for machine to come up
	I0603 10:59:36.821784   25542 main.go:141] libmachine: (ha-683480-m03) DBG | domain ha-683480-m03 has defined MAC address 52:54:00:b4:3e:89 in network mk-ha-683480
	I0603 10:59:36.822324   25542 main.go:141] libmachine: (ha-683480-m03) DBG | unable to find current IP address of domain ha-683480-m03 in network mk-ha-683480
	I0603 10:59:36.822351   25542 main.go:141] libmachine: (ha-683480-m03) DBG | I0603 10:59:36.822274   26612 retry.go:31] will retry after 548.14234ms: waiting for machine to come up
	I0603 10:59:37.372079   25542 main.go:141] libmachine: (ha-683480-m03) DBG | domain ha-683480-m03 has defined MAC address 52:54:00:b4:3e:89 in network mk-ha-683480
	I0603 10:59:37.372590   25542 main.go:141] libmachine: (ha-683480-m03) DBG | unable to find current IP address of domain ha-683480-m03 in network mk-ha-683480
	I0603 10:59:37.372616   25542 main.go:141] libmachine: (ha-683480-m03) DBG | I0603 10:59:37.372560   26612 retry.go:31] will retry after 733.157294ms: waiting for machine to come up
	I0603 10:59:38.106737   25542 main.go:141] libmachine: (ha-683480-m03) DBG | domain ha-683480-m03 has defined MAC address 52:54:00:b4:3e:89 in network mk-ha-683480
	I0603 10:59:38.107283   25542 main.go:141] libmachine: (ha-683480-m03) DBG | unable to find current IP address of domain ha-683480-m03 in network mk-ha-683480
	I0603 10:59:38.107308   25542 main.go:141] libmachine: (ha-683480-m03) DBG | I0603 10:59:38.107228   26612 retry.go:31] will retry after 996.093829ms: waiting for machine to come up
	I0603 10:59:39.104880   25542 main.go:141] libmachine: (ha-683480-m03) DBG | domain ha-683480-m03 has defined MAC address 52:54:00:b4:3e:89 in network mk-ha-683480
	I0603 10:59:39.105289   25542 main.go:141] libmachine: (ha-683480-m03) DBG | unable to find current IP address of domain ha-683480-m03 in network mk-ha-683480
	I0603 10:59:39.105319   25542 main.go:141] libmachine: (ha-683480-m03) DBG | I0603 10:59:39.105242   26612 retry.go:31] will retry after 1.256688018s: waiting for machine to come up
	I0603 10:59:40.363723   25542 main.go:141] libmachine: (ha-683480-m03) DBG | domain ha-683480-m03 has defined MAC address 52:54:00:b4:3e:89 in network mk-ha-683480
	I0603 10:59:40.364093   25542 main.go:141] libmachine: (ha-683480-m03) DBG | unable to find current IP address of domain ha-683480-m03 in network mk-ha-683480
	I0603 10:59:40.364122   25542 main.go:141] libmachine: (ha-683480-m03) DBG | I0603 10:59:40.364047   26612 retry.go:31] will retry after 1.306062946s: waiting for machine to come up
	I0603 10:59:41.672597   25542 main.go:141] libmachine: (ha-683480-m03) DBG | domain ha-683480-m03 has defined MAC address 52:54:00:b4:3e:89 in network mk-ha-683480
	I0603 10:59:41.673027   25542 main.go:141] libmachine: (ha-683480-m03) DBG | unable to find current IP address of domain ha-683480-m03 in network mk-ha-683480
	I0603 10:59:41.673048   25542 main.go:141] libmachine: (ha-683480-m03) DBG | I0603 10:59:41.672986   26612 retry.go:31] will retry after 1.417549296s: waiting for machine to come up
	I0603 10:59:43.092276   25542 main.go:141] libmachine: (ha-683480-m03) DBG | domain ha-683480-m03 has defined MAC address 52:54:00:b4:3e:89 in network mk-ha-683480
	I0603 10:59:43.092749   25542 main.go:141] libmachine: (ha-683480-m03) DBG | unable to find current IP address of domain ha-683480-m03 in network mk-ha-683480
	I0603 10:59:43.092770   25542 main.go:141] libmachine: (ha-683480-m03) DBG | I0603 10:59:43.092710   26612 retry.go:31] will retry after 1.859144814s: waiting for machine to come up
	I0603 10:59:44.952836   25542 main.go:141] libmachine: (ha-683480-m03) DBG | domain ha-683480-m03 has defined MAC address 52:54:00:b4:3e:89 in network mk-ha-683480
	I0603 10:59:44.953234   25542 main.go:141] libmachine: (ha-683480-m03) DBG | unable to find current IP address of domain ha-683480-m03 in network mk-ha-683480
	I0603 10:59:44.953292   25542 main.go:141] libmachine: (ha-683480-m03) DBG | I0603 10:59:44.953206   26612 retry.go:31] will retry after 2.82862903s: waiting for machine to come up
	I0603 10:59:47.785131   25542 main.go:141] libmachine: (ha-683480-m03) DBG | domain ha-683480-m03 has defined MAC address 52:54:00:b4:3e:89 in network mk-ha-683480
	I0603 10:59:47.785582   25542 main.go:141] libmachine: (ha-683480-m03) DBG | unable to find current IP address of domain ha-683480-m03 in network mk-ha-683480
	I0603 10:59:47.785609   25542 main.go:141] libmachine: (ha-683480-m03) DBG | I0603 10:59:47.785528   26612 retry.go:31] will retry after 2.808798994s: waiting for machine to come up
	I0603 10:59:50.596197   25542 main.go:141] libmachine: (ha-683480-m03) DBG | domain ha-683480-m03 has defined MAC address 52:54:00:b4:3e:89 in network mk-ha-683480
	I0603 10:59:50.596659   25542 main.go:141] libmachine: (ha-683480-m03) DBG | unable to find current IP address of domain ha-683480-m03 in network mk-ha-683480
	I0603 10:59:50.596679   25542 main.go:141] libmachine: (ha-683480-m03) DBG | I0603 10:59:50.596618   26612 retry.go:31] will retry after 5.066420706s: waiting for machine to come up
	I0603 10:59:55.665614   25542 main.go:141] libmachine: (ha-683480-m03) DBG | domain ha-683480-m03 has defined MAC address 52:54:00:b4:3e:89 in network mk-ha-683480
	I0603 10:59:55.666014   25542 main.go:141] libmachine: (ha-683480-m03) Found IP for machine: 192.168.39.131
	I0603 10:59:55.666043   25542 main.go:141] libmachine: (ha-683480-m03) Reserving static IP address...
	I0603 10:59:55.666058   25542 main.go:141] libmachine: (ha-683480-m03) DBG | domain ha-683480-m03 has current primary IP address 192.168.39.131 and MAC address 52:54:00:b4:3e:89 in network mk-ha-683480
	I0603 10:59:55.666974   25542 main.go:141] libmachine: (ha-683480-m03) DBG | unable to find host DHCP lease matching {name: "ha-683480-m03", mac: "52:54:00:b4:3e:89", ip: "192.168.39.131"} in network mk-ha-683480
	I0603 10:59:55.738251   25542 main.go:141] libmachine: (ha-683480-m03) Reserved static IP address: 192.168.39.131
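The retry loop above repeatedly asks libvirt for a DHCP lease matching the guest's MAC address, backing off between attempts until 192.168.39.131 appears roughly 22 seconds in. A hypothetical equivalent using virsh net-dhcp-leases and a simple growing delay (the parsing and backoff factor are assumptions, not the driver's code):

	package main

	import (
		"errors"
		"fmt"
		"os/exec"
		"strings"
		"time"
	)

	// lookupIP scans the DHCP leases of network mk-ha-683480 for the given MAC.
	func lookupIP(mac string) (string, error) {
		out, err := exec.Command("virsh", "--connect", "qemu:///system",
			"net-dhcp-leases", "mk-ha-683480").Output()
		if err != nil {
			return "", err
		}
		for _, line := range strings.Split(string(out), "\n") {
			if strings.Contains(line, mac) {
				fields := strings.Fields(line)
				if len(fields) >= 5 {
					return strings.Split(fields[4], "/")[0], nil // IP column is e.g. 192.168.39.131/24
				}
			}
		}
		return "", errors.New("no lease yet")
	}

	func main() {
		delay := 300 * time.Millisecond
		for i := 0; i < 20; i++ {
			if ip, err := lookupIP("52:54:00:b4:3e:89"); err == nil {
				fmt.Println("got IP:", ip)
				return
			}
			time.Sleep(delay)
			delay = delay * 3 / 2 // grow the wait between attempts, as retry.go does above
		}
		fmt.Println("timed out waiting for an IP")
	}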
	I0603 10:59:55.738282   25542 main.go:141] libmachine: (ha-683480-m03) Waiting for SSH to be available...
	I0603 10:59:55.738292   25542 main.go:141] libmachine: (ha-683480-m03) DBG | Getting to WaitForSSH function...
	I0603 10:59:55.740966   25542 main.go:141] libmachine: (ha-683480-m03) DBG | domain ha-683480-m03 has defined MAC address 52:54:00:b4:3e:89 in network mk-ha-683480
	I0603 10:59:55.741387   25542 main.go:141] libmachine: (ha-683480-m03) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:b4:3e:89", ip: ""} in network mk-ha-683480
	I0603 10:59:55.741413   25542 main.go:141] libmachine: (ha-683480-m03) DBG | unable to find defined IP address of network mk-ha-683480 interface with MAC address 52:54:00:b4:3e:89
	I0603 10:59:55.741608   25542 main.go:141] libmachine: (ha-683480-m03) DBG | Using SSH client type: external
	I0603 10:59:55.741640   25542 main.go:141] libmachine: (ha-683480-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/19008-7755/.minikube/machines/ha-683480-m03/id_rsa (-rw-------)
	I0603 10:59:55.741701   25542 main.go:141] libmachine: (ha-683480-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19008-7755/.minikube/machines/ha-683480-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0603 10:59:55.741723   25542 main.go:141] libmachine: (ha-683480-m03) DBG | About to run SSH command:
	I0603 10:59:55.741738   25542 main.go:141] libmachine: (ha-683480-m03) DBG | exit 0
	I0603 10:59:55.745088   25542 main.go:141] libmachine: (ha-683480-m03) DBG | SSH cmd err, output: exit status 255: 
	I0603 10:59:55.745119   25542 main.go:141] libmachine: (ha-683480-m03) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I0603 10:59:55.745135   25542 main.go:141] libmachine: (ha-683480-m03) DBG | command : exit 0
	I0603 10:59:55.745149   25542 main.go:141] libmachine: (ha-683480-m03) DBG | err     : exit status 255
	I0603 10:59:55.745176   25542 main.go:141] libmachine: (ha-683480-m03) DBG | output  : 
	I0603 10:59:58.745558   25542 main.go:141] libmachine: (ha-683480-m03) DBG | Getting to WaitForSSH function...
	I0603 10:59:58.747816   25542 main.go:141] libmachine: (ha-683480-m03) DBG | domain ha-683480-m03 has defined MAC address 52:54:00:b4:3e:89 in network mk-ha-683480
	I0603 10:59:58.748193   25542 main.go:141] libmachine: (ha-683480-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:3e:89", ip: ""} in network mk-ha-683480: {Iface:virbr1 ExpiryTime:2024-06-03 11:59:47 +0000 UTC Type:0 Mac:52:54:00:b4:3e:89 Iaid: IPaddr:192.168.39.131 Prefix:24 Hostname:ha-683480-m03 Clientid:01:52:54:00:b4:3e:89}
	I0603 10:59:58.748219   25542 main.go:141] libmachine: (ha-683480-m03) DBG | domain ha-683480-m03 has defined IP address 192.168.39.131 and MAC address 52:54:00:b4:3e:89 in network mk-ha-683480
	I0603 10:59:58.748352   25542 main.go:141] libmachine: (ha-683480-m03) DBG | Using SSH client type: external
	I0603 10:59:58.748371   25542 main.go:141] libmachine: (ha-683480-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/19008-7755/.minikube/machines/ha-683480-m03/id_rsa (-rw-------)
	I0603 10:59:58.748402   25542 main.go:141] libmachine: (ha-683480-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.131 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19008-7755/.minikube/machines/ha-683480-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0603 10:59:58.748414   25542 main.go:141] libmachine: (ha-683480-m03) DBG | About to run SSH command:
	I0603 10:59:58.748429   25542 main.go:141] libmachine: (ha-683480-m03) DBG | exit 0
	I0603 10:59:58.871308   25542 main.go:141] libmachine: (ha-683480-m03) DBG | SSH cmd err, output: <nil>: 
	I0603 10:59:58.871548   25542 main.go:141] libmachine: (ha-683480-m03) KVM machine creation complete!
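WaitForSSH, as logged above, simply reruns "exit 0" over ssh with the machine's private key until it exits 0; the first attempt fails with status 255 because the guest is not ready yet, and the retry three seconds later succeeds. A standalone sketch of that loop (the key path, retry count and interval are illustrative):

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	func main() {
		args := []string{
			"-o", "StrictHostKeyChecking=no",
			"-o", "UserKnownHostsFile=/dev/null",
			"-o", "ConnectTimeout=10",
			"-i", "/home/jenkins/minikube-integration/19008-7755/.minikube/machines/ha-683480-m03/id_rsa",
			"docker@192.168.39.131", "exit", "0",
		}
		for i := 0; i < 30; i++ {
			if err := exec.Command("ssh", args...).Run(); err == nil {
				fmt.Println("SSH is available")
				return
			}
			time.Sleep(3 * time.Second) // the log shows ~3s between attempts
		}
		fmt.Println("gave up waiting for SSH")
	}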
	I0603 10:59:58.871914   25542 main.go:141] libmachine: (ha-683480-m03) Calling .GetConfigRaw
	I0603 10:59:58.872491   25542 main.go:141] libmachine: (ha-683480-m03) Calling .DriverName
	I0603 10:59:58.872654   25542 main.go:141] libmachine: (ha-683480-m03) Calling .DriverName
	I0603 10:59:58.872778   25542 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0603 10:59:58.872790   25542 main.go:141] libmachine: (ha-683480-m03) Calling .GetState
	I0603 10:59:58.873878   25542 main.go:141] libmachine: Detecting operating system of created instance...
	I0603 10:59:58.873893   25542 main.go:141] libmachine: Waiting for SSH to be available...
	I0603 10:59:58.873900   25542 main.go:141] libmachine: Getting to WaitForSSH function...
	I0603 10:59:58.873909   25542 main.go:141] libmachine: (ha-683480-m03) Calling .GetSSHHostname
	I0603 10:59:58.876164   25542 main.go:141] libmachine: (ha-683480-m03) DBG | domain ha-683480-m03 has defined MAC address 52:54:00:b4:3e:89 in network mk-ha-683480
	I0603 10:59:58.876567   25542 main.go:141] libmachine: (ha-683480-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:3e:89", ip: ""} in network mk-ha-683480: {Iface:virbr1 ExpiryTime:2024-06-03 11:59:47 +0000 UTC Type:0 Mac:52:54:00:b4:3e:89 Iaid: IPaddr:192.168.39.131 Prefix:24 Hostname:ha-683480-m03 Clientid:01:52:54:00:b4:3e:89}
	I0603 10:59:58.876593   25542 main.go:141] libmachine: (ha-683480-m03) DBG | domain ha-683480-m03 has defined IP address 192.168.39.131 and MAC address 52:54:00:b4:3e:89 in network mk-ha-683480
	I0603 10:59:58.876707   25542 main.go:141] libmachine: (ha-683480-m03) Calling .GetSSHPort
	I0603 10:59:58.876840   25542 main.go:141] libmachine: (ha-683480-m03) Calling .GetSSHKeyPath
	I0603 10:59:58.876955   25542 main.go:141] libmachine: (ha-683480-m03) Calling .GetSSHKeyPath
	I0603 10:59:58.877109   25542 main.go:141] libmachine: (ha-683480-m03) Calling .GetSSHUsername
	I0603 10:59:58.877293   25542 main.go:141] libmachine: Using SSH client type: native
	I0603 10:59:58.877530   25542 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.131 22 <nil> <nil>}
	I0603 10:59:58.877548   25542 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0603 10:59:58.978478   25542 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0603 10:59:58.978505   25542 main.go:141] libmachine: Detecting the provisioner...
	I0603 10:59:58.978515   25542 main.go:141] libmachine: (ha-683480-m03) Calling .GetSSHHostname
	I0603 10:59:58.981143   25542 main.go:141] libmachine: (ha-683480-m03) DBG | domain ha-683480-m03 has defined MAC address 52:54:00:b4:3e:89 in network mk-ha-683480
	I0603 10:59:58.981453   25542 main.go:141] libmachine: (ha-683480-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:3e:89", ip: ""} in network mk-ha-683480: {Iface:virbr1 ExpiryTime:2024-06-03 11:59:47 +0000 UTC Type:0 Mac:52:54:00:b4:3e:89 Iaid: IPaddr:192.168.39.131 Prefix:24 Hostname:ha-683480-m03 Clientid:01:52:54:00:b4:3e:89}
	I0603 10:59:58.981478   25542 main.go:141] libmachine: (ha-683480-m03) DBG | domain ha-683480-m03 has defined IP address 192.168.39.131 and MAC address 52:54:00:b4:3e:89 in network mk-ha-683480
	I0603 10:59:58.981604   25542 main.go:141] libmachine: (ha-683480-m03) Calling .GetSSHPort
	I0603 10:59:58.981783   25542 main.go:141] libmachine: (ha-683480-m03) Calling .GetSSHKeyPath
	I0603 10:59:58.981963   25542 main.go:141] libmachine: (ha-683480-m03) Calling .GetSSHKeyPath
	I0603 10:59:58.982113   25542 main.go:141] libmachine: (ha-683480-m03) Calling .GetSSHUsername
	I0603 10:59:58.982264   25542 main.go:141] libmachine: Using SSH client type: native
	I0603 10:59:58.982441   25542 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.131 22 <nil> <nil>}
	I0603 10:59:58.982454   25542 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0603 10:59:59.084016   25542 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0603 10:59:59.084065   25542 main.go:141] libmachine: found compatible host: buildroot
	I0603 10:59:59.084072   25542 main.go:141] libmachine: Provisioning with buildroot...
	I0603 10:59:59.084078   25542 main.go:141] libmachine: (ha-683480-m03) Calling .GetMachineName
	I0603 10:59:59.084325   25542 buildroot.go:166] provisioning hostname "ha-683480-m03"
	I0603 10:59:59.084352   25542 main.go:141] libmachine: (ha-683480-m03) Calling .GetMachineName
	I0603 10:59:59.084547   25542 main.go:141] libmachine: (ha-683480-m03) Calling .GetSSHHostname
	I0603 10:59:59.087209   25542 main.go:141] libmachine: (ha-683480-m03) DBG | domain ha-683480-m03 has defined MAC address 52:54:00:b4:3e:89 in network mk-ha-683480
	I0603 10:59:59.087572   25542 main.go:141] libmachine: (ha-683480-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:3e:89", ip: ""} in network mk-ha-683480: {Iface:virbr1 ExpiryTime:2024-06-03 11:59:47 +0000 UTC Type:0 Mac:52:54:00:b4:3e:89 Iaid: IPaddr:192.168.39.131 Prefix:24 Hostname:ha-683480-m03 Clientid:01:52:54:00:b4:3e:89}
	I0603 10:59:59.087598   25542 main.go:141] libmachine: (ha-683480-m03) DBG | domain ha-683480-m03 has defined IP address 192.168.39.131 and MAC address 52:54:00:b4:3e:89 in network mk-ha-683480
	I0603 10:59:59.087717   25542 main.go:141] libmachine: (ha-683480-m03) Calling .GetSSHPort
	I0603 10:59:59.087880   25542 main.go:141] libmachine: (ha-683480-m03) Calling .GetSSHKeyPath
	I0603 10:59:59.088037   25542 main.go:141] libmachine: (ha-683480-m03) Calling .GetSSHKeyPath
	I0603 10:59:59.088172   25542 main.go:141] libmachine: (ha-683480-m03) Calling .GetSSHUsername
	I0603 10:59:59.088313   25542 main.go:141] libmachine: Using SSH client type: native
	I0603 10:59:59.088464   25542 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.131 22 <nil> <nil>}
	I0603 10:59:59.088475   25542 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-683480-m03 && echo "ha-683480-m03" | sudo tee /etc/hostname
	I0603 10:59:59.207156   25542 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-683480-m03
	
	I0603 10:59:59.207191   25542 main.go:141] libmachine: (ha-683480-m03) Calling .GetSSHHostname
	I0603 10:59:59.209845   25542 main.go:141] libmachine: (ha-683480-m03) DBG | domain ha-683480-m03 has defined MAC address 52:54:00:b4:3e:89 in network mk-ha-683480
	I0603 10:59:59.210188   25542 main.go:141] libmachine: (ha-683480-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:3e:89", ip: ""} in network mk-ha-683480: {Iface:virbr1 ExpiryTime:2024-06-03 11:59:47 +0000 UTC Type:0 Mac:52:54:00:b4:3e:89 Iaid: IPaddr:192.168.39.131 Prefix:24 Hostname:ha-683480-m03 Clientid:01:52:54:00:b4:3e:89}
	I0603 10:59:59.210211   25542 main.go:141] libmachine: (ha-683480-m03) DBG | domain ha-683480-m03 has defined IP address 192.168.39.131 and MAC address 52:54:00:b4:3e:89 in network mk-ha-683480
	I0603 10:59:59.210324   25542 main.go:141] libmachine: (ha-683480-m03) Calling .GetSSHPort
	I0603 10:59:59.210508   25542 main.go:141] libmachine: (ha-683480-m03) Calling .GetSSHKeyPath
	I0603 10:59:59.210668   25542 main.go:141] libmachine: (ha-683480-m03) Calling .GetSSHKeyPath
	I0603 10:59:59.210837   25542 main.go:141] libmachine: (ha-683480-m03) Calling .GetSSHUsername
	I0603 10:59:59.211033   25542 main.go:141] libmachine: Using SSH client type: native
	I0603 10:59:59.211233   25542 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.131 22 <nil> <nil>}
	I0603 10:59:59.211257   25542 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-683480-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-683480-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-683480-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0603 10:59:59.324738   25542 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0603 10:59:59.324769   25542 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19008-7755/.minikube CaCertPath:/home/jenkins/minikube-integration/19008-7755/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19008-7755/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19008-7755/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19008-7755/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19008-7755/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19008-7755/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19008-7755/.minikube}
	I0603 10:59:59.324787   25542 buildroot.go:174] setting up certificates
	I0603 10:59:59.324796   25542 provision.go:84] configureAuth start
	I0603 10:59:59.324804   25542 main.go:141] libmachine: (ha-683480-m03) Calling .GetMachineName
	I0603 10:59:59.325081   25542 main.go:141] libmachine: (ha-683480-m03) Calling .GetIP
	I0603 10:59:59.327591   25542 main.go:141] libmachine: (ha-683480-m03) DBG | domain ha-683480-m03 has defined MAC address 52:54:00:b4:3e:89 in network mk-ha-683480
	I0603 10:59:59.327970   25542 main.go:141] libmachine: (ha-683480-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:3e:89", ip: ""} in network mk-ha-683480: {Iface:virbr1 ExpiryTime:2024-06-03 11:59:47 +0000 UTC Type:0 Mac:52:54:00:b4:3e:89 Iaid: IPaddr:192.168.39.131 Prefix:24 Hostname:ha-683480-m03 Clientid:01:52:54:00:b4:3e:89}
	I0603 10:59:59.327996   25542 main.go:141] libmachine: (ha-683480-m03) DBG | domain ha-683480-m03 has defined IP address 192.168.39.131 and MAC address 52:54:00:b4:3e:89 in network mk-ha-683480
	I0603 10:59:59.328103   25542 main.go:141] libmachine: (ha-683480-m03) Calling .GetSSHHostname
	I0603 10:59:59.330395   25542 main.go:141] libmachine: (ha-683480-m03) DBG | domain ha-683480-m03 has defined MAC address 52:54:00:b4:3e:89 in network mk-ha-683480
	I0603 10:59:59.330794   25542 main.go:141] libmachine: (ha-683480-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:3e:89", ip: ""} in network mk-ha-683480: {Iface:virbr1 ExpiryTime:2024-06-03 11:59:47 +0000 UTC Type:0 Mac:52:54:00:b4:3e:89 Iaid: IPaddr:192.168.39.131 Prefix:24 Hostname:ha-683480-m03 Clientid:01:52:54:00:b4:3e:89}
	I0603 10:59:59.330813   25542 main.go:141] libmachine: (ha-683480-m03) DBG | domain ha-683480-m03 has defined IP address 192.168.39.131 and MAC address 52:54:00:b4:3e:89 in network mk-ha-683480
	I0603 10:59:59.330950   25542 provision.go:143] copyHostCerts
	I0603 10:59:59.330982   25542 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19008-7755/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19008-7755/.minikube/key.pem
	I0603 10:59:59.331013   25542 exec_runner.go:144] found /home/jenkins/minikube-integration/19008-7755/.minikube/key.pem, removing ...
	I0603 10:59:59.331022   25542 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19008-7755/.minikube/key.pem
	I0603 10:59:59.331112   25542 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19008-7755/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19008-7755/.minikube/key.pem (1679 bytes)
	I0603 10:59:59.331193   25542 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19008-7755/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19008-7755/.minikube/ca.pem
	I0603 10:59:59.331212   25542 exec_runner.go:144] found /home/jenkins/minikube-integration/19008-7755/.minikube/ca.pem, removing ...
	I0603 10:59:59.331219   25542 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19008-7755/.minikube/ca.pem
	I0603 10:59:59.331243   25542 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19008-7755/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19008-7755/.minikube/ca.pem (1082 bytes)
	I0603 10:59:59.331285   25542 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19008-7755/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19008-7755/.minikube/cert.pem
	I0603 10:59:59.331306   25542 exec_runner.go:144] found /home/jenkins/minikube-integration/19008-7755/.minikube/cert.pem, removing ...
	I0603 10:59:59.331312   25542 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19008-7755/.minikube/cert.pem
	I0603 10:59:59.331332   25542 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19008-7755/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19008-7755/.minikube/cert.pem (1123 bytes)
	I0603 10:59:59.331379   25542 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19008-7755/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19008-7755/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19008-7755/.minikube/certs/ca-key.pem org=jenkins.ha-683480-m03 san=[127.0.0.1 192.168.39.131 ha-683480-m03 localhost minikube]
	I0603 10:59:59.723359   25542 provision.go:177] copyRemoteCerts
	I0603 10:59:59.723413   25542 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0603 10:59:59.723433   25542 main.go:141] libmachine: (ha-683480-m03) Calling .GetSSHHostname
	I0603 10:59:59.725988   25542 main.go:141] libmachine: (ha-683480-m03) DBG | domain ha-683480-m03 has defined MAC address 52:54:00:b4:3e:89 in network mk-ha-683480
	I0603 10:59:59.726378   25542 main.go:141] libmachine: (ha-683480-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:3e:89", ip: ""} in network mk-ha-683480: {Iface:virbr1 ExpiryTime:2024-06-03 11:59:47 +0000 UTC Type:0 Mac:52:54:00:b4:3e:89 Iaid: IPaddr:192.168.39.131 Prefix:24 Hostname:ha-683480-m03 Clientid:01:52:54:00:b4:3e:89}
	I0603 10:59:59.726403   25542 main.go:141] libmachine: (ha-683480-m03) DBG | domain ha-683480-m03 has defined IP address 192.168.39.131 and MAC address 52:54:00:b4:3e:89 in network mk-ha-683480
	I0603 10:59:59.726576   25542 main.go:141] libmachine: (ha-683480-m03) Calling .GetSSHPort
	I0603 10:59:59.726745   25542 main.go:141] libmachine: (ha-683480-m03) Calling .GetSSHKeyPath
	I0603 10:59:59.726907   25542 main.go:141] libmachine: (ha-683480-m03) Calling .GetSSHUsername
	I0603 10:59:59.727015   25542 sshutil.go:53] new ssh client: &{IP:192.168.39.131 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19008-7755/.minikube/machines/ha-683480-m03/id_rsa Username:docker}
	I0603 10:59:59.809634   25542 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19008-7755/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0603 10:59:59.809715   25542 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0603 10:59:59.836168   25542 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19008-7755/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0603 10:59:59.836228   25542 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0603 10:59:59.861712   25542 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19008-7755/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0603 10:59:59.861786   25542 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0603 10:59:59.885637   25542 provision.go:87] duration metric: took 560.829366ms to configureAuth
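configureAuth, as traced above, generates a server certificate for the new node and then copies ca.pem, server.pem and server-key.pem to /etc/docker over the machine's SSH key. A hypothetical scp-based equivalent is sketched below; minikube's ssh_runner streams the files itself, and writing straight into /etc/docker would need root, so this sketch drops them in /tmp instead:

	package main

	import (
		"fmt"
		"os"
		"os/exec"
	)

	func main() {
		key := "/home/jenkins/minikube-integration/19008-7755/.minikube/machines/ha-683480-m03/id_rsa"
		files := []string{ // paths relative to the minikube home, as in the log
			".minikube/certs/ca.pem",
			".minikube/machines/server.pem",
			".minikube/machines/server-key.pem",
		}
		for _, f := range files {
			cmd := exec.Command("scp",
				"-o", "StrictHostKeyChecking=no",
				"-o", "UserKnownHostsFile=/dev/null",
				"-i", key,
				f, "docker@192.168.39.131:/tmp/") // a sudo mv to /etc/docker would follow
			cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
			if err := cmd.Run(); err != nil {
				fmt.Fprintf(os.Stderr, "scp %s: %v\n", f, err)
			}
		}
	}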
	I0603 10:59:59.885664   25542 buildroot.go:189] setting minikube options for container-runtime
	I0603 10:59:59.885915   25542 config.go:182] Loaded profile config "ha-683480": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0603 10:59:59.886004   25542 main.go:141] libmachine: (ha-683480-m03) Calling .GetSSHHostname
	I0603 10:59:59.888576   25542 main.go:141] libmachine: (ha-683480-m03) DBG | domain ha-683480-m03 has defined MAC address 52:54:00:b4:3e:89 in network mk-ha-683480
	I0603 10:59:59.888923   25542 main.go:141] libmachine: (ha-683480-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:3e:89", ip: ""} in network mk-ha-683480: {Iface:virbr1 ExpiryTime:2024-06-03 11:59:47 +0000 UTC Type:0 Mac:52:54:00:b4:3e:89 Iaid: IPaddr:192.168.39.131 Prefix:24 Hostname:ha-683480-m03 Clientid:01:52:54:00:b4:3e:89}
	I0603 10:59:59.888955   25542 main.go:141] libmachine: (ha-683480-m03) DBG | domain ha-683480-m03 has defined IP address 192.168.39.131 and MAC address 52:54:00:b4:3e:89 in network mk-ha-683480
	I0603 10:59:59.889067   25542 main.go:141] libmachine: (ha-683480-m03) Calling .GetSSHPort
	I0603 10:59:59.889274   25542 main.go:141] libmachine: (ha-683480-m03) Calling .GetSSHKeyPath
	I0603 10:59:59.889408   25542 main.go:141] libmachine: (ha-683480-m03) Calling .GetSSHKeyPath
	I0603 10:59:59.889572   25542 main.go:141] libmachine: (ha-683480-m03) Calling .GetSSHUsername
	I0603 10:59:59.889724   25542 main.go:141] libmachine: Using SSH client type: native
	I0603 10:59:59.889869   25542 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.131 22 <nil> <nil>}
	I0603 10:59:59.889883   25542 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0603 11:00:00.154556   25542 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0603 11:00:00.154581   25542 main.go:141] libmachine: Checking connection to Docker...
	I0603 11:00:00.154591   25542 main.go:141] libmachine: (ha-683480-m03) Calling .GetURL
	I0603 11:00:00.156021   25542 main.go:141] libmachine: (ha-683480-m03) DBG | Using libvirt version 6000000
	I0603 11:00:00.158619   25542 main.go:141] libmachine: (ha-683480-m03) DBG | domain ha-683480-m03 has defined MAC address 52:54:00:b4:3e:89 in network mk-ha-683480
	I0603 11:00:00.158977   25542 main.go:141] libmachine: (ha-683480-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:3e:89", ip: ""} in network mk-ha-683480: {Iface:virbr1 ExpiryTime:2024-06-03 11:59:47 +0000 UTC Type:0 Mac:52:54:00:b4:3e:89 Iaid: IPaddr:192.168.39.131 Prefix:24 Hostname:ha-683480-m03 Clientid:01:52:54:00:b4:3e:89}
	I0603 11:00:00.158999   25542 main.go:141] libmachine: (ha-683480-m03) DBG | domain ha-683480-m03 has defined IP address 192.168.39.131 and MAC address 52:54:00:b4:3e:89 in network mk-ha-683480
	I0603 11:00:00.159212   25542 main.go:141] libmachine: Docker is up and running!
	I0603 11:00:00.159232   25542 main.go:141] libmachine: Reticulating splines...
	I0603 11:00:00.159240   25542 client.go:171] duration metric: took 26.433726692s to LocalClient.Create
	I0603 11:00:00.159264   25542 start.go:167] duration metric: took 26.433791309s to libmachine.API.Create "ha-683480"
	I0603 11:00:00.159275   25542 start.go:293] postStartSetup for "ha-683480-m03" (driver="kvm2")
	I0603 11:00:00.159288   25542 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0603 11:00:00.159309   25542 main.go:141] libmachine: (ha-683480-m03) Calling .DriverName
	I0603 11:00:00.159544   25542 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0603 11:00:00.159573   25542 main.go:141] libmachine: (ha-683480-m03) Calling .GetSSHHostname
	I0603 11:00:00.161457   25542 main.go:141] libmachine: (ha-683480-m03) DBG | domain ha-683480-m03 has defined MAC address 52:54:00:b4:3e:89 in network mk-ha-683480
	I0603 11:00:00.161799   25542 main.go:141] libmachine: (ha-683480-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:3e:89", ip: ""} in network mk-ha-683480: {Iface:virbr1 ExpiryTime:2024-06-03 11:59:47 +0000 UTC Type:0 Mac:52:54:00:b4:3e:89 Iaid: IPaddr:192.168.39.131 Prefix:24 Hostname:ha-683480-m03 Clientid:01:52:54:00:b4:3e:89}
	I0603 11:00:00.161827   25542 main.go:141] libmachine: (ha-683480-m03) DBG | domain ha-683480-m03 has defined IP address 192.168.39.131 and MAC address 52:54:00:b4:3e:89 in network mk-ha-683480
	I0603 11:00:00.161923   25542 main.go:141] libmachine: (ha-683480-m03) Calling .GetSSHPort
	I0603 11:00:00.162096   25542 main.go:141] libmachine: (ha-683480-m03) Calling .GetSSHKeyPath
	I0603 11:00:00.162219   25542 main.go:141] libmachine: (ha-683480-m03) Calling .GetSSHUsername
	I0603 11:00:00.162362   25542 sshutil.go:53] new ssh client: &{IP:192.168.39.131 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19008-7755/.minikube/machines/ha-683480-m03/id_rsa Username:docker}
	I0603 11:00:00.241253   25542 ssh_runner.go:195] Run: cat /etc/os-release
	I0603 11:00:00.245306   25542 info.go:137] Remote host: Buildroot 2023.02.9
	I0603 11:00:00.245333   25542 filesync.go:126] Scanning /home/jenkins/minikube-integration/19008-7755/.minikube/addons for local assets ...
	I0603 11:00:00.245408   25542 filesync.go:126] Scanning /home/jenkins/minikube-integration/19008-7755/.minikube/files for local assets ...
	I0603 11:00:00.245513   25542 filesync.go:149] local asset: /home/jenkins/minikube-integration/19008-7755/.minikube/files/etc/ssl/certs/150282.pem -> 150282.pem in /etc/ssl/certs
	I0603 11:00:00.245528   25542 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19008-7755/.minikube/files/etc/ssl/certs/150282.pem -> /etc/ssl/certs/150282.pem
	I0603 11:00:00.245610   25542 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0603 11:00:00.254388   25542 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/files/etc/ssl/certs/150282.pem --> /etc/ssl/certs/150282.pem (1708 bytes)
	I0603 11:00:00.278002   25542 start.go:296] duration metric: took 118.713832ms for postStartSetup
	I0603 11:00:00.278046   25542 main.go:141] libmachine: (ha-683480-m03) Calling .GetConfigRaw
	I0603 11:00:00.278576   25542 main.go:141] libmachine: (ha-683480-m03) Calling .GetIP
	I0603 11:00:00.281105   25542 main.go:141] libmachine: (ha-683480-m03) DBG | domain ha-683480-m03 has defined MAC address 52:54:00:b4:3e:89 in network mk-ha-683480
	I0603 11:00:00.281438   25542 main.go:141] libmachine: (ha-683480-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:3e:89", ip: ""} in network mk-ha-683480: {Iface:virbr1 ExpiryTime:2024-06-03 11:59:47 +0000 UTC Type:0 Mac:52:54:00:b4:3e:89 Iaid: IPaddr:192.168.39.131 Prefix:24 Hostname:ha-683480-m03 Clientid:01:52:54:00:b4:3e:89}
	I0603 11:00:00.281475   25542 main.go:141] libmachine: (ha-683480-m03) DBG | domain ha-683480-m03 has defined IP address 192.168.39.131 and MAC address 52:54:00:b4:3e:89 in network mk-ha-683480
	I0603 11:00:00.281700   25542 profile.go:143] Saving config to /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/ha-683480/config.json ...
	I0603 11:00:00.281881   25542 start.go:128] duration metric: took 26.574409175s to createHost
	I0603 11:00:00.281903   25542 main.go:141] libmachine: (ha-683480-m03) Calling .GetSSHHostname
	I0603 11:00:00.284180   25542 main.go:141] libmachine: (ha-683480-m03) DBG | domain ha-683480-m03 has defined MAC address 52:54:00:b4:3e:89 in network mk-ha-683480
	I0603 11:00:00.284481   25542 main.go:141] libmachine: (ha-683480-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:3e:89", ip: ""} in network mk-ha-683480: {Iface:virbr1 ExpiryTime:2024-06-03 11:59:47 +0000 UTC Type:0 Mac:52:54:00:b4:3e:89 Iaid: IPaddr:192.168.39.131 Prefix:24 Hostname:ha-683480-m03 Clientid:01:52:54:00:b4:3e:89}
	I0603 11:00:00.284502   25542 main.go:141] libmachine: (ha-683480-m03) DBG | domain ha-683480-m03 has defined IP address 192.168.39.131 and MAC address 52:54:00:b4:3e:89 in network mk-ha-683480
	I0603 11:00:00.284649   25542 main.go:141] libmachine: (ha-683480-m03) Calling .GetSSHPort
	I0603 11:00:00.284807   25542 main.go:141] libmachine: (ha-683480-m03) Calling .GetSSHKeyPath
	I0603 11:00:00.284967   25542 main.go:141] libmachine: (ha-683480-m03) Calling .GetSSHKeyPath
	I0603 11:00:00.285136   25542 main.go:141] libmachine: (ha-683480-m03) Calling .GetSSHUsername
	I0603 11:00:00.285287   25542 main.go:141] libmachine: Using SSH client type: native
	I0603 11:00:00.285449   25542 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.131 22 <nil> <nil>}
	I0603 11:00:00.285459   25542 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0603 11:00:00.387867   25542 main.go:141] libmachine: SSH cmd err, output: <nil>: 1717412400.367020560
	
	I0603 11:00:00.387894   25542 fix.go:216] guest clock: 1717412400.367020560
	I0603 11:00:00.387901   25542 fix.go:229] Guest: 2024-06-03 11:00:00.36702056 +0000 UTC Remote: 2024-06-03 11:00:00.281892535 +0000 UTC m=+225.848531606 (delta=85.128025ms)
	I0603 11:00:00.387917   25542 fix.go:200] guest clock delta is within tolerance: 85.128025ms
	I0603 11:00:00.387923   25542 start.go:83] releasing machines lock for "ha-683480-m03", held for 26.680546435s
	I0603 11:00:00.387947   25542 main.go:141] libmachine: (ha-683480-m03) Calling .DriverName
	I0603 11:00:00.388257   25542 main.go:141] libmachine: (ha-683480-m03) Calling .GetIP
	I0603 11:00:00.390864   25542 main.go:141] libmachine: (ha-683480-m03) DBG | domain ha-683480-m03 has defined MAC address 52:54:00:b4:3e:89 in network mk-ha-683480
	I0603 11:00:00.391267   25542 main.go:141] libmachine: (ha-683480-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:3e:89", ip: ""} in network mk-ha-683480: {Iface:virbr1 ExpiryTime:2024-06-03 11:59:47 +0000 UTC Type:0 Mac:52:54:00:b4:3e:89 Iaid: IPaddr:192.168.39.131 Prefix:24 Hostname:ha-683480-m03 Clientid:01:52:54:00:b4:3e:89}
	I0603 11:00:00.391302   25542 main.go:141] libmachine: (ha-683480-m03) DBG | domain ha-683480-m03 has defined IP address 192.168.39.131 and MAC address 52:54:00:b4:3e:89 in network mk-ha-683480
	I0603 11:00:00.393497   25542 out.go:177] * Found network options:
	I0603 11:00:00.394769   25542 out.go:177]   - NO_PROXY=192.168.39.116,192.168.39.127
	W0603 11:00:00.395992   25542 proxy.go:119] fail to check proxy env: Error ip not in block
	W0603 11:00:00.396013   25542 proxy.go:119] fail to check proxy env: Error ip not in block
	I0603 11:00:00.396024   25542 main.go:141] libmachine: (ha-683480-m03) Calling .DriverName
	I0603 11:00:00.396473   25542 main.go:141] libmachine: (ha-683480-m03) Calling .DriverName
	I0603 11:00:00.396641   25542 main.go:141] libmachine: (ha-683480-m03) Calling .DriverName
	I0603 11:00:00.396727   25542 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0603 11:00:00.396773   25542 main.go:141] libmachine: (ha-683480-m03) Calling .GetSSHHostname
	W0603 11:00:00.396844   25542 proxy.go:119] fail to check proxy env: Error ip not in block
	W0603 11:00:00.396874   25542 proxy.go:119] fail to check proxy env: Error ip not in block
	I0603 11:00:00.396938   25542 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0603 11:00:00.396970   25542 main.go:141] libmachine: (ha-683480-m03) Calling .GetSSHHostname
	I0603 11:00:00.399626   25542 main.go:141] libmachine: (ha-683480-m03) DBG | domain ha-683480-m03 has defined MAC address 52:54:00:b4:3e:89 in network mk-ha-683480
	I0603 11:00:00.399862   25542 main.go:141] libmachine: (ha-683480-m03) DBG | domain ha-683480-m03 has defined MAC address 52:54:00:b4:3e:89 in network mk-ha-683480
	I0603 11:00:00.400050   25542 main.go:141] libmachine: (ha-683480-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:3e:89", ip: ""} in network mk-ha-683480: {Iface:virbr1 ExpiryTime:2024-06-03 11:59:47 +0000 UTC Type:0 Mac:52:54:00:b4:3e:89 Iaid: IPaddr:192.168.39.131 Prefix:24 Hostname:ha-683480-m03 Clientid:01:52:54:00:b4:3e:89}
	I0603 11:00:00.400103   25542 main.go:141] libmachine: (ha-683480-m03) DBG | domain ha-683480-m03 has defined IP address 192.168.39.131 and MAC address 52:54:00:b4:3e:89 in network mk-ha-683480
	I0603 11:00:00.400203   25542 main.go:141] libmachine: (ha-683480-m03) Calling .GetSSHPort
	I0603 11:00:00.400283   25542 main.go:141] libmachine: (ha-683480-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:3e:89", ip: ""} in network mk-ha-683480: {Iface:virbr1 ExpiryTime:2024-06-03 11:59:47 +0000 UTC Type:0 Mac:52:54:00:b4:3e:89 Iaid: IPaddr:192.168.39.131 Prefix:24 Hostname:ha-683480-m03 Clientid:01:52:54:00:b4:3e:89}
	I0603 11:00:00.400317   25542 main.go:141] libmachine: (ha-683480-m03) DBG | domain ha-683480-m03 has defined IP address 192.168.39.131 and MAC address 52:54:00:b4:3e:89 in network mk-ha-683480
	I0603 11:00:00.400404   25542 main.go:141] libmachine: (ha-683480-m03) Calling .GetSSHKeyPath
	I0603 11:00:00.400488   25542 main.go:141] libmachine: (ha-683480-m03) Calling .GetSSHPort
	I0603 11:00:00.400566   25542 main.go:141] libmachine: (ha-683480-m03) Calling .GetSSHUsername
	I0603 11:00:00.400654   25542 main.go:141] libmachine: (ha-683480-m03) Calling .GetSSHKeyPath
	I0603 11:00:00.400720   25542 sshutil.go:53] new ssh client: &{IP:192.168.39.131 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19008-7755/.minikube/machines/ha-683480-m03/id_rsa Username:docker}
	I0603 11:00:00.400798   25542 main.go:141] libmachine: (ha-683480-m03) Calling .GetSSHUsername
	I0603 11:00:00.400930   25542 sshutil.go:53] new ssh client: &{IP:192.168.39.131 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19008-7755/.minikube/machines/ha-683480-m03/id_rsa Username:docker}
	I0603 11:00:00.634050   25542 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0603 11:00:00.640544   25542 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0603 11:00:00.640594   25542 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0603 11:00:00.660214   25542 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0603 11:00:00.660234   25542 start.go:494] detecting cgroup driver to use...
	I0603 11:00:00.660291   25542 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0603 11:00:00.679677   25542 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0603 11:00:00.694096   25542 docker.go:217] disabling cri-docker service (if available) ...
	I0603 11:00:00.694140   25542 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0603 11:00:00.708912   25542 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0603 11:00:00.723483   25542 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0603 11:00:00.853589   25542 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0603 11:00:01.030892   25542 docker.go:233] disabling docker service ...
	I0603 11:00:01.030950   25542 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0603 11:00:01.048354   25542 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0603 11:00:01.063783   25542 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0603 11:00:01.201067   25542 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0603 11:00:01.331372   25542 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0603 11:00:01.346784   25542 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0603 11:00:01.367080   25542 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0603 11:00:01.367154   25542 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 11:00:01.379422   25542 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0603 11:00:01.379477   25542 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 11:00:01.390936   25542 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 11:00:01.402407   25542 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 11:00:01.415123   25542 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0603 11:00:01.426922   25542 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 11:00:01.438527   25542 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 11:00:01.457041   25542 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 11:00:01.467851   25542 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0603 11:00:01.478276   25542 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0603 11:00:01.478340   25542 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0603 11:00:01.491724   25542 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0603 11:00:01.503268   25542 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0603 11:00:01.627729   25542 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0603 11:00:01.776425   25542 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0603 11:00:01.776507   25542 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0603 11:00:01.781948   25542 start.go:562] Will wait 60s for crictl version
	I0603 11:00:01.782020   25542 ssh_runner.go:195] Run: which crictl
	I0603 11:00:01.786363   25542 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0603 11:00:01.833252   25542 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0603 11:00:01.833321   25542 ssh_runner.go:195] Run: crio --version
	I0603 11:00:01.865004   25542 ssh_runner.go:195] Run: crio --version
	I0603 11:00:01.896736   25542 out.go:177] * Preparing Kubernetes v1.30.1 on CRI-O 1.29.1 ...
	I0603 11:00:01.898152   25542 out.go:177]   - env NO_PROXY=192.168.39.116
	I0603 11:00:01.899339   25542 out.go:177]   - env NO_PROXY=192.168.39.116,192.168.39.127
	I0603 11:00:01.900492   25542 main.go:141] libmachine: (ha-683480-m03) Calling .GetIP
	I0603 11:00:01.903408   25542 main.go:141] libmachine: (ha-683480-m03) DBG | domain ha-683480-m03 has defined MAC address 52:54:00:b4:3e:89 in network mk-ha-683480
	I0603 11:00:01.903798   25542 main.go:141] libmachine: (ha-683480-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:3e:89", ip: ""} in network mk-ha-683480: {Iface:virbr1 ExpiryTime:2024-06-03 11:59:47 +0000 UTC Type:0 Mac:52:54:00:b4:3e:89 Iaid: IPaddr:192.168.39.131 Prefix:24 Hostname:ha-683480-m03 Clientid:01:52:54:00:b4:3e:89}
	I0603 11:00:01.903825   25542 main.go:141] libmachine: (ha-683480-m03) DBG | domain ha-683480-m03 has defined IP address 192.168.39.131 and MAC address 52:54:00:b4:3e:89 in network mk-ha-683480
	I0603 11:00:01.904054   25542 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0603 11:00:01.908582   25542 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0603 11:00:01.922194   25542 mustload.go:65] Loading cluster: ha-683480
	I0603 11:00:01.922447   25542 config.go:182] Loaded profile config "ha-683480": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0603 11:00:01.922753   25542 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 11:00:01.922801   25542 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 11:00:01.938001   25542 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35231
	I0603 11:00:01.938500   25542 main.go:141] libmachine: () Calling .GetVersion
	I0603 11:00:01.939076   25542 main.go:141] libmachine: Using API Version  1
	I0603 11:00:01.939106   25542 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 11:00:01.939464   25542 main.go:141] libmachine: () Calling .GetMachineName
	I0603 11:00:01.939695   25542 main.go:141] libmachine: (ha-683480) Calling .GetState
	I0603 11:00:01.941861   25542 host.go:66] Checking if "ha-683480" exists ...
	I0603 11:00:01.942174   25542 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 11:00:01.942211   25542 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 11:00:01.956848   25542 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46605
	I0603 11:00:01.957291   25542 main.go:141] libmachine: () Calling .GetVersion
	I0603 11:00:01.957789   25542 main.go:141] libmachine: Using API Version  1
	I0603 11:00:01.957813   25542 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 11:00:01.958115   25542 main.go:141] libmachine: () Calling .GetMachineName
	I0603 11:00:01.958351   25542 main.go:141] libmachine: (ha-683480) Calling .DriverName
	I0603 11:00:01.958509   25542 certs.go:68] Setting up /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/ha-683480 for IP: 192.168.39.131
	I0603 11:00:01.958522   25542 certs.go:194] generating shared ca certs ...
	I0603 11:00:01.958539   25542 certs.go:226] acquiring lock for ca certs: {Name:mk50092df3bdecbf959926adc377ee06b75aacd9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 11:00:01.958703   25542 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19008-7755/.minikube/ca.key
	I0603 11:00:01.958756   25542 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19008-7755/.minikube/proxy-client-ca.key
	I0603 11:00:01.958769   25542 certs.go:256] generating profile certs ...
	I0603 11:00:01.958866   25542 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/ha-683480/client.key
	I0603 11:00:01.958894   25542 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/ha-683480/apiserver.key.8d0bf0ca
	I0603 11:00:01.958911   25542 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/ha-683480/apiserver.crt.8d0bf0ca with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.116 192.168.39.127 192.168.39.131 192.168.39.254]
	I0603 11:00:02.105324   25542 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/ha-683480/apiserver.crt.8d0bf0ca ...
	I0603 11:00:02.105364   25542 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/ha-683480/apiserver.crt.8d0bf0ca: {Name:mk778848a80dabf777f38206c994e23913ed3dc5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 11:00:02.105540   25542 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/ha-683480/apiserver.key.8d0bf0ca ...
	I0603 11:00:02.105558   25542 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/ha-683480/apiserver.key.8d0bf0ca: {Name:mkb9d2a175e2da763483deea8d48749d46669645 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 11:00:02.105651   25542 certs.go:381] copying /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/ha-683480/apiserver.crt.8d0bf0ca -> /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/ha-683480/apiserver.crt
	I0603 11:00:02.105801   25542 certs.go:385] copying /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/ha-683480/apiserver.key.8d0bf0ca -> /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/ha-683480/apiserver.key
	I0603 11:00:02.105969   25542 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/ha-683480/proxy-client.key
	I0603 11:00:02.105992   25542 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19008-7755/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0603 11:00:02.106012   25542 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19008-7755/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0603 11:00:02.106028   25542 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19008-7755/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0603 11:00:02.106043   25542 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19008-7755/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0603 11:00:02.106057   25542 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/ha-683480/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0603 11:00:02.106074   25542 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/ha-683480/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0603 11:00:02.106088   25542 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/ha-683480/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0603 11:00:02.106102   25542 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/ha-683480/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0603 11:00:02.106165   25542 certs.go:484] found cert: /home/jenkins/minikube-integration/19008-7755/.minikube/certs/15028.pem (1338 bytes)
	W0603 11:00:02.106200   25542 certs.go:480] ignoring /home/jenkins/minikube-integration/19008-7755/.minikube/certs/15028_empty.pem, impossibly tiny 0 bytes
	I0603 11:00:02.106209   25542 certs.go:484] found cert: /home/jenkins/minikube-integration/19008-7755/.minikube/certs/ca-key.pem (1679 bytes)
	I0603 11:00:02.106229   25542 certs.go:484] found cert: /home/jenkins/minikube-integration/19008-7755/.minikube/certs/ca.pem (1082 bytes)
	I0603 11:00:02.106250   25542 certs.go:484] found cert: /home/jenkins/minikube-integration/19008-7755/.minikube/certs/cert.pem (1123 bytes)
	I0603 11:00:02.106270   25542 certs.go:484] found cert: /home/jenkins/minikube-integration/19008-7755/.minikube/certs/key.pem (1679 bytes)
	I0603 11:00:02.106303   25542 certs.go:484] found cert: /home/jenkins/minikube-integration/19008-7755/.minikube/files/etc/ssl/certs/150282.pem (1708 bytes)
	I0603 11:00:02.106328   25542 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19008-7755/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0603 11:00:02.106342   25542 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19008-7755/.minikube/certs/15028.pem -> /usr/share/ca-certificates/15028.pem
	I0603 11:00:02.106358   25542 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19008-7755/.minikube/files/etc/ssl/certs/150282.pem -> /usr/share/ca-certificates/150282.pem
	I0603 11:00:02.106389   25542 main.go:141] libmachine: (ha-683480) Calling .GetSSHHostname
	I0603 11:00:02.109903   25542 main.go:141] libmachine: (ha-683480) DBG | domain ha-683480 has defined MAC address 52:54:00:e5:3f:6a in network mk-ha-683480
	I0603 11:00:02.110348   25542 main.go:141] libmachine: (ha-683480) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:3f:6a", ip: ""} in network mk-ha-683480: {Iface:virbr1 ExpiryTime:2024-06-03 11:56:28 +0000 UTC Type:0 Mac:52:54:00:e5:3f:6a Iaid: IPaddr:192.168.39.116 Prefix:24 Hostname:ha-683480 Clientid:01:52:54:00:e5:3f:6a}
	I0603 11:00:02.110374   25542 main.go:141] libmachine: (ha-683480) DBG | domain ha-683480 has defined IP address 192.168.39.116 and MAC address 52:54:00:e5:3f:6a in network mk-ha-683480
	I0603 11:00:02.110585   25542 main.go:141] libmachine: (ha-683480) Calling .GetSSHPort
	I0603 11:00:02.110835   25542 main.go:141] libmachine: (ha-683480) Calling .GetSSHKeyPath
	I0603 11:00:02.111090   25542 main.go:141] libmachine: (ha-683480) Calling .GetSSHUsername
	I0603 11:00:02.111261   25542 sshutil.go:53] new ssh client: &{IP:192.168.39.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19008-7755/.minikube/machines/ha-683480/id_rsa Username:docker}
	I0603 11:00:02.183533   25542 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.pub
	I0603 11:00:02.188879   25542 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0603 11:00:02.203833   25542 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.key
	I0603 11:00:02.208388   25542 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I0603 11:00:02.222730   25542 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.crt
	I0603 11:00:02.227497   25542 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0603 11:00:02.239251   25542 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.key
	I0603 11:00:02.244412   25542 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0603 11:00:02.255579   25542 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.crt
	I0603 11:00:02.260003   25542 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0603 11:00:02.272635   25542 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.key
	I0603 11:00:02.277990   25542 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I0603 11:00:02.290408   25542 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0603 11:00:02.318961   25542 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0603 11:00:02.346126   25542 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0603 11:00:02.372570   25542 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0603 11:00:02.401050   25542 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/ha-683480/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I0603 11:00:02.429414   25542 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/ha-683480/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0603 11:00:02.456299   25542 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/ha-683480/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0603 11:00:02.482602   25542 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/ha-683480/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0603 11:00:02.510741   25542 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0603 11:00:02.537472   25542 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/certs/15028.pem --> /usr/share/ca-certificates/15028.pem (1338 bytes)
	I0603 11:00:02.564419   25542 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/files/etc/ssl/certs/150282.pem --> /usr/share/ca-certificates/150282.pem (1708 bytes)
	I0603 11:00:02.591284   25542 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0603 11:00:02.610135   25542 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I0603 11:00:02.629305   25542 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0603 11:00:02.647719   25542 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0603 11:00:02.665725   25542 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0603 11:00:02.685001   25542 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I0603 11:00:02.704175   25542 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0603 11:00:02.722580   25542 ssh_runner.go:195] Run: openssl version
	I0603 11:00:02.729329   25542 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/150282.pem && ln -fs /usr/share/ca-certificates/150282.pem /etc/ssl/certs/150282.pem"
	I0603 11:00:02.742109   25542 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/150282.pem
	I0603 11:00:02.747399   25542 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jun  3 10:51 /usr/share/ca-certificates/150282.pem
	I0603 11:00:02.747472   25542 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/150282.pem
	I0603 11:00:02.754031   25542 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/150282.pem /etc/ssl/certs/3ec20f2e.0"
	I0603 11:00:02.767182   25542 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0603 11:00:02.781295   25542 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0603 11:00:02.787458   25542 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jun  3 10:39 /usr/share/ca-certificates/minikubeCA.pem
	I0603 11:00:02.787525   25542 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0603 11:00:02.794411   25542 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0603 11:00:02.809009   25542 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15028.pem && ln -fs /usr/share/ca-certificates/15028.pem /etc/ssl/certs/15028.pem"
	I0603 11:00:02.822291   25542 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15028.pem
	I0603 11:00:02.827800   25542 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jun  3 10:51 /usr/share/ca-certificates/15028.pem
	I0603 11:00:02.827869   25542 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15028.pem
	I0603 11:00:02.835606   25542 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/15028.pem /etc/ssl/certs/51391683.0"
	I0603 11:00:02.849870   25542 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0603 11:00:02.854793   25542 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0603 11:00:02.854844   25542 kubeadm.go:928] updating node {m03 192.168.39.131 8443 v1.30.1 crio true true} ...
	I0603 11:00:02.854935   25542 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-683480-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.131
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.1 ClusterName:ha-683480 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0603 11:00:02.854971   25542 kube-vip.go:115] generating kube-vip config ...
	I0603 11:00:02.855009   25542 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0603 11:00:02.876621   25542 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0603 11:00:02.876697   25542 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0603 11:00:02.876750   25542 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.1
	I0603 11:00:02.889751   25542 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.30.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.30.1': No such file or directory
	
	Initiating transfer...
	I0603 11:00:02.889803   25542 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.30.1
	I0603 11:00:02.902056   25542 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubectl.sha256
	I0603 11:00:02.902087   25542 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19008-7755/.minikube/cache/linux/amd64/v1.30.1/kubectl -> /var/lib/minikube/binaries/v1.30.1/kubectl
	I0603 11:00:02.902139   25542 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubeadm.sha256
	I0603 11:00:02.902157   25542 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.1/kubectl
	I0603 11:00:02.902166   25542 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19008-7755/.minikube/cache/linux/amd64/v1.30.1/kubeadm -> /var/lib/minikube/binaries/v1.30.1/kubeadm
	I0603 11:00:02.902063   25542 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubelet.sha256
	I0603 11:00:02.902235   25542 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.1/kubeadm
	I0603 11:00:02.902238   25542 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0603 11:00:02.919491   25542 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19008-7755/.minikube/cache/linux/amd64/v1.30.1/kubelet -> /var/lib/minikube/binaries/v1.30.1/kubelet
	I0603 11:00:02.919560   25542 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.1/kubectl: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.1/kubectl': No such file or directory
	I0603 11:00:02.919602   25542 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/cache/linux/amd64/v1.30.1/kubectl --> /var/lib/minikube/binaries/v1.30.1/kubectl (51454104 bytes)
	I0603 11:00:02.919644   25542 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.1/kubeadm: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.1/kubeadm': No such file or directory
	I0603 11:00:02.919605   25542 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.1/kubelet
	I0603 11:00:02.919674   25542 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/cache/linux/amd64/v1.30.1/kubeadm --> /var/lib/minikube/binaries/v1.30.1/kubeadm (50249880 bytes)
	I0603 11:00:02.942527   25542 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.1/kubelet: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.1/kubelet': No such file or directory
	I0603 11:00:02.942564   25542 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/cache/linux/amd64/v1.30.1/kubelet --> /var/lib/minikube/binaries/v1.30.1/kubelet (100100024 bytes)
	I0603 11:00:03.978032   25542 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0603 11:00:03.988337   25542 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0603 11:00:04.008747   25542 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0603 11:00:04.028226   25542 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0603 11:00:04.047517   25542 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0603 11:00:04.051996   25542 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0603 11:00:04.067432   25542 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0603 11:00:04.198882   25542 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0603 11:00:04.217113   25542 host.go:66] Checking if "ha-683480" exists ...
	I0603 11:00:04.217581   25542 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 11:00:04.217640   25542 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 11:00:04.234040   25542 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33305
	I0603 11:00:04.234511   25542 main.go:141] libmachine: () Calling .GetVersion
	I0603 11:00:04.235158   25542 main.go:141] libmachine: Using API Version  1
	I0603 11:00:04.235188   25542 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 11:00:04.235582   25542 main.go:141] libmachine: () Calling .GetMachineName
	I0603 11:00:04.235796   25542 main.go:141] libmachine: (ha-683480) Calling .DriverName
	I0603 11:00:04.235958   25542 start.go:316] joinCluster: &{Name:ha-683480 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:ha-683480 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.116 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.127 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.131 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0603 11:00:04.236121   25542 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0603 11:00:04.236140   25542 main.go:141] libmachine: (ha-683480) Calling .GetSSHHostname
	I0603 11:00:04.239417   25542 main.go:141] libmachine: (ha-683480) DBG | domain ha-683480 has defined MAC address 52:54:00:e5:3f:6a in network mk-ha-683480
	I0603 11:00:04.239878   25542 main.go:141] libmachine: (ha-683480) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:3f:6a", ip: ""} in network mk-ha-683480: {Iface:virbr1 ExpiryTime:2024-06-03 11:56:28 +0000 UTC Type:0 Mac:52:54:00:e5:3f:6a Iaid: IPaddr:192.168.39.116 Prefix:24 Hostname:ha-683480 Clientid:01:52:54:00:e5:3f:6a}
	I0603 11:00:04.239905   25542 main.go:141] libmachine: (ha-683480) DBG | domain ha-683480 has defined IP address 192.168.39.116 and MAC address 52:54:00:e5:3f:6a in network mk-ha-683480
	I0603 11:00:04.240115   25542 main.go:141] libmachine: (ha-683480) Calling .GetSSHPort
	I0603 11:00:04.240323   25542 main.go:141] libmachine: (ha-683480) Calling .GetSSHKeyPath
	I0603 11:00:04.240468   25542 main.go:141] libmachine: (ha-683480) Calling .GetSSHUsername
	I0603 11:00:04.240700   25542 sshutil.go:53] new ssh client: &{IP:192.168.39.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19008-7755/.minikube/machines/ha-683480/id_rsa Username:docker}
	I0603 11:00:04.395209   25542 start.go:342] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:192.168.39.131 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0603 11:00:04.395250   25542 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 9gjcl5.cq7m0hgvprwevy8u --discovery-token-ca-cert-hash sha256:efe6cbdd58a590fb6c3b56a05a1648145bfb13b8a8cd2383ea34b710fa987e0b --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-683480-m03 --control-plane --apiserver-advertise-address=192.168.39.131 --apiserver-bind-port=8443"
	I0603 11:00:28.714875   25542 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 9gjcl5.cq7m0hgvprwevy8u --discovery-token-ca-cert-hash sha256:efe6cbdd58a590fb6c3b56a05a1648145bfb13b8a8cd2383ea34b710fa987e0b --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-683480-m03 --control-plane --apiserver-advertise-address=192.168.39.131 --apiserver-bind-port=8443": (24.319601388s)
	I0603 11:00:28.714909   25542 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0603 11:00:29.352061   25542 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-683480-m03 minikube.k8s.io/updated_at=2024_06_03T11_00_29_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=599070631c2216ebc936292d491e4fe10e15b9d8 minikube.k8s.io/name=ha-683480 minikube.k8s.io/primary=false
	I0603 11:00:29.480343   25542 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-683480-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I0603 11:00:29.587008   25542 start.go:318] duration metric: took 25.351046222s to joinCluster
	I0603 11:00:29.587111   25542 start.go:234] Will wait 6m0s for node &{Name:m03 IP:192.168.39.131 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0603 11:00:29.588268   25542 out.go:177] * Verifying Kubernetes components...
	I0603 11:00:29.587489   25542 config.go:182] Loaded profile config "ha-683480": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0603 11:00:29.589431   25542 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0603 11:00:29.834362   25542 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0603 11:00:29.880657   25542 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19008-7755/kubeconfig
	I0603 11:00:29.880998   25542 kapi.go:59] client config for ha-683480: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19008-7755/.minikube/profiles/ha-683480/client.crt", KeyFile:"/home/jenkins/minikube-integration/19008-7755/.minikube/profiles/ha-683480/client.key", CAFile:"/home/jenkins/minikube-integration/19008-7755/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1cfa500), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0603 11:00:29.881080   25542 kubeadm.go:477] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.116:8443
	I0603 11:00:29.881335   25542 node_ready.go:35] waiting up to 6m0s for node "ha-683480-m03" to be "Ready" ...
	I0603 11:00:29.881424   25542 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/nodes/ha-683480-m03
	I0603 11:00:29.881434   25542 round_trippers.go:469] Request Headers:
	I0603 11:00:29.881446   25542 round_trippers.go:473]     Accept: application/json, */*
	I0603 11:00:29.881454   25542 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 11:00:29.885455   25542 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 11:00:30.382440   25542 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/nodes/ha-683480-m03
	I0603 11:00:30.382464   25542 round_trippers.go:469] Request Headers:
	I0603 11:00:30.382476   25542 round_trippers.go:473]     Accept: application/json, */*
	I0603 11:00:30.382482   25542 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 11:00:30.385728   25542 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 11:00:30.881629   25542 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/nodes/ha-683480-m03
	I0603 11:00:30.881649   25542 round_trippers.go:469] Request Headers:
	I0603 11:00:30.881664   25542 round_trippers.go:473]     Accept: application/json, */*
	I0603 11:00:30.881669   25542 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 11:00:30.885779   25542 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 11:00:31.382390   25542 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/nodes/ha-683480-m03
	I0603 11:00:31.382414   25542 round_trippers.go:469] Request Headers:
	I0603 11:00:31.382423   25542 round_trippers.go:473]     Accept: application/json, */*
	I0603 11:00:31.382430   25542 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 11:00:31.385942   25542 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 11:00:31.882160   25542 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/nodes/ha-683480-m03
	I0603 11:00:31.882183   25542 round_trippers.go:469] Request Headers:
	I0603 11:00:31.882191   25542 round_trippers.go:473]     Accept: application/json, */*
	I0603 11:00:31.882195   25542 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 11:00:31.885181   25542 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0603 11:00:31.886013   25542 node_ready.go:53] node "ha-683480-m03" has status "Ready":"False"
	I0603 11:00:32.381540   25542 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/nodes/ha-683480-m03
	I0603 11:00:32.381562   25542 round_trippers.go:469] Request Headers:
	I0603 11:00:32.381570   25542 round_trippers.go:473]     Accept: application/json, */*
	I0603 11:00:32.381574   25542 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 11:00:32.384833   25542 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 11:00:32.881737   25542 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/nodes/ha-683480-m03
	I0603 11:00:32.881811   25542 round_trippers.go:469] Request Headers:
	I0603 11:00:32.881826   25542 round_trippers.go:473]     Accept: application/json, */*
	I0603 11:00:32.881831   25542 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 11:00:32.885145   25542 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 11:00:33.382335   25542 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/nodes/ha-683480-m03
	I0603 11:00:33.382356   25542 round_trippers.go:469] Request Headers:
	I0603 11:00:33.382363   25542 round_trippers.go:473]     Accept: application/json, */*
	I0603 11:00:33.382369   25542 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 11:00:33.385480   25542 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 11:00:33.882454   25542 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/nodes/ha-683480-m03
	I0603 11:00:33.882488   25542 round_trippers.go:469] Request Headers:
	I0603 11:00:33.882500   25542 round_trippers.go:473]     Accept: application/json, */*
	I0603 11:00:33.882507   25542 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 11:00:33.886449   25542 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 11:00:33.887171   25542 node_ready.go:53] node "ha-683480-m03" has status "Ready":"False"
	I0603 11:00:34.381993   25542 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/nodes/ha-683480-m03
	I0603 11:00:34.382022   25542 round_trippers.go:469] Request Headers:
	I0603 11:00:34.382034   25542 round_trippers.go:473]     Accept: application/json, */*
	I0603 11:00:34.382041   25542 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 11:00:34.385106   25542 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 11:00:34.882403   25542 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/nodes/ha-683480-m03
	I0603 11:00:34.882426   25542 round_trippers.go:469] Request Headers:
	I0603 11:00:34.882433   25542 round_trippers.go:473]     Accept: application/json, */*
	I0603 11:00:34.882438   25542 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 11:00:34.886934   25542 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 11:00:35.382506   25542 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/nodes/ha-683480-m03
	I0603 11:00:35.382535   25542 round_trippers.go:469] Request Headers:
	I0603 11:00:35.382544   25542 round_trippers.go:473]     Accept: application/json, */*
	I0603 11:00:35.382548   25542 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 11:00:35.385963   25542 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 11:00:35.881616   25542 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/nodes/ha-683480-m03
	I0603 11:00:35.881637   25542 round_trippers.go:469] Request Headers:
	I0603 11:00:35.881645   25542 round_trippers.go:473]     Accept: application/json, */*
	I0603 11:00:35.881650   25542 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 11:00:35.884867   25542 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 11:00:36.381572   25542 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/nodes/ha-683480-m03
	I0603 11:00:36.381595   25542 round_trippers.go:469] Request Headers:
	I0603 11:00:36.381602   25542 round_trippers.go:473]     Accept: application/json, */*
	I0603 11:00:36.381607   25542 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 11:00:36.384871   25542 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 11:00:36.385932   25542 node_ready.go:53] node "ha-683480-m03" has status "Ready":"False"
	I0603 11:00:36.882281   25542 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/nodes/ha-683480-m03
	I0603 11:00:36.882304   25542 round_trippers.go:469] Request Headers:
	I0603 11:00:36.882314   25542 round_trippers.go:473]     Accept: application/json, */*
	I0603 11:00:36.882319   25542 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 11:00:36.885367   25542 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 11:00:36.886083   25542 node_ready.go:49] node "ha-683480-m03" has status "Ready":"True"
	I0603 11:00:36.886108   25542 node_ready.go:38] duration metric: took 7.004756506s for node "ha-683480-m03" to be "Ready" ...
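The loop above polls GET /api/v1/nodes/ha-683480-m03 roughly every 500ms until the node reports a Ready condition of True (about 7s here). Below is a minimal client-go sketch of the same check, not minikube's own implementation; the kubeconfig path is a placeholder.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Placeholder kubeconfig path; minikube resolves this per profile.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	for {
		node, err := client.CoreV1().Nodes().Get(context.TODO(), "ha-683480-m03", metav1.GetOptions{})
		if err != nil {
			panic(err)
		}
		ready := false
		for _, c := range node.Status.Conditions {
			if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
				ready = true
			}
		}
		if ready {
			fmt.Println("node is Ready")
			return
		}
		time.Sleep(500 * time.Millisecond) // matches the ~500ms poll interval in the log
	}
}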
	I0603 11:00:36.886120   25542 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0603 11:00:36.886192   25542 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/namespaces/kube-system/pods
	I0603 11:00:36.886204   25542 round_trippers.go:469] Request Headers:
	I0603 11:00:36.886211   25542 round_trippers.go:473]     Accept: application/json, */*
	I0603 11:00:36.886216   25542 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 11:00:36.892662   25542 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0603 11:00:36.899322   25542 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-8tqf9" in "kube-system" namespace to be "Ready" ...
	I0603 11:00:36.899389   25542 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-8tqf9
	I0603 11:00:36.899398   25542 round_trippers.go:469] Request Headers:
	I0603 11:00:36.899405   25542 round_trippers.go:473]     Accept: application/json, */*
	I0603 11:00:36.899410   25542 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 11:00:36.901816   25542 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0603 11:00:36.902531   25542 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/nodes/ha-683480
	I0603 11:00:36.902545   25542 round_trippers.go:469] Request Headers:
	I0603 11:00:36.902551   25542 round_trippers.go:473]     Accept: application/json, */*
	I0603 11:00:36.902554   25542 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 11:00:36.905221   25542 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0603 11:00:36.906202   25542 pod_ready.go:92] pod "coredns-7db6d8ff4d-8tqf9" in "kube-system" namespace has status "Ready":"True"
	I0603 11:00:36.906225   25542 pod_ready.go:81] duration metric: took 6.88162ms for pod "coredns-7db6d8ff4d-8tqf9" in "kube-system" namespace to be "Ready" ...
	I0603 11:00:36.906257   25542 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-nff86" in "kube-system" namespace to be "Ready" ...
	I0603 11:00:36.906339   25542 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-nff86
	I0603 11:00:36.906349   25542 round_trippers.go:469] Request Headers:
	I0603 11:00:36.906359   25542 round_trippers.go:473]     Accept: application/json, */*
	I0603 11:00:36.906367   25542 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 11:00:36.916283   25542 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0603 11:00:36.917407   25542 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/nodes/ha-683480
	I0603 11:00:36.917426   25542 round_trippers.go:469] Request Headers:
	I0603 11:00:36.917453   25542 round_trippers.go:473]     Accept: application/json, */*
	I0603 11:00:36.917459   25542 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 11:00:36.921564   25542 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 11:00:36.922227   25542 pod_ready.go:92] pod "coredns-7db6d8ff4d-nff86" in "kube-system" namespace has status "Ready":"True"
	I0603 11:00:36.922247   25542 pod_ready.go:81] duration metric: took 15.976604ms for pod "coredns-7db6d8ff4d-nff86" in "kube-system" namespace to be "Ready" ...
	I0603 11:00:36.922260   25542 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-683480" in "kube-system" namespace to be "Ready" ...
	I0603 11:00:36.922331   25542 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/namespaces/kube-system/pods/etcd-ha-683480
	I0603 11:00:36.922342   25542 round_trippers.go:469] Request Headers:
	I0603 11:00:36.922351   25542 round_trippers.go:473]     Accept: application/json, */*
	I0603 11:00:36.922360   25542 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 11:00:36.924622   25542 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0603 11:00:36.924986   25542 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/nodes/ha-683480
	I0603 11:00:36.924998   25542 round_trippers.go:469] Request Headers:
	I0603 11:00:36.925005   25542 round_trippers.go:473]     Accept: application/json, */*
	I0603 11:00:36.925009   25542 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 11:00:36.927602   25542 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0603 11:00:36.928048   25542 pod_ready.go:92] pod "etcd-ha-683480" in "kube-system" namespace has status "Ready":"True"
	I0603 11:00:36.928062   25542 pod_ready.go:81] duration metric: took 5.791678ms for pod "etcd-ha-683480" in "kube-system" namespace to be "Ready" ...
	I0603 11:00:36.928071   25542 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-683480-m02" in "kube-system" namespace to be "Ready" ...
	I0603 11:00:36.928113   25542 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/namespaces/kube-system/pods/etcd-ha-683480-m02
	I0603 11:00:36.928120   25542 round_trippers.go:469] Request Headers:
	I0603 11:00:36.928127   25542 round_trippers.go:473]     Accept: application/json, */*
	I0603 11:00:36.928131   25542 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 11:00:36.930214   25542 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0603 11:00:36.930749   25542 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/nodes/ha-683480-m02
	I0603 11:00:36.930762   25542 round_trippers.go:469] Request Headers:
	I0603 11:00:36.930772   25542 round_trippers.go:473]     Accept: application/json, */*
	I0603 11:00:36.930776   25542 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 11:00:36.933236   25542 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0603 11:00:36.933907   25542 pod_ready.go:92] pod "etcd-ha-683480-m02" in "kube-system" namespace has status "Ready":"True"
	I0603 11:00:36.933926   25542 pod_ready.go:81] duration metric: took 5.847458ms for pod "etcd-ha-683480-m02" in "kube-system" namespace to be "Ready" ...
	I0603 11:00:36.933937   25542 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-683480-m03" in "kube-system" namespace to be "Ready" ...
	I0603 11:00:37.082647   25542 request.go:629] Waited for 148.638529ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.116:8443/api/v1/namespaces/kube-system/pods/etcd-ha-683480-m03
	I0603 11:00:37.082736   25542 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/namespaces/kube-system/pods/etcd-ha-683480-m03
	I0603 11:00:37.082747   25542 round_trippers.go:469] Request Headers:
	I0603 11:00:37.082757   25542 round_trippers.go:473]     Accept: application/json, */*
	I0603 11:00:37.082768   25542 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 11:00:37.086221   25542 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
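The "Waited for ... due to client-side throttling, not priority and fairness" lines come from client-go's client-side rate limiter rather than from the apiserver. As an illustrative sketch only (an assumption, not the settings this test used), that limiter is tuned through the QPS and Burst fields on rest.Config:

package main

import (
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// newClient builds a clientset with explicit client-side rate limits.
// The QPS and Burst values are illustrative, not what this run used.
func newClient() (*kubernetes.Clientset, error) {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder path
	if err != nil {
		return nil, err
	}
	cfg.QPS = 50    // steady-state requests per second before the limiter queues requests
	cfg.Burst = 100 // short-term burst allowance above QPS
	return kubernetes.NewForConfig(cfg)
}

func main() {
	if _, err := newClient(); err != nil {
		panic(err)
	}
}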
	I0603 11:00:37.283114   25542 request.go:629] Waited for 196.19485ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.116:8443/api/v1/nodes/ha-683480-m03
	I0603 11:00:37.283249   25542 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/nodes/ha-683480-m03
	I0603 11:00:37.283274   25542 round_trippers.go:469] Request Headers:
	I0603 11:00:37.283300   25542 round_trippers.go:473]     Accept: application/json, */*
	I0603 11:00:37.283320   25542 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 11:00:37.286391   25542 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 11:00:37.482631   25542 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/namespaces/kube-system/pods/etcd-ha-683480-m03
	I0603 11:00:37.482660   25542 round_trippers.go:469] Request Headers:
	I0603 11:00:37.482670   25542 round_trippers.go:473]     Accept: application/json, */*
	I0603 11:00:37.482674   25542 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 11:00:37.486452   25542 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 11:00:37.682342   25542 request.go:629] Waited for 195.29431ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.116:8443/api/v1/nodes/ha-683480-m03
	I0603 11:00:37.682402   25542 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/nodes/ha-683480-m03
	I0603 11:00:37.682409   25542 round_trippers.go:469] Request Headers:
	I0603 11:00:37.682419   25542 round_trippers.go:473]     Accept: application/json, */*
	I0603 11:00:37.682429   25542 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 11:00:37.685659   25542 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 11:00:37.934462   25542 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/namespaces/kube-system/pods/etcd-ha-683480-m03
	I0603 11:00:37.934504   25542 round_trippers.go:469] Request Headers:
	I0603 11:00:37.934512   25542 round_trippers.go:473]     Accept: application/json, */*
	I0603 11:00:37.934516   25542 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 11:00:37.937491   25542 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0603 11:00:38.082678   25542 request.go:629] Waited for 144.315129ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.116:8443/api/v1/nodes/ha-683480-m03
	I0603 11:00:38.082742   25542 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/nodes/ha-683480-m03
	I0603 11:00:38.082749   25542 round_trippers.go:469] Request Headers:
	I0603 11:00:38.082766   25542 round_trippers.go:473]     Accept: application/json, */*
	I0603 11:00:38.082780   25542 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 11:00:38.086484   25542 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 11:00:38.434304   25542 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/namespaces/kube-system/pods/etcd-ha-683480-m03
	I0603 11:00:38.434325   25542 round_trippers.go:469] Request Headers:
	I0603 11:00:38.434332   25542 round_trippers.go:473]     Accept: application/json, */*
	I0603 11:00:38.434338   25542 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 11:00:38.437395   25542 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 11:00:38.482863   25542 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/nodes/ha-683480-m03
	I0603 11:00:38.482882   25542 round_trippers.go:469] Request Headers:
	I0603 11:00:38.482891   25542 round_trippers.go:473]     Accept: application/json, */*
	I0603 11:00:38.482894   25542 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 11:00:38.486027   25542 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 11:00:38.934180   25542 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/namespaces/kube-system/pods/etcd-ha-683480-m03
	I0603 11:00:38.934200   25542 round_trippers.go:469] Request Headers:
	I0603 11:00:38.934208   25542 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 11:00:38.934214   25542 round_trippers.go:473]     Accept: application/json, */*
	I0603 11:00:38.936951   25542 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0603 11:00:38.937790   25542 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/nodes/ha-683480-m03
	I0603 11:00:38.937807   25542 round_trippers.go:469] Request Headers:
	I0603 11:00:38.937814   25542 round_trippers.go:473]     Accept: application/json, */*
	I0603 11:00:38.937819   25542 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 11:00:38.940485   25542 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0603 11:00:38.941085   25542 pod_ready.go:102] pod "etcd-ha-683480-m03" in "kube-system" namespace has status "Ready":"False"
	I0603 11:00:39.434821   25542 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/namespaces/kube-system/pods/etcd-ha-683480-m03
	I0603 11:00:39.434843   25542 round_trippers.go:469] Request Headers:
	I0603 11:00:39.434851   25542 round_trippers.go:473]     Accept: application/json, */*
	I0603 11:00:39.434855   25542 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 11:00:39.437945   25542 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 11:00:39.438609   25542 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/nodes/ha-683480-m03
	I0603 11:00:39.438626   25542 round_trippers.go:469] Request Headers:
	I0603 11:00:39.438634   25542 round_trippers.go:473]     Accept: application/json, */*
	I0603 11:00:39.438638   25542 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 11:00:39.441343   25542 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0603 11:00:39.934932   25542 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/namespaces/kube-system/pods/etcd-ha-683480-m03
	I0603 11:00:39.934959   25542 round_trippers.go:469] Request Headers:
	I0603 11:00:39.934971   25542 round_trippers.go:473]     Accept: application/json, */*
	I0603 11:00:39.934977   25542 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 11:00:39.938788   25542 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 11:00:39.939577   25542 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/nodes/ha-683480-m03
	I0603 11:00:39.939597   25542 round_trippers.go:469] Request Headers:
	I0603 11:00:39.939607   25542 round_trippers.go:473]     Accept: application/json, */*
	I0603 11:00:39.939615   25542 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 11:00:39.942619   25542 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0603 11:00:40.434548   25542 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/namespaces/kube-system/pods/etcd-ha-683480-m03
	I0603 11:00:40.434569   25542 round_trippers.go:469] Request Headers:
	I0603 11:00:40.434577   25542 round_trippers.go:473]     Accept: application/json, */*
	I0603 11:00:40.434581   25542 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 11:00:40.439028   25542 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 11:00:40.440125   25542 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/nodes/ha-683480-m03
	I0603 11:00:40.440139   25542 round_trippers.go:469] Request Headers:
	I0603 11:00:40.440147   25542 round_trippers.go:473]     Accept: application/json, */*
	I0603 11:00:40.440153   25542 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 11:00:40.443208   25542 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 11:00:40.934180   25542 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/namespaces/kube-system/pods/etcd-ha-683480-m03
	I0603 11:00:40.934207   25542 round_trippers.go:469] Request Headers:
	I0603 11:00:40.934218   25542 round_trippers.go:473]     Accept: application/json, */*
	I0603 11:00:40.934222   25542 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 11:00:40.937101   25542 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0603 11:00:40.937978   25542 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/nodes/ha-683480-m03
	I0603 11:00:40.937994   25542 round_trippers.go:469] Request Headers:
	I0603 11:00:40.938001   25542 round_trippers.go:473]     Accept: application/json, */*
	I0603 11:00:40.938005   25542 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 11:00:40.940621   25542 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0603 11:00:40.941206   25542 pod_ready.go:102] pod "etcd-ha-683480-m03" in "kube-system" namespace has status "Ready":"False"
	I0603 11:00:41.434528   25542 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/namespaces/kube-system/pods/etcd-ha-683480-m03
	I0603 11:00:41.434550   25542 round_trippers.go:469] Request Headers:
	I0603 11:00:41.434556   25542 round_trippers.go:473]     Accept: application/json, */*
	I0603 11:00:41.434559   25542 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 11:00:41.437652   25542 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 11:00:41.438502   25542 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/nodes/ha-683480-m03
	I0603 11:00:41.438517   25542 round_trippers.go:469] Request Headers:
	I0603 11:00:41.438529   25542 round_trippers.go:473]     Accept: application/json, */*
	I0603 11:00:41.438534   25542 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 11:00:41.441369   25542 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0603 11:00:41.934309   25542 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/namespaces/kube-system/pods/etcd-ha-683480-m03
	I0603 11:00:41.934342   25542 round_trippers.go:469] Request Headers:
	I0603 11:00:41.934352   25542 round_trippers.go:473]     Accept: application/json, */*
	I0603 11:00:41.934359   25542 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 11:00:41.937851   25542 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 11:00:41.938455   25542 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/nodes/ha-683480-m03
	I0603 11:00:41.938469   25542 round_trippers.go:469] Request Headers:
	I0603 11:00:41.938477   25542 round_trippers.go:473]     Accept: application/json, */*
	I0603 11:00:41.938480   25542 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 11:00:41.941001   25542 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0603 11:00:42.435155   25542 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/namespaces/kube-system/pods/etcd-ha-683480-m03
	I0603 11:00:42.435176   25542 round_trippers.go:469] Request Headers:
	I0603 11:00:42.435183   25542 round_trippers.go:473]     Accept: application/json, */*
	I0603 11:00:42.435189   25542 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 11:00:42.438753   25542 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 11:00:42.439698   25542 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/nodes/ha-683480-m03
	I0603 11:00:42.439717   25542 round_trippers.go:469] Request Headers:
	I0603 11:00:42.439728   25542 round_trippers.go:473]     Accept: application/json, */*
	I0603 11:00:42.439733   25542 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 11:00:42.442809   25542 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 11:00:42.934832   25542 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/namespaces/kube-system/pods/etcd-ha-683480-m03
	I0603 11:00:42.934857   25542 round_trippers.go:469] Request Headers:
	I0603 11:00:42.934865   25542 round_trippers.go:473]     Accept: application/json, */*
	I0603 11:00:42.934870   25542 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 11:00:42.938157   25542 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 11:00:42.938799   25542 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/nodes/ha-683480-m03
	I0603 11:00:42.938815   25542 round_trippers.go:469] Request Headers:
	I0603 11:00:42.938822   25542 round_trippers.go:473]     Accept: application/json, */*
	I0603 11:00:42.938826   25542 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 11:00:42.941943   25542 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 11:00:42.942703   25542 pod_ready.go:102] pod "etcd-ha-683480-m03" in "kube-system" namespace has status "Ready":"False"
	I0603 11:00:43.435052   25542 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/namespaces/kube-system/pods/etcd-ha-683480-m03
	I0603 11:00:43.435085   25542 round_trippers.go:469] Request Headers:
	I0603 11:00:43.435094   25542 round_trippers.go:473]     Accept: application/json, */*
	I0603 11:00:43.435097   25542 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 11:00:43.438623   25542 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 11:00:43.439364   25542 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/nodes/ha-683480-m03
	I0603 11:00:43.439382   25542 round_trippers.go:469] Request Headers:
	I0603 11:00:43.439390   25542 round_trippers.go:473]     Accept: application/json, */*
	I0603 11:00:43.439395   25542 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 11:00:43.442042   25542 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0603 11:00:43.934720   25542 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/namespaces/kube-system/pods/etcd-ha-683480-m03
	I0603 11:00:43.934746   25542 round_trippers.go:469] Request Headers:
	I0603 11:00:43.934758   25542 round_trippers.go:473]     Accept: application/json, */*
	I0603 11:00:43.934764   25542 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 11:00:43.938369   25542 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 11:00:43.939300   25542 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/nodes/ha-683480-m03
	I0603 11:00:43.939321   25542 round_trippers.go:469] Request Headers:
	I0603 11:00:43.939332   25542 round_trippers.go:473]     Accept: application/json, */*
	I0603 11:00:43.939336   25542 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 11:00:43.942327   25542 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0603 11:00:43.942964   25542 pod_ready.go:92] pod "etcd-ha-683480-m03" in "kube-system" namespace has status "Ready":"True"
	I0603 11:00:43.942987   25542 pod_ready.go:81] duration metric: took 7.009042425s for pod "etcd-ha-683480-m03" in "kube-system" namespace to be "Ready" ...
	I0603 11:00:43.943008   25542 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-683480" in "kube-system" namespace to be "Ready" ...
	I0603 11:00:43.943098   25542 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-683480
	I0603 11:00:43.943116   25542 round_trippers.go:469] Request Headers:
	I0603 11:00:43.943125   25542 round_trippers.go:473]     Accept: application/json, */*
	I0603 11:00:43.943134   25542 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 11:00:43.946175   25542 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 11:00:43.946963   25542 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/nodes/ha-683480
	I0603 11:00:43.946980   25542 round_trippers.go:469] Request Headers:
	I0603 11:00:43.946991   25542 round_trippers.go:473]     Accept: application/json, */*
	I0603 11:00:43.946998   25542 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 11:00:43.949616   25542 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0603 11:00:43.950144   25542 pod_ready.go:92] pod "kube-apiserver-ha-683480" in "kube-system" namespace has status "Ready":"True"
	I0603 11:00:43.950164   25542 pod_ready.go:81] duration metric: took 7.145143ms for pod "kube-apiserver-ha-683480" in "kube-system" namespace to be "Ready" ...
	I0603 11:00:43.950177   25542 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-683480-m02" in "kube-system" namespace to be "Ready" ...
	I0603 11:00:43.950251   25542 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-683480-m02
	I0603 11:00:43.950263   25542 round_trippers.go:469] Request Headers:
	I0603 11:00:43.950272   25542 round_trippers.go:473]     Accept: application/json, */*
	I0603 11:00:43.950278   25542 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 11:00:43.953199   25542 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0603 11:00:43.953878   25542 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/nodes/ha-683480-m02
	I0603 11:00:43.953891   25542 round_trippers.go:469] Request Headers:
	I0603 11:00:43.953900   25542 round_trippers.go:473]     Accept: application/json, */*
	I0603 11:00:43.953903   25542 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 11:00:43.957194   25542 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 11:00:43.957884   25542 pod_ready.go:92] pod "kube-apiserver-ha-683480-m02" in "kube-system" namespace has status "Ready":"True"
	I0603 11:00:43.957904   25542 pod_ready.go:81] duration metric: took 7.719828ms for pod "kube-apiserver-ha-683480-m02" in "kube-system" namespace to be "Ready" ...
	I0603 11:00:43.957913   25542 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-683480-m03" in "kube-system" namespace to be "Ready" ...
	I0603 11:00:43.957959   25542 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-683480-m03
	I0603 11:00:43.957964   25542 round_trippers.go:469] Request Headers:
	I0603 11:00:43.957970   25542 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 11:00:43.957977   25542 round_trippers.go:473]     Accept: application/json, */*
	I0603 11:00:43.960651   25542 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0603 11:00:44.082513   25542 request.go:629] Waited for 121.256824ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.116:8443/api/v1/nodes/ha-683480-m03
	I0603 11:00:44.082568   25542 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/nodes/ha-683480-m03
	I0603 11:00:44.082573   25542 round_trippers.go:469] Request Headers:
	I0603 11:00:44.082581   25542 round_trippers.go:473]     Accept: application/json, */*
	I0603 11:00:44.082587   25542 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 11:00:44.087201   25542 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 11:00:44.087735   25542 pod_ready.go:92] pod "kube-apiserver-ha-683480-m03" in "kube-system" namespace has status "Ready":"True"
	I0603 11:00:44.087751   25542 pod_ready.go:81] duration metric: took 129.833053ms for pod "kube-apiserver-ha-683480-m03" in "kube-system" namespace to be "Ready" ...
	I0603 11:00:44.087762   25542 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-683480" in "kube-system" namespace to be "Ready" ...
	I0603 11:00:44.283295   25542 request.go:629] Waited for 195.455954ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.116:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-683480
	I0603 11:00:44.283359   25542 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-683480
	I0603 11:00:44.283366   25542 round_trippers.go:469] Request Headers:
	I0603 11:00:44.283374   25542 round_trippers.go:473]     Accept: application/json, */*
	I0603 11:00:44.283382   25542 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 11:00:44.286807   25542 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 11:00:44.482312   25542 request.go:629] Waited for 194.668894ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.116:8443/api/v1/nodes/ha-683480
	I0603 11:00:44.482357   25542 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/nodes/ha-683480
	I0603 11:00:44.482361   25542 round_trippers.go:469] Request Headers:
	I0603 11:00:44.482367   25542 round_trippers.go:473]     Accept: application/json, */*
	I0603 11:00:44.482370   25542 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 11:00:44.485359   25542 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0603 11:00:44.485909   25542 pod_ready.go:92] pod "kube-controller-manager-ha-683480" in "kube-system" namespace has status "Ready":"True"
	I0603 11:00:44.485925   25542 pod_ready.go:81] duration metric: took 398.155773ms for pod "kube-controller-manager-ha-683480" in "kube-system" namespace to be "Ready" ...
	I0603 11:00:44.485934   25542 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-683480-m02" in "kube-system" namespace to be "Ready" ...
	I0603 11:00:44.682553   25542 request.go:629] Waited for 196.533881ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.116:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-683480-m02
	I0603 11:00:44.682607   25542 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-683480-m02
	I0603 11:00:44.682611   25542 round_trippers.go:469] Request Headers:
	I0603 11:00:44.682619   25542 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 11:00:44.682626   25542 round_trippers.go:473]     Accept: application/json, */*
	I0603 11:00:44.686715   25542 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 11:00:44.882437   25542 request.go:629] Waited for 194.278267ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.116:8443/api/v1/nodes/ha-683480-m02
	I0603 11:00:44.882513   25542 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/nodes/ha-683480-m02
	I0603 11:00:44.882518   25542 round_trippers.go:469] Request Headers:
	I0603 11:00:44.882525   25542 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 11:00:44.882530   25542 round_trippers.go:473]     Accept: application/json, */*
	I0603 11:00:44.885825   25542 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 11:00:44.886595   25542 pod_ready.go:92] pod "kube-controller-manager-ha-683480-m02" in "kube-system" namespace has status "Ready":"True"
	I0603 11:00:44.886621   25542 pod_ready.go:81] duration metric: took 400.67823ms for pod "kube-controller-manager-ha-683480-m02" in "kube-system" namespace to be "Ready" ...
	I0603 11:00:44.886635   25542 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-683480-m03" in "kube-system" namespace to be "Ready" ...
	I0603 11:00:45.082728   25542 request.go:629] Waited for 196.028573ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.116:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-683480-m03
	I0603 11:00:45.082799   25542 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-683480-m03
	I0603 11:00:45.082805   25542 round_trippers.go:469] Request Headers:
	I0603 11:00:45.082812   25542 round_trippers.go:473]     Accept: application/json, */*
	I0603 11:00:45.082817   25542 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 11:00:45.087060   25542 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 11:00:45.282609   25542 request.go:629] Waited for 194.373002ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.116:8443/api/v1/nodes/ha-683480-m03
	I0603 11:00:45.282696   25542 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/nodes/ha-683480-m03
	I0603 11:00:45.282707   25542 round_trippers.go:469] Request Headers:
	I0603 11:00:45.282719   25542 round_trippers.go:473]     Accept: application/json, */*
	I0603 11:00:45.282730   25542 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 11:00:45.286486   25542 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 11:00:45.287062   25542 pod_ready.go:92] pod "kube-controller-manager-ha-683480-m03" in "kube-system" namespace has status "Ready":"True"
	I0603 11:00:45.287081   25542 pod_ready.go:81] duration metric: took 400.439226ms for pod "kube-controller-manager-ha-683480-m03" in "kube-system" namespace to be "Ready" ...
	I0603 11:00:45.287095   25542 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-4d9w5" in "kube-system" namespace to be "Ready" ...
	I0603 11:00:45.483111   25542 request.go:629] Waited for 195.940216ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.116:8443/api/v1/namespaces/kube-system/pods/kube-proxy-4d9w5
	I0603 11:00:45.483193   25542 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/namespaces/kube-system/pods/kube-proxy-4d9w5
	I0603 11:00:45.483200   25542 round_trippers.go:469] Request Headers:
	I0603 11:00:45.483211   25542 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 11:00:45.483215   25542 round_trippers.go:473]     Accept: application/json, */*
	I0603 11:00:45.486861   25542 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 11:00:45.683148   25542 request.go:629] Waited for 195.324662ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.116:8443/api/v1/nodes/ha-683480
	I0603 11:00:45.683212   25542 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/nodes/ha-683480
	I0603 11:00:45.683219   25542 round_trippers.go:469] Request Headers:
	I0603 11:00:45.683230   25542 round_trippers.go:473]     Accept: application/json, */*
	I0603 11:00:45.683244   25542 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 11:00:45.686898   25542 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 11:00:45.687647   25542 pod_ready.go:92] pod "kube-proxy-4d9w5" in "kube-system" namespace has status "Ready":"True"
	I0603 11:00:45.687668   25542 pod_ready.go:81] duration metric: took 400.565714ms for pod "kube-proxy-4d9w5" in "kube-system" namespace to be "Ready" ...
	I0603 11:00:45.687677   25542 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-q2xfn" in "kube-system" namespace to be "Ready" ...
	I0603 11:00:45.882795   25542 request.go:629] Waited for 195.058548ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.116:8443/api/v1/namespaces/kube-system/pods/kube-proxy-q2xfn
	I0603 11:00:45.882855   25542 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/namespaces/kube-system/pods/kube-proxy-q2xfn
	I0603 11:00:45.882873   25542 round_trippers.go:469] Request Headers:
	I0603 11:00:45.882883   25542 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 11:00:45.882887   25542 round_trippers.go:473]     Accept: application/json, */*
	I0603 11:00:45.885950   25542 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 11:00:46.083126   25542 request.go:629] Waited for 196.324598ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.116:8443/api/v1/nodes/ha-683480-m02
	I0603 11:00:46.083193   25542 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/nodes/ha-683480-m02
	I0603 11:00:46.083199   25542 round_trippers.go:469] Request Headers:
	I0603 11:00:46.083204   25542 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 11:00:46.083208   25542 round_trippers.go:473]     Accept: application/json, */*
	I0603 11:00:46.086502   25542 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 11:00:46.087224   25542 pod_ready.go:92] pod "kube-proxy-q2xfn" in "kube-system" namespace has status "Ready":"True"
	I0603 11:00:46.087245   25542 pod_ready.go:81] duration metric: took 399.561498ms for pod "kube-proxy-q2xfn" in "kube-system" namespace to be "Ready" ...
	I0603 11:00:46.087258   25542 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-txnhc" in "kube-system" namespace to be "Ready" ...
	I0603 11:00:46.283174   25542 request.go:629] Waited for 195.853901ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.116:8443/api/v1/namespaces/kube-system/pods/kube-proxy-txnhc
	I0603 11:00:46.283255   25542 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/namespaces/kube-system/pods/kube-proxy-txnhc
	I0603 11:00:46.283263   25542 round_trippers.go:469] Request Headers:
	I0603 11:00:46.283271   25542 round_trippers.go:473]     Accept: application/json, */*
	I0603 11:00:46.283274   25542 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 11:00:46.286562   25542 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 11:00:46.482604   25542 request.go:629] Waited for 195.36119ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.116:8443/api/v1/nodes/ha-683480-m03
	I0603 11:00:46.482661   25542 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/nodes/ha-683480-m03
	I0603 11:00:46.482666   25542 round_trippers.go:469] Request Headers:
	I0603 11:00:46.482673   25542 round_trippers.go:473]     Accept: application/json, */*
	I0603 11:00:46.482681   25542 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 11:00:46.486023   25542 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 11:00:46.486658   25542 pod_ready.go:92] pod "kube-proxy-txnhc" in "kube-system" namespace has status "Ready":"True"
	I0603 11:00:46.486673   25542 pod_ready.go:81] duration metric: took 399.409157ms for pod "kube-proxy-txnhc" in "kube-system" namespace to be "Ready" ...
	I0603 11:00:46.486683   25542 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-683480" in "kube-system" namespace to be "Ready" ...
	I0603 11:00:46.682914   25542 request.go:629] Waited for 196.156761ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.116:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-683480
	I0603 11:00:46.682965   25542 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-683480
	I0603 11:00:46.682970   25542 round_trippers.go:469] Request Headers:
	I0603 11:00:46.682977   25542 round_trippers.go:473]     Accept: application/json, */*
	I0603 11:00:46.682981   25542 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 11:00:46.686588   25542 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 11:00:46.882765   25542 request.go:629] Waited for 195.375308ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.116:8443/api/v1/nodes/ha-683480
	I0603 11:00:46.882881   25542 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/nodes/ha-683480
	I0603 11:00:46.882895   25542 round_trippers.go:469] Request Headers:
	I0603 11:00:46.882903   25542 round_trippers.go:473]     Accept: application/json, */*
	I0603 11:00:46.882906   25542 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 11:00:46.886303   25542 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 11:00:46.886864   25542 pod_ready.go:92] pod "kube-scheduler-ha-683480" in "kube-system" namespace has status "Ready":"True"
	I0603 11:00:46.886886   25542 pod_ready.go:81] duration metric: took 400.195281ms for pod "kube-scheduler-ha-683480" in "kube-system" namespace to be "Ready" ...
	I0603 11:00:46.886902   25542 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-683480-m02" in "kube-system" namespace to be "Ready" ...
	I0603 11:00:47.082609   25542 request.go:629] Waited for 195.622546ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.116:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-683480-m02
	I0603 11:00:47.082674   25542 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-683480-m02
	I0603 11:00:47.082680   25542 round_trippers.go:469] Request Headers:
	I0603 11:00:47.082687   25542 round_trippers.go:473]     Accept: application/json, */*
	I0603 11:00:47.082690   25542 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 11:00:47.086184   25542 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 11:00:47.282485   25542 request.go:629] Waited for 195.325073ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.116:8443/api/v1/nodes/ha-683480-m02
	I0603 11:00:47.282554   25542 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/nodes/ha-683480-m02
	I0603 11:00:47.282560   25542 round_trippers.go:469] Request Headers:
	I0603 11:00:47.282568   25542 round_trippers.go:473]     Accept: application/json, */*
	I0603 11:00:47.282572   25542 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 11:00:47.286225   25542 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 11:00:47.286653   25542 pod_ready.go:92] pod "kube-scheduler-ha-683480-m02" in "kube-system" namespace has status "Ready":"True"
	I0603 11:00:47.286669   25542 pod_ready.go:81] duration metric: took 399.759758ms for pod "kube-scheduler-ha-683480-m02" in "kube-system" namespace to be "Ready" ...
	I0603 11:00:47.286679   25542 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-683480-m03" in "kube-system" namespace to be "Ready" ...
	I0603 11:00:47.482793   25542 request.go:629] Waited for 196.036972ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.116:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-683480-m03
	I0603 11:00:47.482847   25542 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-683480-m03
	I0603 11:00:47.482852   25542 round_trippers.go:469] Request Headers:
	I0603 11:00:47.482864   25542 round_trippers.go:473]     Accept: application/json, */*
	I0603 11:00:47.482870   25542 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 11:00:47.486451   25542 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 11:00:47.682325   25542 request.go:629] Waited for 195.298985ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.116:8443/api/v1/nodes/ha-683480-m03
	I0603 11:00:47.682414   25542 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/nodes/ha-683480-m03
	I0603 11:00:47.682420   25542 round_trippers.go:469] Request Headers:
	I0603 11:00:47.682427   25542 round_trippers.go:473]     Accept: application/json, */*
	I0603 11:00:47.682432   25542 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 11:00:47.686692   25542 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 11:00:47.687709   25542 pod_ready.go:92] pod "kube-scheduler-ha-683480-m03" in "kube-system" namespace has status "Ready":"True"
	I0603 11:00:47.687731   25542 pod_ready.go:81] duration metric: took 401.045776ms for pod "kube-scheduler-ha-683480-m03" in "kube-system" namespace to be "Ready" ...
	I0603 11:00:47.687742   25542 pod_ready.go:38] duration metric: took 10.801605649s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0603 11:00:47.687769   25542 api_server.go:52] waiting for apiserver process to appear ...
	I0603 11:00:47.687830   25542 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 11:00:47.706806   25542 api_server.go:72] duration metric: took 18.119656039s to wait for apiserver process to appear ...
	I0603 11:00:47.706833   25542 api_server.go:88] waiting for apiserver healthz status ...
	I0603 11:00:47.706854   25542 api_server.go:253] Checking apiserver healthz at https://192.168.39.116:8443/healthz ...
	I0603 11:00:47.714253   25542 api_server.go:279] https://192.168.39.116:8443/healthz returned 200:
	ok
	I0603 11:00:47.714339   25542 round_trippers.go:463] GET https://192.168.39.116:8443/version
	I0603 11:00:47.714351   25542 round_trippers.go:469] Request Headers:
	I0603 11:00:47.714362   25542 round_trippers.go:473]     Accept: application/json, */*
	I0603 11:00:47.714370   25542 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 11:00:47.715321   25542 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0603 11:00:47.715374   25542 api_server.go:141] control plane version: v1.30.1
	I0603 11:00:47.715387   25542 api_server.go:131] duration metric: took 8.548831ms to wait for apiserver health ...
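The healthz probe and the follow-up GET /version above can be reproduced with client-go's discovery client. A minimal sketch under the same placeholder-kubeconfig assumption, not the code minikube runs:

package main

import (
	"context"
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder path
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// GET /healthz returns the plain-text body "ok" when the apiserver is healthy.
	body, err := client.Discovery().RESTClient().Get().AbsPath("/healthz").DoRaw(context.TODO())
	if err != nil {
		panic(err)
	}
	fmt.Printf("healthz: %s\n", body)

	// GET /version reports the control plane version (v1.30.1 in the log above).
	ver, err := client.Discovery().ServerVersion()
	if err != nil {
		panic(err)
	}
	fmt.Println("control plane version:", ver.GitVersion)
}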
	I0603 11:00:47.715397   25542 system_pods.go:43] waiting for kube-system pods to appear ...
	I0603 11:00:47.882790   25542 request.go:629] Waited for 167.329748ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.116:8443/api/v1/namespaces/kube-system/pods
	I0603 11:00:47.882876   25542 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/namespaces/kube-system/pods
	I0603 11:00:47.882887   25542 round_trippers.go:469] Request Headers:
	I0603 11:00:47.882897   25542 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 11:00:47.882904   25542 round_trippers.go:473]     Accept: application/json, */*
	I0603 11:00:47.891445   25542 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0603 11:00:47.897563   25542 system_pods.go:59] 24 kube-system pods found
	I0603 11:00:47.897602   25542 system_pods.go:61] "coredns-7db6d8ff4d-8tqf9" [8eab910a-98ed-43db-ac16-d53beb6b7ee4] Running
	I0603 11:00:47.897607   25542 system_pods.go:61] "coredns-7db6d8ff4d-nff86" [02320e91-17ab-4120-b8b9-dcc08234f180] Running
	I0603 11:00:47.897610   25542 system_pods.go:61] "etcd-ha-683480" [b0a866b1-e56e-4c99-90d1-b96b08dc814f] Running
	I0603 11:00:47.897614   25542 system_pods.go:61] "etcd-ha-683480-m02" [ae0c631b-f1b7-4f97-a112-82115e2e3a26] Running
	I0603 11:00:47.897616   25542 system_pods.go:61] "etcd-ha-683480-m03" [b508988f-4dad-4a28-89b7-b6c38e27626f] Running
	I0603 11:00:47.897619   25542 system_pods.go:61] "kindnet-t6fxj" [a1edfc5d-477d-40ed-8702-4916d1e9fcb1] Running
	I0603 11:00:47.897622   25542 system_pods.go:61] "kindnet-zsfhr" [ecb7fc1b-cc53-4b58-8e55-9269608f217f] Running
	I0603 11:00:47.897625   25542 system_pods.go:61] "kindnet-zxhbp" [320e315b-e189-4358-9e56-a4be7d944fae] Running
	I0603 11:00:47.897627   25542 system_pods.go:61] "kube-apiserver-ha-683480" [383ca38e-6dea-45d2-8874-f8f7478b889d] Running
	I0603 11:00:47.897630   25542 system_pods.go:61] "kube-apiserver-ha-683480-m02" [b1fadbf7-5046-4762-928e-d0a86b2c333a] Running
	I0603 11:00:47.897633   25542 system_pods.go:61] "kube-apiserver-ha-683480-m03" [063e6cb5-7f5f-4fa0-a54d-dff4303574da] Running
	I0603 11:00:47.897636   25542 system_pods.go:61] "kube-controller-manager-ha-683480" [3ba095b7-0e4d-41b9-af2d-12d4ce4ae004] Running
	I0603 11:00:47.897639   25542 system_pods.go:61] "kube-controller-manager-ha-683480-m02" [fe54bb1f-7320-40dd-a8a9-f7d1c5d793fe] Running
	I0603 11:00:47.897643   25542 system_pods.go:61] "kube-controller-manager-ha-683480-m03" [6819bdcb-5dd4-43c8-a9c7-d6970609be77] Running
	I0603 11:00:47.897646   25542 system_pods.go:61] "kube-proxy-4d9w5" [708e060d-115a-4b74-bc66-138d62796b50] Running
	I0603 11:00:47.897649   25542 system_pods.go:61] "kube-proxy-q2xfn" [af8c691a-3316-4e6d-8feb-b306d6d5d2f1] Running
	I0603 11:00:47.897651   25542 system_pods.go:61] "kube-proxy-txnhc" [f8fbdd89-d160-4342-94ca-9e049b0e96a8] Running
	I0603 11:00:47.897654   25542 system_pods.go:61] "kube-scheduler-ha-683480" [c57edb18-cdff-4548-acc4-1abbbd906fc5] Running
	I0603 11:00:47.897658   25542 system_pods.go:61] "kube-scheduler-ha-683480-m02" [ce81b254-4edc-425a-8489-14c71f56d7de] Running
	I0603 11:00:47.897660   25542 system_pods.go:61] "kube-scheduler-ha-683480-m03" [be6a6382-a11b-425f-a0bf-551d1254d60a] Running
	I0603 11:00:47.897663   25542 system_pods.go:61] "kube-vip-ha-683480" [aa6a05c5-446e-4179-be45-0f8d33631c89] Running
	I0603 11:00:47.897666   25542 system_pods.go:61] "kube-vip-ha-683480-m02" [5679c930-02ab-4784-8bf1-7e477719a5a6] Running
	I0603 11:00:47.897669   25542 system_pods.go:61] "kube-vip-ha-683480-m03" [b47cab7c-1c30-4828-a351-699fe4935533] Running
	I0603 11:00:47.897680   25542 system_pods.go:61] "storage-provisioner" [a410a98d-73a7-434b-88ce-575c300b2807] Running
	I0603 11:00:47.897685   25542 system_pods.go:74] duration metric: took 182.283499ms to wait for pod list to return data ...
	I0603 11:00:47.897695   25542 default_sa.go:34] waiting for default service account to be created ...
	I0603 11:00:48.083136   25542 request.go:629] Waited for 185.349975ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.116:8443/api/v1/namespaces/default/serviceaccounts
	I0603 11:00:48.083200   25542 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/namespaces/default/serviceaccounts
	I0603 11:00:48.083208   25542 round_trippers.go:469] Request Headers:
	I0603 11:00:48.083218   25542 round_trippers.go:473]     Accept: application/json, */*
	I0603 11:00:48.083226   25542 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 11:00:48.088385   25542 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0603 11:00:48.088529   25542 default_sa.go:45] found service account: "default"
	I0603 11:00:48.088547   25542 default_sa.go:55] duration metric: took 190.845833ms for default service account to be created ...
	I0603 11:00:48.088555   25542 system_pods.go:116] waiting for k8s-apps to be running ...
	I0603 11:00:48.282975   25542 request.go:629] Waited for 194.346284ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.116:8443/api/v1/namespaces/kube-system/pods
	I0603 11:00:48.283061   25542 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/namespaces/kube-system/pods
	I0603 11:00:48.283070   25542 round_trippers.go:469] Request Headers:
	I0603 11:00:48.283089   25542 round_trippers.go:473]     Accept: application/json, */*
	I0603 11:00:48.283098   25542 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 11:00:48.289555   25542 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0603 11:00:48.297090   25542 system_pods.go:86] 24 kube-system pods found
	I0603 11:00:48.297125   25542 system_pods.go:89] "coredns-7db6d8ff4d-8tqf9" [8eab910a-98ed-43db-ac16-d53beb6b7ee4] Running
	I0603 11:00:48.297133   25542 system_pods.go:89] "coredns-7db6d8ff4d-nff86" [02320e91-17ab-4120-b8b9-dcc08234f180] Running
	I0603 11:00:48.297139   25542 system_pods.go:89] "etcd-ha-683480" [b0a866b1-e56e-4c99-90d1-b96b08dc814f] Running
	I0603 11:00:48.297145   25542 system_pods.go:89] "etcd-ha-683480-m02" [ae0c631b-f1b7-4f97-a112-82115e2e3a26] Running
	I0603 11:00:48.297151   25542 system_pods.go:89] "etcd-ha-683480-m03" [b508988f-4dad-4a28-89b7-b6c38e27626f] Running
	I0603 11:00:48.297157   25542 system_pods.go:89] "kindnet-t6fxj" [a1edfc5d-477d-40ed-8702-4916d1e9fcb1] Running
	I0603 11:00:48.297163   25542 system_pods.go:89] "kindnet-zsfhr" [ecb7fc1b-cc53-4b58-8e55-9269608f217f] Running
	I0603 11:00:48.297170   25542 system_pods.go:89] "kindnet-zxhbp" [320e315b-e189-4358-9e56-a4be7d944fae] Running
	I0603 11:00:48.297180   25542 system_pods.go:89] "kube-apiserver-ha-683480" [383ca38e-6dea-45d2-8874-f8f7478b889d] Running
	I0603 11:00:48.297189   25542 system_pods.go:89] "kube-apiserver-ha-683480-m02" [b1fadbf7-5046-4762-928e-d0a86b2c333a] Running
	I0603 11:00:48.297199   25542 system_pods.go:89] "kube-apiserver-ha-683480-m03" [063e6cb5-7f5f-4fa0-a54d-dff4303574da] Running
	I0603 11:00:48.297210   25542 system_pods.go:89] "kube-controller-manager-ha-683480" [3ba095b7-0e4d-41b9-af2d-12d4ce4ae004] Running
	I0603 11:00:48.297220   25542 system_pods.go:89] "kube-controller-manager-ha-683480-m02" [fe54bb1f-7320-40dd-a8a9-f7d1c5d793fe] Running
	I0603 11:00:48.297229   25542 system_pods.go:89] "kube-controller-manager-ha-683480-m03" [6819bdcb-5dd4-43c8-a9c7-d6970609be77] Running
	I0603 11:00:48.297236   25542 system_pods.go:89] "kube-proxy-4d9w5" [708e060d-115a-4b74-bc66-138d62796b50] Running
	I0603 11:00:48.297243   25542 system_pods.go:89] "kube-proxy-q2xfn" [af8c691a-3316-4e6d-8feb-b306d6d5d2f1] Running
	I0603 11:00:48.297253   25542 system_pods.go:89] "kube-proxy-txnhc" [f8fbdd89-d160-4342-94ca-9e049b0e96a8] Running
	I0603 11:00:48.297262   25542 system_pods.go:89] "kube-scheduler-ha-683480" [c57edb18-cdff-4548-acc4-1abbbd906fc5] Running
	I0603 11:00:48.297272   25542 system_pods.go:89] "kube-scheduler-ha-683480-m02" [ce81b254-4edc-425a-8489-14c71f56d7de] Running
	I0603 11:00:48.297279   25542 system_pods.go:89] "kube-scheduler-ha-683480-m03" [be6a6382-a11b-425f-a0bf-551d1254d60a] Running
	I0603 11:00:48.297288   25542 system_pods.go:89] "kube-vip-ha-683480" [aa6a05c5-446e-4179-be45-0f8d33631c89] Running
	I0603 11:00:48.297294   25542 system_pods.go:89] "kube-vip-ha-683480-m02" [5679c930-02ab-4784-8bf1-7e477719a5a6] Running
	I0603 11:00:48.297303   25542 system_pods.go:89] "kube-vip-ha-683480-m03" [b47cab7c-1c30-4828-a351-699fe4935533] Running
	I0603 11:00:48.297310   25542 system_pods.go:89] "storage-provisioner" [a410a98d-73a7-434b-88ce-575c300b2807] Running
	I0603 11:00:48.297321   25542 system_pods.go:126] duration metric: took 208.759907ms to wait for k8s-apps to be running ...
	I0603 11:00:48.297335   25542 system_svc.go:44] waiting for kubelet service to be running ....
	I0603 11:00:48.297388   25542 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0603 11:00:48.313772   25542 system_svc.go:56] duration metric: took 16.427175ms WaitForService to wait for kubelet
	I0603 11:00:48.313806   25542 kubeadm.go:576] duration metric: took 18.72665881s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0603 11:00:48.313838   25542 node_conditions.go:102] verifying NodePressure condition ...
	I0603 11:00:48.483252   25542 request.go:629] Waited for 169.35007ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.116:8443/api/v1/nodes
	I0603 11:00:48.483313   25542 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/nodes
	I0603 11:00:48.483320   25542 round_trippers.go:469] Request Headers:
	I0603 11:00:48.483329   25542 round_trippers.go:473]     Accept: application/json, */*
	I0603 11:00:48.483335   25542 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 11:00:48.487622   25542 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 11:00:48.488815   25542 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0603 11:00:48.488836   25542 node_conditions.go:123] node cpu capacity is 2
	I0603 11:00:48.488854   25542 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0603 11:00:48.488858   25542 node_conditions.go:123] node cpu capacity is 2
	I0603 11:00:48.488861   25542 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0603 11:00:48.488864   25542 node_conditions.go:123] node cpu capacity is 2
	I0603 11:00:48.488868   25542 node_conditions.go:105] duration metric: took 175.026386ms to run NodePressure ...
	I0603 11:00:48.488878   25542 start.go:240] waiting for startup goroutines ...
	I0603 11:00:48.488898   25542 start.go:254] writing updated cluster config ...
	I0603 11:00:48.489166   25542 ssh_runner.go:195] Run: rm -f paused
	I0603 11:00:48.541365   25542 start.go:600] kubectl: 1.30.1, cluster: 1.30.1 (minor skew: 0)
	I0603 11:00:48.543638   25542 out.go:177] * Done! kubectl is now configured to use "ha-683480" cluster and "default" namespace by default
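	For reference, the readiness checks recorded above (kubelet unit active, default service account present, kube-system pods running, node CPU/ephemeral-storage capacity) can be repeated by hand against the same profile. The commands below are a minimal sketch, assuming the ha-683480 context from this log is still configured locally; they are illustrative only and were not run as part of the test. The "Waited ... due to client-side throttling" lines are client-go's default client-side rate limiter, not API priority and fairness, as the message itself notes.

	  # Verify the kubelet unit inside the guest (mirrors the systemctl check in the log)
	  minikube -p ha-683480 ssh -- sudo systemctl is-active kubelet

	  # Confirm the default service account exists
	  kubectl --context ha-683480 -n default get serviceaccount default

	  # List kube-system pods and node capacity (CPU, ephemeral storage)
	  kubectl --context ha-683480 -n kube-system get pods
	  kubectl --context ha-683480 get nodes \
	    -o custom-columns=NAME:.metadata.name,CPU:.status.capacity.cpu,EPHEMERAL:.status.capacity.ephemeral-storage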
	
	
	==> CRI-O <==
	Jun 03 11:04:15 ha-683480 crio[677]: time="2024-06-03 11:04:15.473134625Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1717412655473110585,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154742,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=a582e523-7275-451d-9a03-577390105ac4 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 03 11:04:15 ha-683480 crio[677]: time="2024-06-03 11:04:15.473715057Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=bf41e54b-6704-45ad-ae55-35ba1c9d1c4a name=/runtime.v1.RuntimeService/ListContainers
	Jun 03 11:04:15 ha-683480 crio[677]: time="2024-06-03 11:04:15.473784612Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=bf41e54b-6704-45ad-ae55-35ba1c9d1c4a name=/runtime.v1.RuntimeService/ListContainers
	Jun 03 11:04:15 ha-683480 crio[677]: time="2024-06-03 11:04:15.474061280Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:348419ceaffc348fe3779838e8b27e8baa3aa566be3f4c329aea8b701917349c,PodSandboxId:d32d79da82b93361a47376b8d8beec88e0c5d9097ed7a7450c63de0ee96d230f,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1717412452793821524,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-mvpcm,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: fe7a8238-754b-43ce-8080-48e39c548383,},Annotations:map[string]string{io.kubernetes.container.hash: 17542a28,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b5e9b65b02107aa343d9bd2938c82d12641166c15c0364265fb74b1a00b58a60,PodSandboxId:b1b8dc93262494d7c16fb61879ea3220c5decc3e129bda003d03246037cb82a0,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1717412239593591044,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a410a98d-73a7-434b-88ce-575c300b2807,},Annotations:map[string]string{io.kubernetes.container.hash: c0c86aa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.term
inationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fdbecc258023e10eac66da5599945eae2f7f8735769b825a69aea8b2effce668,PodSandboxId:62bef471ea4a403424478ea00a89f4311f3d11aea1fc0301abe18ddf44455091,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1717412239551891220,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-8tqf9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8eab910a-98ed-43db-ac16-d53beb6b7ee4,},Annotations:map[string]string{io.kubernetes.container.hash: 38c633a6,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"nam
e\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aa5e3aca86502907c8d16e6a2327b8f4298b6076617819ceed2b250ae9b24fe8,PodSandboxId:41da25dac8c4818183c067f43713ee94cebef64eab1ffb890510822bc9712a41,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1717412239525725221,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-nff86,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 02320e91-17a
b-4120-b8b9-dcc08234f180,},Annotations:map[string]string{io.kubernetes.container.hash: 9dfd16ce,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:995fa288cd9162aa7fa350ae7a02800593a524c7300a6fa984b62ba4b928891b,PodSandboxId:e2f8a60370d3fd1695a709fe26efc9665a764a8ede97163357b9c15c4cb5fb32,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:2b34f64609858041e706963bcd73273c087360ca240f1f9b37db6f148edb1266,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CON
TAINER_RUNNING,CreatedAt:1717412238014847308,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-zxhbp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 320e315b-e189-4358-9e56-a4be7d944fae,},Annotations:map[string]string{io.kubernetes.container.hash: ae8d6a68,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bcb102231e3a6bc3ea0cc39665baaebb0a97c42874b6cd34e86c04e87532df4f,PodSandboxId:6812552c2a4ab53e39123a83312dfad25c506cf5157864aa7732c91d6b7eebf2,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:1717412233
855123394,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4d9w5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 708e060d-115a-4b74-bc66-138d62796b50,},Annotations:map[string]string{io.kubernetes.container.hash: 4749de6d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2542929b8eaa1ecd8c858dbb7e4812ddb5121109c3c92127fa7eaae86849ebda,PodSandboxId:8990a20edbd369db84d6c96fcb753c487186298a6ec2e2e0c0fe3ce761ef55b8,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:171741221681
6707000,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-683480,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dfc66acc1754150cf4e24f38d1b191d9,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c282307764128f62fdee736d5e1ecddfbca0ae7ae2f78b7a78cbdb2dcede8556,PodSandboxId:860a510241592c9daa1fd1d8b28ba6314d6102372dd3005ee2f1fc332eaa5fbb,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1717412213949208807,Labels:map[string]string{
io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-683480,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e56ceee947d891ba1cd0986590072af7,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3e27550ee88e8dcb6316daece49f9840028efa3091db03e5549e1e3dbbd8ad59,PodSandboxId:a55199d2713b2227114c24c6ea32028395b589674496612fac0e499dc8774213,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1717412213989413462,Labels:map[string]string{io.kubernete
s.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-683480,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 003e33f91c92b780f1d2cb57410c03e9,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:09fff5459f24c748a0e085f496bf2b65db572d97be0afe906f05511398bdb0ad,PodSandboxId:86b1d4bcd541d31a17ad320bdd376b8fc84deff2fe6e38053aa471139f753d0e,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1717412213926343118,Labels:map[string]string{io.kubernetes.container.n
ame: etcd,io.kubernetes.pod.name: etcd-ha-683480,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 36cfff3e1576ec0ef9aa4746d32a32e3,},Annotations:map[string]string{io.kubernetes.container.hash: 40554970,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:200682c1dc43f01036807986e0c3bfe0b422726ec352be0df5e42fa79426ed79,PodSandboxId:117e05a9216ba0cb39b45fc899065d9fbba904cb50146ebbd3a11d129c956829,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1717412213885854963,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.nam
e: kube-apiserver-ha-683480,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b448fd1c84d729fa6b033c44220aea0b,},Annotations:map[string]string{io.kubernetes.container.hash: 25a67648,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=bf41e54b-6704-45ad-ae55-35ba1c9d1c4a name=/runtime.v1.RuntimeService/ListContainers
	Jun 03 11:04:15 ha-683480 crio[677]: time="2024-06-03 11:04:15.512763094Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=81e9228b-39bb-48f0-923b-b3fa65285a15 name=/runtime.v1.RuntimeService/Version
	Jun 03 11:04:15 ha-683480 crio[677]: time="2024-06-03 11:04:15.512923934Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=81e9228b-39bb-48f0-923b-b3fa65285a15 name=/runtime.v1.RuntimeService/Version
	Jun 03 11:04:15 ha-683480 crio[677]: time="2024-06-03 11:04:15.514072325Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=e1df3091-e965-4f5d-99d1-4bb4b0810ed4 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 03 11:04:15 ha-683480 crio[677]: time="2024-06-03 11:04:15.514515950Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1717412655514494362,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154742,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=e1df3091-e965-4f5d-99d1-4bb4b0810ed4 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 03 11:04:15 ha-683480 crio[677]: time="2024-06-03 11:04:15.515073071Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=1bd893a0-242a-46b3-9c8e-991659ffd3a8 name=/runtime.v1.RuntimeService/ListContainers
	Jun 03 11:04:15 ha-683480 crio[677]: time="2024-06-03 11:04:15.515150455Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=1bd893a0-242a-46b3-9c8e-991659ffd3a8 name=/runtime.v1.RuntimeService/ListContainers
	Jun 03 11:04:15 ha-683480 crio[677]: time="2024-06-03 11:04:15.515403166Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:348419ceaffc348fe3779838e8b27e8baa3aa566be3f4c329aea8b701917349c,PodSandboxId:d32d79da82b93361a47376b8d8beec88e0c5d9097ed7a7450c63de0ee96d230f,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1717412452793821524,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-mvpcm,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: fe7a8238-754b-43ce-8080-48e39c548383,},Annotations:map[string]string{io.kubernetes.container.hash: 17542a28,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b5e9b65b02107aa343d9bd2938c82d12641166c15c0364265fb74b1a00b58a60,PodSandboxId:b1b8dc93262494d7c16fb61879ea3220c5decc3e129bda003d03246037cb82a0,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1717412239593591044,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a410a98d-73a7-434b-88ce-575c300b2807,},Annotations:map[string]string{io.kubernetes.container.hash: c0c86aa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.term
inationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fdbecc258023e10eac66da5599945eae2f7f8735769b825a69aea8b2effce668,PodSandboxId:62bef471ea4a403424478ea00a89f4311f3d11aea1fc0301abe18ddf44455091,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1717412239551891220,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-8tqf9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8eab910a-98ed-43db-ac16-d53beb6b7ee4,},Annotations:map[string]string{io.kubernetes.container.hash: 38c633a6,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"nam
e\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aa5e3aca86502907c8d16e6a2327b8f4298b6076617819ceed2b250ae9b24fe8,PodSandboxId:41da25dac8c4818183c067f43713ee94cebef64eab1ffb890510822bc9712a41,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1717412239525725221,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-nff86,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 02320e91-17a
b-4120-b8b9-dcc08234f180,},Annotations:map[string]string{io.kubernetes.container.hash: 9dfd16ce,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:995fa288cd9162aa7fa350ae7a02800593a524c7300a6fa984b62ba4b928891b,PodSandboxId:e2f8a60370d3fd1695a709fe26efc9665a764a8ede97163357b9c15c4cb5fb32,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:2b34f64609858041e706963bcd73273c087360ca240f1f9b37db6f148edb1266,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CON
TAINER_RUNNING,CreatedAt:1717412238014847308,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-zxhbp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 320e315b-e189-4358-9e56-a4be7d944fae,},Annotations:map[string]string{io.kubernetes.container.hash: ae8d6a68,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bcb102231e3a6bc3ea0cc39665baaebb0a97c42874b6cd34e86c04e87532df4f,PodSandboxId:6812552c2a4ab53e39123a83312dfad25c506cf5157864aa7732c91d6b7eebf2,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:1717412233
855123394,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4d9w5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 708e060d-115a-4b74-bc66-138d62796b50,},Annotations:map[string]string{io.kubernetes.container.hash: 4749de6d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2542929b8eaa1ecd8c858dbb7e4812ddb5121109c3c92127fa7eaae86849ebda,PodSandboxId:8990a20edbd369db84d6c96fcb753c487186298a6ec2e2e0c0fe3ce761ef55b8,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:171741221681
6707000,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-683480,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dfc66acc1754150cf4e24f38d1b191d9,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c282307764128f62fdee736d5e1ecddfbca0ae7ae2f78b7a78cbdb2dcede8556,PodSandboxId:860a510241592c9daa1fd1d8b28ba6314d6102372dd3005ee2f1fc332eaa5fbb,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1717412213949208807,Labels:map[string]string{
io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-683480,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e56ceee947d891ba1cd0986590072af7,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3e27550ee88e8dcb6316daece49f9840028efa3091db03e5549e1e3dbbd8ad59,PodSandboxId:a55199d2713b2227114c24c6ea32028395b589674496612fac0e499dc8774213,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1717412213989413462,Labels:map[string]string{io.kubernete
s.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-683480,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 003e33f91c92b780f1d2cb57410c03e9,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:09fff5459f24c748a0e085f496bf2b65db572d97be0afe906f05511398bdb0ad,PodSandboxId:86b1d4bcd541d31a17ad320bdd376b8fc84deff2fe6e38053aa471139f753d0e,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1717412213926343118,Labels:map[string]string{io.kubernetes.container.n
ame: etcd,io.kubernetes.pod.name: etcd-ha-683480,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 36cfff3e1576ec0ef9aa4746d32a32e3,},Annotations:map[string]string{io.kubernetes.container.hash: 40554970,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:200682c1dc43f01036807986e0c3bfe0b422726ec352be0df5e42fa79426ed79,PodSandboxId:117e05a9216ba0cb39b45fc899065d9fbba904cb50146ebbd3a11d129c956829,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1717412213885854963,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.nam
e: kube-apiserver-ha-683480,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b448fd1c84d729fa6b033c44220aea0b,},Annotations:map[string]string{io.kubernetes.container.hash: 25a67648,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=1bd893a0-242a-46b3-9c8e-991659ffd3a8 name=/runtime.v1.RuntimeService/ListContainers
	Jun 03 11:04:15 ha-683480 crio[677]: time="2024-06-03 11:04:15.557464947Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=bbdcfff4-ec0f-4892-acd9-822f6f341715 name=/runtime.v1.RuntimeService/Version
	Jun 03 11:04:15 ha-683480 crio[677]: time="2024-06-03 11:04:15.557535852Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=bbdcfff4-ec0f-4892-acd9-822f6f341715 name=/runtime.v1.RuntimeService/Version
	Jun 03 11:04:15 ha-683480 crio[677]: time="2024-06-03 11:04:15.558681690Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=1ab539a4-7495-4e5e-9c0e-61161fd9c7fe name=/runtime.v1.ImageService/ImageFsInfo
	Jun 03 11:04:15 ha-683480 crio[677]: time="2024-06-03 11:04:15.559277170Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1717412655559246201,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154742,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=1ab539a4-7495-4e5e-9c0e-61161fd9c7fe name=/runtime.v1.ImageService/ImageFsInfo
	Jun 03 11:04:15 ha-683480 crio[677]: time="2024-06-03 11:04:15.559856661Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=24a1a2b1-0ca7-4bd3-8739-b3731708ada7 name=/runtime.v1.RuntimeService/ListContainers
	Jun 03 11:04:15 ha-683480 crio[677]: time="2024-06-03 11:04:15.559911046Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=24a1a2b1-0ca7-4bd3-8739-b3731708ada7 name=/runtime.v1.RuntimeService/ListContainers
	Jun 03 11:04:15 ha-683480 crio[677]: time="2024-06-03 11:04:15.560374407Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:348419ceaffc348fe3779838e8b27e8baa3aa566be3f4c329aea8b701917349c,PodSandboxId:d32d79da82b93361a47376b8d8beec88e0c5d9097ed7a7450c63de0ee96d230f,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1717412452793821524,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-mvpcm,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: fe7a8238-754b-43ce-8080-48e39c548383,},Annotations:map[string]string{io.kubernetes.container.hash: 17542a28,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b5e9b65b02107aa343d9bd2938c82d12641166c15c0364265fb74b1a00b58a60,PodSandboxId:b1b8dc93262494d7c16fb61879ea3220c5decc3e129bda003d03246037cb82a0,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1717412239593591044,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a410a98d-73a7-434b-88ce-575c300b2807,},Annotations:map[string]string{io.kubernetes.container.hash: c0c86aa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.term
inationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fdbecc258023e10eac66da5599945eae2f7f8735769b825a69aea8b2effce668,PodSandboxId:62bef471ea4a403424478ea00a89f4311f3d11aea1fc0301abe18ddf44455091,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1717412239551891220,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-8tqf9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8eab910a-98ed-43db-ac16-d53beb6b7ee4,},Annotations:map[string]string{io.kubernetes.container.hash: 38c633a6,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"nam
e\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aa5e3aca86502907c8d16e6a2327b8f4298b6076617819ceed2b250ae9b24fe8,PodSandboxId:41da25dac8c4818183c067f43713ee94cebef64eab1ffb890510822bc9712a41,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1717412239525725221,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-nff86,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 02320e91-17a
b-4120-b8b9-dcc08234f180,},Annotations:map[string]string{io.kubernetes.container.hash: 9dfd16ce,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:995fa288cd9162aa7fa350ae7a02800593a524c7300a6fa984b62ba4b928891b,PodSandboxId:e2f8a60370d3fd1695a709fe26efc9665a764a8ede97163357b9c15c4cb5fb32,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:2b34f64609858041e706963bcd73273c087360ca240f1f9b37db6f148edb1266,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CON
TAINER_RUNNING,CreatedAt:1717412238014847308,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-zxhbp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 320e315b-e189-4358-9e56-a4be7d944fae,},Annotations:map[string]string{io.kubernetes.container.hash: ae8d6a68,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bcb102231e3a6bc3ea0cc39665baaebb0a97c42874b6cd34e86c04e87532df4f,PodSandboxId:6812552c2a4ab53e39123a83312dfad25c506cf5157864aa7732c91d6b7eebf2,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:1717412233
855123394,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4d9w5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 708e060d-115a-4b74-bc66-138d62796b50,},Annotations:map[string]string{io.kubernetes.container.hash: 4749de6d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2542929b8eaa1ecd8c858dbb7e4812ddb5121109c3c92127fa7eaae86849ebda,PodSandboxId:8990a20edbd369db84d6c96fcb753c487186298a6ec2e2e0c0fe3ce761ef55b8,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:171741221681
6707000,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-683480,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dfc66acc1754150cf4e24f38d1b191d9,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c282307764128f62fdee736d5e1ecddfbca0ae7ae2f78b7a78cbdb2dcede8556,PodSandboxId:860a510241592c9daa1fd1d8b28ba6314d6102372dd3005ee2f1fc332eaa5fbb,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1717412213949208807,Labels:map[string]string{
io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-683480,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e56ceee947d891ba1cd0986590072af7,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3e27550ee88e8dcb6316daece49f9840028efa3091db03e5549e1e3dbbd8ad59,PodSandboxId:a55199d2713b2227114c24c6ea32028395b589674496612fac0e499dc8774213,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1717412213989413462,Labels:map[string]string{io.kubernete
s.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-683480,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 003e33f91c92b780f1d2cb57410c03e9,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:09fff5459f24c748a0e085f496bf2b65db572d97be0afe906f05511398bdb0ad,PodSandboxId:86b1d4bcd541d31a17ad320bdd376b8fc84deff2fe6e38053aa471139f753d0e,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1717412213926343118,Labels:map[string]string{io.kubernetes.container.n
ame: etcd,io.kubernetes.pod.name: etcd-ha-683480,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 36cfff3e1576ec0ef9aa4746d32a32e3,},Annotations:map[string]string{io.kubernetes.container.hash: 40554970,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:200682c1dc43f01036807986e0c3bfe0b422726ec352be0df5e42fa79426ed79,PodSandboxId:117e05a9216ba0cb39b45fc899065d9fbba904cb50146ebbd3a11d129c956829,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1717412213885854963,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.nam
e: kube-apiserver-ha-683480,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b448fd1c84d729fa6b033c44220aea0b,},Annotations:map[string]string{io.kubernetes.container.hash: 25a67648,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=24a1a2b1-0ca7-4bd3-8739-b3731708ada7 name=/runtime.v1.RuntimeService/ListContainers
	Jun 03 11:04:15 ha-683480 crio[677]: time="2024-06-03 11:04:15.610933927Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=c48d36b4-3365-4a56-b399-82ef01639c8b name=/runtime.v1.RuntimeService/Version
	Jun 03 11:04:15 ha-683480 crio[677]: time="2024-06-03 11:04:15.611091228Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=c48d36b4-3365-4a56-b399-82ef01639c8b name=/runtime.v1.RuntimeService/Version
	Jun 03 11:04:15 ha-683480 crio[677]: time="2024-06-03 11:04:15.612642005Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=ac03bc1a-b8cf-4025-8b56-7ae1ff9c55a9 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 03 11:04:15 ha-683480 crio[677]: time="2024-06-03 11:04:15.613367873Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1717412655613335430,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154742,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=ac03bc1a-b8cf-4025-8b56-7ae1ff9c55a9 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 03 11:04:15 ha-683480 crio[677]: time="2024-06-03 11:04:15.614182520Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=cc4dd950-0431-42b4-990e-c9f34f772379 name=/runtime.v1.RuntimeService/ListContainers
	Jun 03 11:04:15 ha-683480 crio[677]: time="2024-06-03 11:04:15.614257375Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=cc4dd950-0431-42b4-990e-c9f34f772379 name=/runtime.v1.RuntimeService/ListContainers
	Jun 03 11:04:15 ha-683480 crio[677]: time="2024-06-03 11:04:15.614664586Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:348419ceaffc348fe3779838e8b27e8baa3aa566be3f4c329aea8b701917349c,PodSandboxId:d32d79da82b93361a47376b8d8beec88e0c5d9097ed7a7450c63de0ee96d230f,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1717412452793821524,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-mvpcm,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: fe7a8238-754b-43ce-8080-48e39c548383,},Annotations:map[string]string{io.kubernetes.container.hash: 17542a28,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b5e9b65b02107aa343d9bd2938c82d12641166c15c0364265fb74b1a00b58a60,PodSandboxId:b1b8dc93262494d7c16fb61879ea3220c5decc3e129bda003d03246037cb82a0,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1717412239593591044,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a410a98d-73a7-434b-88ce-575c300b2807,},Annotations:map[string]string{io.kubernetes.container.hash: c0c86aa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.term
inationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fdbecc258023e10eac66da5599945eae2f7f8735769b825a69aea8b2effce668,PodSandboxId:62bef471ea4a403424478ea00a89f4311f3d11aea1fc0301abe18ddf44455091,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1717412239551891220,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-8tqf9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8eab910a-98ed-43db-ac16-d53beb6b7ee4,},Annotations:map[string]string{io.kubernetes.container.hash: 38c633a6,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"nam
e\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aa5e3aca86502907c8d16e6a2327b8f4298b6076617819ceed2b250ae9b24fe8,PodSandboxId:41da25dac8c4818183c067f43713ee94cebef64eab1ffb890510822bc9712a41,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1717412239525725221,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-nff86,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 02320e91-17a
b-4120-b8b9-dcc08234f180,},Annotations:map[string]string{io.kubernetes.container.hash: 9dfd16ce,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:995fa288cd9162aa7fa350ae7a02800593a524c7300a6fa984b62ba4b928891b,PodSandboxId:e2f8a60370d3fd1695a709fe26efc9665a764a8ede97163357b9c15c4cb5fb32,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:2b34f64609858041e706963bcd73273c087360ca240f1f9b37db6f148edb1266,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CON
TAINER_RUNNING,CreatedAt:1717412238014847308,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-zxhbp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 320e315b-e189-4358-9e56-a4be7d944fae,},Annotations:map[string]string{io.kubernetes.container.hash: ae8d6a68,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bcb102231e3a6bc3ea0cc39665baaebb0a97c42874b6cd34e86c04e87532df4f,PodSandboxId:6812552c2a4ab53e39123a83312dfad25c506cf5157864aa7732c91d6b7eebf2,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:1717412233
855123394,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4d9w5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 708e060d-115a-4b74-bc66-138d62796b50,},Annotations:map[string]string{io.kubernetes.container.hash: 4749de6d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2542929b8eaa1ecd8c858dbb7e4812ddb5121109c3c92127fa7eaae86849ebda,PodSandboxId:8990a20edbd369db84d6c96fcb753c487186298a6ec2e2e0c0fe3ce761ef55b8,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:171741221681
6707000,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-683480,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dfc66acc1754150cf4e24f38d1b191d9,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c282307764128f62fdee736d5e1ecddfbca0ae7ae2f78b7a78cbdb2dcede8556,PodSandboxId:860a510241592c9daa1fd1d8b28ba6314d6102372dd3005ee2f1fc332eaa5fbb,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1717412213949208807,Labels:map[string]string{
io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-683480,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e56ceee947d891ba1cd0986590072af7,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3e27550ee88e8dcb6316daece49f9840028efa3091db03e5549e1e3dbbd8ad59,PodSandboxId:a55199d2713b2227114c24c6ea32028395b589674496612fac0e499dc8774213,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1717412213989413462,Labels:map[string]string{io.kubernete
s.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-683480,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 003e33f91c92b780f1d2cb57410c03e9,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:09fff5459f24c748a0e085f496bf2b65db572d97be0afe906f05511398bdb0ad,PodSandboxId:86b1d4bcd541d31a17ad320bdd376b8fc84deff2fe6e38053aa471139f753d0e,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1717412213926343118,Labels:map[string]string{io.kubernetes.container.n
ame: etcd,io.kubernetes.pod.name: etcd-ha-683480,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 36cfff3e1576ec0ef9aa4746d32a32e3,},Annotations:map[string]string{io.kubernetes.container.hash: 40554970,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:200682c1dc43f01036807986e0c3bfe0b422726ec352be0df5e42fa79426ed79,PodSandboxId:117e05a9216ba0cb39b45fc899065d9fbba904cb50146ebbd3a11d129c956829,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1717412213885854963,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.nam
e: kube-apiserver-ha-683480,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b448fd1c84d729fa6b033c44220aea0b,},Annotations:map[string]string{io.kubernetes.container.hash: 25a67648,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=cc4dd950-0431-42b4-990e-c9f34f772379 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	348419ceaffc3       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   3 minutes ago       Running             busybox                   0                   d32d79da82b93       busybox-fc5497c4f-mvpcm
	b5e9b65b02107       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      6 minutes ago       Running             storage-provisioner       0                   b1b8dc9326249       storage-provisioner
	fdbecc258023e       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      6 minutes ago       Running             coredns                   0                   62bef471ea4a4       coredns-7db6d8ff4d-8tqf9
	aa5e3aca86502       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      6 minutes ago       Running             coredns                   0                   41da25dac8c48       coredns-7db6d8ff4d-nff86
	995fa288cd916       docker.io/kindest/kindnetd@sha256:2b34f64609858041e706963bcd73273c087360ca240f1f9b37db6f148edb1266    6 minutes ago       Running             kindnet-cni               0                   e2f8a60370d3f       kindnet-zxhbp
	bcb102231e3a6       747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd                                      7 minutes ago       Running             kube-proxy                0                   6812552c2a4ab       kube-proxy-4d9w5
	2542929b8eaa1       ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f     7 minutes ago       Running             kube-vip                  0                   8990a20edbd36       kube-vip-ha-683480
	3e27550ee88e8       25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c                                      7 minutes ago       Running             kube-controller-manager   0                   a55199d2713b2       kube-controller-manager-ha-683480
	c282307764128       a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035                                      7 minutes ago       Running             kube-scheduler            0                   860a510241592       kube-scheduler-ha-683480
	09fff5459f24c       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      7 minutes ago       Running             etcd                      0                   86b1d4bcd541d       etcd-ha-683480
	200682c1dc43f       91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a                                      7 minutes ago       Running             kube-apiserver            0                   117e05a9216ba       kube-apiserver-ha-683480
	
	
	==> coredns [aa5e3aca86502907c8d16e6a2327b8f4298b6076617819ceed2b250ae9b24fe8] <==
	[INFO] 10.244.0.4:50785 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.001418789s
	[INFO] 10.244.2.2:45411 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.001774399s
	[INFO] 10.244.1.2:53834 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.003893614s
	[INFO] 10.244.1.2:48466 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000159838s
	[INFO] 10.244.1.2:57388 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000158737s
	[INFO] 10.244.1.2:59258 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00009417s
	[INFO] 10.244.0.4:59067 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001995491s
	[INFO] 10.244.0.4:33658 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000077694s
	[INFO] 10.244.2.2:56134 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000146189s
	[INFO] 10.244.2.2:42897 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001874015s
	[INFO] 10.244.2.2:49555 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000079926s
	[INFO] 10.244.1.2:49977 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000098794s
	[INFO] 10.244.1.2:55522 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000070995s
	[INFO] 10.244.1.2:47166 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000064061s
	[INFO] 10.244.0.4:52772 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000107779s
	[INFO] 10.244.0.4:34695 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000110706s
	[INFO] 10.244.2.2:47248 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00010537s
	[INFO] 10.244.1.2:52200 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000175618s
	[INFO] 10.244.1.2:56731 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000211211s
	[INFO] 10.244.1.2:47156 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000137189s
	[INFO] 10.244.1.2:57441 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000161046s
	[INFO] 10.244.0.4:45937 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000064288s
	[INFO] 10.244.0.4:50125 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.00003887s
	[INFO] 10.244.2.2:38937 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000134308s
	[INFO] 10.244.2.2:34039 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000085147s
	
	
	==> coredns [fdbecc258023e10eac66da5599945eae2f7f8735769b825a69aea8b2effce668] <==
	[INFO] 10.244.1.2:51172 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000127457s
	[INFO] 10.244.1.2:44058 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000217914s
	[INFO] 10.244.1.2:60397 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.013328418s
	[INFO] 10.244.1.2:34848 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000138348s
	[INFO] 10.244.0.4:53254 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000147619s
	[INFO] 10.244.0.4:37575 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000103362s
	[INFO] 10.244.0.4:54948 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000181862s
	[INFO] 10.244.0.4:39944 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001365258s
	[INFO] 10.244.0.4:55239 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.00017828s
	[INFO] 10.244.0.4:57467 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000097919s
	[INFO] 10.244.2.2:35971 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000096406s
	[INFO] 10.244.2.2:38423 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001334812s
	[INFO] 10.244.2.2:42352 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000153771s
	[INFO] 10.244.2.2:40734 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000099488s
	[INFO] 10.244.2.2:34598 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000136946s
	[INFO] 10.244.1.2:54219 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000087067s
	[INFO] 10.244.0.4:58452 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000093948s
	[INFO] 10.244.0.4:35784 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000061499s
	[INFO] 10.244.2.2:54391 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000149082s
	[INFO] 10.244.2.2:39850 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000109311s
	[INFO] 10.244.2.2:39330 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000101321s
	[INFO] 10.244.0.4:56550 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000137331s
	[INFO] 10.244.0.4:42317 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000097716s
	[INFO] 10.244.2.2:34210 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000106975s
	[INFO] 10.244.2.2:40755 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.00028708s
	
	
	==> describe nodes <==
	Name:               ha-683480
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-683480
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=599070631c2216ebc936292d491e4fe10e15b9d8
	                    minikube.k8s.io/name=ha-683480
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_06_03T10_57_01_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 03 Jun 2024 10:56:57 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-683480
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 03 Jun 2024 11:04:08 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 03 Jun 2024 11:01:04 +0000   Mon, 03 Jun 2024 10:56:56 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 03 Jun 2024 11:01:04 +0000   Mon, 03 Jun 2024 10:56:56 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 03 Jun 2024 11:01:04 +0000   Mon, 03 Jun 2024 10:56:56 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 03 Jun 2024 11:01:04 +0000   Mon, 03 Jun 2024 10:57:18 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.116
	  Hostname:    ha-683480
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 f1505c2b59bc4afb8c36148f46c99e6c
	  System UUID:                f1505c2b-59bc-4afb-8c36-148f46c99e6c
	  Boot ID:                    acccd468-078d-403e-a5b4-d10d97594cc0
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.1
	  Kube-Proxy Version:         v1.30.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-mvpcm              0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m26s
	  kube-system                 coredns-7db6d8ff4d-8tqf9             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     7m2s
	  kube-system                 coredns-7db6d8ff4d-nff86             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     7m2s
	  kube-system                 etcd-ha-683480                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         7m15s
	  kube-system                 kindnet-zxhbp                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      7m2s
	  kube-system                 kube-apiserver-ha-683480             250m (12%)    0 (0%)      0 (0%)           0 (0%)         7m15s
	  kube-system                 kube-controller-manager-ha-683480    200m (10%)    0 (0%)      0 (0%)           0 (0%)         7m17s
	  kube-system                 kube-proxy-4d9w5                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m2s
	  kube-system                 kube-scheduler-ha-683480             100m (5%)     0 (0%)      0 (0%)           0 (0%)         7m15s
	  kube-system                 kube-vip-ha-683480                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m18s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m1s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 7m1s                   kube-proxy       
	  Normal  NodeHasSufficientPID     7m22s (x7 over 7m22s)  kubelet          Node ha-683480 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  7m22s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 7m22s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  7m22s (x8 over 7m22s)  kubelet          Node ha-683480 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7m22s (x8 over 7m22s)  kubelet          Node ha-683480 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 7m15s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  7m15s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  7m15s                  kubelet          Node ha-683480 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7m15s                  kubelet          Node ha-683480 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7m15s                  kubelet          Node ha-683480 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           7m3s                   node-controller  Node ha-683480 event: Registered Node ha-683480 in Controller
	  Normal  NodeReady                6m57s                  kubelet          Node ha-683480 status is now: NodeReady
	  Normal  RegisteredNode           4m46s                  node-controller  Node ha-683480 event: Registered Node ha-683480 in Controller
	  Normal  RegisteredNode           3m32s                  node-controller  Node ha-683480 event: Registered Node ha-683480 in Controller
	
	
	Name:               ha-683480-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-683480-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=599070631c2216ebc936292d491e4fe10e15b9d8
	                    minikube.k8s.io/name=ha-683480
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_06_03T10_59_14_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 03 Jun 2024 10:59:11 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-683480-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 03 Jun 2024 11:01:54 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Mon, 03 Jun 2024 11:01:14 +0000   Mon, 03 Jun 2024 11:02:37 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Mon, 03 Jun 2024 11:01:14 +0000   Mon, 03 Jun 2024 11:02:37 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Mon, 03 Jun 2024 11:01:14 +0000   Mon, 03 Jun 2024 11:02:37 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Mon, 03 Jun 2024 11:01:14 +0000   Mon, 03 Jun 2024 11:02:37 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.127
	  Hostname:    ha-683480-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 2d1a1fca79484f629cf7b8fc1955281b
	  System UUID:                2d1a1fca-7948-4f62-9cf7-b8fc1955281b
	  Boot ID:                    9fed0fd2-3bb7-4f1f-92e4-0c4854a958bd
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.1
	  Kube-Proxy Version:         v1.30.1
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-ldtcf                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m26s
	  kube-system                 etcd-ha-683480-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         5m2s
	  kube-system                 kindnet-t6fxj                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      5m4s
	  kube-system                 kube-apiserver-ha-683480-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m2s
	  kube-system                 kube-controller-manager-ha-683480-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m2s
	  kube-system                 kube-proxy-q2xfn                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m4s
	  kube-system                 kube-scheduler-ha-683480-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m59s
	  kube-system                 kube-vip-ha-683480-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m59s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 4m59s                kube-proxy       
	  Normal  NodeHasSufficientMemory  5m4s (x8 over 5m4s)  kubelet          Node ha-683480-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m4s (x8 over 5m4s)  kubelet          Node ha-683480-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m4s (x7 over 5m4s)  kubelet          Node ha-683480-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m4s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           5m3s                 node-controller  Node ha-683480-m02 event: Registered Node ha-683480-m02 in Controller
	  Normal  RegisteredNode           4m46s                node-controller  Node ha-683480-m02 event: Registered Node ha-683480-m02 in Controller
	  Normal  RegisteredNode           3m32s                node-controller  Node ha-683480-m02 event: Registered Node ha-683480-m02 in Controller
	  Normal  NodeNotReady             98s                  node-controller  Node ha-683480-m02 status is now: NodeNotReady
	
	
	Name:               ha-683480-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-683480-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=599070631c2216ebc936292d491e4fe10e15b9d8
	                    minikube.k8s.io/name=ha-683480
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_06_03T11_00_29_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 03 Jun 2024 11:00:25 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-683480-m03
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 03 Jun 2024 11:04:09 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 03 Jun 2024 11:00:55 +0000   Mon, 03 Jun 2024 11:00:25 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 03 Jun 2024 11:00:55 +0000   Mon, 03 Jun 2024 11:00:25 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 03 Jun 2024 11:00:55 +0000   Mon, 03 Jun 2024 11:00:25 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 03 Jun 2024 11:00:55 +0000   Mon, 03 Jun 2024 11:00:36 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.131
	  Hostname:    ha-683480-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 b7bb33c5cad548f785d23d226c699411
	  System UUID:                b7bb33c5-cad5-48f7-85d2-3d226c699411
	  Boot ID:                    dafc5e08-866b-431b-bf46-a55811884d2b
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.1
	  Kube-Proxy Version:         v1.30.1
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-ngf6n                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m27s
	  kube-system                 etcd-ha-683480-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         3m49s
	  kube-system                 kindnet-zsfhr                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      3m51s
	  kube-system                 kube-apiserver-ha-683480-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         3m50s
	  kube-system                 kube-controller-manager-ha-683480-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         3m49s
	  kube-system                 kube-proxy-txnhc                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m51s
	  kube-system                 kube-scheduler-ha-683480-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         3m49s
	  kube-system                 kube-vip-ha-683480-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m46s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 3m45s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  3m51s (x8 over 3m51s)  kubelet          Node ha-683480-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m51s (x8 over 3m51s)  kubelet          Node ha-683480-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m51s (x7 over 3m51s)  kubelet          Node ha-683480-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m51s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           3m49s                  node-controller  Node ha-683480-m03 event: Registered Node ha-683480-m03 in Controller
	  Normal  RegisteredNode           3m47s                  node-controller  Node ha-683480-m03 event: Registered Node ha-683480-m03 in Controller
	  Normal  RegisteredNode           3m33s                  node-controller  Node ha-683480-m03 event: Registered Node ha-683480-m03 in Controller
	
	
	Name:               ha-683480-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-683480-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=599070631c2216ebc936292d491e4fe10e15b9d8
	                    minikube.k8s.io/name=ha-683480
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_06_03T11_01_25_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 03 Jun 2024 11:01:24 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-683480-m04
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 03 Jun 2024 11:04:08 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 03 Jun 2024 11:01:55 +0000   Mon, 03 Jun 2024 11:01:24 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 03 Jun 2024 11:01:55 +0000   Mon, 03 Jun 2024 11:01:24 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 03 Jun 2024 11:01:55 +0000   Mon, 03 Jun 2024 11:01:24 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 03 Jun 2024 11:01:55 +0000   Mon, 03 Jun 2024 11:01:35 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.206
	  Hostname:    ha-683480-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 d0705544cf414e31abf26e0a013cd6bf
	  System UUID:                d0705544-cf41-4e31-abf2-6e0a013cd6bf
	  Boot ID:                    125ac719-6c97-4e76-9440-99e7f62b9e2d
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.1
	  Kube-Proxy Version:         v1.30.1
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-24p87       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      2m51s
	  kube-system                 kube-proxy-2kkf4    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m51s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 2m45s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  2m52s (x2 over 2m52s)  kubelet          Node ha-683480-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m52s (x2 over 2m52s)  kubelet          Node ha-683480-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m52s (x2 over 2m52s)  kubelet          Node ha-683480-m04 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m51s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           2m49s                  node-controller  Node ha-683480-m04 event: Registered Node ha-683480-m04 in Controller
	  Normal  RegisteredNode           2m48s                  node-controller  Node ha-683480-m04 event: Registered Node ha-683480-m04 in Controller
	  Normal  RegisteredNode           2m47s                  node-controller  Node ha-683480-m04 event: Registered Node ha-683480-m04 in Controller
	  Normal  NodeReady                2m41s                  kubelet          Node ha-683480-m04 status is now: NodeReady
	
	
	==> dmesg <==
	[Jun 3 10:56] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.051360] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.039810] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.490599] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.327761] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +4.577151] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000012] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[ +13.363785] systemd-fstab-generator[594]: Ignoring "noauto" option for root device
	[  +0.062784] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.051848] systemd-fstab-generator[606]: Ignoring "noauto" option for root device
	[  +0.189543] systemd-fstab-generator[620]: Ignoring "noauto" option for root device
	[  +0.108878] systemd-fstab-generator[632]: Ignoring "noauto" option for root device
	[  +0.262803] systemd-fstab-generator[662]: Ignoring "noauto" option for root device
	[  +4.077728] systemd-fstab-generator[762]: Ignoring "noauto" option for root device
	[  +5.011635] systemd-fstab-generator[951]: Ignoring "noauto" option for root device
	[  +0.054415] kauditd_printk_skb: 158 callbacks suppressed
	[  +5.849379] kauditd_printk_skb: 79 callbacks suppressed
	[  +1.148784] systemd-fstab-generator[1371]: Ignoring "noauto" option for root device
	[Jun 3 10:57] kauditd_printk_skb: 15 callbacks suppressed
	[  +5.057593] kauditd_printk_skb: 34 callbacks suppressed
	[Jun 3 10:59] kauditd_printk_skb: 30 callbacks suppressed
	
	
	==> etcd [09fff5459f24c748a0e085f496bf2b65db572d97be0afe906f05511398bdb0ad] <==
	{"level":"warn","ts":"2024-06-03T11:04:15.605523Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"8b2d6b6d639b2fdb","from":"8b2d6b6d639b2fdb","remote-peer-id":"186d66165cd2cce","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-06-03T11:04:15.70596Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"8b2d6b6d639b2fdb","from":"8b2d6b6d639b2fdb","remote-peer-id":"186d66165cd2cce","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-06-03T11:04:15.805686Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"8b2d6b6d639b2fdb","from":"8b2d6b6d639b2fdb","remote-peer-id":"186d66165cd2cce","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-06-03T11:04:15.915148Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"8b2d6b6d639b2fdb","from":"8b2d6b6d639b2fdb","remote-peer-id":"186d66165cd2cce","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-06-03T11:04:15.919172Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"8b2d6b6d639b2fdb","from":"8b2d6b6d639b2fdb","remote-peer-id":"186d66165cd2cce","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-06-03T11:04:15.933537Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"8b2d6b6d639b2fdb","from":"8b2d6b6d639b2fdb","remote-peer-id":"186d66165cd2cce","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-06-03T11:04:15.942176Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"8b2d6b6d639b2fdb","from":"8b2d6b6d639b2fdb","remote-peer-id":"186d66165cd2cce","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-06-03T11:04:15.950172Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"8b2d6b6d639b2fdb","from":"8b2d6b6d639b2fdb","remote-peer-id":"186d66165cd2cce","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-06-03T11:04:15.953587Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"8b2d6b6d639b2fdb","from":"8b2d6b6d639b2fdb","remote-peer-id":"186d66165cd2cce","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-06-03T11:04:15.956256Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"8b2d6b6d639b2fdb","from":"8b2d6b6d639b2fdb","remote-peer-id":"186d66165cd2cce","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-06-03T11:04:15.966769Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"8b2d6b6d639b2fdb","from":"8b2d6b6d639b2fdb","remote-peer-id":"186d66165cd2cce","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-06-03T11:04:15.97416Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"8b2d6b6d639b2fdb","from":"8b2d6b6d639b2fdb","remote-peer-id":"186d66165cd2cce","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-06-03T11:04:15.981739Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"8b2d6b6d639b2fdb","from":"8b2d6b6d639b2fdb","remote-peer-id":"186d66165cd2cce","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-06-03T11:04:15.985485Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"8b2d6b6d639b2fdb","from":"8b2d6b6d639b2fdb","remote-peer-id":"186d66165cd2cce","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-06-03T11:04:15.988629Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"8b2d6b6d639b2fdb","from":"8b2d6b6d639b2fdb","remote-peer-id":"186d66165cd2cce","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-06-03T11:04:16.001139Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"8b2d6b6d639b2fdb","from":"8b2d6b6d639b2fdb","remote-peer-id":"186d66165cd2cce","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-06-03T11:04:16.005209Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"8b2d6b6d639b2fdb","from":"8b2d6b6d639b2fdb","remote-peer-id":"186d66165cd2cce","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-06-03T11:04:16.010354Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"8b2d6b6d639b2fdb","from":"8b2d6b6d639b2fdb","remote-peer-id":"186d66165cd2cce","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-06-03T11:04:16.017347Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"8b2d6b6d639b2fdb","from":"8b2d6b6d639b2fdb","remote-peer-id":"186d66165cd2cce","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-06-03T11:04:16.021073Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"8b2d6b6d639b2fdb","from":"8b2d6b6d639b2fdb","remote-peer-id":"186d66165cd2cce","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-06-03T11:04:16.024741Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"8b2d6b6d639b2fdb","from":"8b2d6b6d639b2fdb","remote-peer-id":"186d66165cd2cce","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-06-03T11:04:16.033195Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"8b2d6b6d639b2fdb","from":"8b2d6b6d639b2fdb","remote-peer-id":"186d66165cd2cce","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-06-03T11:04:16.039443Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"8b2d6b6d639b2fdb","from":"8b2d6b6d639b2fdb","remote-peer-id":"186d66165cd2cce","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-06-03T11:04:16.047295Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"8b2d6b6d639b2fdb","from":"8b2d6b6d639b2fdb","remote-peer-id":"186d66165cd2cce","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-06-03T11:04:16.106072Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"8b2d6b6d639b2fdb","from":"8b2d6b6d639b2fdb","remote-peer-id":"186d66165cd2cce","remote-peer-name":"pipeline","remote-peer-active":false}
	
	
	==> kernel <==
	 11:04:16 up 7 min,  0 users,  load average: 0.39, 0.22, 0.09
	Linux ha-683480 5.10.207 #1 SMP Wed May 22 22:17:16 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [995fa288cd9162aa7fa350ae7a02800593a524c7300a6fa984b62ba4b928891b] <==
	I0603 11:03:39.274713       1 main.go:250] Node ha-683480-m04 has CIDR [10.244.3.0/24] 
	I0603 11:03:49.281397       1 main.go:223] Handling node with IPs: map[192.168.39.116:{}]
	I0603 11:03:49.281440       1 main.go:227] handling current node
	I0603 11:03:49.281450       1 main.go:223] Handling node with IPs: map[192.168.39.127:{}]
	I0603 11:03:49.281455       1 main.go:250] Node ha-683480-m02 has CIDR [10.244.1.0/24] 
	I0603 11:03:49.281592       1 main.go:223] Handling node with IPs: map[192.168.39.131:{}]
	I0603 11:03:49.281793       1 main.go:250] Node ha-683480-m03 has CIDR [10.244.2.0/24] 
	I0603 11:03:49.281904       1 main.go:223] Handling node with IPs: map[192.168.39.206:{}]
	I0603 11:03:49.281927       1 main.go:250] Node ha-683480-m04 has CIDR [10.244.3.0/24] 
	I0603 11:03:59.287409       1 main.go:223] Handling node with IPs: map[192.168.39.116:{}]
	I0603 11:03:59.287457       1 main.go:227] handling current node
	I0603 11:03:59.287489       1 main.go:223] Handling node with IPs: map[192.168.39.127:{}]
	I0603 11:03:59.287494       1 main.go:250] Node ha-683480-m02 has CIDR [10.244.1.0/24] 
	I0603 11:03:59.287601       1 main.go:223] Handling node with IPs: map[192.168.39.131:{}]
	I0603 11:03:59.287626       1 main.go:250] Node ha-683480-m03 has CIDR [10.244.2.0/24] 
	I0603 11:03:59.287676       1 main.go:223] Handling node with IPs: map[192.168.39.206:{}]
	I0603 11:03:59.287699       1 main.go:250] Node ha-683480-m04 has CIDR [10.244.3.0/24] 
	I0603 11:04:09.302942       1 main.go:223] Handling node with IPs: map[192.168.39.116:{}]
	I0603 11:04:09.303048       1 main.go:227] handling current node
	I0603 11:04:09.303059       1 main.go:223] Handling node with IPs: map[192.168.39.127:{}]
	I0603 11:04:09.303065       1 main.go:250] Node ha-683480-m02 has CIDR [10.244.1.0/24] 
	I0603 11:04:09.303314       1 main.go:223] Handling node with IPs: map[192.168.39.131:{}]
	I0603 11:04:09.303342       1 main.go:250] Node ha-683480-m03 has CIDR [10.244.2.0/24] 
	I0603 11:04:09.303403       1 main.go:223] Handling node with IPs: map[192.168.39.206:{}]
	I0603 11:04:09.303408       1 main.go:250] Node ha-683480-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [200682c1dc43f01036807986e0c3bfe0b422726ec352be0df5e42fa79426ed79] <==
	W0603 10:56:58.845311       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.116]
	I0603 10:56:58.846085       1 controller.go:615] quota admission added evaluator for: endpoints
	I0603 10:56:58.849879       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0603 10:56:59.045916       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0603 10:57:00.166965       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0603 10:57:00.191680       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0603 10:57:00.212211       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0603 10:57:12.906632       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I0603 10:57:13.254690       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	E0603 11:00:54.043235       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:53504: use of closed network connection
	E0603 11:00:54.243975       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:53510: use of closed network connection
	E0603 11:00:54.433209       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:53534: use of closed network connection
	E0603 11:00:54.645283       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:53538: use of closed network connection
	E0603 11:00:54.829106       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:53552: use of closed network connection
	E0603 11:00:55.009820       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:53574: use of closed network connection
	E0603 11:00:55.193584       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:53586: use of closed network connection
	E0603 11:00:55.367398       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:53604: use of closed network connection
	E0603 11:00:55.553665       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:53628: use of closed network connection
	E0603 11:00:55.827887       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:53642: use of closed network connection
	E0603 11:00:56.014267       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:53658: use of closed network connection
	E0603 11:00:56.195730       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:53666: use of closed network connection
	E0603 11:00:56.391371       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:53688: use of closed network connection
	E0603 11:00:56.573908       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:53710: use of closed network connection
	E0603 11:00:56.742677       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:49688: use of closed network connection
	W0603 11:02:08.863808       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.116 192.168.39.131]
	
	
	==> kube-controller-manager [3e27550ee88e8dcb6316daece49f9840028efa3091db03e5549e1e3dbbd8ad59] <==
	I0603 11:00:25.470071       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="ha-683480-m03" podCIDRs=["10.244.2.0/24"]
	I0603 11:00:27.549975       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-683480-m03"
	I0603 11:00:49.547774       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="122.596903ms"
	I0603 11:00:49.666180       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="118.082703ms"
	I0603 11:00:49.837962       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="171.715406ms"
	I0603 11:00:49.892720       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="54.455781ms"
	I0603 11:00:49.892826       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="56.515µs"
	I0603 11:00:50.005424       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="38.472497ms"
	I0603 11:00:50.005505       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="49.77µs"
	I0603 11:00:50.116701       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="57.234µs"
	I0603 11:00:51.839738       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="38.972µs"
	I0603 11:00:53.105142       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="33.466395ms"
	I0603 11:00:53.105291       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="53.046µs"
	I0603 11:00:53.312277       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="11.951093ms"
	I0603 11:00:53.312366       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="40.995µs"
	I0603 11:00:53.591833       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="39.110404ms"
	I0603 11:00:53.591950       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="74.101µs"
	E0603 11:01:24.869229       1 certificate_controller.go:146] Sync csr-l4bzv failed with : error updating signature for csr: Operation cannot be fulfilled on certificatesigningrequests.certificates.k8s.io "csr-l4bzv": the object has been modified; please apply your changes to the latest version and try again
	I0603 11:01:25.167074       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-683480-m04\" does not exist"
	I0603 11:01:25.183314       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="ha-683480-m04" podCIDRs=["10.244.3.0/24"]
	I0603 11:01:27.580496       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-683480-m04"
	I0603 11:01:35.199652       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-683480-m04"
	I0603 11:02:37.626039       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-683480-m04"
	I0603 11:02:37.756078       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="45.010628ms"
	I0603 11:02:37.756369       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="189.403µs"
	
	
	==> kube-proxy [bcb102231e3a6bc3ea0cc39665baaebb0a97c42874b6cd34e86c04e87532df4f] <==
	I0603 10:57:14.219931       1 server_linux.go:69] "Using iptables proxy"
	I0603 10:57:14.234516       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.116"]
	I0603 10:57:14.308348       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0603 10:57:14.308413       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0603 10:57:14.308429       1 server_linux.go:165] "Using iptables Proxier"
	I0603 10:57:14.321218       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0603 10:57:14.321484       1 server.go:872] "Version info" version="v1.30.1"
	I0603 10:57:14.322736       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0603 10:57:14.325109       1 config.go:192] "Starting service config controller"
	I0603 10:57:14.325148       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0603 10:57:14.325187       1 config.go:101] "Starting endpoint slice config controller"
	I0603 10:57:14.325203       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0603 10:57:14.325717       1 config.go:319] "Starting node config controller"
	I0603 10:57:14.325767       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0603 10:57:14.425945       1 shared_informer.go:320] Caches are synced for node config
	I0603 10:57:14.426052       1 shared_informer.go:320] Caches are synced for service config
	I0603 10:57:14.426092       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [c282307764128f62fdee736d5e1ecddfbca0ae7ae2f78b7a78cbdb2dcede8556] <==
	W0603 10:56:58.275182       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0603 10:56:58.275297       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0603 10:56:58.355336       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0603 10:56:58.355431       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0603 10:56:58.409399       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0603 10:56:58.409428       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0603 10:56:58.414878       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0603 10:56:58.414916       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0603 10:56:58.424918       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0603 10:56:58.425079       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0603 10:56:58.564204       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0603 10:56:58.564306       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0603 10:57:00.796334       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0603 11:00:49.536517       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-mvpcm\": pod busybox-fc5497c4f-mvpcm is already assigned to node \"ha-683480\"" plugin="DefaultBinder" pod="default/busybox-fc5497c4f-mvpcm" node="ha-683480"
	E0603 11:00:49.542818       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod fe7a8238-754b-43ce-8080-48e39c548383(default/busybox-fc5497c4f-mvpcm) wasn't assumed so cannot be forgotten" pod="default/busybox-fc5497c4f-mvpcm"
	E0603 11:00:49.543611       1 schedule_one.go:1051] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-mvpcm\": pod busybox-fc5497c4f-mvpcm is already assigned to node \"ha-683480\"" pod="default/busybox-fc5497c4f-mvpcm"
	I0603 11:00:49.543859       1 schedule_one.go:1064] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-fc5497c4f-mvpcm" node="ha-683480"
	E0603 11:01:25.237779       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-24p87\": pod kindnet-24p87 is already assigned to node \"ha-683480-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-24p87" node="ha-683480-m04"
	E0603 11:01:25.238607       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod dee8d19c-7e34-45b9-b5f4-88e8e8cb92e9(kube-system/kindnet-24p87) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-24p87"
	E0603 11:01:25.241383       1 schedule_one.go:1051] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-24p87\": pod kindnet-24p87 is already assigned to node \"ha-683480-m04\"" pod="kube-system/kindnet-24p87"
	I0603 11:01:25.241448       1 schedule_one.go:1064] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-24p87" node="ha-683480-m04"
	E0603 11:01:25.246543       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-6xfsj\": pod kube-proxy-6xfsj is already assigned to node \"ha-683480-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-6xfsj" node="ha-683480-m04"
	E0603 11:01:25.250352       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod 9eaf0689-1d2f-4ffd-b921-c682b1b47fd0(kube-system/kube-proxy-6xfsj) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-6xfsj"
	E0603 11:01:25.253261       1 schedule_one.go:1051] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-6xfsj\": pod kube-proxy-6xfsj is already assigned to node \"ha-683480-m04\"" pod="kube-system/kube-proxy-6xfsj"
	I0603 11:01:25.253639       1 schedule_one.go:1064] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-6xfsj" node="ha-683480-m04"
	
	
	==> kubelet <==
	Jun 03 11:00:00 ha-683480 kubelet[1378]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 03 11:00:00 ha-683480 kubelet[1378]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jun 03 11:00:49 ha-683480 kubelet[1378]: I0603 11:00:49.537195    1378 topology_manager.go:215] "Topology Admit Handler" podUID="fe7a8238-754b-43ce-8080-48e39c548383" podNamespace="default" podName="busybox-fc5497c4f-mvpcm"
	Jun 03 11:00:49 ha-683480 kubelet[1378]: I0603 11:00:49.570102    1378 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-68cmm\" (UniqueName: \"kubernetes.io/projected/fe7a8238-754b-43ce-8080-48e39c548383-kube-api-access-68cmm\") pod \"busybox-fc5497c4f-mvpcm\" (UID: \"fe7a8238-754b-43ce-8080-48e39c548383\") " pod="default/busybox-fc5497c4f-mvpcm"
	Jun 03 11:00:55 ha-683480 kubelet[1378]: E0603 11:00:55.368165    1378 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:37010->127.0.0.1:34243: write tcp 127.0.0.1:37010->127.0.0.1:34243: write: broken pipe
	Jun 03 11:01:00 ha-683480 kubelet[1378]: E0603 11:01:00.112213    1378 iptables.go:577] "Could not set up iptables canary" err=<
	Jun 03 11:01:00 ha-683480 kubelet[1378]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jun 03 11:01:00 ha-683480 kubelet[1378]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jun 03 11:01:00 ha-683480 kubelet[1378]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 03 11:01:00 ha-683480 kubelet[1378]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jun 03 11:02:00 ha-683480 kubelet[1378]: E0603 11:02:00.116393    1378 iptables.go:577] "Could not set up iptables canary" err=<
	Jun 03 11:02:00 ha-683480 kubelet[1378]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jun 03 11:02:00 ha-683480 kubelet[1378]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jun 03 11:02:00 ha-683480 kubelet[1378]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 03 11:02:00 ha-683480 kubelet[1378]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jun 03 11:03:00 ha-683480 kubelet[1378]: E0603 11:03:00.112081    1378 iptables.go:577] "Could not set up iptables canary" err=<
	Jun 03 11:03:00 ha-683480 kubelet[1378]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jun 03 11:03:00 ha-683480 kubelet[1378]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jun 03 11:03:00 ha-683480 kubelet[1378]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 03 11:03:00 ha-683480 kubelet[1378]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jun 03 11:04:00 ha-683480 kubelet[1378]: E0603 11:04:00.112706    1378 iptables.go:577] "Could not set up iptables canary" err=<
	Jun 03 11:04:00 ha-683480 kubelet[1378]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jun 03 11:04:00 ha-683480 kubelet[1378]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jun 03 11:04:00 ha-683480 kubelet[1378]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 03 11:04:00 ha-683480 kubelet[1378]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-683480 -n ha-683480
helpers_test.go:261: (dbg) Run:  kubectl --context ha-683480 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/StopSecondaryNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/StopSecondaryNode (141.92s)
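The post-mortem above closes with the harness re-running the status probe from helpers_test.go against the ha-683480 profile. As a minimal sketch only (not part of the test run, and assuming the out/minikube-linux-amd64 binary and the ha-683480 profile from this report are still present locally), the same probe can be re-issued from Go the way the harness does:

	// Sketch: re-run the post-mortem status probe shown above.
	// Assumes out/minikube-linux-amd64 and the ha-683480 profile exist.
	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		cmd := exec.Command("out/minikube-linux-amd64", "status",
			"--format={{.APIServer}}", "-p", "ha-683480", "-n", "ha-683480")
		out, err := cmd.CombinedOutput()
		fmt.Printf("output: %s\n", out)
		if err != nil {
			// A non-zero exit corresponds to the "Non-zero exit" lines in this report.
			fmt.Printf("exit error: %v\n", err)
		}
	}
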

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartSecondaryNode (62.48s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-linux-amd64 -p ha-683480 node start m02 -v=7 --alsologtostderr
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-683480 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-683480 status -v=7 --alsologtostderr: exit status 3 (3.197625286s)

                                                
                                                
-- stdout --
	ha-683480
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-683480-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-683480-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-683480-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0603 11:04:20.632341   30585 out.go:291] Setting OutFile to fd 1 ...
	I0603 11:04:20.632488   30585 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0603 11:04:20.632500   30585 out.go:304] Setting ErrFile to fd 2...
	I0603 11:04:20.632505   30585 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0603 11:04:20.632682   30585 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19008-7755/.minikube/bin
	I0603 11:04:20.632840   30585 out.go:298] Setting JSON to false
	I0603 11:04:20.632862   30585 mustload.go:65] Loading cluster: ha-683480
	I0603 11:04:20.632962   30585 notify.go:220] Checking for updates...
	I0603 11:04:20.633197   30585 config.go:182] Loaded profile config "ha-683480": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0603 11:04:20.633209   30585 status.go:255] checking status of ha-683480 ...
	I0603 11:04:20.633586   30585 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 11:04:20.633646   30585 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 11:04:20.653406   30585 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46173
	I0603 11:04:20.653798   30585 main.go:141] libmachine: () Calling .GetVersion
	I0603 11:04:20.654410   30585 main.go:141] libmachine: Using API Version  1
	I0603 11:04:20.654437   30585 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 11:04:20.654749   30585 main.go:141] libmachine: () Calling .GetMachineName
	I0603 11:04:20.654973   30585 main.go:141] libmachine: (ha-683480) Calling .GetState
	I0603 11:04:20.656607   30585 status.go:330] ha-683480 host status = "Running" (err=<nil>)
	I0603 11:04:20.656626   30585 host.go:66] Checking if "ha-683480" exists ...
	I0603 11:04:20.656960   30585 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 11:04:20.656994   30585 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 11:04:20.671276   30585 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35107
	I0603 11:04:20.671654   30585 main.go:141] libmachine: () Calling .GetVersion
	I0603 11:04:20.672082   30585 main.go:141] libmachine: Using API Version  1
	I0603 11:04:20.672110   30585 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 11:04:20.672441   30585 main.go:141] libmachine: () Calling .GetMachineName
	I0603 11:04:20.672644   30585 main.go:141] libmachine: (ha-683480) Calling .GetIP
	I0603 11:04:20.675571   30585 main.go:141] libmachine: (ha-683480) DBG | domain ha-683480 has defined MAC address 52:54:00:e5:3f:6a in network mk-ha-683480
	I0603 11:04:20.676017   30585 main.go:141] libmachine: (ha-683480) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:3f:6a", ip: ""} in network mk-ha-683480: {Iface:virbr1 ExpiryTime:2024-06-03 11:56:28 +0000 UTC Type:0 Mac:52:54:00:e5:3f:6a Iaid: IPaddr:192.168.39.116 Prefix:24 Hostname:ha-683480 Clientid:01:52:54:00:e5:3f:6a}
	I0603 11:04:20.676052   30585 main.go:141] libmachine: (ha-683480) DBG | domain ha-683480 has defined IP address 192.168.39.116 and MAC address 52:54:00:e5:3f:6a in network mk-ha-683480
	I0603 11:04:20.676207   30585 host.go:66] Checking if "ha-683480" exists ...
	I0603 11:04:20.676487   30585 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 11:04:20.676517   30585 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 11:04:20.690829   30585 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36069
	I0603 11:04:20.691207   30585 main.go:141] libmachine: () Calling .GetVersion
	I0603 11:04:20.691620   30585 main.go:141] libmachine: Using API Version  1
	I0603 11:04:20.691642   30585 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 11:04:20.691937   30585 main.go:141] libmachine: () Calling .GetMachineName
	I0603 11:04:20.692109   30585 main.go:141] libmachine: (ha-683480) Calling .DriverName
	I0603 11:04:20.692289   30585 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0603 11:04:20.692312   30585 main.go:141] libmachine: (ha-683480) Calling .GetSSHHostname
	I0603 11:04:20.694750   30585 main.go:141] libmachine: (ha-683480) DBG | domain ha-683480 has defined MAC address 52:54:00:e5:3f:6a in network mk-ha-683480
	I0603 11:04:20.695185   30585 main.go:141] libmachine: (ha-683480) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:3f:6a", ip: ""} in network mk-ha-683480: {Iface:virbr1 ExpiryTime:2024-06-03 11:56:28 +0000 UTC Type:0 Mac:52:54:00:e5:3f:6a Iaid: IPaddr:192.168.39.116 Prefix:24 Hostname:ha-683480 Clientid:01:52:54:00:e5:3f:6a}
	I0603 11:04:20.695204   30585 main.go:141] libmachine: (ha-683480) DBG | domain ha-683480 has defined IP address 192.168.39.116 and MAC address 52:54:00:e5:3f:6a in network mk-ha-683480
	I0603 11:04:20.695335   30585 main.go:141] libmachine: (ha-683480) Calling .GetSSHPort
	I0603 11:04:20.695509   30585 main.go:141] libmachine: (ha-683480) Calling .GetSSHKeyPath
	I0603 11:04:20.695657   30585 main.go:141] libmachine: (ha-683480) Calling .GetSSHUsername
	I0603 11:04:20.695789   30585 sshutil.go:53] new ssh client: &{IP:192.168.39.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19008-7755/.minikube/machines/ha-683480/id_rsa Username:docker}
	I0603 11:04:20.779144   30585 ssh_runner.go:195] Run: systemctl --version
	I0603 11:04:20.785955   30585 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0603 11:04:20.801294   30585 kubeconfig.go:125] found "ha-683480" server: "https://192.168.39.254:8443"
	I0603 11:04:20.801327   30585 api_server.go:166] Checking apiserver status ...
	I0603 11:04:20.801366   30585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 11:04:20.815701   30585 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1163/cgroup
	W0603 11:04:20.825914   30585 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1163/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0603 11:04:20.825964   30585 ssh_runner.go:195] Run: ls
	I0603 11:04:20.830405   30585 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0603 11:04:20.837434   30585 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0603 11:04:20.837459   30585 status.go:422] ha-683480 apiserver status = Running (err=<nil>)
	I0603 11:04:20.837472   30585 status.go:257] ha-683480 status: &{Name:ha-683480 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0603 11:04:20.837486   30585 status.go:255] checking status of ha-683480-m02 ...
	I0603 11:04:20.837848   30585 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 11:04:20.837893   30585 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 11:04:20.852580   30585 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36253
	I0603 11:04:20.853033   30585 main.go:141] libmachine: () Calling .GetVersion
	I0603 11:04:20.853544   30585 main.go:141] libmachine: Using API Version  1
	I0603 11:04:20.853565   30585 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 11:04:20.853914   30585 main.go:141] libmachine: () Calling .GetMachineName
	I0603 11:04:20.854096   30585 main.go:141] libmachine: (ha-683480-m02) Calling .GetState
	I0603 11:04:20.855526   30585 status.go:330] ha-683480-m02 host status = "Running" (err=<nil>)
	I0603 11:04:20.855540   30585 host.go:66] Checking if "ha-683480-m02" exists ...
	I0603 11:04:20.855802   30585 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 11:04:20.855831   30585 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 11:04:20.870117   30585 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42129
	I0603 11:04:20.870568   30585 main.go:141] libmachine: () Calling .GetVersion
	I0603 11:04:20.871054   30585 main.go:141] libmachine: Using API Version  1
	I0603 11:04:20.871077   30585 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 11:04:20.871370   30585 main.go:141] libmachine: () Calling .GetMachineName
	I0603 11:04:20.871552   30585 main.go:141] libmachine: (ha-683480-m02) Calling .GetIP
	I0603 11:04:20.874218   30585 main.go:141] libmachine: (ha-683480-m02) DBG | domain ha-683480-m02 has defined MAC address 52:54:00:00:55:50 in network mk-ha-683480
	I0603 11:04:20.874672   30585 main.go:141] libmachine: (ha-683480-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:55:50", ip: ""} in network mk-ha-683480: {Iface:virbr1 ExpiryTime:2024-06-03 11:57:29 +0000 UTC Type:0 Mac:52:54:00:00:55:50 Iaid: IPaddr:192.168.39.127 Prefix:24 Hostname:ha-683480-m02 Clientid:01:52:54:00:00:55:50}
	I0603 11:04:20.874700   30585 main.go:141] libmachine: (ha-683480-m02) DBG | domain ha-683480-m02 has defined IP address 192.168.39.127 and MAC address 52:54:00:00:55:50 in network mk-ha-683480
	I0603 11:04:20.874849   30585 host.go:66] Checking if "ha-683480-m02" exists ...
	I0603 11:04:20.875184   30585 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 11:04:20.875218   30585 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 11:04:20.891614   30585 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39775
	I0603 11:04:20.892023   30585 main.go:141] libmachine: () Calling .GetVersion
	I0603 11:04:20.892526   30585 main.go:141] libmachine: Using API Version  1
	I0603 11:04:20.892553   30585 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 11:04:20.892845   30585 main.go:141] libmachine: () Calling .GetMachineName
	I0603 11:04:20.893040   30585 main.go:141] libmachine: (ha-683480-m02) Calling .DriverName
	I0603 11:04:20.893227   30585 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0603 11:04:20.893251   30585 main.go:141] libmachine: (ha-683480-m02) Calling .GetSSHHostname
	I0603 11:04:20.895851   30585 main.go:141] libmachine: (ha-683480-m02) DBG | domain ha-683480-m02 has defined MAC address 52:54:00:00:55:50 in network mk-ha-683480
	I0603 11:04:20.896254   30585 main.go:141] libmachine: (ha-683480-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:55:50", ip: ""} in network mk-ha-683480: {Iface:virbr1 ExpiryTime:2024-06-03 11:57:29 +0000 UTC Type:0 Mac:52:54:00:00:55:50 Iaid: IPaddr:192.168.39.127 Prefix:24 Hostname:ha-683480-m02 Clientid:01:52:54:00:00:55:50}
	I0603 11:04:20.896280   30585 main.go:141] libmachine: (ha-683480-m02) DBG | domain ha-683480-m02 has defined IP address 192.168.39.127 and MAC address 52:54:00:00:55:50 in network mk-ha-683480
	I0603 11:04:20.896452   30585 main.go:141] libmachine: (ha-683480-m02) Calling .GetSSHPort
	I0603 11:04:20.896600   30585 main.go:141] libmachine: (ha-683480-m02) Calling .GetSSHKeyPath
	I0603 11:04:20.896771   30585 main.go:141] libmachine: (ha-683480-m02) Calling .GetSSHUsername
	I0603 11:04:20.896906   30585 sshutil.go:53] new ssh client: &{IP:192.168.39.127 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19008-7755/.minikube/machines/ha-683480-m02/id_rsa Username:docker}
	W0603 11:04:23.439410   30585 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.127:22: connect: no route to host
	W0603 11:04:23.439517   30585 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.127:22: connect: no route to host
	E0603 11:04:23.439566   30585 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.127:22: connect: no route to host
	I0603 11:04:23.439577   30585 status.go:257] ha-683480-m02 status: &{Name:ha-683480-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0603 11:04:23.439603   30585 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.127:22: connect: no route to host
	I0603 11:04:23.439614   30585 status.go:255] checking status of ha-683480-m03 ...
	I0603 11:04:23.440091   30585 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 11:04:23.440146   30585 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 11:04:23.456428   30585 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44929
	I0603 11:04:23.456897   30585 main.go:141] libmachine: () Calling .GetVersion
	I0603 11:04:23.457476   30585 main.go:141] libmachine: Using API Version  1
	I0603 11:04:23.457502   30585 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 11:04:23.457822   30585 main.go:141] libmachine: () Calling .GetMachineName
	I0603 11:04:23.457988   30585 main.go:141] libmachine: (ha-683480-m03) Calling .GetState
	I0603 11:04:23.459557   30585 status.go:330] ha-683480-m03 host status = "Running" (err=<nil>)
	I0603 11:04:23.459571   30585 host.go:66] Checking if "ha-683480-m03" exists ...
	I0603 11:04:23.459836   30585 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 11:04:23.459865   30585 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 11:04:23.474939   30585 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35867
	I0603 11:04:23.475298   30585 main.go:141] libmachine: () Calling .GetVersion
	I0603 11:04:23.475739   30585 main.go:141] libmachine: Using API Version  1
	I0603 11:04:23.475758   30585 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 11:04:23.476032   30585 main.go:141] libmachine: () Calling .GetMachineName
	I0603 11:04:23.476238   30585 main.go:141] libmachine: (ha-683480-m03) Calling .GetIP
	I0603 11:04:23.478750   30585 main.go:141] libmachine: (ha-683480-m03) DBG | domain ha-683480-m03 has defined MAC address 52:54:00:b4:3e:89 in network mk-ha-683480
	I0603 11:04:23.479189   30585 main.go:141] libmachine: (ha-683480-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:3e:89", ip: ""} in network mk-ha-683480: {Iface:virbr1 ExpiryTime:2024-06-03 11:59:47 +0000 UTC Type:0 Mac:52:54:00:b4:3e:89 Iaid: IPaddr:192.168.39.131 Prefix:24 Hostname:ha-683480-m03 Clientid:01:52:54:00:b4:3e:89}
	I0603 11:04:23.479215   30585 main.go:141] libmachine: (ha-683480-m03) DBG | domain ha-683480-m03 has defined IP address 192.168.39.131 and MAC address 52:54:00:b4:3e:89 in network mk-ha-683480
	I0603 11:04:23.479398   30585 host.go:66] Checking if "ha-683480-m03" exists ...
	I0603 11:04:23.479688   30585 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 11:04:23.479719   30585 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 11:04:23.494429   30585 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33015
	I0603 11:04:23.494807   30585 main.go:141] libmachine: () Calling .GetVersion
	I0603 11:04:23.495204   30585 main.go:141] libmachine: Using API Version  1
	I0603 11:04:23.495222   30585 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 11:04:23.495596   30585 main.go:141] libmachine: () Calling .GetMachineName
	I0603 11:04:23.495779   30585 main.go:141] libmachine: (ha-683480-m03) Calling .DriverName
	I0603 11:04:23.495956   30585 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0603 11:04:23.495976   30585 main.go:141] libmachine: (ha-683480-m03) Calling .GetSSHHostname
	I0603 11:04:23.498637   30585 main.go:141] libmachine: (ha-683480-m03) DBG | domain ha-683480-m03 has defined MAC address 52:54:00:b4:3e:89 in network mk-ha-683480
	I0603 11:04:23.499066   30585 main.go:141] libmachine: (ha-683480-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:3e:89", ip: ""} in network mk-ha-683480: {Iface:virbr1 ExpiryTime:2024-06-03 11:59:47 +0000 UTC Type:0 Mac:52:54:00:b4:3e:89 Iaid: IPaddr:192.168.39.131 Prefix:24 Hostname:ha-683480-m03 Clientid:01:52:54:00:b4:3e:89}
	I0603 11:04:23.499091   30585 main.go:141] libmachine: (ha-683480-m03) DBG | domain ha-683480-m03 has defined IP address 192.168.39.131 and MAC address 52:54:00:b4:3e:89 in network mk-ha-683480
	I0603 11:04:23.499219   30585 main.go:141] libmachine: (ha-683480-m03) Calling .GetSSHPort
	I0603 11:04:23.499393   30585 main.go:141] libmachine: (ha-683480-m03) Calling .GetSSHKeyPath
	I0603 11:04:23.499501   30585 main.go:141] libmachine: (ha-683480-m03) Calling .GetSSHUsername
	I0603 11:04:23.499623   30585 sshutil.go:53] new ssh client: &{IP:192.168.39.131 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19008-7755/.minikube/machines/ha-683480-m03/id_rsa Username:docker}
	I0603 11:04:23.579181   30585 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0603 11:04:23.597155   30585 kubeconfig.go:125] found "ha-683480" server: "https://192.168.39.254:8443"
	I0603 11:04:23.597191   30585 api_server.go:166] Checking apiserver status ...
	I0603 11:04:23.597231   30585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 11:04:23.610580   30585 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1522/cgroup
	W0603 11:04:23.623251   30585 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1522/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0603 11:04:23.623307   30585 ssh_runner.go:195] Run: ls
	I0603 11:04:23.630581   30585 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0603 11:04:23.637103   30585 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0603 11:04:23.637130   30585 status.go:422] ha-683480-m03 apiserver status = Running (err=<nil>)
	I0603 11:04:23.637138   30585 status.go:257] ha-683480-m03 status: &{Name:ha-683480-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0603 11:04:23.637153   30585 status.go:255] checking status of ha-683480-m04 ...
	I0603 11:04:23.637458   30585 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 11:04:23.637492   30585 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 11:04:23.653054   30585 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33715
	I0603 11:04:23.653452   30585 main.go:141] libmachine: () Calling .GetVersion
	I0603 11:04:23.653895   30585 main.go:141] libmachine: Using API Version  1
	I0603 11:04:23.653917   30585 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 11:04:23.654246   30585 main.go:141] libmachine: () Calling .GetMachineName
	I0603 11:04:23.654440   30585 main.go:141] libmachine: (ha-683480-m04) Calling .GetState
	I0603 11:04:23.655968   30585 status.go:330] ha-683480-m04 host status = "Running" (err=<nil>)
	I0603 11:04:23.655986   30585 host.go:66] Checking if "ha-683480-m04" exists ...
	I0603 11:04:23.656254   30585 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 11:04:23.656292   30585 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 11:04:23.670158   30585 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43753
	I0603 11:04:23.670539   30585 main.go:141] libmachine: () Calling .GetVersion
	I0603 11:04:23.670933   30585 main.go:141] libmachine: Using API Version  1
	I0603 11:04:23.670964   30585 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 11:04:23.671268   30585 main.go:141] libmachine: () Calling .GetMachineName
	I0603 11:04:23.671429   30585 main.go:141] libmachine: (ha-683480-m04) Calling .GetIP
	I0603 11:04:23.673896   30585 main.go:141] libmachine: (ha-683480-m04) DBG | domain ha-683480-m04 has defined MAC address 52:54:00:ed:4a:53 in network mk-ha-683480
	I0603 11:04:23.674382   30585 main.go:141] libmachine: (ha-683480-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ed:4a:53", ip: ""} in network mk-ha-683480: {Iface:virbr1 ExpiryTime:2024-06-03 12:01:12 +0000 UTC Type:0 Mac:52:54:00:ed:4a:53 Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:ha-683480-m04 Clientid:01:52:54:00:ed:4a:53}
	I0603 11:04:23.674409   30585 main.go:141] libmachine: (ha-683480-m04) DBG | domain ha-683480-m04 has defined IP address 192.168.39.206 and MAC address 52:54:00:ed:4a:53 in network mk-ha-683480
	I0603 11:04:23.674556   30585 host.go:66] Checking if "ha-683480-m04" exists ...
	I0603 11:04:23.674847   30585 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 11:04:23.674876   30585 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 11:04:23.688734   30585 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40035
	I0603 11:04:23.689123   30585 main.go:141] libmachine: () Calling .GetVersion
	I0603 11:04:23.689519   30585 main.go:141] libmachine: Using API Version  1
	I0603 11:04:23.689539   30585 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 11:04:23.689849   30585 main.go:141] libmachine: () Calling .GetMachineName
	I0603 11:04:23.690031   30585 main.go:141] libmachine: (ha-683480-m04) Calling .DriverName
	I0603 11:04:23.690215   30585 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0603 11:04:23.690236   30585 main.go:141] libmachine: (ha-683480-m04) Calling .GetSSHHostname
	I0603 11:04:23.692641   30585 main.go:141] libmachine: (ha-683480-m04) DBG | domain ha-683480-m04 has defined MAC address 52:54:00:ed:4a:53 in network mk-ha-683480
	I0603 11:04:23.693073   30585 main.go:141] libmachine: (ha-683480-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ed:4a:53", ip: ""} in network mk-ha-683480: {Iface:virbr1 ExpiryTime:2024-06-03 12:01:12 +0000 UTC Type:0 Mac:52:54:00:ed:4a:53 Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:ha-683480-m04 Clientid:01:52:54:00:ed:4a:53}
	I0603 11:04:23.693091   30585 main.go:141] libmachine: (ha-683480-m04) DBG | domain ha-683480-m04 has defined IP address 192.168.39.206 and MAC address 52:54:00:ed:4a:53 in network mk-ha-683480
	I0603 11:04:23.693216   30585 main.go:141] libmachine: (ha-683480-m04) Calling .GetSSHPort
	I0603 11:04:23.693368   30585 main.go:141] libmachine: (ha-683480-m04) Calling .GetSSHKeyPath
	I0603 11:04:23.693478   30585 main.go:141] libmachine: (ha-683480-m04) Calling .GetSSHUsername
	I0603 11:04:23.693582   30585 sshutil.go:53] new ssh client: &{IP:192.168.39.206 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19008-7755/.minikube/machines/ha-683480-m04/id_rsa Username:docker}
	I0603 11:04:23.775737   30585 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0603 11:04:23.789641   30585 status.go:257] ha-683480-m04 status: &{Name:ha-683480-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
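The stderr trace above walks through how the status command judges each control-plane node: it dials SSH, checks kubelet with systemctl, looks up the kube-apiserver process, and finally issues a GET against /healthz on the load-balanced endpoint https://192.168.39.254:8443, treating "200: ok" as healthy and an SSH "no route to host" as a host error (as seen for ha-683480-m02). A minimal sketch of that final healthz probe follows; skipping TLS verification here is an assumption made for brevity, whereas minikube itself trusts the cluster CA:

	// Sketch: probe the same healthz endpoint checked in the trace above.
	// TLS verification is skipped for illustration only.
	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{
			Timeout:   5 * time.Second,
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		resp, err := client.Get("https://192.168.39.254:8443/healthz")
		if err != nil {
			fmt.Println("healthz probe failed:", err)
			return
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		// The report logs "returned 200: ok" when the apiserver is healthy.
		fmt.Printf("%d: %s\n", resp.StatusCode, body)
	}
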
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-683480 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-683480 status -v=7 --alsologtostderr: exit status 3 (5.195352378s)

                                                
                                                
-- stdout --
	ha-683480
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-683480-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-683480-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-683480-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0603 11:04:24.780674   30670 out.go:291] Setting OutFile to fd 1 ...
	I0603 11:04:24.780934   30670 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0603 11:04:24.780944   30670 out.go:304] Setting ErrFile to fd 2...
	I0603 11:04:24.780948   30670 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0603 11:04:24.781108   30670 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19008-7755/.minikube/bin
	I0603 11:04:24.781273   30670 out.go:298] Setting JSON to false
	I0603 11:04:24.781293   30670 mustload.go:65] Loading cluster: ha-683480
	I0603 11:04:24.781411   30670 notify.go:220] Checking for updates...
	I0603 11:04:24.781658   30670 config.go:182] Loaded profile config "ha-683480": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0603 11:04:24.781681   30670 status.go:255] checking status of ha-683480 ...
	I0603 11:04:24.782070   30670 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 11:04:24.782133   30670 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 11:04:24.798718   30670 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38605
	I0603 11:04:24.799158   30670 main.go:141] libmachine: () Calling .GetVersion
	I0603 11:04:24.799767   30670 main.go:141] libmachine: Using API Version  1
	I0603 11:04:24.799785   30670 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 11:04:24.800140   30670 main.go:141] libmachine: () Calling .GetMachineName
	I0603 11:04:24.800320   30670 main.go:141] libmachine: (ha-683480) Calling .GetState
	I0603 11:04:24.802051   30670 status.go:330] ha-683480 host status = "Running" (err=<nil>)
	I0603 11:04:24.802065   30670 host.go:66] Checking if "ha-683480" exists ...
	I0603 11:04:24.802394   30670 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 11:04:24.802426   30670 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 11:04:24.817972   30670 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40207
	I0603 11:04:24.818287   30670 main.go:141] libmachine: () Calling .GetVersion
	I0603 11:04:24.818706   30670 main.go:141] libmachine: Using API Version  1
	I0603 11:04:24.818727   30670 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 11:04:24.818970   30670 main.go:141] libmachine: () Calling .GetMachineName
	I0603 11:04:24.819181   30670 main.go:141] libmachine: (ha-683480) Calling .GetIP
	I0603 11:04:24.822006   30670 main.go:141] libmachine: (ha-683480) DBG | domain ha-683480 has defined MAC address 52:54:00:e5:3f:6a in network mk-ha-683480
	I0603 11:04:24.822461   30670 main.go:141] libmachine: (ha-683480) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:3f:6a", ip: ""} in network mk-ha-683480: {Iface:virbr1 ExpiryTime:2024-06-03 11:56:28 +0000 UTC Type:0 Mac:52:54:00:e5:3f:6a Iaid: IPaddr:192.168.39.116 Prefix:24 Hostname:ha-683480 Clientid:01:52:54:00:e5:3f:6a}
	I0603 11:04:24.822479   30670 main.go:141] libmachine: (ha-683480) DBG | domain ha-683480 has defined IP address 192.168.39.116 and MAC address 52:54:00:e5:3f:6a in network mk-ha-683480
	I0603 11:04:24.822621   30670 host.go:66] Checking if "ha-683480" exists ...
	I0603 11:04:24.822989   30670 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 11:04:24.823025   30670 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 11:04:24.837781   30670 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41043
	I0603 11:04:24.838169   30670 main.go:141] libmachine: () Calling .GetVersion
	I0603 11:04:24.838577   30670 main.go:141] libmachine: Using API Version  1
	I0603 11:04:24.838596   30670 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 11:04:24.838904   30670 main.go:141] libmachine: () Calling .GetMachineName
	I0603 11:04:24.839062   30670 main.go:141] libmachine: (ha-683480) Calling .DriverName
	I0603 11:04:24.839298   30670 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0603 11:04:24.839317   30670 main.go:141] libmachine: (ha-683480) Calling .GetSSHHostname
	I0603 11:04:24.841834   30670 main.go:141] libmachine: (ha-683480) DBG | domain ha-683480 has defined MAC address 52:54:00:e5:3f:6a in network mk-ha-683480
	I0603 11:04:24.842232   30670 main.go:141] libmachine: (ha-683480) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:3f:6a", ip: ""} in network mk-ha-683480: {Iface:virbr1 ExpiryTime:2024-06-03 11:56:28 +0000 UTC Type:0 Mac:52:54:00:e5:3f:6a Iaid: IPaddr:192.168.39.116 Prefix:24 Hostname:ha-683480 Clientid:01:52:54:00:e5:3f:6a}
	I0603 11:04:24.842261   30670 main.go:141] libmachine: (ha-683480) DBG | domain ha-683480 has defined IP address 192.168.39.116 and MAC address 52:54:00:e5:3f:6a in network mk-ha-683480
	I0603 11:04:24.842425   30670 main.go:141] libmachine: (ha-683480) Calling .GetSSHPort
	I0603 11:04:24.842626   30670 main.go:141] libmachine: (ha-683480) Calling .GetSSHKeyPath
	I0603 11:04:24.842776   30670 main.go:141] libmachine: (ha-683480) Calling .GetSSHUsername
	I0603 11:04:24.842914   30670 sshutil.go:53] new ssh client: &{IP:192.168.39.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19008-7755/.minikube/machines/ha-683480/id_rsa Username:docker}
	I0603 11:04:24.932645   30670 ssh_runner.go:195] Run: systemctl --version
	I0603 11:04:24.939820   30670 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0603 11:04:24.959201   30670 kubeconfig.go:125] found "ha-683480" server: "https://192.168.39.254:8443"
	I0603 11:04:24.959243   30670 api_server.go:166] Checking apiserver status ...
	I0603 11:04:24.959287   30670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 11:04:24.980287   30670 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1163/cgroup
	W0603 11:04:24.992446   30670 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1163/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0603 11:04:24.992486   30670 ssh_runner.go:195] Run: ls
	I0603 11:04:24.997237   30670 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0603 11:04:25.001363   30670 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0603 11:04:25.001382   30670 status.go:422] ha-683480 apiserver status = Running (err=<nil>)
	I0603 11:04:25.001395   30670 status.go:257] ha-683480 status: &{Name:ha-683480 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0603 11:04:25.001414   30670 status.go:255] checking status of ha-683480-m02 ...
	I0603 11:04:25.001796   30670 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 11:04:25.001834   30670 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 11:04:25.017035   30670 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35795
	I0603 11:04:25.017470   30670 main.go:141] libmachine: () Calling .GetVersion
	I0603 11:04:25.017905   30670 main.go:141] libmachine: Using API Version  1
	I0603 11:04:25.017929   30670 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 11:04:25.018273   30670 main.go:141] libmachine: () Calling .GetMachineName
	I0603 11:04:25.018464   30670 main.go:141] libmachine: (ha-683480-m02) Calling .GetState
	I0603 11:04:25.020058   30670 status.go:330] ha-683480-m02 host status = "Running" (err=<nil>)
	I0603 11:04:25.020075   30670 host.go:66] Checking if "ha-683480-m02" exists ...
	I0603 11:04:25.020363   30670 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 11:04:25.020394   30670 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 11:04:25.034655   30670 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33429
	I0603 11:04:25.035004   30670 main.go:141] libmachine: () Calling .GetVersion
	I0603 11:04:25.035489   30670 main.go:141] libmachine: Using API Version  1
	I0603 11:04:25.035510   30670 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 11:04:25.035817   30670 main.go:141] libmachine: () Calling .GetMachineName
	I0603 11:04:25.035999   30670 main.go:141] libmachine: (ha-683480-m02) Calling .GetIP
	I0603 11:04:25.038738   30670 main.go:141] libmachine: (ha-683480-m02) DBG | domain ha-683480-m02 has defined MAC address 52:54:00:00:55:50 in network mk-ha-683480
	I0603 11:04:25.039189   30670 main.go:141] libmachine: (ha-683480-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:55:50", ip: ""} in network mk-ha-683480: {Iface:virbr1 ExpiryTime:2024-06-03 11:57:29 +0000 UTC Type:0 Mac:52:54:00:00:55:50 Iaid: IPaddr:192.168.39.127 Prefix:24 Hostname:ha-683480-m02 Clientid:01:52:54:00:00:55:50}
	I0603 11:04:25.039227   30670 main.go:141] libmachine: (ha-683480-m02) DBG | domain ha-683480-m02 has defined IP address 192.168.39.127 and MAC address 52:54:00:00:55:50 in network mk-ha-683480
	I0603 11:04:25.039357   30670 host.go:66] Checking if "ha-683480-m02" exists ...
	I0603 11:04:25.039703   30670 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 11:04:25.039738   30670 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 11:04:25.053871   30670 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34897
	I0603 11:04:25.054270   30670 main.go:141] libmachine: () Calling .GetVersion
	I0603 11:04:25.054686   30670 main.go:141] libmachine: Using API Version  1
	I0603 11:04:25.054709   30670 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 11:04:25.054955   30670 main.go:141] libmachine: () Calling .GetMachineName
	I0603 11:04:25.055173   30670 main.go:141] libmachine: (ha-683480-m02) Calling .DriverName
	I0603 11:04:25.055373   30670 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0603 11:04:25.055392   30670 main.go:141] libmachine: (ha-683480-m02) Calling .GetSSHHostname
	I0603 11:04:25.058018   30670 main.go:141] libmachine: (ha-683480-m02) DBG | domain ha-683480-m02 has defined MAC address 52:54:00:00:55:50 in network mk-ha-683480
	I0603 11:04:25.058471   30670 main.go:141] libmachine: (ha-683480-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:55:50", ip: ""} in network mk-ha-683480: {Iface:virbr1 ExpiryTime:2024-06-03 11:57:29 +0000 UTC Type:0 Mac:52:54:00:00:55:50 Iaid: IPaddr:192.168.39.127 Prefix:24 Hostname:ha-683480-m02 Clientid:01:52:54:00:00:55:50}
	I0603 11:04:25.058519   30670 main.go:141] libmachine: (ha-683480-m02) DBG | domain ha-683480-m02 has defined IP address 192.168.39.127 and MAC address 52:54:00:00:55:50 in network mk-ha-683480
	I0603 11:04:25.058612   30670 main.go:141] libmachine: (ha-683480-m02) Calling .GetSSHPort
	I0603 11:04:25.058797   30670 main.go:141] libmachine: (ha-683480-m02) Calling .GetSSHKeyPath
	I0603 11:04:25.059005   30670 main.go:141] libmachine: (ha-683480-m02) Calling .GetSSHUsername
	I0603 11:04:25.059175   30670 sshutil.go:53] new ssh client: &{IP:192.168.39.127 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19008-7755/.minikube/machines/ha-683480-m02/id_rsa Username:docker}
	W0603 11:04:26.515328   30670 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.127:22: connect: no route to host
	I0603 11:04:26.515405   30670 retry.go:31] will retry after 258.561404ms: dial tcp 192.168.39.127:22: connect: no route to host
	W0603 11:04:29.583378   30670 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.127:22: connect: no route to host
	W0603 11:04:29.583473   30670 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.127:22: connect: no route to host
	E0603 11:04:29.583495   30670 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.127:22: connect: no route to host
	I0603 11:04:29.583505   30670 status.go:257] ha-683480-m02 status: &{Name:ha-683480-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0603 11:04:29.583531   30670 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.127:22: connect: no route to host
	I0603 11:04:29.583544   30670 status.go:255] checking status of ha-683480-m03 ...
	I0603 11:04:29.583872   30670 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 11:04:29.583931   30670 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 11:04:29.598711   30670 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44925
	I0603 11:04:29.599193   30670 main.go:141] libmachine: () Calling .GetVersion
	I0603 11:04:29.599710   30670 main.go:141] libmachine: Using API Version  1
	I0603 11:04:29.599735   30670 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 11:04:29.600103   30670 main.go:141] libmachine: () Calling .GetMachineName
	I0603 11:04:29.600314   30670 main.go:141] libmachine: (ha-683480-m03) Calling .GetState
	I0603 11:04:29.601950   30670 status.go:330] ha-683480-m03 host status = "Running" (err=<nil>)
	I0603 11:04:29.601965   30670 host.go:66] Checking if "ha-683480-m03" exists ...
	I0603 11:04:29.602249   30670 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 11:04:29.602279   30670 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 11:04:29.616766   30670 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44115
	I0603 11:04:29.617364   30670 main.go:141] libmachine: () Calling .GetVersion
	I0603 11:04:29.617871   30670 main.go:141] libmachine: Using API Version  1
	I0603 11:04:29.617907   30670 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 11:04:29.618296   30670 main.go:141] libmachine: () Calling .GetMachineName
	I0603 11:04:29.618532   30670 main.go:141] libmachine: (ha-683480-m03) Calling .GetIP
	I0603 11:04:29.621890   30670 main.go:141] libmachine: (ha-683480-m03) DBG | domain ha-683480-m03 has defined MAC address 52:54:00:b4:3e:89 in network mk-ha-683480
	I0603 11:04:29.622409   30670 main.go:141] libmachine: (ha-683480-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:3e:89", ip: ""} in network mk-ha-683480: {Iface:virbr1 ExpiryTime:2024-06-03 11:59:47 +0000 UTC Type:0 Mac:52:54:00:b4:3e:89 Iaid: IPaddr:192.168.39.131 Prefix:24 Hostname:ha-683480-m03 Clientid:01:52:54:00:b4:3e:89}
	I0603 11:04:29.622457   30670 main.go:141] libmachine: (ha-683480-m03) DBG | domain ha-683480-m03 has defined IP address 192.168.39.131 and MAC address 52:54:00:b4:3e:89 in network mk-ha-683480
	I0603 11:04:29.622718   30670 host.go:66] Checking if "ha-683480-m03" exists ...
	I0603 11:04:29.623122   30670 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 11:04:29.623167   30670 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 11:04:29.638594   30670 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37419
	I0603 11:04:29.639093   30670 main.go:141] libmachine: () Calling .GetVersion
	I0603 11:04:29.639533   30670 main.go:141] libmachine: Using API Version  1
	I0603 11:04:29.639553   30670 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 11:04:29.639826   30670 main.go:141] libmachine: () Calling .GetMachineName
	I0603 11:04:29.640024   30670 main.go:141] libmachine: (ha-683480-m03) Calling .DriverName
	I0603 11:04:29.640221   30670 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0603 11:04:29.640243   30670 main.go:141] libmachine: (ha-683480-m03) Calling .GetSSHHostname
	I0603 11:04:29.643673   30670 main.go:141] libmachine: (ha-683480-m03) DBG | domain ha-683480-m03 has defined MAC address 52:54:00:b4:3e:89 in network mk-ha-683480
	I0603 11:04:29.644087   30670 main.go:141] libmachine: (ha-683480-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:3e:89", ip: ""} in network mk-ha-683480: {Iface:virbr1 ExpiryTime:2024-06-03 11:59:47 +0000 UTC Type:0 Mac:52:54:00:b4:3e:89 Iaid: IPaddr:192.168.39.131 Prefix:24 Hostname:ha-683480-m03 Clientid:01:52:54:00:b4:3e:89}
	I0603 11:04:29.644118   30670 main.go:141] libmachine: (ha-683480-m03) DBG | domain ha-683480-m03 has defined IP address 192.168.39.131 and MAC address 52:54:00:b4:3e:89 in network mk-ha-683480
	I0603 11:04:29.644370   30670 main.go:141] libmachine: (ha-683480-m03) Calling .GetSSHPort
	I0603 11:04:29.644540   30670 main.go:141] libmachine: (ha-683480-m03) Calling .GetSSHKeyPath
	I0603 11:04:29.644701   30670 main.go:141] libmachine: (ha-683480-m03) Calling .GetSSHUsername
	I0603 11:04:29.644837   30670 sshutil.go:53] new ssh client: &{IP:192.168.39.131 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19008-7755/.minikube/machines/ha-683480-m03/id_rsa Username:docker}
	I0603 11:04:29.727634   30670 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0603 11:04:29.742943   30670 kubeconfig.go:125] found "ha-683480" server: "https://192.168.39.254:8443"
	I0603 11:04:29.742964   30670 api_server.go:166] Checking apiserver status ...
	I0603 11:04:29.742991   30670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 11:04:29.758137   30670 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1522/cgroup
	W0603 11:04:29.768428   30670 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1522/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0603 11:04:29.768483   30670 ssh_runner.go:195] Run: ls
	I0603 11:04:29.773154   30670 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0603 11:04:29.777667   30670 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0603 11:04:29.777686   30670 status.go:422] ha-683480-m03 apiserver status = Running (err=<nil>)
	I0603 11:04:29.777694   30670 status.go:257] ha-683480-m03 status: &{Name:ha-683480-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0603 11:04:29.777708   30670 status.go:255] checking status of ha-683480-m04 ...
	I0603 11:04:29.777979   30670 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 11:04:29.778008   30670 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 11:04:29.792802   30670 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43153
	I0603 11:04:29.793194   30670 main.go:141] libmachine: () Calling .GetVersion
	I0603 11:04:29.793626   30670 main.go:141] libmachine: Using API Version  1
	I0603 11:04:29.793645   30670 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 11:04:29.793920   30670 main.go:141] libmachine: () Calling .GetMachineName
	I0603 11:04:29.794089   30670 main.go:141] libmachine: (ha-683480-m04) Calling .GetState
	I0603 11:04:29.795619   30670 status.go:330] ha-683480-m04 host status = "Running" (err=<nil>)
	I0603 11:04:29.795634   30670 host.go:66] Checking if "ha-683480-m04" exists ...
	I0603 11:04:29.795895   30670 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 11:04:29.795926   30670 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 11:04:29.812563   30670 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38801
	I0603 11:04:29.812995   30670 main.go:141] libmachine: () Calling .GetVersion
	I0603 11:04:29.813496   30670 main.go:141] libmachine: Using API Version  1
	I0603 11:04:29.813520   30670 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 11:04:29.813854   30670 main.go:141] libmachine: () Calling .GetMachineName
	I0603 11:04:29.814144   30670 main.go:141] libmachine: (ha-683480-m04) Calling .GetIP
	I0603 11:04:29.817220   30670 main.go:141] libmachine: (ha-683480-m04) DBG | domain ha-683480-m04 has defined MAC address 52:54:00:ed:4a:53 in network mk-ha-683480
	I0603 11:04:29.817633   30670 main.go:141] libmachine: (ha-683480-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ed:4a:53", ip: ""} in network mk-ha-683480: {Iface:virbr1 ExpiryTime:2024-06-03 12:01:12 +0000 UTC Type:0 Mac:52:54:00:ed:4a:53 Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:ha-683480-m04 Clientid:01:52:54:00:ed:4a:53}
	I0603 11:04:29.817661   30670 main.go:141] libmachine: (ha-683480-m04) DBG | domain ha-683480-m04 has defined IP address 192.168.39.206 and MAC address 52:54:00:ed:4a:53 in network mk-ha-683480
	I0603 11:04:29.817800   30670 host.go:66] Checking if "ha-683480-m04" exists ...
	I0603 11:04:29.818077   30670 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 11:04:29.818116   30670 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 11:04:29.833922   30670 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34791
	I0603 11:04:29.834362   30670 main.go:141] libmachine: () Calling .GetVersion
	I0603 11:04:29.834833   30670 main.go:141] libmachine: Using API Version  1
	I0603 11:04:29.834855   30670 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 11:04:29.835189   30670 main.go:141] libmachine: () Calling .GetMachineName
	I0603 11:04:29.835366   30670 main.go:141] libmachine: (ha-683480-m04) Calling .DriverName
	I0603 11:04:29.835540   30670 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0603 11:04:29.835557   30670 main.go:141] libmachine: (ha-683480-m04) Calling .GetSSHHostname
	I0603 11:04:29.838455   30670 main.go:141] libmachine: (ha-683480-m04) DBG | domain ha-683480-m04 has defined MAC address 52:54:00:ed:4a:53 in network mk-ha-683480
	I0603 11:04:29.838940   30670 main.go:141] libmachine: (ha-683480-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ed:4a:53", ip: ""} in network mk-ha-683480: {Iface:virbr1 ExpiryTime:2024-06-03 12:01:12 +0000 UTC Type:0 Mac:52:54:00:ed:4a:53 Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:ha-683480-m04 Clientid:01:52:54:00:ed:4a:53}
	I0603 11:04:29.838976   30670 main.go:141] libmachine: (ha-683480-m04) DBG | domain ha-683480-m04 has defined IP address 192.168.39.206 and MAC address 52:54:00:ed:4a:53 in network mk-ha-683480
	I0603 11:04:29.839183   30670 main.go:141] libmachine: (ha-683480-m04) Calling .GetSSHPort
	I0603 11:04:29.839365   30670 main.go:141] libmachine: (ha-683480-m04) Calling .GetSSHKeyPath
	I0603 11:04:29.839556   30670 main.go:141] libmachine: (ha-683480-m04) Calling .GetSSHUsername
	I0603 11:04:29.839720   30670 sshutil.go:53] new ssh client: &{IP:192.168.39.206 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19008-7755/.minikube/machines/ha-683480-m04/id_rsa Username:docker}
	I0603 11:04:29.918696   30670 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0603 11:04:29.935019   30670 status.go:257] ha-683480-m04 status: &{Name:ha-683480-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
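The per-node probes in the stderr block above are the checks the status command issues over SSH on every host: a disk-usage read of /var and a kubelet liveness check. As an illustrative manual reproduction only (assuming direct SSH access to the node IPs shown in the log; this is not part of the test harness), the same commands can be run on a node:

	# storage check run on each host, copied from the ssh_runner lines above
	df -h /var | awk 'NR==2{print $5}'
	# kubelet liveness check, also as issued in the log
	sudo systemctl is-active --quiet service kubelet && echo "kubelet: Running"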
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-683480 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-683480 status -v=7 --alsologtostderr: exit status 3 (4.717685531s)

                                                
                                                
-- stdout --
	ha-683480
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-683480-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-683480-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-683480-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0603 11:04:31.524999   30786 out.go:291] Setting OutFile to fd 1 ...
	I0603 11:04:31.525222   30786 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0603 11:04:31.525232   30786 out.go:304] Setting ErrFile to fd 2...
	I0603 11:04:31.525236   30786 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0603 11:04:31.525399   30786 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19008-7755/.minikube/bin
	I0603 11:04:31.525543   30786 out.go:298] Setting JSON to false
	I0603 11:04:31.525563   30786 mustload.go:65] Loading cluster: ha-683480
	I0603 11:04:31.525597   30786 notify.go:220] Checking for updates...
	I0603 11:04:31.525918   30786 config.go:182] Loaded profile config "ha-683480": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0603 11:04:31.525931   30786 status.go:255] checking status of ha-683480 ...
	I0603 11:04:31.526291   30786 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 11:04:31.526368   30786 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 11:04:31.545578   30786 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37487
	I0603 11:04:31.546051   30786 main.go:141] libmachine: () Calling .GetVersion
	I0603 11:04:31.546746   30786 main.go:141] libmachine: Using API Version  1
	I0603 11:04:31.546808   30786 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 11:04:31.547222   30786 main.go:141] libmachine: () Calling .GetMachineName
	I0603 11:04:31.547444   30786 main.go:141] libmachine: (ha-683480) Calling .GetState
	I0603 11:04:31.549282   30786 status.go:330] ha-683480 host status = "Running" (err=<nil>)
	I0603 11:04:31.549301   30786 host.go:66] Checking if "ha-683480" exists ...
	I0603 11:04:31.549701   30786 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 11:04:31.549748   30786 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 11:04:31.564672   30786 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45171
	I0603 11:04:31.565049   30786 main.go:141] libmachine: () Calling .GetVersion
	I0603 11:04:31.565444   30786 main.go:141] libmachine: Using API Version  1
	I0603 11:04:31.565464   30786 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 11:04:31.565780   30786 main.go:141] libmachine: () Calling .GetMachineName
	I0603 11:04:31.565988   30786 main.go:141] libmachine: (ha-683480) Calling .GetIP
	I0603 11:04:31.568475   30786 main.go:141] libmachine: (ha-683480) DBG | domain ha-683480 has defined MAC address 52:54:00:e5:3f:6a in network mk-ha-683480
	I0603 11:04:31.568928   30786 main.go:141] libmachine: (ha-683480) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:3f:6a", ip: ""} in network mk-ha-683480: {Iface:virbr1 ExpiryTime:2024-06-03 11:56:28 +0000 UTC Type:0 Mac:52:54:00:e5:3f:6a Iaid: IPaddr:192.168.39.116 Prefix:24 Hostname:ha-683480 Clientid:01:52:54:00:e5:3f:6a}
	I0603 11:04:31.568959   30786 main.go:141] libmachine: (ha-683480) DBG | domain ha-683480 has defined IP address 192.168.39.116 and MAC address 52:54:00:e5:3f:6a in network mk-ha-683480
	I0603 11:04:31.569093   30786 host.go:66] Checking if "ha-683480" exists ...
	I0603 11:04:31.569434   30786 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 11:04:31.569465   30786 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 11:04:31.583491   30786 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34501
	I0603 11:04:31.583855   30786 main.go:141] libmachine: () Calling .GetVersion
	I0603 11:04:31.584335   30786 main.go:141] libmachine: Using API Version  1
	I0603 11:04:31.584367   30786 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 11:04:31.584677   30786 main.go:141] libmachine: () Calling .GetMachineName
	I0603 11:04:31.584856   30786 main.go:141] libmachine: (ha-683480) Calling .DriverName
	I0603 11:04:31.585038   30786 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0603 11:04:31.585064   30786 main.go:141] libmachine: (ha-683480) Calling .GetSSHHostname
	I0603 11:04:31.587895   30786 main.go:141] libmachine: (ha-683480) DBG | domain ha-683480 has defined MAC address 52:54:00:e5:3f:6a in network mk-ha-683480
	I0603 11:04:31.588255   30786 main.go:141] libmachine: (ha-683480) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:3f:6a", ip: ""} in network mk-ha-683480: {Iface:virbr1 ExpiryTime:2024-06-03 11:56:28 +0000 UTC Type:0 Mac:52:54:00:e5:3f:6a Iaid: IPaddr:192.168.39.116 Prefix:24 Hostname:ha-683480 Clientid:01:52:54:00:e5:3f:6a}
	I0603 11:04:31.588297   30786 main.go:141] libmachine: (ha-683480) DBG | domain ha-683480 has defined IP address 192.168.39.116 and MAC address 52:54:00:e5:3f:6a in network mk-ha-683480
	I0603 11:04:31.588405   30786 main.go:141] libmachine: (ha-683480) Calling .GetSSHPort
	I0603 11:04:31.588580   30786 main.go:141] libmachine: (ha-683480) Calling .GetSSHKeyPath
	I0603 11:04:31.588745   30786 main.go:141] libmachine: (ha-683480) Calling .GetSSHUsername
	I0603 11:04:31.588891   30786 sshutil.go:53] new ssh client: &{IP:192.168.39.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19008-7755/.minikube/machines/ha-683480/id_rsa Username:docker}
	I0603 11:04:31.666895   30786 ssh_runner.go:195] Run: systemctl --version
	I0603 11:04:31.673294   30786 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0603 11:04:31.688606   30786 kubeconfig.go:125] found "ha-683480" server: "https://192.168.39.254:8443"
	I0603 11:04:31.688646   30786 api_server.go:166] Checking apiserver status ...
	I0603 11:04:31.688693   30786 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 11:04:31.701786   30786 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1163/cgroup
	W0603 11:04:31.714454   30786 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1163/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0603 11:04:31.714492   30786 ssh_runner.go:195] Run: ls
	I0603 11:04:31.719454   30786 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0603 11:04:31.723976   30786 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0603 11:04:31.723996   30786 status.go:422] ha-683480 apiserver status = Running (err=<nil>)
	I0603 11:04:31.724010   30786 status.go:257] ha-683480 status: &{Name:ha-683480 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0603 11:04:31.724032   30786 status.go:255] checking status of ha-683480-m02 ...
	I0603 11:04:31.724445   30786 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 11:04:31.724486   30786 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 11:04:31.740356   30786 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45541
	I0603 11:04:31.740724   30786 main.go:141] libmachine: () Calling .GetVersion
	I0603 11:04:31.741141   30786 main.go:141] libmachine: Using API Version  1
	I0603 11:04:31.741161   30786 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 11:04:31.741497   30786 main.go:141] libmachine: () Calling .GetMachineName
	I0603 11:04:31.741721   30786 main.go:141] libmachine: (ha-683480-m02) Calling .GetState
	I0603 11:04:31.743262   30786 status.go:330] ha-683480-m02 host status = "Running" (err=<nil>)
	I0603 11:04:31.743275   30786 host.go:66] Checking if "ha-683480-m02" exists ...
	I0603 11:04:31.743531   30786 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 11:04:31.743560   30786 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 11:04:31.757477   30786 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34683
	I0603 11:04:31.757831   30786 main.go:141] libmachine: () Calling .GetVersion
	I0603 11:04:31.758298   30786 main.go:141] libmachine: Using API Version  1
	I0603 11:04:31.758317   30786 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 11:04:31.758641   30786 main.go:141] libmachine: () Calling .GetMachineName
	I0603 11:04:31.758819   30786 main.go:141] libmachine: (ha-683480-m02) Calling .GetIP
	I0603 11:04:31.761553   30786 main.go:141] libmachine: (ha-683480-m02) DBG | domain ha-683480-m02 has defined MAC address 52:54:00:00:55:50 in network mk-ha-683480
	I0603 11:04:31.761896   30786 main.go:141] libmachine: (ha-683480-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:55:50", ip: ""} in network mk-ha-683480: {Iface:virbr1 ExpiryTime:2024-06-03 11:57:29 +0000 UTC Type:0 Mac:52:54:00:00:55:50 Iaid: IPaddr:192.168.39.127 Prefix:24 Hostname:ha-683480-m02 Clientid:01:52:54:00:00:55:50}
	I0603 11:04:31.761923   30786 main.go:141] libmachine: (ha-683480-m02) DBG | domain ha-683480-m02 has defined IP address 192.168.39.127 and MAC address 52:54:00:00:55:50 in network mk-ha-683480
	I0603 11:04:31.762028   30786 host.go:66] Checking if "ha-683480-m02" exists ...
	I0603 11:04:31.762358   30786 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 11:04:31.762396   30786 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 11:04:31.778516   30786 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39783
	I0603 11:04:31.778893   30786 main.go:141] libmachine: () Calling .GetVersion
	I0603 11:04:31.779395   30786 main.go:141] libmachine: Using API Version  1
	I0603 11:04:31.779415   30786 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 11:04:31.779719   30786 main.go:141] libmachine: () Calling .GetMachineName
	I0603 11:04:31.779905   30786 main.go:141] libmachine: (ha-683480-m02) Calling .DriverName
	I0603 11:04:31.780085   30786 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0603 11:04:31.780108   30786 main.go:141] libmachine: (ha-683480-m02) Calling .GetSSHHostname
	I0603 11:04:31.782662   30786 main.go:141] libmachine: (ha-683480-m02) DBG | domain ha-683480-m02 has defined MAC address 52:54:00:00:55:50 in network mk-ha-683480
	I0603 11:04:31.783141   30786 main.go:141] libmachine: (ha-683480-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:55:50", ip: ""} in network mk-ha-683480: {Iface:virbr1 ExpiryTime:2024-06-03 11:57:29 +0000 UTC Type:0 Mac:52:54:00:00:55:50 Iaid: IPaddr:192.168.39.127 Prefix:24 Hostname:ha-683480-m02 Clientid:01:52:54:00:00:55:50}
	I0603 11:04:31.783168   30786 main.go:141] libmachine: (ha-683480-m02) DBG | domain ha-683480-m02 has defined IP address 192.168.39.127 and MAC address 52:54:00:00:55:50 in network mk-ha-683480
	I0603 11:04:31.783293   30786 main.go:141] libmachine: (ha-683480-m02) Calling .GetSSHPort
	I0603 11:04:31.783428   30786 main.go:141] libmachine: (ha-683480-m02) Calling .GetSSHKeyPath
	I0603 11:04:31.783589   30786 main.go:141] libmachine: (ha-683480-m02) Calling .GetSSHUsername
	I0603 11:04:31.783717   30786 sshutil.go:53] new ssh client: &{IP:192.168.39.127 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19008-7755/.minikube/machines/ha-683480-m02/id_rsa Username:docker}
	W0603 11:04:32.655314   30786 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.127:22: connect: no route to host
	I0603 11:04:32.655363   30786 retry.go:31] will retry after 134.039923ms: dial tcp 192.168.39.127:22: connect: no route to host
	W0603 11:04:35.855296   30786 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.127:22: connect: no route to host
	W0603 11:04:35.855454   30786 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.127:22: connect: no route to host
	E0603 11:04:35.855507   30786 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.127:22: connect: no route to host
	I0603 11:04:35.855518   30786 status.go:257] ha-683480-m02 status: &{Name:ha-683480-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0603 11:04:35.855543   30786 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.127:22: connect: no route to host
	I0603 11:04:35.855553   30786 status.go:255] checking status of ha-683480-m03 ...
	I0603 11:04:35.855900   30786 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 11:04:35.855940   30786 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 11:04:35.871310   30786 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39391
	I0603 11:04:35.871731   30786 main.go:141] libmachine: () Calling .GetVersion
	I0603 11:04:35.872165   30786 main.go:141] libmachine: Using API Version  1
	I0603 11:04:35.872185   30786 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 11:04:35.872502   30786 main.go:141] libmachine: () Calling .GetMachineName
	I0603 11:04:35.872721   30786 main.go:141] libmachine: (ha-683480-m03) Calling .GetState
	I0603 11:04:35.874431   30786 status.go:330] ha-683480-m03 host status = "Running" (err=<nil>)
	I0603 11:04:35.874454   30786 host.go:66] Checking if "ha-683480-m03" exists ...
	I0603 11:04:35.874835   30786 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 11:04:35.874873   30786 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 11:04:35.889752   30786 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45779
	I0603 11:04:35.890209   30786 main.go:141] libmachine: () Calling .GetVersion
	I0603 11:04:35.890679   30786 main.go:141] libmachine: Using API Version  1
	I0603 11:04:35.890698   30786 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 11:04:35.891016   30786 main.go:141] libmachine: () Calling .GetMachineName
	I0603 11:04:35.891251   30786 main.go:141] libmachine: (ha-683480-m03) Calling .GetIP
	I0603 11:04:35.894250   30786 main.go:141] libmachine: (ha-683480-m03) DBG | domain ha-683480-m03 has defined MAC address 52:54:00:b4:3e:89 in network mk-ha-683480
	I0603 11:04:35.894697   30786 main.go:141] libmachine: (ha-683480-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:3e:89", ip: ""} in network mk-ha-683480: {Iface:virbr1 ExpiryTime:2024-06-03 11:59:47 +0000 UTC Type:0 Mac:52:54:00:b4:3e:89 Iaid: IPaddr:192.168.39.131 Prefix:24 Hostname:ha-683480-m03 Clientid:01:52:54:00:b4:3e:89}
	I0603 11:04:35.894724   30786 main.go:141] libmachine: (ha-683480-m03) DBG | domain ha-683480-m03 has defined IP address 192.168.39.131 and MAC address 52:54:00:b4:3e:89 in network mk-ha-683480
	I0603 11:04:35.894876   30786 host.go:66] Checking if "ha-683480-m03" exists ...
	I0603 11:04:35.895242   30786 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 11:04:35.895283   30786 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 11:04:35.910926   30786 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41703
	I0603 11:04:35.911318   30786 main.go:141] libmachine: () Calling .GetVersion
	I0603 11:04:35.911768   30786 main.go:141] libmachine: Using API Version  1
	I0603 11:04:35.911788   30786 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 11:04:35.912042   30786 main.go:141] libmachine: () Calling .GetMachineName
	I0603 11:04:35.912223   30786 main.go:141] libmachine: (ha-683480-m03) Calling .DriverName
	I0603 11:04:35.912395   30786 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0603 11:04:35.912417   30786 main.go:141] libmachine: (ha-683480-m03) Calling .GetSSHHostname
	I0603 11:04:35.915428   30786 main.go:141] libmachine: (ha-683480-m03) DBG | domain ha-683480-m03 has defined MAC address 52:54:00:b4:3e:89 in network mk-ha-683480
	I0603 11:04:35.915882   30786 main.go:141] libmachine: (ha-683480-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:3e:89", ip: ""} in network mk-ha-683480: {Iface:virbr1 ExpiryTime:2024-06-03 11:59:47 +0000 UTC Type:0 Mac:52:54:00:b4:3e:89 Iaid: IPaddr:192.168.39.131 Prefix:24 Hostname:ha-683480-m03 Clientid:01:52:54:00:b4:3e:89}
	I0603 11:04:35.915906   30786 main.go:141] libmachine: (ha-683480-m03) DBG | domain ha-683480-m03 has defined IP address 192.168.39.131 and MAC address 52:54:00:b4:3e:89 in network mk-ha-683480
	I0603 11:04:35.916020   30786 main.go:141] libmachine: (ha-683480-m03) Calling .GetSSHPort
	I0603 11:04:35.916187   30786 main.go:141] libmachine: (ha-683480-m03) Calling .GetSSHKeyPath
	I0603 11:04:35.916382   30786 main.go:141] libmachine: (ha-683480-m03) Calling .GetSSHUsername
	I0603 11:04:35.916535   30786 sshutil.go:53] new ssh client: &{IP:192.168.39.131 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19008-7755/.minikube/machines/ha-683480-m03/id_rsa Username:docker}
	I0603 11:04:35.998566   30786 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0603 11:04:36.012777   30786 kubeconfig.go:125] found "ha-683480" server: "https://192.168.39.254:8443"
	I0603 11:04:36.012809   30786 api_server.go:166] Checking apiserver status ...
	I0603 11:04:36.012845   30786 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 11:04:36.026659   30786 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1522/cgroup
	W0603 11:04:36.036184   30786 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1522/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0603 11:04:36.036255   30786 ssh_runner.go:195] Run: ls
	I0603 11:04:36.040851   30786 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0603 11:04:36.045385   30786 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0603 11:04:36.045412   30786 status.go:422] ha-683480-m03 apiserver status = Running (err=<nil>)
	I0603 11:04:36.045422   30786 status.go:257] ha-683480-m03 status: &{Name:ha-683480-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0603 11:04:36.045437   30786 status.go:255] checking status of ha-683480-m04 ...
	I0603 11:04:36.045808   30786 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 11:04:36.045850   30786 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 11:04:36.060718   30786 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34091
	I0603 11:04:36.061131   30786 main.go:141] libmachine: () Calling .GetVersion
	I0603 11:04:36.061596   30786 main.go:141] libmachine: Using API Version  1
	I0603 11:04:36.061616   30786 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 11:04:36.061923   30786 main.go:141] libmachine: () Calling .GetMachineName
	I0603 11:04:36.062100   30786 main.go:141] libmachine: (ha-683480-m04) Calling .GetState
	I0603 11:04:36.063608   30786 status.go:330] ha-683480-m04 host status = "Running" (err=<nil>)
	I0603 11:04:36.063621   30786 host.go:66] Checking if "ha-683480-m04" exists ...
	I0603 11:04:36.063945   30786 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 11:04:36.063980   30786 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 11:04:36.078996   30786 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35495
	I0603 11:04:36.079463   30786 main.go:141] libmachine: () Calling .GetVersion
	I0603 11:04:36.079954   30786 main.go:141] libmachine: Using API Version  1
	I0603 11:04:36.079978   30786 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 11:04:36.080350   30786 main.go:141] libmachine: () Calling .GetMachineName
	I0603 11:04:36.080563   30786 main.go:141] libmachine: (ha-683480-m04) Calling .GetIP
	I0603 11:04:36.082870   30786 main.go:141] libmachine: (ha-683480-m04) DBG | domain ha-683480-m04 has defined MAC address 52:54:00:ed:4a:53 in network mk-ha-683480
	I0603 11:04:36.083303   30786 main.go:141] libmachine: (ha-683480-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ed:4a:53", ip: ""} in network mk-ha-683480: {Iface:virbr1 ExpiryTime:2024-06-03 12:01:12 +0000 UTC Type:0 Mac:52:54:00:ed:4a:53 Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:ha-683480-m04 Clientid:01:52:54:00:ed:4a:53}
	I0603 11:04:36.083346   30786 main.go:141] libmachine: (ha-683480-m04) DBG | domain ha-683480-m04 has defined IP address 192.168.39.206 and MAC address 52:54:00:ed:4a:53 in network mk-ha-683480
	I0603 11:04:36.083497   30786 host.go:66] Checking if "ha-683480-m04" exists ...
	I0603 11:04:36.083883   30786 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 11:04:36.083923   30786 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 11:04:36.098345   30786 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42581
	I0603 11:04:36.098761   30786 main.go:141] libmachine: () Calling .GetVersion
	I0603 11:04:36.099227   30786 main.go:141] libmachine: Using API Version  1
	I0603 11:04:36.099253   30786 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 11:04:36.099613   30786 main.go:141] libmachine: () Calling .GetMachineName
	I0603 11:04:36.099777   30786 main.go:141] libmachine: (ha-683480-m04) Calling .DriverName
	I0603 11:04:36.099980   30786 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0603 11:04:36.100005   30786 main.go:141] libmachine: (ha-683480-m04) Calling .GetSSHHostname
	I0603 11:04:36.102771   30786 main.go:141] libmachine: (ha-683480-m04) DBG | domain ha-683480-m04 has defined MAC address 52:54:00:ed:4a:53 in network mk-ha-683480
	I0603 11:04:36.103221   30786 main.go:141] libmachine: (ha-683480-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ed:4a:53", ip: ""} in network mk-ha-683480: {Iface:virbr1 ExpiryTime:2024-06-03 12:01:12 +0000 UTC Type:0 Mac:52:54:00:ed:4a:53 Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:ha-683480-m04 Clientid:01:52:54:00:ed:4a:53}
	I0603 11:04:36.103247   30786 main.go:141] libmachine: (ha-683480-m04) DBG | domain ha-683480-m04 has defined IP address 192.168.39.206 and MAC address 52:54:00:ed:4a:53 in network mk-ha-683480
	I0603 11:04:36.103409   30786 main.go:141] libmachine: (ha-683480-m04) Calling .GetSSHPort
	I0603 11:04:36.103577   30786 main.go:141] libmachine: (ha-683480-m04) Calling .GetSSHKeyPath
	I0603 11:04:36.103727   30786 main.go:141] libmachine: (ha-683480-m04) Calling .GetSSHUsername
	I0603 11:04:36.103874   30786 sshutil.go:53] new ssh client: &{IP:192.168.39.206 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19008-7755/.minikube/machines/ha-683480-m04/id_rsa Username:docker}
	I0603 11:04:36.186260   30786 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0603 11:04:36.200175   30786 status.go:257] ha-683480-m04 status: &{Name:ha-683480-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
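For the control-plane nodes the log also records the apiserver check: the kube-apiserver process is looked up with pgrep, a read of its freezer cgroup is attempted (the egrep exits 1 on hosts using the unified cgroup v2 hierarchy, where /proc/<pid>/cgroup carries no per-controller freezer line, which is the likely cause of the warning above), and the check then falls back to the healthz endpoint. An illustrative way to repeat the process lookup and health probe by hand (curl is assumed to be available on the node and is not used by the harness itself):

	# apiserver process lookup, as in the log
	sudo pgrep -xnf 'kube-apiserver.*minikube.*'
	# healthz probe against the HA virtual IP reported in the log
	curl -sk https://192.168.39.254:8443/healthz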
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-683480 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-683480 status -v=7 --alsologtostderr: exit status 3 (3.718210569s)

                                                
                                                
-- stdout --
	ha-683480
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-683480-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-683480-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-683480-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0603 11:04:38.969417   30886 out.go:291] Setting OutFile to fd 1 ...
	I0603 11:04:38.969690   30886 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0603 11:04:38.969699   30886 out.go:304] Setting ErrFile to fd 2...
	I0603 11:04:38.969705   30886 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0603 11:04:38.969904   30886 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19008-7755/.minikube/bin
	I0603 11:04:38.970086   30886 out.go:298] Setting JSON to false
	I0603 11:04:38.970112   30886 mustload.go:65] Loading cluster: ha-683480
	I0603 11:04:38.970239   30886 notify.go:220] Checking for updates...
	I0603 11:04:38.970522   30886 config.go:182] Loaded profile config "ha-683480": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0603 11:04:38.970538   30886 status.go:255] checking status of ha-683480 ...
	I0603 11:04:38.970981   30886 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 11:04:38.971069   30886 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 11:04:38.988970   30886 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36633
	I0603 11:04:38.989375   30886 main.go:141] libmachine: () Calling .GetVersion
	I0603 11:04:38.989957   30886 main.go:141] libmachine: Using API Version  1
	I0603 11:04:38.990000   30886 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 11:04:38.990380   30886 main.go:141] libmachine: () Calling .GetMachineName
	I0603 11:04:38.990584   30886 main.go:141] libmachine: (ha-683480) Calling .GetState
	I0603 11:04:38.992016   30886 status.go:330] ha-683480 host status = "Running" (err=<nil>)
	I0603 11:04:38.992032   30886 host.go:66] Checking if "ha-683480" exists ...
	I0603 11:04:38.992305   30886 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 11:04:38.992341   30886 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 11:04:39.007129   30886 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36721
	I0603 11:04:39.007539   30886 main.go:141] libmachine: () Calling .GetVersion
	I0603 11:04:39.008002   30886 main.go:141] libmachine: Using API Version  1
	I0603 11:04:39.008030   30886 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 11:04:39.008308   30886 main.go:141] libmachine: () Calling .GetMachineName
	I0603 11:04:39.008512   30886 main.go:141] libmachine: (ha-683480) Calling .GetIP
	I0603 11:04:39.011095   30886 main.go:141] libmachine: (ha-683480) DBG | domain ha-683480 has defined MAC address 52:54:00:e5:3f:6a in network mk-ha-683480
	I0603 11:04:39.011527   30886 main.go:141] libmachine: (ha-683480) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:3f:6a", ip: ""} in network mk-ha-683480: {Iface:virbr1 ExpiryTime:2024-06-03 11:56:28 +0000 UTC Type:0 Mac:52:54:00:e5:3f:6a Iaid: IPaddr:192.168.39.116 Prefix:24 Hostname:ha-683480 Clientid:01:52:54:00:e5:3f:6a}
	I0603 11:04:39.011548   30886 main.go:141] libmachine: (ha-683480) DBG | domain ha-683480 has defined IP address 192.168.39.116 and MAC address 52:54:00:e5:3f:6a in network mk-ha-683480
	I0603 11:04:39.011687   30886 host.go:66] Checking if "ha-683480" exists ...
	I0603 11:04:39.012054   30886 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 11:04:39.012096   30886 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 11:04:39.026425   30886 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37761
	I0603 11:04:39.026753   30886 main.go:141] libmachine: () Calling .GetVersion
	I0603 11:04:39.027276   30886 main.go:141] libmachine: Using API Version  1
	I0603 11:04:39.027297   30886 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 11:04:39.027602   30886 main.go:141] libmachine: () Calling .GetMachineName
	I0603 11:04:39.027785   30886 main.go:141] libmachine: (ha-683480) Calling .DriverName
	I0603 11:04:39.027985   30886 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0603 11:04:39.028012   30886 main.go:141] libmachine: (ha-683480) Calling .GetSSHHostname
	I0603 11:04:39.030923   30886 main.go:141] libmachine: (ha-683480) DBG | domain ha-683480 has defined MAC address 52:54:00:e5:3f:6a in network mk-ha-683480
	I0603 11:04:39.031376   30886 main.go:141] libmachine: (ha-683480) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:3f:6a", ip: ""} in network mk-ha-683480: {Iface:virbr1 ExpiryTime:2024-06-03 11:56:28 +0000 UTC Type:0 Mac:52:54:00:e5:3f:6a Iaid: IPaddr:192.168.39.116 Prefix:24 Hostname:ha-683480 Clientid:01:52:54:00:e5:3f:6a}
	I0603 11:04:39.031400   30886 main.go:141] libmachine: (ha-683480) DBG | domain ha-683480 has defined IP address 192.168.39.116 and MAC address 52:54:00:e5:3f:6a in network mk-ha-683480
	I0603 11:04:39.031555   30886 main.go:141] libmachine: (ha-683480) Calling .GetSSHPort
	I0603 11:04:39.031718   30886 main.go:141] libmachine: (ha-683480) Calling .GetSSHKeyPath
	I0603 11:04:39.031850   30886 main.go:141] libmachine: (ha-683480) Calling .GetSSHUsername
	I0603 11:04:39.031958   30886 sshutil.go:53] new ssh client: &{IP:192.168.39.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19008-7755/.minikube/machines/ha-683480/id_rsa Username:docker}
	I0603 11:04:39.111011   30886 ssh_runner.go:195] Run: systemctl --version
	I0603 11:04:39.117503   30886 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0603 11:04:39.134131   30886 kubeconfig.go:125] found "ha-683480" server: "https://192.168.39.254:8443"
	I0603 11:04:39.134160   30886 api_server.go:166] Checking apiserver status ...
	I0603 11:04:39.134197   30886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 11:04:39.148798   30886 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1163/cgroup
	W0603 11:04:39.158240   30886 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1163/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0603 11:04:39.158288   30886 ssh_runner.go:195] Run: ls
	I0603 11:04:39.162744   30886 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0603 11:04:39.169178   30886 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0603 11:04:39.169197   30886 status.go:422] ha-683480 apiserver status = Running (err=<nil>)
	I0603 11:04:39.169207   30886 status.go:257] ha-683480 status: &{Name:ha-683480 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0603 11:04:39.169230   30886 status.go:255] checking status of ha-683480-m02 ...
	I0603 11:04:39.169570   30886 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 11:04:39.169605   30886 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 11:04:39.184214   30886 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43191
	I0603 11:04:39.184581   30886 main.go:141] libmachine: () Calling .GetVersion
	I0603 11:04:39.184968   30886 main.go:141] libmachine: Using API Version  1
	I0603 11:04:39.184988   30886 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 11:04:39.185332   30886 main.go:141] libmachine: () Calling .GetMachineName
	I0603 11:04:39.185501   30886 main.go:141] libmachine: (ha-683480-m02) Calling .GetState
	I0603 11:04:39.186955   30886 status.go:330] ha-683480-m02 host status = "Running" (err=<nil>)
	I0603 11:04:39.186987   30886 host.go:66] Checking if "ha-683480-m02" exists ...
	I0603 11:04:39.187287   30886 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 11:04:39.187321   30886 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 11:04:39.202176   30886 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36613
	I0603 11:04:39.202496   30886 main.go:141] libmachine: () Calling .GetVersion
	I0603 11:04:39.202907   30886 main.go:141] libmachine: Using API Version  1
	I0603 11:04:39.202927   30886 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 11:04:39.203237   30886 main.go:141] libmachine: () Calling .GetMachineName
	I0603 11:04:39.203424   30886 main.go:141] libmachine: (ha-683480-m02) Calling .GetIP
	I0603 11:04:39.205885   30886 main.go:141] libmachine: (ha-683480-m02) DBG | domain ha-683480-m02 has defined MAC address 52:54:00:00:55:50 in network mk-ha-683480
	I0603 11:04:39.206254   30886 main.go:141] libmachine: (ha-683480-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:55:50", ip: ""} in network mk-ha-683480: {Iface:virbr1 ExpiryTime:2024-06-03 11:57:29 +0000 UTC Type:0 Mac:52:54:00:00:55:50 Iaid: IPaddr:192.168.39.127 Prefix:24 Hostname:ha-683480-m02 Clientid:01:52:54:00:00:55:50}
	I0603 11:04:39.206279   30886 main.go:141] libmachine: (ha-683480-m02) DBG | domain ha-683480-m02 has defined IP address 192.168.39.127 and MAC address 52:54:00:00:55:50 in network mk-ha-683480
	I0603 11:04:39.206380   30886 host.go:66] Checking if "ha-683480-m02" exists ...
	I0603 11:04:39.206655   30886 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 11:04:39.206684   30886 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 11:04:39.220675   30886 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37989
	I0603 11:04:39.221161   30886 main.go:141] libmachine: () Calling .GetVersion
	I0603 11:04:39.221638   30886 main.go:141] libmachine: Using API Version  1
	I0603 11:04:39.221657   30886 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 11:04:39.222006   30886 main.go:141] libmachine: () Calling .GetMachineName
	I0603 11:04:39.222373   30886 main.go:141] libmachine: (ha-683480-m02) Calling .DriverName
	I0603 11:04:39.222608   30886 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0603 11:04:39.222635   30886 main.go:141] libmachine: (ha-683480-m02) Calling .GetSSHHostname
	I0603 11:04:39.224920   30886 main.go:141] libmachine: (ha-683480-m02) DBG | domain ha-683480-m02 has defined MAC address 52:54:00:00:55:50 in network mk-ha-683480
	I0603 11:04:39.225226   30886 main.go:141] libmachine: (ha-683480-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:55:50", ip: ""} in network mk-ha-683480: {Iface:virbr1 ExpiryTime:2024-06-03 11:57:29 +0000 UTC Type:0 Mac:52:54:00:00:55:50 Iaid: IPaddr:192.168.39.127 Prefix:24 Hostname:ha-683480-m02 Clientid:01:52:54:00:00:55:50}
	I0603 11:04:39.225254   30886 main.go:141] libmachine: (ha-683480-m02) DBG | domain ha-683480-m02 has defined IP address 192.168.39.127 and MAC address 52:54:00:00:55:50 in network mk-ha-683480
	I0603 11:04:39.225460   30886 main.go:141] libmachine: (ha-683480-m02) Calling .GetSSHPort
	I0603 11:04:39.225611   30886 main.go:141] libmachine: (ha-683480-m02) Calling .GetSSHKeyPath
	I0603 11:04:39.225772   30886 main.go:141] libmachine: (ha-683480-m02) Calling .GetSSHUsername
	I0603 11:04:39.225875   30886 sshutil.go:53] new ssh client: &{IP:192.168.39.127 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19008-7755/.minikube/machines/ha-683480-m02/id_rsa Username:docker}
	W0603 11:04:42.291314   30886 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.127:22: connect: no route to host
	W0603 11:04:42.291419   30886 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.127:22: connect: no route to host
	E0603 11:04:42.291434   30886 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.127:22: connect: no route to host
	I0603 11:04:42.291443   30886 status.go:257] ha-683480-m02 status: &{Name:ha-683480-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0603 11:04:42.291464   30886 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.127:22: connect: no route to host
	I0603 11:04:42.291471   30886 status.go:255] checking status of ha-683480-m03 ...
	I0603 11:04:42.291865   30886 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 11:04:42.291920   30886 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 11:04:42.306559   30886 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43499
	I0603 11:04:42.307057   30886 main.go:141] libmachine: () Calling .GetVersion
	I0603 11:04:42.307504   30886 main.go:141] libmachine: Using API Version  1
	I0603 11:04:42.307527   30886 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 11:04:42.307895   30886 main.go:141] libmachine: () Calling .GetMachineName
	I0603 11:04:42.308058   30886 main.go:141] libmachine: (ha-683480-m03) Calling .GetState
	I0603 11:04:42.309311   30886 status.go:330] ha-683480-m03 host status = "Running" (err=<nil>)
	I0603 11:04:42.309328   30886 host.go:66] Checking if "ha-683480-m03" exists ...
	I0603 11:04:42.309601   30886 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 11:04:42.309631   30886 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 11:04:42.323596   30886 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46361
	I0603 11:04:42.323997   30886 main.go:141] libmachine: () Calling .GetVersion
	I0603 11:04:42.324475   30886 main.go:141] libmachine: Using API Version  1
	I0603 11:04:42.324498   30886 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 11:04:42.324792   30886 main.go:141] libmachine: () Calling .GetMachineName
	I0603 11:04:42.324993   30886 main.go:141] libmachine: (ha-683480-m03) Calling .GetIP
	I0603 11:04:42.327890   30886 main.go:141] libmachine: (ha-683480-m03) DBG | domain ha-683480-m03 has defined MAC address 52:54:00:b4:3e:89 in network mk-ha-683480
	I0603 11:04:42.328375   30886 main.go:141] libmachine: (ha-683480-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:3e:89", ip: ""} in network mk-ha-683480: {Iface:virbr1 ExpiryTime:2024-06-03 11:59:47 +0000 UTC Type:0 Mac:52:54:00:b4:3e:89 Iaid: IPaddr:192.168.39.131 Prefix:24 Hostname:ha-683480-m03 Clientid:01:52:54:00:b4:3e:89}
	I0603 11:04:42.328401   30886 main.go:141] libmachine: (ha-683480-m03) DBG | domain ha-683480-m03 has defined IP address 192.168.39.131 and MAC address 52:54:00:b4:3e:89 in network mk-ha-683480
	I0603 11:04:42.328492   30886 host.go:66] Checking if "ha-683480-m03" exists ...
	I0603 11:04:42.328788   30886 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 11:04:42.328822   30886 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 11:04:42.343920   30886 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38715
	I0603 11:04:42.344295   30886 main.go:141] libmachine: () Calling .GetVersion
	I0603 11:04:42.344731   30886 main.go:141] libmachine: Using API Version  1
	I0603 11:04:42.344750   30886 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 11:04:42.345032   30886 main.go:141] libmachine: () Calling .GetMachineName
	I0603 11:04:42.345205   30886 main.go:141] libmachine: (ha-683480-m03) Calling .DriverName
	I0603 11:04:42.345391   30886 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0603 11:04:42.345410   30886 main.go:141] libmachine: (ha-683480-m03) Calling .GetSSHHostname
	I0603 11:04:42.347982   30886 main.go:141] libmachine: (ha-683480-m03) DBG | domain ha-683480-m03 has defined MAC address 52:54:00:b4:3e:89 in network mk-ha-683480
	I0603 11:04:42.348356   30886 main.go:141] libmachine: (ha-683480-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:3e:89", ip: ""} in network mk-ha-683480: {Iface:virbr1 ExpiryTime:2024-06-03 11:59:47 +0000 UTC Type:0 Mac:52:54:00:b4:3e:89 Iaid: IPaddr:192.168.39.131 Prefix:24 Hostname:ha-683480-m03 Clientid:01:52:54:00:b4:3e:89}
	I0603 11:04:42.348388   30886 main.go:141] libmachine: (ha-683480-m03) DBG | domain ha-683480-m03 has defined IP address 192.168.39.131 and MAC address 52:54:00:b4:3e:89 in network mk-ha-683480
	I0603 11:04:42.348479   30886 main.go:141] libmachine: (ha-683480-m03) Calling .GetSSHPort
	I0603 11:04:42.348612   30886 main.go:141] libmachine: (ha-683480-m03) Calling .GetSSHKeyPath
	I0603 11:04:42.348719   30886 main.go:141] libmachine: (ha-683480-m03) Calling .GetSSHUsername
	I0603 11:04:42.348848   30886 sshutil.go:53] new ssh client: &{IP:192.168.39.131 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19008-7755/.minikube/machines/ha-683480-m03/id_rsa Username:docker}
	I0603 11:04:42.435869   30886 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0603 11:04:42.450753   30886 kubeconfig.go:125] found "ha-683480" server: "https://192.168.39.254:8443"
	I0603 11:04:42.450779   30886 api_server.go:166] Checking apiserver status ...
	I0603 11:04:42.450816   30886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 11:04:42.466830   30886 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1522/cgroup
	W0603 11:04:42.476150   30886 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1522/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0603 11:04:42.476202   30886 ssh_runner.go:195] Run: ls
	I0603 11:04:42.480814   30886 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0603 11:04:42.487104   30886 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0603 11:04:42.487131   30886 status.go:422] ha-683480-m03 apiserver status = Running (err=<nil>)
	I0603 11:04:42.487153   30886 status.go:257] ha-683480-m03 status: &{Name:ha-683480-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0603 11:04:42.487178   30886 status.go:255] checking status of ha-683480-m04 ...
	I0603 11:04:42.487482   30886 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 11:04:42.487515   30886 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 11:04:42.502637   30886 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33109
	I0603 11:04:42.503027   30886 main.go:141] libmachine: () Calling .GetVersion
	I0603 11:04:42.503453   30886 main.go:141] libmachine: Using API Version  1
	I0603 11:04:42.503476   30886 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 11:04:42.503831   30886 main.go:141] libmachine: () Calling .GetMachineName
	I0603 11:04:42.504072   30886 main.go:141] libmachine: (ha-683480-m04) Calling .GetState
	I0603 11:04:42.505793   30886 status.go:330] ha-683480-m04 host status = "Running" (err=<nil>)
	I0603 11:04:42.505806   30886 host.go:66] Checking if "ha-683480-m04" exists ...
	I0603 11:04:42.506119   30886 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 11:04:42.506156   30886 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 11:04:42.520498   30886 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46039
	I0603 11:04:42.520887   30886 main.go:141] libmachine: () Calling .GetVersion
	I0603 11:04:42.521428   30886 main.go:141] libmachine: Using API Version  1
	I0603 11:04:42.521447   30886 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 11:04:42.521787   30886 main.go:141] libmachine: () Calling .GetMachineName
	I0603 11:04:42.521984   30886 main.go:141] libmachine: (ha-683480-m04) Calling .GetIP
	I0603 11:04:42.524662   30886 main.go:141] libmachine: (ha-683480-m04) DBG | domain ha-683480-m04 has defined MAC address 52:54:00:ed:4a:53 in network mk-ha-683480
	I0603 11:04:42.525142   30886 main.go:141] libmachine: (ha-683480-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ed:4a:53", ip: ""} in network mk-ha-683480: {Iface:virbr1 ExpiryTime:2024-06-03 12:01:12 +0000 UTC Type:0 Mac:52:54:00:ed:4a:53 Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:ha-683480-m04 Clientid:01:52:54:00:ed:4a:53}
	I0603 11:04:42.525171   30886 main.go:141] libmachine: (ha-683480-m04) DBG | domain ha-683480-m04 has defined IP address 192.168.39.206 and MAC address 52:54:00:ed:4a:53 in network mk-ha-683480
	I0603 11:04:42.525294   30886 host.go:66] Checking if "ha-683480-m04" exists ...
	I0603 11:04:42.525592   30886 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 11:04:42.525652   30886 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 11:04:42.540386   30886 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37805
	I0603 11:04:42.540735   30886 main.go:141] libmachine: () Calling .GetVersion
	I0603 11:04:42.541171   30886 main.go:141] libmachine: Using API Version  1
	I0603 11:04:42.541190   30886 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 11:04:42.541453   30886 main.go:141] libmachine: () Calling .GetMachineName
	I0603 11:04:42.541642   30886 main.go:141] libmachine: (ha-683480-m04) Calling .DriverName
	I0603 11:04:42.541840   30886 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0603 11:04:42.541857   30886 main.go:141] libmachine: (ha-683480-m04) Calling .GetSSHHostname
	I0603 11:04:42.544563   30886 main.go:141] libmachine: (ha-683480-m04) DBG | domain ha-683480-m04 has defined MAC address 52:54:00:ed:4a:53 in network mk-ha-683480
	I0603 11:04:42.544970   30886 main.go:141] libmachine: (ha-683480-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ed:4a:53", ip: ""} in network mk-ha-683480: {Iface:virbr1 ExpiryTime:2024-06-03 12:01:12 +0000 UTC Type:0 Mac:52:54:00:ed:4a:53 Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:ha-683480-m04 Clientid:01:52:54:00:ed:4a:53}
	I0603 11:04:42.545000   30886 main.go:141] libmachine: (ha-683480-m04) DBG | domain ha-683480-m04 has defined IP address 192.168.39.206 and MAC address 52:54:00:ed:4a:53 in network mk-ha-683480
	I0603 11:04:42.545119   30886 main.go:141] libmachine: (ha-683480-m04) Calling .GetSSHPort
	I0603 11:04:42.545322   30886 main.go:141] libmachine: (ha-683480-m04) Calling .GetSSHKeyPath
	I0603 11:04:42.545499   30886 main.go:141] libmachine: (ha-683480-m04) Calling .GetSSHUsername
	I0603 11:04:42.545626   30886 sshutil.go:53] new ssh client: &{IP:192.168.39.206 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19008-7755/.minikube/machines/ha-683480-m04/id_rsa Username:docker}
	I0603 11:04:42.630731   30886 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0603 11:04:42.645365   30886 status.go:257] ha-683480-m04 status: &{Name:ha-683480-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
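The probe sequence recorded above is the same for every node: open an SSH session with the machine's key, read disk usage on /var, check whether kubelet is active, and, on control-plane nodes, hit the apiserver health endpoint on the cluster VIP. A rough shell sketch of that sequence against the worker ha-683480-m04, using the IP, username and key path from the log (illustrative only, not part of the test harness; a plain curl to /healthz may be rejected if the apiserver disallows anonymous requests):

    NODE_IP=192.168.39.206
    KEY=/home/jenkins/minikube-integration/19008-7755/.minikube/machines/ha-683480-m04/id_rsa

    # disk usage of /var, same command the status probe runs over SSH
    ssh -i "$KEY" docker@"$NODE_IP" "df -h /var | awk 'NR==2{print \$5}'"

    # kubelet check as logged above; exit code 0 means the unit is active
    ssh -i "$KEY" docker@"$NODE_IP" "sudo systemctl is-active --quiet service kubelet" && echo "kubelet: Running"

    # control-plane nodes only: apiserver health through the HA virtual IP
    curl -sk https://192.168.39.254:8443/healthz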
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-683480 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-683480 status -v=7 --alsologtostderr: exit status 3 (3.695199051s)

                                                
                                                
-- stdout --
	ha-683480
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-683480-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-683480-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-683480-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0603 11:04:47.359594   31002 out.go:291] Setting OutFile to fd 1 ...
	I0603 11:04:47.359848   31002 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0603 11:04:47.359858   31002 out.go:304] Setting ErrFile to fd 2...
	I0603 11:04:47.359862   31002 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0603 11:04:47.360028   31002 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19008-7755/.minikube/bin
	I0603 11:04:47.360197   31002 out.go:298] Setting JSON to false
	I0603 11:04:47.360225   31002 mustload.go:65] Loading cluster: ha-683480
	I0603 11:04:47.360279   31002 notify.go:220] Checking for updates...
	I0603 11:04:47.360617   31002 config.go:182] Loaded profile config "ha-683480": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0603 11:04:47.360632   31002 status.go:255] checking status of ha-683480 ...
	I0603 11:04:47.360994   31002 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 11:04:47.361050   31002 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 11:04:47.381272   31002 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38093
	I0603 11:04:47.381653   31002 main.go:141] libmachine: () Calling .GetVersion
	I0603 11:04:47.382282   31002 main.go:141] libmachine: Using API Version  1
	I0603 11:04:47.382306   31002 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 11:04:47.382671   31002 main.go:141] libmachine: () Calling .GetMachineName
	I0603 11:04:47.382896   31002 main.go:141] libmachine: (ha-683480) Calling .GetState
	I0603 11:04:47.384507   31002 status.go:330] ha-683480 host status = "Running" (err=<nil>)
	I0603 11:04:47.384523   31002 host.go:66] Checking if "ha-683480" exists ...
	I0603 11:04:47.384890   31002 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 11:04:47.384924   31002 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 11:04:47.399995   31002 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40821
	I0603 11:04:47.400332   31002 main.go:141] libmachine: () Calling .GetVersion
	I0603 11:04:47.400786   31002 main.go:141] libmachine: Using API Version  1
	I0603 11:04:47.400817   31002 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 11:04:47.401131   31002 main.go:141] libmachine: () Calling .GetMachineName
	I0603 11:04:47.401354   31002 main.go:141] libmachine: (ha-683480) Calling .GetIP
	I0603 11:04:47.404065   31002 main.go:141] libmachine: (ha-683480) DBG | domain ha-683480 has defined MAC address 52:54:00:e5:3f:6a in network mk-ha-683480
	I0603 11:04:47.404534   31002 main.go:141] libmachine: (ha-683480) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:3f:6a", ip: ""} in network mk-ha-683480: {Iface:virbr1 ExpiryTime:2024-06-03 11:56:28 +0000 UTC Type:0 Mac:52:54:00:e5:3f:6a Iaid: IPaddr:192.168.39.116 Prefix:24 Hostname:ha-683480 Clientid:01:52:54:00:e5:3f:6a}
	I0603 11:04:47.404572   31002 main.go:141] libmachine: (ha-683480) DBG | domain ha-683480 has defined IP address 192.168.39.116 and MAC address 52:54:00:e5:3f:6a in network mk-ha-683480
	I0603 11:04:47.404666   31002 host.go:66] Checking if "ha-683480" exists ...
	I0603 11:04:47.404942   31002 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 11:04:47.404982   31002 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 11:04:47.419010   31002 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34431
	I0603 11:04:47.419456   31002 main.go:141] libmachine: () Calling .GetVersion
	I0603 11:04:47.419866   31002 main.go:141] libmachine: Using API Version  1
	I0603 11:04:47.419880   31002 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 11:04:47.420218   31002 main.go:141] libmachine: () Calling .GetMachineName
	I0603 11:04:47.420434   31002 main.go:141] libmachine: (ha-683480) Calling .DriverName
	I0603 11:04:47.420619   31002 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0603 11:04:47.420650   31002 main.go:141] libmachine: (ha-683480) Calling .GetSSHHostname
	I0603 11:04:47.423401   31002 main.go:141] libmachine: (ha-683480) DBG | domain ha-683480 has defined MAC address 52:54:00:e5:3f:6a in network mk-ha-683480
	I0603 11:04:47.423807   31002 main.go:141] libmachine: (ha-683480) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:3f:6a", ip: ""} in network mk-ha-683480: {Iface:virbr1 ExpiryTime:2024-06-03 11:56:28 +0000 UTC Type:0 Mac:52:54:00:e5:3f:6a Iaid: IPaddr:192.168.39.116 Prefix:24 Hostname:ha-683480 Clientid:01:52:54:00:e5:3f:6a}
	I0603 11:04:47.423839   31002 main.go:141] libmachine: (ha-683480) DBG | domain ha-683480 has defined IP address 192.168.39.116 and MAC address 52:54:00:e5:3f:6a in network mk-ha-683480
	I0603 11:04:47.423959   31002 main.go:141] libmachine: (ha-683480) Calling .GetSSHPort
	I0603 11:04:47.424126   31002 main.go:141] libmachine: (ha-683480) Calling .GetSSHKeyPath
	I0603 11:04:47.424285   31002 main.go:141] libmachine: (ha-683480) Calling .GetSSHUsername
	I0603 11:04:47.424456   31002 sshutil.go:53] new ssh client: &{IP:192.168.39.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19008-7755/.minikube/machines/ha-683480/id_rsa Username:docker}
	I0603 11:04:47.503222   31002 ssh_runner.go:195] Run: systemctl --version
	I0603 11:04:47.509600   31002 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0603 11:04:47.525597   31002 kubeconfig.go:125] found "ha-683480" server: "https://192.168.39.254:8443"
	I0603 11:04:47.525632   31002 api_server.go:166] Checking apiserver status ...
	I0603 11:04:47.525670   31002 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 11:04:47.539798   31002 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1163/cgroup
	W0603 11:04:47.549188   31002 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1163/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0603 11:04:47.549239   31002 ssh_runner.go:195] Run: ls
	I0603 11:04:47.553870   31002 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0603 11:04:47.558432   31002 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0603 11:04:47.558457   31002 status.go:422] ha-683480 apiserver status = Running (err=<nil>)
	I0603 11:04:47.558468   31002 status.go:257] ha-683480 status: &{Name:ha-683480 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0603 11:04:47.558483   31002 status.go:255] checking status of ha-683480-m02 ...
	I0603 11:04:47.558822   31002 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 11:04:47.558856   31002 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 11:04:47.573966   31002 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42525
	I0603 11:04:47.574346   31002 main.go:141] libmachine: () Calling .GetVersion
	I0603 11:04:47.574805   31002 main.go:141] libmachine: Using API Version  1
	I0603 11:04:47.574823   31002 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 11:04:47.575195   31002 main.go:141] libmachine: () Calling .GetMachineName
	I0603 11:04:47.575388   31002 main.go:141] libmachine: (ha-683480-m02) Calling .GetState
	I0603 11:04:47.576892   31002 status.go:330] ha-683480-m02 host status = "Running" (err=<nil>)
	I0603 11:04:47.576910   31002 host.go:66] Checking if "ha-683480-m02" exists ...
	I0603 11:04:47.577295   31002 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 11:04:47.577344   31002 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 11:04:47.591922   31002 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38075
	I0603 11:04:47.592249   31002 main.go:141] libmachine: () Calling .GetVersion
	I0603 11:04:47.592645   31002 main.go:141] libmachine: Using API Version  1
	I0603 11:04:47.592664   31002 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 11:04:47.592934   31002 main.go:141] libmachine: () Calling .GetMachineName
	I0603 11:04:47.593118   31002 main.go:141] libmachine: (ha-683480-m02) Calling .GetIP
	I0603 11:04:47.595534   31002 main.go:141] libmachine: (ha-683480-m02) DBG | domain ha-683480-m02 has defined MAC address 52:54:00:00:55:50 in network mk-ha-683480
	I0603 11:04:47.595928   31002 main.go:141] libmachine: (ha-683480-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:55:50", ip: ""} in network mk-ha-683480: {Iface:virbr1 ExpiryTime:2024-06-03 11:57:29 +0000 UTC Type:0 Mac:52:54:00:00:55:50 Iaid: IPaddr:192.168.39.127 Prefix:24 Hostname:ha-683480-m02 Clientid:01:52:54:00:00:55:50}
	I0603 11:04:47.595954   31002 main.go:141] libmachine: (ha-683480-m02) DBG | domain ha-683480-m02 has defined IP address 192.168.39.127 and MAC address 52:54:00:00:55:50 in network mk-ha-683480
	I0603 11:04:47.596061   31002 host.go:66] Checking if "ha-683480-m02" exists ...
	I0603 11:04:47.596376   31002 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 11:04:47.596410   31002 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 11:04:47.610352   31002 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40109
	I0603 11:04:47.610775   31002 main.go:141] libmachine: () Calling .GetVersion
	I0603 11:04:47.611272   31002 main.go:141] libmachine: Using API Version  1
	I0603 11:04:47.611294   31002 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 11:04:47.611723   31002 main.go:141] libmachine: () Calling .GetMachineName
	I0603 11:04:47.611916   31002 main.go:141] libmachine: (ha-683480-m02) Calling .DriverName
	I0603 11:04:47.612102   31002 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0603 11:04:47.612122   31002 main.go:141] libmachine: (ha-683480-m02) Calling .GetSSHHostname
	I0603 11:04:47.614745   31002 main.go:141] libmachine: (ha-683480-m02) DBG | domain ha-683480-m02 has defined MAC address 52:54:00:00:55:50 in network mk-ha-683480
	I0603 11:04:47.615202   31002 main.go:141] libmachine: (ha-683480-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:55:50", ip: ""} in network mk-ha-683480: {Iface:virbr1 ExpiryTime:2024-06-03 11:57:29 +0000 UTC Type:0 Mac:52:54:00:00:55:50 Iaid: IPaddr:192.168.39.127 Prefix:24 Hostname:ha-683480-m02 Clientid:01:52:54:00:00:55:50}
	I0603 11:04:47.615237   31002 main.go:141] libmachine: (ha-683480-m02) DBG | domain ha-683480-m02 has defined IP address 192.168.39.127 and MAC address 52:54:00:00:55:50 in network mk-ha-683480
	I0603 11:04:47.615391   31002 main.go:141] libmachine: (ha-683480-m02) Calling .GetSSHPort
	I0603 11:04:47.615565   31002 main.go:141] libmachine: (ha-683480-m02) Calling .GetSSHKeyPath
	I0603 11:04:47.615735   31002 main.go:141] libmachine: (ha-683480-m02) Calling .GetSSHUsername
	I0603 11:04:47.615845   31002 sshutil.go:53] new ssh client: &{IP:192.168.39.127 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19008-7755/.minikube/machines/ha-683480-m02/id_rsa Username:docker}
	W0603 11:04:50.671291   31002 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.127:22: connect: no route to host
	W0603 11:04:50.671369   31002 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.127:22: connect: no route to host
	E0603 11:04:50.671385   31002 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.127:22: connect: no route to host
	I0603 11:04:50.671393   31002 status.go:257] ha-683480-m02 status: &{Name:ha-683480-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0603 11:04:50.671410   31002 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.127:22: connect: no route to host
	I0603 11:04:50.671419   31002 status.go:255] checking status of ha-683480-m03 ...
	I0603 11:04:50.671719   31002 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 11:04:50.671756   31002 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 11:04:50.686806   31002 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40297
	I0603 11:04:50.687244   31002 main.go:141] libmachine: () Calling .GetVersion
	I0603 11:04:50.687717   31002 main.go:141] libmachine: Using API Version  1
	I0603 11:04:50.687736   31002 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 11:04:50.688019   31002 main.go:141] libmachine: () Calling .GetMachineName
	I0603 11:04:50.688187   31002 main.go:141] libmachine: (ha-683480-m03) Calling .GetState
	I0603 11:04:50.689663   31002 status.go:330] ha-683480-m03 host status = "Running" (err=<nil>)
	I0603 11:04:50.689682   31002 host.go:66] Checking if "ha-683480-m03" exists ...
	I0603 11:04:50.689960   31002 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 11:04:50.689990   31002 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 11:04:50.705660   31002 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36343
	I0603 11:04:50.706064   31002 main.go:141] libmachine: () Calling .GetVersion
	I0603 11:04:50.706483   31002 main.go:141] libmachine: Using API Version  1
	I0603 11:04:50.706501   31002 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 11:04:50.706819   31002 main.go:141] libmachine: () Calling .GetMachineName
	I0603 11:04:50.706981   31002 main.go:141] libmachine: (ha-683480-m03) Calling .GetIP
	I0603 11:04:50.709514   31002 main.go:141] libmachine: (ha-683480-m03) DBG | domain ha-683480-m03 has defined MAC address 52:54:00:b4:3e:89 in network mk-ha-683480
	I0603 11:04:50.709907   31002 main.go:141] libmachine: (ha-683480-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:3e:89", ip: ""} in network mk-ha-683480: {Iface:virbr1 ExpiryTime:2024-06-03 11:59:47 +0000 UTC Type:0 Mac:52:54:00:b4:3e:89 Iaid: IPaddr:192.168.39.131 Prefix:24 Hostname:ha-683480-m03 Clientid:01:52:54:00:b4:3e:89}
	I0603 11:04:50.709934   31002 main.go:141] libmachine: (ha-683480-m03) DBG | domain ha-683480-m03 has defined IP address 192.168.39.131 and MAC address 52:54:00:b4:3e:89 in network mk-ha-683480
	I0603 11:04:50.710068   31002 host.go:66] Checking if "ha-683480-m03" exists ...
	I0603 11:04:50.710408   31002 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 11:04:50.710440   31002 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 11:04:50.725235   31002 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44137
	I0603 11:04:50.725668   31002 main.go:141] libmachine: () Calling .GetVersion
	I0603 11:04:50.726098   31002 main.go:141] libmachine: Using API Version  1
	I0603 11:04:50.726118   31002 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 11:04:50.726413   31002 main.go:141] libmachine: () Calling .GetMachineName
	I0603 11:04:50.726581   31002 main.go:141] libmachine: (ha-683480-m03) Calling .DriverName
	I0603 11:04:50.726766   31002 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0603 11:04:50.726785   31002 main.go:141] libmachine: (ha-683480-m03) Calling .GetSSHHostname
	I0603 11:04:50.729370   31002 main.go:141] libmachine: (ha-683480-m03) DBG | domain ha-683480-m03 has defined MAC address 52:54:00:b4:3e:89 in network mk-ha-683480
	I0603 11:04:50.729738   31002 main.go:141] libmachine: (ha-683480-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:3e:89", ip: ""} in network mk-ha-683480: {Iface:virbr1 ExpiryTime:2024-06-03 11:59:47 +0000 UTC Type:0 Mac:52:54:00:b4:3e:89 Iaid: IPaddr:192.168.39.131 Prefix:24 Hostname:ha-683480-m03 Clientid:01:52:54:00:b4:3e:89}
	I0603 11:04:50.729763   31002 main.go:141] libmachine: (ha-683480-m03) DBG | domain ha-683480-m03 has defined IP address 192.168.39.131 and MAC address 52:54:00:b4:3e:89 in network mk-ha-683480
	I0603 11:04:50.729886   31002 main.go:141] libmachine: (ha-683480-m03) Calling .GetSSHPort
	I0603 11:04:50.730040   31002 main.go:141] libmachine: (ha-683480-m03) Calling .GetSSHKeyPath
	I0603 11:04:50.730187   31002 main.go:141] libmachine: (ha-683480-m03) Calling .GetSSHUsername
	I0603 11:04:50.730320   31002 sshutil.go:53] new ssh client: &{IP:192.168.39.131 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19008-7755/.minikube/machines/ha-683480-m03/id_rsa Username:docker}
	I0603 11:04:50.811535   31002 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0603 11:04:50.828433   31002 kubeconfig.go:125] found "ha-683480" server: "https://192.168.39.254:8443"
	I0603 11:04:50.828459   31002 api_server.go:166] Checking apiserver status ...
	I0603 11:04:50.828489   31002 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 11:04:50.842021   31002 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1522/cgroup
	W0603 11:04:50.851552   31002 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1522/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0603 11:04:50.851600   31002 ssh_runner.go:195] Run: ls
	I0603 11:04:50.855958   31002 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0603 11:04:50.861772   31002 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0603 11:04:50.861793   31002 status.go:422] ha-683480-m03 apiserver status = Running (err=<nil>)
	I0603 11:04:50.861802   31002 status.go:257] ha-683480-m03 status: &{Name:ha-683480-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0603 11:04:50.861816   31002 status.go:255] checking status of ha-683480-m04 ...
	I0603 11:04:50.862168   31002 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 11:04:50.862202   31002 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 11:04:50.877514   31002 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46239
	I0603 11:04:50.877879   31002 main.go:141] libmachine: () Calling .GetVersion
	I0603 11:04:50.878396   31002 main.go:141] libmachine: Using API Version  1
	I0603 11:04:50.878417   31002 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 11:04:50.878719   31002 main.go:141] libmachine: () Calling .GetMachineName
	I0603 11:04:50.878898   31002 main.go:141] libmachine: (ha-683480-m04) Calling .GetState
	I0603 11:04:50.880417   31002 status.go:330] ha-683480-m04 host status = "Running" (err=<nil>)
	I0603 11:04:50.880433   31002 host.go:66] Checking if "ha-683480-m04" exists ...
	I0603 11:04:50.880748   31002 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 11:04:50.880787   31002 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 11:04:50.895489   31002 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44887
	I0603 11:04:50.895836   31002 main.go:141] libmachine: () Calling .GetVersion
	I0603 11:04:50.896379   31002 main.go:141] libmachine: Using API Version  1
	I0603 11:04:50.896405   31002 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 11:04:50.896724   31002 main.go:141] libmachine: () Calling .GetMachineName
	I0603 11:04:50.896909   31002 main.go:141] libmachine: (ha-683480-m04) Calling .GetIP
	I0603 11:04:50.899624   31002 main.go:141] libmachine: (ha-683480-m04) DBG | domain ha-683480-m04 has defined MAC address 52:54:00:ed:4a:53 in network mk-ha-683480
	I0603 11:04:50.899981   31002 main.go:141] libmachine: (ha-683480-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ed:4a:53", ip: ""} in network mk-ha-683480: {Iface:virbr1 ExpiryTime:2024-06-03 12:01:12 +0000 UTC Type:0 Mac:52:54:00:ed:4a:53 Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:ha-683480-m04 Clientid:01:52:54:00:ed:4a:53}
	I0603 11:04:50.900017   31002 main.go:141] libmachine: (ha-683480-m04) DBG | domain ha-683480-m04 has defined IP address 192.168.39.206 and MAC address 52:54:00:ed:4a:53 in network mk-ha-683480
	I0603 11:04:50.900174   31002 host.go:66] Checking if "ha-683480-m04" exists ...
	I0603 11:04:50.900464   31002 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 11:04:50.900517   31002 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 11:04:50.915173   31002 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42519
	I0603 11:04:50.915565   31002 main.go:141] libmachine: () Calling .GetVersion
	I0603 11:04:50.915994   31002 main.go:141] libmachine: Using API Version  1
	I0603 11:04:50.916015   31002 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 11:04:50.916352   31002 main.go:141] libmachine: () Calling .GetMachineName
	I0603 11:04:50.916517   31002 main.go:141] libmachine: (ha-683480-m04) Calling .DriverName
	I0603 11:04:50.916652   31002 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0603 11:04:50.916665   31002 main.go:141] libmachine: (ha-683480-m04) Calling .GetSSHHostname
	I0603 11:04:50.919541   31002 main.go:141] libmachine: (ha-683480-m04) DBG | domain ha-683480-m04 has defined MAC address 52:54:00:ed:4a:53 in network mk-ha-683480
	I0603 11:04:50.919961   31002 main.go:141] libmachine: (ha-683480-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ed:4a:53", ip: ""} in network mk-ha-683480: {Iface:virbr1 ExpiryTime:2024-06-03 12:01:12 +0000 UTC Type:0 Mac:52:54:00:ed:4a:53 Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:ha-683480-m04 Clientid:01:52:54:00:ed:4a:53}
	I0603 11:04:50.920009   31002 main.go:141] libmachine: (ha-683480-m04) DBG | domain ha-683480-m04 has defined IP address 192.168.39.206 and MAC address 52:54:00:ed:4a:53 in network mk-ha-683480
	I0603 11:04:50.920240   31002 main.go:141] libmachine: (ha-683480-m04) Calling .GetSSHPort
	I0603 11:04:50.920476   31002 main.go:141] libmachine: (ha-683480-m04) Calling .GetSSHKeyPath
	I0603 11:04:50.920640   31002 main.go:141] libmachine: (ha-683480-m04) Calling .GetSSHUsername
	I0603 11:04:50.920813   31002 sshutil.go:53] new ssh client: &{IP:192.168.39.206 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19008-7755/.minikube/machines/ha-683480-m04/id_rsa Username:docker}
	I0603 11:04:50.998324   31002 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0603 11:04:51.012876   31002 status.go:257] ha-683480-m04 status: &{Name:ha-683480-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
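The exit status 3 above comes entirely from ha-683480-m02: the SSH dial to 192.168.39.127:22 fails with "no route to host" after roughly three seconds, so that node is reported as host: Error with kubelet/apiserver Nonexistent while the other three nodes probe normally, and that stalled dial accounts for most of the ~3.7s runtime. The same condition can be confirmed from the test host with standard tools (IP and port taken from the log; the exact commands are illustrative):

    # TCP check of the node's SSH port; fails the same way the Go dial does
    nc -z -w 3 192.168.39.127 22 || echo "192.168.39.127:22 unreachable"

    # ICMP check to distinguish a dead or rebooting guest from a filtered port
    ping -c 1 -W 2 192.168.39.127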
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-683480 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-683480 status -v=7 --alsologtostderr: exit status 7 (598.641824ms)

                                                
                                                
-- stdout --
	ha-683480
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-683480-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-683480-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-683480-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0603 11:04:58.087455   31137 out.go:291] Setting OutFile to fd 1 ...
	I0603 11:04:58.087709   31137 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0603 11:04:58.087718   31137 out.go:304] Setting ErrFile to fd 2...
	I0603 11:04:58.087722   31137 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0603 11:04:58.087880   31137 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19008-7755/.minikube/bin
	I0603 11:04:58.088043   31137 out.go:298] Setting JSON to false
	I0603 11:04:58.088068   31137 mustload.go:65] Loading cluster: ha-683480
	I0603 11:04:58.088165   31137 notify.go:220] Checking for updates...
	I0603 11:04:58.089346   31137 config.go:182] Loaded profile config "ha-683480": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0603 11:04:58.089526   31137 status.go:255] checking status of ha-683480 ...
	I0603 11:04:58.090010   31137 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 11:04:58.090054   31137 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 11:04:58.109905   31137 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43523
	I0603 11:04:58.110502   31137 main.go:141] libmachine: () Calling .GetVersion
	I0603 11:04:58.111197   31137 main.go:141] libmachine: Using API Version  1
	I0603 11:04:58.111233   31137 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 11:04:58.111633   31137 main.go:141] libmachine: () Calling .GetMachineName
	I0603 11:04:58.111831   31137 main.go:141] libmachine: (ha-683480) Calling .GetState
	I0603 11:04:58.113695   31137 status.go:330] ha-683480 host status = "Running" (err=<nil>)
	I0603 11:04:58.113712   31137 host.go:66] Checking if "ha-683480" exists ...
	I0603 11:04:58.114052   31137 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 11:04:58.114086   31137 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 11:04:58.129266   31137 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32909
	I0603 11:04:58.129648   31137 main.go:141] libmachine: () Calling .GetVersion
	I0603 11:04:58.130078   31137 main.go:141] libmachine: Using API Version  1
	I0603 11:04:58.130102   31137 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 11:04:58.130413   31137 main.go:141] libmachine: () Calling .GetMachineName
	I0603 11:04:58.130608   31137 main.go:141] libmachine: (ha-683480) Calling .GetIP
	I0603 11:04:58.133614   31137 main.go:141] libmachine: (ha-683480) DBG | domain ha-683480 has defined MAC address 52:54:00:e5:3f:6a in network mk-ha-683480
	I0603 11:04:58.134049   31137 main.go:141] libmachine: (ha-683480) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:3f:6a", ip: ""} in network mk-ha-683480: {Iface:virbr1 ExpiryTime:2024-06-03 11:56:28 +0000 UTC Type:0 Mac:52:54:00:e5:3f:6a Iaid: IPaddr:192.168.39.116 Prefix:24 Hostname:ha-683480 Clientid:01:52:54:00:e5:3f:6a}
	I0603 11:04:58.134089   31137 main.go:141] libmachine: (ha-683480) DBG | domain ha-683480 has defined IP address 192.168.39.116 and MAC address 52:54:00:e5:3f:6a in network mk-ha-683480
	I0603 11:04:58.134203   31137 host.go:66] Checking if "ha-683480" exists ...
	I0603 11:04:58.134495   31137 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 11:04:58.134531   31137 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 11:04:58.149058   31137 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38739
	I0603 11:04:58.149433   31137 main.go:141] libmachine: () Calling .GetVersion
	I0603 11:04:58.149883   31137 main.go:141] libmachine: Using API Version  1
	I0603 11:04:58.149901   31137 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 11:04:58.150271   31137 main.go:141] libmachine: () Calling .GetMachineName
	I0603 11:04:58.150490   31137 main.go:141] libmachine: (ha-683480) Calling .DriverName
	I0603 11:04:58.150714   31137 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0603 11:04:58.150737   31137 main.go:141] libmachine: (ha-683480) Calling .GetSSHHostname
	I0603 11:04:58.153275   31137 main.go:141] libmachine: (ha-683480) DBG | domain ha-683480 has defined MAC address 52:54:00:e5:3f:6a in network mk-ha-683480
	I0603 11:04:58.153654   31137 main.go:141] libmachine: (ha-683480) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:3f:6a", ip: ""} in network mk-ha-683480: {Iface:virbr1 ExpiryTime:2024-06-03 11:56:28 +0000 UTC Type:0 Mac:52:54:00:e5:3f:6a Iaid: IPaddr:192.168.39.116 Prefix:24 Hostname:ha-683480 Clientid:01:52:54:00:e5:3f:6a}
	I0603 11:04:58.153688   31137 main.go:141] libmachine: (ha-683480) DBG | domain ha-683480 has defined IP address 192.168.39.116 and MAC address 52:54:00:e5:3f:6a in network mk-ha-683480
	I0603 11:04:58.153821   31137 main.go:141] libmachine: (ha-683480) Calling .GetSSHPort
	I0603 11:04:58.153983   31137 main.go:141] libmachine: (ha-683480) Calling .GetSSHKeyPath
	I0603 11:04:58.154105   31137 main.go:141] libmachine: (ha-683480) Calling .GetSSHUsername
	I0603 11:04:58.154215   31137 sshutil.go:53] new ssh client: &{IP:192.168.39.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19008-7755/.minikube/machines/ha-683480/id_rsa Username:docker}
	I0603 11:04:58.235006   31137 ssh_runner.go:195] Run: systemctl --version
	I0603 11:04:58.241857   31137 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0603 11:04:58.257123   31137 kubeconfig.go:125] found "ha-683480" server: "https://192.168.39.254:8443"
	I0603 11:04:58.257149   31137 api_server.go:166] Checking apiserver status ...
	I0603 11:04:58.257183   31137 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 11:04:58.270213   31137 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1163/cgroup
	W0603 11:04:58.279339   31137 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1163/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0603 11:04:58.279372   31137 ssh_runner.go:195] Run: ls
	I0603 11:04:58.283796   31137 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0603 11:04:58.287993   31137 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0603 11:04:58.288020   31137 status.go:422] ha-683480 apiserver status = Running (err=<nil>)
	I0603 11:04:58.288033   31137 status.go:257] ha-683480 status: &{Name:ha-683480 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0603 11:04:58.288057   31137 status.go:255] checking status of ha-683480-m02 ...
	I0603 11:04:58.288434   31137 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 11:04:58.288472   31137 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 11:04:58.303242   31137 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46357
	I0603 11:04:58.303572   31137 main.go:141] libmachine: () Calling .GetVersion
	I0603 11:04:58.304019   31137 main.go:141] libmachine: Using API Version  1
	I0603 11:04:58.304042   31137 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 11:04:58.304399   31137 main.go:141] libmachine: () Calling .GetMachineName
	I0603 11:04:58.304650   31137 main.go:141] libmachine: (ha-683480-m02) Calling .GetState
	I0603 11:04:58.306287   31137 status.go:330] ha-683480-m02 host status = "Stopped" (err=<nil>)
	I0603 11:04:58.306302   31137 status.go:343] host is not running, skipping remaining checks
	I0603 11:04:58.306309   31137 status.go:257] ha-683480-m02 status: &{Name:ha-683480-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0603 11:04:58.306328   31137 status.go:255] checking status of ha-683480-m03 ...
	I0603 11:04:58.306610   31137 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 11:04:58.306648   31137 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 11:04:58.321145   31137 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40553
	I0603 11:04:58.321571   31137 main.go:141] libmachine: () Calling .GetVersion
	I0603 11:04:58.322046   31137 main.go:141] libmachine: Using API Version  1
	I0603 11:04:58.322067   31137 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 11:04:58.322323   31137 main.go:141] libmachine: () Calling .GetMachineName
	I0603 11:04:58.322513   31137 main.go:141] libmachine: (ha-683480-m03) Calling .GetState
	I0603 11:04:58.323915   31137 status.go:330] ha-683480-m03 host status = "Running" (err=<nil>)
	I0603 11:04:58.323932   31137 host.go:66] Checking if "ha-683480-m03" exists ...
	I0603 11:04:58.324264   31137 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 11:04:58.324297   31137 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 11:04:58.338742   31137 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35967
	I0603 11:04:58.339135   31137 main.go:141] libmachine: () Calling .GetVersion
	I0603 11:04:58.339547   31137 main.go:141] libmachine: Using API Version  1
	I0603 11:04:58.339568   31137 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 11:04:58.339851   31137 main.go:141] libmachine: () Calling .GetMachineName
	I0603 11:04:58.340037   31137 main.go:141] libmachine: (ha-683480-m03) Calling .GetIP
	I0603 11:04:58.342707   31137 main.go:141] libmachine: (ha-683480-m03) DBG | domain ha-683480-m03 has defined MAC address 52:54:00:b4:3e:89 in network mk-ha-683480
	I0603 11:04:58.343171   31137 main.go:141] libmachine: (ha-683480-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:3e:89", ip: ""} in network mk-ha-683480: {Iface:virbr1 ExpiryTime:2024-06-03 11:59:47 +0000 UTC Type:0 Mac:52:54:00:b4:3e:89 Iaid: IPaddr:192.168.39.131 Prefix:24 Hostname:ha-683480-m03 Clientid:01:52:54:00:b4:3e:89}
	I0603 11:04:58.343197   31137 main.go:141] libmachine: (ha-683480-m03) DBG | domain ha-683480-m03 has defined IP address 192.168.39.131 and MAC address 52:54:00:b4:3e:89 in network mk-ha-683480
	I0603 11:04:58.343354   31137 host.go:66] Checking if "ha-683480-m03" exists ...
	I0603 11:04:58.343720   31137 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 11:04:58.343758   31137 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 11:04:58.357685   31137 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39701
	I0603 11:04:58.358063   31137 main.go:141] libmachine: () Calling .GetVersion
	I0603 11:04:58.358467   31137 main.go:141] libmachine: Using API Version  1
	I0603 11:04:58.358491   31137 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 11:04:58.358816   31137 main.go:141] libmachine: () Calling .GetMachineName
	I0603 11:04:58.358997   31137 main.go:141] libmachine: (ha-683480-m03) Calling .DriverName
	I0603 11:04:58.359186   31137 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0603 11:04:58.359216   31137 main.go:141] libmachine: (ha-683480-m03) Calling .GetSSHHostname
	I0603 11:04:58.361800   31137 main.go:141] libmachine: (ha-683480-m03) DBG | domain ha-683480-m03 has defined MAC address 52:54:00:b4:3e:89 in network mk-ha-683480
	I0603 11:04:58.362213   31137 main.go:141] libmachine: (ha-683480-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:3e:89", ip: ""} in network mk-ha-683480: {Iface:virbr1 ExpiryTime:2024-06-03 11:59:47 +0000 UTC Type:0 Mac:52:54:00:b4:3e:89 Iaid: IPaddr:192.168.39.131 Prefix:24 Hostname:ha-683480-m03 Clientid:01:52:54:00:b4:3e:89}
	I0603 11:04:58.362234   31137 main.go:141] libmachine: (ha-683480-m03) DBG | domain ha-683480-m03 has defined IP address 192.168.39.131 and MAC address 52:54:00:b4:3e:89 in network mk-ha-683480
	I0603 11:04:58.362360   31137 main.go:141] libmachine: (ha-683480-m03) Calling .GetSSHPort
	I0603 11:04:58.362527   31137 main.go:141] libmachine: (ha-683480-m03) Calling .GetSSHKeyPath
	I0603 11:04:58.362636   31137 main.go:141] libmachine: (ha-683480-m03) Calling .GetSSHUsername
	I0603 11:04:58.362760   31137 sshutil.go:53] new ssh client: &{IP:192.168.39.131 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19008-7755/.minikube/machines/ha-683480-m03/id_rsa Username:docker}
	I0603 11:04:58.444927   31137 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0603 11:04:58.459857   31137 kubeconfig.go:125] found "ha-683480" server: "https://192.168.39.254:8443"
	I0603 11:04:58.459888   31137 api_server.go:166] Checking apiserver status ...
	I0603 11:04:58.459928   31137 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 11:04:58.474492   31137 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1522/cgroup
	W0603 11:04:58.484057   31137 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1522/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0603 11:04:58.484101   31137 ssh_runner.go:195] Run: ls
	I0603 11:04:58.488516   31137 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0603 11:04:58.492801   31137 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0603 11:04:58.492819   31137 status.go:422] ha-683480-m03 apiserver status = Running (err=<nil>)
	I0603 11:04:58.492827   31137 status.go:257] ha-683480-m03 status: &{Name:ha-683480-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0603 11:04:58.492841   31137 status.go:255] checking status of ha-683480-m04 ...
	I0603 11:04:58.493128   31137 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 11:04:58.493156   31137 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 11:04:58.509031   31137 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34305
	I0603 11:04:58.509427   31137 main.go:141] libmachine: () Calling .GetVersion
	I0603 11:04:58.509869   31137 main.go:141] libmachine: Using API Version  1
	I0603 11:04:58.509896   31137 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 11:04:58.510213   31137 main.go:141] libmachine: () Calling .GetMachineName
	I0603 11:04:58.510411   31137 main.go:141] libmachine: (ha-683480-m04) Calling .GetState
	I0603 11:04:58.511922   31137 status.go:330] ha-683480-m04 host status = "Running" (err=<nil>)
	I0603 11:04:58.511936   31137 host.go:66] Checking if "ha-683480-m04" exists ...
	I0603 11:04:58.512293   31137 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 11:04:58.512336   31137 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 11:04:58.527533   31137 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39645
	I0603 11:04:58.527932   31137 main.go:141] libmachine: () Calling .GetVersion
	I0603 11:04:58.528507   31137 main.go:141] libmachine: Using API Version  1
	I0603 11:04:58.528537   31137 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 11:04:58.528855   31137 main.go:141] libmachine: () Calling .GetMachineName
	I0603 11:04:58.529045   31137 main.go:141] libmachine: (ha-683480-m04) Calling .GetIP
	I0603 11:04:58.531779   31137 main.go:141] libmachine: (ha-683480-m04) DBG | domain ha-683480-m04 has defined MAC address 52:54:00:ed:4a:53 in network mk-ha-683480
	I0603 11:04:58.532235   31137 main.go:141] libmachine: (ha-683480-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ed:4a:53", ip: ""} in network mk-ha-683480: {Iface:virbr1 ExpiryTime:2024-06-03 12:01:12 +0000 UTC Type:0 Mac:52:54:00:ed:4a:53 Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:ha-683480-m04 Clientid:01:52:54:00:ed:4a:53}
	I0603 11:04:58.532269   31137 main.go:141] libmachine: (ha-683480-m04) DBG | domain ha-683480-m04 has defined IP address 192.168.39.206 and MAC address 52:54:00:ed:4a:53 in network mk-ha-683480
	I0603 11:04:58.532400   31137 host.go:66] Checking if "ha-683480-m04" exists ...
	I0603 11:04:58.532670   31137 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 11:04:58.532705   31137 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 11:04:58.546773   31137 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36709
	I0603 11:04:58.547133   31137 main.go:141] libmachine: () Calling .GetVersion
	I0603 11:04:58.547564   31137 main.go:141] libmachine: Using API Version  1
	I0603 11:04:58.547589   31137 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 11:04:58.547904   31137 main.go:141] libmachine: () Calling .GetMachineName
	I0603 11:04:58.548085   31137 main.go:141] libmachine: (ha-683480-m04) Calling .DriverName
	I0603 11:04:58.548270   31137 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0603 11:04:58.548294   31137 main.go:141] libmachine: (ha-683480-m04) Calling .GetSSHHostname
	I0603 11:04:58.550914   31137 main.go:141] libmachine: (ha-683480-m04) DBG | domain ha-683480-m04 has defined MAC address 52:54:00:ed:4a:53 in network mk-ha-683480
	I0603 11:04:58.551328   31137 main.go:141] libmachine: (ha-683480-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ed:4a:53", ip: ""} in network mk-ha-683480: {Iface:virbr1 ExpiryTime:2024-06-03 12:01:12 +0000 UTC Type:0 Mac:52:54:00:ed:4a:53 Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:ha-683480-m04 Clientid:01:52:54:00:ed:4a:53}
	I0603 11:04:58.551354   31137 main.go:141] libmachine: (ha-683480-m04) DBG | domain ha-683480-m04 has defined IP address 192.168.39.206 and MAC address 52:54:00:ed:4a:53 in network mk-ha-683480
	I0603 11:04:58.551472   31137 main.go:141] libmachine: (ha-683480-m04) Calling .GetSSHPort
	I0603 11:04:58.551623   31137 main.go:141] libmachine: (ha-683480-m04) Calling .GetSSHKeyPath
	I0603 11:04:58.551749   31137 main.go:141] libmachine: (ha-683480-m04) Calling .GetSSHUsername
	I0603 11:04:58.551880   31137 sshutil.go:53] new ssh client: &{IP:192.168.39.206 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19008-7755/.minikube/machines/ha-683480-m04/id_rsa Username:docker}
	I0603 11:04:58.630787   31137 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0603 11:04:58.644841   31137 status.go:257] ha-683480-m04 status: &{Name:ha-683480-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
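By this third run the kvm2 driver's GetState call reports ha-683480-m02 as "Stopped", so the SSH, disk and kubelet probes are skipped ("host is not running, skipping remaining checks") and the command finishes in ~0.6s with exit status 7 rather than stalling on a dead SSH dial as in the exit status 3 run above. Because the driver is kvm2, the same state can be read directly from libvirt on the test host; a sketch, assuming the libvirt domains are named after the machines as the DHCP-lease hostnames above suggest:

    # list the profile's domains and their power state
    sudo virsh list --all | grep ha-683480

    # query the stopped control plane directly; "shut off" corresponds to Stopped here
    sudo virsh domstate ha-683480-m02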
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-683480 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-683480 status -v=7 --alsologtostderr: exit status 7 (597.559679ms)

                                                
                                                
-- stdout --
	ha-683480
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-683480-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-683480-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-683480-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0603 11:05:07.842353   31242 out.go:291] Setting OutFile to fd 1 ...
	I0603 11:05:07.842580   31242 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0603 11:05:07.842588   31242 out.go:304] Setting ErrFile to fd 2...
	I0603 11:05:07.842592   31242 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0603 11:05:07.842783   31242 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19008-7755/.minikube/bin
	I0603 11:05:07.842938   31242 out.go:298] Setting JSON to false
	I0603 11:05:07.842959   31242 mustload.go:65] Loading cluster: ha-683480
	I0603 11:05:07.842991   31242 notify.go:220] Checking for updates...
	I0603 11:05:07.843306   31242 config.go:182] Loaded profile config "ha-683480": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0603 11:05:07.843319   31242 status.go:255] checking status of ha-683480 ...
	I0603 11:05:07.843701   31242 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 11:05:07.843748   31242 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 11:05:07.862336   31242 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36629
	I0603 11:05:07.862787   31242 main.go:141] libmachine: () Calling .GetVersion
	I0603 11:05:07.863347   31242 main.go:141] libmachine: Using API Version  1
	I0603 11:05:07.863403   31242 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 11:05:07.863778   31242 main.go:141] libmachine: () Calling .GetMachineName
	I0603 11:05:07.863988   31242 main.go:141] libmachine: (ha-683480) Calling .GetState
	I0603 11:05:07.865391   31242 status.go:330] ha-683480 host status = "Running" (err=<nil>)
	I0603 11:05:07.865407   31242 host.go:66] Checking if "ha-683480" exists ...
	I0603 11:05:07.865686   31242 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 11:05:07.865721   31242 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 11:05:07.880194   31242 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41221
	I0603 11:05:07.880638   31242 main.go:141] libmachine: () Calling .GetVersion
	I0603 11:05:07.881124   31242 main.go:141] libmachine: Using API Version  1
	I0603 11:05:07.881157   31242 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 11:05:07.881491   31242 main.go:141] libmachine: () Calling .GetMachineName
	I0603 11:05:07.881679   31242 main.go:141] libmachine: (ha-683480) Calling .GetIP
	I0603 11:05:07.884426   31242 main.go:141] libmachine: (ha-683480) DBG | domain ha-683480 has defined MAC address 52:54:00:e5:3f:6a in network mk-ha-683480
	I0603 11:05:07.884820   31242 main.go:141] libmachine: (ha-683480) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:3f:6a", ip: ""} in network mk-ha-683480: {Iface:virbr1 ExpiryTime:2024-06-03 11:56:28 +0000 UTC Type:0 Mac:52:54:00:e5:3f:6a Iaid: IPaddr:192.168.39.116 Prefix:24 Hostname:ha-683480 Clientid:01:52:54:00:e5:3f:6a}
	I0603 11:05:07.884845   31242 main.go:141] libmachine: (ha-683480) DBG | domain ha-683480 has defined IP address 192.168.39.116 and MAC address 52:54:00:e5:3f:6a in network mk-ha-683480
	I0603 11:05:07.885017   31242 host.go:66] Checking if "ha-683480" exists ...
	I0603 11:05:07.885305   31242 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 11:05:07.885343   31242 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 11:05:07.899776   31242 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41991
	I0603 11:05:07.900173   31242 main.go:141] libmachine: () Calling .GetVersion
	I0603 11:05:07.900623   31242 main.go:141] libmachine: Using API Version  1
	I0603 11:05:07.900642   31242 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 11:05:07.900946   31242 main.go:141] libmachine: () Calling .GetMachineName
	I0603 11:05:07.901103   31242 main.go:141] libmachine: (ha-683480) Calling .DriverName
	I0603 11:05:07.901288   31242 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0603 11:05:07.901310   31242 main.go:141] libmachine: (ha-683480) Calling .GetSSHHostname
	I0603 11:05:07.903882   31242 main.go:141] libmachine: (ha-683480) DBG | domain ha-683480 has defined MAC address 52:54:00:e5:3f:6a in network mk-ha-683480
	I0603 11:05:07.904250   31242 main.go:141] libmachine: (ha-683480) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:3f:6a", ip: ""} in network mk-ha-683480: {Iface:virbr1 ExpiryTime:2024-06-03 11:56:28 +0000 UTC Type:0 Mac:52:54:00:e5:3f:6a Iaid: IPaddr:192.168.39.116 Prefix:24 Hostname:ha-683480 Clientid:01:52:54:00:e5:3f:6a}
	I0603 11:05:07.904290   31242 main.go:141] libmachine: (ha-683480) DBG | domain ha-683480 has defined IP address 192.168.39.116 and MAC address 52:54:00:e5:3f:6a in network mk-ha-683480
	I0603 11:05:07.904420   31242 main.go:141] libmachine: (ha-683480) Calling .GetSSHPort
	I0603 11:05:07.904582   31242 main.go:141] libmachine: (ha-683480) Calling .GetSSHKeyPath
	I0603 11:05:07.904704   31242 main.go:141] libmachine: (ha-683480) Calling .GetSSHUsername
	I0603 11:05:07.904823   31242 sshutil.go:53] new ssh client: &{IP:192.168.39.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19008-7755/.minikube/machines/ha-683480/id_rsa Username:docker}
	I0603 11:05:07.987422   31242 ssh_runner.go:195] Run: systemctl --version
	I0603 11:05:07.993842   31242 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0603 11:05:08.009085   31242 kubeconfig.go:125] found "ha-683480" server: "https://192.168.39.254:8443"
	I0603 11:05:08.009122   31242 api_server.go:166] Checking apiserver status ...
	I0603 11:05:08.009159   31242 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 11:05:08.025424   31242 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1163/cgroup
	W0603 11:05:08.036001   31242 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1163/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0603 11:05:08.036075   31242 ssh_runner.go:195] Run: ls
	I0603 11:05:08.040728   31242 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0603 11:05:08.045049   31242 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0603 11:05:08.045074   31242 status.go:422] ha-683480 apiserver status = Running (err=<nil>)
	I0603 11:05:08.045086   31242 status.go:257] ha-683480 status: &{Name:ha-683480 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0603 11:05:08.045106   31242 status.go:255] checking status of ha-683480-m02 ...
	I0603 11:05:08.045420   31242 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 11:05:08.045460   31242 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 11:05:08.060157   31242 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38255
	I0603 11:05:08.060636   31242 main.go:141] libmachine: () Calling .GetVersion
	I0603 11:05:08.061211   31242 main.go:141] libmachine: Using API Version  1
	I0603 11:05:08.061234   31242 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 11:05:08.061544   31242 main.go:141] libmachine: () Calling .GetMachineName
	I0603 11:05:08.061722   31242 main.go:141] libmachine: (ha-683480-m02) Calling .GetState
	I0603 11:05:08.063207   31242 status.go:330] ha-683480-m02 host status = "Stopped" (err=<nil>)
	I0603 11:05:08.063222   31242 status.go:343] host is not running, skipping remaining checks
	I0603 11:05:08.063230   31242 status.go:257] ha-683480-m02 status: &{Name:ha-683480-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0603 11:05:08.063250   31242 status.go:255] checking status of ha-683480-m03 ...
	I0603 11:05:08.063668   31242 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 11:05:08.063709   31242 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 11:05:08.078006   31242 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43827
	I0603 11:05:08.078516   31242 main.go:141] libmachine: () Calling .GetVersion
	I0603 11:05:08.079120   31242 main.go:141] libmachine: Using API Version  1
	I0603 11:05:08.079148   31242 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 11:05:08.079497   31242 main.go:141] libmachine: () Calling .GetMachineName
	I0603 11:05:08.079694   31242 main.go:141] libmachine: (ha-683480-m03) Calling .GetState
	I0603 11:05:08.081303   31242 status.go:330] ha-683480-m03 host status = "Running" (err=<nil>)
	I0603 11:05:08.081318   31242 host.go:66] Checking if "ha-683480-m03" exists ...
	I0603 11:05:08.081615   31242 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 11:05:08.081650   31242 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 11:05:08.096948   31242 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44087
	I0603 11:05:08.097340   31242 main.go:141] libmachine: () Calling .GetVersion
	I0603 11:05:08.097809   31242 main.go:141] libmachine: Using API Version  1
	I0603 11:05:08.097828   31242 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 11:05:08.098081   31242 main.go:141] libmachine: () Calling .GetMachineName
	I0603 11:05:08.098249   31242 main.go:141] libmachine: (ha-683480-m03) Calling .GetIP
	I0603 11:05:08.100742   31242 main.go:141] libmachine: (ha-683480-m03) DBG | domain ha-683480-m03 has defined MAC address 52:54:00:b4:3e:89 in network mk-ha-683480
	I0603 11:05:08.101203   31242 main.go:141] libmachine: (ha-683480-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:3e:89", ip: ""} in network mk-ha-683480: {Iface:virbr1 ExpiryTime:2024-06-03 11:59:47 +0000 UTC Type:0 Mac:52:54:00:b4:3e:89 Iaid: IPaddr:192.168.39.131 Prefix:24 Hostname:ha-683480-m03 Clientid:01:52:54:00:b4:3e:89}
	I0603 11:05:08.101227   31242 main.go:141] libmachine: (ha-683480-m03) DBG | domain ha-683480-m03 has defined IP address 192.168.39.131 and MAC address 52:54:00:b4:3e:89 in network mk-ha-683480
	I0603 11:05:08.101357   31242 host.go:66] Checking if "ha-683480-m03" exists ...
	I0603 11:05:08.101642   31242 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 11:05:08.101681   31242 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 11:05:08.116246   31242 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33589
	I0603 11:05:08.116741   31242 main.go:141] libmachine: () Calling .GetVersion
	I0603 11:05:08.117233   31242 main.go:141] libmachine: Using API Version  1
	I0603 11:05:08.117254   31242 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 11:05:08.117565   31242 main.go:141] libmachine: () Calling .GetMachineName
	I0603 11:05:08.117740   31242 main.go:141] libmachine: (ha-683480-m03) Calling .DriverName
	I0603 11:05:08.117914   31242 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0603 11:05:08.117946   31242 main.go:141] libmachine: (ha-683480-m03) Calling .GetSSHHostname
	I0603 11:05:08.120275   31242 main.go:141] libmachine: (ha-683480-m03) DBG | domain ha-683480-m03 has defined MAC address 52:54:00:b4:3e:89 in network mk-ha-683480
	I0603 11:05:08.120591   31242 main.go:141] libmachine: (ha-683480-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:3e:89", ip: ""} in network mk-ha-683480: {Iface:virbr1 ExpiryTime:2024-06-03 11:59:47 +0000 UTC Type:0 Mac:52:54:00:b4:3e:89 Iaid: IPaddr:192.168.39.131 Prefix:24 Hostname:ha-683480-m03 Clientid:01:52:54:00:b4:3e:89}
	I0603 11:05:08.120619   31242 main.go:141] libmachine: (ha-683480-m03) DBG | domain ha-683480-m03 has defined IP address 192.168.39.131 and MAC address 52:54:00:b4:3e:89 in network mk-ha-683480
	I0603 11:05:08.120693   31242 main.go:141] libmachine: (ha-683480-m03) Calling .GetSSHPort
	I0603 11:05:08.120873   31242 main.go:141] libmachine: (ha-683480-m03) Calling .GetSSHKeyPath
	I0603 11:05:08.121065   31242 main.go:141] libmachine: (ha-683480-m03) Calling .GetSSHUsername
	I0603 11:05:08.121233   31242 sshutil.go:53] new ssh client: &{IP:192.168.39.131 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19008-7755/.minikube/machines/ha-683480-m03/id_rsa Username:docker}
	I0603 11:05:08.198369   31242 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0603 11:05:08.214105   31242 kubeconfig.go:125] found "ha-683480" server: "https://192.168.39.254:8443"
	I0603 11:05:08.214132   31242 api_server.go:166] Checking apiserver status ...
	I0603 11:05:08.214168   31242 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 11:05:08.228747   31242 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1522/cgroup
	W0603 11:05:08.238421   31242 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1522/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0603 11:05:08.238479   31242 ssh_runner.go:195] Run: ls
	I0603 11:05:08.243173   31242 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0603 11:05:08.247473   31242 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0603 11:05:08.247491   31242 status.go:422] ha-683480-m03 apiserver status = Running (err=<nil>)
	I0603 11:05:08.247499   31242 status.go:257] ha-683480-m03 status: &{Name:ha-683480-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0603 11:05:08.247511   31242 status.go:255] checking status of ha-683480-m04 ...
	I0603 11:05:08.247799   31242 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 11:05:08.247830   31242 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 11:05:08.262531   31242 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39135
	I0603 11:05:08.262913   31242 main.go:141] libmachine: () Calling .GetVersion
	I0603 11:05:08.263491   31242 main.go:141] libmachine: Using API Version  1
	I0603 11:05:08.263515   31242 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 11:05:08.263819   31242 main.go:141] libmachine: () Calling .GetMachineName
	I0603 11:05:08.264011   31242 main.go:141] libmachine: (ha-683480-m04) Calling .GetState
	I0603 11:05:08.265522   31242 status.go:330] ha-683480-m04 host status = "Running" (err=<nil>)
	I0603 11:05:08.265538   31242 host.go:66] Checking if "ha-683480-m04" exists ...
	I0603 11:05:08.265920   31242 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 11:05:08.265960   31242 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 11:05:08.280082   31242 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37079
	I0603 11:05:08.280410   31242 main.go:141] libmachine: () Calling .GetVersion
	I0603 11:05:08.280869   31242 main.go:141] libmachine: Using API Version  1
	I0603 11:05:08.280889   31242 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 11:05:08.281194   31242 main.go:141] libmachine: () Calling .GetMachineName
	I0603 11:05:08.281390   31242 main.go:141] libmachine: (ha-683480-m04) Calling .GetIP
	I0603 11:05:08.284201   31242 main.go:141] libmachine: (ha-683480-m04) DBG | domain ha-683480-m04 has defined MAC address 52:54:00:ed:4a:53 in network mk-ha-683480
	I0603 11:05:08.284613   31242 main.go:141] libmachine: (ha-683480-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ed:4a:53", ip: ""} in network mk-ha-683480: {Iface:virbr1 ExpiryTime:2024-06-03 12:01:12 +0000 UTC Type:0 Mac:52:54:00:ed:4a:53 Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:ha-683480-m04 Clientid:01:52:54:00:ed:4a:53}
	I0603 11:05:08.284639   31242 main.go:141] libmachine: (ha-683480-m04) DBG | domain ha-683480-m04 has defined IP address 192.168.39.206 and MAC address 52:54:00:ed:4a:53 in network mk-ha-683480
	I0603 11:05:08.284790   31242 host.go:66] Checking if "ha-683480-m04" exists ...
	I0603 11:05:08.285182   31242 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 11:05:08.285226   31242 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 11:05:08.299498   31242 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34375
	I0603 11:05:08.299942   31242 main.go:141] libmachine: () Calling .GetVersion
	I0603 11:05:08.300480   31242 main.go:141] libmachine: Using API Version  1
	I0603 11:05:08.300499   31242 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 11:05:08.300826   31242 main.go:141] libmachine: () Calling .GetMachineName
	I0603 11:05:08.300988   31242 main.go:141] libmachine: (ha-683480-m04) Calling .DriverName
	I0603 11:05:08.301155   31242 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0603 11:05:08.301173   31242 main.go:141] libmachine: (ha-683480-m04) Calling .GetSSHHostname
	I0603 11:05:08.304060   31242 main.go:141] libmachine: (ha-683480-m04) DBG | domain ha-683480-m04 has defined MAC address 52:54:00:ed:4a:53 in network mk-ha-683480
	I0603 11:05:08.304437   31242 main.go:141] libmachine: (ha-683480-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ed:4a:53", ip: ""} in network mk-ha-683480: {Iface:virbr1 ExpiryTime:2024-06-03 12:01:12 +0000 UTC Type:0 Mac:52:54:00:ed:4a:53 Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:ha-683480-m04 Clientid:01:52:54:00:ed:4a:53}
	I0603 11:05:08.304472   31242 main.go:141] libmachine: (ha-683480-m04) DBG | domain ha-683480-m04 has defined IP address 192.168.39.206 and MAC address 52:54:00:ed:4a:53 in network mk-ha-683480
	I0603 11:05:08.304614   31242 main.go:141] libmachine: (ha-683480-m04) Calling .GetSSHPort
	I0603 11:05:08.304805   31242 main.go:141] libmachine: (ha-683480-m04) Calling .GetSSHKeyPath
	I0603 11:05:08.304966   31242 main.go:141] libmachine: (ha-683480-m04) Calling .GetSSHUsername
	I0603 11:05:08.305100   31242 sshutil.go:53] new ssh client: &{IP:192.168.39.206 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19008-7755/.minikube/machines/ha-683480-m04/id_rsa Username:docker}
	I0603 11:05:08.382620   31242 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0603 11:05:08.397766   31242 status.go:257] ha-683480-m04 status: &{Name:ha-683480-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
E0603 11:05:19.215407   15028 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/functional-835483/client.crt: no such file or directory
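The stderr above also shows how each control-plane node's apiserver state is determined: pgrep locates the kube-apiserver process over SSH, the freezer-cgroup lookup fails and is only logged as a warning, and the verdict comes from an HTTPS GET against the load-balanced /healthz endpoint. A rough manual equivalent, using the server URL and pgrep pattern taken from the log (the exact curl flags are an assumption, and /healthz may require credentials if anonymous auth is disabled on the cluster):

  out/minikube-linux-amd64 -p ha-683480 ssh -- sudo pgrep -xnf 'kube-apiserver.*minikube.*'   # apiserver PID on the primary node
  curl -ks https://192.168.39.254:8443/healthz                                                # prints "ok" when the apiserver answers, matching the log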
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-683480 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-683480 status -v=7 --alsologtostderr: exit status 7 (599.658456ms)

                                                
                                                
-- stdout --
	ha-683480
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-683480-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-683480-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-683480-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0603 11:05:20.186711   31362 out.go:291] Setting OutFile to fd 1 ...
	I0603 11:05:20.186853   31362 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0603 11:05:20.186863   31362 out.go:304] Setting ErrFile to fd 2...
	I0603 11:05:20.186870   31362 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0603 11:05:20.187165   31362 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19008-7755/.minikube/bin
	I0603 11:05:20.187396   31362 out.go:298] Setting JSON to false
	I0603 11:05:20.187424   31362 mustload.go:65] Loading cluster: ha-683480
	I0603 11:05:20.187457   31362 notify.go:220] Checking for updates...
	I0603 11:05:20.187854   31362 config.go:182] Loaded profile config "ha-683480": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0603 11:05:20.187872   31362 status.go:255] checking status of ha-683480 ...
	I0603 11:05:20.188419   31362 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 11:05:20.188467   31362 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 11:05:20.207462   31362 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38425
	I0603 11:05:20.207860   31362 main.go:141] libmachine: () Calling .GetVersion
	I0603 11:05:20.208452   31362 main.go:141] libmachine: Using API Version  1
	I0603 11:05:20.208472   31362 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 11:05:20.208904   31362 main.go:141] libmachine: () Calling .GetMachineName
	I0603 11:05:20.209124   31362 main.go:141] libmachine: (ha-683480) Calling .GetState
	I0603 11:05:20.210843   31362 status.go:330] ha-683480 host status = "Running" (err=<nil>)
	I0603 11:05:20.210867   31362 host.go:66] Checking if "ha-683480" exists ...
	I0603 11:05:20.211256   31362 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 11:05:20.211316   31362 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 11:05:20.226475   31362 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37755
	I0603 11:05:20.226876   31362 main.go:141] libmachine: () Calling .GetVersion
	I0603 11:05:20.227363   31362 main.go:141] libmachine: Using API Version  1
	I0603 11:05:20.227384   31362 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 11:05:20.227668   31362 main.go:141] libmachine: () Calling .GetMachineName
	I0603 11:05:20.227844   31362 main.go:141] libmachine: (ha-683480) Calling .GetIP
	I0603 11:05:20.230322   31362 main.go:141] libmachine: (ha-683480) DBG | domain ha-683480 has defined MAC address 52:54:00:e5:3f:6a in network mk-ha-683480
	I0603 11:05:20.230757   31362 main.go:141] libmachine: (ha-683480) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:3f:6a", ip: ""} in network mk-ha-683480: {Iface:virbr1 ExpiryTime:2024-06-03 11:56:28 +0000 UTC Type:0 Mac:52:54:00:e5:3f:6a Iaid: IPaddr:192.168.39.116 Prefix:24 Hostname:ha-683480 Clientid:01:52:54:00:e5:3f:6a}
	I0603 11:05:20.230796   31362 main.go:141] libmachine: (ha-683480) DBG | domain ha-683480 has defined IP address 192.168.39.116 and MAC address 52:54:00:e5:3f:6a in network mk-ha-683480
	I0603 11:05:20.230894   31362 host.go:66] Checking if "ha-683480" exists ...
	I0603 11:05:20.231199   31362 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 11:05:20.231230   31362 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 11:05:20.245487   31362 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42473
	I0603 11:05:20.245857   31362 main.go:141] libmachine: () Calling .GetVersion
	I0603 11:05:20.246256   31362 main.go:141] libmachine: Using API Version  1
	I0603 11:05:20.246275   31362 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 11:05:20.246575   31362 main.go:141] libmachine: () Calling .GetMachineName
	I0603 11:05:20.246719   31362 main.go:141] libmachine: (ha-683480) Calling .DriverName
	I0603 11:05:20.246887   31362 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0603 11:05:20.246911   31362 main.go:141] libmachine: (ha-683480) Calling .GetSSHHostname
	I0603 11:05:20.249599   31362 main.go:141] libmachine: (ha-683480) DBG | domain ha-683480 has defined MAC address 52:54:00:e5:3f:6a in network mk-ha-683480
	I0603 11:05:20.250103   31362 main.go:141] libmachine: (ha-683480) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:3f:6a", ip: ""} in network mk-ha-683480: {Iface:virbr1 ExpiryTime:2024-06-03 11:56:28 +0000 UTC Type:0 Mac:52:54:00:e5:3f:6a Iaid: IPaddr:192.168.39.116 Prefix:24 Hostname:ha-683480 Clientid:01:52:54:00:e5:3f:6a}
	I0603 11:05:20.250122   31362 main.go:141] libmachine: (ha-683480) DBG | domain ha-683480 has defined IP address 192.168.39.116 and MAC address 52:54:00:e5:3f:6a in network mk-ha-683480
	I0603 11:05:20.250270   31362 main.go:141] libmachine: (ha-683480) Calling .GetSSHPort
	I0603 11:05:20.250432   31362 main.go:141] libmachine: (ha-683480) Calling .GetSSHKeyPath
	I0603 11:05:20.250588   31362 main.go:141] libmachine: (ha-683480) Calling .GetSSHUsername
	I0603 11:05:20.250754   31362 sshutil.go:53] new ssh client: &{IP:192.168.39.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19008-7755/.minikube/machines/ha-683480/id_rsa Username:docker}
	I0603 11:05:20.331502   31362 ssh_runner.go:195] Run: systemctl --version
	I0603 11:05:20.338060   31362 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0603 11:05:20.353063   31362 kubeconfig.go:125] found "ha-683480" server: "https://192.168.39.254:8443"
	I0603 11:05:20.353103   31362 api_server.go:166] Checking apiserver status ...
	I0603 11:05:20.353150   31362 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 11:05:20.368866   31362 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1163/cgroup
	W0603 11:05:20.379845   31362 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1163/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0603 11:05:20.379889   31362 ssh_runner.go:195] Run: ls
	I0603 11:05:20.384601   31362 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0603 11:05:20.390633   31362 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0603 11:05:20.390656   31362 status.go:422] ha-683480 apiserver status = Running (err=<nil>)
	I0603 11:05:20.390666   31362 status.go:257] ha-683480 status: &{Name:ha-683480 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0603 11:05:20.390681   31362 status.go:255] checking status of ha-683480-m02 ...
	I0603 11:05:20.390977   31362 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 11:05:20.391019   31362 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 11:05:20.405368   31362 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37877
	I0603 11:05:20.405751   31362 main.go:141] libmachine: () Calling .GetVersion
	I0603 11:05:20.406216   31362 main.go:141] libmachine: Using API Version  1
	I0603 11:05:20.406238   31362 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 11:05:20.406619   31362 main.go:141] libmachine: () Calling .GetMachineName
	I0603 11:05:20.406891   31362 main.go:141] libmachine: (ha-683480-m02) Calling .GetState
	I0603 11:05:20.408488   31362 status.go:330] ha-683480-m02 host status = "Stopped" (err=<nil>)
	I0603 11:05:20.408518   31362 status.go:343] host is not running, skipping remaining checks
	I0603 11:05:20.408527   31362 status.go:257] ha-683480-m02 status: &{Name:ha-683480-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0603 11:05:20.408541   31362 status.go:255] checking status of ha-683480-m03 ...
	I0603 11:05:20.408824   31362 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 11:05:20.408855   31362 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 11:05:20.423073   31362 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33973
	I0603 11:05:20.423486   31362 main.go:141] libmachine: () Calling .GetVersion
	I0603 11:05:20.423898   31362 main.go:141] libmachine: Using API Version  1
	I0603 11:05:20.423916   31362 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 11:05:20.424229   31362 main.go:141] libmachine: () Calling .GetMachineName
	I0603 11:05:20.424406   31362 main.go:141] libmachine: (ha-683480-m03) Calling .GetState
	I0603 11:05:20.425879   31362 status.go:330] ha-683480-m03 host status = "Running" (err=<nil>)
	I0603 11:05:20.425901   31362 host.go:66] Checking if "ha-683480-m03" exists ...
	I0603 11:05:20.426169   31362 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 11:05:20.426216   31362 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 11:05:20.440990   31362 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42011
	I0603 11:05:20.441306   31362 main.go:141] libmachine: () Calling .GetVersion
	I0603 11:05:20.441706   31362 main.go:141] libmachine: Using API Version  1
	I0603 11:05:20.441724   31362 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 11:05:20.441998   31362 main.go:141] libmachine: () Calling .GetMachineName
	I0603 11:05:20.442170   31362 main.go:141] libmachine: (ha-683480-m03) Calling .GetIP
	I0603 11:05:20.444505   31362 main.go:141] libmachine: (ha-683480-m03) DBG | domain ha-683480-m03 has defined MAC address 52:54:00:b4:3e:89 in network mk-ha-683480
	I0603 11:05:20.444853   31362 main.go:141] libmachine: (ha-683480-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:3e:89", ip: ""} in network mk-ha-683480: {Iface:virbr1 ExpiryTime:2024-06-03 11:59:47 +0000 UTC Type:0 Mac:52:54:00:b4:3e:89 Iaid: IPaddr:192.168.39.131 Prefix:24 Hostname:ha-683480-m03 Clientid:01:52:54:00:b4:3e:89}
	I0603 11:05:20.444879   31362 main.go:141] libmachine: (ha-683480-m03) DBG | domain ha-683480-m03 has defined IP address 192.168.39.131 and MAC address 52:54:00:b4:3e:89 in network mk-ha-683480
	I0603 11:05:20.445005   31362 host.go:66] Checking if "ha-683480-m03" exists ...
	I0603 11:05:20.445316   31362 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 11:05:20.445356   31362 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 11:05:20.459559   31362 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34137
	I0603 11:05:20.460017   31362 main.go:141] libmachine: () Calling .GetVersion
	I0603 11:05:20.460495   31362 main.go:141] libmachine: Using API Version  1
	I0603 11:05:20.460524   31362 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 11:05:20.460823   31362 main.go:141] libmachine: () Calling .GetMachineName
	I0603 11:05:20.460996   31362 main.go:141] libmachine: (ha-683480-m03) Calling .DriverName
	I0603 11:05:20.461176   31362 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0603 11:05:20.461193   31362 main.go:141] libmachine: (ha-683480-m03) Calling .GetSSHHostname
	I0603 11:05:20.463954   31362 main.go:141] libmachine: (ha-683480-m03) DBG | domain ha-683480-m03 has defined MAC address 52:54:00:b4:3e:89 in network mk-ha-683480
	I0603 11:05:20.464420   31362 main.go:141] libmachine: (ha-683480-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:3e:89", ip: ""} in network mk-ha-683480: {Iface:virbr1 ExpiryTime:2024-06-03 11:59:47 +0000 UTC Type:0 Mac:52:54:00:b4:3e:89 Iaid: IPaddr:192.168.39.131 Prefix:24 Hostname:ha-683480-m03 Clientid:01:52:54:00:b4:3e:89}
	I0603 11:05:20.464453   31362 main.go:141] libmachine: (ha-683480-m03) DBG | domain ha-683480-m03 has defined IP address 192.168.39.131 and MAC address 52:54:00:b4:3e:89 in network mk-ha-683480
	I0603 11:05:20.464603   31362 main.go:141] libmachine: (ha-683480-m03) Calling .GetSSHPort
	I0603 11:05:20.464837   31362 main.go:141] libmachine: (ha-683480-m03) Calling .GetSSHKeyPath
	I0603 11:05:20.464974   31362 main.go:141] libmachine: (ha-683480-m03) Calling .GetSSHUsername
	I0603 11:05:20.465098   31362 sshutil.go:53] new ssh client: &{IP:192.168.39.131 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19008-7755/.minikube/machines/ha-683480-m03/id_rsa Username:docker}
	I0603 11:05:20.546873   31362 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0603 11:05:20.561375   31362 kubeconfig.go:125] found "ha-683480" server: "https://192.168.39.254:8443"
	I0603 11:05:20.561398   31362 api_server.go:166] Checking apiserver status ...
	I0603 11:05:20.561431   31362 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 11:05:20.574446   31362 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1522/cgroup
	W0603 11:05:20.584165   31362 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1522/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0603 11:05:20.584214   31362 ssh_runner.go:195] Run: ls
	I0603 11:05:20.588711   31362 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0603 11:05:20.592780   31362 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0603 11:05:20.592799   31362 status.go:422] ha-683480-m03 apiserver status = Running (err=<nil>)
	I0603 11:05:20.592806   31362 status.go:257] ha-683480-m03 status: &{Name:ha-683480-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0603 11:05:20.592819   31362 status.go:255] checking status of ha-683480-m04 ...
	I0603 11:05:20.593090   31362 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 11:05:20.593123   31362 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 11:05:20.607449   31362 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38577
	I0603 11:05:20.607848   31362 main.go:141] libmachine: () Calling .GetVersion
	I0603 11:05:20.608284   31362 main.go:141] libmachine: Using API Version  1
	I0603 11:05:20.608304   31362 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 11:05:20.608597   31362 main.go:141] libmachine: () Calling .GetMachineName
	I0603 11:05:20.608776   31362 main.go:141] libmachine: (ha-683480-m04) Calling .GetState
	I0603 11:05:20.610317   31362 status.go:330] ha-683480-m04 host status = "Running" (err=<nil>)
	I0603 11:05:20.610339   31362 host.go:66] Checking if "ha-683480-m04" exists ...
	I0603 11:05:20.610624   31362 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 11:05:20.610680   31362 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 11:05:20.624999   31362 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33701
	I0603 11:05:20.625382   31362 main.go:141] libmachine: () Calling .GetVersion
	I0603 11:05:20.625815   31362 main.go:141] libmachine: Using API Version  1
	I0603 11:05:20.625829   31362 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 11:05:20.626082   31362 main.go:141] libmachine: () Calling .GetMachineName
	I0603 11:05:20.626264   31362 main.go:141] libmachine: (ha-683480-m04) Calling .GetIP
	I0603 11:05:20.628896   31362 main.go:141] libmachine: (ha-683480-m04) DBG | domain ha-683480-m04 has defined MAC address 52:54:00:ed:4a:53 in network mk-ha-683480
	I0603 11:05:20.629303   31362 main.go:141] libmachine: (ha-683480-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ed:4a:53", ip: ""} in network mk-ha-683480: {Iface:virbr1 ExpiryTime:2024-06-03 12:01:12 +0000 UTC Type:0 Mac:52:54:00:ed:4a:53 Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:ha-683480-m04 Clientid:01:52:54:00:ed:4a:53}
	I0603 11:05:20.629352   31362 main.go:141] libmachine: (ha-683480-m04) DBG | domain ha-683480-m04 has defined IP address 192.168.39.206 and MAC address 52:54:00:ed:4a:53 in network mk-ha-683480
	I0603 11:05:20.629494   31362 host.go:66] Checking if "ha-683480-m04" exists ...
	I0603 11:05:20.629802   31362 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 11:05:20.629839   31362 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 11:05:20.643857   31362 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44771
	I0603 11:05:20.644271   31362 main.go:141] libmachine: () Calling .GetVersion
	I0603 11:05:20.644666   31362 main.go:141] libmachine: Using API Version  1
	I0603 11:05:20.644688   31362 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 11:05:20.644960   31362 main.go:141] libmachine: () Calling .GetMachineName
	I0603 11:05:20.645132   31362 main.go:141] libmachine: (ha-683480-m04) Calling .DriverName
	I0603 11:05:20.645306   31362 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0603 11:05:20.645323   31362 main.go:141] libmachine: (ha-683480-m04) Calling .GetSSHHostname
	I0603 11:05:20.647934   31362 main.go:141] libmachine: (ha-683480-m04) DBG | domain ha-683480-m04 has defined MAC address 52:54:00:ed:4a:53 in network mk-ha-683480
	I0603 11:05:20.648289   31362 main.go:141] libmachine: (ha-683480-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ed:4a:53", ip: ""} in network mk-ha-683480: {Iface:virbr1 ExpiryTime:2024-06-03 12:01:12 +0000 UTC Type:0 Mac:52:54:00:ed:4a:53 Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:ha-683480-m04 Clientid:01:52:54:00:ed:4a:53}
	I0603 11:05:20.648315   31362 main.go:141] libmachine: (ha-683480-m04) DBG | domain ha-683480-m04 has defined IP address 192.168.39.206 and MAC address 52:54:00:ed:4a:53 in network mk-ha-683480
	I0603 11:05:20.648473   31362 main.go:141] libmachine: (ha-683480-m04) Calling .GetSSHPort
	I0603 11:05:20.648622   31362 main.go:141] libmachine: (ha-683480-m04) Calling .GetSSHKeyPath
	I0603 11:05:20.648769   31362 main.go:141] libmachine: (ha-683480-m04) Calling .GetSSHUsername
	I0603 11:05:20.648918   31362 sshutil.go:53] new ssh client: &{IP:192.168.39.206 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19008-7755/.minikube/machines/ha-683480-m04/id_rsa Username:docker}
	I0603 11:05:20.730656   31362 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0603 11:05:20.745333   31362 status.go:257] ha-683480-m04 status: &{Name:ha-683480-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:432: failed to run minikube status. args "out/minikube-linux-amd64 -p ha-683480 status -v=7 --alsologtostderr" : exit status 7
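The non-zero exit here is the status command reporting unhealthy components rather than a CLI failure: the output above still lists ha-683480-m02 as Stopped even though "node start m02" was issued (the Audit table in the logs below shows that command with no recorded end time). Re-running the same invocation by hand makes that visible; the echo line is illustrative only:

  out/minikube-linux-amd64 -p ha-683480 status -v=7 --alsologtostderr
  echo "status exit code: $?"   # 7 while m02 remains Stopped, 0 once every node reports Running/Configured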
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-683480 -n ha-683480
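The --format flag used here takes a Go template over the per-node status struct printed in the stderr log above (Name, Host, Kubelet, APIServer, Kubeconfig, and so on). A wider query along the same lines, assuming the same binary and profile (the field selection is illustrative):

  out/minikube-linux-amd64 -p ha-683480 status --format='{{.Name}}: host={{.Host}} kubelet={{.Kubelet}} apiserver={{.APIServer}}'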
helpers_test.go:244: <<< TestMultiControlPlane/serial/RestartSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/RestartSecondaryNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-683480 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-683480 logs -n 25: (1.405367655s)
helpers_test.go:252: TestMultiControlPlane/serial/RestartSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| ssh     | ha-683480 ssh -n                                                                 | ha-683480 | jenkins | v1.33.1 | 03 Jun 24 11:01 UTC | 03 Jun 24 11:01 UTC |
	|         | ha-683480-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-683480 cp ha-683480-m03:/home/docker/cp-test.txt                              | ha-683480 | jenkins | v1.33.1 | 03 Jun 24 11:01 UTC | 03 Jun 24 11:01 UTC |
	|         | ha-683480:/home/docker/cp-test_ha-683480-m03_ha-683480.txt                       |           |         |         |                     |                     |
	| ssh     | ha-683480 ssh -n                                                                 | ha-683480 | jenkins | v1.33.1 | 03 Jun 24 11:01 UTC | 03 Jun 24 11:01 UTC |
	|         | ha-683480-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-683480 ssh -n ha-683480 sudo cat                                              | ha-683480 | jenkins | v1.33.1 | 03 Jun 24 11:01 UTC | 03 Jun 24 11:01 UTC |
	|         | /home/docker/cp-test_ha-683480-m03_ha-683480.txt                                 |           |         |         |                     |                     |
	| cp      | ha-683480 cp ha-683480-m03:/home/docker/cp-test.txt                              | ha-683480 | jenkins | v1.33.1 | 03 Jun 24 11:01 UTC | 03 Jun 24 11:01 UTC |
	|         | ha-683480-m02:/home/docker/cp-test_ha-683480-m03_ha-683480-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-683480 ssh -n                                                                 | ha-683480 | jenkins | v1.33.1 | 03 Jun 24 11:01 UTC | 03 Jun 24 11:01 UTC |
	|         | ha-683480-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-683480 ssh -n ha-683480-m02 sudo cat                                          | ha-683480 | jenkins | v1.33.1 | 03 Jun 24 11:01 UTC | 03 Jun 24 11:01 UTC |
	|         | /home/docker/cp-test_ha-683480-m03_ha-683480-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-683480 cp ha-683480-m03:/home/docker/cp-test.txt                              | ha-683480 | jenkins | v1.33.1 | 03 Jun 24 11:01 UTC | 03 Jun 24 11:01 UTC |
	|         | ha-683480-m04:/home/docker/cp-test_ha-683480-m03_ha-683480-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-683480 ssh -n                                                                 | ha-683480 | jenkins | v1.33.1 | 03 Jun 24 11:01 UTC | 03 Jun 24 11:01 UTC |
	|         | ha-683480-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-683480 ssh -n ha-683480-m04 sudo cat                                          | ha-683480 | jenkins | v1.33.1 | 03 Jun 24 11:01 UTC | 03 Jun 24 11:01 UTC |
	|         | /home/docker/cp-test_ha-683480-m03_ha-683480-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-683480 cp testdata/cp-test.txt                                                | ha-683480 | jenkins | v1.33.1 | 03 Jun 24 11:01 UTC | 03 Jun 24 11:01 UTC |
	|         | ha-683480-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-683480 ssh -n                                                                 | ha-683480 | jenkins | v1.33.1 | 03 Jun 24 11:01 UTC | 03 Jun 24 11:01 UTC |
	|         | ha-683480-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-683480 cp ha-683480-m04:/home/docker/cp-test.txt                              | ha-683480 | jenkins | v1.33.1 | 03 Jun 24 11:01 UTC | 03 Jun 24 11:01 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile1985816295/001/cp-test_ha-683480-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-683480 ssh -n                                                                 | ha-683480 | jenkins | v1.33.1 | 03 Jun 24 11:01 UTC | 03 Jun 24 11:01 UTC |
	|         | ha-683480-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-683480 cp ha-683480-m04:/home/docker/cp-test.txt                              | ha-683480 | jenkins | v1.33.1 | 03 Jun 24 11:01 UTC | 03 Jun 24 11:01 UTC |
	|         | ha-683480:/home/docker/cp-test_ha-683480-m04_ha-683480.txt                       |           |         |         |                     |                     |
	| ssh     | ha-683480 ssh -n                                                                 | ha-683480 | jenkins | v1.33.1 | 03 Jun 24 11:01 UTC | 03 Jun 24 11:01 UTC |
	|         | ha-683480-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-683480 ssh -n ha-683480 sudo cat                                              | ha-683480 | jenkins | v1.33.1 | 03 Jun 24 11:01 UTC | 03 Jun 24 11:01 UTC |
	|         | /home/docker/cp-test_ha-683480-m04_ha-683480.txt                                 |           |         |         |                     |                     |
	| cp      | ha-683480 cp ha-683480-m04:/home/docker/cp-test.txt                              | ha-683480 | jenkins | v1.33.1 | 03 Jun 24 11:01 UTC | 03 Jun 24 11:01 UTC |
	|         | ha-683480-m02:/home/docker/cp-test_ha-683480-m04_ha-683480-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-683480 ssh -n                                                                 | ha-683480 | jenkins | v1.33.1 | 03 Jun 24 11:01 UTC | 03 Jun 24 11:01 UTC |
	|         | ha-683480-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-683480 ssh -n ha-683480-m02 sudo cat                                          | ha-683480 | jenkins | v1.33.1 | 03 Jun 24 11:01 UTC | 03 Jun 24 11:01 UTC |
	|         | /home/docker/cp-test_ha-683480-m04_ha-683480-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-683480 cp ha-683480-m04:/home/docker/cp-test.txt                              | ha-683480 | jenkins | v1.33.1 | 03 Jun 24 11:01 UTC | 03 Jun 24 11:01 UTC |
	|         | ha-683480-m03:/home/docker/cp-test_ha-683480-m04_ha-683480-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-683480 ssh -n                                                                 | ha-683480 | jenkins | v1.33.1 | 03 Jun 24 11:01 UTC | 03 Jun 24 11:01 UTC |
	|         | ha-683480-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-683480 ssh -n ha-683480-m03 sudo cat                                          | ha-683480 | jenkins | v1.33.1 | 03 Jun 24 11:01 UTC | 03 Jun 24 11:01 UTC |
	|         | /home/docker/cp-test_ha-683480-m04_ha-683480-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-683480 node stop m02 -v=7                                                     | ha-683480 | jenkins | v1.33.1 | 03 Jun 24 11:01 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | ha-683480 node start m02 -v=7                                                    | ha-683480 | jenkins | v1.33.1 | 03 Jun 24 11:04 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/06/03 10:56:14
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0603 10:56:14.465928   25542 out.go:291] Setting OutFile to fd 1 ...
	I0603 10:56:14.466039   25542 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0603 10:56:14.466047   25542 out.go:304] Setting ErrFile to fd 2...
	I0603 10:56:14.466051   25542 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0603 10:56:14.466228   25542 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19008-7755/.minikube/bin
	I0603 10:56:14.466732   25542 out.go:298] Setting JSON to false
	I0603 10:56:14.467577   25542 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":2319,"bootTime":1717409855,"procs":176,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1060-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0603 10:56:14.467637   25542 start.go:139] virtualization: kvm guest
	I0603 10:56:14.469737   25542 out.go:177] * [ha-683480] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0603 10:56:14.471061   25542 out.go:177]   - MINIKUBE_LOCATION=19008
	I0603 10:56:14.471060   25542 notify.go:220] Checking for updates...
	I0603 10:56:14.472444   25542 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0603 10:56:14.473752   25542 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19008-7755/kubeconfig
	I0603 10:56:14.474992   25542 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19008-7755/.minikube
	I0603 10:56:14.476189   25542 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0603 10:56:14.477378   25542 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0603 10:56:14.478650   25542 driver.go:392] Setting default libvirt URI to qemu:///system
	I0603 10:56:14.511229   25542 out.go:177] * Using the kvm2 driver based on user configuration
	I0603 10:56:14.512537   25542 start.go:297] selected driver: kvm2
	I0603 10:56:14.512557   25542 start.go:901] validating driver "kvm2" against <nil>
	I0603 10:56:14.512567   25542 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0603 10:56:14.513197   25542 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0603 10:56:14.513254   25542 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19008-7755/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0603 10:56:14.526768   25542 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0603 10:56:14.526811   25542 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0603 10:56:14.526980   25542 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0603 10:56:14.527003   25542 cni.go:84] Creating CNI manager for ""
	I0603 10:56:14.527009   25542 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0603 10:56:14.527018   25542 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0603 10:56:14.527108   25542 start.go:340] cluster config:
	{Name:ha-683480 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:ha-683480 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0603 10:56:14.527207   25542 iso.go:125] acquiring lock: {Name:mkdc8e745fc6a0fd8e502f6ad2510510ae9abf27 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0603 10:56:14.528755   25542 out.go:177] * Starting "ha-683480" primary control-plane node in "ha-683480" cluster
	I0603 10:56:14.529843   25542 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime crio
	I0603 10:56:14.529864   25542 preload.go:147] Found local preload: /home/jenkins/minikube-integration/19008-7755/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4
	I0603 10:56:14.529872   25542 cache.go:56] Caching tarball of preloaded images
	I0603 10:56:14.529928   25542 preload.go:173] Found /home/jenkins/minikube-integration/19008-7755/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0603 10:56:14.529938   25542 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on crio
	I0603 10:56:14.530229   25542 profile.go:143] Saving config to /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/ha-683480/config.json ...
	I0603 10:56:14.530249   25542 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/ha-683480/config.json: {Name:mk0c15c4828c27d5c6cc73cead395c2c3f3ae011 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 10:56:14.530365   25542 start.go:360] acquireMachinesLock for ha-683480: {Name:mk7389fd163f37e28f7d4d842bab151f7e27bc7c Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0603 10:56:14.530391   25542 start.go:364] duration metric: took 14.272µs to acquireMachinesLock for "ha-683480"
	I0603 10:56:14.530403   25542 start.go:93] Provisioning new machine with config: &{Name:ha-683480 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:ha-683480 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0603 10:56:14.530452   25542 start.go:125] createHost starting for "" (driver="kvm2")
	I0603 10:56:14.531975   25542 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0603 10:56:14.532077   25542 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 10:56:14.532120   25542 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 10:56:14.545552   25542 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38521
	I0603 10:56:14.545888   25542 main.go:141] libmachine: () Calling .GetVersion
	I0603 10:56:14.546453   25542 main.go:141] libmachine: Using API Version  1
	I0603 10:56:14.546473   25542 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 10:56:14.546759   25542 main.go:141] libmachine: () Calling .GetMachineName
	I0603 10:56:14.546938   25542 main.go:141] libmachine: (ha-683480) Calling .GetMachineName
	I0603 10:56:14.547090   25542 main.go:141] libmachine: (ha-683480) Calling .DriverName
	I0603 10:56:14.547240   25542 start.go:159] libmachine.API.Create for "ha-683480" (driver="kvm2")
	I0603 10:56:14.547263   25542 client.go:168] LocalClient.Create starting
	I0603 10:56:14.547290   25542 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19008-7755/.minikube/certs/ca.pem
	I0603 10:56:14.547315   25542 main.go:141] libmachine: Decoding PEM data...
	I0603 10:56:14.547328   25542 main.go:141] libmachine: Parsing certificate...
	I0603 10:56:14.547370   25542 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19008-7755/.minikube/certs/cert.pem
	I0603 10:56:14.547386   25542 main.go:141] libmachine: Decoding PEM data...
	I0603 10:56:14.547395   25542 main.go:141] libmachine: Parsing certificate...
	I0603 10:56:14.547418   25542 main.go:141] libmachine: Running pre-create checks...
	I0603 10:56:14.547428   25542 main.go:141] libmachine: (ha-683480) Calling .PreCreateCheck
	I0603 10:56:14.547774   25542 main.go:141] libmachine: (ha-683480) Calling .GetConfigRaw
	I0603 10:56:14.548111   25542 main.go:141] libmachine: Creating machine...
	I0603 10:56:14.548122   25542 main.go:141] libmachine: (ha-683480) Calling .Create
	I0603 10:56:14.548229   25542 main.go:141] libmachine: (ha-683480) Creating KVM machine...
	I0603 10:56:14.549209   25542 main.go:141] libmachine: (ha-683480) DBG | found existing default KVM network
	I0603 10:56:14.549753   25542 main.go:141] libmachine: (ha-683480) DBG | I0603 10:56:14.549647   25565 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000015ac0}
	I0603 10:56:14.549774   25542 main.go:141] libmachine: (ha-683480) DBG | created network xml: 
	I0603 10:56:14.549787   25542 main.go:141] libmachine: (ha-683480) DBG | <network>
	I0603 10:56:14.549793   25542 main.go:141] libmachine: (ha-683480) DBG |   <name>mk-ha-683480</name>
	I0603 10:56:14.549797   25542 main.go:141] libmachine: (ha-683480) DBG |   <dns enable='no'/>
	I0603 10:56:14.549802   25542 main.go:141] libmachine: (ha-683480) DBG |   
	I0603 10:56:14.549807   25542 main.go:141] libmachine: (ha-683480) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0603 10:56:14.549813   25542 main.go:141] libmachine: (ha-683480) DBG |     <dhcp>
	I0603 10:56:14.549818   25542 main.go:141] libmachine: (ha-683480) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0603 10:56:14.549834   25542 main.go:141] libmachine: (ha-683480) DBG |     </dhcp>
	I0603 10:56:14.549843   25542 main.go:141] libmachine: (ha-683480) DBG |   </ip>
	I0603 10:56:14.549849   25542 main.go:141] libmachine: (ha-683480) DBG |   
	I0603 10:56:14.549863   25542 main.go:141] libmachine: (ha-683480) DBG | </network>
	I0603 10:56:14.549872   25542 main.go:141] libmachine: (ha-683480) DBG | 
	I0603 10:56:14.554395   25542 main.go:141] libmachine: (ha-683480) DBG | trying to create private KVM network mk-ha-683480 192.168.39.0/24...
	I0603 10:56:14.616093   25542 main.go:141] libmachine: (ha-683480) DBG | private KVM network mk-ha-683480 192.168.39.0/24 created
	I0603 10:56:14.616122   25542 main.go:141] libmachine: (ha-683480) DBG | I0603 10:56:14.616072   25565 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19008-7755/.minikube
	I0603 10:56:14.616133   25542 main.go:141] libmachine: (ha-683480) Setting up store path in /home/jenkins/minikube-integration/19008-7755/.minikube/machines/ha-683480 ...
	I0603 10:56:14.616148   25542 main.go:141] libmachine: (ha-683480) Building disk image from file:///home/jenkins/minikube-integration/19008-7755/.minikube/cache/iso/amd64/minikube-v1.33.1-1716398070-18934-amd64.iso
	I0603 10:56:14.616279   25542 main.go:141] libmachine: (ha-683480) Downloading /home/jenkins/minikube-integration/19008-7755/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19008-7755/.minikube/cache/iso/amd64/minikube-v1.33.1-1716398070-18934-amd64.iso...
	I0603 10:56:14.843163   25542 main.go:141] libmachine: (ha-683480) DBG | I0603 10:56:14.843021   25565 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19008-7755/.minikube/machines/ha-683480/id_rsa...
	I0603 10:56:14.951771   25542 main.go:141] libmachine: (ha-683480) DBG | I0603 10:56:14.951623   25565 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19008-7755/.minikube/machines/ha-683480/ha-683480.rawdisk...
	I0603 10:56:14.951807   25542 main.go:141] libmachine: (ha-683480) DBG | Writing magic tar header
	I0603 10:56:14.951822   25542 main.go:141] libmachine: (ha-683480) DBG | Writing SSH key tar header
	I0603 10:56:14.951834   25542 main.go:141] libmachine: (ha-683480) DBG | I0603 10:56:14.951781   25565 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19008-7755/.minikube/machines/ha-683480 ...
	I0603 10:56:14.951866   25542 main.go:141] libmachine: (ha-683480) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19008-7755/.minikube/machines/ha-683480
	I0603 10:56:14.951892   25542 main.go:141] libmachine: (ha-683480) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19008-7755/.minikube/machines
	I0603 10:56:14.951909   25542 main.go:141] libmachine: (ha-683480) Setting executable bit set on /home/jenkins/minikube-integration/19008-7755/.minikube/machines/ha-683480 (perms=drwx------)
	I0603 10:56:14.951919   25542 main.go:141] libmachine: (ha-683480) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19008-7755/.minikube
	I0603 10:56:14.951933   25542 main.go:141] libmachine: (ha-683480) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19008-7755
	I0603 10:56:14.951942   25542 main.go:141] libmachine: (ha-683480) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0603 10:56:14.951952   25542 main.go:141] libmachine: (ha-683480) DBG | Checking permissions on dir: /home/jenkins
	I0603 10:56:14.951963   25542 main.go:141] libmachine: (ha-683480) DBG | Checking permissions on dir: /home
	I0603 10:56:14.951975   25542 main.go:141] libmachine: (ha-683480) DBG | Skipping /home - not owner
	I0603 10:56:14.951990   25542 main.go:141] libmachine: (ha-683480) Setting executable bit set on /home/jenkins/minikube-integration/19008-7755/.minikube/machines (perms=drwxr-xr-x)
	I0603 10:56:14.952050   25542 main.go:141] libmachine: (ha-683480) Setting executable bit set on /home/jenkins/minikube-integration/19008-7755/.minikube (perms=drwxr-xr-x)
	I0603 10:56:14.952070   25542 main.go:141] libmachine: (ha-683480) Setting executable bit set on /home/jenkins/minikube-integration/19008-7755 (perms=drwxrwxr-x)
	I0603 10:56:14.952082   25542 main.go:141] libmachine: (ha-683480) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0603 10:56:14.952095   25542 main.go:141] libmachine: (ha-683480) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
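The "Writing magic tar header" / "Writing SSH key tar header" lines above reflect how the driver seeds the raw disk: the file is created sparse at the configured size and a small tar stream carrying the SSH key material is written at its start for the guest to unpack on first boot. The sketch below only illustrates that idea; the file names, the 20000MB size and the in-archive path are assumptions, not the driver's actual layout.

package main

import (
	"archive/tar"
	"log"
	"os"
)

func main() {
	f, err := os.Create("ha-683480.rawdisk")
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()
	// Reserve the full disk size sparsely (DiskSize:20000 in the cluster config).
	if err := f.Truncate(20000 * 1024 * 1024); err != nil {
		log.Fatal(err)
	}
	// Write a small tar stream at offset 0 carrying the SSH public key;
	// the guest image looks for this on first boot.
	key, err := os.ReadFile("id_rsa.pub")
	if err != nil {
		log.Fatal(err)
	}
	tw := tar.NewWriter(f)
	if err := tw.WriteHeader(&tar.Header{Name: ".ssh/authorized_keys", Mode: 0o600, Size: int64(len(key))}); err != nil {
		log.Fatal(err)
	}
	if _, err := tw.Write(key); err != nil {
		log.Fatal(err)
	}
	if err := tw.Close(); err != nil {
		log.Fatal(err)
	}
}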
	I0603 10:56:14.952107   25542 main.go:141] libmachine: (ha-683480) Creating domain...
	I0603 10:56:14.953008   25542 main.go:141] libmachine: (ha-683480) define libvirt domain using xml: 
	I0603 10:56:14.953038   25542 main.go:141] libmachine: (ha-683480) <domain type='kvm'>
	I0603 10:56:14.953047   25542 main.go:141] libmachine: (ha-683480)   <name>ha-683480</name>
	I0603 10:56:14.953058   25542 main.go:141] libmachine: (ha-683480)   <memory unit='MiB'>2200</memory>
	I0603 10:56:14.953066   25542 main.go:141] libmachine: (ha-683480)   <vcpu>2</vcpu>
	I0603 10:56:14.953070   25542 main.go:141] libmachine: (ha-683480)   <features>
	I0603 10:56:14.953075   25542 main.go:141] libmachine: (ha-683480)     <acpi/>
	I0603 10:56:14.953079   25542 main.go:141] libmachine: (ha-683480)     <apic/>
	I0603 10:56:14.953084   25542 main.go:141] libmachine: (ha-683480)     <pae/>
	I0603 10:56:14.953092   25542 main.go:141] libmachine: (ha-683480)     
	I0603 10:56:14.953099   25542 main.go:141] libmachine: (ha-683480)   </features>
	I0603 10:56:14.953103   25542 main.go:141] libmachine: (ha-683480)   <cpu mode='host-passthrough'>
	I0603 10:56:14.953107   25542 main.go:141] libmachine: (ha-683480)   
	I0603 10:56:14.953111   25542 main.go:141] libmachine: (ha-683480)   </cpu>
	I0603 10:56:14.953116   25542 main.go:141] libmachine: (ha-683480)   <os>
	I0603 10:56:14.953120   25542 main.go:141] libmachine: (ha-683480)     <type>hvm</type>
	I0603 10:56:14.953127   25542 main.go:141] libmachine: (ha-683480)     <boot dev='cdrom'/>
	I0603 10:56:14.953131   25542 main.go:141] libmachine: (ha-683480)     <boot dev='hd'/>
	I0603 10:56:14.953138   25542 main.go:141] libmachine: (ha-683480)     <bootmenu enable='no'/>
	I0603 10:56:14.953142   25542 main.go:141] libmachine: (ha-683480)   </os>
	I0603 10:56:14.953147   25542 main.go:141] libmachine: (ha-683480)   <devices>
	I0603 10:56:14.953158   25542 main.go:141] libmachine: (ha-683480)     <disk type='file' device='cdrom'>
	I0603 10:56:14.953165   25542 main.go:141] libmachine: (ha-683480)       <source file='/home/jenkins/minikube-integration/19008-7755/.minikube/machines/ha-683480/boot2docker.iso'/>
	I0603 10:56:14.953171   25542 main.go:141] libmachine: (ha-683480)       <target dev='hdc' bus='scsi'/>
	I0603 10:56:14.953177   25542 main.go:141] libmachine: (ha-683480)       <readonly/>
	I0603 10:56:14.953183   25542 main.go:141] libmachine: (ha-683480)     </disk>
	I0603 10:56:14.953189   25542 main.go:141] libmachine: (ha-683480)     <disk type='file' device='disk'>
	I0603 10:56:14.953199   25542 main.go:141] libmachine: (ha-683480)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0603 10:56:14.953208   25542 main.go:141] libmachine: (ha-683480)       <source file='/home/jenkins/minikube-integration/19008-7755/.minikube/machines/ha-683480/ha-683480.rawdisk'/>
	I0603 10:56:14.953215   25542 main.go:141] libmachine: (ha-683480)       <target dev='hda' bus='virtio'/>
	I0603 10:56:14.953220   25542 main.go:141] libmachine: (ha-683480)     </disk>
	I0603 10:56:14.953225   25542 main.go:141] libmachine: (ha-683480)     <interface type='network'>
	I0603 10:56:14.953231   25542 main.go:141] libmachine: (ha-683480)       <source network='mk-ha-683480'/>
	I0603 10:56:14.953241   25542 main.go:141] libmachine: (ha-683480)       <model type='virtio'/>
	I0603 10:56:14.953246   25542 main.go:141] libmachine: (ha-683480)     </interface>
	I0603 10:56:14.953255   25542 main.go:141] libmachine: (ha-683480)     <interface type='network'>
	I0603 10:56:14.953261   25542 main.go:141] libmachine: (ha-683480)       <source network='default'/>
	I0603 10:56:14.953273   25542 main.go:141] libmachine: (ha-683480)       <model type='virtio'/>
	I0603 10:56:14.953281   25542 main.go:141] libmachine: (ha-683480)     </interface>
	I0603 10:56:14.953285   25542 main.go:141] libmachine: (ha-683480)     <serial type='pty'>
	I0603 10:56:14.953297   25542 main.go:141] libmachine: (ha-683480)       <target port='0'/>
	I0603 10:56:14.953302   25542 main.go:141] libmachine: (ha-683480)     </serial>
	I0603 10:56:14.953307   25542 main.go:141] libmachine: (ha-683480)     <console type='pty'>
	I0603 10:56:14.953314   25542 main.go:141] libmachine: (ha-683480)       <target type='serial' port='0'/>
	I0603 10:56:14.953321   25542 main.go:141] libmachine: (ha-683480)     </console>
	I0603 10:56:14.953328   25542 main.go:141] libmachine: (ha-683480)     <rng model='virtio'>
	I0603 10:56:14.953334   25542 main.go:141] libmachine: (ha-683480)       <backend model='random'>/dev/random</backend>
	I0603 10:56:14.953339   25542 main.go:141] libmachine: (ha-683480)     </rng>
	I0603 10:56:14.953345   25542 main.go:141] libmachine: (ha-683480)     
	I0603 10:56:14.953354   25542 main.go:141] libmachine: (ha-683480)     
	I0603 10:56:14.953359   25542 main.go:141] libmachine: (ha-683480)   </devices>
	I0603 10:56:14.953368   25542 main.go:141] libmachine: (ha-683480) </domain>
	I0603 10:56:14.953380   25542 main.go:141] libmachine: (ha-683480) 
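The counterpart to the network sketch above: defining and booting a domain from XML like the one just printed uses DomainDefineXML followed by Create in the libvirt bindings. Again a sketch under assumptions (XML read from a local "domain.xml", default system URI), not the kvm2 driver's actual code path.

package main

import (
	"log"
	"os"

	libvirt "libvirt.org/go/libvirt"
)

func main() {
	conn, err := libvirt.NewConnect("qemu:///system")
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	// domain.xml is assumed to hold a <domain> definition like the one logged above.
	xml, err := os.ReadFile("domain.xml")
	if err != nil {
		log.Fatal(err)
	}
	dom, err := conn.DomainDefineXML(string(xml)) // "define libvirt domain using xml"
	if err != nil {
		log.Fatal(err)
	}
	defer dom.Free()
	if err := dom.Create(); err != nil { // the later "Creating domain..." step boots it
		log.Fatal(err)
	}
}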
	I0603 10:56:14.957670   25542 main.go:141] libmachine: (ha-683480) DBG | domain ha-683480 has defined MAC address 52:54:00:2d:ce:50 in network default
	I0603 10:56:14.958244   25542 main.go:141] libmachine: (ha-683480) Ensuring networks are active...
	I0603 10:56:14.958260   25542 main.go:141] libmachine: (ha-683480) DBG | domain ha-683480 has defined MAC address 52:54:00:e5:3f:6a in network mk-ha-683480
	I0603 10:56:14.958904   25542 main.go:141] libmachine: (ha-683480) Ensuring network default is active
	I0603 10:56:14.959395   25542 main.go:141] libmachine: (ha-683480) Ensuring network mk-ha-683480 is active
	I0603 10:56:14.959879   25542 main.go:141] libmachine: (ha-683480) Getting domain xml...
	I0603 10:56:14.960577   25542 main.go:141] libmachine: (ha-683480) Creating domain...
	I0603 10:56:16.122048   25542 main.go:141] libmachine: (ha-683480) Waiting to get IP...
	I0603 10:56:16.122806   25542 main.go:141] libmachine: (ha-683480) DBG | domain ha-683480 has defined MAC address 52:54:00:e5:3f:6a in network mk-ha-683480
	I0603 10:56:16.123253   25542 main.go:141] libmachine: (ha-683480) DBG | unable to find current IP address of domain ha-683480 in network mk-ha-683480
	I0603 10:56:16.123298   25542 main.go:141] libmachine: (ha-683480) DBG | I0603 10:56:16.123236   25565 retry.go:31] will retry after 285.048907ms: waiting for machine to come up
	I0603 10:56:16.409805   25542 main.go:141] libmachine: (ha-683480) DBG | domain ha-683480 has defined MAC address 52:54:00:e5:3f:6a in network mk-ha-683480
	I0603 10:56:16.410165   25542 main.go:141] libmachine: (ha-683480) DBG | unable to find current IP address of domain ha-683480 in network mk-ha-683480
	I0603 10:56:16.410203   25542 main.go:141] libmachine: (ha-683480) DBG | I0603 10:56:16.410143   25565 retry.go:31] will retry after 257.029676ms: waiting for machine to come up
	I0603 10:56:16.668480   25542 main.go:141] libmachine: (ha-683480) DBG | domain ha-683480 has defined MAC address 52:54:00:e5:3f:6a in network mk-ha-683480
	I0603 10:56:16.668955   25542 main.go:141] libmachine: (ha-683480) DBG | unable to find current IP address of domain ha-683480 in network mk-ha-683480
	I0603 10:56:16.668994   25542 main.go:141] libmachine: (ha-683480) DBG | I0603 10:56:16.668907   25565 retry.go:31] will retry after 364.079168ms: waiting for machine to come up
	I0603 10:56:17.034445   25542 main.go:141] libmachine: (ha-683480) DBG | domain ha-683480 has defined MAC address 52:54:00:e5:3f:6a in network mk-ha-683480
	I0603 10:56:17.034807   25542 main.go:141] libmachine: (ha-683480) DBG | unable to find current IP address of domain ha-683480 in network mk-ha-683480
	I0603 10:56:17.034831   25542 main.go:141] libmachine: (ha-683480) DBG | I0603 10:56:17.034761   25565 retry.go:31] will retry after 368.572252ms: waiting for machine to come up
	I0603 10:56:17.405421   25542 main.go:141] libmachine: (ha-683480) DBG | domain ha-683480 has defined MAC address 52:54:00:e5:3f:6a in network mk-ha-683480
	I0603 10:56:17.405973   25542 main.go:141] libmachine: (ha-683480) DBG | unable to find current IP address of domain ha-683480 in network mk-ha-683480
	I0603 10:56:17.406014   25542 main.go:141] libmachine: (ha-683480) DBG | I0603 10:56:17.405926   25565 retry.go:31] will retry after 654.377154ms: waiting for machine to come up
	I0603 10:56:18.062010   25542 main.go:141] libmachine: (ha-683480) DBG | domain ha-683480 has defined MAC address 52:54:00:e5:3f:6a in network mk-ha-683480
	I0603 10:56:18.062406   25542 main.go:141] libmachine: (ha-683480) DBG | unable to find current IP address of domain ha-683480 in network mk-ha-683480
	I0603 10:56:18.062443   25542 main.go:141] libmachine: (ha-683480) DBG | I0603 10:56:18.062376   25565 retry.go:31] will retry after 945.231342ms: waiting for machine to come up
	I0603 10:56:19.009418   25542 main.go:141] libmachine: (ha-683480) DBG | domain ha-683480 has defined MAC address 52:54:00:e5:3f:6a in network mk-ha-683480
	I0603 10:56:19.009809   25542 main.go:141] libmachine: (ha-683480) DBG | unable to find current IP address of domain ha-683480 in network mk-ha-683480
	I0603 10:56:19.009856   25542 main.go:141] libmachine: (ha-683480) DBG | I0603 10:56:19.009762   25565 retry.go:31] will retry after 950.938623ms: waiting for machine to come up
	I0603 10:56:19.962347   25542 main.go:141] libmachine: (ha-683480) DBG | domain ha-683480 has defined MAC address 52:54:00:e5:3f:6a in network mk-ha-683480
	I0603 10:56:19.962771   25542 main.go:141] libmachine: (ha-683480) DBG | unable to find current IP address of domain ha-683480 in network mk-ha-683480
	I0603 10:56:19.962792   25542 main.go:141] libmachine: (ha-683480) DBG | I0603 10:56:19.962742   25565 retry.go:31] will retry after 926.994312ms: waiting for machine to come up
	I0603 10:56:20.891027   25542 main.go:141] libmachine: (ha-683480) DBG | domain ha-683480 has defined MAC address 52:54:00:e5:3f:6a in network mk-ha-683480
	I0603 10:56:20.891482   25542 main.go:141] libmachine: (ha-683480) DBG | unable to find current IP address of domain ha-683480 in network mk-ha-683480
	I0603 10:56:20.891503   25542 main.go:141] libmachine: (ha-683480) DBG | I0603 10:56:20.891434   25565 retry.go:31] will retry after 1.168197229s: waiting for machine to come up
	I0603 10:56:22.061741   25542 main.go:141] libmachine: (ha-683480) DBG | domain ha-683480 has defined MAC address 52:54:00:e5:3f:6a in network mk-ha-683480
	I0603 10:56:22.062117   25542 main.go:141] libmachine: (ha-683480) DBG | unable to find current IP address of domain ha-683480 in network mk-ha-683480
	I0603 10:56:22.062148   25542 main.go:141] libmachine: (ha-683480) DBG | I0603 10:56:22.062087   25565 retry.go:31] will retry after 2.194197242s: waiting for machine to come up
	I0603 10:56:24.259388   25542 main.go:141] libmachine: (ha-683480) DBG | domain ha-683480 has defined MAC address 52:54:00:e5:3f:6a in network mk-ha-683480
	I0603 10:56:24.259830   25542 main.go:141] libmachine: (ha-683480) DBG | unable to find current IP address of domain ha-683480 in network mk-ha-683480
	I0603 10:56:24.259845   25542 main.go:141] libmachine: (ha-683480) DBG | I0603 10:56:24.259810   25565 retry.go:31] will retry after 2.004867849s: waiting for machine to come up
	I0603 10:56:26.266608   25542 main.go:141] libmachine: (ha-683480) DBG | domain ha-683480 has defined MAC address 52:54:00:e5:3f:6a in network mk-ha-683480
	I0603 10:56:26.266992   25542 main.go:141] libmachine: (ha-683480) DBG | unable to find current IP address of domain ha-683480 in network mk-ha-683480
	I0603 10:56:26.267013   25542 main.go:141] libmachine: (ha-683480) DBG | I0603 10:56:26.266945   25565 retry.go:31] will retry after 2.227676044s: waiting for machine to come up
	I0603 10:56:28.497291   25542 main.go:141] libmachine: (ha-683480) DBG | domain ha-683480 has defined MAC address 52:54:00:e5:3f:6a in network mk-ha-683480
	I0603 10:56:28.497708   25542 main.go:141] libmachine: (ha-683480) DBG | unable to find current IP address of domain ha-683480 in network mk-ha-683480
	I0603 10:56:28.497730   25542 main.go:141] libmachine: (ha-683480) DBG | I0603 10:56:28.497657   25565 retry.go:31] will retry after 4.28187111s: waiting for machine to come up
	I0603 10:56:32.783402   25542 main.go:141] libmachine: (ha-683480) DBG | domain ha-683480 has defined MAC address 52:54:00:e5:3f:6a in network mk-ha-683480
	I0603 10:56:32.783871   25542 main.go:141] libmachine: (ha-683480) DBG | unable to find current IP address of domain ha-683480 in network mk-ha-683480
	I0603 10:56:32.783891   25542 main.go:141] libmachine: (ha-683480) DBG | I0603 10:56:32.783837   25565 retry.go:31] will retry after 5.257653046s: waiting for machine to come up
	I0603 10:56:38.047163   25542 main.go:141] libmachine: (ha-683480) DBG | domain ha-683480 has defined MAC address 52:54:00:e5:3f:6a in network mk-ha-683480
	I0603 10:56:38.047562   25542 main.go:141] libmachine: (ha-683480) Found IP for machine: 192.168.39.116
	I0603 10:56:38.047579   25542 main.go:141] libmachine: (ha-683480) DBG | domain ha-683480 has current primary IP address 192.168.39.116 and MAC address 52:54:00:e5:3f:6a in network mk-ha-683480
	I0603 10:56:38.047585   25542 main.go:141] libmachine: (ha-683480) Reserving static IP address...
	I0603 10:56:38.047902   25542 main.go:141] libmachine: (ha-683480) DBG | unable to find host DHCP lease matching {name: "ha-683480", mac: "52:54:00:e5:3f:6a", ip: "192.168.39.116"} in network mk-ha-683480
	I0603 10:56:38.115112   25542 main.go:141] libmachine: (ha-683480) Reserved static IP address: 192.168.39.116
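The retry.go lines above poll the DHCP lease table with a growing delay (285ms up to a few seconds) until the domain's MAC acquires an address. A minimal, generic sketch of that kind of capped backoff loop, not the retry.go implementation itself:

package main

import (
	"errors"
	"fmt"
	"time"
)

// waitForIP polls lookup until it returns an address or the deadline passes,
// growing the delay between attempts much like the "will retry after ..." lines above.
func waitForIP(lookup func() (string, error), deadline time.Duration) (string, error) {
	delay := 250 * time.Millisecond // assumed starting interval
	start := time.Now()
	for time.Since(start) < deadline {
		if ip, err := lookup(); err == nil {
			return ip, nil
		}
		fmt.Printf("will retry after %v: waiting for machine to come up\n", delay)
		time.Sleep(delay)
		if delay < 5*time.Second {
			delay += delay / 2 // back off gradually, capped at a few seconds
		}
	}
	return "", errors.New("machine never reported an IP address")
}

func main() {
	attempts := 0
	ip, err := waitForIP(func() (string, error) {
		attempts++
		if attempts < 4 { // stand-in for "unable to find current IP address of domain"
			return "", errors.New("no DHCP lease yet")
		}
		return "192.168.39.116", nil
	}, time.Minute)
	fmt.Println(ip, err)
}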
	I0603 10:56:38.115143   25542 main.go:141] libmachine: (ha-683480) Waiting for SSH to be available...
	I0603 10:56:38.115167   25542 main.go:141] libmachine: (ha-683480) DBG | Getting to WaitForSSH function...
	I0603 10:56:38.117475   25542 main.go:141] libmachine: (ha-683480) DBG | domain ha-683480 has defined MAC address 52:54:00:e5:3f:6a in network mk-ha-683480
	I0603 10:56:38.117779   25542 main.go:141] libmachine: (ha-683480) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:e5:3f:6a", ip: ""} in network mk-ha-683480
	I0603 10:56:38.117810   25542 main.go:141] libmachine: (ha-683480) DBG | unable to find defined IP address of network mk-ha-683480 interface with MAC address 52:54:00:e5:3f:6a
	I0603 10:56:38.117870   25542 main.go:141] libmachine: (ha-683480) DBG | Using SSH client type: external
	I0603 10:56:38.117896   25542 main.go:141] libmachine: (ha-683480) DBG | Using SSH private key: /home/jenkins/minikube-integration/19008-7755/.minikube/machines/ha-683480/id_rsa (-rw-------)
	I0603 10:56:38.117945   25542 main.go:141] libmachine: (ha-683480) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19008-7755/.minikube/machines/ha-683480/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0603 10:56:38.117960   25542 main.go:141] libmachine: (ha-683480) DBG | About to run SSH command:
	I0603 10:56:38.117983   25542 main.go:141] libmachine: (ha-683480) DBG | exit 0
	I0603 10:56:38.121504   25542 main.go:141] libmachine: (ha-683480) DBG | SSH cmd err, output: exit status 255: 
	I0603 10:56:38.121525   25542 main.go:141] libmachine: (ha-683480) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I0603 10:56:38.121532   25542 main.go:141] libmachine: (ha-683480) DBG | command : exit 0
	I0603 10:56:38.121541   25542 main.go:141] libmachine: (ha-683480) DBG | err     : exit status 255
	I0603 10:56:38.121547   25542 main.go:141] libmachine: (ha-683480) DBG | output  : 
	I0603 10:56:41.123142   25542 main.go:141] libmachine: (ha-683480) DBG | Getting to WaitForSSH function...
	I0603 10:56:41.125379   25542 main.go:141] libmachine: (ha-683480) DBG | domain ha-683480 has defined MAC address 52:54:00:e5:3f:6a in network mk-ha-683480
	I0603 10:56:41.125739   25542 main.go:141] libmachine: (ha-683480) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:3f:6a", ip: ""} in network mk-ha-683480: {Iface:virbr1 ExpiryTime:2024-06-03 11:56:28 +0000 UTC Type:0 Mac:52:54:00:e5:3f:6a Iaid: IPaddr:192.168.39.116 Prefix:24 Hostname:ha-683480 Clientid:01:52:54:00:e5:3f:6a}
	I0603 10:56:41.125762   25542 main.go:141] libmachine: (ha-683480) DBG | domain ha-683480 has defined IP address 192.168.39.116 and MAC address 52:54:00:e5:3f:6a in network mk-ha-683480
	I0603 10:56:41.125889   25542 main.go:141] libmachine: (ha-683480) DBG | Using SSH client type: external
	I0603 10:56:41.125919   25542 main.go:141] libmachine: (ha-683480) DBG | Using SSH private key: /home/jenkins/minikube-integration/19008-7755/.minikube/machines/ha-683480/id_rsa (-rw-------)
	I0603 10:56:41.125959   25542 main.go:141] libmachine: (ha-683480) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.116 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19008-7755/.minikube/machines/ha-683480/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0603 10:56:41.125973   25542 main.go:141] libmachine: (ha-683480) DBG | About to run SSH command:
	I0603 10:56:41.125987   25542 main.go:141] libmachine: (ha-683480) DBG | exit 0
	I0603 10:56:41.246929   25542 main.go:141] libmachine: (ha-683480) DBG | SSH cmd err, output: <nil>: 
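The driver treats a clean `exit 0` over SSH as proof the machine is reachable; the first attempt above fails with status 255 because the DHCP lease (and therefore the host address) was not yet known, the retry three seconds later succeeds. A rough equivalent of that probe using golang.org/x/crypto/ssh; the address, user and key path in main are placeholders, not values taken from this run's configuration.

package main

import (
	"log"
	"os"
	"time"

	"golang.org/x/crypto/ssh"
)

// sshReady returns nil once "exit 0" runs cleanly on the guest, mirroring the WaitForSSH probe.
func sshReady(addr, user, keyPath string) error {
	key, err := os.ReadFile(keyPath)
	if err != nil {
		return err
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		return err
	}
	cfg := &ssh.ClientConfig{
		User:            user,
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable only for throwaway test VMs
		Timeout:         10 * time.Second,
	}
	client, err := ssh.Dial("tcp", addr, cfg)
	if err != nil {
		return err
	}
	defer client.Close()
	sess, err := client.NewSession()
	if err != nil {
		return err
	}
	defer sess.Close()
	return sess.Run("exit 0")
}

func main() {
	// Placeholder values for illustration.
	if err := sshReady("192.168.39.116:22", "docker", "id_rsa"); err != nil {
		log.Fatalf("ssh not ready: %v", err)
	}
	log.Println("ssh is available")
}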
	I0603 10:56:41.247185   25542 main.go:141] libmachine: (ha-683480) KVM machine creation complete!
	I0603 10:56:41.247555   25542 main.go:141] libmachine: (ha-683480) Calling .GetConfigRaw
	I0603 10:56:41.248120   25542 main.go:141] libmachine: (ha-683480) Calling .DriverName
	I0603 10:56:41.248311   25542 main.go:141] libmachine: (ha-683480) Calling .DriverName
	I0603 10:56:41.248472   25542 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0603 10:56:41.248487   25542 main.go:141] libmachine: (ha-683480) Calling .GetState
	I0603 10:56:41.249731   25542 main.go:141] libmachine: Detecting operating system of created instance...
	I0603 10:56:41.249747   25542 main.go:141] libmachine: Waiting for SSH to be available...
	I0603 10:56:41.249755   25542 main.go:141] libmachine: Getting to WaitForSSH function...
	I0603 10:56:41.249761   25542 main.go:141] libmachine: (ha-683480) Calling .GetSSHHostname
	I0603 10:56:41.251822   25542 main.go:141] libmachine: (ha-683480) DBG | domain ha-683480 has defined MAC address 52:54:00:e5:3f:6a in network mk-ha-683480
	I0603 10:56:41.252116   25542 main.go:141] libmachine: (ha-683480) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:3f:6a", ip: ""} in network mk-ha-683480: {Iface:virbr1 ExpiryTime:2024-06-03 11:56:28 +0000 UTC Type:0 Mac:52:54:00:e5:3f:6a Iaid: IPaddr:192.168.39.116 Prefix:24 Hostname:ha-683480 Clientid:01:52:54:00:e5:3f:6a}
	I0603 10:56:41.252144   25542 main.go:141] libmachine: (ha-683480) DBG | domain ha-683480 has defined IP address 192.168.39.116 and MAC address 52:54:00:e5:3f:6a in network mk-ha-683480
	I0603 10:56:41.252271   25542 main.go:141] libmachine: (ha-683480) Calling .GetSSHPort
	I0603 10:56:41.252422   25542 main.go:141] libmachine: (ha-683480) Calling .GetSSHKeyPath
	I0603 10:56:41.252565   25542 main.go:141] libmachine: (ha-683480) Calling .GetSSHKeyPath
	I0603 10:56:41.252668   25542 main.go:141] libmachine: (ha-683480) Calling .GetSSHUsername
	I0603 10:56:41.252813   25542 main.go:141] libmachine: Using SSH client type: native
	I0603 10:56:41.253034   25542 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.116 22 <nil> <nil>}
	I0603 10:56:41.253046   25542 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0603 10:56:41.350001   25542 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0603 10:56:41.350019   25542 main.go:141] libmachine: Detecting the provisioner...
	I0603 10:56:41.350025   25542 main.go:141] libmachine: (ha-683480) Calling .GetSSHHostname
	I0603 10:56:41.352309   25542 main.go:141] libmachine: (ha-683480) DBG | domain ha-683480 has defined MAC address 52:54:00:e5:3f:6a in network mk-ha-683480
	I0603 10:56:41.352690   25542 main.go:141] libmachine: (ha-683480) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:3f:6a", ip: ""} in network mk-ha-683480: {Iface:virbr1 ExpiryTime:2024-06-03 11:56:28 +0000 UTC Type:0 Mac:52:54:00:e5:3f:6a Iaid: IPaddr:192.168.39.116 Prefix:24 Hostname:ha-683480 Clientid:01:52:54:00:e5:3f:6a}
	I0603 10:56:41.352716   25542 main.go:141] libmachine: (ha-683480) DBG | domain ha-683480 has defined IP address 192.168.39.116 and MAC address 52:54:00:e5:3f:6a in network mk-ha-683480
	I0603 10:56:41.352889   25542 main.go:141] libmachine: (ha-683480) Calling .GetSSHPort
	I0603 10:56:41.353078   25542 main.go:141] libmachine: (ha-683480) Calling .GetSSHKeyPath
	I0603 10:56:41.353219   25542 main.go:141] libmachine: (ha-683480) Calling .GetSSHKeyPath
	I0603 10:56:41.353356   25542 main.go:141] libmachine: (ha-683480) Calling .GetSSHUsername
	I0603 10:56:41.353537   25542 main.go:141] libmachine: Using SSH client type: native
	I0603 10:56:41.353715   25542 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.116 22 <nil> <nil>}
	I0603 10:56:41.353730   25542 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0603 10:56:41.451228   25542 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0603 10:56:41.451285   25542 main.go:141] libmachine: found compatible host: buildroot
	I0603 10:56:41.451295   25542 main.go:141] libmachine: Provisioning with buildroot...
	I0603 10:56:41.451302   25542 main.go:141] libmachine: (ha-683480) Calling .GetMachineName
	I0603 10:56:41.451520   25542 buildroot.go:166] provisioning hostname "ha-683480"
	I0603 10:56:41.451534   25542 main.go:141] libmachine: (ha-683480) Calling .GetMachineName
	I0603 10:56:41.451680   25542 main.go:141] libmachine: (ha-683480) Calling .GetSSHHostname
	I0603 10:56:41.454319   25542 main.go:141] libmachine: (ha-683480) DBG | domain ha-683480 has defined MAC address 52:54:00:e5:3f:6a in network mk-ha-683480
	I0603 10:56:41.454628   25542 main.go:141] libmachine: (ha-683480) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:3f:6a", ip: ""} in network mk-ha-683480: {Iface:virbr1 ExpiryTime:2024-06-03 11:56:28 +0000 UTC Type:0 Mac:52:54:00:e5:3f:6a Iaid: IPaddr:192.168.39.116 Prefix:24 Hostname:ha-683480 Clientid:01:52:54:00:e5:3f:6a}
	I0603 10:56:41.454654   25542 main.go:141] libmachine: (ha-683480) DBG | domain ha-683480 has defined IP address 192.168.39.116 and MAC address 52:54:00:e5:3f:6a in network mk-ha-683480
	I0603 10:56:41.454777   25542 main.go:141] libmachine: (ha-683480) Calling .GetSSHPort
	I0603 10:56:41.454925   25542 main.go:141] libmachine: (ha-683480) Calling .GetSSHKeyPath
	I0603 10:56:41.455082   25542 main.go:141] libmachine: (ha-683480) Calling .GetSSHKeyPath
	I0603 10:56:41.455211   25542 main.go:141] libmachine: (ha-683480) Calling .GetSSHUsername
	I0603 10:56:41.455344   25542 main.go:141] libmachine: Using SSH client type: native
	I0603 10:56:41.455505   25542 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.116 22 <nil> <nil>}
	I0603 10:56:41.455516   25542 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-683480 && echo "ha-683480" | sudo tee /etc/hostname
	I0603 10:56:41.564791   25542 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-683480
	
	I0603 10:56:41.564821   25542 main.go:141] libmachine: (ha-683480) Calling .GetSSHHostname
	I0603 10:56:41.567404   25542 main.go:141] libmachine: (ha-683480) DBG | domain ha-683480 has defined MAC address 52:54:00:e5:3f:6a in network mk-ha-683480
	I0603 10:56:41.567738   25542 main.go:141] libmachine: (ha-683480) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:3f:6a", ip: ""} in network mk-ha-683480: {Iface:virbr1 ExpiryTime:2024-06-03 11:56:28 +0000 UTC Type:0 Mac:52:54:00:e5:3f:6a Iaid: IPaddr:192.168.39.116 Prefix:24 Hostname:ha-683480 Clientid:01:52:54:00:e5:3f:6a}
	I0603 10:56:41.567766   25542 main.go:141] libmachine: (ha-683480) DBG | domain ha-683480 has defined IP address 192.168.39.116 and MAC address 52:54:00:e5:3f:6a in network mk-ha-683480
	I0603 10:56:41.567905   25542 main.go:141] libmachine: (ha-683480) Calling .GetSSHPort
	I0603 10:56:41.568088   25542 main.go:141] libmachine: (ha-683480) Calling .GetSSHKeyPath
	I0603 10:56:41.568238   25542 main.go:141] libmachine: (ha-683480) Calling .GetSSHKeyPath
	I0603 10:56:41.568414   25542 main.go:141] libmachine: (ha-683480) Calling .GetSSHUsername
	I0603 10:56:41.568578   25542 main.go:141] libmachine: Using SSH client type: native
	I0603 10:56:41.568771   25542 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.116 22 <nil> <nil>}
	I0603 10:56:41.568787   25542 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-683480' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-683480/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-683480' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0603 10:56:41.675419   25542 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0603 10:56:41.675449   25542 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19008-7755/.minikube CaCertPath:/home/jenkins/minikube-integration/19008-7755/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19008-7755/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19008-7755/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19008-7755/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19008-7755/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19008-7755/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19008-7755/.minikube}
	I0603 10:56:41.675467   25542 buildroot.go:174] setting up certificates
	I0603 10:56:41.675476   25542 provision.go:84] configureAuth start
	I0603 10:56:41.675484   25542 main.go:141] libmachine: (ha-683480) Calling .GetMachineName
	I0603 10:56:41.675773   25542 main.go:141] libmachine: (ha-683480) Calling .GetIP
	I0603 10:56:41.677879   25542 main.go:141] libmachine: (ha-683480) DBG | domain ha-683480 has defined MAC address 52:54:00:e5:3f:6a in network mk-ha-683480
	I0603 10:56:41.678224   25542 main.go:141] libmachine: (ha-683480) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:3f:6a", ip: ""} in network mk-ha-683480: {Iface:virbr1 ExpiryTime:2024-06-03 11:56:28 +0000 UTC Type:0 Mac:52:54:00:e5:3f:6a Iaid: IPaddr:192.168.39.116 Prefix:24 Hostname:ha-683480 Clientid:01:52:54:00:e5:3f:6a}
	I0603 10:56:41.678246   25542 main.go:141] libmachine: (ha-683480) DBG | domain ha-683480 has defined IP address 192.168.39.116 and MAC address 52:54:00:e5:3f:6a in network mk-ha-683480
	I0603 10:56:41.678378   25542 main.go:141] libmachine: (ha-683480) Calling .GetSSHHostname
	I0603 10:56:41.680284   25542 main.go:141] libmachine: (ha-683480) DBG | domain ha-683480 has defined MAC address 52:54:00:e5:3f:6a in network mk-ha-683480
	I0603 10:56:41.680554   25542 main.go:141] libmachine: (ha-683480) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:3f:6a", ip: ""} in network mk-ha-683480: {Iface:virbr1 ExpiryTime:2024-06-03 11:56:28 +0000 UTC Type:0 Mac:52:54:00:e5:3f:6a Iaid: IPaddr:192.168.39.116 Prefix:24 Hostname:ha-683480 Clientid:01:52:54:00:e5:3f:6a}
	I0603 10:56:41.680585   25542 main.go:141] libmachine: (ha-683480) DBG | domain ha-683480 has defined IP address 192.168.39.116 and MAC address 52:54:00:e5:3f:6a in network mk-ha-683480
	I0603 10:56:41.680698   25542 provision.go:143] copyHostCerts
	I0603 10:56:41.680736   25542 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19008-7755/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19008-7755/.minikube/key.pem
	I0603 10:56:41.680773   25542 exec_runner.go:144] found /home/jenkins/minikube-integration/19008-7755/.minikube/key.pem, removing ...
	I0603 10:56:41.680784   25542 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19008-7755/.minikube/key.pem
	I0603 10:56:41.680849   25542 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19008-7755/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19008-7755/.minikube/key.pem (1679 bytes)
	I0603 10:56:41.680942   25542 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19008-7755/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19008-7755/.minikube/ca.pem
	I0603 10:56:41.680960   25542 exec_runner.go:144] found /home/jenkins/minikube-integration/19008-7755/.minikube/ca.pem, removing ...
	I0603 10:56:41.680966   25542 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19008-7755/.minikube/ca.pem
	I0603 10:56:41.680995   25542 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19008-7755/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19008-7755/.minikube/ca.pem (1082 bytes)
	I0603 10:56:41.681033   25542 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19008-7755/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19008-7755/.minikube/cert.pem
	I0603 10:56:41.681048   25542 exec_runner.go:144] found /home/jenkins/minikube-integration/19008-7755/.minikube/cert.pem, removing ...
	I0603 10:56:41.681054   25542 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19008-7755/.minikube/cert.pem
	I0603 10:56:41.681073   25542 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19008-7755/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19008-7755/.minikube/cert.pem (1123 bytes)
	I0603 10:56:41.681122   25542 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19008-7755/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19008-7755/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19008-7755/.minikube/certs/ca-key.pem org=jenkins.ha-683480 san=[127.0.0.1 192.168.39.116 ha-683480 localhost minikube]
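provision.go then generates a server certificate whose subject alternative names are exactly the entries listed above (127.0.0.1, 192.168.39.116, ha-683480, localhost, minikube). A minimal, self-signed crypto/x509 sketch showing how such SANs end up in a certificate; minikube actually signs with its own CA and picks its own key parameters, so this is illustrative only.

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"log"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		log.Fatal(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.ha-683480"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // matches CertExpiration in the cluster config
		KeyUsage:     x509.KeyUsageKeyEncipherment | x509.KeyUsageDigitalSignature,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SANs matching the san=[...] list logged above.
		DNSNames:    []string{"ha-683480", "localhost", "minikube"},
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.116")},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		log.Fatal(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}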
	I0603 10:56:41.980610   25542 provision.go:177] copyRemoteCerts
	I0603 10:56:41.980666   25542 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0603 10:56:41.980691   25542 main.go:141] libmachine: (ha-683480) Calling .GetSSHHostname
	I0603 10:56:41.983250   25542 main.go:141] libmachine: (ha-683480) DBG | domain ha-683480 has defined MAC address 52:54:00:e5:3f:6a in network mk-ha-683480
	I0603 10:56:41.983579   25542 main.go:141] libmachine: (ha-683480) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:3f:6a", ip: ""} in network mk-ha-683480: {Iface:virbr1 ExpiryTime:2024-06-03 11:56:28 +0000 UTC Type:0 Mac:52:54:00:e5:3f:6a Iaid: IPaddr:192.168.39.116 Prefix:24 Hostname:ha-683480 Clientid:01:52:54:00:e5:3f:6a}
	I0603 10:56:41.983610   25542 main.go:141] libmachine: (ha-683480) DBG | domain ha-683480 has defined IP address 192.168.39.116 and MAC address 52:54:00:e5:3f:6a in network mk-ha-683480
	I0603 10:56:41.983713   25542 main.go:141] libmachine: (ha-683480) Calling .GetSSHPort
	I0603 10:56:41.983900   25542 main.go:141] libmachine: (ha-683480) Calling .GetSSHKeyPath
	I0603 10:56:41.984059   25542 main.go:141] libmachine: (ha-683480) Calling .GetSSHUsername
	I0603 10:56:41.984174   25542 sshutil.go:53] new ssh client: &{IP:192.168.39.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19008-7755/.minikube/machines/ha-683480/id_rsa Username:docker}
	I0603 10:56:42.065833   25542 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19008-7755/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0603 10:56:42.065930   25542 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0603 10:56:42.090238   25542 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19008-7755/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0603 10:56:42.090310   25542 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I0603 10:56:42.113467   25542 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19008-7755/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0603 10:56:42.113526   25542 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0603 10:56:42.135642   25542 provision.go:87] duration metric: took 460.154058ms to configureAuth
	I0603 10:56:42.135662   25542 buildroot.go:189] setting minikube options for container-runtime
	I0603 10:56:42.135827   25542 config.go:182] Loaded profile config "ha-683480": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0603 10:56:42.135907   25542 main.go:141] libmachine: (ha-683480) Calling .GetSSHHostname
	I0603 10:56:42.138641   25542 main.go:141] libmachine: (ha-683480) DBG | domain ha-683480 has defined MAC address 52:54:00:e5:3f:6a in network mk-ha-683480
	I0603 10:56:42.138939   25542 main.go:141] libmachine: (ha-683480) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:3f:6a", ip: ""} in network mk-ha-683480: {Iface:virbr1 ExpiryTime:2024-06-03 11:56:28 +0000 UTC Type:0 Mac:52:54:00:e5:3f:6a Iaid: IPaddr:192.168.39.116 Prefix:24 Hostname:ha-683480 Clientid:01:52:54:00:e5:3f:6a}
	I0603 10:56:42.138965   25542 main.go:141] libmachine: (ha-683480) DBG | domain ha-683480 has defined IP address 192.168.39.116 and MAC address 52:54:00:e5:3f:6a in network mk-ha-683480
	I0603 10:56:42.139114   25542 main.go:141] libmachine: (ha-683480) Calling .GetSSHPort
	I0603 10:56:42.139297   25542 main.go:141] libmachine: (ha-683480) Calling .GetSSHKeyPath
	I0603 10:56:42.139464   25542 main.go:141] libmachine: (ha-683480) Calling .GetSSHKeyPath
	I0603 10:56:42.139623   25542 main.go:141] libmachine: (ha-683480) Calling .GetSSHUsername
	I0603 10:56:42.139801   25542 main.go:141] libmachine: Using SSH client type: native
	I0603 10:56:42.139952   25542 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.116 22 <nil> <nil>}
	I0603 10:56:42.139966   25542 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0603 10:56:42.399570   25542 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0603 10:56:42.399613   25542 main.go:141] libmachine: Checking connection to Docker...
	I0603 10:56:42.399623   25542 main.go:141] libmachine: (ha-683480) Calling .GetURL
	I0603 10:56:42.400966   25542 main.go:141] libmachine: (ha-683480) DBG | Using libvirt version 6000000
	I0603 10:56:42.403271   25542 main.go:141] libmachine: (ha-683480) DBG | domain ha-683480 has defined MAC address 52:54:00:e5:3f:6a in network mk-ha-683480
	I0603 10:56:42.403596   25542 main.go:141] libmachine: (ha-683480) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:3f:6a", ip: ""} in network mk-ha-683480: {Iface:virbr1 ExpiryTime:2024-06-03 11:56:28 +0000 UTC Type:0 Mac:52:54:00:e5:3f:6a Iaid: IPaddr:192.168.39.116 Prefix:24 Hostname:ha-683480 Clientid:01:52:54:00:e5:3f:6a}
	I0603 10:56:42.403617   25542 main.go:141] libmachine: (ha-683480) DBG | domain ha-683480 has defined IP address 192.168.39.116 and MAC address 52:54:00:e5:3f:6a in network mk-ha-683480
	I0603 10:56:42.403772   25542 main.go:141] libmachine: Docker is up and running!
	I0603 10:56:42.403788   25542 main.go:141] libmachine: Reticulating splines...
	I0603 10:56:42.403808   25542 client.go:171] duration metric: took 27.856538118s to LocalClient.Create
	I0603 10:56:42.403836   25542 start.go:167] duration metric: took 27.856596844s to libmachine.API.Create "ha-683480"
	I0603 10:56:42.403848   25542 start.go:293] postStartSetup for "ha-683480" (driver="kvm2")
	I0603 10:56:42.403865   25542 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0603 10:56:42.403886   25542 main.go:141] libmachine: (ha-683480) Calling .DriverName
	I0603 10:56:42.404121   25542 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0603 10:56:42.404141   25542 main.go:141] libmachine: (ha-683480) Calling .GetSSHHostname
	I0603 10:56:42.406277   25542 main.go:141] libmachine: (ha-683480) DBG | domain ha-683480 has defined MAC address 52:54:00:e5:3f:6a in network mk-ha-683480
	I0603 10:56:42.406605   25542 main.go:141] libmachine: (ha-683480) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:3f:6a", ip: ""} in network mk-ha-683480: {Iface:virbr1 ExpiryTime:2024-06-03 11:56:28 +0000 UTC Type:0 Mac:52:54:00:e5:3f:6a Iaid: IPaddr:192.168.39.116 Prefix:24 Hostname:ha-683480 Clientid:01:52:54:00:e5:3f:6a}
	I0603 10:56:42.406632   25542 main.go:141] libmachine: (ha-683480) DBG | domain ha-683480 has defined IP address 192.168.39.116 and MAC address 52:54:00:e5:3f:6a in network mk-ha-683480
	I0603 10:56:42.406743   25542 main.go:141] libmachine: (ha-683480) Calling .GetSSHPort
	I0603 10:56:42.406911   25542 main.go:141] libmachine: (ha-683480) Calling .GetSSHKeyPath
	I0603 10:56:42.407079   25542 main.go:141] libmachine: (ha-683480) Calling .GetSSHUsername
	I0603 10:56:42.407248   25542 sshutil.go:53] new ssh client: &{IP:192.168.39.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19008-7755/.minikube/machines/ha-683480/id_rsa Username:docker}
	I0603 10:56:42.485188   25542 ssh_runner.go:195] Run: cat /etc/os-release
	I0603 10:56:42.489159   25542 info.go:137] Remote host: Buildroot 2023.02.9
	I0603 10:56:42.489184   25542 filesync.go:126] Scanning /home/jenkins/minikube-integration/19008-7755/.minikube/addons for local assets ...
	I0603 10:56:42.489244   25542 filesync.go:126] Scanning /home/jenkins/minikube-integration/19008-7755/.minikube/files for local assets ...
	I0603 10:56:42.489327   25542 filesync.go:149] local asset: /home/jenkins/minikube-integration/19008-7755/.minikube/files/etc/ssl/certs/150282.pem -> 150282.pem in /etc/ssl/certs
	I0603 10:56:42.489337   25542 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19008-7755/.minikube/files/etc/ssl/certs/150282.pem -> /etc/ssl/certs/150282.pem
	I0603 10:56:42.489433   25542 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0603 10:56:42.498654   25542 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/files/etc/ssl/certs/150282.pem --> /etc/ssl/certs/150282.pem (1708 bytes)
	I0603 10:56:42.521037   25542 start.go:296] duration metric: took 117.175393ms for postStartSetup
	I0603 10:56:42.521088   25542 main.go:141] libmachine: (ha-683480) Calling .GetConfigRaw
	I0603 10:56:42.521611   25542 main.go:141] libmachine: (ha-683480) Calling .GetIP
	I0603 10:56:42.524045   25542 main.go:141] libmachine: (ha-683480) DBG | domain ha-683480 has defined MAC address 52:54:00:e5:3f:6a in network mk-ha-683480
	I0603 10:56:42.524380   25542 main.go:141] libmachine: (ha-683480) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:3f:6a", ip: ""} in network mk-ha-683480: {Iface:virbr1 ExpiryTime:2024-06-03 11:56:28 +0000 UTC Type:0 Mac:52:54:00:e5:3f:6a Iaid: IPaddr:192.168.39.116 Prefix:24 Hostname:ha-683480 Clientid:01:52:54:00:e5:3f:6a}
	I0603 10:56:42.524406   25542 main.go:141] libmachine: (ha-683480) DBG | domain ha-683480 has defined IP address 192.168.39.116 and MAC address 52:54:00:e5:3f:6a in network mk-ha-683480
	I0603 10:56:42.524583   25542 profile.go:143] Saving config to /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/ha-683480/config.json ...
	I0603 10:56:42.524766   25542 start.go:128] duration metric: took 27.994305593s to createHost
	I0603 10:56:42.524788   25542 main.go:141] libmachine: (ha-683480) Calling .GetSSHHostname
	I0603 10:56:42.526735   25542 main.go:141] libmachine: (ha-683480) DBG | domain ha-683480 has defined MAC address 52:54:00:e5:3f:6a in network mk-ha-683480
	I0603 10:56:42.527027   25542 main.go:141] libmachine: (ha-683480) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:3f:6a", ip: ""} in network mk-ha-683480: {Iface:virbr1 ExpiryTime:2024-06-03 11:56:28 +0000 UTC Type:0 Mac:52:54:00:e5:3f:6a Iaid: IPaddr:192.168.39.116 Prefix:24 Hostname:ha-683480 Clientid:01:52:54:00:e5:3f:6a}
	I0603 10:56:42.527068   25542 main.go:141] libmachine: (ha-683480) DBG | domain ha-683480 has defined IP address 192.168.39.116 and MAC address 52:54:00:e5:3f:6a in network mk-ha-683480
	I0603 10:56:42.527199   25542 main.go:141] libmachine: (ha-683480) Calling .GetSSHPort
	I0603 10:56:42.527344   25542 main.go:141] libmachine: (ha-683480) Calling .GetSSHKeyPath
	I0603 10:56:42.527477   25542 main.go:141] libmachine: (ha-683480) Calling .GetSSHKeyPath
	I0603 10:56:42.527654   25542 main.go:141] libmachine: (ha-683480) Calling .GetSSHUsername
	I0603 10:56:42.527807   25542 main.go:141] libmachine: Using SSH client type: native
	I0603 10:56:42.528002   25542 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.116 22 <nil> <nil>}
	I0603 10:56:42.528013   25542 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0603 10:56:42.627415   25542 main.go:141] libmachine: SSH cmd err, output: <nil>: 1717412202.609097595
	
	I0603 10:56:42.627433   25542 fix.go:216] guest clock: 1717412202.609097595
	I0603 10:56:42.627441   25542 fix.go:229] Guest: 2024-06-03 10:56:42.609097595 +0000 UTC Remote: 2024-06-03 10:56:42.524778402 +0000 UTC m=+28.091417474 (delta=84.319193ms)
	I0603 10:56:42.627483   25542 fix.go:200] guest clock delta is within tolerance: 84.319193ms
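fix.go evidently runs `date +%s.%N` on the guest (the `%!s(MISSING)` noise above is the log formatter tripping over the literal % signs in that command), parses the seconds.nanoseconds value and compares it against the host clock. A small sketch of that comparison; the tolerance constant is an assumption, the log only states that the 84ms delta was acceptable.

package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

func main() {
	// Output of `date +%s.%N` on the guest, as captured in the log above.
	guestRaw := "1717412202.609097595"
	parts := strings.SplitN(guestRaw, ".", 2)
	sec, _ := strconv.ParseInt(parts[0], 10, 64)
	nsec, _ := strconv.ParseInt(parts[1], 10, 64)
	guest := time.Unix(sec, nsec)

	host := time.Now() // fix.go compares against the host clock captured at the same moment
	delta := host.Sub(guest)
	if delta < 0 {
		delta = -delta
	}
	const tolerance = 2 * time.Second // assumed threshold for illustration
	fmt.Printf("guest clock delta %v (tolerance %v, ok=%v)\n", delta, tolerance, delta <= tolerance)
}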
	I0603 10:56:42.627491   25542 start.go:83] releasing machines lock for "ha-683480", held for 28.097092936s
	I0603 10:56:42.627516   25542 main.go:141] libmachine: (ha-683480) Calling .DriverName
	I0603 10:56:42.627736   25542 main.go:141] libmachine: (ha-683480) Calling .GetIP
	I0603 10:56:42.630073   25542 main.go:141] libmachine: (ha-683480) DBG | domain ha-683480 has defined MAC address 52:54:00:e5:3f:6a in network mk-ha-683480
	I0603 10:56:42.630422   25542 main.go:141] libmachine: (ha-683480) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:3f:6a", ip: ""} in network mk-ha-683480: {Iface:virbr1 ExpiryTime:2024-06-03 11:56:28 +0000 UTC Type:0 Mac:52:54:00:e5:3f:6a Iaid: IPaddr:192.168.39.116 Prefix:24 Hostname:ha-683480 Clientid:01:52:54:00:e5:3f:6a}
	I0603 10:56:42.630450   25542 main.go:141] libmachine: (ha-683480) DBG | domain ha-683480 has defined IP address 192.168.39.116 and MAC address 52:54:00:e5:3f:6a in network mk-ha-683480
	I0603 10:56:42.630554   25542 main.go:141] libmachine: (ha-683480) Calling .DriverName
	I0603 10:56:42.630954   25542 main.go:141] libmachine: (ha-683480) Calling .DriverName
	I0603 10:56:42.631128   25542 main.go:141] libmachine: (ha-683480) Calling .DriverName
	I0603 10:56:42.631209   25542 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0603 10:56:42.631265   25542 main.go:141] libmachine: (ha-683480) Calling .GetSSHHostname
	I0603 10:56:42.631291   25542 ssh_runner.go:195] Run: cat /version.json
	I0603 10:56:42.631310   25542 main.go:141] libmachine: (ha-683480) Calling .GetSSHHostname
	I0603 10:56:42.633628   25542 main.go:141] libmachine: (ha-683480) DBG | domain ha-683480 has defined MAC address 52:54:00:e5:3f:6a in network mk-ha-683480
	I0603 10:56:42.633946   25542 main.go:141] libmachine: (ha-683480) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:3f:6a", ip: ""} in network mk-ha-683480: {Iface:virbr1 ExpiryTime:2024-06-03 11:56:28 +0000 UTC Type:0 Mac:52:54:00:e5:3f:6a Iaid: IPaddr:192.168.39.116 Prefix:24 Hostname:ha-683480 Clientid:01:52:54:00:e5:3f:6a}
	I0603 10:56:42.633979   25542 main.go:141] libmachine: (ha-683480) DBG | domain ha-683480 has defined IP address 192.168.39.116 and MAC address 52:54:00:e5:3f:6a in network mk-ha-683480
	I0603 10:56:42.634006   25542 main.go:141] libmachine: (ha-683480) DBG | domain ha-683480 has defined MAC address 52:54:00:e5:3f:6a in network mk-ha-683480
	I0603 10:56:42.634228   25542 main.go:141] libmachine: (ha-683480) Calling .GetSSHPort
	I0603 10:56:42.634347   25542 main.go:141] libmachine: (ha-683480) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:3f:6a", ip: ""} in network mk-ha-683480: {Iface:virbr1 ExpiryTime:2024-06-03 11:56:28 +0000 UTC Type:0 Mac:52:54:00:e5:3f:6a Iaid: IPaddr:192.168.39.116 Prefix:24 Hostname:ha-683480 Clientid:01:52:54:00:e5:3f:6a}
	I0603 10:56:42.634373   25542 main.go:141] libmachine: (ha-683480) DBG | domain ha-683480 has defined IP address 192.168.39.116 and MAC address 52:54:00:e5:3f:6a in network mk-ha-683480
	I0603 10:56:42.634398   25542 main.go:141] libmachine: (ha-683480) Calling .GetSSHKeyPath
	I0603 10:56:42.634545   25542 main.go:141] libmachine: (ha-683480) Calling .GetSSHPort
	I0603 10:56:42.634554   25542 main.go:141] libmachine: (ha-683480) Calling .GetSSHUsername
	I0603 10:56:42.634708   25542 main.go:141] libmachine: (ha-683480) Calling .GetSSHKeyPath
	I0603 10:56:42.634705   25542 sshutil.go:53] new ssh client: &{IP:192.168.39.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19008-7755/.minikube/machines/ha-683480/id_rsa Username:docker}
	I0603 10:56:42.634860   25542 main.go:141] libmachine: (ha-683480) Calling .GetSSHUsername
	I0603 10:56:42.634993   25542 sshutil.go:53] new ssh client: &{IP:192.168.39.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19008-7755/.minikube/machines/ha-683480/id_rsa Username:docker}
	I0603 10:56:42.708170   25542 ssh_runner.go:195] Run: systemctl --version
	I0603 10:56:42.731833   25542 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0603 10:56:42.889105   25542 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0603 10:56:42.895506   25542 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0603 10:56:42.895572   25542 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0603 10:56:42.912227   25542 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0603 10:56:42.912245   25542 start.go:494] detecting cgroup driver to use...
	I0603 10:56:42.912303   25542 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0603 10:56:42.927958   25542 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0603 10:56:42.940924   25542 docker.go:217] disabling cri-docker service (if available) ...
	I0603 10:56:42.940963   25542 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0603 10:56:42.953568   25542 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0603 10:56:42.966535   25542 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0603 10:56:43.079194   25542 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0603 10:56:43.239076   25542 docker.go:233] disabling docker service ...
	I0603 10:56:43.239138   25542 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0603 10:56:43.253472   25542 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0603 10:56:43.265915   25542 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0603 10:56:43.378615   25542 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0603 10:56:43.489311   25542 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0603 10:56:43.503088   25542 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0603 10:56:43.520846   25542 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0603 10:56:43.520913   25542 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 10:56:43.531032   25542 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0603 10:56:43.531111   25542 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 10:56:43.541395   25542 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 10:56:43.551658   25542 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 10:56:43.561729   25542 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0603 10:56:43.572178   25542 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 10:56:43.582365   25542 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 10:56:43.598904   25542 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 10:56:43.609044   25542 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0603 10:56:43.618167   25542 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0603 10:56:43.618204   25542 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0603 10:56:43.630645   25542 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0603 10:56:43.639855   25542 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0603 10:56:43.747331   25542 ssh_runner.go:195] Run: sudo systemctl restart crio
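The block above rewrites /etc/crio/crio.conf.d/02-crio.conf (pause image registry.k8s.io/pause:3.9, cgroupfs cgroup manager, conmon cgroup, unprivileged-port sysctl), loads br_netfilter because the bridge-nf sysctl path does not exist until that module is in, enables IPv4 forwarding, and restarts CRI-O. Condensed into a standalone sketch that shells out the same way ssh_runner does; it would have to run on the guest and is not minikube's code.

package main

import (
	"log"
	"os/exec"
)

// run is a tiny helper that executes a shell command with sudo, as ssh_runner does remotely.
func run(cmd string) {
	out, err := exec.Command("sudo", "sh", "-c", cmd).CombinedOutput()
	if err != nil {
		log.Fatalf("%q failed: %v\n%s", cmd, err, out)
	}
}

func main() {
	// Roughly the sequence from the log: point CRI-O at the cgroupfs driver and the
	// pause:3.9 image, make sure bridged traffic hits iptables, then restart the runtime.
	run(`sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf`)
	run(`sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf`)
	run(`modprobe br_netfilter`)                  // the bridge-nf sysctls only exist once this module is loaded
	run(`echo 1 > /proc/sys/net/ipv4/ip_forward`) // kube-proxy and CNI need forwarding enabled
	run(`systemctl daemon-reload && systemctl restart crio`)
}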
	I0603 10:56:43.878164   25542 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0603 10:56:43.878224   25542 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0603 10:56:43.882915   25542 start.go:562] Will wait 60s for crictl version
	I0603 10:56:43.882965   25542 ssh_runner.go:195] Run: which crictl
	I0603 10:56:43.886667   25542 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0603 10:56:43.931515   25542 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0603 10:56:43.931597   25542 ssh_runner.go:195] Run: crio --version
	I0603 10:56:43.958565   25542 ssh_runner.go:195] Run: crio --version
	I0603 10:56:43.988172   25542 out.go:177] * Preparing Kubernetes v1.30.1 on CRI-O 1.29.1 ...
	I0603 10:56:43.989315   25542 main.go:141] libmachine: (ha-683480) Calling .GetIP
	I0603 10:56:43.991640   25542 main.go:141] libmachine: (ha-683480) DBG | domain ha-683480 has defined MAC address 52:54:00:e5:3f:6a in network mk-ha-683480
	I0603 10:56:43.991964   25542 main.go:141] libmachine: (ha-683480) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:3f:6a", ip: ""} in network mk-ha-683480: {Iface:virbr1 ExpiryTime:2024-06-03 11:56:28 +0000 UTC Type:0 Mac:52:54:00:e5:3f:6a Iaid: IPaddr:192.168.39.116 Prefix:24 Hostname:ha-683480 Clientid:01:52:54:00:e5:3f:6a}
	I0603 10:56:43.991990   25542 main.go:141] libmachine: (ha-683480) DBG | domain ha-683480 has defined IP address 192.168.39.116 and MAC address 52:54:00:e5:3f:6a in network mk-ha-683480
	I0603 10:56:43.992200   25542 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0603 10:56:43.996256   25542 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0603 10:56:44.008861   25542 kubeadm.go:877] updating cluster {Name:ha-683480 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 Cl
usterName:ha-683480 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.116 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 M
ountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0603 10:56:44.008953   25542 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime crio
	I0603 10:56:44.008997   25542 ssh_runner.go:195] Run: sudo crictl images --output json
	I0603 10:56:44.041143   25542 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.1". assuming images are not preloaded.
	I0603 10:56:44.041209   25542 ssh_runner.go:195] Run: which lz4
	I0603 10:56:44.045081   25542 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19008-7755/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0603 10:56:44.045170   25542 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0603 10:56:44.049359   25542 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0603 10:56:44.049383   25542 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (394537501 bytes)
	I0603 10:56:45.405635   25542 crio.go:462] duration metric: took 1.360493385s to copy over tarball
	I0603 10:56:45.405698   25542 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0603 10:56:47.458922   25542 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.05319829s)
	I0603 10:56:47.458954   25542 crio.go:469] duration metric: took 2.053292515s to extract the tarball
	I0603 10:56:47.458963   25542 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0603 10:56:47.498260   25542 ssh_runner.go:195] Run: sudo crictl images --output json
	I0603 10:56:47.541753   25542 crio.go:514] all images are preloaded for cri-o runtime.
	I0603 10:56:47.541779   25542 cache_images.go:84] Images are preloaded, skipping loading
	I0603 10:56:47.541788   25542 kubeadm.go:928] updating node { 192.168.39.116 8443 v1.30.1 crio true true} ...
	I0603 10:56:47.541906   25542 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-683480 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.116
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.1 ClusterName:ha-683480 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0603 10:56:47.541983   25542 ssh_runner.go:195] Run: crio config
	I0603 10:56:47.593386   25542 cni.go:84] Creating CNI manager for ""
	I0603 10:56:47.593406   25542 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0603 10:56:47.593414   25542 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0603 10:56:47.593436   25542 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.116 APIServerPort:8443 KubernetesVersion:v1.30.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-683480 NodeName:ha-683480 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.116"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.116 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0603 10:56:47.593585   25542 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.116
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-683480"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.116
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.116"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0603 10:56:47.593611   25542 kube-vip.go:115] generating kube-vip config ...
	I0603 10:56:47.593646   25542 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0603 10:56:47.612578   25542 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0603 10:56:47.612679   25542 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
	I0603 10:56:47.612738   25542 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.1
	I0603 10:56:47.622669   25542 binaries.go:44] Found k8s binaries, skipping transfer
	I0603 10:56:47.622725   25542 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0603 10:56:47.632141   25542 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I0603 10:56:47.647848   25542 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0603 10:56:47.663454   25542 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2153 bytes)
	I0603 10:56:47.679259   25542 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1447 bytes)
	I0603 10:56:47.694988   25542 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0603 10:56:47.698620   25542 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0603 10:56:47.710448   25542 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0603 10:56:47.828098   25542 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0603 10:56:47.844245   25542 certs.go:68] Setting up /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/ha-683480 for IP: 192.168.39.116
	I0603 10:56:47.844270   25542 certs.go:194] generating shared ca certs ...
	I0603 10:56:47.844291   25542 certs.go:226] acquiring lock for ca certs: {Name:mk50092df3bdecbf959926adc377ee06b75aacd9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 10:56:47.844468   25542 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19008-7755/.minikube/ca.key
	I0603 10:56:47.844521   25542 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19008-7755/.minikube/proxy-client-ca.key
	I0603 10:56:47.844534   25542 certs.go:256] generating profile certs ...
	I0603 10:56:47.844599   25542 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/ha-683480/client.key
	I0603 10:56:47.844618   25542 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/ha-683480/client.crt with IP's: []
	I0603 10:56:48.062533   25542 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/ha-683480/client.crt ...
	I0603 10:56:48.062560   25542 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/ha-683480/client.crt: {Name:mk5567ccfc9c4b9fcf1085bdad543fc3e68e1772 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 10:56:48.062722   25542 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/ha-683480/client.key ...
	I0603 10:56:48.062733   25542 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/ha-683480/client.key: {Name:mkb56f24577c32390a1bb550ce6a067617b186f8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 10:56:48.062809   25542 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/ha-683480/apiserver.key.8fa2ae60
	I0603 10:56:48.062824   25542 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/ha-683480/apiserver.crt.8fa2ae60 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.116 192.168.39.254]
	I0603 10:56:48.520493   25542 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/ha-683480/apiserver.crt.8fa2ae60 ...
	I0603 10:56:48.520521   25542 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/ha-683480/apiserver.crt.8fa2ae60: {Name:mk9f3c195de608bf5816447c8c67f7100921af0e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 10:56:48.520665   25542 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/ha-683480/apiserver.key.8fa2ae60 ...
	I0603 10:56:48.520677   25542 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/ha-683480/apiserver.key.8fa2ae60: {Name:mk1907058b2f028047f581cac4eeb38e528fcfc4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 10:56:48.520745   25542 certs.go:381] copying /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/ha-683480/apiserver.crt.8fa2ae60 -> /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/ha-683480/apiserver.crt
	I0603 10:56:48.520826   25542 certs.go:385] copying /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/ha-683480/apiserver.key.8fa2ae60 -> /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/ha-683480/apiserver.key
	I0603 10:56:48.520881   25542 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/ha-683480/proxy-client.key
	I0603 10:56:48.520895   25542 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/ha-683480/proxy-client.crt with IP's: []
	I0603 10:56:48.845023   25542 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/ha-683480/proxy-client.crt ...
	I0603 10:56:48.845051   25542 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/ha-683480/proxy-client.crt: {Name:mk6bc5663a3284bfe966796c7ffb8b75d9f5a053 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 10:56:48.845203   25542 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/ha-683480/proxy-client.key ...
	I0603 10:56:48.845214   25542 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/ha-683480/proxy-client.key: {Name:mkc971faeb60f06145787f9880e809afdc0bbafe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 10:56:48.845276   25542 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19008-7755/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0603 10:56:48.845292   25542 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19008-7755/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0603 10:56:48.845301   25542 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19008-7755/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0603 10:56:48.845314   25542 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19008-7755/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0603 10:56:48.845323   25542 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/ha-683480/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0603 10:56:48.845336   25542 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/ha-683480/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0603 10:56:48.845345   25542 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/ha-683480/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0603 10:56:48.845354   25542 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/ha-683480/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0603 10:56:48.845398   25542 certs.go:484] found cert: /home/jenkins/minikube-integration/19008-7755/.minikube/certs/15028.pem (1338 bytes)
	W0603 10:56:48.845430   25542 certs.go:480] ignoring /home/jenkins/minikube-integration/19008-7755/.minikube/certs/15028_empty.pem, impossibly tiny 0 bytes
	I0603 10:56:48.845439   25542 certs.go:484] found cert: /home/jenkins/minikube-integration/19008-7755/.minikube/certs/ca-key.pem (1679 bytes)
	I0603 10:56:48.845460   25542 certs.go:484] found cert: /home/jenkins/minikube-integration/19008-7755/.minikube/certs/ca.pem (1082 bytes)
	I0603 10:56:48.845482   25542 certs.go:484] found cert: /home/jenkins/minikube-integration/19008-7755/.minikube/certs/cert.pem (1123 bytes)
	I0603 10:56:48.845502   25542 certs.go:484] found cert: /home/jenkins/minikube-integration/19008-7755/.minikube/certs/key.pem (1679 bytes)
	I0603 10:56:48.845536   25542 certs.go:484] found cert: /home/jenkins/minikube-integration/19008-7755/.minikube/files/etc/ssl/certs/150282.pem (1708 bytes)
	I0603 10:56:48.845562   25542 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19008-7755/.minikube/files/etc/ssl/certs/150282.pem -> /usr/share/ca-certificates/150282.pem
	I0603 10:56:48.845576   25542 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19008-7755/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0603 10:56:48.845588   25542 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19008-7755/.minikube/certs/15028.pem -> /usr/share/ca-certificates/15028.pem
	I0603 10:56:48.846089   25542 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0603 10:56:48.881541   25542 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0603 10:56:48.907819   25542 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0603 10:56:48.930607   25542 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0603 10:56:48.954060   25542 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/ha-683480/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0603 10:56:48.977147   25542 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/ha-683480/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0603 10:56:49.000037   25542 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/ha-683480/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0603 10:56:49.023238   25542 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/ha-683480/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0603 10:56:49.046136   25542 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/files/etc/ssl/certs/150282.pem --> /usr/share/ca-certificates/150282.pem (1708 bytes)
	I0603 10:56:49.069087   25542 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0603 10:56:49.091687   25542 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/certs/15028.pem --> /usr/share/ca-certificates/15028.pem (1338 bytes)
	I0603 10:56:49.117389   25542 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0603 10:56:49.133745   25542 ssh_runner.go:195] Run: openssl version
	I0603 10:56:49.139785   25542 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0603 10:56:49.151514   25542 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0603 10:56:49.156083   25542 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jun  3 10:39 /usr/share/ca-certificates/minikubeCA.pem
	I0603 10:56:49.156126   25542 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0603 10:56:49.162060   25542 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0603 10:56:49.173439   25542 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15028.pem && ln -fs /usr/share/ca-certificates/15028.pem /etc/ssl/certs/15028.pem"
	I0603 10:56:49.184803   25542 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15028.pem
	I0603 10:56:49.189274   25542 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jun  3 10:51 /usr/share/ca-certificates/15028.pem
	I0603 10:56:49.189318   25542 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15028.pem
	I0603 10:56:49.195094   25542 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/15028.pem /etc/ssl/certs/51391683.0"
	I0603 10:56:49.206208   25542 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/150282.pem && ln -fs /usr/share/ca-certificates/150282.pem /etc/ssl/certs/150282.pem"
	I0603 10:56:49.217268   25542 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/150282.pem
	I0603 10:56:49.221759   25542 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jun  3 10:51 /usr/share/ca-certificates/150282.pem
	I0603 10:56:49.221805   25542 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/150282.pem
	I0603 10:56:49.227503   25542 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/150282.pem /etc/ssl/certs/3ec20f2e.0"
	I0603 10:56:49.238402   25542 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0603 10:56:49.242543   25542 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0603 10:56:49.242598   25542 kubeadm.go:391] StartCluster: {Name:ha-683480 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:ha-683480 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.116 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0603 10:56:49.242699   25542 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0603 10:56:49.242738   25542 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0603 10:56:49.285323   25542 cri.go:89] found id: ""
	I0603 10:56:49.285398   25542 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0603 10:56:49.297631   25542 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0603 10:56:49.307821   25542 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0603 10:56:49.317548   25542 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0603 10:56:49.317562   25542 kubeadm.go:156] found existing configuration files:
	
	I0603 10:56:49.317599   25542 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0603 10:56:49.326953   25542 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0603 10:56:49.327005   25542 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0603 10:56:49.336406   25542 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0603 10:56:49.345346   25542 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0603 10:56:49.345395   25542 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0603 10:56:49.355318   25542 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0603 10:56:49.365001   25542 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0603 10:56:49.365052   25542 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0603 10:56:49.375216   25542 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0603 10:56:49.384102   25542 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0603 10:56:49.384141   25542 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0603 10:56:49.393669   25542 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0603 10:56:49.632642   25542 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0603 10:57:00.771093   25542 kubeadm.go:309] [init] Using Kubernetes version: v1.30.1
	I0603 10:57:00.771149   25542 kubeadm.go:309] [preflight] Running pre-flight checks
	I0603 10:57:00.771258   25542 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0603 10:57:00.771398   25542 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0603 10:57:00.771535   25542 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0603 10:57:00.771614   25542 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0603 10:57:00.773037   25542 out.go:204]   - Generating certificates and keys ...
	I0603 10:57:00.773119   25542 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0603 10:57:00.773207   25542 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0603 10:57:00.773281   25542 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0603 10:57:00.773342   25542 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0603 10:57:00.773426   25542 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0603 10:57:00.773492   25542 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0603 10:57:00.773566   25542 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0603 10:57:00.773692   25542 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [ha-683480 localhost] and IPs [192.168.39.116 127.0.0.1 ::1]
	I0603 10:57:00.773766   25542 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0603 10:57:00.773896   25542 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [ha-683480 localhost] and IPs [192.168.39.116 127.0.0.1 ::1]
	I0603 10:57:00.773990   25542 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0603 10:57:00.774108   25542 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0603 10:57:00.774158   25542 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0603 10:57:00.774228   25542 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0603 10:57:00.774289   25542 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0603 10:57:00.774367   25542 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0603 10:57:00.774456   25542 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0603 10:57:00.774508   25542 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0603 10:57:00.774554   25542 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0603 10:57:00.774619   25542 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0603 10:57:00.774675   25542 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0603 10:57:00.775900   25542 out.go:204]   - Booting up control plane ...
	I0603 10:57:00.775991   25542 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0603 10:57:00.776054   25542 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0603 10:57:00.776132   25542 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0603 10:57:00.776240   25542 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0603 10:57:00.776342   25542 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0603 10:57:00.776410   25542 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0603 10:57:00.776547   25542 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0603 10:57:00.776632   25542 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0603 10:57:00.776717   25542 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 501.393485ms
	I0603 10:57:00.776804   25542 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0603 10:57:00.776888   25542 kubeadm.go:309] [api-check] The API server is healthy after 6.006452478s
	I0603 10:57:00.777028   25542 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0603 10:57:00.777187   25542 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0603 10:57:00.777266   25542 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0603 10:57:00.777492   25542 kubeadm.go:309] [mark-control-plane] Marking the node ha-683480 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0603 10:57:00.777559   25542 kubeadm.go:309] [bootstrap-token] Using token: q8elef.uwid3umlrwl04c9q
	I0603 10:57:00.778892   25542 out.go:204]   - Configuring RBAC rules ...
	I0603 10:57:00.778977   25542 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0603 10:57:00.779065   25542 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0603 10:57:00.779221   25542 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0603 10:57:00.779348   25542 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0603 10:57:00.779489   25542 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0603 10:57:00.779590   25542 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0603 10:57:00.779731   25542 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0603 10:57:00.779774   25542 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0603 10:57:00.779817   25542 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0603 10:57:00.779823   25542 kubeadm.go:309] 
	I0603 10:57:00.779889   25542 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0603 10:57:00.779898   25542 kubeadm.go:309] 
	I0603 10:57:00.780017   25542 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0603 10:57:00.780026   25542 kubeadm.go:309] 
	I0603 10:57:00.780068   25542 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0603 10:57:00.780156   25542 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0603 10:57:00.780232   25542 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0603 10:57:00.780244   25542 kubeadm.go:309] 
	I0603 10:57:00.780332   25542 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0603 10:57:00.780351   25542 kubeadm.go:309] 
	I0603 10:57:00.780389   25542 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0603 10:57:00.780395   25542 kubeadm.go:309] 
	I0603 10:57:00.780437   25542 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0603 10:57:00.780498   25542 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0603 10:57:00.780557   25542 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0603 10:57:00.780563   25542 kubeadm.go:309] 
	I0603 10:57:00.780627   25542 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0603 10:57:00.780692   25542 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0603 10:57:00.780698   25542 kubeadm.go:309] 
	I0603 10:57:00.780763   25542 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token q8elef.uwid3umlrwl04c9q \
	I0603 10:57:00.780860   25542 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:efe6cbdd58a590fb6c3b56a05a1648145bfb13b8a8cd2383ea34b710fa987e0b \
	I0603 10:57:00.780904   25542 kubeadm.go:309] 	--control-plane 
	I0603 10:57:00.780918   25542 kubeadm.go:309] 
	I0603 10:57:00.781031   25542 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0603 10:57:00.781038   25542 kubeadm.go:309] 
	I0603 10:57:00.781120   25542 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token q8elef.uwid3umlrwl04c9q \
	I0603 10:57:00.781237   25542 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:efe6cbdd58a590fb6c3b56a05a1648145bfb13b8a8cd2383ea34b710fa987e0b 
	I0603 10:57:00.781248   25542 cni.go:84] Creating CNI manager for ""
	I0603 10:57:00.781253   25542 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0603 10:57:00.782548   25542 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0603 10:57:00.783673   25542 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0603 10:57:00.789298   25542 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.30.1/kubectl ...
	I0603 10:57:00.789311   25542 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0603 10:57:00.807718   25542 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0603 10:57:01.177801   25542 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0603 10:57:01.177898   25542 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 10:57:01.177928   25542 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-683480 minikube.k8s.io/updated_at=2024_06_03T10_57_01_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=599070631c2216ebc936292d491e4fe10e15b9d8 minikube.k8s.io/name=ha-683480 minikube.k8s.io/primary=true
	I0603 10:57:01.244001   25542 ops.go:34] apiserver oom_adj: -16
	I0603 10:57:01.350406   25542 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 10:57:01.851349   25542 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 10:57:02.351098   25542 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 10:57:02.851145   25542 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 10:57:03.350923   25542 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 10:57:03.850929   25542 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 10:57:04.350931   25542 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 10:57:04.851245   25542 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 10:57:05.351031   25542 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 10:57:05.850412   25542 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 10:57:06.350628   25542 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 10:57:06.851025   25542 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 10:57:07.350651   25542 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 10:57:07.850624   25542 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 10:57:08.350938   25542 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 10:57:08.850778   25542 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 10:57:09.350723   25542 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 10:57:09.850455   25542 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 10:57:10.351365   25542 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 10:57:10.851028   25542 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 10:57:11.351134   25542 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 10:57:11.850482   25542 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 10:57:12.351346   25542 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 10:57:12.850434   25542 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 10:57:13.350776   25542 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 10:57:13.450935   25542 kubeadm.go:1107] duration metric: took 12.273102783s to wait for elevateKubeSystemPrivileges
	W0603 10:57:13.450966   25542 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0603 10:57:13.450975   25542 kubeadm.go:393] duration metric: took 24.208380078s to StartCluster
	I0603 10:57:13.450993   25542 settings.go:142] acquiring lock: {Name:mkda1bdbbfe91266270f1d999e6d56fc2830d6f8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 10:57:13.451092   25542 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19008-7755/kubeconfig
	I0603 10:57:13.451638   25542 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19008-7755/kubeconfig: {Name:mk2f45bf5da44c5570e14124ae482cac309d97b6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 10:57:13.451815   25542 start.go:232] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.39.116 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0603 10:57:13.451834   25542 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0603 10:57:13.451848   25542 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0603 10:57:13.451905   25542 addons.go:69] Setting storage-provisioner=true in profile "ha-683480"
	I0603 10:57:13.451842   25542 start.go:240] waiting for startup goroutines ...
	I0603 10:57:13.451938   25542 addons.go:234] Setting addon storage-provisioner=true in "ha-683480"
	I0603 10:57:13.451943   25542 addons.go:69] Setting default-storageclass=true in profile "ha-683480"
	I0603 10:57:13.451965   25542 host.go:66] Checking if "ha-683480" exists ...
	I0603 10:57:13.451972   25542 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-683480"
	I0603 10:57:13.452025   25542 config.go:182] Loaded profile config "ha-683480": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0603 10:57:13.452281   25542 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 10:57:13.452308   25542 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 10:57:13.452315   25542 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 10:57:13.452348   25542 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 10:57:13.466989   25542 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36615
	I0603 10:57:13.467031   25542 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37511
	I0603 10:57:13.467385   25542 main.go:141] libmachine: () Calling .GetVersion
	I0603 10:57:13.467466   25542 main.go:141] libmachine: () Calling .GetVersion
	I0603 10:57:13.467917   25542 main.go:141] libmachine: Using API Version  1
	I0603 10:57:13.467945   25542 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 10:57:13.468018   25542 main.go:141] libmachine: Using API Version  1
	I0603 10:57:13.468039   25542 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 10:57:13.468288   25542 main.go:141] libmachine: () Calling .GetMachineName
	I0603 10:57:13.468484   25542 main.go:141] libmachine: () Calling .GetMachineName
	I0603 10:57:13.468512   25542 main.go:141] libmachine: (ha-683480) Calling .GetState
	I0603 10:57:13.468982   25542 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 10:57:13.469010   25542 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 10:57:13.470567   25542 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19008-7755/kubeconfig
	I0603 10:57:13.470813   25542 kapi.go:59] client config for ha-683480: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19008-7755/.minikube/profiles/ha-683480/client.crt", KeyFile:"/home/jenkins/minikube-integration/19008-7755/.minikube/profiles/ha-683480/client.key", CAFile:"/home/jenkins/minikube-integration/19008-7755/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1cfa500), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0603 10:57:13.471258   25542 cert_rotation.go:137] Starting client certificate rotation controller
	I0603 10:57:13.471439   25542 addons.go:234] Setting addon default-storageclass=true in "ha-683480"
	I0603 10:57:13.471468   25542 host.go:66] Checking if "ha-683480" exists ...
	I0603 10:57:13.471711   25542 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 10:57:13.471745   25542 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 10:57:13.483869   25542 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40089
	I0603 10:57:13.484336   25542 main.go:141] libmachine: () Calling .GetVersion
	I0603 10:57:13.484815   25542 main.go:141] libmachine: Using API Version  1
	I0603 10:57:13.484843   25542 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 10:57:13.485225   25542 main.go:141] libmachine: () Calling .GetMachineName
	I0603 10:57:13.485430   25542 main.go:141] libmachine: (ha-683480) Calling .GetState
	I0603 10:57:13.486116   25542 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40315
	I0603 10:57:13.486483   25542 main.go:141] libmachine: () Calling .GetVersion
	I0603 10:57:13.486955   25542 main.go:141] libmachine: Using API Version  1
	I0603 10:57:13.486977   25542 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 10:57:13.487258   25542 main.go:141] libmachine: (ha-683480) Calling .DriverName
	I0603 10:57:13.487324   25542 main.go:141] libmachine: () Calling .GetMachineName
	I0603 10:57:13.489101   25542 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0603 10:57:13.487795   25542 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 10:57:13.490256   25542 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0603 10:57:13.490269   25542 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0603 10:57:13.490281   25542 main.go:141] libmachine: (ha-683480) Calling .GetSSHHostname
	I0603 10:57:13.489136   25542 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 10:57:13.493061   25542 main.go:141] libmachine: (ha-683480) DBG | domain ha-683480 has defined MAC address 52:54:00:e5:3f:6a in network mk-ha-683480
	I0603 10:57:13.493530   25542 main.go:141] libmachine: (ha-683480) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:3f:6a", ip: ""} in network mk-ha-683480: {Iface:virbr1 ExpiryTime:2024-06-03 11:56:28 +0000 UTC Type:0 Mac:52:54:00:e5:3f:6a Iaid: IPaddr:192.168.39.116 Prefix:24 Hostname:ha-683480 Clientid:01:52:54:00:e5:3f:6a}
	I0603 10:57:13.493558   25542 main.go:141] libmachine: (ha-683480) DBG | domain ha-683480 has defined IP address 192.168.39.116 and MAC address 52:54:00:e5:3f:6a in network mk-ha-683480
	I0603 10:57:13.493702   25542 main.go:141] libmachine: (ha-683480) Calling .GetSSHPort
	I0603 10:57:13.493854   25542 main.go:141] libmachine: (ha-683480) Calling .GetSSHKeyPath
	I0603 10:57:13.493983   25542 main.go:141] libmachine: (ha-683480) Calling .GetSSHUsername
	I0603 10:57:13.494126   25542 sshutil.go:53] new ssh client: &{IP:192.168.39.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19008-7755/.minikube/machines/ha-683480/id_rsa Username:docker}
	I0603 10:57:13.505344   25542 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34295
	I0603 10:57:13.505727   25542 main.go:141] libmachine: () Calling .GetVersion
	I0603 10:57:13.506272   25542 main.go:141] libmachine: Using API Version  1
	I0603 10:57:13.506290   25542 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 10:57:13.506663   25542 main.go:141] libmachine: () Calling .GetMachineName
	I0603 10:57:13.506840   25542 main.go:141] libmachine: (ha-683480) Calling .GetState
	I0603 10:57:13.508131   25542 main.go:141] libmachine: (ha-683480) Calling .DriverName
	I0603 10:57:13.508300   25542 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0603 10:57:13.508313   25542 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0603 10:57:13.508326   25542 main.go:141] libmachine: (ha-683480) Calling .GetSSHHostname
	I0603 10:57:13.511018   25542 main.go:141] libmachine: (ha-683480) DBG | domain ha-683480 has defined MAC address 52:54:00:e5:3f:6a in network mk-ha-683480
	I0603 10:57:13.511464   25542 main.go:141] libmachine: (ha-683480) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:3f:6a", ip: ""} in network mk-ha-683480: {Iface:virbr1 ExpiryTime:2024-06-03 11:56:28 +0000 UTC Type:0 Mac:52:54:00:e5:3f:6a Iaid: IPaddr:192.168.39.116 Prefix:24 Hostname:ha-683480 Clientid:01:52:54:00:e5:3f:6a}
	I0603 10:57:13.511490   25542 main.go:141] libmachine: (ha-683480) DBG | domain ha-683480 has defined IP address 192.168.39.116 and MAC address 52:54:00:e5:3f:6a in network mk-ha-683480
	I0603 10:57:13.511609   25542 main.go:141] libmachine: (ha-683480) Calling .GetSSHPort
	I0603 10:57:13.511742   25542 main.go:141] libmachine: (ha-683480) Calling .GetSSHKeyPath
	I0603 10:57:13.511872   25542 main.go:141] libmachine: (ha-683480) Calling .GetSSHUsername
	I0603 10:57:13.512027   25542 sshutil.go:53] new ssh client: &{IP:192.168.39.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19008-7755/.minikube/machines/ha-683480/id_rsa Username:docker}
	I0603 10:57:13.577094   25542 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0603 10:57:13.650514   25542 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0603 10:57:13.694970   25542 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0603 10:57:14.112758   25542 start.go:946] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I0603 10:57:14.389503   25542 main.go:141] libmachine: Making call to close driver server
	I0603 10:57:14.389530   25542 main.go:141] libmachine: (ha-683480) Calling .Close
	I0603 10:57:14.389534   25542 main.go:141] libmachine: Making call to close driver server
	I0603 10:57:14.389545   25542 main.go:141] libmachine: (ha-683480) Calling .Close
	I0603 10:57:14.389827   25542 main.go:141] libmachine: Successfully made call to close driver server
	I0603 10:57:14.389845   25542 main.go:141] libmachine: Making call to close connection to plugin binary
	I0603 10:57:14.389853   25542 main.go:141] libmachine: Making call to close driver server
	I0603 10:57:14.389860   25542 main.go:141] libmachine: (ha-683480) Calling .Close
	I0603 10:57:14.389866   25542 main.go:141] libmachine: (ha-683480) DBG | Closing plugin on server side
	I0603 10:57:14.389904   25542 main.go:141] libmachine: Successfully made call to close driver server
	I0603 10:57:14.389906   25542 main.go:141] libmachine: (ha-683480) DBG | Closing plugin on server side
	I0603 10:57:14.389921   25542 main.go:141] libmachine: Making call to close connection to plugin binary
	I0603 10:57:14.389931   25542 main.go:141] libmachine: Making call to close driver server
	I0603 10:57:14.389941   25542 main.go:141] libmachine: (ha-683480) Calling .Close
	I0603 10:57:14.391412   25542 main.go:141] libmachine: (ha-683480) DBG | Closing plugin on server side
	I0603 10:57:14.391429   25542 main.go:141] libmachine: Successfully made call to close driver server
	I0603 10:57:14.391416   25542 main.go:141] libmachine: Successfully made call to close driver server
	I0603 10:57:14.391444   25542 main.go:141] libmachine: Making call to close connection to plugin binary
	I0603 10:57:14.391450   25542 main.go:141] libmachine: Making call to close connection to plugin binary
	I0603 10:57:14.391575   25542 round_trippers.go:463] GET https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses
	I0603 10:57:14.391591   25542 round_trippers.go:469] Request Headers:
	I0603 10:57:14.391602   25542 round_trippers.go:473]     Accept: application/json, */*
	I0603 10:57:14.391608   25542 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 10:57:14.405075   25542 round_trippers.go:574] Response Status: 200 OK in 13 milliseconds
	I0603 10:57:14.405550   25542 round_trippers.go:463] PUT https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0603 10:57:14.405564   25542 round_trippers.go:469] Request Headers:
	I0603 10:57:14.405571   25542 round_trippers.go:473]     Accept: application/json, */*
	I0603 10:57:14.405575   25542 round_trippers.go:473]     Content-Type: application/json
	I0603 10:57:14.405579   25542 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 10:57:14.408132   25542 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0603 10:57:14.408343   25542 main.go:141] libmachine: Making call to close driver server
	I0603 10:57:14.408358   25542 main.go:141] libmachine: (ha-683480) Calling .Close
	I0603 10:57:14.408582   25542 main.go:141] libmachine: Successfully made call to close driver server
	I0603 10:57:14.408604   25542 main.go:141] libmachine: Making call to close connection to plugin binary
	I0603 10:57:14.408636   25542 main.go:141] libmachine: (ha-683480) DBG | Closing plugin on server side
	I0603 10:57:14.410150   25542 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0603 10:57:14.411371   25542 addons.go:510] duration metric: took 959.517318ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I0603 10:57:14.411421   25542 start.go:245] waiting for cluster config update ...
	I0603 10:57:14.411435   25542 start.go:254] writing updated cluster config ...
	I0603 10:57:14.412828   25542 out.go:177] 
	I0603 10:57:14.413974   25542 config.go:182] Loaded profile config "ha-683480": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0603 10:57:14.414032   25542 profile.go:143] Saving config to /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/ha-683480/config.json ...
	I0603 10:57:14.415360   25542 out.go:177] * Starting "ha-683480-m02" control-plane node in "ha-683480" cluster
	I0603 10:57:14.416242   25542 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime crio
	I0603 10:57:14.416260   25542 cache.go:56] Caching tarball of preloaded images
	I0603 10:57:14.416323   25542 preload.go:173] Found /home/jenkins/minikube-integration/19008-7755/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0603 10:57:14.416334   25542 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on crio
	I0603 10:57:14.416393   25542 profile.go:143] Saving config to /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/ha-683480/config.json ...
	I0603 10:57:14.416567   25542 start.go:360] acquireMachinesLock for ha-683480-m02: {Name:mk7389fd163f37e28f7d4d842bab151f7e27bc7c Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0603 10:57:14.416614   25542 start.go:364] duration metric: took 26.687µs to acquireMachinesLock for "ha-683480-m02"
	I0603 10:57:14.416639   25542 start.go:93] Provisioning new machine with config: &{Name:ha-683480 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:ha-683480 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.116 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0603 10:57:14.416727   25542 start.go:125] createHost starting for "m02" (driver="kvm2")
	I0603 10:57:14.418166   25542 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0603 10:57:14.418228   25542 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 10:57:14.418250   25542 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 10:57:14.432219   25542 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34223
	I0603 10:57:14.432598   25542 main.go:141] libmachine: () Calling .GetVersion
	I0603 10:57:14.433005   25542 main.go:141] libmachine: Using API Version  1
	I0603 10:57:14.433027   25542 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 10:57:14.433373   25542 main.go:141] libmachine: () Calling .GetMachineName
	I0603 10:57:14.433550   25542 main.go:141] libmachine: (ha-683480-m02) Calling .GetMachineName
	I0603 10:57:14.433658   25542 main.go:141] libmachine: (ha-683480-m02) Calling .DriverName
	I0603 10:57:14.433802   25542 start.go:159] libmachine.API.Create for "ha-683480" (driver="kvm2")
	I0603 10:57:14.433830   25542 client.go:168] LocalClient.Create starting
	I0603 10:57:14.433860   25542 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19008-7755/.minikube/certs/ca.pem
	I0603 10:57:14.433896   25542 main.go:141] libmachine: Decoding PEM data...
	I0603 10:57:14.433916   25542 main.go:141] libmachine: Parsing certificate...
	I0603 10:57:14.433978   25542 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19008-7755/.minikube/certs/cert.pem
	I0603 10:57:14.434007   25542 main.go:141] libmachine: Decoding PEM data...
	I0603 10:57:14.434024   25542 main.go:141] libmachine: Parsing certificate...
	I0603 10:57:14.434048   25542 main.go:141] libmachine: Running pre-create checks...
	I0603 10:57:14.434060   25542 main.go:141] libmachine: (ha-683480-m02) Calling .PreCreateCheck
	I0603 10:57:14.434215   25542 main.go:141] libmachine: (ha-683480-m02) Calling .GetConfigRaw
	I0603 10:57:14.434590   25542 main.go:141] libmachine: Creating machine...
	I0603 10:57:14.434604   25542 main.go:141] libmachine: (ha-683480-m02) Calling .Create
	I0603 10:57:14.434715   25542 main.go:141] libmachine: (ha-683480-m02) Creating KVM machine...
	I0603 10:57:14.435989   25542 main.go:141] libmachine: (ha-683480-m02) DBG | found existing default KVM network
	I0603 10:57:14.436155   25542 main.go:141] libmachine: (ha-683480-m02) DBG | found existing private KVM network mk-ha-683480
	I0603 10:57:14.436266   25542 main.go:141] libmachine: (ha-683480-m02) Setting up store path in /home/jenkins/minikube-integration/19008-7755/.minikube/machines/ha-683480-m02 ...
	I0603 10:57:14.436285   25542 main.go:141] libmachine: (ha-683480-m02) Building disk image from file:///home/jenkins/minikube-integration/19008-7755/.minikube/cache/iso/amd64/minikube-v1.33.1-1716398070-18934-amd64.iso
	I0603 10:57:14.436354   25542 main.go:141] libmachine: (ha-683480-m02) DBG | I0603 10:57:14.436263   25955 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19008-7755/.minikube
	I0603 10:57:14.436467   25542 main.go:141] libmachine: (ha-683480-m02) Downloading /home/jenkins/minikube-integration/19008-7755/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19008-7755/.minikube/cache/iso/amd64/minikube-v1.33.1-1716398070-18934-amd64.iso...
	I0603 10:57:14.655395   25542 main.go:141] libmachine: (ha-683480-m02) DBG | I0603 10:57:14.655264   25955 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19008-7755/.minikube/machines/ha-683480-m02/id_rsa...
	I0603 10:57:15.185299   25542 main.go:141] libmachine: (ha-683480-m02) DBG | I0603 10:57:15.185194   25955 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19008-7755/.minikube/machines/ha-683480-m02/ha-683480-m02.rawdisk...
	I0603 10:57:15.185331   25542 main.go:141] libmachine: (ha-683480-m02) DBG | Writing magic tar header
	I0603 10:57:15.185347   25542 main.go:141] libmachine: (ha-683480-m02) DBG | Writing SSH key tar header
	I0603 10:57:15.185360   25542 main.go:141] libmachine: (ha-683480-m02) DBG | I0603 10:57:15.185297   25955 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19008-7755/.minikube/machines/ha-683480-m02 ...
	I0603 10:57:15.185455   25542 main.go:141] libmachine: (ha-683480-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19008-7755/.minikube/machines/ha-683480-m02
	I0603 10:57:15.185489   25542 main.go:141] libmachine: (ha-683480-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19008-7755/.minikube/machines
	I0603 10:57:15.185506   25542 main.go:141] libmachine: (ha-683480-m02) Setting executable bit set on /home/jenkins/minikube-integration/19008-7755/.minikube/machines/ha-683480-m02 (perms=drwx------)
	I0603 10:57:15.185525   25542 main.go:141] libmachine: (ha-683480-m02) Setting executable bit set on /home/jenkins/minikube-integration/19008-7755/.minikube/machines (perms=drwxr-xr-x)
	I0603 10:57:15.185540   25542 main.go:141] libmachine: (ha-683480-m02) Setting executable bit set on /home/jenkins/minikube-integration/19008-7755/.minikube (perms=drwxr-xr-x)
	I0603 10:57:15.185554   25542 main.go:141] libmachine: (ha-683480-m02) Setting executable bit set on /home/jenkins/minikube-integration/19008-7755 (perms=drwxrwxr-x)
	I0603 10:57:15.185569   25542 main.go:141] libmachine: (ha-683480-m02) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0603 10:57:15.185582   25542 main.go:141] libmachine: (ha-683480-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19008-7755/.minikube
	I0603 10:57:15.185594   25542 main.go:141] libmachine: (ha-683480-m02) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0603 10:57:15.185610   25542 main.go:141] libmachine: (ha-683480-m02) Creating domain...
	I0603 10:57:15.185627   25542 main.go:141] libmachine: (ha-683480-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19008-7755
	I0603 10:57:15.185640   25542 main.go:141] libmachine: (ha-683480-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0603 10:57:15.185654   25542 main.go:141] libmachine: (ha-683480-m02) DBG | Checking permissions on dir: /home/jenkins
	I0603 10:57:15.185670   25542 main.go:141] libmachine: (ha-683480-m02) DBG | Checking permissions on dir: /home
	I0603 10:57:15.185685   25542 main.go:141] libmachine: (ha-683480-m02) DBG | Skipping /home - not owner
	I0603 10:57:15.186408   25542 main.go:141] libmachine: (ha-683480-m02) define libvirt domain using xml: 
	I0603 10:57:15.186431   25542 main.go:141] libmachine: (ha-683480-m02) <domain type='kvm'>
	I0603 10:57:15.186442   25542 main.go:141] libmachine: (ha-683480-m02)   <name>ha-683480-m02</name>
	I0603 10:57:15.186450   25542 main.go:141] libmachine: (ha-683480-m02)   <memory unit='MiB'>2200</memory>
	I0603 10:57:15.186458   25542 main.go:141] libmachine: (ha-683480-m02)   <vcpu>2</vcpu>
	I0603 10:57:15.186465   25542 main.go:141] libmachine: (ha-683480-m02)   <features>
	I0603 10:57:15.186473   25542 main.go:141] libmachine: (ha-683480-m02)     <acpi/>
	I0603 10:57:15.186479   25542 main.go:141] libmachine: (ha-683480-m02)     <apic/>
	I0603 10:57:15.186484   25542 main.go:141] libmachine: (ha-683480-m02)     <pae/>
	I0603 10:57:15.186488   25542 main.go:141] libmachine: (ha-683480-m02)     
	I0603 10:57:15.186495   25542 main.go:141] libmachine: (ha-683480-m02)   </features>
	I0603 10:57:15.186499   25542 main.go:141] libmachine: (ha-683480-m02)   <cpu mode='host-passthrough'>
	I0603 10:57:15.186504   25542 main.go:141] libmachine: (ha-683480-m02)   
	I0603 10:57:15.186512   25542 main.go:141] libmachine: (ha-683480-m02)   </cpu>
	I0603 10:57:15.186517   25542 main.go:141] libmachine: (ha-683480-m02)   <os>
	I0603 10:57:15.186522   25542 main.go:141] libmachine: (ha-683480-m02)     <type>hvm</type>
	I0603 10:57:15.186527   25542 main.go:141] libmachine: (ha-683480-m02)     <boot dev='cdrom'/>
	I0603 10:57:15.186533   25542 main.go:141] libmachine: (ha-683480-m02)     <boot dev='hd'/>
	I0603 10:57:15.186542   25542 main.go:141] libmachine: (ha-683480-m02)     <bootmenu enable='no'/>
	I0603 10:57:15.186546   25542 main.go:141] libmachine: (ha-683480-m02)   </os>
	I0603 10:57:15.186551   25542 main.go:141] libmachine: (ha-683480-m02)   <devices>
	I0603 10:57:15.186558   25542 main.go:141] libmachine: (ha-683480-m02)     <disk type='file' device='cdrom'>
	I0603 10:57:15.186565   25542 main.go:141] libmachine: (ha-683480-m02)       <source file='/home/jenkins/minikube-integration/19008-7755/.minikube/machines/ha-683480-m02/boot2docker.iso'/>
	I0603 10:57:15.186570   25542 main.go:141] libmachine: (ha-683480-m02)       <target dev='hdc' bus='scsi'/>
	I0603 10:57:15.186578   25542 main.go:141] libmachine: (ha-683480-m02)       <readonly/>
	I0603 10:57:15.186585   25542 main.go:141] libmachine: (ha-683480-m02)     </disk>
	I0603 10:57:15.186591   25542 main.go:141] libmachine: (ha-683480-m02)     <disk type='file' device='disk'>
	I0603 10:57:15.186600   25542 main.go:141] libmachine: (ha-683480-m02)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0603 10:57:15.186629   25542 main.go:141] libmachine: (ha-683480-m02)       <source file='/home/jenkins/minikube-integration/19008-7755/.minikube/machines/ha-683480-m02/ha-683480-m02.rawdisk'/>
	I0603 10:57:15.186651   25542 main.go:141] libmachine: (ha-683480-m02)       <target dev='hda' bus='virtio'/>
	I0603 10:57:15.186665   25542 main.go:141] libmachine: (ha-683480-m02)     </disk>
	I0603 10:57:15.186677   25542 main.go:141] libmachine: (ha-683480-m02)     <interface type='network'>
	I0603 10:57:15.186689   25542 main.go:141] libmachine: (ha-683480-m02)       <source network='mk-ha-683480'/>
	I0603 10:57:15.186701   25542 main.go:141] libmachine: (ha-683480-m02)       <model type='virtio'/>
	I0603 10:57:15.186711   25542 main.go:141] libmachine: (ha-683480-m02)     </interface>
	I0603 10:57:15.186727   25542 main.go:141] libmachine: (ha-683480-m02)     <interface type='network'>
	I0603 10:57:15.186741   25542 main.go:141] libmachine: (ha-683480-m02)       <source network='default'/>
	I0603 10:57:15.186753   25542 main.go:141] libmachine: (ha-683480-m02)       <model type='virtio'/>
	I0603 10:57:15.186766   25542 main.go:141] libmachine: (ha-683480-m02)     </interface>
	I0603 10:57:15.186777   25542 main.go:141] libmachine: (ha-683480-m02)     <serial type='pty'>
	I0603 10:57:15.186790   25542 main.go:141] libmachine: (ha-683480-m02)       <target port='0'/>
	I0603 10:57:15.186806   25542 main.go:141] libmachine: (ha-683480-m02)     </serial>
	I0603 10:57:15.186820   25542 main.go:141] libmachine: (ha-683480-m02)     <console type='pty'>
	I0603 10:57:15.186831   25542 main.go:141] libmachine: (ha-683480-m02)       <target type='serial' port='0'/>
	I0603 10:57:15.186842   25542 main.go:141] libmachine: (ha-683480-m02)     </console>
	I0603 10:57:15.186856   25542 main.go:141] libmachine: (ha-683480-m02)     <rng model='virtio'>
	I0603 10:57:15.186871   25542 main.go:141] libmachine: (ha-683480-m02)       <backend model='random'>/dev/random</backend>
	I0603 10:57:15.186881   25542 main.go:141] libmachine: (ha-683480-m02)     </rng>
	I0603 10:57:15.186897   25542 main.go:141] libmachine: (ha-683480-m02)     
	I0603 10:57:15.186916   25542 main.go:141] libmachine: (ha-683480-m02)     
	I0603 10:57:15.186926   25542 main.go:141] libmachine: (ha-683480-m02)   </devices>
	I0603 10:57:15.186939   25542 main.go:141] libmachine: (ha-683480-m02) </domain>
	I0603 10:57:15.186953   25542 main.go:141] libmachine: (ha-683480-m02) 
	I0603 10:57:15.193041   25542 main.go:141] libmachine: (ha-683480-m02) DBG | domain ha-683480-m02 has defined MAC address 52:54:00:3a:60:13 in network default
	I0603 10:57:15.193546   25542 main.go:141] libmachine: (ha-683480-m02) Ensuring networks are active...
	I0603 10:57:15.193566   25542 main.go:141] libmachine: (ha-683480-m02) DBG | domain ha-683480-m02 has defined MAC address 52:54:00:00:55:50 in network mk-ha-683480
	I0603 10:57:15.194084   25542 main.go:141] libmachine: (ha-683480-m02) Ensuring network default is active
	I0603 10:57:15.194316   25542 main.go:141] libmachine: (ha-683480-m02) Ensuring network mk-ha-683480 is active
	I0603 10:57:15.194621   25542 main.go:141] libmachine: (ha-683480-m02) Getting domain xml...
	I0603 10:57:15.195250   25542 main.go:141] libmachine: (ha-683480-m02) Creating domain...
	I0603 10:57:16.367029   25542 main.go:141] libmachine: (ha-683480-m02) Waiting to get IP...
	I0603 10:57:16.367813   25542 main.go:141] libmachine: (ha-683480-m02) DBG | domain ha-683480-m02 has defined MAC address 52:54:00:00:55:50 in network mk-ha-683480
	I0603 10:57:16.368282   25542 main.go:141] libmachine: (ha-683480-m02) DBG | unable to find current IP address of domain ha-683480-m02 in network mk-ha-683480
	I0603 10:57:16.368378   25542 main.go:141] libmachine: (ha-683480-m02) DBG | I0603 10:57:16.368290   25955 retry.go:31] will retry after 193.520583ms: waiting for machine to come up
	I0603 10:57:16.563737   25542 main.go:141] libmachine: (ha-683480-m02) DBG | domain ha-683480-m02 has defined MAC address 52:54:00:00:55:50 in network mk-ha-683480
	I0603 10:57:16.564186   25542 main.go:141] libmachine: (ha-683480-m02) DBG | unable to find current IP address of domain ha-683480-m02 in network mk-ha-683480
	I0603 10:57:16.564211   25542 main.go:141] libmachine: (ha-683480-m02) DBG | I0603 10:57:16.564136   25955 retry.go:31] will retry after 307.356676ms: waiting for machine to come up
	I0603 10:57:16.873758   25542 main.go:141] libmachine: (ha-683480-m02) DBG | domain ha-683480-m02 has defined MAC address 52:54:00:00:55:50 in network mk-ha-683480
	I0603 10:57:16.874264   25542 main.go:141] libmachine: (ha-683480-m02) DBG | unable to find current IP address of domain ha-683480-m02 in network mk-ha-683480
	I0603 10:57:16.874284   25542 main.go:141] libmachine: (ha-683480-m02) DBG | I0603 10:57:16.874225   25955 retry.go:31] will retry after 472.611486ms: waiting for machine to come up
	I0603 10:57:17.349612   25542 main.go:141] libmachine: (ha-683480-m02) DBG | domain ha-683480-m02 has defined MAC address 52:54:00:00:55:50 in network mk-ha-683480
	I0603 10:57:17.350085   25542 main.go:141] libmachine: (ha-683480-m02) DBG | unable to find current IP address of domain ha-683480-m02 in network mk-ha-683480
	I0603 10:57:17.350120   25542 main.go:141] libmachine: (ha-683480-m02) DBG | I0603 10:57:17.350025   25955 retry.go:31] will retry after 591.878376ms: waiting for machine to come up
	I0603 10:57:17.943698   25542 main.go:141] libmachine: (ha-683480-m02) DBG | domain ha-683480-m02 has defined MAC address 52:54:00:00:55:50 in network mk-ha-683480
	I0603 10:57:17.944257   25542 main.go:141] libmachine: (ha-683480-m02) DBG | unable to find current IP address of domain ha-683480-m02 in network mk-ha-683480
	I0603 10:57:17.944284   25542 main.go:141] libmachine: (ha-683480-m02) DBG | I0603 10:57:17.944211   25955 retry.go:31] will retry after 519.190327ms: waiting for machine to come up
	I0603 10:57:18.464918   25542 main.go:141] libmachine: (ha-683480-m02) DBG | domain ha-683480-m02 has defined MAC address 52:54:00:00:55:50 in network mk-ha-683480
	I0603 10:57:18.465352   25542 main.go:141] libmachine: (ha-683480-m02) DBG | unable to find current IP address of domain ha-683480-m02 in network mk-ha-683480
	I0603 10:57:18.465378   25542 main.go:141] libmachine: (ha-683480-m02) DBG | I0603 10:57:18.465309   25955 retry.go:31] will retry after 731.947356ms: waiting for machine to come up
	I0603 10:57:19.199086   25542 main.go:141] libmachine: (ha-683480-m02) DBG | domain ha-683480-m02 has defined MAC address 52:54:00:00:55:50 in network mk-ha-683480
	I0603 10:57:19.199606   25542 main.go:141] libmachine: (ha-683480-m02) DBG | unable to find current IP address of domain ha-683480-m02 in network mk-ha-683480
	I0603 10:57:19.199663   25542 main.go:141] libmachine: (ha-683480-m02) DBG | I0603 10:57:19.199578   25955 retry.go:31] will retry after 811.745735ms: waiting for machine to come up
	I0603 10:57:20.012877   25542 main.go:141] libmachine: (ha-683480-m02) DBG | domain ha-683480-m02 has defined MAC address 52:54:00:00:55:50 in network mk-ha-683480
	I0603 10:57:20.013282   25542 main.go:141] libmachine: (ha-683480-m02) DBG | unable to find current IP address of domain ha-683480-m02 in network mk-ha-683480
	I0603 10:57:20.013311   25542 main.go:141] libmachine: (ha-683480-m02) DBG | I0603 10:57:20.013223   25955 retry.go:31] will retry after 1.069722903s: waiting for machine to come up
	I0603 10:57:21.084068   25542 main.go:141] libmachine: (ha-683480-m02) DBG | domain ha-683480-m02 has defined MAC address 52:54:00:00:55:50 in network mk-ha-683480
	I0603 10:57:21.084430   25542 main.go:141] libmachine: (ha-683480-m02) DBG | unable to find current IP address of domain ha-683480-m02 in network mk-ha-683480
	I0603 10:57:21.084455   25542 main.go:141] libmachine: (ha-683480-m02) DBG | I0603 10:57:21.084391   25955 retry.go:31] will retry after 1.701630144s: waiting for machine to come up
	I0603 10:57:22.788183   25542 main.go:141] libmachine: (ha-683480-m02) DBG | domain ha-683480-m02 has defined MAC address 52:54:00:00:55:50 in network mk-ha-683480
	I0603 10:57:22.788532   25542 main.go:141] libmachine: (ha-683480-m02) DBG | unable to find current IP address of domain ha-683480-m02 in network mk-ha-683480
	I0603 10:57:22.788560   25542 main.go:141] libmachine: (ha-683480-m02) DBG | I0603 10:57:22.788496   25955 retry.go:31] will retry after 2.200034704s: waiting for machine to come up
	I0603 10:57:24.990706   25542 main.go:141] libmachine: (ha-683480-m02) DBG | domain ha-683480-m02 has defined MAC address 52:54:00:00:55:50 in network mk-ha-683480
	I0603 10:57:24.991153   25542 main.go:141] libmachine: (ha-683480-m02) DBG | unable to find current IP address of domain ha-683480-m02 in network mk-ha-683480
	I0603 10:57:24.991180   25542 main.go:141] libmachine: (ha-683480-m02) DBG | I0603 10:57:24.991102   25955 retry.go:31] will retry after 2.006922002s: waiting for machine to come up
	I0603 10:57:27.000099   25542 main.go:141] libmachine: (ha-683480-m02) DBG | domain ha-683480-m02 has defined MAC address 52:54:00:00:55:50 in network mk-ha-683480
	I0603 10:57:27.000520   25542 main.go:141] libmachine: (ha-683480-m02) DBG | unable to find current IP address of domain ha-683480-m02 in network mk-ha-683480
	I0603 10:57:27.000551   25542 main.go:141] libmachine: (ha-683480-m02) DBG | I0603 10:57:27.000478   25955 retry.go:31] will retry after 3.012739848s: waiting for machine to come up
	I0603 10:57:30.014260   25542 main.go:141] libmachine: (ha-683480-m02) DBG | domain ha-683480-m02 has defined MAC address 52:54:00:00:55:50 in network mk-ha-683480
	I0603 10:57:30.014617   25542 main.go:141] libmachine: (ha-683480-m02) DBG | unable to find current IP address of domain ha-683480-m02 in network mk-ha-683480
	I0603 10:57:30.014645   25542 main.go:141] libmachine: (ha-683480-m02) DBG | I0603 10:57:30.014569   25955 retry.go:31] will retry after 3.749957057s: waiting for machine to come up
	I0603 10:57:33.768377   25542 main.go:141] libmachine: (ha-683480-m02) DBG | domain ha-683480-m02 has defined MAC address 52:54:00:00:55:50 in network mk-ha-683480
	I0603 10:57:33.768786   25542 main.go:141] libmachine: (ha-683480-m02) DBG | unable to find current IP address of domain ha-683480-m02 in network mk-ha-683480
	I0603 10:57:33.768814   25542 main.go:141] libmachine: (ha-683480-m02) DBG | I0603 10:57:33.768748   25955 retry.go:31] will retry after 4.367337728s: waiting for machine to come up
	I0603 10:57:38.140449   25542 main.go:141] libmachine: (ha-683480-m02) DBG | domain ha-683480-m02 has defined MAC address 52:54:00:00:55:50 in network mk-ha-683480
	I0603 10:57:38.140780   25542 main.go:141] libmachine: (ha-683480-m02) DBG | domain ha-683480-m02 has current primary IP address 192.168.39.127 and MAC address 52:54:00:00:55:50 in network mk-ha-683480
	I0603 10:57:38.140812   25542 main.go:141] libmachine: (ha-683480-m02) Found IP for machine: 192.168.39.127
	I0603 10:57:38.140826   25542 main.go:141] libmachine: (ha-683480-m02) Reserving static IP address...
	I0603 10:57:38.141205   25542 main.go:141] libmachine: (ha-683480-m02) DBG | unable to find host DHCP lease matching {name: "ha-683480-m02", mac: "52:54:00:00:55:50", ip: "192.168.39.127"} in network mk-ha-683480
	I0603 10:57:38.210897   25542 main.go:141] libmachine: (ha-683480-m02) DBG | Getting to WaitForSSH function...
	I0603 10:57:38.210931   25542 main.go:141] libmachine: (ha-683480-m02) Reserved static IP address: 192.168.39.127
	I0603 10:57:38.210944   25542 main.go:141] libmachine: (ha-683480-m02) Waiting for SSH to be available...
	I0603 10:57:38.213534   25542 main.go:141] libmachine: (ha-683480-m02) DBG | domain ha-683480-m02 has defined MAC address 52:54:00:00:55:50 in network mk-ha-683480
	I0603 10:57:38.213888   25542 main.go:141] libmachine: (ha-683480-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:55:50", ip: ""} in network mk-ha-683480: {Iface:virbr1 ExpiryTime:2024-06-03 11:57:29 +0000 UTC Type:0 Mac:52:54:00:00:55:50 Iaid: IPaddr:192.168.39.127 Prefix:24 Hostname:minikube Clientid:01:52:54:00:00:55:50}
	I0603 10:57:38.213910   25542 main.go:141] libmachine: (ha-683480-m02) DBG | domain ha-683480-m02 has defined IP address 192.168.39.127 and MAC address 52:54:00:00:55:50 in network mk-ha-683480
	I0603 10:57:38.214073   25542 main.go:141] libmachine: (ha-683480-m02) DBG | Using SSH client type: external
	I0603 10:57:38.214097   25542 main.go:141] libmachine: (ha-683480-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/19008-7755/.minikube/machines/ha-683480-m02/id_rsa (-rw-------)
	I0603 10:57:38.214129   25542 main.go:141] libmachine: (ha-683480-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.127 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19008-7755/.minikube/machines/ha-683480-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0603 10:57:38.214139   25542 main.go:141] libmachine: (ha-683480-m02) DBG | About to run SSH command:
	I0603 10:57:38.214149   25542 main.go:141] libmachine: (ha-683480-m02) DBG | exit 0
	I0603 10:57:38.339014   25542 main.go:141] libmachine: (ha-683480-m02) DBG | SSH cmd err, output: <nil>: 
	I0603 10:57:38.339297   25542 main.go:141] libmachine: (ha-683480-m02) KVM machine creation complete!
	I0603 10:57:38.339651   25542 main.go:141] libmachine: (ha-683480-m02) Calling .GetConfigRaw
	I0603 10:57:38.340266   25542 main.go:141] libmachine: (ha-683480-m02) Calling .DriverName
	I0603 10:57:38.340453   25542 main.go:141] libmachine: (ha-683480-m02) Calling .DriverName
	I0603 10:57:38.340608   25542 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0603 10:57:38.340624   25542 main.go:141] libmachine: (ha-683480-m02) Calling .GetState
	I0603 10:57:38.341870   25542 main.go:141] libmachine: Detecting operating system of created instance...
	I0603 10:57:38.341886   25542 main.go:141] libmachine: Waiting for SSH to be available...
	I0603 10:57:38.341897   25542 main.go:141] libmachine: Getting to WaitForSSH function...
	I0603 10:57:38.341907   25542 main.go:141] libmachine: (ha-683480-m02) Calling .GetSSHHostname
	I0603 10:57:38.344129   25542 main.go:141] libmachine: (ha-683480-m02) DBG | domain ha-683480-m02 has defined MAC address 52:54:00:00:55:50 in network mk-ha-683480
	I0603 10:57:38.344460   25542 main.go:141] libmachine: (ha-683480-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:55:50", ip: ""} in network mk-ha-683480: {Iface:virbr1 ExpiryTime:2024-06-03 11:57:29 +0000 UTC Type:0 Mac:52:54:00:00:55:50 Iaid: IPaddr:192.168.39.127 Prefix:24 Hostname:ha-683480-m02 Clientid:01:52:54:00:00:55:50}
	I0603 10:57:38.344484   25542 main.go:141] libmachine: (ha-683480-m02) DBG | domain ha-683480-m02 has defined IP address 192.168.39.127 and MAC address 52:54:00:00:55:50 in network mk-ha-683480
	I0603 10:57:38.344614   25542 main.go:141] libmachine: (ha-683480-m02) Calling .GetSSHPort
	I0603 10:57:38.344772   25542 main.go:141] libmachine: (ha-683480-m02) Calling .GetSSHKeyPath
	I0603 10:57:38.344907   25542 main.go:141] libmachine: (ha-683480-m02) Calling .GetSSHKeyPath
	I0603 10:57:38.345048   25542 main.go:141] libmachine: (ha-683480-m02) Calling .GetSSHUsername
	I0603 10:57:38.345204   25542 main.go:141] libmachine: Using SSH client type: native
	I0603 10:57:38.345429   25542 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.127 22 <nil> <nil>}
	I0603 10:57:38.345447   25542 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0603 10:57:38.454187   25542 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0603 10:57:38.454213   25542 main.go:141] libmachine: Detecting the provisioner...
	I0603 10:57:38.454226   25542 main.go:141] libmachine: (ha-683480-m02) Calling .GetSSHHostname
	I0603 10:57:38.457069   25542 main.go:141] libmachine: (ha-683480-m02) DBG | domain ha-683480-m02 has defined MAC address 52:54:00:00:55:50 in network mk-ha-683480
	I0603 10:57:38.457474   25542 main.go:141] libmachine: (ha-683480-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:55:50", ip: ""} in network mk-ha-683480: {Iface:virbr1 ExpiryTime:2024-06-03 11:57:29 +0000 UTC Type:0 Mac:52:54:00:00:55:50 Iaid: IPaddr:192.168.39.127 Prefix:24 Hostname:ha-683480-m02 Clientid:01:52:54:00:00:55:50}
	I0603 10:57:38.457502   25542 main.go:141] libmachine: (ha-683480-m02) DBG | domain ha-683480-m02 has defined IP address 192.168.39.127 and MAC address 52:54:00:00:55:50 in network mk-ha-683480
	I0603 10:57:38.457644   25542 main.go:141] libmachine: (ha-683480-m02) Calling .GetSSHPort
	I0603 10:57:38.457862   25542 main.go:141] libmachine: (ha-683480-m02) Calling .GetSSHKeyPath
	I0603 10:57:38.458059   25542 main.go:141] libmachine: (ha-683480-m02) Calling .GetSSHKeyPath
	I0603 10:57:38.458221   25542 main.go:141] libmachine: (ha-683480-m02) Calling .GetSSHUsername
	I0603 10:57:38.458416   25542 main.go:141] libmachine: Using SSH client type: native
	I0603 10:57:38.458568   25542 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.127 22 <nil> <nil>}
	I0603 10:57:38.458578   25542 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0603 10:57:38.567543   25542 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0603 10:57:38.567592   25542 main.go:141] libmachine: found compatible host: buildroot
	I0603 10:57:38.567598   25542 main.go:141] libmachine: Provisioning with buildroot...
	I0603 10:57:38.567605   25542 main.go:141] libmachine: (ha-683480-m02) Calling .GetMachineName
	I0603 10:57:38.567885   25542 buildroot.go:166] provisioning hostname "ha-683480-m02"
	I0603 10:57:38.567912   25542 main.go:141] libmachine: (ha-683480-m02) Calling .GetMachineName
	I0603 10:57:38.568110   25542 main.go:141] libmachine: (ha-683480-m02) Calling .GetSSHHostname
	I0603 10:57:38.570679   25542 main.go:141] libmachine: (ha-683480-m02) DBG | domain ha-683480-m02 has defined MAC address 52:54:00:00:55:50 in network mk-ha-683480
	I0603 10:57:38.571058   25542 main.go:141] libmachine: (ha-683480-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:55:50", ip: ""} in network mk-ha-683480: {Iface:virbr1 ExpiryTime:2024-06-03 11:57:29 +0000 UTC Type:0 Mac:52:54:00:00:55:50 Iaid: IPaddr:192.168.39.127 Prefix:24 Hostname:ha-683480-m02 Clientid:01:52:54:00:00:55:50}
	I0603 10:57:38.571091   25542 main.go:141] libmachine: (ha-683480-m02) DBG | domain ha-683480-m02 has defined IP address 192.168.39.127 and MAC address 52:54:00:00:55:50 in network mk-ha-683480
	I0603 10:57:38.571206   25542 main.go:141] libmachine: (ha-683480-m02) Calling .GetSSHPort
	I0603 10:57:38.571372   25542 main.go:141] libmachine: (ha-683480-m02) Calling .GetSSHKeyPath
	I0603 10:57:38.571513   25542 main.go:141] libmachine: (ha-683480-m02) Calling .GetSSHKeyPath
	I0603 10:57:38.571611   25542 main.go:141] libmachine: (ha-683480-m02) Calling .GetSSHUsername
	I0603 10:57:38.571766   25542 main.go:141] libmachine: Using SSH client type: native
	I0603 10:57:38.571925   25542 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.127 22 <nil> <nil>}
	I0603 10:57:38.571938   25542 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-683480-m02 && echo "ha-683480-m02" | sudo tee /etc/hostname
	I0603 10:57:38.699854   25542 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-683480-m02
	
	I0603 10:57:38.699883   25542 main.go:141] libmachine: (ha-683480-m02) Calling .GetSSHHostname
	I0603 10:57:38.702515   25542 main.go:141] libmachine: (ha-683480-m02) DBG | domain ha-683480-m02 has defined MAC address 52:54:00:00:55:50 in network mk-ha-683480
	I0603 10:57:38.702888   25542 main.go:141] libmachine: (ha-683480-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:55:50", ip: ""} in network mk-ha-683480: {Iface:virbr1 ExpiryTime:2024-06-03 11:57:29 +0000 UTC Type:0 Mac:52:54:00:00:55:50 Iaid: IPaddr:192.168.39.127 Prefix:24 Hostname:ha-683480-m02 Clientid:01:52:54:00:00:55:50}
	I0603 10:57:38.702914   25542 main.go:141] libmachine: (ha-683480-m02) DBG | domain ha-683480-m02 has defined IP address 192.168.39.127 and MAC address 52:54:00:00:55:50 in network mk-ha-683480
	I0603 10:57:38.703106   25542 main.go:141] libmachine: (ha-683480-m02) Calling .GetSSHPort
	I0603 10:57:38.703302   25542 main.go:141] libmachine: (ha-683480-m02) Calling .GetSSHKeyPath
	I0603 10:57:38.703430   25542 main.go:141] libmachine: (ha-683480-m02) Calling .GetSSHKeyPath
	I0603 10:57:38.703574   25542 main.go:141] libmachine: (ha-683480-m02) Calling .GetSSHUsername
	I0603 10:57:38.703754   25542 main.go:141] libmachine: Using SSH client type: native
	I0603 10:57:38.703899   25542 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.127 22 <nil> <nil>}
	I0603 10:57:38.703914   25542 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-683480-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-683480-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-683480-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0603 10:57:38.825075   25542 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0603 10:57:38.825101   25542 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19008-7755/.minikube CaCertPath:/home/jenkins/minikube-integration/19008-7755/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19008-7755/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19008-7755/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19008-7755/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19008-7755/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19008-7755/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19008-7755/.minikube}
	I0603 10:57:38.825120   25542 buildroot.go:174] setting up certificates
	I0603 10:57:38.825130   25542 provision.go:84] configureAuth start
	I0603 10:57:38.825142   25542 main.go:141] libmachine: (ha-683480-m02) Calling .GetMachineName
	I0603 10:57:38.825434   25542 main.go:141] libmachine: (ha-683480-m02) Calling .GetIP
	I0603 10:57:38.827697   25542 main.go:141] libmachine: (ha-683480-m02) DBG | domain ha-683480-m02 has defined MAC address 52:54:00:00:55:50 in network mk-ha-683480
	I0603 10:57:38.828103   25542 main.go:141] libmachine: (ha-683480-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:55:50", ip: ""} in network mk-ha-683480: {Iface:virbr1 ExpiryTime:2024-06-03 11:57:29 +0000 UTC Type:0 Mac:52:54:00:00:55:50 Iaid: IPaddr:192.168.39.127 Prefix:24 Hostname:ha-683480-m02 Clientid:01:52:54:00:00:55:50}
	I0603 10:57:38.828124   25542 main.go:141] libmachine: (ha-683480-m02) DBG | domain ha-683480-m02 has defined IP address 192.168.39.127 and MAC address 52:54:00:00:55:50 in network mk-ha-683480
	I0603 10:57:38.828205   25542 main.go:141] libmachine: (ha-683480-m02) Calling .GetSSHHostname
	I0603 10:57:38.830403   25542 main.go:141] libmachine: (ha-683480-m02) DBG | domain ha-683480-m02 has defined MAC address 52:54:00:00:55:50 in network mk-ha-683480
	I0603 10:57:38.830720   25542 main.go:141] libmachine: (ha-683480-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:55:50", ip: ""} in network mk-ha-683480: {Iface:virbr1 ExpiryTime:2024-06-03 11:57:29 +0000 UTC Type:0 Mac:52:54:00:00:55:50 Iaid: IPaddr:192.168.39.127 Prefix:24 Hostname:ha-683480-m02 Clientid:01:52:54:00:00:55:50}
	I0603 10:57:38.830747   25542 main.go:141] libmachine: (ha-683480-m02) DBG | domain ha-683480-m02 has defined IP address 192.168.39.127 and MAC address 52:54:00:00:55:50 in network mk-ha-683480
	I0603 10:57:38.830907   25542 provision.go:143] copyHostCerts
	I0603 10:57:38.830943   25542 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19008-7755/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19008-7755/.minikube/ca.pem
	I0603 10:57:38.830981   25542 exec_runner.go:144] found /home/jenkins/minikube-integration/19008-7755/.minikube/ca.pem, removing ...
	I0603 10:57:38.830993   25542 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19008-7755/.minikube/ca.pem
	I0603 10:57:38.831090   25542 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19008-7755/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19008-7755/.minikube/ca.pem (1082 bytes)
	I0603 10:57:38.831210   25542 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19008-7755/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19008-7755/.minikube/cert.pem
	I0603 10:57:38.831236   25542 exec_runner.go:144] found /home/jenkins/minikube-integration/19008-7755/.minikube/cert.pem, removing ...
	I0603 10:57:38.831243   25542 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19008-7755/.minikube/cert.pem
	I0603 10:57:38.831285   25542 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19008-7755/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19008-7755/.minikube/cert.pem (1123 bytes)
	I0603 10:57:38.831358   25542 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19008-7755/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19008-7755/.minikube/key.pem
	I0603 10:57:38.831381   25542 exec_runner.go:144] found /home/jenkins/minikube-integration/19008-7755/.minikube/key.pem, removing ...
	I0603 10:57:38.831390   25542 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19008-7755/.minikube/key.pem
	I0603 10:57:38.831423   25542 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19008-7755/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19008-7755/.minikube/key.pem (1679 bytes)
	I0603 10:57:38.831488   25542 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19008-7755/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19008-7755/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19008-7755/.minikube/certs/ca-key.pem org=jenkins.ha-683480-m02 san=[127.0.0.1 192.168.39.127 ha-683480-m02 localhost minikube]
	I0603 10:57:39.107965   25542 provision.go:177] copyRemoteCerts
	I0603 10:57:39.108014   25542 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0603 10:57:39.108035   25542 main.go:141] libmachine: (ha-683480-m02) Calling .GetSSHHostname
	I0603 10:57:39.110672   25542 main.go:141] libmachine: (ha-683480-m02) DBG | domain ha-683480-m02 has defined MAC address 52:54:00:00:55:50 in network mk-ha-683480
	I0603 10:57:39.111004   25542 main.go:141] libmachine: (ha-683480-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:55:50", ip: ""} in network mk-ha-683480: {Iface:virbr1 ExpiryTime:2024-06-03 11:57:29 +0000 UTC Type:0 Mac:52:54:00:00:55:50 Iaid: IPaddr:192.168.39.127 Prefix:24 Hostname:ha-683480-m02 Clientid:01:52:54:00:00:55:50}
	I0603 10:57:39.111027   25542 main.go:141] libmachine: (ha-683480-m02) DBG | domain ha-683480-m02 has defined IP address 192.168.39.127 and MAC address 52:54:00:00:55:50 in network mk-ha-683480
	I0603 10:57:39.111216   25542 main.go:141] libmachine: (ha-683480-m02) Calling .GetSSHPort
	I0603 10:57:39.111402   25542 main.go:141] libmachine: (ha-683480-m02) Calling .GetSSHKeyPath
	I0603 10:57:39.111574   25542 main.go:141] libmachine: (ha-683480-m02) Calling .GetSSHUsername
	I0603 10:57:39.111710   25542 sshutil.go:53] new ssh client: &{IP:192.168.39.127 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19008-7755/.minikube/machines/ha-683480-m02/id_rsa Username:docker}
	I0603 10:57:39.197801   25542 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19008-7755/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0603 10:57:39.197910   25542 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0603 10:57:39.222353   25542 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19008-7755/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0603 10:57:39.222414   25542 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0603 10:57:39.245793   25542 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19008-7755/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0603 10:57:39.245849   25542 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0603 10:57:39.269140   25542 provision.go:87] duration metric: took 443.997515ms to configureAuth
	I0603 10:57:39.269166   25542 buildroot.go:189] setting minikube options for container-runtime
	I0603 10:57:39.269358   25542 config.go:182] Loaded profile config "ha-683480": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0603 10:57:39.269435   25542 main.go:141] libmachine: (ha-683480-m02) Calling .GetSSHHostname
	I0603 10:57:39.271993   25542 main.go:141] libmachine: (ha-683480-m02) DBG | domain ha-683480-m02 has defined MAC address 52:54:00:00:55:50 in network mk-ha-683480
	I0603 10:57:39.272380   25542 main.go:141] libmachine: (ha-683480-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:55:50", ip: ""} in network mk-ha-683480: {Iface:virbr1 ExpiryTime:2024-06-03 11:57:29 +0000 UTC Type:0 Mac:52:54:00:00:55:50 Iaid: IPaddr:192.168.39.127 Prefix:24 Hostname:ha-683480-m02 Clientid:01:52:54:00:00:55:50}
	I0603 10:57:39.272405   25542 main.go:141] libmachine: (ha-683480-m02) DBG | domain ha-683480-m02 has defined IP address 192.168.39.127 and MAC address 52:54:00:00:55:50 in network mk-ha-683480
	I0603 10:57:39.272569   25542 main.go:141] libmachine: (ha-683480-m02) Calling .GetSSHPort
	I0603 10:57:39.272752   25542 main.go:141] libmachine: (ha-683480-m02) Calling .GetSSHKeyPath
	I0603 10:57:39.272923   25542 main.go:141] libmachine: (ha-683480-m02) Calling .GetSSHKeyPath
	I0603 10:57:39.273026   25542 main.go:141] libmachine: (ha-683480-m02) Calling .GetSSHUsername
	I0603 10:57:39.273151   25542 main.go:141] libmachine: Using SSH client type: native
	I0603 10:57:39.273294   25542 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.127 22 <nil> <nil>}
	I0603 10:57:39.273307   25542 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0603 10:57:39.545206   25542 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0603 10:57:39.545231   25542 main.go:141] libmachine: Checking connection to Docker...
	I0603 10:57:39.545241   25542 main.go:141] libmachine: (ha-683480-m02) Calling .GetURL
	I0603 10:57:39.546449   25542 main.go:141] libmachine: (ha-683480-m02) DBG | Using libvirt version 6000000
	I0603 10:57:39.548732   25542 main.go:141] libmachine: (ha-683480-m02) DBG | domain ha-683480-m02 has defined MAC address 52:54:00:00:55:50 in network mk-ha-683480
	I0603 10:57:39.548969   25542 main.go:141] libmachine: (ha-683480-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:55:50", ip: ""} in network mk-ha-683480: {Iface:virbr1 ExpiryTime:2024-06-03 11:57:29 +0000 UTC Type:0 Mac:52:54:00:00:55:50 Iaid: IPaddr:192.168.39.127 Prefix:24 Hostname:ha-683480-m02 Clientid:01:52:54:00:00:55:50}
	I0603 10:57:39.548992   25542 main.go:141] libmachine: (ha-683480-m02) DBG | domain ha-683480-m02 has defined IP address 192.168.39.127 and MAC address 52:54:00:00:55:50 in network mk-ha-683480
	I0603 10:57:39.549112   25542 main.go:141] libmachine: Docker is up and running!
	I0603 10:57:39.549133   25542 main.go:141] libmachine: Reticulating splines...
	I0603 10:57:39.549141   25542 client.go:171] duration metric: took 25.115303041s to LocalClient.Create
	I0603 10:57:39.549168   25542 start.go:167] duration metric: took 25.115364199s to libmachine.API.Create "ha-683480"
	I0603 10:57:39.549180   25542 start.go:293] postStartSetup for "ha-683480-m02" (driver="kvm2")
	I0603 10:57:39.549189   25542 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0603 10:57:39.549214   25542 main.go:141] libmachine: (ha-683480-m02) Calling .DriverName
	I0603 10:57:39.549468   25542 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0603 10:57:39.549497   25542 main.go:141] libmachine: (ha-683480-m02) Calling .GetSSHHostname
	I0603 10:57:39.551413   25542 main.go:141] libmachine: (ha-683480-m02) DBG | domain ha-683480-m02 has defined MAC address 52:54:00:00:55:50 in network mk-ha-683480
	I0603 10:57:39.551696   25542 main.go:141] libmachine: (ha-683480-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:55:50", ip: ""} in network mk-ha-683480: {Iface:virbr1 ExpiryTime:2024-06-03 11:57:29 +0000 UTC Type:0 Mac:52:54:00:00:55:50 Iaid: IPaddr:192.168.39.127 Prefix:24 Hostname:ha-683480-m02 Clientid:01:52:54:00:00:55:50}
	I0603 10:57:39.551724   25542 main.go:141] libmachine: (ha-683480-m02) DBG | domain ha-683480-m02 has defined IP address 192.168.39.127 and MAC address 52:54:00:00:55:50 in network mk-ha-683480
	I0603 10:57:39.551851   25542 main.go:141] libmachine: (ha-683480-m02) Calling .GetSSHPort
	I0603 10:57:39.552006   25542 main.go:141] libmachine: (ha-683480-m02) Calling .GetSSHKeyPath
	I0603 10:57:39.552150   25542 main.go:141] libmachine: (ha-683480-m02) Calling .GetSSHUsername
	I0603 10:57:39.552272   25542 sshutil.go:53] new ssh client: &{IP:192.168.39.127 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19008-7755/.minikube/machines/ha-683480-m02/id_rsa Username:docker}
	I0603 10:57:39.637726   25542 ssh_runner.go:195] Run: cat /etc/os-release
	I0603 10:57:39.641942   25542 info.go:137] Remote host: Buildroot 2023.02.9
	I0603 10:57:39.641968   25542 filesync.go:126] Scanning /home/jenkins/minikube-integration/19008-7755/.minikube/addons for local assets ...
	I0603 10:57:39.642050   25542 filesync.go:126] Scanning /home/jenkins/minikube-integration/19008-7755/.minikube/files for local assets ...
	I0603 10:57:39.642123   25542 filesync.go:149] local asset: /home/jenkins/minikube-integration/19008-7755/.minikube/files/etc/ssl/certs/150282.pem -> 150282.pem in /etc/ssl/certs
	I0603 10:57:39.642134   25542 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19008-7755/.minikube/files/etc/ssl/certs/150282.pem -> /etc/ssl/certs/150282.pem
	I0603 10:57:39.642213   25542 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0603 10:57:39.653742   25542 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/files/etc/ssl/certs/150282.pem --> /etc/ssl/certs/150282.pem (1708 bytes)
	I0603 10:57:39.678630   25542 start.go:296] duration metric: took 129.438249ms for postStartSetup
	I0603 10:57:39.678681   25542 main.go:141] libmachine: (ha-683480-m02) Calling .GetConfigRaw
	I0603 10:57:39.679251   25542 main.go:141] libmachine: (ha-683480-m02) Calling .GetIP
	I0603 10:57:39.681795   25542 main.go:141] libmachine: (ha-683480-m02) DBG | domain ha-683480-m02 has defined MAC address 52:54:00:00:55:50 in network mk-ha-683480
	I0603 10:57:39.682126   25542 main.go:141] libmachine: (ha-683480-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:55:50", ip: ""} in network mk-ha-683480: {Iface:virbr1 ExpiryTime:2024-06-03 11:57:29 +0000 UTC Type:0 Mac:52:54:00:00:55:50 Iaid: IPaddr:192.168.39.127 Prefix:24 Hostname:ha-683480-m02 Clientid:01:52:54:00:00:55:50}
	I0603 10:57:39.682154   25542 main.go:141] libmachine: (ha-683480-m02) DBG | domain ha-683480-m02 has defined IP address 192.168.39.127 and MAC address 52:54:00:00:55:50 in network mk-ha-683480
	I0603 10:57:39.682417   25542 profile.go:143] Saving config to /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/ha-683480/config.json ...
	I0603 10:57:39.682615   25542 start.go:128] duration metric: took 25.265871916s to createHost
	I0603 10:57:39.682648   25542 main.go:141] libmachine: (ha-683480-m02) Calling .GetSSHHostname
	I0603 10:57:39.684431   25542 main.go:141] libmachine: (ha-683480-m02) DBG | domain ha-683480-m02 has defined MAC address 52:54:00:00:55:50 in network mk-ha-683480
	I0603 10:57:39.684696   25542 main.go:141] libmachine: (ha-683480-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:55:50", ip: ""} in network mk-ha-683480: {Iface:virbr1 ExpiryTime:2024-06-03 11:57:29 +0000 UTC Type:0 Mac:52:54:00:00:55:50 Iaid: IPaddr:192.168.39.127 Prefix:24 Hostname:ha-683480-m02 Clientid:01:52:54:00:00:55:50}
	I0603 10:57:39.684718   25542 main.go:141] libmachine: (ha-683480-m02) DBG | domain ha-683480-m02 has defined IP address 192.168.39.127 and MAC address 52:54:00:00:55:50 in network mk-ha-683480
	I0603 10:57:39.684848   25542 main.go:141] libmachine: (ha-683480-m02) Calling .GetSSHPort
	I0603 10:57:39.685001   25542 main.go:141] libmachine: (ha-683480-m02) Calling .GetSSHKeyPath
	I0603 10:57:39.685174   25542 main.go:141] libmachine: (ha-683480-m02) Calling .GetSSHKeyPath
	I0603 10:57:39.685302   25542 main.go:141] libmachine: (ha-683480-m02) Calling .GetSSHUsername
	I0603 10:57:39.685451   25542 main.go:141] libmachine: Using SSH client type: native
	I0603 10:57:39.685594   25542 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.127 22 <nil> <nil>}
	I0603 10:57:39.685603   25542 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0603 10:57:39.795626   25542 main.go:141] libmachine: SSH cmd err, output: <nil>: 1717412259.774276019
	
	I0603 10:57:39.795648   25542 fix.go:216] guest clock: 1717412259.774276019
	I0603 10:57:39.795657   25542 fix.go:229] Guest: 2024-06-03 10:57:39.774276019 +0000 UTC Remote: 2024-06-03 10:57:39.682626665 +0000 UTC m=+85.249265737 (delta=91.649354ms)
	I0603 10:57:39.795677   25542 fix.go:200] guest clock delta is within tolerance: 91.649354ms
	I0603 10:57:39.795683   25542 start.go:83] releasing machines lock for "ha-683480-m02", held for 25.379057048s
	I0603 10:57:39.795701   25542 main.go:141] libmachine: (ha-683480-m02) Calling .DriverName
	I0603 10:57:39.795919   25542 main.go:141] libmachine: (ha-683480-m02) Calling .GetIP
	I0603 10:57:39.798489   25542 main.go:141] libmachine: (ha-683480-m02) DBG | domain ha-683480-m02 has defined MAC address 52:54:00:00:55:50 in network mk-ha-683480
	I0603 10:57:39.798870   25542 main.go:141] libmachine: (ha-683480-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:55:50", ip: ""} in network mk-ha-683480: {Iface:virbr1 ExpiryTime:2024-06-03 11:57:29 +0000 UTC Type:0 Mac:52:54:00:00:55:50 Iaid: IPaddr:192.168.39.127 Prefix:24 Hostname:ha-683480-m02 Clientid:01:52:54:00:00:55:50}
	I0603 10:57:39.798900   25542 main.go:141] libmachine: (ha-683480-m02) DBG | domain ha-683480-m02 has defined IP address 192.168.39.127 and MAC address 52:54:00:00:55:50 in network mk-ha-683480
	I0603 10:57:39.801119   25542 out.go:177] * Found network options:
	I0603 10:57:39.802274   25542 out.go:177]   - NO_PROXY=192.168.39.116
	W0603 10:57:39.803350   25542 proxy.go:119] fail to check proxy env: Error ip not in block
	I0603 10:57:39.803374   25542 main.go:141] libmachine: (ha-683480-m02) Calling .DriverName
	I0603 10:57:39.803860   25542 main.go:141] libmachine: (ha-683480-m02) Calling .DriverName
	I0603 10:57:39.804050   25542 main.go:141] libmachine: (ha-683480-m02) Calling .DriverName
	I0603 10:57:39.804125   25542 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0603 10:57:39.804165   25542 main.go:141] libmachine: (ha-683480-m02) Calling .GetSSHHostname
	W0603 10:57:39.804248   25542 proxy.go:119] fail to check proxy env: Error ip not in block
	I0603 10:57:39.804302   25542 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0603 10:57:39.804316   25542 main.go:141] libmachine: (ha-683480-m02) Calling .GetSSHHostname
	I0603 10:57:39.806531   25542 main.go:141] libmachine: (ha-683480-m02) DBG | domain ha-683480-m02 has defined MAC address 52:54:00:00:55:50 in network mk-ha-683480
	I0603 10:57:39.806823   25542 main.go:141] libmachine: (ha-683480-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:55:50", ip: ""} in network mk-ha-683480: {Iface:virbr1 ExpiryTime:2024-06-03 11:57:29 +0000 UTC Type:0 Mac:52:54:00:00:55:50 Iaid: IPaddr:192.168.39.127 Prefix:24 Hostname:ha-683480-m02 Clientid:01:52:54:00:00:55:50}
	I0603 10:57:39.806852   25542 main.go:141] libmachine: (ha-683480-m02) DBG | domain ha-683480-m02 has defined IP address 192.168.39.127 and MAC address 52:54:00:00:55:50 in network mk-ha-683480
	I0603 10:57:39.806870   25542 main.go:141] libmachine: (ha-683480-m02) DBG | domain ha-683480-m02 has defined MAC address 52:54:00:00:55:50 in network mk-ha-683480
	I0603 10:57:39.806942   25542 main.go:141] libmachine: (ha-683480-m02) Calling .GetSSHPort
	I0603 10:57:39.807110   25542 main.go:141] libmachine: (ha-683480-m02) Calling .GetSSHKeyPath
	I0603 10:57:39.807247   25542 main.go:141] libmachine: (ha-683480-m02) Calling .GetSSHUsername
	I0603 10:57:39.807348   25542 main.go:141] libmachine: (ha-683480-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:55:50", ip: ""} in network mk-ha-683480: {Iface:virbr1 ExpiryTime:2024-06-03 11:57:29 +0000 UTC Type:0 Mac:52:54:00:00:55:50 Iaid: IPaddr:192.168.39.127 Prefix:24 Hostname:ha-683480-m02 Clientid:01:52:54:00:00:55:50}
	I0603 10:57:39.807371   25542 main.go:141] libmachine: (ha-683480-m02) DBG | domain ha-683480-m02 has defined IP address 192.168.39.127 and MAC address 52:54:00:00:55:50 in network mk-ha-683480
	I0603 10:57:39.807368   25542 sshutil.go:53] new ssh client: &{IP:192.168.39.127 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19008-7755/.minikube/machines/ha-683480-m02/id_rsa Username:docker}
	I0603 10:57:39.807512   25542 main.go:141] libmachine: (ha-683480-m02) Calling .GetSSHPort
	I0603 10:57:39.807626   25542 main.go:141] libmachine: (ha-683480-m02) Calling .GetSSHKeyPath
	I0603 10:57:39.807761   25542 main.go:141] libmachine: (ha-683480-m02) Calling .GetSSHUsername
	I0603 10:57:39.807895   25542 sshutil.go:53] new ssh client: &{IP:192.168.39.127 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19008-7755/.minikube/machines/ha-683480-m02/id_rsa Username:docker}
	I0603 10:57:40.039708   25542 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0603 10:57:40.046153   25542 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0603 10:57:40.046209   25542 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0603 10:57:40.061772   25542 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0603 10:57:40.061788   25542 start.go:494] detecting cgroup driver to use...
	I0603 10:57:40.061842   25542 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0603 10:57:40.076598   25542 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0603 10:57:40.089894   25542 docker.go:217] disabling cri-docker service (if available) ...
	I0603 10:57:40.089939   25542 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0603 10:57:40.102789   25542 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0603 10:57:40.115706   25542 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0603 10:57:40.225777   25542 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0603 10:57:40.385561   25542 docker.go:233] disabling docker service ...
	I0603 10:57:40.385622   25542 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0603 10:57:40.399183   25542 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0603 10:57:40.411841   25542 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0603 10:57:40.523097   25542 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0603 10:57:40.637561   25542 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0603 10:57:40.652100   25542 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0603 10:57:40.670295   25542 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0603 10:57:40.670367   25542 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 10:57:40.680163   25542 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0603 10:57:40.680221   25542 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 10:57:40.690290   25542 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 10:57:40.700046   25542 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 10:57:40.709768   25542 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0603 10:57:40.719767   25542 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 10:57:40.729463   25542 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 10:57:40.746472   25542 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 10:57:40.756435   25542 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0603 10:57:40.765231   25542 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0603 10:57:40.765319   25542 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0603 10:57:40.777951   25542 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0603 10:57:40.788285   25542 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0603 10:57:40.905692   25542 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0603 10:57:41.042950   25542 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0603 10:57:41.043016   25542 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0603 10:57:41.048023   25542 start.go:562] Will wait 60s for crictl version
	I0603 10:57:41.048076   25542 ssh_runner.go:195] Run: which crictl
	I0603 10:57:41.052322   25542 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0603 10:57:41.096016   25542 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0603 10:57:41.096103   25542 ssh_runner.go:195] Run: crio --version
	I0603 10:57:41.124778   25542 ssh_runner.go:195] Run: crio --version
	I0603 10:57:41.155389   25542 out.go:177] * Preparing Kubernetes v1.30.1 on CRI-O 1.29.1 ...
	I0603 10:57:41.156685   25542 out.go:177]   - env NO_PROXY=192.168.39.116
	I0603 10:57:41.157904   25542 main.go:141] libmachine: (ha-683480-m02) Calling .GetIP
	I0603 10:57:41.160497   25542 main.go:141] libmachine: (ha-683480-m02) DBG | domain ha-683480-m02 has defined MAC address 52:54:00:00:55:50 in network mk-ha-683480
	I0603 10:57:41.160893   25542 main.go:141] libmachine: (ha-683480-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:55:50", ip: ""} in network mk-ha-683480: {Iface:virbr1 ExpiryTime:2024-06-03 11:57:29 +0000 UTC Type:0 Mac:52:54:00:00:55:50 Iaid: IPaddr:192.168.39.127 Prefix:24 Hostname:ha-683480-m02 Clientid:01:52:54:00:00:55:50}
	I0603 10:57:41.160920   25542 main.go:141] libmachine: (ha-683480-m02) DBG | domain ha-683480-m02 has defined IP address 192.168.39.127 and MAC address 52:54:00:00:55:50 in network mk-ha-683480
	I0603 10:57:41.161055   25542 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0603 10:57:41.165366   25542 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0603 10:57:41.178874   25542 mustload.go:65] Loading cluster: ha-683480
	I0603 10:57:41.179097   25542 config.go:182] Loaded profile config "ha-683480": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0603 10:57:41.179344   25542 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 10:57:41.179376   25542 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 10:57:41.193764   25542 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39411
	I0603 10:57:41.194198   25542 main.go:141] libmachine: () Calling .GetVersion
	I0603 10:57:41.194655   25542 main.go:141] libmachine: Using API Version  1
	I0603 10:57:41.194675   25542 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 10:57:41.194972   25542 main.go:141] libmachine: () Calling .GetMachineName
	I0603 10:57:41.195171   25542 main.go:141] libmachine: (ha-683480) Calling .GetState
	I0603 10:57:41.196477   25542 host.go:66] Checking if "ha-683480" exists ...
	I0603 10:57:41.196781   25542 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 10:57:41.196804   25542 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 10:57:41.210511   25542 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37781
	I0603 10:57:41.210826   25542 main.go:141] libmachine: () Calling .GetVersion
	I0603 10:57:41.211266   25542 main.go:141] libmachine: Using API Version  1
	I0603 10:57:41.211285   25542 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 10:57:41.211553   25542 main.go:141] libmachine: () Calling .GetMachineName
	I0603 10:57:41.211720   25542 main.go:141] libmachine: (ha-683480) Calling .DriverName
	I0603 10:57:41.211879   25542 certs.go:68] Setting up /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/ha-683480 for IP: 192.168.39.127
	I0603 10:57:41.211890   25542 certs.go:194] generating shared ca certs ...
	I0603 10:57:41.211904   25542 certs.go:226] acquiring lock for ca certs: {Name:mk50092df3bdecbf959926adc377ee06b75aacd9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 10:57:41.212011   25542 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19008-7755/.minikube/ca.key
	I0603 10:57:41.212045   25542 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19008-7755/.minikube/proxy-client-ca.key
	I0603 10:57:41.212054   25542 certs.go:256] generating profile certs ...
	I0603 10:57:41.212127   25542 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/ha-683480/client.key
	I0603 10:57:41.212151   25542 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/ha-683480/apiserver.key.0337487a
	I0603 10:57:41.212161   25542 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/ha-683480/apiserver.crt.0337487a with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.116 192.168.39.127 192.168.39.254]
	I0603 10:57:41.313930   25542 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/ha-683480/apiserver.crt.0337487a ...
	I0603 10:57:41.313956   25542 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/ha-683480/apiserver.crt.0337487a: {Name:mk82fc865ddfb68fa754de6f4eba20c9bc7c6964 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 10:57:41.314111   25542 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/ha-683480/apiserver.key.0337487a ...
	I0603 10:57:41.314124   25542 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/ha-683480/apiserver.key.0337487a: {Name:mkd0f087221cc24ed79a087b514f4c1dd28e3227 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 10:57:41.314194   25542 certs.go:381] copying /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/ha-683480/apiserver.crt.0337487a -> /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/ha-683480/apiserver.crt
	I0603 10:57:41.314317   25542 certs.go:385] copying /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/ha-683480/apiserver.key.0337487a -> /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/ha-683480/apiserver.key
	I0603 10:57:41.314442   25542 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/ha-683480/proxy-client.key
	I0603 10:57:41.314456   25542 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19008-7755/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0603 10:57:41.314469   25542 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19008-7755/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0603 10:57:41.314481   25542 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19008-7755/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0603 10:57:41.314494   25542 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19008-7755/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0603 10:57:41.314506   25542 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/ha-683480/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0603 10:57:41.314518   25542 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/ha-683480/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0603 10:57:41.314531   25542 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/ha-683480/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0603 10:57:41.314542   25542 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/ha-683480/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0603 10:57:41.314587   25542 certs.go:484] found cert: /home/jenkins/minikube-integration/19008-7755/.minikube/certs/15028.pem (1338 bytes)
	W0603 10:57:41.314614   25542 certs.go:480] ignoring /home/jenkins/minikube-integration/19008-7755/.minikube/certs/15028_empty.pem, impossibly tiny 0 bytes
	I0603 10:57:41.314622   25542 certs.go:484] found cert: /home/jenkins/minikube-integration/19008-7755/.minikube/certs/ca-key.pem (1679 bytes)
	I0603 10:57:41.314644   25542 certs.go:484] found cert: /home/jenkins/minikube-integration/19008-7755/.minikube/certs/ca.pem (1082 bytes)
	I0603 10:57:41.314664   25542 certs.go:484] found cert: /home/jenkins/minikube-integration/19008-7755/.minikube/certs/cert.pem (1123 bytes)
	I0603 10:57:41.314686   25542 certs.go:484] found cert: /home/jenkins/minikube-integration/19008-7755/.minikube/certs/key.pem (1679 bytes)
	I0603 10:57:41.314723   25542 certs.go:484] found cert: /home/jenkins/minikube-integration/19008-7755/.minikube/files/etc/ssl/certs/150282.pem (1708 bytes)
	I0603 10:57:41.314748   25542 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19008-7755/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0603 10:57:41.314761   25542 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19008-7755/.minikube/certs/15028.pem -> /usr/share/ca-certificates/15028.pem
	I0603 10:57:41.314773   25542 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19008-7755/.minikube/files/etc/ssl/certs/150282.pem -> /usr/share/ca-certificates/150282.pem
	I0603 10:57:41.314801   25542 main.go:141] libmachine: (ha-683480) Calling .GetSSHHostname
	I0603 10:57:41.317527   25542 main.go:141] libmachine: (ha-683480) DBG | domain ha-683480 has defined MAC address 52:54:00:e5:3f:6a in network mk-ha-683480
	I0603 10:57:41.317887   25542 main.go:141] libmachine: (ha-683480) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:3f:6a", ip: ""} in network mk-ha-683480: {Iface:virbr1 ExpiryTime:2024-06-03 11:56:28 +0000 UTC Type:0 Mac:52:54:00:e5:3f:6a Iaid: IPaddr:192.168.39.116 Prefix:24 Hostname:ha-683480 Clientid:01:52:54:00:e5:3f:6a}
	I0603 10:57:41.317911   25542 main.go:141] libmachine: (ha-683480) DBG | domain ha-683480 has defined IP address 192.168.39.116 and MAC address 52:54:00:e5:3f:6a in network mk-ha-683480
	I0603 10:57:41.318055   25542 main.go:141] libmachine: (ha-683480) Calling .GetSSHPort
	I0603 10:57:41.318249   25542 main.go:141] libmachine: (ha-683480) Calling .GetSSHKeyPath
	I0603 10:57:41.318401   25542 main.go:141] libmachine: (ha-683480) Calling .GetSSHUsername
	I0603 10:57:41.318542   25542 sshutil.go:53] new ssh client: &{IP:192.168.39.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19008-7755/.minikube/machines/ha-683480/id_rsa Username:docker}
	I0603 10:57:41.387316   25542 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.pub
	I0603 10:57:41.392115   25542 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0603 10:57:41.403105   25542 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.key
	I0603 10:57:41.407356   25542 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I0603 10:57:41.417439   25542 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.crt
	I0603 10:57:41.422132   25542 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0603 10:57:41.432447   25542 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.key
	I0603 10:57:41.436665   25542 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0603 10:57:41.446768   25542 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.crt
	I0603 10:57:41.450991   25542 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0603 10:57:41.461057   25542 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.key
	I0603 10:57:41.465043   25542 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I0603 10:57:41.474845   25542 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0603 10:57:41.498864   25542 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0603 10:57:41.521581   25542 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0603 10:57:41.543831   25542 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0603 10:57:41.566471   25542 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/ha-683480/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0603 10:57:41.589239   25542 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/ha-683480/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0603 10:57:41.611473   25542 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/ha-683480/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0603 10:57:41.634713   25542 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/ha-683480/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0603 10:57:41.657358   25542 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0603 10:57:41.682172   25542 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/certs/15028.pem --> /usr/share/ca-certificates/15028.pem (1338 bytes)
	I0603 10:57:41.705886   25542 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/files/etc/ssl/certs/150282.pem --> /usr/share/ca-certificates/150282.pem (1708 bytes)
	I0603 10:57:41.729122   25542 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0603 10:57:41.745160   25542 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I0603 10:57:41.761102   25542 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0603 10:57:41.778261   25542 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0603 10:57:41.795474   25542 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0603 10:57:41.811159   25542 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I0603 10:57:41.826896   25542 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0603 10:57:41.842563   25542 ssh_runner.go:195] Run: openssl version
	I0603 10:57:41.848107   25542 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/150282.pem && ln -fs /usr/share/ca-certificates/150282.pem /etc/ssl/certs/150282.pem"
	I0603 10:57:41.859616   25542 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/150282.pem
	I0603 10:57:41.864229   25542 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jun  3 10:51 /usr/share/ca-certificates/150282.pem
	I0603 10:57:41.864272   25542 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/150282.pem
	I0603 10:57:41.870198   25542 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/150282.pem /etc/ssl/certs/3ec20f2e.0"
	I0603 10:57:41.881198   25542 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0603 10:57:41.891913   25542 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0603 10:57:41.896305   25542 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jun  3 10:39 /usr/share/ca-certificates/minikubeCA.pem
	I0603 10:57:41.896342   25542 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0603 10:57:41.901756   25542 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0603 10:57:41.914028   25542 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15028.pem && ln -fs /usr/share/ca-certificates/15028.pem /etc/ssl/certs/15028.pem"
	I0603 10:57:41.925655   25542 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15028.pem
	I0603 10:57:41.931132   25542 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jun  3 10:51 /usr/share/ca-certificates/15028.pem
	I0603 10:57:41.931180   25542 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15028.pem
	I0603 10:57:41.938375   25542 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/15028.pem /etc/ssl/certs/51391683.0"
	I0603 10:57:41.949776   25542 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0603 10:57:41.953872   25542 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0603 10:57:41.953915   25542 kubeadm.go:928] updating node {m02 192.168.39.127 8443 v1.30.1 crio true true} ...
	I0603 10:57:41.953983   25542 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-683480-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.127
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.1 ClusterName:ha-683480 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0603 10:57:41.954006   25542 kube-vip.go:115] generating kube-vip config ...
	I0603 10:57:41.954038   25542 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0603 10:57:41.971236   25542 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0603 10:57:41.971313   25542 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0603 10:57:41.971374   25542 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.1
	I0603 10:57:41.982282   25542 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.30.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.30.1': No such file or directory
	
	Initiating transfer...
	I0603 10:57:41.982344   25542 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.30.1
	I0603 10:57:41.993108   25542 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubectl.sha256
	I0603 10:57:41.993121   25542 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/19008-7755/.minikube/cache/linux/amd64/v1.30.1/kubelet
	I0603 10:57:41.993133   25542 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19008-7755/.minikube/cache/linux/amd64/v1.30.1/kubectl -> /var/lib/minikube/binaries/v1.30.1/kubectl
	I0603 10:57:41.993134   25542 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/19008-7755/.minikube/cache/linux/amd64/v1.30.1/kubeadm
	I0603 10:57:41.993202   25542 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.1/kubectl
	I0603 10:57:41.997423   25542 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.1/kubectl: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.1/kubectl': No such file or directory
	I0603 10:57:41.997444   25542 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/cache/linux/amd64/v1.30.1/kubectl --> /var/lib/minikube/binaries/v1.30.1/kubectl (51454104 bytes)
	I0603 10:58:18.123910   25542 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19008-7755/.minikube/cache/linux/amd64/v1.30.1/kubeadm -> /var/lib/minikube/binaries/v1.30.1/kubeadm
	I0603 10:58:18.123984   25542 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.1/kubeadm
	I0603 10:58:18.129999   25542 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.1/kubeadm: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.1/kubeadm': No such file or directory
	I0603 10:58:18.130031   25542 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/cache/linux/amd64/v1.30.1/kubeadm --> /var/lib/minikube/binaries/v1.30.1/kubeadm (50249880 bytes)
	I0603 10:58:51.620104   25542 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0603 10:58:51.638343   25542 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19008-7755/.minikube/cache/linux/amd64/v1.30.1/kubelet -> /var/lib/minikube/binaries/v1.30.1/kubelet
	I0603 10:58:51.638421   25542 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.1/kubelet
	I0603 10:58:51.642676   25542 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.1/kubelet: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.1/kubelet': No such file or directory
	I0603 10:58:51.642719   25542 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/cache/linux/amd64/v1.30.1/kubelet --> /var/lib/minikube/binaries/v1.30.1/kubelet (100100024 bytes)
	I0603 10:58:52.033556   25542 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0603 10:58:52.043954   25542 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0603 10:58:52.060699   25542 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0603 10:58:52.076907   25542 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0603 10:58:52.092925   25542 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0603 10:58:52.096916   25542 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0603 10:58:52.109095   25542 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0603 10:58:52.216097   25542 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0603 10:58:52.233785   25542 host.go:66] Checking if "ha-683480" exists ...
	I0603 10:58:52.234283   25542 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 10:58:52.234322   25542 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 10:58:52.249709   25542 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42185
	I0603 10:58:52.250178   25542 main.go:141] libmachine: () Calling .GetVersion
	I0603 10:58:52.250679   25542 main.go:141] libmachine: Using API Version  1
	I0603 10:58:52.250701   25542 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 10:58:52.251084   25542 main.go:141] libmachine: () Calling .GetMachineName
	I0603 10:58:52.251257   25542 main.go:141] libmachine: (ha-683480) Calling .DriverName
	I0603 10:58:52.251417   25542 start.go:316] joinCluster: &{Name:ha-683480 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:ha-683480 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.116 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.127 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0603 10:58:52.251534   25542 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0603 10:58:52.251552   25542 main.go:141] libmachine: (ha-683480) Calling .GetSSHHostname
	I0603 10:58:52.254492   25542 main.go:141] libmachine: (ha-683480) DBG | domain ha-683480 has defined MAC address 52:54:00:e5:3f:6a in network mk-ha-683480
	I0603 10:58:52.254927   25542 main.go:141] libmachine: (ha-683480) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:3f:6a", ip: ""} in network mk-ha-683480: {Iface:virbr1 ExpiryTime:2024-06-03 11:56:28 +0000 UTC Type:0 Mac:52:54:00:e5:3f:6a Iaid: IPaddr:192.168.39.116 Prefix:24 Hostname:ha-683480 Clientid:01:52:54:00:e5:3f:6a}
	I0603 10:58:52.254957   25542 main.go:141] libmachine: (ha-683480) DBG | domain ha-683480 has defined IP address 192.168.39.116 and MAC address 52:54:00:e5:3f:6a in network mk-ha-683480
	I0603 10:58:52.255120   25542 main.go:141] libmachine: (ha-683480) Calling .GetSSHPort
	I0603 10:58:52.255297   25542 main.go:141] libmachine: (ha-683480) Calling .GetSSHKeyPath
	I0603 10:58:52.255457   25542 main.go:141] libmachine: (ha-683480) Calling .GetSSHUsername
	I0603 10:58:52.255630   25542 sshutil.go:53] new ssh client: &{IP:192.168.39.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19008-7755/.minikube/machines/ha-683480/id_rsa Username:docker}
	I0603 10:58:52.445781   25542 start.go:342] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:192.168.39.127 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0603 10:58:52.445821   25542 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 45r2ge.vg4p3ogqd7rtd0j6 --discovery-token-ca-cert-hash sha256:efe6cbdd58a590fb6c3b56a05a1648145bfb13b8a8cd2383ea34b710fa987e0b --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-683480-m02 --control-plane --apiserver-advertise-address=192.168.39.127 --apiserver-bind-port=8443"
	I0603 10:59:13.728022   25542 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 45r2ge.vg4p3ogqd7rtd0j6 --discovery-token-ca-cert-hash sha256:efe6cbdd58a590fb6c3b56a05a1648145bfb13b8a8cd2383ea34b710fa987e0b --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-683480-m02 --control-plane --apiserver-advertise-address=192.168.39.127 --apiserver-bind-port=8443": (21.282174918s)
	I0603 10:59:13.728057   25542 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0603 10:59:14.206114   25542 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-683480-m02 minikube.k8s.io/updated_at=2024_06_03T10_59_14_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=599070631c2216ebc936292d491e4fe10e15b9d8 minikube.k8s.io/name=ha-683480 minikube.k8s.io/primary=false
	I0603 10:59:14.348255   25542 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-683480-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I0603 10:59:14.471943   25542 start.go:318] duration metric: took 22.22052051s to joinCluster
	I0603 10:59:14.472020   25542 start.go:234] Will wait 6m0s for node &{Name:m02 IP:192.168.39.127 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0603 10:59:14.473439   25542 out.go:177] * Verifying Kubernetes components...
	I0603 10:59:14.472321   25542 config.go:182] Loaded profile config "ha-683480": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0603 10:59:14.474846   25542 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0603 10:59:14.721737   25542 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0603 10:59:14.795065   25542 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19008-7755/kubeconfig
	I0603 10:59:14.795409   25542 kapi.go:59] client config for ha-683480: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19008-7755/.minikube/profiles/ha-683480/client.crt", KeyFile:"/home/jenkins/minikube-integration/19008-7755/.minikube/profiles/ha-683480/client.key", CAFile:"/home/jenkins/minikube-integration/19008-7755/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1cfa500), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0603 10:59:14.795496   25542 kubeadm.go:477] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.116:8443
	I0603 10:59:14.795764   25542 node_ready.go:35] waiting up to 6m0s for node "ha-683480-m02" to be "Ready" ...
	I0603 10:59:14.795867   25542 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/nodes/ha-683480-m02
	I0603 10:59:14.795879   25542 round_trippers.go:469] Request Headers:
	I0603 10:59:14.795890   25542 round_trippers.go:473]     Accept: application/json, */*
	I0603 10:59:14.795899   25542 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 10:59:14.805050   25542 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0603 10:59:15.295982   25542 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/nodes/ha-683480-m02
	I0603 10:59:15.296009   25542 round_trippers.go:469] Request Headers:
	I0603 10:59:15.296022   25542 round_trippers.go:473]     Accept: application/json, */*
	I0603 10:59:15.296027   25542 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 10:59:15.299923   25542 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 10:59:15.796809   25542 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/nodes/ha-683480-m02
	I0603 10:59:15.796827   25542 round_trippers.go:469] Request Headers:
	I0603 10:59:15.796835   25542 round_trippers.go:473]     Accept: application/json, */*
	I0603 10:59:15.796839   25542 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 10:59:15.800424   25542 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 10:59:16.296386   25542 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/nodes/ha-683480-m02
	I0603 10:59:16.296411   25542 round_trippers.go:469] Request Headers:
	I0603 10:59:16.296423   25542 round_trippers.go:473]     Accept: application/json, */*
	I0603 10:59:16.296431   25542 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 10:59:16.303226   25542 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0603 10:59:16.796380   25542 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/nodes/ha-683480-m02
	I0603 10:59:16.796405   25542 round_trippers.go:469] Request Headers:
	I0603 10:59:16.796415   25542 round_trippers.go:473]     Accept: application/json, */*
	I0603 10:59:16.796420   25542 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 10:59:16.799970   25542 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 10:59:16.800631   25542 node_ready.go:53] node "ha-683480-m02" has status "Ready":"False"
	I0603 10:59:17.296138   25542 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/nodes/ha-683480-m02
	I0603 10:59:17.296158   25542 round_trippers.go:469] Request Headers:
	I0603 10:59:17.296165   25542 round_trippers.go:473]     Accept: application/json, */*
	I0603 10:59:17.296169   25542 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 10:59:17.299083   25542 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0603 10:59:17.796246   25542 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/nodes/ha-683480-m02
	I0603 10:59:17.796278   25542 round_trippers.go:469] Request Headers:
	I0603 10:59:17.796286   25542 round_trippers.go:473]     Accept: application/json, */*
	I0603 10:59:17.796291   25542 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 10:59:17.799561   25542 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 10:59:18.296734   25542 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/nodes/ha-683480-m02
	I0603 10:59:18.296758   25542 round_trippers.go:469] Request Headers:
	I0603 10:59:18.296772   25542 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 10:59:18.296777   25542 round_trippers.go:473]     Accept: application/json, */*
	I0603 10:59:18.299797   25542 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 10:59:18.796932   25542 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/nodes/ha-683480-m02
	I0603 10:59:18.796952   25542 round_trippers.go:469] Request Headers:
	I0603 10:59:18.796960   25542 round_trippers.go:473]     Accept: application/json, */*
	I0603 10:59:18.796965   25542 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 10:59:18.800420   25542 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 10:59:18.801298   25542 node_ready.go:53] node "ha-683480-m02" has status "Ready":"False"
	I0603 10:59:19.296296   25542 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/nodes/ha-683480-m02
	I0603 10:59:19.296324   25542 round_trippers.go:469] Request Headers:
	I0603 10:59:19.296341   25542 round_trippers.go:473]     Accept: application/json, */*
	I0603 10:59:19.296349   25542 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 10:59:19.299486   25542 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 10:59:19.796320   25542 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/nodes/ha-683480-m02
	I0603 10:59:19.796343   25542 round_trippers.go:469] Request Headers:
	I0603 10:59:19.796358   25542 round_trippers.go:473]     Accept: application/json, */*
	I0603 10:59:19.796363   25542 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 10:59:19.799580   25542 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 10:59:20.296268   25542 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/nodes/ha-683480-m02
	I0603 10:59:20.296287   25542 round_trippers.go:469] Request Headers:
	I0603 10:59:20.296294   25542 round_trippers.go:473]     Accept: application/json, */*
	I0603 10:59:20.296299   25542 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 10:59:20.300385   25542 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 10:59:20.795953   25542 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/nodes/ha-683480-m02
	I0603 10:59:20.795973   25542 round_trippers.go:469] Request Headers:
	I0603 10:59:20.795980   25542 round_trippers.go:473]     Accept: application/json, */*
	I0603 10:59:20.795986   25542 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 10:59:20.799915   25542 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 10:59:21.296836   25542 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/nodes/ha-683480-m02
	I0603 10:59:21.296860   25542 round_trippers.go:469] Request Headers:
	I0603 10:59:21.296871   25542 round_trippers.go:473]     Accept: application/json, */*
	I0603 10:59:21.296876   25542 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 10:59:21.300005   25542 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 10:59:21.300674   25542 node_ready.go:53] node "ha-683480-m02" has status "Ready":"False"
	I0603 10:59:21.796704   25542 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/nodes/ha-683480-m02
	I0603 10:59:21.796744   25542 round_trippers.go:469] Request Headers:
	I0603 10:59:21.796755   25542 round_trippers.go:473]     Accept: application/json, */*
	I0603 10:59:21.796759   25542 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 10:59:21.801287   25542 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 10:59:22.296935   25542 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/nodes/ha-683480-m02
	I0603 10:59:22.296961   25542 round_trippers.go:469] Request Headers:
	I0603 10:59:22.296971   25542 round_trippers.go:473]     Accept: application/json, */*
	I0603 10:59:22.296976   25542 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 10:59:22.300249   25542 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 10:59:22.796320   25542 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/nodes/ha-683480-m02
	I0603 10:59:22.796345   25542 round_trippers.go:469] Request Headers:
	I0603 10:59:22.796355   25542 round_trippers.go:473]     Accept: application/json, */*
	I0603 10:59:22.796361   25542 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 10:59:22.799775   25542 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 10:59:23.296004   25542 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/nodes/ha-683480-m02
	I0603 10:59:23.296040   25542 round_trippers.go:469] Request Headers:
	I0603 10:59:23.296059   25542 round_trippers.go:473]     Accept: application/json, */*
	I0603 10:59:23.296070   25542 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 10:59:23.298896   25542 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0603 10:59:23.299718   25542 node_ready.go:49] node "ha-683480-m02" has status "Ready":"True"
	I0603 10:59:23.299738   25542 node_ready.go:38] duration metric: took 8.503950937s for node "ha-683480-m02" to be "Ready" ...
	I0603 10:59:23.299746   25542 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0603 10:59:23.299819   25542 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/namespaces/kube-system/pods
	I0603 10:59:23.299828   25542 round_trippers.go:469] Request Headers:
	I0603 10:59:23.299835   25542 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 10:59:23.299839   25542 round_trippers.go:473]     Accept: application/json, */*
	I0603 10:59:23.304439   25542 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 10:59:23.314315   25542 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-8tqf9" in "kube-system" namespace to be "Ready" ...
	I0603 10:59:23.314394   25542 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-8tqf9
	I0603 10:59:23.314405   25542 round_trippers.go:469] Request Headers:
	I0603 10:59:23.314415   25542 round_trippers.go:473]     Accept: application/json, */*
	I0603 10:59:23.314420   25542 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 10:59:23.317907   25542 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 10:59:23.318966   25542 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/nodes/ha-683480
	I0603 10:59:23.318984   25542 round_trippers.go:469] Request Headers:
	I0603 10:59:23.318994   25542 round_trippers.go:473]     Accept: application/json, */*
	I0603 10:59:23.319001   25542 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 10:59:23.323496   25542 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 10:59:23.324549   25542 pod_ready.go:92] pod "coredns-7db6d8ff4d-8tqf9" in "kube-system" namespace has status "Ready":"True"
	I0603 10:59:23.324564   25542 pod_ready.go:81] duration metric: took 10.228856ms for pod "coredns-7db6d8ff4d-8tqf9" in "kube-system" namespace to be "Ready" ...
	I0603 10:59:23.324572   25542 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-nff86" in "kube-system" namespace to be "Ready" ...
	I0603 10:59:23.324631   25542 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-nff86
	I0603 10:59:23.324643   25542 round_trippers.go:469] Request Headers:
	I0603 10:59:23.324652   25542 round_trippers.go:473]     Accept: application/json, */*
	I0603 10:59:23.324662   25542 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 10:59:23.328454   25542 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 10:59:23.329513   25542 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/nodes/ha-683480
	I0603 10:59:23.329529   25542 round_trippers.go:469] Request Headers:
	I0603 10:59:23.329536   25542 round_trippers.go:473]     Accept: application/json, */*
	I0603 10:59:23.329538   25542 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 10:59:23.331852   25542 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0603 10:59:23.332369   25542 pod_ready.go:92] pod "coredns-7db6d8ff4d-nff86" in "kube-system" namespace has status "Ready":"True"
	I0603 10:59:23.332388   25542 pod_ready.go:81] duration metric: took 7.810532ms for pod "coredns-7db6d8ff4d-nff86" in "kube-system" namespace to be "Ready" ...
	I0603 10:59:23.332396   25542 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-683480" in "kube-system" namespace to be "Ready" ...
	I0603 10:59:23.332446   25542 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/namespaces/kube-system/pods/etcd-ha-683480
	I0603 10:59:23.332461   25542 round_trippers.go:469] Request Headers:
	I0603 10:59:23.332468   25542 round_trippers.go:473]     Accept: application/json, */*
	I0603 10:59:23.332471   25542 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 10:59:23.335249   25542 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0603 10:59:23.336130   25542 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/nodes/ha-683480
	I0603 10:59:23.336145   25542 round_trippers.go:469] Request Headers:
	I0603 10:59:23.336153   25542 round_trippers.go:473]     Accept: application/json, */*
	I0603 10:59:23.336157   25542 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 10:59:23.338251   25542 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0603 10:59:23.339238   25542 pod_ready.go:92] pod "etcd-ha-683480" in "kube-system" namespace has status "Ready":"True"
	I0603 10:59:23.339253   25542 pod_ready.go:81] duration metric: took 6.850947ms for pod "etcd-ha-683480" in "kube-system" namespace to be "Ready" ...
	I0603 10:59:23.339260   25542 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-683480-m02" in "kube-system" namespace to be "Ready" ...
	I0603 10:59:23.339296   25542 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/namespaces/kube-system/pods/etcd-ha-683480-m02
	I0603 10:59:23.339303   25542 round_trippers.go:469] Request Headers:
	I0603 10:59:23.339310   25542 round_trippers.go:473]     Accept: application/json, */*
	I0603 10:59:23.339315   25542 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 10:59:23.341437   25542 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0603 10:59:23.341999   25542 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/nodes/ha-683480-m02
	I0603 10:59:23.342014   25542 round_trippers.go:469] Request Headers:
	I0603 10:59:23.342023   25542 round_trippers.go:473]     Accept: application/json, */*
	I0603 10:59:23.342028   25542 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 10:59:23.344086   25542 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0603 10:59:23.840344   25542 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/namespaces/kube-system/pods/etcd-ha-683480-m02
	I0603 10:59:23.840366   25542 round_trippers.go:469] Request Headers:
	I0603 10:59:23.840373   25542 round_trippers.go:473]     Accept: application/json, */*
	I0603 10:59:23.840379   25542 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 10:59:23.844013   25542 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 10:59:23.844649   25542 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/nodes/ha-683480-m02
	I0603 10:59:23.844666   25542 round_trippers.go:469] Request Headers:
	I0603 10:59:23.844675   25542 round_trippers.go:473]     Accept: application/json, */*
	I0603 10:59:23.844679   25542 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 10:59:23.847109   25542 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0603 10:59:24.340086   25542 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/namespaces/kube-system/pods/etcd-ha-683480-m02
	I0603 10:59:24.340119   25542 round_trippers.go:469] Request Headers:
	I0603 10:59:24.340130   25542 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 10:59:24.340136   25542 round_trippers.go:473]     Accept: application/json, */*
	I0603 10:59:24.343747   25542 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 10:59:24.344583   25542 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/nodes/ha-683480-m02
	I0603 10:59:24.344640   25542 round_trippers.go:469] Request Headers:
	I0603 10:59:24.344656   25542 round_trippers.go:473]     Accept: application/json, */*
	I0603 10:59:24.344662   25542 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 10:59:24.348368   25542 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 10:59:24.839425   25542 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/namespaces/kube-system/pods/etcd-ha-683480-m02
	I0603 10:59:24.839446   25542 round_trippers.go:469] Request Headers:
	I0603 10:59:24.839454   25542 round_trippers.go:473]     Accept: application/json, */*
	I0603 10:59:24.839457   25542 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 10:59:24.842962   25542 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 10:59:24.843818   25542 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/nodes/ha-683480-m02
	I0603 10:59:24.843834   25542 round_trippers.go:469] Request Headers:
	I0603 10:59:24.843841   25542 round_trippers.go:473]     Accept: application/json, */*
	I0603 10:59:24.843845   25542 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 10:59:24.845967   25542 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0603 10:59:25.339767   25542 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/namespaces/kube-system/pods/etcd-ha-683480-m02
	I0603 10:59:25.339788   25542 round_trippers.go:469] Request Headers:
	I0603 10:59:25.339795   25542 round_trippers.go:473]     Accept: application/json, */*
	I0603 10:59:25.339799   25542 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 10:59:25.343451   25542 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 10:59:25.344145   25542 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/nodes/ha-683480-m02
	I0603 10:59:25.344159   25542 round_trippers.go:469] Request Headers:
	I0603 10:59:25.344166   25542 round_trippers.go:473]     Accept: application/json, */*
	I0603 10:59:25.344171   25542 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 10:59:25.346649   25542 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0603 10:59:25.347192   25542 pod_ready.go:102] pod "etcd-ha-683480-m02" in "kube-system" namespace has status "Ready":"False"
	I0603 10:59:25.840345   25542 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/namespaces/kube-system/pods/etcd-ha-683480-m02
	I0603 10:59:25.840365   25542 round_trippers.go:469] Request Headers:
	I0603 10:59:25.840373   25542 round_trippers.go:473]     Accept: application/json, */*
	I0603 10:59:25.840379   25542 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 10:59:25.843818   25542 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 10:59:25.844551   25542 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/nodes/ha-683480-m02
	I0603 10:59:25.844564   25542 round_trippers.go:469] Request Headers:
	I0603 10:59:25.844571   25542 round_trippers.go:473]     Accept: application/json, */*
	I0603 10:59:25.844575   25542 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 10:59:25.847117   25542 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0603 10:59:26.339821   25542 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/namespaces/kube-system/pods/etcd-ha-683480-m02
	I0603 10:59:26.339842   25542 round_trippers.go:469] Request Headers:
	I0603 10:59:26.339851   25542 round_trippers.go:473]     Accept: application/json, */*
	I0603 10:59:26.339858   25542 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 10:59:26.343170   25542 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 10:59:26.344188   25542 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/nodes/ha-683480-m02
	I0603 10:59:26.344202   25542 round_trippers.go:469] Request Headers:
	I0603 10:59:26.344209   25542 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 10:59:26.344212   25542 round_trippers.go:473]     Accept: application/json, */*
	I0603 10:59:26.346674   25542 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0603 10:59:26.839724   25542 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/namespaces/kube-system/pods/etcd-ha-683480-m02
	I0603 10:59:26.839749   25542 round_trippers.go:469] Request Headers:
	I0603 10:59:26.839761   25542 round_trippers.go:473]     Accept: application/json, */*
	I0603 10:59:26.839767   25542 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 10:59:26.843870   25542 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 10:59:26.844499   25542 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/nodes/ha-683480-m02
	I0603 10:59:26.844516   25542 round_trippers.go:469] Request Headers:
	I0603 10:59:26.844525   25542 round_trippers.go:473]     Accept: application/json, */*
	I0603 10:59:26.844529   25542 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 10:59:26.847150   25542 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0603 10:59:27.339548   25542 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/namespaces/kube-system/pods/etcd-ha-683480-m02
	I0603 10:59:27.339576   25542 round_trippers.go:469] Request Headers:
	I0603 10:59:27.339588   25542 round_trippers.go:473]     Accept: application/json, */*
	I0603 10:59:27.339594   25542 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 10:59:27.343134   25542 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 10:59:27.343904   25542 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/nodes/ha-683480-m02
	I0603 10:59:27.343919   25542 round_trippers.go:469] Request Headers:
	I0603 10:59:27.343925   25542 round_trippers.go:473]     Accept: application/json, */*
	I0603 10:59:27.343928   25542 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 10:59:27.346294   25542 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0603 10:59:27.840207   25542 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/namespaces/kube-system/pods/etcd-ha-683480-m02
	I0603 10:59:27.840227   25542 round_trippers.go:469] Request Headers:
	I0603 10:59:27.840236   25542 round_trippers.go:473]     Accept: application/json, */*
	I0603 10:59:27.840240   25542 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 10:59:27.843264   25542 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 10:59:27.844032   25542 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/nodes/ha-683480-m02
	I0603 10:59:27.844045   25542 round_trippers.go:469] Request Headers:
	I0603 10:59:27.844052   25542 round_trippers.go:473]     Accept: application/json, */*
	I0603 10:59:27.844055   25542 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 10:59:27.846553   25542 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0603 10:59:27.847404   25542 pod_ready.go:102] pod "etcd-ha-683480-m02" in "kube-system" namespace has status "Ready":"False"
	I0603 10:59:28.339812   25542 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/namespaces/kube-system/pods/etcd-ha-683480-m02
	I0603 10:59:28.339835   25542 round_trippers.go:469] Request Headers:
	I0603 10:59:28.339845   25542 round_trippers.go:473]     Accept: application/json, */*
	I0603 10:59:28.339849   25542 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 10:59:28.343744   25542 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 10:59:28.344554   25542 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/nodes/ha-683480-m02
	I0603 10:59:28.344569   25542 round_trippers.go:469] Request Headers:
	I0603 10:59:28.344578   25542 round_trippers.go:473]     Accept: application/json, */*
	I0603 10:59:28.344584   25542 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 10:59:28.347597   25542 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 10:59:28.348487   25542 pod_ready.go:92] pod "etcd-ha-683480-m02" in "kube-system" namespace has status "Ready":"True"
	I0603 10:59:28.348506   25542 pod_ready.go:81] duration metric: took 5.009239248s for pod "etcd-ha-683480-m02" in "kube-system" namespace to be "Ready" ...
	I0603 10:59:28.348519   25542 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-683480" in "kube-system" namespace to be "Ready" ...
	I0603 10:59:28.348594   25542 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-683480
	I0603 10:59:28.348604   25542 round_trippers.go:469] Request Headers:
	I0603 10:59:28.348612   25542 round_trippers.go:473]     Accept: application/json, */*
	I0603 10:59:28.348622   25542 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 10:59:28.351554   25542 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0603 10:59:28.352108   25542 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/nodes/ha-683480
	I0603 10:59:28.352119   25542 round_trippers.go:469] Request Headers:
	I0603 10:59:28.352126   25542 round_trippers.go:473]     Accept: application/json, */*
	I0603 10:59:28.352130   25542 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 10:59:28.354481   25542 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0603 10:59:28.354952   25542 pod_ready.go:92] pod "kube-apiserver-ha-683480" in "kube-system" namespace has status "Ready":"True"
	I0603 10:59:28.354967   25542 pod_ready.go:81] duration metric: took 6.4382ms for pod "kube-apiserver-ha-683480" in "kube-system" namespace to be "Ready" ...
	I0603 10:59:28.354978   25542 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-683480-m02" in "kube-system" namespace to be "Ready" ...
	I0603 10:59:28.355025   25542 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-683480-m02
	I0603 10:59:28.355053   25542 round_trippers.go:469] Request Headers:
	I0603 10:59:28.355064   25542 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 10:59:28.355077   25542 round_trippers.go:473]     Accept: application/json, */*
	I0603 10:59:28.357702   25542 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0603 10:59:28.358628   25542 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/nodes/ha-683480-m02
	I0603 10:59:28.358642   25542 round_trippers.go:469] Request Headers:
	I0603 10:59:28.358648   25542 round_trippers.go:473]     Accept: application/json, */*
	I0603 10:59:28.358651   25542 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 10:59:28.361294   25542 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0603 10:59:28.855299   25542 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-683480-m02
	I0603 10:59:28.855320   25542 round_trippers.go:469] Request Headers:
	I0603 10:59:28.855326   25542 round_trippers.go:473]     Accept: application/json, */*
	I0603 10:59:28.855332   25542 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 10:59:28.860770   25542 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0603 10:59:28.861457   25542 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/nodes/ha-683480-m02
	I0603 10:59:28.861473   25542 round_trippers.go:469] Request Headers:
	I0603 10:59:28.861483   25542 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 10:59:28.861488   25542 round_trippers.go:473]     Accept: application/json, */*
	I0603 10:59:28.863987   25542 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0603 10:59:29.355965   25542 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-683480-m02
	I0603 10:59:29.355990   25542 round_trippers.go:469] Request Headers:
	I0603 10:59:29.355998   25542 round_trippers.go:473]     Accept: application/json, */*
	I0603 10:59:29.356002   25542 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 10:59:29.359321   25542 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 10:59:29.360078   25542 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/nodes/ha-683480-m02
	I0603 10:59:29.360095   25542 round_trippers.go:469] Request Headers:
	I0603 10:59:29.360102   25542 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 10:59:29.360108   25542 round_trippers.go:473]     Accept: application/json, */*
	I0603 10:59:29.362872   25542 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0603 10:59:29.855871   25542 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-683480-m02
	I0603 10:59:29.855890   25542 round_trippers.go:469] Request Headers:
	I0603 10:59:29.855897   25542 round_trippers.go:473]     Accept: application/json, */*
	I0603 10:59:29.855902   25542 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 10:59:29.859738   25542 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 10:59:29.860540   25542 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/nodes/ha-683480-m02
	I0603 10:59:29.860562   25542 round_trippers.go:469] Request Headers:
	I0603 10:59:29.860571   25542 round_trippers.go:473]     Accept: application/json, */*
	I0603 10:59:29.860575   25542 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 10:59:29.863400   25542 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0603 10:59:30.355224   25542 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-683480-m02
	I0603 10:59:30.355244   25542 round_trippers.go:469] Request Headers:
	I0603 10:59:30.355254   25542 round_trippers.go:473]     Accept: application/json, */*
	I0603 10:59:30.355261   25542 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 10:59:30.358494   25542 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 10:59:30.359284   25542 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/nodes/ha-683480-m02
	I0603 10:59:30.359298   25542 round_trippers.go:469] Request Headers:
	I0603 10:59:30.359307   25542 round_trippers.go:473]     Accept: application/json, */*
	I0603 10:59:30.359314   25542 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 10:59:30.361972   25542 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0603 10:59:30.362888   25542 pod_ready.go:102] pod "kube-apiserver-ha-683480-m02" in "kube-system" namespace has status "Ready":"False"
	I0603 10:59:30.855221   25542 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-683480-m02
	I0603 10:59:30.855243   25542 round_trippers.go:469] Request Headers:
	I0603 10:59:30.855249   25542 round_trippers.go:473]     Accept: application/json, */*
	I0603 10:59:30.855251   25542 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 10:59:30.858127   25542 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0603 10:59:30.859190   25542 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/nodes/ha-683480-m02
	I0603 10:59:30.859207   25542 round_trippers.go:469] Request Headers:
	I0603 10:59:30.859216   25542 round_trippers.go:473]     Accept: application/json, */*
	I0603 10:59:30.859221   25542 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 10:59:30.861744   25542 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0603 10:59:31.355705   25542 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-683480-m02
	I0603 10:59:31.355725   25542 round_trippers.go:469] Request Headers:
	I0603 10:59:31.355731   25542 round_trippers.go:473]     Accept: application/json, */*
	I0603 10:59:31.355735   25542 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 10:59:31.358713   25542 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0603 10:59:31.359512   25542 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/nodes/ha-683480-m02
	I0603 10:59:31.359528   25542 round_trippers.go:469] Request Headers:
	I0603 10:59:31.359535   25542 round_trippers.go:473]     Accept: application/json, */*
	I0603 10:59:31.359540   25542 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 10:59:31.362020   25542 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0603 10:59:31.362615   25542 pod_ready.go:92] pod "kube-apiserver-ha-683480-m02" in "kube-system" namespace has status "Ready":"True"
	I0603 10:59:31.362632   25542 pod_ready.go:81] duration metric: took 3.007647373s for pod "kube-apiserver-ha-683480-m02" in "kube-system" namespace to be "Ready" ...
	I0603 10:59:31.362651   25542 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-683480" in "kube-system" namespace to be "Ready" ...
	I0603 10:59:31.362697   25542 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-683480
	I0603 10:59:31.362705   25542 round_trippers.go:469] Request Headers:
	I0603 10:59:31.362712   25542 round_trippers.go:473]     Accept: application/json, */*
	I0603 10:59:31.362716   25542 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 10:59:31.365357   25542 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0603 10:59:31.365921   25542 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/nodes/ha-683480
	I0603 10:59:31.365935   25542 round_trippers.go:469] Request Headers:
	I0603 10:59:31.365943   25542 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 10:59:31.365946   25542 round_trippers.go:473]     Accept: application/json, */*
	I0603 10:59:31.368764   25542 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0603 10:59:31.369301   25542 pod_ready.go:92] pod "kube-controller-manager-ha-683480" in "kube-system" namespace has status "Ready":"True"
	I0603 10:59:31.369321   25542 pod_ready.go:81] duration metric: took 6.664259ms for pod "kube-controller-manager-ha-683480" in "kube-system" namespace to be "Ready" ...
	I0603 10:59:31.369332   25542 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-683480-m02" in "kube-system" namespace to be "Ready" ...
	I0603 10:59:31.369396   25542 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-683480-m02
	I0603 10:59:31.369408   25542 round_trippers.go:469] Request Headers:
	I0603 10:59:31.369418   25542 round_trippers.go:473]     Accept: application/json, */*
	I0603 10:59:31.369426   25542 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 10:59:31.371750   25542 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0603 10:59:31.372315   25542 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/nodes/ha-683480-m02
	I0603 10:59:31.372329   25542 round_trippers.go:469] Request Headers:
	I0603 10:59:31.372336   25542 round_trippers.go:473]     Accept: application/json, */*
	I0603 10:59:31.372344   25542 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 10:59:31.374211   25542 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0603 10:59:31.374569   25542 pod_ready.go:92] pod "kube-controller-manager-ha-683480-m02" in "kube-system" namespace has status "Ready":"True"
	I0603 10:59:31.374583   25542 pod_ready.go:81] duration metric: took 5.245252ms for pod "kube-controller-manager-ha-683480-m02" in "kube-system" namespace to be "Ready" ...
	I0603 10:59:31.374591   25542 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-4d9w5" in "kube-system" namespace to be "Ready" ...
	I0603 10:59:31.496926   25542 request.go:629] Waited for 122.280858ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.116:8443/api/v1/namespaces/kube-system/pods/kube-proxy-4d9w5
	I0603 10:59:31.497012   25542 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/namespaces/kube-system/pods/kube-proxy-4d9w5
	I0603 10:59:31.497023   25542 round_trippers.go:469] Request Headers:
	I0603 10:59:31.497033   25542 round_trippers.go:473]     Accept: application/json, */*
	I0603 10:59:31.497041   25542 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 10:59:31.500710   25542 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 10:59:31.696745   25542 request.go:629] Waited for 195.346971ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.116:8443/api/v1/nodes/ha-683480
	I0603 10:59:31.696802   25542 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/nodes/ha-683480
	I0603 10:59:31.696809   25542 round_trippers.go:469] Request Headers:
	I0603 10:59:31.696819   25542 round_trippers.go:473]     Accept: application/json, */*
	I0603 10:59:31.696825   25542 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 10:59:31.701940   25542 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0603 10:59:31.702579   25542 pod_ready.go:92] pod "kube-proxy-4d9w5" in "kube-system" namespace has status "Ready":"True"
	I0603 10:59:31.702597   25542 pod_ready.go:81] duration metric: took 327.998436ms for pod "kube-proxy-4d9w5" in "kube-system" namespace to be "Ready" ...
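	The "Waited for ... due to client-side throttling" lines above come from client-go's default client-side rate limiter (roughly 5 requests per second with a burst of 10), not from server-side API Priority and Fairness. Below is a minimal Go sketch of raising those limits on a rest.Config; the QPS/Burst values are illustrative and are not what minikube itself configures.

    // Sketch: raise client-go's client-side rate limits so bursts of GETs are
    // not delayed the way the waits above show. Values are illustrative only.
    package main

    import (
        "fmt"

        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Kubeconfig location is an assumption for the example.
        config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        config.QPS = 50    // client-go default is 5
        config.Burst = 100 // client-go default is 10
        clientset, err := kubernetes.NewForConfig(config)
        if err != nil {
            panic(err)
        }
        fmt.Printf("client ready: %T\n", clientset)
    }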
	I0603 10:59:31.702606   25542 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-q2xfn" in "kube-system" namespace to be "Ready" ...
	I0603 10:59:31.896850   25542 request.go:629] Waited for 194.166571ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.116:8443/api/v1/namespaces/kube-system/pods/kube-proxy-q2xfn
	I0603 10:59:31.896920   25542 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/namespaces/kube-system/pods/kube-proxy-q2xfn
	I0603 10:59:31.896927   25542 round_trippers.go:469] Request Headers:
	I0603 10:59:31.896937   25542 round_trippers.go:473]     Accept: application/json, */*
	I0603 10:59:31.896944   25542 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 10:59:31.900546   25542 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 10:59:32.096495   25542 request.go:629] Waited for 195.389074ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.116:8443/api/v1/nodes/ha-683480-m02
	I0603 10:59:32.096572   25542 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/nodes/ha-683480-m02
	I0603 10:59:32.096579   25542 round_trippers.go:469] Request Headers:
	I0603 10:59:32.096589   25542 round_trippers.go:473]     Accept: application/json, */*
	I0603 10:59:32.096598   25542 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 10:59:32.100482   25542 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 10:59:32.101116   25542 pod_ready.go:92] pod "kube-proxy-q2xfn" in "kube-system" namespace has status "Ready":"True"
	I0603 10:59:32.101134   25542 pod_ready.go:81] duration metric: took 398.517707ms for pod "kube-proxy-q2xfn" in "kube-system" namespace to be "Ready" ...
	I0603 10:59:32.101143   25542 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-683480" in "kube-system" namespace to be "Ready" ...
	I0603 10:59:32.296718   25542 request.go:629] Waited for 195.519197ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.116:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-683480
	I0603 10:59:32.296800   25542 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-683480
	I0603 10:59:32.296808   25542 round_trippers.go:469] Request Headers:
	I0603 10:59:32.296816   25542 round_trippers.go:473]     Accept: application/json, */*
	I0603 10:59:32.296819   25542 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 10:59:32.300389   25542 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 10:59:32.496149   25542 request.go:629] Waited for 195.276948ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.116:8443/api/v1/nodes/ha-683480
	I0603 10:59:32.496209   25542 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/nodes/ha-683480
	I0603 10:59:32.496214   25542 round_trippers.go:469] Request Headers:
	I0603 10:59:32.496221   25542 round_trippers.go:473]     Accept: application/json, */*
	I0603 10:59:32.496228   25542 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 10:59:32.499362   25542 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 10:59:32.500060   25542 pod_ready.go:92] pod "kube-scheduler-ha-683480" in "kube-system" namespace has status "Ready":"True"
	I0603 10:59:32.500079   25542 pod_ready.go:81] duration metric: took 398.928589ms for pod "kube-scheduler-ha-683480" in "kube-system" namespace to be "Ready" ...
	I0603 10:59:32.500089   25542 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-683480-m02" in "kube-system" namespace to be "Ready" ...
	I0603 10:59:32.696071   25542 request.go:629] Waited for 195.918544ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.116:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-683480-m02
	I0603 10:59:32.696124   25542 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-683480-m02
	I0603 10:59:32.696129   25542 round_trippers.go:469] Request Headers:
	I0603 10:59:32.696143   25542 round_trippers.go:473]     Accept: application/json, */*
	I0603 10:59:32.696162   25542 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 10:59:32.698879   25542 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0603 10:59:32.896789   25542 request.go:629] Waited for 197.360174ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.116:8443/api/v1/nodes/ha-683480-m02
	I0603 10:59:32.896868   25542 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/nodes/ha-683480-m02
	I0603 10:59:32.896873   25542 round_trippers.go:469] Request Headers:
	I0603 10:59:32.896880   25542 round_trippers.go:473]     Accept: application/json, */*
	I0603 10:59:32.896884   25542 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 10:59:32.900609   25542 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 10:59:32.901147   25542 pod_ready.go:92] pod "kube-scheduler-ha-683480-m02" in "kube-system" namespace has status "Ready":"True"
	I0603 10:59:32.901165   25542 pod_ready.go:81] duration metric: took 401.068545ms for pod "kube-scheduler-ha-683480-m02" in "kube-system" namespace to be "Ready" ...
	I0603 10:59:32.901174   25542 pod_ready.go:38] duration metric: took 9.601397971s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
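	The pod_ready phase above repeatedly GETs each system pod (and the node it runs on) until the pod reports the Ready condition, retrying roughly every half second. A rough client-go equivalent of that loop, written as a sketch rather than as minikube's actual pod_ready.go code, with the namespace and one pod name taken from the log:

    // Sketch: poll a pod until its PodReady condition is True or a timeout expires.
    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func waitPodReady(cs *kubernetes.Clientset, ns, name string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
            if err == nil {
                for _, c := range pod.Status.Conditions {
                    if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
                        return nil
                    }
                }
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("pod %s/%s not Ready within %v", ns, name, timeout)
    }

    func main() {
        config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(config)
        if err != nil {
            panic(err)
        }
        if err := waitPodReady(cs, "kube-system", "etcd-ha-683480-m02", 6*time.Minute); err != nil {
            panic(err)
        }
        fmt.Println("pod is Ready")
    }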
	I0603 10:59:32.901187   25542 api_server.go:52] waiting for apiserver process to appear ...
	I0603 10:59:32.901243   25542 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 10:59:32.916356   25542 api_server.go:72] duration metric: took 18.444296631s to wait for apiserver process to appear ...
	I0603 10:59:32.916376   25542 api_server.go:88] waiting for apiserver healthz status ...
	I0603 10:59:32.916395   25542 api_server.go:253] Checking apiserver healthz at https://192.168.39.116:8443/healthz ...
	I0603 10:59:32.922078   25542 api_server.go:279] https://192.168.39.116:8443/healthz returned 200:
	ok
	I0603 10:59:32.922151   25542 round_trippers.go:463] GET https://192.168.39.116:8443/version
	I0603 10:59:32.922162   25542 round_trippers.go:469] Request Headers:
	I0603 10:59:32.922169   25542 round_trippers.go:473]     Accept: application/json, */*
	I0603 10:59:32.922175   25542 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 10:59:32.922893   25542 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0603 10:59:32.922973   25542 api_server.go:141] control plane version: v1.30.1
	I0603 10:59:32.922987   25542 api_server.go:131] duration metric: took 6.604807ms to wait for apiserver health ...
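	After the pods are Ready and the kube-apiserver process is found, the log probes /healthz and /version on the apiserver directly. A minimal net/http sketch of the same probe, using the endpoint from the log; TLS verification is skipped only to keep the example short, while the real client trusts the cluster CA and presents client certificates:

    // Sketch: hit the apiserver /healthz endpoint the way the log does.
    // InsecureSkipVerify is for brevity only.
    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
    )

    func main() {
        client := &http.Client{
            Transport: &http.Transport{
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
            },
        }
        resp, err := client.Get("https://192.168.39.116:8443/healthz")
        if err != nil {
            panic(err)
        }
        defer resp.Body.Close()
        body, _ := io.ReadAll(resp.Body)
        fmt.Printf("%d %s\n", resp.StatusCode, body) // a healthy apiserver answers "200 ok"
    }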
	I0603 10:59:32.922995   25542 system_pods.go:43] waiting for kube-system pods to appear ...
	I0603 10:59:33.096395   25542 request.go:629] Waited for 173.338882ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.116:8443/api/v1/namespaces/kube-system/pods
	I0603 10:59:33.096465   25542 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/namespaces/kube-system/pods
	I0603 10:59:33.096473   25542 round_trippers.go:469] Request Headers:
	I0603 10:59:33.096480   25542 round_trippers.go:473]     Accept: application/json, */*
	I0603 10:59:33.096485   25542 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 10:59:33.105526   25542 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0603 10:59:33.112685   25542 system_pods.go:59] 17 kube-system pods found
	I0603 10:59:33.112712   25542 system_pods.go:61] "coredns-7db6d8ff4d-8tqf9" [8eab910a-98ed-43db-ac16-d53beb6b7ee4] Running
	I0603 10:59:33.112717   25542 system_pods.go:61] "coredns-7db6d8ff4d-nff86" [02320e91-17ab-4120-b8b9-dcc08234f180] Running
	I0603 10:59:33.112721   25542 system_pods.go:61] "etcd-ha-683480" [b0a866b1-e56e-4c99-90d1-b96b08dc814f] Running
	I0603 10:59:33.112724   25542 system_pods.go:61] "etcd-ha-683480-m02" [ae0c631b-f1b7-4f97-a112-82115e2e3a26] Running
	I0603 10:59:33.112727   25542 system_pods.go:61] "kindnet-t6fxj" [a1edfc5d-477d-40ed-8702-4916d1e9fcb1] Running
	I0603 10:59:33.112731   25542 system_pods.go:61] "kindnet-zxhbp" [320e315b-e189-4358-9e56-a4be7d944fae] Running
	I0603 10:59:33.112736   25542 system_pods.go:61] "kube-apiserver-ha-683480" [383ca38e-6dea-45d2-8874-f8f7478b889d] Running
	I0603 10:59:33.112741   25542 system_pods.go:61] "kube-apiserver-ha-683480-m02" [b1fadbf7-5046-4762-928e-d0a86b2c333a] Running
	I0603 10:59:33.112745   25542 system_pods.go:61] "kube-controller-manager-ha-683480" [3ba095b7-0e4d-41b9-af2d-12d4ce4ae004] Running
	I0603 10:59:33.112755   25542 system_pods.go:61] "kube-controller-manager-ha-683480-m02" [fe54bb1f-7320-40dd-a8a9-f7d1c5d793fe] Running
	I0603 10:59:33.112760   25542 system_pods.go:61] "kube-proxy-4d9w5" [708e060d-115a-4b74-bc66-138d62796b50] Running
	I0603 10:59:33.112768   25542 system_pods.go:61] "kube-proxy-q2xfn" [af8c691a-3316-4e6d-8feb-b306d6d5d2f1] Running
	I0603 10:59:33.112773   25542 system_pods.go:61] "kube-scheduler-ha-683480" [c57edb18-cdff-4548-acc4-1abbbd906fc5] Running
	I0603 10:59:33.112779   25542 system_pods.go:61] "kube-scheduler-ha-683480-m02" [ce81b254-4edc-425a-8489-14c71f56d7de] Running
	I0603 10:59:33.112783   25542 system_pods.go:61] "kube-vip-ha-683480" [aa6a05c5-446e-4179-be45-0f8d33631c89] Running
	I0603 10:59:33.112790   25542 system_pods.go:61] "kube-vip-ha-683480-m02" [5679c930-02ab-4784-8bf1-7e477719a5a6] Running
	I0603 10:59:33.112793   25542 system_pods.go:61] "storage-provisioner" [a410a98d-73a7-434b-88ce-575c300b2807] Running
	I0603 10:59:33.112798   25542 system_pods.go:74] duration metric: took 189.797613ms to wait for pod list to return data ...
	I0603 10:59:33.112808   25542 default_sa.go:34] waiting for default service account to be created ...
	I0603 10:59:33.296188   25542 request.go:629] Waited for 183.314921ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.116:8443/api/v1/namespaces/default/serviceaccounts
	I0603 10:59:33.296246   25542 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/namespaces/default/serviceaccounts
	I0603 10:59:33.296252   25542 round_trippers.go:469] Request Headers:
	I0603 10:59:33.296259   25542 round_trippers.go:473]     Accept: application/json, */*
	I0603 10:59:33.296263   25542 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 10:59:33.299696   25542 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 10:59:33.299902   25542 default_sa.go:45] found service account: "default"
	I0603 10:59:33.299918   25542 default_sa.go:55] duration metric: took 187.10456ms for default service account to be created ...
	I0603 10:59:33.299926   25542 system_pods.go:116] waiting for k8s-apps to be running ...
	I0603 10:59:33.496401   25542 request.go:629] Waited for 196.414711ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.116:8443/api/v1/namespaces/kube-system/pods
	I0603 10:59:33.496476   25542 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/namespaces/kube-system/pods
	I0603 10:59:33.496484   25542 round_trippers.go:469] Request Headers:
	I0603 10:59:33.496493   25542 round_trippers.go:473]     Accept: application/json, */*
	I0603 10:59:33.496503   25542 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 10:59:33.501752   25542 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0603 10:59:33.506723   25542 system_pods.go:86] 17 kube-system pods found
	I0603 10:59:33.506744   25542 system_pods.go:89] "coredns-7db6d8ff4d-8tqf9" [8eab910a-98ed-43db-ac16-d53beb6b7ee4] Running
	I0603 10:59:33.506750   25542 system_pods.go:89] "coredns-7db6d8ff4d-nff86" [02320e91-17ab-4120-b8b9-dcc08234f180] Running
	I0603 10:59:33.506754   25542 system_pods.go:89] "etcd-ha-683480" [b0a866b1-e56e-4c99-90d1-b96b08dc814f] Running
	I0603 10:59:33.506758   25542 system_pods.go:89] "etcd-ha-683480-m02" [ae0c631b-f1b7-4f97-a112-82115e2e3a26] Running
	I0603 10:59:33.506762   25542 system_pods.go:89] "kindnet-t6fxj" [a1edfc5d-477d-40ed-8702-4916d1e9fcb1] Running
	I0603 10:59:33.506766   25542 system_pods.go:89] "kindnet-zxhbp" [320e315b-e189-4358-9e56-a4be7d944fae] Running
	I0603 10:59:33.506770   25542 system_pods.go:89] "kube-apiserver-ha-683480" [383ca38e-6dea-45d2-8874-f8f7478b889d] Running
	I0603 10:59:33.506774   25542 system_pods.go:89] "kube-apiserver-ha-683480-m02" [b1fadbf7-5046-4762-928e-d0a86b2c333a] Running
	I0603 10:59:33.506778   25542 system_pods.go:89] "kube-controller-manager-ha-683480" [3ba095b7-0e4d-41b9-af2d-12d4ce4ae004] Running
	I0603 10:59:33.506783   25542 system_pods.go:89] "kube-controller-manager-ha-683480-m02" [fe54bb1f-7320-40dd-a8a9-f7d1c5d793fe] Running
	I0603 10:59:33.506790   25542 system_pods.go:89] "kube-proxy-4d9w5" [708e060d-115a-4b74-bc66-138d62796b50] Running
	I0603 10:59:33.506793   25542 system_pods.go:89] "kube-proxy-q2xfn" [af8c691a-3316-4e6d-8feb-b306d6d5d2f1] Running
	I0603 10:59:33.506800   25542 system_pods.go:89] "kube-scheduler-ha-683480" [c57edb18-cdff-4548-acc4-1abbbd906fc5] Running
	I0603 10:59:33.506804   25542 system_pods.go:89] "kube-scheduler-ha-683480-m02" [ce81b254-4edc-425a-8489-14c71f56d7de] Running
	I0603 10:59:33.506808   25542 system_pods.go:89] "kube-vip-ha-683480" [aa6a05c5-446e-4179-be45-0f8d33631c89] Running
	I0603 10:59:33.506812   25542 system_pods.go:89] "kube-vip-ha-683480-m02" [5679c930-02ab-4784-8bf1-7e477719a5a6] Running
	I0603 10:59:33.506818   25542 system_pods.go:89] "storage-provisioner" [a410a98d-73a7-434b-88ce-575c300b2807] Running
	I0603 10:59:33.506824   25542 system_pods.go:126] duration metric: took 206.893332ms to wait for k8s-apps to be running ...
	I0603 10:59:33.506833   25542 system_svc.go:44] waiting for kubelet service to be running ....
	I0603 10:59:33.506874   25542 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0603 10:59:33.522011   25542 system_svc.go:56] duration metric: took 15.172648ms WaitForService to wait for kubelet
	I0603 10:59:33.522034   25542 kubeadm.go:576] duration metric: took 19.049980276s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0603 10:59:33.522051   25542 node_conditions.go:102] verifying NodePressure condition ...
	I0603 10:59:33.696426   25542 request.go:629] Waited for 174.313958ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.116:8443/api/v1/nodes
	I0603 10:59:33.696475   25542 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/nodes
	I0603 10:59:33.696482   25542 round_trippers.go:469] Request Headers:
	I0603 10:59:33.696491   25542 round_trippers.go:473]     Accept: application/json, */*
	I0603 10:59:33.696498   25542 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 10:59:33.699582   25542 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 10:59:33.700490   25542 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0603 10:59:33.700515   25542 node_conditions.go:123] node cpu capacity is 2
	I0603 10:59:33.700528   25542 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0603 10:59:33.700534   25542 node_conditions.go:123] node cpu capacity is 2
	I0603 10:59:33.700540   25542 node_conditions.go:105] duration metric: took 178.484212ms to run NodePressure ...
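	The NodePressure step above just reads each node's reported capacity (CPU and ephemeral storage) from the Node objects. A short client-go sketch of reading the same fields, assuming the same kubeconfig as the earlier sketches:

    // Sketch: print the capacity fields the NodePressure check reads.
    package main

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(config)
        if err != nil {
            panic(err)
        }
        nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
        if err != nil {
            panic(err)
        }
        for _, n := range nodes.Items {
            cpu := n.Status.Capacity[corev1.ResourceCPU]
            eph := n.Status.Capacity[corev1.ResourceEphemeralStorage]
            fmt.Printf("%s: cpu=%s ephemeral-storage=%s\n", n.Name, cpu.String(), eph.String())
        }
    }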
	I0603 10:59:33.700555   25542 start.go:240] waiting for startup goroutines ...
	I0603 10:59:33.700584   25542 start.go:254] writing updated cluster config ...
	I0603 10:59:33.702645   25542 out.go:177] 
	I0603 10:59:33.704059   25542 config.go:182] Loaded profile config "ha-683480": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0603 10:59:33.704171   25542 profile.go:143] Saving config to /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/ha-683480/config.json ...
	I0603 10:59:33.705860   25542 out.go:177] * Starting "ha-683480-m03" control-plane node in "ha-683480" cluster
	I0603 10:59:33.706933   25542 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime crio
	I0603 10:59:33.706955   25542 cache.go:56] Caching tarball of preloaded images
	I0603 10:59:33.707063   25542 preload.go:173] Found /home/jenkins/minikube-integration/19008-7755/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0603 10:59:33.707077   25542 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on crio
	I0603 10:59:33.707166   25542 profile.go:143] Saving config to /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/ha-683480/config.json ...
	I0603 10:59:33.707319   25542 start.go:360] acquireMachinesLock for ha-683480-m03: {Name:mk7389fd163f37e28f7d4d842bab151f7e27bc7c Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0603 10:59:33.707367   25542 start.go:364] duration metric: took 30.727µs to acquireMachinesLock for "ha-683480-m03"
	I0603 10:59:33.707384   25542 start.go:93] Provisioning new machine with config: &{Name:ha-683480 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:ha-683480 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.116 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.127 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0603 10:59:33.707462   25542 start.go:125] createHost starting for "m03" (driver="kvm2")
	I0603 10:59:33.709131   25542 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0603 10:59:33.709200   25542 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 10:59:33.709230   25542 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 10:59:33.723856   25542 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35267
	I0603 10:59:33.724226   25542 main.go:141] libmachine: () Calling .GetVersion
	I0603 10:59:33.724678   25542 main.go:141] libmachine: Using API Version  1
	I0603 10:59:33.724698   25542 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 10:59:33.724986   25542 main.go:141] libmachine: () Calling .GetMachineName
	I0603 10:59:33.725174   25542 main.go:141] libmachine: (ha-683480-m03) Calling .GetMachineName
	I0603 10:59:33.725311   25542 main.go:141] libmachine: (ha-683480-m03) Calling .DriverName
	I0603 10:59:33.725473   25542 start.go:159] libmachine.API.Create for "ha-683480" (driver="kvm2")
	I0603 10:59:33.725503   25542 client.go:168] LocalClient.Create starting
	I0603 10:59:33.725540   25542 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19008-7755/.minikube/certs/ca.pem
	I0603 10:59:33.725581   25542 main.go:141] libmachine: Decoding PEM data...
	I0603 10:59:33.725603   25542 main.go:141] libmachine: Parsing certificate...
	I0603 10:59:33.725673   25542 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19008-7755/.minikube/certs/cert.pem
	I0603 10:59:33.725701   25542 main.go:141] libmachine: Decoding PEM data...
	I0603 10:59:33.725715   25542 main.go:141] libmachine: Parsing certificate...
	I0603 10:59:33.725741   25542 main.go:141] libmachine: Running pre-create checks...
	I0603 10:59:33.725750   25542 main.go:141] libmachine: (ha-683480-m03) Calling .PreCreateCheck
	I0603 10:59:33.725911   25542 main.go:141] libmachine: (ha-683480-m03) Calling .GetConfigRaw
	I0603 10:59:33.726288   25542 main.go:141] libmachine: Creating machine...
	I0603 10:59:33.726302   25542 main.go:141] libmachine: (ha-683480-m03) Calling .Create
	I0603 10:59:33.726418   25542 main.go:141] libmachine: (ha-683480-m03) Creating KVM machine...
	I0603 10:59:33.727558   25542 main.go:141] libmachine: (ha-683480-m03) DBG | found existing default KVM network
	I0603 10:59:33.727701   25542 main.go:141] libmachine: (ha-683480-m03) DBG | found existing private KVM network mk-ha-683480
	I0603 10:59:33.727806   25542 main.go:141] libmachine: (ha-683480-m03) Setting up store path in /home/jenkins/minikube-integration/19008-7755/.minikube/machines/ha-683480-m03 ...
	I0603 10:59:33.727829   25542 main.go:141] libmachine: (ha-683480-m03) Building disk image from file:///home/jenkins/minikube-integration/19008-7755/.minikube/cache/iso/amd64/minikube-v1.33.1-1716398070-18934-amd64.iso
	I0603 10:59:33.727889   25542 main.go:141] libmachine: (ha-683480-m03) DBG | I0603 10:59:33.727795   26612 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19008-7755/.minikube
	I0603 10:59:33.727994   25542 main.go:141] libmachine: (ha-683480-m03) Downloading /home/jenkins/minikube-integration/19008-7755/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19008-7755/.minikube/cache/iso/amd64/minikube-v1.33.1-1716398070-18934-amd64.iso...
	I0603 10:59:33.940122   25542 main.go:141] libmachine: (ha-683480-m03) DBG | I0603 10:59:33.939987   26612 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19008-7755/.minikube/machines/ha-683480-m03/id_rsa...
	I0603 10:59:34.047316   25542 main.go:141] libmachine: (ha-683480-m03) DBG | I0603 10:59:34.047212   26612 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19008-7755/.minikube/machines/ha-683480-m03/ha-683480-m03.rawdisk...
	I0603 10:59:34.047349   25542 main.go:141] libmachine: (ha-683480-m03) DBG | Writing magic tar header
	I0603 10:59:34.047365   25542 main.go:141] libmachine: (ha-683480-m03) DBG | Writing SSH key tar header
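	The "Creating raw disk image" and "Writing ... tar header" lines refer to the boot2docker-style trick of placing a small tar archive containing the SSH key at the front of the raw disk, which the guest unpacks on first boot. A hedged sketch of that idea follows; the archive layout, key path and disk size here are assumptions for illustration, not the driver's exact format.

    // Sketch (mechanism assumed): create a sparse raw disk with a tiny tar
    // archive holding the SSH public key at offset 0.
    package main

    import (
        "archive/tar"
        "os"
    )

    func createRawDiskWithKey(path string, sizeBytes int64, pubKey []byte) error {
        f, err := os.Create(path)
        if err != nil {
            return err
        }
        defer f.Close()

        tw := tar.NewWriter(f)
        hdr := &tar.Header{Name: ".ssh/authorized_keys", Mode: 0o644, Size: int64(len(pubKey))}
        if err := tw.WriteHeader(hdr); err != nil {
            return err
        }
        if _, err := tw.Write(pubKey); err != nil {
            return err
        }
        if err := tw.Close(); err != nil {
            return err
        }
        // Grow the file to the full disk size; the tail remains sparse.
        return f.Truncate(sizeBytes)
    }

    func main() {
        key, err := os.ReadFile("id_rsa.pub") // illustrative path
        if err != nil {
            panic(err)
        }
        if err := createRawDiskWithKey("ha-683480-m03.rawdisk", 20000*1024*1024, key); err != nil {
            panic(err)
        }
    }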
	I0603 10:59:34.047377   25542 main.go:141] libmachine: (ha-683480-m03) DBG | I0603 10:59:34.047339   26612 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19008-7755/.minikube/machines/ha-683480-m03 ...
	I0603 10:59:34.047477   25542 main.go:141] libmachine: (ha-683480-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19008-7755/.minikube/machines/ha-683480-m03
	I0603 10:59:34.047502   25542 main.go:141] libmachine: (ha-683480-m03) Setting executable bit set on /home/jenkins/minikube-integration/19008-7755/.minikube/machines/ha-683480-m03 (perms=drwx------)
	I0603 10:59:34.047514   25542 main.go:141] libmachine: (ha-683480-m03) Setting executable bit set on /home/jenkins/minikube-integration/19008-7755/.minikube/machines (perms=drwxr-xr-x)
	I0603 10:59:34.047526   25542 main.go:141] libmachine: (ha-683480-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19008-7755/.minikube/machines
	I0603 10:59:34.047543   25542 main.go:141] libmachine: (ha-683480-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19008-7755/.minikube
	I0603 10:59:34.047558   25542 main.go:141] libmachine: (ha-683480-m03) Setting executable bit set on /home/jenkins/minikube-integration/19008-7755/.minikube (perms=drwxr-xr-x)
	I0603 10:59:34.047570   25542 main.go:141] libmachine: (ha-683480-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19008-7755
	I0603 10:59:34.047584   25542 main.go:141] libmachine: (ha-683480-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0603 10:59:34.047596   25542 main.go:141] libmachine: (ha-683480-m03) DBG | Checking permissions on dir: /home/jenkins
	I0603 10:59:34.047606   25542 main.go:141] libmachine: (ha-683480-m03) Setting executable bit set on /home/jenkins/minikube-integration/19008-7755 (perms=drwxrwxr-x)
	I0603 10:59:34.047650   25542 main.go:141] libmachine: (ha-683480-m03) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0603 10:59:34.047675   25542 main.go:141] libmachine: (ha-683480-m03) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0603 10:59:34.047688   25542 main.go:141] libmachine: (ha-683480-m03) DBG | Checking permissions on dir: /home
	I0603 10:59:34.047707   25542 main.go:141] libmachine: (ha-683480-m03) DBG | Skipping /home - not owner
	I0603 10:59:34.047722   25542 main.go:141] libmachine: (ha-683480-m03) Creating domain...
	I0603 10:59:34.048481   25542 main.go:141] libmachine: (ha-683480-m03) define libvirt domain using xml: 
	I0603 10:59:34.048503   25542 main.go:141] libmachine: (ha-683480-m03) <domain type='kvm'>
	I0603 10:59:34.048513   25542 main.go:141] libmachine: (ha-683480-m03)   <name>ha-683480-m03</name>
	I0603 10:59:34.048520   25542 main.go:141] libmachine: (ha-683480-m03)   <memory unit='MiB'>2200</memory>
	I0603 10:59:34.048532   25542 main.go:141] libmachine: (ha-683480-m03)   <vcpu>2</vcpu>
	I0603 10:59:34.048543   25542 main.go:141] libmachine: (ha-683480-m03)   <features>
	I0603 10:59:34.048551   25542 main.go:141] libmachine: (ha-683480-m03)     <acpi/>
	I0603 10:59:34.048561   25542 main.go:141] libmachine: (ha-683480-m03)     <apic/>
	I0603 10:59:34.048571   25542 main.go:141] libmachine: (ha-683480-m03)     <pae/>
	I0603 10:59:34.048581   25542 main.go:141] libmachine: (ha-683480-m03)     
	I0603 10:59:34.048609   25542 main.go:141] libmachine: (ha-683480-m03)   </features>
	I0603 10:59:34.048633   25542 main.go:141] libmachine: (ha-683480-m03)   <cpu mode='host-passthrough'>
	I0603 10:59:34.048644   25542 main.go:141] libmachine: (ha-683480-m03)   
	I0603 10:59:34.048654   25542 main.go:141] libmachine: (ha-683480-m03)   </cpu>
	I0603 10:59:34.048664   25542 main.go:141] libmachine: (ha-683480-m03)   <os>
	I0603 10:59:34.048669   25542 main.go:141] libmachine: (ha-683480-m03)     <type>hvm</type>
	I0603 10:59:34.048677   25542 main.go:141] libmachine: (ha-683480-m03)     <boot dev='cdrom'/>
	I0603 10:59:34.048683   25542 main.go:141] libmachine: (ha-683480-m03)     <boot dev='hd'/>
	I0603 10:59:34.048692   25542 main.go:141] libmachine: (ha-683480-m03)     <bootmenu enable='no'/>
	I0603 10:59:34.048703   25542 main.go:141] libmachine: (ha-683480-m03)   </os>
	I0603 10:59:34.048737   25542 main.go:141] libmachine: (ha-683480-m03)   <devices>
	I0603 10:59:34.048754   25542 main.go:141] libmachine: (ha-683480-m03)     <disk type='file' device='cdrom'>
	I0603 10:59:34.048763   25542 main.go:141] libmachine: (ha-683480-m03)       <source file='/home/jenkins/minikube-integration/19008-7755/.minikube/machines/ha-683480-m03/boot2docker.iso'/>
	I0603 10:59:34.048775   25542 main.go:141] libmachine: (ha-683480-m03)       <target dev='hdc' bus='scsi'/>
	I0603 10:59:34.048790   25542 main.go:141] libmachine: (ha-683480-m03)       <readonly/>
	I0603 10:59:34.048801   25542 main.go:141] libmachine: (ha-683480-m03)     </disk>
	I0603 10:59:34.048814   25542 main.go:141] libmachine: (ha-683480-m03)     <disk type='file' device='disk'>
	I0603 10:59:34.048831   25542 main.go:141] libmachine: (ha-683480-m03)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0603 10:59:34.048847   25542 main.go:141] libmachine: (ha-683480-m03)       <source file='/home/jenkins/minikube-integration/19008-7755/.minikube/machines/ha-683480-m03/ha-683480-m03.rawdisk'/>
	I0603 10:59:34.048857   25542 main.go:141] libmachine: (ha-683480-m03)       <target dev='hda' bus='virtio'/>
	I0603 10:59:34.048866   25542 main.go:141] libmachine: (ha-683480-m03)     </disk>
	I0603 10:59:34.048876   25542 main.go:141] libmachine: (ha-683480-m03)     <interface type='network'>
	I0603 10:59:34.048886   25542 main.go:141] libmachine: (ha-683480-m03)       <source network='mk-ha-683480'/>
	I0603 10:59:34.048894   25542 main.go:141] libmachine: (ha-683480-m03)       <model type='virtio'/>
	I0603 10:59:34.048903   25542 main.go:141] libmachine: (ha-683480-m03)     </interface>
	I0603 10:59:34.048913   25542 main.go:141] libmachine: (ha-683480-m03)     <interface type='network'>
	I0603 10:59:34.048921   25542 main.go:141] libmachine: (ha-683480-m03)       <source network='default'/>
	I0603 10:59:34.048931   25542 main.go:141] libmachine: (ha-683480-m03)       <model type='virtio'/>
	I0603 10:59:34.048943   25542 main.go:141] libmachine: (ha-683480-m03)     </interface>
	I0603 10:59:34.048952   25542 main.go:141] libmachine: (ha-683480-m03)     <serial type='pty'>
	I0603 10:59:34.048970   25542 main.go:141] libmachine: (ha-683480-m03)       <target port='0'/>
	I0603 10:59:34.048983   25542 main.go:141] libmachine: (ha-683480-m03)     </serial>
	I0603 10:59:34.048996   25542 main.go:141] libmachine: (ha-683480-m03)     <console type='pty'>
	I0603 10:59:34.049007   25542 main.go:141] libmachine: (ha-683480-m03)       <target type='serial' port='0'/>
	I0603 10:59:34.049018   25542 main.go:141] libmachine: (ha-683480-m03)     </console>
	I0603 10:59:34.049028   25542 main.go:141] libmachine: (ha-683480-m03)     <rng model='virtio'>
	I0603 10:59:34.049038   25542 main.go:141] libmachine: (ha-683480-m03)       <backend model='random'>/dev/random</backend>
	I0603 10:59:34.049048   25542 main.go:141] libmachine: (ha-683480-m03)     </rng>
	I0603 10:59:34.049064   25542 main.go:141] libmachine: (ha-683480-m03)     
	I0603 10:59:34.049079   25542 main.go:141] libmachine: (ha-683480-m03)     
	I0603 10:59:34.049090   25542 main.go:141] libmachine: (ha-683480-m03)   </devices>
	I0603 10:59:34.049101   25542 main.go:141] libmachine: (ha-683480-m03) </domain>
	I0603 10:59:34.049110   25542 main.go:141] libmachine: (ha-683480-m03) 
	I0603 10:59:34.055631   25542 main.go:141] libmachine: (ha-683480-m03) DBG | domain ha-683480-m03 has defined MAC address 52:54:00:e4:91:52 in network default
	I0603 10:59:34.056194   25542 main.go:141] libmachine: (ha-683480-m03) Ensuring networks are active...
	I0603 10:59:34.056219   25542 main.go:141] libmachine: (ha-683480-m03) DBG | domain ha-683480-m03 has defined MAC address 52:54:00:b4:3e:89 in network mk-ha-683480
	I0603 10:59:34.056816   25542 main.go:141] libmachine: (ha-683480-m03) Ensuring network default is active
	I0603 10:59:34.057061   25542 main.go:141] libmachine: (ha-683480-m03) Ensuring network mk-ha-683480 is active
	I0603 10:59:34.057457   25542 main.go:141] libmachine: (ha-683480-m03) Getting domain xml...
	I0603 10:59:34.058139   25542 main.go:141] libmachine: (ha-683480-m03) Creating domain...
	I0603 10:59:35.261242   25542 main.go:141] libmachine: (ha-683480-m03) Waiting to get IP...
	I0603 10:59:35.262263   25542 main.go:141] libmachine: (ha-683480-m03) DBG | domain ha-683480-m03 has defined MAC address 52:54:00:b4:3e:89 in network mk-ha-683480
	I0603 10:59:35.262686   25542 main.go:141] libmachine: (ha-683480-m03) DBG | unable to find current IP address of domain ha-683480-m03 in network mk-ha-683480
	I0603 10:59:35.262723   25542 main.go:141] libmachine: (ha-683480-m03) DBG | I0603 10:59:35.262671   26612 retry.go:31] will retry after 270.466843ms: waiting for machine to come up
	I0603 10:59:35.535155   25542 main.go:141] libmachine: (ha-683480-m03) DBG | domain ha-683480-m03 has defined MAC address 52:54:00:b4:3e:89 in network mk-ha-683480
	I0603 10:59:35.535612   25542 main.go:141] libmachine: (ha-683480-m03) DBG | unable to find current IP address of domain ha-683480-m03 in network mk-ha-683480
	I0603 10:59:35.535640   25542 main.go:141] libmachine: (ha-683480-m03) DBG | I0603 10:59:35.535568   26612 retry.go:31] will retry after 381.295501ms: waiting for machine to come up
	I0603 10:59:35.918833   25542 main.go:141] libmachine: (ha-683480-m03) DBG | domain ha-683480-m03 has defined MAC address 52:54:00:b4:3e:89 in network mk-ha-683480
	I0603 10:59:35.919263   25542 main.go:141] libmachine: (ha-683480-m03) DBG | unable to find current IP address of domain ha-683480-m03 in network mk-ha-683480
	I0603 10:59:35.919291   25542 main.go:141] libmachine: (ha-683480-m03) DBG | I0603 10:59:35.919226   26612 retry.go:31] will retry after 451.72106ms: waiting for machine to come up
	I0603 10:59:36.372620   25542 main.go:141] libmachine: (ha-683480-m03) DBG | domain ha-683480-m03 has defined MAC address 52:54:00:b4:3e:89 in network mk-ha-683480
	I0603 10:59:36.373072   25542 main.go:141] libmachine: (ha-683480-m03) DBG | unable to find current IP address of domain ha-683480-m03 in network mk-ha-683480
	I0603 10:59:36.373095   25542 main.go:141] libmachine: (ha-683480-m03) DBG | I0603 10:59:36.373006   26612 retry.go:31] will retry after 446.571176ms: waiting for machine to come up
	I0603 10:59:36.821784   25542 main.go:141] libmachine: (ha-683480-m03) DBG | domain ha-683480-m03 has defined MAC address 52:54:00:b4:3e:89 in network mk-ha-683480
	I0603 10:59:36.822324   25542 main.go:141] libmachine: (ha-683480-m03) DBG | unable to find current IP address of domain ha-683480-m03 in network mk-ha-683480
	I0603 10:59:36.822351   25542 main.go:141] libmachine: (ha-683480-m03) DBG | I0603 10:59:36.822274   26612 retry.go:31] will retry after 548.14234ms: waiting for machine to come up
	I0603 10:59:37.372079   25542 main.go:141] libmachine: (ha-683480-m03) DBG | domain ha-683480-m03 has defined MAC address 52:54:00:b4:3e:89 in network mk-ha-683480
	I0603 10:59:37.372590   25542 main.go:141] libmachine: (ha-683480-m03) DBG | unable to find current IP address of domain ha-683480-m03 in network mk-ha-683480
	I0603 10:59:37.372616   25542 main.go:141] libmachine: (ha-683480-m03) DBG | I0603 10:59:37.372560   26612 retry.go:31] will retry after 733.157294ms: waiting for machine to come up
	I0603 10:59:38.106737   25542 main.go:141] libmachine: (ha-683480-m03) DBG | domain ha-683480-m03 has defined MAC address 52:54:00:b4:3e:89 in network mk-ha-683480
	I0603 10:59:38.107283   25542 main.go:141] libmachine: (ha-683480-m03) DBG | unable to find current IP address of domain ha-683480-m03 in network mk-ha-683480
	I0603 10:59:38.107308   25542 main.go:141] libmachine: (ha-683480-m03) DBG | I0603 10:59:38.107228   26612 retry.go:31] will retry after 996.093829ms: waiting for machine to come up
	I0603 10:59:39.104880   25542 main.go:141] libmachine: (ha-683480-m03) DBG | domain ha-683480-m03 has defined MAC address 52:54:00:b4:3e:89 in network mk-ha-683480
	I0603 10:59:39.105289   25542 main.go:141] libmachine: (ha-683480-m03) DBG | unable to find current IP address of domain ha-683480-m03 in network mk-ha-683480
	I0603 10:59:39.105319   25542 main.go:141] libmachine: (ha-683480-m03) DBG | I0603 10:59:39.105242   26612 retry.go:31] will retry after 1.256688018s: waiting for machine to come up
	I0603 10:59:40.363723   25542 main.go:141] libmachine: (ha-683480-m03) DBG | domain ha-683480-m03 has defined MAC address 52:54:00:b4:3e:89 in network mk-ha-683480
	I0603 10:59:40.364093   25542 main.go:141] libmachine: (ha-683480-m03) DBG | unable to find current IP address of domain ha-683480-m03 in network mk-ha-683480
	I0603 10:59:40.364122   25542 main.go:141] libmachine: (ha-683480-m03) DBG | I0603 10:59:40.364047   26612 retry.go:31] will retry after 1.306062946s: waiting for machine to come up
	I0603 10:59:41.672597   25542 main.go:141] libmachine: (ha-683480-m03) DBG | domain ha-683480-m03 has defined MAC address 52:54:00:b4:3e:89 in network mk-ha-683480
	I0603 10:59:41.673027   25542 main.go:141] libmachine: (ha-683480-m03) DBG | unable to find current IP address of domain ha-683480-m03 in network mk-ha-683480
	I0603 10:59:41.673048   25542 main.go:141] libmachine: (ha-683480-m03) DBG | I0603 10:59:41.672986   26612 retry.go:31] will retry after 1.417549296s: waiting for machine to come up
	I0603 10:59:43.092276   25542 main.go:141] libmachine: (ha-683480-m03) DBG | domain ha-683480-m03 has defined MAC address 52:54:00:b4:3e:89 in network mk-ha-683480
	I0603 10:59:43.092749   25542 main.go:141] libmachine: (ha-683480-m03) DBG | unable to find current IP address of domain ha-683480-m03 in network mk-ha-683480
	I0603 10:59:43.092770   25542 main.go:141] libmachine: (ha-683480-m03) DBG | I0603 10:59:43.092710   26612 retry.go:31] will retry after 1.859144814s: waiting for machine to come up
	I0603 10:59:44.952836   25542 main.go:141] libmachine: (ha-683480-m03) DBG | domain ha-683480-m03 has defined MAC address 52:54:00:b4:3e:89 in network mk-ha-683480
	I0603 10:59:44.953234   25542 main.go:141] libmachine: (ha-683480-m03) DBG | unable to find current IP address of domain ha-683480-m03 in network mk-ha-683480
	I0603 10:59:44.953292   25542 main.go:141] libmachine: (ha-683480-m03) DBG | I0603 10:59:44.953206   26612 retry.go:31] will retry after 2.82862903s: waiting for machine to come up
	I0603 10:59:47.785131   25542 main.go:141] libmachine: (ha-683480-m03) DBG | domain ha-683480-m03 has defined MAC address 52:54:00:b4:3e:89 in network mk-ha-683480
	I0603 10:59:47.785582   25542 main.go:141] libmachine: (ha-683480-m03) DBG | unable to find current IP address of domain ha-683480-m03 in network mk-ha-683480
	I0603 10:59:47.785609   25542 main.go:141] libmachine: (ha-683480-m03) DBG | I0603 10:59:47.785528   26612 retry.go:31] will retry after 2.808798994s: waiting for machine to come up
	I0603 10:59:50.596197   25542 main.go:141] libmachine: (ha-683480-m03) DBG | domain ha-683480-m03 has defined MAC address 52:54:00:b4:3e:89 in network mk-ha-683480
	I0603 10:59:50.596659   25542 main.go:141] libmachine: (ha-683480-m03) DBG | unable to find current IP address of domain ha-683480-m03 in network mk-ha-683480
	I0603 10:59:50.596679   25542 main.go:141] libmachine: (ha-683480-m03) DBG | I0603 10:59:50.596618   26612 retry.go:31] will retry after 5.066420706s: waiting for machine to come up
	I0603 10:59:55.665614   25542 main.go:141] libmachine: (ha-683480-m03) DBG | domain ha-683480-m03 has defined MAC address 52:54:00:b4:3e:89 in network mk-ha-683480
	I0603 10:59:55.666014   25542 main.go:141] libmachine: (ha-683480-m03) Found IP for machine: 192.168.39.131
	I0603 10:59:55.666043   25542 main.go:141] libmachine: (ha-683480-m03) Reserving static IP address...
	I0603 10:59:55.666058   25542 main.go:141] libmachine: (ha-683480-m03) DBG | domain ha-683480-m03 has current primary IP address 192.168.39.131 and MAC address 52:54:00:b4:3e:89 in network mk-ha-683480
	I0603 10:59:55.666974   25542 main.go:141] libmachine: (ha-683480-m03) DBG | unable to find host DHCP lease matching {name: "ha-683480-m03", mac: "52:54:00:b4:3e:89", ip: "192.168.39.131"} in network mk-ha-683480
	I0603 10:59:55.738251   25542 main.go:141] libmachine: (ha-683480-m03) Reserved static IP address: 192.168.39.131
	I0603 10:59:55.738282   25542 main.go:141] libmachine: (ha-683480-m03) Waiting for SSH to be available...
	I0603 10:59:55.738292   25542 main.go:141] libmachine: (ha-683480-m03) DBG | Getting to WaitForSSH function...
	I0603 10:59:55.740966   25542 main.go:141] libmachine: (ha-683480-m03) DBG | domain ha-683480-m03 has defined MAC address 52:54:00:b4:3e:89 in network mk-ha-683480
	I0603 10:59:55.741387   25542 main.go:141] libmachine: (ha-683480-m03) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:b4:3e:89", ip: ""} in network mk-ha-683480
	I0603 10:59:55.741413   25542 main.go:141] libmachine: (ha-683480-m03) DBG | unable to find defined IP address of network mk-ha-683480 interface with MAC address 52:54:00:b4:3e:89
	I0603 10:59:55.741608   25542 main.go:141] libmachine: (ha-683480-m03) DBG | Using SSH client type: external
	I0603 10:59:55.741640   25542 main.go:141] libmachine: (ha-683480-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/19008-7755/.minikube/machines/ha-683480-m03/id_rsa (-rw-------)
	I0603 10:59:55.741701   25542 main.go:141] libmachine: (ha-683480-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19008-7755/.minikube/machines/ha-683480-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0603 10:59:55.741723   25542 main.go:141] libmachine: (ha-683480-m03) DBG | About to run SSH command:
	I0603 10:59:55.741738   25542 main.go:141] libmachine: (ha-683480-m03) DBG | exit 0
	I0603 10:59:55.745088   25542 main.go:141] libmachine: (ha-683480-m03) DBG | SSH cmd err, output: exit status 255: 
	I0603 10:59:55.745119   25542 main.go:141] libmachine: (ha-683480-m03) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I0603 10:59:55.745135   25542 main.go:141] libmachine: (ha-683480-m03) DBG | command : exit 0
	I0603 10:59:55.745149   25542 main.go:141] libmachine: (ha-683480-m03) DBG | err     : exit status 255
	I0603 10:59:55.745176   25542 main.go:141] libmachine: (ha-683480-m03) DBG | output  : 
	I0603 10:59:58.745558   25542 main.go:141] libmachine: (ha-683480-m03) DBG | Getting to WaitForSSH function...
	I0603 10:59:58.747816   25542 main.go:141] libmachine: (ha-683480-m03) DBG | domain ha-683480-m03 has defined MAC address 52:54:00:b4:3e:89 in network mk-ha-683480
	I0603 10:59:58.748193   25542 main.go:141] libmachine: (ha-683480-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:3e:89", ip: ""} in network mk-ha-683480: {Iface:virbr1 ExpiryTime:2024-06-03 11:59:47 +0000 UTC Type:0 Mac:52:54:00:b4:3e:89 Iaid: IPaddr:192.168.39.131 Prefix:24 Hostname:ha-683480-m03 Clientid:01:52:54:00:b4:3e:89}
	I0603 10:59:58.748219   25542 main.go:141] libmachine: (ha-683480-m03) DBG | domain ha-683480-m03 has defined IP address 192.168.39.131 and MAC address 52:54:00:b4:3e:89 in network mk-ha-683480
	I0603 10:59:58.748352   25542 main.go:141] libmachine: (ha-683480-m03) DBG | Using SSH client type: external
	I0603 10:59:58.748371   25542 main.go:141] libmachine: (ha-683480-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/19008-7755/.minikube/machines/ha-683480-m03/id_rsa (-rw-------)
	I0603 10:59:58.748402   25542 main.go:141] libmachine: (ha-683480-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.131 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19008-7755/.minikube/machines/ha-683480-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0603 10:59:58.748414   25542 main.go:141] libmachine: (ha-683480-m03) DBG | About to run SSH command:
	I0603 10:59:58.748429   25542 main.go:141] libmachine: (ha-683480-m03) DBG | exit 0
	I0603 10:59:58.871308   25542 main.go:141] libmachine: (ha-683480-m03) DBG | SSH cmd err, output: <nil>: 
	I0603 10:59:58.871548   25542 main.go:141] libmachine: (ha-683480-m03) KVM machine creation complete!
	I0603 10:59:58.871914   25542 main.go:141] libmachine: (ha-683480-m03) Calling .GetConfigRaw
	I0603 10:59:58.872491   25542 main.go:141] libmachine: (ha-683480-m03) Calling .DriverName
	I0603 10:59:58.872654   25542 main.go:141] libmachine: (ha-683480-m03) Calling .DriverName
	I0603 10:59:58.872778   25542 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0603 10:59:58.872790   25542 main.go:141] libmachine: (ha-683480-m03) Calling .GetState
	I0603 10:59:58.873878   25542 main.go:141] libmachine: Detecting operating system of created instance...
	I0603 10:59:58.873893   25542 main.go:141] libmachine: Waiting for SSH to be available...
	I0603 10:59:58.873900   25542 main.go:141] libmachine: Getting to WaitForSSH function...
	I0603 10:59:58.873909   25542 main.go:141] libmachine: (ha-683480-m03) Calling .GetSSHHostname
	I0603 10:59:58.876164   25542 main.go:141] libmachine: (ha-683480-m03) DBG | domain ha-683480-m03 has defined MAC address 52:54:00:b4:3e:89 in network mk-ha-683480
	I0603 10:59:58.876567   25542 main.go:141] libmachine: (ha-683480-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:3e:89", ip: ""} in network mk-ha-683480: {Iface:virbr1 ExpiryTime:2024-06-03 11:59:47 +0000 UTC Type:0 Mac:52:54:00:b4:3e:89 Iaid: IPaddr:192.168.39.131 Prefix:24 Hostname:ha-683480-m03 Clientid:01:52:54:00:b4:3e:89}
	I0603 10:59:58.876593   25542 main.go:141] libmachine: (ha-683480-m03) DBG | domain ha-683480-m03 has defined IP address 192.168.39.131 and MAC address 52:54:00:b4:3e:89 in network mk-ha-683480
	I0603 10:59:58.876707   25542 main.go:141] libmachine: (ha-683480-m03) Calling .GetSSHPort
	I0603 10:59:58.876840   25542 main.go:141] libmachine: (ha-683480-m03) Calling .GetSSHKeyPath
	I0603 10:59:58.876955   25542 main.go:141] libmachine: (ha-683480-m03) Calling .GetSSHKeyPath
	I0603 10:59:58.877109   25542 main.go:141] libmachine: (ha-683480-m03) Calling .GetSSHUsername
	I0603 10:59:58.877293   25542 main.go:141] libmachine: Using SSH client type: native
	I0603 10:59:58.877530   25542 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.131 22 <nil> <nil>}
	I0603 10:59:58.877548   25542 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0603 10:59:58.978478   25542 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0603 10:59:58.978505   25542 main.go:141] libmachine: Detecting the provisioner...
	I0603 10:59:58.978515   25542 main.go:141] libmachine: (ha-683480-m03) Calling .GetSSHHostname
	I0603 10:59:58.981143   25542 main.go:141] libmachine: (ha-683480-m03) DBG | domain ha-683480-m03 has defined MAC address 52:54:00:b4:3e:89 in network mk-ha-683480
	I0603 10:59:58.981453   25542 main.go:141] libmachine: (ha-683480-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:3e:89", ip: ""} in network mk-ha-683480: {Iface:virbr1 ExpiryTime:2024-06-03 11:59:47 +0000 UTC Type:0 Mac:52:54:00:b4:3e:89 Iaid: IPaddr:192.168.39.131 Prefix:24 Hostname:ha-683480-m03 Clientid:01:52:54:00:b4:3e:89}
	I0603 10:59:58.981478   25542 main.go:141] libmachine: (ha-683480-m03) DBG | domain ha-683480-m03 has defined IP address 192.168.39.131 and MAC address 52:54:00:b4:3e:89 in network mk-ha-683480
	I0603 10:59:58.981604   25542 main.go:141] libmachine: (ha-683480-m03) Calling .GetSSHPort
	I0603 10:59:58.981783   25542 main.go:141] libmachine: (ha-683480-m03) Calling .GetSSHKeyPath
	I0603 10:59:58.981963   25542 main.go:141] libmachine: (ha-683480-m03) Calling .GetSSHKeyPath
	I0603 10:59:58.982113   25542 main.go:141] libmachine: (ha-683480-m03) Calling .GetSSHUsername
	I0603 10:59:58.982264   25542 main.go:141] libmachine: Using SSH client type: native
	I0603 10:59:58.982441   25542 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.131 22 <nil> <nil>}
	I0603 10:59:58.982454   25542 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0603 10:59:59.084016   25542 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0603 10:59:59.084065   25542 main.go:141] libmachine: found compatible host: buildroot
	I0603 10:59:59.084072   25542 main.go:141] libmachine: Provisioning with buildroot...
	I0603 10:59:59.084078   25542 main.go:141] libmachine: (ha-683480-m03) Calling .GetMachineName
	I0603 10:59:59.084325   25542 buildroot.go:166] provisioning hostname "ha-683480-m03"
	I0603 10:59:59.084352   25542 main.go:141] libmachine: (ha-683480-m03) Calling .GetMachineName
	I0603 10:59:59.084547   25542 main.go:141] libmachine: (ha-683480-m03) Calling .GetSSHHostname
	I0603 10:59:59.087209   25542 main.go:141] libmachine: (ha-683480-m03) DBG | domain ha-683480-m03 has defined MAC address 52:54:00:b4:3e:89 in network mk-ha-683480
	I0603 10:59:59.087572   25542 main.go:141] libmachine: (ha-683480-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:3e:89", ip: ""} in network mk-ha-683480: {Iface:virbr1 ExpiryTime:2024-06-03 11:59:47 +0000 UTC Type:0 Mac:52:54:00:b4:3e:89 Iaid: IPaddr:192.168.39.131 Prefix:24 Hostname:ha-683480-m03 Clientid:01:52:54:00:b4:3e:89}
	I0603 10:59:59.087598   25542 main.go:141] libmachine: (ha-683480-m03) DBG | domain ha-683480-m03 has defined IP address 192.168.39.131 and MAC address 52:54:00:b4:3e:89 in network mk-ha-683480
	I0603 10:59:59.087717   25542 main.go:141] libmachine: (ha-683480-m03) Calling .GetSSHPort
	I0603 10:59:59.087880   25542 main.go:141] libmachine: (ha-683480-m03) Calling .GetSSHKeyPath
	I0603 10:59:59.088037   25542 main.go:141] libmachine: (ha-683480-m03) Calling .GetSSHKeyPath
	I0603 10:59:59.088172   25542 main.go:141] libmachine: (ha-683480-m03) Calling .GetSSHUsername
	I0603 10:59:59.088313   25542 main.go:141] libmachine: Using SSH client type: native
	I0603 10:59:59.088464   25542 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.131 22 <nil> <nil>}
	I0603 10:59:59.088475   25542 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-683480-m03 && echo "ha-683480-m03" | sudo tee /etc/hostname
	I0603 10:59:59.207156   25542 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-683480-m03
	
	I0603 10:59:59.207191   25542 main.go:141] libmachine: (ha-683480-m03) Calling .GetSSHHostname
	I0603 10:59:59.209845   25542 main.go:141] libmachine: (ha-683480-m03) DBG | domain ha-683480-m03 has defined MAC address 52:54:00:b4:3e:89 in network mk-ha-683480
	I0603 10:59:59.210188   25542 main.go:141] libmachine: (ha-683480-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:3e:89", ip: ""} in network mk-ha-683480: {Iface:virbr1 ExpiryTime:2024-06-03 11:59:47 +0000 UTC Type:0 Mac:52:54:00:b4:3e:89 Iaid: IPaddr:192.168.39.131 Prefix:24 Hostname:ha-683480-m03 Clientid:01:52:54:00:b4:3e:89}
	I0603 10:59:59.210211   25542 main.go:141] libmachine: (ha-683480-m03) DBG | domain ha-683480-m03 has defined IP address 192.168.39.131 and MAC address 52:54:00:b4:3e:89 in network mk-ha-683480
	I0603 10:59:59.210324   25542 main.go:141] libmachine: (ha-683480-m03) Calling .GetSSHPort
	I0603 10:59:59.210508   25542 main.go:141] libmachine: (ha-683480-m03) Calling .GetSSHKeyPath
	I0603 10:59:59.210668   25542 main.go:141] libmachine: (ha-683480-m03) Calling .GetSSHKeyPath
	I0603 10:59:59.210837   25542 main.go:141] libmachine: (ha-683480-m03) Calling .GetSSHUsername
	I0603 10:59:59.211033   25542 main.go:141] libmachine: Using SSH client type: native
	I0603 10:59:59.211233   25542 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.131 22 <nil> <nil>}
	I0603 10:59:59.211257   25542 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-683480-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-683480-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-683480-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0603 10:59:59.324738   25542 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0603 10:59:59.324769   25542 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19008-7755/.minikube CaCertPath:/home/jenkins/minikube-integration/19008-7755/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19008-7755/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19008-7755/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19008-7755/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19008-7755/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19008-7755/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19008-7755/.minikube}
	I0603 10:59:59.324787   25542 buildroot.go:174] setting up certificates
	I0603 10:59:59.324796   25542 provision.go:84] configureAuth start
	I0603 10:59:59.324804   25542 main.go:141] libmachine: (ha-683480-m03) Calling .GetMachineName
	I0603 10:59:59.325081   25542 main.go:141] libmachine: (ha-683480-m03) Calling .GetIP
	I0603 10:59:59.327591   25542 main.go:141] libmachine: (ha-683480-m03) DBG | domain ha-683480-m03 has defined MAC address 52:54:00:b4:3e:89 in network mk-ha-683480
	I0603 10:59:59.327970   25542 main.go:141] libmachine: (ha-683480-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:3e:89", ip: ""} in network mk-ha-683480: {Iface:virbr1 ExpiryTime:2024-06-03 11:59:47 +0000 UTC Type:0 Mac:52:54:00:b4:3e:89 Iaid: IPaddr:192.168.39.131 Prefix:24 Hostname:ha-683480-m03 Clientid:01:52:54:00:b4:3e:89}
	I0603 10:59:59.327996   25542 main.go:141] libmachine: (ha-683480-m03) DBG | domain ha-683480-m03 has defined IP address 192.168.39.131 and MAC address 52:54:00:b4:3e:89 in network mk-ha-683480
	I0603 10:59:59.328103   25542 main.go:141] libmachine: (ha-683480-m03) Calling .GetSSHHostname
	I0603 10:59:59.330395   25542 main.go:141] libmachine: (ha-683480-m03) DBG | domain ha-683480-m03 has defined MAC address 52:54:00:b4:3e:89 in network mk-ha-683480
	I0603 10:59:59.330794   25542 main.go:141] libmachine: (ha-683480-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:3e:89", ip: ""} in network mk-ha-683480: {Iface:virbr1 ExpiryTime:2024-06-03 11:59:47 +0000 UTC Type:0 Mac:52:54:00:b4:3e:89 Iaid: IPaddr:192.168.39.131 Prefix:24 Hostname:ha-683480-m03 Clientid:01:52:54:00:b4:3e:89}
	I0603 10:59:59.330813   25542 main.go:141] libmachine: (ha-683480-m03) DBG | domain ha-683480-m03 has defined IP address 192.168.39.131 and MAC address 52:54:00:b4:3e:89 in network mk-ha-683480
	I0603 10:59:59.330950   25542 provision.go:143] copyHostCerts
	I0603 10:59:59.330982   25542 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19008-7755/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19008-7755/.minikube/key.pem
	I0603 10:59:59.331013   25542 exec_runner.go:144] found /home/jenkins/minikube-integration/19008-7755/.minikube/key.pem, removing ...
	I0603 10:59:59.331022   25542 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19008-7755/.minikube/key.pem
	I0603 10:59:59.331112   25542 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19008-7755/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19008-7755/.minikube/key.pem (1679 bytes)
	I0603 10:59:59.331193   25542 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19008-7755/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19008-7755/.minikube/ca.pem
	I0603 10:59:59.331212   25542 exec_runner.go:144] found /home/jenkins/minikube-integration/19008-7755/.minikube/ca.pem, removing ...
	I0603 10:59:59.331219   25542 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19008-7755/.minikube/ca.pem
	I0603 10:59:59.331243   25542 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19008-7755/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19008-7755/.minikube/ca.pem (1082 bytes)
	I0603 10:59:59.331285   25542 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19008-7755/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19008-7755/.minikube/cert.pem
	I0603 10:59:59.331306   25542 exec_runner.go:144] found /home/jenkins/minikube-integration/19008-7755/.minikube/cert.pem, removing ...
	I0603 10:59:59.331312   25542 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19008-7755/.minikube/cert.pem
	I0603 10:59:59.331332   25542 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19008-7755/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19008-7755/.minikube/cert.pem (1123 bytes)
	I0603 10:59:59.331379   25542 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19008-7755/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19008-7755/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19008-7755/.minikube/certs/ca-key.pem org=jenkins.ha-683480-m03 san=[127.0.0.1 192.168.39.131 ha-683480-m03 localhost minikube]
	I0603 10:59:59.723359   25542 provision.go:177] copyRemoteCerts
	I0603 10:59:59.723413   25542 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0603 10:59:59.723433   25542 main.go:141] libmachine: (ha-683480-m03) Calling .GetSSHHostname
	I0603 10:59:59.725988   25542 main.go:141] libmachine: (ha-683480-m03) DBG | domain ha-683480-m03 has defined MAC address 52:54:00:b4:3e:89 in network mk-ha-683480
	I0603 10:59:59.726378   25542 main.go:141] libmachine: (ha-683480-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:3e:89", ip: ""} in network mk-ha-683480: {Iface:virbr1 ExpiryTime:2024-06-03 11:59:47 +0000 UTC Type:0 Mac:52:54:00:b4:3e:89 Iaid: IPaddr:192.168.39.131 Prefix:24 Hostname:ha-683480-m03 Clientid:01:52:54:00:b4:3e:89}
	I0603 10:59:59.726403   25542 main.go:141] libmachine: (ha-683480-m03) DBG | domain ha-683480-m03 has defined IP address 192.168.39.131 and MAC address 52:54:00:b4:3e:89 in network mk-ha-683480
	I0603 10:59:59.726576   25542 main.go:141] libmachine: (ha-683480-m03) Calling .GetSSHPort
	I0603 10:59:59.726745   25542 main.go:141] libmachine: (ha-683480-m03) Calling .GetSSHKeyPath
	I0603 10:59:59.726907   25542 main.go:141] libmachine: (ha-683480-m03) Calling .GetSSHUsername
	I0603 10:59:59.727015   25542 sshutil.go:53] new ssh client: &{IP:192.168.39.131 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19008-7755/.minikube/machines/ha-683480-m03/id_rsa Username:docker}
	I0603 10:59:59.809634   25542 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19008-7755/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0603 10:59:59.809715   25542 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0603 10:59:59.836168   25542 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19008-7755/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0603 10:59:59.836228   25542 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0603 10:59:59.861712   25542 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19008-7755/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0603 10:59:59.861786   25542 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0603 10:59:59.885637   25542 provision.go:87] duration metric: took 560.829366ms to configureAuth
	I0603 10:59:59.885664   25542 buildroot.go:189] setting minikube options for container-runtime
	I0603 10:59:59.885915   25542 config.go:182] Loaded profile config "ha-683480": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0603 10:59:59.886004   25542 main.go:141] libmachine: (ha-683480-m03) Calling .GetSSHHostname
	I0603 10:59:59.888576   25542 main.go:141] libmachine: (ha-683480-m03) DBG | domain ha-683480-m03 has defined MAC address 52:54:00:b4:3e:89 in network mk-ha-683480
	I0603 10:59:59.888923   25542 main.go:141] libmachine: (ha-683480-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:3e:89", ip: ""} in network mk-ha-683480: {Iface:virbr1 ExpiryTime:2024-06-03 11:59:47 +0000 UTC Type:0 Mac:52:54:00:b4:3e:89 Iaid: IPaddr:192.168.39.131 Prefix:24 Hostname:ha-683480-m03 Clientid:01:52:54:00:b4:3e:89}
	I0603 10:59:59.888955   25542 main.go:141] libmachine: (ha-683480-m03) DBG | domain ha-683480-m03 has defined IP address 192.168.39.131 and MAC address 52:54:00:b4:3e:89 in network mk-ha-683480
	I0603 10:59:59.889067   25542 main.go:141] libmachine: (ha-683480-m03) Calling .GetSSHPort
	I0603 10:59:59.889274   25542 main.go:141] libmachine: (ha-683480-m03) Calling .GetSSHKeyPath
	I0603 10:59:59.889408   25542 main.go:141] libmachine: (ha-683480-m03) Calling .GetSSHKeyPath
	I0603 10:59:59.889572   25542 main.go:141] libmachine: (ha-683480-m03) Calling .GetSSHUsername
	I0603 10:59:59.889724   25542 main.go:141] libmachine: Using SSH client type: native
	I0603 10:59:59.889869   25542 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.131 22 <nil> <nil>}
	I0603 10:59:59.889883   25542 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0603 11:00:00.154556   25542 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0603 11:00:00.154581   25542 main.go:141] libmachine: Checking connection to Docker...
	I0603 11:00:00.154591   25542 main.go:141] libmachine: (ha-683480-m03) Calling .GetURL
	I0603 11:00:00.156021   25542 main.go:141] libmachine: (ha-683480-m03) DBG | Using libvirt version 6000000
	I0603 11:00:00.158619   25542 main.go:141] libmachine: (ha-683480-m03) DBG | domain ha-683480-m03 has defined MAC address 52:54:00:b4:3e:89 in network mk-ha-683480
	I0603 11:00:00.158977   25542 main.go:141] libmachine: (ha-683480-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:3e:89", ip: ""} in network mk-ha-683480: {Iface:virbr1 ExpiryTime:2024-06-03 11:59:47 +0000 UTC Type:0 Mac:52:54:00:b4:3e:89 Iaid: IPaddr:192.168.39.131 Prefix:24 Hostname:ha-683480-m03 Clientid:01:52:54:00:b4:3e:89}
	I0603 11:00:00.158999   25542 main.go:141] libmachine: (ha-683480-m03) DBG | domain ha-683480-m03 has defined IP address 192.168.39.131 and MAC address 52:54:00:b4:3e:89 in network mk-ha-683480
	I0603 11:00:00.159212   25542 main.go:141] libmachine: Docker is up and running!
	I0603 11:00:00.159232   25542 main.go:141] libmachine: Reticulating splines...
	I0603 11:00:00.159240   25542 client.go:171] duration metric: took 26.433726692s to LocalClient.Create
	I0603 11:00:00.159264   25542 start.go:167] duration metric: took 26.433791309s to libmachine.API.Create "ha-683480"
	I0603 11:00:00.159275   25542 start.go:293] postStartSetup for "ha-683480-m03" (driver="kvm2")
	I0603 11:00:00.159288   25542 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0603 11:00:00.159309   25542 main.go:141] libmachine: (ha-683480-m03) Calling .DriverName
	I0603 11:00:00.159544   25542 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0603 11:00:00.159573   25542 main.go:141] libmachine: (ha-683480-m03) Calling .GetSSHHostname
	I0603 11:00:00.161457   25542 main.go:141] libmachine: (ha-683480-m03) DBG | domain ha-683480-m03 has defined MAC address 52:54:00:b4:3e:89 in network mk-ha-683480
	I0603 11:00:00.161799   25542 main.go:141] libmachine: (ha-683480-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:3e:89", ip: ""} in network mk-ha-683480: {Iface:virbr1 ExpiryTime:2024-06-03 11:59:47 +0000 UTC Type:0 Mac:52:54:00:b4:3e:89 Iaid: IPaddr:192.168.39.131 Prefix:24 Hostname:ha-683480-m03 Clientid:01:52:54:00:b4:3e:89}
	I0603 11:00:00.161827   25542 main.go:141] libmachine: (ha-683480-m03) DBG | domain ha-683480-m03 has defined IP address 192.168.39.131 and MAC address 52:54:00:b4:3e:89 in network mk-ha-683480
	I0603 11:00:00.161923   25542 main.go:141] libmachine: (ha-683480-m03) Calling .GetSSHPort
	I0603 11:00:00.162096   25542 main.go:141] libmachine: (ha-683480-m03) Calling .GetSSHKeyPath
	I0603 11:00:00.162219   25542 main.go:141] libmachine: (ha-683480-m03) Calling .GetSSHUsername
	I0603 11:00:00.162362   25542 sshutil.go:53] new ssh client: &{IP:192.168.39.131 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19008-7755/.minikube/machines/ha-683480-m03/id_rsa Username:docker}
	I0603 11:00:00.241253   25542 ssh_runner.go:195] Run: cat /etc/os-release
	I0603 11:00:00.245306   25542 info.go:137] Remote host: Buildroot 2023.02.9
	I0603 11:00:00.245333   25542 filesync.go:126] Scanning /home/jenkins/minikube-integration/19008-7755/.minikube/addons for local assets ...
	I0603 11:00:00.245408   25542 filesync.go:126] Scanning /home/jenkins/minikube-integration/19008-7755/.minikube/files for local assets ...
	I0603 11:00:00.245513   25542 filesync.go:149] local asset: /home/jenkins/minikube-integration/19008-7755/.minikube/files/etc/ssl/certs/150282.pem -> 150282.pem in /etc/ssl/certs
	I0603 11:00:00.245528   25542 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19008-7755/.minikube/files/etc/ssl/certs/150282.pem -> /etc/ssl/certs/150282.pem
	I0603 11:00:00.245610   25542 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0603 11:00:00.254388   25542 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/files/etc/ssl/certs/150282.pem --> /etc/ssl/certs/150282.pem (1708 bytes)
	I0603 11:00:00.278002   25542 start.go:296] duration metric: took 118.713832ms for postStartSetup
	I0603 11:00:00.278046   25542 main.go:141] libmachine: (ha-683480-m03) Calling .GetConfigRaw
	I0603 11:00:00.278576   25542 main.go:141] libmachine: (ha-683480-m03) Calling .GetIP
	I0603 11:00:00.281105   25542 main.go:141] libmachine: (ha-683480-m03) DBG | domain ha-683480-m03 has defined MAC address 52:54:00:b4:3e:89 in network mk-ha-683480
	I0603 11:00:00.281438   25542 main.go:141] libmachine: (ha-683480-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:3e:89", ip: ""} in network mk-ha-683480: {Iface:virbr1 ExpiryTime:2024-06-03 11:59:47 +0000 UTC Type:0 Mac:52:54:00:b4:3e:89 Iaid: IPaddr:192.168.39.131 Prefix:24 Hostname:ha-683480-m03 Clientid:01:52:54:00:b4:3e:89}
	I0603 11:00:00.281475   25542 main.go:141] libmachine: (ha-683480-m03) DBG | domain ha-683480-m03 has defined IP address 192.168.39.131 and MAC address 52:54:00:b4:3e:89 in network mk-ha-683480
	I0603 11:00:00.281700   25542 profile.go:143] Saving config to /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/ha-683480/config.json ...
	I0603 11:00:00.281881   25542 start.go:128] duration metric: took 26.574409175s to createHost
	I0603 11:00:00.281903   25542 main.go:141] libmachine: (ha-683480-m03) Calling .GetSSHHostname
	I0603 11:00:00.284180   25542 main.go:141] libmachine: (ha-683480-m03) DBG | domain ha-683480-m03 has defined MAC address 52:54:00:b4:3e:89 in network mk-ha-683480
	I0603 11:00:00.284481   25542 main.go:141] libmachine: (ha-683480-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:3e:89", ip: ""} in network mk-ha-683480: {Iface:virbr1 ExpiryTime:2024-06-03 11:59:47 +0000 UTC Type:0 Mac:52:54:00:b4:3e:89 Iaid: IPaddr:192.168.39.131 Prefix:24 Hostname:ha-683480-m03 Clientid:01:52:54:00:b4:3e:89}
	I0603 11:00:00.284502   25542 main.go:141] libmachine: (ha-683480-m03) DBG | domain ha-683480-m03 has defined IP address 192.168.39.131 and MAC address 52:54:00:b4:3e:89 in network mk-ha-683480
	I0603 11:00:00.284649   25542 main.go:141] libmachine: (ha-683480-m03) Calling .GetSSHPort
	I0603 11:00:00.284807   25542 main.go:141] libmachine: (ha-683480-m03) Calling .GetSSHKeyPath
	I0603 11:00:00.284967   25542 main.go:141] libmachine: (ha-683480-m03) Calling .GetSSHKeyPath
	I0603 11:00:00.285136   25542 main.go:141] libmachine: (ha-683480-m03) Calling .GetSSHUsername
	I0603 11:00:00.285287   25542 main.go:141] libmachine: Using SSH client type: native
	I0603 11:00:00.285449   25542 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.131 22 <nil> <nil>}
	I0603 11:00:00.285459   25542 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0603 11:00:00.387867   25542 main.go:141] libmachine: SSH cmd err, output: <nil>: 1717412400.367020560
	
	I0603 11:00:00.387894   25542 fix.go:216] guest clock: 1717412400.367020560
	I0603 11:00:00.387901   25542 fix.go:229] Guest: 2024-06-03 11:00:00.36702056 +0000 UTC Remote: 2024-06-03 11:00:00.281892535 +0000 UTC m=+225.848531606 (delta=85.128025ms)
	I0603 11:00:00.387917   25542 fix.go:200] guest clock delta is within tolerance: 85.128025ms
	I0603 11:00:00.387923   25542 start.go:83] releasing machines lock for "ha-683480-m03", held for 26.680546435s
	I0603 11:00:00.387947   25542 main.go:141] libmachine: (ha-683480-m03) Calling .DriverName
	I0603 11:00:00.388257   25542 main.go:141] libmachine: (ha-683480-m03) Calling .GetIP
	I0603 11:00:00.390864   25542 main.go:141] libmachine: (ha-683480-m03) DBG | domain ha-683480-m03 has defined MAC address 52:54:00:b4:3e:89 in network mk-ha-683480
	I0603 11:00:00.391267   25542 main.go:141] libmachine: (ha-683480-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:3e:89", ip: ""} in network mk-ha-683480: {Iface:virbr1 ExpiryTime:2024-06-03 11:59:47 +0000 UTC Type:0 Mac:52:54:00:b4:3e:89 Iaid: IPaddr:192.168.39.131 Prefix:24 Hostname:ha-683480-m03 Clientid:01:52:54:00:b4:3e:89}
	I0603 11:00:00.391302   25542 main.go:141] libmachine: (ha-683480-m03) DBG | domain ha-683480-m03 has defined IP address 192.168.39.131 and MAC address 52:54:00:b4:3e:89 in network mk-ha-683480
	I0603 11:00:00.393497   25542 out.go:177] * Found network options:
	I0603 11:00:00.394769   25542 out.go:177]   - NO_PROXY=192.168.39.116,192.168.39.127
	W0603 11:00:00.395992   25542 proxy.go:119] fail to check proxy env: Error ip not in block
	W0603 11:00:00.396013   25542 proxy.go:119] fail to check proxy env: Error ip not in block
	I0603 11:00:00.396024   25542 main.go:141] libmachine: (ha-683480-m03) Calling .DriverName
	I0603 11:00:00.396473   25542 main.go:141] libmachine: (ha-683480-m03) Calling .DriverName
	I0603 11:00:00.396641   25542 main.go:141] libmachine: (ha-683480-m03) Calling .DriverName
	I0603 11:00:00.396727   25542 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0603 11:00:00.396773   25542 main.go:141] libmachine: (ha-683480-m03) Calling .GetSSHHostname
	W0603 11:00:00.396844   25542 proxy.go:119] fail to check proxy env: Error ip not in block
	W0603 11:00:00.396874   25542 proxy.go:119] fail to check proxy env: Error ip not in block
	I0603 11:00:00.396938   25542 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0603 11:00:00.396970   25542 main.go:141] libmachine: (ha-683480-m03) Calling .GetSSHHostname
	I0603 11:00:00.399626   25542 main.go:141] libmachine: (ha-683480-m03) DBG | domain ha-683480-m03 has defined MAC address 52:54:00:b4:3e:89 in network mk-ha-683480
	I0603 11:00:00.399862   25542 main.go:141] libmachine: (ha-683480-m03) DBG | domain ha-683480-m03 has defined MAC address 52:54:00:b4:3e:89 in network mk-ha-683480
	I0603 11:00:00.400050   25542 main.go:141] libmachine: (ha-683480-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:3e:89", ip: ""} in network mk-ha-683480: {Iface:virbr1 ExpiryTime:2024-06-03 11:59:47 +0000 UTC Type:0 Mac:52:54:00:b4:3e:89 Iaid: IPaddr:192.168.39.131 Prefix:24 Hostname:ha-683480-m03 Clientid:01:52:54:00:b4:3e:89}
	I0603 11:00:00.400103   25542 main.go:141] libmachine: (ha-683480-m03) DBG | domain ha-683480-m03 has defined IP address 192.168.39.131 and MAC address 52:54:00:b4:3e:89 in network mk-ha-683480
	I0603 11:00:00.400203   25542 main.go:141] libmachine: (ha-683480-m03) Calling .GetSSHPort
	I0603 11:00:00.400283   25542 main.go:141] libmachine: (ha-683480-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:3e:89", ip: ""} in network mk-ha-683480: {Iface:virbr1 ExpiryTime:2024-06-03 11:59:47 +0000 UTC Type:0 Mac:52:54:00:b4:3e:89 Iaid: IPaddr:192.168.39.131 Prefix:24 Hostname:ha-683480-m03 Clientid:01:52:54:00:b4:3e:89}
	I0603 11:00:00.400317   25542 main.go:141] libmachine: (ha-683480-m03) DBG | domain ha-683480-m03 has defined IP address 192.168.39.131 and MAC address 52:54:00:b4:3e:89 in network mk-ha-683480
	I0603 11:00:00.400404   25542 main.go:141] libmachine: (ha-683480-m03) Calling .GetSSHKeyPath
	I0603 11:00:00.400488   25542 main.go:141] libmachine: (ha-683480-m03) Calling .GetSSHPort
	I0603 11:00:00.400566   25542 main.go:141] libmachine: (ha-683480-m03) Calling .GetSSHUsername
	I0603 11:00:00.400654   25542 main.go:141] libmachine: (ha-683480-m03) Calling .GetSSHKeyPath
	I0603 11:00:00.400720   25542 sshutil.go:53] new ssh client: &{IP:192.168.39.131 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19008-7755/.minikube/machines/ha-683480-m03/id_rsa Username:docker}
	I0603 11:00:00.400798   25542 main.go:141] libmachine: (ha-683480-m03) Calling .GetSSHUsername
	I0603 11:00:00.400930   25542 sshutil.go:53] new ssh client: &{IP:192.168.39.131 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19008-7755/.minikube/machines/ha-683480-m03/id_rsa Username:docker}
	I0603 11:00:00.634050   25542 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0603 11:00:00.640544   25542 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0603 11:00:00.640594   25542 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0603 11:00:00.660214   25542 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0603 11:00:00.660234   25542 start.go:494] detecting cgroup driver to use...
	I0603 11:00:00.660291   25542 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0603 11:00:00.679677   25542 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0603 11:00:00.694096   25542 docker.go:217] disabling cri-docker service (if available) ...
	I0603 11:00:00.694140   25542 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0603 11:00:00.708912   25542 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0603 11:00:00.723483   25542 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0603 11:00:00.853589   25542 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0603 11:00:01.030892   25542 docker.go:233] disabling docker service ...
	I0603 11:00:01.030950   25542 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0603 11:00:01.048354   25542 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0603 11:00:01.063783   25542 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0603 11:00:01.201067   25542 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0603 11:00:01.331372   25542 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0603 11:00:01.346784   25542 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0603 11:00:01.367080   25542 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0603 11:00:01.367154   25542 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 11:00:01.379422   25542 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0603 11:00:01.379477   25542 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 11:00:01.390936   25542 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 11:00:01.402407   25542 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 11:00:01.415123   25542 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0603 11:00:01.426922   25542 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 11:00:01.438527   25542 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 11:00:01.457041   25542 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 11:00:01.467851   25542 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0603 11:00:01.478276   25542 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0603 11:00:01.478340   25542 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0603 11:00:01.491724   25542 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0603 11:00:01.503268   25542 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0603 11:00:01.627729   25542 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0603 11:00:01.776425   25542 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0603 11:00:01.776507   25542 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0603 11:00:01.781948   25542 start.go:562] Will wait 60s for crictl version
	I0603 11:00:01.782020   25542 ssh_runner.go:195] Run: which crictl
	I0603 11:00:01.786363   25542 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0603 11:00:01.833252   25542 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0603 11:00:01.833321   25542 ssh_runner.go:195] Run: crio --version
	I0603 11:00:01.865004   25542 ssh_runner.go:195] Run: crio --version
	I0603 11:00:01.896736   25542 out.go:177] * Preparing Kubernetes v1.30.1 on CRI-O 1.29.1 ...
	I0603 11:00:01.898152   25542 out.go:177]   - env NO_PROXY=192.168.39.116
	I0603 11:00:01.899339   25542 out.go:177]   - env NO_PROXY=192.168.39.116,192.168.39.127
	I0603 11:00:01.900492   25542 main.go:141] libmachine: (ha-683480-m03) Calling .GetIP
	I0603 11:00:01.903408   25542 main.go:141] libmachine: (ha-683480-m03) DBG | domain ha-683480-m03 has defined MAC address 52:54:00:b4:3e:89 in network mk-ha-683480
	I0603 11:00:01.903798   25542 main.go:141] libmachine: (ha-683480-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:3e:89", ip: ""} in network mk-ha-683480: {Iface:virbr1 ExpiryTime:2024-06-03 11:59:47 +0000 UTC Type:0 Mac:52:54:00:b4:3e:89 Iaid: IPaddr:192.168.39.131 Prefix:24 Hostname:ha-683480-m03 Clientid:01:52:54:00:b4:3e:89}
	I0603 11:00:01.903825   25542 main.go:141] libmachine: (ha-683480-m03) DBG | domain ha-683480-m03 has defined IP address 192.168.39.131 and MAC address 52:54:00:b4:3e:89 in network mk-ha-683480
	I0603 11:00:01.904054   25542 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0603 11:00:01.908582   25542 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0603 11:00:01.922194   25542 mustload.go:65] Loading cluster: ha-683480
	I0603 11:00:01.922447   25542 config.go:182] Loaded profile config "ha-683480": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0603 11:00:01.922753   25542 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 11:00:01.922801   25542 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 11:00:01.938001   25542 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35231
	I0603 11:00:01.938500   25542 main.go:141] libmachine: () Calling .GetVersion
	I0603 11:00:01.939076   25542 main.go:141] libmachine: Using API Version  1
	I0603 11:00:01.939106   25542 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 11:00:01.939464   25542 main.go:141] libmachine: () Calling .GetMachineName
	I0603 11:00:01.939695   25542 main.go:141] libmachine: (ha-683480) Calling .GetState
	I0603 11:00:01.941861   25542 host.go:66] Checking if "ha-683480" exists ...
	I0603 11:00:01.942174   25542 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 11:00:01.942211   25542 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 11:00:01.956848   25542 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46605
	I0603 11:00:01.957291   25542 main.go:141] libmachine: () Calling .GetVersion
	I0603 11:00:01.957789   25542 main.go:141] libmachine: Using API Version  1
	I0603 11:00:01.957813   25542 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 11:00:01.958115   25542 main.go:141] libmachine: () Calling .GetMachineName
	I0603 11:00:01.958351   25542 main.go:141] libmachine: (ha-683480) Calling .DriverName
	I0603 11:00:01.958509   25542 certs.go:68] Setting up /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/ha-683480 for IP: 192.168.39.131
	I0603 11:00:01.958522   25542 certs.go:194] generating shared ca certs ...
	I0603 11:00:01.958539   25542 certs.go:226] acquiring lock for ca certs: {Name:mk50092df3bdecbf959926adc377ee06b75aacd9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 11:00:01.958703   25542 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19008-7755/.minikube/ca.key
	I0603 11:00:01.958756   25542 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19008-7755/.minikube/proxy-client-ca.key
	I0603 11:00:01.958769   25542 certs.go:256] generating profile certs ...
	I0603 11:00:01.958866   25542 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/ha-683480/client.key
	I0603 11:00:01.958894   25542 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/ha-683480/apiserver.key.8d0bf0ca
	I0603 11:00:01.958911   25542 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/ha-683480/apiserver.crt.8d0bf0ca with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.116 192.168.39.127 192.168.39.131 192.168.39.254]
	I0603 11:00:02.105324   25542 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/ha-683480/apiserver.crt.8d0bf0ca ...
	I0603 11:00:02.105364   25542 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/ha-683480/apiserver.crt.8d0bf0ca: {Name:mk778848a80dabf777f38206c994e23913ed3dc5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 11:00:02.105540   25542 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/ha-683480/apiserver.key.8d0bf0ca ...
	I0603 11:00:02.105558   25542 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/ha-683480/apiserver.key.8d0bf0ca: {Name:mkb9d2a175e2da763483deea8d48749d46669645 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 11:00:02.105651   25542 certs.go:381] copying /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/ha-683480/apiserver.crt.8d0bf0ca -> /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/ha-683480/apiserver.crt
	I0603 11:00:02.105801   25542 certs.go:385] copying /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/ha-683480/apiserver.key.8d0bf0ca -> /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/ha-683480/apiserver.key
	I0603 11:00:02.105969   25542 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/ha-683480/proxy-client.key
	I0603 11:00:02.105992   25542 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19008-7755/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0603 11:00:02.106012   25542 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19008-7755/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0603 11:00:02.106028   25542 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19008-7755/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0603 11:00:02.106043   25542 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19008-7755/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0603 11:00:02.106057   25542 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/ha-683480/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0603 11:00:02.106074   25542 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/ha-683480/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0603 11:00:02.106088   25542 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/ha-683480/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0603 11:00:02.106102   25542 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/ha-683480/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0603 11:00:02.106165   25542 certs.go:484] found cert: /home/jenkins/minikube-integration/19008-7755/.minikube/certs/15028.pem (1338 bytes)
	W0603 11:00:02.106200   25542 certs.go:480] ignoring /home/jenkins/minikube-integration/19008-7755/.minikube/certs/15028_empty.pem, impossibly tiny 0 bytes
	I0603 11:00:02.106209   25542 certs.go:484] found cert: /home/jenkins/minikube-integration/19008-7755/.minikube/certs/ca-key.pem (1679 bytes)
	I0603 11:00:02.106229   25542 certs.go:484] found cert: /home/jenkins/minikube-integration/19008-7755/.minikube/certs/ca.pem (1082 bytes)
	I0603 11:00:02.106250   25542 certs.go:484] found cert: /home/jenkins/minikube-integration/19008-7755/.minikube/certs/cert.pem (1123 bytes)
	I0603 11:00:02.106270   25542 certs.go:484] found cert: /home/jenkins/minikube-integration/19008-7755/.minikube/certs/key.pem (1679 bytes)
	I0603 11:00:02.106303   25542 certs.go:484] found cert: /home/jenkins/minikube-integration/19008-7755/.minikube/files/etc/ssl/certs/150282.pem (1708 bytes)
	I0603 11:00:02.106328   25542 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19008-7755/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0603 11:00:02.106342   25542 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19008-7755/.minikube/certs/15028.pem -> /usr/share/ca-certificates/15028.pem
	I0603 11:00:02.106358   25542 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19008-7755/.minikube/files/etc/ssl/certs/150282.pem -> /usr/share/ca-certificates/150282.pem
	I0603 11:00:02.106389   25542 main.go:141] libmachine: (ha-683480) Calling .GetSSHHostname
	I0603 11:00:02.109903   25542 main.go:141] libmachine: (ha-683480) DBG | domain ha-683480 has defined MAC address 52:54:00:e5:3f:6a in network mk-ha-683480
	I0603 11:00:02.110348   25542 main.go:141] libmachine: (ha-683480) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:3f:6a", ip: ""} in network mk-ha-683480: {Iface:virbr1 ExpiryTime:2024-06-03 11:56:28 +0000 UTC Type:0 Mac:52:54:00:e5:3f:6a Iaid: IPaddr:192.168.39.116 Prefix:24 Hostname:ha-683480 Clientid:01:52:54:00:e5:3f:6a}
	I0603 11:00:02.110374   25542 main.go:141] libmachine: (ha-683480) DBG | domain ha-683480 has defined IP address 192.168.39.116 and MAC address 52:54:00:e5:3f:6a in network mk-ha-683480
	I0603 11:00:02.110585   25542 main.go:141] libmachine: (ha-683480) Calling .GetSSHPort
	I0603 11:00:02.110835   25542 main.go:141] libmachine: (ha-683480) Calling .GetSSHKeyPath
	I0603 11:00:02.111090   25542 main.go:141] libmachine: (ha-683480) Calling .GetSSHUsername
	I0603 11:00:02.111261   25542 sshutil.go:53] new ssh client: &{IP:192.168.39.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19008-7755/.minikube/machines/ha-683480/id_rsa Username:docker}
	I0603 11:00:02.183533   25542 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.pub
	I0603 11:00:02.188879   25542 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0603 11:00:02.203833   25542 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.key
	I0603 11:00:02.208388   25542 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I0603 11:00:02.222730   25542 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.crt
	I0603 11:00:02.227497   25542 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0603 11:00:02.239251   25542 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.key
	I0603 11:00:02.244412   25542 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0603 11:00:02.255579   25542 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.crt
	I0603 11:00:02.260003   25542 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0603 11:00:02.272635   25542 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.key
	I0603 11:00:02.277990   25542 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I0603 11:00:02.290408   25542 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0603 11:00:02.318961   25542 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0603 11:00:02.346126   25542 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0603 11:00:02.372570   25542 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0603 11:00:02.401050   25542 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/ha-683480/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I0603 11:00:02.429414   25542 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/ha-683480/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0603 11:00:02.456299   25542 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/ha-683480/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0603 11:00:02.482602   25542 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/ha-683480/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0603 11:00:02.510741   25542 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0603 11:00:02.537472   25542 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/certs/15028.pem --> /usr/share/ca-certificates/15028.pem (1338 bytes)
	I0603 11:00:02.564419   25542 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/files/etc/ssl/certs/150282.pem --> /usr/share/ca-certificates/150282.pem (1708 bytes)
	I0603 11:00:02.591284   25542 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0603 11:00:02.610135   25542 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I0603 11:00:02.629305   25542 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0603 11:00:02.647719   25542 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0603 11:00:02.665725   25542 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0603 11:00:02.685001   25542 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I0603 11:00:02.704175   25542 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0603 11:00:02.722580   25542 ssh_runner.go:195] Run: openssl version
	I0603 11:00:02.729329   25542 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/150282.pem && ln -fs /usr/share/ca-certificates/150282.pem /etc/ssl/certs/150282.pem"
	I0603 11:00:02.742109   25542 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/150282.pem
	I0603 11:00:02.747399   25542 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jun  3 10:51 /usr/share/ca-certificates/150282.pem
	I0603 11:00:02.747472   25542 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/150282.pem
	I0603 11:00:02.754031   25542 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/150282.pem /etc/ssl/certs/3ec20f2e.0"
	I0603 11:00:02.767182   25542 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0603 11:00:02.781295   25542 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0603 11:00:02.787458   25542 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jun  3 10:39 /usr/share/ca-certificates/minikubeCA.pem
	I0603 11:00:02.787525   25542 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0603 11:00:02.794411   25542 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0603 11:00:02.809009   25542 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15028.pem && ln -fs /usr/share/ca-certificates/15028.pem /etc/ssl/certs/15028.pem"
	I0603 11:00:02.822291   25542 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15028.pem
	I0603 11:00:02.827800   25542 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jun  3 10:51 /usr/share/ca-certificates/15028.pem
	I0603 11:00:02.827869   25542 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15028.pem
	I0603 11:00:02.835606   25542 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/15028.pem /etc/ssl/certs/51391683.0"
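	(Editor's note: the log above installs the extra CA certificates under /usr/share/ca-certificates and then symlinks each one into /etc/ssl/certs under its OpenSSL subject hash, e.g. b5213941.0, which is the value produced by the `openssl x509 -hash -noout` calls. A minimal Go sketch of that idea is below; it shells out to the same openssl invocation. The helper name and the hard-coded path are illustrative only, not minikube's actual code.)

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkBySubjectHash is a hypothetical helper: it asks openssl for the
// certificate's subject hash and creates /etc/ssl/certs/<hash>.0 pointing
// at the installed PEM, mirroring the logged shell commands.
func linkBySubjectHash(pemPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return fmt.Errorf("hashing %s: %w", pemPath, err)
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	if _, err := os.Lstat(link); err == nil {
		return nil // already linked
	}
	return os.Symlink(pemPath, link)
}

func main() {
	if err := linkBySubjectHash("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
```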
	I0603 11:00:02.849870   25542 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0603 11:00:02.854793   25542 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0603 11:00:02.854844   25542 kubeadm.go:928] updating node {m03 192.168.39.131 8443 v1.30.1 crio true true} ...
	I0603 11:00:02.854935   25542 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-683480-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.131
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.1 ClusterName:ha-683480 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0603 11:00:02.854971   25542 kube-vip.go:115] generating kube-vip config ...
	I0603 11:00:02.855009   25542 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0603 11:00:02.876621   25542 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0603 11:00:02.876697   25542 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
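	(Editor's note: kube-vip.go above renders a static-pod manifest from the profile's VIP 192.168.39.254, API port 8443 and interface eth0 before it is written to /etc/kubernetes/manifests/kube-vip.yaml. Below is a much-simplified sketch of that kind of rendering, assuming a text/template approach; the struct, field names and trimmed manifest are invented for illustration and are not minikube's real template.)

```go
package main

import (
	"os"
	"text/template"
)

// vipParams is a hypothetical stand-in for the values fed into a kube-vip
// manifest template (compare with the full rendered YAML in the log above).
type vipParams struct {
	VIP       string
	Port      string
	Interface string
}

const manifestTmpl = `apiVersion: v1
kind: Pod
metadata:
  name: kube-vip
  namespace: kube-system
spec:
  containers:
  - name: kube-vip
    image: ghcr.io/kube-vip/kube-vip:v0.8.0
    args: ["manager"]
    env:
    - name: address
      value: "{{ .VIP }}"
    - name: port
      value: "{{ .Port }}"
    - name: vip_interface
      value: "{{ .Interface }}"
  hostNetwork: true
`

func main() {
	t := template.Must(template.New("kube-vip").Parse(manifestTmpl))
	// Render to stdout instead of /etc/kubernetes/manifests/kube-vip.yaml.
	_ = t.Execute(os.Stdout, vipParams{VIP: "192.168.39.254", Port: "8443", Interface: "eth0"})
}
```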
	I0603 11:00:02.876750   25542 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.1
	I0603 11:00:02.889751   25542 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.30.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.30.1': No such file or directory
	
	Initiating transfer...
	I0603 11:00:02.889803   25542 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.30.1
	I0603 11:00:02.902056   25542 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubectl.sha256
	I0603 11:00:02.902087   25542 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19008-7755/.minikube/cache/linux/amd64/v1.30.1/kubectl -> /var/lib/minikube/binaries/v1.30.1/kubectl
	I0603 11:00:02.902139   25542 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubeadm.sha256
	I0603 11:00:02.902157   25542 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.1/kubectl
	I0603 11:00:02.902166   25542 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19008-7755/.minikube/cache/linux/amd64/v1.30.1/kubeadm -> /var/lib/minikube/binaries/v1.30.1/kubeadm
	I0603 11:00:02.902063   25542 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubelet.sha256
	I0603 11:00:02.902235   25542 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.1/kubeadm
	I0603 11:00:02.902238   25542 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0603 11:00:02.919491   25542 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19008-7755/.minikube/cache/linux/amd64/v1.30.1/kubelet -> /var/lib/minikube/binaries/v1.30.1/kubelet
	I0603 11:00:02.919560   25542 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.1/kubectl: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.1/kubectl': No such file or directory
	I0603 11:00:02.919602   25542 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/cache/linux/amd64/v1.30.1/kubectl --> /var/lib/minikube/binaries/v1.30.1/kubectl (51454104 bytes)
	I0603 11:00:02.919644   25542 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.1/kubeadm: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.1/kubeadm': No such file or directory
	I0603 11:00:02.919605   25542 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.1/kubelet
	I0603 11:00:02.919674   25542 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/cache/linux/amd64/v1.30.1/kubeadm --> /var/lib/minikube/binaries/v1.30.1/kubeadm (50249880 bytes)
	I0603 11:00:02.942527   25542 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.1/kubelet: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.1/kubelet': No such file or directory
	I0603 11:00:02.942564   25542 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/cache/linux/amd64/v1.30.1/kubelet --> /var/lib/minikube/binaries/v1.30.1/kubelet (100100024 bytes)
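	(Editor's note: for each of kubectl, kubeadm and kubelet the log above runs an existence check on the node and only transfers the cached binary when the stat fails. The sketch below shows that "check, then copy" pattern on the local filesystem; the real tool does the check and transfer over SSH via ssh_runner, so this is only an analogue, and the helper name is invented.)

```go
package main

import (
	"fmt"
	"io"
	"os"
	"path/filepath"
)

// ensureBinary copies a cached binary into destDir only when the existence
// check fails, mirroring the "stat, then scp" pattern in the log above.
func ensureBinary(cachePath, destDir string) error {
	dest := filepath.Join(destDir, filepath.Base(cachePath))
	if _, err := os.Stat(dest); err == nil {
		return nil // already present, skip the transfer
	}
	src, err := os.Open(cachePath)
	if err != nil {
		return err
	}
	defer src.Close()
	dst, err := os.OpenFile(dest, os.O_CREATE|os.O_WRONLY|os.O_TRUNC, 0o755)
	if err != nil {
		return err
	}
	defer dst.Close()
	if _, err := io.Copy(dst, src); err != nil {
		return err
	}
	fmt.Printf("transferred %s -> %s\n", cachePath, dest)
	return nil
}

func main() {
	for _, bin := range []string{"kubectl", "kubeadm", "kubelet"} {
		cache := filepath.Join("/home/jenkins/minikube-integration/19008-7755/.minikube/cache/linux/amd64/v1.30.1", bin)
		_ = ensureBinary(cache, "/var/lib/minikube/binaries/v1.30.1")
	}
}
```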
	I0603 11:00:03.978032   25542 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0603 11:00:03.988337   25542 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0603 11:00:04.008747   25542 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0603 11:00:04.028226   25542 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0603 11:00:04.047517   25542 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0603 11:00:04.051996   25542 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0603 11:00:04.067432   25542 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0603 11:00:04.198882   25542 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0603 11:00:04.217113   25542 host.go:66] Checking if "ha-683480" exists ...
	I0603 11:00:04.217581   25542 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 11:00:04.217640   25542 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 11:00:04.234040   25542 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33305
	I0603 11:00:04.234511   25542 main.go:141] libmachine: () Calling .GetVersion
	I0603 11:00:04.235158   25542 main.go:141] libmachine: Using API Version  1
	I0603 11:00:04.235188   25542 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 11:00:04.235582   25542 main.go:141] libmachine: () Calling .GetMachineName
	I0603 11:00:04.235796   25542 main.go:141] libmachine: (ha-683480) Calling .DriverName
	I0603 11:00:04.235958   25542 start.go:316] joinCluster: &{Name:ha-683480 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:ha-683480 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.116 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.127 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.131 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0603 11:00:04.236121   25542 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0603 11:00:04.236140   25542 main.go:141] libmachine: (ha-683480) Calling .GetSSHHostname
	I0603 11:00:04.239417   25542 main.go:141] libmachine: (ha-683480) DBG | domain ha-683480 has defined MAC address 52:54:00:e5:3f:6a in network mk-ha-683480
	I0603 11:00:04.239878   25542 main.go:141] libmachine: (ha-683480) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:3f:6a", ip: ""} in network mk-ha-683480: {Iface:virbr1 ExpiryTime:2024-06-03 11:56:28 +0000 UTC Type:0 Mac:52:54:00:e5:3f:6a Iaid: IPaddr:192.168.39.116 Prefix:24 Hostname:ha-683480 Clientid:01:52:54:00:e5:3f:6a}
	I0603 11:00:04.239905   25542 main.go:141] libmachine: (ha-683480) DBG | domain ha-683480 has defined IP address 192.168.39.116 and MAC address 52:54:00:e5:3f:6a in network mk-ha-683480
	I0603 11:00:04.240115   25542 main.go:141] libmachine: (ha-683480) Calling .GetSSHPort
	I0603 11:00:04.240323   25542 main.go:141] libmachine: (ha-683480) Calling .GetSSHKeyPath
	I0603 11:00:04.240468   25542 main.go:141] libmachine: (ha-683480) Calling .GetSSHUsername
	I0603 11:00:04.240700   25542 sshutil.go:53] new ssh client: &{IP:192.168.39.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19008-7755/.minikube/machines/ha-683480/id_rsa Username:docker}
	I0603 11:00:04.395209   25542 start.go:342] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:192.168.39.131 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0603 11:00:04.395250   25542 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 9gjcl5.cq7m0hgvprwevy8u --discovery-token-ca-cert-hash sha256:efe6cbdd58a590fb6c3b56a05a1648145bfb13b8a8cd2383ea34b710fa987e0b --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-683480-m03 --control-plane --apiserver-advertise-address=192.168.39.131 --apiserver-bind-port=8443"
	I0603 11:00:28.714875   25542 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 9gjcl5.cq7m0hgvprwevy8u --discovery-token-ca-cert-hash sha256:efe6cbdd58a590fb6c3b56a05a1648145bfb13b8a8cd2383ea34b710fa987e0b --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-683480-m03 --control-plane --apiserver-advertise-address=192.168.39.131 --apiserver-bind-port=8443": (24.319601388s)
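	(Editor's note: the kubeadm join command above pins the cluster CA with --discovery-token-ca-cert-hash sha256:..., which is a SHA-256 over the DER-encoded Subject Public Key Info of the CA certificate. A small Go sketch that computes the same value from the ca.crt already copied to /var/lib/minikube/certs is shown below; it is illustrative only and not minikube's or kubeadm's own helper.)

```go
package main

import (
	"crypto/sha256"
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
)

// caCertHash computes the value kubeadm expects after "sha256:" in
// --discovery-token-ca-cert-hash: a SHA-256 over the CA certificate's
// DER-encoded Subject Public Key Info.
func caCertHash(caPath string) (string, error) {
	pemBytes, err := os.ReadFile(caPath)
	if err != nil {
		return "", err
	}
	block, _ := pem.Decode(pemBytes)
	if block == nil {
		return "", fmt.Errorf("no PEM block in %s", caPath)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return "", err
	}
	spki, err := x509.MarshalPKIXPublicKey(cert.PublicKey)
	if err != nil {
		return "", err
	}
	sum := sha256.Sum256(spki)
	return fmt.Sprintf("sha256:%x", sum), nil
}

func main() {
	h, err := caCertHash("/var/lib/minikube/certs/ca.crt")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println(h)
}
```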
	I0603 11:00:28.714909   25542 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0603 11:00:29.352061   25542 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-683480-m03 minikube.k8s.io/updated_at=2024_06_03T11_00_29_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=599070631c2216ebc936292d491e4fe10e15b9d8 minikube.k8s.io/name=ha-683480 minikube.k8s.io/primary=false
	I0603 11:00:29.480343   25542 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-683480-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I0603 11:00:29.587008   25542 start.go:318] duration metric: took 25.351046222s to joinCluster
	I0603 11:00:29.587111   25542 start.go:234] Will wait 6m0s for node &{Name:m03 IP:192.168.39.131 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0603 11:00:29.588268   25542 out.go:177] * Verifying Kubernetes components...
	I0603 11:00:29.587489   25542 config.go:182] Loaded profile config "ha-683480": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0603 11:00:29.589431   25542 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0603 11:00:29.834362   25542 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0603 11:00:29.880657   25542 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19008-7755/kubeconfig
	I0603 11:00:29.880998   25542 kapi.go:59] client config for ha-683480: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19008-7755/.minikube/profiles/ha-683480/client.crt", KeyFile:"/home/jenkins/minikube-integration/19008-7755/.minikube/profiles/ha-683480/client.key", CAFile:"/home/jenkins/minikube-integration/19008-7755/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1cfa500), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0603 11:00:29.881080   25542 kubeadm.go:477] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.116:8443
	I0603 11:00:29.881335   25542 node_ready.go:35] waiting up to 6m0s for node "ha-683480-m03" to be "Ready" ...
	I0603 11:00:29.881424   25542 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/nodes/ha-683480-m03
	I0603 11:00:29.881434   25542 round_trippers.go:469] Request Headers:
	I0603 11:00:29.881446   25542 round_trippers.go:473]     Accept: application/json, */*
	I0603 11:00:29.881454   25542 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 11:00:29.885455   25542 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 11:00:30.382440   25542 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/nodes/ha-683480-m03
	I0603 11:00:30.382464   25542 round_trippers.go:469] Request Headers:
	I0603 11:00:30.382476   25542 round_trippers.go:473]     Accept: application/json, */*
	I0603 11:00:30.382482   25542 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 11:00:30.385728   25542 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 11:00:30.881629   25542 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/nodes/ha-683480-m03
	I0603 11:00:30.881649   25542 round_trippers.go:469] Request Headers:
	I0603 11:00:30.881664   25542 round_trippers.go:473]     Accept: application/json, */*
	I0603 11:00:30.881669   25542 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 11:00:30.885779   25542 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 11:00:31.382390   25542 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/nodes/ha-683480-m03
	I0603 11:00:31.382414   25542 round_trippers.go:469] Request Headers:
	I0603 11:00:31.382423   25542 round_trippers.go:473]     Accept: application/json, */*
	I0603 11:00:31.382430   25542 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 11:00:31.385942   25542 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 11:00:31.882160   25542 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/nodes/ha-683480-m03
	I0603 11:00:31.882183   25542 round_trippers.go:469] Request Headers:
	I0603 11:00:31.882191   25542 round_trippers.go:473]     Accept: application/json, */*
	I0603 11:00:31.882195   25542 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 11:00:31.885181   25542 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0603 11:00:31.886013   25542 node_ready.go:53] node "ha-683480-m03" has status "Ready":"False"
	I0603 11:00:32.381540   25542 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/nodes/ha-683480-m03
	I0603 11:00:32.381562   25542 round_trippers.go:469] Request Headers:
	I0603 11:00:32.381570   25542 round_trippers.go:473]     Accept: application/json, */*
	I0603 11:00:32.381574   25542 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 11:00:32.384833   25542 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 11:00:32.881737   25542 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/nodes/ha-683480-m03
	I0603 11:00:32.881811   25542 round_trippers.go:469] Request Headers:
	I0603 11:00:32.881826   25542 round_trippers.go:473]     Accept: application/json, */*
	I0603 11:00:32.881831   25542 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 11:00:32.885145   25542 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 11:00:33.382335   25542 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/nodes/ha-683480-m03
	I0603 11:00:33.382356   25542 round_trippers.go:469] Request Headers:
	I0603 11:00:33.382363   25542 round_trippers.go:473]     Accept: application/json, */*
	I0603 11:00:33.382369   25542 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 11:00:33.385480   25542 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 11:00:33.882454   25542 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/nodes/ha-683480-m03
	I0603 11:00:33.882488   25542 round_trippers.go:469] Request Headers:
	I0603 11:00:33.882500   25542 round_trippers.go:473]     Accept: application/json, */*
	I0603 11:00:33.882507   25542 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 11:00:33.886449   25542 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 11:00:33.887171   25542 node_ready.go:53] node "ha-683480-m03" has status "Ready":"False"
	I0603 11:00:34.381993   25542 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/nodes/ha-683480-m03
	I0603 11:00:34.382022   25542 round_trippers.go:469] Request Headers:
	I0603 11:00:34.382034   25542 round_trippers.go:473]     Accept: application/json, */*
	I0603 11:00:34.382041   25542 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 11:00:34.385106   25542 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 11:00:34.882403   25542 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/nodes/ha-683480-m03
	I0603 11:00:34.882426   25542 round_trippers.go:469] Request Headers:
	I0603 11:00:34.882433   25542 round_trippers.go:473]     Accept: application/json, */*
	I0603 11:00:34.882438   25542 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 11:00:34.886934   25542 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 11:00:35.382506   25542 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/nodes/ha-683480-m03
	I0603 11:00:35.382535   25542 round_trippers.go:469] Request Headers:
	I0603 11:00:35.382544   25542 round_trippers.go:473]     Accept: application/json, */*
	I0603 11:00:35.382548   25542 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 11:00:35.385963   25542 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 11:00:35.881616   25542 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/nodes/ha-683480-m03
	I0603 11:00:35.881637   25542 round_trippers.go:469] Request Headers:
	I0603 11:00:35.881645   25542 round_trippers.go:473]     Accept: application/json, */*
	I0603 11:00:35.881650   25542 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 11:00:35.884867   25542 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 11:00:36.381572   25542 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/nodes/ha-683480-m03
	I0603 11:00:36.381595   25542 round_trippers.go:469] Request Headers:
	I0603 11:00:36.381602   25542 round_trippers.go:473]     Accept: application/json, */*
	I0603 11:00:36.381607   25542 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 11:00:36.384871   25542 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 11:00:36.385932   25542 node_ready.go:53] node "ha-683480-m03" has status "Ready":"False"
	I0603 11:00:36.882281   25542 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/nodes/ha-683480-m03
	I0603 11:00:36.882304   25542 round_trippers.go:469] Request Headers:
	I0603 11:00:36.882314   25542 round_trippers.go:473]     Accept: application/json, */*
	I0603 11:00:36.882319   25542 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 11:00:36.885367   25542 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 11:00:36.886083   25542 node_ready.go:49] node "ha-683480-m03" has status "Ready":"True"
	I0603 11:00:36.886108   25542 node_ready.go:38] duration metric: took 7.004756506s for node "ha-683480-m03" to be "Ready" ...
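	(Editor's note: node_ready.go above polls GET /api/v1/nodes/ha-683480-m03 roughly every 500ms until the node's Ready condition turns True, which here took about 7s. Below is a condensed client-go sketch of the same wait loop; the kubeconfig path, node name and 6m timeout come from the log, while the function name and polling details are illustrative rather than minikube's actual implementation.)

```go
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitNodeReady polls the node object until its Ready condition is True,
// the same check node_ready.go performs in the log above.
func waitNodeReady(cs *kubernetes.Clientset, name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		node, err := cs.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
		if err == nil {
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
					return nil
				}
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("node %q never became Ready within %s", name, timeout)
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/19008-7755/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	if err := waitNodeReady(cs, "ha-683480-m03", 6*time.Minute); err != nil {
		panic(err)
	}
	fmt.Println("node ha-683480-m03 is Ready")
}
```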
	I0603 11:00:36.886120   25542 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0603 11:00:36.886192   25542 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/namespaces/kube-system/pods
	I0603 11:00:36.886204   25542 round_trippers.go:469] Request Headers:
	I0603 11:00:36.886211   25542 round_trippers.go:473]     Accept: application/json, */*
	I0603 11:00:36.886216   25542 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 11:00:36.892662   25542 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0603 11:00:36.899322   25542 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-8tqf9" in "kube-system" namespace to be "Ready" ...
	I0603 11:00:36.899389   25542 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-8tqf9
	I0603 11:00:36.899398   25542 round_trippers.go:469] Request Headers:
	I0603 11:00:36.899405   25542 round_trippers.go:473]     Accept: application/json, */*
	I0603 11:00:36.899410   25542 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 11:00:36.901816   25542 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0603 11:00:36.902531   25542 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/nodes/ha-683480
	I0603 11:00:36.902545   25542 round_trippers.go:469] Request Headers:
	I0603 11:00:36.902551   25542 round_trippers.go:473]     Accept: application/json, */*
	I0603 11:00:36.902554   25542 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 11:00:36.905221   25542 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0603 11:00:36.906202   25542 pod_ready.go:92] pod "coredns-7db6d8ff4d-8tqf9" in "kube-system" namespace has status "Ready":"True"
	I0603 11:00:36.906225   25542 pod_ready.go:81] duration metric: took 6.88162ms for pod "coredns-7db6d8ff4d-8tqf9" in "kube-system" namespace to be "Ready" ...
	I0603 11:00:36.906257   25542 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-nff86" in "kube-system" namespace to be "Ready" ...
	I0603 11:00:36.906339   25542 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-nff86
	I0603 11:00:36.906349   25542 round_trippers.go:469] Request Headers:
	I0603 11:00:36.906359   25542 round_trippers.go:473]     Accept: application/json, */*
	I0603 11:00:36.906367   25542 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 11:00:36.916283   25542 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0603 11:00:36.917407   25542 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/nodes/ha-683480
	I0603 11:00:36.917426   25542 round_trippers.go:469] Request Headers:
	I0603 11:00:36.917453   25542 round_trippers.go:473]     Accept: application/json, */*
	I0603 11:00:36.917459   25542 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 11:00:36.921564   25542 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 11:00:36.922227   25542 pod_ready.go:92] pod "coredns-7db6d8ff4d-nff86" in "kube-system" namespace has status "Ready":"True"
	I0603 11:00:36.922247   25542 pod_ready.go:81] duration metric: took 15.976604ms for pod "coredns-7db6d8ff4d-nff86" in "kube-system" namespace to be "Ready" ...
	I0603 11:00:36.922260   25542 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-683480" in "kube-system" namespace to be "Ready" ...
	I0603 11:00:36.922331   25542 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/namespaces/kube-system/pods/etcd-ha-683480
	I0603 11:00:36.922342   25542 round_trippers.go:469] Request Headers:
	I0603 11:00:36.922351   25542 round_trippers.go:473]     Accept: application/json, */*
	I0603 11:00:36.922360   25542 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 11:00:36.924622   25542 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0603 11:00:36.924986   25542 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/nodes/ha-683480
	I0603 11:00:36.924998   25542 round_trippers.go:469] Request Headers:
	I0603 11:00:36.925005   25542 round_trippers.go:473]     Accept: application/json, */*
	I0603 11:00:36.925009   25542 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 11:00:36.927602   25542 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0603 11:00:36.928048   25542 pod_ready.go:92] pod "etcd-ha-683480" in "kube-system" namespace has status "Ready":"True"
	I0603 11:00:36.928062   25542 pod_ready.go:81] duration metric: took 5.791678ms for pod "etcd-ha-683480" in "kube-system" namespace to be "Ready" ...
	I0603 11:00:36.928071   25542 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-683480-m02" in "kube-system" namespace to be "Ready" ...
	I0603 11:00:36.928113   25542 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/namespaces/kube-system/pods/etcd-ha-683480-m02
	I0603 11:00:36.928120   25542 round_trippers.go:469] Request Headers:
	I0603 11:00:36.928127   25542 round_trippers.go:473]     Accept: application/json, */*
	I0603 11:00:36.928131   25542 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 11:00:36.930214   25542 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0603 11:00:36.930749   25542 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/nodes/ha-683480-m02
	I0603 11:00:36.930762   25542 round_trippers.go:469] Request Headers:
	I0603 11:00:36.930772   25542 round_trippers.go:473]     Accept: application/json, */*
	I0603 11:00:36.930776   25542 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 11:00:36.933236   25542 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0603 11:00:36.933907   25542 pod_ready.go:92] pod "etcd-ha-683480-m02" in "kube-system" namespace has status "Ready":"True"
	I0603 11:00:36.933926   25542 pod_ready.go:81] duration metric: took 5.847458ms for pod "etcd-ha-683480-m02" in "kube-system" namespace to be "Ready" ...
	I0603 11:00:36.933937   25542 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-683480-m03" in "kube-system" namespace to be "Ready" ...
	I0603 11:00:37.082647   25542 request.go:629] Waited for 148.638529ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.116:8443/api/v1/namespaces/kube-system/pods/etcd-ha-683480-m03
	I0603 11:00:37.082736   25542 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/namespaces/kube-system/pods/etcd-ha-683480-m03
	I0603 11:00:37.082747   25542 round_trippers.go:469] Request Headers:
	I0603 11:00:37.082757   25542 round_trippers.go:473]     Accept: application/json, */*
	I0603 11:00:37.082768   25542 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 11:00:37.086221   25542 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 11:00:37.283114   25542 request.go:629] Waited for 196.19485ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.116:8443/api/v1/nodes/ha-683480-m03
	I0603 11:00:37.283249   25542 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/nodes/ha-683480-m03
	I0603 11:00:37.283274   25542 round_trippers.go:469] Request Headers:
	I0603 11:00:37.283300   25542 round_trippers.go:473]     Accept: application/json, */*
	I0603 11:00:37.283320   25542 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 11:00:37.286391   25542 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 11:00:37.482631   25542 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/namespaces/kube-system/pods/etcd-ha-683480-m03
	I0603 11:00:37.482660   25542 round_trippers.go:469] Request Headers:
	I0603 11:00:37.482670   25542 round_trippers.go:473]     Accept: application/json, */*
	I0603 11:00:37.482674   25542 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 11:00:37.486452   25542 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 11:00:37.682342   25542 request.go:629] Waited for 195.29431ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.116:8443/api/v1/nodes/ha-683480-m03
	I0603 11:00:37.682402   25542 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/nodes/ha-683480-m03
	I0603 11:00:37.682409   25542 round_trippers.go:469] Request Headers:
	I0603 11:00:37.682419   25542 round_trippers.go:473]     Accept: application/json, */*
	I0603 11:00:37.682429   25542 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 11:00:37.685659   25542 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 11:00:37.934462   25542 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/namespaces/kube-system/pods/etcd-ha-683480-m03
	I0603 11:00:37.934504   25542 round_trippers.go:469] Request Headers:
	I0603 11:00:37.934512   25542 round_trippers.go:473]     Accept: application/json, */*
	I0603 11:00:37.934516   25542 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 11:00:37.937491   25542 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0603 11:00:38.082678   25542 request.go:629] Waited for 144.315129ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.116:8443/api/v1/nodes/ha-683480-m03
	I0603 11:00:38.082742   25542 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/nodes/ha-683480-m03
	I0603 11:00:38.082749   25542 round_trippers.go:469] Request Headers:
	I0603 11:00:38.082766   25542 round_trippers.go:473]     Accept: application/json, */*
	I0603 11:00:38.082780   25542 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 11:00:38.086484   25542 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 11:00:38.434304   25542 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/namespaces/kube-system/pods/etcd-ha-683480-m03
	I0603 11:00:38.434325   25542 round_trippers.go:469] Request Headers:
	I0603 11:00:38.434332   25542 round_trippers.go:473]     Accept: application/json, */*
	I0603 11:00:38.434338   25542 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 11:00:38.437395   25542 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 11:00:38.482863   25542 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/nodes/ha-683480-m03
	I0603 11:00:38.482882   25542 round_trippers.go:469] Request Headers:
	I0603 11:00:38.482891   25542 round_trippers.go:473]     Accept: application/json, */*
	I0603 11:00:38.482894   25542 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 11:00:38.486027   25542 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 11:00:38.934180   25542 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/namespaces/kube-system/pods/etcd-ha-683480-m03
	I0603 11:00:38.934200   25542 round_trippers.go:469] Request Headers:
	I0603 11:00:38.934208   25542 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 11:00:38.934214   25542 round_trippers.go:473]     Accept: application/json, */*
	I0603 11:00:38.936951   25542 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0603 11:00:38.937790   25542 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/nodes/ha-683480-m03
	I0603 11:00:38.937807   25542 round_trippers.go:469] Request Headers:
	I0603 11:00:38.937814   25542 round_trippers.go:473]     Accept: application/json, */*
	I0603 11:00:38.937819   25542 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 11:00:38.940485   25542 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0603 11:00:38.941085   25542 pod_ready.go:102] pod "etcd-ha-683480-m03" in "kube-system" namespace has status "Ready":"False"
	I0603 11:00:39.434821   25542 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/namespaces/kube-system/pods/etcd-ha-683480-m03
	I0603 11:00:39.434843   25542 round_trippers.go:469] Request Headers:
	I0603 11:00:39.434851   25542 round_trippers.go:473]     Accept: application/json, */*
	I0603 11:00:39.434855   25542 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 11:00:39.437945   25542 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 11:00:39.438609   25542 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/nodes/ha-683480-m03
	I0603 11:00:39.438626   25542 round_trippers.go:469] Request Headers:
	I0603 11:00:39.438634   25542 round_trippers.go:473]     Accept: application/json, */*
	I0603 11:00:39.438638   25542 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 11:00:39.441343   25542 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0603 11:00:39.934932   25542 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/namespaces/kube-system/pods/etcd-ha-683480-m03
	I0603 11:00:39.934959   25542 round_trippers.go:469] Request Headers:
	I0603 11:00:39.934971   25542 round_trippers.go:473]     Accept: application/json, */*
	I0603 11:00:39.934977   25542 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 11:00:39.938788   25542 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 11:00:39.939577   25542 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/nodes/ha-683480-m03
	I0603 11:00:39.939597   25542 round_trippers.go:469] Request Headers:
	I0603 11:00:39.939607   25542 round_trippers.go:473]     Accept: application/json, */*
	I0603 11:00:39.939615   25542 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 11:00:39.942619   25542 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0603 11:00:40.434548   25542 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/namespaces/kube-system/pods/etcd-ha-683480-m03
	I0603 11:00:40.434569   25542 round_trippers.go:469] Request Headers:
	I0603 11:00:40.434577   25542 round_trippers.go:473]     Accept: application/json, */*
	I0603 11:00:40.434581   25542 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 11:00:40.439028   25542 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 11:00:40.440125   25542 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/nodes/ha-683480-m03
	I0603 11:00:40.440139   25542 round_trippers.go:469] Request Headers:
	I0603 11:00:40.440147   25542 round_trippers.go:473]     Accept: application/json, */*
	I0603 11:00:40.440153   25542 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 11:00:40.443208   25542 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 11:00:40.934180   25542 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/namespaces/kube-system/pods/etcd-ha-683480-m03
	I0603 11:00:40.934207   25542 round_trippers.go:469] Request Headers:
	I0603 11:00:40.934218   25542 round_trippers.go:473]     Accept: application/json, */*
	I0603 11:00:40.934222   25542 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 11:00:40.937101   25542 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0603 11:00:40.937978   25542 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/nodes/ha-683480-m03
	I0603 11:00:40.937994   25542 round_trippers.go:469] Request Headers:
	I0603 11:00:40.938001   25542 round_trippers.go:473]     Accept: application/json, */*
	I0603 11:00:40.938005   25542 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 11:00:40.940621   25542 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0603 11:00:40.941206   25542 pod_ready.go:102] pod "etcd-ha-683480-m03" in "kube-system" namespace has status "Ready":"False"
	I0603 11:00:41.434528   25542 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/namespaces/kube-system/pods/etcd-ha-683480-m03
	I0603 11:00:41.434550   25542 round_trippers.go:469] Request Headers:
	I0603 11:00:41.434556   25542 round_trippers.go:473]     Accept: application/json, */*
	I0603 11:00:41.434559   25542 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 11:00:41.437652   25542 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 11:00:41.438502   25542 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/nodes/ha-683480-m03
	I0603 11:00:41.438517   25542 round_trippers.go:469] Request Headers:
	I0603 11:00:41.438529   25542 round_trippers.go:473]     Accept: application/json, */*
	I0603 11:00:41.438534   25542 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 11:00:41.441369   25542 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0603 11:00:41.934309   25542 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/namespaces/kube-system/pods/etcd-ha-683480-m03
	I0603 11:00:41.934342   25542 round_trippers.go:469] Request Headers:
	I0603 11:00:41.934352   25542 round_trippers.go:473]     Accept: application/json, */*
	I0603 11:00:41.934359   25542 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 11:00:41.937851   25542 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 11:00:41.938455   25542 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/nodes/ha-683480-m03
	I0603 11:00:41.938469   25542 round_trippers.go:469] Request Headers:
	I0603 11:00:41.938477   25542 round_trippers.go:473]     Accept: application/json, */*
	I0603 11:00:41.938480   25542 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 11:00:41.941001   25542 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0603 11:00:42.435155   25542 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/namespaces/kube-system/pods/etcd-ha-683480-m03
	I0603 11:00:42.435176   25542 round_trippers.go:469] Request Headers:
	I0603 11:00:42.435183   25542 round_trippers.go:473]     Accept: application/json, */*
	I0603 11:00:42.435189   25542 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 11:00:42.438753   25542 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 11:00:42.439698   25542 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/nodes/ha-683480-m03
	I0603 11:00:42.439717   25542 round_trippers.go:469] Request Headers:
	I0603 11:00:42.439728   25542 round_trippers.go:473]     Accept: application/json, */*
	I0603 11:00:42.439733   25542 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 11:00:42.442809   25542 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 11:00:42.934832   25542 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/namespaces/kube-system/pods/etcd-ha-683480-m03
	I0603 11:00:42.934857   25542 round_trippers.go:469] Request Headers:
	I0603 11:00:42.934865   25542 round_trippers.go:473]     Accept: application/json, */*
	I0603 11:00:42.934870   25542 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 11:00:42.938157   25542 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 11:00:42.938799   25542 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/nodes/ha-683480-m03
	I0603 11:00:42.938815   25542 round_trippers.go:469] Request Headers:
	I0603 11:00:42.938822   25542 round_trippers.go:473]     Accept: application/json, */*
	I0603 11:00:42.938826   25542 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 11:00:42.941943   25542 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 11:00:42.942703   25542 pod_ready.go:102] pod "etcd-ha-683480-m03" in "kube-system" namespace has status "Ready":"False"
	I0603 11:00:43.435052   25542 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/namespaces/kube-system/pods/etcd-ha-683480-m03
	I0603 11:00:43.435085   25542 round_trippers.go:469] Request Headers:
	I0603 11:00:43.435094   25542 round_trippers.go:473]     Accept: application/json, */*
	I0603 11:00:43.435097   25542 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 11:00:43.438623   25542 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 11:00:43.439364   25542 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/nodes/ha-683480-m03
	I0603 11:00:43.439382   25542 round_trippers.go:469] Request Headers:
	I0603 11:00:43.439390   25542 round_trippers.go:473]     Accept: application/json, */*
	I0603 11:00:43.439395   25542 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 11:00:43.442042   25542 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0603 11:00:43.934720   25542 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/namespaces/kube-system/pods/etcd-ha-683480-m03
	I0603 11:00:43.934746   25542 round_trippers.go:469] Request Headers:
	I0603 11:00:43.934758   25542 round_trippers.go:473]     Accept: application/json, */*
	I0603 11:00:43.934764   25542 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 11:00:43.938369   25542 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 11:00:43.939300   25542 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/nodes/ha-683480-m03
	I0603 11:00:43.939321   25542 round_trippers.go:469] Request Headers:
	I0603 11:00:43.939332   25542 round_trippers.go:473]     Accept: application/json, */*
	I0603 11:00:43.939336   25542 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 11:00:43.942327   25542 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0603 11:00:43.942964   25542 pod_ready.go:92] pod "etcd-ha-683480-m03" in "kube-system" namespace has status "Ready":"True"
	I0603 11:00:43.942987   25542 pod_ready.go:81] duration metric: took 7.009042425s for pod "etcd-ha-683480-m03" in "kube-system" namespace to be "Ready" ...
	I0603 11:00:43.943008   25542 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-683480" in "kube-system" namespace to be "Ready" ...
	I0603 11:00:43.943098   25542 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-683480
	I0603 11:00:43.943116   25542 round_trippers.go:469] Request Headers:
	I0603 11:00:43.943125   25542 round_trippers.go:473]     Accept: application/json, */*
	I0603 11:00:43.943134   25542 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 11:00:43.946175   25542 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 11:00:43.946963   25542 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/nodes/ha-683480
	I0603 11:00:43.946980   25542 round_trippers.go:469] Request Headers:
	I0603 11:00:43.946991   25542 round_trippers.go:473]     Accept: application/json, */*
	I0603 11:00:43.946998   25542 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 11:00:43.949616   25542 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0603 11:00:43.950144   25542 pod_ready.go:92] pod "kube-apiserver-ha-683480" in "kube-system" namespace has status "Ready":"True"
	I0603 11:00:43.950164   25542 pod_ready.go:81] duration metric: took 7.145143ms for pod "kube-apiserver-ha-683480" in "kube-system" namespace to be "Ready" ...
	I0603 11:00:43.950177   25542 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-683480-m02" in "kube-system" namespace to be "Ready" ...
	I0603 11:00:43.950251   25542 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-683480-m02
	I0603 11:00:43.950263   25542 round_trippers.go:469] Request Headers:
	I0603 11:00:43.950272   25542 round_trippers.go:473]     Accept: application/json, */*
	I0603 11:00:43.950278   25542 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 11:00:43.953199   25542 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0603 11:00:43.953878   25542 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/nodes/ha-683480-m02
	I0603 11:00:43.953891   25542 round_trippers.go:469] Request Headers:
	I0603 11:00:43.953900   25542 round_trippers.go:473]     Accept: application/json, */*
	I0603 11:00:43.953903   25542 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 11:00:43.957194   25542 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 11:00:43.957884   25542 pod_ready.go:92] pod "kube-apiserver-ha-683480-m02" in "kube-system" namespace has status "Ready":"True"
	I0603 11:00:43.957904   25542 pod_ready.go:81] duration metric: took 7.719828ms for pod "kube-apiserver-ha-683480-m02" in "kube-system" namespace to be "Ready" ...
	I0603 11:00:43.957913   25542 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-683480-m03" in "kube-system" namespace to be "Ready" ...
	I0603 11:00:43.957959   25542 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-683480-m03
	I0603 11:00:43.957964   25542 round_trippers.go:469] Request Headers:
	I0603 11:00:43.957970   25542 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 11:00:43.957977   25542 round_trippers.go:473]     Accept: application/json, */*
	I0603 11:00:43.960651   25542 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0603 11:00:44.082513   25542 request.go:629] Waited for 121.256824ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.116:8443/api/v1/nodes/ha-683480-m03
	I0603 11:00:44.082568   25542 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/nodes/ha-683480-m03
	I0603 11:00:44.082573   25542 round_trippers.go:469] Request Headers:
	I0603 11:00:44.082581   25542 round_trippers.go:473]     Accept: application/json, */*
	I0603 11:00:44.082587   25542 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 11:00:44.087201   25542 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 11:00:44.087735   25542 pod_ready.go:92] pod "kube-apiserver-ha-683480-m03" in "kube-system" namespace has status "Ready":"True"
	I0603 11:00:44.087751   25542 pod_ready.go:81] duration metric: took 129.833053ms for pod "kube-apiserver-ha-683480-m03" in "kube-system" namespace to be "Ready" ...
	I0603 11:00:44.087762   25542 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-683480" in "kube-system" namespace to be "Ready" ...
	I0603 11:00:44.283295   25542 request.go:629] Waited for 195.455954ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.116:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-683480
	I0603 11:00:44.283359   25542 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-683480
	I0603 11:00:44.283366   25542 round_trippers.go:469] Request Headers:
	I0603 11:00:44.283374   25542 round_trippers.go:473]     Accept: application/json, */*
	I0603 11:00:44.283382   25542 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 11:00:44.286807   25542 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 11:00:44.482312   25542 request.go:629] Waited for 194.668894ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.116:8443/api/v1/nodes/ha-683480
	I0603 11:00:44.482357   25542 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/nodes/ha-683480
	I0603 11:00:44.482361   25542 round_trippers.go:469] Request Headers:
	I0603 11:00:44.482367   25542 round_trippers.go:473]     Accept: application/json, */*
	I0603 11:00:44.482370   25542 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 11:00:44.485359   25542 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0603 11:00:44.485909   25542 pod_ready.go:92] pod "kube-controller-manager-ha-683480" in "kube-system" namespace has status "Ready":"True"
	I0603 11:00:44.485925   25542 pod_ready.go:81] duration metric: took 398.155773ms for pod "kube-controller-manager-ha-683480" in "kube-system" namespace to be "Ready" ...
	I0603 11:00:44.485934   25542 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-683480-m02" in "kube-system" namespace to be "Ready" ...
	I0603 11:00:44.682553   25542 request.go:629] Waited for 196.533881ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.116:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-683480-m02
	I0603 11:00:44.682607   25542 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-683480-m02
	I0603 11:00:44.682611   25542 round_trippers.go:469] Request Headers:
	I0603 11:00:44.682619   25542 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 11:00:44.682626   25542 round_trippers.go:473]     Accept: application/json, */*
	I0603 11:00:44.686715   25542 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 11:00:44.882437   25542 request.go:629] Waited for 194.278267ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.116:8443/api/v1/nodes/ha-683480-m02
	I0603 11:00:44.882513   25542 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/nodes/ha-683480-m02
	I0603 11:00:44.882518   25542 round_trippers.go:469] Request Headers:
	I0603 11:00:44.882525   25542 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 11:00:44.882530   25542 round_trippers.go:473]     Accept: application/json, */*
	I0603 11:00:44.885825   25542 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 11:00:44.886595   25542 pod_ready.go:92] pod "kube-controller-manager-ha-683480-m02" in "kube-system" namespace has status "Ready":"True"
	I0603 11:00:44.886621   25542 pod_ready.go:81] duration metric: took 400.67823ms for pod "kube-controller-manager-ha-683480-m02" in "kube-system" namespace to be "Ready" ...
	I0603 11:00:44.886635   25542 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-683480-m03" in "kube-system" namespace to be "Ready" ...
	I0603 11:00:45.082728   25542 request.go:629] Waited for 196.028573ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.116:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-683480-m03
	I0603 11:00:45.082799   25542 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-683480-m03
	I0603 11:00:45.082805   25542 round_trippers.go:469] Request Headers:
	I0603 11:00:45.082812   25542 round_trippers.go:473]     Accept: application/json, */*
	I0603 11:00:45.082817   25542 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 11:00:45.087060   25542 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 11:00:45.282609   25542 request.go:629] Waited for 194.373002ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.116:8443/api/v1/nodes/ha-683480-m03
	I0603 11:00:45.282696   25542 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/nodes/ha-683480-m03
	I0603 11:00:45.282707   25542 round_trippers.go:469] Request Headers:
	I0603 11:00:45.282719   25542 round_trippers.go:473]     Accept: application/json, */*
	I0603 11:00:45.282730   25542 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 11:00:45.286486   25542 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 11:00:45.287062   25542 pod_ready.go:92] pod "kube-controller-manager-ha-683480-m03" in "kube-system" namespace has status "Ready":"True"
	I0603 11:00:45.287081   25542 pod_ready.go:81] duration metric: took 400.439226ms for pod "kube-controller-manager-ha-683480-m03" in "kube-system" namespace to be "Ready" ...
	I0603 11:00:45.287095   25542 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-4d9w5" in "kube-system" namespace to be "Ready" ...
	I0603 11:00:45.483111   25542 request.go:629] Waited for 195.940216ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.116:8443/api/v1/namespaces/kube-system/pods/kube-proxy-4d9w5
	I0603 11:00:45.483193   25542 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/namespaces/kube-system/pods/kube-proxy-4d9w5
	I0603 11:00:45.483200   25542 round_trippers.go:469] Request Headers:
	I0603 11:00:45.483211   25542 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 11:00:45.483215   25542 round_trippers.go:473]     Accept: application/json, */*
	I0603 11:00:45.486861   25542 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 11:00:45.683148   25542 request.go:629] Waited for 195.324662ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.116:8443/api/v1/nodes/ha-683480
	I0603 11:00:45.683212   25542 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/nodes/ha-683480
	I0603 11:00:45.683219   25542 round_trippers.go:469] Request Headers:
	I0603 11:00:45.683230   25542 round_trippers.go:473]     Accept: application/json, */*
	I0603 11:00:45.683244   25542 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 11:00:45.686898   25542 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 11:00:45.687647   25542 pod_ready.go:92] pod "kube-proxy-4d9w5" in "kube-system" namespace has status "Ready":"True"
	I0603 11:00:45.687668   25542 pod_ready.go:81] duration metric: took 400.565714ms for pod "kube-proxy-4d9w5" in "kube-system" namespace to be "Ready" ...
	I0603 11:00:45.687677   25542 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-q2xfn" in "kube-system" namespace to be "Ready" ...
	I0603 11:00:45.882795   25542 request.go:629] Waited for 195.058548ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.116:8443/api/v1/namespaces/kube-system/pods/kube-proxy-q2xfn
	I0603 11:00:45.882855   25542 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/namespaces/kube-system/pods/kube-proxy-q2xfn
	I0603 11:00:45.882873   25542 round_trippers.go:469] Request Headers:
	I0603 11:00:45.882883   25542 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 11:00:45.882887   25542 round_trippers.go:473]     Accept: application/json, */*
	I0603 11:00:45.885950   25542 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 11:00:46.083126   25542 request.go:629] Waited for 196.324598ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.116:8443/api/v1/nodes/ha-683480-m02
	I0603 11:00:46.083193   25542 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/nodes/ha-683480-m02
	I0603 11:00:46.083199   25542 round_trippers.go:469] Request Headers:
	I0603 11:00:46.083204   25542 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 11:00:46.083208   25542 round_trippers.go:473]     Accept: application/json, */*
	I0603 11:00:46.086502   25542 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 11:00:46.087224   25542 pod_ready.go:92] pod "kube-proxy-q2xfn" in "kube-system" namespace has status "Ready":"True"
	I0603 11:00:46.087245   25542 pod_ready.go:81] duration metric: took 399.561498ms for pod "kube-proxy-q2xfn" in "kube-system" namespace to be "Ready" ...
	I0603 11:00:46.087258   25542 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-txnhc" in "kube-system" namespace to be "Ready" ...
	I0603 11:00:46.283174   25542 request.go:629] Waited for 195.853901ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.116:8443/api/v1/namespaces/kube-system/pods/kube-proxy-txnhc
	I0603 11:00:46.283255   25542 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/namespaces/kube-system/pods/kube-proxy-txnhc
	I0603 11:00:46.283263   25542 round_trippers.go:469] Request Headers:
	I0603 11:00:46.283271   25542 round_trippers.go:473]     Accept: application/json, */*
	I0603 11:00:46.283274   25542 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 11:00:46.286562   25542 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 11:00:46.482604   25542 request.go:629] Waited for 195.36119ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.116:8443/api/v1/nodes/ha-683480-m03
	I0603 11:00:46.482661   25542 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/nodes/ha-683480-m03
	I0603 11:00:46.482666   25542 round_trippers.go:469] Request Headers:
	I0603 11:00:46.482673   25542 round_trippers.go:473]     Accept: application/json, */*
	I0603 11:00:46.482681   25542 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 11:00:46.486023   25542 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 11:00:46.486658   25542 pod_ready.go:92] pod "kube-proxy-txnhc" in "kube-system" namespace has status "Ready":"True"
	I0603 11:00:46.486673   25542 pod_ready.go:81] duration metric: took 399.409157ms for pod "kube-proxy-txnhc" in "kube-system" namespace to be "Ready" ...
	I0603 11:00:46.486683   25542 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-683480" in "kube-system" namespace to be "Ready" ...
	I0603 11:00:46.682914   25542 request.go:629] Waited for 196.156761ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.116:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-683480
	I0603 11:00:46.682965   25542 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-683480
	I0603 11:00:46.682970   25542 round_trippers.go:469] Request Headers:
	I0603 11:00:46.682977   25542 round_trippers.go:473]     Accept: application/json, */*
	I0603 11:00:46.682981   25542 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 11:00:46.686588   25542 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 11:00:46.882765   25542 request.go:629] Waited for 195.375308ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.116:8443/api/v1/nodes/ha-683480
	I0603 11:00:46.882881   25542 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/nodes/ha-683480
	I0603 11:00:46.882895   25542 round_trippers.go:469] Request Headers:
	I0603 11:00:46.882903   25542 round_trippers.go:473]     Accept: application/json, */*
	I0603 11:00:46.882906   25542 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 11:00:46.886303   25542 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 11:00:46.886864   25542 pod_ready.go:92] pod "kube-scheduler-ha-683480" in "kube-system" namespace has status "Ready":"True"
	I0603 11:00:46.886886   25542 pod_ready.go:81] duration metric: took 400.195281ms for pod "kube-scheduler-ha-683480" in "kube-system" namespace to be "Ready" ...
	I0603 11:00:46.886902   25542 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-683480-m02" in "kube-system" namespace to be "Ready" ...
	I0603 11:00:47.082609   25542 request.go:629] Waited for 195.622546ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.116:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-683480-m02
	I0603 11:00:47.082674   25542 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-683480-m02
	I0603 11:00:47.082680   25542 round_trippers.go:469] Request Headers:
	I0603 11:00:47.082687   25542 round_trippers.go:473]     Accept: application/json, */*
	I0603 11:00:47.082690   25542 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 11:00:47.086184   25542 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 11:00:47.282485   25542 request.go:629] Waited for 195.325073ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.116:8443/api/v1/nodes/ha-683480-m02
	I0603 11:00:47.282554   25542 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/nodes/ha-683480-m02
	I0603 11:00:47.282560   25542 round_trippers.go:469] Request Headers:
	I0603 11:00:47.282568   25542 round_trippers.go:473]     Accept: application/json, */*
	I0603 11:00:47.282572   25542 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 11:00:47.286225   25542 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 11:00:47.286653   25542 pod_ready.go:92] pod "kube-scheduler-ha-683480-m02" in "kube-system" namespace has status "Ready":"True"
	I0603 11:00:47.286669   25542 pod_ready.go:81] duration metric: took 399.759758ms for pod "kube-scheduler-ha-683480-m02" in "kube-system" namespace to be "Ready" ...
	I0603 11:00:47.286679   25542 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-683480-m03" in "kube-system" namespace to be "Ready" ...
	I0603 11:00:47.482793   25542 request.go:629] Waited for 196.036972ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.116:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-683480-m03
	I0603 11:00:47.482847   25542 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-683480-m03
	I0603 11:00:47.482852   25542 round_trippers.go:469] Request Headers:
	I0603 11:00:47.482864   25542 round_trippers.go:473]     Accept: application/json, */*
	I0603 11:00:47.482870   25542 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 11:00:47.486451   25542 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 11:00:47.682325   25542 request.go:629] Waited for 195.298985ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.116:8443/api/v1/nodes/ha-683480-m03
	I0603 11:00:47.682414   25542 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/nodes/ha-683480-m03
	I0603 11:00:47.682420   25542 round_trippers.go:469] Request Headers:
	I0603 11:00:47.682427   25542 round_trippers.go:473]     Accept: application/json, */*
	I0603 11:00:47.682432   25542 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 11:00:47.686692   25542 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 11:00:47.687709   25542 pod_ready.go:92] pod "kube-scheduler-ha-683480-m03" in "kube-system" namespace has status "Ready":"True"
	I0603 11:00:47.687731   25542 pod_ready.go:81] duration metric: took 401.045776ms for pod "kube-scheduler-ha-683480-m03" in "kube-system" namespace to be "Ready" ...
	I0603 11:00:47.687742   25542 pod_ready.go:38] duration metric: took 10.801605649s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0603 11:00:47.687769   25542 api_server.go:52] waiting for apiserver process to appear ...
	I0603 11:00:47.687830   25542 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 11:00:47.706806   25542 api_server.go:72] duration metric: took 18.119656039s to wait for apiserver process to appear ...
	I0603 11:00:47.706833   25542 api_server.go:88] waiting for apiserver healthz status ...
	I0603 11:00:47.706854   25542 api_server.go:253] Checking apiserver healthz at https://192.168.39.116:8443/healthz ...
	I0603 11:00:47.714253   25542 api_server.go:279] https://192.168.39.116:8443/healthz returned 200:
	ok
	I0603 11:00:47.714339   25542 round_trippers.go:463] GET https://192.168.39.116:8443/version
	I0603 11:00:47.714351   25542 round_trippers.go:469] Request Headers:
	I0603 11:00:47.714362   25542 round_trippers.go:473]     Accept: application/json, */*
	I0603 11:00:47.714370   25542 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 11:00:47.715321   25542 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0603 11:00:47.715374   25542 api_server.go:141] control plane version: v1.30.1
	I0603 11:00:47.715387   25542 api_server.go:131] duration metric: took 8.548831ms to wait for apiserver health ...
	I0603 11:00:47.715397   25542 system_pods.go:43] waiting for kube-system pods to appear ...
	I0603 11:00:47.882790   25542 request.go:629] Waited for 167.329748ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.116:8443/api/v1/namespaces/kube-system/pods
	I0603 11:00:47.882876   25542 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/namespaces/kube-system/pods
	I0603 11:00:47.882887   25542 round_trippers.go:469] Request Headers:
	I0603 11:00:47.882897   25542 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 11:00:47.882904   25542 round_trippers.go:473]     Accept: application/json, */*
	I0603 11:00:47.891445   25542 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0603 11:00:47.897563   25542 system_pods.go:59] 24 kube-system pods found
	I0603 11:00:47.897602   25542 system_pods.go:61] "coredns-7db6d8ff4d-8tqf9" [8eab910a-98ed-43db-ac16-d53beb6b7ee4] Running
	I0603 11:00:47.897607   25542 system_pods.go:61] "coredns-7db6d8ff4d-nff86" [02320e91-17ab-4120-b8b9-dcc08234f180] Running
	I0603 11:00:47.897610   25542 system_pods.go:61] "etcd-ha-683480" [b0a866b1-e56e-4c99-90d1-b96b08dc814f] Running
	I0603 11:00:47.897614   25542 system_pods.go:61] "etcd-ha-683480-m02" [ae0c631b-f1b7-4f97-a112-82115e2e3a26] Running
	I0603 11:00:47.897616   25542 system_pods.go:61] "etcd-ha-683480-m03" [b508988f-4dad-4a28-89b7-b6c38e27626f] Running
	I0603 11:00:47.897619   25542 system_pods.go:61] "kindnet-t6fxj" [a1edfc5d-477d-40ed-8702-4916d1e9fcb1] Running
	I0603 11:00:47.897622   25542 system_pods.go:61] "kindnet-zsfhr" [ecb7fc1b-cc53-4b58-8e55-9269608f217f] Running
	I0603 11:00:47.897625   25542 system_pods.go:61] "kindnet-zxhbp" [320e315b-e189-4358-9e56-a4be7d944fae] Running
	I0603 11:00:47.897627   25542 system_pods.go:61] "kube-apiserver-ha-683480" [383ca38e-6dea-45d2-8874-f8f7478b889d] Running
	I0603 11:00:47.897630   25542 system_pods.go:61] "kube-apiserver-ha-683480-m02" [b1fadbf7-5046-4762-928e-d0a86b2c333a] Running
	I0603 11:00:47.897633   25542 system_pods.go:61] "kube-apiserver-ha-683480-m03" [063e6cb5-7f5f-4fa0-a54d-dff4303574da] Running
	I0603 11:00:47.897636   25542 system_pods.go:61] "kube-controller-manager-ha-683480" [3ba095b7-0e4d-41b9-af2d-12d4ce4ae004] Running
	I0603 11:00:47.897639   25542 system_pods.go:61] "kube-controller-manager-ha-683480-m02" [fe54bb1f-7320-40dd-a8a9-f7d1c5d793fe] Running
	I0603 11:00:47.897643   25542 system_pods.go:61] "kube-controller-manager-ha-683480-m03" [6819bdcb-5dd4-43c8-a9c7-d6970609be77] Running
	I0603 11:00:47.897646   25542 system_pods.go:61] "kube-proxy-4d9w5" [708e060d-115a-4b74-bc66-138d62796b50] Running
	I0603 11:00:47.897649   25542 system_pods.go:61] "kube-proxy-q2xfn" [af8c691a-3316-4e6d-8feb-b306d6d5d2f1] Running
	I0603 11:00:47.897651   25542 system_pods.go:61] "kube-proxy-txnhc" [f8fbdd89-d160-4342-94ca-9e049b0e96a8] Running
	I0603 11:00:47.897654   25542 system_pods.go:61] "kube-scheduler-ha-683480" [c57edb18-cdff-4548-acc4-1abbbd906fc5] Running
	I0603 11:00:47.897658   25542 system_pods.go:61] "kube-scheduler-ha-683480-m02" [ce81b254-4edc-425a-8489-14c71f56d7de] Running
	I0603 11:00:47.897660   25542 system_pods.go:61] "kube-scheduler-ha-683480-m03" [be6a6382-a11b-425f-a0bf-551d1254d60a] Running
	I0603 11:00:47.897663   25542 system_pods.go:61] "kube-vip-ha-683480" [aa6a05c5-446e-4179-be45-0f8d33631c89] Running
	I0603 11:00:47.897666   25542 system_pods.go:61] "kube-vip-ha-683480-m02" [5679c930-02ab-4784-8bf1-7e477719a5a6] Running
	I0603 11:00:47.897669   25542 system_pods.go:61] "kube-vip-ha-683480-m03" [b47cab7c-1c30-4828-a351-699fe4935533] Running
	I0603 11:00:47.897680   25542 system_pods.go:61] "storage-provisioner" [a410a98d-73a7-434b-88ce-575c300b2807] Running
	I0603 11:00:47.897685   25542 system_pods.go:74] duration metric: took 182.283499ms to wait for pod list to return data ...
	I0603 11:00:47.897695   25542 default_sa.go:34] waiting for default service account to be created ...
	I0603 11:00:48.083136   25542 request.go:629] Waited for 185.349975ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.116:8443/api/v1/namespaces/default/serviceaccounts
	I0603 11:00:48.083200   25542 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/namespaces/default/serviceaccounts
	I0603 11:00:48.083208   25542 round_trippers.go:469] Request Headers:
	I0603 11:00:48.083218   25542 round_trippers.go:473]     Accept: application/json, */*
	I0603 11:00:48.083226   25542 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 11:00:48.088385   25542 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0603 11:00:48.088529   25542 default_sa.go:45] found service account: "default"
	I0603 11:00:48.088547   25542 default_sa.go:55] duration metric: took 190.845833ms for default service account to be created ...
	I0603 11:00:48.088555   25542 system_pods.go:116] waiting for k8s-apps to be running ...
	I0603 11:00:48.282975   25542 request.go:629] Waited for 194.346284ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.116:8443/api/v1/namespaces/kube-system/pods
	I0603 11:00:48.283061   25542 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/namespaces/kube-system/pods
	I0603 11:00:48.283070   25542 round_trippers.go:469] Request Headers:
	I0603 11:00:48.283089   25542 round_trippers.go:473]     Accept: application/json, */*
	I0603 11:00:48.283098   25542 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 11:00:48.289555   25542 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0603 11:00:48.297090   25542 system_pods.go:86] 24 kube-system pods found
	I0603 11:00:48.297125   25542 system_pods.go:89] "coredns-7db6d8ff4d-8tqf9" [8eab910a-98ed-43db-ac16-d53beb6b7ee4] Running
	I0603 11:00:48.297133   25542 system_pods.go:89] "coredns-7db6d8ff4d-nff86" [02320e91-17ab-4120-b8b9-dcc08234f180] Running
	I0603 11:00:48.297139   25542 system_pods.go:89] "etcd-ha-683480" [b0a866b1-e56e-4c99-90d1-b96b08dc814f] Running
	I0603 11:00:48.297145   25542 system_pods.go:89] "etcd-ha-683480-m02" [ae0c631b-f1b7-4f97-a112-82115e2e3a26] Running
	I0603 11:00:48.297151   25542 system_pods.go:89] "etcd-ha-683480-m03" [b508988f-4dad-4a28-89b7-b6c38e27626f] Running
	I0603 11:00:48.297157   25542 system_pods.go:89] "kindnet-t6fxj" [a1edfc5d-477d-40ed-8702-4916d1e9fcb1] Running
	I0603 11:00:48.297163   25542 system_pods.go:89] "kindnet-zsfhr" [ecb7fc1b-cc53-4b58-8e55-9269608f217f] Running
	I0603 11:00:48.297170   25542 system_pods.go:89] "kindnet-zxhbp" [320e315b-e189-4358-9e56-a4be7d944fae] Running
	I0603 11:00:48.297180   25542 system_pods.go:89] "kube-apiserver-ha-683480" [383ca38e-6dea-45d2-8874-f8f7478b889d] Running
	I0603 11:00:48.297189   25542 system_pods.go:89] "kube-apiserver-ha-683480-m02" [b1fadbf7-5046-4762-928e-d0a86b2c333a] Running
	I0603 11:00:48.297199   25542 system_pods.go:89] "kube-apiserver-ha-683480-m03" [063e6cb5-7f5f-4fa0-a54d-dff4303574da] Running
	I0603 11:00:48.297210   25542 system_pods.go:89] "kube-controller-manager-ha-683480" [3ba095b7-0e4d-41b9-af2d-12d4ce4ae004] Running
	I0603 11:00:48.297220   25542 system_pods.go:89] "kube-controller-manager-ha-683480-m02" [fe54bb1f-7320-40dd-a8a9-f7d1c5d793fe] Running
	I0603 11:00:48.297229   25542 system_pods.go:89] "kube-controller-manager-ha-683480-m03" [6819bdcb-5dd4-43c8-a9c7-d6970609be77] Running
	I0603 11:00:48.297236   25542 system_pods.go:89] "kube-proxy-4d9w5" [708e060d-115a-4b74-bc66-138d62796b50] Running
	I0603 11:00:48.297243   25542 system_pods.go:89] "kube-proxy-q2xfn" [af8c691a-3316-4e6d-8feb-b306d6d5d2f1] Running
	I0603 11:00:48.297253   25542 system_pods.go:89] "kube-proxy-txnhc" [f8fbdd89-d160-4342-94ca-9e049b0e96a8] Running
	I0603 11:00:48.297262   25542 system_pods.go:89] "kube-scheduler-ha-683480" [c57edb18-cdff-4548-acc4-1abbbd906fc5] Running
	I0603 11:00:48.297272   25542 system_pods.go:89] "kube-scheduler-ha-683480-m02" [ce81b254-4edc-425a-8489-14c71f56d7de] Running
	I0603 11:00:48.297279   25542 system_pods.go:89] "kube-scheduler-ha-683480-m03" [be6a6382-a11b-425f-a0bf-551d1254d60a] Running
	I0603 11:00:48.297288   25542 system_pods.go:89] "kube-vip-ha-683480" [aa6a05c5-446e-4179-be45-0f8d33631c89] Running
	I0603 11:00:48.297294   25542 system_pods.go:89] "kube-vip-ha-683480-m02" [5679c930-02ab-4784-8bf1-7e477719a5a6] Running
	I0603 11:00:48.297303   25542 system_pods.go:89] "kube-vip-ha-683480-m03" [b47cab7c-1c30-4828-a351-699fe4935533] Running
	I0603 11:00:48.297310   25542 system_pods.go:89] "storage-provisioner" [a410a98d-73a7-434b-88ce-575c300b2807] Running
	I0603 11:00:48.297321   25542 system_pods.go:126] duration metric: took 208.759907ms to wait for k8s-apps to be running ...
	I0603 11:00:48.297335   25542 system_svc.go:44] waiting for kubelet service to be running ....
	I0603 11:00:48.297388   25542 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0603 11:00:48.313772   25542 system_svc.go:56] duration metric: took 16.427175ms WaitForService to wait for kubelet
	I0603 11:00:48.313806   25542 kubeadm.go:576] duration metric: took 18.72665881s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0603 11:00:48.313838   25542 node_conditions.go:102] verifying NodePressure condition ...
	I0603 11:00:48.483252   25542 request.go:629] Waited for 169.35007ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.116:8443/api/v1/nodes
	I0603 11:00:48.483313   25542 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/nodes
	I0603 11:00:48.483320   25542 round_trippers.go:469] Request Headers:
	I0603 11:00:48.483329   25542 round_trippers.go:473]     Accept: application/json, */*
	I0603 11:00:48.483335   25542 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 11:00:48.487622   25542 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 11:00:48.488815   25542 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0603 11:00:48.488836   25542 node_conditions.go:123] node cpu capacity is 2
	I0603 11:00:48.488854   25542 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0603 11:00:48.488858   25542 node_conditions.go:123] node cpu capacity is 2
	I0603 11:00:48.488861   25542 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0603 11:00:48.488864   25542 node_conditions.go:123] node cpu capacity is 2
	I0603 11:00:48.488868   25542 node_conditions.go:105] duration metric: took 175.026386ms to run NodePressure ...
	I0603 11:00:48.488878   25542 start.go:240] waiting for startup goroutines ...
	I0603 11:00:48.488898   25542 start.go:254] writing updated cluster config ...
	I0603 11:00:48.489166   25542 ssh_runner.go:195] Run: rm -f paused
	I0603 11:00:48.541365   25542 start.go:600] kubectl: 1.30.1, cluster: 1.30.1 (minor skew: 0)
	I0603 11:00:48.543638   25542 out.go:177] * Done! kubectl is now configured to use "ha-683480" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Jun 03 11:05:21 ha-683480 crio[677]: time="2024-06-03 11:05:21.487728134Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1717412721487709350,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154742,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=4444f64d-528c-4f21-ac5d-7d211560b132 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 03 11:05:21 ha-683480 crio[677]: time="2024-06-03 11:05:21.488342630Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=18d5a22a-393e-4181-9471-01bafd33e8c0 name=/runtime.v1.RuntimeService/ListContainers
	Jun 03 11:05:21 ha-683480 crio[677]: time="2024-06-03 11:05:21.488396056Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=18d5a22a-393e-4181-9471-01bafd33e8c0 name=/runtime.v1.RuntimeService/ListContainers
	Jun 03 11:05:21 ha-683480 crio[677]: time="2024-06-03 11:05:21.488628078Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:348419ceaffc348fe3779838e8b27e8baa3aa566be3f4c329aea8b701917349c,PodSandboxId:d32d79da82b93361a47376b8d8beec88e0c5d9097ed7a7450c63de0ee96d230f,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1717412452793821524,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-mvpcm,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: fe7a8238-754b-43ce-8080-48e39c548383,},Annotations:map[string]string{io.kubernetes.container.hash: 17542a28,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b5e9b65b02107aa343d9bd2938c82d12641166c15c0364265fb74b1a00b58a60,PodSandboxId:b1b8dc93262494d7c16fb61879ea3220c5decc3e129bda003d03246037cb82a0,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1717412239593591044,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a410a98d-73a7-434b-88ce-575c300b2807,},Annotations:map[string]string{io.kubernetes.container.hash: c0c86aa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.term
inationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fdbecc258023e10eac66da5599945eae2f7f8735769b825a69aea8b2effce668,PodSandboxId:62bef471ea4a403424478ea00a89f4311f3d11aea1fc0301abe18ddf44455091,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1717412239551891220,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-8tqf9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8eab910a-98ed-43db-ac16-d53beb6b7ee4,},Annotations:map[string]string{io.kubernetes.container.hash: 38c633a6,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"nam
e\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aa5e3aca86502907c8d16e6a2327b8f4298b6076617819ceed2b250ae9b24fe8,PodSandboxId:41da25dac8c4818183c067f43713ee94cebef64eab1ffb890510822bc9712a41,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1717412239525725221,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-nff86,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 02320e91-17a
b-4120-b8b9-dcc08234f180,},Annotations:map[string]string{io.kubernetes.container.hash: 9dfd16ce,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:995fa288cd9162aa7fa350ae7a02800593a524c7300a6fa984b62ba4b928891b,PodSandboxId:e2f8a60370d3fd1695a709fe26efc9665a764a8ede97163357b9c15c4cb5fb32,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:2b34f64609858041e706963bcd73273c087360ca240f1f9b37db6f148edb1266,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CON
TAINER_RUNNING,CreatedAt:1717412238014847308,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-zxhbp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 320e315b-e189-4358-9e56-a4be7d944fae,},Annotations:map[string]string{io.kubernetes.container.hash: ae8d6a68,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bcb102231e3a6bc3ea0cc39665baaebb0a97c42874b6cd34e86c04e87532df4f,PodSandboxId:6812552c2a4ab53e39123a83312dfad25c506cf5157864aa7732c91d6b7eebf2,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:1717412233
855123394,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4d9w5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 708e060d-115a-4b74-bc66-138d62796b50,},Annotations:map[string]string{io.kubernetes.container.hash: 4749de6d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2542929b8eaa1ecd8c858dbb7e4812ddb5121109c3c92127fa7eaae86849ebda,PodSandboxId:8990a20edbd369db84d6c96fcb753c487186298a6ec2e2e0c0fe3ce761ef55b8,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:171741221681
6707000,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-683480,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dfc66acc1754150cf4e24f38d1b191d9,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c282307764128f62fdee736d5e1ecddfbca0ae7ae2f78b7a78cbdb2dcede8556,PodSandboxId:860a510241592c9daa1fd1d8b28ba6314d6102372dd3005ee2f1fc332eaa5fbb,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1717412213949208807,Labels:map[string]string{
io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-683480,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e56ceee947d891ba1cd0986590072af7,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3e27550ee88e8dcb6316daece49f9840028efa3091db03e5549e1e3dbbd8ad59,PodSandboxId:a55199d2713b2227114c24c6ea32028395b589674496612fac0e499dc8774213,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1717412213989413462,Labels:map[string]string{io.kubernete
s.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-683480,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 003e33f91c92b780f1d2cb57410c03e9,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:09fff5459f24c748a0e085f496bf2b65db572d97be0afe906f05511398bdb0ad,PodSandboxId:86b1d4bcd541d31a17ad320bdd376b8fc84deff2fe6e38053aa471139f753d0e,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1717412213926343118,Labels:map[string]string{io.kubernetes.container.n
ame: etcd,io.kubernetes.pod.name: etcd-ha-683480,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 36cfff3e1576ec0ef9aa4746d32a32e3,},Annotations:map[string]string{io.kubernetes.container.hash: 40554970,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:200682c1dc43f01036807986e0c3bfe0b422726ec352be0df5e42fa79426ed79,PodSandboxId:117e05a9216ba0cb39b45fc899065d9fbba904cb50146ebbd3a11d129c956829,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1717412213885854963,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.nam
e: kube-apiserver-ha-683480,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b448fd1c84d729fa6b033c44220aea0b,},Annotations:map[string]string{io.kubernetes.container.hash: 25a67648,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=18d5a22a-393e-4181-9471-01bafd33e8c0 name=/runtime.v1.RuntimeService/ListContainers
	Jun 03 11:05:21 ha-683480 crio[677]: time="2024-06-03 11:05:21.527164063Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=ac070dc8-2916-46f9-b608-ade40d376d06 name=/runtime.v1.RuntimeService/Version
	Jun 03 11:05:21 ha-683480 crio[677]: time="2024-06-03 11:05:21.527257927Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=ac070dc8-2916-46f9-b608-ade40d376d06 name=/runtime.v1.RuntimeService/Version
	Jun 03 11:05:21 ha-683480 crio[677]: time="2024-06-03 11:05:21.528510283Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=788b46b9-ae51-458e-98d1-a70c84df4565 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 03 11:05:21 ha-683480 crio[677]: time="2024-06-03 11:05:21.529298035Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1717412721529272989,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154742,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=788b46b9-ae51-458e-98d1-a70c84df4565 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 03 11:05:21 ha-683480 crio[677]: time="2024-06-03 11:05:21.530043462Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a3615563-0dbe-4f75-b3fc-2b6478a1b064 name=/runtime.v1.RuntimeService/ListContainers
	Jun 03 11:05:21 ha-683480 crio[677]: time="2024-06-03 11:05:21.530097863Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a3615563-0dbe-4f75-b3fc-2b6478a1b064 name=/runtime.v1.RuntimeService/ListContainers
	Jun 03 11:05:21 ha-683480 crio[677]: time="2024-06-03 11:05:21.530363534Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:348419ceaffc348fe3779838e8b27e8baa3aa566be3f4c329aea8b701917349c,PodSandboxId:d32d79da82b93361a47376b8d8beec88e0c5d9097ed7a7450c63de0ee96d230f,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1717412452793821524,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-mvpcm,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: fe7a8238-754b-43ce-8080-48e39c548383,},Annotations:map[string]string{io.kubernetes.container.hash: 17542a28,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b5e9b65b02107aa343d9bd2938c82d12641166c15c0364265fb74b1a00b58a60,PodSandboxId:b1b8dc93262494d7c16fb61879ea3220c5decc3e129bda003d03246037cb82a0,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1717412239593591044,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a410a98d-73a7-434b-88ce-575c300b2807,},Annotations:map[string]string{io.kubernetes.container.hash: c0c86aa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.term
inationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fdbecc258023e10eac66da5599945eae2f7f8735769b825a69aea8b2effce668,PodSandboxId:62bef471ea4a403424478ea00a89f4311f3d11aea1fc0301abe18ddf44455091,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1717412239551891220,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-8tqf9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8eab910a-98ed-43db-ac16-d53beb6b7ee4,},Annotations:map[string]string{io.kubernetes.container.hash: 38c633a6,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"nam
e\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aa5e3aca86502907c8d16e6a2327b8f4298b6076617819ceed2b250ae9b24fe8,PodSandboxId:41da25dac8c4818183c067f43713ee94cebef64eab1ffb890510822bc9712a41,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1717412239525725221,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-nff86,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 02320e91-17a
b-4120-b8b9-dcc08234f180,},Annotations:map[string]string{io.kubernetes.container.hash: 9dfd16ce,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:995fa288cd9162aa7fa350ae7a02800593a524c7300a6fa984b62ba4b928891b,PodSandboxId:e2f8a60370d3fd1695a709fe26efc9665a764a8ede97163357b9c15c4cb5fb32,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:2b34f64609858041e706963bcd73273c087360ca240f1f9b37db6f148edb1266,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CON
TAINER_RUNNING,CreatedAt:1717412238014847308,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-zxhbp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 320e315b-e189-4358-9e56-a4be7d944fae,},Annotations:map[string]string{io.kubernetes.container.hash: ae8d6a68,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bcb102231e3a6bc3ea0cc39665baaebb0a97c42874b6cd34e86c04e87532df4f,PodSandboxId:6812552c2a4ab53e39123a83312dfad25c506cf5157864aa7732c91d6b7eebf2,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:1717412233
855123394,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4d9w5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 708e060d-115a-4b74-bc66-138d62796b50,},Annotations:map[string]string{io.kubernetes.container.hash: 4749de6d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2542929b8eaa1ecd8c858dbb7e4812ddb5121109c3c92127fa7eaae86849ebda,PodSandboxId:8990a20edbd369db84d6c96fcb753c487186298a6ec2e2e0c0fe3ce761ef55b8,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:171741221681
6707000,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-683480,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dfc66acc1754150cf4e24f38d1b191d9,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c282307764128f62fdee736d5e1ecddfbca0ae7ae2f78b7a78cbdb2dcede8556,PodSandboxId:860a510241592c9daa1fd1d8b28ba6314d6102372dd3005ee2f1fc332eaa5fbb,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1717412213949208807,Labels:map[string]string{
io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-683480,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e56ceee947d891ba1cd0986590072af7,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3e27550ee88e8dcb6316daece49f9840028efa3091db03e5549e1e3dbbd8ad59,PodSandboxId:a55199d2713b2227114c24c6ea32028395b589674496612fac0e499dc8774213,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1717412213989413462,Labels:map[string]string{io.kubernete
s.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-683480,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 003e33f91c92b780f1d2cb57410c03e9,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:09fff5459f24c748a0e085f496bf2b65db572d97be0afe906f05511398bdb0ad,PodSandboxId:86b1d4bcd541d31a17ad320bdd376b8fc84deff2fe6e38053aa471139f753d0e,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1717412213926343118,Labels:map[string]string{io.kubernetes.container.n
ame: etcd,io.kubernetes.pod.name: etcd-ha-683480,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 36cfff3e1576ec0ef9aa4746d32a32e3,},Annotations:map[string]string{io.kubernetes.container.hash: 40554970,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:200682c1dc43f01036807986e0c3bfe0b422726ec352be0df5e42fa79426ed79,PodSandboxId:117e05a9216ba0cb39b45fc899065d9fbba904cb50146ebbd3a11d129c956829,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1717412213885854963,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.nam
e: kube-apiserver-ha-683480,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b448fd1c84d729fa6b033c44220aea0b,},Annotations:map[string]string{io.kubernetes.container.hash: 25a67648,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=a3615563-0dbe-4f75-b3fc-2b6478a1b064 name=/runtime.v1.RuntimeService/ListContainers
	Jun 03 11:05:21 ha-683480 crio[677]: time="2024-06-03 11:05:21.572864654Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=d02b8fc1-79cb-40c6-b6c7-47da38729d44 name=/runtime.v1.RuntimeService/Version
	Jun 03 11:05:21 ha-683480 crio[677]: time="2024-06-03 11:05:21.572963011Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=d02b8fc1-79cb-40c6-b6c7-47da38729d44 name=/runtime.v1.RuntimeService/Version
	Jun 03 11:05:21 ha-683480 crio[677]: time="2024-06-03 11:05:21.574577953Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=c3d286b9-735a-4a33-b266-b4ae5b7b70fc name=/runtime.v1.ImageService/ImageFsInfo
	Jun 03 11:05:21 ha-683480 crio[677]: time="2024-06-03 11:05:21.575132940Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1717412721575106128,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154742,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=c3d286b9-735a-4a33-b266-b4ae5b7b70fc name=/runtime.v1.ImageService/ImageFsInfo
	Jun 03 11:05:21 ha-683480 crio[677]: time="2024-06-03 11:05:21.575562334Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=36f7a057-e8e8-4e9f-8a09-ab0e59106117 name=/runtime.v1.RuntimeService/ListContainers
	Jun 03 11:05:21 ha-683480 crio[677]: time="2024-06-03 11:05:21.575649021Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=36f7a057-e8e8-4e9f-8a09-ab0e59106117 name=/runtime.v1.RuntimeService/ListContainers
	Jun 03 11:05:21 ha-683480 crio[677]: time="2024-06-03 11:05:21.575880084Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:348419ceaffc348fe3779838e8b27e8baa3aa566be3f4c329aea8b701917349c,PodSandboxId:d32d79da82b93361a47376b8d8beec88e0c5d9097ed7a7450c63de0ee96d230f,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1717412452793821524,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-mvpcm,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: fe7a8238-754b-43ce-8080-48e39c548383,},Annotations:map[string]string{io.kubernetes.container.hash: 17542a28,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b5e9b65b02107aa343d9bd2938c82d12641166c15c0364265fb74b1a00b58a60,PodSandboxId:b1b8dc93262494d7c16fb61879ea3220c5decc3e129bda003d03246037cb82a0,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1717412239593591044,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a410a98d-73a7-434b-88ce-575c300b2807,},Annotations:map[string]string{io.kubernetes.container.hash: c0c86aa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.term
inationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fdbecc258023e10eac66da5599945eae2f7f8735769b825a69aea8b2effce668,PodSandboxId:62bef471ea4a403424478ea00a89f4311f3d11aea1fc0301abe18ddf44455091,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1717412239551891220,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-8tqf9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8eab910a-98ed-43db-ac16-d53beb6b7ee4,},Annotations:map[string]string{io.kubernetes.container.hash: 38c633a6,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"nam
e\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aa5e3aca86502907c8d16e6a2327b8f4298b6076617819ceed2b250ae9b24fe8,PodSandboxId:41da25dac8c4818183c067f43713ee94cebef64eab1ffb890510822bc9712a41,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1717412239525725221,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-nff86,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 02320e91-17a
b-4120-b8b9-dcc08234f180,},Annotations:map[string]string{io.kubernetes.container.hash: 9dfd16ce,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:995fa288cd9162aa7fa350ae7a02800593a524c7300a6fa984b62ba4b928891b,PodSandboxId:e2f8a60370d3fd1695a709fe26efc9665a764a8ede97163357b9c15c4cb5fb32,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:2b34f64609858041e706963bcd73273c087360ca240f1f9b37db6f148edb1266,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CON
TAINER_RUNNING,CreatedAt:1717412238014847308,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-zxhbp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 320e315b-e189-4358-9e56-a4be7d944fae,},Annotations:map[string]string{io.kubernetes.container.hash: ae8d6a68,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bcb102231e3a6bc3ea0cc39665baaebb0a97c42874b6cd34e86c04e87532df4f,PodSandboxId:6812552c2a4ab53e39123a83312dfad25c506cf5157864aa7732c91d6b7eebf2,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:1717412233
855123394,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4d9w5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 708e060d-115a-4b74-bc66-138d62796b50,},Annotations:map[string]string{io.kubernetes.container.hash: 4749de6d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2542929b8eaa1ecd8c858dbb7e4812ddb5121109c3c92127fa7eaae86849ebda,PodSandboxId:8990a20edbd369db84d6c96fcb753c487186298a6ec2e2e0c0fe3ce761ef55b8,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:171741221681
6707000,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-683480,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dfc66acc1754150cf4e24f38d1b191d9,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c282307764128f62fdee736d5e1ecddfbca0ae7ae2f78b7a78cbdb2dcede8556,PodSandboxId:860a510241592c9daa1fd1d8b28ba6314d6102372dd3005ee2f1fc332eaa5fbb,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1717412213949208807,Labels:map[string]string{
io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-683480,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e56ceee947d891ba1cd0986590072af7,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3e27550ee88e8dcb6316daece49f9840028efa3091db03e5549e1e3dbbd8ad59,PodSandboxId:a55199d2713b2227114c24c6ea32028395b589674496612fac0e499dc8774213,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1717412213989413462,Labels:map[string]string{io.kubernete
s.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-683480,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 003e33f91c92b780f1d2cb57410c03e9,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:09fff5459f24c748a0e085f496bf2b65db572d97be0afe906f05511398bdb0ad,PodSandboxId:86b1d4bcd541d31a17ad320bdd376b8fc84deff2fe6e38053aa471139f753d0e,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1717412213926343118,Labels:map[string]string{io.kubernetes.container.n
ame: etcd,io.kubernetes.pod.name: etcd-ha-683480,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 36cfff3e1576ec0ef9aa4746d32a32e3,},Annotations:map[string]string{io.kubernetes.container.hash: 40554970,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:200682c1dc43f01036807986e0c3bfe0b422726ec352be0df5e42fa79426ed79,PodSandboxId:117e05a9216ba0cb39b45fc899065d9fbba904cb50146ebbd3a11d129c956829,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1717412213885854963,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.nam
e: kube-apiserver-ha-683480,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b448fd1c84d729fa6b033c44220aea0b,},Annotations:map[string]string{io.kubernetes.container.hash: 25a67648,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=36f7a057-e8e8-4e9f-8a09-ab0e59106117 name=/runtime.v1.RuntimeService/ListContainers
	Jun 03 11:05:21 ha-683480 crio[677]: time="2024-06-03 11:05:21.616162575Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=a3f657d6-96b5-4666-b76e-e8ca07766cdb name=/runtime.v1.RuntimeService/Version
	Jun 03 11:05:21 ha-683480 crio[677]: time="2024-06-03 11:05:21.616244967Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=a3f657d6-96b5-4666-b76e-e8ca07766cdb name=/runtime.v1.RuntimeService/Version
	Jun 03 11:05:21 ha-683480 crio[677]: time="2024-06-03 11:05:21.617501290Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=bc857f0e-1241-4607-8b21-bd000bdc46bf name=/runtime.v1.ImageService/ImageFsInfo
	Jun 03 11:05:21 ha-683480 crio[677]: time="2024-06-03 11:05:21.617931464Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1717412721617911226,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154742,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=bc857f0e-1241-4607-8b21-bd000bdc46bf name=/runtime.v1.ImageService/ImageFsInfo
	Jun 03 11:05:21 ha-683480 crio[677]: time="2024-06-03 11:05:21.618652496Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=5cf07188-aca7-4e60-9946-ab6a5a29927d name=/runtime.v1.RuntimeService/ListContainers
	Jun 03 11:05:21 ha-683480 crio[677]: time="2024-06-03 11:05:21.618701696Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=5cf07188-aca7-4e60-9946-ab6a5a29927d name=/runtime.v1.RuntimeService/ListContainers
	Jun 03 11:05:21 ha-683480 crio[677]: time="2024-06-03 11:05:21.618934295Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:348419ceaffc348fe3779838e8b27e8baa3aa566be3f4c329aea8b701917349c,PodSandboxId:d32d79da82b93361a47376b8d8beec88e0c5d9097ed7a7450c63de0ee96d230f,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1717412452793821524,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-mvpcm,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: fe7a8238-754b-43ce-8080-48e39c548383,},Annotations:map[string]string{io.kubernetes.container.hash: 17542a28,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b5e9b65b02107aa343d9bd2938c82d12641166c15c0364265fb74b1a00b58a60,PodSandboxId:b1b8dc93262494d7c16fb61879ea3220c5decc3e129bda003d03246037cb82a0,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1717412239593591044,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a410a98d-73a7-434b-88ce-575c300b2807,},Annotations:map[string]string{io.kubernetes.container.hash: c0c86aa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.term
inationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fdbecc258023e10eac66da5599945eae2f7f8735769b825a69aea8b2effce668,PodSandboxId:62bef471ea4a403424478ea00a89f4311f3d11aea1fc0301abe18ddf44455091,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1717412239551891220,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-8tqf9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8eab910a-98ed-43db-ac16-d53beb6b7ee4,},Annotations:map[string]string{io.kubernetes.container.hash: 38c633a6,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"nam
e\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aa5e3aca86502907c8d16e6a2327b8f4298b6076617819ceed2b250ae9b24fe8,PodSandboxId:41da25dac8c4818183c067f43713ee94cebef64eab1ffb890510822bc9712a41,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1717412239525725221,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-nff86,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 02320e91-17a
b-4120-b8b9-dcc08234f180,},Annotations:map[string]string{io.kubernetes.container.hash: 9dfd16ce,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:995fa288cd9162aa7fa350ae7a02800593a524c7300a6fa984b62ba4b928891b,PodSandboxId:e2f8a60370d3fd1695a709fe26efc9665a764a8ede97163357b9c15c4cb5fb32,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:2b34f64609858041e706963bcd73273c087360ca240f1f9b37db6f148edb1266,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CON
TAINER_RUNNING,CreatedAt:1717412238014847308,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-zxhbp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 320e315b-e189-4358-9e56-a4be7d944fae,},Annotations:map[string]string{io.kubernetes.container.hash: ae8d6a68,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bcb102231e3a6bc3ea0cc39665baaebb0a97c42874b6cd34e86c04e87532df4f,PodSandboxId:6812552c2a4ab53e39123a83312dfad25c506cf5157864aa7732c91d6b7eebf2,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:1717412233
855123394,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4d9w5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 708e060d-115a-4b74-bc66-138d62796b50,},Annotations:map[string]string{io.kubernetes.container.hash: 4749de6d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2542929b8eaa1ecd8c858dbb7e4812ddb5121109c3c92127fa7eaae86849ebda,PodSandboxId:8990a20edbd369db84d6c96fcb753c487186298a6ec2e2e0c0fe3ce761ef55b8,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:171741221681
6707000,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-683480,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dfc66acc1754150cf4e24f38d1b191d9,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c282307764128f62fdee736d5e1ecddfbca0ae7ae2f78b7a78cbdb2dcede8556,PodSandboxId:860a510241592c9daa1fd1d8b28ba6314d6102372dd3005ee2f1fc332eaa5fbb,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1717412213949208807,Labels:map[string]string{
io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-683480,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e56ceee947d891ba1cd0986590072af7,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3e27550ee88e8dcb6316daece49f9840028efa3091db03e5549e1e3dbbd8ad59,PodSandboxId:a55199d2713b2227114c24c6ea32028395b589674496612fac0e499dc8774213,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1717412213989413462,Labels:map[string]string{io.kubernete
s.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-683480,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 003e33f91c92b780f1d2cb57410c03e9,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:09fff5459f24c748a0e085f496bf2b65db572d97be0afe906f05511398bdb0ad,PodSandboxId:86b1d4bcd541d31a17ad320bdd376b8fc84deff2fe6e38053aa471139f753d0e,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1717412213926343118,Labels:map[string]string{io.kubernetes.container.n
ame: etcd,io.kubernetes.pod.name: etcd-ha-683480,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 36cfff3e1576ec0ef9aa4746d32a32e3,},Annotations:map[string]string{io.kubernetes.container.hash: 40554970,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:200682c1dc43f01036807986e0c3bfe0b422726ec352be0df5e42fa79426ed79,PodSandboxId:117e05a9216ba0cb39b45fc899065d9fbba904cb50146ebbd3a11d129c956829,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1717412213885854963,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.nam
e: kube-apiserver-ha-683480,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b448fd1c84d729fa6b033c44220aea0b,},Annotations:map[string]string{io.kubernetes.container.hash: 25a67648,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=5cf07188-aca7-4e60-9946-ab6a5a29927d name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	348419ceaffc3       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   4 minutes ago       Running             busybox                   0                   d32d79da82b93       busybox-fc5497c4f-mvpcm
	b5e9b65b02107       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      8 minutes ago       Running             storage-provisioner       0                   b1b8dc9326249       storage-provisioner
	fdbecc258023e       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      8 minutes ago       Running             coredns                   0                   62bef471ea4a4       coredns-7db6d8ff4d-8tqf9
	aa5e3aca86502       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      8 minutes ago       Running             coredns                   0                   41da25dac8c48       coredns-7db6d8ff4d-nff86
	995fa288cd916       docker.io/kindest/kindnetd@sha256:2b34f64609858041e706963bcd73273c087360ca240f1f9b37db6f148edb1266    8 minutes ago       Running             kindnet-cni               0                   e2f8a60370d3f       kindnet-zxhbp
	bcb102231e3a6       747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd                                      8 minutes ago       Running             kube-proxy                0                   6812552c2a4ab       kube-proxy-4d9w5
	2542929b8eaa1       ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f     8 minutes ago       Running             kube-vip                  0                   8990a20edbd36       kube-vip-ha-683480
	3e27550ee88e8       25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c                                      8 minutes ago       Running             kube-controller-manager   0                   a55199d2713b2       kube-controller-manager-ha-683480
	c282307764128       a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035                                      8 minutes ago       Running             kube-scheduler            0                   860a510241592       kube-scheduler-ha-683480
	09fff5459f24c       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      8 minutes ago       Running             etcd                      0                   86b1d4bcd541d       etcd-ha-683480
	200682c1dc43f       91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a                                      8 minutes ago       Running             kube-apiserver            0                   117e05a9216ba       kube-apiserver-ha-683480
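
	For reference, a container status table like the one above can normally be reproduced on the node itself with CRI-O's crictl client. This is an illustrative sketch only, assuming the ha-683480 profile is still running and crictl is available inside the minikube VM; it is not necessarily the exact command the log collector ran:
	  out/minikube-linux-amd64 -p ha-683480 ssh -- sudo crictl ps -a
	  out/minikube-linux-amd64 -p ha-683480 ssh -- sudo crictl inspect <container-id>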
	
	
	==> coredns [aa5e3aca86502907c8d16e6a2327b8f4298b6076617819ceed2b250ae9b24fe8] <==
	[INFO] 10.244.0.4:50785 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.001418789s
	[INFO] 10.244.2.2:45411 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.001774399s
	[INFO] 10.244.1.2:53834 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.003893614s
	[INFO] 10.244.1.2:48466 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000159838s
	[INFO] 10.244.1.2:57388 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000158737s
	[INFO] 10.244.1.2:59258 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00009417s
	[INFO] 10.244.0.4:59067 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001995491s
	[INFO] 10.244.0.4:33658 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000077694s
	[INFO] 10.244.2.2:56134 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000146189s
	[INFO] 10.244.2.2:42897 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001874015s
	[INFO] 10.244.2.2:49555 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000079926s
	[INFO] 10.244.1.2:49977 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000098794s
	[INFO] 10.244.1.2:55522 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000070995s
	[INFO] 10.244.1.2:47166 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000064061s
	[INFO] 10.244.0.4:52772 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000107779s
	[INFO] 10.244.0.4:34695 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000110706s
	[INFO] 10.244.2.2:47248 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00010537s
	[INFO] 10.244.1.2:52200 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000175618s
	[INFO] 10.244.1.2:56731 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000211211s
	[INFO] 10.244.1.2:47156 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000137189s
	[INFO] 10.244.1.2:57441 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000161046s
	[INFO] 10.244.0.4:45937 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000064288s
	[INFO] 10.244.0.4:50125 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.00003887s
	[INFO] 10.244.2.2:38937 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000134308s
	[INFO] 10.244.2.2:34039 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000085147s
	
	
	==> coredns [fdbecc258023e10eac66da5599945eae2f7f8735769b825a69aea8b2effce668] <==
	[INFO] 10.244.1.2:51172 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000127457s
	[INFO] 10.244.1.2:44058 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000217914s
	[INFO] 10.244.1.2:60397 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.013328418s
	[INFO] 10.244.1.2:34848 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000138348s
	[INFO] 10.244.0.4:53254 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000147619s
	[INFO] 10.244.0.4:37575 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000103362s
	[INFO] 10.244.0.4:54948 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000181862s
	[INFO] 10.244.0.4:39944 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001365258s
	[INFO] 10.244.0.4:55239 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.00017828s
	[INFO] 10.244.0.4:57467 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000097919s
	[INFO] 10.244.2.2:35971 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000096406s
	[INFO] 10.244.2.2:38423 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001334812s
	[INFO] 10.244.2.2:42352 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000153771s
	[INFO] 10.244.2.2:40734 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000099488s
	[INFO] 10.244.2.2:34598 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000136946s
	[INFO] 10.244.1.2:54219 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000087067s
	[INFO] 10.244.0.4:58452 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000093948s
	[INFO] 10.244.0.4:35784 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000061499s
	[INFO] 10.244.2.2:54391 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000149082s
	[INFO] 10.244.2.2:39850 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000109311s
	[INFO] 10.244.2.2:39330 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000101321s
	[INFO] 10.244.0.4:56550 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000137331s
	[INFO] 10.244.0.4:42317 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000097716s
	[INFO] 10.244.2.2:34210 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000106975s
	[INFO] 10.244.2.2:40755 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.00028708s
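
	The two coredns blocks above are per-container logs captured from CRI-O. Roughly the same output can usually be pulled through the API server instead; a sketch, assuming the kubeconfig context for this profile is named ha-683480 and the pods still carry the standard k8s-app=kube-dns label:
	  kubectl --context ha-683480 -n kube-system logs coredns-7db6d8ff4d-8tqf9
	  kubectl --context ha-683480 -n kube-system logs -l k8s-app=kube-dns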
	
	
	==> describe nodes <==
	Name:               ha-683480
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-683480
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=599070631c2216ebc936292d491e4fe10e15b9d8
	                    minikube.k8s.io/name=ha-683480
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_06_03T10_57_01_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 03 Jun 2024 10:56:57 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-683480
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 03 Jun 2024 11:05:20 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 03 Jun 2024 11:01:04 +0000   Mon, 03 Jun 2024 10:56:56 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 03 Jun 2024 11:01:04 +0000   Mon, 03 Jun 2024 10:56:56 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 03 Jun 2024 11:01:04 +0000   Mon, 03 Jun 2024 10:56:56 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 03 Jun 2024 11:01:04 +0000   Mon, 03 Jun 2024 10:57:18 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.116
	  Hostname:    ha-683480
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 f1505c2b59bc4afb8c36148f46c99e6c
	  System UUID:                f1505c2b-59bc-4afb-8c36-148f46c99e6c
	  Boot ID:                    acccd468-078d-403e-a5b4-d10d97594cc0
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.1
	  Kube-Proxy Version:         v1.30.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-mvpcm              0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m32s
	  kube-system                 coredns-7db6d8ff4d-8tqf9             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     8m8s
	  kube-system                 coredns-7db6d8ff4d-nff86             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     8m8s
	  kube-system                 etcd-ha-683480                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         8m21s
	  kube-system                 kindnet-zxhbp                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      8m8s
	  kube-system                 kube-apiserver-ha-683480             250m (12%)    0 (0%)      0 (0%)           0 (0%)         8m21s
	  kube-system                 kube-controller-manager-ha-683480    200m (10%)    0 (0%)      0 (0%)           0 (0%)         8m23s
	  kube-system                 kube-proxy-4d9w5                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m8s
	  kube-system                 kube-scheduler-ha-683480             100m (5%)     0 (0%)      0 (0%)           0 (0%)         8m21s
	  kube-system                 kube-vip-ha-683480                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m24s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m7s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 8m7s                   kube-proxy       
	  Normal  NodeHasSufficientPID     8m28s (x7 over 8m28s)  kubelet          Node ha-683480 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  8m28s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 8m28s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  8m28s (x8 over 8m28s)  kubelet          Node ha-683480 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8m28s (x8 over 8m28s)  kubelet          Node ha-683480 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 8m21s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  8m21s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  8m21s                  kubelet          Node ha-683480 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8m21s                  kubelet          Node ha-683480 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8m21s                  kubelet          Node ha-683480 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           8m9s                   node-controller  Node ha-683480 event: Registered Node ha-683480 in Controller
	  Normal  NodeReady                8m3s                   kubelet          Node ha-683480 status is now: NodeReady
	  Normal  RegisteredNode           5m52s                  node-controller  Node ha-683480 event: Registered Node ha-683480 in Controller
	  Normal  RegisteredNode           4m38s                  node-controller  Node ha-683480 event: Registered Node ha-683480 in Controller
	
	
	Name:               ha-683480-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-683480-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=599070631c2216ebc936292d491e4fe10e15b9d8
	                    minikube.k8s.io/name=ha-683480
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_06_03T10_59_14_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 03 Jun 2024 10:59:11 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-683480-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 03 Jun 2024 11:01:54 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Mon, 03 Jun 2024 11:01:14 +0000   Mon, 03 Jun 2024 11:02:37 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Mon, 03 Jun 2024 11:01:14 +0000   Mon, 03 Jun 2024 11:02:37 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Mon, 03 Jun 2024 11:01:14 +0000   Mon, 03 Jun 2024 11:02:37 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Mon, 03 Jun 2024 11:01:14 +0000   Mon, 03 Jun 2024 11:02:37 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.127
	  Hostname:    ha-683480-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 2d1a1fca79484f629cf7b8fc1955281b
	  System UUID:                2d1a1fca-7948-4f62-9cf7-b8fc1955281b
	  Boot ID:                    9fed0fd2-3bb7-4f1f-92e4-0c4854a958bd
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.1
	  Kube-Proxy Version:         v1.30.1
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-ldtcf                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m32s
	  kube-system                 etcd-ha-683480-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         6m8s
	  kube-system                 kindnet-t6fxj                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m10s
	  kube-system                 kube-apiserver-ha-683480-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m8s
	  kube-system                 kube-controller-manager-ha-683480-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m8s
	  kube-system                 kube-proxy-q2xfn                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m10s
	  kube-system                 kube-scheduler-ha-683480-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m5s
	  kube-system                 kube-vip-ha-683480-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m5s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 6m5s                   kube-proxy       
	  Normal  NodeHasSufficientMemory  6m10s (x8 over 6m10s)  kubelet          Node ha-683480-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m10s (x8 over 6m10s)  kubelet          Node ha-683480-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m10s (x7 over 6m10s)  kubelet          Node ha-683480-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  6m10s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           6m9s                   node-controller  Node ha-683480-m02 event: Registered Node ha-683480-m02 in Controller
	  Normal  RegisteredNode           5m52s                  node-controller  Node ha-683480-m02 event: Registered Node ha-683480-m02 in Controller
	  Normal  RegisteredNode           4m38s                  node-controller  Node ha-683480-m02 event: Registered Node ha-683480-m02 in Controller
	  Normal  NodeNotReady             2m44s                  node-controller  Node ha-683480-m02 status is now: NodeNotReady
	
	
	Name:               ha-683480-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-683480-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=599070631c2216ebc936292d491e4fe10e15b9d8
	                    minikube.k8s.io/name=ha-683480
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_06_03T11_00_29_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 03 Jun 2024 11:00:25 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-683480-m03
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 03 Jun 2024 11:05:20 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 03 Jun 2024 11:00:55 +0000   Mon, 03 Jun 2024 11:00:25 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 03 Jun 2024 11:00:55 +0000   Mon, 03 Jun 2024 11:00:25 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 03 Jun 2024 11:00:55 +0000   Mon, 03 Jun 2024 11:00:25 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 03 Jun 2024 11:00:55 +0000   Mon, 03 Jun 2024 11:00:36 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.131
	  Hostname:    ha-683480-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 b7bb33c5cad548f785d23d226c699411
	  System UUID:                b7bb33c5-cad5-48f7-85d2-3d226c699411
	  Boot ID:                    dafc5e08-866b-431b-bf46-a55811884d2b
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.1
	  Kube-Proxy Version:         v1.30.1
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-ngf6n                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m32s
	  kube-system                 etcd-ha-683480-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         4m54s
	  kube-system                 kindnet-zsfhr                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      4m56s
	  kube-system                 kube-apiserver-ha-683480-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m55s
	  kube-system                 kube-controller-manager-ha-683480-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m54s
	  kube-system                 kube-proxy-txnhc                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m56s
	  kube-system                 kube-scheduler-ha-683480-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m54s
	  kube-system                 kube-vip-ha-683480-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m51s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m51s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  4m56s (x8 over 4m56s)  kubelet          Node ha-683480-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m56s (x8 over 4m56s)  kubelet          Node ha-683480-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m56s (x7 over 4m56s)  kubelet          Node ha-683480-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m56s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4m54s                  node-controller  Node ha-683480-m03 event: Registered Node ha-683480-m03 in Controller
	  Normal  RegisteredNode           4m52s                  node-controller  Node ha-683480-m03 event: Registered Node ha-683480-m03 in Controller
	  Normal  RegisteredNode           4m38s                  node-controller  Node ha-683480-m03 event: Registered Node ha-683480-m03 in Controller
	
	
	Name:               ha-683480-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-683480-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=599070631c2216ebc936292d491e4fe10e15b9d8
	                    minikube.k8s.io/name=ha-683480
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_06_03T11_01_25_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 03 Jun 2024 11:01:24 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-683480-m04
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 03 Jun 2024 11:05:19 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 03 Jun 2024 11:01:55 +0000   Mon, 03 Jun 2024 11:01:24 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 03 Jun 2024 11:01:55 +0000   Mon, 03 Jun 2024 11:01:24 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 03 Jun 2024 11:01:55 +0000   Mon, 03 Jun 2024 11:01:24 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 03 Jun 2024 11:01:55 +0000   Mon, 03 Jun 2024 11:01:35 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.206
	  Hostname:    ha-683480-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 d0705544cf414e31abf26e0a013cd6bf
	  System UUID:                d0705544-cf41-4e31-abf2-6e0a013cd6bf
	  Boot ID:                    125ac719-6c97-4e76-9440-99e7f62b9e2d
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.1
	  Kube-Proxy Version:         v1.30.1
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-24p87       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      3m57s
	  kube-system                 kube-proxy-2kkf4    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m57s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 3m51s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  3m58s (x2 over 3m58s)  kubelet          Node ha-683480-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m58s (x2 over 3m58s)  kubelet          Node ha-683480-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m58s (x2 over 3m58s)  kubelet          Node ha-683480-m04 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m57s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           3m55s                  node-controller  Node ha-683480-m04 event: Registered Node ha-683480-m04 in Controller
	  Normal  RegisteredNode           3m54s                  node-controller  Node ha-683480-m04 event: Registered Node ha-683480-m04 in Controller
	  Normal  RegisteredNode           3m53s                  node-controller  Node ha-683480-m04 event: Registered Node ha-683480-m04 in Controller
	  Normal  NodeReady                3m47s                  kubelet          Node ha-683480-m04 status is now: NodeReady
	
	
	==> dmesg <==
	[Jun 3 10:56] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.051360] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.039810] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.490599] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.327761] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +4.577151] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000012] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[ +13.363785] systemd-fstab-generator[594]: Ignoring "noauto" option for root device
	[  +0.062784] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.051848] systemd-fstab-generator[606]: Ignoring "noauto" option for root device
	[  +0.189543] systemd-fstab-generator[620]: Ignoring "noauto" option for root device
	[  +0.108878] systemd-fstab-generator[632]: Ignoring "noauto" option for root device
	[  +0.262803] systemd-fstab-generator[662]: Ignoring "noauto" option for root device
	[  +4.077728] systemd-fstab-generator[762]: Ignoring "noauto" option for root device
	[  +5.011635] systemd-fstab-generator[951]: Ignoring "noauto" option for root device
	[  +0.054415] kauditd_printk_skb: 158 callbacks suppressed
	[  +5.849379] kauditd_printk_skb: 79 callbacks suppressed
	[  +1.148784] systemd-fstab-generator[1371]: Ignoring "noauto" option for root device
	[Jun 3 10:57] kauditd_printk_skb: 15 callbacks suppressed
	[  +5.057593] kauditd_printk_skb: 34 callbacks suppressed
	[Jun 3 10:59] kauditd_printk_skb: 30 callbacks suppressed
	
	
	==> etcd [09fff5459f24c748a0e085f496bf2b65db572d97be0afe906f05511398bdb0ad] <==
	{"level":"warn","ts":"2024-06-03T11:05:21.901449Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"8b2d6b6d639b2fdb","from":"8b2d6b6d639b2fdb","remote-peer-id":"186d66165cd2cce","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-06-03T11:05:21.902885Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"8b2d6b6d639b2fdb","from":"8b2d6b6d639b2fdb","remote-peer-id":"186d66165cd2cce","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-06-03T11:05:21.903508Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"8b2d6b6d639b2fdb","from":"8b2d6b6d639b2fdb","remote-peer-id":"186d66165cd2cce","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-06-03T11:05:21.905939Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"8b2d6b6d639b2fdb","from":"8b2d6b6d639b2fdb","remote-peer-id":"186d66165cd2cce","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-06-03T11:05:21.913885Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"8b2d6b6d639b2fdb","from":"8b2d6b6d639b2fdb","remote-peer-id":"186d66165cd2cce","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-06-03T11:05:21.920752Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"8b2d6b6d639b2fdb","from":"8b2d6b6d639b2fdb","remote-peer-id":"186d66165cd2cce","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-06-03T11:05:21.924716Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"8b2d6b6d639b2fdb","from":"8b2d6b6d639b2fdb","remote-peer-id":"186d66165cd2cce","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-06-03T11:05:21.927848Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"8b2d6b6d639b2fdb","from":"8b2d6b6d639b2fdb","remote-peer-id":"186d66165cd2cce","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-06-03T11:05:21.940246Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"8b2d6b6d639b2fdb","from":"8b2d6b6d639b2fdb","remote-peer-id":"186d66165cd2cce","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-06-03T11:05:21.947269Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"8b2d6b6d639b2fdb","from":"8b2d6b6d639b2fdb","remote-peer-id":"186d66165cd2cce","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-06-03T11:05:21.954944Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"8b2d6b6d639b2fdb","from":"8b2d6b6d639b2fdb","remote-peer-id":"186d66165cd2cce","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-06-03T11:05:21.959817Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"8b2d6b6d639b2fdb","from":"8b2d6b6d639b2fdb","remote-peer-id":"186d66165cd2cce","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-06-03T11:05:21.964064Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"8b2d6b6d639b2fdb","from":"8b2d6b6d639b2fdb","remote-peer-id":"186d66165cd2cce","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-06-03T11:05:21.977914Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"8b2d6b6d639b2fdb","from":"8b2d6b6d639b2fdb","remote-peer-id":"186d66165cd2cce","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-06-03T11:05:21.984339Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"8b2d6b6d639b2fdb","from":"8b2d6b6d639b2fdb","remote-peer-id":"186d66165cd2cce","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-06-03T11:05:21.994081Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"8b2d6b6d639b2fdb","from":"8b2d6b6d639b2fdb","remote-peer-id":"186d66165cd2cce","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-06-03T11:05:21.997849Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"8b2d6b6d639b2fdb","from":"8b2d6b6d639b2fdb","remote-peer-id":"186d66165cd2cce","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-06-03T11:05:22.001292Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"8b2d6b6d639b2fdb","from":"8b2d6b6d639b2fdb","remote-peer-id":"186d66165cd2cce","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-06-03T11:05:22.006748Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"8b2d6b6d639b2fdb","from":"8b2d6b6d639b2fdb","remote-peer-id":"186d66165cd2cce","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-06-03T11:05:22.008078Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"8b2d6b6d639b2fdb","from":"8b2d6b6d639b2fdb","remote-peer-id":"186d66165cd2cce","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-06-03T11:05:22.017642Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"8b2d6b6d639b2fdb","from":"8b2d6b6d639b2fdb","remote-peer-id":"186d66165cd2cce","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-06-03T11:05:22.024395Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"8b2d6b6d639b2fdb","from":"8b2d6b6d639b2fdb","remote-peer-id":"186d66165cd2cce","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-06-03T11:05:22.092043Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"186d66165cd2cce","rtt":"900.541µs","error":"dial tcp 192.168.39.127:2380: connect: no route to host"}
	{"level":"warn","ts":"2024-06-03T11:05:22.092566Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"8b2d6b6d639b2fdb","from":"8b2d6b6d639b2fdb","remote-peer-id":"186d66165cd2cce","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-06-03T11:05:22.09664Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"186d66165cd2cce","rtt":"10.319884ms","error":"dial tcp 192.168.39.127:2380: connect: no route to host"}
	
	
	==> kernel <==
	 11:05:22 up 9 min,  0 users,  load average: 0.24, 0.21, 0.09
	Linux ha-683480 5.10.207 #1 SMP Wed May 22 22:17:16 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [995fa288cd9162aa7fa350ae7a02800593a524c7300a6fa984b62ba4b928891b] <==
	I0603 11:04:49.354502       1 main.go:250] Node ha-683480-m04 has CIDR [10.244.3.0/24] 
	I0603 11:04:59.361075       1 main.go:223] Handling node with IPs: map[192.168.39.116:{}]
	I0603 11:04:59.361160       1 main.go:227] handling current node
	I0603 11:04:59.361240       1 main.go:223] Handling node with IPs: map[192.168.39.127:{}]
	I0603 11:04:59.361265       1 main.go:250] Node ha-683480-m02 has CIDR [10.244.1.0/24] 
	I0603 11:04:59.361401       1 main.go:223] Handling node with IPs: map[192.168.39.131:{}]
	I0603 11:04:59.361422       1 main.go:250] Node ha-683480-m03 has CIDR [10.244.2.0/24] 
	I0603 11:04:59.361487       1 main.go:223] Handling node with IPs: map[192.168.39.206:{}]
	I0603 11:04:59.361504       1 main.go:250] Node ha-683480-m04 has CIDR [10.244.3.0/24] 
	I0603 11:05:09.376555       1 main.go:223] Handling node with IPs: map[192.168.39.116:{}]
	I0603 11:05:09.376869       1 main.go:227] handling current node
	I0603 11:05:09.376939       1 main.go:223] Handling node with IPs: map[192.168.39.127:{}]
	I0603 11:05:09.376974       1 main.go:250] Node ha-683480-m02 has CIDR [10.244.1.0/24] 
	I0603 11:05:09.377208       1 main.go:223] Handling node with IPs: map[192.168.39.131:{}]
	I0603 11:05:09.377249       1 main.go:250] Node ha-683480-m03 has CIDR [10.244.2.0/24] 
	I0603 11:05:09.377342       1 main.go:223] Handling node with IPs: map[192.168.39.206:{}]
	I0603 11:05:09.377370       1 main.go:250] Node ha-683480-m04 has CIDR [10.244.3.0/24] 
	I0603 11:05:19.393202       1 main.go:223] Handling node with IPs: map[192.168.39.116:{}]
	I0603 11:05:19.395929       1 main.go:227] handling current node
	I0603 11:05:19.396258       1 main.go:223] Handling node with IPs: map[192.168.39.127:{}]
	I0603 11:05:19.396307       1 main.go:250] Node ha-683480-m02 has CIDR [10.244.1.0/24] 
	I0603 11:05:19.396423       1 main.go:223] Handling node with IPs: map[192.168.39.131:{}]
	I0603 11:05:19.396444       1 main.go:250] Node ha-683480-m03 has CIDR [10.244.2.0/24] 
	I0603 11:05:19.396503       1 main.go:223] Handling node with IPs: map[192.168.39.206:{}]
	I0603 11:05:19.396520       1 main.go:250] Node ha-683480-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [200682c1dc43f01036807986e0c3bfe0b422726ec352be0df5e42fa79426ed79] <==
	W0603 10:56:58.845311       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.116]
	I0603 10:56:58.846085       1 controller.go:615] quota admission added evaluator for: endpoints
	I0603 10:56:58.849879       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0603 10:56:59.045916       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0603 10:57:00.166965       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0603 10:57:00.191680       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0603 10:57:00.212211       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0603 10:57:12.906632       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I0603 10:57:13.254690       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	E0603 11:00:54.043235       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:53504: use of closed network connection
	E0603 11:00:54.243975       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:53510: use of closed network connection
	E0603 11:00:54.433209       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:53534: use of closed network connection
	E0603 11:00:54.645283       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:53538: use of closed network connection
	E0603 11:00:54.829106       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:53552: use of closed network connection
	E0603 11:00:55.009820       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:53574: use of closed network connection
	E0603 11:00:55.193584       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:53586: use of closed network connection
	E0603 11:00:55.367398       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:53604: use of closed network connection
	E0603 11:00:55.553665       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:53628: use of closed network connection
	E0603 11:00:55.827887       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:53642: use of closed network connection
	E0603 11:00:56.014267       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:53658: use of closed network connection
	E0603 11:00:56.195730       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:53666: use of closed network connection
	E0603 11:00:56.391371       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:53688: use of closed network connection
	E0603 11:00:56.573908       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:53710: use of closed network connection
	E0603 11:00:56.742677       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:49688: use of closed network connection
	W0603 11:02:08.863808       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.116 192.168.39.131]
	
	
	==> kube-controller-manager [3e27550ee88e8dcb6316daece49f9840028efa3091db03e5549e1e3dbbd8ad59] <==
	I0603 11:00:25.470071       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="ha-683480-m03" podCIDRs=["10.244.2.0/24"]
	I0603 11:00:27.549975       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-683480-m03"
	I0603 11:00:49.547774       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="122.596903ms"
	I0603 11:00:49.666180       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="118.082703ms"
	I0603 11:00:49.837962       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="171.715406ms"
	I0603 11:00:49.892720       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="54.455781ms"
	I0603 11:00:49.892826       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="56.515µs"
	I0603 11:00:50.005424       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="38.472497ms"
	I0603 11:00:50.005505       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="49.77µs"
	I0603 11:00:50.116701       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="57.234µs"
	I0603 11:00:51.839738       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="38.972µs"
	I0603 11:00:53.105142       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="33.466395ms"
	I0603 11:00:53.105291       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="53.046µs"
	I0603 11:00:53.312277       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="11.951093ms"
	I0603 11:00:53.312366       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="40.995µs"
	I0603 11:00:53.591833       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="39.110404ms"
	I0603 11:00:53.591950       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="74.101µs"
	E0603 11:01:24.869229       1 certificate_controller.go:146] Sync csr-l4bzv failed with : error updating signature for csr: Operation cannot be fulfilled on certificatesigningrequests.certificates.k8s.io "csr-l4bzv": the object has been modified; please apply your changes to the latest version and try again
	I0603 11:01:25.167074       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-683480-m04\" does not exist"
	I0603 11:01:25.183314       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="ha-683480-m04" podCIDRs=["10.244.3.0/24"]
	I0603 11:01:27.580496       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-683480-m04"
	I0603 11:01:35.199652       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-683480-m04"
	I0603 11:02:37.626039       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-683480-m04"
	I0603 11:02:37.756078       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="45.010628ms"
	I0603 11:02:37.756369       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="189.403µs"
	
	
	==> kube-proxy [bcb102231e3a6bc3ea0cc39665baaebb0a97c42874b6cd34e86c04e87532df4f] <==
	I0603 10:57:14.219931       1 server_linux.go:69] "Using iptables proxy"
	I0603 10:57:14.234516       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.116"]
	I0603 10:57:14.308348       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0603 10:57:14.308413       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0603 10:57:14.308429       1 server_linux.go:165] "Using iptables Proxier"
	I0603 10:57:14.321218       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0603 10:57:14.321484       1 server.go:872] "Version info" version="v1.30.1"
	I0603 10:57:14.322736       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0603 10:57:14.325109       1 config.go:192] "Starting service config controller"
	I0603 10:57:14.325148       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0603 10:57:14.325187       1 config.go:101] "Starting endpoint slice config controller"
	I0603 10:57:14.325203       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0603 10:57:14.325717       1 config.go:319] "Starting node config controller"
	I0603 10:57:14.325767       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0603 10:57:14.425945       1 shared_informer.go:320] Caches are synced for node config
	I0603 10:57:14.426052       1 shared_informer.go:320] Caches are synced for service config
	I0603 10:57:14.426092       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [c282307764128f62fdee736d5e1ecddfbca0ae7ae2f78b7a78cbdb2dcede8556] <==
	W0603 10:56:58.275182       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0603 10:56:58.275297       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0603 10:56:58.355336       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0603 10:56:58.355431       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0603 10:56:58.409399       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0603 10:56:58.409428       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0603 10:56:58.414878       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0603 10:56:58.414916       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0603 10:56:58.424918       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0603 10:56:58.425079       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0603 10:56:58.564204       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0603 10:56:58.564306       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0603 10:57:00.796334       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0603 11:00:49.536517       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-mvpcm\": pod busybox-fc5497c4f-mvpcm is already assigned to node \"ha-683480\"" plugin="DefaultBinder" pod="default/busybox-fc5497c4f-mvpcm" node="ha-683480"
	E0603 11:00:49.542818       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod fe7a8238-754b-43ce-8080-48e39c548383(default/busybox-fc5497c4f-mvpcm) wasn't assumed so cannot be forgotten" pod="default/busybox-fc5497c4f-mvpcm"
	E0603 11:00:49.543611       1 schedule_one.go:1051] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-mvpcm\": pod busybox-fc5497c4f-mvpcm is already assigned to node \"ha-683480\"" pod="default/busybox-fc5497c4f-mvpcm"
	I0603 11:00:49.543859       1 schedule_one.go:1064] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-fc5497c4f-mvpcm" node="ha-683480"
	E0603 11:01:25.237779       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-24p87\": pod kindnet-24p87 is already assigned to node \"ha-683480-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-24p87" node="ha-683480-m04"
	E0603 11:01:25.238607       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod dee8d19c-7e34-45b9-b5f4-88e8e8cb92e9(kube-system/kindnet-24p87) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-24p87"
	E0603 11:01:25.241383       1 schedule_one.go:1051] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-24p87\": pod kindnet-24p87 is already assigned to node \"ha-683480-m04\"" pod="kube-system/kindnet-24p87"
	I0603 11:01:25.241448       1 schedule_one.go:1064] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-24p87" node="ha-683480-m04"
	E0603 11:01:25.246543       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-6xfsj\": pod kube-proxy-6xfsj is already assigned to node \"ha-683480-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-6xfsj" node="ha-683480-m04"
	E0603 11:01:25.250352       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod 9eaf0689-1d2f-4ffd-b921-c682b1b47fd0(kube-system/kube-proxy-6xfsj) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-6xfsj"
	E0603 11:01:25.253261       1 schedule_one.go:1051] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-6xfsj\": pod kube-proxy-6xfsj is already assigned to node \"ha-683480-m04\"" pod="kube-system/kube-proxy-6xfsj"
	I0603 11:01:25.253639       1 schedule_one.go:1064] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-6xfsj" node="ha-683480-m04"
	
	
	==> kubelet <==
	Jun 03 11:01:00 ha-683480 kubelet[1378]: E0603 11:01:00.112213    1378 iptables.go:577] "Could not set up iptables canary" err=<
	Jun 03 11:01:00 ha-683480 kubelet[1378]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jun 03 11:01:00 ha-683480 kubelet[1378]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jun 03 11:01:00 ha-683480 kubelet[1378]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 03 11:01:00 ha-683480 kubelet[1378]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jun 03 11:02:00 ha-683480 kubelet[1378]: E0603 11:02:00.116393    1378 iptables.go:577] "Could not set up iptables canary" err=<
	Jun 03 11:02:00 ha-683480 kubelet[1378]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jun 03 11:02:00 ha-683480 kubelet[1378]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jun 03 11:02:00 ha-683480 kubelet[1378]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 03 11:02:00 ha-683480 kubelet[1378]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jun 03 11:03:00 ha-683480 kubelet[1378]: E0603 11:03:00.112081    1378 iptables.go:577] "Could not set up iptables canary" err=<
	Jun 03 11:03:00 ha-683480 kubelet[1378]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jun 03 11:03:00 ha-683480 kubelet[1378]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jun 03 11:03:00 ha-683480 kubelet[1378]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 03 11:03:00 ha-683480 kubelet[1378]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jun 03 11:04:00 ha-683480 kubelet[1378]: E0603 11:04:00.112706    1378 iptables.go:577] "Could not set up iptables canary" err=<
	Jun 03 11:04:00 ha-683480 kubelet[1378]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jun 03 11:04:00 ha-683480 kubelet[1378]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jun 03 11:04:00 ha-683480 kubelet[1378]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 03 11:04:00 ha-683480 kubelet[1378]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jun 03 11:05:00 ha-683480 kubelet[1378]: E0603 11:05:00.113860    1378 iptables.go:577] "Could not set up iptables canary" err=<
	Jun 03 11:05:00 ha-683480 kubelet[1378]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jun 03 11:05:00 ha-683480 kubelet[1378]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jun 03 11:05:00 ha-683480 kubelet[1378]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 03 11:05:00 ha-683480 kubelet[1378]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-683480 -n ha-683480
helpers_test.go:261: (dbg) Run:  kubectl --context ha-683480 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/RestartSecondaryNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/RestartSecondaryNode (62.48s)
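To regather the same post-mortem data by hand, a minimal sketch (assuming the ha-683480 profile still exists and out/minikube-linux-amd64 is built; the describe call is an illustration, not part of the test harness):

	out/minikube-linux-amd64 status -p ha-683480
	out/minikube-linux-amd64 -p ha-683480 logs
	kubectl --context ha-683480 describe nodes
	kubectl --context ha-683480 get po -A --field-selector=status.phase!=Running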

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartClusterKeepsNodes (383.75s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:456: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-683480 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Run:  out/minikube-linux-amd64 stop -p ha-683480 -v=7 --alsologtostderr
E0603 11:05:46.899320   15028 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/functional-835483/client.crt: no such file or directory
E0603 11:07:12.037568   15028 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/addons-926744/client.crt: no such file or directory
ha_test.go:462: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p ha-683480 -v=7 --alsologtostderr: exit status 82 (2m1.961004525s)

                                                
                                                
-- stdout --
	* Stopping node "ha-683480-m04"  ...
	* Stopping node "ha-683480-m03"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0603 11:05:23.481949   31593 out.go:291] Setting OutFile to fd 1 ...
	I0603 11:05:23.482056   31593 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0603 11:05:23.482067   31593 out.go:304] Setting ErrFile to fd 2...
	I0603 11:05:23.482072   31593 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0603 11:05:23.482285   31593 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19008-7755/.minikube/bin
	I0603 11:05:23.482554   31593 out.go:298] Setting JSON to false
	I0603 11:05:23.482728   31593 mustload.go:65] Loading cluster: ha-683480
	I0603 11:05:23.483221   31593 config.go:182] Loaded profile config "ha-683480": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0603 11:05:23.483354   31593 profile.go:143] Saving config to /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/ha-683480/config.json ...
	I0603 11:05:23.483580   31593 mustload.go:65] Loading cluster: ha-683480
	I0603 11:05:23.483780   31593 config.go:182] Loaded profile config "ha-683480": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0603 11:05:23.483820   31593 stop.go:39] StopHost: ha-683480-m04
	I0603 11:05:23.484302   31593 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 11:05:23.484367   31593 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 11:05:23.500026   31593 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41989
	I0603 11:05:23.500483   31593 main.go:141] libmachine: () Calling .GetVersion
	I0603 11:05:23.501052   31593 main.go:141] libmachine: Using API Version  1
	I0603 11:05:23.501080   31593 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 11:05:23.501441   31593 main.go:141] libmachine: () Calling .GetMachineName
	I0603 11:05:23.504394   31593 out.go:177] * Stopping node "ha-683480-m04"  ...
	I0603 11:05:23.505869   31593 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0603 11:05:23.505906   31593 main.go:141] libmachine: (ha-683480-m04) Calling .DriverName
	I0603 11:05:23.506186   31593 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0603 11:05:23.506215   31593 main.go:141] libmachine: (ha-683480-m04) Calling .GetSSHHostname
	I0603 11:05:23.509092   31593 main.go:141] libmachine: (ha-683480-m04) DBG | domain ha-683480-m04 has defined MAC address 52:54:00:ed:4a:53 in network mk-ha-683480
	I0603 11:05:23.509521   31593 main.go:141] libmachine: (ha-683480-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ed:4a:53", ip: ""} in network mk-ha-683480: {Iface:virbr1 ExpiryTime:2024-06-03 12:01:12 +0000 UTC Type:0 Mac:52:54:00:ed:4a:53 Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:ha-683480-m04 Clientid:01:52:54:00:ed:4a:53}
	I0603 11:05:23.509559   31593 main.go:141] libmachine: (ha-683480-m04) DBG | domain ha-683480-m04 has defined IP address 192.168.39.206 and MAC address 52:54:00:ed:4a:53 in network mk-ha-683480
	I0603 11:05:23.509617   31593 main.go:141] libmachine: (ha-683480-m04) Calling .GetSSHPort
	I0603 11:05:23.509800   31593 main.go:141] libmachine: (ha-683480-m04) Calling .GetSSHKeyPath
	I0603 11:05:23.509959   31593 main.go:141] libmachine: (ha-683480-m04) Calling .GetSSHUsername
	I0603 11:05:23.510138   31593 sshutil.go:53] new ssh client: &{IP:192.168.39.206 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19008-7755/.minikube/machines/ha-683480-m04/id_rsa Username:docker}
	I0603 11:05:23.594415   31593 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0603 11:05:23.648238   31593 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0603 11:05:23.701814   31593 main.go:141] libmachine: Stopping "ha-683480-m04"...
	I0603 11:05:23.701848   31593 main.go:141] libmachine: (ha-683480-m04) Calling .GetState
	I0603 11:05:23.703215   31593 main.go:141] libmachine: (ha-683480-m04) Calling .Stop
	I0603 11:05:23.706562   31593 main.go:141] libmachine: (ha-683480-m04) Waiting for machine to stop 0/120
	I0603 11:05:24.993040   31593 main.go:141] libmachine: (ha-683480-m04) Calling .GetState
	I0603 11:05:24.994255   31593 main.go:141] libmachine: Machine "ha-683480-m04" was stopped.
	I0603 11:05:24.994270   31593 stop.go:75] duration metric: took 1.488403351s to stop
	I0603 11:05:24.994287   31593 stop.go:39] StopHost: ha-683480-m03
	I0603 11:05:24.994617   31593 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 11:05:24.994674   31593 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 11:05:25.008810   31593 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35725
	I0603 11:05:25.009155   31593 main.go:141] libmachine: () Calling .GetVersion
	I0603 11:05:25.009631   31593 main.go:141] libmachine: Using API Version  1
	I0603 11:05:25.009662   31593 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 11:05:25.009979   31593 main.go:141] libmachine: () Calling .GetMachineName
	I0603 11:05:25.012122   31593 out.go:177] * Stopping node "ha-683480-m03"  ...
	I0603 11:05:25.013320   31593 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0603 11:05:25.013342   31593 main.go:141] libmachine: (ha-683480-m03) Calling .DriverName
	I0603 11:05:25.013535   31593 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0603 11:05:25.013555   31593 main.go:141] libmachine: (ha-683480-m03) Calling .GetSSHHostname
	I0603 11:05:25.016168   31593 main.go:141] libmachine: (ha-683480-m03) DBG | domain ha-683480-m03 has defined MAC address 52:54:00:b4:3e:89 in network mk-ha-683480
	I0603 11:05:25.016567   31593 main.go:141] libmachine: (ha-683480-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:3e:89", ip: ""} in network mk-ha-683480: {Iface:virbr1 ExpiryTime:2024-06-03 11:59:47 +0000 UTC Type:0 Mac:52:54:00:b4:3e:89 Iaid: IPaddr:192.168.39.131 Prefix:24 Hostname:ha-683480-m03 Clientid:01:52:54:00:b4:3e:89}
	I0603 11:05:25.016600   31593 main.go:141] libmachine: (ha-683480-m03) DBG | domain ha-683480-m03 has defined IP address 192.168.39.131 and MAC address 52:54:00:b4:3e:89 in network mk-ha-683480
	I0603 11:05:25.016679   31593 main.go:141] libmachine: (ha-683480-m03) Calling .GetSSHPort
	I0603 11:05:25.016861   31593 main.go:141] libmachine: (ha-683480-m03) Calling .GetSSHKeyPath
	I0603 11:05:25.017021   31593 main.go:141] libmachine: (ha-683480-m03) Calling .GetSSHUsername
	I0603 11:05:25.017137   31593 sshutil.go:53] new ssh client: &{IP:192.168.39.131 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19008-7755/.minikube/machines/ha-683480-m03/id_rsa Username:docker}
	I0603 11:05:25.098938   31593 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0603 11:05:25.152122   31593 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0603 11:05:25.208796   31593 main.go:141] libmachine: Stopping "ha-683480-m03"...
	I0603 11:05:25.208817   31593 main.go:141] libmachine: (ha-683480-m03) Calling .GetState
	I0603 11:05:25.210206   31593 main.go:141] libmachine: (ha-683480-m03) Calling .Stop
	I0603 11:05:25.213260   31593 main.go:141] libmachine: (ha-683480-m03) Waiting for machine to stop 0/120
	I0603 11:05:26.214616   31593 main.go:141] libmachine: (ha-683480-m03) Waiting for machine to stop 1/120
	I0603 11:05:27.216034   31593 main.go:141] libmachine: (ha-683480-m03) Waiting for machine to stop 2/120
	I0603 11:05:28.217306   31593 main.go:141] libmachine: (ha-683480-m03) Waiting for machine to stop 3/120
	I0603 11:05:29.218645   31593 main.go:141] libmachine: (ha-683480-m03) Waiting for machine to stop 4/120
	I0603 11:05:30.220198   31593 main.go:141] libmachine: (ha-683480-m03) Waiting for machine to stop 5/120
	I0603 11:05:31.221792   31593 main.go:141] libmachine: (ha-683480-m03) Waiting for machine to stop 6/120
	I0603 11:05:32.223382   31593 main.go:141] libmachine: (ha-683480-m03) Waiting for machine to stop 7/120
	I0603 11:05:33.225575   31593 main.go:141] libmachine: (ha-683480-m03) Waiting for machine to stop 8/120
	I0603 11:05:34.227076   31593 main.go:141] libmachine: (ha-683480-m03) Waiting for machine to stop 9/120
	I0603 11:05:35.228501   31593 main.go:141] libmachine: (ha-683480-m03) Waiting for machine to stop 10/120
	I0603 11:05:36.229728   31593 main.go:141] libmachine: (ha-683480-m03) Waiting for machine to stop 11/120
	I0603 11:05:37.231102   31593 main.go:141] libmachine: (ha-683480-m03) Waiting for machine to stop 12/120
	I0603 11:05:38.232559   31593 main.go:141] libmachine: (ha-683480-m03) Waiting for machine to stop 13/120
	I0603 11:05:39.234101   31593 main.go:141] libmachine: (ha-683480-m03) Waiting for machine to stop 14/120
	I0603 11:05:40.235775   31593 main.go:141] libmachine: (ha-683480-m03) Waiting for machine to stop 15/120
	I0603 11:05:41.237099   31593 main.go:141] libmachine: (ha-683480-m03) Waiting for machine to stop 16/120
	I0603 11:05:42.238294   31593 main.go:141] libmachine: (ha-683480-m03) Waiting for machine to stop 17/120
	I0603 11:05:43.239696   31593 main.go:141] libmachine: (ha-683480-m03) Waiting for machine to stop 18/120
	I0603 11:05:44.241216   31593 main.go:141] libmachine: (ha-683480-m03) Waiting for machine to stop 19/120
	I0603 11:05:45.242915   31593 main.go:141] libmachine: (ha-683480-m03) Waiting for machine to stop 20/120
	I0603 11:05:46.244262   31593 main.go:141] libmachine: (ha-683480-m03) Waiting for machine to stop 21/120
	I0603 11:05:47.245556   31593 main.go:141] libmachine: (ha-683480-m03) Waiting for machine to stop 22/120
	I0603 11:05:48.246737   31593 main.go:141] libmachine: (ha-683480-m03) Waiting for machine to stop 23/120
	I0603 11:05:49.247991   31593 main.go:141] libmachine: (ha-683480-m03) Waiting for machine to stop 24/120
	I0603 11:05:50.249987   31593 main.go:141] libmachine: (ha-683480-m03) Waiting for machine to stop 25/120
	I0603 11:05:51.251196   31593 main.go:141] libmachine: (ha-683480-m03) Waiting for machine to stop 26/120
	I0603 11:05:52.252563   31593 main.go:141] libmachine: (ha-683480-m03) Waiting for machine to stop 27/120
	I0603 11:05:53.254316   31593 main.go:141] libmachine: (ha-683480-m03) Waiting for machine to stop 28/120
	I0603 11:05:54.255661   31593 main.go:141] libmachine: (ha-683480-m03) Waiting for machine to stop 29/120
	I0603 11:05:55.257394   31593 main.go:141] libmachine: (ha-683480-m03) Waiting for machine to stop 30/120
	I0603 11:05:56.258949   31593 main.go:141] libmachine: (ha-683480-m03) Waiting for machine to stop 31/120
	I0603 11:05:57.260952   31593 main.go:141] libmachine: (ha-683480-m03) Waiting for machine to stop 32/120
	I0603 11:05:58.262459   31593 main.go:141] libmachine: (ha-683480-m03) Waiting for machine to stop 33/120
	I0603 11:05:59.264077   31593 main.go:141] libmachine: (ha-683480-m03) Waiting for machine to stop 34/120
	I0603 11:06:00.265873   31593 main.go:141] libmachine: (ha-683480-m03) Waiting for machine to stop 35/120
	I0603 11:06:01.267074   31593 main.go:141] libmachine: (ha-683480-m03) Waiting for machine to stop 36/120
	I0603 11:06:02.268348   31593 main.go:141] libmachine: (ha-683480-m03) Waiting for machine to stop 37/120
	I0603 11:06:03.270225   31593 main.go:141] libmachine: (ha-683480-m03) Waiting for machine to stop 38/120
	I0603 11:06:04.271663   31593 main.go:141] libmachine: (ha-683480-m03) Waiting for machine to stop 39/120
	I0603 11:06:05.273082   31593 main.go:141] libmachine: (ha-683480-m03) Waiting for machine to stop 40/120
	I0603 11:06:06.274454   31593 main.go:141] libmachine: (ha-683480-m03) Waiting for machine to stop 41/120
	I0603 11:06:07.275806   31593 main.go:141] libmachine: (ha-683480-m03) Waiting for machine to stop 42/120
	I0603 11:06:08.277105   31593 main.go:141] libmachine: (ha-683480-m03) Waiting for machine to stop 43/120
	I0603 11:06:09.278539   31593 main.go:141] libmachine: (ha-683480-m03) Waiting for machine to stop 44/120
	I0603 11:06:10.280289   31593 main.go:141] libmachine: (ha-683480-m03) Waiting for machine to stop 45/120
	I0603 11:06:11.281599   31593 main.go:141] libmachine: (ha-683480-m03) Waiting for machine to stop 46/120
	I0603 11:06:12.283033   31593 main.go:141] libmachine: (ha-683480-m03) Waiting for machine to stop 47/120
	I0603 11:06:13.284369   31593 main.go:141] libmachine: (ha-683480-m03) Waiting for machine to stop 48/120
	I0603 11:06:14.285691   31593 main.go:141] libmachine: (ha-683480-m03) Waiting for machine to stop 49/120
	I0603 11:06:15.287436   31593 main.go:141] libmachine: (ha-683480-m03) Waiting for machine to stop 50/120
	I0603 11:06:16.288673   31593 main.go:141] libmachine: (ha-683480-m03) Waiting for machine to stop 51/120
	I0603 11:06:17.290074   31593 main.go:141] libmachine: (ha-683480-m03) Waiting for machine to stop 52/120
	I0603 11:06:18.291372   31593 main.go:141] libmachine: (ha-683480-m03) Waiting for machine to stop 53/120
	I0603 11:06:19.293500   31593 main.go:141] libmachine: (ha-683480-m03) Waiting for machine to stop 54/120
	I0603 11:06:20.295090   31593 main.go:141] libmachine: (ha-683480-m03) Waiting for machine to stop 55/120
	I0603 11:06:21.296417   31593 main.go:141] libmachine: (ha-683480-m03) Waiting for machine to stop 56/120
	I0603 11:06:22.297578   31593 main.go:141] libmachine: (ha-683480-m03) Waiting for machine to stop 57/120
	I0603 11:06:23.298893   31593 main.go:141] libmachine: (ha-683480-m03) Waiting for machine to stop 58/120
	I0603 11:06:24.300782   31593 main.go:141] libmachine: (ha-683480-m03) Waiting for machine to stop 59/120
	I0603 11:06:25.303486   31593 main.go:141] libmachine: (ha-683480-m03) Waiting for machine to stop 60/120
	I0603 11:06:26.304936   31593 main.go:141] libmachine: (ha-683480-m03) Waiting for machine to stop 61/120
	I0603 11:06:27.306161   31593 main.go:141] libmachine: (ha-683480-m03) Waiting for machine to stop 62/120
	I0603 11:06:28.307432   31593 main.go:141] libmachine: (ha-683480-m03) Waiting for machine to stop 63/120
	I0603 11:06:29.309585   31593 main.go:141] libmachine: (ha-683480-m03) Waiting for machine to stop 64/120
	I0603 11:06:30.311270   31593 main.go:141] libmachine: (ha-683480-m03) Waiting for machine to stop 65/120
	I0603 11:06:31.312617   31593 main.go:141] libmachine: (ha-683480-m03) Waiting for machine to stop 66/120
	I0603 11:06:32.314089   31593 main.go:141] libmachine: (ha-683480-m03) Waiting for machine to stop 67/120
	I0603 11:06:33.315553   31593 main.go:141] libmachine: (ha-683480-m03) Waiting for machine to stop 68/120
	I0603 11:06:34.317049   31593 main.go:141] libmachine: (ha-683480-m03) Waiting for machine to stop 69/120
	I0603 11:06:35.319273   31593 main.go:141] libmachine: (ha-683480-m03) Waiting for machine to stop 70/120
	I0603 11:06:36.320572   31593 main.go:141] libmachine: (ha-683480-m03) Waiting for machine to stop 71/120
	I0603 11:06:37.321883   31593 main.go:141] libmachine: (ha-683480-m03) Waiting for machine to stop 72/120
	I0603 11:06:38.323241   31593 main.go:141] libmachine: (ha-683480-m03) Waiting for machine to stop 73/120
	I0603 11:06:39.324602   31593 main.go:141] libmachine: (ha-683480-m03) Waiting for machine to stop 74/120
	I0603 11:06:40.326516   31593 main.go:141] libmachine: (ha-683480-m03) Waiting for machine to stop 75/120
	I0603 11:06:41.327830   31593 main.go:141] libmachine: (ha-683480-m03) Waiting for machine to stop 76/120
	I0603 11:06:42.329259   31593 main.go:141] libmachine: (ha-683480-m03) Waiting for machine to stop 77/120
	I0603 11:06:43.330655   31593 main.go:141] libmachine: (ha-683480-m03) Waiting for machine to stop 78/120
	I0603 11:06:44.331931   31593 main.go:141] libmachine: (ha-683480-m03) Waiting for machine to stop 79/120
	I0603 11:06:45.334197   31593 main.go:141] libmachine: (ha-683480-m03) Waiting for machine to stop 80/120
	I0603 11:06:46.335579   31593 main.go:141] libmachine: (ha-683480-m03) Waiting for machine to stop 81/120
	I0603 11:06:47.336981   31593 main.go:141] libmachine: (ha-683480-m03) Waiting for machine to stop 82/120
	I0603 11:06:48.338181   31593 main.go:141] libmachine: (ha-683480-m03) Waiting for machine to stop 83/120
	I0603 11:06:49.339560   31593 main.go:141] libmachine: (ha-683480-m03) Waiting for machine to stop 84/120
	I0603 11:06:50.341274   31593 main.go:141] libmachine: (ha-683480-m03) Waiting for machine to stop 85/120
	I0603 11:06:51.342718   31593 main.go:141] libmachine: (ha-683480-m03) Waiting for machine to stop 86/120
	I0603 11:06:52.344041   31593 main.go:141] libmachine: (ha-683480-m03) Waiting for machine to stop 87/120
	I0603 11:06:53.345405   31593 main.go:141] libmachine: (ha-683480-m03) Waiting for machine to stop 88/120
	I0603 11:06:54.346888   31593 main.go:141] libmachine: (ha-683480-m03) Waiting for machine to stop 89/120
	I0603 11:06:55.348730   31593 main.go:141] libmachine: (ha-683480-m03) Waiting for machine to stop 90/120
	I0603 11:06:56.350294   31593 main.go:141] libmachine: (ha-683480-m03) Waiting for machine to stop 91/120
	I0603 11:06:57.351734   31593 main.go:141] libmachine: (ha-683480-m03) Waiting for machine to stop 92/120
	I0603 11:06:58.353018   31593 main.go:141] libmachine: (ha-683480-m03) Waiting for machine to stop 93/120
	I0603 11:06:59.354400   31593 main.go:141] libmachine: (ha-683480-m03) Waiting for machine to stop 94/120
	I0603 11:07:00.356004   31593 main.go:141] libmachine: (ha-683480-m03) Waiting for machine to stop 95/120
	I0603 11:07:01.357445   31593 main.go:141] libmachine: (ha-683480-m03) Waiting for machine to stop 96/120
	I0603 11:07:02.359670   31593 main.go:141] libmachine: (ha-683480-m03) Waiting for machine to stop 97/120
	I0603 11:07:03.361064   31593 main.go:141] libmachine: (ha-683480-m03) Waiting for machine to stop 98/120
	I0603 11:07:04.362612   31593 main.go:141] libmachine: (ha-683480-m03) Waiting for machine to stop 99/120
	I0603 11:07:05.364075   31593 main.go:141] libmachine: (ha-683480-m03) Waiting for machine to stop 100/120
	I0603 11:07:06.365520   31593 main.go:141] libmachine: (ha-683480-m03) Waiting for machine to stop 101/120
	I0603 11:07:07.367420   31593 main.go:141] libmachine: (ha-683480-m03) Waiting for machine to stop 102/120
	I0603 11:07:08.368646   31593 main.go:141] libmachine: (ha-683480-m03) Waiting for machine to stop 103/120
	I0603 11:07:09.369923   31593 main.go:141] libmachine: (ha-683480-m03) Waiting for machine to stop 104/120
	I0603 11:07:10.371405   31593 main.go:141] libmachine: (ha-683480-m03) Waiting for machine to stop 105/120
	I0603 11:07:11.372677   31593 main.go:141] libmachine: (ha-683480-m03) Waiting for machine to stop 106/120
	I0603 11:07:12.373998   31593 main.go:141] libmachine: (ha-683480-m03) Waiting for machine to stop 107/120
	I0603 11:07:13.375364   31593 main.go:141] libmachine: (ha-683480-m03) Waiting for machine to stop 108/120
	I0603 11:07:14.376740   31593 main.go:141] libmachine: (ha-683480-m03) Waiting for machine to stop 109/120
	I0603 11:07:15.379003   31593 main.go:141] libmachine: (ha-683480-m03) Waiting for machine to stop 110/120
	I0603 11:07:16.380433   31593 main.go:141] libmachine: (ha-683480-m03) Waiting for machine to stop 111/120
	I0603 11:07:17.381850   31593 main.go:141] libmachine: (ha-683480-m03) Waiting for machine to stop 112/120
	I0603 11:07:18.383063   31593 main.go:141] libmachine: (ha-683480-m03) Waiting for machine to stop 113/120
	I0603 11:07:19.384349   31593 main.go:141] libmachine: (ha-683480-m03) Waiting for machine to stop 114/120
	I0603 11:07:20.386370   31593 main.go:141] libmachine: (ha-683480-m03) Waiting for machine to stop 115/120
	I0603 11:07:21.388404   31593 main.go:141] libmachine: (ha-683480-m03) Waiting for machine to stop 116/120
	I0603 11:07:22.389556   31593 main.go:141] libmachine: (ha-683480-m03) Waiting for machine to stop 117/120
	I0603 11:07:23.391024   31593 main.go:141] libmachine: (ha-683480-m03) Waiting for machine to stop 118/120
	I0603 11:07:24.392391   31593 main.go:141] libmachine: (ha-683480-m03) Waiting for machine to stop 119/120
	I0603 11:07:25.392918   31593 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0603 11:07:25.392968   31593 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0603 11:07:25.395003   31593 out.go:177] 
	W0603 11:07:25.396509   31593 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0603 11:07:25.396527   31593 out.go:239] * 
	W0603 11:07:25.398537   31593 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0603 11:07:25.399865   31593 out.go:177] 

                                                
                                                
** /stderr **
ha_test.go:464: failed to run minikube stop. args "out/minikube-linux-amd64 node list -p ha-683480 -v=7 --alsologtostderr" : exit status 82
ha_test.go:467: (dbg) Run:  out/minikube-linux-amd64 start -p ha-683480 --wait=true -v=7 --alsologtostderr
E0603 11:08:35.084031   15028 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/addons-926744/client.crt: no such file or directory
E0603 11:10:19.213095   15028 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/functional-835483/client.crt: no such file or directory
ha_test.go:467: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p ha-683480 --wait=true -v=7 --alsologtostderr: exit status 80 (4m19.284576458s)

                                                
                                                
-- stdout --
	* [ha-683480] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19008
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19008-7755/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19008-7755/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	* Starting "ha-683480" primary control-plane node in "ha-683480" cluster
	* Updating the running kvm2 "ha-683480" VM ...
	* Preparing Kubernetes v1.30.1 on CRI-O 1.29.1 ...
	* Enabled addons: 
	
	* Starting "ha-683480-m02" control-plane node in "ha-683480" cluster
	* Restarting existing kvm2 VM for "ha-683480-m02" ...
	* Found network options:
	  - NO_PROXY=192.168.39.116
	* Preparing Kubernetes v1.30.1 on CRI-O 1.29.1 ...
	  - env NO_PROXY=192.168.39.116
	* Verifying Kubernetes components...
	
	* Starting "ha-683480-m03" control-plane node in "ha-683480" cluster
	* Restarting existing kvm2 VM for "ha-683480-m03" ...
	* Found network options:
	  - NO_PROXY=192.168.39.116,192.168.39.127
	* Preparing Kubernetes v1.30.1 on CRI-O 1.29.1 ...
	  - env NO_PROXY=192.168.39.116
	  - env NO_PROXY=192.168.39.116,192.168.39.127
	* Verifying Kubernetes components...
	
	* Starting "ha-683480-m04" worker node in "ha-683480" cluster
	* Restarting existing kvm2 VM for "ha-683480-m04" ...
	* Updating the running kvm2 "ha-683480-m04" VM ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0603 11:07:25.442619   32123 out.go:291] Setting OutFile to fd 1 ...
	I0603 11:07:25.442855   32123 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0603 11:07:25.442863   32123 out.go:304] Setting ErrFile to fd 2...
	I0603 11:07:25.442866   32123 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0603 11:07:25.443101   32123 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19008-7755/.minikube/bin
	I0603 11:07:25.443633   32123 out.go:298] Setting JSON to false
	I0603 11:07:25.444536   32123 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":2990,"bootTime":1717409855,"procs":186,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1060-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0603 11:07:25.444597   32123 start.go:139] virtualization: kvm guest
	I0603 11:07:25.446966   32123 out.go:177] * [ha-683480] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0603 11:07:25.448223   32123 out.go:177]   - MINIKUBE_LOCATION=19008
	I0603 11:07:25.448228   32123 notify.go:220] Checking for updates...
	I0603 11:07:25.449410   32123 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0603 11:07:25.450661   32123 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19008-7755/kubeconfig
	I0603 11:07:25.451979   32123 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19008-7755/.minikube
	I0603 11:07:25.453271   32123 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0603 11:07:25.454412   32123 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0603 11:07:25.456024   32123 config.go:182] Loaded profile config "ha-683480": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0603 11:07:25.456119   32123 driver.go:392] Setting default libvirt URI to qemu:///system
	I0603 11:07:25.456503   32123 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 11:07:25.456543   32123 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 11:07:25.477478   32123 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33881
	I0603 11:07:25.477915   32123 main.go:141] libmachine: () Calling .GetVersion
	I0603 11:07:25.478527   32123 main.go:141] libmachine: Using API Version  1
	I0603 11:07:25.478546   32123 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 11:07:25.478926   32123 main.go:141] libmachine: () Calling .GetMachineName
	I0603 11:07:25.479145   32123 main.go:141] libmachine: (ha-683480) Calling .DriverName
	I0603 11:07:25.513767   32123 out.go:177] * Using the kvm2 driver based on existing profile
	I0603 11:07:25.515068   32123 start.go:297] selected driver: kvm2
	I0603 11:07:25.515093   32123 start.go:901] validating driver "kvm2" against &{Name:ha-683480 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVer
sion:v1.30.1 ClusterName:ha-683480 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.116 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.127 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.131 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.206 Port:0 KubernetesVersion:v1.30.1 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false e
fk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:
9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0603 11:07:25.515277   32123 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0603 11:07:25.515652   32123 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0603 11:07:25.515720   32123 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19008-7755/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0603 11:07:25.531105   32123 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0603 11:07:25.531742   32123 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0603 11:07:25.531818   32123 cni.go:84] Creating CNI manager for ""
	I0603 11:07:25.531832   32123 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0603 11:07:25.531896   32123 start.go:340] cluster config:
	{Name:ha-683480 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:ha-683480 Namespace:default APIServerHAVIP:192.168.39
.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.116 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.127 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.131 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.206 Port:0 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-ti
ller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountP
ort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0603 11:07:25.532029   32123 iso.go:125] acquiring lock: {Name:mkdc8e745fc6a0fd8e502f6ad2510510ae9abf27 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0603 11:07:25.534347   32123 out.go:177] * Starting "ha-683480" primary control-plane node in "ha-683480" cluster
	I0603 11:07:25.535583   32123 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime crio
	I0603 11:07:25.535617   32123 preload.go:147] Found local preload: /home/jenkins/minikube-integration/19008-7755/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4
	I0603 11:07:25.535624   32123 cache.go:56] Caching tarball of preloaded images
	I0603 11:07:25.535711   32123 preload.go:173] Found /home/jenkins/minikube-integration/19008-7755/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0603 11:07:25.535722   32123 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on crio
	I0603 11:07:25.535838   32123 profile.go:143] Saving config to /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/ha-683480/config.json ...
	I0603 11:07:25.536024   32123 start.go:360] acquireMachinesLock for ha-683480: {Name:mk7389fd163f37e28f7d4d842bab151f7e27bc7c Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0603 11:07:25.536061   32123 start.go:364] duration metric: took 21.936µs to acquireMachinesLock for "ha-683480"
	I0603 11:07:25.536075   32123 start.go:96] Skipping create...Using existing machine configuration
	I0603 11:07:25.536082   32123 fix.go:54] fixHost starting: 
	I0603 11:07:25.536327   32123 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 11:07:25.536360   32123 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 11:07:25.550171   32123 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35679
	I0603 11:07:25.550615   32123 main.go:141] libmachine: () Calling .GetVersion
	I0603 11:07:25.551053   32123 main.go:141] libmachine: Using API Version  1
	I0603 11:07:25.551086   32123 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 11:07:25.551439   32123 main.go:141] libmachine: () Calling .GetMachineName
	I0603 11:07:25.551627   32123 main.go:141] libmachine: (ha-683480) Calling .DriverName
	I0603 11:07:25.551779   32123 main.go:141] libmachine: (ha-683480) Calling .GetState
	I0603 11:07:25.553075   32123 fix.go:112] recreateIfNeeded on ha-683480: state=Running err=<nil>
	W0603 11:07:25.553103   32123 fix.go:138] unexpected machine state, will restart: <nil>
	I0603 11:07:25.555822   32123 out.go:177] * Updating the running kvm2 "ha-683480" VM ...
	I0603 11:07:25.557278   32123 machine.go:94] provisionDockerMachine start ...
	I0603 11:07:25.557297   32123 main.go:141] libmachine: (ha-683480) Calling .DriverName
	I0603 11:07:25.557457   32123 main.go:141] libmachine: (ha-683480) Calling .GetSSHHostname
	I0603 11:07:25.559729   32123 main.go:141] libmachine: (ha-683480) DBG | domain ha-683480 has defined MAC address 52:54:00:e5:3f:6a in network mk-ha-683480
	I0603 11:07:25.560164   32123 main.go:141] libmachine: (ha-683480) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:3f:6a", ip: ""} in network mk-ha-683480: {Iface:virbr1 ExpiryTime:2024-06-03 11:56:28 +0000 UTC Type:0 Mac:52:54:00:e5:3f:6a Iaid: IPaddr:192.168.39.116 Prefix:24 Hostname:ha-683480 Clientid:01:52:54:00:e5:3f:6a}
	I0603 11:07:25.560190   32123 main.go:141] libmachine: (ha-683480) DBG | domain ha-683480 has defined IP address 192.168.39.116 and MAC address 52:54:00:e5:3f:6a in network mk-ha-683480
	I0603 11:07:25.560241   32123 main.go:141] libmachine: (ha-683480) Calling .GetSSHPort
	I0603 11:07:25.560397   32123 main.go:141] libmachine: (ha-683480) Calling .GetSSHKeyPath
	I0603 11:07:25.560552   32123 main.go:141] libmachine: (ha-683480) Calling .GetSSHKeyPath
	I0603 11:07:25.560663   32123 main.go:141] libmachine: (ha-683480) Calling .GetSSHUsername
	I0603 11:07:25.560826   32123 main.go:141] libmachine: Using SSH client type: native
	I0603 11:07:25.560998   32123 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.116 22 <nil> <nil>}
	I0603 11:07:25.561008   32123 main.go:141] libmachine: About to run SSH command:
	hostname
	I0603 11:07:25.664232   32123 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-683480
	
	I0603 11:07:25.664262   32123 main.go:141] libmachine: (ha-683480) Calling .GetMachineName
	I0603 11:07:25.664503   32123 buildroot.go:166] provisioning hostname "ha-683480"
	I0603 11:07:25.664525   32123 main.go:141] libmachine: (ha-683480) Calling .GetMachineName
	I0603 11:07:25.664710   32123 main.go:141] libmachine: (ha-683480) Calling .GetSSHHostname
	I0603 11:07:25.667431   32123 main.go:141] libmachine: (ha-683480) DBG | domain ha-683480 has defined MAC address 52:54:00:e5:3f:6a in network mk-ha-683480
	I0603 11:07:25.667816   32123 main.go:141] libmachine: (ha-683480) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:3f:6a", ip: ""} in network mk-ha-683480: {Iface:virbr1 ExpiryTime:2024-06-03 11:56:28 +0000 UTC Type:0 Mac:52:54:00:e5:3f:6a Iaid: IPaddr:192.168.39.116 Prefix:24 Hostname:ha-683480 Clientid:01:52:54:00:e5:3f:6a}
	I0603 11:07:25.667840   32123 main.go:141] libmachine: (ha-683480) DBG | domain ha-683480 has defined IP address 192.168.39.116 and MAC address 52:54:00:e5:3f:6a in network mk-ha-683480
	I0603 11:07:25.667952   32123 main.go:141] libmachine: (ha-683480) Calling .GetSSHPort
	I0603 11:07:25.668123   32123 main.go:141] libmachine: (ha-683480) Calling .GetSSHKeyPath
	I0603 11:07:25.668269   32123 main.go:141] libmachine: (ha-683480) Calling .GetSSHKeyPath
	I0603 11:07:25.668398   32123 main.go:141] libmachine: (ha-683480) Calling .GetSSHUsername
	I0603 11:07:25.668564   32123 main.go:141] libmachine: Using SSH client type: native
	I0603 11:07:25.668736   32123 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.116 22 <nil> <nil>}
	I0603 11:07:25.668760   32123 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-683480 && echo "ha-683480" | sudo tee /etc/hostname
	I0603 11:07:25.789898   32123 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-683480
	
	I0603 11:07:25.789922   32123 main.go:141] libmachine: (ha-683480) Calling .GetSSHHostname
	I0603 11:07:25.792463   32123 main.go:141] libmachine: (ha-683480) DBG | domain ha-683480 has defined MAC address 52:54:00:e5:3f:6a in network mk-ha-683480
	I0603 11:07:25.792857   32123 main.go:141] libmachine: (ha-683480) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:3f:6a", ip: ""} in network mk-ha-683480: {Iface:virbr1 ExpiryTime:2024-06-03 11:56:28 +0000 UTC Type:0 Mac:52:54:00:e5:3f:6a Iaid: IPaddr:192.168.39.116 Prefix:24 Hostname:ha-683480 Clientid:01:52:54:00:e5:3f:6a}
	I0603 11:07:25.792879   32123 main.go:141] libmachine: (ha-683480) DBG | domain ha-683480 has defined IP address 192.168.39.116 and MAC address 52:54:00:e5:3f:6a in network mk-ha-683480
	I0603 11:07:25.793043   32123 main.go:141] libmachine: (ha-683480) Calling .GetSSHPort
	I0603 11:07:25.793241   32123 main.go:141] libmachine: (ha-683480) Calling .GetSSHKeyPath
	I0603 11:07:25.793390   32123 main.go:141] libmachine: (ha-683480) Calling .GetSSHKeyPath
	I0603 11:07:25.793523   32123 main.go:141] libmachine: (ha-683480) Calling .GetSSHUsername
	I0603 11:07:25.793674   32123 main.go:141] libmachine: Using SSH client type: native
	I0603 11:07:25.793830   32123 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.116 22 <nil> <nil>}
	I0603 11:07:25.793845   32123 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-683480' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-683480/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-683480' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0603 11:07:25.895742   32123 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0603 11:07:25.895783   32123 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19008-7755/.minikube CaCertPath:/home/jenkins/minikube-integration/19008-7755/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19008-7755/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19008-7755/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19008-7755/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19008-7755/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19008-7755/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19008-7755/.minikube}
	I0603 11:07:25.895804   32123 buildroot.go:174] setting up certificates
	I0603 11:07:25.895816   32123 provision.go:84] configureAuth start
	I0603 11:07:25.895832   32123 main.go:141] libmachine: (ha-683480) Calling .GetMachineName
	I0603 11:07:25.896116   32123 main.go:141] libmachine: (ha-683480) Calling .GetIP
	I0603 11:07:25.898621   32123 main.go:141] libmachine: (ha-683480) DBG | domain ha-683480 has defined MAC address 52:54:00:e5:3f:6a in network mk-ha-683480
	I0603 11:07:25.898971   32123 main.go:141] libmachine: (ha-683480) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:3f:6a", ip: ""} in network mk-ha-683480: {Iface:virbr1 ExpiryTime:2024-06-03 11:56:28 +0000 UTC Type:0 Mac:52:54:00:e5:3f:6a Iaid: IPaddr:192.168.39.116 Prefix:24 Hostname:ha-683480 Clientid:01:52:54:00:e5:3f:6a}
	I0603 11:07:25.898995   32123 main.go:141] libmachine: (ha-683480) DBG | domain ha-683480 has defined IP address 192.168.39.116 and MAC address 52:54:00:e5:3f:6a in network mk-ha-683480
	I0603 11:07:25.899148   32123 main.go:141] libmachine: (ha-683480) Calling .GetSSHHostname
	I0603 11:07:25.901289   32123 main.go:141] libmachine: (ha-683480) DBG | domain ha-683480 has defined MAC address 52:54:00:e5:3f:6a in network mk-ha-683480
	I0603 11:07:25.901702   32123 main.go:141] libmachine: (ha-683480) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:3f:6a", ip: ""} in network mk-ha-683480: {Iface:virbr1 ExpiryTime:2024-06-03 11:56:28 +0000 UTC Type:0 Mac:52:54:00:e5:3f:6a Iaid: IPaddr:192.168.39.116 Prefix:24 Hostname:ha-683480 Clientid:01:52:54:00:e5:3f:6a}
	I0603 11:07:25.901727   32123 main.go:141] libmachine: (ha-683480) DBG | domain ha-683480 has defined IP address 192.168.39.116 and MAC address 52:54:00:e5:3f:6a in network mk-ha-683480
	I0603 11:07:25.901852   32123 provision.go:143] copyHostCerts
	I0603 11:07:25.901884   32123 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19008-7755/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19008-7755/.minikube/ca.pem
	I0603 11:07:25.901920   32123 exec_runner.go:144] found /home/jenkins/minikube-integration/19008-7755/.minikube/ca.pem, removing ...
	I0603 11:07:25.901937   32123 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19008-7755/.minikube/ca.pem
	I0603 11:07:25.902006   32123 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19008-7755/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19008-7755/.minikube/ca.pem (1082 bytes)
	I0603 11:07:25.902090   32123 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19008-7755/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19008-7755/.minikube/cert.pem
	I0603 11:07:25.902108   32123 exec_runner.go:144] found /home/jenkins/minikube-integration/19008-7755/.minikube/cert.pem, removing ...
	I0603 11:07:25.902113   32123 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19008-7755/.minikube/cert.pem
	I0603 11:07:25.902139   32123 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19008-7755/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19008-7755/.minikube/cert.pem (1123 bytes)
	I0603 11:07:25.902179   32123 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19008-7755/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19008-7755/.minikube/key.pem
	I0603 11:07:25.902197   32123 exec_runner.go:144] found /home/jenkins/minikube-integration/19008-7755/.minikube/key.pem, removing ...
	I0603 11:07:25.902206   32123 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19008-7755/.minikube/key.pem
	I0603 11:07:25.902235   32123 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19008-7755/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19008-7755/.minikube/key.pem (1679 bytes)
	I0603 11:07:25.902300   32123 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19008-7755/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19008-7755/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19008-7755/.minikube/certs/ca-key.pem org=jenkins.ha-683480 san=[127.0.0.1 192.168.39.116 ha-683480 localhost minikube]
	I0603 11:07:26.059416   32123 provision.go:177] copyRemoteCerts
	I0603 11:07:26.059473   32123 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0603 11:07:26.059498   32123 main.go:141] libmachine: (ha-683480) Calling .GetSSHHostname
	I0603 11:07:26.062155   32123 main.go:141] libmachine: (ha-683480) DBG | domain ha-683480 has defined MAC address 52:54:00:e5:3f:6a in network mk-ha-683480
	I0603 11:07:26.062608   32123 main.go:141] libmachine: (ha-683480) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:3f:6a", ip: ""} in network mk-ha-683480: {Iface:virbr1 ExpiryTime:2024-06-03 11:56:28 +0000 UTC Type:0 Mac:52:54:00:e5:3f:6a Iaid: IPaddr:192.168.39.116 Prefix:24 Hostname:ha-683480 Clientid:01:52:54:00:e5:3f:6a}
	I0603 11:07:26.062638   32123 main.go:141] libmachine: (ha-683480) DBG | domain ha-683480 has defined IP address 192.168.39.116 and MAC address 52:54:00:e5:3f:6a in network mk-ha-683480
	I0603 11:07:26.062833   32123 main.go:141] libmachine: (ha-683480) Calling .GetSSHPort
	I0603 11:07:26.062994   32123 main.go:141] libmachine: (ha-683480) Calling .GetSSHKeyPath
	I0603 11:07:26.063165   32123 main.go:141] libmachine: (ha-683480) Calling .GetSSHUsername
	I0603 11:07:26.063290   32123 sshutil.go:53] new ssh client: &{IP:192.168.39.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19008-7755/.minikube/machines/ha-683480/id_rsa Username:docker}
	I0603 11:07:26.146746   32123 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19008-7755/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0603 11:07:26.146810   32123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0603 11:07:26.174269   32123 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19008-7755/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0603 11:07:26.174353   32123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I0603 11:07:26.199835   32123 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19008-7755/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0603 11:07:26.199895   32123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0603 11:07:26.226453   32123 provision.go:87] duration metric: took 330.620757ms to configureAuth
	I0603 11:07:26.226484   32123 buildroot.go:189] setting minikube options for container-runtime
	I0603 11:07:26.226787   32123 config.go:182] Loaded profile config "ha-683480": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0603 11:07:26.226897   32123 main.go:141] libmachine: (ha-683480) Calling .GetSSHHostname
	I0603 11:07:26.229443   32123 main.go:141] libmachine: (ha-683480) DBG | domain ha-683480 has defined MAC address 52:54:00:e5:3f:6a in network mk-ha-683480
	I0603 11:07:26.229819   32123 main.go:141] libmachine: (ha-683480) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:3f:6a", ip: ""} in network mk-ha-683480: {Iface:virbr1 ExpiryTime:2024-06-03 11:56:28 +0000 UTC Type:0 Mac:52:54:00:e5:3f:6a Iaid: IPaddr:192.168.39.116 Prefix:24 Hostname:ha-683480 Clientid:01:52:54:00:e5:3f:6a}
	I0603 11:07:26.229840   32123 main.go:141] libmachine: (ha-683480) DBG | domain ha-683480 has defined IP address 192.168.39.116 and MAC address 52:54:00:e5:3f:6a in network mk-ha-683480
	I0603 11:07:26.230039   32123 main.go:141] libmachine: (ha-683480) Calling .GetSSHPort
	I0603 11:07:26.230233   32123 main.go:141] libmachine: (ha-683480) Calling .GetSSHKeyPath
	I0603 11:07:26.230407   32123 main.go:141] libmachine: (ha-683480) Calling .GetSSHKeyPath
	I0603 11:07:26.230524   32123 main.go:141] libmachine: (ha-683480) Calling .GetSSHUsername
	I0603 11:07:26.230689   32123 main.go:141] libmachine: Using SSH client type: native
	I0603 11:07:26.230900   32123 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.116 22 <nil> <nil>}
	I0603 11:07:26.230931   32123 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0603 11:08:57.164538   32123 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0603 11:08:57.164576   32123 machine.go:97] duration metric: took 1m31.607286329s to provisionDockerMachine
	I0603 11:08:57.164592   32123 start.go:293] postStartSetup for "ha-683480" (driver="kvm2")
	I0603 11:08:57.164608   32123 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0603 11:08:57.164635   32123 main.go:141] libmachine: (ha-683480) Calling .DriverName
	I0603 11:08:57.165008   32123 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0603 11:08:57.165037   32123 main.go:141] libmachine: (ha-683480) Calling .GetSSHHostname
	I0603 11:08:57.168289   32123 main.go:141] libmachine: (ha-683480) DBG | domain ha-683480 has defined MAC address 52:54:00:e5:3f:6a in network mk-ha-683480
	I0603 11:08:57.168694   32123 main.go:141] libmachine: (ha-683480) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:3f:6a", ip: ""} in network mk-ha-683480: {Iface:virbr1 ExpiryTime:2024-06-03 11:56:28 +0000 UTC Type:0 Mac:52:54:00:e5:3f:6a Iaid: IPaddr:192.168.39.116 Prefix:24 Hostname:ha-683480 Clientid:01:52:54:00:e5:3f:6a}
	I0603 11:08:57.168717   32123 main.go:141] libmachine: (ha-683480) DBG | domain ha-683480 has defined IP address 192.168.39.116 and MAC address 52:54:00:e5:3f:6a in network mk-ha-683480
	I0603 11:08:57.168888   32123 main.go:141] libmachine: (ha-683480) Calling .GetSSHPort
	I0603 11:08:57.169136   32123 main.go:141] libmachine: (ha-683480) Calling .GetSSHKeyPath
	I0603 11:08:57.169285   32123 main.go:141] libmachine: (ha-683480) Calling .GetSSHUsername
	I0603 11:08:57.169407   32123 sshutil.go:53] new ssh client: &{IP:192.168.39.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19008-7755/.minikube/machines/ha-683480/id_rsa Username:docker}
	I0603 11:08:57.251439   32123 ssh_runner.go:195] Run: cat /etc/os-release
	I0603 11:08:57.255917   32123 info.go:137] Remote host: Buildroot 2023.02.9
	I0603 11:08:57.255939   32123 filesync.go:126] Scanning /home/jenkins/minikube-integration/19008-7755/.minikube/addons for local assets ...
	I0603 11:08:57.255991   32123 filesync.go:126] Scanning /home/jenkins/minikube-integration/19008-7755/.minikube/files for local assets ...
	I0603 11:08:57.256063   32123 filesync.go:149] local asset: /home/jenkins/minikube-integration/19008-7755/.minikube/files/etc/ssl/certs/150282.pem -> 150282.pem in /etc/ssl/certs
	I0603 11:08:57.256072   32123 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19008-7755/.minikube/files/etc/ssl/certs/150282.pem -> /etc/ssl/certs/150282.pem
	I0603 11:08:57.256151   32123 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0603 11:08:57.266429   32123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/files/etc/ssl/certs/150282.pem --> /etc/ssl/certs/150282.pem (1708 bytes)
	I0603 11:08:57.290924   32123 start.go:296] duration metric: took 126.319085ms for postStartSetup
	I0603 11:08:57.290966   32123 main.go:141] libmachine: (ha-683480) Calling .DriverName
	I0603 11:08:57.291281   32123 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0603 11:08:57.291304   32123 main.go:141] libmachine: (ha-683480) Calling .GetSSHHostname
	I0603 11:08:57.293927   32123 main.go:141] libmachine: (ha-683480) DBG | domain ha-683480 has defined MAC address 52:54:00:e5:3f:6a in network mk-ha-683480
	I0603 11:08:57.294426   32123 main.go:141] libmachine: (ha-683480) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:3f:6a", ip: ""} in network mk-ha-683480: {Iface:virbr1 ExpiryTime:2024-06-03 11:56:28 +0000 UTC Type:0 Mac:52:54:00:e5:3f:6a Iaid: IPaddr:192.168.39.116 Prefix:24 Hostname:ha-683480 Clientid:01:52:54:00:e5:3f:6a}
	I0603 11:08:57.294457   32123 main.go:141] libmachine: (ha-683480) DBG | domain ha-683480 has defined IP address 192.168.39.116 and MAC address 52:54:00:e5:3f:6a in network mk-ha-683480
	I0603 11:08:57.294611   32123 main.go:141] libmachine: (ha-683480) Calling .GetSSHPort
	I0603 11:08:57.294774   32123 main.go:141] libmachine: (ha-683480) Calling .GetSSHKeyPath
	I0603 11:08:57.294937   32123 main.go:141] libmachine: (ha-683480) Calling .GetSSHUsername
	I0603 11:08:57.295094   32123 sshutil.go:53] new ssh client: &{IP:192.168.39.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19008-7755/.minikube/machines/ha-683480/id_rsa Username:docker}
	W0603 11:08:57.373411   32123 fix.go:99] cannot read backup folder, skipping restore: read dir: sudo ls --almost-all -1 /var/lib/minikube/backup: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/backup': No such file or directory
	I0603 11:08:57.373439   32123 fix.go:56] duration metric: took 1m31.837357572s for fixHost
	I0603 11:08:57.373460   32123 main.go:141] libmachine: (ha-683480) Calling .GetSSHHostname
	I0603 11:08:57.375924   32123 main.go:141] libmachine: (ha-683480) DBG | domain ha-683480 has defined MAC address 52:54:00:e5:3f:6a in network mk-ha-683480
	I0603 11:08:57.376280   32123 main.go:141] libmachine: (ha-683480) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:3f:6a", ip: ""} in network mk-ha-683480: {Iface:virbr1 ExpiryTime:2024-06-03 11:56:28 +0000 UTC Type:0 Mac:52:54:00:e5:3f:6a Iaid: IPaddr:192.168.39.116 Prefix:24 Hostname:ha-683480 Clientid:01:52:54:00:e5:3f:6a}
	I0603 11:08:57.376299   32123 main.go:141] libmachine: (ha-683480) DBG | domain ha-683480 has defined IP address 192.168.39.116 and MAC address 52:54:00:e5:3f:6a in network mk-ha-683480
	I0603 11:08:57.376459   32123 main.go:141] libmachine: (ha-683480) Calling .GetSSHPort
	I0603 11:08:57.376624   32123 main.go:141] libmachine: (ha-683480) Calling .GetSSHKeyPath
	I0603 11:08:57.376774   32123 main.go:141] libmachine: (ha-683480) Calling .GetSSHKeyPath
	I0603 11:08:57.376895   32123 main.go:141] libmachine: (ha-683480) Calling .GetSSHUsername
	I0603 11:08:57.377010   32123 main.go:141] libmachine: Using SSH client type: native
	I0603 11:08:57.377178   32123 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.116 22 <nil> <nil>}
	I0603 11:08:57.377187   32123 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0603 11:08:57.476064   32123 main.go:141] libmachine: SSH cmd err, output: <nil>: 1717412937.450872254
	
	I0603 11:08:57.476091   32123 fix.go:216] guest clock: 1717412937.450872254
	I0603 11:08:57.476097   32123 fix.go:229] Guest: 2024-06-03 11:08:57.450872254 +0000 UTC Remote: 2024-06-03 11:08:57.373446324 +0000 UTC m=+91.964564811 (delta=77.42593ms)
	I0603 11:08:57.476121   32123 fix.go:200] guest clock delta is within tolerance: 77.42593ms
	I0603 11:08:57.476126   32123 start.go:83] releasing machines lock for "ha-683480", held for 1m31.940055627s
	I0603 11:08:57.476143   32123 main.go:141] libmachine: (ha-683480) Calling .DriverName
	I0603 11:08:57.476451   32123 main.go:141] libmachine: (ha-683480) Calling .GetIP
	I0603 11:08:57.478829   32123 main.go:141] libmachine: (ha-683480) DBG | domain ha-683480 has defined MAC address 52:54:00:e5:3f:6a in network mk-ha-683480
	I0603 11:08:57.479315   32123 main.go:141] libmachine: (ha-683480) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:3f:6a", ip: ""} in network mk-ha-683480: {Iface:virbr1 ExpiryTime:2024-06-03 11:56:28 +0000 UTC Type:0 Mac:52:54:00:e5:3f:6a Iaid: IPaddr:192.168.39.116 Prefix:24 Hostname:ha-683480 Clientid:01:52:54:00:e5:3f:6a}
	I0603 11:08:57.479344   32123 main.go:141] libmachine: (ha-683480) DBG | domain ha-683480 has defined IP address 192.168.39.116 and MAC address 52:54:00:e5:3f:6a in network mk-ha-683480
	I0603 11:08:57.479439   32123 main.go:141] libmachine: (ha-683480) Calling .DriverName
	I0603 11:08:57.480003   32123 main.go:141] libmachine: (ha-683480) Calling .DriverName
	I0603 11:08:57.480192   32123 main.go:141] libmachine: (ha-683480) Calling .DriverName
	I0603 11:08:57.480283   32123 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0603 11:08:57.480338   32123 main.go:141] libmachine: (ha-683480) Calling .GetSSHHostname
	I0603 11:08:57.480387   32123 ssh_runner.go:195] Run: cat /version.json
	I0603 11:08:57.480410   32123 main.go:141] libmachine: (ha-683480) Calling .GetSSHHostname
	I0603 11:08:57.482838   32123 main.go:141] libmachine: (ha-683480) DBG | domain ha-683480 has defined MAC address 52:54:00:e5:3f:6a in network mk-ha-683480
	I0603 11:08:57.483029   32123 main.go:141] libmachine: (ha-683480) DBG | domain ha-683480 has defined MAC address 52:54:00:e5:3f:6a in network mk-ha-683480
	I0603 11:08:57.483284   32123 main.go:141] libmachine: (ha-683480) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:3f:6a", ip: ""} in network mk-ha-683480: {Iface:virbr1 ExpiryTime:2024-06-03 11:56:28 +0000 UTC Type:0 Mac:52:54:00:e5:3f:6a Iaid: IPaddr:192.168.39.116 Prefix:24 Hostname:ha-683480 Clientid:01:52:54:00:e5:3f:6a}
	I0603 11:08:57.483308   32123 main.go:141] libmachine: (ha-683480) DBG | domain ha-683480 has defined IP address 192.168.39.116 and MAC address 52:54:00:e5:3f:6a in network mk-ha-683480
	I0603 11:08:57.483488   32123 main.go:141] libmachine: (ha-683480) Calling .GetSSHPort
	I0603 11:08:57.483488   32123 main.go:141] libmachine: (ha-683480) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:3f:6a", ip: ""} in network mk-ha-683480: {Iface:virbr1 ExpiryTime:2024-06-03 11:56:28 +0000 UTC Type:0 Mac:52:54:00:e5:3f:6a Iaid: IPaddr:192.168.39.116 Prefix:24 Hostname:ha-683480 Clientid:01:52:54:00:e5:3f:6a}
	I0603 11:08:57.483544   32123 main.go:141] libmachine: (ha-683480) DBG | domain ha-683480 has defined IP address 192.168.39.116 and MAC address 52:54:00:e5:3f:6a in network mk-ha-683480
	I0603 11:08:57.483621   32123 main.go:141] libmachine: (ha-683480) Calling .GetSSHPort
	I0603 11:08:57.483692   32123 main.go:141] libmachine: (ha-683480) Calling .GetSSHKeyPath
	I0603 11:08:57.483755   32123 main.go:141] libmachine: (ha-683480) Calling .GetSSHKeyPath
	I0603 11:08:57.483826   32123 main.go:141] libmachine: (ha-683480) Calling .GetSSHUsername
	I0603 11:08:57.483891   32123 main.go:141] libmachine: (ha-683480) Calling .GetSSHUsername
	I0603 11:08:57.484014   32123 sshutil.go:53] new ssh client: &{IP:192.168.39.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19008-7755/.minikube/machines/ha-683480/id_rsa Username:docker}
	I0603 11:08:57.483975   32123 sshutil.go:53] new ssh client: &{IP:192.168.39.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19008-7755/.minikube/machines/ha-683480/id_rsa Username:docker}
	I0603 11:08:57.561311   32123 ssh_runner.go:195] Run: systemctl --version
	I0603 11:08:57.583380   32123 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0603 11:08:57.752344   32123 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0603 11:08:57.758604   32123 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0603 11:08:57.758677   32123 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0603 11:08:57.768166   32123 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0603 11:08:57.768192   32123 start.go:494] detecting cgroup driver to use...
	I0603 11:08:57.768244   32123 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0603 11:08:57.784730   32123 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0603 11:08:57.799955   32123 docker.go:217] disabling cri-docker service (if available) ...
	I0603 11:08:57.800006   32123 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0603 11:08:57.813623   32123 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0603 11:08:57.851455   32123 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0603 11:08:57.999998   32123 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0603 11:08:58.161448   32123 docker.go:233] disabling docker service ...
	I0603 11:08:58.161527   32123 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0603 11:08:58.178129   32123 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0603 11:08:58.192081   32123 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0603 11:08:58.341394   32123 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0603 11:08:58.490223   32123 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0603 11:08:58.504113   32123 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0603 11:08:58.524449   32123 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0603 11:08:58.524509   32123 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 11:08:58.535157   32123 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0603 11:08:58.535218   32123 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 11:08:58.545448   32123 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 11:08:58.556068   32123 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 11:08:58.566406   32123 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0603 11:08:58.577992   32123 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 11:08:58.588771   32123 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 11:08:58.599846   32123 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 11:08:58.611253   32123 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0603 11:08:58.621549   32123 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0603 11:08:58.631028   32123 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0603 11:08:58.773906   32123 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0603 11:09:00.429585   32123 ssh_runner.go:235] Completed: sudo systemctl restart crio: (1.655639068s)
	I0603 11:09:00.429609   32123 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0603 11:09:00.429650   32123 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0603 11:09:00.435134   32123 start.go:562] Will wait 60s for crictl version
	I0603 11:09:00.435178   32123 ssh_runner.go:195] Run: which crictl
	I0603 11:09:00.438893   32123 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0603 11:09:00.479635   32123 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0603 11:09:00.479716   32123 ssh_runner.go:195] Run: crio --version
	I0603 11:09:00.508784   32123 ssh_runner.go:195] Run: crio --version
	I0603 11:09:00.540764   32123 out.go:177] * Preparing Kubernetes v1.30.1 on CRI-O 1.29.1 ...
	I0603 11:09:00.542271   32123 main.go:141] libmachine: (ha-683480) Calling .GetIP
	I0603 11:09:00.544914   32123 main.go:141] libmachine: (ha-683480) DBG | domain ha-683480 has defined MAC address 52:54:00:e5:3f:6a in network mk-ha-683480
	I0603 11:09:00.545320   32123 main.go:141] libmachine: (ha-683480) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:3f:6a", ip: ""} in network mk-ha-683480: {Iface:virbr1 ExpiryTime:2024-06-03 11:56:28 +0000 UTC Type:0 Mac:52:54:00:e5:3f:6a Iaid: IPaddr:192.168.39.116 Prefix:24 Hostname:ha-683480 Clientid:01:52:54:00:e5:3f:6a}
	I0603 11:09:00.545352   32123 main.go:141] libmachine: (ha-683480) DBG | domain ha-683480 has defined IP address 192.168.39.116 and MAC address 52:54:00:e5:3f:6a in network mk-ha-683480
	I0603 11:09:00.545521   32123 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0603 11:09:00.550299   32123 kubeadm.go:877] updating cluster {Name:ha-683480 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 Cl
usterName:ha-683480 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.116 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.127 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.131 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.206 Port:0 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false fr
eshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L Mo
untGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0603 11:09:00.550441   32123 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime crio
	I0603 11:09:00.550491   32123 ssh_runner.go:195] Run: sudo crictl images --output json
	I0603 11:09:00.600204   32123 crio.go:514] all images are preloaded for cri-o runtime.
	I0603 11:09:00.600227   32123 crio.go:433] Images already preloaded, skipping extraction
	I0603 11:09:00.600277   32123 ssh_runner.go:195] Run: sudo crictl images --output json
	I0603 11:09:00.636579   32123 crio.go:514] all images are preloaded for cri-o runtime.
	I0603 11:09:00.636599   32123 cache_images.go:84] Images are preloaded, skipping loading
	I0603 11:09:00.636614   32123 kubeadm.go:928] updating node { 192.168.39.116 8443 v1.30.1 crio true true} ...
	I0603 11:09:00.636714   32123 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-683480 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.116
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.1 ClusterName:ha-683480 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0603 11:09:00.636779   32123 ssh_runner.go:195] Run: crio config
	I0603 11:09:00.686623   32123 cni.go:84] Creating CNI manager for ""
	I0603 11:09:00.686644   32123 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0603 11:09:00.686656   32123 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0603 11:09:00.686688   32123 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.116 APIServerPort:8443 KubernetesVersion:v1.30.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-683480 NodeName:ha-683480 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.116"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.116 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernete
s/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0603 11:09:00.686867   32123 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.116
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-683480"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.116
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.116"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0603 11:09:00.686895   32123 kube-vip.go:115] generating kube-vip config ...
	I0603 11:09:00.686945   32123 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0603 11:09:00.699149   32123 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0603 11:09:00.699266   32123 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0603 11:09:00.699330   32123 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.1
	I0603 11:09:00.709452   32123 binaries.go:44] Found k8s binaries, skipping transfer
	I0603 11:09:00.709523   32123 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0603 11:09:00.719357   32123 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I0603 11:09:00.737341   32123 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0603 11:09:00.753811   32123 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2153 bytes)
	I0603 11:09:00.770330   32123 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0603 11:09:00.788590   32123 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0603 11:09:00.792380   32123 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0603 11:09:00.938633   32123 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0603 11:09:00.954663   32123 certs.go:68] Setting up /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/ha-683480 for IP: 192.168.39.116
	I0603 11:09:00.954680   32123 certs.go:194] generating shared ca certs ...
	I0603 11:09:00.954695   32123 certs.go:226] acquiring lock for ca certs: {Name:mk50092df3bdecbf959926adc377ee06b75aacd9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 11:09:00.954853   32123 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19008-7755/.minikube/ca.key
	I0603 11:09:00.954909   32123 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19008-7755/.minikube/proxy-client-ca.key
	I0603 11:09:00.954920   32123 certs.go:256] generating profile certs ...
	I0603 11:09:00.954999   32123 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/ha-683480/client.key
	I0603 11:09:00.955025   32123 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/ha-683480/apiserver.key.e3f31f3b
	I0603 11:09:00.955066   32123 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/ha-683480/apiserver.crt.e3f31f3b with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.116 192.168.39.127 192.168.39.131 192.168.39.254]
	I0603 11:09:01.074478   32123 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/ha-683480/apiserver.crt.e3f31f3b ...
	I0603 11:09:01.074507   32123 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/ha-683480/apiserver.crt.e3f31f3b: {Name:mk90aaec59622d5605c25e50123cffa72ad4fa74 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 11:09:01.074671   32123 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/ha-683480/apiserver.key.e3f31f3b ...
	I0603 11:09:01.074682   32123 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/ha-683480/apiserver.key.e3f31f3b: {Name:mke0afd6700871b17032b676d43a247d77a3697b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 11:09:01.074747   32123 certs.go:381] copying /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/ha-683480/apiserver.crt.e3f31f3b -> /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/ha-683480/apiserver.crt
	I0603 11:09:01.074893   32123 certs.go:385] copying /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/ha-683480/apiserver.key.e3f31f3b -> /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/ha-683480/apiserver.key
	I0603 11:09:01.075011   32123 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/ha-683480/proxy-client.key
	I0603 11:09:01.075026   32123 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19008-7755/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0603 11:09:01.075095   32123 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19008-7755/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0603 11:09:01.075116   32123 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19008-7755/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0603 11:09:01.075128   32123 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19008-7755/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0603 11:09:01.075141   32123 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/ha-683480/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0603 11:09:01.075153   32123 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/ha-683480/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0603 11:09:01.075165   32123 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/ha-683480/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0603 11:09:01.075177   32123 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/ha-683480/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0603 11:09:01.075228   32123 certs.go:484] found cert: /home/jenkins/minikube-integration/19008-7755/.minikube/certs/15028.pem (1338 bytes)
	W0603 11:09:01.075265   32123 certs.go:480] ignoring /home/jenkins/minikube-integration/19008-7755/.minikube/certs/15028_empty.pem, impossibly tiny 0 bytes
	I0603 11:09:01.075274   32123 certs.go:484] found cert: /home/jenkins/minikube-integration/19008-7755/.minikube/certs/ca-key.pem (1679 bytes)
	I0603 11:09:01.075293   32123 certs.go:484] found cert: /home/jenkins/minikube-integration/19008-7755/.minikube/certs/ca.pem (1082 bytes)
	I0603 11:09:01.075314   32123 certs.go:484] found cert: /home/jenkins/minikube-integration/19008-7755/.minikube/certs/cert.pem (1123 bytes)
	I0603 11:09:01.075334   32123 certs.go:484] found cert: /home/jenkins/minikube-integration/19008-7755/.minikube/certs/key.pem (1679 bytes)
	I0603 11:09:01.075369   32123 certs.go:484] found cert: /home/jenkins/minikube-integration/19008-7755/.minikube/files/etc/ssl/certs/150282.pem (1708 bytes)
	I0603 11:09:01.075397   32123 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19008-7755/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0603 11:09:01.075412   32123 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19008-7755/.minikube/certs/15028.pem -> /usr/share/ca-certificates/15028.pem
	I0603 11:09:01.075423   32123 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19008-7755/.minikube/files/etc/ssl/certs/150282.pem -> /usr/share/ca-certificates/150282.pem
	I0603 11:09:01.075983   32123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0603 11:09:01.101929   32123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0603 11:09:01.126780   32123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0603 11:09:01.151427   32123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0603 11:09:01.175069   32123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/ha-683480/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0603 11:09:01.198877   32123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/ha-683480/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0603 11:09:01.221819   32123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/ha-683480/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0603 11:09:01.245043   32123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/ha-683480/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0603 11:09:01.268520   32123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0603 11:09:01.292182   32123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/certs/15028.pem --> /usr/share/ca-certificates/15028.pem (1338 bytes)
	I0603 11:09:01.316481   32123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/files/etc/ssl/certs/150282.pem --> /usr/share/ca-certificates/150282.pem (1708 bytes)
	I0603 11:09:01.340006   32123 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0603 11:09:01.356593   32123 ssh_runner.go:195] Run: openssl version
	I0603 11:09:01.362366   32123 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/150282.pem && ln -fs /usr/share/ca-certificates/150282.pem /etc/ssl/certs/150282.pem"
	I0603 11:09:01.373561   32123 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/150282.pem
	I0603 11:09:01.377979   32123 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jun  3 10:51 /usr/share/ca-certificates/150282.pem
	I0603 11:09:01.378028   32123 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/150282.pem
	I0603 11:09:01.383817   32123 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/150282.pem /etc/ssl/certs/3ec20f2e.0"
	I0603 11:09:01.393943   32123 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0603 11:09:01.404966   32123 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0603 11:09:01.409235   32123 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jun  3 10:39 /usr/share/ca-certificates/minikubeCA.pem
	I0603 11:09:01.409284   32123 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0603 11:09:01.414756   32123 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0603 11:09:01.425087   32123 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15028.pem && ln -fs /usr/share/ca-certificates/15028.pem /etc/ssl/certs/15028.pem"
	I0603 11:09:01.436313   32123 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15028.pem
	I0603 11:09:01.441074   32123 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jun  3 10:51 /usr/share/ca-certificates/15028.pem
	I0603 11:09:01.441123   32123 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15028.pem
	I0603 11:09:01.446671   32123 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/15028.pem /etc/ssl/certs/51391683.0"
	I0603 11:09:01.456214   32123 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0603 11:09:01.460571   32123 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0603 11:09:01.466138   32123 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0603 11:09:01.471498   32123 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0603 11:09:01.476939   32123 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0603 11:09:01.482385   32123 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0603 11:09:01.487689   32123 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0603 11:09:01.493220   32123 kubeadm.go:391] StartCluster: {Name:ha-683480 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 Clust
erName:ha-683480 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.116 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.127 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.131 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.206 Port:0 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false fresh
pod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L Mount
GID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0603 11:09:01.493322   32123 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0603 11:09:01.493398   32123 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0603 11:09:01.531471   32123 cri.go:89] found id: "f5e2a3e9cad2d3850b8c7cc462cbf093f62660cc5ed878de3fb697df8f7e849d"
	I0603 11:09:01.531494   32123 cri.go:89] found id: "0a2affa40fe5e43b29d1f89794f211acafce31faab220ad3254ea3ae9b81455e"
	I0603 11:09:01.531498   32123 cri.go:89] found id: "f1ac445f3c0b1f52f27caee3ee4ec90408d1b4670e8e93efdec8e3902e0de9b8"
	I0603 11:09:01.531500   32123 cri.go:89] found id: "9c8a6029966c17e71158a2045e39b094dfec93e361d3cd11049c550057d16295"
	I0603 11:09:01.531503   32123 cri.go:89] found id: "b5e9b65b02107aa343d9bd2938c82d12641166c15c0364265fb74b1a00b58a60"
	I0603 11:09:01.531507   32123 cri.go:89] found id: "fdbecc258023e10eac66da5599945eae2f7f8735769b825a69aea8b2effce668"
	I0603 11:09:01.531509   32123 cri.go:89] found id: "aa5e3aca86502907c8d16e6a2327b8f4298b6076617819ceed2b250ae9b24fe8"
	I0603 11:09:01.531512   32123 cri.go:89] found id: "995fa288cd9162aa7fa350ae7a02800593a524c7300a6fa984b62ba4b928891b"
	I0603 11:09:01.531514   32123 cri.go:89] found id: "bcb102231e3a6bc3ea0cc39665baaebb0a97c42874b6cd34e86c04e87532df4f"
	I0603 11:09:01.531520   32123 cri.go:89] found id: "2542929b8eaa1ecd8c858dbb7e4812ddb5121109c3c92127fa7eaae86849ebda"
	I0603 11:09:01.531526   32123 cri.go:89] found id: "3e27550ee88e8dcb6316daece49f9840028efa3091db03e5549e1e3dbbd8ad59"
	I0603 11:09:01.531530   32123 cri.go:89] found id: "c282307764128f62fdee736d5e1ecddfbca0ae7ae2f78b7a78cbdb2dcede8556"
	I0603 11:09:01.531535   32123 cri.go:89] found id: "09fff5459f24c748a0e085f496bf2b65db572d97be0afe906f05511398bdb0ad"
	I0603 11:09:01.531539   32123 cri.go:89] found id: "200682c1dc43f01036807986e0c3bfe0b422726ec352be0df5e42fa79426ed79"
	I0603 11:09:01.531545   32123 cri.go:89] found id: ""
	I0603 11:09:01.531584   32123 ssh_runner.go:195] Run: sudo runc list -f json

                                                
                                                
** /stderr **
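The stderr log above walks through minikube re-provisioning the primary control-plane node before any kubeadm work: it stops docker and cri-docker, points crictl at the CRI-O socket, rewrites /etc/crio/crio.conf.d/02-crio.conf for the registry.k8s.io/pause:3.9 image and the cgroupfs cgroup manager, then restarts crio. A minimal sketch for inspecting that resulting runtime state on the node, assuming the profile name (ha-683480) and the paths taken from this run:

    # Confirm CRI-O is active and check its reported version (mirrors the crictl/crio calls in the log).
    $ minikube ssh -p ha-683480 -- sudo systemctl is-active crio
    $ minikube ssh -p ha-683480 -- sudo crictl version
    # Verify the pause image and cgroup manager that the sed edits above should have produced.
    $ minikube ssh -p ha-683480 -- sudo cat /etc/crio/crio.conf.d/02-crio.conf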
ha_test.go:469: failed to run minikube start. args "out/minikube-linux-amd64 node list -p ha-683480 -v=7 --alsologtostderr" : exit status 80
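Because this is an HA profile, the stderr log also generated a kube-vip static-pod manifest (written to /etc/kubernetes/manifests/kube-vip.yaml) that advertises the control-plane VIP 192.168.39.254 on port 8443 with load-balancing enabled. A hedged way to check whether that VIP actually came back after the restart, using only values shown in the log (the static-pod name kube-vip-ha-683480 is an assumption based on the usual manifest-name plus node-name convention):

    # The kubectl context is named after the minikube profile.
    $ kubectl --context ha-683480 -n kube-system get pod kube-vip-ha-683480 -o wide
    # vip_interface is eth0 in the generated config, so the VIP should appear on eth0 of the leader.
    $ minikube ssh -p ha-683480 -- ip addr show eth0
    # /healthz on the VIP should answer once the apiserver is reachable through kube-vip.
    $ curl -k https://192.168.39.254:8443/healthz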
ha_test.go:472: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-683480
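The provisioning log also re-validates certificate freshness with openssl x509 -checkend 86400 against the files it copied into /var/lib/minikube/certs. A small sketch of the equivalent manual check, assuming the same profile and paths; this is one thing worth ruling out when a restart like this fails:

    # Expiry check in the same spirit as the -checkend calls above (non-zero exit means the cert expires within a day).
    $ minikube ssh -p ha-683480 -- sudo openssl x509 -noout -enddate -in /var/lib/minikube/certs/apiserver.crt
    # The apiserver cert generated above should carry the node IPs and the VIP (192.168.39.116/.127/.131/.254) as SANs.
    $ minikube ssh -p ha-683480 -- "sudo openssl x509 -noout -text -in /var/lib/minikube/certs/apiserver.crt | grep -A1 'Subject Alternative Name'"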
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-683480 -n ha-683480
helpers_test.go:244: <<< TestMultiControlPlane/serial/RestartClusterKeepsNodes FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/RestartClusterKeepsNodes]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-683480 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-683480 logs -n 25: (1.784090264s)
helpers_test.go:252: TestMultiControlPlane/serial/RestartClusterKeepsNodes logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| cp      | ha-683480 cp ha-683480-m03:/home/docker/cp-test.txt                              | ha-683480 | jenkins | v1.33.1 | 03 Jun 24 11:01 UTC | 03 Jun 24 11:01 UTC |
	|         | ha-683480-m02:/home/docker/cp-test_ha-683480-m03_ha-683480-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-683480 ssh -n                                                                 | ha-683480 | jenkins | v1.33.1 | 03 Jun 24 11:01 UTC | 03 Jun 24 11:01 UTC |
	|         | ha-683480-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-683480 ssh -n ha-683480-m02 sudo cat                                          | ha-683480 | jenkins | v1.33.1 | 03 Jun 24 11:01 UTC | 03 Jun 24 11:01 UTC |
	|         | /home/docker/cp-test_ha-683480-m03_ha-683480-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-683480 cp ha-683480-m03:/home/docker/cp-test.txt                              | ha-683480 | jenkins | v1.33.1 | 03 Jun 24 11:01 UTC | 03 Jun 24 11:01 UTC |
	|         | ha-683480-m04:/home/docker/cp-test_ha-683480-m03_ha-683480-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-683480 ssh -n                                                                 | ha-683480 | jenkins | v1.33.1 | 03 Jun 24 11:01 UTC | 03 Jun 24 11:01 UTC |
	|         | ha-683480-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-683480 ssh -n ha-683480-m04 sudo cat                                          | ha-683480 | jenkins | v1.33.1 | 03 Jun 24 11:01 UTC | 03 Jun 24 11:01 UTC |
	|         | /home/docker/cp-test_ha-683480-m03_ha-683480-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-683480 cp testdata/cp-test.txt                                                | ha-683480 | jenkins | v1.33.1 | 03 Jun 24 11:01 UTC | 03 Jun 24 11:01 UTC |
	|         | ha-683480-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-683480 ssh -n                                                                 | ha-683480 | jenkins | v1.33.1 | 03 Jun 24 11:01 UTC | 03 Jun 24 11:01 UTC |
	|         | ha-683480-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-683480 cp ha-683480-m04:/home/docker/cp-test.txt                              | ha-683480 | jenkins | v1.33.1 | 03 Jun 24 11:01 UTC | 03 Jun 24 11:01 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile1985816295/001/cp-test_ha-683480-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-683480 ssh -n                                                                 | ha-683480 | jenkins | v1.33.1 | 03 Jun 24 11:01 UTC | 03 Jun 24 11:01 UTC |
	|         | ha-683480-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-683480 cp ha-683480-m04:/home/docker/cp-test.txt                              | ha-683480 | jenkins | v1.33.1 | 03 Jun 24 11:01 UTC | 03 Jun 24 11:01 UTC |
	|         | ha-683480:/home/docker/cp-test_ha-683480-m04_ha-683480.txt                       |           |         |         |                     |                     |
	| ssh     | ha-683480 ssh -n                                                                 | ha-683480 | jenkins | v1.33.1 | 03 Jun 24 11:01 UTC | 03 Jun 24 11:01 UTC |
	|         | ha-683480-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-683480 ssh -n ha-683480 sudo cat                                              | ha-683480 | jenkins | v1.33.1 | 03 Jun 24 11:01 UTC | 03 Jun 24 11:01 UTC |
	|         | /home/docker/cp-test_ha-683480-m04_ha-683480.txt                                 |           |         |         |                     |                     |
	| cp      | ha-683480 cp ha-683480-m04:/home/docker/cp-test.txt                              | ha-683480 | jenkins | v1.33.1 | 03 Jun 24 11:01 UTC | 03 Jun 24 11:01 UTC |
	|         | ha-683480-m02:/home/docker/cp-test_ha-683480-m04_ha-683480-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-683480 ssh -n                                                                 | ha-683480 | jenkins | v1.33.1 | 03 Jun 24 11:01 UTC | 03 Jun 24 11:01 UTC |
	|         | ha-683480-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-683480 ssh -n ha-683480-m02 sudo cat                                          | ha-683480 | jenkins | v1.33.1 | 03 Jun 24 11:01 UTC | 03 Jun 24 11:01 UTC |
	|         | /home/docker/cp-test_ha-683480-m04_ha-683480-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-683480 cp ha-683480-m04:/home/docker/cp-test.txt                              | ha-683480 | jenkins | v1.33.1 | 03 Jun 24 11:01 UTC | 03 Jun 24 11:01 UTC |
	|         | ha-683480-m03:/home/docker/cp-test_ha-683480-m04_ha-683480-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-683480 ssh -n                                                                 | ha-683480 | jenkins | v1.33.1 | 03 Jun 24 11:01 UTC | 03 Jun 24 11:01 UTC |
	|         | ha-683480-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-683480 ssh -n ha-683480-m03 sudo cat                                          | ha-683480 | jenkins | v1.33.1 | 03 Jun 24 11:01 UTC | 03 Jun 24 11:01 UTC |
	|         | /home/docker/cp-test_ha-683480-m04_ha-683480-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-683480 node stop m02 -v=7                                                     | ha-683480 | jenkins | v1.33.1 | 03 Jun 24 11:01 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | ha-683480 node start m02 -v=7                                                    | ha-683480 | jenkins | v1.33.1 | 03 Jun 24 11:04 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-683480 -v=7                                                           | ha-683480 | jenkins | v1.33.1 | 03 Jun 24 11:05 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| stop    | -p ha-683480 -v=7                                                                | ha-683480 | jenkins | v1.33.1 | 03 Jun 24 11:05 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| start   | -p ha-683480 --wait=true -v=7                                                    | ha-683480 | jenkins | v1.33.1 | 03 Jun 24 11:07 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-683480                                                                | ha-683480 | jenkins | v1.33.1 | 03 Jun 24 11:11 UTC |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/06/03 11:07:25
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0603 11:07:25.442619   32123 out.go:291] Setting OutFile to fd 1 ...
	I0603 11:07:25.442855   32123 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0603 11:07:25.442863   32123 out.go:304] Setting ErrFile to fd 2...
	I0603 11:07:25.442866   32123 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0603 11:07:25.443101   32123 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19008-7755/.minikube/bin
	I0603 11:07:25.443633   32123 out.go:298] Setting JSON to false
	I0603 11:07:25.444536   32123 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":2990,"bootTime":1717409855,"procs":186,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1060-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0603 11:07:25.444597   32123 start.go:139] virtualization: kvm guest
	I0603 11:07:25.446966   32123 out.go:177] * [ha-683480] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0603 11:07:25.448223   32123 out.go:177]   - MINIKUBE_LOCATION=19008
	I0603 11:07:25.448228   32123 notify.go:220] Checking for updates...
	I0603 11:07:25.449410   32123 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0603 11:07:25.450661   32123 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19008-7755/kubeconfig
	I0603 11:07:25.451979   32123 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19008-7755/.minikube
	I0603 11:07:25.453271   32123 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0603 11:07:25.454412   32123 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0603 11:07:25.456024   32123 config.go:182] Loaded profile config "ha-683480": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0603 11:07:25.456119   32123 driver.go:392] Setting default libvirt URI to qemu:///system
	I0603 11:07:25.456503   32123 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 11:07:25.456543   32123 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 11:07:25.477478   32123 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33881
	I0603 11:07:25.477915   32123 main.go:141] libmachine: () Calling .GetVersion
	I0603 11:07:25.478527   32123 main.go:141] libmachine: Using API Version  1
	I0603 11:07:25.478546   32123 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 11:07:25.478926   32123 main.go:141] libmachine: () Calling .GetMachineName
	I0603 11:07:25.479145   32123 main.go:141] libmachine: (ha-683480) Calling .DriverName
	I0603 11:07:25.513767   32123 out.go:177] * Using the kvm2 driver based on existing profile
	I0603 11:07:25.515068   32123 start.go:297] selected driver: kvm2
	I0603 11:07:25.515093   32123 start.go:901] validating driver "kvm2" against &{Name:ha-683480 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVer
sion:v1.30.1 ClusterName:ha-683480 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.116 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.127 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.131 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.206 Port:0 KubernetesVersion:v1.30.1 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false e
fk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:
9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0603 11:07:25.515277   32123 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0603 11:07:25.515652   32123 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0603 11:07:25.515720   32123 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19008-7755/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0603 11:07:25.531105   32123 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0603 11:07:25.531742   32123 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0603 11:07:25.531818   32123 cni.go:84] Creating CNI manager for ""
	I0603 11:07:25.531832   32123 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0603 11:07:25.531896   32123 start.go:340] cluster config:
	{Name:ha-683480 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:ha-683480 Namespace:default APIServerHAVIP:192.168.39
.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.116 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.127 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.131 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.206 Port:0 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-ti
ller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountP
ort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0603 11:07:25.532029   32123 iso.go:125] acquiring lock: {Name:mkdc8e745fc6a0fd8e502f6ad2510510ae9abf27 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0603 11:07:25.534347   32123 out.go:177] * Starting "ha-683480" primary control-plane node in "ha-683480" cluster
	I0603 11:07:25.535583   32123 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime crio
	I0603 11:07:25.535617   32123 preload.go:147] Found local preload: /home/jenkins/minikube-integration/19008-7755/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4
	I0603 11:07:25.535624   32123 cache.go:56] Caching tarball of preloaded images
	I0603 11:07:25.535711   32123 preload.go:173] Found /home/jenkins/minikube-integration/19008-7755/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0603 11:07:25.535722   32123 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on crio
	I0603 11:07:25.535838   32123 profile.go:143] Saving config to /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/ha-683480/config.json ...
	I0603 11:07:25.536024   32123 start.go:360] acquireMachinesLock for ha-683480: {Name:mk7389fd163f37e28f7d4d842bab151f7e27bc7c Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0603 11:07:25.536061   32123 start.go:364] duration metric: took 21.936µs to acquireMachinesLock for "ha-683480"
	I0603 11:07:25.536075   32123 start.go:96] Skipping create...Using existing machine configuration
	I0603 11:07:25.536082   32123 fix.go:54] fixHost starting: 
	I0603 11:07:25.536327   32123 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 11:07:25.536360   32123 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 11:07:25.550171   32123 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35679
	I0603 11:07:25.550615   32123 main.go:141] libmachine: () Calling .GetVersion
	I0603 11:07:25.551053   32123 main.go:141] libmachine: Using API Version  1
	I0603 11:07:25.551086   32123 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 11:07:25.551439   32123 main.go:141] libmachine: () Calling .GetMachineName
	I0603 11:07:25.551627   32123 main.go:141] libmachine: (ha-683480) Calling .DriverName
	I0603 11:07:25.551779   32123 main.go:141] libmachine: (ha-683480) Calling .GetState
	I0603 11:07:25.553075   32123 fix.go:112] recreateIfNeeded on ha-683480: state=Running err=<nil>
	W0603 11:07:25.553103   32123 fix.go:138] unexpected machine state, will restart: <nil>
	I0603 11:07:25.555822   32123 out.go:177] * Updating the running kvm2 "ha-683480" VM ...
	I0603 11:07:25.557278   32123 machine.go:94] provisionDockerMachine start ...
	I0603 11:07:25.557297   32123 main.go:141] libmachine: (ha-683480) Calling .DriverName
	I0603 11:07:25.557457   32123 main.go:141] libmachine: (ha-683480) Calling .GetSSHHostname
	I0603 11:07:25.559729   32123 main.go:141] libmachine: (ha-683480) DBG | domain ha-683480 has defined MAC address 52:54:00:e5:3f:6a in network mk-ha-683480
	I0603 11:07:25.560164   32123 main.go:141] libmachine: (ha-683480) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:3f:6a", ip: ""} in network mk-ha-683480: {Iface:virbr1 ExpiryTime:2024-06-03 11:56:28 +0000 UTC Type:0 Mac:52:54:00:e5:3f:6a Iaid: IPaddr:192.168.39.116 Prefix:24 Hostname:ha-683480 Clientid:01:52:54:00:e5:3f:6a}
	I0603 11:07:25.560190   32123 main.go:141] libmachine: (ha-683480) DBG | domain ha-683480 has defined IP address 192.168.39.116 and MAC address 52:54:00:e5:3f:6a in network mk-ha-683480
	I0603 11:07:25.560241   32123 main.go:141] libmachine: (ha-683480) Calling .GetSSHPort
	I0603 11:07:25.560397   32123 main.go:141] libmachine: (ha-683480) Calling .GetSSHKeyPath
	I0603 11:07:25.560552   32123 main.go:141] libmachine: (ha-683480) Calling .GetSSHKeyPath
	I0603 11:07:25.560663   32123 main.go:141] libmachine: (ha-683480) Calling .GetSSHUsername
	I0603 11:07:25.560826   32123 main.go:141] libmachine: Using SSH client type: native
	I0603 11:07:25.560998   32123 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.116 22 <nil> <nil>}
	I0603 11:07:25.561008   32123 main.go:141] libmachine: About to run SSH command:
	hostname
	I0603 11:07:25.664232   32123 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-683480
	
	I0603 11:07:25.664262   32123 main.go:141] libmachine: (ha-683480) Calling .GetMachineName
	I0603 11:07:25.664503   32123 buildroot.go:166] provisioning hostname "ha-683480"
	I0603 11:07:25.664525   32123 main.go:141] libmachine: (ha-683480) Calling .GetMachineName
	I0603 11:07:25.664710   32123 main.go:141] libmachine: (ha-683480) Calling .GetSSHHostname
	I0603 11:07:25.667431   32123 main.go:141] libmachine: (ha-683480) DBG | domain ha-683480 has defined MAC address 52:54:00:e5:3f:6a in network mk-ha-683480
	I0603 11:07:25.667816   32123 main.go:141] libmachine: (ha-683480) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:3f:6a", ip: ""} in network mk-ha-683480: {Iface:virbr1 ExpiryTime:2024-06-03 11:56:28 +0000 UTC Type:0 Mac:52:54:00:e5:3f:6a Iaid: IPaddr:192.168.39.116 Prefix:24 Hostname:ha-683480 Clientid:01:52:54:00:e5:3f:6a}
	I0603 11:07:25.667840   32123 main.go:141] libmachine: (ha-683480) DBG | domain ha-683480 has defined IP address 192.168.39.116 and MAC address 52:54:00:e5:3f:6a in network mk-ha-683480
	I0603 11:07:25.667952   32123 main.go:141] libmachine: (ha-683480) Calling .GetSSHPort
	I0603 11:07:25.668123   32123 main.go:141] libmachine: (ha-683480) Calling .GetSSHKeyPath
	I0603 11:07:25.668269   32123 main.go:141] libmachine: (ha-683480) Calling .GetSSHKeyPath
	I0603 11:07:25.668398   32123 main.go:141] libmachine: (ha-683480) Calling .GetSSHUsername
	I0603 11:07:25.668564   32123 main.go:141] libmachine: Using SSH client type: native
	I0603 11:07:25.668736   32123 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.116 22 <nil> <nil>}
	I0603 11:07:25.668760   32123 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-683480 && echo "ha-683480" | sudo tee /etc/hostname
	I0603 11:07:25.789898   32123 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-683480
	
	I0603 11:07:25.789922   32123 main.go:141] libmachine: (ha-683480) Calling .GetSSHHostname
	I0603 11:07:25.792463   32123 main.go:141] libmachine: (ha-683480) DBG | domain ha-683480 has defined MAC address 52:54:00:e5:3f:6a in network mk-ha-683480
	I0603 11:07:25.792857   32123 main.go:141] libmachine: (ha-683480) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:3f:6a", ip: ""} in network mk-ha-683480: {Iface:virbr1 ExpiryTime:2024-06-03 11:56:28 +0000 UTC Type:0 Mac:52:54:00:e5:3f:6a Iaid: IPaddr:192.168.39.116 Prefix:24 Hostname:ha-683480 Clientid:01:52:54:00:e5:3f:6a}
	I0603 11:07:25.792879   32123 main.go:141] libmachine: (ha-683480) DBG | domain ha-683480 has defined IP address 192.168.39.116 and MAC address 52:54:00:e5:3f:6a in network mk-ha-683480
	I0603 11:07:25.793043   32123 main.go:141] libmachine: (ha-683480) Calling .GetSSHPort
	I0603 11:07:25.793241   32123 main.go:141] libmachine: (ha-683480) Calling .GetSSHKeyPath
	I0603 11:07:25.793390   32123 main.go:141] libmachine: (ha-683480) Calling .GetSSHKeyPath
	I0603 11:07:25.793523   32123 main.go:141] libmachine: (ha-683480) Calling .GetSSHUsername
	I0603 11:07:25.793674   32123 main.go:141] libmachine: Using SSH client type: native
	I0603 11:07:25.793830   32123 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.116 22 <nil> <nil>}
	I0603 11:07:25.793845   32123 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-683480' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-683480/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-683480' | sudo tee -a /etc/hosts; 
				fi
			fi
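	(For context, not part of the captured log: the empty output on the next line means neither branch of the script above echoed anything. Either the ha-683480 entry is already present in /etc/hosts, which is the likely case on a VM being re-provisioned and makes the whole block a no-op, or an existing 127.0.1.1 line was rewritten in place by sed; only the tee -a fallback, which appends a line like "127.0.1.1 ha-683480" on a fresh guest, would have printed anything.)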
	I0603 11:07:25.895742   32123 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0603 11:07:25.895783   32123 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19008-7755/.minikube CaCertPath:/home/jenkins/minikube-integration/19008-7755/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19008-7755/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19008-7755/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19008-7755/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19008-7755/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19008-7755/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19008-7755/.minikube}
	I0603 11:07:25.895804   32123 buildroot.go:174] setting up certificates
	I0603 11:07:25.895816   32123 provision.go:84] configureAuth start
	I0603 11:07:25.895832   32123 main.go:141] libmachine: (ha-683480) Calling .GetMachineName
	I0603 11:07:25.896116   32123 main.go:141] libmachine: (ha-683480) Calling .GetIP
	I0603 11:07:25.898621   32123 main.go:141] libmachine: (ha-683480) DBG | domain ha-683480 has defined MAC address 52:54:00:e5:3f:6a in network mk-ha-683480
	I0603 11:07:25.898971   32123 main.go:141] libmachine: (ha-683480) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:3f:6a", ip: ""} in network mk-ha-683480: {Iface:virbr1 ExpiryTime:2024-06-03 11:56:28 +0000 UTC Type:0 Mac:52:54:00:e5:3f:6a Iaid: IPaddr:192.168.39.116 Prefix:24 Hostname:ha-683480 Clientid:01:52:54:00:e5:3f:6a}
	I0603 11:07:25.898995   32123 main.go:141] libmachine: (ha-683480) DBG | domain ha-683480 has defined IP address 192.168.39.116 and MAC address 52:54:00:e5:3f:6a in network mk-ha-683480
	I0603 11:07:25.899148   32123 main.go:141] libmachine: (ha-683480) Calling .GetSSHHostname
	I0603 11:07:25.901289   32123 main.go:141] libmachine: (ha-683480) DBG | domain ha-683480 has defined MAC address 52:54:00:e5:3f:6a in network mk-ha-683480
	I0603 11:07:25.901702   32123 main.go:141] libmachine: (ha-683480) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:3f:6a", ip: ""} in network mk-ha-683480: {Iface:virbr1 ExpiryTime:2024-06-03 11:56:28 +0000 UTC Type:0 Mac:52:54:00:e5:3f:6a Iaid: IPaddr:192.168.39.116 Prefix:24 Hostname:ha-683480 Clientid:01:52:54:00:e5:3f:6a}
	I0603 11:07:25.901727   32123 main.go:141] libmachine: (ha-683480) DBG | domain ha-683480 has defined IP address 192.168.39.116 and MAC address 52:54:00:e5:3f:6a in network mk-ha-683480
	I0603 11:07:25.901852   32123 provision.go:143] copyHostCerts
	I0603 11:07:25.901884   32123 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19008-7755/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19008-7755/.minikube/ca.pem
	I0603 11:07:25.901920   32123 exec_runner.go:144] found /home/jenkins/minikube-integration/19008-7755/.minikube/ca.pem, removing ...
	I0603 11:07:25.901937   32123 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19008-7755/.minikube/ca.pem
	I0603 11:07:25.902006   32123 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19008-7755/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19008-7755/.minikube/ca.pem (1082 bytes)
	I0603 11:07:25.902090   32123 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19008-7755/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19008-7755/.minikube/cert.pem
	I0603 11:07:25.902108   32123 exec_runner.go:144] found /home/jenkins/minikube-integration/19008-7755/.minikube/cert.pem, removing ...
	I0603 11:07:25.902113   32123 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19008-7755/.minikube/cert.pem
	I0603 11:07:25.902139   32123 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19008-7755/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19008-7755/.minikube/cert.pem (1123 bytes)
	I0603 11:07:25.902179   32123 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19008-7755/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19008-7755/.minikube/key.pem
	I0603 11:07:25.902197   32123 exec_runner.go:144] found /home/jenkins/minikube-integration/19008-7755/.minikube/key.pem, removing ...
	I0603 11:07:25.902206   32123 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19008-7755/.minikube/key.pem
	I0603 11:07:25.902235   32123 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19008-7755/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19008-7755/.minikube/key.pem (1679 bytes)
	I0603 11:07:25.902300   32123 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19008-7755/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19008-7755/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19008-7755/.minikube/certs/ca-key.pem org=jenkins.ha-683480 san=[127.0.0.1 192.168.39.116 ha-683480 localhost minikube]
	I0603 11:07:26.059416   32123 provision.go:177] copyRemoteCerts
	I0603 11:07:26.059473   32123 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0603 11:07:26.059498   32123 main.go:141] libmachine: (ha-683480) Calling .GetSSHHostname
	I0603 11:07:26.062155   32123 main.go:141] libmachine: (ha-683480) DBG | domain ha-683480 has defined MAC address 52:54:00:e5:3f:6a in network mk-ha-683480
	I0603 11:07:26.062608   32123 main.go:141] libmachine: (ha-683480) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:3f:6a", ip: ""} in network mk-ha-683480: {Iface:virbr1 ExpiryTime:2024-06-03 11:56:28 +0000 UTC Type:0 Mac:52:54:00:e5:3f:6a Iaid: IPaddr:192.168.39.116 Prefix:24 Hostname:ha-683480 Clientid:01:52:54:00:e5:3f:6a}
	I0603 11:07:26.062638   32123 main.go:141] libmachine: (ha-683480) DBG | domain ha-683480 has defined IP address 192.168.39.116 and MAC address 52:54:00:e5:3f:6a in network mk-ha-683480
	I0603 11:07:26.062833   32123 main.go:141] libmachine: (ha-683480) Calling .GetSSHPort
	I0603 11:07:26.062994   32123 main.go:141] libmachine: (ha-683480) Calling .GetSSHKeyPath
	I0603 11:07:26.063165   32123 main.go:141] libmachine: (ha-683480) Calling .GetSSHUsername
	I0603 11:07:26.063290   32123 sshutil.go:53] new ssh client: &{IP:192.168.39.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19008-7755/.minikube/machines/ha-683480/id_rsa Username:docker}
	I0603 11:07:26.146746   32123 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19008-7755/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0603 11:07:26.146810   32123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0603 11:07:26.174269   32123 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19008-7755/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0603 11:07:26.174353   32123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I0603 11:07:26.199835   32123 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19008-7755/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0603 11:07:26.199895   32123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0603 11:07:26.226453   32123 provision.go:87] duration metric: took 330.620757ms to configureAuth
	I0603 11:07:26.226484   32123 buildroot.go:189] setting minikube options for container-runtime
	I0603 11:07:26.226787   32123 config.go:182] Loaded profile config "ha-683480": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0603 11:07:26.226897   32123 main.go:141] libmachine: (ha-683480) Calling .GetSSHHostname
	I0603 11:07:26.229443   32123 main.go:141] libmachine: (ha-683480) DBG | domain ha-683480 has defined MAC address 52:54:00:e5:3f:6a in network mk-ha-683480
	I0603 11:07:26.229819   32123 main.go:141] libmachine: (ha-683480) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:3f:6a", ip: ""} in network mk-ha-683480: {Iface:virbr1 ExpiryTime:2024-06-03 11:56:28 +0000 UTC Type:0 Mac:52:54:00:e5:3f:6a Iaid: IPaddr:192.168.39.116 Prefix:24 Hostname:ha-683480 Clientid:01:52:54:00:e5:3f:6a}
	I0603 11:07:26.229840   32123 main.go:141] libmachine: (ha-683480) DBG | domain ha-683480 has defined IP address 192.168.39.116 and MAC address 52:54:00:e5:3f:6a in network mk-ha-683480
	I0603 11:07:26.230039   32123 main.go:141] libmachine: (ha-683480) Calling .GetSSHPort
	I0603 11:07:26.230233   32123 main.go:141] libmachine: (ha-683480) Calling .GetSSHKeyPath
	I0603 11:07:26.230407   32123 main.go:141] libmachine: (ha-683480) Calling .GetSSHKeyPath
	I0603 11:07:26.230524   32123 main.go:141] libmachine: (ha-683480) Calling .GetSSHUsername
	I0603 11:07:26.230689   32123 main.go:141] libmachine: Using SSH client type: native
	I0603 11:07:26.230900   32123 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.116 22 <nil> <nil>}
	I0603 11:07:26.230931   32123 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0603 11:08:57.164538   32123 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0603 11:08:57.164576   32123 machine.go:97] duration metric: took 1m31.607286329s to provisionDockerMachine
	I0603 11:08:57.164592   32123 start.go:293] postStartSetup for "ha-683480" (driver="kvm2")
	I0603 11:08:57.164608   32123 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0603 11:08:57.164635   32123 main.go:141] libmachine: (ha-683480) Calling .DriverName
	I0603 11:08:57.165008   32123 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0603 11:08:57.165037   32123 main.go:141] libmachine: (ha-683480) Calling .GetSSHHostname
	I0603 11:08:57.168289   32123 main.go:141] libmachine: (ha-683480) DBG | domain ha-683480 has defined MAC address 52:54:00:e5:3f:6a in network mk-ha-683480
	I0603 11:08:57.168694   32123 main.go:141] libmachine: (ha-683480) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:3f:6a", ip: ""} in network mk-ha-683480: {Iface:virbr1 ExpiryTime:2024-06-03 11:56:28 +0000 UTC Type:0 Mac:52:54:00:e5:3f:6a Iaid: IPaddr:192.168.39.116 Prefix:24 Hostname:ha-683480 Clientid:01:52:54:00:e5:3f:6a}
	I0603 11:08:57.168717   32123 main.go:141] libmachine: (ha-683480) DBG | domain ha-683480 has defined IP address 192.168.39.116 and MAC address 52:54:00:e5:3f:6a in network mk-ha-683480
	I0603 11:08:57.168888   32123 main.go:141] libmachine: (ha-683480) Calling .GetSSHPort
	I0603 11:08:57.169136   32123 main.go:141] libmachine: (ha-683480) Calling .GetSSHKeyPath
	I0603 11:08:57.169285   32123 main.go:141] libmachine: (ha-683480) Calling .GetSSHUsername
	I0603 11:08:57.169407   32123 sshutil.go:53] new ssh client: &{IP:192.168.39.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19008-7755/.minikube/machines/ha-683480/id_rsa Username:docker}
	I0603 11:08:57.251439   32123 ssh_runner.go:195] Run: cat /etc/os-release
	I0603 11:08:57.255917   32123 info.go:137] Remote host: Buildroot 2023.02.9
	I0603 11:08:57.255939   32123 filesync.go:126] Scanning /home/jenkins/minikube-integration/19008-7755/.minikube/addons for local assets ...
	I0603 11:08:57.255991   32123 filesync.go:126] Scanning /home/jenkins/minikube-integration/19008-7755/.minikube/files for local assets ...
	I0603 11:08:57.256063   32123 filesync.go:149] local asset: /home/jenkins/minikube-integration/19008-7755/.minikube/files/etc/ssl/certs/150282.pem -> 150282.pem in /etc/ssl/certs
	I0603 11:08:57.256072   32123 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19008-7755/.minikube/files/etc/ssl/certs/150282.pem -> /etc/ssl/certs/150282.pem
	I0603 11:08:57.256151   32123 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0603 11:08:57.266429   32123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/files/etc/ssl/certs/150282.pem --> /etc/ssl/certs/150282.pem (1708 bytes)
	I0603 11:08:57.290924   32123 start.go:296] duration metric: took 126.319085ms for postStartSetup
	I0603 11:08:57.290966   32123 main.go:141] libmachine: (ha-683480) Calling .DriverName
	I0603 11:08:57.291281   32123 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0603 11:08:57.291304   32123 main.go:141] libmachine: (ha-683480) Calling .GetSSHHostname
	I0603 11:08:57.293927   32123 main.go:141] libmachine: (ha-683480) DBG | domain ha-683480 has defined MAC address 52:54:00:e5:3f:6a in network mk-ha-683480
	I0603 11:08:57.294426   32123 main.go:141] libmachine: (ha-683480) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:3f:6a", ip: ""} in network mk-ha-683480: {Iface:virbr1 ExpiryTime:2024-06-03 11:56:28 +0000 UTC Type:0 Mac:52:54:00:e5:3f:6a Iaid: IPaddr:192.168.39.116 Prefix:24 Hostname:ha-683480 Clientid:01:52:54:00:e5:3f:6a}
	I0603 11:08:57.294457   32123 main.go:141] libmachine: (ha-683480) DBG | domain ha-683480 has defined IP address 192.168.39.116 and MAC address 52:54:00:e5:3f:6a in network mk-ha-683480
	I0603 11:08:57.294611   32123 main.go:141] libmachine: (ha-683480) Calling .GetSSHPort
	I0603 11:08:57.294774   32123 main.go:141] libmachine: (ha-683480) Calling .GetSSHKeyPath
	I0603 11:08:57.294937   32123 main.go:141] libmachine: (ha-683480) Calling .GetSSHUsername
	I0603 11:08:57.295094   32123 sshutil.go:53] new ssh client: &{IP:192.168.39.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19008-7755/.minikube/machines/ha-683480/id_rsa Username:docker}
	W0603 11:08:57.373411   32123 fix.go:99] cannot read backup folder, skipping restore: read dir: sudo ls --almost-all -1 /var/lib/minikube/backup: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/backup': No such file or directory
	I0603 11:08:57.373439   32123 fix.go:56] duration metric: took 1m31.837357572s for fixHost
	I0603 11:08:57.373460   32123 main.go:141] libmachine: (ha-683480) Calling .GetSSHHostname
	I0603 11:08:57.375924   32123 main.go:141] libmachine: (ha-683480) DBG | domain ha-683480 has defined MAC address 52:54:00:e5:3f:6a in network mk-ha-683480
	I0603 11:08:57.376280   32123 main.go:141] libmachine: (ha-683480) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:3f:6a", ip: ""} in network mk-ha-683480: {Iface:virbr1 ExpiryTime:2024-06-03 11:56:28 +0000 UTC Type:0 Mac:52:54:00:e5:3f:6a Iaid: IPaddr:192.168.39.116 Prefix:24 Hostname:ha-683480 Clientid:01:52:54:00:e5:3f:6a}
	I0603 11:08:57.376299   32123 main.go:141] libmachine: (ha-683480) DBG | domain ha-683480 has defined IP address 192.168.39.116 and MAC address 52:54:00:e5:3f:6a in network mk-ha-683480
	I0603 11:08:57.376459   32123 main.go:141] libmachine: (ha-683480) Calling .GetSSHPort
	I0603 11:08:57.376624   32123 main.go:141] libmachine: (ha-683480) Calling .GetSSHKeyPath
	I0603 11:08:57.376774   32123 main.go:141] libmachine: (ha-683480) Calling .GetSSHKeyPath
	I0603 11:08:57.376895   32123 main.go:141] libmachine: (ha-683480) Calling .GetSSHUsername
	I0603 11:08:57.377010   32123 main.go:141] libmachine: Using SSH client type: native
	I0603 11:08:57.377178   32123 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.116 22 <nil> <nil>}
	I0603 11:08:57.377187   32123 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0603 11:08:57.476064   32123 main.go:141] libmachine: SSH cmd err, output: <nil>: 1717412937.450872254
	
	I0603 11:08:57.476091   32123 fix.go:216] guest clock: 1717412937.450872254
	I0603 11:08:57.476097   32123 fix.go:229] Guest: 2024-06-03 11:08:57.450872254 +0000 UTC Remote: 2024-06-03 11:08:57.373446324 +0000 UTC m=+91.964564811 (delta=77.42593ms)
	I0603 11:08:57.476121   32123 fix.go:200] guest clock delta is within tolerance: 77.42593ms
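	(Worked out, for reference: the guest reads its clock with what is presumably "date +%s.%N", the "%!s(MISSING)" and "%!N(MISSING)" above being the logger consuming the format verbs, and the delta is simply the guest reading minus the host reading taken at nearly the same instant:
		    1717412937.450872254 s  (guest)
		  - 1717412937.373446324 s  (host)
		  =          0.077425930 s, i.e. 77.42593ms
	which is inside minikube's tolerance, so the guest clock is left untouched.)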
	I0603 11:08:57.476126   32123 start.go:83] releasing machines lock for "ha-683480", held for 1m31.940055627s
	I0603 11:08:57.476143   32123 main.go:141] libmachine: (ha-683480) Calling .DriverName
	I0603 11:08:57.476451   32123 main.go:141] libmachine: (ha-683480) Calling .GetIP
	I0603 11:08:57.478829   32123 main.go:141] libmachine: (ha-683480) DBG | domain ha-683480 has defined MAC address 52:54:00:e5:3f:6a in network mk-ha-683480
	I0603 11:08:57.479315   32123 main.go:141] libmachine: (ha-683480) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:3f:6a", ip: ""} in network mk-ha-683480: {Iface:virbr1 ExpiryTime:2024-06-03 11:56:28 +0000 UTC Type:0 Mac:52:54:00:e5:3f:6a Iaid: IPaddr:192.168.39.116 Prefix:24 Hostname:ha-683480 Clientid:01:52:54:00:e5:3f:6a}
	I0603 11:08:57.479344   32123 main.go:141] libmachine: (ha-683480) DBG | domain ha-683480 has defined IP address 192.168.39.116 and MAC address 52:54:00:e5:3f:6a in network mk-ha-683480
	I0603 11:08:57.479439   32123 main.go:141] libmachine: (ha-683480) Calling .DriverName
	I0603 11:08:57.480003   32123 main.go:141] libmachine: (ha-683480) Calling .DriverName
	I0603 11:08:57.480192   32123 main.go:141] libmachine: (ha-683480) Calling .DriverName
	I0603 11:08:57.480283   32123 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0603 11:08:57.480338   32123 main.go:141] libmachine: (ha-683480) Calling .GetSSHHostname
	I0603 11:08:57.480387   32123 ssh_runner.go:195] Run: cat /version.json
	I0603 11:08:57.480410   32123 main.go:141] libmachine: (ha-683480) Calling .GetSSHHostname
	I0603 11:08:57.482838   32123 main.go:141] libmachine: (ha-683480) DBG | domain ha-683480 has defined MAC address 52:54:00:e5:3f:6a in network mk-ha-683480
	I0603 11:08:57.483029   32123 main.go:141] libmachine: (ha-683480) DBG | domain ha-683480 has defined MAC address 52:54:00:e5:3f:6a in network mk-ha-683480
	I0603 11:08:57.483284   32123 main.go:141] libmachine: (ha-683480) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:3f:6a", ip: ""} in network mk-ha-683480: {Iface:virbr1 ExpiryTime:2024-06-03 11:56:28 +0000 UTC Type:0 Mac:52:54:00:e5:3f:6a Iaid: IPaddr:192.168.39.116 Prefix:24 Hostname:ha-683480 Clientid:01:52:54:00:e5:3f:6a}
	I0603 11:08:57.483308   32123 main.go:141] libmachine: (ha-683480) DBG | domain ha-683480 has defined IP address 192.168.39.116 and MAC address 52:54:00:e5:3f:6a in network mk-ha-683480
	I0603 11:08:57.483488   32123 main.go:141] libmachine: (ha-683480) Calling .GetSSHPort
	I0603 11:08:57.483488   32123 main.go:141] libmachine: (ha-683480) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:3f:6a", ip: ""} in network mk-ha-683480: {Iface:virbr1 ExpiryTime:2024-06-03 11:56:28 +0000 UTC Type:0 Mac:52:54:00:e5:3f:6a Iaid: IPaddr:192.168.39.116 Prefix:24 Hostname:ha-683480 Clientid:01:52:54:00:e5:3f:6a}
	I0603 11:08:57.483544   32123 main.go:141] libmachine: (ha-683480) DBG | domain ha-683480 has defined IP address 192.168.39.116 and MAC address 52:54:00:e5:3f:6a in network mk-ha-683480
	I0603 11:08:57.483621   32123 main.go:141] libmachine: (ha-683480) Calling .GetSSHPort
	I0603 11:08:57.483692   32123 main.go:141] libmachine: (ha-683480) Calling .GetSSHKeyPath
	I0603 11:08:57.483755   32123 main.go:141] libmachine: (ha-683480) Calling .GetSSHKeyPath
	I0603 11:08:57.483826   32123 main.go:141] libmachine: (ha-683480) Calling .GetSSHUsername
	I0603 11:08:57.483891   32123 main.go:141] libmachine: (ha-683480) Calling .GetSSHUsername
	I0603 11:08:57.484014   32123 sshutil.go:53] new ssh client: &{IP:192.168.39.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19008-7755/.minikube/machines/ha-683480/id_rsa Username:docker}
	I0603 11:08:57.483975   32123 sshutil.go:53] new ssh client: &{IP:192.168.39.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19008-7755/.minikube/machines/ha-683480/id_rsa Username:docker}
	I0603 11:08:57.561311   32123 ssh_runner.go:195] Run: systemctl --version
	I0603 11:08:57.583380   32123 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0603 11:08:57.752344   32123 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0603 11:08:57.758604   32123 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0603 11:08:57.758677   32123 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0603 11:08:57.768166   32123 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0603 11:08:57.768192   32123 start.go:494] detecting cgroup driver to use...
	I0603 11:08:57.768244   32123 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0603 11:08:57.784730   32123 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0603 11:08:57.799955   32123 docker.go:217] disabling cri-docker service (if available) ...
	I0603 11:08:57.800006   32123 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0603 11:08:57.813623   32123 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0603 11:08:57.851455   32123 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0603 11:08:57.999998   32123 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0603 11:08:58.161448   32123 docker.go:233] disabling docker service ...
	I0603 11:08:58.161527   32123 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0603 11:08:58.178129   32123 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0603 11:08:58.192081   32123 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0603 11:08:58.341394   32123 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0603 11:08:58.490223   32123 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0603 11:08:58.504113   32123 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0603 11:08:58.524449   32123 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0603 11:08:58.524509   32123 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 11:08:58.535157   32123 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0603 11:08:58.535218   32123 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 11:08:58.545448   32123 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 11:08:58.556068   32123 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 11:08:58.566406   32123 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0603 11:08:58.577992   32123 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 11:08:58.588771   32123 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 11:08:58.599846   32123 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
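	(Illustrative only: putting the sed edits above together, and assuming the stock 02-crio.conf drop-in layout shipped in the ISO, the file ends up roughly as
		[crio.image]
		pause_image = "registry.k8s.io/pause:3.9"
		[crio.runtime]
		cgroup_manager = "cgroupfs"
		conmon_cgroup = "pod"
		default_sysctls = [
		  "net.ipv4.ip_unprivileged_port_start=0",
		]
	that is, CRI-O is pointed at the preloaded pause image, switched to the cgroupfs cgroup driver with conmon placed in the pod cgroup, and configured so pods may bind ports below 1024 without extra privileges.)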
	I0603 11:08:58.611253   32123 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0603 11:08:58.621549   32123 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0603 11:08:58.631028   32123 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0603 11:08:58.773906   32123 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0603 11:09:00.429585   32123 ssh_runner.go:235] Completed: sudo systemctl restart crio: (1.655639068s)
	I0603 11:09:00.429609   32123 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0603 11:09:00.429650   32123 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0603 11:09:00.435134   32123 start.go:562] Will wait 60s for crictl version
	I0603 11:09:00.435178   32123 ssh_runner.go:195] Run: which crictl
	I0603 11:09:00.438893   32123 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0603 11:09:00.479635   32123 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0603 11:09:00.479716   32123 ssh_runner.go:195] Run: crio --version
	I0603 11:09:00.508784   32123 ssh_runner.go:195] Run: crio --version
	I0603 11:09:00.540764   32123 out.go:177] * Preparing Kubernetes v1.30.1 on CRI-O 1.29.1 ...
	I0603 11:09:00.542271   32123 main.go:141] libmachine: (ha-683480) Calling .GetIP
	I0603 11:09:00.544914   32123 main.go:141] libmachine: (ha-683480) DBG | domain ha-683480 has defined MAC address 52:54:00:e5:3f:6a in network mk-ha-683480
	I0603 11:09:00.545320   32123 main.go:141] libmachine: (ha-683480) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:3f:6a", ip: ""} in network mk-ha-683480: {Iface:virbr1 ExpiryTime:2024-06-03 11:56:28 +0000 UTC Type:0 Mac:52:54:00:e5:3f:6a Iaid: IPaddr:192.168.39.116 Prefix:24 Hostname:ha-683480 Clientid:01:52:54:00:e5:3f:6a}
	I0603 11:09:00.545352   32123 main.go:141] libmachine: (ha-683480) DBG | domain ha-683480 has defined IP address 192.168.39.116 and MAC address 52:54:00:e5:3f:6a in network mk-ha-683480
	I0603 11:09:00.545521   32123 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0603 11:09:00.550299   32123 kubeadm.go:877] updating cluster {Name:ha-683480 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 Cl
usterName:ha-683480 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.116 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.127 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.131 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.206 Port:0 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false fr
eshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L Mo
untGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0603 11:09:00.550441   32123 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime crio
	I0603 11:09:00.550491   32123 ssh_runner.go:195] Run: sudo crictl images --output json
	I0603 11:09:00.600204   32123 crio.go:514] all images are preloaded for cri-o runtime.
	I0603 11:09:00.600227   32123 crio.go:433] Images already preloaded, skipping extraction
	I0603 11:09:00.600277   32123 ssh_runner.go:195] Run: sudo crictl images --output json
	I0603 11:09:00.636579   32123 crio.go:514] all images are preloaded for cri-o runtime.
	I0603 11:09:00.636599   32123 cache_images.go:84] Images are preloaded, skipping loading
	I0603 11:09:00.636614   32123 kubeadm.go:928] updating node { 192.168.39.116 8443 v1.30.1 crio true true} ...
	I0603 11:09:00.636714   32123 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-683480 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.116
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.1 ClusterName:ha-683480 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0603 11:09:00.636779   32123 ssh_runner.go:195] Run: crio config
	I0603 11:09:00.686623   32123 cni.go:84] Creating CNI manager for ""
	I0603 11:09:00.686644   32123 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0603 11:09:00.686656   32123 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0603 11:09:00.686688   32123 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.116 APIServerPort:8443 KubernetesVersion:v1.30.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-683480 NodeName:ha-683480 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.116"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.116 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernete
s/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0603 11:09:00.686867   32123 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.116
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-683480"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.116
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.116"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0603 11:09:00.686895   32123 kube-vip.go:115] generating kube-vip config ...
	I0603 11:09:00.686945   32123 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0603 11:09:00.699149   32123 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0603 11:09:00.699266   32123 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
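	(For context: the address 192.168.39.254 injected into the manifest above is the APIServerHAVIP from the cluster config, and vip_leaderelection with vip_leasename plndr-cp-lock means only the current leader among the three control-plane nodes holds the VIP at any time. Assuming kube-vip's default lease-based leader election, the current holder can be inspected once the cluster is back up with something like, shown here purely for illustration and not part of this test:
		kubectl -n kube-system get lease plndr-cp-lock -o jsonpath='{.spec.holderIdentity}'
	)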
	I0603 11:09:00.699330   32123 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.1
	I0603 11:09:00.709452   32123 binaries.go:44] Found k8s binaries, skipping transfer
	I0603 11:09:00.709523   32123 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0603 11:09:00.719357   32123 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I0603 11:09:00.737341   32123 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0603 11:09:00.753811   32123 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2153 bytes)
	I0603 11:09:00.770330   32123 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
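	(Side note, not part of the test flow: the rendered config written to /var/tmp/minikube/kubeadm.yaml.new above can be sanity-checked against the v1.30.1 API with the bundled kubeadm binary, assuming the "config validate" subcommand that recent kubeadm releases provide, for example:
		sudo /var/lib/minikube/binaries/v1.30.1/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new
	)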
	I0603 11:09:00.788590   32123 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0603 11:09:00.792380   32123 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0603 11:09:00.938633   32123 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0603 11:09:00.954663   32123 certs.go:68] Setting up /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/ha-683480 for IP: 192.168.39.116
	I0603 11:09:00.954680   32123 certs.go:194] generating shared ca certs ...
	I0603 11:09:00.954695   32123 certs.go:226] acquiring lock for ca certs: {Name:mk50092df3bdecbf959926adc377ee06b75aacd9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 11:09:00.954853   32123 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19008-7755/.minikube/ca.key
	I0603 11:09:00.954909   32123 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19008-7755/.minikube/proxy-client-ca.key
	I0603 11:09:00.954920   32123 certs.go:256] generating profile certs ...
	I0603 11:09:00.954999   32123 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/ha-683480/client.key
	I0603 11:09:00.955025   32123 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/ha-683480/apiserver.key.e3f31f3b
	I0603 11:09:00.955066   32123 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/ha-683480/apiserver.crt.e3f31f3b with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.116 192.168.39.127 192.168.39.131 192.168.39.254]
	I0603 11:09:01.074478   32123 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/ha-683480/apiserver.crt.e3f31f3b ...
	I0603 11:09:01.074507   32123 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/ha-683480/apiserver.crt.e3f31f3b: {Name:mk90aaec59622d5605c25e50123cffa72ad4fa74 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 11:09:01.074671   32123 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/ha-683480/apiserver.key.e3f31f3b ...
	I0603 11:09:01.074682   32123 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/ha-683480/apiserver.key.e3f31f3b: {Name:mke0afd6700871b17032b676d43a247d77a3697b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 11:09:01.074747   32123 certs.go:381] copying /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/ha-683480/apiserver.crt.e3f31f3b -> /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/ha-683480/apiserver.crt
	I0603 11:09:01.074893   32123 certs.go:385] copying /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/ha-683480/apiserver.key.e3f31f3b -> /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/ha-683480/apiserver.key
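	(The apiserver serving certificate generated above deliberately carries every control-plane IP, 192.168.39.116, 192.168.39.127 and 192.168.39.131, plus the HA VIP 192.168.39.254 and the service IPs 10.96.0.1 and 10.0.0.1 in its SANs, so clients can reach any individual node or the VIP over TLS with the same cert. An illustrative way to confirm this on the guest, outside the test:
		openssl x509 -in /var/lib/minikube/certs/apiserver.crt -noout -text | grep -A1 'Subject Alternative Name'
	)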
	I0603 11:09:01.075011   32123 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/ha-683480/proxy-client.key
	I0603 11:09:01.075026   32123 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19008-7755/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0603 11:09:01.075095   32123 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19008-7755/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0603 11:09:01.075116   32123 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19008-7755/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0603 11:09:01.075128   32123 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19008-7755/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0603 11:09:01.075141   32123 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/ha-683480/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0603 11:09:01.075153   32123 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/ha-683480/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0603 11:09:01.075165   32123 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/ha-683480/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0603 11:09:01.075177   32123 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/ha-683480/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0603 11:09:01.075228   32123 certs.go:484] found cert: /home/jenkins/minikube-integration/19008-7755/.minikube/certs/15028.pem (1338 bytes)
	W0603 11:09:01.075265   32123 certs.go:480] ignoring /home/jenkins/minikube-integration/19008-7755/.minikube/certs/15028_empty.pem, impossibly tiny 0 bytes
	I0603 11:09:01.075274   32123 certs.go:484] found cert: /home/jenkins/minikube-integration/19008-7755/.minikube/certs/ca-key.pem (1679 bytes)
	I0603 11:09:01.075293   32123 certs.go:484] found cert: /home/jenkins/minikube-integration/19008-7755/.minikube/certs/ca.pem (1082 bytes)
	I0603 11:09:01.075314   32123 certs.go:484] found cert: /home/jenkins/minikube-integration/19008-7755/.minikube/certs/cert.pem (1123 bytes)
	I0603 11:09:01.075334   32123 certs.go:484] found cert: /home/jenkins/minikube-integration/19008-7755/.minikube/certs/key.pem (1679 bytes)
	I0603 11:09:01.075369   32123 certs.go:484] found cert: /home/jenkins/minikube-integration/19008-7755/.minikube/files/etc/ssl/certs/150282.pem (1708 bytes)
	I0603 11:09:01.075397   32123 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19008-7755/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0603 11:09:01.075412   32123 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19008-7755/.minikube/certs/15028.pem -> /usr/share/ca-certificates/15028.pem
	I0603 11:09:01.075423   32123 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19008-7755/.minikube/files/etc/ssl/certs/150282.pem -> /usr/share/ca-certificates/150282.pem
	I0603 11:09:01.075983   32123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0603 11:09:01.101929   32123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0603 11:09:01.126780   32123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0603 11:09:01.151427   32123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0603 11:09:01.175069   32123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/ha-683480/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0603 11:09:01.198877   32123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/ha-683480/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0603 11:09:01.221819   32123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/ha-683480/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0603 11:09:01.245043   32123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/ha-683480/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0603 11:09:01.268520   32123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0603 11:09:01.292182   32123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/certs/15028.pem --> /usr/share/ca-certificates/15028.pem (1338 bytes)
	I0603 11:09:01.316481   32123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/files/etc/ssl/certs/150282.pem --> /usr/share/ca-certificates/150282.pem (1708 bytes)
	I0603 11:09:01.340006   32123 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0603 11:09:01.356593   32123 ssh_runner.go:195] Run: openssl version
	I0603 11:09:01.362366   32123 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/150282.pem && ln -fs /usr/share/ca-certificates/150282.pem /etc/ssl/certs/150282.pem"
	I0603 11:09:01.373561   32123 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/150282.pem
	I0603 11:09:01.377979   32123 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jun  3 10:51 /usr/share/ca-certificates/150282.pem
	I0603 11:09:01.378028   32123 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/150282.pem
	I0603 11:09:01.383817   32123 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/150282.pem /etc/ssl/certs/3ec20f2e.0"
	I0603 11:09:01.393943   32123 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0603 11:09:01.404966   32123 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0603 11:09:01.409235   32123 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jun  3 10:39 /usr/share/ca-certificates/minikubeCA.pem
	I0603 11:09:01.409284   32123 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0603 11:09:01.414756   32123 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0603 11:09:01.425087   32123 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15028.pem && ln -fs /usr/share/ca-certificates/15028.pem /etc/ssl/certs/15028.pem"
	I0603 11:09:01.436313   32123 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15028.pem
	I0603 11:09:01.441074   32123 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jun  3 10:51 /usr/share/ca-certificates/15028.pem
	I0603 11:09:01.441123   32123 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15028.pem
	I0603 11:09:01.446671   32123 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/15028.pem /etc/ssl/certs/51391683.0"
	I0603 11:09:01.456214   32123 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0603 11:09:01.460571   32123 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0603 11:09:01.466138   32123 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0603 11:09:01.471498   32123 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0603 11:09:01.476939   32123 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0603 11:09:01.482385   32123 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0603 11:09:01.487689   32123 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0603 11:09:01.493220   32123 kubeadm.go:391] StartCluster: {Name:ha-683480 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 Clust
erName:ha-683480 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.116 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.127 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.131 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.206 Port:0 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false fresh
pod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L Mount
GID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0603 11:09:01.493322   32123 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0603 11:09:01.493398   32123 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0603 11:09:01.531471   32123 cri.go:89] found id: "f5e2a3e9cad2d3850b8c7cc462cbf093f62660cc5ed878de3fb697df8f7e849d"
	I0603 11:09:01.531494   32123 cri.go:89] found id: "0a2affa40fe5e43b29d1f89794f211acafce31faab220ad3254ea3ae9b81455e"
	I0603 11:09:01.531498   32123 cri.go:89] found id: "f1ac445f3c0b1f52f27caee3ee4ec90408d1b4670e8e93efdec8e3902e0de9b8"
	I0603 11:09:01.531500   32123 cri.go:89] found id: "9c8a6029966c17e71158a2045e39b094dfec93e361d3cd11049c550057d16295"
	I0603 11:09:01.531503   32123 cri.go:89] found id: "b5e9b65b02107aa343d9bd2938c82d12641166c15c0364265fb74b1a00b58a60"
	I0603 11:09:01.531507   32123 cri.go:89] found id: "fdbecc258023e10eac66da5599945eae2f7f8735769b825a69aea8b2effce668"
	I0603 11:09:01.531509   32123 cri.go:89] found id: "aa5e3aca86502907c8d16e6a2327b8f4298b6076617819ceed2b250ae9b24fe8"
	I0603 11:09:01.531512   32123 cri.go:89] found id: "995fa288cd9162aa7fa350ae7a02800593a524c7300a6fa984b62ba4b928891b"
	I0603 11:09:01.531514   32123 cri.go:89] found id: "bcb102231e3a6bc3ea0cc39665baaebb0a97c42874b6cd34e86c04e87532df4f"
	I0603 11:09:01.531520   32123 cri.go:89] found id: "2542929b8eaa1ecd8c858dbb7e4812ddb5121109c3c92127fa7eaae86849ebda"
	I0603 11:09:01.531526   32123 cri.go:89] found id: "3e27550ee88e8dcb6316daece49f9840028efa3091db03e5549e1e3dbbd8ad59"
	I0603 11:09:01.531530   32123 cri.go:89] found id: "c282307764128f62fdee736d5e1ecddfbca0ae7ae2f78b7a78cbdb2dcede8556"
	I0603 11:09:01.531535   32123 cri.go:89] found id: "09fff5459f24c748a0e085f496bf2b65db572d97be0afe906f05511398bdb0ad"
	I0603 11:09:01.531539   32123 cri.go:89] found id: "200682c1dc43f01036807986e0c3bfe0b422726ec352be0df5e42fa79426ed79"
	I0603 11:09:01.531545   32123 cri.go:89] found id: ""
	I0603 11:09:01.531584   32123 ssh_runner.go:195] Run: sudo runc list -f json
	
	
	==> CRI-O <==
	Jun 03 11:11:45 ha-683480 crio[3818]: time="2024-06-03 11:11:45.355450250Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=09b71908-c2a2-4f57-b271-d79dca9deb97 name=/runtime.v1.RuntimeService/ListContainers
	Jun 03 11:11:45 ha-683480 crio[3818]: time="2024-06-03 11:11:45.356347645Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b3a10717b253a18c95729306cc39b8b43d5e58a09150083085bb706877b41c41,PodSandboxId:a113d054f5421f66107af14bfae1a5eebde08aa9dc9aeb335f0c95161f05eb06,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1717413036108691580,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a410a98d-73a7-434b-88ce-575c300b2807,},Annotations:map[string]string{io.kubernetes.container.hash: c0c86aa,io.kubernetes.container.restartCount: 4,io.kubernetes.container.te
rminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c3ea180b8216797aaf78ea5661ba3b0943d85bfcde1c3ce755f4e62582ab5ecf,PodSandboxId:ffe70c296995b94eea8e0ed4d7be6d69bf08d786f79d2409eb0aec4cec543072,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:3,},Image:&ImageSpec{Image:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CONTAINER_RUNNING,CreatedAt:1717413013098589663,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-zxhbp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 320e315b-e189-4358-9e56-a4be7d944fae,},Annotations:map[string]string{io.kubernetes.container.hash: ae8d6a68,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termina
tion-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4b0d6949ee1d24934a07cf0a644346fca0258b096baf5ad06ca30011e7f39eb1,PodSandboxId:a113d054f5421f66107af14bfae1a5eebde08aa9dc9aeb335f0c95161f05eb06,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1717412991110177163,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a410a98d-73a7-434b-88ce-575c300b2807,},Annotations:map[string]string{io.kubernetes.container.hash: c0c86aa,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.ku
bernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0376a5d0c8b827cc48df7d87f5eb7cfc72a495c600abbb4856848908d605e8ab,PodSandboxId:eef7acb133025c2540d90e56f987f803816220c3954ca2f0a137257b3822879b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1717412989099617863,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-683480,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b448fd1c84d729fa6b033c44220aea0b,},Annotations:map[string]string{io.kubernetes.container.hash: 25a67648,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.te
rminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f11bba0fe671eec93d2ed313c2be83ba1241f460d7349102758825c301c05c94,PodSandboxId:1be973d393fd98b3b25957a69bb1d222efeb5fee521136d8aee5fcb9c38f29b1,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1717412987097933728,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-683480,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 003e33f91c92b780f1d2cb57410c03e9,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contai
ner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a2616ab08c12cc3bf8a5ddb38992b52223cc3d7951ba7e34b77270f74109b379,PodSandboxId:43cc18e9695818b679a9094e9daaec11df83ee3c5be09797eb2bce64e1b7714f,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1717412981419893724,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-mvpcm,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: fe7a8238-754b-43ce-8080-48e39c548383,},Annotations:map[string]string{io.kubernetes.container.hash: 17542a28,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,i
o.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e6affd24ffc04f8e73646185baadbdcfadc4f59260fe0de2fcfc6b6c24c95576,PodSandboxId:312ee2bc45a8ad5b63be398920344737c48d32822e4acdfcb5242106eebd2f06,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1717412964984575127,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-683480,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 88446bc5037aec3d04a64b1cd4a0b0bb,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod
: 30,},},&Container{Id:48e4f287c203959b7515afda7bbc9f297b67f159d98c275d36cabdf2d658267e,PodSandboxId:0bb95efa9b5544806ce77cb38d2d1899f8a064362bc1a9d4019a150e391a9512,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:1717412948822408756,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4d9w5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 708e060d-115a-4b74-bc66-138d62796b50,},Annotations:map[string]string{io.kubernetes.container.hash: 4749de6d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:753900b199
b96cc9a3ae3791ff1c0c8a47f296f8db9da5deb7568cecb0e3bce5,PodSandboxId:1084ea2c9f83b50b855a9d1cebe8088d5c3ac92954ad88b1defd656231520b46,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1717412948359360909,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-8tqf9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8eab910a-98ed-43db-ac16-d53beb6b7ee4,},Annotations:map[string]string{io.kubernetes.container.hash: 38c633a6,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cc8f63fef0029c9f7bede5603ab9af3193a75bd4fc1106b23c316d4ce6b6705a,PodSandboxId:7610af85710c6617d550044fd9363c3da2fbbbe3d710d6bc8d401d9687a379cf,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1717412948351443976,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-nff86,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 02320e91-17ab-4120-b8b9-dcc08234f180,},Annotations:map[string]string{io.kubernetes.container.hash: 9dfd16ce,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"
protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:52b0704efa37cdba53b8de1a0dc7b7fec29ea28129c9a9e65bd213591e1c01c1,PodSandboxId:ffe70c296995b94eea8e0ed4d7be6d69bf08d786f79d2409eb0aec4cec543072,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:2,},Image:&ImageSpec{Image:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CONTAINER_EXITED,CreatedAt:1717412948220863604,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-zxhbp,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 320e315b-e189-4358-9e56-a4be7d944fae,},Annotations:map[string]string{io.kubernetes.container.hash: ae8d6a68,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:127d736575af20a24c0db0a6e3425badf2d41fcea00d489114e889360664fd0e,PodSandboxId:29eec1a82f9d96bfac4a182301c8302309c6d8392823083237c2d90fca41fa5b,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1717412948119054849,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-683480,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 36cfff3e1576ec0ef9aa4746d32a32e3,},An
notations:map[string]string{io.kubernetes.container.hash: 40554970,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:031c8a2316fc402ab581c065b6ef53496a23534ae41d34c7fb6e7ff35cb3260d,PodSandboxId:751825866bea37dd36dd4139ef61da30fa14d3c0c98e6184cb852519708eec00,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1717412948107489532,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-683480,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e56ceee947d891ba1cd0986590072af7,},Annotations:map[string]
string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:71115f2e0e5d4fe5ae6de1e873cc6f52c55ff8c3b50d1e7576944491d0487781,PodSandboxId:eef7acb133025c2540d90e56f987f803816220c3954ca2f0a137257b3822879b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_EXITED,CreatedAt:1717412947983596549,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-683480,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b448fd1c84d729fa6b033c44220aea0b,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: 25a67648,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9034d276d18e7ad0470a79b0643e03089b4cfa18ddd108b2966e84511a0a8276,PodSandboxId:1be973d393fd98b3b25957a69bb1d222efeb5fee521136d8aee5fcb9c38f29b1,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_EXITED,CreatedAt:1717412947915313005,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-683480,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 003e33f91c92b780f1d2cb57410c03e9,},Annotations:map[string]string{io.kuberne
tes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:348419ceaffc348fe3779838e8b27e8baa3aa566be3f4c329aea8b701917349c,PodSandboxId:d32d79da82b93361a47376b8d8beec88e0c5d9097ed7a7450c63de0ee96d230f,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1717412452793948202,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-mvpcm,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: fe7a8238-754b-43ce-8080-48e39c548383,},Annotations:map[string]string{io.kubernete
s.container.hash: 17542a28,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fdbecc258023e10eac66da5599945eae2f7f8735769b825a69aea8b2effce668,PodSandboxId:62bef471ea4a403424478ea00a89f4311f3d11aea1fc0301abe18ddf44455091,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1717412239551956082,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-8tqf9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8eab910a-98ed-43db-ac16-d53beb6b7ee4,},Annotations:map[string]string{io.kubernetes.container.hash: 38c633a6,io.ku
bernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aa5e3aca86502907c8d16e6a2327b8f4298b6076617819ceed2b250ae9b24fe8,PodSandboxId:41da25dac8c4818183c067f43713ee94cebef64eab1ffb890510822bc9712a41,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1717412239525874687,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-
7db6d8ff4d-nff86,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 02320e91-17ab-4120-b8b9-dcc08234f180,},Annotations:map[string]string{io.kubernetes.container.hash: 9dfd16ce,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bcb102231e3a6bc3ea0cc39665baaebb0a97c42874b6cd34e86c04e87532df4f,PodSandboxId:6812552c2a4ab53e39123a83312dfad25c506cf5157864aa7732c91d6b7eebf2,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f999
37cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_EXITED,CreatedAt:1717412233855131979,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4d9w5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 708e060d-115a-4b74-bc66-138d62796b50,},Annotations:map[string]string{io.kubernetes.container.hash: 4749de6d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c282307764128f62fdee736d5e1ecddfbca0ae7ae2f78b7a78cbdb2dcede8556,PodSandboxId:860a510241592c9daa1fd1d8b28ba6314d6102372dd3005ee2f1fc332eaa5fbb,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f93
11987066307996f035,State:CONTAINER_EXITED,CreatedAt:1717412213949425877,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-683480,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e56ceee947d891ba1cd0986590072af7,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:09fff5459f24c748a0e085f496bf2b65db572d97be0afe906f05511398bdb0ad,PodSandboxId:86b1d4bcd541d31a17ad320bdd376b8fc84deff2fe6e38053aa471139f753d0e,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAIN
ER_EXITED,CreatedAt:1717412213926445790,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-683480,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 36cfff3e1576ec0ef9aa4746d32a32e3,},Annotations:map[string]string{io.kubernetes.container.hash: 40554970,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=09b71908-c2a2-4f57-b271-d79dca9deb97 name=/runtime.v1.RuntimeService/ListContainers
	Jun 03 11:11:45 ha-683480 crio[3818]: time="2024-06-03 11:11:45.389489590Z" level=debug msg="Request: &StatusRequest{Verbose:false,}" file="otel-collector/interceptors.go:62" id=f6237d89-7de0-40ac-b201-a56de405efb4 name=/runtime.v1.RuntimeService/Status
	Jun 03 11:11:45 ha-683480 crio[3818]: time="2024-06-03 11:11:45.389566606Z" level=debug msg="Response: &StatusResponse{Status:&RuntimeStatus{Conditions:[]*RuntimeCondition{&RuntimeCondition{Type:RuntimeReady,Status:true,Reason:,Message:,},&RuntimeCondition{Type:NetworkReady,Status:true,Reason:,Message:,},},},Info:map[string]string{},}" file="otel-collector/interceptors.go:74" id=f6237d89-7de0-40ac-b201-a56de405efb4 name=/runtime.v1.RuntimeService/Status
	Jun 03 11:11:45 ha-683480 crio[3818]: time="2024-06-03 11:11:45.407743161Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=6d816e20-d92c-4f84-8fb5-a8ef2861636e name=/runtime.v1.RuntimeService/Version
	Jun 03 11:11:45 ha-683480 crio[3818]: time="2024-06-03 11:11:45.407836332Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=6d816e20-d92c-4f84-8fb5-a8ef2861636e name=/runtime.v1.RuntimeService/Version
	Jun 03 11:11:45 ha-683480 crio[3818]: time="2024-06-03 11:11:45.408705317Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=7522821e-3a3f-4bfd-8635-ebaa501cc73a name=/runtime.v1.ImageService/ImageFsInfo
	Jun 03 11:11:45 ha-683480 crio[3818]: time="2024-06-03 11:11:45.409365199Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1717413105409343251,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154742,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=7522821e-3a3f-4bfd-8635-ebaa501cc73a name=/runtime.v1.ImageService/ImageFsInfo
	Jun 03 11:11:45 ha-683480 crio[3818]: time="2024-06-03 11:11:45.409897654Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=fdb7d1a5-6811-43e0-b0dd-b28f247558f3 name=/runtime.v1.RuntimeService/ListContainers
	Jun 03 11:11:45 ha-683480 crio[3818]: time="2024-06-03 11:11:45.409964835Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=fdb7d1a5-6811-43e0-b0dd-b28f247558f3 name=/runtime.v1.RuntimeService/ListContainers
	Jun 03 11:11:45 ha-683480 crio[3818]: time="2024-06-03 11:11:45.410513714Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b3a10717b253a18c95729306cc39b8b43d5e58a09150083085bb706877b41c41,PodSandboxId:a113d054f5421f66107af14bfae1a5eebde08aa9dc9aeb335f0c95161f05eb06,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1717413036108691580,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a410a98d-73a7-434b-88ce-575c300b2807,},Annotations:map[string]string{io.kubernetes.container.hash: c0c86aa,io.kubernetes.container.restartCount: 4,io.kubernetes.container.te
rminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c3ea180b8216797aaf78ea5661ba3b0943d85bfcde1c3ce755f4e62582ab5ecf,PodSandboxId:ffe70c296995b94eea8e0ed4d7be6d69bf08d786f79d2409eb0aec4cec543072,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:3,},Image:&ImageSpec{Image:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CONTAINER_RUNNING,CreatedAt:1717413013098589663,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-zxhbp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 320e315b-e189-4358-9e56-a4be7d944fae,},Annotations:map[string]string{io.kubernetes.container.hash: ae8d6a68,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termina
tion-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4b0d6949ee1d24934a07cf0a644346fca0258b096baf5ad06ca30011e7f39eb1,PodSandboxId:a113d054f5421f66107af14bfae1a5eebde08aa9dc9aeb335f0c95161f05eb06,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1717412991110177163,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a410a98d-73a7-434b-88ce-575c300b2807,},Annotations:map[string]string{io.kubernetes.container.hash: c0c86aa,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.ku
bernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0376a5d0c8b827cc48df7d87f5eb7cfc72a495c600abbb4856848908d605e8ab,PodSandboxId:eef7acb133025c2540d90e56f987f803816220c3954ca2f0a137257b3822879b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1717412989099617863,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-683480,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b448fd1c84d729fa6b033c44220aea0b,},Annotations:map[string]string{io.kubernetes.container.hash: 25a67648,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.te
rminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f11bba0fe671eec93d2ed313c2be83ba1241f460d7349102758825c301c05c94,PodSandboxId:1be973d393fd98b3b25957a69bb1d222efeb5fee521136d8aee5fcb9c38f29b1,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1717412987097933728,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-683480,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 003e33f91c92b780f1d2cb57410c03e9,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contai
ner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a2616ab08c12cc3bf8a5ddb38992b52223cc3d7951ba7e34b77270f74109b379,PodSandboxId:43cc18e9695818b679a9094e9daaec11df83ee3c5be09797eb2bce64e1b7714f,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1717412981419893724,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-mvpcm,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: fe7a8238-754b-43ce-8080-48e39c548383,},Annotations:map[string]string{io.kubernetes.container.hash: 17542a28,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,i
o.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e6affd24ffc04f8e73646185baadbdcfadc4f59260fe0de2fcfc6b6c24c95576,PodSandboxId:312ee2bc45a8ad5b63be398920344737c48d32822e4acdfcb5242106eebd2f06,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1717412964984575127,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-683480,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 88446bc5037aec3d04a64b1cd4a0b0bb,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod
: 30,},},&Container{Id:48e4f287c203959b7515afda7bbc9f297b67f159d98c275d36cabdf2d658267e,PodSandboxId:0bb95efa9b5544806ce77cb38d2d1899f8a064362bc1a9d4019a150e391a9512,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:1717412948822408756,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4d9w5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 708e060d-115a-4b74-bc66-138d62796b50,},Annotations:map[string]string{io.kubernetes.container.hash: 4749de6d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:753900b199
b96cc9a3ae3791ff1c0c8a47f296f8db9da5deb7568cecb0e3bce5,PodSandboxId:1084ea2c9f83b50b855a9d1cebe8088d5c3ac92954ad88b1defd656231520b46,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1717412948359360909,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-8tqf9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8eab910a-98ed-43db-ac16-d53beb6b7ee4,},Annotations:map[string]string{io.kubernetes.container.hash: 38c633a6,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cc8f63fef0029c9f7bede5603ab9af3193a75bd4fc1106b23c316d4ce6b6705a,PodSandboxId:7610af85710c6617d550044fd9363c3da2fbbbe3d710d6bc8d401d9687a379cf,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1717412948351443976,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-nff86,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 02320e91-17ab-4120-b8b9-dcc08234f180,},Annotations:map[string]string{io.kubernetes.container.hash: 9dfd16ce,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"
protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:52b0704efa37cdba53b8de1a0dc7b7fec29ea28129c9a9e65bd213591e1c01c1,PodSandboxId:ffe70c296995b94eea8e0ed4d7be6d69bf08d786f79d2409eb0aec4cec543072,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:2,},Image:&ImageSpec{Image:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CONTAINER_EXITED,CreatedAt:1717412948220863604,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-zxhbp,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 320e315b-e189-4358-9e56-a4be7d944fae,},Annotations:map[string]string{io.kubernetes.container.hash: ae8d6a68,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:127d736575af20a24c0db0a6e3425badf2d41fcea00d489114e889360664fd0e,PodSandboxId:29eec1a82f9d96bfac4a182301c8302309c6d8392823083237c2d90fca41fa5b,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1717412948119054849,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-683480,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 36cfff3e1576ec0ef9aa4746d32a32e3,},An
notations:map[string]string{io.kubernetes.container.hash: 40554970,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:031c8a2316fc402ab581c065b6ef53496a23534ae41d34c7fb6e7ff35cb3260d,PodSandboxId:751825866bea37dd36dd4139ef61da30fa14d3c0c98e6184cb852519708eec00,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1717412948107489532,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-683480,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e56ceee947d891ba1cd0986590072af7,},Annotations:map[string]
string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:71115f2e0e5d4fe5ae6de1e873cc6f52c55ff8c3b50d1e7576944491d0487781,PodSandboxId:eef7acb133025c2540d90e56f987f803816220c3954ca2f0a137257b3822879b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_EXITED,CreatedAt:1717412947983596549,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-683480,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b448fd1c84d729fa6b033c44220aea0b,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: 25a67648,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9034d276d18e7ad0470a79b0643e03089b4cfa18ddd108b2966e84511a0a8276,PodSandboxId:1be973d393fd98b3b25957a69bb1d222efeb5fee521136d8aee5fcb9c38f29b1,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_EXITED,CreatedAt:1717412947915313005,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-683480,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 003e33f91c92b780f1d2cb57410c03e9,},Annotations:map[string]string{io.kuberne
tes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:348419ceaffc348fe3779838e8b27e8baa3aa566be3f4c329aea8b701917349c,PodSandboxId:d32d79da82b93361a47376b8d8beec88e0c5d9097ed7a7450c63de0ee96d230f,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1717412452793948202,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-mvpcm,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: fe7a8238-754b-43ce-8080-48e39c548383,},Annotations:map[string]string{io.kubernete
s.container.hash: 17542a28,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fdbecc258023e10eac66da5599945eae2f7f8735769b825a69aea8b2effce668,PodSandboxId:62bef471ea4a403424478ea00a89f4311f3d11aea1fc0301abe18ddf44455091,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1717412239551956082,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-8tqf9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8eab910a-98ed-43db-ac16-d53beb6b7ee4,},Annotations:map[string]string{io.kubernetes.container.hash: 38c633a6,io.ku
bernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aa5e3aca86502907c8d16e6a2327b8f4298b6076617819ceed2b250ae9b24fe8,PodSandboxId:41da25dac8c4818183c067f43713ee94cebef64eab1ffb890510822bc9712a41,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1717412239525874687,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-
7db6d8ff4d-nff86,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 02320e91-17ab-4120-b8b9-dcc08234f180,},Annotations:map[string]string{io.kubernetes.container.hash: 9dfd16ce,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bcb102231e3a6bc3ea0cc39665baaebb0a97c42874b6cd34e86c04e87532df4f,PodSandboxId:6812552c2a4ab53e39123a83312dfad25c506cf5157864aa7732c91d6b7eebf2,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f999
37cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_EXITED,CreatedAt:1717412233855131979,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4d9w5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 708e060d-115a-4b74-bc66-138d62796b50,},Annotations:map[string]string{io.kubernetes.container.hash: 4749de6d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c282307764128f62fdee736d5e1ecddfbca0ae7ae2f78b7a78cbdb2dcede8556,PodSandboxId:860a510241592c9daa1fd1d8b28ba6314d6102372dd3005ee2f1fc332eaa5fbb,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f93
11987066307996f035,State:CONTAINER_EXITED,CreatedAt:1717412213949425877,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-683480,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e56ceee947d891ba1cd0986590072af7,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:09fff5459f24c748a0e085f496bf2b65db572d97be0afe906f05511398bdb0ad,PodSandboxId:86b1d4bcd541d31a17ad320bdd376b8fc84deff2fe6e38053aa471139f753d0e,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAIN
ER_EXITED,CreatedAt:1717412213926445790,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-683480,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 36cfff3e1576ec0ef9aa4746d32a32e3,},Annotations:map[string]string{io.kubernetes.container.hash: 40554970,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=fdb7d1a5-6811-43e0-b0dd-b28f247558f3 name=/runtime.v1.RuntimeService/ListContainers
	Jun 03 11:11:45 ha-683480 crio[3818]: time="2024-06-03 11:11:45.452533046Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=3afaf677-0fa5-4049-a562-0aa8701692f8 name=/runtime.v1.RuntimeService/Version
	Jun 03 11:11:45 ha-683480 crio[3818]: time="2024-06-03 11:11:45.452636443Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=3afaf677-0fa5-4049-a562-0aa8701692f8 name=/runtime.v1.RuntimeService/Version
	Jun 03 11:11:45 ha-683480 crio[3818]: time="2024-06-03 11:11:45.453934578Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=08cd5436-d514-4f69-905a-7e1ad13faaa9 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 03 11:11:45 ha-683480 crio[3818]: time="2024-06-03 11:11:45.454715749Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1717413105454688225,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154742,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=08cd5436-d514-4f69-905a-7e1ad13faaa9 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 03 11:11:45 ha-683480 crio[3818]: time="2024-06-03 11:11:45.455671589Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=c0ee304a-f44d-4c9d-804c-6d864bc2049a name=/runtime.v1.RuntimeService/ListContainers
	Jun 03 11:11:45 ha-683480 crio[3818]: time="2024-06-03 11:11:45.455730521Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=c0ee304a-f44d-4c9d-804c-6d864bc2049a name=/runtime.v1.RuntimeService/ListContainers
	Jun 03 11:11:45 ha-683480 crio[3818]: time="2024-06-03 11:11:45.456946000Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b3a10717b253a18c95729306cc39b8b43d5e58a09150083085bb706877b41c41,PodSandboxId:a113d054f5421f66107af14bfae1a5eebde08aa9dc9aeb335f0c95161f05eb06,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1717413036108691580,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a410a98d-73a7-434b-88ce-575c300b2807,},Annotations:map[string]string{io.kubernetes.container.hash: c0c86aa,io.kubernetes.container.restartCount: 4,io.kubernetes.container.te
rminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c3ea180b8216797aaf78ea5661ba3b0943d85bfcde1c3ce755f4e62582ab5ecf,PodSandboxId:ffe70c296995b94eea8e0ed4d7be6d69bf08d786f79d2409eb0aec4cec543072,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:3,},Image:&ImageSpec{Image:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CONTAINER_RUNNING,CreatedAt:1717413013098589663,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-zxhbp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 320e315b-e189-4358-9e56-a4be7d944fae,},Annotations:map[string]string{io.kubernetes.container.hash: ae8d6a68,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termina
tion-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4b0d6949ee1d24934a07cf0a644346fca0258b096baf5ad06ca30011e7f39eb1,PodSandboxId:a113d054f5421f66107af14bfae1a5eebde08aa9dc9aeb335f0c95161f05eb06,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1717412991110177163,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a410a98d-73a7-434b-88ce-575c300b2807,},Annotations:map[string]string{io.kubernetes.container.hash: c0c86aa,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.ku
bernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0376a5d0c8b827cc48df7d87f5eb7cfc72a495c600abbb4856848908d605e8ab,PodSandboxId:eef7acb133025c2540d90e56f987f803816220c3954ca2f0a137257b3822879b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1717412989099617863,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-683480,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b448fd1c84d729fa6b033c44220aea0b,},Annotations:map[string]string{io.kubernetes.container.hash: 25a67648,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.te
rminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f11bba0fe671eec93d2ed313c2be83ba1241f460d7349102758825c301c05c94,PodSandboxId:1be973d393fd98b3b25957a69bb1d222efeb5fee521136d8aee5fcb9c38f29b1,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1717412987097933728,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-683480,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 003e33f91c92b780f1d2cb57410c03e9,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contai
ner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a2616ab08c12cc3bf8a5ddb38992b52223cc3d7951ba7e34b77270f74109b379,PodSandboxId:43cc18e9695818b679a9094e9daaec11df83ee3c5be09797eb2bce64e1b7714f,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1717412981419893724,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-mvpcm,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: fe7a8238-754b-43ce-8080-48e39c548383,},Annotations:map[string]string{io.kubernetes.container.hash: 17542a28,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,i
o.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e6affd24ffc04f8e73646185baadbdcfadc4f59260fe0de2fcfc6b6c24c95576,PodSandboxId:312ee2bc45a8ad5b63be398920344737c48d32822e4acdfcb5242106eebd2f06,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1717412964984575127,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-683480,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 88446bc5037aec3d04a64b1cd4a0b0bb,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod
: 30,},},&Container{Id:48e4f287c203959b7515afda7bbc9f297b67f159d98c275d36cabdf2d658267e,PodSandboxId:0bb95efa9b5544806ce77cb38d2d1899f8a064362bc1a9d4019a150e391a9512,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:1717412948822408756,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4d9w5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 708e060d-115a-4b74-bc66-138d62796b50,},Annotations:map[string]string{io.kubernetes.container.hash: 4749de6d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:753900b199
b96cc9a3ae3791ff1c0c8a47f296f8db9da5deb7568cecb0e3bce5,PodSandboxId:1084ea2c9f83b50b855a9d1cebe8088d5c3ac92954ad88b1defd656231520b46,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1717412948359360909,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-8tqf9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8eab910a-98ed-43db-ac16-d53beb6b7ee4,},Annotations:map[string]string{io.kubernetes.container.hash: 38c633a6,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cc8f63fef0029c9f7bede5603ab9af3193a75bd4fc1106b23c316d4ce6b6705a,PodSandboxId:7610af85710c6617d550044fd9363c3da2fbbbe3d710d6bc8d401d9687a379cf,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1717412948351443976,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-nff86,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 02320e91-17ab-4120-b8b9-dcc08234f180,},Annotations:map[string]string{io.kubernetes.container.hash: 9dfd16ce,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"
protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:52b0704efa37cdba53b8de1a0dc7b7fec29ea28129c9a9e65bd213591e1c01c1,PodSandboxId:ffe70c296995b94eea8e0ed4d7be6d69bf08d786f79d2409eb0aec4cec543072,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:2,},Image:&ImageSpec{Image:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CONTAINER_EXITED,CreatedAt:1717412948220863604,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-zxhbp,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 320e315b-e189-4358-9e56-a4be7d944fae,},Annotations:map[string]string{io.kubernetes.container.hash: ae8d6a68,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:127d736575af20a24c0db0a6e3425badf2d41fcea00d489114e889360664fd0e,PodSandboxId:29eec1a82f9d96bfac4a182301c8302309c6d8392823083237c2d90fca41fa5b,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1717412948119054849,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-683480,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 36cfff3e1576ec0ef9aa4746d32a32e3,},An
notations:map[string]string{io.kubernetes.container.hash: 40554970,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:031c8a2316fc402ab581c065b6ef53496a23534ae41d34c7fb6e7ff35cb3260d,PodSandboxId:751825866bea37dd36dd4139ef61da30fa14d3c0c98e6184cb852519708eec00,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1717412948107489532,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-683480,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e56ceee947d891ba1cd0986590072af7,},Annotations:map[string]
string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:71115f2e0e5d4fe5ae6de1e873cc6f52c55ff8c3b50d1e7576944491d0487781,PodSandboxId:eef7acb133025c2540d90e56f987f803816220c3954ca2f0a137257b3822879b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_EXITED,CreatedAt:1717412947983596549,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-683480,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b448fd1c84d729fa6b033c44220aea0b,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: 25a67648,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9034d276d18e7ad0470a79b0643e03089b4cfa18ddd108b2966e84511a0a8276,PodSandboxId:1be973d393fd98b3b25957a69bb1d222efeb5fee521136d8aee5fcb9c38f29b1,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_EXITED,CreatedAt:1717412947915313005,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-683480,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 003e33f91c92b780f1d2cb57410c03e9,},Annotations:map[string]string{io.kuberne
tes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:348419ceaffc348fe3779838e8b27e8baa3aa566be3f4c329aea8b701917349c,PodSandboxId:d32d79da82b93361a47376b8d8beec88e0c5d9097ed7a7450c63de0ee96d230f,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1717412452793948202,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-mvpcm,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: fe7a8238-754b-43ce-8080-48e39c548383,},Annotations:map[string]string{io.kubernete
s.container.hash: 17542a28,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fdbecc258023e10eac66da5599945eae2f7f8735769b825a69aea8b2effce668,PodSandboxId:62bef471ea4a403424478ea00a89f4311f3d11aea1fc0301abe18ddf44455091,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1717412239551956082,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-8tqf9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8eab910a-98ed-43db-ac16-d53beb6b7ee4,},Annotations:map[string]string{io.kubernetes.container.hash: 38c633a6,io.ku
bernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aa5e3aca86502907c8d16e6a2327b8f4298b6076617819ceed2b250ae9b24fe8,PodSandboxId:41da25dac8c4818183c067f43713ee94cebef64eab1ffb890510822bc9712a41,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1717412239525874687,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-
7db6d8ff4d-nff86,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 02320e91-17ab-4120-b8b9-dcc08234f180,},Annotations:map[string]string{io.kubernetes.container.hash: 9dfd16ce,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bcb102231e3a6bc3ea0cc39665baaebb0a97c42874b6cd34e86c04e87532df4f,PodSandboxId:6812552c2a4ab53e39123a83312dfad25c506cf5157864aa7732c91d6b7eebf2,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f999
37cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_EXITED,CreatedAt:1717412233855131979,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4d9w5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 708e060d-115a-4b74-bc66-138d62796b50,},Annotations:map[string]string{io.kubernetes.container.hash: 4749de6d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c282307764128f62fdee736d5e1ecddfbca0ae7ae2f78b7a78cbdb2dcede8556,PodSandboxId:860a510241592c9daa1fd1d8b28ba6314d6102372dd3005ee2f1fc332eaa5fbb,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f93
11987066307996f035,State:CONTAINER_EXITED,CreatedAt:1717412213949425877,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-683480,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e56ceee947d891ba1cd0986590072af7,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:09fff5459f24c748a0e085f496bf2b65db572d97be0afe906f05511398bdb0ad,PodSandboxId:86b1d4bcd541d31a17ad320bdd376b8fc84deff2fe6e38053aa471139f753d0e,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAIN
ER_EXITED,CreatedAt:1717412213926445790,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-683480,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 36cfff3e1576ec0ef9aa4746d32a32e3,},Annotations:map[string]string{io.kubernetes.container.hash: 40554970,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=c0ee304a-f44d-4c9d-804c-6d864bc2049a name=/runtime.v1.RuntimeService/ListContainers
	Jun 03 11:11:45 ha-683480 crio[3818]: time="2024-06-03 11:11:45.501927338Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=edf411e9-70a8-44f6-a6fb-72e1b29cae9b name=/runtime.v1.RuntimeService/Version
	Jun 03 11:11:45 ha-683480 crio[3818]: time="2024-06-03 11:11:45.502132623Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=edf411e9-70a8-44f6-a6fb-72e1b29cae9b name=/runtime.v1.RuntimeService/Version
	Jun 03 11:11:45 ha-683480 crio[3818]: time="2024-06-03 11:11:45.503243862Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=2b4f7833-5630-4c81-b7d9-18148207a745 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 03 11:11:45 ha-683480 crio[3818]: time="2024-06-03 11:11:45.504065499Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1717413105503973035,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154742,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=2b4f7833-5630-4c81-b7d9-18148207a745 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 03 11:11:45 ha-683480 crio[3818]: time="2024-06-03 11:11:45.504517088Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=3c6c459c-da0d-459b-a54e-078c7dbaa3eb name=/runtime.v1.RuntimeService/ListContainers
	Jun 03 11:11:45 ha-683480 crio[3818]: time="2024-06-03 11:11:45.504568881Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=3c6c459c-da0d-459b-a54e-078c7dbaa3eb name=/runtime.v1.RuntimeService/ListContainers
	Jun 03 11:11:45 ha-683480 crio[3818]: time="2024-06-03 11:11:45.504972743Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b3a10717b253a18c95729306cc39b8b43d5e58a09150083085bb706877b41c41,PodSandboxId:a113d054f5421f66107af14bfae1a5eebde08aa9dc9aeb335f0c95161f05eb06,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1717413036108691580,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a410a98d-73a7-434b-88ce-575c300b2807,},Annotations:map[string]string{io.kubernetes.container.hash: c0c86aa,io.kubernetes.container.restartCount: 4,io.kubernetes.container.te
rminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c3ea180b8216797aaf78ea5661ba3b0943d85bfcde1c3ce755f4e62582ab5ecf,PodSandboxId:ffe70c296995b94eea8e0ed4d7be6d69bf08d786f79d2409eb0aec4cec543072,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:3,},Image:&ImageSpec{Image:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CONTAINER_RUNNING,CreatedAt:1717413013098589663,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-zxhbp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 320e315b-e189-4358-9e56-a4be7d944fae,},Annotations:map[string]string{io.kubernetes.container.hash: ae8d6a68,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termina
tion-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4b0d6949ee1d24934a07cf0a644346fca0258b096baf5ad06ca30011e7f39eb1,PodSandboxId:a113d054f5421f66107af14bfae1a5eebde08aa9dc9aeb335f0c95161f05eb06,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1717412991110177163,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a410a98d-73a7-434b-88ce-575c300b2807,},Annotations:map[string]string{io.kubernetes.container.hash: c0c86aa,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.ku
bernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0376a5d0c8b827cc48df7d87f5eb7cfc72a495c600abbb4856848908d605e8ab,PodSandboxId:eef7acb133025c2540d90e56f987f803816220c3954ca2f0a137257b3822879b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1717412989099617863,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-683480,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b448fd1c84d729fa6b033c44220aea0b,},Annotations:map[string]string{io.kubernetes.container.hash: 25a67648,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.te
rminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f11bba0fe671eec93d2ed313c2be83ba1241f460d7349102758825c301c05c94,PodSandboxId:1be973d393fd98b3b25957a69bb1d222efeb5fee521136d8aee5fcb9c38f29b1,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1717412987097933728,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-683480,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 003e33f91c92b780f1d2cb57410c03e9,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contai
ner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a2616ab08c12cc3bf8a5ddb38992b52223cc3d7951ba7e34b77270f74109b379,PodSandboxId:43cc18e9695818b679a9094e9daaec11df83ee3c5be09797eb2bce64e1b7714f,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1717412981419893724,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-mvpcm,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: fe7a8238-754b-43ce-8080-48e39c548383,},Annotations:map[string]string{io.kubernetes.container.hash: 17542a28,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,i
o.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e6affd24ffc04f8e73646185baadbdcfadc4f59260fe0de2fcfc6b6c24c95576,PodSandboxId:312ee2bc45a8ad5b63be398920344737c48d32822e4acdfcb5242106eebd2f06,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1717412964984575127,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-683480,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 88446bc5037aec3d04a64b1cd4a0b0bb,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod
: 30,},},&Container{Id:48e4f287c203959b7515afda7bbc9f297b67f159d98c275d36cabdf2d658267e,PodSandboxId:0bb95efa9b5544806ce77cb38d2d1899f8a064362bc1a9d4019a150e391a9512,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:1717412948822408756,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4d9w5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 708e060d-115a-4b74-bc66-138d62796b50,},Annotations:map[string]string{io.kubernetes.container.hash: 4749de6d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:753900b199
b96cc9a3ae3791ff1c0c8a47f296f8db9da5deb7568cecb0e3bce5,PodSandboxId:1084ea2c9f83b50b855a9d1cebe8088d5c3ac92954ad88b1defd656231520b46,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1717412948359360909,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-8tqf9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8eab910a-98ed-43db-ac16-d53beb6b7ee4,},Annotations:map[string]string{io.kubernetes.container.hash: 38c633a6,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cc8f63fef0029c9f7bede5603ab9af3193a75bd4fc1106b23c316d4ce6b6705a,PodSandboxId:7610af85710c6617d550044fd9363c3da2fbbbe3d710d6bc8d401d9687a379cf,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1717412948351443976,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-nff86,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 02320e91-17ab-4120-b8b9-dcc08234f180,},Annotations:map[string]string{io.kubernetes.container.hash: 9dfd16ce,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"
protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:52b0704efa37cdba53b8de1a0dc7b7fec29ea28129c9a9e65bd213591e1c01c1,PodSandboxId:ffe70c296995b94eea8e0ed4d7be6d69bf08d786f79d2409eb0aec4cec543072,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:2,},Image:&ImageSpec{Image:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CONTAINER_EXITED,CreatedAt:1717412948220863604,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-zxhbp,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 320e315b-e189-4358-9e56-a4be7d944fae,},Annotations:map[string]string{io.kubernetes.container.hash: ae8d6a68,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:127d736575af20a24c0db0a6e3425badf2d41fcea00d489114e889360664fd0e,PodSandboxId:29eec1a82f9d96bfac4a182301c8302309c6d8392823083237c2d90fca41fa5b,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1717412948119054849,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-683480,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 36cfff3e1576ec0ef9aa4746d32a32e3,},An
notations:map[string]string{io.kubernetes.container.hash: 40554970,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:031c8a2316fc402ab581c065b6ef53496a23534ae41d34c7fb6e7ff35cb3260d,PodSandboxId:751825866bea37dd36dd4139ef61da30fa14d3c0c98e6184cb852519708eec00,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1717412948107489532,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-683480,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e56ceee947d891ba1cd0986590072af7,},Annotations:map[string]
string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:71115f2e0e5d4fe5ae6de1e873cc6f52c55ff8c3b50d1e7576944491d0487781,PodSandboxId:eef7acb133025c2540d90e56f987f803816220c3954ca2f0a137257b3822879b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_EXITED,CreatedAt:1717412947983596549,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-683480,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b448fd1c84d729fa6b033c44220aea0b,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: 25a67648,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9034d276d18e7ad0470a79b0643e03089b4cfa18ddd108b2966e84511a0a8276,PodSandboxId:1be973d393fd98b3b25957a69bb1d222efeb5fee521136d8aee5fcb9c38f29b1,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_EXITED,CreatedAt:1717412947915313005,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-683480,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 003e33f91c92b780f1d2cb57410c03e9,},Annotations:map[string]string{io.kuberne
tes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:348419ceaffc348fe3779838e8b27e8baa3aa566be3f4c329aea8b701917349c,PodSandboxId:d32d79da82b93361a47376b8d8beec88e0c5d9097ed7a7450c63de0ee96d230f,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1717412452793948202,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-mvpcm,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: fe7a8238-754b-43ce-8080-48e39c548383,},Annotations:map[string]string{io.kubernete
s.container.hash: 17542a28,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fdbecc258023e10eac66da5599945eae2f7f8735769b825a69aea8b2effce668,PodSandboxId:62bef471ea4a403424478ea00a89f4311f3d11aea1fc0301abe18ddf44455091,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1717412239551956082,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-8tqf9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8eab910a-98ed-43db-ac16-d53beb6b7ee4,},Annotations:map[string]string{io.kubernetes.container.hash: 38c633a6,io.ku
bernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aa5e3aca86502907c8d16e6a2327b8f4298b6076617819ceed2b250ae9b24fe8,PodSandboxId:41da25dac8c4818183c067f43713ee94cebef64eab1ffb890510822bc9712a41,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1717412239525874687,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-
7db6d8ff4d-nff86,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 02320e91-17ab-4120-b8b9-dcc08234f180,},Annotations:map[string]string{io.kubernetes.container.hash: 9dfd16ce,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bcb102231e3a6bc3ea0cc39665baaebb0a97c42874b6cd34e86c04e87532df4f,PodSandboxId:6812552c2a4ab53e39123a83312dfad25c506cf5157864aa7732c91d6b7eebf2,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f999
37cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_EXITED,CreatedAt:1717412233855131979,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4d9w5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 708e060d-115a-4b74-bc66-138d62796b50,},Annotations:map[string]string{io.kubernetes.container.hash: 4749de6d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c282307764128f62fdee736d5e1ecddfbca0ae7ae2f78b7a78cbdb2dcede8556,PodSandboxId:860a510241592c9daa1fd1d8b28ba6314d6102372dd3005ee2f1fc332eaa5fbb,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f93
11987066307996f035,State:CONTAINER_EXITED,CreatedAt:1717412213949425877,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-683480,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e56ceee947d891ba1cd0986590072af7,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:09fff5459f24c748a0e085f496bf2b65db572d97be0afe906f05511398bdb0ad,PodSandboxId:86b1d4bcd541d31a17ad320bdd376b8fc84deff2fe6e38053aa471139f753d0e,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAIN
ER_EXITED,CreatedAt:1717412213926445790,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-683480,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 36cfff3e1576ec0ef9aa4746d32a32e3,},Annotations:map[string]string{io.kubernetes.container.hash: 40554970,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=3c6c459c-da0d-459b-a54e-078c7dbaa3eb name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	b3a10717b253a       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      About a minute ago   Running             storage-provisioner       4                   a113d054f5421       storage-provisioner
	c3ea180b82167       ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f                                      About a minute ago   Running             kindnet-cni               3                   ffe70c296995b       kindnet-zxhbp
	4b0d6949ee1d2       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      About a minute ago   Exited              storage-provisioner       3                   a113d054f5421       storage-provisioner
	0376a5d0c8b82       91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a                                      About a minute ago   Running             kube-apiserver            3                   eef7acb133025       kube-apiserver-ha-683480
	f11bba0fe671e       25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c                                      About a minute ago   Running             kube-controller-manager   2                   1be973d393fd9       kube-controller-manager-ha-683480
	a2616ab08c12c       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      2 minutes ago        Running             busybox                   1                   43cc18e969581       busybox-fc5497c4f-mvpcm
	e6affd24ffc04       38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12                                      2 minutes ago        Running             kube-vip                  0                   312ee2bc45a8a       kube-vip-ha-683480
	48e4f287c2039       747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd                                      2 minutes ago        Running             kube-proxy                1                   0bb95efa9b554       kube-proxy-4d9w5
	753900b199b96       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      2 minutes ago        Running             coredns                   1                   1084ea2c9f83b       coredns-7db6d8ff4d-8tqf9
	cc8f63fef0029       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      2 minutes ago        Running             coredns                   1                   7610af85710c6       coredns-7db6d8ff4d-nff86
	52b0704efa37c       ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f                                      2 minutes ago        Exited              kindnet-cni               2                   ffe70c296995b       kindnet-zxhbp
	127d736575af2       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      2 minutes ago        Running             etcd                      1                   29eec1a82f9d9       etcd-ha-683480
	031c8a2316fc4       a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035                                      2 minutes ago        Running             kube-scheduler            1                   751825866bea3       kube-scheduler-ha-683480
	71115f2e0e5d4       91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a                                      2 minutes ago        Exited              kube-apiserver            2                   eef7acb133025       kube-apiserver-ha-683480
	9034d276d18e7       25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c                                      2 minutes ago        Exited              kube-controller-manager   1                   1be973d393fd9       kube-controller-manager-ha-683480
	348419ceaffc3       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   10 minutes ago       Exited              busybox                   0                   d32d79da82b93       busybox-fc5497c4f-mvpcm
	fdbecc258023e       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      14 minutes ago       Exited              coredns                   0                   62bef471ea4a4       coredns-7db6d8ff4d-8tqf9
	aa5e3aca86502       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      14 minutes ago       Exited              coredns                   0                   41da25dac8c48       coredns-7db6d8ff4d-nff86
	bcb102231e3a6       747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd                                      14 minutes ago       Exited              kube-proxy                0                   6812552c2a4ab       kube-proxy-4d9w5
	c282307764128       a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035                                      14 minutes ago       Exited              kube-scheduler            0                   860a510241592       kube-scheduler-ha-683480
	09fff5459f24c       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      14 minutes ago       Exited              etcd                      0                   86b1d4bcd541d       etcd-ha-683480
	
	
	==> coredns [753900b199b96cc9a3ae3791ff1c0c8a47f296f8db9da5deb7568cecb0e3bce5] <==
	Trace[1740610122]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.6:46100->10.96.0.1:443: read: connection reset by peer 12670ms (11:09:32.817)
	Trace[1740610122]: [12.670863276s] [12.670863276s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.6:46100->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> coredns [aa5e3aca86502907c8d16e6a2327b8f4298b6076617819ceed2b250ae9b24fe8] <==
	[INFO] 10.244.1.2:59258 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00009417s
	[INFO] 10.244.0.4:59067 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001995491s
	[INFO] 10.244.0.4:33658 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000077694s
	[INFO] 10.244.2.2:56134 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000146189s
	[INFO] 10.244.2.2:42897 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001874015s
	[INFO] 10.244.2.2:49555 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000079926s
	[INFO] 10.244.1.2:49977 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000098794s
	[INFO] 10.244.1.2:55522 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000070995s
	[INFO] 10.244.1.2:47166 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000064061s
	[INFO] 10.244.0.4:52772 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000107779s
	[INFO] 10.244.0.4:34695 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000110706s
	[INFO] 10.244.2.2:47248 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00010537s
	[INFO] 10.244.1.2:52200 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000175618s
	[INFO] 10.244.1.2:56731 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000211211s
	[INFO] 10.244.1.2:47156 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000137189s
	[INFO] 10.244.1.2:57441 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000161046s
	[INFO] 10.244.0.4:45937 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000064288s
	[INFO] 10.244.0.4:50125 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.00003887s
	[INFO] 10.244.2.2:38937 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000134308s
	[INFO] 10.244.2.2:34039 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000085147s
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: the server has asked for the client to provide credentials (get namespaces) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=23, ErrCode=NO_ERROR, debug=""
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: the server has asked for the client to provide credentials (get services) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=23, ErrCode=NO_ERROR, debug=""
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: the server has asked for the client to provide credentials (get endpointslices.discovery.k8s.io) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=23, ErrCode=NO_ERROR, debug=""
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [cc8f63fef0029c9f7bede5603ab9af3193a75bd4fc1106b23c316d4ce6b6705a] <==
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.5:50466->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.5:44132->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: Trace[1995040395]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (03-Jun-2024 11:09:22.492) (total time: 10325ms):
	Trace[1995040395]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.5:44132->10.96.0.1:443: read: connection reset by peer 10325ms (11:09:32.817)
	Trace[1995040395]: [10.325900852s] [10.325900852s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.5:44132->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> coredns [fdbecc258023e10eac66da5599945eae2f7f8735769b825a69aea8b2effce668] <==
	[INFO] 10.244.1.2:60397 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.013328418s
	[INFO] 10.244.1.2:34848 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000138348s
	[INFO] 10.244.0.4:53254 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000147619s
	[INFO] 10.244.0.4:37575 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000103362s
	[INFO] 10.244.0.4:54948 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000181862s
	[INFO] 10.244.0.4:39944 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001365258s
	[INFO] 10.244.0.4:55239 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.00017828s
	[INFO] 10.244.0.4:57467 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000097919s
	[INFO] 10.244.2.2:35971 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000096406s
	[INFO] 10.244.2.2:38423 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001334812s
	[INFO] 10.244.2.2:42352 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000153771s
	[INFO] 10.244.2.2:40734 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000099488s
	[INFO] 10.244.2.2:34598 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000136946s
	[INFO] 10.244.1.2:54219 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000087067s
	[INFO] 10.244.0.4:58452 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000093948s
	[INFO] 10.244.0.4:35784 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000061499s
	[INFO] 10.244.2.2:54391 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000149082s
	[INFO] 10.244.2.2:39850 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000109311s
	[INFO] 10.244.2.2:39330 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000101321s
	[INFO] 10.244.0.4:56550 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000137331s
	[INFO] 10.244.0.4:42317 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000097716s
	[INFO] 10.244.2.2:34210 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000106975s
	[INFO] 10.244.2.2:40755 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.00028708s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
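
The repeated "connection refused" and "no route to host" errors against 10.96.0.1:443 in the coredns logs above mean the in-cluster apiserver service was unreachable while the control plane restarted; the readiness plugin keeps printing "Still waiting on: kubernetes" until a list call succeeds. A minimal way to re-collect this state, assuming the kubeconfig context is named ha-683480 (inferred from the node name below, not stated in this report):

  kubectl --context ha-683480 -n kube-system logs -l k8s-app=kube-dns --tail=50
  kubectl --context ha-683480 get endpoints kubernetes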
	
	
	==> describe nodes <==
	Name:               ha-683480
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-683480
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=599070631c2216ebc936292d491e4fe10e15b9d8
	                    minikube.k8s.io/name=ha-683480
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_06_03T10_57_01_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 03 Jun 2024 10:56:57 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-683480
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 03 Jun 2024 11:11:40 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 03 Jun 2024 11:09:48 +0000   Mon, 03 Jun 2024 10:56:56 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 03 Jun 2024 11:09:48 +0000   Mon, 03 Jun 2024 10:56:56 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 03 Jun 2024 11:09:48 +0000   Mon, 03 Jun 2024 10:56:56 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 03 Jun 2024 11:09:48 +0000   Mon, 03 Jun 2024 10:57:18 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.116
	  Hostname:    ha-683480
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 f1505c2b59bc4afb8c36148f46c99e6c
	  System UUID:                f1505c2b-59bc-4afb-8c36-148f46c99e6c
	  Boot ID:                    acccd468-078d-403e-a5b4-d10d97594cc0
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.1
	  Kube-Proxy Version:         v1.30.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-mvpcm              0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 coredns-7db6d8ff4d-8tqf9             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     14m
	  kube-system                 coredns-7db6d8ff4d-nff86             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     14m
	  kube-system                 etcd-ha-683480                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         14m
	  kube-system                 kindnet-zxhbp                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      14m
	  kube-system                 kube-apiserver-ha-683480             250m (12%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-controller-manager-ha-683480    200m (10%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-proxy-4d9w5                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-scheduler-ha-683480             100m (5%)     0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-vip-ha-683480                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         74s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 113s                   kube-proxy       
	  Normal   Starting                 14m                    kube-proxy       
	  Normal   NodeHasSufficientMemory  14m (x8 over 14m)      kubelet          Node ha-683480 status is now: NodeHasSufficientMemory
	  Normal   NodeAllocatableEnforced  14m                    kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientPID     14m (x7 over 14m)      kubelet          Node ha-683480 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    14m (x8 over 14m)      kubelet          Node ha-683480 status is now: NodeHasNoDiskPressure
	  Normal   Starting                 14m                    kubelet          Starting kubelet.
	  Normal   NodeHasSufficientPID     14m                    kubelet          Node ha-683480 status is now: NodeHasSufficientPID
	  Normal   Starting                 14m                    kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  14m                    kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  14m                    kubelet          Node ha-683480 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    14m                    kubelet          Node ha-683480 status is now: NodeHasNoDiskPressure
	  Normal   RegisteredNode           14m                    node-controller  Node ha-683480 event: Registered Node ha-683480 in Controller
	  Normal   NodeReady                14m                    kubelet          Node ha-683480 status is now: NodeReady
	  Normal   RegisteredNode           12m                    node-controller  Node ha-683480 event: Registered Node ha-683480 in Controller
	  Normal   RegisteredNode           11m                    node-controller  Node ha-683480 event: Registered Node ha-683480 in Controller
	  Warning  ContainerGCFailed        2m45s (x2 over 3m45s)  kubelet          rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   RegisteredNode           113s                   node-controller  Node ha-683480 event: Registered Node ha-683480 in Controller
	  Normal   RegisteredNode           101s                   node-controller  Node ha-683480 event: Registered Node ha-683480 in Controller
	  Normal   RegisteredNode           26s                    node-controller  Node ha-683480 event: Registered Node ha-683480 in Controller
	
	
	Name:               ha-683480-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-683480-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=599070631c2216ebc936292d491e4fe10e15b9d8
	                    minikube.k8s.io/name=ha-683480
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_06_03T10_59_14_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 03 Jun 2024 10:59:11 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-683480-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 03 Jun 2024 11:11:41 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 03 Jun 2024 11:10:31 +0000   Mon, 03 Jun 2024 11:09:50 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 03 Jun 2024 11:10:31 +0000   Mon, 03 Jun 2024 11:09:50 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 03 Jun 2024 11:10:31 +0000   Mon, 03 Jun 2024 11:09:50 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 03 Jun 2024 11:10:31 +0000   Mon, 03 Jun 2024 11:09:50 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.127
	  Hostname:    ha-683480-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 2d1a1fca79484f629cf7b8fc1955281b
	  System UUID:                2d1a1fca-7948-4f62-9cf7-b8fc1955281b
	  Boot ID:                    2ccc0715-41df-4f70-950f-db6ed24fa46f
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.1
	  Kube-Proxy Version:         v1.30.1
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-ldtcf                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 etcd-ha-683480-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         12m
	  kube-system                 kindnet-t6fxj                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      12m
	  kube-system                 kube-apiserver-ha-683480-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-controller-manager-ha-683480-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-q2xfn                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-scheduler-ha-683480-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-vip-ha-683480-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 113s                   kube-proxy       
	  Normal  Starting                 12m                    kube-proxy       
	  Normal  NodeAllocatableEnforced  12m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  12m (x8 over 12m)      kubelet          Node ha-683480-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    12m (x8 over 12m)      kubelet          Node ha-683480-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     12m (x7 over 12m)      kubelet          Node ha-683480-m02 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           12m                    node-controller  Node ha-683480-m02 event: Registered Node ha-683480-m02 in Controller
	  Normal  RegisteredNode           12m                    node-controller  Node ha-683480-m02 event: Registered Node ha-683480-m02 in Controller
	  Normal  RegisteredNode           11m                    node-controller  Node ha-683480-m02 event: Registered Node ha-683480-m02 in Controller
	  Normal  NodeNotReady             9m9s                   node-controller  Node ha-683480-m02 status is now: NodeNotReady
	  Normal  Starting                 2m24s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m24s (x8 over 2m24s)  kubelet          Node ha-683480-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m24s (x8 over 2m24s)  kubelet          Node ha-683480-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m24s (x7 over 2m24s)  kubelet          Node ha-683480-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m24s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           114s                   node-controller  Node ha-683480-m02 event: Registered Node ha-683480-m02 in Controller
	  Normal  RegisteredNode           102s                   node-controller  Node ha-683480-m02 event: Registered Node ha-683480-m02 in Controller
	  Normal  RegisteredNode           27s                    node-controller  Node ha-683480-m02 event: Registered Node ha-683480-m02 in Controller
	
	
	Name:               ha-683480-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-683480-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=599070631c2216ebc936292d491e4fe10e15b9d8
	                    minikube.k8s.io/name=ha-683480
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_06_03T11_00_29_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 03 Jun 2024 11:00:25 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-683480-m03
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 03 Jun 2024 11:11:43 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 03 Jun 2024 11:11:23 +0000   Mon, 03 Jun 2024 11:10:53 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 03 Jun 2024 11:11:23 +0000   Mon, 03 Jun 2024 11:10:53 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 03 Jun 2024 11:11:23 +0000   Mon, 03 Jun 2024 11:10:53 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 03 Jun 2024 11:11:23 +0000   Mon, 03 Jun 2024 11:10:53 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.131
	  Hostname:    ha-683480-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 b7bb33c5cad548f785d23d226c699411
	  System UUID:                b7bb33c5-cad5-48f7-85d2-3d226c699411
	  Boot ID:                    53db0522-d2df-40bd-80de-8905390e28ab
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.1
	  Kube-Proxy Version:         v1.30.1
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-ngf6n                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 etcd-ha-683480-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         11m
	  kube-system                 kindnet-zsfhr                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      11m
	  kube-system                 kube-apiserver-ha-683480-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-controller-manager-ha-683480-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-proxy-txnhc                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-scheduler-ha-683480-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-vip-ha-683480-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 11m                kube-proxy       
	  Normal   Starting                 35s                kube-proxy       
	  Normal   NodeHasSufficientMemory  11m (x8 over 11m)  kubelet          Node ha-683480-m03 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    11m (x8 over 11m)  kubelet          Node ha-683480-m03 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     11m (x7 over 11m)  kubelet          Node ha-683480-m03 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  11m                kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           11m                node-controller  Node ha-683480-m03 event: Registered Node ha-683480-m03 in Controller
	  Normal   RegisteredNode           11m                node-controller  Node ha-683480-m03 event: Registered Node ha-683480-m03 in Controller
	  Normal   RegisteredNode           11m                node-controller  Node ha-683480-m03 event: Registered Node ha-683480-m03 in Controller
	  Normal   RegisteredNode           114s               node-controller  Node ha-683480-m03 event: Registered Node ha-683480-m03 in Controller
	  Normal   RegisteredNode           102s               node-controller  Node ha-683480-m03 event: Registered Node ha-683480-m03 in Controller
	  Normal   NodeNotReady             73s                node-controller  Node ha-683480-m03 status is now: NodeNotReady
	  Normal   Starting                 54s                kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  53s                kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  53s (x2 over 53s)  kubelet          Node ha-683480-m03 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    53s (x2 over 53s)  kubelet          Node ha-683480-m03 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     53s (x2 over 53s)  kubelet          Node ha-683480-m03 status is now: NodeHasSufficientPID
	  Warning  Rebooted                 53s                kubelet          Node ha-683480-m03 has been rebooted, boot id: 53db0522-d2df-40bd-80de-8905390e28ab
	  Normal   NodeReady                53s                kubelet          Node ha-683480-m03 status is now: NodeReady
	  Normal   RegisteredNode           27s                node-controller  Node ha-683480-m03 event: Registered Node ha-683480-m03 in Controller
	
	
	Name:               ha-683480-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-683480-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=599070631c2216ebc936292d491e4fe10e15b9d8
	                    minikube.k8s.io/name=ha-683480
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_06_03T11_01_25_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 03 Jun 2024 11:01:24 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-683480-m04
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 03 Jun 2024 11:05:19 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Mon, 03 Jun 2024 11:01:55 +0000   Mon, 03 Jun 2024 11:10:33 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Mon, 03 Jun 2024 11:01:55 +0000   Mon, 03 Jun 2024 11:10:33 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Mon, 03 Jun 2024 11:01:55 +0000   Mon, 03 Jun 2024 11:10:33 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Mon, 03 Jun 2024 11:01:55 +0000   Mon, 03 Jun 2024 11:10:33 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.206
	  Hostname:    ha-683480-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 d0705544cf414e31abf26e0a013cd6bf
	  System UUID:                d0705544-cf41-4e31-abf2-6e0a013cd6bf
	  Boot ID:                    125ac719-6c97-4e76-9440-99e7f62b9e2d
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.1
	  Kube-Proxy Version:         v1.30.1
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-24p87       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      10m
	  kube-system                 kube-proxy-2kkf4    0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 10m                kube-proxy       
	  Normal  NodeHasSufficientPID     10m (x2 over 10m)  kubelet          Node ha-683480-m04 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  10m (x2 over 10m)  kubelet          Node ha-683480-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    10m (x2 over 10m)  kubelet          Node ha-683480-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeAllocatableEnforced  10m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           10m                node-controller  Node ha-683480-m04 event: Registered Node ha-683480-m04 in Controller
	  Normal  RegisteredNode           10m                node-controller  Node ha-683480-m04 event: Registered Node ha-683480-m04 in Controller
	  Normal  RegisteredNode           10m                node-controller  Node ha-683480-m04 event: Registered Node ha-683480-m04 in Controller
	  Normal  NodeReady                10m                kubelet          Node ha-683480-m04 status is now: NodeReady
	  Normal  RegisteredNode           114s               node-controller  Node ha-683480-m04 event: Registered Node ha-683480-m04 in Controller
	  Normal  RegisteredNode           102s               node-controller  Node ha-683480-m04 event: Registered Node ha-683480-m04 in Controller
	  Normal  NodeNotReady             73s                node-controller  Node ha-683480-m04 status is now: NodeNotReady
	  Normal  RegisteredNode           27s                node-controller  Node ha-683480-m04 event: Registered Node ha-683480-m04 in Controller
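
Of the four node summaries above, only ha-683480-m04 still reports Unknown conditions and carries the node.kubernetes.io/unreachable taints: its kubelet last renewed its lease at 11:05:19 and the node controller marked it NodeStatusUnknown at 11:10:33. A short sketch for re-querying just that node, again assuming the ha-683480 context (an assumption, since the report does not name the kubeconfig context):

  kubectl --context ha-683480 get nodes -o wide
  kubectl --context ha-683480 describe node ha-683480-m04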
	
	
	==> dmesg <==
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[ +13.363785] systemd-fstab-generator[594]: Ignoring "noauto" option for root device
	[  +0.062784] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.051848] systemd-fstab-generator[606]: Ignoring "noauto" option for root device
	[  +0.189543] systemd-fstab-generator[620]: Ignoring "noauto" option for root device
	[  +0.108878] systemd-fstab-generator[632]: Ignoring "noauto" option for root device
	[  +0.262803] systemd-fstab-generator[662]: Ignoring "noauto" option for root device
	[  +4.077728] systemd-fstab-generator[762]: Ignoring "noauto" option for root device
	[  +5.011635] systemd-fstab-generator[951]: Ignoring "noauto" option for root device
	[  +0.054415] kauditd_printk_skb: 158 callbacks suppressed
	[  +5.849379] kauditd_printk_skb: 79 callbacks suppressed
	[  +1.148784] systemd-fstab-generator[1371]: Ignoring "noauto" option for root device
	[Jun 3 10:57] kauditd_printk_skb: 15 callbacks suppressed
	[  +5.057593] kauditd_printk_skb: 34 callbacks suppressed
	[Jun 3 10:59] kauditd_printk_skb: 30 callbacks suppressed
	[Jun 3 11:08] systemd-fstab-generator[3732]: Ignoring "noauto" option for root device
	[  +0.155437] systemd-fstab-generator[3744]: Ignoring "noauto" option for root device
	[  +0.187815] systemd-fstab-generator[3758]: Ignoring "noauto" option for root device
	[  +0.153201] systemd-fstab-generator[3770]: Ignoring "noauto" option for root device
	[  +0.283321] systemd-fstab-generator[3798]: Ignoring "noauto" option for root device
	[Jun 3 11:09] systemd-fstab-generator[3905]: Ignoring "noauto" option for root device
	[  +6.669814] kauditd_printk_skb: 122 callbacks suppressed
	[ +17.441291] kauditd_printk_skb: 98 callbacks suppressed
	[  +5.224130] kauditd_printk_skb: 1 callbacks suppressed
	[ +22.398569] kauditd_printk_skb: 6 callbacks suppressed
	
	
	==> etcd [09fff5459f24c748a0e085f496bf2b65db572d97be0afe906f05511398bdb0ad] <==
	2024/06/03 11:07:26 WARNING: [core] [Server #6] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-06-03T11:07:26.361652Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"298.180966ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/priorityclasses/\" range_end:\"/registry/priorityclasses0\" limit:10000 ","response":"","error":"context canceled"}
	{"level":"info","ts":"2024-06-03T11:07:26.361662Z","caller":"traceutil/trace.go:171","msg":"trace[1176087128] range","detail":"{range_begin:/registry/priorityclasses/; range_end:/registry/priorityclasses0; }","duration":"298.197396ms","start":"2024-06-03T11:07:26.063461Z","end":"2024-06-03T11:07:26.361659Z","steps":["trace[1176087128] 'agreement among raft nodes before linearized reading'  (duration: 298.187306ms)"],"step_count":1}
	2024/06/03 11:07:26 WARNING: [core] [Server #6] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	2024/06/03 11:07:26 WARNING: [core] [Server #6] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-06-03T11:07:26.410189Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.116:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-06-03T11:07:26.410266Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.116:2379: use of closed network connection"}
	{"level":"info","ts":"2024-06-03T11:07:26.410368Z","caller":"etcdserver/server.go:1462","msg":"skipped leadership transfer; local server is not leader","local-member-id":"8b2d6b6d639b2fdb","current-leader-member-id":"0"}
	{"level":"info","ts":"2024-06-03T11:07:26.410582Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"186d66165cd2cce"}
	{"level":"info","ts":"2024-06-03T11:07:26.410623Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"186d66165cd2cce"}
	{"level":"info","ts":"2024-06-03T11:07:26.410665Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"186d66165cd2cce"}
	{"level":"info","ts":"2024-06-03T11:07:26.410722Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"8b2d6b6d639b2fdb","remote-peer-id":"186d66165cd2cce"}
	{"level":"info","ts":"2024-06-03T11:07:26.410782Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"8b2d6b6d639b2fdb","remote-peer-id":"186d66165cd2cce"}
	{"level":"info","ts":"2024-06-03T11:07:26.410838Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"8b2d6b6d639b2fdb","remote-peer-id":"186d66165cd2cce"}
	{"level":"info","ts":"2024-06-03T11:07:26.410865Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"186d66165cd2cce"}
	{"level":"info","ts":"2024-06-03T11:07:26.410891Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"4f87f407f126f7fc"}
	{"level":"info","ts":"2024-06-03T11:07:26.410925Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"4f87f407f126f7fc"}
	{"level":"info","ts":"2024-06-03T11:07:26.410958Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"4f87f407f126f7fc"}
	{"level":"info","ts":"2024-06-03T11:07:26.411127Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"8b2d6b6d639b2fdb","remote-peer-id":"4f87f407f126f7fc"}
	{"level":"info","ts":"2024-06-03T11:07:26.411176Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"8b2d6b6d639b2fdb","remote-peer-id":"4f87f407f126f7fc"}
	{"level":"info","ts":"2024-06-03T11:07:26.411224Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"8b2d6b6d639b2fdb","remote-peer-id":"4f87f407f126f7fc"}
	{"level":"info","ts":"2024-06-03T11:07:26.411251Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"4f87f407f126f7fc"}
	{"level":"info","ts":"2024-06-03T11:07:26.414171Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.39.116:2380"}
	{"level":"info","ts":"2024-06-03T11:07:26.414318Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.39.116:2380"}
	{"level":"info","ts":"2024-06-03T11:07:26.41435Z","caller":"embed/etcd.go:377","msg":"closed etcd server","name":"ha-683480","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.116:2380"],"advertise-client-urls":["https://192.168.39.116:2379"]}
	
	
	==> etcd [127d736575af20a24c0db0a6e3425badf2d41fcea00d489114e889360664fd0e] <==
	{"level":"warn","ts":"2024-06-03T11:10:49.257236Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"4f87f407f126f7fc","rtt":"0s","error":"dial tcp 192.168.39.131:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-06-03T11:10:49.980652Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.39.131:2380/version","remote-member-id":"4f87f407f126f7fc","error":"Get \"https://192.168.39.131:2380/version\": dial tcp 192.168.39.131:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-06-03T11:10:49.980792Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"4f87f407f126f7fc","error":"Get \"https://192.168.39.131:2380/version\": dial tcp 192.168.39.131:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-06-03T11:10:53.982327Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.39.131:2380/version","remote-member-id":"4f87f407f126f7fc","error":"Get \"https://192.168.39.131:2380/version\": dial tcp 192.168.39.131:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-06-03T11:10:53.982432Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"4f87f407f126f7fc","error":"Get \"https://192.168.39.131:2380/version\": dial tcp 192.168.39.131:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-06-03T11:10:54.243037Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"4f87f407f126f7fc","rtt":"0s","error":"dial tcp 192.168.39.131:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-06-03T11:10:54.25753Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"4f87f407f126f7fc","rtt":"0s","error":"dial tcp 192.168.39.131:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-06-03T11:10:57.98457Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.39.131:2380/version","remote-member-id":"4f87f407f126f7fc","error":"Get \"https://192.168.39.131:2380/version\": dial tcp 192.168.39.131:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-06-03T11:10:57.984633Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"4f87f407f126f7fc","error":"Get \"https://192.168.39.131:2380/version\": dial tcp 192.168.39.131:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-06-03T11:10:59.243189Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"4f87f407f126f7fc","rtt":"0s","error":"dial tcp 192.168.39.131:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-06-03T11:10:59.258575Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"4f87f407f126f7fc","rtt":"0s","error":"dial tcp 192.168.39.131:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-06-03T11:11:01.986709Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.39.131:2380/version","remote-member-id":"4f87f407f126f7fc","error":"Get \"https://192.168.39.131:2380/version\": dial tcp 192.168.39.131:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-06-03T11:11:01.986823Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"4f87f407f126f7fc","error":"Get \"https://192.168.39.131:2380/version\": dial tcp 192.168.39.131:2380: connect: connection refused"}
	{"level":"info","ts":"2024-06-03T11:11:02.679204Z","caller":"rafthttp/peer_status.go:53","msg":"peer became active","peer-id":"4f87f407f126f7fc"}
	{"level":"info","ts":"2024-06-03T11:11:02.707663Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"8b2d6b6d639b2fdb","to":"4f87f407f126f7fc","stream-type":"stream MsgApp v2"}
	{"level":"info","ts":"2024-06-03T11:11:02.707834Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"8b2d6b6d639b2fdb","remote-peer-id":"4f87f407f126f7fc"}
	{"level":"info","ts":"2024-06-03T11:11:02.728429Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"8b2d6b6d639b2fdb","to":"4f87f407f126f7fc","stream-type":"stream Message"}
	{"level":"info","ts":"2024-06-03T11:11:02.728489Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"8b2d6b6d639b2fdb","remote-peer-id":"4f87f407f126f7fc"}
	{"level":"info","ts":"2024-06-03T11:11:02.729503Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"8b2d6b6d639b2fdb","remote-peer-id":"4f87f407f126f7fc"}
	{"level":"warn","ts":"2024-06-03T11:11:02.739806Z","caller":"embed/config_logging.go:169","msg":"rejected connection","remote-addr":"192.168.39.131:53990","server-name":"","error":"EOF"}
	{"level":"info","ts":"2024-06-03T11:11:02.740926Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"8b2d6b6d639b2fdb","remote-peer-id":"4f87f407f126f7fc"}
	{"level":"warn","ts":"2024-06-03T11:11:02.74806Z","caller":"embed/config_logging.go:169","msg":"rejected connection","remote-addr":"192.168.39.131:54022","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2024-06-03T11:11:02.751524Z","caller":"embed/config_logging.go:169","msg":"rejected connection","remote-addr":"192.168.39.131:54014","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2024-06-03T11:11:04.243889Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"4f87f407f126f7fc","rtt":"0s","error":"dial tcp 192.168.39.131:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-06-03T11:11:04.259196Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"4f87f407f126f7fc","rtt":"0s","error":"dial tcp 192.168.39.131:2380: connect: connection refused"}
	
	
	==> kernel <==
	 11:11:46 up 15 min,  0 users,  load average: 0.21, 0.45, 0.26
	Linux ha-683480 5.10.207 #1 SMP Wed May 22 22:17:16 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [52b0704efa37cdba53b8de1a0dc7b7fec29ea28129c9a9e65bd213591e1c01c1] <==
	I0603 11:09:08.740706       1 main.go:102] connected to apiserver: https://10.96.0.1:443
	I0603 11:09:08.743072       1 main.go:107] hostIP = 192.168.39.116
	podIP = 192.168.39.116
	I0603 11:09:08.743290       1 main.go:116] setting mtu 1500 for CNI 
	I0603 11:09:08.789618       1 main.go:146] kindnetd IP family: "ipv4"
	I0603 11:09:08.789803       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	I0603 11:09:26.673473       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: no route to host
	I0603 11:09:29.745684       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: no route to host
	I0603 11:09:32.817482       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: no route to host
	I0603 11:09:35.889507       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: no route to host
	I0603 11:09:38.890591       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: connection refused
	panic: Reached maximum retries obtaining node list: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: connection refused
	
	goroutine 1 [running]:
	main.main()
		/go/src/cmd/kindnetd/main.go:195 +0xe3b
	
	
	==> kindnet [c3ea180b8216797aaf78ea5661ba3b0943d85bfcde1c3ce755f4e62582ab5ecf] <==
	I0603 11:11:13.973943       1 main.go:250] Node ha-683480-m04 has CIDR [10.244.3.0/24] 
	I0603 11:11:23.980330       1 main.go:223] Handling node with IPs: map[192.168.39.116:{}]
	I0603 11:11:23.980424       1 main.go:227] handling current node
	I0603 11:11:23.980464       1 main.go:223] Handling node with IPs: map[192.168.39.127:{}]
	I0603 11:11:23.980482       1 main.go:250] Node ha-683480-m02 has CIDR [10.244.1.0/24] 
	I0603 11:11:23.980647       1 main.go:223] Handling node with IPs: map[192.168.39.131:{}]
	I0603 11:11:23.980689       1 main.go:250] Node ha-683480-m03 has CIDR [10.244.2.0/24] 
	I0603 11:11:23.980766       1 main.go:223] Handling node with IPs: map[192.168.39.206:{}]
	I0603 11:11:23.980786       1 main.go:250] Node ha-683480-m04 has CIDR [10.244.3.0/24] 
	I0603 11:11:34.002180       1 main.go:223] Handling node with IPs: map[192.168.39.116:{}]
	I0603 11:11:34.002382       1 main.go:227] handling current node
	I0603 11:11:34.002412       1 main.go:223] Handling node with IPs: map[192.168.39.127:{}]
	I0603 11:11:34.002499       1 main.go:250] Node ha-683480-m02 has CIDR [10.244.1.0/24] 
	I0603 11:11:34.002747       1 main.go:223] Handling node with IPs: map[192.168.39.131:{}]
	I0603 11:11:34.002849       1 main.go:250] Node ha-683480-m03 has CIDR [10.244.2.0/24] 
	I0603 11:11:34.003202       1 main.go:223] Handling node with IPs: map[192.168.39.206:{}]
	I0603 11:11:34.003343       1 main.go:250] Node ha-683480-m04 has CIDR [10.244.3.0/24] 
	I0603 11:11:44.009389       1 main.go:223] Handling node with IPs: map[192.168.39.116:{}]
	I0603 11:11:44.009474       1 main.go:227] handling current node
	I0603 11:11:44.009498       1 main.go:223] Handling node with IPs: map[192.168.39.127:{}]
	I0603 11:11:44.009515       1 main.go:250] Node ha-683480-m02 has CIDR [10.244.1.0/24] 
	I0603 11:11:44.009629       1 main.go:223] Handling node with IPs: map[192.168.39.131:{}]
	I0603 11:11:44.009649       1 main.go:250] Node ha-683480-m03 has CIDR [10.244.2.0/24] 
	I0603 11:11:44.009705       1 main.go:223] Handling node with IPs: map[192.168.39.206:{}]
	I0603 11:11:44.009721       1 main.go:250] Node ha-683480-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [0376a5d0c8b827cc48df7d87f5eb7cfc72a495c600abbb4856848908d605e8ab] <==
	I0603 11:09:51.222575       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0603 11:09:51.242967       1 crdregistration_controller.go:111] Starting crd-autoregister controller
	I0603 11:09:51.245229       1 shared_informer.go:313] Waiting for caches to sync for crd-autoregister
	I0603 11:09:51.308514       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0603 11:09:51.309089       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0603 11:09:51.309508       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0603 11:09:51.310384       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0603 11:09:51.317848       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0603 11:09:51.317887       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0603 11:09:51.310656       1 shared_informer.go:320] Caches are synced for configmaps
	I0603 11:09:51.319170       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	I0603 11:09:51.327394       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0603 11:09:51.327490       1 policy_source.go:224] refreshing policies
	I0603 11:09:51.345738       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0603 11:09:51.345832       1 aggregator.go:165] initial CRD sync complete...
	I0603 11:09:51.345876       1 autoregister_controller.go:141] Starting autoregister controller
	I0603 11:09:51.345901       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0603 11:09:51.345924       1 cache.go:39] Caches are synced for autoregister controller
	I0603 11:09:51.416704       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	W0603 11:09:51.489637       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.127 192.168.39.131]
	I0603 11:09:51.491326       1 controller.go:615] quota admission added evaluator for: endpoints
	I0603 11:09:51.516079       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	E0603 11:09:51.532492       1 controller.go:95] Found stale data, removed previous endpoints on kubernetes service, apiserver didn't exit successfully previously
	I0603 11:09:52.216826       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0603 11:09:52.659225       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.116 192.168.39.127]
	
	
	==> kube-apiserver [71115f2e0e5d4fe5ae6de1e873cc6f52c55ff8c3b50d1e7576944491d0487781] <==
	I0603 11:09:08.697716       1 options.go:221] external host was not specified, using 192.168.39.116
	I0603 11:09:08.701808       1 server.go:148] Version: v1.30.1
	I0603 11:09:08.701868       1 server.go:150] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0603 11:09:09.483085       1 shared_informer.go:313] Waiting for caches to sync for node_authorizer
	I0603 11:09:09.490758       1 plugins.go:157] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0603 11:09:09.492882       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0603 11:09:09.493171       1 instance.go:299] Using reconciler: lease
	I0603 11:09:09.491507       1 shared_informer.go:313] Waiting for caches to sync for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	W0603 11:09:29.473694       1 logging.go:59] [core] [Channel #1 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W0603 11:09:29.476181       1 logging.go:59] [core] [Channel #2 SubChannel #3] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	F0603 11:09:29.494626       1 instance.go:292] Error creating leases: error creating storage factory: context deadline exceeded
	
	
	==> kube-controller-manager [9034d276d18e7ad0470a79b0643e03089b4cfa18ddd108b2966e84511a0a8276] <==
	I0603 11:09:09.767181       1 serving.go:380] Generated self-signed cert in-memory
	I0603 11:09:10.146137       1 controllermanager.go:189] "Starting" version="v1.30.1"
	I0603 11:09:10.146233       1 controllermanager.go:191] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0603 11:09:10.147699       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0603 11:09:10.148507       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0603 11:09:10.149450       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0603 11:09:10.149573       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	E0603 11:09:30.500169       1 controllermanager.go:234] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.168.39.116:8443/healthz\": dial tcp 192.168.39.116:8443: connect: connection refused"
	
	
	==> kube-controller-manager [f11bba0fe671eec93d2ed313c2be83ba1241f460d7349102758825c301c05c94] <==
	I0603 11:10:04.127463       1 shared_informer.go:320] Caches are synced for taint
	I0603 11:10:04.127638       1 node_lifecycle_controller.go:1227] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0603 11:10:04.127771       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-683480"
	I0603 11:10:04.127826       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-683480-m02"
	I0603 11:10:04.127862       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-683480-m03"
	I0603 11:10:04.127901       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-683480-m04"
	I0603 11:10:04.127951       1 node_lifecycle_controller.go:1073] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0603 11:10:04.190263       1 shared_informer.go:320] Caches are synced for resource quota
	I0603 11:10:04.203208       1 shared_informer.go:320] Caches are synced for resource quota
	I0603 11:10:04.622808       1 shared_informer.go:320] Caches are synced for garbage collector
	I0603 11:10:04.622899       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0603 11:10:04.639863       1 shared_informer.go:320] Caches are synced for garbage collector
	I0603 11:10:09.363330       1 endpointslice_controller.go:311] "Error syncing endpoint slices for service, retrying" logger="endpointslice-controller" key="kube-system/kube-dns" err="failed to update kube-dns-4wld8 EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io \"kube-dns-4wld8\": the object has been modified; please apply your changes to the latest version and try again"
	I0603 11:10:09.363630       1 event.go:377] Event(v1.ObjectReference{Kind:"Service", Namespace:"kube-system", Name:"kube-dns", UID:"d172cdeb-e0f6-4277-b7fa-80cd2362b9f8", APIVersion:"v1", ResourceVersion:"291", FieldPath:""}): type: 'Warning' reason: 'FailedToUpdateEndpointSlices' Error updating Endpoint Slices for Service kube-system/kube-dns: failed to update kube-dns-4wld8 EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io "kube-dns-4wld8": the object has been modified; please apply your changes to the latest version and try again
	I0603 11:10:09.380602       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="68.218508ms"
	I0603 11:10:09.380702       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="54.919µs"
	I0603 11:10:19.325404       1 endpointslice_controller.go:311] "Error syncing endpoint slices for service, retrying" logger="endpointslice-controller" key="kube-system/kube-dns" err="failed to update kube-dns-4wld8 EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io \"kube-dns-4wld8\": the object has been modified; please apply your changes to the latest version and try again"
	I0603 11:10:19.326951       1 event.go:377] Event(v1.ObjectReference{Kind:"Service", Namespace:"kube-system", Name:"kube-dns", UID:"d172cdeb-e0f6-4277-b7fa-80cd2362b9f8", APIVersion:"v1", ResourceVersion:"291", FieldPath:""}): type: 'Warning' reason: 'FailedToUpdateEndpointSlices' Error updating Endpoint Slices for Service kube-system/kube-dns: failed to update kube-dns-4wld8 EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io "kube-dns-4wld8": the object has been modified; please apply your changes to the latest version and try again
	I0603 11:10:19.364352       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="54.985424ms"
	I0603 11:10:19.364636       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="176.067µs"
	I0603 11:10:33.060113       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="16.018919ms"
	I0603 11:10:33.060499       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="111.286µs"
	I0603 11:10:54.118786       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="32.496µs"
	I0603 11:11:13.379155       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="12.560208ms"
	I0603 11:11:13.379364       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="55.972µs"
	
	
	==> kube-proxy [48e4f287c203959b7515afda7bbc9f297b67f159d98c275d36cabdf2d658267e] <==
	I0603 11:09:10.204904       1 server_linux.go:69] "Using iptables proxy"
	E0603 11:09:12.593756       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-683480\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0603 11:09:15.666671       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-683480\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0603 11:09:18.737530       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-683480\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0603 11:09:24.882085       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-683480\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0603 11:09:34.097467       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-683480\": dial tcp 192.168.39.254:8443: connect: no route to host"
	I0603 11:09:52.259753       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.116"]
	I0603 11:09:52.344108       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0603 11:09:52.345840       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0603 11:09:52.345932       1 server_linux.go:165] "Using iptables Proxier"
	I0603 11:09:52.355300       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0603 11:09:52.355568       1 server.go:872] "Version info" version="v1.30.1"
	I0603 11:09:52.355614       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0603 11:09:52.357919       1 config.go:319] "Starting node config controller"
	I0603 11:09:52.358047       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0603 11:09:52.359612       1 config.go:192] "Starting service config controller"
	I0603 11:09:52.359646       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0603 11:09:52.359672       1 config.go:101] "Starting endpoint slice config controller"
	I0603 11:09:52.359677       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0603 11:09:52.459322       1 shared_informer.go:320] Caches are synced for node config
	I0603 11:09:52.460480       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0603 11:09:52.460706       1 shared_informer.go:320] Caches are synced for service config
	
	
	==> kube-proxy [bcb102231e3a6bc3ea0cc39665baaebb0a97c42874b6cd34e86c04e87532df4f] <==
	E0603 11:06:17.493811       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1958": dial tcp 192.168.39.254:8443: connect: no route to host
	W0603 11:06:20.563217       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1962": dial tcp 192.168.39.254:8443: connect: no route to host
	E0603 11:06:20.563386       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1962": dial tcp 192.168.39.254:8443: connect: no route to host
	W0603 11:06:20.563465       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1958": dial tcp 192.168.39.254:8443: connect: no route to host
	E0603 11:06:20.563508       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1958": dial tcp 192.168.39.254:8443: connect: no route to host
	W0603 11:06:20.563799       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-683480&resourceVersion=2006": dial tcp 192.168.39.254:8443: connect: no route to host
	E0603 11:06:20.563929       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-683480&resourceVersion=2006": dial tcp 192.168.39.254:8443: connect: no route to host
	W0603 11:06:26.705898       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-683480&resourceVersion=2006": dial tcp 192.168.39.254:8443: connect: no route to host
	W0603 11:06:26.706300       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1962": dial tcp 192.168.39.254:8443: connect: no route to host
	E0603 11:06:26.706388       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-683480&resourceVersion=2006": dial tcp 192.168.39.254:8443: connect: no route to host
	E0603 11:06:26.707491       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1962": dial tcp 192.168.39.254:8443: connect: no route to host
	W0603 11:06:26.707719       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1958": dial tcp 192.168.39.254:8443: connect: no route to host
	E0603 11:06:26.708260       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1958": dial tcp 192.168.39.254:8443: connect: no route to host
	W0603 11:06:35.922579       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1958": dial tcp 192.168.39.254:8443: connect: no route to host
	E0603 11:06:35.922652       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1958": dial tcp 192.168.39.254:8443: connect: no route to host
	W0603 11:06:38.993673       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1962": dial tcp 192.168.39.254:8443: connect: no route to host
	E0603 11:06:38.993906       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1962": dial tcp 192.168.39.254:8443: connect: no route to host
	W0603 11:06:42.065889       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-683480&resourceVersion=2006": dial tcp 192.168.39.254:8443: connect: no route to host
	E0603 11:06:42.066121       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-683480&resourceVersion=2006": dial tcp 192.168.39.254:8443: connect: no route to host
	W0603 11:06:54.354677       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1962": dial tcp 192.168.39.254:8443: connect: no route to host
	E0603 11:06:54.355070       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1962": dial tcp 192.168.39.254:8443: connect: no route to host
	W0603 11:06:54.355222       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1958": dial tcp 192.168.39.254:8443: connect: no route to host
	E0603 11:06:54.355344       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1958": dial tcp 192.168.39.254:8443: connect: no route to host
	W0603 11:07:00.497526       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-683480&resourceVersion=2006": dial tcp 192.168.39.254:8443: connect: no route to host
	E0603 11:07:00.497586       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-683480&resourceVersion=2006": dial tcp 192.168.39.254:8443: connect: no route to host
	
	
	==> kube-scheduler [031c8a2316fc402ab581c065b6ef53496a23534ae41d34c7fb6e7ff35cb3260d] <==
	W0603 11:09:46.420760       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://192.168.39.116:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.39.116:8443: connect: connection refused
	E0603 11:09:46.420823       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://192.168.39.116:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.39.116:8443: connect: connection refused
	W0603 11:09:46.962311       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://192.168.39.116:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.39.116:8443: connect: connection refused
	E0603 11:09:46.962348       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://192.168.39.116:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.39.116:8443: connect: connection refused
	W0603 11:09:47.179486       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: Get "https://192.168.39.116:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.39.116:8443: connect: connection refused
	E0603 11:09:47.179547       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: Get "https://192.168.39.116:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.39.116:8443: connect: connection refused
	W0603 11:09:47.918279       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: Get "https://192.168.39.116:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%!D(MISSING)extension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp 192.168.39.116:8443: connect: connection refused
	E0603 11:09:47.918373       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get "https://192.168.39.116:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%!D(MISSING)extension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp 192.168.39.116:8443: connect: connection refused
	W0603 11:09:48.211626       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: Get "https://192.168.39.116:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 192.168.39.116:8443: connect: connection refused
	E0603 11:09:48.211669       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: Get "https://192.168.39.116:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 192.168.39.116:8443: connect: connection refused
	W0603 11:09:48.808386       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: Get "https://192.168.39.116:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.168.39.116:8443: connect: connection refused
	E0603 11:09:48.808468       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: Get "https://192.168.39.116:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.168.39.116:8443: connect: connection refused
	W0603 11:09:48.895889       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: Get "https://192.168.39.116:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp 192.168.39.116:8443: connect: connection refused
	E0603 11:09:48.896150       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: Get "https://192.168.39.116:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp 192.168.39.116:8443: connect: connection refused
	W0603 11:09:51.258470       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0603 11:09:51.258527       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0603 11:09:51.258653       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0603 11:09:51.258737       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0603 11:09:51.258802       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0603 11:09:51.258851       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0603 11:09:51.258926       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0603 11:09:51.258971       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0603 11:09:51.260291       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0603 11:09:51.260334       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0603 11:10:05.413849       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [c282307764128f62fdee736d5e1ecddfbca0ae7ae2f78b7a78cbdb2dcede8556] <==
	W0603 11:07:23.250863       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0603 11:07:23.250961       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0603 11:07:23.440507       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0603 11:07:23.440556       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0603 11:07:23.850822       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0603 11:07:23.850873       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0603 11:07:23.881110       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0603 11:07:23.881156       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0603 11:07:24.137270       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0603 11:07:24.137360       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0603 11:07:24.562679       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0603 11:07:24.562728       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0603 11:07:24.579219       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0603 11:07:24.579267       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0603 11:07:24.588574       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0603 11:07:24.588662       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0603 11:07:24.778888       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0603 11:07:24.779044       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0603 11:07:24.957511       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0603 11:07:24.957601       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0603 11:07:24.991872       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0603 11:07:24.991960       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0603 11:07:26.347620       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	I0603 11:07:26.347788       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	E0603 11:07:26.347875       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Jun 03 11:09:55 ha-683480 kubelet[1378]: I0603 11:09:55.047909    1378 scope.go:117] "RemoveContainer" containerID="d03163b21d0d6390b96b564b6bb3d2aa4eb9a463dbfd31ec0ecd1862107b9529"
	Jun 03 11:09:55 ha-683480 kubelet[1378]: I0603 11:09:55.048665    1378 scope.go:117] "RemoveContainer" containerID="4b0d6949ee1d24934a07cf0a644346fca0258b096baf5ad06ca30011e7f39eb1"
	Jun 03 11:09:55 ha-683480 kubelet[1378]: E0603 11:09:55.048888    1378 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 40s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(a410a98d-73a7-434b-88ce-575c300b2807)\"" pod="kube-system/storage-provisioner" podUID="a410a98d-73a7-434b-88ce-575c300b2807"
	Jun 03 11:10:00 ha-683480 kubelet[1378]: E0603 11:10:00.114840    1378 iptables.go:577] "Could not set up iptables canary" err=<
	Jun 03 11:10:00 ha-683480 kubelet[1378]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jun 03 11:10:00 ha-683480 kubelet[1378]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jun 03 11:10:00 ha-683480 kubelet[1378]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 03 11:10:00 ha-683480 kubelet[1378]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jun 03 11:10:01 ha-683480 kubelet[1378]: I0603 11:10:01.087542    1378 scope.go:117] "RemoveContainer" containerID="52b0704efa37cdba53b8de1a0dc7b7fec29ea28129c9a9e65bd213591e1c01c1"
	Jun 03 11:10:01 ha-683480 kubelet[1378]: E0603 11:10:01.087772    1378 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kindnet-cni\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kindnet-cni pod=kindnet-zxhbp_kube-system(320e315b-e189-4358-9e56-a4be7d944fae)\"" pod="kube-system/kindnet-zxhbp" podUID="320e315b-e189-4358-9e56-a4be7d944fae"
	Jun 03 11:10:06 ha-683480 kubelet[1378]: I0603 11:10:06.088730    1378 scope.go:117] "RemoveContainer" containerID="4b0d6949ee1d24934a07cf0a644346fca0258b096baf5ad06ca30011e7f39eb1"
	Jun 03 11:10:06 ha-683480 kubelet[1378]: E0603 11:10:06.089192    1378 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 40s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(a410a98d-73a7-434b-88ce-575c300b2807)\"" pod="kube-system/storage-provisioner" podUID="a410a98d-73a7-434b-88ce-575c300b2807"
	Jun 03 11:10:13 ha-683480 kubelet[1378]: I0603 11:10:13.088430    1378 scope.go:117] "RemoveContainer" containerID="52b0704efa37cdba53b8de1a0dc7b7fec29ea28129c9a9e65bd213591e1c01c1"
	Jun 03 11:10:20 ha-683480 kubelet[1378]: I0603 11:10:20.811779    1378 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/busybox-fc5497c4f-mvpcm" podStartSLOduration=569.224060488 podStartE2EDuration="9m31.811702779s" podCreationTimestamp="2024-06-03 11:00:49 +0000 UTC" firstStartedPulling="2024-06-03 11:00:50.177711601 +0000 UTC m=+230.215066328" lastFinishedPulling="2024-06-03 11:00:52.765353892 +0000 UTC m=+232.802708619" observedRunningTime="2024-06-03 11:00:53.062056352 +0000 UTC m=+233.099411102" watchObservedRunningTime="2024-06-03 11:10:20.811702779 +0000 UTC m=+800.849057523"
	Jun 03 11:10:21 ha-683480 kubelet[1378]: I0603 11:10:21.088328    1378 scope.go:117] "RemoveContainer" containerID="4b0d6949ee1d24934a07cf0a644346fca0258b096baf5ad06ca30011e7f39eb1"
	Jun 03 11:10:21 ha-683480 kubelet[1378]: E0603 11:10:21.088606    1378 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 40s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(a410a98d-73a7-434b-88ce-575c300b2807)\"" pod="kube-system/storage-provisioner" podUID="a410a98d-73a7-434b-88ce-575c300b2807"
	Jun 03 11:10:31 ha-683480 kubelet[1378]: I0603 11:10:31.087680    1378 kubelet.go:1908] "Trying to delete pod" pod="kube-system/kube-vip-ha-683480" podUID="aa6a05c5-446e-4179-be45-0f8d33631c89"
	Jun 03 11:10:31 ha-683480 kubelet[1378]: I0603 11:10:31.106589    1378 kubelet.go:1913] "Deleted mirror pod because it is outdated" pod="kube-system/kube-vip-ha-683480"
	Jun 03 11:10:36 ha-683480 kubelet[1378]: I0603 11:10:36.088457    1378 scope.go:117] "RemoveContainer" containerID="4b0d6949ee1d24934a07cf0a644346fca0258b096baf5ad06ca30011e7f39eb1"
	Jun 03 11:10:36 ha-683480 kubelet[1378]: I0603 11:10:36.324625    1378 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-vip-ha-683480" podStartSLOduration=5.324601276 podStartE2EDuration="5.324601276s" podCreationTimestamp="2024-06-03 11:10:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-06-03 11:10:36.302224517 +0000 UTC m=+816.339579264" watchObservedRunningTime="2024-06-03 11:10:36.324601276 +0000 UTC m=+816.361956023"
	Jun 03 11:11:00 ha-683480 kubelet[1378]: E0603 11:11:00.113008    1378 iptables.go:577] "Could not set up iptables canary" err=<
	Jun 03 11:11:00 ha-683480 kubelet[1378]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jun 03 11:11:00 ha-683480 kubelet[1378]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jun 03 11:11:00 ha-683480 kubelet[1378]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 03 11:11:00 ha-683480 kubelet[1378]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0603 11:11:45.051776   33443 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/19008-7755/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
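
Note on the "bufio.Scanner: token too long" error in the stderr capture above: Go's bufio.Scanner refuses any single line longer than its buffer cap (bufio.MaxScanTokenSize, 64 KiB, by default), which is presumably what the very long lines in lastStart.txt trip over here. The following is a minimal, hypothetical sketch (not the minikube source) of reading such a file with a larger buffer; the file path is illustrative only.

package main

import (
	"bufio"
	"fmt"
	"os"
)

func main() {
	// Hypothetical path, for illustration only.
	f, err := os.Open("lastStart.txt")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	defer f.Close()

	sc := bufio.NewScanner(f)
	// The default limit is bufio.MaxScanTokenSize (64 KiB);
	// allow lines up to 1 MiB before giving up.
	sc.Buffer(make([]byte, 0, 64*1024), 1024*1024)
	for sc.Scan() {
		_ = sc.Text() // process each line
	}
	if err := sc.Err(); err != nil {
		// Without the Buffer call above, an over-long line yields
		// bufio.ErrTooLong, whose text is "bufio.Scanner: token too long".
		fmt.Fprintln(os.Stderr, err)
	}
}

With the enlarged buffer the scan would complete instead of aborting, which is one way the truncation warning seen above could be avoided.
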
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-683480 -n ha-683480
helpers_test.go:261: (dbg) Run:  kubectl --context ha-683480 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/RestartClusterKeepsNodes FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/RestartClusterKeepsNodes (383.75s)

                                                
                                    
x
+
TestMultiControlPlane/serial/DeleteSecondaryNode (19.4s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:487: (dbg) Run:  out/minikube-linux-amd64 -p ha-683480 node delete m03 -v=7 --alsologtostderr
ha_test.go:487: (dbg) Done: out/minikube-linux-amd64 -p ha-683480 node delete m03 -v=7 --alsologtostderr: (16.469762353s)
ha_test.go:493: (dbg) Run:  out/minikube-linux-amd64 -p ha-683480 status -v=7 --alsologtostderr
ha_test.go:493: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-683480 status -v=7 --alsologtostderr: exit status 2 (590.119384ms)

                                                
                                                
-- stdout --
	ha-683480
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-683480-m02
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-683480-m04
	type: Worker
	host: Running
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0603 11:12:03.661805   33722 out.go:291] Setting OutFile to fd 1 ...
	I0603 11:12:03.661892   33722 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0603 11:12:03.661897   33722 out.go:304] Setting ErrFile to fd 2...
	I0603 11:12:03.661901   33722 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0603 11:12:03.662386   33722 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19008-7755/.minikube/bin
	I0603 11:12:03.662721   33722 out.go:298] Setting JSON to false
	I0603 11:12:03.662778   33722 mustload.go:65] Loading cluster: ha-683480
	I0603 11:12:03.662963   33722 notify.go:220] Checking for updates...
	I0603 11:12:03.664086   33722 config.go:182] Loaded profile config "ha-683480": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0603 11:12:03.664107   33722 status.go:255] checking status of ha-683480 ...
	I0603 11:12:03.664489   33722 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 11:12:03.664526   33722 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 11:12:03.680018   33722 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44185
	I0603 11:12:03.680415   33722 main.go:141] libmachine: () Calling .GetVersion
	I0603 11:12:03.680996   33722 main.go:141] libmachine: Using API Version  1
	I0603 11:12:03.681026   33722 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 11:12:03.681345   33722 main.go:141] libmachine: () Calling .GetMachineName
	I0603 11:12:03.681538   33722 main.go:141] libmachine: (ha-683480) Calling .GetState
	I0603 11:12:03.683754   33722 status.go:330] ha-683480 host status = "Running" (err=<nil>)
	I0603 11:12:03.683768   33722 host.go:66] Checking if "ha-683480" exists ...
	I0603 11:12:03.684045   33722 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 11:12:03.684081   33722 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 11:12:03.699026   33722 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35403
	I0603 11:12:03.699506   33722 main.go:141] libmachine: () Calling .GetVersion
	I0603 11:12:03.699945   33722 main.go:141] libmachine: Using API Version  1
	I0603 11:12:03.699963   33722 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 11:12:03.700289   33722 main.go:141] libmachine: () Calling .GetMachineName
	I0603 11:12:03.700446   33722 main.go:141] libmachine: (ha-683480) Calling .GetIP
	I0603 11:12:03.702852   33722 main.go:141] libmachine: (ha-683480) DBG | domain ha-683480 has defined MAC address 52:54:00:e5:3f:6a in network mk-ha-683480
	I0603 11:12:03.703372   33722 main.go:141] libmachine: (ha-683480) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:3f:6a", ip: ""} in network mk-ha-683480: {Iface:virbr1 ExpiryTime:2024-06-03 11:56:28 +0000 UTC Type:0 Mac:52:54:00:e5:3f:6a Iaid: IPaddr:192.168.39.116 Prefix:24 Hostname:ha-683480 Clientid:01:52:54:00:e5:3f:6a}
	I0603 11:12:03.703415   33722 main.go:141] libmachine: (ha-683480) DBG | domain ha-683480 has defined IP address 192.168.39.116 and MAC address 52:54:00:e5:3f:6a in network mk-ha-683480
	I0603 11:12:03.703488   33722 host.go:66] Checking if "ha-683480" exists ...
	I0603 11:12:03.703783   33722 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 11:12:03.703821   33722 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 11:12:03.718000   33722 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42015
	I0603 11:12:03.718318   33722 main.go:141] libmachine: () Calling .GetVersion
	I0603 11:12:03.718761   33722 main.go:141] libmachine: Using API Version  1
	I0603 11:12:03.718780   33722 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 11:12:03.719112   33722 main.go:141] libmachine: () Calling .GetMachineName
	I0603 11:12:03.719266   33722 main.go:141] libmachine: (ha-683480) Calling .DriverName
	I0603 11:12:03.719465   33722 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0603 11:12:03.719492   33722 main.go:141] libmachine: (ha-683480) Calling .GetSSHHostname
	I0603 11:12:03.722240   33722 main.go:141] libmachine: (ha-683480) DBG | domain ha-683480 has defined MAC address 52:54:00:e5:3f:6a in network mk-ha-683480
	I0603 11:12:03.722675   33722 main.go:141] libmachine: (ha-683480) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:3f:6a", ip: ""} in network mk-ha-683480: {Iface:virbr1 ExpiryTime:2024-06-03 11:56:28 +0000 UTC Type:0 Mac:52:54:00:e5:3f:6a Iaid: IPaddr:192.168.39.116 Prefix:24 Hostname:ha-683480 Clientid:01:52:54:00:e5:3f:6a}
	I0603 11:12:03.722699   33722 main.go:141] libmachine: (ha-683480) DBG | domain ha-683480 has defined IP address 192.168.39.116 and MAC address 52:54:00:e5:3f:6a in network mk-ha-683480
	I0603 11:12:03.722864   33722 main.go:141] libmachine: (ha-683480) Calling .GetSSHPort
	I0603 11:12:03.723021   33722 main.go:141] libmachine: (ha-683480) Calling .GetSSHKeyPath
	I0603 11:12:03.723160   33722 main.go:141] libmachine: (ha-683480) Calling .GetSSHUsername
	I0603 11:12:03.723262   33722 sshutil.go:53] new ssh client: &{IP:192.168.39.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19008-7755/.minikube/machines/ha-683480/id_rsa Username:docker}
	I0603 11:12:03.806625   33722 ssh_runner.go:195] Run: systemctl --version
	I0603 11:12:03.816257   33722 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0603 11:12:03.831894   33722 kubeconfig.go:125] found "ha-683480" server: "https://192.168.39.254:8443"
	I0603 11:12:03.831930   33722 api_server.go:166] Checking apiserver status ...
	I0603 11:12:03.831983   33722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 11:12:03.845930   33722 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/5184/cgroup
	W0603 11:12:03.854970   33722 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/5184/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0603 11:12:03.855005   33722 ssh_runner.go:195] Run: ls
	I0603 11:12:03.859610   33722 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0603 11:12:03.865226   33722 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0603 11:12:03.865248   33722 status.go:422] ha-683480 apiserver status = Running (err=<nil>)
	I0603 11:12:03.865257   33722 status.go:257] ha-683480 status: &{Name:ha-683480 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0603 11:12:03.865271   33722 status.go:255] checking status of ha-683480-m02 ...
	I0603 11:12:03.865661   33722 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 11:12:03.865708   33722 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 11:12:03.880149   33722 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45085
	I0603 11:12:03.880499   33722 main.go:141] libmachine: () Calling .GetVersion
	I0603 11:12:03.880956   33722 main.go:141] libmachine: Using API Version  1
	I0603 11:12:03.880976   33722 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 11:12:03.881260   33722 main.go:141] libmachine: () Calling .GetMachineName
	I0603 11:12:03.881430   33722 main.go:141] libmachine: (ha-683480-m02) Calling .GetState
	I0603 11:12:03.883074   33722 status.go:330] ha-683480-m02 host status = "Running" (err=<nil>)
	I0603 11:12:03.883090   33722 host.go:66] Checking if "ha-683480-m02" exists ...
	I0603 11:12:03.883366   33722 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 11:12:03.883395   33722 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 11:12:03.897489   33722 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39593
	I0603 11:12:03.897953   33722 main.go:141] libmachine: () Calling .GetVersion
	I0603 11:12:03.898444   33722 main.go:141] libmachine: Using API Version  1
	I0603 11:12:03.898468   33722 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 11:12:03.898778   33722 main.go:141] libmachine: () Calling .GetMachineName
	I0603 11:12:03.898969   33722 main.go:141] libmachine: (ha-683480-m02) Calling .GetIP
	I0603 11:12:03.901513   33722 main.go:141] libmachine: (ha-683480-m02) DBG | domain ha-683480-m02 has defined MAC address 52:54:00:00:55:50 in network mk-ha-683480
	I0603 11:12:03.901883   33722 main.go:141] libmachine: (ha-683480-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:55:50", ip: ""} in network mk-ha-683480: {Iface:virbr1 ExpiryTime:2024-06-03 12:09:12 +0000 UTC Type:0 Mac:52:54:00:00:55:50 Iaid: IPaddr:192.168.39.127 Prefix:24 Hostname:ha-683480-m02 Clientid:01:52:54:00:00:55:50}
	I0603 11:12:03.901910   33722 main.go:141] libmachine: (ha-683480-m02) DBG | domain ha-683480-m02 has defined IP address 192.168.39.127 and MAC address 52:54:00:00:55:50 in network mk-ha-683480
	I0603 11:12:03.902000   33722 host.go:66] Checking if "ha-683480-m02" exists ...
	I0603 11:12:03.902276   33722 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 11:12:03.902305   33722 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 11:12:03.916561   33722 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34559
	I0603 11:12:03.916890   33722 main.go:141] libmachine: () Calling .GetVersion
	I0603 11:12:03.917347   33722 main.go:141] libmachine: Using API Version  1
	I0603 11:12:03.917368   33722 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 11:12:03.917654   33722 main.go:141] libmachine: () Calling .GetMachineName
	I0603 11:12:03.917852   33722 main.go:141] libmachine: (ha-683480-m02) Calling .DriverName
	I0603 11:12:03.918044   33722 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0603 11:12:03.918063   33722 main.go:141] libmachine: (ha-683480-m02) Calling .GetSSHHostname
	I0603 11:12:03.920672   33722 main.go:141] libmachine: (ha-683480-m02) DBG | domain ha-683480-m02 has defined MAC address 52:54:00:00:55:50 in network mk-ha-683480
	I0603 11:12:03.921061   33722 main.go:141] libmachine: (ha-683480-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:55:50", ip: ""} in network mk-ha-683480: {Iface:virbr1 ExpiryTime:2024-06-03 12:09:12 +0000 UTC Type:0 Mac:52:54:00:00:55:50 Iaid: IPaddr:192.168.39.127 Prefix:24 Hostname:ha-683480-m02 Clientid:01:52:54:00:00:55:50}
	I0603 11:12:03.921097   33722 main.go:141] libmachine: (ha-683480-m02) DBG | domain ha-683480-m02 has defined IP address 192.168.39.127 and MAC address 52:54:00:00:55:50 in network mk-ha-683480
	I0603 11:12:03.921211   33722 main.go:141] libmachine: (ha-683480-m02) Calling .GetSSHPort
	I0603 11:12:03.921367   33722 main.go:141] libmachine: (ha-683480-m02) Calling .GetSSHKeyPath
	I0603 11:12:03.921521   33722 main.go:141] libmachine: (ha-683480-m02) Calling .GetSSHUsername
	I0603 11:12:03.921627   33722 sshutil.go:53] new ssh client: &{IP:192.168.39.127 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19008-7755/.minikube/machines/ha-683480-m02/id_rsa Username:docker}
	I0603 11:12:04.006764   33722 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0603 11:12:04.023263   33722 kubeconfig.go:125] found "ha-683480" server: "https://192.168.39.254:8443"
	I0603 11:12:04.023294   33722 api_server.go:166] Checking apiserver status ...
	I0603 11:12:04.023345   33722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 11:12:04.038533   33722 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1537/cgroup
	W0603 11:12:04.048479   33722 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1537/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0603 11:12:04.048555   33722 ssh_runner.go:195] Run: ls
	I0603 11:12:04.052966   33722 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0603 11:12:04.057317   33722 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0603 11:12:04.057341   33722 status.go:422] ha-683480-m02 apiserver status = Running (err=<nil>)
	I0603 11:12:04.057351   33722 status.go:257] ha-683480-m02 status: &{Name:ha-683480-m02 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0603 11:12:04.057370   33722 status.go:255] checking status of ha-683480-m04 ...
	I0603 11:12:04.057771   33722 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 11:12:04.057819   33722 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 11:12:04.072734   33722 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42341
	I0603 11:12:04.073129   33722 main.go:141] libmachine: () Calling .GetVersion
	I0603 11:12:04.073585   33722 main.go:141] libmachine: Using API Version  1
	I0603 11:12:04.073604   33722 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 11:12:04.073933   33722 main.go:141] libmachine: () Calling .GetMachineName
	I0603 11:12:04.074131   33722 main.go:141] libmachine: (ha-683480-m04) Calling .GetState
	I0603 11:12:04.075701   33722 status.go:330] ha-683480-m04 host status = "Running" (err=<nil>)
	I0603 11:12:04.075714   33722 host.go:66] Checking if "ha-683480-m04" exists ...
	I0603 11:12:04.075978   33722 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 11:12:04.076013   33722 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 11:12:04.090901   33722 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35567
	I0603 11:12:04.091311   33722 main.go:141] libmachine: () Calling .GetVersion
	I0603 11:12:04.091784   33722 main.go:141] libmachine: Using API Version  1
	I0603 11:12:04.091805   33722 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 11:12:04.092119   33722 main.go:141] libmachine: () Calling .GetMachineName
	I0603 11:12:04.092294   33722 main.go:141] libmachine: (ha-683480-m04) Calling .GetIP
	I0603 11:12:04.095193   33722 main.go:141] libmachine: (ha-683480-m04) DBG | domain ha-683480-m04 has defined MAC address 52:54:00:ed:4a:53 in network mk-ha-683480
	I0603 11:12:04.095596   33722 main.go:141] libmachine: (ha-683480-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ed:4a:53", ip: ""} in network mk-ha-683480: {Iface:virbr1 ExpiryTime:2024-06-03 12:11:36 +0000 UTC Type:0 Mac:52:54:00:ed:4a:53 Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:ha-683480-m04 Clientid:01:52:54:00:ed:4a:53}
	I0603 11:12:04.095610   33722 main.go:141] libmachine: (ha-683480-m04) DBG | domain ha-683480-m04 has defined IP address 192.168.39.206 and MAC address 52:54:00:ed:4a:53 in network mk-ha-683480
	I0603 11:12:04.095792   33722 host.go:66] Checking if "ha-683480-m04" exists ...
	I0603 11:12:04.096064   33722 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 11:12:04.096120   33722 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 11:12:04.110139   33722 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32775
	I0603 11:12:04.110502   33722 main.go:141] libmachine: () Calling .GetVersion
	I0603 11:12:04.110926   33722 main.go:141] libmachine: Using API Version  1
	I0603 11:12:04.110944   33722 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 11:12:04.111266   33722 main.go:141] libmachine: () Calling .GetMachineName
	I0603 11:12:04.111490   33722 main.go:141] libmachine: (ha-683480-m04) Calling .DriverName
	I0603 11:12:04.111641   33722 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0603 11:12:04.111656   33722 main.go:141] libmachine: (ha-683480-m04) Calling .GetSSHHostname
	I0603 11:12:04.114392   33722 main.go:141] libmachine: (ha-683480-m04) DBG | domain ha-683480-m04 has defined MAC address 52:54:00:ed:4a:53 in network mk-ha-683480
	I0603 11:12:04.114843   33722 main.go:141] libmachine: (ha-683480-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ed:4a:53", ip: ""} in network mk-ha-683480: {Iface:virbr1 ExpiryTime:2024-06-03 12:11:36 +0000 UTC Type:0 Mac:52:54:00:ed:4a:53 Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:ha-683480-m04 Clientid:01:52:54:00:ed:4a:53}
	I0603 11:12:04.114867   33722 main.go:141] libmachine: (ha-683480-m04) DBG | domain ha-683480-m04 has defined IP address 192.168.39.206 and MAC address 52:54:00:ed:4a:53 in network mk-ha-683480
	I0603 11:12:04.115154   33722 main.go:141] libmachine: (ha-683480-m04) Calling .GetSSHPort
	I0603 11:12:04.115291   33722 main.go:141] libmachine: (ha-683480-m04) Calling .GetSSHKeyPath
	I0603 11:12:04.115451   33722 main.go:141] libmachine: (ha-683480-m04) Calling .GetSSHUsername
	I0603 11:12:04.115562   33722 sshutil.go:53] new ssh client: &{IP:192.168.39.206 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19008-7755/.minikube/machines/ha-683480-m04/id_rsa Username:docker}
	I0603 11:12:04.194349   33722 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0603 11:12:04.208169   33722 status.go:257] ha-683480-m04 status: &{Name:ha-683480-m04 Host:Running Kubelet:Stopped APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:495: failed to run minikube status. args "out/minikube-linux-amd64 -p ha-683480 status -v=7 --alsologtostderr" : exit status 2
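The status dump above reports ha-683480-m04 as Host:Running but Kubelet:Stopped, which is consistent with the non-zero exit status from the status command. A minimal reproduction sketch, assuming the ha-683480 profile and its kvm2 VMs are still up; the node name ha-683480-m04 and both commands follow the invocations already logged above, and "systemctl is-active kubelet" is a simplified form of the kubelet check shown in the log:

	out/minikube-linux-amd64 -p ha-683480 status -v=7 --alsologtostderr
	out/minikube-linux-amd64 -p ha-683480 ssh -n ha-683480-m04 sudo systemctl is-active kubelet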
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-683480 -n ha-683480
helpers_test.go:244: <<< TestMultiControlPlane/serial/DeleteSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/DeleteSecondaryNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-683480 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-683480 logs -n 25: (1.672374829s)
helpers_test.go:252: TestMultiControlPlane/serial/DeleteSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| ssh     | ha-683480 ssh -n                                                                 | ha-683480 | jenkins | v1.33.1 | 03 Jun 24 11:01 UTC | 03 Jun 24 11:01 UTC |
	|         | ha-683480-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-683480 ssh -n ha-683480-m02 sudo cat                                          | ha-683480 | jenkins | v1.33.1 | 03 Jun 24 11:01 UTC | 03 Jun 24 11:01 UTC |
	|         | /home/docker/cp-test_ha-683480-m03_ha-683480-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-683480 cp ha-683480-m03:/home/docker/cp-test.txt                              | ha-683480 | jenkins | v1.33.1 | 03 Jun 24 11:01 UTC | 03 Jun 24 11:01 UTC |
	|         | ha-683480-m04:/home/docker/cp-test_ha-683480-m03_ha-683480-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-683480 ssh -n                                                                 | ha-683480 | jenkins | v1.33.1 | 03 Jun 24 11:01 UTC | 03 Jun 24 11:01 UTC |
	|         | ha-683480-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-683480 ssh -n ha-683480-m04 sudo cat                                          | ha-683480 | jenkins | v1.33.1 | 03 Jun 24 11:01 UTC | 03 Jun 24 11:01 UTC |
	|         | /home/docker/cp-test_ha-683480-m03_ha-683480-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-683480 cp testdata/cp-test.txt                                                | ha-683480 | jenkins | v1.33.1 | 03 Jun 24 11:01 UTC | 03 Jun 24 11:01 UTC |
	|         | ha-683480-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-683480 ssh -n                                                                 | ha-683480 | jenkins | v1.33.1 | 03 Jun 24 11:01 UTC | 03 Jun 24 11:01 UTC |
	|         | ha-683480-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-683480 cp ha-683480-m04:/home/docker/cp-test.txt                              | ha-683480 | jenkins | v1.33.1 | 03 Jun 24 11:01 UTC | 03 Jun 24 11:01 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile1985816295/001/cp-test_ha-683480-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-683480 ssh -n                                                                 | ha-683480 | jenkins | v1.33.1 | 03 Jun 24 11:01 UTC | 03 Jun 24 11:01 UTC |
	|         | ha-683480-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-683480 cp ha-683480-m04:/home/docker/cp-test.txt                              | ha-683480 | jenkins | v1.33.1 | 03 Jun 24 11:01 UTC | 03 Jun 24 11:01 UTC |
	|         | ha-683480:/home/docker/cp-test_ha-683480-m04_ha-683480.txt                       |           |         |         |                     |                     |
	| ssh     | ha-683480 ssh -n                                                                 | ha-683480 | jenkins | v1.33.1 | 03 Jun 24 11:01 UTC | 03 Jun 24 11:01 UTC |
	|         | ha-683480-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-683480 ssh -n ha-683480 sudo cat                                              | ha-683480 | jenkins | v1.33.1 | 03 Jun 24 11:01 UTC | 03 Jun 24 11:01 UTC |
	|         | /home/docker/cp-test_ha-683480-m04_ha-683480.txt                                 |           |         |         |                     |                     |
	| cp      | ha-683480 cp ha-683480-m04:/home/docker/cp-test.txt                              | ha-683480 | jenkins | v1.33.1 | 03 Jun 24 11:01 UTC | 03 Jun 24 11:01 UTC |
	|         | ha-683480-m02:/home/docker/cp-test_ha-683480-m04_ha-683480-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-683480 ssh -n                                                                 | ha-683480 | jenkins | v1.33.1 | 03 Jun 24 11:01 UTC | 03 Jun 24 11:01 UTC |
	|         | ha-683480-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-683480 ssh -n ha-683480-m02 sudo cat                                          | ha-683480 | jenkins | v1.33.1 | 03 Jun 24 11:01 UTC | 03 Jun 24 11:01 UTC |
	|         | /home/docker/cp-test_ha-683480-m04_ha-683480-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-683480 cp ha-683480-m04:/home/docker/cp-test.txt                              | ha-683480 | jenkins | v1.33.1 | 03 Jun 24 11:01 UTC | 03 Jun 24 11:01 UTC |
	|         | ha-683480-m03:/home/docker/cp-test_ha-683480-m04_ha-683480-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-683480 ssh -n                                                                 | ha-683480 | jenkins | v1.33.1 | 03 Jun 24 11:01 UTC | 03 Jun 24 11:01 UTC |
	|         | ha-683480-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-683480 ssh -n ha-683480-m03 sudo cat                                          | ha-683480 | jenkins | v1.33.1 | 03 Jun 24 11:01 UTC | 03 Jun 24 11:01 UTC |
	|         | /home/docker/cp-test_ha-683480-m04_ha-683480-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-683480 node stop m02 -v=7                                                     | ha-683480 | jenkins | v1.33.1 | 03 Jun 24 11:01 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | ha-683480 node start m02 -v=7                                                    | ha-683480 | jenkins | v1.33.1 | 03 Jun 24 11:04 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-683480 -v=7                                                           | ha-683480 | jenkins | v1.33.1 | 03 Jun 24 11:05 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| stop    | -p ha-683480 -v=7                                                                | ha-683480 | jenkins | v1.33.1 | 03 Jun 24 11:05 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| start   | -p ha-683480 --wait=true -v=7                                                    | ha-683480 | jenkins | v1.33.1 | 03 Jun 24 11:07 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-683480                                                                | ha-683480 | jenkins | v1.33.1 | 03 Jun 24 11:11 UTC |                     |
	| node    | ha-683480 node delete m03 -v=7                                                   | ha-683480 | jenkins | v1.33.1 | 03 Jun 24 11:11 UTC | 03 Jun 24 11:12 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/06/03 11:07:25
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0603 11:07:25.442619   32123 out.go:291] Setting OutFile to fd 1 ...
	I0603 11:07:25.442855   32123 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0603 11:07:25.442863   32123 out.go:304] Setting ErrFile to fd 2...
	I0603 11:07:25.442866   32123 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0603 11:07:25.443101   32123 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19008-7755/.minikube/bin
	I0603 11:07:25.443633   32123 out.go:298] Setting JSON to false
	I0603 11:07:25.444536   32123 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":2990,"bootTime":1717409855,"procs":186,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1060-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0603 11:07:25.444597   32123 start.go:139] virtualization: kvm guest
	I0603 11:07:25.446966   32123 out.go:177] * [ha-683480] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0603 11:07:25.448223   32123 out.go:177]   - MINIKUBE_LOCATION=19008
	I0603 11:07:25.448228   32123 notify.go:220] Checking for updates...
	I0603 11:07:25.449410   32123 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0603 11:07:25.450661   32123 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19008-7755/kubeconfig
	I0603 11:07:25.451979   32123 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19008-7755/.minikube
	I0603 11:07:25.453271   32123 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0603 11:07:25.454412   32123 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0603 11:07:25.456024   32123 config.go:182] Loaded profile config "ha-683480": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0603 11:07:25.456119   32123 driver.go:392] Setting default libvirt URI to qemu:///system
	I0603 11:07:25.456503   32123 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 11:07:25.456543   32123 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 11:07:25.477478   32123 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33881
	I0603 11:07:25.477915   32123 main.go:141] libmachine: () Calling .GetVersion
	I0603 11:07:25.478527   32123 main.go:141] libmachine: Using API Version  1
	I0603 11:07:25.478546   32123 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 11:07:25.478926   32123 main.go:141] libmachine: () Calling .GetMachineName
	I0603 11:07:25.479145   32123 main.go:141] libmachine: (ha-683480) Calling .DriverName
	I0603 11:07:25.513767   32123 out.go:177] * Using the kvm2 driver based on existing profile
	I0603 11:07:25.515068   32123 start.go:297] selected driver: kvm2
	I0603 11:07:25.515093   32123 start.go:901] validating driver "kvm2" against &{Name:ha-683480 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:ha-683480 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.116 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.127 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.131 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.206 Port:0 KubernetesVersion:v1.30.1 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0603 11:07:25.515277   32123 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0603 11:07:25.515652   32123 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0603 11:07:25.515720   32123 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19008-7755/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0603 11:07:25.531105   32123 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0603 11:07:25.531742   32123 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0603 11:07:25.531818   32123 cni.go:84] Creating CNI manager for ""
	I0603 11:07:25.531832   32123 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0603 11:07:25.531896   32123 start.go:340] cluster config:
	{Name:ha-683480 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:ha-683480 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.116 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.127 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.131 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.206 Port:0 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0603 11:07:25.532029   32123 iso.go:125] acquiring lock: {Name:mkdc8e745fc6a0fd8e502f6ad2510510ae9abf27 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0603 11:07:25.534347   32123 out.go:177] * Starting "ha-683480" primary control-plane node in "ha-683480" cluster
	I0603 11:07:25.535583   32123 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime crio
	I0603 11:07:25.535617   32123 preload.go:147] Found local preload: /home/jenkins/minikube-integration/19008-7755/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4
	I0603 11:07:25.535624   32123 cache.go:56] Caching tarball of preloaded images
	I0603 11:07:25.535711   32123 preload.go:173] Found /home/jenkins/minikube-integration/19008-7755/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0603 11:07:25.535722   32123 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on crio
	I0603 11:07:25.535838   32123 profile.go:143] Saving config to /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/ha-683480/config.json ...
	I0603 11:07:25.536024   32123 start.go:360] acquireMachinesLock for ha-683480: {Name:mk7389fd163f37e28f7d4d842bab151f7e27bc7c Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0603 11:07:25.536061   32123 start.go:364] duration metric: took 21.936µs to acquireMachinesLock for "ha-683480"
	I0603 11:07:25.536075   32123 start.go:96] Skipping create...Using existing machine configuration
	I0603 11:07:25.536082   32123 fix.go:54] fixHost starting: 
	I0603 11:07:25.536327   32123 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 11:07:25.536360   32123 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 11:07:25.550171   32123 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35679
	I0603 11:07:25.550615   32123 main.go:141] libmachine: () Calling .GetVersion
	I0603 11:07:25.551053   32123 main.go:141] libmachine: Using API Version  1
	I0603 11:07:25.551086   32123 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 11:07:25.551439   32123 main.go:141] libmachine: () Calling .GetMachineName
	I0603 11:07:25.551627   32123 main.go:141] libmachine: (ha-683480) Calling .DriverName
	I0603 11:07:25.551779   32123 main.go:141] libmachine: (ha-683480) Calling .GetState
	I0603 11:07:25.553075   32123 fix.go:112] recreateIfNeeded on ha-683480: state=Running err=<nil>
	W0603 11:07:25.553103   32123 fix.go:138] unexpected machine state, will restart: <nil>
	I0603 11:07:25.555822   32123 out.go:177] * Updating the running kvm2 "ha-683480" VM ...
	I0603 11:07:25.557278   32123 machine.go:94] provisionDockerMachine start ...
	I0603 11:07:25.557297   32123 main.go:141] libmachine: (ha-683480) Calling .DriverName
	I0603 11:07:25.557457   32123 main.go:141] libmachine: (ha-683480) Calling .GetSSHHostname
	I0603 11:07:25.559729   32123 main.go:141] libmachine: (ha-683480) DBG | domain ha-683480 has defined MAC address 52:54:00:e5:3f:6a in network mk-ha-683480
	I0603 11:07:25.560164   32123 main.go:141] libmachine: (ha-683480) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:3f:6a", ip: ""} in network mk-ha-683480: {Iface:virbr1 ExpiryTime:2024-06-03 11:56:28 +0000 UTC Type:0 Mac:52:54:00:e5:3f:6a Iaid: IPaddr:192.168.39.116 Prefix:24 Hostname:ha-683480 Clientid:01:52:54:00:e5:3f:6a}
	I0603 11:07:25.560190   32123 main.go:141] libmachine: (ha-683480) DBG | domain ha-683480 has defined IP address 192.168.39.116 and MAC address 52:54:00:e5:3f:6a in network mk-ha-683480
	I0603 11:07:25.560241   32123 main.go:141] libmachine: (ha-683480) Calling .GetSSHPort
	I0603 11:07:25.560397   32123 main.go:141] libmachine: (ha-683480) Calling .GetSSHKeyPath
	I0603 11:07:25.560552   32123 main.go:141] libmachine: (ha-683480) Calling .GetSSHKeyPath
	I0603 11:07:25.560663   32123 main.go:141] libmachine: (ha-683480) Calling .GetSSHUsername
	I0603 11:07:25.560826   32123 main.go:141] libmachine: Using SSH client type: native
	I0603 11:07:25.560998   32123 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.116 22 <nil> <nil>}
	I0603 11:07:25.561008   32123 main.go:141] libmachine: About to run SSH command:
	hostname
	I0603 11:07:25.664232   32123 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-683480
	
	I0603 11:07:25.664262   32123 main.go:141] libmachine: (ha-683480) Calling .GetMachineName
	I0603 11:07:25.664503   32123 buildroot.go:166] provisioning hostname "ha-683480"
	I0603 11:07:25.664525   32123 main.go:141] libmachine: (ha-683480) Calling .GetMachineName
	I0603 11:07:25.664710   32123 main.go:141] libmachine: (ha-683480) Calling .GetSSHHostname
	I0603 11:07:25.667431   32123 main.go:141] libmachine: (ha-683480) DBG | domain ha-683480 has defined MAC address 52:54:00:e5:3f:6a in network mk-ha-683480
	I0603 11:07:25.667816   32123 main.go:141] libmachine: (ha-683480) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:3f:6a", ip: ""} in network mk-ha-683480: {Iface:virbr1 ExpiryTime:2024-06-03 11:56:28 +0000 UTC Type:0 Mac:52:54:00:e5:3f:6a Iaid: IPaddr:192.168.39.116 Prefix:24 Hostname:ha-683480 Clientid:01:52:54:00:e5:3f:6a}
	I0603 11:07:25.667840   32123 main.go:141] libmachine: (ha-683480) DBG | domain ha-683480 has defined IP address 192.168.39.116 and MAC address 52:54:00:e5:3f:6a in network mk-ha-683480
	I0603 11:07:25.667952   32123 main.go:141] libmachine: (ha-683480) Calling .GetSSHPort
	I0603 11:07:25.668123   32123 main.go:141] libmachine: (ha-683480) Calling .GetSSHKeyPath
	I0603 11:07:25.668269   32123 main.go:141] libmachine: (ha-683480) Calling .GetSSHKeyPath
	I0603 11:07:25.668398   32123 main.go:141] libmachine: (ha-683480) Calling .GetSSHUsername
	I0603 11:07:25.668564   32123 main.go:141] libmachine: Using SSH client type: native
	I0603 11:07:25.668736   32123 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.116 22 <nil> <nil>}
	I0603 11:07:25.668760   32123 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-683480 && echo "ha-683480" | sudo tee /etc/hostname
	I0603 11:07:25.789898   32123 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-683480
	
	I0603 11:07:25.789922   32123 main.go:141] libmachine: (ha-683480) Calling .GetSSHHostname
	I0603 11:07:25.792463   32123 main.go:141] libmachine: (ha-683480) DBG | domain ha-683480 has defined MAC address 52:54:00:e5:3f:6a in network mk-ha-683480
	I0603 11:07:25.792857   32123 main.go:141] libmachine: (ha-683480) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:3f:6a", ip: ""} in network mk-ha-683480: {Iface:virbr1 ExpiryTime:2024-06-03 11:56:28 +0000 UTC Type:0 Mac:52:54:00:e5:3f:6a Iaid: IPaddr:192.168.39.116 Prefix:24 Hostname:ha-683480 Clientid:01:52:54:00:e5:3f:6a}
	I0603 11:07:25.792879   32123 main.go:141] libmachine: (ha-683480) DBG | domain ha-683480 has defined IP address 192.168.39.116 and MAC address 52:54:00:e5:3f:6a in network mk-ha-683480
	I0603 11:07:25.793043   32123 main.go:141] libmachine: (ha-683480) Calling .GetSSHPort
	I0603 11:07:25.793241   32123 main.go:141] libmachine: (ha-683480) Calling .GetSSHKeyPath
	I0603 11:07:25.793390   32123 main.go:141] libmachine: (ha-683480) Calling .GetSSHKeyPath
	I0603 11:07:25.793523   32123 main.go:141] libmachine: (ha-683480) Calling .GetSSHUsername
	I0603 11:07:25.793674   32123 main.go:141] libmachine: Using SSH client type: native
	I0603 11:07:25.793830   32123 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.116 22 <nil> <nil>}
	I0603 11:07:25.793845   32123 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-683480' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-683480/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-683480' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0603 11:07:25.895742   32123 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0603 11:07:25.895783   32123 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19008-7755/.minikube CaCertPath:/home/jenkins/minikube-integration/19008-7755/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19008-7755/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19008-7755/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19008-7755/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19008-7755/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19008-7755/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19008-7755/.minikube}
	I0603 11:07:25.895804   32123 buildroot.go:174] setting up certificates
	I0603 11:07:25.895816   32123 provision.go:84] configureAuth start
	I0603 11:07:25.895832   32123 main.go:141] libmachine: (ha-683480) Calling .GetMachineName
	I0603 11:07:25.896116   32123 main.go:141] libmachine: (ha-683480) Calling .GetIP
	I0603 11:07:25.898621   32123 main.go:141] libmachine: (ha-683480) DBG | domain ha-683480 has defined MAC address 52:54:00:e5:3f:6a in network mk-ha-683480
	I0603 11:07:25.898971   32123 main.go:141] libmachine: (ha-683480) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:3f:6a", ip: ""} in network mk-ha-683480: {Iface:virbr1 ExpiryTime:2024-06-03 11:56:28 +0000 UTC Type:0 Mac:52:54:00:e5:3f:6a Iaid: IPaddr:192.168.39.116 Prefix:24 Hostname:ha-683480 Clientid:01:52:54:00:e5:3f:6a}
	I0603 11:07:25.898995   32123 main.go:141] libmachine: (ha-683480) DBG | domain ha-683480 has defined IP address 192.168.39.116 and MAC address 52:54:00:e5:3f:6a in network mk-ha-683480
	I0603 11:07:25.899148   32123 main.go:141] libmachine: (ha-683480) Calling .GetSSHHostname
	I0603 11:07:25.901289   32123 main.go:141] libmachine: (ha-683480) DBG | domain ha-683480 has defined MAC address 52:54:00:e5:3f:6a in network mk-ha-683480
	I0603 11:07:25.901702   32123 main.go:141] libmachine: (ha-683480) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:3f:6a", ip: ""} in network mk-ha-683480: {Iface:virbr1 ExpiryTime:2024-06-03 11:56:28 +0000 UTC Type:0 Mac:52:54:00:e5:3f:6a Iaid: IPaddr:192.168.39.116 Prefix:24 Hostname:ha-683480 Clientid:01:52:54:00:e5:3f:6a}
	I0603 11:07:25.901727   32123 main.go:141] libmachine: (ha-683480) DBG | domain ha-683480 has defined IP address 192.168.39.116 and MAC address 52:54:00:e5:3f:6a in network mk-ha-683480
	I0603 11:07:25.901852   32123 provision.go:143] copyHostCerts
	I0603 11:07:25.901884   32123 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19008-7755/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19008-7755/.minikube/ca.pem
	I0603 11:07:25.901920   32123 exec_runner.go:144] found /home/jenkins/minikube-integration/19008-7755/.minikube/ca.pem, removing ...
	I0603 11:07:25.901937   32123 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19008-7755/.minikube/ca.pem
	I0603 11:07:25.902006   32123 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19008-7755/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19008-7755/.minikube/ca.pem (1082 bytes)
	I0603 11:07:25.902090   32123 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19008-7755/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19008-7755/.minikube/cert.pem
	I0603 11:07:25.902108   32123 exec_runner.go:144] found /home/jenkins/minikube-integration/19008-7755/.minikube/cert.pem, removing ...
	I0603 11:07:25.902113   32123 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19008-7755/.minikube/cert.pem
	I0603 11:07:25.902139   32123 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19008-7755/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19008-7755/.minikube/cert.pem (1123 bytes)
	I0603 11:07:25.902179   32123 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19008-7755/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19008-7755/.minikube/key.pem
	I0603 11:07:25.902197   32123 exec_runner.go:144] found /home/jenkins/minikube-integration/19008-7755/.minikube/key.pem, removing ...
	I0603 11:07:25.902206   32123 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19008-7755/.minikube/key.pem
	I0603 11:07:25.902235   32123 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19008-7755/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19008-7755/.minikube/key.pem (1679 bytes)
	I0603 11:07:25.902300   32123 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19008-7755/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19008-7755/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19008-7755/.minikube/certs/ca-key.pem org=jenkins.ha-683480 san=[127.0.0.1 192.168.39.116 ha-683480 localhost minikube]
	I0603 11:07:26.059416   32123 provision.go:177] copyRemoteCerts
	I0603 11:07:26.059473   32123 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0603 11:07:26.059498   32123 main.go:141] libmachine: (ha-683480) Calling .GetSSHHostname
	I0603 11:07:26.062155   32123 main.go:141] libmachine: (ha-683480) DBG | domain ha-683480 has defined MAC address 52:54:00:e5:3f:6a in network mk-ha-683480
	I0603 11:07:26.062608   32123 main.go:141] libmachine: (ha-683480) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:3f:6a", ip: ""} in network mk-ha-683480: {Iface:virbr1 ExpiryTime:2024-06-03 11:56:28 +0000 UTC Type:0 Mac:52:54:00:e5:3f:6a Iaid: IPaddr:192.168.39.116 Prefix:24 Hostname:ha-683480 Clientid:01:52:54:00:e5:3f:6a}
	I0603 11:07:26.062638   32123 main.go:141] libmachine: (ha-683480) DBG | domain ha-683480 has defined IP address 192.168.39.116 and MAC address 52:54:00:e5:3f:6a in network mk-ha-683480
	I0603 11:07:26.062833   32123 main.go:141] libmachine: (ha-683480) Calling .GetSSHPort
	I0603 11:07:26.062994   32123 main.go:141] libmachine: (ha-683480) Calling .GetSSHKeyPath
	I0603 11:07:26.063165   32123 main.go:141] libmachine: (ha-683480) Calling .GetSSHUsername
	I0603 11:07:26.063290   32123 sshutil.go:53] new ssh client: &{IP:192.168.39.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19008-7755/.minikube/machines/ha-683480/id_rsa Username:docker}
	I0603 11:07:26.146746   32123 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19008-7755/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0603 11:07:26.146810   32123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0603 11:07:26.174269   32123 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19008-7755/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0603 11:07:26.174353   32123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I0603 11:07:26.199835   32123 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19008-7755/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0603 11:07:26.199895   32123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0603 11:07:26.226453   32123 provision.go:87] duration metric: took 330.620757ms to configureAuth
	I0603 11:07:26.226484   32123 buildroot.go:189] setting minikube options for container-runtime
	I0603 11:07:26.226787   32123 config.go:182] Loaded profile config "ha-683480": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0603 11:07:26.226897   32123 main.go:141] libmachine: (ha-683480) Calling .GetSSHHostname
	I0603 11:07:26.229443   32123 main.go:141] libmachine: (ha-683480) DBG | domain ha-683480 has defined MAC address 52:54:00:e5:3f:6a in network mk-ha-683480
	I0603 11:07:26.229819   32123 main.go:141] libmachine: (ha-683480) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:3f:6a", ip: ""} in network mk-ha-683480: {Iface:virbr1 ExpiryTime:2024-06-03 11:56:28 +0000 UTC Type:0 Mac:52:54:00:e5:3f:6a Iaid: IPaddr:192.168.39.116 Prefix:24 Hostname:ha-683480 Clientid:01:52:54:00:e5:3f:6a}
	I0603 11:07:26.229840   32123 main.go:141] libmachine: (ha-683480) DBG | domain ha-683480 has defined IP address 192.168.39.116 and MAC address 52:54:00:e5:3f:6a in network mk-ha-683480
	I0603 11:07:26.230039   32123 main.go:141] libmachine: (ha-683480) Calling .GetSSHPort
	I0603 11:07:26.230233   32123 main.go:141] libmachine: (ha-683480) Calling .GetSSHKeyPath
	I0603 11:07:26.230407   32123 main.go:141] libmachine: (ha-683480) Calling .GetSSHKeyPath
	I0603 11:07:26.230524   32123 main.go:141] libmachine: (ha-683480) Calling .GetSSHUsername
	I0603 11:07:26.230689   32123 main.go:141] libmachine: Using SSH client type: native
	I0603 11:07:26.230900   32123 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.116 22 <nil> <nil>}
	I0603 11:07:26.230931   32123 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0603 11:08:57.164538   32123 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0603 11:08:57.164576   32123 machine.go:97] duration metric: took 1m31.607286329s to provisionDockerMachine
	I0603 11:08:57.164592   32123 start.go:293] postStartSetup for "ha-683480" (driver="kvm2")
	I0603 11:08:57.164608   32123 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0603 11:08:57.164635   32123 main.go:141] libmachine: (ha-683480) Calling .DriverName
	I0603 11:08:57.165008   32123 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0603 11:08:57.165037   32123 main.go:141] libmachine: (ha-683480) Calling .GetSSHHostname
	I0603 11:08:57.168289   32123 main.go:141] libmachine: (ha-683480) DBG | domain ha-683480 has defined MAC address 52:54:00:e5:3f:6a in network mk-ha-683480
	I0603 11:08:57.168694   32123 main.go:141] libmachine: (ha-683480) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:3f:6a", ip: ""} in network mk-ha-683480: {Iface:virbr1 ExpiryTime:2024-06-03 11:56:28 +0000 UTC Type:0 Mac:52:54:00:e5:3f:6a Iaid: IPaddr:192.168.39.116 Prefix:24 Hostname:ha-683480 Clientid:01:52:54:00:e5:3f:6a}
	I0603 11:08:57.168717   32123 main.go:141] libmachine: (ha-683480) DBG | domain ha-683480 has defined IP address 192.168.39.116 and MAC address 52:54:00:e5:3f:6a in network mk-ha-683480
	I0603 11:08:57.168888   32123 main.go:141] libmachine: (ha-683480) Calling .GetSSHPort
	I0603 11:08:57.169136   32123 main.go:141] libmachine: (ha-683480) Calling .GetSSHKeyPath
	I0603 11:08:57.169285   32123 main.go:141] libmachine: (ha-683480) Calling .GetSSHUsername
	I0603 11:08:57.169407   32123 sshutil.go:53] new ssh client: &{IP:192.168.39.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19008-7755/.minikube/machines/ha-683480/id_rsa Username:docker}
	I0603 11:08:57.251439   32123 ssh_runner.go:195] Run: cat /etc/os-release
	I0603 11:08:57.255917   32123 info.go:137] Remote host: Buildroot 2023.02.9
	I0603 11:08:57.255939   32123 filesync.go:126] Scanning /home/jenkins/minikube-integration/19008-7755/.minikube/addons for local assets ...
	I0603 11:08:57.255991   32123 filesync.go:126] Scanning /home/jenkins/minikube-integration/19008-7755/.minikube/files for local assets ...
	I0603 11:08:57.256063   32123 filesync.go:149] local asset: /home/jenkins/minikube-integration/19008-7755/.minikube/files/etc/ssl/certs/150282.pem -> 150282.pem in /etc/ssl/certs
	I0603 11:08:57.256072   32123 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19008-7755/.minikube/files/etc/ssl/certs/150282.pem -> /etc/ssl/certs/150282.pem
	I0603 11:08:57.256151   32123 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0603 11:08:57.266429   32123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/files/etc/ssl/certs/150282.pem --> /etc/ssl/certs/150282.pem (1708 bytes)
	I0603 11:08:57.290924   32123 start.go:296] duration metric: took 126.319085ms for postStartSetup
	I0603 11:08:57.290966   32123 main.go:141] libmachine: (ha-683480) Calling .DriverName
	I0603 11:08:57.291281   32123 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0603 11:08:57.291304   32123 main.go:141] libmachine: (ha-683480) Calling .GetSSHHostname
	I0603 11:08:57.293927   32123 main.go:141] libmachine: (ha-683480) DBG | domain ha-683480 has defined MAC address 52:54:00:e5:3f:6a in network mk-ha-683480
	I0603 11:08:57.294426   32123 main.go:141] libmachine: (ha-683480) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:3f:6a", ip: ""} in network mk-ha-683480: {Iface:virbr1 ExpiryTime:2024-06-03 11:56:28 +0000 UTC Type:0 Mac:52:54:00:e5:3f:6a Iaid: IPaddr:192.168.39.116 Prefix:24 Hostname:ha-683480 Clientid:01:52:54:00:e5:3f:6a}
	I0603 11:08:57.294457   32123 main.go:141] libmachine: (ha-683480) DBG | domain ha-683480 has defined IP address 192.168.39.116 and MAC address 52:54:00:e5:3f:6a in network mk-ha-683480
	I0603 11:08:57.294611   32123 main.go:141] libmachine: (ha-683480) Calling .GetSSHPort
	I0603 11:08:57.294774   32123 main.go:141] libmachine: (ha-683480) Calling .GetSSHKeyPath
	I0603 11:08:57.294937   32123 main.go:141] libmachine: (ha-683480) Calling .GetSSHUsername
	I0603 11:08:57.295094   32123 sshutil.go:53] new ssh client: &{IP:192.168.39.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19008-7755/.minikube/machines/ha-683480/id_rsa Username:docker}
	W0603 11:08:57.373411   32123 fix.go:99] cannot read backup folder, skipping restore: read dir: sudo ls --almost-all -1 /var/lib/minikube/backup: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/backup': No such file or directory
	I0603 11:08:57.373439   32123 fix.go:56] duration metric: took 1m31.837357572s for fixHost
	I0603 11:08:57.373460   32123 main.go:141] libmachine: (ha-683480) Calling .GetSSHHostname
	I0603 11:08:57.375924   32123 main.go:141] libmachine: (ha-683480) DBG | domain ha-683480 has defined MAC address 52:54:00:e5:3f:6a in network mk-ha-683480
	I0603 11:08:57.376280   32123 main.go:141] libmachine: (ha-683480) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:3f:6a", ip: ""} in network mk-ha-683480: {Iface:virbr1 ExpiryTime:2024-06-03 11:56:28 +0000 UTC Type:0 Mac:52:54:00:e5:3f:6a Iaid: IPaddr:192.168.39.116 Prefix:24 Hostname:ha-683480 Clientid:01:52:54:00:e5:3f:6a}
	I0603 11:08:57.376299   32123 main.go:141] libmachine: (ha-683480) DBG | domain ha-683480 has defined IP address 192.168.39.116 and MAC address 52:54:00:e5:3f:6a in network mk-ha-683480
	I0603 11:08:57.376459   32123 main.go:141] libmachine: (ha-683480) Calling .GetSSHPort
	I0603 11:08:57.376624   32123 main.go:141] libmachine: (ha-683480) Calling .GetSSHKeyPath
	I0603 11:08:57.376774   32123 main.go:141] libmachine: (ha-683480) Calling .GetSSHKeyPath
	I0603 11:08:57.376895   32123 main.go:141] libmachine: (ha-683480) Calling .GetSSHUsername
	I0603 11:08:57.377010   32123 main.go:141] libmachine: Using SSH client type: native
	I0603 11:08:57.377178   32123 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.116 22 <nil> <nil>}
	I0603 11:08:57.377187   32123 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0603 11:08:57.476064   32123 main.go:141] libmachine: SSH cmd err, output: <nil>: 1717412937.450872254
	
	I0603 11:08:57.476091   32123 fix.go:216] guest clock: 1717412937.450872254
	I0603 11:08:57.476097   32123 fix.go:229] Guest: 2024-06-03 11:08:57.450872254 +0000 UTC Remote: 2024-06-03 11:08:57.373446324 +0000 UTC m=+91.964564811 (delta=77.42593ms)
	I0603 11:08:57.476121   32123 fix.go:200] guest clock delta is within tolerance: 77.42593ms
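	For reference, the fix.go lines above compare the guest's `date +%s.%N` reading against the host wall clock and accept the machine only if the skew stays within a tolerance. The sketch below reproduces that comparison in Go; the helper name and the 2-second tolerance are illustrative assumptions, not minikube's actual values.

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// parseGuestClock converts the output of `date +%s.%N` (e.g. "1717412937.450872254")
// into a time.Time. Hypothetical helper for illustration.
func parseGuestClock(out string) (time.Time, error) {
	parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	var nsec int64
	if len(parts) == 2 {
		if nsec, err = strconv.ParseInt(parts[1], 10, 64); err != nil {
			return time.Time{}, err
		}
	}
	return time.Unix(sec, nsec), nil
}

func main() {
	guest, err := parseGuestClock("1717412937.450872254")
	if err != nil {
		panic(err)
	}
	remote := time.Now()
	delta := guest.Sub(remote)
	if delta < 0 {
		delta = -delta
	}
	// Assumed tolerance; the real threshold lives in minikube's fix.go.
	const tolerance = 2 * time.Second
	fmt.Printf("guest clock delta %v within tolerance: %v\n", delta, delta <= tolerance)
}
```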
	I0603 11:08:57.476126   32123 start.go:83] releasing machines lock for "ha-683480", held for 1m31.940055627s
	I0603 11:08:57.476143   32123 main.go:141] libmachine: (ha-683480) Calling .DriverName
	I0603 11:08:57.476451   32123 main.go:141] libmachine: (ha-683480) Calling .GetIP
	I0603 11:08:57.478829   32123 main.go:141] libmachine: (ha-683480) DBG | domain ha-683480 has defined MAC address 52:54:00:e5:3f:6a in network mk-ha-683480
	I0603 11:08:57.479315   32123 main.go:141] libmachine: (ha-683480) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:3f:6a", ip: ""} in network mk-ha-683480: {Iface:virbr1 ExpiryTime:2024-06-03 11:56:28 +0000 UTC Type:0 Mac:52:54:00:e5:3f:6a Iaid: IPaddr:192.168.39.116 Prefix:24 Hostname:ha-683480 Clientid:01:52:54:00:e5:3f:6a}
	I0603 11:08:57.479344   32123 main.go:141] libmachine: (ha-683480) DBG | domain ha-683480 has defined IP address 192.168.39.116 and MAC address 52:54:00:e5:3f:6a in network mk-ha-683480
	I0603 11:08:57.479439   32123 main.go:141] libmachine: (ha-683480) Calling .DriverName
	I0603 11:08:57.480003   32123 main.go:141] libmachine: (ha-683480) Calling .DriverName
	I0603 11:08:57.480192   32123 main.go:141] libmachine: (ha-683480) Calling .DriverName
	I0603 11:08:57.480283   32123 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0603 11:08:57.480338   32123 main.go:141] libmachine: (ha-683480) Calling .GetSSHHostname
	I0603 11:08:57.480387   32123 ssh_runner.go:195] Run: cat /version.json
	I0603 11:08:57.480410   32123 main.go:141] libmachine: (ha-683480) Calling .GetSSHHostname
	I0603 11:08:57.482838   32123 main.go:141] libmachine: (ha-683480) DBG | domain ha-683480 has defined MAC address 52:54:00:e5:3f:6a in network mk-ha-683480
	I0603 11:08:57.483029   32123 main.go:141] libmachine: (ha-683480) DBG | domain ha-683480 has defined MAC address 52:54:00:e5:3f:6a in network mk-ha-683480
	I0603 11:08:57.483284   32123 main.go:141] libmachine: (ha-683480) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:3f:6a", ip: ""} in network mk-ha-683480: {Iface:virbr1 ExpiryTime:2024-06-03 11:56:28 +0000 UTC Type:0 Mac:52:54:00:e5:3f:6a Iaid: IPaddr:192.168.39.116 Prefix:24 Hostname:ha-683480 Clientid:01:52:54:00:e5:3f:6a}
	I0603 11:08:57.483308   32123 main.go:141] libmachine: (ha-683480) DBG | domain ha-683480 has defined IP address 192.168.39.116 and MAC address 52:54:00:e5:3f:6a in network mk-ha-683480
	I0603 11:08:57.483488   32123 main.go:141] libmachine: (ha-683480) Calling .GetSSHPort
	I0603 11:08:57.483488   32123 main.go:141] libmachine: (ha-683480) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:3f:6a", ip: ""} in network mk-ha-683480: {Iface:virbr1 ExpiryTime:2024-06-03 11:56:28 +0000 UTC Type:0 Mac:52:54:00:e5:3f:6a Iaid: IPaddr:192.168.39.116 Prefix:24 Hostname:ha-683480 Clientid:01:52:54:00:e5:3f:6a}
	I0603 11:08:57.483544   32123 main.go:141] libmachine: (ha-683480) DBG | domain ha-683480 has defined IP address 192.168.39.116 and MAC address 52:54:00:e5:3f:6a in network mk-ha-683480
	I0603 11:08:57.483621   32123 main.go:141] libmachine: (ha-683480) Calling .GetSSHPort
	I0603 11:08:57.483692   32123 main.go:141] libmachine: (ha-683480) Calling .GetSSHKeyPath
	I0603 11:08:57.483755   32123 main.go:141] libmachine: (ha-683480) Calling .GetSSHKeyPath
	I0603 11:08:57.483826   32123 main.go:141] libmachine: (ha-683480) Calling .GetSSHUsername
	I0603 11:08:57.483891   32123 main.go:141] libmachine: (ha-683480) Calling .GetSSHUsername
	I0603 11:08:57.484014   32123 sshutil.go:53] new ssh client: &{IP:192.168.39.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19008-7755/.minikube/machines/ha-683480/id_rsa Username:docker}
	I0603 11:08:57.483975   32123 sshutil.go:53] new ssh client: &{IP:192.168.39.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19008-7755/.minikube/machines/ha-683480/id_rsa Username:docker}
	I0603 11:08:57.561311   32123 ssh_runner.go:195] Run: systemctl --version
	I0603 11:08:57.583380   32123 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0603 11:08:57.752344   32123 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0603 11:08:57.758604   32123 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0603 11:08:57.758677   32123 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0603 11:08:57.768166   32123 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
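	The two steps above look for bridge/podman CNI configs under /etc/cni/net.d and rename any match to `*.mk_disabled` so only the CNI minikube manages stays active. A minimal Go sketch of that rename pass, assuming local filesystem access rather than the ssh_runner the log actually uses:

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

// disableBridgeCNIConfigs renames bridge/podman CNI configs in dir so the
// runtime ignores them, mirroring the `find ... -exec mv {} {}.mk_disabled`
// step in the log. Sketch only; minikube performs this over SSH with find/mv.
func disableBridgeCNIConfigs(dir string) ([]string, error) {
	entries, err := os.ReadDir(dir)
	if err != nil {
		return nil, err
	}
	var moved []string
	for _, e := range entries {
		name := e.Name()
		if e.IsDir() || strings.HasSuffix(name, ".mk_disabled") {
			continue
		}
		if !strings.Contains(name, "bridge") && !strings.Contains(name, "podman") {
			continue
		}
		src := filepath.Join(dir, name)
		if err := os.Rename(src, src+".mk_disabled"); err != nil {
			return moved, err
		}
		moved = append(moved, src)
	}
	return moved, nil
}

func main() {
	moved, err := disableBridgeCNIConfigs("/etc/cni/net.d")
	if err != nil {
		fmt.Println("error:", err)
	}
	fmt.Println("disabled configs:", moved)
}
```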
	I0603 11:08:57.768192   32123 start.go:494] detecting cgroup driver to use...
	I0603 11:08:57.768244   32123 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0603 11:08:57.784730   32123 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0603 11:08:57.799955   32123 docker.go:217] disabling cri-docker service (if available) ...
	I0603 11:08:57.800006   32123 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0603 11:08:57.813623   32123 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0603 11:08:57.851455   32123 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0603 11:08:57.999998   32123 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0603 11:08:58.161448   32123 docker.go:233] disabling docker service ...
	I0603 11:08:58.161527   32123 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0603 11:08:58.178129   32123 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0603 11:08:58.192081   32123 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0603 11:08:58.341394   32123 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0603 11:08:58.490223   32123 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0603 11:08:58.504113   32123 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0603 11:08:58.524449   32123 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0603 11:08:58.524509   32123 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 11:08:58.535157   32123 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0603 11:08:58.535218   32123 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 11:08:58.545448   32123 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 11:08:58.556068   32123 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 11:08:58.566406   32123 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0603 11:08:58.577992   32123 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 11:08:58.588771   32123 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 11:08:58.599846   32123 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 11:08:58.611253   32123 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0603 11:08:58.621549   32123 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0603 11:08:58.631028   32123 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0603 11:08:58.773906   32123 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0603 11:09:00.429585   32123 ssh_runner.go:235] Completed: sudo systemctl restart crio: (1.655639068s)
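	The sed commands above pin the CRI-O pause image and switch the cgroup manager to cgroupfs before the service is restarted. A rough local equivalent of those two edits, assuming direct access to the drop-in file (minikube runs them over SSH with sed):

```go
package main

import (
	"fmt"
	"os"
	"regexp"
)

// configureCrio rewrites pause_image and cgroup_manager in a CRI-O drop-in,
// approximating the sed one-liners in the log. Path and regexes are illustrative.
func configureCrio(path, pauseImage, cgroupManager string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	out := regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAll(data, []byte(fmt.Sprintf("pause_image = %q", pauseImage)))
	out = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAll(out, []byte(fmt.Sprintf("cgroup_manager = %q", cgroupManager)))
	return os.WriteFile(path, out, 0o644)
}

func main() {
	err := configureCrio("/etc/crio/crio.conf.d/02-crio.conf",
		"registry.k8s.io/pause:3.9", "cgroupfs")
	if err != nil {
		fmt.Println("error:", err)
	}
}
```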
	I0603 11:09:00.429609   32123 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0603 11:09:00.429650   32123 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0603 11:09:00.435134   32123 start.go:562] Will wait 60s for crictl version
	I0603 11:09:00.435178   32123 ssh_runner.go:195] Run: which crictl
	I0603 11:09:00.438893   32123 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0603 11:09:00.479635   32123 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0603 11:09:00.479716   32123 ssh_runner.go:195] Run: crio --version
	I0603 11:09:00.508784   32123 ssh_runner.go:195] Run: crio --version
	I0603 11:09:00.540764   32123 out.go:177] * Preparing Kubernetes v1.30.1 on CRI-O 1.29.1 ...
	I0603 11:09:00.542271   32123 main.go:141] libmachine: (ha-683480) Calling .GetIP
	I0603 11:09:00.544914   32123 main.go:141] libmachine: (ha-683480) DBG | domain ha-683480 has defined MAC address 52:54:00:e5:3f:6a in network mk-ha-683480
	I0603 11:09:00.545320   32123 main.go:141] libmachine: (ha-683480) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:3f:6a", ip: ""} in network mk-ha-683480: {Iface:virbr1 ExpiryTime:2024-06-03 11:56:28 +0000 UTC Type:0 Mac:52:54:00:e5:3f:6a Iaid: IPaddr:192.168.39.116 Prefix:24 Hostname:ha-683480 Clientid:01:52:54:00:e5:3f:6a}
	I0603 11:09:00.545352   32123 main.go:141] libmachine: (ha-683480) DBG | domain ha-683480 has defined IP address 192.168.39.116 and MAC address 52:54:00:e5:3f:6a in network mk-ha-683480
	I0603 11:09:00.545521   32123 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0603 11:09:00.550299   32123 kubeadm.go:877] updating cluster {Name:ha-683480 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 Cl
usterName:ha-683480 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.116 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.127 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.131 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.206 Port:0 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false fr
eshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L Mo
untGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0603 11:09:00.550441   32123 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime crio
	I0603 11:09:00.550491   32123 ssh_runner.go:195] Run: sudo crictl images --output json
	I0603 11:09:00.600204   32123 crio.go:514] all images are preloaded for cri-o runtime.
	I0603 11:09:00.600227   32123 crio.go:433] Images already preloaded, skipping extraction
	I0603 11:09:00.600277   32123 ssh_runner.go:195] Run: sudo crictl images --output json
	I0603 11:09:00.636579   32123 crio.go:514] all images are preloaded for cri-o runtime.
	I0603 11:09:00.636599   32123 cache_images.go:84] Images are preloaded, skipping loading
	I0603 11:09:00.636614   32123 kubeadm.go:928] updating node { 192.168.39.116 8443 v1.30.1 crio true true} ...
	I0603 11:09:00.636714   32123 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-683480 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.116
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.1 ClusterName:ha-683480 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0603 11:09:00.636779   32123 ssh_runner.go:195] Run: crio config
	I0603 11:09:00.686623   32123 cni.go:84] Creating CNI manager for ""
	I0603 11:09:00.686644   32123 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0603 11:09:00.686656   32123 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0603 11:09:00.686688   32123 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.116 APIServerPort:8443 KubernetesVersion:v1.30.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-683480 NodeName:ha-683480 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.116"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.116 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernete
s/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0603 11:09:00.686867   32123 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.116
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-683480"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.116
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.116"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0603 11:09:00.686895   32123 kube-vip.go:115] generating kube-vip config ...
	I0603 11:09:00.686945   32123 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0603 11:09:00.699149   32123 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0603 11:09:00.699266   32123 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
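	Before emitting the manifest above, the log shows a `modprobe --all ip_vs ...` probe whose success is what allows kube-vip's control-plane load balancing (the `lb_enable` env) to be switched on automatically. A small sketch of that decision, assuming the probe runs locally rather than through the ssh_runner:

```go
package main

import (
	"fmt"
	"os/exec"
)

// canEnableKubeVipLB reports whether the IPVS modules needed for kube-vip's
// control-plane load balancing load successfully, mirroring the modprobe
// check in the log. Sketch only; minikube runs this inside the guest.
func canEnableKubeVipLB() bool {
	cmd := exec.Command("sudo", "sh", "-c",
		"modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack")
	return cmd.Run() == nil
}

func main() {
	lbEnable := canEnableKubeVipLB()
	// This boolean is what would drive the "lb_enable" value in the manifest above.
	fmt.Printf("kube-vip lb_enable=%v\n", lbEnable)
}
```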
	I0603 11:09:00.699330   32123 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.1
	I0603 11:09:00.709452   32123 binaries.go:44] Found k8s binaries, skipping transfer
	I0603 11:09:00.709523   32123 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0603 11:09:00.719357   32123 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I0603 11:09:00.737341   32123 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0603 11:09:00.753811   32123 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2153 bytes)
	I0603 11:09:00.770330   32123 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0603 11:09:00.788590   32123 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0603 11:09:00.792380   32123 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0603 11:09:00.938633   32123 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0603 11:09:00.954663   32123 certs.go:68] Setting up /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/ha-683480 for IP: 192.168.39.116
	I0603 11:09:00.954680   32123 certs.go:194] generating shared ca certs ...
	I0603 11:09:00.954695   32123 certs.go:226] acquiring lock for ca certs: {Name:mk50092df3bdecbf959926adc377ee06b75aacd9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 11:09:00.954853   32123 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19008-7755/.minikube/ca.key
	I0603 11:09:00.954909   32123 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19008-7755/.minikube/proxy-client-ca.key
	I0603 11:09:00.954920   32123 certs.go:256] generating profile certs ...
	I0603 11:09:00.954999   32123 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/ha-683480/client.key
	I0603 11:09:00.955025   32123 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/ha-683480/apiserver.key.e3f31f3b
	I0603 11:09:00.955066   32123 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/ha-683480/apiserver.crt.e3f31f3b with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.116 192.168.39.127 192.168.39.131 192.168.39.254]
	I0603 11:09:01.074478   32123 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/ha-683480/apiserver.crt.e3f31f3b ...
	I0603 11:09:01.074507   32123 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/ha-683480/apiserver.crt.e3f31f3b: {Name:mk90aaec59622d5605c25e50123cffa72ad4fa74 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 11:09:01.074671   32123 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/ha-683480/apiserver.key.e3f31f3b ...
	I0603 11:09:01.074682   32123 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/ha-683480/apiserver.key.e3f31f3b: {Name:mke0afd6700871b17032b676d43a247d77a3697b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 11:09:01.074747   32123 certs.go:381] copying /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/ha-683480/apiserver.crt.e3f31f3b -> /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/ha-683480/apiserver.crt
	I0603 11:09:01.074893   32123 certs.go:385] copying /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/ha-683480/apiserver.key.e3f31f3b -> /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/ha-683480/apiserver.key
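	The apiserver certificate generated above is issued for every IP a client may use: the in-cluster service IP, localhost, the three control-plane node IPs, and the 192.168.39.254 HA VIP. A minimal crypto/x509 sketch that creates a certificate with those IP SANs; it self-signs for brevity, whereas minikube signs with its shared minikubeCA:

```go
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"fmt"
	"math/big"
	"net"
	"time"
)

func main() {
	// SANs taken from the crypto.go line above: service IP, localhost, node IPs, HA VIP.
	ips := []net.IP{
		net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"), net.ParseIP("10.0.0.1"),
		net.ParseIP("192.168.39.116"), net.ParseIP("192.168.39.127"),
		net.ParseIP("192.168.39.131"), net.ParseIP("192.168.39.254"),
	}
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses:  ips,
	}
	// Self-signed here; minikube signs with the minikubeCA key instead.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	fmt.Println(string(pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der})))
}
```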
	I0603 11:09:01.075011   32123 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/ha-683480/proxy-client.key
	I0603 11:09:01.075026   32123 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19008-7755/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0603 11:09:01.075095   32123 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19008-7755/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0603 11:09:01.075116   32123 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19008-7755/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0603 11:09:01.075128   32123 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19008-7755/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0603 11:09:01.075141   32123 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/ha-683480/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0603 11:09:01.075153   32123 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/ha-683480/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0603 11:09:01.075165   32123 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/ha-683480/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0603 11:09:01.075177   32123 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/ha-683480/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0603 11:09:01.075228   32123 certs.go:484] found cert: /home/jenkins/minikube-integration/19008-7755/.minikube/certs/15028.pem (1338 bytes)
	W0603 11:09:01.075265   32123 certs.go:480] ignoring /home/jenkins/minikube-integration/19008-7755/.minikube/certs/15028_empty.pem, impossibly tiny 0 bytes
	I0603 11:09:01.075274   32123 certs.go:484] found cert: /home/jenkins/minikube-integration/19008-7755/.minikube/certs/ca-key.pem (1679 bytes)
	I0603 11:09:01.075293   32123 certs.go:484] found cert: /home/jenkins/minikube-integration/19008-7755/.minikube/certs/ca.pem (1082 bytes)
	I0603 11:09:01.075314   32123 certs.go:484] found cert: /home/jenkins/minikube-integration/19008-7755/.minikube/certs/cert.pem (1123 bytes)
	I0603 11:09:01.075334   32123 certs.go:484] found cert: /home/jenkins/minikube-integration/19008-7755/.minikube/certs/key.pem (1679 bytes)
	I0603 11:09:01.075369   32123 certs.go:484] found cert: /home/jenkins/minikube-integration/19008-7755/.minikube/files/etc/ssl/certs/150282.pem (1708 bytes)
	I0603 11:09:01.075397   32123 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19008-7755/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0603 11:09:01.075412   32123 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19008-7755/.minikube/certs/15028.pem -> /usr/share/ca-certificates/15028.pem
	I0603 11:09:01.075423   32123 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19008-7755/.minikube/files/etc/ssl/certs/150282.pem -> /usr/share/ca-certificates/150282.pem
	I0603 11:09:01.075983   32123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0603 11:09:01.101929   32123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0603 11:09:01.126780   32123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0603 11:09:01.151427   32123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0603 11:09:01.175069   32123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/ha-683480/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0603 11:09:01.198877   32123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/ha-683480/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0603 11:09:01.221819   32123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/ha-683480/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0603 11:09:01.245043   32123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/ha-683480/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0603 11:09:01.268520   32123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0603 11:09:01.292182   32123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/certs/15028.pem --> /usr/share/ca-certificates/15028.pem (1338 bytes)
	I0603 11:09:01.316481   32123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/files/etc/ssl/certs/150282.pem --> /usr/share/ca-certificates/150282.pem (1708 bytes)
	I0603 11:09:01.340006   32123 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0603 11:09:01.356593   32123 ssh_runner.go:195] Run: openssl version
	I0603 11:09:01.362366   32123 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/150282.pem && ln -fs /usr/share/ca-certificates/150282.pem /etc/ssl/certs/150282.pem"
	I0603 11:09:01.373561   32123 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/150282.pem
	I0603 11:09:01.377979   32123 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jun  3 10:51 /usr/share/ca-certificates/150282.pem
	I0603 11:09:01.378028   32123 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/150282.pem
	I0603 11:09:01.383817   32123 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/150282.pem /etc/ssl/certs/3ec20f2e.0"
	I0603 11:09:01.393943   32123 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0603 11:09:01.404966   32123 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0603 11:09:01.409235   32123 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jun  3 10:39 /usr/share/ca-certificates/minikubeCA.pem
	I0603 11:09:01.409284   32123 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0603 11:09:01.414756   32123 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0603 11:09:01.425087   32123 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15028.pem && ln -fs /usr/share/ca-certificates/15028.pem /etc/ssl/certs/15028.pem"
	I0603 11:09:01.436313   32123 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15028.pem
	I0603 11:09:01.441074   32123 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jun  3 10:51 /usr/share/ca-certificates/15028.pem
	I0603 11:09:01.441123   32123 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15028.pem
	I0603 11:09:01.446671   32123 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/15028.pem /etc/ssl/certs/51391683.0"
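	Each CA above is copied to /usr/share/ca-certificates and then linked into /etc/ssl/certs under its OpenSSL subject hash (for example b5213941.0 for minikubeCA.pem), which is how OpenSSL-based clients locate it. A sketch of that hash-and-link step; the helper shells out to openssl the same way the log does:

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkCAByHash creates /etc/ssl/certs/<subject-hash>.0 pointing at certPath,
// the same layout the `openssl x509 -hash` plus `ln -fs` steps in the log produce.
func linkCAByHash(certPath, certsDir string) (string, error) {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return "", err
	}
	link := filepath.Join(certsDir, strings.TrimSpace(string(out))+".0")
	_ = os.Remove(link) // replace any stale link, like `ln -fs`
	return link, os.Symlink(certPath, link)
}

func main() {
	link, err := linkCAByHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs")
	if err != nil {
		fmt.Println("error:", err)
		return
	}
	fmt.Println("created", link)
}
```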
	I0603 11:09:01.456214   32123 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0603 11:09:01.460571   32123 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0603 11:09:01.466138   32123 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0603 11:09:01.471498   32123 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0603 11:09:01.476939   32123 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0603 11:09:01.482385   32123 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0603 11:09:01.487689   32123 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
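	The series of `openssl x509 ... -checkend 86400` runs above verifies that each control-plane certificate stays valid for at least another day before the existing certs are reused. An equivalent check against the parsed certificate; the path is taken from the log and the 24-hour window mirrors `-checkend 86400`:

```go
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM certificate at path expires within d,
// the same question `openssl x509 -checkend 86400` answers in the log.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM data in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Println("error:", err)
		return
	}
	fmt.Println("expires within 24h:", soon)
}
```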
	I0603 11:09:01.493220   32123 kubeadm.go:391] StartCluster: {Name:ha-683480 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 Clust
erName:ha-683480 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.116 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.127 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.131 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.206 Port:0 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false fresh
pod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L Mount
GID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0603 11:09:01.493322   32123 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0603 11:09:01.493398   32123 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0603 11:09:01.531471   32123 cri.go:89] found id: "f5e2a3e9cad2d3850b8c7cc462cbf093f62660cc5ed878de3fb697df8f7e849d"
	I0603 11:09:01.531494   32123 cri.go:89] found id: "0a2affa40fe5e43b29d1f89794f211acafce31faab220ad3254ea3ae9b81455e"
	I0603 11:09:01.531498   32123 cri.go:89] found id: "f1ac445f3c0b1f52f27caee3ee4ec90408d1b4670e8e93efdec8e3902e0de9b8"
	I0603 11:09:01.531500   32123 cri.go:89] found id: "9c8a6029966c17e71158a2045e39b094dfec93e361d3cd11049c550057d16295"
	I0603 11:09:01.531503   32123 cri.go:89] found id: "b5e9b65b02107aa343d9bd2938c82d12641166c15c0364265fb74b1a00b58a60"
	I0603 11:09:01.531507   32123 cri.go:89] found id: "fdbecc258023e10eac66da5599945eae2f7f8735769b825a69aea8b2effce668"
	I0603 11:09:01.531509   32123 cri.go:89] found id: "aa5e3aca86502907c8d16e6a2327b8f4298b6076617819ceed2b250ae9b24fe8"
	I0603 11:09:01.531512   32123 cri.go:89] found id: "995fa288cd9162aa7fa350ae7a02800593a524c7300a6fa984b62ba4b928891b"
	I0603 11:09:01.531514   32123 cri.go:89] found id: "bcb102231e3a6bc3ea0cc39665baaebb0a97c42874b6cd34e86c04e87532df4f"
	I0603 11:09:01.531520   32123 cri.go:89] found id: "2542929b8eaa1ecd8c858dbb7e4812ddb5121109c3c92127fa7eaae86849ebda"
	I0603 11:09:01.531526   32123 cri.go:89] found id: "3e27550ee88e8dcb6316daece49f9840028efa3091db03e5549e1e3dbbd8ad59"
	I0603 11:09:01.531530   32123 cri.go:89] found id: "c282307764128f62fdee736d5e1ecddfbca0ae7ae2f78b7a78cbdb2dcede8556"
	I0603 11:09:01.531535   32123 cri.go:89] found id: "09fff5459f24c748a0e085f496bf2b65db572d97be0afe906f05511398bdb0ad"
	I0603 11:09:01.531539   32123 cri.go:89] found id: "200682c1dc43f01036807986e0c3bfe0b422726ec352be0df5e42fa79426ed79"
	I0603 11:09:01.531545   32123 cri.go:89] found id: ""
	I0603 11:09:01.531584   32123 ssh_runner.go:195] Run: sudo runc list -f json
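	StartCluster begins by asking crictl for every kube-system container, running or exited, which yields the list of IDs above. A sketch of that listing step, assuming crictl is on PATH locally instead of being driven through the ssh_runner:

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// listNamespaceContainers returns the container IDs crictl reports for a
// pod namespace label, matching the `crictl ps -a --quiet --label ...` call above.
func listNamespaceContainers(namespace string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
		"--label", "io.kubernetes.pod.namespace="+namespace).Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	ids, err := listNamespaceContainers("kube-system")
	if err != nil {
		fmt.Println("error:", err)
		return
	}
	fmt.Println("found", len(ids), "containers")
}
```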
	
	
	==> CRI-O <==
	Jun 03 11:12:04 ha-683480 crio[3818]: time="2024-06-03 11:12:04.799437666Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1717413124799416645,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154742,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=42893b13-acd9-4fd2-9738-38ec46bb0b53 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 03 11:12:04 ha-683480 crio[3818]: time="2024-06-03 11:12:04.799897906Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b34a48dd-22fe-4c69-8082-080c14eea5ff name=/runtime.v1.RuntimeService/ListContainers
	Jun 03 11:12:04 ha-683480 crio[3818]: time="2024-06-03 11:12:04.800021171Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b34a48dd-22fe-4c69-8082-080c14eea5ff name=/runtime.v1.RuntimeService/ListContainers
	Jun 03 11:12:04 ha-683480 crio[3818]: time="2024-06-03 11:12:04.800420149Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b3a10717b253a18c95729306cc39b8b43d5e58a09150083085bb706877b41c41,PodSandboxId:a113d054f5421f66107af14bfae1a5eebde08aa9dc9aeb335f0c95161f05eb06,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1717413036108691580,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a410a98d-73a7-434b-88ce-575c300b2807,},Annotations:map[string]string{io.kubernetes.container.hash: c0c86aa,io.kubernetes.container.restartCount: 4,io.kubernetes.container.te
rminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c3ea180b8216797aaf78ea5661ba3b0943d85bfcde1c3ce755f4e62582ab5ecf,PodSandboxId:ffe70c296995b94eea8e0ed4d7be6d69bf08d786f79d2409eb0aec4cec543072,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:3,},Image:&ImageSpec{Image:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CONTAINER_RUNNING,CreatedAt:1717413013098589663,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-zxhbp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 320e315b-e189-4358-9e56-a4be7d944fae,},Annotations:map[string]string{io.kubernetes.container.hash: ae8d6a68,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termina
tion-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4b0d6949ee1d24934a07cf0a644346fca0258b096baf5ad06ca30011e7f39eb1,PodSandboxId:a113d054f5421f66107af14bfae1a5eebde08aa9dc9aeb335f0c95161f05eb06,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1717412991110177163,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a410a98d-73a7-434b-88ce-575c300b2807,},Annotations:map[string]string{io.kubernetes.container.hash: c0c86aa,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.ku
bernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0376a5d0c8b827cc48df7d87f5eb7cfc72a495c600abbb4856848908d605e8ab,PodSandboxId:eef7acb133025c2540d90e56f987f803816220c3954ca2f0a137257b3822879b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1717412989099617863,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-683480,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b448fd1c84d729fa6b033c44220aea0b,},Annotations:map[string]string{io.kubernetes.container.hash: 25a67648,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.te
rminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f11bba0fe671eec93d2ed313c2be83ba1241f460d7349102758825c301c05c94,PodSandboxId:1be973d393fd98b3b25957a69bb1d222efeb5fee521136d8aee5fcb9c38f29b1,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1717412987097933728,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-683480,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 003e33f91c92b780f1d2cb57410c03e9,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contai
ner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a2616ab08c12cc3bf8a5ddb38992b52223cc3d7951ba7e34b77270f74109b379,PodSandboxId:43cc18e9695818b679a9094e9daaec11df83ee3c5be09797eb2bce64e1b7714f,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1717412981419893724,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-mvpcm,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: fe7a8238-754b-43ce-8080-48e39c548383,},Annotations:map[string]string{io.kubernetes.container.hash: 17542a28,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,i
o.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e6affd24ffc04f8e73646185baadbdcfadc4f59260fe0de2fcfc6b6c24c95576,PodSandboxId:312ee2bc45a8ad5b63be398920344737c48d32822e4acdfcb5242106eebd2f06,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1717412964984575127,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-683480,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 88446bc5037aec3d04a64b1cd4a0b0bb,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod
: 30,},},&Container{Id:48e4f287c203959b7515afda7bbc9f297b67f159d98c275d36cabdf2d658267e,PodSandboxId:0bb95efa9b5544806ce77cb38d2d1899f8a064362bc1a9d4019a150e391a9512,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:1717412948822408756,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4d9w5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 708e060d-115a-4b74-bc66-138d62796b50,},Annotations:map[string]string{io.kubernetes.container.hash: 4749de6d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:753900b199
b96cc9a3ae3791ff1c0c8a47f296f8db9da5deb7568cecb0e3bce5,PodSandboxId:1084ea2c9f83b50b855a9d1cebe8088d5c3ac92954ad88b1defd656231520b46,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1717412948359360909,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-8tqf9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8eab910a-98ed-43db-ac16-d53beb6b7ee4,},Annotations:map[string]string{io.kubernetes.container.hash: 38c633a6,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cc8f63fef0029c9f7bede5603ab9af3193a75bd4fc1106b23c316d4ce6b6705a,PodSandboxId:7610af85710c6617d550044fd9363c3da2fbbbe3d710d6bc8d401d9687a379cf,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1717412948351443976,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-nff86,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 02320e91-17ab-4120-b8b9-dcc08234f180,},Annotations:map[string]string{io.kubernetes.container.hash: 9dfd16ce,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"
protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:52b0704efa37cdba53b8de1a0dc7b7fec29ea28129c9a9e65bd213591e1c01c1,PodSandboxId:ffe70c296995b94eea8e0ed4d7be6d69bf08d786f79d2409eb0aec4cec543072,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:2,},Image:&ImageSpec{Image:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CONTAINER_EXITED,CreatedAt:1717412948220863604,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-zxhbp,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 320e315b-e189-4358-9e56-a4be7d944fae,},Annotations:map[string]string{io.kubernetes.container.hash: ae8d6a68,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:127d736575af20a24c0db0a6e3425badf2d41fcea00d489114e889360664fd0e,PodSandboxId:29eec1a82f9d96bfac4a182301c8302309c6d8392823083237c2d90fca41fa5b,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1717412948119054849,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-683480,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 36cfff3e1576ec0ef9aa4746d32a32e3,},An
notations:map[string]string{io.kubernetes.container.hash: 40554970,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:031c8a2316fc402ab581c065b6ef53496a23534ae41d34c7fb6e7ff35cb3260d,PodSandboxId:751825866bea37dd36dd4139ef61da30fa14d3c0c98e6184cb852519708eec00,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1717412948107489532,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-683480,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e56ceee947d891ba1cd0986590072af7,},Annotations:map[string]
string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:71115f2e0e5d4fe5ae6de1e873cc6f52c55ff8c3b50d1e7576944491d0487781,PodSandboxId:eef7acb133025c2540d90e56f987f803816220c3954ca2f0a137257b3822879b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_EXITED,CreatedAt:1717412947983596549,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-683480,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b448fd1c84d729fa6b033c44220aea0b,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: 25a67648,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9034d276d18e7ad0470a79b0643e03089b4cfa18ddd108b2966e84511a0a8276,PodSandboxId:1be973d393fd98b3b25957a69bb1d222efeb5fee521136d8aee5fcb9c38f29b1,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_EXITED,CreatedAt:1717412947915313005,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-683480,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 003e33f91c92b780f1d2cb57410c03e9,},Annotations:map[string]string{io.kuberne
tes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:348419ceaffc348fe3779838e8b27e8baa3aa566be3f4c329aea8b701917349c,PodSandboxId:d32d79da82b93361a47376b8d8beec88e0c5d9097ed7a7450c63de0ee96d230f,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1717412452793948202,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-mvpcm,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: fe7a8238-754b-43ce-8080-48e39c548383,},Annotations:map[string]string{io.kubernete
s.container.hash: 17542a28,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fdbecc258023e10eac66da5599945eae2f7f8735769b825a69aea8b2effce668,PodSandboxId:62bef471ea4a403424478ea00a89f4311f3d11aea1fc0301abe18ddf44455091,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1717412239551956082,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-8tqf9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8eab910a-98ed-43db-ac16-d53beb6b7ee4,},Annotations:map[string]string{io.kubernetes.container.hash: 38c633a6,io.ku
bernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aa5e3aca86502907c8d16e6a2327b8f4298b6076617819ceed2b250ae9b24fe8,PodSandboxId:41da25dac8c4818183c067f43713ee94cebef64eab1ffb890510822bc9712a41,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1717412239525874687,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-
7db6d8ff4d-nff86,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 02320e91-17ab-4120-b8b9-dcc08234f180,},Annotations:map[string]string{io.kubernetes.container.hash: 9dfd16ce,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bcb102231e3a6bc3ea0cc39665baaebb0a97c42874b6cd34e86c04e87532df4f,PodSandboxId:6812552c2a4ab53e39123a83312dfad25c506cf5157864aa7732c91d6b7eebf2,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f999
37cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_EXITED,CreatedAt:1717412233855131979,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4d9w5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 708e060d-115a-4b74-bc66-138d62796b50,},Annotations:map[string]string{io.kubernetes.container.hash: 4749de6d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c282307764128f62fdee736d5e1ecddfbca0ae7ae2f78b7a78cbdb2dcede8556,PodSandboxId:860a510241592c9daa1fd1d8b28ba6314d6102372dd3005ee2f1fc332eaa5fbb,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f93
11987066307996f035,State:CONTAINER_EXITED,CreatedAt:1717412213949425877,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-683480,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e56ceee947d891ba1cd0986590072af7,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:09fff5459f24c748a0e085f496bf2b65db572d97be0afe906f05511398bdb0ad,PodSandboxId:86b1d4bcd541d31a17ad320bdd376b8fc84deff2fe6e38053aa471139f753d0e,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAIN
ER_EXITED,CreatedAt:1717412213926445790,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-683480,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 36cfff3e1576ec0ef9aa4746d32a32e3,},Annotations:map[string]string{io.kubernetes.container.hash: 40554970,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=b34a48dd-22fe-4c69-8082-080c14eea5ff name=/runtime.v1.RuntimeService/ListContainers
	Jun 03 11:12:04 ha-683480 crio[3818]: time="2024-06-03 11:12:04.849690414Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=6cc7bfa7-c477-4319-b591-513a623abedc name=/runtime.v1.RuntimeService/Version
	Jun 03 11:12:04 ha-683480 crio[3818]: time="2024-06-03 11:12:04.849766275Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=6cc7bfa7-c477-4319-b591-513a623abedc name=/runtime.v1.RuntimeService/Version
	Jun 03 11:12:04 ha-683480 crio[3818]: time="2024-06-03 11:12:04.851340153Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=1f08522b-b8ce-4f75-ab27-2f3754b252c3 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 03 11:12:04 ha-683480 crio[3818]: time="2024-06-03 11:12:04.852056260Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1717413124851958030,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154742,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=1f08522b-b8ce-4f75-ab27-2f3754b252c3 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 03 11:12:04 ha-683480 crio[3818]: time="2024-06-03 11:12:04.853255008Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=5794cd4f-af57-4c56-bddf-e1f50817a634 name=/runtime.v1.RuntimeService/ListContainers
	Jun 03 11:12:04 ha-683480 crio[3818]: time="2024-06-03 11:12:04.853335924Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=5794cd4f-af57-4c56-bddf-e1f50817a634 name=/runtime.v1.RuntimeService/ListContainers
	Jun 03 11:12:04 ha-683480 crio[3818]: time="2024-06-03 11:12:04.853744858Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b3a10717b253a18c95729306cc39b8b43d5e58a09150083085bb706877b41c41,PodSandboxId:a113d054f5421f66107af14bfae1a5eebde08aa9dc9aeb335f0c95161f05eb06,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1717413036108691580,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a410a98d-73a7-434b-88ce-575c300b2807,},Annotations:map[string]string{io.kubernetes.container.hash: c0c86aa,io.kubernetes.container.restartCount: 4,io.kubernetes.container.te
rminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c3ea180b8216797aaf78ea5661ba3b0943d85bfcde1c3ce755f4e62582ab5ecf,PodSandboxId:ffe70c296995b94eea8e0ed4d7be6d69bf08d786f79d2409eb0aec4cec543072,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:3,},Image:&ImageSpec{Image:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CONTAINER_RUNNING,CreatedAt:1717413013098589663,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-zxhbp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 320e315b-e189-4358-9e56-a4be7d944fae,},Annotations:map[string]string{io.kubernetes.container.hash: ae8d6a68,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termina
tion-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4b0d6949ee1d24934a07cf0a644346fca0258b096baf5ad06ca30011e7f39eb1,PodSandboxId:a113d054f5421f66107af14bfae1a5eebde08aa9dc9aeb335f0c95161f05eb06,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1717412991110177163,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a410a98d-73a7-434b-88ce-575c300b2807,},Annotations:map[string]string{io.kubernetes.container.hash: c0c86aa,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.ku
bernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0376a5d0c8b827cc48df7d87f5eb7cfc72a495c600abbb4856848908d605e8ab,PodSandboxId:eef7acb133025c2540d90e56f987f803816220c3954ca2f0a137257b3822879b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1717412989099617863,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-683480,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b448fd1c84d729fa6b033c44220aea0b,},Annotations:map[string]string{io.kubernetes.container.hash: 25a67648,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.te
rminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f11bba0fe671eec93d2ed313c2be83ba1241f460d7349102758825c301c05c94,PodSandboxId:1be973d393fd98b3b25957a69bb1d222efeb5fee521136d8aee5fcb9c38f29b1,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1717412987097933728,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-683480,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 003e33f91c92b780f1d2cb57410c03e9,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contai
ner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a2616ab08c12cc3bf8a5ddb38992b52223cc3d7951ba7e34b77270f74109b379,PodSandboxId:43cc18e9695818b679a9094e9daaec11df83ee3c5be09797eb2bce64e1b7714f,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1717412981419893724,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-mvpcm,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: fe7a8238-754b-43ce-8080-48e39c548383,},Annotations:map[string]string{io.kubernetes.container.hash: 17542a28,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,i
o.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e6affd24ffc04f8e73646185baadbdcfadc4f59260fe0de2fcfc6b6c24c95576,PodSandboxId:312ee2bc45a8ad5b63be398920344737c48d32822e4acdfcb5242106eebd2f06,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1717412964984575127,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-683480,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 88446bc5037aec3d04a64b1cd4a0b0bb,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod
: 30,},},&Container{Id:48e4f287c203959b7515afda7bbc9f297b67f159d98c275d36cabdf2d658267e,PodSandboxId:0bb95efa9b5544806ce77cb38d2d1899f8a064362bc1a9d4019a150e391a9512,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:1717412948822408756,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4d9w5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 708e060d-115a-4b74-bc66-138d62796b50,},Annotations:map[string]string{io.kubernetes.container.hash: 4749de6d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:753900b199
b96cc9a3ae3791ff1c0c8a47f296f8db9da5deb7568cecb0e3bce5,PodSandboxId:1084ea2c9f83b50b855a9d1cebe8088d5c3ac92954ad88b1defd656231520b46,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1717412948359360909,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-8tqf9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8eab910a-98ed-43db-ac16-d53beb6b7ee4,},Annotations:map[string]string{io.kubernetes.container.hash: 38c633a6,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cc8f63fef0029c9f7bede5603ab9af3193a75bd4fc1106b23c316d4ce6b6705a,PodSandboxId:7610af85710c6617d550044fd9363c3da2fbbbe3d710d6bc8d401d9687a379cf,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1717412948351443976,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-nff86,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 02320e91-17ab-4120-b8b9-dcc08234f180,},Annotations:map[string]string{io.kubernetes.container.hash: 9dfd16ce,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"
protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:52b0704efa37cdba53b8de1a0dc7b7fec29ea28129c9a9e65bd213591e1c01c1,PodSandboxId:ffe70c296995b94eea8e0ed4d7be6d69bf08d786f79d2409eb0aec4cec543072,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:2,},Image:&ImageSpec{Image:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CONTAINER_EXITED,CreatedAt:1717412948220863604,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-zxhbp,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 320e315b-e189-4358-9e56-a4be7d944fae,},Annotations:map[string]string{io.kubernetes.container.hash: ae8d6a68,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:127d736575af20a24c0db0a6e3425badf2d41fcea00d489114e889360664fd0e,PodSandboxId:29eec1a82f9d96bfac4a182301c8302309c6d8392823083237c2d90fca41fa5b,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1717412948119054849,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-683480,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 36cfff3e1576ec0ef9aa4746d32a32e3,},An
notations:map[string]string{io.kubernetes.container.hash: 40554970,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:031c8a2316fc402ab581c065b6ef53496a23534ae41d34c7fb6e7ff35cb3260d,PodSandboxId:751825866bea37dd36dd4139ef61da30fa14d3c0c98e6184cb852519708eec00,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1717412948107489532,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-683480,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e56ceee947d891ba1cd0986590072af7,},Annotations:map[string]
string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:71115f2e0e5d4fe5ae6de1e873cc6f52c55ff8c3b50d1e7576944491d0487781,PodSandboxId:eef7acb133025c2540d90e56f987f803816220c3954ca2f0a137257b3822879b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_EXITED,CreatedAt:1717412947983596549,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-683480,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b448fd1c84d729fa6b033c44220aea0b,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: 25a67648,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9034d276d18e7ad0470a79b0643e03089b4cfa18ddd108b2966e84511a0a8276,PodSandboxId:1be973d393fd98b3b25957a69bb1d222efeb5fee521136d8aee5fcb9c38f29b1,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_EXITED,CreatedAt:1717412947915313005,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-683480,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 003e33f91c92b780f1d2cb57410c03e9,},Annotations:map[string]string{io.kuberne
tes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:348419ceaffc348fe3779838e8b27e8baa3aa566be3f4c329aea8b701917349c,PodSandboxId:d32d79da82b93361a47376b8d8beec88e0c5d9097ed7a7450c63de0ee96d230f,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1717412452793948202,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-mvpcm,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: fe7a8238-754b-43ce-8080-48e39c548383,},Annotations:map[string]string{io.kubernete
s.container.hash: 17542a28,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fdbecc258023e10eac66da5599945eae2f7f8735769b825a69aea8b2effce668,PodSandboxId:62bef471ea4a403424478ea00a89f4311f3d11aea1fc0301abe18ddf44455091,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1717412239551956082,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-8tqf9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8eab910a-98ed-43db-ac16-d53beb6b7ee4,},Annotations:map[string]string{io.kubernetes.container.hash: 38c633a6,io.ku
bernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aa5e3aca86502907c8d16e6a2327b8f4298b6076617819ceed2b250ae9b24fe8,PodSandboxId:41da25dac8c4818183c067f43713ee94cebef64eab1ffb890510822bc9712a41,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1717412239525874687,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-
7db6d8ff4d-nff86,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 02320e91-17ab-4120-b8b9-dcc08234f180,},Annotations:map[string]string{io.kubernetes.container.hash: 9dfd16ce,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bcb102231e3a6bc3ea0cc39665baaebb0a97c42874b6cd34e86c04e87532df4f,PodSandboxId:6812552c2a4ab53e39123a83312dfad25c506cf5157864aa7732c91d6b7eebf2,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f999
37cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_EXITED,CreatedAt:1717412233855131979,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4d9w5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 708e060d-115a-4b74-bc66-138d62796b50,},Annotations:map[string]string{io.kubernetes.container.hash: 4749de6d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c282307764128f62fdee736d5e1ecddfbca0ae7ae2f78b7a78cbdb2dcede8556,PodSandboxId:860a510241592c9daa1fd1d8b28ba6314d6102372dd3005ee2f1fc332eaa5fbb,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f93
11987066307996f035,State:CONTAINER_EXITED,CreatedAt:1717412213949425877,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-683480,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e56ceee947d891ba1cd0986590072af7,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:09fff5459f24c748a0e085f496bf2b65db572d97be0afe906f05511398bdb0ad,PodSandboxId:86b1d4bcd541d31a17ad320bdd376b8fc84deff2fe6e38053aa471139f753d0e,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAIN
ER_EXITED,CreatedAt:1717412213926445790,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-683480,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 36cfff3e1576ec0ef9aa4746d32a32e3,},Annotations:map[string]string{io.kubernetes.container.hash: 40554970,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=5794cd4f-af57-4c56-bddf-e1f50817a634 name=/runtime.v1.RuntimeService/ListContainers
	Jun 03 11:12:04 ha-683480 crio[3818]: time="2024-06-03 11:12:04.896390569Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=335d834f-c059-4288-b6af-20d072071b95 name=/runtime.v1.RuntimeService/Version
	Jun 03 11:12:04 ha-683480 crio[3818]: time="2024-06-03 11:12:04.896462318Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=335d834f-c059-4288-b6af-20d072071b95 name=/runtime.v1.RuntimeService/Version
	Jun 03 11:12:04 ha-683480 crio[3818]: time="2024-06-03 11:12:04.897783066Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=13792227-a12e-468a-acb4-7b7277514277 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 03 11:12:04 ha-683480 crio[3818]: time="2024-06-03 11:12:04.898428503Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1717413124898403442,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154742,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=13792227-a12e-468a-acb4-7b7277514277 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 03 11:12:04 ha-683480 crio[3818]: time="2024-06-03 11:12:04.899268059Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=3498fc0b-f265-49be-b7fd-29e6dd2c01e5 name=/runtime.v1.RuntimeService/ListContainers
	Jun 03 11:12:04 ha-683480 crio[3818]: time="2024-06-03 11:12:04.899324028Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=3498fc0b-f265-49be-b7fd-29e6dd2c01e5 name=/runtime.v1.RuntimeService/ListContainers
	Jun 03 11:12:04 ha-683480 crio[3818]: time="2024-06-03 11:12:04.899697774Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b3a10717b253a18c95729306cc39b8b43d5e58a09150083085bb706877b41c41,PodSandboxId:a113d054f5421f66107af14bfae1a5eebde08aa9dc9aeb335f0c95161f05eb06,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1717413036108691580,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a410a98d-73a7-434b-88ce-575c300b2807,},Annotations:map[string]string{io.kubernetes.container.hash: c0c86aa,io.kubernetes.container.restartCount: 4,io.kubernetes.container.te
rminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c3ea180b8216797aaf78ea5661ba3b0943d85bfcde1c3ce755f4e62582ab5ecf,PodSandboxId:ffe70c296995b94eea8e0ed4d7be6d69bf08d786f79d2409eb0aec4cec543072,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:3,},Image:&ImageSpec{Image:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CONTAINER_RUNNING,CreatedAt:1717413013098589663,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-zxhbp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 320e315b-e189-4358-9e56-a4be7d944fae,},Annotations:map[string]string{io.kubernetes.container.hash: ae8d6a68,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termina
tion-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4b0d6949ee1d24934a07cf0a644346fca0258b096baf5ad06ca30011e7f39eb1,PodSandboxId:a113d054f5421f66107af14bfae1a5eebde08aa9dc9aeb335f0c95161f05eb06,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1717412991110177163,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a410a98d-73a7-434b-88ce-575c300b2807,},Annotations:map[string]string{io.kubernetes.container.hash: c0c86aa,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.ku
bernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0376a5d0c8b827cc48df7d87f5eb7cfc72a495c600abbb4856848908d605e8ab,PodSandboxId:eef7acb133025c2540d90e56f987f803816220c3954ca2f0a137257b3822879b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1717412989099617863,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-683480,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b448fd1c84d729fa6b033c44220aea0b,},Annotations:map[string]string{io.kubernetes.container.hash: 25a67648,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.te
rminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f11bba0fe671eec93d2ed313c2be83ba1241f460d7349102758825c301c05c94,PodSandboxId:1be973d393fd98b3b25957a69bb1d222efeb5fee521136d8aee5fcb9c38f29b1,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1717412987097933728,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-683480,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 003e33f91c92b780f1d2cb57410c03e9,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contai
ner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a2616ab08c12cc3bf8a5ddb38992b52223cc3d7951ba7e34b77270f74109b379,PodSandboxId:43cc18e9695818b679a9094e9daaec11df83ee3c5be09797eb2bce64e1b7714f,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1717412981419893724,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-mvpcm,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: fe7a8238-754b-43ce-8080-48e39c548383,},Annotations:map[string]string{io.kubernetes.container.hash: 17542a28,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,i
o.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e6affd24ffc04f8e73646185baadbdcfadc4f59260fe0de2fcfc6b6c24c95576,PodSandboxId:312ee2bc45a8ad5b63be398920344737c48d32822e4acdfcb5242106eebd2f06,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1717412964984575127,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-683480,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 88446bc5037aec3d04a64b1cd4a0b0bb,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod
: 30,},},&Container{Id:48e4f287c203959b7515afda7bbc9f297b67f159d98c275d36cabdf2d658267e,PodSandboxId:0bb95efa9b5544806ce77cb38d2d1899f8a064362bc1a9d4019a150e391a9512,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:1717412948822408756,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4d9w5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 708e060d-115a-4b74-bc66-138d62796b50,},Annotations:map[string]string{io.kubernetes.container.hash: 4749de6d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:753900b199
b96cc9a3ae3791ff1c0c8a47f296f8db9da5deb7568cecb0e3bce5,PodSandboxId:1084ea2c9f83b50b855a9d1cebe8088d5c3ac92954ad88b1defd656231520b46,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1717412948359360909,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-8tqf9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8eab910a-98ed-43db-ac16-d53beb6b7ee4,},Annotations:map[string]string{io.kubernetes.container.hash: 38c633a6,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cc8f63fef0029c9f7bede5603ab9af3193a75bd4fc1106b23c316d4ce6b6705a,PodSandboxId:7610af85710c6617d550044fd9363c3da2fbbbe3d710d6bc8d401d9687a379cf,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1717412948351443976,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-nff86,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 02320e91-17ab-4120-b8b9-dcc08234f180,},Annotations:map[string]string{io.kubernetes.container.hash: 9dfd16ce,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"
protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:52b0704efa37cdba53b8de1a0dc7b7fec29ea28129c9a9e65bd213591e1c01c1,PodSandboxId:ffe70c296995b94eea8e0ed4d7be6d69bf08d786f79d2409eb0aec4cec543072,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:2,},Image:&ImageSpec{Image:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CONTAINER_EXITED,CreatedAt:1717412948220863604,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-zxhbp,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 320e315b-e189-4358-9e56-a4be7d944fae,},Annotations:map[string]string{io.kubernetes.container.hash: ae8d6a68,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:127d736575af20a24c0db0a6e3425badf2d41fcea00d489114e889360664fd0e,PodSandboxId:29eec1a82f9d96bfac4a182301c8302309c6d8392823083237c2d90fca41fa5b,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1717412948119054849,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-683480,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 36cfff3e1576ec0ef9aa4746d32a32e3,},An
notations:map[string]string{io.kubernetes.container.hash: 40554970,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:031c8a2316fc402ab581c065b6ef53496a23534ae41d34c7fb6e7ff35cb3260d,PodSandboxId:751825866bea37dd36dd4139ef61da30fa14d3c0c98e6184cb852519708eec00,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1717412948107489532,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-683480,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e56ceee947d891ba1cd0986590072af7,},Annotations:map[string]
string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:71115f2e0e5d4fe5ae6de1e873cc6f52c55ff8c3b50d1e7576944491d0487781,PodSandboxId:eef7acb133025c2540d90e56f987f803816220c3954ca2f0a137257b3822879b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_EXITED,CreatedAt:1717412947983596549,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-683480,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b448fd1c84d729fa6b033c44220aea0b,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: 25a67648,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9034d276d18e7ad0470a79b0643e03089b4cfa18ddd108b2966e84511a0a8276,PodSandboxId:1be973d393fd98b3b25957a69bb1d222efeb5fee521136d8aee5fcb9c38f29b1,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_EXITED,CreatedAt:1717412947915313005,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-683480,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 003e33f91c92b780f1d2cb57410c03e9,},Annotations:map[string]string{io.kuberne
tes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:348419ceaffc348fe3779838e8b27e8baa3aa566be3f4c329aea8b701917349c,PodSandboxId:d32d79da82b93361a47376b8d8beec88e0c5d9097ed7a7450c63de0ee96d230f,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1717412452793948202,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-mvpcm,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: fe7a8238-754b-43ce-8080-48e39c548383,},Annotations:map[string]string{io.kubernete
s.container.hash: 17542a28,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fdbecc258023e10eac66da5599945eae2f7f8735769b825a69aea8b2effce668,PodSandboxId:62bef471ea4a403424478ea00a89f4311f3d11aea1fc0301abe18ddf44455091,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1717412239551956082,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-8tqf9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8eab910a-98ed-43db-ac16-d53beb6b7ee4,},Annotations:map[string]string{io.kubernetes.container.hash: 38c633a6,io.ku
bernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aa5e3aca86502907c8d16e6a2327b8f4298b6076617819ceed2b250ae9b24fe8,PodSandboxId:41da25dac8c4818183c067f43713ee94cebef64eab1ffb890510822bc9712a41,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1717412239525874687,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-
7db6d8ff4d-nff86,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 02320e91-17ab-4120-b8b9-dcc08234f180,},Annotations:map[string]string{io.kubernetes.container.hash: 9dfd16ce,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bcb102231e3a6bc3ea0cc39665baaebb0a97c42874b6cd34e86c04e87532df4f,PodSandboxId:6812552c2a4ab53e39123a83312dfad25c506cf5157864aa7732c91d6b7eebf2,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f999
37cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_EXITED,CreatedAt:1717412233855131979,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4d9w5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 708e060d-115a-4b74-bc66-138d62796b50,},Annotations:map[string]string{io.kubernetes.container.hash: 4749de6d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c282307764128f62fdee736d5e1ecddfbca0ae7ae2f78b7a78cbdb2dcede8556,PodSandboxId:860a510241592c9daa1fd1d8b28ba6314d6102372dd3005ee2f1fc332eaa5fbb,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f93
11987066307996f035,State:CONTAINER_EXITED,CreatedAt:1717412213949425877,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-683480,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e56ceee947d891ba1cd0986590072af7,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:09fff5459f24c748a0e085f496bf2b65db572d97be0afe906f05511398bdb0ad,PodSandboxId:86b1d4bcd541d31a17ad320bdd376b8fc84deff2fe6e38053aa471139f753d0e,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAIN
ER_EXITED,CreatedAt:1717412213926445790,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-683480,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 36cfff3e1576ec0ef9aa4746d32a32e3,},Annotations:map[string]string{io.kubernetes.container.hash: 40554970,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=3498fc0b-f265-49be-b7fd-29e6dd2c01e5 name=/runtime.v1.RuntimeService/ListContainers
	Jun 03 11:12:04 ha-683480 crio[3818]: time="2024-06-03 11:12:04.954248029Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=9863aad0-a3f0-45f1-9116-4621dd67e167 name=/runtime.v1.RuntimeService/Version
	Jun 03 11:12:04 ha-683480 crio[3818]: time="2024-06-03 11:12:04.954316215Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=9863aad0-a3f0-45f1-9116-4621dd67e167 name=/runtime.v1.RuntimeService/Version
	Jun 03 11:12:04 ha-683480 crio[3818]: time="2024-06-03 11:12:04.957358317Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=bb9b6c43-b231-41a4-ab6a-3a22598e1818 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 03 11:12:04 ha-683480 crio[3818]: time="2024-06-03 11:12:04.958382291Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1717413124958331726,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154742,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=bb9b6c43-b231-41a4-ab6a-3a22598e1818 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 03 11:12:04 ha-683480 crio[3818]: time="2024-06-03 11:12:04.959055426Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=2afbc235-b771-408e-97a4-42543e8a0aa0 name=/runtime.v1.RuntimeService/ListContainers
	Jun 03 11:12:04 ha-683480 crio[3818]: time="2024-06-03 11:12:04.959139180Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=2afbc235-b771-408e-97a4-42543e8a0aa0 name=/runtime.v1.RuntimeService/ListContainers
	Jun 03 11:12:04 ha-683480 crio[3818]: time="2024-06-03 11:12:04.959515339Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b3a10717b253a18c95729306cc39b8b43d5e58a09150083085bb706877b41c41,PodSandboxId:a113d054f5421f66107af14bfae1a5eebde08aa9dc9aeb335f0c95161f05eb06,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1717413036108691580,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a410a98d-73a7-434b-88ce-575c300b2807,},Annotations:map[string]string{io.kubernetes.container.hash: c0c86aa,io.kubernetes.container.restartCount: 4,io.kubernetes.container.te
rminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c3ea180b8216797aaf78ea5661ba3b0943d85bfcde1c3ce755f4e62582ab5ecf,PodSandboxId:ffe70c296995b94eea8e0ed4d7be6d69bf08d786f79d2409eb0aec4cec543072,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:3,},Image:&ImageSpec{Image:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CONTAINER_RUNNING,CreatedAt:1717413013098589663,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-zxhbp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 320e315b-e189-4358-9e56-a4be7d944fae,},Annotations:map[string]string{io.kubernetes.container.hash: ae8d6a68,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termina
tion-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4b0d6949ee1d24934a07cf0a644346fca0258b096baf5ad06ca30011e7f39eb1,PodSandboxId:a113d054f5421f66107af14bfae1a5eebde08aa9dc9aeb335f0c95161f05eb06,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1717412991110177163,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a410a98d-73a7-434b-88ce-575c300b2807,},Annotations:map[string]string{io.kubernetes.container.hash: c0c86aa,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.ku
bernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0376a5d0c8b827cc48df7d87f5eb7cfc72a495c600abbb4856848908d605e8ab,PodSandboxId:eef7acb133025c2540d90e56f987f803816220c3954ca2f0a137257b3822879b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1717412989099617863,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-683480,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b448fd1c84d729fa6b033c44220aea0b,},Annotations:map[string]string{io.kubernetes.container.hash: 25a67648,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.te
rminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f11bba0fe671eec93d2ed313c2be83ba1241f460d7349102758825c301c05c94,PodSandboxId:1be973d393fd98b3b25957a69bb1d222efeb5fee521136d8aee5fcb9c38f29b1,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1717412987097933728,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-683480,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 003e33f91c92b780f1d2cb57410c03e9,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contai
ner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a2616ab08c12cc3bf8a5ddb38992b52223cc3d7951ba7e34b77270f74109b379,PodSandboxId:43cc18e9695818b679a9094e9daaec11df83ee3c5be09797eb2bce64e1b7714f,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1717412981419893724,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-mvpcm,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: fe7a8238-754b-43ce-8080-48e39c548383,},Annotations:map[string]string{io.kubernetes.container.hash: 17542a28,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,i
o.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e6affd24ffc04f8e73646185baadbdcfadc4f59260fe0de2fcfc6b6c24c95576,PodSandboxId:312ee2bc45a8ad5b63be398920344737c48d32822e4acdfcb5242106eebd2f06,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1717412964984575127,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-683480,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 88446bc5037aec3d04a64b1cd4a0b0bb,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod
: 30,},},&Container{Id:48e4f287c203959b7515afda7bbc9f297b67f159d98c275d36cabdf2d658267e,PodSandboxId:0bb95efa9b5544806ce77cb38d2d1899f8a064362bc1a9d4019a150e391a9512,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:1717412948822408756,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4d9w5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 708e060d-115a-4b74-bc66-138d62796b50,},Annotations:map[string]string{io.kubernetes.container.hash: 4749de6d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:753900b199
b96cc9a3ae3791ff1c0c8a47f296f8db9da5deb7568cecb0e3bce5,PodSandboxId:1084ea2c9f83b50b855a9d1cebe8088d5c3ac92954ad88b1defd656231520b46,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1717412948359360909,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-8tqf9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8eab910a-98ed-43db-ac16-d53beb6b7ee4,},Annotations:map[string]string{io.kubernetes.container.hash: 38c633a6,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cc8f63fef0029c9f7bede5603ab9af3193a75bd4fc1106b23c316d4ce6b6705a,PodSandboxId:7610af85710c6617d550044fd9363c3da2fbbbe3d710d6bc8d401d9687a379cf,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1717412948351443976,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-nff86,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 02320e91-17ab-4120-b8b9-dcc08234f180,},Annotations:map[string]string{io.kubernetes.container.hash: 9dfd16ce,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"
protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:52b0704efa37cdba53b8de1a0dc7b7fec29ea28129c9a9e65bd213591e1c01c1,PodSandboxId:ffe70c296995b94eea8e0ed4d7be6d69bf08d786f79d2409eb0aec4cec543072,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:2,},Image:&ImageSpec{Image:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CONTAINER_EXITED,CreatedAt:1717412948220863604,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-zxhbp,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 320e315b-e189-4358-9e56-a4be7d944fae,},Annotations:map[string]string{io.kubernetes.container.hash: ae8d6a68,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:127d736575af20a24c0db0a6e3425badf2d41fcea00d489114e889360664fd0e,PodSandboxId:29eec1a82f9d96bfac4a182301c8302309c6d8392823083237c2d90fca41fa5b,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1717412948119054849,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-683480,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 36cfff3e1576ec0ef9aa4746d32a32e3,},An
notations:map[string]string{io.kubernetes.container.hash: 40554970,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:031c8a2316fc402ab581c065b6ef53496a23534ae41d34c7fb6e7ff35cb3260d,PodSandboxId:751825866bea37dd36dd4139ef61da30fa14d3c0c98e6184cb852519708eec00,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1717412948107489532,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-683480,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e56ceee947d891ba1cd0986590072af7,},Annotations:map[string]
string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:71115f2e0e5d4fe5ae6de1e873cc6f52c55ff8c3b50d1e7576944491d0487781,PodSandboxId:eef7acb133025c2540d90e56f987f803816220c3954ca2f0a137257b3822879b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_EXITED,CreatedAt:1717412947983596549,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-683480,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b448fd1c84d729fa6b033c44220aea0b,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: 25a67648,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9034d276d18e7ad0470a79b0643e03089b4cfa18ddd108b2966e84511a0a8276,PodSandboxId:1be973d393fd98b3b25957a69bb1d222efeb5fee521136d8aee5fcb9c38f29b1,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_EXITED,CreatedAt:1717412947915313005,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-683480,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 003e33f91c92b780f1d2cb57410c03e9,},Annotations:map[string]string{io.kuberne
tes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:348419ceaffc348fe3779838e8b27e8baa3aa566be3f4c329aea8b701917349c,PodSandboxId:d32d79da82b93361a47376b8d8beec88e0c5d9097ed7a7450c63de0ee96d230f,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1717412452793948202,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-mvpcm,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: fe7a8238-754b-43ce-8080-48e39c548383,},Annotations:map[string]string{io.kubernete
s.container.hash: 17542a28,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fdbecc258023e10eac66da5599945eae2f7f8735769b825a69aea8b2effce668,PodSandboxId:62bef471ea4a403424478ea00a89f4311f3d11aea1fc0301abe18ddf44455091,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1717412239551956082,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-8tqf9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8eab910a-98ed-43db-ac16-d53beb6b7ee4,},Annotations:map[string]string{io.kubernetes.container.hash: 38c633a6,io.ku
bernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aa5e3aca86502907c8d16e6a2327b8f4298b6076617819ceed2b250ae9b24fe8,PodSandboxId:41da25dac8c4818183c067f43713ee94cebef64eab1ffb890510822bc9712a41,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1717412239525874687,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-
7db6d8ff4d-nff86,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 02320e91-17ab-4120-b8b9-dcc08234f180,},Annotations:map[string]string{io.kubernetes.container.hash: 9dfd16ce,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bcb102231e3a6bc3ea0cc39665baaebb0a97c42874b6cd34e86c04e87532df4f,PodSandboxId:6812552c2a4ab53e39123a83312dfad25c506cf5157864aa7732c91d6b7eebf2,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f999
37cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_EXITED,CreatedAt:1717412233855131979,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4d9w5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 708e060d-115a-4b74-bc66-138d62796b50,},Annotations:map[string]string{io.kubernetes.container.hash: 4749de6d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c282307764128f62fdee736d5e1ecddfbca0ae7ae2f78b7a78cbdb2dcede8556,PodSandboxId:860a510241592c9daa1fd1d8b28ba6314d6102372dd3005ee2f1fc332eaa5fbb,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f93
11987066307996f035,State:CONTAINER_EXITED,CreatedAt:1717412213949425877,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-683480,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e56ceee947d891ba1cd0986590072af7,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:09fff5459f24c748a0e085f496bf2b65db572d97be0afe906f05511398bdb0ad,PodSandboxId:86b1d4bcd541d31a17ad320bdd376b8fc84deff2fe6e38053aa471139f753d0e,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAIN
ER_EXITED,CreatedAt:1717412213926445790,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-683480,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 36cfff3e1576ec0ef9aa4746d32a32e3,},Annotations:map[string]string{io.kubernetes.container.hash: 40554970,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=2afbc235-b771-408e-97a4-42543e8a0aa0 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	b3a10717b253a       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      About a minute ago   Running             storage-provisioner       4                   a113d054f5421       storage-provisioner
	c3ea180b82167       ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f                                      About a minute ago   Running             kindnet-cni               3                   ffe70c296995b       kindnet-zxhbp
	4b0d6949ee1d2       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      2 minutes ago        Exited              storage-provisioner       3                   a113d054f5421       storage-provisioner
	0376a5d0c8b82       91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a                                      2 minutes ago        Running             kube-apiserver            3                   eef7acb133025       kube-apiserver-ha-683480
	f11bba0fe671e       25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c                                      2 minutes ago        Running             kube-controller-manager   2                   1be973d393fd9       kube-controller-manager-ha-683480
	a2616ab08c12c       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      2 minutes ago        Running             busybox                   1                   43cc18e969581       busybox-fc5497c4f-mvpcm
	e6affd24ffc04       38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12                                      2 minutes ago        Running             kube-vip                  0                   312ee2bc45a8a       kube-vip-ha-683480
	48e4f287c2039       747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd                                      2 minutes ago        Running             kube-proxy                1                   0bb95efa9b554       kube-proxy-4d9w5
	753900b199b96       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      2 minutes ago        Running             coredns                   1                   1084ea2c9f83b       coredns-7db6d8ff4d-8tqf9
	cc8f63fef0029       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      2 minutes ago        Running             coredns                   1                   7610af85710c6       coredns-7db6d8ff4d-nff86
	52b0704efa37c       ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f                                      2 minutes ago        Exited              kindnet-cni               2                   ffe70c296995b       kindnet-zxhbp
	127d736575af2       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      2 minutes ago        Running             etcd                      1                   29eec1a82f9d9       etcd-ha-683480
	031c8a2316fc4       a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035                                      2 minutes ago        Running             kube-scheduler            1                   751825866bea3       kube-scheduler-ha-683480
	71115f2e0e5d4       91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a                                      2 minutes ago        Exited              kube-apiserver            2                   eef7acb133025       kube-apiserver-ha-683480
	9034d276d18e7       25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c                                      2 minutes ago        Exited              kube-controller-manager   1                   1be973d393fd9       kube-controller-manager-ha-683480
	348419ceaffc3       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   11 minutes ago       Exited              busybox                   0                   d32d79da82b93       busybox-fc5497c4f-mvpcm
	fdbecc258023e       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      14 minutes ago       Exited              coredns                   0                   62bef471ea4a4       coredns-7db6d8ff4d-8tqf9
	aa5e3aca86502       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      14 minutes ago       Exited              coredns                   0                   41da25dac8c48       coredns-7db6d8ff4d-nff86
	bcb102231e3a6       747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd                                      14 minutes ago       Exited              kube-proxy                0                   6812552c2a4ab       kube-proxy-4d9w5
	c282307764128       a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035                                      15 minutes ago       Exited              kube-scheduler            0                   860a510241592       kube-scheduler-ha-683480
	09fff5459f24c       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      15 minutes ago       Exited              etcd                      0                   86b1d4bcd541d       etcd-ha-683480
	
	
	==> coredns [753900b199b96cc9a3ae3791ff1c0c8a47f296f8db9da5deb7568cecb0e3bce5] <==
	Trace[1740610122]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.6:46100->10.96.0.1:443: read: connection reset by peer 12670ms (11:09:32.817)
	Trace[1740610122]: [12.670863276s] [12.670863276s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.6:46100->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> coredns [aa5e3aca86502907c8d16e6a2327b8f4298b6076617819ceed2b250ae9b24fe8] <==
	[INFO] 10.244.1.2:59258 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00009417s
	[INFO] 10.244.0.4:59067 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001995491s
	[INFO] 10.244.0.4:33658 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000077694s
	[INFO] 10.244.2.2:56134 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000146189s
	[INFO] 10.244.2.2:42897 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001874015s
	[INFO] 10.244.2.2:49555 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000079926s
	[INFO] 10.244.1.2:49977 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000098794s
	[INFO] 10.244.1.2:55522 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000070995s
	[INFO] 10.244.1.2:47166 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000064061s
	[INFO] 10.244.0.4:52772 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000107779s
	[INFO] 10.244.0.4:34695 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000110706s
	[INFO] 10.244.2.2:47248 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00010537s
	[INFO] 10.244.1.2:52200 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000175618s
	[INFO] 10.244.1.2:56731 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000211211s
	[INFO] 10.244.1.2:47156 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000137189s
	[INFO] 10.244.1.2:57441 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000161046s
	[INFO] 10.244.0.4:45937 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000064288s
	[INFO] 10.244.0.4:50125 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.00003887s
	[INFO] 10.244.2.2:38937 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000134308s
	[INFO] 10.244.2.2:34039 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000085147s
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: the server has asked for the client to provide credentials (get namespaces) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=23, ErrCode=NO_ERROR, debug=""
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: the server has asked for the client to provide credentials (get services) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=23, ErrCode=NO_ERROR, debug=""
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: the server has asked for the client to provide credentials (get endpointslices.discovery.k8s.io) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=23, ErrCode=NO_ERROR, debug=""
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [cc8f63fef0029c9f7bede5603ab9af3193a75bd4fc1106b23c316d4ce6b6705a] <==
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.5:50466->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.5:44132->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: Trace[1995040395]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (03-Jun-2024 11:09:22.492) (total time: 10325ms):
	Trace[1995040395]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.5:44132->10.96.0.1:443: read: connection reset by peer 10325ms (11:09:32.817)
	Trace[1995040395]: [10.325900852s] [10.325900852s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.5:44132->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> coredns [fdbecc258023e10eac66da5599945eae2f7f8735769b825a69aea8b2effce668] <==
	[INFO] 10.244.1.2:60397 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.013328418s
	[INFO] 10.244.1.2:34848 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000138348s
	[INFO] 10.244.0.4:53254 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000147619s
	[INFO] 10.244.0.4:37575 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000103362s
	[INFO] 10.244.0.4:54948 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000181862s
	[INFO] 10.244.0.4:39944 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001365258s
	[INFO] 10.244.0.4:55239 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.00017828s
	[INFO] 10.244.0.4:57467 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000097919s
	[INFO] 10.244.2.2:35971 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000096406s
	[INFO] 10.244.2.2:38423 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001334812s
	[INFO] 10.244.2.2:42352 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000153771s
	[INFO] 10.244.2.2:40734 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000099488s
	[INFO] 10.244.2.2:34598 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000136946s
	[INFO] 10.244.1.2:54219 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000087067s
	[INFO] 10.244.0.4:58452 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000093948s
	[INFO] 10.244.0.4:35784 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000061499s
	[INFO] 10.244.2.2:54391 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000149082s
	[INFO] 10.244.2.2:39850 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000109311s
	[INFO] 10.244.2.2:39330 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000101321s
	[INFO] 10.244.0.4:56550 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000137331s
	[INFO] 10.244.0.4:42317 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000097716s
	[INFO] 10.244.2.2:34210 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000106975s
	[INFO] 10.244.2.2:40755 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.00028708s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               ha-683480
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-683480
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=599070631c2216ebc936292d491e4fe10e15b9d8
	                    minikube.k8s.io/name=ha-683480
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_06_03T10_57_01_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 03 Jun 2024 10:56:57 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-683480
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 03 Jun 2024 11:12:01 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 03 Jun 2024 11:09:48 +0000   Mon, 03 Jun 2024 10:56:56 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 03 Jun 2024 11:09:48 +0000   Mon, 03 Jun 2024 10:56:56 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 03 Jun 2024 11:09:48 +0000   Mon, 03 Jun 2024 10:56:56 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 03 Jun 2024 11:09:48 +0000   Mon, 03 Jun 2024 10:57:18 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.116
	  Hostname:    ha-683480
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 f1505c2b59bc4afb8c36148f46c99e6c
	  System UUID:                f1505c2b-59bc-4afb-8c36-148f46c99e6c
	  Boot ID:                    acccd468-078d-403e-a5b4-d10d97594cc0
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.1
	  Kube-Proxy Version:         v1.30.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-mvpcm              0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 coredns-7db6d8ff4d-8tqf9             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     14m
	  kube-system                 coredns-7db6d8ff4d-nff86             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     14m
	  kube-system                 etcd-ha-683480                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         15m
	  kube-system                 kindnet-zxhbp                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      14m
	  kube-system                 kube-apiserver-ha-683480             250m (12%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-controller-manager-ha-683480    200m (10%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-proxy-4d9w5                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-scheduler-ha-683480             100m (5%)     0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-vip-ha-683480                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         94s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age                  From             Message
	  ----     ------                   ----                 ----             -------
	  Normal   Starting                 2m13s                kube-proxy       
	  Normal   Starting                 14m                  kube-proxy       
	  Normal   NodeHasSufficientMemory  15m (x8 over 15m)    kubelet          Node ha-683480 status is now: NodeHasSufficientMemory
	  Normal   NodeAllocatableEnforced  15m                  kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientPID     15m (x7 over 15m)    kubelet          Node ha-683480 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    15m (x8 over 15m)    kubelet          Node ha-683480 status is now: NodeHasNoDiskPressure
	  Normal   Starting                 15m                  kubelet          Starting kubelet.
	  Normal   NodeHasSufficientPID     15m                  kubelet          Node ha-683480 status is now: NodeHasSufficientPID
	  Normal   Starting                 15m                  kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  15m                  kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  15m                  kubelet          Node ha-683480 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    15m                  kubelet          Node ha-683480 status is now: NodeHasNoDiskPressure
	  Normal   RegisteredNode           14m                  node-controller  Node ha-683480 event: Registered Node ha-683480 in Controller
	  Normal   NodeReady                14m                  kubelet          Node ha-683480 status is now: NodeReady
	  Normal   RegisteredNode           12m                  node-controller  Node ha-683480 event: Registered Node ha-683480 in Controller
	  Normal   RegisteredNode           11m                  node-controller  Node ha-683480 event: Registered Node ha-683480 in Controller
	  Warning  ContainerGCFailed        3m5s (x2 over 4m5s)  kubelet          rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   RegisteredNode           2m13s                node-controller  Node ha-683480 event: Registered Node ha-683480 in Controller
	  Normal   RegisteredNode           2m1s                 node-controller  Node ha-683480 event: Registered Node ha-683480 in Controller
	  Normal   RegisteredNode           46s                  node-controller  Node ha-683480 event: Registered Node ha-683480 in Controller
	
	
	Name:               ha-683480-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-683480-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=599070631c2216ebc936292d491e4fe10e15b9d8
	                    minikube.k8s.io/name=ha-683480
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_06_03T10_59_14_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 03 Jun 2024 10:59:11 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-683480-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 03 Jun 2024 11:12:02 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 03 Jun 2024 11:10:31 +0000   Mon, 03 Jun 2024 11:09:50 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 03 Jun 2024 11:10:31 +0000   Mon, 03 Jun 2024 11:09:50 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 03 Jun 2024 11:10:31 +0000   Mon, 03 Jun 2024 11:09:50 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 03 Jun 2024 11:10:31 +0000   Mon, 03 Jun 2024 11:09:50 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.127
	  Hostname:    ha-683480-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 2d1a1fca79484f629cf7b8fc1955281b
	  System UUID:                2d1a1fca-7948-4f62-9cf7-b8fc1955281b
	  Boot ID:                    2ccc0715-41df-4f70-950f-db6ed24fa46f
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.1
	  Kube-Proxy Version:         v1.30.1
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-ldtcf                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 etcd-ha-683480-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         12m
	  kube-system                 kindnet-t6fxj                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      12m
	  kube-system                 kube-apiserver-ha-683480-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-controller-manager-ha-683480-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-q2xfn                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-scheduler-ha-683480-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-vip-ha-683480-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 2m13s                  kube-proxy       
	  Normal  Starting                 12m                    kube-proxy       
	  Normal  NodeAllocatableEnforced  12m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  12m (x8 over 12m)      kubelet          Node ha-683480-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    12m (x8 over 12m)      kubelet          Node ha-683480-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     12m (x7 over 12m)      kubelet          Node ha-683480-m02 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           12m                    node-controller  Node ha-683480-m02 event: Registered Node ha-683480-m02 in Controller
	  Normal  RegisteredNode           12m                    node-controller  Node ha-683480-m02 event: Registered Node ha-683480-m02 in Controller
	  Normal  RegisteredNode           11m                    node-controller  Node ha-683480-m02 event: Registered Node ha-683480-m02 in Controller
	  Normal  NodeNotReady             9m28s                  node-controller  Node ha-683480-m02 status is now: NodeNotReady
	  Normal  Starting                 2m43s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m43s (x8 over 2m43s)  kubelet          Node ha-683480-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m43s (x8 over 2m43s)  kubelet          Node ha-683480-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m43s (x7 over 2m43s)  kubelet          Node ha-683480-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m43s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           2m13s                  node-controller  Node ha-683480-m02 event: Registered Node ha-683480-m02 in Controller
	  Normal  RegisteredNode           2m1s                   node-controller  Node ha-683480-m02 event: Registered Node ha-683480-m02 in Controller
	  Normal  RegisteredNode           46s                    node-controller  Node ha-683480-m02 event: Registered Node ha-683480-m02 in Controller
	
	
	Name:               ha-683480-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-683480-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=599070631c2216ebc936292d491e4fe10e15b9d8
	                    minikube.k8s.io/name=ha-683480
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_06_03T11_01_25_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 03 Jun 2024 11:01:24 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-683480-m04
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 03 Jun 2024 11:05:19 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Mon, 03 Jun 2024 11:01:55 +0000   Mon, 03 Jun 2024 11:10:33 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Mon, 03 Jun 2024 11:01:55 +0000   Mon, 03 Jun 2024 11:10:33 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Mon, 03 Jun 2024 11:01:55 +0000   Mon, 03 Jun 2024 11:10:33 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Mon, 03 Jun 2024 11:01:55 +0000   Mon, 03 Jun 2024 11:10:33 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.206
	  Hostname:    ha-683480-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 d0705544cf414e31abf26e0a013cd6bf
	  System UUID:                d0705544-cf41-4e31-abf2-6e0a013cd6bf
	  Boot ID:                    125ac719-6c97-4e76-9440-99e7f62b9e2d
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.1
	  Kube-Proxy Version:         v1.30.1
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-24p87       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      10m
	  kube-system                 kube-proxy-2kkf4    0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 10m                kube-proxy       
	  Normal  NodeHasSufficientPID     10m (x2 over 10m)  kubelet          Node ha-683480-m04 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  10m (x2 over 10m)  kubelet          Node ha-683480-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    10m (x2 over 10m)  kubelet          Node ha-683480-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeAllocatableEnforced  10m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           10m                node-controller  Node ha-683480-m04 event: Registered Node ha-683480-m04 in Controller
	  Normal  RegisteredNode           10m                node-controller  Node ha-683480-m04 event: Registered Node ha-683480-m04 in Controller
	  Normal  RegisteredNode           10m                node-controller  Node ha-683480-m04 event: Registered Node ha-683480-m04 in Controller
	  Normal  NodeReady                10m                kubelet          Node ha-683480-m04 status is now: NodeReady
	  Normal  RegisteredNode           2m13s              node-controller  Node ha-683480-m04 event: Registered Node ha-683480-m04 in Controller
	  Normal  RegisteredNode           2m1s               node-controller  Node ha-683480-m04 event: Registered Node ha-683480-m04 in Controller
	  Normal  NodeNotReady             92s                node-controller  Node ha-683480-m04 status is now: NodeNotReady
	  Normal  RegisteredNode           46s                node-controller  Node ha-683480-m04 event: Registered Node ha-683480-m04 in Controller
	
	
	==> dmesg <==
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[ +13.363785] systemd-fstab-generator[594]: Ignoring "noauto" option for root device
	[  +0.062784] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.051848] systemd-fstab-generator[606]: Ignoring "noauto" option for root device
	[  +0.189543] systemd-fstab-generator[620]: Ignoring "noauto" option for root device
	[  +0.108878] systemd-fstab-generator[632]: Ignoring "noauto" option for root device
	[  +0.262803] systemd-fstab-generator[662]: Ignoring "noauto" option for root device
	[  +4.077728] systemd-fstab-generator[762]: Ignoring "noauto" option for root device
	[  +5.011635] systemd-fstab-generator[951]: Ignoring "noauto" option for root device
	[  +0.054415] kauditd_printk_skb: 158 callbacks suppressed
	[  +5.849379] kauditd_printk_skb: 79 callbacks suppressed
	[  +1.148784] systemd-fstab-generator[1371]: Ignoring "noauto" option for root device
	[Jun 3 10:57] kauditd_printk_skb: 15 callbacks suppressed
	[  +5.057593] kauditd_printk_skb: 34 callbacks suppressed
	[Jun 3 10:59] kauditd_printk_skb: 30 callbacks suppressed
	[Jun 3 11:08] systemd-fstab-generator[3732]: Ignoring "noauto" option for root device
	[  +0.155437] systemd-fstab-generator[3744]: Ignoring "noauto" option for root device
	[  +0.187815] systemd-fstab-generator[3758]: Ignoring "noauto" option for root device
	[  +0.153201] systemd-fstab-generator[3770]: Ignoring "noauto" option for root device
	[  +0.283321] systemd-fstab-generator[3798]: Ignoring "noauto" option for root device
	[Jun 3 11:09] systemd-fstab-generator[3905]: Ignoring "noauto" option for root device
	[  +6.669814] kauditd_printk_skb: 122 callbacks suppressed
	[ +17.441291] kauditd_printk_skb: 98 callbacks suppressed
	[  +5.224130] kauditd_printk_skb: 1 callbacks suppressed
	[ +22.398569] kauditd_printk_skb: 6 callbacks suppressed
	
	
	==> etcd [09fff5459f24c748a0e085f496bf2b65db572d97be0afe906f05511398bdb0ad] <==
	2024/06/03 11:07:26 WARNING: [core] [Server #6] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-06-03T11:07:26.361652Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"298.180966ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/priorityclasses/\" range_end:\"/registry/priorityclasses0\" limit:10000 ","response":"","error":"context canceled"}
	{"level":"info","ts":"2024-06-03T11:07:26.361662Z","caller":"traceutil/trace.go:171","msg":"trace[1176087128] range","detail":"{range_begin:/registry/priorityclasses/; range_end:/registry/priorityclasses0; }","duration":"298.197396ms","start":"2024-06-03T11:07:26.063461Z","end":"2024-06-03T11:07:26.361659Z","steps":["trace[1176087128] 'agreement among raft nodes before linearized reading'  (duration: 298.187306ms)"],"step_count":1}
	2024/06/03 11:07:26 WARNING: [core] [Server #6] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	2024/06/03 11:07:26 WARNING: [core] [Server #6] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-06-03T11:07:26.410189Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.116:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-06-03T11:07:26.410266Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.116:2379: use of closed network connection"}
	{"level":"info","ts":"2024-06-03T11:07:26.410368Z","caller":"etcdserver/server.go:1462","msg":"skipped leadership transfer; local server is not leader","local-member-id":"8b2d6b6d639b2fdb","current-leader-member-id":"0"}
	{"level":"info","ts":"2024-06-03T11:07:26.410582Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"186d66165cd2cce"}
	{"level":"info","ts":"2024-06-03T11:07:26.410623Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"186d66165cd2cce"}
	{"level":"info","ts":"2024-06-03T11:07:26.410665Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"186d66165cd2cce"}
	{"level":"info","ts":"2024-06-03T11:07:26.410722Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"8b2d6b6d639b2fdb","remote-peer-id":"186d66165cd2cce"}
	{"level":"info","ts":"2024-06-03T11:07:26.410782Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"8b2d6b6d639b2fdb","remote-peer-id":"186d66165cd2cce"}
	{"level":"info","ts":"2024-06-03T11:07:26.410838Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"8b2d6b6d639b2fdb","remote-peer-id":"186d66165cd2cce"}
	{"level":"info","ts":"2024-06-03T11:07:26.410865Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"186d66165cd2cce"}
	{"level":"info","ts":"2024-06-03T11:07:26.410891Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"4f87f407f126f7fc"}
	{"level":"info","ts":"2024-06-03T11:07:26.410925Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"4f87f407f126f7fc"}
	{"level":"info","ts":"2024-06-03T11:07:26.410958Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"4f87f407f126f7fc"}
	{"level":"info","ts":"2024-06-03T11:07:26.411127Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"8b2d6b6d639b2fdb","remote-peer-id":"4f87f407f126f7fc"}
	{"level":"info","ts":"2024-06-03T11:07:26.411176Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"8b2d6b6d639b2fdb","remote-peer-id":"4f87f407f126f7fc"}
	{"level":"info","ts":"2024-06-03T11:07:26.411224Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"8b2d6b6d639b2fdb","remote-peer-id":"4f87f407f126f7fc"}
	{"level":"info","ts":"2024-06-03T11:07:26.411251Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"4f87f407f126f7fc"}
	{"level":"info","ts":"2024-06-03T11:07:26.414171Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.39.116:2380"}
	{"level":"info","ts":"2024-06-03T11:07:26.414318Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.39.116:2380"}
	{"level":"info","ts":"2024-06-03T11:07:26.41435Z","caller":"embed/etcd.go:377","msg":"closed etcd server","name":"ha-683480","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.116:2380"],"advertise-client-urls":["https://192.168.39.116:2379"]}
	
	
	==> etcd [127d736575af20a24c0db0a6e3425badf2d41fcea00d489114e889360664fd0e] <==
	{"level":"info","ts":"2024-06-03T11:11:02.728489Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"8b2d6b6d639b2fdb","remote-peer-id":"4f87f407f126f7fc"}
	{"level":"info","ts":"2024-06-03T11:11:02.729503Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"8b2d6b6d639b2fdb","remote-peer-id":"4f87f407f126f7fc"}
	{"level":"warn","ts":"2024-06-03T11:11:02.739806Z","caller":"embed/config_logging.go:169","msg":"rejected connection","remote-addr":"192.168.39.131:53990","server-name":"","error":"EOF"}
	{"level":"info","ts":"2024-06-03T11:11:02.740926Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"8b2d6b6d639b2fdb","remote-peer-id":"4f87f407f126f7fc"}
	{"level":"warn","ts":"2024-06-03T11:11:02.74806Z","caller":"embed/config_logging.go:169","msg":"rejected connection","remote-addr":"192.168.39.131:54022","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2024-06-03T11:11:02.751524Z","caller":"embed/config_logging.go:169","msg":"rejected connection","remote-addr":"192.168.39.131:54014","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2024-06-03T11:11:04.243889Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"4f87f407f126f7fc","rtt":"0s","error":"dial tcp 192.168.39.131:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-06-03T11:11:04.259196Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"4f87f407f126f7fc","rtt":"0s","error":"dial tcp 192.168.39.131:2380: connect: connection refused"}
	{"level":"info","ts":"2024-06-03T11:11:51.149595Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8b2d6b6d639b2fdb switched to configuration voters=(110010954725272782 10028790062790684635)"}
	{"level":"info","ts":"2024-06-03T11:11:51.152103Z","caller":"membership/cluster.go:472","msg":"removed member","cluster-id":"d52e949b9fea4da5","local-member-id":"8b2d6b6d639b2fdb","removed-remote-peer-id":"4f87f407f126f7fc","removed-remote-peer-urls":["https://192.168.39.131:2380"]}
	{"level":"info","ts":"2024-06-03T11:11:51.152216Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"4f87f407f126f7fc"}
	{"level":"warn","ts":"2024-06-03T11:11:51.152611Z","caller":"rafthttp/stream.go:286","msg":"closed TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"4f87f407f126f7fc"}
	{"level":"info","ts":"2024-06-03T11:11:51.152663Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"4f87f407f126f7fc"}
	{"level":"warn","ts":"2024-06-03T11:11:51.153585Z","caller":"rafthttp/stream.go:286","msg":"closed TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"4f87f407f126f7fc"}
	{"level":"info","ts":"2024-06-03T11:11:51.153639Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"4f87f407f126f7fc"}
	{"level":"info","ts":"2024-06-03T11:11:51.153787Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"8b2d6b6d639b2fdb","remote-peer-id":"4f87f407f126f7fc"}
	{"level":"warn","ts":"2024-06-03T11:11:51.15436Z","caller":"rafthttp/stream.go:421","msg":"lost TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"8b2d6b6d639b2fdb","remote-peer-id":"4f87f407f126f7fc","error":"context canceled"}
	{"level":"warn","ts":"2024-06-03T11:11:51.154476Z","caller":"rafthttp/peer_status.go:66","msg":"peer became inactive (message send to peer failed)","peer-id":"4f87f407f126f7fc","error":"failed to read 4f87f407f126f7fc on stream MsgApp v2 (context canceled)"}
	{"level":"info","ts":"2024-06-03T11:11:51.154758Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"8b2d6b6d639b2fdb","remote-peer-id":"4f87f407f126f7fc"}
	{"level":"warn","ts":"2024-06-03T11:11:51.155423Z","caller":"rafthttp/stream.go:421","msg":"lost TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"8b2d6b6d639b2fdb","remote-peer-id":"4f87f407f126f7fc","error":"context canceled"}
	{"level":"info","ts":"2024-06-03T11:11:51.155557Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"8b2d6b6d639b2fdb","remote-peer-id":"4f87f407f126f7fc"}
	{"level":"info","ts":"2024-06-03T11:11:51.155677Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"4f87f407f126f7fc"}
	{"level":"info","ts":"2024-06-03T11:11:51.155887Z","caller":"rafthttp/transport.go:355","msg":"removed remote peer","local-member-id":"8b2d6b6d639b2fdb","removed-remote-peer-id":"4f87f407f126f7fc"}
	{"level":"warn","ts":"2024-06-03T11:11:51.165501Z","caller":"rafthttp/http.go:394","msg":"rejected stream from remote peer because it was removed","local-member-id":"8b2d6b6d639b2fdb","remote-peer-id-stream-handler":"8b2d6b6d639b2fdb","remote-peer-id-from":"4f87f407f126f7fc"}
	{"level":"warn","ts":"2024-06-03T11:11:51.175541Z","caller":"rafthttp/http.go:394","msg":"rejected stream from remote peer because it was removed","local-member-id":"8b2d6b6d639b2fdb","remote-peer-id-stream-handler":"8b2d6b6d639b2fdb","remote-peer-id-from":"4f87f407f126f7fc"}
	
	
	==> kernel <==
	 11:12:05 up 15 min,  0 users,  load average: 0.15, 0.42, 0.26
	Linux ha-683480 5.10.207 #1 SMP Wed May 22 22:17:16 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [52b0704efa37cdba53b8de1a0dc7b7fec29ea28129c9a9e65bd213591e1c01c1] <==
	I0603 11:09:08.740706       1 main.go:102] connected to apiserver: https://10.96.0.1:443
	I0603 11:09:08.743072       1 main.go:107] hostIP = 192.168.39.116
	podIP = 192.168.39.116
	I0603 11:09:08.743290       1 main.go:116] setting mtu 1500 for CNI 
	I0603 11:09:08.789618       1 main.go:146] kindnetd IP family: "ipv4"
	I0603 11:09:08.789803       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	I0603 11:09:26.673473       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: no route to host
	I0603 11:09:29.745684       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: no route to host
	I0603 11:09:32.817482       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: no route to host
	I0603 11:09:35.889507       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: no route to host
	I0603 11:09:38.890591       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: connection refused
	panic: Reached maximum retries obtaining node list: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: connection refused
	
	goroutine 1 [running]:
	main.main()
		/go/src/cmd/kindnetd/main.go:195 +0xe3b
	
	
	==> kindnet [c3ea180b8216797aaf78ea5661ba3b0943d85bfcde1c3ce755f4e62582ab5ecf] <==
	I0603 11:11:34.002849       1 main.go:250] Node ha-683480-m03 has CIDR [10.244.2.0/24] 
	I0603 11:11:34.003202       1 main.go:223] Handling node with IPs: map[192.168.39.206:{}]
	I0603 11:11:34.003343       1 main.go:250] Node ha-683480-m04 has CIDR [10.244.3.0/24] 
	I0603 11:11:44.009389       1 main.go:223] Handling node with IPs: map[192.168.39.116:{}]
	I0603 11:11:44.009474       1 main.go:227] handling current node
	I0603 11:11:44.009498       1 main.go:223] Handling node with IPs: map[192.168.39.127:{}]
	I0603 11:11:44.009515       1 main.go:250] Node ha-683480-m02 has CIDR [10.244.1.0/24] 
	I0603 11:11:44.009629       1 main.go:223] Handling node with IPs: map[192.168.39.131:{}]
	I0603 11:11:44.009649       1 main.go:250] Node ha-683480-m03 has CIDR [10.244.2.0/24] 
	I0603 11:11:44.009705       1 main.go:223] Handling node with IPs: map[192.168.39.206:{}]
	I0603 11:11:44.009721       1 main.go:250] Node ha-683480-m04 has CIDR [10.244.3.0/24] 
	I0603 11:11:54.019154       1 main.go:223] Handling node with IPs: map[192.168.39.116:{}]
	I0603 11:11:54.019261       1 main.go:227] handling current node
	I0603 11:11:54.019302       1 main.go:223] Handling node with IPs: map[192.168.39.127:{}]
	I0603 11:11:54.019320       1 main.go:250] Node ha-683480-m02 has CIDR [10.244.1.0/24] 
	I0603 11:11:54.019459       1 main.go:223] Handling node with IPs: map[192.168.39.131:{}]
	I0603 11:11:54.019480       1 main.go:250] Node ha-683480-m03 has CIDR [10.244.2.0/24] 
	I0603 11:11:54.019534       1 main.go:223] Handling node with IPs: map[192.168.39.206:{}]
	I0603 11:11:54.019556       1 main.go:250] Node ha-683480-m04 has CIDR [10.244.3.0/24] 
	I0603 11:12:04.034253       1 main.go:223] Handling node with IPs: map[192.168.39.116:{}]
	I0603 11:12:04.034347       1 main.go:227] handling current node
	I0603 11:12:04.034370       1 main.go:223] Handling node with IPs: map[192.168.39.127:{}]
	I0603 11:12:04.034387       1 main.go:250] Node ha-683480-m02 has CIDR [10.244.1.0/24] 
	I0603 11:12:04.034523       1 main.go:223] Handling node with IPs: map[192.168.39.206:{}]
	I0603 11:12:04.034623       1 main.go:250] Node ha-683480-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [0376a5d0c8b827cc48df7d87f5eb7cfc72a495c600abbb4856848908d605e8ab] <==
	I0603 11:09:51.222575       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0603 11:09:51.242967       1 crdregistration_controller.go:111] Starting crd-autoregister controller
	I0603 11:09:51.245229       1 shared_informer.go:313] Waiting for caches to sync for crd-autoregister
	I0603 11:09:51.308514       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0603 11:09:51.309089       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0603 11:09:51.309508       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0603 11:09:51.310384       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0603 11:09:51.317848       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0603 11:09:51.317887       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0603 11:09:51.310656       1 shared_informer.go:320] Caches are synced for configmaps
	I0603 11:09:51.319170       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	I0603 11:09:51.327394       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0603 11:09:51.327490       1 policy_source.go:224] refreshing policies
	I0603 11:09:51.345738       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0603 11:09:51.345832       1 aggregator.go:165] initial CRD sync complete...
	I0603 11:09:51.345876       1 autoregister_controller.go:141] Starting autoregister controller
	I0603 11:09:51.345901       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0603 11:09:51.345924       1 cache.go:39] Caches are synced for autoregister controller
	I0603 11:09:51.416704       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	W0603 11:09:51.489637       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.127 192.168.39.131]
	I0603 11:09:51.491326       1 controller.go:615] quota admission added evaluator for: endpoints
	I0603 11:09:51.516079       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	E0603 11:09:51.532492       1 controller.go:95] Found stale data, removed previous endpoints on kubernetes service, apiserver didn't exit successfully previously
	I0603 11:09:52.216826       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0603 11:09:52.659225       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.116 192.168.39.127]
	
	
	==> kube-apiserver [71115f2e0e5d4fe5ae6de1e873cc6f52c55ff8c3b50d1e7576944491d0487781] <==
	I0603 11:09:08.697716       1 options.go:221] external host was not specified, using 192.168.39.116
	I0603 11:09:08.701808       1 server.go:148] Version: v1.30.1
	I0603 11:09:08.701868       1 server.go:150] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0603 11:09:09.483085       1 shared_informer.go:313] Waiting for caches to sync for node_authorizer
	I0603 11:09:09.490758       1 plugins.go:157] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0603 11:09:09.492882       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0603 11:09:09.493171       1 instance.go:299] Using reconciler: lease
	I0603 11:09:09.491507       1 shared_informer.go:313] Waiting for caches to sync for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	W0603 11:09:29.473694       1 logging.go:59] [core] [Channel #1 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W0603 11:09:29.476181       1 logging.go:59] [core] [Channel #2 SubChannel #3] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	F0603 11:09:29.494626       1 instance.go:292] Error creating leases: error creating storage factory: context deadline exceeded
	
	
	==> kube-controller-manager [9034d276d18e7ad0470a79b0643e03089b4cfa18ddd108b2966e84511a0a8276] <==
	I0603 11:09:09.767181       1 serving.go:380] Generated self-signed cert in-memory
	I0603 11:09:10.146137       1 controllermanager.go:189] "Starting" version="v1.30.1"
	I0603 11:09:10.146233       1 controllermanager.go:191] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0603 11:09:10.147699       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0603 11:09:10.148507       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0603 11:09:10.149450       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0603 11:09:10.149573       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	E0603 11:09:30.500169       1 controllermanager.go:234] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.168.39.116:8443/healthz\": dial tcp 192.168.39.116:8443: connect: connection refused"
	
	
	==> kube-controller-manager [f11bba0fe671eec93d2ed313c2be83ba1241f460d7349102758825c301c05c94] <==
	I0603 11:10:19.325404       1 endpointslice_controller.go:311] "Error syncing endpoint slices for service, retrying" logger="endpointslice-controller" key="kube-system/kube-dns" err="failed to update kube-dns-4wld8 EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io \"kube-dns-4wld8\": the object has been modified; please apply your changes to the latest version and try again"
	I0603 11:10:19.326951       1 event.go:377] Event(v1.ObjectReference{Kind:"Service", Namespace:"kube-system", Name:"kube-dns", UID:"d172cdeb-e0f6-4277-b7fa-80cd2362b9f8", APIVersion:"v1", ResourceVersion:"291", FieldPath:""}): type: 'Warning' reason: 'FailedToUpdateEndpointSlices' Error updating Endpoint Slices for Service kube-system/kube-dns: failed to update kube-dns-4wld8 EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io "kube-dns-4wld8": the object has been modified; please apply your changes to the latest version and try again
	I0603 11:10:19.364352       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="54.985424ms"
	I0603 11:10:19.364636       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="176.067µs"
	I0603 11:10:33.060113       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="16.018919ms"
	I0603 11:10:33.060499       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="111.286µs"
	I0603 11:10:54.118786       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="32.496µs"
	I0603 11:11:13.379155       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="12.560208ms"
	I0603 11:11:13.379364       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="55.972µs"
	I0603 11:11:47.818552       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="40.221956ms"
	I0603 11:11:47.860873       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="42.112663ms"
	I0603 11:11:47.988807       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="127.822193ms"
	I0603 11:11:48.009299       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="20.429614ms"
	I0603 11:11:48.009425       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="68.365µs"
	I0603 11:11:48.059585       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="13.301615ms"
	I0603 11:11:48.059761       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="52.905µs"
	I0603 11:11:50.018622       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="39.626µs"
	I0603 11:11:50.491726       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="57.79µs"
	I0603 11:11:50.525485       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="64.741µs"
	I0603 11:11:50.529759       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="53.71µs"
	E0603 11:12:04.025656       1 gc_controller.go:153] "Failed to get node" err="node \"ha-683480-m03\" not found" logger="pod-garbage-collector-controller" node="ha-683480-m03"
	E0603 11:12:04.025759       1 gc_controller.go:153] "Failed to get node" err="node \"ha-683480-m03\" not found" logger="pod-garbage-collector-controller" node="ha-683480-m03"
	E0603 11:12:04.025785       1 gc_controller.go:153] "Failed to get node" err="node \"ha-683480-m03\" not found" logger="pod-garbage-collector-controller" node="ha-683480-m03"
	E0603 11:12:04.025809       1 gc_controller.go:153] "Failed to get node" err="node \"ha-683480-m03\" not found" logger="pod-garbage-collector-controller" node="ha-683480-m03"
	E0603 11:12:04.025837       1 gc_controller.go:153] "Failed to get node" err="node \"ha-683480-m03\" not found" logger="pod-garbage-collector-controller" node="ha-683480-m03"
	
	
	==> kube-proxy [48e4f287c203959b7515afda7bbc9f297b67f159d98c275d36cabdf2d658267e] <==
	I0603 11:09:10.204904       1 server_linux.go:69] "Using iptables proxy"
	E0603 11:09:12.593756       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-683480\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0603 11:09:15.666671       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-683480\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0603 11:09:18.737530       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-683480\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0603 11:09:24.882085       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-683480\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0603 11:09:34.097467       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-683480\": dial tcp 192.168.39.254:8443: connect: no route to host"
	I0603 11:09:52.259753       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.116"]
	I0603 11:09:52.344108       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0603 11:09:52.345840       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0603 11:09:52.345932       1 server_linux.go:165] "Using iptables Proxier"
	I0603 11:09:52.355300       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0603 11:09:52.355568       1 server.go:872] "Version info" version="v1.30.1"
	I0603 11:09:52.355614       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0603 11:09:52.357919       1 config.go:319] "Starting node config controller"
	I0603 11:09:52.358047       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0603 11:09:52.359612       1 config.go:192] "Starting service config controller"
	I0603 11:09:52.359646       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0603 11:09:52.359672       1 config.go:101] "Starting endpoint slice config controller"
	I0603 11:09:52.359677       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0603 11:09:52.459322       1 shared_informer.go:320] Caches are synced for node config
	I0603 11:09:52.460480       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0603 11:09:52.460706       1 shared_informer.go:320] Caches are synced for service config
	
	
	==> kube-proxy [bcb102231e3a6bc3ea0cc39665baaebb0a97c42874b6cd34e86c04e87532df4f] <==
	E0603 11:06:17.493811       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1958": dial tcp 192.168.39.254:8443: connect: no route to host
	W0603 11:06:20.563217       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1962": dial tcp 192.168.39.254:8443: connect: no route to host
	E0603 11:06:20.563386       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1962": dial tcp 192.168.39.254:8443: connect: no route to host
	W0603 11:06:20.563465       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1958": dial tcp 192.168.39.254:8443: connect: no route to host
	E0603 11:06:20.563508       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1958": dial tcp 192.168.39.254:8443: connect: no route to host
	W0603 11:06:20.563799       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-683480&resourceVersion=2006": dial tcp 192.168.39.254:8443: connect: no route to host
	E0603 11:06:20.563929       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-683480&resourceVersion=2006": dial tcp 192.168.39.254:8443: connect: no route to host
	W0603 11:06:26.705898       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-683480&resourceVersion=2006": dial tcp 192.168.39.254:8443: connect: no route to host
	W0603 11:06:26.706300       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1962": dial tcp 192.168.39.254:8443: connect: no route to host
	E0603 11:06:26.706388       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-683480&resourceVersion=2006": dial tcp 192.168.39.254:8443: connect: no route to host
	E0603 11:06:26.707491       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1962": dial tcp 192.168.39.254:8443: connect: no route to host
	W0603 11:06:26.707719       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1958": dial tcp 192.168.39.254:8443: connect: no route to host
	E0603 11:06:26.708260       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1958": dial tcp 192.168.39.254:8443: connect: no route to host
	W0603 11:06:35.922579       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1958": dial tcp 192.168.39.254:8443: connect: no route to host
	E0603 11:06:35.922652       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1958": dial tcp 192.168.39.254:8443: connect: no route to host
	W0603 11:06:38.993673       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1962": dial tcp 192.168.39.254:8443: connect: no route to host
	E0603 11:06:38.993906       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1962": dial tcp 192.168.39.254:8443: connect: no route to host
	W0603 11:06:42.065889       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-683480&resourceVersion=2006": dial tcp 192.168.39.254:8443: connect: no route to host
	E0603 11:06:42.066121       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-683480&resourceVersion=2006": dial tcp 192.168.39.254:8443: connect: no route to host
	W0603 11:06:54.354677       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1962": dial tcp 192.168.39.254:8443: connect: no route to host
	E0603 11:06:54.355070       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1962": dial tcp 192.168.39.254:8443: connect: no route to host
	W0603 11:06:54.355222       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1958": dial tcp 192.168.39.254:8443: connect: no route to host
	E0603 11:06:54.355344       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1958": dial tcp 192.168.39.254:8443: connect: no route to host
	W0603 11:07:00.497526       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-683480&resourceVersion=2006": dial tcp 192.168.39.254:8443: connect: no route to host
	E0603 11:07:00.497586       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-683480&resourceVersion=2006": dial tcp 192.168.39.254:8443: connect: no route to host
	
	
	==> kube-scheduler [031c8a2316fc402ab581c065b6ef53496a23534ae41d34c7fb6e7ff35cb3260d] <==
	W0603 11:09:46.420760       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://192.168.39.116:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.39.116:8443: connect: connection refused
	E0603 11:09:46.420823       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://192.168.39.116:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.39.116:8443: connect: connection refused
	W0603 11:09:46.962311       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://192.168.39.116:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.39.116:8443: connect: connection refused
	E0603 11:09:46.962348       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://192.168.39.116:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.39.116:8443: connect: connection refused
	W0603 11:09:47.179486       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: Get "https://192.168.39.116:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.39.116:8443: connect: connection refused
	E0603 11:09:47.179547       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: Get "https://192.168.39.116:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.39.116:8443: connect: connection refused
	W0603 11:09:47.918279       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: Get "https://192.168.39.116:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp 192.168.39.116:8443: connect: connection refused
	E0603 11:09:47.918373       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get "https://192.168.39.116:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp 192.168.39.116:8443: connect: connection refused
	W0603 11:09:48.211626       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: Get "https://192.168.39.116:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 192.168.39.116:8443: connect: connection refused
	E0603 11:09:48.211669       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: Get "https://192.168.39.116:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 192.168.39.116:8443: connect: connection refused
	W0603 11:09:48.808386       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: Get "https://192.168.39.116:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.168.39.116:8443: connect: connection refused
	E0603 11:09:48.808468       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: Get "https://192.168.39.116:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.168.39.116:8443: connect: connection refused
	W0603 11:09:48.895889       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: Get "https://192.168.39.116:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp 192.168.39.116:8443: connect: connection refused
	E0603 11:09:48.896150       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: Get "https://192.168.39.116:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp 192.168.39.116:8443: connect: connection refused
	W0603 11:09:51.258470       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0603 11:09:51.258527       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0603 11:09:51.258653       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0603 11:09:51.258737       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0603 11:09:51.258802       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0603 11:09:51.258851       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0603 11:09:51.258926       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0603 11:09:51.258971       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0603 11:09:51.260291       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0603 11:09:51.260334       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0603 11:10:05.413849       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [c282307764128f62fdee736d5e1ecddfbca0ae7ae2f78b7a78cbdb2dcede8556] <==
	W0603 11:07:23.250863       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0603 11:07:23.250961       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0603 11:07:23.440507       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0603 11:07:23.440556       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0603 11:07:23.850822       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0603 11:07:23.850873       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0603 11:07:23.881110       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0603 11:07:23.881156       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0603 11:07:24.137270       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0603 11:07:24.137360       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0603 11:07:24.562679       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0603 11:07:24.562728       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0603 11:07:24.579219       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0603 11:07:24.579267       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0603 11:07:24.588574       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0603 11:07:24.588662       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0603 11:07:24.778888       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0603 11:07:24.779044       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0603 11:07:24.957511       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0603 11:07:24.957601       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0603 11:07:24.991872       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0603 11:07:24.991960       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0603 11:07:26.347620       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	I0603 11:07:26.347788       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	E0603 11:07:26.347875       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Jun 03 11:10:00 ha-683480 kubelet[1378]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jun 03 11:10:00 ha-683480 kubelet[1378]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 03 11:10:00 ha-683480 kubelet[1378]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jun 03 11:10:01 ha-683480 kubelet[1378]: I0603 11:10:01.087542    1378 scope.go:117] "RemoveContainer" containerID="52b0704efa37cdba53b8de1a0dc7b7fec29ea28129c9a9e65bd213591e1c01c1"
	Jun 03 11:10:01 ha-683480 kubelet[1378]: E0603 11:10:01.087772    1378 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kindnet-cni\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kindnet-cni pod=kindnet-zxhbp_kube-system(320e315b-e189-4358-9e56-a4be7d944fae)\"" pod="kube-system/kindnet-zxhbp" podUID="320e315b-e189-4358-9e56-a4be7d944fae"
	Jun 03 11:10:06 ha-683480 kubelet[1378]: I0603 11:10:06.088730    1378 scope.go:117] "RemoveContainer" containerID="4b0d6949ee1d24934a07cf0a644346fca0258b096baf5ad06ca30011e7f39eb1"
	Jun 03 11:10:06 ha-683480 kubelet[1378]: E0603 11:10:06.089192    1378 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 40s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(a410a98d-73a7-434b-88ce-575c300b2807)\"" pod="kube-system/storage-provisioner" podUID="a410a98d-73a7-434b-88ce-575c300b2807"
	Jun 03 11:10:13 ha-683480 kubelet[1378]: I0603 11:10:13.088430    1378 scope.go:117] "RemoveContainer" containerID="52b0704efa37cdba53b8de1a0dc7b7fec29ea28129c9a9e65bd213591e1c01c1"
	Jun 03 11:10:20 ha-683480 kubelet[1378]: I0603 11:10:20.811779    1378 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/busybox-fc5497c4f-mvpcm" podStartSLOduration=569.224060488 podStartE2EDuration="9m31.811702779s" podCreationTimestamp="2024-06-03 11:00:49 +0000 UTC" firstStartedPulling="2024-06-03 11:00:50.177711601 +0000 UTC m=+230.215066328" lastFinishedPulling="2024-06-03 11:00:52.765353892 +0000 UTC m=+232.802708619" observedRunningTime="2024-06-03 11:00:53.062056352 +0000 UTC m=+233.099411102" watchObservedRunningTime="2024-06-03 11:10:20.811702779 +0000 UTC m=+800.849057523"
	Jun 03 11:10:21 ha-683480 kubelet[1378]: I0603 11:10:21.088328    1378 scope.go:117] "RemoveContainer" containerID="4b0d6949ee1d24934a07cf0a644346fca0258b096baf5ad06ca30011e7f39eb1"
	Jun 03 11:10:21 ha-683480 kubelet[1378]: E0603 11:10:21.088606    1378 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 40s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(a410a98d-73a7-434b-88ce-575c300b2807)\"" pod="kube-system/storage-provisioner" podUID="a410a98d-73a7-434b-88ce-575c300b2807"
	Jun 03 11:10:31 ha-683480 kubelet[1378]: I0603 11:10:31.087680    1378 kubelet.go:1908] "Trying to delete pod" pod="kube-system/kube-vip-ha-683480" podUID="aa6a05c5-446e-4179-be45-0f8d33631c89"
	Jun 03 11:10:31 ha-683480 kubelet[1378]: I0603 11:10:31.106589    1378 kubelet.go:1913] "Deleted mirror pod because it is outdated" pod="kube-system/kube-vip-ha-683480"
	Jun 03 11:10:36 ha-683480 kubelet[1378]: I0603 11:10:36.088457    1378 scope.go:117] "RemoveContainer" containerID="4b0d6949ee1d24934a07cf0a644346fca0258b096baf5ad06ca30011e7f39eb1"
	Jun 03 11:10:36 ha-683480 kubelet[1378]: I0603 11:10:36.324625    1378 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-vip-ha-683480" podStartSLOduration=5.324601276 podStartE2EDuration="5.324601276s" podCreationTimestamp="2024-06-03 11:10:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-06-03 11:10:36.302224517 +0000 UTC m=+816.339579264" watchObservedRunningTime="2024-06-03 11:10:36.324601276 +0000 UTC m=+816.361956023"
	Jun 03 11:11:00 ha-683480 kubelet[1378]: E0603 11:11:00.113008    1378 iptables.go:577] "Could not set up iptables canary" err=<
	Jun 03 11:11:00 ha-683480 kubelet[1378]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jun 03 11:11:00 ha-683480 kubelet[1378]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jun 03 11:11:00 ha-683480 kubelet[1378]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 03 11:11:00 ha-683480 kubelet[1378]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jun 03 11:12:00 ha-683480 kubelet[1378]: E0603 11:12:00.111498    1378 iptables.go:577] "Could not set up iptables canary" err=<
	Jun 03 11:12:00 ha-683480 kubelet[1378]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jun 03 11:12:00 ha-683480 kubelet[1378]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jun 03 11:12:00 ha-683480 kubelet[1378]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 03 11:12:00 ha-683480 kubelet[1378]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0603 11:12:04.517770   33818 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/19008-7755/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
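The "bufio.Scanner: token too long" failure above is the standard-library behaviour when a scanned line exceeds bufio's default 64 KiB token limit. A minimal sketch of reading a log file with very long lines by enlarging the scanner buffer (only the file path from the error message is reused; the program itself is illustrative, not the minikube code that hit the error):

	package main

	import (
		"bufio"
		"fmt"
		"os"
	)

	func main() {
		// Path taken from the error above; any file with very long lines reproduces the issue.
		f, err := os.Open("/home/jenkins/minikube-integration/19008-7755/.minikube/logs/lastStart.txt")
		if err != nil {
			fmt.Println(err)
			return
		}
		defer f.Close()

		sc := bufio.NewScanner(f)
		// Raise the maximum token size from the 64 KiB default to 1 MiB.
		sc.Buffer(make([]byte, 0, 64*1024), 1024*1024)
		for sc.Scan() {
			_ = sc.Text() // process each line
		}
		if err := sc.Err(); err != nil {
			// Without the larger buffer this is where "token too long" surfaces.
			fmt.Println("scan error:", err)
		}
	}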
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-683480 -n ha-683480
helpers_test.go:261: (dbg) Run:  kubectl --context ha-683480 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox-fc5497c4f-65zlf
helpers_test.go:274: ======> post-mortem[TestMultiControlPlane/serial/DeleteSecondaryNode]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context ha-683480 describe pod busybox-fc5497c4f-65zlf
helpers_test.go:282: (dbg) kubectl --context ha-683480 describe pod busybox-fc5497c4f-65zlf:

                                                
                                                
-- stdout --
	Name:             busybox-fc5497c4f-65zlf
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           app=busybox
	                  pod-template-hash=fc5497c4f
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/busybox-fc5497c4f
	Containers:
	  busybox:
	    Image:      gcr.io/k8s-minikube/busybox:1.28
	    Port:       <none>
	    Host Port:  <none>
	    Command:
	      sleep
	      3600
	    Environment:  <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-qqds8 (ro)
	Conditions:
	  Type           Status
	  PodScheduled   False 
	Volumes:
	  kube-api-access-qqds8:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason            Age                From               Message
	  ----     ------            ----               ----               -------
	  Warning  FailedScheduling  16s (x2 over 19s)  default-scheduler  0/4 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/unreachable: }, 1 node(s) were unschedulable, 2 node(s) didn't match pod anti-affinity rules. preemption: 0/4 nodes are available: 2 No preemption victims found for incoming pod, 2 Preemption is not helpful for scheduling.
	  Warning  FailedScheduling  16s (x2 over 19s)  default-scheduler  0/4 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/unreachable: }, 1 node(s) were unschedulable, 2 node(s) didn't match pod anti-affinity rules. preemption: 0/4 nodes are available: 2 No preemption victims found for incoming pod, 2 Preemption is not helpful for scheduling.
	  Warning  FailedScheduling  16s (x2 over 19s)  default-scheduler  0/4 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/unreachable: }, 1 node(s) were unschedulable, 2 node(s) didn't match pod anti-affinity rules. preemption: 0/4 nodes are available: 2 No preemption victims found for incoming pod, 2 Preemption is not helpful for scheduling.

                                                
                                                
-- /stdout --
helpers_test.go:285: <<< TestMultiControlPlane/serial/DeleteSecondaryNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/DeleteSecondaryNode (19.40s)
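For context on the post-mortem steps above (listing pods whose phase is not Running, then describing each one), here is a rough sketch of the same flow driven through kubectl. The helper name and error handling are illustrative, not the actual helpers_test.go code, and it assumes the pods of interest live in the default namespace, as the busybox pod above does:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// describeNonRunningPods mirrors the two kubectl invocations in the log:
	// first collect the names of non-running pods, then describe each of them.
	func describeNonRunningPods(kubeContext string) error {
		out, err := exec.Command("kubectl", "--context", kubeContext,
			"get", "po", "-o=jsonpath={.items[*].metadata.name}",
			"-A", "--field-selector=status.phase!=Running").Output()
		if err != nil {
			return fmt.Errorf("listing non-running pods: %w", err)
		}
		for _, pod := range strings.Fields(string(out)) {
			// Simplification: describe in the default namespace only.
			desc, err := exec.Command("kubectl", "--context", kubeContext,
				"describe", "pod", pod).CombinedOutput()
			if err != nil {
				return fmt.Errorf("describing pod %s: %w", pod, err)
			}
			fmt.Printf("%s\n", desc)
		}
		return nil
	}

	func main() {
		if err := describeNonRunningPods("ha-683480"); err != nil {
			fmt.Println(err)
		}
	}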

                                                
                                    
x
+
TestMultiControlPlane/serial/StopCluster (172.4s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:531: (dbg) Run:  out/minikube-linux-amd64 -p ha-683480 stop -v=7 --alsologtostderr
E0603 11:12:12.037917   15028 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/addons-926744/client.crt: no such file or directory
ha_test.go:531: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-683480 stop -v=7 --alsologtostderr: exit status 82 (2m1.679231365s)

                                                
                                                
-- stdout --
	* Stopping node "ha-683480-m04"  ...
	* Stopping node "ha-683480-m02"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0603 11:12:06.958135   33955 out.go:291] Setting OutFile to fd 1 ...
	I0603 11:12:06.958380   33955 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0603 11:12:06.958392   33955 out.go:304] Setting ErrFile to fd 2...
	I0603 11:12:06.958399   33955 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0603 11:12:06.958572   33955 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19008-7755/.minikube/bin
	I0603 11:12:06.958779   33955 out.go:298] Setting JSON to false
	I0603 11:12:06.958850   33955 mustload.go:65] Loading cluster: ha-683480
	I0603 11:12:06.959209   33955 config.go:182] Loaded profile config "ha-683480": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0603 11:12:06.959295   33955 profile.go:143] Saving config to /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/ha-683480/config.json ...
	I0603 11:12:06.959467   33955 mustload.go:65] Loading cluster: ha-683480
	I0603 11:12:06.959648   33955 config.go:182] Loaded profile config "ha-683480": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0603 11:12:06.959683   33955 stop.go:39] StopHost: ha-683480-m04
	I0603 11:12:06.960026   33955 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 11:12:06.960069   33955 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 11:12:06.975237   33955 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38601
	I0603 11:12:06.975688   33955 main.go:141] libmachine: () Calling .GetVersion
	I0603 11:12:06.976276   33955 main.go:141] libmachine: Using API Version  1
	I0603 11:12:06.976301   33955 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 11:12:06.976664   33955 main.go:141] libmachine: () Calling .GetMachineName
	I0603 11:12:06.978859   33955 out.go:177] * Stopping node "ha-683480-m04"  ...
	I0603 11:12:06.980297   33955 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0603 11:12:06.980334   33955 main.go:141] libmachine: (ha-683480-m04) Calling .DriverName
	I0603 11:12:06.980557   33955 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0603 11:12:06.980585   33955 main.go:141] libmachine: (ha-683480-m04) Calling .GetSSHHostname
	I0603 11:12:06.983614   33955 main.go:141] libmachine: (ha-683480-m04) DBG | domain ha-683480-m04 has defined MAC address 52:54:00:ed:4a:53 in network mk-ha-683480
	I0603 11:12:06.984000   33955 main.go:141] libmachine: (ha-683480-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ed:4a:53", ip: ""} in network mk-ha-683480: {Iface:virbr1 ExpiryTime:2024-06-03 12:11:36 +0000 UTC Type:0 Mac:52:54:00:ed:4a:53 Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:ha-683480-m04 Clientid:01:52:54:00:ed:4a:53}
	I0603 11:12:06.984031   33955 main.go:141] libmachine: (ha-683480-m04) DBG | domain ha-683480-m04 has defined IP address 192.168.39.206 and MAC address 52:54:00:ed:4a:53 in network mk-ha-683480
	I0603 11:12:06.984195   33955 main.go:141] libmachine: (ha-683480-m04) Calling .GetSSHPort
	I0603 11:12:06.984378   33955 main.go:141] libmachine: (ha-683480-m04) Calling .GetSSHKeyPath
	I0603 11:12:06.984532   33955 main.go:141] libmachine: (ha-683480-m04) Calling .GetSSHUsername
	I0603 11:12:06.984675   33955 sshutil.go:53] new ssh client: &{IP:192.168.39.206 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19008-7755/.minikube/machines/ha-683480-m04/id_rsa Username:docker}
	I0603 11:12:07.066898   33955 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0603 11:12:07.121229   33955 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	W0603 11:12:07.173287   33955 stop.go:55] failed to complete vm config backup (will continue): [failed to copy "/etc/kubernetes" to "/var/lib/minikube/backup" (will continue): sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup: Process exited with status 23
	stdout:
	
	stderr:
	rsync: [sender] link_stat "/etc/kubernetes" failed: No such file or directory (2)
	rsync error: some files/attrs were not transferred (see previous errors) (code 23) at main.c(1336) [sender=3.2.7]
	]
	I0603 11:12:07.173321   33955 main.go:141] libmachine: Stopping "ha-683480-m04"...
	I0603 11:12:07.173341   33955 main.go:141] libmachine: (ha-683480-m04) Calling .GetState
	I0603 11:12:07.174799   33955 main.go:141] libmachine: (ha-683480-m04) Calling .Stop
	I0603 11:12:07.178256   33955 main.go:141] libmachine: (ha-683480-m04) Waiting for machine to stop 0/120
	I0603 11:12:08.180093   33955 main.go:141] libmachine: (ha-683480-m04) Calling .GetState
	I0603 11:12:08.181453   33955 main.go:141] libmachine: Machine "ha-683480-m04" was stopped.
	I0603 11:12:08.181472   33955 stop.go:75] duration metric: took 1.201176625s to stop
	I0603 11:12:08.181506   33955 stop.go:39] StopHost: ha-683480-m02
	I0603 11:12:08.181861   33955 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 11:12:08.181899   33955 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 11:12:08.196640   33955 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41919
	I0603 11:12:08.197084   33955 main.go:141] libmachine: () Calling .GetVersion
	I0603 11:12:08.197615   33955 main.go:141] libmachine: Using API Version  1
	I0603 11:12:08.197656   33955 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 11:12:08.198025   33955 main.go:141] libmachine: () Calling .GetMachineName
	I0603 11:12:08.200444   33955 out.go:177] * Stopping node "ha-683480-m02"  ...
	I0603 11:12:08.201507   33955 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0603 11:12:08.201534   33955 main.go:141] libmachine: (ha-683480-m02) Calling .DriverName
	I0603 11:12:08.201741   33955 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0603 11:12:08.201762   33955 main.go:141] libmachine: (ha-683480-m02) Calling .GetSSHHostname
	I0603 11:12:08.204442   33955 main.go:141] libmachine: (ha-683480-m02) DBG | domain ha-683480-m02 has defined MAC address 52:54:00:00:55:50 in network mk-ha-683480
	I0603 11:12:08.204825   33955 main.go:141] libmachine: (ha-683480-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:55:50", ip: ""} in network mk-ha-683480: {Iface:virbr1 ExpiryTime:2024-06-03 12:09:12 +0000 UTC Type:0 Mac:52:54:00:00:55:50 Iaid: IPaddr:192.168.39.127 Prefix:24 Hostname:ha-683480-m02 Clientid:01:52:54:00:00:55:50}
	I0603 11:12:08.204857   33955 main.go:141] libmachine: (ha-683480-m02) DBG | domain ha-683480-m02 has defined IP address 192.168.39.127 and MAC address 52:54:00:00:55:50 in network mk-ha-683480
	I0603 11:12:08.205017   33955 main.go:141] libmachine: (ha-683480-m02) Calling .GetSSHPort
	I0603 11:12:08.205169   33955 main.go:141] libmachine: (ha-683480-m02) Calling .GetSSHKeyPath
	I0603 11:12:08.205306   33955 main.go:141] libmachine: (ha-683480-m02) Calling .GetSSHUsername
	I0603 11:12:08.205465   33955 sshutil.go:53] new ssh client: &{IP:192.168.39.127 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19008-7755/.minikube/machines/ha-683480-m02/id_rsa Username:docker}
	I0603 11:12:08.295317   33955 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0603 11:12:08.348984   33955 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0603 11:12:08.404365   33955 main.go:141] libmachine: Stopping "ha-683480-m02"...
	I0603 11:12:08.404387   33955 main.go:141] libmachine: (ha-683480-m02) Calling .GetState
	I0603 11:12:08.405852   33955 main.go:141] libmachine: (ha-683480-m02) Calling .Stop
	I0603 11:12:08.408894   33955 main.go:141] libmachine: (ha-683480-m02) Waiting for machine to stop 0/120
	I0603 11:12:09.410229   33955 main.go:141] libmachine: (ha-683480-m02) Waiting for machine to stop 1/120
	I0603 11:12:10.411680   33955 main.go:141] libmachine: (ha-683480-m02) Waiting for machine to stop 2/120
	I0603 11:12:11.413095   33955 main.go:141] libmachine: (ha-683480-m02) Waiting for machine to stop 3/120
	I0603 11:12:12.414491   33955 main.go:141] libmachine: (ha-683480-m02) Waiting for machine to stop 4/120
	I0603 11:12:13.416391   33955 main.go:141] libmachine: (ha-683480-m02) Waiting for machine to stop 5/120
	I0603 11:12:14.418816   33955 main.go:141] libmachine: (ha-683480-m02) Waiting for machine to stop 6/120
	I0603 11:12:15.420027   33955 main.go:141] libmachine: (ha-683480-m02) Waiting for machine to stop 7/120
	I0603 11:12:16.421479   33955 main.go:141] libmachine: (ha-683480-m02) Waiting for machine to stop 8/120
	I0603 11:12:17.423078   33955 main.go:141] libmachine: (ha-683480-m02) Waiting for machine to stop 9/120
	I0603 11:12:18.424934   33955 main.go:141] libmachine: (ha-683480-m02) Waiting for machine to stop 10/120
	I0603 11:12:19.426342   33955 main.go:141] libmachine: (ha-683480-m02) Waiting for machine to stop 11/120
	I0603 11:12:20.427636   33955 main.go:141] libmachine: (ha-683480-m02) Waiting for machine to stop 12/120
	I0603 11:12:21.429063   33955 main.go:141] libmachine: (ha-683480-m02) Waiting for machine to stop 13/120
	I0603 11:12:22.430695   33955 main.go:141] libmachine: (ha-683480-m02) Waiting for machine to stop 14/120
	I0603 11:12:23.432191   33955 main.go:141] libmachine: (ha-683480-m02) Waiting for machine to stop 15/120
	I0603 11:12:24.433770   33955 main.go:141] libmachine: (ha-683480-m02) Waiting for machine to stop 16/120
	I0603 11:12:25.435485   33955 main.go:141] libmachine: (ha-683480-m02) Waiting for machine to stop 17/120
	I0603 11:12:26.436948   33955 main.go:141] libmachine: (ha-683480-m02) Waiting for machine to stop 18/120
	I0603 11:12:27.438178   33955 main.go:141] libmachine: (ha-683480-m02) Waiting for machine to stop 19/120
	I0603 11:12:28.439961   33955 main.go:141] libmachine: (ha-683480-m02) Waiting for machine to stop 20/120
	I0603 11:12:29.441690   33955 main.go:141] libmachine: (ha-683480-m02) Waiting for machine to stop 21/120
	I0603 11:12:30.442940   33955 main.go:141] libmachine: (ha-683480-m02) Waiting for machine to stop 22/120
	I0603 11:12:31.444780   33955 main.go:141] libmachine: (ha-683480-m02) Waiting for machine to stop 23/120
	I0603 11:12:32.446298   33955 main.go:141] libmachine: (ha-683480-m02) Waiting for machine to stop 24/120
	I0603 11:12:33.448452   33955 main.go:141] libmachine: (ha-683480-m02) Waiting for machine to stop 25/120
	I0603 11:12:34.450022   33955 main.go:141] libmachine: (ha-683480-m02) Waiting for machine to stop 26/120
	I0603 11:12:35.451474   33955 main.go:141] libmachine: (ha-683480-m02) Waiting for machine to stop 27/120
	I0603 11:12:36.452927   33955 main.go:141] libmachine: (ha-683480-m02) Waiting for machine to stop 28/120
	I0603 11:12:37.454330   33955 main.go:141] libmachine: (ha-683480-m02) Waiting for machine to stop 29/120
	I0603 11:12:38.456507   33955 main.go:141] libmachine: (ha-683480-m02) Waiting for machine to stop 30/120
	I0603 11:12:39.457754   33955 main.go:141] libmachine: (ha-683480-m02) Waiting for machine to stop 31/120
	I0603 11:12:40.459124   33955 main.go:141] libmachine: (ha-683480-m02) Waiting for machine to stop 32/120
	I0603 11:12:41.460391   33955 main.go:141] libmachine: (ha-683480-m02) Waiting for machine to stop 33/120
	I0603 11:12:42.461806   33955 main.go:141] libmachine: (ha-683480-m02) Waiting for machine to stop 34/120
	I0603 11:12:43.463440   33955 main.go:141] libmachine: (ha-683480-m02) Waiting for machine to stop 35/120
	I0603 11:12:44.465416   33955 main.go:141] libmachine: (ha-683480-m02) Waiting for machine to stop 36/120
	I0603 11:12:45.466714   33955 main.go:141] libmachine: (ha-683480-m02) Waiting for machine to stop 37/120
	I0603 11:12:46.468776   33955 main.go:141] libmachine: (ha-683480-m02) Waiting for machine to stop 38/120
	I0603 11:12:47.470322   33955 main.go:141] libmachine: (ha-683480-m02) Waiting for machine to stop 39/120
	I0603 11:12:48.471911   33955 main.go:141] libmachine: (ha-683480-m02) Waiting for machine to stop 40/120
	I0603 11:12:49.473462   33955 main.go:141] libmachine: (ha-683480-m02) Waiting for machine to stop 41/120
	I0603 11:12:50.474764   33955 main.go:141] libmachine: (ha-683480-m02) Waiting for machine to stop 42/120
	I0603 11:12:51.475890   33955 main.go:141] libmachine: (ha-683480-m02) Waiting for machine to stop 43/120
	I0603 11:12:52.477153   33955 main.go:141] libmachine: (ha-683480-m02) Waiting for machine to stop 44/120
	I0603 11:12:53.478821   33955 main.go:141] libmachine: (ha-683480-m02) Waiting for machine to stop 45/120
	I0603 11:12:54.480686   33955 main.go:141] libmachine: (ha-683480-m02) Waiting for machine to stop 46/120
	I0603 11:12:55.482010   33955 main.go:141] libmachine: (ha-683480-m02) Waiting for machine to stop 47/120
	I0603 11:12:56.483230   33955 main.go:141] libmachine: (ha-683480-m02) Waiting for machine to stop 48/120
	I0603 11:12:57.485415   33955 main.go:141] libmachine: (ha-683480-m02) Waiting for machine to stop 49/120
	I0603 11:12:58.487320   33955 main.go:141] libmachine: (ha-683480-m02) Waiting for machine to stop 50/120
	I0603 11:12:59.488577   33955 main.go:141] libmachine: (ha-683480-m02) Waiting for machine to stop 51/120
	I0603 11:13:00.489818   33955 main.go:141] libmachine: (ha-683480-m02) Waiting for machine to stop 52/120
	I0603 11:13:01.491092   33955 main.go:141] libmachine: (ha-683480-m02) Waiting for machine to stop 53/120
	I0603 11:13:02.492384   33955 main.go:141] libmachine: (ha-683480-m02) Waiting for machine to stop 54/120
	I0603 11:13:03.493999   33955 main.go:141] libmachine: (ha-683480-m02) Waiting for machine to stop 55/120
	I0603 11:13:04.495147   33955 main.go:141] libmachine: (ha-683480-m02) Waiting for machine to stop 56/120
	I0603 11:13:05.496401   33955 main.go:141] libmachine: (ha-683480-m02) Waiting for machine to stop 57/120
	I0603 11:13:06.497610   33955 main.go:141] libmachine: (ha-683480-m02) Waiting for machine to stop 58/120
	I0603 11:13:07.499087   33955 main.go:141] libmachine: (ha-683480-m02) Waiting for machine to stop 59/120
	I0603 11:13:08.500337   33955 main.go:141] libmachine: (ha-683480-m02) Waiting for machine to stop 60/120
	I0603 11:13:09.502167   33955 main.go:141] libmachine: (ha-683480-m02) Waiting for machine to stop 61/120
	I0603 11:13:10.503610   33955 main.go:141] libmachine: (ha-683480-m02) Waiting for machine to stop 62/120
	I0603 11:13:11.505500   33955 main.go:141] libmachine: (ha-683480-m02) Waiting for machine to stop 63/120
	I0603 11:13:12.507148   33955 main.go:141] libmachine: (ha-683480-m02) Waiting for machine to stop 64/120
	I0603 11:13:13.508967   33955 main.go:141] libmachine: (ha-683480-m02) Waiting for machine to stop 65/120
	I0603 11:13:14.511022   33955 main.go:141] libmachine: (ha-683480-m02) Waiting for machine to stop 66/120
	I0603 11:13:15.512422   33955 main.go:141] libmachine: (ha-683480-m02) Waiting for machine to stop 67/120
	I0603 11:13:16.513926   33955 main.go:141] libmachine: (ha-683480-m02) Waiting for machine to stop 68/120
	I0603 11:13:17.515217   33955 main.go:141] libmachine: (ha-683480-m02) Waiting for machine to stop 69/120
	I0603 11:13:18.516844   33955 main.go:141] libmachine: (ha-683480-m02) Waiting for machine to stop 70/120
	I0603 11:13:19.518119   33955 main.go:141] libmachine: (ha-683480-m02) Waiting for machine to stop 71/120
	I0603 11:13:20.519619   33955 main.go:141] libmachine: (ha-683480-m02) Waiting for machine to stop 72/120
	I0603 11:13:21.520869   33955 main.go:141] libmachine: (ha-683480-m02) Waiting for machine to stop 73/120
	I0603 11:13:22.522244   33955 main.go:141] libmachine: (ha-683480-m02) Waiting for machine to stop 74/120
	I0603 11:13:23.524269   33955 main.go:141] libmachine: (ha-683480-m02) Waiting for machine to stop 75/120
	I0603 11:13:24.525596   33955 main.go:141] libmachine: (ha-683480-m02) Waiting for machine to stop 76/120
	I0603 11:13:25.527090   33955 main.go:141] libmachine: (ha-683480-m02) Waiting for machine to stop 77/120
	I0603 11:13:26.528459   33955 main.go:141] libmachine: (ha-683480-m02) Waiting for machine to stop 78/120
	I0603 11:13:27.529733   33955 main.go:141] libmachine: (ha-683480-m02) Waiting for machine to stop 79/120
	I0603 11:13:28.531376   33955 main.go:141] libmachine: (ha-683480-m02) Waiting for machine to stop 80/120
	I0603 11:13:29.532632   33955 main.go:141] libmachine: (ha-683480-m02) Waiting for machine to stop 81/120
	I0603 11:13:30.533847   33955 main.go:141] libmachine: (ha-683480-m02) Waiting for machine to stop 82/120
	I0603 11:13:31.535291   33955 main.go:141] libmachine: (ha-683480-m02) Waiting for machine to stop 83/120
	I0603 11:13:32.536649   33955 main.go:141] libmachine: (ha-683480-m02) Waiting for machine to stop 84/120
	I0603 11:13:33.538029   33955 main.go:141] libmachine: (ha-683480-m02) Waiting for machine to stop 85/120
	I0603 11:13:34.539217   33955 main.go:141] libmachine: (ha-683480-m02) Waiting for machine to stop 86/120
	I0603 11:13:35.541488   33955 main.go:141] libmachine: (ha-683480-m02) Waiting for machine to stop 87/120
	I0603 11:13:36.542622   33955 main.go:141] libmachine: (ha-683480-m02) Waiting for machine to stop 88/120
	I0603 11:13:37.543980   33955 main.go:141] libmachine: (ha-683480-m02) Waiting for machine to stop 89/120
	I0603 11:13:38.545674   33955 main.go:141] libmachine: (ha-683480-m02) Waiting for machine to stop 90/120
	I0603 11:13:39.546801   33955 main.go:141] libmachine: (ha-683480-m02) Waiting for machine to stop 91/120
	I0603 11:13:40.548239   33955 main.go:141] libmachine: (ha-683480-m02) Waiting for machine to stop 92/120
	I0603 11:13:41.549553   33955 main.go:141] libmachine: (ha-683480-m02) Waiting for machine to stop 93/120
	I0603 11:13:42.550829   33955 main.go:141] libmachine: (ha-683480-m02) Waiting for machine to stop 94/120
	I0603 11:13:43.552127   33955 main.go:141] libmachine: (ha-683480-m02) Waiting for machine to stop 95/120
	I0603 11:13:44.553556   33955 main.go:141] libmachine: (ha-683480-m02) Waiting for machine to stop 96/120
	I0603 11:13:45.554929   33955 main.go:141] libmachine: (ha-683480-m02) Waiting for machine to stop 97/120
	I0603 11:13:46.556446   33955 main.go:141] libmachine: (ha-683480-m02) Waiting for machine to stop 98/120
	I0603 11:13:47.557870   33955 main.go:141] libmachine: (ha-683480-m02) Waiting for machine to stop 99/120
	I0603 11:13:48.559742   33955 main.go:141] libmachine: (ha-683480-m02) Waiting for machine to stop 100/120
	I0603 11:13:49.561745   33955 main.go:141] libmachine: (ha-683480-m02) Waiting for machine to stop 101/120
	I0603 11:13:50.562991   33955 main.go:141] libmachine: (ha-683480-m02) Waiting for machine to stop 102/120
	I0603 11:13:51.564151   33955 main.go:141] libmachine: (ha-683480-m02) Waiting for machine to stop 103/120
	I0603 11:13:52.566031   33955 main.go:141] libmachine: (ha-683480-m02) Waiting for machine to stop 104/120
	I0603 11:13:53.567917   33955 main.go:141] libmachine: (ha-683480-m02) Waiting for machine to stop 105/120
	I0603 11:13:54.569025   33955 main.go:141] libmachine: (ha-683480-m02) Waiting for machine to stop 106/120
	I0603 11:13:55.570371   33955 main.go:141] libmachine: (ha-683480-m02) Waiting for machine to stop 107/120
	I0603 11:13:56.571670   33955 main.go:141] libmachine: (ha-683480-m02) Waiting for machine to stop 108/120
	I0603 11:13:57.572925   33955 main.go:141] libmachine: (ha-683480-m02) Waiting for machine to stop 109/120
	I0603 11:13:58.574499   33955 main.go:141] libmachine: (ha-683480-m02) Waiting for machine to stop 110/120
	I0603 11:13:59.575808   33955 main.go:141] libmachine: (ha-683480-m02) Waiting for machine to stop 111/120
	I0603 11:14:00.577095   33955 main.go:141] libmachine: (ha-683480-m02) Waiting for machine to stop 112/120
	I0603 11:14:01.578463   33955 main.go:141] libmachine: (ha-683480-m02) Waiting for machine to stop 113/120
	I0603 11:14:02.579820   33955 main.go:141] libmachine: (ha-683480-m02) Waiting for machine to stop 114/120
	I0603 11:14:03.581398   33955 main.go:141] libmachine: (ha-683480-m02) Waiting for machine to stop 115/120
	I0603 11:14:04.583554   33955 main.go:141] libmachine: (ha-683480-m02) Waiting for machine to stop 116/120
	I0603 11:14:05.585444   33955 main.go:141] libmachine: (ha-683480-m02) Waiting for machine to stop 117/120
	I0603 11:14:06.587378   33955 main.go:141] libmachine: (ha-683480-m02) Waiting for machine to stop 118/120
	I0603 11:14:07.588969   33955 main.go:141] libmachine: (ha-683480-m02) Waiting for machine to stop 119/120
	I0603 11:14:08.589507   33955 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0603 11:14:08.589548   33955 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0603 11:14:08.591360   33955 out.go:177] 
	W0603 11:14:08.592650   33955 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0603 11:14:08.592667   33955 out.go:239] * 
	* 
	W0603 11:14:08.595091   33955 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0603 11:14:08.596328   33955 out.go:177] 

                                                
                                                
** /stderr **
ha_test.go:533: failed to stop cluster. args "out/minikube-linux-amd64 -p ha-683480 stop -v=7 --alsologtostderr": exit status 82
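The long run of "Waiting for machine to stop N/120" lines followed by GUEST_STOP_TIMEOUT suggests a stop path that polls the VM state roughly once per second for up to 120 attempts and gives up if the machine is still running. A compact sketch of that pattern under those assumptions (the machine interface and the fake VM here are illustrative, not minikube's libmachine API; the real run uses 120 attempts rather than the 3 used for the demo):

	package main

	import (
		"errors"
		"fmt"
		"time"
	)

	// machine is a stand-in for whatever the driver exposes for stop/state.
	type machine interface {
		Stop() error
		State() (string, error)
	}

	// alwaysRunning models a VM that never leaves the Running state,
	// which is the failure mode seen in the log above.
	type alwaysRunning struct{}

	func (alwaysRunning) Stop() error            { return nil }
	func (alwaysRunning) State() (string, error) { return "Running", nil }

	func waitForStop(m machine, name string, attempts int) error {
		if err := m.Stop(); err != nil {
			return err
		}
		for i := 0; i < attempts; i++ {
			fmt.Printf("(%s) Waiting for machine to stop %d/%d\n", name, i, attempts)
			if st, err := m.State(); err == nil && st == "Stopped" {
				return nil
			}
			time.Sleep(time.Second)
		}
		return errors.New(`unable to stop vm, current state "Running"`)
	}

	func main() {
		if err := waitForStop(alwaysRunning{}, "ha-683480-m02", 3); err != nil {
			fmt.Println("stop err:", err)
		}
	}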
ha_test.go:537: (dbg) Run:  out/minikube-linux-amd64 -p ha-683480 status -v=7 --alsologtostderr
ha_test.go:537: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-683480 status -v=7 --alsologtostderr: exit status 7 (33.516536177s)

                                                
                                                
-- stdout --
	ha-683480
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Stopped
	kubeconfig: Configured
	
	ha-683480-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-683480-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0603 11:14:08.639728   34445 out.go:291] Setting OutFile to fd 1 ...
	I0603 11:14:08.639940   34445 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0603 11:14:08.639948   34445 out.go:304] Setting ErrFile to fd 2...
	I0603 11:14:08.639952   34445 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0603 11:14:08.640109   34445 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19008-7755/.minikube/bin
	I0603 11:14:08.640260   34445 out.go:298] Setting JSON to false
	I0603 11:14:08.640282   34445 mustload.go:65] Loading cluster: ha-683480
	I0603 11:14:08.640392   34445 notify.go:220] Checking for updates...
	I0603 11:14:08.640698   34445 config.go:182] Loaded profile config "ha-683480": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0603 11:14:08.640713   34445 status.go:255] checking status of ha-683480 ...
	I0603 11:14:08.641165   34445 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 11:14:08.641232   34445 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 11:14:08.660291   34445 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45023
	I0603 11:14:08.660654   34445 main.go:141] libmachine: () Calling .GetVersion
	I0603 11:14:08.661193   34445 main.go:141] libmachine: Using API Version  1
	I0603 11:14:08.661223   34445 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 11:14:08.661506   34445 main.go:141] libmachine: () Calling .GetMachineName
	I0603 11:14:08.661684   34445 main.go:141] libmachine: (ha-683480) Calling .GetState
	I0603 11:14:08.663177   34445 status.go:330] ha-683480 host status = "Running" (err=<nil>)
	I0603 11:14:08.663195   34445 host.go:66] Checking if "ha-683480" exists ...
	I0603 11:14:08.663512   34445 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 11:14:08.663548   34445 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 11:14:08.678271   34445 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33391
	I0603 11:14:08.678663   34445 main.go:141] libmachine: () Calling .GetVersion
	I0603 11:14:08.679127   34445 main.go:141] libmachine: Using API Version  1
	I0603 11:14:08.679167   34445 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 11:14:08.679466   34445 main.go:141] libmachine: () Calling .GetMachineName
	I0603 11:14:08.679642   34445 main.go:141] libmachine: (ha-683480) Calling .GetIP
	I0603 11:14:08.682070   34445 main.go:141] libmachine: (ha-683480) DBG | domain ha-683480 has defined MAC address 52:54:00:e5:3f:6a in network mk-ha-683480
	I0603 11:14:08.682474   34445 main.go:141] libmachine: (ha-683480) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:3f:6a", ip: ""} in network mk-ha-683480: {Iface:virbr1 ExpiryTime:2024-06-03 11:56:28 +0000 UTC Type:0 Mac:52:54:00:e5:3f:6a Iaid: IPaddr:192.168.39.116 Prefix:24 Hostname:ha-683480 Clientid:01:52:54:00:e5:3f:6a}
	I0603 11:14:08.682508   34445 main.go:141] libmachine: (ha-683480) DBG | domain ha-683480 has defined IP address 192.168.39.116 and MAC address 52:54:00:e5:3f:6a in network mk-ha-683480
	I0603 11:14:08.682636   34445 host.go:66] Checking if "ha-683480" exists ...
	I0603 11:14:08.682996   34445 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 11:14:08.683046   34445 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 11:14:08.696952   34445 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33303
	I0603 11:14:08.697585   34445 main.go:141] libmachine: () Calling .GetVersion
	I0603 11:14:08.697977   34445 main.go:141] libmachine: Using API Version  1
	I0603 11:14:08.697998   34445 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 11:14:08.698307   34445 main.go:141] libmachine: () Calling .GetMachineName
	I0603 11:14:08.698470   34445 main.go:141] libmachine: (ha-683480) Calling .DriverName
	I0603 11:14:08.698664   34445 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0603 11:14:08.698692   34445 main.go:141] libmachine: (ha-683480) Calling .GetSSHHostname
	I0603 11:14:08.701404   34445 main.go:141] libmachine: (ha-683480) DBG | domain ha-683480 has defined MAC address 52:54:00:e5:3f:6a in network mk-ha-683480
	I0603 11:14:08.701772   34445 main.go:141] libmachine: (ha-683480) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:3f:6a", ip: ""} in network mk-ha-683480: {Iface:virbr1 ExpiryTime:2024-06-03 11:56:28 +0000 UTC Type:0 Mac:52:54:00:e5:3f:6a Iaid: IPaddr:192.168.39.116 Prefix:24 Hostname:ha-683480 Clientid:01:52:54:00:e5:3f:6a}
	I0603 11:14:08.701807   34445 main.go:141] libmachine: (ha-683480) DBG | domain ha-683480 has defined IP address 192.168.39.116 and MAC address 52:54:00:e5:3f:6a in network mk-ha-683480
	I0603 11:14:08.701930   34445 main.go:141] libmachine: (ha-683480) Calling .GetSSHPort
	I0603 11:14:08.702091   34445 main.go:141] libmachine: (ha-683480) Calling .GetSSHKeyPath
	I0603 11:14:08.702225   34445 main.go:141] libmachine: (ha-683480) Calling .GetSSHUsername
	I0603 11:14:08.702328   34445 sshutil.go:53] new ssh client: &{IP:192.168.39.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19008-7755/.minikube/machines/ha-683480/id_rsa Username:docker}
	I0603 11:14:08.784243   34445 ssh_runner.go:195] Run: systemctl --version
	I0603 11:14:08.790873   34445 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0603 11:14:08.805347   34445 kubeconfig.go:125] found "ha-683480" server: "https://192.168.39.254:8443"
	I0603 11:14:08.805375   34445 api_server.go:166] Checking apiserver status ...
	I0603 11:14:08.805402   34445 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 11:14:08.820439   34445 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/6151/cgroup
	W0603 11:14:08.830739   34445 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/6151/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0603 11:14:08.830789   34445 ssh_runner.go:195] Run: ls
	I0603 11:14:08.835227   34445 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0603 11:14:13.835997   34445 api_server.go:269] stopped: https://192.168.39.254:8443/healthz: Get "https://192.168.39.254:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0603 11:14:13.836041   34445 retry.go:31] will retry after 263.470564ms: state is "Stopped"
	I0603 11:14:14.100534   34445 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0603 11:14:19.101654   34445 api_server.go:269] stopped: https://192.168.39.254:8443/healthz: Get "https://192.168.39.254:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0603 11:14:19.101698   34445 retry.go:31] will retry after 375.591767ms: state is "Stopped"
	I0603 11:14:19.478242   34445 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0603 11:14:20.207387   34445 api_server.go:269] stopped: https://192.168.39.254:8443/healthz: Get "https://192.168.39.254:8443/healthz": dial tcp 192.168.39.254:8443: connect: no route to host
	I0603 11:14:20.207440   34445 retry.go:31] will retry after 413.844315ms: state is "Stopped"
	I0603 11:14:20.622036   34445 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0603 11:14:23.695388   34445 api_server.go:269] stopped: https://192.168.39.254:8443/healthz: Get "https://192.168.39.254:8443/healthz": dial tcp 192.168.39.254:8443: connect: no route to host
	I0603 11:14:23.695448   34445 status.go:422] ha-683480 apiserver status = Running (err=<nil>)
	I0603 11:14:23.695458   34445 status.go:257] ha-683480 status: &{Name:ha-683480 Host:Running Kubelet:Running APIServer:Stopped Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0603 11:14:23.695482   34445 status.go:255] checking status of ha-683480-m02 ...
	I0603 11:14:23.695821   34445 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 11:14:23.695865   34445 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 11:14:23.710741   34445 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40943
	I0603 11:14:23.711224   34445 main.go:141] libmachine: () Calling .GetVersion
	I0603 11:14:23.711697   34445 main.go:141] libmachine: Using API Version  1
	I0603 11:14:23.711715   34445 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 11:14:23.711993   34445 main.go:141] libmachine: () Calling .GetMachineName
	I0603 11:14:23.712162   34445 main.go:141] libmachine: (ha-683480-m02) Calling .GetState
	I0603 11:14:23.713558   34445 status.go:330] ha-683480-m02 host status = "Running" (err=<nil>)
	I0603 11:14:23.713574   34445 host.go:66] Checking if "ha-683480-m02" exists ...
	I0603 11:14:23.713858   34445 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 11:14:23.713888   34445 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 11:14:23.727746   34445 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43447
	I0603 11:14:23.728085   34445 main.go:141] libmachine: () Calling .GetVersion
	I0603 11:14:23.728518   34445 main.go:141] libmachine: Using API Version  1
	I0603 11:14:23.728557   34445 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 11:14:23.728842   34445 main.go:141] libmachine: () Calling .GetMachineName
	I0603 11:14:23.729016   34445 main.go:141] libmachine: (ha-683480-m02) Calling .GetIP
	I0603 11:14:23.731673   34445 main.go:141] libmachine: (ha-683480-m02) DBG | domain ha-683480-m02 has defined MAC address 52:54:00:00:55:50 in network mk-ha-683480
	I0603 11:14:23.732087   34445 main.go:141] libmachine: (ha-683480-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:55:50", ip: ""} in network mk-ha-683480: {Iface:virbr1 ExpiryTime:2024-06-03 12:09:12 +0000 UTC Type:0 Mac:52:54:00:00:55:50 Iaid: IPaddr:192.168.39.127 Prefix:24 Hostname:ha-683480-m02 Clientid:01:52:54:00:00:55:50}
	I0603 11:14:23.732109   34445 main.go:141] libmachine: (ha-683480-m02) DBG | domain ha-683480-m02 has defined IP address 192.168.39.127 and MAC address 52:54:00:00:55:50 in network mk-ha-683480
	I0603 11:14:23.732281   34445 host.go:66] Checking if "ha-683480-m02" exists ...
	I0603 11:14:23.732554   34445 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 11:14:23.732585   34445 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 11:14:23.746699   34445 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41419
	I0603 11:14:23.747066   34445 main.go:141] libmachine: () Calling .GetVersion
	I0603 11:14:23.747491   34445 main.go:141] libmachine: Using API Version  1
	I0603 11:14:23.747515   34445 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 11:14:23.747807   34445 main.go:141] libmachine: () Calling .GetMachineName
	I0603 11:14:23.747968   34445 main.go:141] libmachine: (ha-683480-m02) Calling .DriverName
	I0603 11:14:23.748130   34445 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0603 11:14:23.748148   34445 main.go:141] libmachine: (ha-683480-m02) Calling .GetSSHHostname
	I0603 11:14:23.750755   34445 main.go:141] libmachine: (ha-683480-m02) DBG | domain ha-683480-m02 has defined MAC address 52:54:00:00:55:50 in network mk-ha-683480
	I0603 11:14:23.751190   34445 main.go:141] libmachine: (ha-683480-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:55:50", ip: ""} in network mk-ha-683480: {Iface:virbr1 ExpiryTime:2024-06-03 12:09:12 +0000 UTC Type:0 Mac:52:54:00:00:55:50 Iaid: IPaddr:192.168.39.127 Prefix:24 Hostname:ha-683480-m02 Clientid:01:52:54:00:00:55:50}
	I0603 11:14:23.751215   34445 main.go:141] libmachine: (ha-683480-m02) DBG | domain ha-683480-m02 has defined IP address 192.168.39.127 and MAC address 52:54:00:00:55:50 in network mk-ha-683480
	I0603 11:14:23.751347   34445 main.go:141] libmachine: (ha-683480-m02) Calling .GetSSHPort
	I0603 11:14:23.751479   34445 main.go:141] libmachine: (ha-683480-m02) Calling .GetSSHKeyPath
	I0603 11:14:23.751590   34445 main.go:141] libmachine: (ha-683480-m02) Calling .GetSSHUsername
	I0603 11:14:23.751683   34445 sshutil.go:53] new ssh client: &{IP:192.168.39.127 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19008-7755/.minikube/machines/ha-683480-m02/id_rsa Username:docker}
	W0603 11:14:42.095230   34445 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.127:22: connect: no route to host
	W0603 11:14:42.095316   34445 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.127:22: connect: no route to host
	E0603 11:14:42.095349   34445 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.127:22: connect: no route to host
	I0603 11:14:42.095359   34445 status.go:257] ha-683480-m02 status: &{Name:ha-683480-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0603 11:14:42.095375   34445 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.127:22: connect: no route to host
	I0603 11:14:42.095382   34445 status.go:255] checking status of ha-683480-m04 ...
	I0603 11:14:42.095666   34445 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 11:14:42.095704   34445 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 11:14:42.110250   34445 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38891
	I0603 11:14:42.110733   34445 main.go:141] libmachine: () Calling .GetVersion
	I0603 11:14:42.111233   34445 main.go:141] libmachine: Using API Version  1
	I0603 11:14:42.111253   34445 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 11:14:42.111569   34445 main.go:141] libmachine: () Calling .GetMachineName
	I0603 11:14:42.111732   34445 main.go:141] libmachine: (ha-683480-m04) Calling .GetState
	I0603 11:14:42.113356   34445 status.go:330] ha-683480-m04 host status = "Stopped" (err=<nil>)
	I0603 11:14:42.113366   34445 status.go:343] host is not running, skipping remaining checks
	I0603 11:14:42.113372   34445 status.go:257] ha-683480-m04 status: &{Name:ha-683480-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
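The stderr above shows where the status probe gives up on ha-683480-m02: the SSH dial to 192.168.39.127:22 keeps returning "no route to host", so status.go records the node as Host:Error and marks its kubelet and apiserver as Nonexistent. The probe that fails is the disk-usage check `df -h /var | awk 'NR==2{print $5}'` run over SSH. A minimal Go sketch of that kind of probe follows, assuming golang.org/x/crypto/ssh; the key path and timeout are illustrative placeholders, not the values minikube itself uses.

    package main

    import (
        "fmt"
        "log"
        "os"
        "time"

        "golang.org/x/crypto/ssh"
    )

    func main() {
        // Illustrative key path; the real run uses .minikube/machines/<name>/id_rsa.
        key, err := os.ReadFile("/path/to/id_rsa")
        if err != nil {
            log.Fatal(err)
        }
        signer, err := ssh.ParsePrivateKey(key)
        if err != nil {
            log.Fatal(err)
        }
        cfg := &ssh.ClientConfig{
            User:            "docker",
            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
            HostKeyCallback: ssh.InsecureIgnoreHostKey(),
            Timeout:         10 * time.Second, // a dead guest fails here instead of hanging
        }
        client, err := ssh.Dial("tcp", "192.168.39.127:22", cfg)
        if err != nil {
            // The branch this run hit: dial tcp 192.168.39.127:22: no route to host.
            log.Fatalf("status probe failed: %v", err)
        }
        defer client.Close()
        sess, err := client.NewSession()
        if err != nil {
            log.Fatal(err)
        }
        defer sess.Close()
        // Same check as the logged command: percentage used on /var.
        out, err := sess.CombinedOutput("df -h /var | awk 'NR==2{print $5}'")
        if err != nil {
            log.Fatal(err)
        }
        fmt.Printf("/var usage on m02: %s", out)
    }

With the m02 guest unreachable, ssh.Dial fails exactly as the log shows, so the caller can only report the node as errored rather than cleanly stopped.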
ha_test.go:546: status says there are running hosts: args "out/minikube-linux-amd64 -p ha-683480 status -v=7 --alsologtostderr": ha-683480
type: Control Plane
host: Running
kubelet: Running
apiserver: Stopped
kubeconfig: Configured

                                                
                                                
ha-683480-m02
type: Control Plane
host: Error
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Configured

                                                
                                                
ha-683480-m04
type: Worker
host: Stopped
kubelet: Stopped

                                                
                                                
ha_test.go:549: status says not three kubelets are stopped: args "out/minikube-linux-amd64 -p ha-683480 status -v=7 --alsologtostderr": ha-683480
type: Control Plane
host: Running
kubelet: Running
apiserver: Stopped
kubeconfig: Configured

                                                
                                                
ha-683480-m02
type: Control Plane
host: Error
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Configured

                                                
                                                
ha-683480-m04
type: Worker
host: Stopped
kubelet: Stopped

                                                
                                                
ha_test.go:552: status says not two apiservers are stopped: args "out/minikube-linux-amd64 -p ha-683480 status -v=7 --alsologtostderr": ha-683480
type: Control Plane
host: Running
kubelet: Running
apiserver: Stopped
kubeconfig: Configured

                                                
                                                
ha-683480-m02
type: Control Plane
host: Error
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Configured

                                                
                                                
ha-683480-m04
type: Worker
host: Stopped
kubelet: Stopped
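The three failed assertions above (ha_test.go:546, 549 and 552) amount to counting per-node fields in the status output: after the stop, the test expects no running hosts, three stopped kubelets and two stopped apiservers, but this run reports ha-683480 with host: Running, m02 in an error state and only m04 cleanly stopped. A rough sketch of that kind of tally over the plain-text status output is below; it illustrates the check, it is not the test's actual code.

    package main

    import (
        "bufio"
        "fmt"
        "os"
        "strings"
    )

    func main() {
        // Reads `minikube -p ha-683480 status` output from stdin and tallies the
        // per-node fields the assertions care about.
        var runningHosts, stoppedKubelets, stoppedAPIServers int
        sc := bufio.NewScanner(os.Stdin)
        for sc.Scan() {
            switch strings.TrimSpace(sc.Text()) {
            case "host: Running":
                runningHosts++
            case "kubelet: Stopped":
                stoppedKubelets++
            case "apiserver: Stopped":
                stoppedAPIServers++
            }
        }
        if err := sc.Err(); err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        fmt.Printf("running hosts=%d stopped kubelets=%d stopped apiservers=%d\n",
            runningHosts, stoppedKubelets, stoppedAPIServers)
    }

Fed the output above, it reports running hosts=1, stopped kubelets=1 and stopped apiservers=1, which is what trips all three assertions.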

                                                
                                                
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-683480 -n ha-683480
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p ha-683480 -n ha-683480: exit status 2 (15.595510952s)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
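The `--format={{.Host}}` flag renders each node's status through a Go text/template, which is why the post-mortem stdout above contains only the host field ("Running") while the degraded components surface only in the non-zero exit code that the helper notes may be OK. A small sketch of that template evaluation follows; the Status struct is an illustrative stand-in mirroring the fields visible in the status.go log lines (Name, Host, Kubelet, APIServer, Kubeconfig), not minikube's own type.

    package main

    import (
        "os"
        "text/template"
    )

    // Status is an illustrative stand-in for the per-node status record.
    type Status struct {
        Name       string
        Host       string
        Kubelet    string
        APIServer  string
        Kubeconfig string
    }

    func main() {
        st := Status{Name: "ha-683480", Host: "Running", Kubelet: "Running",
            APIServer: "Stopped", Kubeconfig: "Configured"}
        // --format={{.Host}} evaluates a template like this against each node.
        tmpl := template.Must(template.New("status").Parse("{{.Host}}\n"))
        if err := tmpl.Execute(os.Stdout, st); err != nil {
            panic(err)
        }
    }

Executing it prints just "Running", matching the stdout block captured by the post-mortem step.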
helpers_test.go:244: <<< TestMultiControlPlane/serial/StopCluster FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/StopCluster]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-683480 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-683480 logs -n 25: (1.339997512s)
helpers_test.go:252: TestMultiControlPlane/serial/StopCluster logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| ssh     | ha-683480 ssh -n ha-683480-m02 sudo cat                                          | ha-683480 | jenkins | v1.33.1 | 03 Jun 24 11:01 UTC | 03 Jun 24 11:01 UTC |
	|         | /home/docker/cp-test_ha-683480-m03_ha-683480-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-683480 cp ha-683480-m03:/home/docker/cp-test.txt                              | ha-683480 | jenkins | v1.33.1 | 03 Jun 24 11:01 UTC | 03 Jun 24 11:01 UTC |
	|         | ha-683480-m04:/home/docker/cp-test_ha-683480-m03_ha-683480-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-683480 ssh -n                                                                 | ha-683480 | jenkins | v1.33.1 | 03 Jun 24 11:01 UTC | 03 Jun 24 11:01 UTC |
	|         | ha-683480-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-683480 ssh -n ha-683480-m04 sudo cat                                          | ha-683480 | jenkins | v1.33.1 | 03 Jun 24 11:01 UTC | 03 Jun 24 11:01 UTC |
	|         | /home/docker/cp-test_ha-683480-m03_ha-683480-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-683480 cp testdata/cp-test.txt                                                | ha-683480 | jenkins | v1.33.1 | 03 Jun 24 11:01 UTC | 03 Jun 24 11:01 UTC |
	|         | ha-683480-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-683480 ssh -n                                                                 | ha-683480 | jenkins | v1.33.1 | 03 Jun 24 11:01 UTC | 03 Jun 24 11:01 UTC |
	|         | ha-683480-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-683480 cp ha-683480-m04:/home/docker/cp-test.txt                              | ha-683480 | jenkins | v1.33.1 | 03 Jun 24 11:01 UTC | 03 Jun 24 11:01 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile1985816295/001/cp-test_ha-683480-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-683480 ssh -n                                                                 | ha-683480 | jenkins | v1.33.1 | 03 Jun 24 11:01 UTC | 03 Jun 24 11:01 UTC |
	|         | ha-683480-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-683480 cp ha-683480-m04:/home/docker/cp-test.txt                              | ha-683480 | jenkins | v1.33.1 | 03 Jun 24 11:01 UTC | 03 Jun 24 11:01 UTC |
	|         | ha-683480:/home/docker/cp-test_ha-683480-m04_ha-683480.txt                       |           |         |         |                     |                     |
	| ssh     | ha-683480 ssh -n                                                                 | ha-683480 | jenkins | v1.33.1 | 03 Jun 24 11:01 UTC | 03 Jun 24 11:01 UTC |
	|         | ha-683480-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-683480 ssh -n ha-683480 sudo cat                                              | ha-683480 | jenkins | v1.33.1 | 03 Jun 24 11:01 UTC | 03 Jun 24 11:01 UTC |
	|         | /home/docker/cp-test_ha-683480-m04_ha-683480.txt                                 |           |         |         |                     |                     |
	| cp      | ha-683480 cp ha-683480-m04:/home/docker/cp-test.txt                              | ha-683480 | jenkins | v1.33.1 | 03 Jun 24 11:01 UTC | 03 Jun 24 11:01 UTC |
	|         | ha-683480-m02:/home/docker/cp-test_ha-683480-m04_ha-683480-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-683480 ssh -n                                                                 | ha-683480 | jenkins | v1.33.1 | 03 Jun 24 11:01 UTC | 03 Jun 24 11:01 UTC |
	|         | ha-683480-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-683480 ssh -n ha-683480-m02 sudo cat                                          | ha-683480 | jenkins | v1.33.1 | 03 Jun 24 11:01 UTC | 03 Jun 24 11:01 UTC |
	|         | /home/docker/cp-test_ha-683480-m04_ha-683480-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-683480 cp ha-683480-m04:/home/docker/cp-test.txt                              | ha-683480 | jenkins | v1.33.1 | 03 Jun 24 11:01 UTC | 03 Jun 24 11:01 UTC |
	|         | ha-683480-m03:/home/docker/cp-test_ha-683480-m04_ha-683480-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-683480 ssh -n                                                                 | ha-683480 | jenkins | v1.33.1 | 03 Jun 24 11:01 UTC | 03 Jun 24 11:01 UTC |
	|         | ha-683480-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-683480 ssh -n ha-683480-m03 sudo cat                                          | ha-683480 | jenkins | v1.33.1 | 03 Jun 24 11:01 UTC | 03 Jun 24 11:01 UTC |
	|         | /home/docker/cp-test_ha-683480-m04_ha-683480-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-683480 node stop m02 -v=7                                                     | ha-683480 | jenkins | v1.33.1 | 03 Jun 24 11:01 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | ha-683480 node start m02 -v=7                                                    | ha-683480 | jenkins | v1.33.1 | 03 Jun 24 11:04 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-683480 -v=7                                                           | ha-683480 | jenkins | v1.33.1 | 03 Jun 24 11:05 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| stop    | -p ha-683480 -v=7                                                                | ha-683480 | jenkins | v1.33.1 | 03 Jun 24 11:05 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| start   | -p ha-683480 --wait=true -v=7                                                    | ha-683480 | jenkins | v1.33.1 | 03 Jun 24 11:07 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-683480                                                                | ha-683480 | jenkins | v1.33.1 | 03 Jun 24 11:11 UTC |                     |
	| node    | ha-683480 node delete m03 -v=7                                                   | ha-683480 | jenkins | v1.33.1 | 03 Jun 24 11:11 UTC | 03 Jun 24 11:12 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| stop    | ha-683480 stop -v=7                                                              | ha-683480 | jenkins | v1.33.1 | 03 Jun 24 11:12 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/06/03 11:07:25
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0603 11:07:25.442619   32123 out.go:291] Setting OutFile to fd 1 ...
	I0603 11:07:25.442855   32123 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0603 11:07:25.442863   32123 out.go:304] Setting ErrFile to fd 2...
	I0603 11:07:25.442866   32123 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0603 11:07:25.443101   32123 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19008-7755/.minikube/bin
	I0603 11:07:25.443633   32123 out.go:298] Setting JSON to false
	I0603 11:07:25.444536   32123 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":2990,"bootTime":1717409855,"procs":186,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1060-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0603 11:07:25.444597   32123 start.go:139] virtualization: kvm guest
	I0603 11:07:25.446966   32123 out.go:177] * [ha-683480] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0603 11:07:25.448223   32123 out.go:177]   - MINIKUBE_LOCATION=19008
	I0603 11:07:25.448228   32123 notify.go:220] Checking for updates...
	I0603 11:07:25.449410   32123 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0603 11:07:25.450661   32123 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19008-7755/kubeconfig
	I0603 11:07:25.451979   32123 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19008-7755/.minikube
	I0603 11:07:25.453271   32123 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0603 11:07:25.454412   32123 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0603 11:07:25.456024   32123 config.go:182] Loaded profile config "ha-683480": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0603 11:07:25.456119   32123 driver.go:392] Setting default libvirt URI to qemu:///system
	I0603 11:07:25.456503   32123 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 11:07:25.456543   32123 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 11:07:25.477478   32123 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33881
	I0603 11:07:25.477915   32123 main.go:141] libmachine: () Calling .GetVersion
	I0603 11:07:25.478527   32123 main.go:141] libmachine: Using API Version  1
	I0603 11:07:25.478546   32123 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 11:07:25.478926   32123 main.go:141] libmachine: () Calling .GetMachineName
	I0603 11:07:25.479145   32123 main.go:141] libmachine: (ha-683480) Calling .DriverName
	I0603 11:07:25.513767   32123 out.go:177] * Using the kvm2 driver based on existing profile
	I0603 11:07:25.515068   32123 start.go:297] selected driver: kvm2
	I0603 11:07:25.515093   32123 start.go:901] validating driver "kvm2" against &{Name:ha-683480 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVer
sion:v1.30.1 ClusterName:ha-683480 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.116 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.127 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.131 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.206 Port:0 KubernetesVersion:v1.30.1 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false e
fk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:
9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0603 11:07:25.515277   32123 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0603 11:07:25.515652   32123 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0603 11:07:25.515720   32123 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19008-7755/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0603 11:07:25.531105   32123 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0603 11:07:25.531742   32123 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0603 11:07:25.531818   32123 cni.go:84] Creating CNI manager for ""
	I0603 11:07:25.531832   32123 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0603 11:07:25.531896   32123 start.go:340] cluster config:
	{Name:ha-683480 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:ha-683480 Namespace:default APIServerHAVIP:192.168.39
.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.116 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.127 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.131 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.206 Port:0 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-ti
ller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountP
ort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0603 11:07:25.532029   32123 iso.go:125] acquiring lock: {Name:mkdc8e745fc6a0fd8e502f6ad2510510ae9abf27 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0603 11:07:25.534347   32123 out.go:177] * Starting "ha-683480" primary control-plane node in "ha-683480" cluster
	I0603 11:07:25.535583   32123 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime crio
	I0603 11:07:25.535617   32123 preload.go:147] Found local preload: /home/jenkins/minikube-integration/19008-7755/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4
	I0603 11:07:25.535624   32123 cache.go:56] Caching tarball of preloaded images
	I0603 11:07:25.535711   32123 preload.go:173] Found /home/jenkins/minikube-integration/19008-7755/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0603 11:07:25.535722   32123 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on crio
	I0603 11:07:25.535838   32123 profile.go:143] Saving config to /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/ha-683480/config.json ...
	I0603 11:07:25.536024   32123 start.go:360] acquireMachinesLock for ha-683480: {Name:mk7389fd163f37e28f7d4d842bab151f7e27bc7c Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0603 11:07:25.536061   32123 start.go:364] duration metric: took 21.936µs to acquireMachinesLock for "ha-683480"
	I0603 11:07:25.536075   32123 start.go:96] Skipping create...Using existing machine configuration
	I0603 11:07:25.536082   32123 fix.go:54] fixHost starting: 
	I0603 11:07:25.536327   32123 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 11:07:25.536360   32123 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 11:07:25.550171   32123 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35679
	I0603 11:07:25.550615   32123 main.go:141] libmachine: () Calling .GetVersion
	I0603 11:07:25.551053   32123 main.go:141] libmachine: Using API Version  1
	I0603 11:07:25.551086   32123 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 11:07:25.551439   32123 main.go:141] libmachine: () Calling .GetMachineName
	I0603 11:07:25.551627   32123 main.go:141] libmachine: (ha-683480) Calling .DriverName
	I0603 11:07:25.551779   32123 main.go:141] libmachine: (ha-683480) Calling .GetState
	I0603 11:07:25.553075   32123 fix.go:112] recreateIfNeeded on ha-683480: state=Running err=<nil>
	W0603 11:07:25.553103   32123 fix.go:138] unexpected machine state, will restart: <nil>
	I0603 11:07:25.555822   32123 out.go:177] * Updating the running kvm2 "ha-683480" VM ...
	I0603 11:07:25.557278   32123 machine.go:94] provisionDockerMachine start ...
	I0603 11:07:25.557297   32123 main.go:141] libmachine: (ha-683480) Calling .DriverName
	I0603 11:07:25.557457   32123 main.go:141] libmachine: (ha-683480) Calling .GetSSHHostname
	I0603 11:07:25.559729   32123 main.go:141] libmachine: (ha-683480) DBG | domain ha-683480 has defined MAC address 52:54:00:e5:3f:6a in network mk-ha-683480
	I0603 11:07:25.560164   32123 main.go:141] libmachine: (ha-683480) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:3f:6a", ip: ""} in network mk-ha-683480: {Iface:virbr1 ExpiryTime:2024-06-03 11:56:28 +0000 UTC Type:0 Mac:52:54:00:e5:3f:6a Iaid: IPaddr:192.168.39.116 Prefix:24 Hostname:ha-683480 Clientid:01:52:54:00:e5:3f:6a}
	I0603 11:07:25.560190   32123 main.go:141] libmachine: (ha-683480) DBG | domain ha-683480 has defined IP address 192.168.39.116 and MAC address 52:54:00:e5:3f:6a in network mk-ha-683480
	I0603 11:07:25.560241   32123 main.go:141] libmachine: (ha-683480) Calling .GetSSHPort
	I0603 11:07:25.560397   32123 main.go:141] libmachine: (ha-683480) Calling .GetSSHKeyPath
	I0603 11:07:25.560552   32123 main.go:141] libmachine: (ha-683480) Calling .GetSSHKeyPath
	I0603 11:07:25.560663   32123 main.go:141] libmachine: (ha-683480) Calling .GetSSHUsername
	I0603 11:07:25.560826   32123 main.go:141] libmachine: Using SSH client type: native
	I0603 11:07:25.560998   32123 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.116 22 <nil> <nil>}
	I0603 11:07:25.561008   32123 main.go:141] libmachine: About to run SSH command:
	hostname
	I0603 11:07:25.664232   32123 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-683480
	
	I0603 11:07:25.664262   32123 main.go:141] libmachine: (ha-683480) Calling .GetMachineName
	I0603 11:07:25.664503   32123 buildroot.go:166] provisioning hostname "ha-683480"
	I0603 11:07:25.664525   32123 main.go:141] libmachine: (ha-683480) Calling .GetMachineName
	I0603 11:07:25.664710   32123 main.go:141] libmachine: (ha-683480) Calling .GetSSHHostname
	I0603 11:07:25.667431   32123 main.go:141] libmachine: (ha-683480) DBG | domain ha-683480 has defined MAC address 52:54:00:e5:3f:6a in network mk-ha-683480
	I0603 11:07:25.667816   32123 main.go:141] libmachine: (ha-683480) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:3f:6a", ip: ""} in network mk-ha-683480: {Iface:virbr1 ExpiryTime:2024-06-03 11:56:28 +0000 UTC Type:0 Mac:52:54:00:e5:3f:6a Iaid: IPaddr:192.168.39.116 Prefix:24 Hostname:ha-683480 Clientid:01:52:54:00:e5:3f:6a}
	I0603 11:07:25.667840   32123 main.go:141] libmachine: (ha-683480) DBG | domain ha-683480 has defined IP address 192.168.39.116 and MAC address 52:54:00:e5:3f:6a in network mk-ha-683480
	I0603 11:07:25.667952   32123 main.go:141] libmachine: (ha-683480) Calling .GetSSHPort
	I0603 11:07:25.668123   32123 main.go:141] libmachine: (ha-683480) Calling .GetSSHKeyPath
	I0603 11:07:25.668269   32123 main.go:141] libmachine: (ha-683480) Calling .GetSSHKeyPath
	I0603 11:07:25.668398   32123 main.go:141] libmachine: (ha-683480) Calling .GetSSHUsername
	I0603 11:07:25.668564   32123 main.go:141] libmachine: Using SSH client type: native
	I0603 11:07:25.668736   32123 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.116 22 <nil> <nil>}
	I0603 11:07:25.668760   32123 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-683480 && echo "ha-683480" | sudo tee /etc/hostname
	I0603 11:07:25.789898   32123 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-683480
	
	I0603 11:07:25.789922   32123 main.go:141] libmachine: (ha-683480) Calling .GetSSHHostname
	I0603 11:07:25.792463   32123 main.go:141] libmachine: (ha-683480) DBG | domain ha-683480 has defined MAC address 52:54:00:e5:3f:6a in network mk-ha-683480
	I0603 11:07:25.792857   32123 main.go:141] libmachine: (ha-683480) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:3f:6a", ip: ""} in network mk-ha-683480: {Iface:virbr1 ExpiryTime:2024-06-03 11:56:28 +0000 UTC Type:0 Mac:52:54:00:e5:3f:6a Iaid: IPaddr:192.168.39.116 Prefix:24 Hostname:ha-683480 Clientid:01:52:54:00:e5:3f:6a}
	I0603 11:07:25.792879   32123 main.go:141] libmachine: (ha-683480) DBG | domain ha-683480 has defined IP address 192.168.39.116 and MAC address 52:54:00:e5:3f:6a in network mk-ha-683480
	I0603 11:07:25.793043   32123 main.go:141] libmachine: (ha-683480) Calling .GetSSHPort
	I0603 11:07:25.793241   32123 main.go:141] libmachine: (ha-683480) Calling .GetSSHKeyPath
	I0603 11:07:25.793390   32123 main.go:141] libmachine: (ha-683480) Calling .GetSSHKeyPath
	I0603 11:07:25.793523   32123 main.go:141] libmachine: (ha-683480) Calling .GetSSHUsername
	I0603 11:07:25.793674   32123 main.go:141] libmachine: Using SSH client type: native
	I0603 11:07:25.793830   32123 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.116 22 <nil> <nil>}
	I0603 11:07:25.793845   32123 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-683480' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-683480/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-683480' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0603 11:07:25.895742   32123 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0603 11:07:25.895783   32123 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19008-7755/.minikube CaCertPath:/home/jenkins/minikube-integration/19008-7755/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19008-7755/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19008-7755/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19008-7755/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19008-7755/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19008-7755/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19008-7755/.minikube}
	I0603 11:07:25.895804   32123 buildroot.go:174] setting up certificates
	I0603 11:07:25.895816   32123 provision.go:84] configureAuth start
	I0603 11:07:25.895832   32123 main.go:141] libmachine: (ha-683480) Calling .GetMachineName
	I0603 11:07:25.896116   32123 main.go:141] libmachine: (ha-683480) Calling .GetIP
	I0603 11:07:25.898621   32123 main.go:141] libmachine: (ha-683480) DBG | domain ha-683480 has defined MAC address 52:54:00:e5:3f:6a in network mk-ha-683480
	I0603 11:07:25.898971   32123 main.go:141] libmachine: (ha-683480) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:3f:6a", ip: ""} in network mk-ha-683480: {Iface:virbr1 ExpiryTime:2024-06-03 11:56:28 +0000 UTC Type:0 Mac:52:54:00:e5:3f:6a Iaid: IPaddr:192.168.39.116 Prefix:24 Hostname:ha-683480 Clientid:01:52:54:00:e5:3f:6a}
	I0603 11:07:25.898995   32123 main.go:141] libmachine: (ha-683480) DBG | domain ha-683480 has defined IP address 192.168.39.116 and MAC address 52:54:00:e5:3f:6a in network mk-ha-683480
	I0603 11:07:25.899148   32123 main.go:141] libmachine: (ha-683480) Calling .GetSSHHostname
	I0603 11:07:25.901289   32123 main.go:141] libmachine: (ha-683480) DBG | domain ha-683480 has defined MAC address 52:54:00:e5:3f:6a in network mk-ha-683480
	I0603 11:07:25.901702   32123 main.go:141] libmachine: (ha-683480) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:3f:6a", ip: ""} in network mk-ha-683480: {Iface:virbr1 ExpiryTime:2024-06-03 11:56:28 +0000 UTC Type:0 Mac:52:54:00:e5:3f:6a Iaid: IPaddr:192.168.39.116 Prefix:24 Hostname:ha-683480 Clientid:01:52:54:00:e5:3f:6a}
	I0603 11:07:25.901727   32123 main.go:141] libmachine: (ha-683480) DBG | domain ha-683480 has defined IP address 192.168.39.116 and MAC address 52:54:00:e5:3f:6a in network mk-ha-683480
	I0603 11:07:25.901852   32123 provision.go:143] copyHostCerts
	I0603 11:07:25.901884   32123 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19008-7755/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19008-7755/.minikube/ca.pem
	I0603 11:07:25.901920   32123 exec_runner.go:144] found /home/jenkins/minikube-integration/19008-7755/.minikube/ca.pem, removing ...
	I0603 11:07:25.901937   32123 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19008-7755/.minikube/ca.pem
	I0603 11:07:25.902006   32123 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19008-7755/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19008-7755/.minikube/ca.pem (1082 bytes)
	I0603 11:07:25.902090   32123 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19008-7755/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19008-7755/.minikube/cert.pem
	I0603 11:07:25.902108   32123 exec_runner.go:144] found /home/jenkins/minikube-integration/19008-7755/.minikube/cert.pem, removing ...
	I0603 11:07:25.902113   32123 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19008-7755/.minikube/cert.pem
	I0603 11:07:25.902139   32123 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19008-7755/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19008-7755/.minikube/cert.pem (1123 bytes)
	I0603 11:07:25.902179   32123 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19008-7755/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19008-7755/.minikube/key.pem
	I0603 11:07:25.902197   32123 exec_runner.go:144] found /home/jenkins/minikube-integration/19008-7755/.minikube/key.pem, removing ...
	I0603 11:07:25.902206   32123 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19008-7755/.minikube/key.pem
	I0603 11:07:25.902235   32123 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19008-7755/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19008-7755/.minikube/key.pem (1679 bytes)
	I0603 11:07:25.902300   32123 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19008-7755/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19008-7755/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19008-7755/.minikube/certs/ca-key.pem org=jenkins.ha-683480 san=[127.0.0.1 192.168.39.116 ha-683480 localhost minikube]
	I0603 11:07:26.059416   32123 provision.go:177] copyRemoteCerts
	I0603 11:07:26.059473   32123 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0603 11:07:26.059498   32123 main.go:141] libmachine: (ha-683480) Calling .GetSSHHostname
	I0603 11:07:26.062155   32123 main.go:141] libmachine: (ha-683480) DBG | domain ha-683480 has defined MAC address 52:54:00:e5:3f:6a in network mk-ha-683480
	I0603 11:07:26.062608   32123 main.go:141] libmachine: (ha-683480) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:3f:6a", ip: ""} in network mk-ha-683480: {Iface:virbr1 ExpiryTime:2024-06-03 11:56:28 +0000 UTC Type:0 Mac:52:54:00:e5:3f:6a Iaid: IPaddr:192.168.39.116 Prefix:24 Hostname:ha-683480 Clientid:01:52:54:00:e5:3f:6a}
	I0603 11:07:26.062638   32123 main.go:141] libmachine: (ha-683480) DBG | domain ha-683480 has defined IP address 192.168.39.116 and MAC address 52:54:00:e5:3f:6a in network mk-ha-683480
	I0603 11:07:26.062833   32123 main.go:141] libmachine: (ha-683480) Calling .GetSSHPort
	I0603 11:07:26.062994   32123 main.go:141] libmachine: (ha-683480) Calling .GetSSHKeyPath
	I0603 11:07:26.063165   32123 main.go:141] libmachine: (ha-683480) Calling .GetSSHUsername
	I0603 11:07:26.063290   32123 sshutil.go:53] new ssh client: &{IP:192.168.39.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19008-7755/.minikube/machines/ha-683480/id_rsa Username:docker}
	I0603 11:07:26.146746   32123 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19008-7755/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0603 11:07:26.146810   32123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0603 11:07:26.174269   32123 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19008-7755/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0603 11:07:26.174353   32123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I0603 11:07:26.199835   32123 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19008-7755/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0603 11:07:26.199895   32123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0603 11:07:26.226453   32123 provision.go:87] duration metric: took 330.620757ms to configureAuth
	I0603 11:07:26.226484   32123 buildroot.go:189] setting minikube options for container-runtime
	I0603 11:07:26.226787   32123 config.go:182] Loaded profile config "ha-683480": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0603 11:07:26.226897   32123 main.go:141] libmachine: (ha-683480) Calling .GetSSHHostname
	I0603 11:07:26.229443   32123 main.go:141] libmachine: (ha-683480) DBG | domain ha-683480 has defined MAC address 52:54:00:e5:3f:6a in network mk-ha-683480
	I0603 11:07:26.229819   32123 main.go:141] libmachine: (ha-683480) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:3f:6a", ip: ""} in network mk-ha-683480: {Iface:virbr1 ExpiryTime:2024-06-03 11:56:28 +0000 UTC Type:0 Mac:52:54:00:e5:3f:6a Iaid: IPaddr:192.168.39.116 Prefix:24 Hostname:ha-683480 Clientid:01:52:54:00:e5:3f:6a}
	I0603 11:07:26.229840   32123 main.go:141] libmachine: (ha-683480) DBG | domain ha-683480 has defined IP address 192.168.39.116 and MAC address 52:54:00:e5:3f:6a in network mk-ha-683480
	I0603 11:07:26.230039   32123 main.go:141] libmachine: (ha-683480) Calling .GetSSHPort
	I0603 11:07:26.230233   32123 main.go:141] libmachine: (ha-683480) Calling .GetSSHKeyPath
	I0603 11:07:26.230407   32123 main.go:141] libmachine: (ha-683480) Calling .GetSSHKeyPath
	I0603 11:07:26.230524   32123 main.go:141] libmachine: (ha-683480) Calling .GetSSHUsername
	I0603 11:07:26.230689   32123 main.go:141] libmachine: Using SSH client type: native
	I0603 11:07:26.230900   32123 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.116 22 <nil> <nil>}
	I0603 11:07:26.230931   32123 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0603 11:08:57.164538   32123 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0603 11:08:57.164576   32123 machine.go:97] duration metric: took 1m31.607286329s to provisionDockerMachine
	I0603 11:08:57.164592   32123 start.go:293] postStartSetup for "ha-683480" (driver="kvm2")
	I0603 11:08:57.164608   32123 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0603 11:08:57.164635   32123 main.go:141] libmachine: (ha-683480) Calling .DriverName
	I0603 11:08:57.165008   32123 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0603 11:08:57.165037   32123 main.go:141] libmachine: (ha-683480) Calling .GetSSHHostname
	I0603 11:08:57.168289   32123 main.go:141] libmachine: (ha-683480) DBG | domain ha-683480 has defined MAC address 52:54:00:e5:3f:6a in network mk-ha-683480
	I0603 11:08:57.168694   32123 main.go:141] libmachine: (ha-683480) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:3f:6a", ip: ""} in network mk-ha-683480: {Iface:virbr1 ExpiryTime:2024-06-03 11:56:28 +0000 UTC Type:0 Mac:52:54:00:e5:3f:6a Iaid: IPaddr:192.168.39.116 Prefix:24 Hostname:ha-683480 Clientid:01:52:54:00:e5:3f:6a}
	I0603 11:08:57.168717   32123 main.go:141] libmachine: (ha-683480) DBG | domain ha-683480 has defined IP address 192.168.39.116 and MAC address 52:54:00:e5:3f:6a in network mk-ha-683480
	I0603 11:08:57.168888   32123 main.go:141] libmachine: (ha-683480) Calling .GetSSHPort
	I0603 11:08:57.169136   32123 main.go:141] libmachine: (ha-683480) Calling .GetSSHKeyPath
	I0603 11:08:57.169285   32123 main.go:141] libmachine: (ha-683480) Calling .GetSSHUsername
	I0603 11:08:57.169407   32123 sshutil.go:53] new ssh client: &{IP:192.168.39.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19008-7755/.minikube/machines/ha-683480/id_rsa Username:docker}
	I0603 11:08:57.251439   32123 ssh_runner.go:195] Run: cat /etc/os-release
	I0603 11:08:57.255917   32123 info.go:137] Remote host: Buildroot 2023.02.9
	I0603 11:08:57.255939   32123 filesync.go:126] Scanning /home/jenkins/minikube-integration/19008-7755/.minikube/addons for local assets ...
	I0603 11:08:57.255991   32123 filesync.go:126] Scanning /home/jenkins/minikube-integration/19008-7755/.minikube/files for local assets ...
	I0603 11:08:57.256063   32123 filesync.go:149] local asset: /home/jenkins/minikube-integration/19008-7755/.minikube/files/etc/ssl/certs/150282.pem -> 150282.pem in /etc/ssl/certs
	I0603 11:08:57.256072   32123 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19008-7755/.minikube/files/etc/ssl/certs/150282.pem -> /etc/ssl/certs/150282.pem
	I0603 11:08:57.256151   32123 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0603 11:08:57.266429   32123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/files/etc/ssl/certs/150282.pem --> /etc/ssl/certs/150282.pem (1708 bytes)
	I0603 11:08:57.290924   32123 start.go:296] duration metric: took 126.319085ms for postStartSetup
	I0603 11:08:57.290966   32123 main.go:141] libmachine: (ha-683480) Calling .DriverName
	I0603 11:08:57.291281   32123 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0603 11:08:57.291304   32123 main.go:141] libmachine: (ha-683480) Calling .GetSSHHostname
	I0603 11:08:57.293927   32123 main.go:141] libmachine: (ha-683480) DBG | domain ha-683480 has defined MAC address 52:54:00:e5:3f:6a in network mk-ha-683480
	I0603 11:08:57.294426   32123 main.go:141] libmachine: (ha-683480) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:3f:6a", ip: ""} in network mk-ha-683480: {Iface:virbr1 ExpiryTime:2024-06-03 11:56:28 +0000 UTC Type:0 Mac:52:54:00:e5:3f:6a Iaid: IPaddr:192.168.39.116 Prefix:24 Hostname:ha-683480 Clientid:01:52:54:00:e5:3f:6a}
	I0603 11:08:57.294457   32123 main.go:141] libmachine: (ha-683480) DBG | domain ha-683480 has defined IP address 192.168.39.116 and MAC address 52:54:00:e5:3f:6a in network mk-ha-683480
	I0603 11:08:57.294611   32123 main.go:141] libmachine: (ha-683480) Calling .GetSSHPort
	I0603 11:08:57.294774   32123 main.go:141] libmachine: (ha-683480) Calling .GetSSHKeyPath
	I0603 11:08:57.294937   32123 main.go:141] libmachine: (ha-683480) Calling .GetSSHUsername
	I0603 11:08:57.295094   32123 sshutil.go:53] new ssh client: &{IP:192.168.39.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19008-7755/.minikube/machines/ha-683480/id_rsa Username:docker}
	W0603 11:08:57.373411   32123 fix.go:99] cannot read backup folder, skipping restore: read dir: sudo ls --almost-all -1 /var/lib/minikube/backup: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/backup': No such file or directory
	I0603 11:08:57.373439   32123 fix.go:56] duration metric: took 1m31.837357572s for fixHost
	I0603 11:08:57.373460   32123 main.go:141] libmachine: (ha-683480) Calling .GetSSHHostname
	I0603 11:08:57.375924   32123 main.go:141] libmachine: (ha-683480) DBG | domain ha-683480 has defined MAC address 52:54:00:e5:3f:6a in network mk-ha-683480
	I0603 11:08:57.376280   32123 main.go:141] libmachine: (ha-683480) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:3f:6a", ip: ""} in network mk-ha-683480: {Iface:virbr1 ExpiryTime:2024-06-03 11:56:28 +0000 UTC Type:0 Mac:52:54:00:e5:3f:6a Iaid: IPaddr:192.168.39.116 Prefix:24 Hostname:ha-683480 Clientid:01:52:54:00:e5:3f:6a}
	I0603 11:08:57.376299   32123 main.go:141] libmachine: (ha-683480) DBG | domain ha-683480 has defined IP address 192.168.39.116 and MAC address 52:54:00:e5:3f:6a in network mk-ha-683480
	I0603 11:08:57.376459   32123 main.go:141] libmachine: (ha-683480) Calling .GetSSHPort
	I0603 11:08:57.376624   32123 main.go:141] libmachine: (ha-683480) Calling .GetSSHKeyPath
	I0603 11:08:57.376774   32123 main.go:141] libmachine: (ha-683480) Calling .GetSSHKeyPath
	I0603 11:08:57.376895   32123 main.go:141] libmachine: (ha-683480) Calling .GetSSHUsername
	I0603 11:08:57.377010   32123 main.go:141] libmachine: Using SSH client type: native
	I0603 11:08:57.377178   32123 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.116 22 <nil> <nil>}
	I0603 11:08:57.377187   32123 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0603 11:08:57.476064   32123 main.go:141] libmachine: SSH cmd err, output: <nil>: 1717412937.450872254
	
	I0603 11:08:57.476091   32123 fix.go:216] guest clock: 1717412937.450872254
	I0603 11:08:57.476097   32123 fix.go:229] Guest: 2024-06-03 11:08:57.450872254 +0000 UTC Remote: 2024-06-03 11:08:57.373446324 +0000 UTC m=+91.964564811 (delta=77.42593ms)
	I0603 11:08:57.476121   32123 fix.go:200] guest clock delta is within tolerance: 77.42593ms
	I0603 11:08:57.476126   32123 start.go:83] releasing machines lock for "ha-683480", held for 1m31.940055627s
	I0603 11:08:57.476143   32123 main.go:141] libmachine: (ha-683480) Calling .DriverName
	I0603 11:08:57.476451   32123 main.go:141] libmachine: (ha-683480) Calling .GetIP
	I0603 11:08:57.478829   32123 main.go:141] libmachine: (ha-683480) DBG | domain ha-683480 has defined MAC address 52:54:00:e5:3f:6a in network mk-ha-683480
	I0603 11:08:57.479315   32123 main.go:141] libmachine: (ha-683480) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:3f:6a", ip: ""} in network mk-ha-683480: {Iface:virbr1 ExpiryTime:2024-06-03 11:56:28 +0000 UTC Type:0 Mac:52:54:00:e5:3f:6a Iaid: IPaddr:192.168.39.116 Prefix:24 Hostname:ha-683480 Clientid:01:52:54:00:e5:3f:6a}
	I0603 11:08:57.479344   32123 main.go:141] libmachine: (ha-683480) DBG | domain ha-683480 has defined IP address 192.168.39.116 and MAC address 52:54:00:e5:3f:6a in network mk-ha-683480
	I0603 11:08:57.479439   32123 main.go:141] libmachine: (ha-683480) Calling .DriverName
	I0603 11:08:57.480003   32123 main.go:141] libmachine: (ha-683480) Calling .DriverName
	I0603 11:08:57.480192   32123 main.go:141] libmachine: (ha-683480) Calling .DriverName
	I0603 11:08:57.480283   32123 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0603 11:08:57.480338   32123 main.go:141] libmachine: (ha-683480) Calling .GetSSHHostname
	I0603 11:08:57.480387   32123 ssh_runner.go:195] Run: cat /version.json
	I0603 11:08:57.480410   32123 main.go:141] libmachine: (ha-683480) Calling .GetSSHHostname
	I0603 11:08:57.482838   32123 main.go:141] libmachine: (ha-683480) DBG | domain ha-683480 has defined MAC address 52:54:00:e5:3f:6a in network mk-ha-683480
	I0603 11:08:57.483029   32123 main.go:141] libmachine: (ha-683480) DBG | domain ha-683480 has defined MAC address 52:54:00:e5:3f:6a in network mk-ha-683480
	I0603 11:08:57.483284   32123 main.go:141] libmachine: (ha-683480) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:3f:6a", ip: ""} in network mk-ha-683480: {Iface:virbr1 ExpiryTime:2024-06-03 11:56:28 +0000 UTC Type:0 Mac:52:54:00:e5:3f:6a Iaid: IPaddr:192.168.39.116 Prefix:24 Hostname:ha-683480 Clientid:01:52:54:00:e5:3f:6a}
	I0603 11:08:57.483308   32123 main.go:141] libmachine: (ha-683480) DBG | domain ha-683480 has defined IP address 192.168.39.116 and MAC address 52:54:00:e5:3f:6a in network mk-ha-683480
	I0603 11:08:57.483488   32123 main.go:141] libmachine: (ha-683480) Calling .GetSSHPort
	I0603 11:08:57.483488   32123 main.go:141] libmachine: (ha-683480) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:3f:6a", ip: ""} in network mk-ha-683480: {Iface:virbr1 ExpiryTime:2024-06-03 11:56:28 +0000 UTC Type:0 Mac:52:54:00:e5:3f:6a Iaid: IPaddr:192.168.39.116 Prefix:24 Hostname:ha-683480 Clientid:01:52:54:00:e5:3f:6a}
	I0603 11:08:57.483544   32123 main.go:141] libmachine: (ha-683480) DBG | domain ha-683480 has defined IP address 192.168.39.116 and MAC address 52:54:00:e5:3f:6a in network mk-ha-683480
	I0603 11:08:57.483621   32123 main.go:141] libmachine: (ha-683480) Calling .GetSSHPort
	I0603 11:08:57.483692   32123 main.go:141] libmachine: (ha-683480) Calling .GetSSHKeyPath
	I0603 11:08:57.483755   32123 main.go:141] libmachine: (ha-683480) Calling .GetSSHKeyPath
	I0603 11:08:57.483826   32123 main.go:141] libmachine: (ha-683480) Calling .GetSSHUsername
	I0603 11:08:57.483891   32123 main.go:141] libmachine: (ha-683480) Calling .GetSSHUsername
	I0603 11:08:57.484014   32123 sshutil.go:53] new ssh client: &{IP:192.168.39.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19008-7755/.minikube/machines/ha-683480/id_rsa Username:docker}
	I0603 11:08:57.483975   32123 sshutil.go:53] new ssh client: &{IP:192.168.39.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19008-7755/.minikube/machines/ha-683480/id_rsa Username:docker}
	I0603 11:08:57.561311   32123 ssh_runner.go:195] Run: systemctl --version
	I0603 11:08:57.583380   32123 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0603 11:08:57.752344   32123 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0603 11:08:57.758604   32123 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0603 11:08:57.758677   32123 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0603 11:08:57.768166   32123 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0603 11:08:57.768192   32123 start.go:494] detecting cgroup driver to use...
	I0603 11:08:57.768244   32123 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0603 11:08:57.784730   32123 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0603 11:08:57.799955   32123 docker.go:217] disabling cri-docker service (if available) ...
	I0603 11:08:57.800006   32123 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0603 11:08:57.813623   32123 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0603 11:08:57.851455   32123 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0603 11:08:57.999998   32123 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0603 11:08:58.161448   32123 docker.go:233] disabling docker service ...
	I0603 11:08:58.161527   32123 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0603 11:08:58.178129   32123 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0603 11:08:58.192081   32123 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0603 11:08:58.341394   32123 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0603 11:08:58.490223   32123 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0603 11:08:58.504113   32123 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0603 11:08:58.524449   32123 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0603 11:08:58.524509   32123 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 11:08:58.535157   32123 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0603 11:08:58.535218   32123 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 11:08:58.545448   32123 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 11:08:58.556068   32123 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 11:08:58.566406   32123 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0603 11:08:58.577992   32123 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 11:08:58.588771   32123 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 11:08:58.599846   32123 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
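Taken together, the sed edits above converge on a CRI-O drop-in roughly like the following. This is an approximate sketch (section layout assumed), not the verbatim contents of 02-crio.conf on the VM:

[crio.image]
pause_image = "registry.k8s.io/pause:3.9"

[crio.runtime]
cgroup_manager = "cgroupfs"
conmon_cgroup = "pod"
default_sysctls = [
  "net.ipv4.ip_unprivileged_port_start=0",
]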
	I0603 11:08:58.611253   32123 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0603 11:08:58.621549   32123 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0603 11:08:58.631028   32123 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0603 11:08:58.773906   32123 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0603 11:09:00.429585   32123 ssh_runner.go:235] Completed: sudo systemctl restart crio: (1.655639068s)
	I0603 11:09:00.429609   32123 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0603 11:09:00.429650   32123 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0603 11:09:00.435134   32123 start.go:562] Will wait 60s for crictl version
	I0603 11:09:00.435178   32123 ssh_runner.go:195] Run: which crictl
	I0603 11:09:00.438893   32123 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0603 11:09:00.479635   32123 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0603 11:09:00.479716   32123 ssh_runner.go:195] Run: crio --version
	I0603 11:09:00.508784   32123 ssh_runner.go:195] Run: crio --version
	I0603 11:09:00.540764   32123 out.go:177] * Preparing Kubernetes v1.30.1 on CRI-O 1.29.1 ...
	I0603 11:09:00.542271   32123 main.go:141] libmachine: (ha-683480) Calling .GetIP
	I0603 11:09:00.544914   32123 main.go:141] libmachine: (ha-683480) DBG | domain ha-683480 has defined MAC address 52:54:00:e5:3f:6a in network mk-ha-683480
	I0603 11:09:00.545320   32123 main.go:141] libmachine: (ha-683480) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:3f:6a", ip: ""} in network mk-ha-683480: {Iface:virbr1 ExpiryTime:2024-06-03 11:56:28 +0000 UTC Type:0 Mac:52:54:00:e5:3f:6a Iaid: IPaddr:192.168.39.116 Prefix:24 Hostname:ha-683480 Clientid:01:52:54:00:e5:3f:6a}
	I0603 11:09:00.545352   32123 main.go:141] libmachine: (ha-683480) DBG | domain ha-683480 has defined IP address 192.168.39.116 and MAC address 52:54:00:e5:3f:6a in network mk-ha-683480
	I0603 11:09:00.545521   32123 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0603 11:09:00.550299   32123 kubeadm.go:877] updating cluster {Name:ha-683480 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 Cl
usterName:ha-683480 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.116 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.127 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.131 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.206 Port:0 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false fr
eshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L Mo
untGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0603 11:09:00.550441   32123 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime crio
	I0603 11:09:00.550491   32123 ssh_runner.go:195] Run: sudo crictl images --output json
	I0603 11:09:00.600204   32123 crio.go:514] all images are preloaded for cri-o runtime.
	I0603 11:09:00.600227   32123 crio.go:433] Images already preloaded, skipping extraction
	I0603 11:09:00.600277   32123 ssh_runner.go:195] Run: sudo crictl images --output json
	I0603 11:09:00.636579   32123 crio.go:514] all images are preloaded for cri-o runtime.
	I0603 11:09:00.636599   32123 cache_images.go:84] Images are preloaded, skipping loading
	I0603 11:09:00.636614   32123 kubeadm.go:928] updating node { 192.168.39.116 8443 v1.30.1 crio true true} ...
	I0603 11:09:00.636714   32123 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-683480 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.116
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.1 ClusterName:ha-683480 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
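In the kubelet unit above, the empty ExecStart= line is the usual systemd idiom: it clears any ExecStart inherited from the base unit so the minikube-specific command line on the following line fully replaces it rather than being appended to it.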
	I0603 11:09:00.636779   32123 ssh_runner.go:195] Run: crio config
	I0603 11:09:00.686623   32123 cni.go:84] Creating CNI manager for ""
	I0603 11:09:00.686644   32123 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0603 11:09:00.686656   32123 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0603 11:09:00.686688   32123 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.116 APIServerPort:8443 KubernetesVersion:v1.30.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-683480 NodeName:ha-683480 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.116"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.116 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernete
s/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0603 11:09:00.686867   32123 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.116
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-683480"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.116
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.116"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0603 11:09:00.686895   32123 kube-vip.go:115] generating kube-vip config ...
	I0603 11:09:00.686945   32123 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0603 11:09:00.699149   32123 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0603 11:09:00.699266   32123 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
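Once kube-vip holds the virtual IP declared above, the HA apiserver endpoint can be probed from inside the cluster network. The sketch below is not part of minikube; the address 192.168.39.254:8443 is taken from the config, and InsecureSkipVerify is used only because this probe does not carry the cluster CA.

// probe_vip.go - minimal sketch of checking that the kube-vip VIP answers on 8443.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// Skipping verification because this sketch does not load the minikube CA.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	// /healthz typically answers 200 (or 401/403 without credentials, depending on RBAC),
	// which is enough to confirm the VIP is forwarding to a live apiserver.
	resp, err := client.Get("https://192.168.39.254:8443/healthz")
	if err != nil {
		fmt.Println("VIP not reachable:", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("VIP answered with status:", resp.Status)
}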
	I0603 11:09:00.699330   32123 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.1
	I0603 11:09:00.709452   32123 binaries.go:44] Found k8s binaries, skipping transfer
	I0603 11:09:00.709523   32123 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0603 11:09:00.719357   32123 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I0603 11:09:00.737341   32123 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0603 11:09:00.753811   32123 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2153 bytes)
	I0603 11:09:00.770330   32123 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0603 11:09:00.788590   32123 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0603 11:09:00.792380   32123 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0603 11:09:00.938633   32123 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0603 11:09:00.954663   32123 certs.go:68] Setting up /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/ha-683480 for IP: 192.168.39.116
	I0603 11:09:00.954680   32123 certs.go:194] generating shared ca certs ...
	I0603 11:09:00.954695   32123 certs.go:226] acquiring lock for ca certs: {Name:mk50092df3bdecbf959926adc377ee06b75aacd9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 11:09:00.954853   32123 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19008-7755/.minikube/ca.key
	I0603 11:09:00.954909   32123 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19008-7755/.minikube/proxy-client-ca.key
	I0603 11:09:00.954920   32123 certs.go:256] generating profile certs ...
	I0603 11:09:00.954999   32123 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/ha-683480/client.key
	I0603 11:09:00.955025   32123 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/ha-683480/apiserver.key.e3f31f3b
	I0603 11:09:00.955066   32123 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/ha-683480/apiserver.crt.e3f31f3b with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.116 192.168.39.127 192.168.39.131 192.168.39.254]
	I0603 11:09:01.074478   32123 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/ha-683480/apiserver.crt.e3f31f3b ...
	I0603 11:09:01.074507   32123 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/ha-683480/apiserver.crt.e3f31f3b: {Name:mk90aaec59622d5605c25e50123cffa72ad4fa74 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 11:09:01.074671   32123 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/ha-683480/apiserver.key.e3f31f3b ...
	I0603 11:09:01.074682   32123 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/ha-683480/apiserver.key.e3f31f3b: {Name:mke0afd6700871b17032b676d43a247d77a3697b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 11:09:01.074747   32123 certs.go:381] copying /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/ha-683480/apiserver.crt.e3f31f3b -> /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/ha-683480/apiserver.crt
	I0603 11:09:01.074893   32123 certs.go:385] copying /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/ha-683480/apiserver.key.e3f31f3b -> /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/ha-683480/apiserver.key
	I0603 11:09:01.075011   32123 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/ha-683480/proxy-client.key
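The apiserver certificate generated above carries the service IP, localhost, all three control-plane node IPs, and the VIP as IP SANs. The following is a minimal sketch (not minikube's crypto.go) of issuing a certificate with those same SANs, signed by a freshly generated CA standing in for minikubeCA; error handling is elided and the output path is illustrative.

// make_apiserver_cert.go - sketch of issuing a server cert with the IP SANs listed above.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Throwaway CA standing in for minikubeCA (errors elided for brevity).
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(10, 0, 0),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Apiserver leaf cert with the service, localhost, node, and VIP addresses as SANs.
	leafKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	leafTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(3, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses: []net.IP{
			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"), net.ParseIP("10.0.0.1"),
			net.ParseIP("192.168.39.116"), net.ParseIP("192.168.39.127"),
			net.ParseIP("192.168.39.131"), net.ParseIP("192.168.39.254"),
		},
	}
	leafDER, _ := x509.CreateCertificate(rand.Reader, leafTmpl, caCert, &leafKey.PublicKey, caKey)
	_ = os.WriteFile("apiserver.crt", pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: leafDER}), 0o644)
}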
	I0603 11:09:01.075026   32123 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19008-7755/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0603 11:09:01.075095   32123 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19008-7755/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0603 11:09:01.075116   32123 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19008-7755/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0603 11:09:01.075128   32123 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19008-7755/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0603 11:09:01.075141   32123 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/ha-683480/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0603 11:09:01.075153   32123 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/ha-683480/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0603 11:09:01.075165   32123 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/ha-683480/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0603 11:09:01.075177   32123 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/ha-683480/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0603 11:09:01.075228   32123 certs.go:484] found cert: /home/jenkins/minikube-integration/19008-7755/.minikube/certs/15028.pem (1338 bytes)
	W0603 11:09:01.075265   32123 certs.go:480] ignoring /home/jenkins/minikube-integration/19008-7755/.minikube/certs/15028_empty.pem, impossibly tiny 0 bytes
	I0603 11:09:01.075274   32123 certs.go:484] found cert: /home/jenkins/minikube-integration/19008-7755/.minikube/certs/ca-key.pem (1679 bytes)
	I0603 11:09:01.075293   32123 certs.go:484] found cert: /home/jenkins/minikube-integration/19008-7755/.minikube/certs/ca.pem (1082 bytes)
	I0603 11:09:01.075314   32123 certs.go:484] found cert: /home/jenkins/minikube-integration/19008-7755/.minikube/certs/cert.pem (1123 bytes)
	I0603 11:09:01.075334   32123 certs.go:484] found cert: /home/jenkins/minikube-integration/19008-7755/.minikube/certs/key.pem (1679 bytes)
	I0603 11:09:01.075369   32123 certs.go:484] found cert: /home/jenkins/minikube-integration/19008-7755/.minikube/files/etc/ssl/certs/150282.pem (1708 bytes)
	I0603 11:09:01.075397   32123 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19008-7755/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0603 11:09:01.075412   32123 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19008-7755/.minikube/certs/15028.pem -> /usr/share/ca-certificates/15028.pem
	I0603 11:09:01.075423   32123 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19008-7755/.minikube/files/etc/ssl/certs/150282.pem -> /usr/share/ca-certificates/150282.pem
	I0603 11:09:01.075983   32123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0603 11:09:01.101929   32123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0603 11:09:01.126780   32123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0603 11:09:01.151427   32123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0603 11:09:01.175069   32123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/ha-683480/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0603 11:09:01.198877   32123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/ha-683480/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0603 11:09:01.221819   32123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/ha-683480/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0603 11:09:01.245043   32123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/ha-683480/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0603 11:09:01.268520   32123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0603 11:09:01.292182   32123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/certs/15028.pem --> /usr/share/ca-certificates/15028.pem (1338 bytes)
	I0603 11:09:01.316481   32123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/files/etc/ssl/certs/150282.pem --> /usr/share/ca-certificates/150282.pem (1708 bytes)
	I0603 11:09:01.340006   32123 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0603 11:09:01.356593   32123 ssh_runner.go:195] Run: openssl version
	I0603 11:09:01.362366   32123 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/150282.pem && ln -fs /usr/share/ca-certificates/150282.pem /etc/ssl/certs/150282.pem"
	I0603 11:09:01.373561   32123 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/150282.pem
	I0603 11:09:01.377979   32123 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jun  3 10:51 /usr/share/ca-certificates/150282.pem
	I0603 11:09:01.378028   32123 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/150282.pem
	I0603 11:09:01.383817   32123 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/150282.pem /etc/ssl/certs/3ec20f2e.0"
	I0603 11:09:01.393943   32123 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0603 11:09:01.404966   32123 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0603 11:09:01.409235   32123 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jun  3 10:39 /usr/share/ca-certificates/minikubeCA.pem
	I0603 11:09:01.409284   32123 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0603 11:09:01.414756   32123 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0603 11:09:01.425087   32123 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15028.pem && ln -fs /usr/share/ca-certificates/15028.pem /etc/ssl/certs/15028.pem"
	I0603 11:09:01.436313   32123 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15028.pem
	I0603 11:09:01.441074   32123 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jun  3 10:51 /usr/share/ca-certificates/15028.pem
	I0603 11:09:01.441123   32123 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15028.pem
	I0603 11:09:01.446671   32123 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/15028.pem /etc/ssl/certs/51391683.0"
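The openssl x509 -hash calls and the <hash>.0 symlinks created above (3ec20f2e.0, b5213941.0, 51391683.0) follow OpenSSL's CA lookup convention: certificates in /etc/ssl/certs are located by the hash of their subject name, so each certificate copied into /usr/share/ca-certificates gets a symlink named after that hash with a .0 suffix.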
	I0603 11:09:01.456214   32123 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0603 11:09:01.460571   32123 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0603 11:09:01.466138   32123 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0603 11:09:01.471498   32123 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0603 11:09:01.476939   32123 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0603 11:09:01.482385   32123 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0603 11:09:01.487689   32123 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
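The openssl x509 -checkend 86400 runs above verify that each control-plane certificate remains valid for at least another 24 hours. A minimal Go sketch of the same check (the path is illustrative):

// checkend.go - sketch of the 24-hour expiry check performed above.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func main() {
	data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
	if err != nil {
		fmt.Println("read:", err)
		return
	}
	block, _ := pem.Decode(data)
	if block == nil {
		fmt.Println("no PEM block found")
		return
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		fmt.Println("parse:", err)
		return
	}
	// Equivalent to `openssl x509 -checkend 86400`: fail if the cert expires within a day.
	if time.Until(cert.NotAfter) < 24*time.Hour {
		fmt.Println("certificate expires within 24h:", cert.NotAfter)
	} else {
		fmt.Println("certificate valid until:", cert.NotAfter)
	}
}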
	I0603 11:09:01.493220   32123 kubeadm.go:391] StartCluster: {Name:ha-683480 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 Clust
erName:ha-683480 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.116 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.127 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.131 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.206 Port:0 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false fresh
pod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L Mount
GID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0603 11:09:01.493322   32123 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0603 11:09:01.493398   32123 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0603 11:09:01.531471   32123 cri.go:89] found id: "f5e2a3e9cad2d3850b8c7cc462cbf093f62660cc5ed878de3fb697df8f7e849d"
	I0603 11:09:01.531494   32123 cri.go:89] found id: "0a2affa40fe5e43b29d1f89794f211acafce31faab220ad3254ea3ae9b81455e"
	I0603 11:09:01.531498   32123 cri.go:89] found id: "f1ac445f3c0b1f52f27caee3ee4ec90408d1b4670e8e93efdec8e3902e0de9b8"
	I0603 11:09:01.531500   32123 cri.go:89] found id: "9c8a6029966c17e71158a2045e39b094dfec93e361d3cd11049c550057d16295"
	I0603 11:09:01.531503   32123 cri.go:89] found id: "b5e9b65b02107aa343d9bd2938c82d12641166c15c0364265fb74b1a00b58a60"
	I0603 11:09:01.531507   32123 cri.go:89] found id: "fdbecc258023e10eac66da5599945eae2f7f8735769b825a69aea8b2effce668"
	I0603 11:09:01.531509   32123 cri.go:89] found id: "aa5e3aca86502907c8d16e6a2327b8f4298b6076617819ceed2b250ae9b24fe8"
	I0603 11:09:01.531512   32123 cri.go:89] found id: "995fa288cd9162aa7fa350ae7a02800593a524c7300a6fa984b62ba4b928891b"
	I0603 11:09:01.531514   32123 cri.go:89] found id: "bcb102231e3a6bc3ea0cc39665baaebb0a97c42874b6cd34e86c04e87532df4f"
	I0603 11:09:01.531520   32123 cri.go:89] found id: "2542929b8eaa1ecd8c858dbb7e4812ddb5121109c3c92127fa7eaae86849ebda"
	I0603 11:09:01.531526   32123 cri.go:89] found id: "3e27550ee88e8dcb6316daece49f9840028efa3091db03e5549e1e3dbbd8ad59"
	I0603 11:09:01.531530   32123 cri.go:89] found id: "c282307764128f62fdee736d5e1ecddfbca0ae7ae2f78b7a78cbdb2dcede8556"
	I0603 11:09:01.531535   32123 cri.go:89] found id: "09fff5459f24c748a0e085f496bf2b65db572d97be0afe906f05511398bdb0ad"
	I0603 11:09:01.531539   32123 cri.go:89] found id: "200682c1dc43f01036807986e0c3bfe0b422726ec352be0df5e42fa79426ed79"
	I0603 11:09:01.531545   32123 cri.go:89] found id: ""
	I0603 11:09:01.531584   32123 ssh_runner.go:195] Run: sudo runc list -f json
	
	
	==> CRI-O <==
	Jun 03 11:14:58 ha-683480 crio[3818]: time="2024-06-03 11:14:58.089397343Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:43cc18e9695818b679a9094e9daaec11df83ee3c5be09797eb2bce64e1b7714f,Metadata:&PodSandboxMetadata{Name:busybox-fc5497c4f-mvpcm,Uid:fe7a8238-754b-43ce-8080-48e39c548383,Namespace:default,Attempt:1,},State:SANDBOX_READY,CreatedAt:1717412981293460108,Labels:map[string]string{app: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox-fc5497c4f-mvpcm,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: fe7a8238-754b-43ce-8080-48e39c548383,pod-template-hash: fc5497c4f,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-06-03T11:00:49.536740556Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:312ee2bc45a8ad5b63be398920344737c48d32822e4acdfcb5242106eebd2f06,Metadata:&PodSandboxMetadata{Name:kube-vip-ha-683480,Uid:88446bc5037aec3d04a64b1cd4a0b0bb,Namespace:kube-system,Attempt:0,},State:SANDBOX_RE
ADY,CreatedAt:1717412964891783122,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-vip-ha-683480,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 88446bc5037aec3d04a64b1cd4a0b0bb,},Annotations:map[string]string{kubernetes.io/config.hash: 88446bc5037aec3d04a64b1cd4a0b0bb,kubernetes.io/config.seen: 2024-06-03T11:09:00.762455518Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:751825866bea37dd36dd4139ef61da30fa14d3c0c98e6184cb852519708eec00,Metadata:&PodSandboxMetadata{Name:kube-scheduler-ha-683480,Uid:e56ceee947d891ba1cd0986590072af7,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1717412947576432699,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-ha-683480,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e56ceee947d891ba1cd0986590072af7,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: e56ceee947d891ba1
cd0986590072af7,kubernetes.io/config.seen: 2024-06-03T10:57:00.046934857Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:a113d054f5421f66107af14bfae1a5eebde08aa9dc9aeb335f0c95161f05eb06,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:a410a98d-73a7-434b-88ce-575c300b2807,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1717412947565312038,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a410a98d-73a7-434b-88ce-575c300b2807,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":
[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-06-03T10:57:18.992761283Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:7610af85710c6617d550044fd9363c3da2fbbbe3d710d6bc8d401d9687a379cf,Metadata:&PodSandboxMetadata{Name:coredns-7db6d8ff4d-nff86,Uid:02320e91-17ab-4120-b8b9-dcc08234f180,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1717412947560808435,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7db6d8ff4d-nff86,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 02320e91-17ab-4120-b8b9-dcc08234f180,k8s-app: kube-dns,pod-template-hash: 7db6d8ff4d,
},Annotations:map[string]string{kubernetes.io/config.seen: 2024-06-03T10:57:18.990931427Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:0bb95efa9b5544806ce77cb38d2d1899f8a064362bc1a9d4019a150e391a9512,Metadata:&PodSandboxMetadata{Name:kube-proxy-4d9w5,Uid:708e060d-115a-4b74-bc66-138d62796b50,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1717412947557472430,Labels:map[string]string{controller-revision-hash: 5dbf89796d,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-4d9w5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 708e060d-115a-4b74-bc66-138d62796b50,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-06-03T10:57:13.277497421Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:1084ea2c9f83b50b855a9d1cebe8088d5c3ac92954ad88b1defd656231520b46,Metadata:&PodSandboxMetadata{Name:coredns-7db6d8ff4d-8tqf9,Uid:8eab910a-98ed-43db-ac16-d53beb6b7ee4,Namespace:kube-sy
stem,Attempt:1,},State:SANDBOX_READY,CreatedAt:1717412947550149249,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7db6d8ff4d-8tqf9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8eab910a-98ed-43db-ac16-d53beb6b7ee4,k8s-app: kube-dns,pod-template-hash: 7db6d8ff4d,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-06-03T10:57:18.985149008Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:29eec1a82f9d96bfac4a182301c8302309c6d8392823083237c2d90fca41fa5b,Metadata:&PodSandboxMetadata{Name:etcd-ha-683480,Uid:36cfff3e1576ec0ef9aa4746d32a32e3,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1717412947523609042,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-ha-683480,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 36cfff3e1576ec0ef9aa4746d32a32e3,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-
urls: https://192.168.39.116:2379,kubernetes.io/config.hash: 36cfff3e1576ec0ef9aa4746d32a32e3,kubernetes.io/config.seen: 2024-06-03T10:57:00.046939723Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:eef7acb133025c2540d90e56f987f803816220c3954ca2f0a137257b3822879b,Metadata:&PodSandboxMetadata{Name:kube-apiserver-ha-683480,Uid:b448fd1c84d729fa6b033c44220aea0b,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1717412947518437348,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-ha-683480,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b448fd1c84d729fa6b033c44220aea0b,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.116:8443,kubernetes.io/config.hash: b448fd1c84d729fa6b033c44220aea0b,kubernetes.io/config.seen: 2024-06-03T10:57:00.046940569Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:ffe
70c296995b94eea8e0ed4d7be6d69bf08d786f79d2409eb0aec4cec543072,Metadata:&PodSandboxMetadata{Name:kindnet-zxhbp,Uid:320e315b-e189-4358-9e56-a4be7d944fae,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1717412947508458076,Labels:map[string]string{app: kindnet,controller-revision-hash: 84c66bd94d,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kindnet-zxhbp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 320e315b-e189-4358-9e56-a4be7d944fae,k8s-app: kindnet,pod-template-generation: 1,tier: node,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-06-03T10:57:13.283276995Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:1be973d393fd98b3b25957a69bb1d222efeb5fee521136d8aee5fcb9c38f29b1,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-ha-683480,Uid:003e33f91c92b780f1d2cb57410c03e9,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1717412947501715413,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.
container.name: POD,io.kubernetes.pod.name: kube-controller-manager-ha-683480,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 003e33f91c92b780f1d2cb57410c03e9,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 003e33f91c92b780f1d2cb57410c03e9,kubernetes.io/config.seen: 2024-06-03T10:57:00.046941531Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=43eb4f85-5473-4ada-ba19-12f091c6146c name=/runtime.v1.RuntimeService/ListPodSandbox
	Jun 03 11:14:58 ha-683480 crio[3818]: time="2024-06-03 11:14:58.090205501Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:&ContainerStateValue{State:CONTAINER_RUNNING,},PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=99fa9ce2-b715-494d-a3e3-cbd49d8c2ca4 name=/runtime.v1.RuntimeService/ListContainers
	Jun 03 11:14:58 ha-683480 crio[3818]: time="2024-06-03 11:14:58.090289258Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=99fa9ce2-b715-494d-a3e3-cbd49d8c2ca4 name=/runtime.v1.RuntimeService/ListContainers
	Jun 03 11:14:58 ha-683480 crio[3818]: time="2024-06-03 11:14:58.090464931Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f11bba0fe671eec93d2ed313c2be83ba1241f460d7349102758825c301c05c94,PodSandboxId:1be973d393fd98b3b25957a69bb1d222efeb5fee521136d8aee5fcb9c38f29b1,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1717412987097933728,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-683480,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 003e33f91c92b780f1d2cb57410c03e9,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 2,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a2616ab08c12cc3bf8a5ddb38992b52223cc3d7951ba7e34b77270f74109b379,PodSandboxId:43cc18e9695818b679a9094e9daaec11df83ee3c5be09797eb2bce64e1b7714f,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1717412981419893724,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-mvpcm,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: fe7a8238-754b-43ce-8080-48e39c548383,},Annotations:map[string]string{io.kubernetes.container.hash: 17542a28,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessageP
ath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e6affd24ffc04f8e73646185baadbdcfadc4f59260fe0de2fcfc6b6c24c95576,PodSandboxId:312ee2bc45a8ad5b63be398920344737c48d32822e4acdfcb5242106eebd2f06,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1717412964984575127,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-683480,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 88446bc5037aec3d04a64b1cd4a0b0bb,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes
.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:48e4f287c203959b7515afda7bbc9f297b67f159d98c275d36cabdf2d658267e,PodSandboxId:0bb95efa9b5544806ce77cb38d2d1899f8a064362bc1a9d4019a150e391a9512,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:1717412948822408756,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4d9w5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 708e060d-115a-4b74-bc66-138d62796b50,},Annotations:map[string]string{io.kubernetes.container.hash: 4749de6d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePoli
cy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:753900b199b96cc9a3ae3791ff1c0c8a47f296f8db9da5deb7568cecb0e3bce5,PodSandboxId:1084ea2c9f83b50b855a9d1cebe8088d5c3ac92954ad88b1defd656231520b46,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1717412948359360909,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-8tqf9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8eab910a-98ed-43db-ac16-d53beb6b7ee4,},Annotations:map[string]string{io.kubernetes.container.hash: 38c633a6,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"cont
ainerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cc8f63fef0029c9f7bede5603ab9af3193a75bd4fc1106b23c316d4ce6b6705a,PodSandboxId:7610af85710c6617d550044fd9363c3da2fbbbe3d710d6bc8d401d9687a379cf,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1717412948351443976,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-nff86,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 02320e91-17ab-4120-b8b9-dcc08234f180,},Annotations:map[string]string{io.kubernetes.container.hash:
9dfd16ce,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:127d736575af20a24c0db0a6e3425badf2d41fcea00d489114e889360664fd0e,PodSandboxId:29eec1a82f9d96bfac4a182301c8302309c6d8392823083237c2d90fca41fa5b,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1717412948119054849,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name:
etcd-ha-683480,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 36cfff3e1576ec0ef9aa4746d32a32e3,},Annotations:map[string]string{io.kubernetes.container.hash: 40554970,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:031c8a2316fc402ab581c065b6ef53496a23534ae41d34c7fb6e7ff35cb3260d,PodSandboxId:751825866bea37dd36dd4139ef61da30fa14d3c0c98e6184cb852519708eec00,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1717412948107489532,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-683480,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e56ceee947d891ba1cd0986590072af7,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=99fa9ce2-b715-494d-a3e3-cbd49d8c2ca4 name=/runtime.v1.RuntimeService/ListContainers
	Jun 03 11:14:58 ha-683480 crio[3818]: time="2024-06-03 11:14:58.099211127Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=f9fe987d-cfd0-4b3a-ad77-f64df3d211ca name=/runtime.v1.RuntimeService/Version
	Jun 03 11:14:58 ha-683480 crio[3818]: time="2024-06-03 11:14:58.099286990Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=f9fe987d-cfd0-4b3a-ad77-f64df3d211ca name=/runtime.v1.RuntimeService/Version
	Jun 03 11:14:58 ha-683480 crio[3818]: time="2024-06-03 11:14:58.100242273Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=8e233130-de50-437a-82ee-7f7d9529396a name=/runtime.v1.ImageService/ImageFsInfo
	Jun 03 11:14:58 ha-683480 crio[3818]: time="2024-06-03 11:14:58.100671018Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1717413298100650757,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154742,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=8e233130-de50-437a-82ee-7f7d9529396a name=/runtime.v1.ImageService/ImageFsInfo
	Jun 03 11:14:58 ha-683480 crio[3818]: time="2024-06-03 11:14:58.101309223Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e08e7124-3a38-40e7-8e15-b081ae292aca name=/runtime.v1.RuntimeService/ListContainers
	Jun 03 11:14:58 ha-683480 crio[3818]: time="2024-06-03 11:14:58.101381444Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e08e7124-3a38-40e7-8e15-b081ae292aca name=/runtime.v1.RuntimeService/ListContainers
	Jun 03 11:14:58 ha-683480 crio[3818]: time="2024-06-03 11:14:58.101730651Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:7d1ac2921b8b2d8f877d5a779925f18de053d8e4b9a00c1636fd342ff8281f59,PodSandboxId:ffe70c296995b94eea8e0ed4d7be6d69bf08d786f79d2409eb0aec4cec543072,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:4,},Image:&ImageSpec{Image:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CONTAINER_EXITED,CreatedAt:1717413255098574528,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-zxhbp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 320e315b-e189-4358-9e56-a4be7d944fae,},Annotations:map[string]string{io.kubernetes.container.hash: ae8d6a68,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath:
/dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5c7cd9a2289254078ecf194aacfa4c616d98f7b8884a5f87f73c228094daa397,PodSandboxId:eef7acb133025c2540d90e56f987f803816220c3954ca2f0a137257b3822879b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:4,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_EXITED,CreatedAt:1717413231470596571,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-683480,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b448fd1c84d729fa6b033c44220aea0b,},Annotations:map[string]string{io.kubernetes.container.hash: 25a67648,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,i
o.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6e5e3d5bc9fc79b80086f615283ff566f7ed37106ad7b4da30b519ce27777dce,PodSandboxId:a113d054f5421f66107af14bfae1a5eebde08aa9dc9aeb335f0c95161f05eb06,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:5,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1717413227104552044,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a410a98d-73a7-434b-88ce-575c300b2807,},Annotations:map[string]string{io.kubernetes.container.hash: c0c86aa,io.kubernetes.container.restartCount: 5,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.c
ontainer.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f11bba0fe671eec93d2ed313c2be83ba1241f460d7349102758825c301c05c94,PodSandboxId:1be973d393fd98b3b25957a69bb1d222efeb5fee521136d8aee5fcb9c38f29b1,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1717412987097933728,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-683480,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 003e33f91c92b780f1d2cb57410c03e9,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubern
etes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a2616ab08c12cc3bf8a5ddb38992b52223cc3d7951ba7e34b77270f74109b379,PodSandboxId:43cc18e9695818b679a9094e9daaec11df83ee3c5be09797eb2bce64e1b7714f,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1717412981419893724,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-mvpcm,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: fe7a8238-754b-43ce-8080-48e39c548383,},Annotations:map[string]string{io.kubernetes.container.hash: 17542a28,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePol
icy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e6affd24ffc04f8e73646185baadbdcfadc4f59260fe0de2fcfc6b6c24c95576,PodSandboxId:312ee2bc45a8ad5b63be398920344737c48d32822e4acdfcb5242106eebd2f06,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1717412964984575127,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-683480,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 88446bc5037aec3d04a64b1cd4a0b0bb,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.termination
GracePeriod: 30,},},&Container{Id:48e4f287c203959b7515afda7bbc9f297b67f159d98c275d36cabdf2d658267e,PodSandboxId:0bb95efa9b5544806ce77cb38d2d1899f8a064362bc1a9d4019a150e391a9512,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:1717412948822408756,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4d9w5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 708e060d-115a-4b74-bc66-138d62796b50,},Annotations:map[string]string{io.kubernetes.container.hash: 4749de6d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id
:753900b199b96cc9a3ae3791ff1c0c8a47f296f8db9da5deb7568cecb0e3bce5,PodSandboxId:1084ea2c9f83b50b855a9d1cebe8088d5c3ac92954ad88b1defd656231520b46,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1717412948359360909,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-8tqf9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8eab910a-98ed-43db-ac16-d53beb6b7ee4,},Annotations:map[string]string{io.kubernetes.container.hash: 38c633a6,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restart
Count: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cc8f63fef0029c9f7bede5603ab9af3193a75bd4fc1106b23c316d4ce6b6705a,PodSandboxId:7610af85710c6617d550044fd9363c3da2fbbbe3d710d6bc8d401d9687a379cf,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1717412948351443976,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-nff86,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 02320e91-17ab-4120-b8b9-dcc08234f180,},Annotations:map[string]string{io.kubernetes.container.hash: 9dfd16ce,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerP
ort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:127d736575af20a24c0db0a6e3425badf2d41fcea00d489114e889360664fd0e,PodSandboxId:29eec1a82f9d96bfac4a182301c8302309c6d8392823083237c2d90fca41fa5b,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1717412948119054849,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-683480,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: 36cfff3e1576ec0ef9aa4746d32a32e3,},Annotations:map[string]string{io.kubernetes.container.hash: 40554970,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:031c8a2316fc402ab581c065b6ef53496a23534ae41d34c7fb6e7ff35cb3260d,PodSandboxId:751825866bea37dd36dd4139ef61da30fa14d3c0c98e6184cb852519708eec00,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1717412948107489532,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-683480,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e56ceee947d8
91ba1cd0986590072af7,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9034d276d18e7ad0470a79b0643e03089b4cfa18ddd108b2966e84511a0a8276,PodSandboxId:1be973d393fd98b3b25957a69bb1d222efeb5fee521136d8aee5fcb9c38f29b1,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_EXITED,CreatedAt:1717412947915313005,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-683480,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 003e33f
91c92b780f1d2cb57410c03e9,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:348419ceaffc348fe3779838e8b27e8baa3aa566be3f4c329aea8b701917349c,PodSandboxId:d32d79da82b93361a47376b8d8beec88e0c5d9097ed7a7450c63de0ee96d230f,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1717412452793948202,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-mvpcm,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: fe7a8238-754b
-43ce-8080-48e39c548383,},Annotations:map[string]string{io.kubernetes.container.hash: 17542a28,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fdbecc258023e10eac66da5599945eae2f7f8735769b825a69aea8b2effce668,PodSandboxId:62bef471ea4a403424478ea00a89f4311f3d11aea1fc0301abe18ddf44455091,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1717412239551956082,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-8tqf9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8eab910a-98ed-43db-ac16-d53beb6b7ee4,},Annota
tions:map[string]string{io.kubernetes.container.hash: 38c633a6,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aa5e3aca86502907c8d16e6a2327b8f4298b6076617819ceed2b250ae9b24fe8,PodSandboxId:41da25dac8c4818183c067f43713ee94cebef64eab1ffb890510822bc9712a41,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1717412239525874687,Labels:map[string]string{io
.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-nff86,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 02320e91-17ab-4120-b8b9-dcc08234f180,},Annotations:map[string]string{io.kubernetes.container.hash: 9dfd16ce,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bcb102231e3a6bc3ea0cc39665baaebb0a97c42874b6cd34e86c04e87532df4f,PodSandboxId:6812552c2a4ab53e39123a83312dfad25c506cf5157864aa7732c91d6b7eebf2,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]stri
ng{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_EXITED,CreatedAt:1717412233855131979,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4d9w5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 708e060d-115a-4b74-bc66-138d62796b50,},Annotations:map[string]string{io.kubernetes.container.hash: 4749de6d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c282307764128f62fdee736d5e1ecddfbca0ae7ae2f78b7a78cbdb2dcede8556,PodSandboxId:860a510241592c9daa1fd1d8b28ba6314d6102372dd3005ee2f1fc332eaa5fbb,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,Runti
meHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_EXITED,CreatedAt:1717412213949425877,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-683480,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e56ceee947d891ba1cd0986590072af7,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:09fff5459f24c748a0e085f496bf2b65db572d97be0afe906f05511398bdb0ad,PodSandboxId:86b1d4bcd541d31a17ad320bdd376b8fc84deff2fe6e38053aa471139f753d0e,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c
04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1717412213926445790,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-683480,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 36cfff3e1576ec0ef9aa4746d32a32e3,},Annotations:map[string]string{io.kubernetes.container.hash: 40554970,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=e08e7124-3a38-40e7-8e15-b081ae292aca name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	7d1ac2921b8b2       ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f                                      43 seconds ago       Exited              kindnet-cni               4                   ffe70c296995b       kindnet-zxhbp
	5c7cd9a228925       91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a                                      About a minute ago   Exited              kube-apiserver            4                   eef7acb133025       kube-apiserver-ha-683480
	6e5e3d5bc9fc7       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      About a minute ago   Exited              storage-provisioner       5                   a113d054f5421       storage-provisioner
	f11bba0fe671e       25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c                                      5 minutes ago        Running             kube-controller-manager   2                   1be973d393fd9       kube-controller-manager-ha-683480
	a2616ab08c12c       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      5 minutes ago        Running             busybox                   1                   43cc18e969581       busybox-fc5497c4f-mvpcm
	e6affd24ffc04       38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12                                      5 minutes ago        Running             kube-vip                  0                   312ee2bc45a8a       kube-vip-ha-683480
	48e4f287c2039       747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd                                      5 minutes ago        Running             kube-proxy                1                   0bb95efa9b554       kube-proxy-4d9w5
	753900b199b96       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      5 minutes ago        Running             coredns                   1                   1084ea2c9f83b       coredns-7db6d8ff4d-8tqf9
	cc8f63fef0029       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      5 minutes ago        Running             coredns                   1                   7610af85710c6       coredns-7db6d8ff4d-nff86
	127d736575af2       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      5 minutes ago        Running             etcd                      1                   29eec1a82f9d9       etcd-ha-683480
	031c8a2316fc4       a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035                                      5 minutes ago        Running             kube-scheduler            1                   751825866bea3       kube-scheduler-ha-683480
	9034d276d18e7       25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c                                      5 minutes ago        Exited              kube-controller-manager   1                   1be973d393fd9       kube-controller-manager-ha-683480
	348419ceaffc3       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   14 minutes ago       Exited              busybox                   0                   d32d79da82b93       busybox-fc5497c4f-mvpcm
	fdbecc258023e       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      17 minutes ago       Exited              coredns                   0                   62bef471ea4a4       coredns-7db6d8ff4d-8tqf9
	aa5e3aca86502       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      17 minutes ago       Exited              coredns                   0                   41da25dac8c48       coredns-7db6d8ff4d-nff86
	bcb102231e3a6       747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd                                      17 minutes ago       Exited              kube-proxy                0                   6812552c2a4ab       kube-proxy-4d9w5
	c282307764128       a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035                                      18 minutes ago       Exited              kube-scheduler            0                   860a510241592       kube-scheduler-ha-683480
	09fff5459f24c       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      18 minutes ago       Exited              etcd                      0                   86b1d4bcd541d       etcd-ha-683480
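
	Note: the table above is the CRI view of the node at collection time. A listing in this shape can normally be reproduced on the minikube VM with the CRI CLI; the commands below are illustrative and were not part of the captured run (truncated IDs from the CONTAINER column are usually accepted as prefixes):

	    minikube -p ha-683480 ssh
	    sudo crictl ps -a                    # all containers, including Exited ones
	    sudo crictl logs <container-id>      # e.g. one of the truncated IDs above

	The Exited kube-apiserver, kindnet-cni and storage-provisioner entries at attempt counts 4-5 are what the log sections below expand on.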
	
	
	==> coredns [753900b199b96cc9a3ae3791ff1c0c8a47f296f8db9da5deb7568cecb0e3bce5] <==
	Trace[1030848860]: ---"Objects listed" error:Unauthorized 13103ms (11:14:21.925)
	Trace[1030848860]: [13.103700386s] [13.103700386s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Unauthorized
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?resourceVersion=2559": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?resourceVersion=2559": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?resourceVersion=2578": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?resourceVersion=2578": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?resourceVersion=2590": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?resourceVersion=2590": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Unauthorized
	[INFO] plugin/kubernetes: Trace[834945017]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (03-Jun-2024 11:14:31.339) (total time: 11607ms):
	Trace[834945017]: ---"Objects listed" error:Unauthorized 11607ms (11:14:42.946)
	Trace[834945017]: [11.607969122s] [11.607969122s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Unauthorized
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Unauthorized
	[INFO] plugin/kubernetes: Trace[544744643]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (03-Jun-2024 11:14:31.291) (total time: 11657ms):
	Trace[544744643]: ---"Objects listed" error:Unauthorized 11656ms (11:14:42.948)
	Trace[544744643]: [11.657114016s] [11.657114016s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Unauthorized
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?resourceVersion=2590": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?resourceVersion=2590": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?resourceVersion=2559": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?resourceVersion=2559": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?resourceVersion=2578": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?resourceVersion=2578": dial tcp 10.96.0.1:443: connect: no route to host
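
	Note: the repeated "dial tcp 10.96.0.1:443: connect: no route to host" and "Unauthorized" failures above are CoreDNS's informers failing to reach the in-cluster kubernetes Service (10.96.0.1:443) while the control plane on this node is restarting; the Unauthorized responses are consistent with the pod's service-account token being rejected during the apiserver churn. Once an apiserver answers again the informers normally recover on their own. Illustrative checks from a working kubeconfig (not part of the captured run; the k8s-app=kube-dns label is the usual CoreDNS label):

	    kubectl --context ha-683480 -n kube-system get pods -l k8s-app=kube-dns
	    kubectl --context ha-683480 get --raw /readyz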
	
	
	==> coredns [aa5e3aca86502907c8d16e6a2327b8f4298b6076617819ceed2b250ae9b24fe8] <==
	[INFO] 10.244.1.2:59258 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00009417s
	[INFO] 10.244.0.4:59067 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001995491s
	[INFO] 10.244.0.4:33658 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000077694s
	[INFO] 10.244.2.2:56134 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000146189s
	[INFO] 10.244.2.2:42897 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001874015s
	[INFO] 10.244.2.2:49555 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000079926s
	[INFO] 10.244.1.2:49977 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000098794s
	[INFO] 10.244.1.2:55522 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000070995s
	[INFO] 10.244.1.2:47166 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000064061s
	[INFO] 10.244.0.4:52772 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000107779s
	[INFO] 10.244.0.4:34695 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000110706s
	[INFO] 10.244.2.2:47248 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00010537s
	[INFO] 10.244.1.2:52200 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000175618s
	[INFO] 10.244.1.2:56731 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000211211s
	[INFO] 10.244.1.2:47156 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000137189s
	[INFO] 10.244.1.2:57441 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000161046s
	[INFO] 10.244.0.4:45937 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000064288s
	[INFO] 10.244.0.4:50125 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.00003887s
	[INFO] 10.244.2.2:38937 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000134308s
	[INFO] 10.244.2.2:34039 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000085147s
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: the server has asked for the client to provide credentials (get namespaces) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=23, ErrCode=NO_ERROR, debug=""
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: the server has asked for the client to provide credentials (get services) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=23, ErrCode=NO_ERROR, debug=""
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: the server has asked for the client to provide credentials (get endpointslices.discovery.k8s.io) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=23, ErrCode=NO_ERROR, debug=""
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [cc8f63fef0029c9f7bede5603ab9af3193a75bd4fc1106b23c316d4ce6b6705a] <==
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?resourceVersion=2759": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?resourceVersion=2750": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?resourceVersion=2750": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?resourceVersion=2750": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?resourceVersion=2750": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?resourceVersion=2759": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?resourceVersion=2782": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?resourceVersion=2782": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?resourceVersion=2759": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?resourceVersion=2782": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?resourceVersion=2782": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?resourceVersion=2759": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?resourceVersion=2759": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?resourceVersion=2750": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?resourceVersion=2750": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?resourceVersion=2750": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?resourceVersion=2750": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Unauthorized
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Unauthorized
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Unauthorized
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Unauthorized
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?resourceVersion=2750": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: unexpected EOF
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?resourceVersion=2750": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: unexpected EOF
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?resourceVersion=2782": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: unexpected EOF
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?resourceVersion=2782": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: unexpected EOF
	
	
	==> coredns [fdbecc258023e10eac66da5599945eae2f7f8735769b825a69aea8b2effce668] <==
	[INFO] 10.244.1.2:60397 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.013328418s
	[INFO] 10.244.1.2:34848 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000138348s
	[INFO] 10.244.0.4:53254 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000147619s
	[INFO] 10.244.0.4:37575 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000103362s
	[INFO] 10.244.0.4:54948 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000181862s
	[INFO] 10.244.0.4:39944 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001365258s
	[INFO] 10.244.0.4:55239 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.00017828s
	[INFO] 10.244.0.4:57467 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000097919s
	[INFO] 10.244.2.2:35971 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000096406s
	[INFO] 10.244.2.2:38423 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001334812s
	[INFO] 10.244.2.2:42352 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000153771s
	[INFO] 10.244.2.2:40734 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000099488s
	[INFO] 10.244.2.2:34598 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000136946s
	[INFO] 10.244.1.2:54219 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000087067s
	[INFO] 10.244.0.4:58452 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000093948s
	[INFO] 10.244.0.4:35784 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000061499s
	[INFO] 10.244.2.2:54391 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000149082s
	[INFO] 10.244.2.2:39850 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000109311s
	[INFO] 10.244.2.2:39330 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000101321s
	[INFO] 10.244.0.4:56550 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000137331s
	[INFO] 10.244.0.4:42317 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000097716s
	[INFO] 10.244.2.2:34210 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000106975s
	[INFO] 10.244.2.2:40755 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.00028708s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
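
	Note: `kubectl describe nodes` is executed here against the local apiserver endpoint (localhost:8443) on the node, so the "connection refused" simply reflects the kube-apiserver container being down at collection time (it is shown as Exited in the container status table above). An illustrative way to confirm from inside the VM, not part of the captured run:

	    sudo crictl ps -a --name kube-apiserver
	    curl -sk https://localhost:8443/healthz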
	
	
	==> dmesg <==
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[ +13.363785] systemd-fstab-generator[594]: Ignoring "noauto" option for root device
	[  +0.062784] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.051848] systemd-fstab-generator[606]: Ignoring "noauto" option for root device
	[  +0.189543] systemd-fstab-generator[620]: Ignoring "noauto" option for root device
	[  +0.108878] systemd-fstab-generator[632]: Ignoring "noauto" option for root device
	[  +0.262803] systemd-fstab-generator[662]: Ignoring "noauto" option for root device
	[  +4.077728] systemd-fstab-generator[762]: Ignoring "noauto" option for root device
	[  +5.011635] systemd-fstab-generator[951]: Ignoring "noauto" option for root device
	[  +0.054415] kauditd_printk_skb: 158 callbacks suppressed
	[  +5.849379] kauditd_printk_skb: 79 callbacks suppressed
	[  +1.148784] systemd-fstab-generator[1371]: Ignoring "noauto" option for root device
	[Jun 3 10:57] kauditd_printk_skb: 15 callbacks suppressed
	[  +5.057593] kauditd_printk_skb: 34 callbacks suppressed
	[Jun 3 10:59] kauditd_printk_skb: 30 callbacks suppressed
	[Jun 3 11:08] systemd-fstab-generator[3732]: Ignoring "noauto" option for root device
	[  +0.155437] systemd-fstab-generator[3744]: Ignoring "noauto" option for root device
	[  +0.187815] systemd-fstab-generator[3758]: Ignoring "noauto" option for root device
	[  +0.153201] systemd-fstab-generator[3770]: Ignoring "noauto" option for root device
	[  +0.283321] systemd-fstab-generator[3798]: Ignoring "noauto" option for root device
	[Jun 3 11:09] systemd-fstab-generator[3905]: Ignoring "noauto" option for root device
	[  +6.669814] kauditd_printk_skb: 122 callbacks suppressed
	[ +17.441291] kauditd_printk_skb: 98 callbacks suppressed
	[  +5.224130] kauditd_printk_skb: 1 callbacks suppressed
	[ +22.398569] kauditd_printk_skb: 6 callbacks suppressed
	
	
	==> etcd [09fff5459f24c748a0e085f496bf2b65db572d97be0afe906f05511398bdb0ad] <==
	2024/06/03 11:07:26 WARNING: [core] [Server #6] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-06-03T11:07:26.361652Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"298.180966ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/priorityclasses/\" range_end:\"/registry/priorityclasses0\" limit:10000 ","response":"","error":"context canceled"}
	{"level":"info","ts":"2024-06-03T11:07:26.361662Z","caller":"traceutil/trace.go:171","msg":"trace[1176087128] range","detail":"{range_begin:/registry/priorityclasses/; range_end:/registry/priorityclasses0; }","duration":"298.197396ms","start":"2024-06-03T11:07:26.063461Z","end":"2024-06-03T11:07:26.361659Z","steps":["trace[1176087128] 'agreement among raft nodes before linearized reading'  (duration: 298.187306ms)"],"step_count":1}
	2024/06/03 11:07:26 WARNING: [core] [Server #6] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	2024/06/03 11:07:26 WARNING: [core] [Server #6] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-06-03T11:07:26.410189Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.116:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-06-03T11:07:26.410266Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.116:2379: use of closed network connection"}
	{"level":"info","ts":"2024-06-03T11:07:26.410368Z","caller":"etcdserver/server.go:1462","msg":"skipped leadership transfer; local server is not leader","local-member-id":"8b2d6b6d639b2fdb","current-leader-member-id":"0"}
	{"level":"info","ts":"2024-06-03T11:07:26.410582Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"186d66165cd2cce"}
	{"level":"info","ts":"2024-06-03T11:07:26.410623Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"186d66165cd2cce"}
	{"level":"info","ts":"2024-06-03T11:07:26.410665Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"186d66165cd2cce"}
	{"level":"info","ts":"2024-06-03T11:07:26.410722Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"8b2d6b6d639b2fdb","remote-peer-id":"186d66165cd2cce"}
	{"level":"info","ts":"2024-06-03T11:07:26.410782Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"8b2d6b6d639b2fdb","remote-peer-id":"186d66165cd2cce"}
	{"level":"info","ts":"2024-06-03T11:07:26.410838Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"8b2d6b6d639b2fdb","remote-peer-id":"186d66165cd2cce"}
	{"level":"info","ts":"2024-06-03T11:07:26.410865Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"186d66165cd2cce"}
	{"level":"info","ts":"2024-06-03T11:07:26.410891Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"4f87f407f126f7fc"}
	{"level":"info","ts":"2024-06-03T11:07:26.410925Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"4f87f407f126f7fc"}
	{"level":"info","ts":"2024-06-03T11:07:26.410958Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"4f87f407f126f7fc"}
	{"level":"info","ts":"2024-06-03T11:07:26.411127Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"8b2d6b6d639b2fdb","remote-peer-id":"4f87f407f126f7fc"}
	{"level":"info","ts":"2024-06-03T11:07:26.411176Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"8b2d6b6d639b2fdb","remote-peer-id":"4f87f407f126f7fc"}
	{"level":"info","ts":"2024-06-03T11:07:26.411224Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"8b2d6b6d639b2fdb","remote-peer-id":"4f87f407f126f7fc"}
	{"level":"info","ts":"2024-06-03T11:07:26.411251Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"4f87f407f126f7fc"}
	{"level":"info","ts":"2024-06-03T11:07:26.414171Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.39.116:2380"}
	{"level":"info","ts":"2024-06-03T11:07:26.414318Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.39.116:2380"}
	{"level":"info","ts":"2024-06-03T11:07:26.41435Z","caller":"embed/etcd.go:377","msg":"closed etcd server","name":"ha-683480","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.116:2380"],"advertise-client-urls":["https://192.168.39.116:2379"]}
	
	
	==> etcd [127d736575af20a24c0db0a6e3425badf2d41fcea00d489114e889360664fd0e] <==
	{"level":"warn","ts":"2024-06-03T11:14:53.928748Z","caller":"etcdserver/v3_server.go:897","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":3448508122420495697,"retry-timeout":"500ms"}
	{"level":"warn","ts":"2024-06-03T11:14:54.248789Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"186d66165cd2cce","rtt":"10.198788ms","error":"dial tcp 192.168.39.127:2380: connect: no route to host"}
	{"level":"warn","ts":"2024-06-03T11:14:54.24881Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"186d66165cd2cce","rtt":"989.665µs","error":"dial tcp 192.168.39.127:2380: connect: no route to host"}
	{"level":"warn","ts":"2024-06-03T11:14:54.429801Z","caller":"etcdserver/v3_server.go:897","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":3448508122420495697,"retry-timeout":"500ms"}
	{"level":"info","ts":"2024-06-03T11:14:54.856907Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8b2d6b6d639b2fdb is starting a new election at term 3"}
	{"level":"info","ts":"2024-06-03T11:14:54.856958Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8b2d6b6d639b2fdb became pre-candidate at term 3"}
	{"level":"info","ts":"2024-06-03T11:14:54.857035Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8b2d6b6d639b2fdb received MsgPreVoteResp from 8b2d6b6d639b2fdb at term 3"}
	{"level":"info","ts":"2024-06-03T11:14:54.85706Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8b2d6b6d639b2fdb [logterm: 3, index: 3259] sent MsgPreVote request to 186d66165cd2cce at term 3"}
	{"level":"warn","ts":"2024-06-03T11:14:54.930658Z","caller":"etcdserver/v3_server.go:897","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":3448508122420495697,"retry-timeout":"500ms"}
	{"level":"warn","ts":"2024-06-03T11:14:55.431058Z","caller":"etcdserver/v3_server.go:897","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":3448508122420495697,"retry-timeout":"500ms"}
	{"level":"warn","ts":"2024-06-03T11:14:55.931402Z","caller":"etcdserver/v3_server.go:897","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":3448508122420495697,"retry-timeout":"500ms"}
	{"level":"info","ts":"2024-06-03T11:14:56.056796Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8b2d6b6d639b2fdb is starting a new election at term 3"}
	{"level":"info","ts":"2024-06-03T11:14:56.056906Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8b2d6b6d639b2fdb became pre-candidate at term 3"}
	{"level":"info","ts":"2024-06-03T11:14:56.056939Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8b2d6b6d639b2fdb received MsgPreVoteResp from 8b2d6b6d639b2fdb at term 3"}
	{"level":"info","ts":"2024-06-03T11:14:56.056972Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8b2d6b6d639b2fdb [logterm: 3, index: 3259] sent MsgPreVote request to 186d66165cd2cce at term 3"}
	{"level":"warn","ts":"2024-06-03T11:14:56.43191Z","caller":"etcdserver/v3_server.go:897","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":3448508122420495697,"retry-timeout":"500ms"}
	{"level":"warn","ts":"2024-06-03T11:14:56.923911Z","caller":"etcdserver/v3_server.go:909","msg":"timed out waiting for read index response (local node might have slow network)","timeout":"7s"}
	{"level":"info","ts":"2024-06-03T11:14:57.256705Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8b2d6b6d639b2fdb is starting a new election at term 3"}
	{"level":"info","ts":"2024-06-03T11:14:57.256768Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8b2d6b6d639b2fdb became pre-candidate at term 3"}
	{"level":"info","ts":"2024-06-03T11:14:57.256783Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8b2d6b6d639b2fdb received MsgPreVoteResp from 8b2d6b6d639b2fdb at term 3"}
	{"level":"info","ts":"2024-06-03T11:14:57.256802Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8b2d6b6d639b2fdb [logterm: 3, index: 3259] sent MsgPreVote request to 186d66165cd2cce at term 3"}
	{"level":"info","ts":"2024-06-03T11:14:58.456194Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8b2d6b6d639b2fdb is starting a new election at term 3"}
	{"level":"info","ts":"2024-06-03T11:14:58.456249Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8b2d6b6d639b2fdb became pre-candidate at term 3"}
	{"level":"info","ts":"2024-06-03T11:14:58.456263Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8b2d6b6d639b2fdb received MsgPreVoteResp from 8b2d6b6d639b2fdb at term 3"}
	{"level":"info","ts":"2024-06-03T11:14:58.456277Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8b2d6b6d639b2fdb [logterm: 3, index: 3259] sent MsgPreVote request to 186d66165cd2cce at term 3"}
	
	
	==> kernel <==
	 11:14:58 up 18 min,  0 users,  load average: 0.45, 0.58, 0.36
	Linux ha-683480 5.10.207 #1 SMP Wed May 22 22:17:16 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [7d1ac2921b8b2d8f877d5a779925f18de053d8e4b9a00c1636fd342ff8281f59] <==
	I0603 11:14:15.487286       1 main.go:102] connected to apiserver: https://10.96.0.1:443
	I0603 11:14:15.487435       1 main.go:107] hostIP = 192.168.39.116
	podIP = 192.168.39.116
	I0603 11:14:15.487628       1 main.go:116] setting mtu 1500 for CNI 
	I0603 11:14:15.487673       1 main.go:146] kindnetd IP family: "ipv4"
	I0603 11:14:15.487711       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	I0603 11:14:18.065383       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: no route to host
	I0603 11:14:20.305802       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: no route to host
	I0603 11:14:28.943410       1 main.go:191] Failed to get nodes, retrying after error: Unauthorized
	I0603 11:14:42.949346       1 main.go:191] Failed to get nodes, retrying after error: Unauthorized
	I0603 11:14:51.025945       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: unexpected EOF
	panic: Reached maximum retries obtaining node list: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: unexpected EOF
	
	goroutine 1 [running]:
	main.main()
		/go/src/cmd/kindnetd/main.go:195 +0xe3b
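
	Note: kindnetd retries the node list a fixed number of times and then panics (main.go:195 above), so the Exited kindnet-cni container with attempt count 4 in the status table is the expected behaviour while 10.96.0.1:443 stays unreachable; the kubelet keeps restarting it with back-off until the control plane recovers. An illustrative follow-up once the apiserver answers again (not part of the captured run), using the pod name from the table:

	    kubectl --context ha-683480 -n kube-system get pod kindnet-zxhbp -o wide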
	
	
	==> kube-apiserver [5c7cd9a2289254078ecf194aacfa4c616d98f7b8884a5f87f73c228094daa397] <==
	E0603 11:14:42.950275       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.IngressClass: failed to list *v1.IngressClass: etcdserver: request timed out
	I0603 11:14:42.951277       1 trace.go:236] Trace[469744409]: "List" accept:application/vnd.kubernetes.protobuf, */*,audit-id:356f145a-37bf-4467-b113-cd579d7681e1,client:127.0.0.1,api-group:admissionregistration.k8s.io,api-version:v1,name:,subresource:,namespace:,protocol:HTTP/2.0,resource:validatingwebhookconfigurations,scope:cluster,url:/apis/admissionregistration.k8s.io/v1/validatingwebhookconfigurations,user-agent:kube-apiserver/v1.30.1 (linux/amd64) kubernetes/6911225,verb:LIST (03-Jun-2024 11:14:35.043) (total time: 7907ms):
	Trace[469744409]: ["List(recursive=true) etcd3" audit-id:356f145a-37bf-4467-b113-cd579d7681e1,key:/validatingwebhookconfigurations,resourceVersion:0,resourceVersionMatch:,limit:500,continue: 7907ms (11:14:35.043)]
	Trace[469744409]: [7.907383568s] [7.907383568s] END
	I0603 11:14:42.951460       1 trace.go:236] Trace[1492106004]: "List" accept:application/vnd.kubernetes.protobuf, */*,audit-id:4c78be17-1924-4d5f-af82-5e226b3cc3e7,client:127.0.0.1,api-group:,api-version:v1,name:,subresource:,namespace:kube-system,protocol:HTTP/2.0,resource:configmaps,scope:namespace,url:/api/v1/namespaces/kube-system/configmaps,user-agent:kube-apiserver/v1.30.1 (linux/amd64) kubernetes/6911225,verb:LIST (03-Jun-2024 11:14:33.862) (total time: 9088ms):
	Trace[1492106004]: ["List(recursive=true) etcd3" audit-id:4c78be17-1924-4d5f-af82-5e226b3cc3e7,key:/configmaps/kube-system,resourceVersion:0,resourceVersionMatch:,limit:500,continue: 9088ms (11:14:33.862)]
	Trace[1492106004]: [9.088786481s] [9.088786481s] END
	I0603 11:14:42.951606       1 trace.go:236] Trace[1540146982]: "List" accept:application/vnd.kubernetes.protobuf, */*,audit-id:adb67e6d-3a68-42a8-9f35-2ed5b8091c52,client:127.0.0.1,api-group:,api-version:v1,name:,subresource:,namespace:,protocol:HTTP/2.0,resource:secrets,scope:cluster,url:/api/v1/secrets,user-agent:kube-apiserver/v1.30.1 (linux/amd64) kubernetes/6911225,verb:LIST (03-Jun-2024 11:14:34.862) (total time: 8088ms):
	Trace[1540146982]: ["List(recursive=true) etcd3" audit-id:adb67e6d-3a68-42a8-9f35-2ed5b8091c52,key:/secrets,resourceVersion:0,resourceVersionMatch:,limit:500,continue: 8088ms (11:14:34.862)]
	Trace[1540146982]: [8.088877107s] [8.088877107s] END
	W0603 11:14:42.951900       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Secret: etcdserver: request timed out
	E0603 11:14:42.951940       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Secret: failed to list *v1.Secret: etcdserver: request timed out
	W0603 11:14:42.952034       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ValidatingWebhookConfiguration: etcdserver: request timed out
	E0603 11:14:42.952045       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ValidatingWebhookConfiguration: failed to list *v1.ValidatingWebhookConfiguration: etcdserver: request timed out
	W0603 11:14:42.952090       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: etcdserver: request timed out
	E0603 11:14:42.952099       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: etcdserver: request timed out
	E0603 11:14:49.923829       1 status.go:71] apiserver received an error that is not an metav1.Status: rpctypes.EtcdError{code:0xe, desc:"etcdserver: request timed out"}: etcdserver: request timed out
	E0603 11:14:49.923925       1 status.go:71] apiserver received an error that is not an metav1.Status: rpctypes.EtcdError{code:0xe, desc:"etcdserver: request timed out"}: etcdserver: request timed out
	I0603 11:14:49.924123       1 trace.go:236] Trace[289112613]: "Get" accept:application/vnd.kubernetes.protobuf, */*,audit-id:2dcac492-35ff-4cd2-9f1c-85679d52c7fd,client:127.0.0.1,api-group:scheduling.k8s.io,api-version:v1,name:system-node-critical,subresource:,namespace:,protocol:HTTP/2.0,resource:priorityclasses,scope:resource,url:/apis/scheduling.k8s.io/v1/priorityclasses/system-node-critical,user-agent:kube-apiserver/v1.30.1 (linux/amd64) kubernetes/6911225,verb:GET (03-Jun-2024 11:14:35.923) (total time: 14000ms):
	Trace[289112613]: [14.000433839s] [14.000433839s] END
	W0603 11:14:49.924617       1 storage_scheduling.go:106] unable to get PriorityClass system-node-critical: etcdserver: request timed out. Retrying...
	F0603 11:14:49.924678       1 hooks.go:203] PostStartHook "scheduling/bootstrap-system-priority-classes" failed: unable to add default system priority classes: timed out waiting for the condition
	I0603 11:14:49.953484       1 trace.go:236] Trace[755253275]: "List" accept:application/vnd.kubernetes.protobuf, */*,audit-id:375042b3-c9ff-49dc-ba0f-31986118508e,client:127.0.0.1,api-group:rbac.authorization.k8s.io,api-version:v1,name:,subresource:,namespace:,protocol:HTTP/2.0,resource:clusterroles,scope:cluster,url:/apis/rbac.authorization.k8s.io/v1/clusterroles,user-agent:kube-apiserver/v1.30.1 (linux/amd64) kubernetes/6911225,verb:LIST (03-Jun-2024 11:14:35.925) (total time: 14015ms):
	Trace[755253275]: ["List(recursive=true) etcd3" audit-id:375042b3-c9ff-49dc-ba0f-31986118508e,key:/clusterroles,resourceVersion:,resourceVersionMatch:,limit:0,continue: 14027ms (11:14:35.925)]
	Trace[755253275]: [14.015474341s] [14.015474341s] END
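
	Note: the F-level "PostStartHook \"scheduling/bootstrap-system-priority-classes\" failed" line above is fatal: once that hook cannot reach etcd ("etcdserver: request timed out"), the kube-apiserver process exits, which matches the Exited kube-apiserver container (attempt 4) in the status table and explains the localhost:8443 / 192.168.39.116:8443 connection refusals seen by the other components. Illustrative way to read the same log directly on the node (not part of the captured run), using the truncated container ID from the table:

	    sudo crictl logs 5c7cd9a228925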
	
	
	==> kube-controller-manager [9034d276d18e7ad0470a79b0643e03089b4cfa18ddd108b2966e84511a0a8276] <==
	I0603 11:09:09.767181       1 serving.go:380] Generated self-signed cert in-memory
	I0603 11:09:10.146137       1 controllermanager.go:189] "Starting" version="v1.30.1"
	I0603 11:09:10.146233       1 controllermanager.go:191] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0603 11:09:10.147699       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0603 11:09:10.148507       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0603 11:09:10.149450       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0603 11:09:10.149573       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	E0603 11:09:30.500169       1 controllermanager.go:234] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.168.39.116:8443/healthz\": dial tcp 192.168.39.116:8443: connect: connection refused"
	
	
	==> kube-controller-manager [f11bba0fe671eec93d2ed313c2be83ba1241f460d7349102758825c301c05c94] <==
	W0603 11:14:53.009100       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.NetworkPolicy: Get "https://192.168.39.116:8443/apis/networking.k8s.io/v1/networkpolicies?resourceVersion=2775": dial tcp 192.168.39.116:8443: connect: connection refused
	E0603 11:14:53.009230       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.NetworkPolicy: failed to list *v1.NetworkPolicy: Get "https://192.168.39.116:8443/apis/networking.k8s.io/v1/networkpolicies?resourceVersion=2775": dial tcp 192.168.39.116:8443: connect: connection refused
	W0603 11:14:53.174507       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: Get "https://192.168.39.116:8443/apis/storage.k8s.io/v1/csistoragecapacities?resourceVersion=2763": dial tcp 192.168.39.116:8443: connect: connection refused
	E0603 11:14:53.174605       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: Get "https://192.168.39.116:8443/apis/storage.k8s.io/v1/csistoragecapacities?resourceVersion=2763": dial tcp 192.168.39.116:8443: connect: connection refused
	W0603 11:14:53.407385       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: Get "https://192.168.39.116:8443/apis/apiextensions.k8s.io/v1/customresourcedefinitions?resourceVersion=2758": dial tcp 192.168.39.116:8443: connect: connection refused
	E0603 11:14:53.407481       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: Get "https://192.168.39.116:8443/apis/apiextensions.k8s.io/v1/customresourcedefinitions?resourceVersion=2758": dial tcp 192.168.39.116:8443: connect: connection refused
	W0603 11:14:54.009246       1 client_builder_dynamic.go:197] get or create service account failed: Get "https://192.168.39.116:8443/api/v1/namespaces/kube-system/serviceaccounts/node-controller": dial tcp 192.168.39.116:8443: connect: connection refused
	E0603 11:14:54.009356       1 node_lifecycle_controller.go:715] "Failed while getting a Node to retry updating node health. Probably Node was deleted" logger="node-lifecycle-controller" node="ha-683480-m02"
	E0603 11:14:54.009376       1 node_lifecycle_controller.go:720] "Update health of Node from Controller error, Skipping - no pods will be evicted" err="Get \"https://192.168.39.116:8443/api/v1/nodes/ha-683480-m02\": failed to get token for kube-system/node-controller: timed out waiting for the condition" logger="node-lifecycle-controller" node=""
	W0603 11:14:55.633777       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.DaemonSet: Get "https://192.168.39.116:8443/apis/apps/v1/daemonsets?resourceVersion=2762": dial tcp 192.168.39.116:8443: connect: connection refused
	E0603 11:14:55.633845       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.DaemonSet: failed to list *v1.DaemonSet: Get "https://192.168.39.116:8443/apis/apps/v1/daemonsets?resourceVersion=2762": dial tcp 192.168.39.116:8443: connect: connection refused
	W0603 11:14:55.806500       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ServiceAccount: Get "https://192.168.39.116:8443/api/v1/serviceaccounts?resourceVersion=2681": dial tcp 192.168.39.116:8443: connect: connection refused
	E0603 11:14:55.806563       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ServiceAccount: failed to list *v1.ServiceAccount: Get "https://192.168.39.116:8443/api/v1/serviceaccounts?resourceVersion=2681": dial tcp 192.168.39.116:8443: connect: connection refused
	W0603 11:14:56.926484       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://192.168.39.116:8443/api/v1/services?resourceVersion=2774": dial tcp 192.168.39.116:8443: connect: connection refused
	E0603 11:14:56.926590       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://192.168.39.116:8443/api/v1/services?resourceVersion=2774": dial tcp 192.168.39.116:8443: connect: connection refused
	E0603 11:14:56.964637       1 gc_controller.go:153] "Failed to get node" err="node \"ha-683480-m03\" not found" logger="pod-garbage-collector-controller" node="ha-683480-m03"
	E0603 11:14:56.964751       1 gc_controller.go:153] "Failed to get node" err="node \"ha-683480-m03\" not found" logger="pod-garbage-collector-controller" node="ha-683480-m03"
	E0603 11:14:56.964777       1 gc_controller.go:153] "Failed to get node" err="node \"ha-683480-m03\" not found" logger="pod-garbage-collector-controller" node="ha-683480-m03"
	E0603 11:14:56.964800       1 gc_controller.go:153] "Failed to get node" err="node \"ha-683480-m03\" not found" logger="pod-garbage-collector-controller" node="ha-683480-m03"
	E0603 11:14:56.964832       1 gc_controller.go:153] "Failed to get node" err="node \"ha-683480-m03\" not found" logger="pod-garbage-collector-controller" node="ha-683480-m03"
	W0603 11:14:56.965381       1 client_builder_dynamic.go:197] get or create service account failed: Get "https://192.168.39.116:8443/api/v1/namespaces/kube-system/serviceaccounts/pod-garbage-collector": dial tcp 192.168.39.116:8443: connect: connection refused
	W0603 11:14:57.466506       1 client_builder_dynamic.go:197] get or create service account failed: Get "https://192.168.39.116:8443/api/v1/namespaces/kube-system/serviceaccounts/pod-garbage-collector": dial tcp 192.168.39.116:8443: connect: connection refused
	W0603 11:14:58.467450       1 client_builder_dynamic.go:197] get or create service account failed: Get "https://192.168.39.116:8443/api/v1/namespaces/kube-system/serviceaccounts/pod-garbage-collector": dial tcp 192.168.39.116:8443: connect: connection refused
	W0603 11:14:58.510851       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ValidatingAdmissionPolicyBinding: Get "https://192.168.39.116:8443/apis/admissionregistration.k8s.io/v1/validatingadmissionpolicybindings?resourceVersion=2759": dial tcp 192.168.39.116:8443: connect: connection refused
	E0603 11:14:58.510907       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ValidatingAdmissionPolicyBinding: failed to list *v1.ValidatingAdmissionPolicyBinding: Get "https://192.168.39.116:8443/apis/admissionregistration.k8s.io/v1/validatingadmissionpolicybindings?resourceVersion=2759": dial tcp 192.168.39.116:8443: connect: connection refused
	
	
	==> kube-proxy [48e4f287c203959b7515afda7bbc9f297b67f159d98c275d36cabdf2d658267e] <==
	E0603 11:09:24.882085       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-683480\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0603 11:09:34.097467       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-683480\": dial tcp 192.168.39.254:8443: connect: no route to host"
	I0603 11:09:52.259753       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.116"]
	I0603 11:09:52.344108       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0603 11:09:52.345840       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0603 11:09:52.345932       1 server_linux.go:165] "Using iptables Proxier"
	I0603 11:09:52.355300       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0603 11:09:52.355568       1 server.go:872] "Version info" version="v1.30.1"
	I0603 11:09:52.355614       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0603 11:09:52.357919       1 config.go:319] "Starting node config controller"
	I0603 11:09:52.358047       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0603 11:09:52.359612       1 config.go:192] "Starting service config controller"
	I0603 11:09:52.359646       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0603 11:09:52.359672       1 config.go:101] "Starting endpoint slice config controller"
	I0603 11:09:52.359677       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0603 11:09:52.459322       1 shared_informer.go:320] Caches are synced for node config
	I0603 11:09:52.460480       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0603 11:09:52.460706       1 shared_informer.go:320] Caches are synced for service config
	E0603 11:14:06.033560       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?allowWatchBookmarks=true&labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=2728&timeout=6m34s&timeoutSeconds=394&watch=true": dial tcp 192.168.39.254:8443: connect: no route to host
	E0603 11:14:06.033916       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?allowWatchBookmarks=true&fieldSelector=metadata.name%!D(MISSING)ha-683480&resourceVersion=2684&timeout=6m29s&timeoutSeconds=389&watch=true": dial tcp 192.168.39.254:8443: connect: no route to host
	E0603 11:14:27.540130       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?allowWatchBookmarks=true&labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=2734&timeout=7m55s&timeoutSeconds=475&watch=true": dial tcp 192.168.39.254:8443: connect: no route to host
	W0603 11:14:39.825743       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-683480&resourceVersion=2684": dial tcp 192.168.39.254:8443: connect: no route to host
	E0603 11:14:39.826363       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-683480&resourceVersion=2684": dial tcp 192.168.39.254:8443: connect: no route to host
	W0603 11:14:42.899052       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=2728": dial tcp 192.168.39.254:8443: connect: no route to host
	E0603 11:14:42.899269       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=2728": dial tcp 192.168.39.254:8443: connect: no route to host
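
	Note: this kube-proxy instance reaches the control plane through control-plane.minikube.internal:8443, which resolves to the kube-vip virtual IP 192.168.39.254 (served by kube-vip-ha-683480 in the status table). "no route to host" on that address indicates the VIP is not being advertised while the control-plane nodes are down, so these watches fail in the same way as the direct 192.168.39.116:8443 errors above; the literal "%!s(MISSING)" fragments are part of the original kube-proxy log lines, not transcription damage. Illustrative checks from the VM (not part of the captured run):

	    getent hosts control-plane.minikube.internal
	    curl -sk https://192.168.39.254:8443/healthz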
	
	
	==> kube-proxy [bcb102231e3a6bc3ea0cc39665baaebb0a97c42874b6cd34e86c04e87532df4f] <==
	E0603 11:06:17.493811       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1958": dial tcp 192.168.39.254:8443: connect: no route to host
	W0603 11:06:20.563217       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1962": dial tcp 192.168.39.254:8443: connect: no route to host
	E0603 11:06:20.563386       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1962": dial tcp 192.168.39.254:8443: connect: no route to host
	W0603 11:06:20.563465       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1958": dial tcp 192.168.39.254:8443: connect: no route to host
	E0603 11:06:20.563508       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1958": dial tcp 192.168.39.254:8443: connect: no route to host
	W0603 11:06:20.563799       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-683480&resourceVersion=2006": dial tcp 192.168.39.254:8443: connect: no route to host
	E0603 11:06:20.563929       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-683480&resourceVersion=2006": dial tcp 192.168.39.254:8443: connect: no route to host
	W0603 11:06:26.705898       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-683480&resourceVersion=2006": dial tcp 192.168.39.254:8443: connect: no route to host
	W0603 11:06:26.706300       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1962": dial tcp 192.168.39.254:8443: connect: no route to host
	E0603 11:06:26.706388       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-683480&resourceVersion=2006": dial tcp 192.168.39.254:8443: connect: no route to host
	E0603 11:06:26.707491       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1962": dial tcp 192.168.39.254:8443: connect: no route to host
	W0603 11:06:26.707719       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1958": dial tcp 192.168.39.254:8443: connect: no route to host
	E0603 11:06:26.708260       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1958": dial tcp 192.168.39.254:8443: connect: no route to host
	W0603 11:06:35.922579       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1958": dial tcp 192.168.39.254:8443: connect: no route to host
	E0603 11:06:35.922652       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1958": dial tcp 192.168.39.254:8443: connect: no route to host
	W0603 11:06:38.993673       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1962": dial tcp 192.168.39.254:8443: connect: no route to host
	E0603 11:06:38.993906       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1962": dial tcp 192.168.39.254:8443: connect: no route to host
	W0603 11:06:42.065889       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-683480&resourceVersion=2006": dial tcp 192.168.39.254:8443: connect: no route to host
	E0603 11:06:42.066121       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-683480&resourceVersion=2006": dial tcp 192.168.39.254:8443: connect: no route to host
	W0603 11:06:54.354677       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1962": dial tcp 192.168.39.254:8443: connect: no route to host
	E0603 11:06:54.355070       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1962": dial tcp 192.168.39.254:8443: connect: no route to host
	W0603 11:06:54.355222       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1958": dial tcp 192.168.39.254:8443: connect: no route to host
	E0603 11:06:54.355344       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1958": dial tcp 192.168.39.254:8443: connect: no route to host
	W0603 11:07:00.497526       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-683480&resourceVersion=2006": dial tcp 192.168.39.254:8443: connect: no route to host
	E0603 11:07:00.497586       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-683480&resourceVersion=2006": dial tcp 192.168.39.254:8443: connect: no route to host
	
	
	==> kube-scheduler [031c8a2316fc402ab581c065b6ef53496a23534ae41d34c7fb6e7ff35cb3260d] <==
	E0603 11:14:28.521147       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0603 11:14:29.746206       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0603 11:14:29.746267       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0603 11:14:30.779641       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0603 11:14:30.779691       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0603 11:14:31.006420       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0603 11:14:31.006469       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0603 11:14:31.114094       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0603 11:14:31.114183       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0603 11:14:31.794386       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0603 11:14:31.794416       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0603 11:14:33.308886       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0603 11:14:33.309044       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0603 11:14:35.751522       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0603 11:14:35.751573       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0603 11:14:37.961889       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0603 11:14:37.961943       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0603 11:14:38.502484       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0603 11:14:38.502584       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0603 11:14:39.365347       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0603 11:14:39.365439       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0603 11:14:40.609141       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0603 11:14:40.609234       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0603 11:14:57.534741       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: Get "https://192.168.39.116:8443/api/v1/persistentvolumeclaims?resourceVersion=2785": dial tcp 192.168.39.116:8443: connect: connection refused
	E0603 11:14:57.534882       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: Get "https://192.168.39.116:8443/api/v1/persistentvolumeclaims?resourceVersion=2785": dial tcp 192.168.39.116:8443: connect: connection refused
	
	
	==> kube-scheduler [c282307764128f62fdee736d5e1ecddfbca0ae7ae2f78b7a78cbdb2dcede8556] <==
	W0603 11:07:23.250863       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0603 11:07:23.250961       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0603 11:07:23.440507       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0603 11:07:23.440556       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0603 11:07:23.850822       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0603 11:07:23.850873       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0603 11:07:23.881110       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0603 11:07:23.881156       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0603 11:07:24.137270       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0603 11:07:24.137360       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0603 11:07:24.562679       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0603 11:07:24.562728       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0603 11:07:24.579219       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0603 11:07:24.579267       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0603 11:07:24.588574       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0603 11:07:24.588662       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0603 11:07:24.778888       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0603 11:07:24.779044       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0603 11:07:24.957511       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0603 11:07:24.957601       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0603 11:07:24.991872       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0603 11:07:24.991960       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0603 11:07:26.347620       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	I0603 11:07:26.347788       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	E0603 11:07:26.347875       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Jun 03 11:14:51 ha-683480 kubelet[1378]: I0603 11:14:51.088713    1378 scope.go:117] "RemoveContainer" containerID="6e5e3d5bc9fc79b80086f615283ff566f7ed37106ad7b4da30b519ce27777dce"
	Jun 03 11:14:51 ha-683480 kubelet[1378]: E0603 11:14:51.089215    1378 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(a410a98d-73a7-434b-88ce-575c300b2807)\"" pod="kube-system/storage-provisioner" podUID="a410a98d-73a7-434b-88ce-575c300b2807"
	Jun 03 11:14:51 ha-683480 kubelet[1378]: I0603 11:14:51.602145    1378 scope.go:117] "RemoveContainer" containerID="5c7cd9a2289254078ecf194aacfa4c616d98f7b8884a5f87f73c228094daa397"
	Jun 03 11:14:51 ha-683480 kubelet[1378]: E0603 11:14:51.602747    1378 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver pod=kube-apiserver-ha-683480_kube-system(b448fd1c84d729fa6b033c44220aea0b)\"" pod="kube-system/kube-apiserver-ha-683480" podUID="b448fd1c84d729fa6b033c44220aea0b"
	Jun 03 11:14:52 ha-683480 kubelet[1378]: W0603 11:14:52.113450    1378 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-683480&resourceVersion=2613": dial tcp 192.168.39.254:8443: connect: no route to host
	Jun 03 11:14:52 ha-683480 kubelet[1378]: I0603 11:14:52.113827    1378 status_manager.go:853] "Failed to get status for pod" podUID="a410a98d-73a7-434b-88ce-575c300b2807" pod="kube-system/storage-provisioner" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/storage-provisioner\": dial tcp 192.168.39.254:8443: connect: no route to host"
	Jun 03 11:14:52 ha-683480 kubelet[1378]: E0603 11:14:52.113891    1378 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-683480?timeout=10s\": dial tcp 192.168.39.254:8443: connect: no route to host" interval="7s"
	Jun 03 11:14:52 ha-683480 kubelet[1378]: E0603 11:14:52.113947    1378 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"ha-683480\": Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-683480?timeout=10s\": dial tcp 192.168.39.254:8443: connect: no route to host"
	Jun 03 11:14:52 ha-683480 kubelet[1378]: W0603 11:14:52.113744    1378 reflector.go:547] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: Get "https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%!D(MISSING)coredns&resourceVersion=2592": dial tcp 192.168.39.254:8443: connect: no route to host
	Jun 03 11:14:52 ha-683480 kubelet[1378]: E0603 11:14:52.115169    1378 reflector.go:150] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get "https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%!D(MISSING)coredns&resourceVersion=2592": dial tcp 192.168.39.254:8443: connect: no route to host
	Jun 03 11:14:52 ha-683480 kubelet[1378]: E0603 11:14:52.115301    1378 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-683480&resourceVersion=2613": dial tcp 192.168.39.254:8443: connect: no route to host
	Jun 03 11:14:55 ha-683480 kubelet[1378]: E0603 11:14:55.185439    1378 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"ha-683480\": Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-683480?timeout=10s\": dial tcp 192.168.39.254:8443: connect: no route to host"
	Jun 03 11:14:55 ha-683480 kubelet[1378]: W0603 11:14:55.185555    1378 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?resourceVersion=2751": dial tcp 192.168.39.254:8443: connect: no route to host
	Jun 03 11:14:55 ha-683480 kubelet[1378]: E0603 11:14:55.185619    1378 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?resourceVersion=2751": dial tcp 192.168.39.254:8443: connect: no route to host
	Jun 03 11:14:55 ha-683480 kubelet[1378]: W0603 11:14:55.185683    1378 reflector.go:547] object-"default"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://control-plane.minikube.internal:8443/api/v1/namespaces/default/configmaps?fieldSelector=metadata.name%!D(MISSING)kube-root-ca.crt&resourceVersion=2565": dial tcp 192.168.39.254:8443: connect: no route to host
	Jun 03 11:14:55 ha-683480 kubelet[1378]: E0603 11:14:55.185708    1378 reflector.go:150] object-"default"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get "https://control-plane.minikube.internal:8443/api/v1/namespaces/default/configmaps?fieldSelector=metadata.name%!D(MISSING)kube-root-ca.crt&resourceVersion=2565": dial tcp 192.168.39.254:8443: connect: no route to host
	Jun 03 11:14:55 ha-683480 kubelet[1378]: I0603 11:14:55.185435    1378 status_manager.go:853] "Failed to get status for pod" podUID="b448fd1c84d729fa6b033c44220aea0b" pod="kube-system/kube-apiserver-ha-683480" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-683480\": dial tcp 192.168.39.254:8443: connect: no route to host"
	Jun 03 11:14:55 ha-683480 kubelet[1378]: I0603 11:14:55.626712    1378 scope.go:117] "RemoveContainer" containerID="c3ea180b8216797aaf78ea5661ba3b0943d85bfcde1c3ce755f4e62582ab5ecf"
	Jun 03 11:14:55 ha-683480 kubelet[1378]: I0603 11:14:55.627122    1378 scope.go:117] "RemoveContainer" containerID="7d1ac2921b8b2d8f877d5a779925f18de053d8e4b9a00c1636fd342ff8281f59"
	Jun 03 11:14:55 ha-683480 kubelet[1378]: E0603 11:14:55.627370    1378 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kindnet-cni\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=kindnet-cni pod=kindnet-zxhbp_kube-system(320e315b-e189-4358-9e56-a4be7d944fae)\"" pod="kube-system/kindnet-zxhbp" podUID="320e315b-e189-4358-9e56-a4be7d944fae"
	Jun 03 11:14:55 ha-683480 kubelet[1378]: I0603 11:14:55.914446    1378 scope.go:117] "RemoveContainer" containerID="5c7cd9a2289254078ecf194aacfa4c616d98f7b8884a5f87f73c228094daa397"
	Jun 03 11:14:55 ha-683480 kubelet[1378]: E0603 11:14:55.914859    1378 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver pod=kube-apiserver-ha-683480_kube-system(b448fd1c84d729fa6b033c44220aea0b)\"" pod="kube-system/kube-apiserver-ha-683480" podUID="b448fd1c84d729fa6b033c44220aea0b"
	Jun 03 11:14:58 ha-683480 kubelet[1378]: E0603 11:14:58.257370    1378 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"ha-683480\": Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-683480?timeout=10s\": dial tcp 192.168.39.254:8443: connect: no route to host"
	Jun 03 11:14:58 ha-683480 kubelet[1378]: E0603 11:14:58.257341    1378 event.go:368] "Unable to write event (may retry after sleeping)" err="Patch \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/events/storage-provisioner.17d579fe8d9dc60b\": dial tcp 192.168.39.254:8443: connect: no route to host" event="&Event{ObjectMeta:{storage-provisioner.17d579fe8d9dc60b  kube-system   2370 0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:storage-provisioner,UID:a410a98d-73a7-434b-88ce-575c300b2807,APIVersion:v1,ResourceVersion:444,FieldPath:spec.containers{storage-provisioner},},Reason:BackOff,Message:Back-off restarting failed container storage-provisioner in pod storage-provisioner_kube-system(a410a98d-73a7-434b-88ce-575c300b2807),Source:EventSource{Component:kubelet,Host:ha-683480,},FirstTimestamp:2024-06-03 11:09:27 +0000 UTC,LastTimestamp:2024-06-03 11:12:20.838793647 +0000 UTC m=+920.876148375,
Count:6,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ha-683480,}"
	Jun 03 11:14:58 ha-683480 kubelet[1378]: I0603 11:14:58.257511    1378 status_manager.go:853] "Failed to get status for pod" podUID="a410a98d-73a7-434b-88ce-575c300b2807" pod="kube-system/storage-provisioner" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/storage-provisioner\": dial tcp 192.168.39.254:8443: connect: no route to host"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0603 11:14:57.776347   34689 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/19008-7755/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
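A note on the stderr block above: Go's bufio.Scanner caps a single token (one line) at 64 KiB by default, so a sufficiently long line in lastStart.txt makes the log collector bail out with "token too long". A minimal, hypothetical sketch (not minikube's code; the file path is purely illustrative) of reading such a file with a larger scanner buffer:

	package main

	import (
		"bufio"
		"fmt"
		"os"
	)

	func main() {
		// Hypothetical path; stands in for the lastStart.txt mentioned above.
		f, err := os.Open("lastStart.txt")
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		defer f.Close()

		sc := bufio.NewScanner(f)
		// Default MaxScanTokenSize is 64 KiB; a longer line aborts Scan with
		// "bufio.Scanner: token too long". Raise the limit to, say, 10 MiB.
		sc.Buffer(make([]byte, 0, 64*1024), 10*1024*1024)
		for sc.Scan() {
			fmt.Println(sc.Text())
		}
		if err := sc.Err(); err != nil {
			fmt.Fprintln(os.Stderr, err)
		}
	}
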
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-683480 -n ha-683480
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-683480 -n ha-683480: exit status 2 (213.327069ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "ha-683480" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/StopCluster (172.40s)
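For context on the post-mortem check above: `minikube status --format={{.APIServer}}` renders a Go text/template against the node's status, which is why the output is just "Stopped", and the non-zero exit (noted as "may be ok" by the helper) signals the stopped component rather than a command error. A small illustrative sketch of that rendering style, using an assumed Status struct (field names are assumptions, not minikube's actual types):

	package main

	import (
		"os"
		"text/template"
	)

	// Status is an assumed stand-in for the per-node status the template sees.
	type Status struct {
		Host      string
		Kubelet   string
		APIServer string
	}

	func main() {
		// The --format flag value is parsed as a Go text/template.
		tmpl := template.Must(template.New("status").Parse("{{.APIServer}}\n"))
		st := Status{Host: "Running", Kubelet: "Running", APIServer: "Stopped"}
		if err := tmpl.Execute(os.Stdout, st); err != nil {
			os.Exit(1)
		}
		// Prints: Stopped
	}
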

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (311.88s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-505550
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-505550
E0603 11:30:19.215426   15028 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/functional-835483/client.crt: no such file or directory
multinode_test.go:321: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p multinode-505550: exit status 82 (2m2.669060387s)

                                                
                                                
-- stdout --
	* Stopping node "multinode-505550-m03"  ...
	* Stopping node "multinode-505550-m02"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
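The GUEST_STOP_TIMEOUT above suggests the stop kept retrying until a deadline while the VM still reported "Running". A generic, purely illustrative sketch of that poll-until-stopped-or-timeout pattern (not minikube's actual driver code; the helper names are assumptions):

	package main

	import (
		"context"
		"fmt"
		"time"
	)

	// getState is a hypothetical stand-in for a driver call that reports the
	// VM's current state, e.g. "Running" or "Stopped".
	func getState() string { return "Running" }

	// waitForStop polls until the VM reports "Stopped" or the context expires.
	func waitForStop(ctx context.Context) error {
		ticker := time.NewTicker(2 * time.Second)
		defer ticker.Stop()
		for {
			if getState() == "Stopped" {
				return nil
			}
			select {
			case <-ctx.Done():
				return fmt.Errorf("unable to stop vm, current state %q: %w", getState(), ctx.Err())
			case <-ticker.C:
			}
		}
	}

	func main() {
		ctx, cancel := context.WithTimeout(context.Background(), 6*time.Second)
		defer cancel()
		if err := waitForStop(ctx); err != nil {
			fmt.Println("stop timed out:", err)
		}
	}
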
multinode_test.go:323: failed to run minikube stop. args "out/minikube-linux-amd64 node list -p multinode-505550" : exit status 82
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-505550 --wait=true -v=8 --alsologtostderr
E0603 11:32:12.037627   15028 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/addons-926744/client.crt: no such file or directory
E0603 11:33:22.259988   15028 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/functional-835483/client.crt: no such file or directory
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-505550 --wait=true -v=8 --alsologtostderr: (3m6.946253167s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-505550
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-505550 -n multinode-505550
helpers_test.go:244: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/RestartKeepsNodes]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p multinode-505550 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p multinode-505550 logs -n 25: (1.547779633s)
helpers_test.go:252: TestMultiNode/serial/RestartKeepsNodes logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |     Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| ssh     | multinode-505550 ssh -n                                                                 | multinode-505550 | jenkins | v1.33.1 | 03 Jun 24 11:27 UTC | 03 Jun 24 11:27 UTC |
	|         | multinode-505550-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-505550 cp multinode-505550-m02:/home/docker/cp-test.txt                       | multinode-505550 | jenkins | v1.33.1 | 03 Jun 24 11:27 UTC | 03 Jun 24 11:27 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile3202875871/001/cp-test_multinode-505550-m02.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-505550 ssh -n                                                                 | multinode-505550 | jenkins | v1.33.1 | 03 Jun 24 11:27 UTC | 03 Jun 24 11:27 UTC |
	|         | multinode-505550-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-505550 cp multinode-505550-m02:/home/docker/cp-test.txt                       | multinode-505550 | jenkins | v1.33.1 | 03 Jun 24 11:27 UTC | 03 Jun 24 11:27 UTC |
	|         | multinode-505550:/home/docker/cp-test_multinode-505550-m02_multinode-505550.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-505550 ssh -n                                                                 | multinode-505550 | jenkins | v1.33.1 | 03 Jun 24 11:27 UTC | 03 Jun 24 11:27 UTC |
	|         | multinode-505550-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-505550 ssh -n multinode-505550 sudo cat                                       | multinode-505550 | jenkins | v1.33.1 | 03 Jun 24 11:27 UTC | 03 Jun 24 11:27 UTC |
	|         | /home/docker/cp-test_multinode-505550-m02_multinode-505550.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-505550 cp multinode-505550-m02:/home/docker/cp-test.txt                       | multinode-505550 | jenkins | v1.33.1 | 03 Jun 24 11:27 UTC | 03 Jun 24 11:27 UTC |
	|         | multinode-505550-m03:/home/docker/cp-test_multinode-505550-m02_multinode-505550-m03.txt |                  |         |         |                     |                     |
	| ssh     | multinode-505550 ssh -n                                                                 | multinode-505550 | jenkins | v1.33.1 | 03 Jun 24 11:27 UTC | 03 Jun 24 11:27 UTC |
	|         | multinode-505550-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-505550 ssh -n multinode-505550-m03 sudo cat                                   | multinode-505550 | jenkins | v1.33.1 | 03 Jun 24 11:27 UTC | 03 Jun 24 11:27 UTC |
	|         | /home/docker/cp-test_multinode-505550-m02_multinode-505550-m03.txt                      |                  |         |         |                     |                     |
	| cp      | multinode-505550 cp testdata/cp-test.txt                                                | multinode-505550 | jenkins | v1.33.1 | 03 Jun 24 11:27 UTC | 03 Jun 24 11:27 UTC |
	|         | multinode-505550-m03:/home/docker/cp-test.txt                                           |                  |         |         |                     |                     |
	| ssh     | multinode-505550 ssh -n                                                                 | multinode-505550 | jenkins | v1.33.1 | 03 Jun 24 11:27 UTC | 03 Jun 24 11:27 UTC |
	|         | multinode-505550-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-505550 cp multinode-505550-m03:/home/docker/cp-test.txt                       | multinode-505550 | jenkins | v1.33.1 | 03 Jun 24 11:27 UTC | 03 Jun 24 11:27 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile3202875871/001/cp-test_multinode-505550-m03.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-505550 ssh -n                                                                 | multinode-505550 | jenkins | v1.33.1 | 03 Jun 24 11:27 UTC | 03 Jun 24 11:27 UTC |
	|         | multinode-505550-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-505550 cp multinode-505550-m03:/home/docker/cp-test.txt                       | multinode-505550 | jenkins | v1.33.1 | 03 Jun 24 11:27 UTC | 03 Jun 24 11:27 UTC |
	|         | multinode-505550:/home/docker/cp-test_multinode-505550-m03_multinode-505550.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-505550 ssh -n                                                                 | multinode-505550 | jenkins | v1.33.1 | 03 Jun 24 11:27 UTC | 03 Jun 24 11:27 UTC |
	|         | multinode-505550-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-505550 ssh -n multinode-505550 sudo cat                                       | multinode-505550 | jenkins | v1.33.1 | 03 Jun 24 11:27 UTC | 03 Jun 24 11:27 UTC |
	|         | /home/docker/cp-test_multinode-505550-m03_multinode-505550.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-505550 cp multinode-505550-m03:/home/docker/cp-test.txt                       | multinode-505550 | jenkins | v1.33.1 | 03 Jun 24 11:27 UTC | 03 Jun 24 11:27 UTC |
	|         | multinode-505550-m02:/home/docker/cp-test_multinode-505550-m03_multinode-505550-m02.txt |                  |         |         |                     |                     |
	| ssh     | multinode-505550 ssh -n                                                                 | multinode-505550 | jenkins | v1.33.1 | 03 Jun 24 11:27 UTC | 03 Jun 24 11:27 UTC |
	|         | multinode-505550-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-505550 ssh -n multinode-505550-m02 sudo cat                                   | multinode-505550 | jenkins | v1.33.1 | 03 Jun 24 11:27 UTC | 03 Jun 24 11:27 UTC |
	|         | /home/docker/cp-test_multinode-505550-m03_multinode-505550-m02.txt                      |                  |         |         |                     |                     |
	| node    | multinode-505550 node stop m03                                                          | multinode-505550 | jenkins | v1.33.1 | 03 Jun 24 11:27 UTC | 03 Jun 24 11:27 UTC |
	| node    | multinode-505550 node start                                                             | multinode-505550 | jenkins | v1.33.1 | 03 Jun 24 11:28 UTC | 03 Jun 24 11:28 UTC |
	|         | m03 -v=7 --alsologtostderr                                                              |                  |         |         |                     |                     |
	| node    | list -p multinode-505550                                                                | multinode-505550 | jenkins | v1.33.1 | 03 Jun 24 11:28 UTC |                     |
	| stop    | -p multinode-505550                                                                     | multinode-505550 | jenkins | v1.33.1 | 03 Jun 24 11:28 UTC |                     |
	| start   | -p multinode-505550                                                                     | multinode-505550 | jenkins | v1.33.1 | 03 Jun 24 11:30 UTC | 03 Jun 24 11:33 UTC |
	|         | --wait=true -v=8                                                                        |                  |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                  |         |         |                     |                     |
	| node    | list -p multinode-505550                                                                | multinode-505550 | jenkins | v1.33.1 | 03 Jun 24 11:33 UTC |                     |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/06/03 11:30:32
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0603 11:30:32.461727   44162 out.go:291] Setting OutFile to fd 1 ...
	I0603 11:30:32.461955   44162 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0603 11:30:32.461964   44162 out.go:304] Setting ErrFile to fd 2...
	I0603 11:30:32.461968   44162 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0603 11:30:32.462114   44162 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19008-7755/.minikube/bin
	I0603 11:30:32.462613   44162 out.go:298] Setting JSON to false
	I0603 11:30:32.463552   44162 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":4377,"bootTime":1717409855,"procs":185,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1060-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0603 11:30:32.463606   44162 start.go:139] virtualization: kvm guest
	I0603 11:30:32.465924   44162 out.go:177] * [multinode-505550] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0603 11:30:32.467333   44162 out.go:177]   - MINIKUBE_LOCATION=19008
	I0603 11:30:32.467327   44162 notify.go:220] Checking for updates...
	I0603 11:30:32.468759   44162 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0603 11:30:32.470198   44162 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19008-7755/kubeconfig
	I0603 11:30:32.471416   44162 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19008-7755/.minikube
	I0603 11:30:32.472545   44162 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0603 11:30:32.473807   44162 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0603 11:30:32.475399   44162 config.go:182] Loaded profile config "multinode-505550": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0603 11:30:32.475515   44162 driver.go:392] Setting default libvirt URI to qemu:///system
	I0603 11:30:32.475900   44162 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 11:30:32.475942   44162 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 11:30:32.490765   44162 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43655
	I0603 11:30:32.491247   44162 main.go:141] libmachine: () Calling .GetVersion
	I0603 11:30:32.491953   44162 main.go:141] libmachine: Using API Version  1
	I0603 11:30:32.492000   44162 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 11:30:32.492299   44162 main.go:141] libmachine: () Calling .GetMachineName
	I0603 11:30:32.492488   44162 main.go:141] libmachine: (multinode-505550) Calling .DriverName
	I0603 11:30:32.528449   44162 out.go:177] * Using the kvm2 driver based on existing profile
	I0603 11:30:32.529715   44162 start.go:297] selected driver: kvm2
	I0603 11:30:32.529738   44162 start.go:901] validating driver "kvm2" against &{Name:multinode-505550 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubern
etesVersion:v1.30.1 ClusterName:multinode-505550 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.232 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.227 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.172 Port:0 KubernetesVersion:v1.30.1 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ing
ress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryM
irror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0603 11:30:32.529900   44162 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0603 11:30:32.530255   44162 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0603 11:30:32.530357   44162 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19008-7755/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0603 11:30:32.544545   44162 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0603 11:30:32.545222   44162 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0603 11:30:32.545275   44162 cni.go:84] Creating CNI manager for ""
	I0603 11:30:32.545286   44162 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0603 11:30:32.545343   44162 start.go:340] cluster config:
	{Name:multinode-505550 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:multinode-505550 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.232 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.227 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.172 Port:0 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0603 11:30:32.545468   44162 iso.go:125] acquiring lock: {Name:mkdc8e745fc6a0fd8e502f6ad2510510ae9abf27 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0603 11:30:32.547219   44162 out.go:177] * Starting "multinode-505550" primary control-plane node in "multinode-505550" cluster
	I0603 11:30:32.548552   44162 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime crio
	I0603 11:30:32.548587   44162 preload.go:147] Found local preload: /home/jenkins/minikube-integration/19008-7755/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4
	I0603 11:30:32.548599   44162 cache.go:56] Caching tarball of preloaded images
	I0603 11:30:32.548704   44162 preload.go:173] Found /home/jenkins/minikube-integration/19008-7755/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0603 11:30:32.548718   44162 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on crio
	I0603 11:30:32.548841   44162 profile.go:143] Saving config to /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/multinode-505550/config.json ...
	I0603 11:30:32.549014   44162 start.go:360] acquireMachinesLock for multinode-505550: {Name:mk7389fd163f37e28f7d4d842bab151f7e27bc7c Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0603 11:30:32.549053   44162 start.go:364] duration metric: took 21.169µs to acquireMachinesLock for "multinode-505550"
	I0603 11:30:32.549070   44162 start.go:96] Skipping create...Using existing machine configuration
	I0603 11:30:32.549080   44162 fix.go:54] fixHost starting: 
	I0603 11:30:32.549326   44162 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 11:30:32.549359   44162 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 11:30:32.563189   44162 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46527
	I0603 11:30:32.563575   44162 main.go:141] libmachine: () Calling .GetVersion
	I0603 11:30:32.564022   44162 main.go:141] libmachine: Using API Version  1
	I0603 11:30:32.564040   44162 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 11:30:32.564392   44162 main.go:141] libmachine: () Calling .GetMachineName
	I0603 11:30:32.564570   44162 main.go:141] libmachine: (multinode-505550) Calling .DriverName
	I0603 11:30:32.564716   44162 main.go:141] libmachine: (multinode-505550) Calling .GetState
	I0603 11:30:32.566063   44162 fix.go:112] recreateIfNeeded on multinode-505550: state=Running err=<nil>
	W0603 11:30:32.566080   44162 fix.go:138] unexpected machine state, will restart: <nil>
	I0603 11:30:32.567913   44162 out.go:177] * Updating the running kvm2 "multinode-505550" VM ...
	I0603 11:30:32.569138   44162 machine.go:94] provisionDockerMachine start ...
	I0603 11:30:32.569155   44162 main.go:141] libmachine: (multinode-505550) Calling .DriverName
	I0603 11:30:32.569373   44162 main.go:141] libmachine: (multinode-505550) Calling .GetSSHHostname
	I0603 11:30:32.571598   44162 main.go:141] libmachine: (multinode-505550) DBG | domain multinode-505550 has defined MAC address 52:54:00:d9:78:ff in network mk-multinode-505550
	I0603 11:30:32.572020   44162 main.go:141] libmachine: (multinode-505550) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:78:ff", ip: ""} in network mk-multinode-505550: {Iface:virbr1 ExpiryTime:2024-06-03 12:25:37 +0000 UTC Type:0 Mac:52:54:00:d9:78:ff Iaid: IPaddr:192.168.39.232 Prefix:24 Hostname:multinode-505550 Clientid:01:52:54:00:d9:78:ff}
	I0603 11:30:32.572061   44162 main.go:141] libmachine: (multinode-505550) DBG | domain multinode-505550 has defined IP address 192.168.39.232 and MAC address 52:54:00:d9:78:ff in network mk-multinode-505550
	I0603 11:30:32.572138   44162 main.go:141] libmachine: (multinode-505550) Calling .GetSSHPort
	I0603 11:30:32.572287   44162 main.go:141] libmachine: (multinode-505550) Calling .GetSSHKeyPath
	I0603 11:30:32.572452   44162 main.go:141] libmachine: (multinode-505550) Calling .GetSSHKeyPath
	I0603 11:30:32.572574   44162 main.go:141] libmachine: (multinode-505550) Calling .GetSSHUsername
	I0603 11:30:32.572720   44162 main.go:141] libmachine: Using SSH client type: native
	I0603 11:30:32.572943   44162 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.232 22 <nil> <nil>}
	I0603 11:30:32.572958   44162 main.go:141] libmachine: About to run SSH command:
	hostname
	I0603 11:30:32.676109   44162 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-505550
	
	I0603 11:30:32.676133   44162 main.go:141] libmachine: (multinode-505550) Calling .GetMachineName
	I0603 11:30:32.676340   44162 buildroot.go:166] provisioning hostname "multinode-505550"
	I0603 11:30:32.676364   44162 main.go:141] libmachine: (multinode-505550) Calling .GetMachineName
	I0603 11:30:32.676540   44162 main.go:141] libmachine: (multinode-505550) Calling .GetSSHHostname
	I0603 11:30:32.678867   44162 main.go:141] libmachine: (multinode-505550) DBG | domain multinode-505550 has defined MAC address 52:54:00:d9:78:ff in network mk-multinode-505550
	I0603 11:30:32.679193   44162 main.go:141] libmachine: (multinode-505550) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:78:ff", ip: ""} in network mk-multinode-505550: {Iface:virbr1 ExpiryTime:2024-06-03 12:25:37 +0000 UTC Type:0 Mac:52:54:00:d9:78:ff Iaid: IPaddr:192.168.39.232 Prefix:24 Hostname:multinode-505550 Clientid:01:52:54:00:d9:78:ff}
	I0603 11:30:32.679220   44162 main.go:141] libmachine: (multinode-505550) DBG | domain multinode-505550 has defined IP address 192.168.39.232 and MAC address 52:54:00:d9:78:ff in network mk-multinode-505550
	I0603 11:30:32.679338   44162 main.go:141] libmachine: (multinode-505550) Calling .GetSSHPort
	I0603 11:30:32.679491   44162 main.go:141] libmachine: (multinode-505550) Calling .GetSSHKeyPath
	I0603 11:30:32.679656   44162 main.go:141] libmachine: (multinode-505550) Calling .GetSSHKeyPath
	I0603 11:30:32.679798   44162 main.go:141] libmachine: (multinode-505550) Calling .GetSSHUsername
	I0603 11:30:32.679941   44162 main.go:141] libmachine: Using SSH client type: native
	I0603 11:30:32.680135   44162 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.232 22 <nil> <nil>}
	I0603 11:30:32.680149   44162 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-505550 && echo "multinode-505550" | sudo tee /etc/hostname
	I0603 11:30:32.798924   44162 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-505550
	
	I0603 11:30:32.798950   44162 main.go:141] libmachine: (multinode-505550) Calling .GetSSHHostname
	I0603 11:30:32.801439   44162 main.go:141] libmachine: (multinode-505550) DBG | domain multinode-505550 has defined MAC address 52:54:00:d9:78:ff in network mk-multinode-505550
	I0603 11:30:32.801800   44162 main.go:141] libmachine: (multinode-505550) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:78:ff", ip: ""} in network mk-multinode-505550: {Iface:virbr1 ExpiryTime:2024-06-03 12:25:37 +0000 UTC Type:0 Mac:52:54:00:d9:78:ff Iaid: IPaddr:192.168.39.232 Prefix:24 Hostname:multinode-505550 Clientid:01:52:54:00:d9:78:ff}
	I0603 11:30:32.801828   44162 main.go:141] libmachine: (multinode-505550) DBG | domain multinode-505550 has defined IP address 192.168.39.232 and MAC address 52:54:00:d9:78:ff in network mk-multinode-505550
	I0603 11:30:32.801990   44162 main.go:141] libmachine: (multinode-505550) Calling .GetSSHPort
	I0603 11:30:32.802190   44162 main.go:141] libmachine: (multinode-505550) Calling .GetSSHKeyPath
	I0603 11:30:32.802322   44162 main.go:141] libmachine: (multinode-505550) Calling .GetSSHKeyPath
	I0603 11:30:32.802444   44162 main.go:141] libmachine: (multinode-505550) Calling .GetSSHUsername
	I0603 11:30:32.802582   44162 main.go:141] libmachine: Using SSH client type: native
	I0603 11:30:32.802760   44162 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.232 22 <nil> <nil>}
	I0603 11:30:32.802777   44162 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-505550' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-505550/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-505550' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0603 11:30:32.908591   44162 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0603 11:30:32.908625   44162 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19008-7755/.minikube CaCertPath:/home/jenkins/minikube-integration/19008-7755/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19008-7755/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19008-7755/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19008-7755/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19008-7755/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19008-7755/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19008-7755/.minikube}
	I0603 11:30:32.908657   44162 buildroot.go:174] setting up certificates
	I0603 11:30:32.908668   44162 provision.go:84] configureAuth start
	I0603 11:30:32.908680   44162 main.go:141] libmachine: (multinode-505550) Calling .GetMachineName
	I0603 11:30:32.908942   44162 main.go:141] libmachine: (multinode-505550) Calling .GetIP
	I0603 11:30:32.911399   44162 main.go:141] libmachine: (multinode-505550) DBG | domain multinode-505550 has defined MAC address 52:54:00:d9:78:ff in network mk-multinode-505550
	I0603 11:30:32.911779   44162 main.go:141] libmachine: (multinode-505550) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:78:ff", ip: ""} in network mk-multinode-505550: {Iface:virbr1 ExpiryTime:2024-06-03 12:25:37 +0000 UTC Type:0 Mac:52:54:00:d9:78:ff Iaid: IPaddr:192.168.39.232 Prefix:24 Hostname:multinode-505550 Clientid:01:52:54:00:d9:78:ff}
	I0603 11:30:32.911807   44162 main.go:141] libmachine: (multinode-505550) DBG | domain multinode-505550 has defined IP address 192.168.39.232 and MAC address 52:54:00:d9:78:ff in network mk-multinode-505550
	I0603 11:30:32.911911   44162 main.go:141] libmachine: (multinode-505550) Calling .GetSSHHostname
	I0603 11:30:32.913912   44162 main.go:141] libmachine: (multinode-505550) DBG | domain multinode-505550 has defined MAC address 52:54:00:d9:78:ff in network mk-multinode-505550
	I0603 11:30:32.914245   44162 main.go:141] libmachine: (multinode-505550) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:78:ff", ip: ""} in network mk-multinode-505550: {Iface:virbr1 ExpiryTime:2024-06-03 12:25:37 +0000 UTC Type:0 Mac:52:54:00:d9:78:ff Iaid: IPaddr:192.168.39.232 Prefix:24 Hostname:multinode-505550 Clientid:01:52:54:00:d9:78:ff}
	I0603 11:30:32.914272   44162 main.go:141] libmachine: (multinode-505550) DBG | domain multinode-505550 has defined IP address 192.168.39.232 and MAC address 52:54:00:d9:78:ff in network mk-multinode-505550
	I0603 11:30:32.914356   44162 provision.go:143] copyHostCerts
	I0603 11:30:32.914386   44162 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19008-7755/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19008-7755/.minikube/ca.pem
	I0603 11:30:32.914433   44162 exec_runner.go:144] found /home/jenkins/minikube-integration/19008-7755/.minikube/ca.pem, removing ...
	I0603 11:30:32.914449   44162 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19008-7755/.minikube/ca.pem
	I0603 11:30:32.914518   44162 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19008-7755/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19008-7755/.minikube/ca.pem (1082 bytes)
	I0603 11:30:32.914613   44162 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19008-7755/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19008-7755/.minikube/cert.pem
	I0603 11:30:32.914637   44162 exec_runner.go:144] found /home/jenkins/minikube-integration/19008-7755/.minikube/cert.pem, removing ...
	I0603 11:30:32.914647   44162 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19008-7755/.minikube/cert.pem
	I0603 11:30:32.914684   44162 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19008-7755/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19008-7755/.minikube/cert.pem (1123 bytes)
	I0603 11:30:32.914745   44162 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19008-7755/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19008-7755/.minikube/key.pem
	I0603 11:30:32.914768   44162 exec_runner.go:144] found /home/jenkins/minikube-integration/19008-7755/.minikube/key.pem, removing ...
	I0603 11:30:32.914778   44162 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19008-7755/.minikube/key.pem
	I0603 11:30:32.914810   44162 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19008-7755/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19008-7755/.minikube/key.pem (1679 bytes)
	I0603 11:30:32.914869   44162 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19008-7755/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19008-7755/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19008-7755/.minikube/certs/ca-key.pem org=jenkins.multinode-505550 san=[127.0.0.1 192.168.39.232 localhost minikube multinode-505550]
	I0603 11:30:33.076772   44162 provision.go:177] copyRemoteCerts
	I0603 11:30:33.076836   44162 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0603 11:30:33.076866   44162 main.go:141] libmachine: (multinode-505550) Calling .GetSSHHostname
	I0603 11:30:33.079423   44162 main.go:141] libmachine: (multinode-505550) DBG | domain multinode-505550 has defined MAC address 52:54:00:d9:78:ff in network mk-multinode-505550
	I0603 11:30:33.079711   44162 main.go:141] libmachine: (multinode-505550) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:78:ff", ip: ""} in network mk-multinode-505550: {Iface:virbr1 ExpiryTime:2024-06-03 12:25:37 +0000 UTC Type:0 Mac:52:54:00:d9:78:ff Iaid: IPaddr:192.168.39.232 Prefix:24 Hostname:multinode-505550 Clientid:01:52:54:00:d9:78:ff}
	I0603 11:30:33.079744   44162 main.go:141] libmachine: (multinode-505550) DBG | domain multinode-505550 has defined IP address 192.168.39.232 and MAC address 52:54:00:d9:78:ff in network mk-multinode-505550
	I0603 11:30:33.079909   44162 main.go:141] libmachine: (multinode-505550) Calling .GetSSHPort
	I0603 11:30:33.080106   44162 main.go:141] libmachine: (multinode-505550) Calling .GetSSHKeyPath
	I0603 11:30:33.080253   44162 main.go:141] libmachine: (multinode-505550) Calling .GetSSHUsername
	I0603 11:30:33.080374   44162 sshutil.go:53] new ssh client: &{IP:192.168.39.232 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19008-7755/.minikube/machines/multinode-505550/id_rsa Username:docker}
	I0603 11:30:33.161596   44162 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19008-7755/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0603 11:30:33.161667   44162 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0603 11:30:33.187247   44162 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19008-7755/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0603 11:30:33.187303   44162 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0603 11:30:33.211964   44162 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19008-7755/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0603 11:30:33.212024   44162 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0603 11:30:33.235945   44162 provision.go:87] duration metric: took 327.266479ms to configureAuth
	I0603 11:30:33.235970   44162 buildroot.go:189] setting minikube options for container-runtime
	I0603 11:30:33.236200   44162 config.go:182] Loaded profile config "multinode-505550": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0603 11:30:33.236286   44162 main.go:141] libmachine: (multinode-505550) Calling .GetSSHHostname
	I0603 11:30:33.238856   44162 main.go:141] libmachine: (multinode-505550) DBG | domain multinode-505550 has defined MAC address 52:54:00:d9:78:ff in network mk-multinode-505550
	I0603 11:30:33.239207   44162 main.go:141] libmachine: (multinode-505550) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:78:ff", ip: ""} in network mk-multinode-505550: {Iface:virbr1 ExpiryTime:2024-06-03 12:25:37 +0000 UTC Type:0 Mac:52:54:00:d9:78:ff Iaid: IPaddr:192.168.39.232 Prefix:24 Hostname:multinode-505550 Clientid:01:52:54:00:d9:78:ff}
	I0603 11:30:33.239240   44162 main.go:141] libmachine: (multinode-505550) DBG | domain multinode-505550 has defined IP address 192.168.39.232 and MAC address 52:54:00:d9:78:ff in network mk-multinode-505550
	I0603 11:30:33.239367   44162 main.go:141] libmachine: (multinode-505550) Calling .GetSSHPort
	I0603 11:30:33.239526   44162 main.go:141] libmachine: (multinode-505550) Calling .GetSSHKeyPath
	I0603 11:30:33.239693   44162 main.go:141] libmachine: (multinode-505550) Calling .GetSSHKeyPath
	I0603 11:30:33.239841   44162 main.go:141] libmachine: (multinode-505550) Calling .GetSSHUsername
	I0603 11:30:33.240000   44162 main.go:141] libmachine: Using SSH client type: native
	I0603 11:30:33.240205   44162 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.232 22 <nil> <nil>}
	I0603 11:30:33.240221   44162 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0603 11:32:04.030917   44162 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0603 11:32:04.030938   44162 machine.go:97] duration metric: took 1m31.461789042s to provisionDockerMachine
	I0603 11:32:04.030948   44162 start.go:293] postStartSetup for "multinode-505550" (driver="kvm2")
	I0603 11:32:04.030957   44162 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0603 11:32:04.030979   44162 main.go:141] libmachine: (multinode-505550) Calling .DriverName
	I0603 11:32:04.031326   44162 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0603 11:32:04.031348   44162 main.go:141] libmachine: (multinode-505550) Calling .GetSSHHostname
	I0603 11:32:04.034334   44162 main.go:141] libmachine: (multinode-505550) DBG | domain multinode-505550 has defined MAC address 52:54:00:d9:78:ff in network mk-multinode-505550
	I0603 11:32:04.034769   44162 main.go:141] libmachine: (multinode-505550) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:78:ff", ip: ""} in network mk-multinode-505550: {Iface:virbr1 ExpiryTime:2024-06-03 12:25:37 +0000 UTC Type:0 Mac:52:54:00:d9:78:ff Iaid: IPaddr:192.168.39.232 Prefix:24 Hostname:multinode-505550 Clientid:01:52:54:00:d9:78:ff}
	I0603 11:32:04.034797   44162 main.go:141] libmachine: (multinode-505550) DBG | domain multinode-505550 has defined IP address 192.168.39.232 and MAC address 52:54:00:d9:78:ff in network mk-multinode-505550
	I0603 11:32:04.034914   44162 main.go:141] libmachine: (multinode-505550) Calling .GetSSHPort
	I0603 11:32:04.035156   44162 main.go:141] libmachine: (multinode-505550) Calling .GetSSHKeyPath
	I0603 11:32:04.035342   44162 main.go:141] libmachine: (multinode-505550) Calling .GetSSHUsername
	I0603 11:32:04.035476   44162 sshutil.go:53] new ssh client: &{IP:192.168.39.232 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19008-7755/.minikube/machines/multinode-505550/id_rsa Username:docker}
	I0603 11:32:04.118465   44162 ssh_runner.go:195] Run: cat /etc/os-release
	I0603 11:32:04.122615   44162 command_runner.go:130] > NAME=Buildroot
	I0603 11:32:04.122632   44162 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0603 11:32:04.122643   44162 command_runner.go:130] > ID=buildroot
	I0603 11:32:04.122648   44162 command_runner.go:130] > VERSION_ID=2023.02.9
	I0603 11:32:04.122653   44162 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0603 11:32:04.122696   44162 info.go:137] Remote host: Buildroot 2023.02.9
	I0603 11:32:04.122712   44162 filesync.go:126] Scanning /home/jenkins/minikube-integration/19008-7755/.minikube/addons for local assets ...
	I0603 11:32:04.122786   44162 filesync.go:126] Scanning /home/jenkins/minikube-integration/19008-7755/.minikube/files for local assets ...
	I0603 11:32:04.122878   44162 filesync.go:149] local asset: /home/jenkins/minikube-integration/19008-7755/.minikube/files/etc/ssl/certs/150282.pem -> 150282.pem in /etc/ssl/certs
	I0603 11:32:04.122888   44162 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19008-7755/.minikube/files/etc/ssl/certs/150282.pem -> /etc/ssl/certs/150282.pem
	I0603 11:32:04.122988   44162 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0603 11:32:04.132630   44162 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/files/etc/ssl/certs/150282.pem --> /etc/ssl/certs/150282.pem (1708 bytes)
	I0603 11:32:04.155730   44162 start.go:296] duration metric: took 124.772546ms for postStartSetup
	I0603 11:32:04.155758   44162 fix.go:56] duration metric: took 1m31.606678549s for fixHost
	I0603 11:32:04.155778   44162 main.go:141] libmachine: (multinode-505550) Calling .GetSSHHostname
	I0603 11:32:04.158836   44162 main.go:141] libmachine: (multinode-505550) DBG | domain multinode-505550 has defined MAC address 52:54:00:d9:78:ff in network mk-multinode-505550
	I0603 11:32:04.159305   44162 main.go:141] libmachine: (multinode-505550) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:78:ff", ip: ""} in network mk-multinode-505550: {Iface:virbr1 ExpiryTime:2024-06-03 12:25:37 +0000 UTC Type:0 Mac:52:54:00:d9:78:ff Iaid: IPaddr:192.168.39.232 Prefix:24 Hostname:multinode-505550 Clientid:01:52:54:00:d9:78:ff}
	I0603 11:32:04.159331   44162 main.go:141] libmachine: (multinode-505550) DBG | domain multinode-505550 has defined IP address 192.168.39.232 and MAC address 52:54:00:d9:78:ff in network mk-multinode-505550
	I0603 11:32:04.159488   44162 main.go:141] libmachine: (multinode-505550) Calling .GetSSHPort
	I0603 11:32:04.159654   44162 main.go:141] libmachine: (multinode-505550) Calling .GetSSHKeyPath
	I0603 11:32:04.159816   44162 main.go:141] libmachine: (multinode-505550) Calling .GetSSHKeyPath
	I0603 11:32:04.159933   44162 main.go:141] libmachine: (multinode-505550) Calling .GetSSHUsername
	I0603 11:32:04.160090   44162 main.go:141] libmachine: Using SSH client type: native
	I0603 11:32:04.160252   44162 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.232 22 <nil> <nil>}
	I0603 11:32:04.160263   44162 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0603 11:32:04.259729   44162 main.go:141] libmachine: SSH cmd err, output: <nil>: 1717414324.241010823
	
	I0603 11:32:04.259751   44162 fix.go:216] guest clock: 1717414324.241010823
	I0603 11:32:04.259760   44162 fix.go:229] Guest: 2024-06-03 11:32:04.241010823 +0000 UTC Remote: 2024-06-03 11:32:04.15576097 +0000 UTC m=+91.728185948 (delta=85.249853ms)
	I0603 11:32:04.259784   44162 fix.go:200] guest clock delta is within tolerance: 85.249853ms
	I0603 11:32:04.259791   44162 start.go:83] releasing machines lock for "multinode-505550", held for 1m31.710727419s
	I0603 11:32:04.259811   44162 main.go:141] libmachine: (multinode-505550) Calling .DriverName
	I0603 11:32:04.260061   44162 main.go:141] libmachine: (multinode-505550) Calling .GetIP
	I0603 11:32:04.263129   44162 main.go:141] libmachine: (multinode-505550) DBG | domain multinode-505550 has defined MAC address 52:54:00:d9:78:ff in network mk-multinode-505550
	I0603 11:32:04.263509   44162 main.go:141] libmachine: (multinode-505550) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:78:ff", ip: ""} in network mk-multinode-505550: {Iface:virbr1 ExpiryTime:2024-06-03 12:25:37 +0000 UTC Type:0 Mac:52:54:00:d9:78:ff Iaid: IPaddr:192.168.39.232 Prefix:24 Hostname:multinode-505550 Clientid:01:52:54:00:d9:78:ff}
	I0603 11:32:04.263530   44162 main.go:141] libmachine: (multinode-505550) DBG | domain multinode-505550 has defined IP address 192.168.39.232 and MAC address 52:54:00:d9:78:ff in network mk-multinode-505550
	I0603 11:32:04.263910   44162 main.go:141] libmachine: (multinode-505550) Calling .DriverName
	I0603 11:32:04.264435   44162 main.go:141] libmachine: (multinode-505550) Calling .DriverName
	I0603 11:32:04.264623   44162 main.go:141] libmachine: (multinode-505550) Calling .DriverName
	I0603 11:32:04.264727   44162 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0603 11:32:04.264773   44162 main.go:141] libmachine: (multinode-505550) Calling .GetSSHHostname
	I0603 11:32:04.264869   44162 ssh_runner.go:195] Run: cat /version.json
	I0603 11:32:04.264893   44162 main.go:141] libmachine: (multinode-505550) Calling .GetSSHHostname
	I0603 11:32:04.267507   44162 main.go:141] libmachine: (multinode-505550) DBG | domain multinode-505550 has defined MAC address 52:54:00:d9:78:ff in network mk-multinode-505550
	I0603 11:32:04.267653   44162 main.go:141] libmachine: (multinode-505550) DBG | domain multinode-505550 has defined MAC address 52:54:00:d9:78:ff in network mk-multinode-505550
	I0603 11:32:04.267914   44162 main.go:141] libmachine: (multinode-505550) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:78:ff", ip: ""} in network mk-multinode-505550: {Iface:virbr1 ExpiryTime:2024-06-03 12:25:37 +0000 UTC Type:0 Mac:52:54:00:d9:78:ff Iaid: IPaddr:192.168.39.232 Prefix:24 Hostname:multinode-505550 Clientid:01:52:54:00:d9:78:ff}
	I0603 11:32:04.267942   44162 main.go:141] libmachine: (multinode-505550) DBG | domain multinode-505550 has defined IP address 192.168.39.232 and MAC address 52:54:00:d9:78:ff in network mk-multinode-505550
	I0603 11:32:04.267986   44162 main.go:141] libmachine: (multinode-505550) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:78:ff", ip: ""} in network mk-multinode-505550: {Iface:virbr1 ExpiryTime:2024-06-03 12:25:37 +0000 UTC Type:0 Mac:52:54:00:d9:78:ff Iaid: IPaddr:192.168.39.232 Prefix:24 Hostname:multinode-505550 Clientid:01:52:54:00:d9:78:ff}
	I0603 11:32:04.268005   44162 main.go:141] libmachine: (multinode-505550) DBG | domain multinode-505550 has defined IP address 192.168.39.232 and MAC address 52:54:00:d9:78:ff in network mk-multinode-505550
	I0603 11:32:04.268070   44162 main.go:141] libmachine: (multinode-505550) Calling .GetSSHPort
	I0603 11:32:04.268305   44162 main.go:141] libmachine: (multinode-505550) Calling .GetSSHKeyPath
	I0603 11:32:04.268306   44162 main.go:141] libmachine: (multinode-505550) Calling .GetSSHPort
	I0603 11:32:04.268485   44162 main.go:141] libmachine: (multinode-505550) Calling .GetSSHUsername
	I0603 11:32:04.268489   44162 main.go:141] libmachine: (multinode-505550) Calling .GetSSHKeyPath
	I0603 11:32:04.268673   44162 sshutil.go:53] new ssh client: &{IP:192.168.39.232 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19008-7755/.minikube/machines/multinode-505550/id_rsa Username:docker}
	I0603 11:32:04.268688   44162 main.go:141] libmachine: (multinode-505550) Calling .GetSSHUsername
	I0603 11:32:04.268857   44162 sshutil.go:53] new ssh client: &{IP:192.168.39.232 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19008-7755/.minikube/machines/multinode-505550/id_rsa Username:docker}
	I0603 11:32:04.344125   44162 command_runner.go:130] > {"iso_version": "v1.33.1-1716398070-18934", "kicbase_version": "v0.0.44-1716228441-18934", "minikube_version": "v1.33.1", "commit": "7bc64cce06153f72c1bf9cbcf2114663ad5af3b7"}
	I0603 11:32:04.344398   44162 ssh_runner.go:195] Run: systemctl --version
	I0603 11:32:04.368059   44162 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0603 11:32:04.368107   44162 command_runner.go:130] > systemd 252 (252)
	I0603 11:32:04.368124   44162 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0603 11:32:04.368170   44162 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0603 11:32:04.524793   44162 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0603 11:32:04.532491   44162 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0603 11:32:04.532851   44162 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0603 11:32:04.532936   44162 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0603 11:32:04.541990   44162 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0603 11:32:04.542013   44162 start.go:494] detecting cgroup driver to use...
	I0603 11:32:04.542073   44162 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0603 11:32:04.557630   44162 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0603 11:32:04.571051   44162 docker.go:217] disabling cri-docker service (if available) ...
	I0603 11:32:04.571094   44162 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0603 11:32:04.584150   44162 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0603 11:32:04.597247   44162 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0603 11:32:04.740042   44162 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0603 11:32:04.885107   44162 docker.go:233] disabling docker service ...
	I0603 11:32:04.885190   44162 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0603 11:32:04.903567   44162 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0603 11:32:04.916865   44162 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0603 11:32:05.062613   44162 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0603 11:32:05.200844   44162 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0603 11:32:05.215625   44162 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0603 11:32:05.234463   44162 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I0603 11:32:05.235075   44162 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0603 11:32:05.235139   44162 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 11:32:05.245568   44162 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0603 11:32:05.245616   44162 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 11:32:05.255670   44162 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 11:32:05.265547   44162 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 11:32:05.275561   44162 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0603 11:32:05.285789   44162 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 11:32:05.295482   44162 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 11:32:05.307151   44162 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 11:32:05.317424   44162 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0603 11:32:05.326629   44162 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0603 11:32:05.326722   44162 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0603 11:32:05.335701   44162 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0603 11:32:05.472073   44162 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0603 11:32:13.326919   44162 ssh_runner.go:235] Completed: sudo systemctl restart crio: (7.854807981s)
	I0603 11:32:13.326949   44162 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0603 11:32:13.327008   44162 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0603 11:32:13.332392   44162 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I0603 11:32:13.332420   44162 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0603 11:32:13.332430   44162 command_runner.go:130] > Device: 0,22	Inode: 1357        Links: 1
	I0603 11:32:13.332441   44162 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I0603 11:32:13.332449   44162 command_runner.go:130] > Access: 2024-06-03 11:32:13.196776342 +0000
	I0603 11:32:13.332457   44162 command_runner.go:130] > Modify: 2024-06-03 11:32:13.196776342 +0000
	I0603 11:32:13.332464   44162 command_runner.go:130] > Change: 2024-06-03 11:32:13.196776342 +0000
	I0603 11:32:13.332478   44162 command_runner.go:130] >  Birth: -
	I0603 11:32:13.332500   44162 start.go:562] Will wait 60s for crictl version
	I0603 11:32:13.332545   44162 ssh_runner.go:195] Run: which crictl
	I0603 11:32:13.336552   44162 command_runner.go:130] > /usr/bin/crictl
	I0603 11:32:13.336621   44162 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0603 11:32:13.374349   44162 command_runner.go:130] > Version:  0.1.0
	I0603 11:32:13.374370   44162 command_runner.go:130] > RuntimeName:  cri-o
	I0603 11:32:13.374375   44162 command_runner.go:130] > RuntimeVersion:  1.29.1
	I0603 11:32:13.374380   44162 command_runner.go:130] > RuntimeApiVersion:  v1
	I0603 11:32:13.374395   44162 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0603 11:32:13.374443   44162 ssh_runner.go:195] Run: crio --version
	I0603 11:32:13.402140   44162 command_runner.go:130] > crio version 1.29.1
	I0603 11:32:13.402172   44162 command_runner.go:130] > Version:        1.29.1
	I0603 11:32:13.402177   44162 command_runner.go:130] > GitCommit:      unknown
	I0603 11:32:13.402182   44162 command_runner.go:130] > GitCommitDate:  unknown
	I0603 11:32:13.402185   44162 command_runner.go:130] > GitTreeState:   clean
	I0603 11:32:13.402191   44162 command_runner.go:130] > BuildDate:      2024-05-22T23:02:45Z
	I0603 11:32:13.402195   44162 command_runner.go:130] > GoVersion:      go1.21.6
	I0603 11:32:13.402199   44162 command_runner.go:130] > Compiler:       gc
	I0603 11:32:13.402203   44162 command_runner.go:130] > Platform:       linux/amd64
	I0603 11:32:13.402207   44162 command_runner.go:130] > Linkmode:       dynamic
	I0603 11:32:13.402211   44162 command_runner.go:130] > BuildTags:      
	I0603 11:32:13.402215   44162 command_runner.go:130] >   containers_image_ostree_stub
	I0603 11:32:13.402219   44162 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0603 11:32:13.402223   44162 command_runner.go:130] >   btrfs_noversion
	I0603 11:32:13.402226   44162 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0603 11:32:13.402231   44162 command_runner.go:130] >   libdm_no_deferred_remove
	I0603 11:32:13.402234   44162 command_runner.go:130] >   seccomp
	I0603 11:32:13.402239   44162 command_runner.go:130] > LDFlags:          unknown
	I0603 11:32:13.402246   44162 command_runner.go:130] > SeccompEnabled:   true
	I0603 11:32:13.402250   44162 command_runner.go:130] > AppArmorEnabled:  false
	I0603 11:32:13.403362   44162 ssh_runner.go:195] Run: crio --version
	I0603 11:32:13.432407   44162 command_runner.go:130] > crio version 1.29.1
	I0603 11:32:13.432436   44162 command_runner.go:130] > Version:        1.29.1
	I0603 11:32:13.432445   44162 command_runner.go:130] > GitCommit:      unknown
	I0603 11:32:13.432452   44162 command_runner.go:130] > GitCommitDate:  unknown
	I0603 11:32:13.432458   44162 command_runner.go:130] > GitTreeState:   clean
	I0603 11:32:13.432466   44162 command_runner.go:130] > BuildDate:      2024-05-22T23:02:45Z
	I0603 11:32:13.432473   44162 command_runner.go:130] > GoVersion:      go1.21.6
	I0603 11:32:13.432478   44162 command_runner.go:130] > Compiler:       gc
	I0603 11:32:13.432485   44162 command_runner.go:130] > Platform:       linux/amd64
	I0603 11:32:13.432491   44162 command_runner.go:130] > Linkmode:       dynamic
	I0603 11:32:13.432498   44162 command_runner.go:130] > BuildTags:      
	I0603 11:32:13.432509   44162 command_runner.go:130] >   containers_image_ostree_stub
	I0603 11:32:13.432515   44162 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0603 11:32:13.432519   44162 command_runner.go:130] >   btrfs_noversion
	I0603 11:32:13.432523   44162 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0603 11:32:13.432528   44162 command_runner.go:130] >   libdm_no_deferred_remove
	I0603 11:32:13.432534   44162 command_runner.go:130] >   seccomp
	I0603 11:32:13.432539   44162 command_runner.go:130] > LDFlags:          unknown
	I0603 11:32:13.432545   44162 command_runner.go:130] > SeccompEnabled:   true
	I0603 11:32:13.432549   44162 command_runner.go:130] > AppArmorEnabled:  false
	I0603 11:32:13.434334   44162 out.go:177] * Preparing Kubernetes v1.30.1 on CRI-O 1.29.1 ...
	I0603 11:32:13.435565   44162 main.go:141] libmachine: (multinode-505550) Calling .GetIP
	I0603 11:32:13.438155   44162 main.go:141] libmachine: (multinode-505550) DBG | domain multinode-505550 has defined MAC address 52:54:00:d9:78:ff in network mk-multinode-505550
	I0603 11:32:13.438456   44162 main.go:141] libmachine: (multinode-505550) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:78:ff", ip: ""} in network mk-multinode-505550: {Iface:virbr1 ExpiryTime:2024-06-03 12:25:37 +0000 UTC Type:0 Mac:52:54:00:d9:78:ff Iaid: IPaddr:192.168.39.232 Prefix:24 Hostname:multinode-505550 Clientid:01:52:54:00:d9:78:ff}
	I0603 11:32:13.438484   44162 main.go:141] libmachine: (multinode-505550) DBG | domain multinode-505550 has defined IP address 192.168.39.232 and MAC address 52:54:00:d9:78:ff in network mk-multinode-505550
	I0603 11:32:13.438676   44162 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0603 11:32:13.442811   44162 command_runner.go:130] > 192.168.39.1	host.minikube.internal
	I0603 11:32:13.443011   44162 kubeadm.go:877] updating cluster {Name:multinode-505550 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:multinode-505550 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.232 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.227 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.172 Port:0 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0603 11:32:13.443188   44162 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime crio
	I0603 11:32:13.443233   44162 ssh_runner.go:195] Run: sudo crictl images --output json
	I0603 11:32:13.486914   44162 command_runner.go:130] > {
	I0603 11:32:13.486933   44162 command_runner.go:130] >   "images": [
	I0603 11:32:13.486937   44162 command_runner.go:130] >     {
	I0603 11:32:13.486944   44162 command_runner.go:130] >       "id": "4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5",
	I0603 11:32:13.486951   44162 command_runner.go:130] >       "repoTags": [
	I0603 11:32:13.486957   44162 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240202-8f1494ea"
	I0603 11:32:13.486960   44162 command_runner.go:130] >       ],
	I0603 11:32:13.486964   44162 command_runner.go:130] >       "repoDigests": [
	I0603 11:32:13.486974   44162 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988",
	I0603 11:32:13.486981   44162 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:bdddbe20c61d325166b48dd517059f5b93c21526eb74c5c80d86cd6d37236bac"
	I0603 11:32:13.486984   44162 command_runner.go:130] >       ],
	I0603 11:32:13.486989   44162 command_runner.go:130] >       "size": "65291810",
	I0603 11:32:13.486996   44162 command_runner.go:130] >       "uid": null,
	I0603 11:32:13.487003   44162 command_runner.go:130] >       "username": "",
	I0603 11:32:13.487008   44162 command_runner.go:130] >       "spec": null,
	I0603 11:32:13.487015   44162 command_runner.go:130] >       "pinned": false
	I0603 11:32:13.487019   44162 command_runner.go:130] >     },
	I0603 11:32:13.487024   44162 command_runner.go:130] >     {
	I0603 11:32:13.487030   44162 command_runner.go:130] >       "id": "ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f",
	I0603 11:32:13.487050   44162 command_runner.go:130] >       "repoTags": [
	I0603 11:32:13.487055   44162 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240513-cd2ac642"
	I0603 11:32:13.487062   44162 command_runner.go:130] >       ],
	I0603 11:32:13.487067   44162 command_runner.go:130] >       "repoDigests": [
	I0603 11:32:13.487078   44162 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:2b34f64609858041e706963bcd73273c087360ca240f1f9b37db6f148edb1266",
	I0603 11:32:13.487085   44162 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:9c2b5fcda3cb5a9725ecb893f3c8998a92d51a87465a886eb563e18d649383a8"
	I0603 11:32:13.487090   44162 command_runner.go:130] >       ],
	I0603 11:32:13.487094   44162 command_runner.go:130] >       "size": "65908273",
	I0603 11:32:13.487098   44162 command_runner.go:130] >       "uid": null,
	I0603 11:32:13.487104   44162 command_runner.go:130] >       "username": "",
	I0603 11:32:13.487111   44162 command_runner.go:130] >       "spec": null,
	I0603 11:32:13.487116   44162 command_runner.go:130] >       "pinned": false
	I0603 11:32:13.487121   44162 command_runner.go:130] >     },
	I0603 11:32:13.487125   44162 command_runner.go:130] >     {
	I0603 11:32:13.487140   44162 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0603 11:32:13.487146   44162 command_runner.go:130] >       "repoTags": [
	I0603 11:32:13.487151   44162 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0603 11:32:13.487157   44162 command_runner.go:130] >       ],
	I0603 11:32:13.487160   44162 command_runner.go:130] >       "repoDigests": [
	I0603 11:32:13.487168   44162 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0603 11:32:13.487177   44162 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0603 11:32:13.487181   44162 command_runner.go:130] >       ],
	I0603 11:32:13.487187   44162 command_runner.go:130] >       "size": "1363676",
	I0603 11:32:13.487191   44162 command_runner.go:130] >       "uid": null,
	I0603 11:32:13.487197   44162 command_runner.go:130] >       "username": "",
	I0603 11:32:13.487202   44162 command_runner.go:130] >       "spec": null,
	I0603 11:32:13.487208   44162 command_runner.go:130] >       "pinned": false
	I0603 11:32:13.487211   44162 command_runner.go:130] >     },
	I0603 11:32:13.487217   44162 command_runner.go:130] >     {
	I0603 11:32:13.487223   44162 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0603 11:32:13.487229   44162 command_runner.go:130] >       "repoTags": [
	I0603 11:32:13.487234   44162 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0603 11:32:13.487240   44162 command_runner.go:130] >       ],
	I0603 11:32:13.487248   44162 command_runner.go:130] >       "repoDigests": [
	I0603 11:32:13.487257   44162 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0603 11:32:13.487270   44162 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0603 11:32:13.487273   44162 command_runner.go:130] >       ],
	I0603 11:32:13.487277   44162 command_runner.go:130] >       "size": "31470524",
	I0603 11:32:13.487280   44162 command_runner.go:130] >       "uid": null,
	I0603 11:32:13.487284   44162 command_runner.go:130] >       "username": "",
	I0603 11:32:13.487288   44162 command_runner.go:130] >       "spec": null,
	I0603 11:32:13.487292   44162 command_runner.go:130] >       "pinned": false
	I0603 11:32:13.487295   44162 command_runner.go:130] >     },
	I0603 11:32:13.487298   44162 command_runner.go:130] >     {
	I0603 11:32:13.487311   44162 command_runner.go:130] >       "id": "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4",
	I0603 11:32:13.487315   44162 command_runner.go:130] >       "repoTags": [
	I0603 11:32:13.487320   44162 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.1"
	I0603 11:32:13.487323   44162 command_runner.go:130] >       ],
	I0603 11:32:13.487327   44162 command_runner.go:130] >       "repoDigests": [
	I0603 11:32:13.487334   44162 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1",
	I0603 11:32:13.487346   44162 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870"
	I0603 11:32:13.487349   44162 command_runner.go:130] >       ],
	I0603 11:32:13.487353   44162 command_runner.go:130] >       "size": "61245718",
	I0603 11:32:13.487356   44162 command_runner.go:130] >       "uid": null,
	I0603 11:32:13.487360   44162 command_runner.go:130] >       "username": "nonroot",
	I0603 11:32:13.487363   44162 command_runner.go:130] >       "spec": null,
	I0603 11:32:13.487367   44162 command_runner.go:130] >       "pinned": false
	I0603 11:32:13.487370   44162 command_runner.go:130] >     },
	I0603 11:32:13.487373   44162 command_runner.go:130] >     {
	I0603 11:32:13.487379   44162 command_runner.go:130] >       "id": "3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899",
	I0603 11:32:13.487383   44162 command_runner.go:130] >       "repoTags": [
	I0603 11:32:13.487387   44162 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.12-0"
	I0603 11:32:13.487391   44162 command_runner.go:130] >       ],
	I0603 11:32:13.487395   44162 command_runner.go:130] >       "repoDigests": [
	I0603 11:32:13.487402   44162 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:2e6b9c67730f1f1dce4c6e16d60135e00608728567f537e8ff70c244756cbb62",
	I0603 11:32:13.487411   44162 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b"
	I0603 11:32:13.487415   44162 command_runner.go:130] >       ],
	I0603 11:32:13.487419   44162 command_runner.go:130] >       "size": "150779692",
	I0603 11:32:13.487425   44162 command_runner.go:130] >       "uid": {
	I0603 11:32:13.487429   44162 command_runner.go:130] >         "value": "0"
	I0603 11:32:13.487433   44162 command_runner.go:130] >       },
	I0603 11:32:13.487436   44162 command_runner.go:130] >       "username": "",
	I0603 11:32:13.487440   44162 command_runner.go:130] >       "spec": null,
	I0603 11:32:13.487444   44162 command_runner.go:130] >       "pinned": false
	I0603 11:32:13.487447   44162 command_runner.go:130] >     },
	I0603 11:32:13.487450   44162 command_runner.go:130] >     {
	I0603 11:32:13.487456   44162 command_runner.go:130] >       "id": "91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a",
	I0603 11:32:13.487462   44162 command_runner.go:130] >       "repoTags": [
	I0603 11:32:13.487467   44162 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.30.1"
	I0603 11:32:13.487471   44162 command_runner.go:130] >       ],
	I0603 11:32:13.487475   44162 command_runner.go:130] >       "repoDigests": [
	I0603 11:32:13.487482   44162 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:0d4a3051234387b78affbcde283dcde5df21e0d6d740c80c363db1cbb973b4ea",
	I0603 11:32:13.487491   44162 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:a9cf4f4eb92ef02b0a8ba4148f50b4a1b2bd3e9b28a8f9913ea8c3bcc08e610c"
	I0603 11:32:13.487495   44162 command_runner.go:130] >       ],
	I0603 11:32:13.487499   44162 command_runner.go:130] >       "size": "117601759",
	I0603 11:32:13.487506   44162 command_runner.go:130] >       "uid": {
	I0603 11:32:13.487514   44162 command_runner.go:130] >         "value": "0"
	I0603 11:32:13.487520   44162 command_runner.go:130] >       },
	I0603 11:32:13.487523   44162 command_runner.go:130] >       "username": "",
	I0603 11:32:13.487527   44162 command_runner.go:130] >       "spec": null,
	I0603 11:32:13.487531   44162 command_runner.go:130] >       "pinned": false
	I0603 11:32:13.487536   44162 command_runner.go:130] >     },
	I0603 11:32:13.487540   44162 command_runner.go:130] >     {
	I0603 11:32:13.487545   44162 command_runner.go:130] >       "id": "25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c",
	I0603 11:32:13.487550   44162 command_runner.go:130] >       "repoTags": [
	I0603 11:32:13.487555   44162 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.30.1"
	I0603 11:32:13.487561   44162 command_runner.go:130] >       ],
	I0603 11:32:13.487565   44162 command_runner.go:130] >       "repoDigests": [
	I0603 11:32:13.487583   44162 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:0c34190fbf807746f6584104811ed5cda72fb30ce30a036c132dea692d55ec52",
	I0603 11:32:13.487594   44162 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:110a010162e119e768e13bb104c0883fb4aceb894659787744abf115fcc56027"
	I0603 11:32:13.487597   44162 command_runner.go:130] >       ],
	I0603 11:32:13.487600   44162 command_runner.go:130] >       "size": "112170310",
	I0603 11:32:13.487604   44162 command_runner.go:130] >       "uid": {
	I0603 11:32:13.487608   44162 command_runner.go:130] >         "value": "0"
	I0603 11:32:13.487612   44162 command_runner.go:130] >       },
	I0603 11:32:13.487615   44162 command_runner.go:130] >       "username": "",
	I0603 11:32:13.487619   44162 command_runner.go:130] >       "spec": null,
	I0603 11:32:13.487623   44162 command_runner.go:130] >       "pinned": false
	I0603 11:32:13.487626   44162 command_runner.go:130] >     },
	I0603 11:32:13.487629   44162 command_runner.go:130] >     {
	I0603 11:32:13.487635   44162 command_runner.go:130] >       "id": "747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd",
	I0603 11:32:13.487639   44162 command_runner.go:130] >       "repoTags": [
	I0603 11:32:13.487643   44162 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.30.1"
	I0603 11:32:13.487646   44162 command_runner.go:130] >       ],
	I0603 11:32:13.487649   44162 command_runner.go:130] >       "repoDigests": [
	I0603 11:32:13.487656   44162 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:2eec8116ed9b8f46b6a90a46434711354d2222575ab50a4aca42bb6ab19989fa",
	I0603 11:32:13.487663   44162 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:a1754e5a33878878e78dd0141167e7c529d91eb9b36ffbbf91a6052257b3179c"
	I0603 11:32:13.487666   44162 command_runner.go:130] >       ],
	I0603 11:32:13.487670   44162 command_runner.go:130] >       "size": "85933465",
	I0603 11:32:13.487673   44162 command_runner.go:130] >       "uid": null,
	I0603 11:32:13.487676   44162 command_runner.go:130] >       "username": "",
	I0603 11:32:13.487680   44162 command_runner.go:130] >       "spec": null,
	I0603 11:32:13.487688   44162 command_runner.go:130] >       "pinned": false
	I0603 11:32:13.487692   44162 command_runner.go:130] >     },
	I0603 11:32:13.487695   44162 command_runner.go:130] >     {
	I0603 11:32:13.487700   44162 command_runner.go:130] >       "id": "a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035",
	I0603 11:32:13.487704   44162 command_runner.go:130] >       "repoTags": [
	I0603 11:32:13.487709   44162 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.30.1"
	I0603 11:32:13.487711   44162 command_runner.go:130] >       ],
	I0603 11:32:13.487715   44162 command_runner.go:130] >       "repoDigests": [
	I0603 11:32:13.487722   44162 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:74d02f6debc5ff3d3bc03f96ae029fb9c72ec1ea94c14e2cdf279939d8e0e036",
	I0603 11:32:13.487728   44162 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:8ebcbcb8ecc9fc76029ac1dc12f3f15e33e6d26f018d49d5db4437f3d4b34973"
	I0603 11:32:13.487733   44162 command_runner.go:130] >       ],
	I0603 11:32:13.487737   44162 command_runner.go:130] >       "size": "63026504",
	I0603 11:32:13.487742   44162 command_runner.go:130] >       "uid": {
	I0603 11:32:13.487746   44162 command_runner.go:130] >         "value": "0"
	I0603 11:32:13.487752   44162 command_runner.go:130] >       },
	I0603 11:32:13.487755   44162 command_runner.go:130] >       "username": "",
	I0603 11:32:13.487760   44162 command_runner.go:130] >       "spec": null,
	I0603 11:32:13.487765   44162 command_runner.go:130] >       "pinned": false
	I0603 11:32:13.487769   44162 command_runner.go:130] >     },
	I0603 11:32:13.487772   44162 command_runner.go:130] >     {
	I0603 11:32:13.487779   44162 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I0603 11:32:13.487783   44162 command_runner.go:130] >       "repoTags": [
	I0603 11:32:13.487787   44162 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I0603 11:32:13.487793   44162 command_runner.go:130] >       ],
	I0603 11:32:13.487797   44162 command_runner.go:130] >       "repoDigests": [
	I0603 11:32:13.487803   44162 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I0603 11:32:13.487812   44162 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I0603 11:32:13.487815   44162 command_runner.go:130] >       ],
	I0603 11:32:13.487819   44162 command_runner.go:130] >       "size": "750414",
	I0603 11:32:13.487824   44162 command_runner.go:130] >       "uid": {
	I0603 11:32:13.487828   44162 command_runner.go:130] >         "value": "65535"
	I0603 11:32:13.487834   44162 command_runner.go:130] >       },
	I0603 11:32:13.487838   44162 command_runner.go:130] >       "username": "",
	I0603 11:32:13.487841   44162 command_runner.go:130] >       "spec": null,
	I0603 11:32:13.487845   44162 command_runner.go:130] >       "pinned": true
	I0603 11:32:13.487850   44162 command_runner.go:130] >     }
	I0603 11:32:13.487860   44162 command_runner.go:130] >   ]
	I0603 11:32:13.487865   44162 command_runner.go:130] > }
	I0603 11:32:13.488390   44162 crio.go:514] all images are preloaded for cri-o runtime.
	I0603 11:32:13.488407   44162 crio.go:433] Images already preloaded, skipping extraction
	I0603 11:32:13.488448   44162 ssh_runner.go:195] Run: sudo crictl images --output json
	I0603 11:32:13.520322   44162 command_runner.go:130] > {
	I0603 11:32:13.520346   44162 command_runner.go:130] >   "images": [
	I0603 11:32:13.520351   44162 command_runner.go:130] >     {
	I0603 11:32:13.520358   44162 command_runner.go:130] >       "id": "4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5",
	I0603 11:32:13.520364   44162 command_runner.go:130] >       "repoTags": [
	I0603 11:32:13.520370   44162 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240202-8f1494ea"
	I0603 11:32:13.520374   44162 command_runner.go:130] >       ],
	I0603 11:32:13.520378   44162 command_runner.go:130] >       "repoDigests": [
	I0603 11:32:13.520386   44162 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988",
	I0603 11:32:13.520393   44162 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:bdddbe20c61d325166b48dd517059f5b93c21526eb74c5c80d86cd6d37236bac"
	I0603 11:32:13.520399   44162 command_runner.go:130] >       ],
	I0603 11:32:13.520404   44162 command_runner.go:130] >       "size": "65291810",
	I0603 11:32:13.520408   44162 command_runner.go:130] >       "uid": null,
	I0603 11:32:13.520414   44162 command_runner.go:130] >       "username": "",
	I0603 11:32:13.520420   44162 command_runner.go:130] >       "spec": null,
	I0603 11:32:13.520426   44162 command_runner.go:130] >       "pinned": false
	I0603 11:32:13.520430   44162 command_runner.go:130] >     },
	I0603 11:32:13.520433   44162 command_runner.go:130] >     {
	I0603 11:32:13.520439   44162 command_runner.go:130] >       "id": "ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f",
	I0603 11:32:13.520444   44162 command_runner.go:130] >       "repoTags": [
	I0603 11:32:13.520449   44162 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240513-cd2ac642"
	I0603 11:32:13.520453   44162 command_runner.go:130] >       ],
	I0603 11:32:13.520456   44162 command_runner.go:130] >       "repoDigests": [
	I0603 11:32:13.520463   44162 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:2b34f64609858041e706963bcd73273c087360ca240f1f9b37db6f148edb1266",
	I0603 11:32:13.520471   44162 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:9c2b5fcda3cb5a9725ecb893f3c8998a92d51a87465a886eb563e18d649383a8"
	I0603 11:32:13.520475   44162 command_runner.go:130] >       ],
	I0603 11:32:13.520480   44162 command_runner.go:130] >       "size": "65908273",
	I0603 11:32:13.520483   44162 command_runner.go:130] >       "uid": null,
	I0603 11:32:13.520489   44162 command_runner.go:130] >       "username": "",
	I0603 11:32:13.520495   44162 command_runner.go:130] >       "spec": null,
	I0603 11:32:13.520498   44162 command_runner.go:130] >       "pinned": false
	I0603 11:32:13.520504   44162 command_runner.go:130] >     },
	I0603 11:32:13.520509   44162 command_runner.go:130] >     {
	I0603 11:32:13.520516   44162 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0603 11:32:13.520520   44162 command_runner.go:130] >       "repoTags": [
	I0603 11:32:13.520527   44162 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0603 11:32:13.520530   44162 command_runner.go:130] >       ],
	I0603 11:32:13.520536   44162 command_runner.go:130] >       "repoDigests": [
	I0603 11:32:13.520544   44162 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0603 11:32:13.520553   44162 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0603 11:32:13.520557   44162 command_runner.go:130] >       ],
	I0603 11:32:13.520561   44162 command_runner.go:130] >       "size": "1363676",
	I0603 11:32:13.520565   44162 command_runner.go:130] >       "uid": null,
	I0603 11:32:13.520571   44162 command_runner.go:130] >       "username": "",
	I0603 11:32:13.520575   44162 command_runner.go:130] >       "spec": null,
	I0603 11:32:13.520580   44162 command_runner.go:130] >       "pinned": false
	I0603 11:32:13.520584   44162 command_runner.go:130] >     },
	I0603 11:32:13.520589   44162 command_runner.go:130] >     {
	I0603 11:32:13.520595   44162 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0603 11:32:13.520601   44162 command_runner.go:130] >       "repoTags": [
	I0603 11:32:13.520606   44162 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0603 11:32:13.520612   44162 command_runner.go:130] >       ],
	I0603 11:32:13.520616   44162 command_runner.go:130] >       "repoDigests": [
	I0603 11:32:13.520624   44162 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0603 11:32:13.520636   44162 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0603 11:32:13.520642   44162 command_runner.go:130] >       ],
	I0603 11:32:13.520647   44162 command_runner.go:130] >       "size": "31470524",
	I0603 11:32:13.520653   44162 command_runner.go:130] >       "uid": null,
	I0603 11:32:13.520657   44162 command_runner.go:130] >       "username": "",
	I0603 11:32:13.520665   44162 command_runner.go:130] >       "spec": null,
	I0603 11:32:13.520671   44162 command_runner.go:130] >       "pinned": false
	I0603 11:32:13.520675   44162 command_runner.go:130] >     },
	I0603 11:32:13.520681   44162 command_runner.go:130] >     {
	I0603 11:32:13.520686   44162 command_runner.go:130] >       "id": "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4",
	I0603 11:32:13.520690   44162 command_runner.go:130] >       "repoTags": [
	I0603 11:32:13.520695   44162 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.1"
	I0603 11:32:13.520701   44162 command_runner.go:130] >       ],
	I0603 11:32:13.520708   44162 command_runner.go:130] >       "repoDigests": [
	I0603 11:32:13.520718   44162 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1",
	I0603 11:32:13.520725   44162 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870"
	I0603 11:32:13.520731   44162 command_runner.go:130] >       ],
	I0603 11:32:13.520735   44162 command_runner.go:130] >       "size": "61245718",
	I0603 11:32:13.520741   44162 command_runner.go:130] >       "uid": null,
	I0603 11:32:13.520745   44162 command_runner.go:130] >       "username": "nonroot",
	I0603 11:32:13.520749   44162 command_runner.go:130] >       "spec": null,
	I0603 11:32:13.520753   44162 command_runner.go:130] >       "pinned": false
	I0603 11:32:13.520758   44162 command_runner.go:130] >     },
	I0603 11:32:13.520762   44162 command_runner.go:130] >     {
	I0603 11:32:13.520770   44162 command_runner.go:130] >       "id": "3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899",
	I0603 11:32:13.520774   44162 command_runner.go:130] >       "repoTags": [
	I0603 11:32:13.520779   44162 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.12-0"
	I0603 11:32:13.520782   44162 command_runner.go:130] >       ],
	I0603 11:32:13.520786   44162 command_runner.go:130] >       "repoDigests": [
	I0603 11:32:13.520793   44162 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:2e6b9c67730f1f1dce4c6e16d60135e00608728567f537e8ff70c244756cbb62",
	I0603 11:32:13.520802   44162 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b"
	I0603 11:32:13.520806   44162 command_runner.go:130] >       ],
	I0603 11:32:13.520811   44162 command_runner.go:130] >       "size": "150779692",
	I0603 11:32:13.520815   44162 command_runner.go:130] >       "uid": {
	I0603 11:32:13.520820   44162 command_runner.go:130] >         "value": "0"
	I0603 11:32:13.520824   44162 command_runner.go:130] >       },
	I0603 11:32:13.520830   44162 command_runner.go:130] >       "username": "",
	I0603 11:32:13.520834   44162 command_runner.go:130] >       "spec": null,
	I0603 11:32:13.520838   44162 command_runner.go:130] >       "pinned": false
	I0603 11:32:13.520841   44162 command_runner.go:130] >     },
	I0603 11:32:13.520844   44162 command_runner.go:130] >     {
	I0603 11:32:13.520850   44162 command_runner.go:130] >       "id": "91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a",
	I0603 11:32:13.520855   44162 command_runner.go:130] >       "repoTags": [
	I0603 11:32:13.520860   44162 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.30.1"
	I0603 11:32:13.520866   44162 command_runner.go:130] >       ],
	I0603 11:32:13.520870   44162 command_runner.go:130] >       "repoDigests": [
	I0603 11:32:13.520879   44162 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:0d4a3051234387b78affbcde283dcde5df21e0d6d740c80c363db1cbb973b4ea",
	I0603 11:32:13.520886   44162 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:a9cf4f4eb92ef02b0a8ba4148f50b4a1b2bd3e9b28a8f9913ea8c3bcc08e610c"
	I0603 11:32:13.520892   44162 command_runner.go:130] >       ],
	I0603 11:32:13.520898   44162 command_runner.go:130] >       "size": "117601759",
	I0603 11:32:13.520904   44162 command_runner.go:130] >       "uid": {
	I0603 11:32:13.520908   44162 command_runner.go:130] >         "value": "0"
	I0603 11:32:13.520911   44162 command_runner.go:130] >       },
	I0603 11:32:13.520915   44162 command_runner.go:130] >       "username": "",
	I0603 11:32:13.520919   44162 command_runner.go:130] >       "spec": null,
	I0603 11:32:13.520923   44162 command_runner.go:130] >       "pinned": false
	I0603 11:32:13.520926   44162 command_runner.go:130] >     },
	I0603 11:32:13.520930   44162 command_runner.go:130] >     {
	I0603 11:32:13.520935   44162 command_runner.go:130] >       "id": "25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c",
	I0603 11:32:13.520942   44162 command_runner.go:130] >       "repoTags": [
	I0603 11:32:13.520947   44162 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.30.1"
	I0603 11:32:13.520952   44162 command_runner.go:130] >       ],
	I0603 11:32:13.520957   44162 command_runner.go:130] >       "repoDigests": [
	I0603 11:32:13.520970   44162 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:0c34190fbf807746f6584104811ed5cda72fb30ce30a036c132dea692d55ec52",
	I0603 11:32:13.520980   44162 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:110a010162e119e768e13bb104c0883fb4aceb894659787744abf115fcc56027"
	I0603 11:32:13.520983   44162 command_runner.go:130] >       ],
	I0603 11:32:13.520988   44162 command_runner.go:130] >       "size": "112170310",
	I0603 11:32:13.520992   44162 command_runner.go:130] >       "uid": {
	I0603 11:32:13.520995   44162 command_runner.go:130] >         "value": "0"
	I0603 11:32:13.520999   44162 command_runner.go:130] >       },
	I0603 11:32:13.521004   44162 command_runner.go:130] >       "username": "",
	I0603 11:32:13.521008   44162 command_runner.go:130] >       "spec": null,
	I0603 11:32:13.521014   44162 command_runner.go:130] >       "pinned": false
	I0603 11:32:13.521017   44162 command_runner.go:130] >     },
	I0603 11:32:13.521020   44162 command_runner.go:130] >     {
	I0603 11:32:13.521026   44162 command_runner.go:130] >       "id": "747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd",
	I0603 11:32:13.521032   44162 command_runner.go:130] >       "repoTags": [
	I0603 11:32:13.521036   44162 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.30.1"
	I0603 11:32:13.521042   44162 command_runner.go:130] >       ],
	I0603 11:32:13.521046   44162 command_runner.go:130] >       "repoDigests": [
	I0603 11:32:13.521056   44162 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:2eec8116ed9b8f46b6a90a46434711354d2222575ab50a4aca42bb6ab19989fa",
	I0603 11:32:13.521068   44162 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:a1754e5a33878878e78dd0141167e7c529d91eb9b36ffbbf91a6052257b3179c"
	I0603 11:32:13.521072   44162 command_runner.go:130] >       ],
	I0603 11:32:13.521094   44162 command_runner.go:130] >       "size": "85933465",
	I0603 11:32:13.521103   44162 command_runner.go:130] >       "uid": null,
	I0603 11:32:13.521108   44162 command_runner.go:130] >       "username": "",
	I0603 11:32:13.521112   44162 command_runner.go:130] >       "spec": null,
	I0603 11:32:13.521116   44162 command_runner.go:130] >       "pinned": false
	I0603 11:32:13.521119   44162 command_runner.go:130] >     },
	I0603 11:32:13.521122   44162 command_runner.go:130] >     {
	I0603 11:32:13.521128   44162 command_runner.go:130] >       "id": "a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035",
	I0603 11:32:13.521134   44162 command_runner.go:130] >       "repoTags": [
	I0603 11:32:13.521139   44162 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.30.1"
	I0603 11:32:13.521145   44162 command_runner.go:130] >       ],
	I0603 11:32:13.521149   44162 command_runner.go:130] >       "repoDigests": [
	I0603 11:32:13.521156   44162 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:74d02f6debc5ff3d3bc03f96ae029fb9c72ec1ea94c14e2cdf279939d8e0e036",
	I0603 11:32:13.521165   44162 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:8ebcbcb8ecc9fc76029ac1dc12f3f15e33e6d26f018d49d5db4437f3d4b34973"
	I0603 11:32:13.521169   44162 command_runner.go:130] >       ],
	I0603 11:32:13.521173   44162 command_runner.go:130] >       "size": "63026504",
	I0603 11:32:13.521176   44162 command_runner.go:130] >       "uid": {
	I0603 11:32:13.521180   44162 command_runner.go:130] >         "value": "0"
	I0603 11:32:13.521183   44162 command_runner.go:130] >       },
	I0603 11:32:13.521187   44162 command_runner.go:130] >       "username": "",
	I0603 11:32:13.521191   44162 command_runner.go:130] >       "spec": null,
	I0603 11:32:13.521195   44162 command_runner.go:130] >       "pinned": false
	I0603 11:32:13.521198   44162 command_runner.go:130] >     },
	I0603 11:32:13.521204   44162 command_runner.go:130] >     {
	I0603 11:32:13.521210   44162 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I0603 11:32:13.521216   44162 command_runner.go:130] >       "repoTags": [
	I0603 11:32:13.521220   44162 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I0603 11:32:13.521226   44162 command_runner.go:130] >       ],
	I0603 11:32:13.521230   44162 command_runner.go:130] >       "repoDigests": [
	I0603 11:32:13.521237   44162 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I0603 11:32:13.521245   44162 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I0603 11:32:13.521249   44162 command_runner.go:130] >       ],
	I0603 11:32:13.521255   44162 command_runner.go:130] >       "size": "750414",
	I0603 11:32:13.521258   44162 command_runner.go:130] >       "uid": {
	I0603 11:32:13.521263   44162 command_runner.go:130] >         "value": "65535"
	I0603 11:32:13.521269   44162 command_runner.go:130] >       },
	I0603 11:32:13.521273   44162 command_runner.go:130] >       "username": "",
	I0603 11:32:13.521276   44162 command_runner.go:130] >       "spec": null,
	I0603 11:32:13.521282   44162 command_runner.go:130] >       "pinned": true
	I0603 11:32:13.521287   44162 command_runner.go:130] >     }
	I0603 11:32:13.521291   44162 command_runner.go:130] >   ]
	I0603 11:32:13.521294   44162 command_runner.go:130] > }
	I0603 11:32:13.521410   44162 crio.go:514] all images are preloaded for cri-o runtime.
	I0603 11:32:13.521419   44162 cache_images.go:84] Images are preloaded, skipping loading
	I0603 11:32:13.521427   44162 kubeadm.go:928] updating node { 192.168.39.232 8443 v1.30.1 crio true true} ...
	I0603 11:32:13.521514   44162 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-505550 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.232
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.1 ClusterName:multinode-505550 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0603 11:32:13.521571   44162 ssh_runner.go:195] Run: crio config
	I0603 11:32:13.561373   44162 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I0603 11:32:13.561398   44162 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I0603 11:32:13.561404   44162 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I0603 11:32:13.561407   44162 command_runner.go:130] > #
	I0603 11:32:13.561414   44162 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I0603 11:32:13.561420   44162 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I0603 11:32:13.561430   44162 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I0603 11:32:13.561453   44162 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I0603 11:32:13.561460   44162 command_runner.go:130] > # reload'.
	I0603 11:32:13.561473   44162 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I0603 11:32:13.561488   44162 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I0603 11:32:13.561498   44162 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I0603 11:32:13.561510   44162 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I0603 11:32:13.561517   44162 command_runner.go:130] > [crio]
	I0603 11:32:13.561532   44162 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I0603 11:32:13.561537   44162 command_runner.go:130] > # containers images, in this directory.
	I0603 11:32:13.561546   44162 command_runner.go:130] > root = "/var/lib/containers/storage"
	I0603 11:32:13.561561   44162 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I0603 11:32:13.561674   44162 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I0603 11:32:13.561698   44162 command_runner.go:130] > # Path to the "imagestore". If CRI-O stores all of its images in this directory differently than Root.
	I0603 11:32:13.561899   44162 command_runner.go:130] > # imagestore = ""
	I0603 11:32:13.561915   44162 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I0603 11:32:13.561926   44162 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I0603 11:32:13.562031   44162 command_runner.go:130] > storage_driver = "overlay"
	I0603 11:32:13.562083   44162 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I0603 11:32:13.562100   44162 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I0603 11:32:13.562106   44162 command_runner.go:130] > storage_option = [
	I0603 11:32:13.562207   44162 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I0603 11:32:13.562284   44162 command_runner.go:130] > ]
	I0603 11:32:13.562299   44162 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I0603 11:32:13.562309   44162 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I0603 11:32:13.562590   44162 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I0603 11:32:13.562605   44162 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I0603 11:32:13.562615   44162 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I0603 11:32:13.562623   44162 command_runner.go:130] > # always happen on a node reboot
	I0603 11:32:13.562871   44162 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I0603 11:32:13.562893   44162 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I0603 11:32:13.562904   44162 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I0603 11:32:13.562913   44162 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I0603 11:32:13.562996   44162 command_runner.go:130] > version_file_persist = "/var/lib/crio/version"
	I0603 11:32:13.563015   44162 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I0603 11:32:13.563027   44162 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I0603 11:32:13.563252   44162 command_runner.go:130] > # internal_wipe = true
	I0603 11:32:13.563270   44162 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I0603 11:32:13.563284   44162 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I0603 11:32:13.563622   44162 command_runner.go:130] > # internal_repair = false
	I0603 11:32:13.563643   44162 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I0603 11:32:13.563655   44162 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I0603 11:32:13.563664   44162 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I0603 11:32:13.563862   44162 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I0603 11:32:13.563879   44162 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I0603 11:32:13.563885   44162 command_runner.go:130] > [crio.api]
	I0603 11:32:13.563894   44162 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I0603 11:32:13.564109   44162 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I0603 11:32:13.564120   44162 command_runner.go:130] > # IP address on which the stream server will listen.
	I0603 11:32:13.564382   44162 command_runner.go:130] > # stream_address = "127.0.0.1"
	I0603 11:32:13.564400   44162 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I0603 11:32:13.564410   44162 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I0603 11:32:13.564816   44162 command_runner.go:130] > # stream_port = "0"
	I0603 11:32:13.564832   44162 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I0603 11:32:13.565097   44162 command_runner.go:130] > # stream_enable_tls = false
	I0603 11:32:13.565111   44162 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I0603 11:32:13.565327   44162 command_runner.go:130] > # stream_idle_timeout = ""
	I0603 11:32:13.565344   44162 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I0603 11:32:13.565354   44162 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I0603 11:32:13.565364   44162 command_runner.go:130] > # minutes.
	I0603 11:32:13.565750   44162 command_runner.go:130] > # stream_tls_cert = ""
	I0603 11:32:13.565772   44162 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I0603 11:32:13.565780   44162 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I0603 11:32:13.565934   44162 command_runner.go:130] > # stream_tls_key = ""
	I0603 11:32:13.565950   44162 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I0603 11:32:13.565960   44162 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I0603 11:32:13.565979   44162 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I0603 11:32:13.566163   44162 command_runner.go:130] > # stream_tls_ca = ""
	I0603 11:32:13.566181   44162 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I0603 11:32:13.566335   44162 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I0603 11:32:13.566353   44162 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I0603 11:32:13.566558   44162 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
	I0603 11:32:13.566574   44162 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I0603 11:32:13.566583   44162 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I0603 11:32:13.566591   44162 command_runner.go:130] > [crio.runtime]
	I0603 11:32:13.566600   44162 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I0603 11:32:13.566612   44162 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I0603 11:32:13.566622   44162 command_runner.go:130] > # "nofile=1024:2048"
	I0603 11:32:13.566635   44162 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I0603 11:32:13.566787   44162 command_runner.go:130] > # default_ulimits = [
	I0603 11:32:13.567103   44162 command_runner.go:130] > # ]
	I0603 11:32:13.567123   44162 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I0603 11:32:13.567372   44162 command_runner.go:130] > # no_pivot = false
	I0603 11:32:13.567388   44162 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I0603 11:32:13.567400   44162 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I0603 11:32:13.567599   44162 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I0603 11:32:13.567613   44162 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I0603 11:32:13.567620   44162 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I0603 11:32:13.567630   44162 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0603 11:32:13.567642   44162 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I0603 11:32:13.567649   44162 command_runner.go:130] > # Cgroup setting for conmon
	I0603 11:32:13.567659   44162 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I0603 11:32:13.567664   44162 command_runner.go:130] > conmon_cgroup = "pod"
	I0603 11:32:13.567670   44162 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I0603 11:32:13.567678   44162 command_runner.go:130] > # environment variables to conmon or the runtime.
	I0603 11:32:13.567684   44162 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0603 11:32:13.567689   44162 command_runner.go:130] > conmon_env = [
	I0603 11:32:13.567756   44162 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0603 11:32:13.567768   44162 command_runner.go:130] > ]
	I0603 11:32:13.567776   44162 command_runner.go:130] > # Additional environment variables to set for all the
	I0603 11:32:13.567785   44162 command_runner.go:130] > # containers. These are overridden if set in the
	I0603 11:32:13.567797   44162 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I0603 11:32:13.567808   44162 command_runner.go:130] > # default_env = [
	I0603 11:32:13.567814   44162 command_runner.go:130] > # ]
	I0603 11:32:13.567824   44162 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I0603 11:32:13.567838   44162 command_runner.go:130] > # This option is deprecated, and be interpreted from whether SELinux is enabled on the host in the future.
	I0603 11:32:13.567847   44162 command_runner.go:130] > # selinux = false
	I0603 11:32:13.567857   44162 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I0603 11:32:13.567870   44162 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I0603 11:32:13.567884   44162 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I0603 11:32:13.567891   44162 command_runner.go:130] > # seccomp_profile = ""
	I0603 11:32:13.567903   44162 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I0603 11:32:13.567917   44162 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I0603 11:32:13.567930   44162 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I0603 11:32:13.567941   44162 command_runner.go:130] > # which might increase security.
	I0603 11:32:13.567949   44162 command_runner.go:130] > # This option is currently deprecated,
	I0603 11:32:13.567961   44162 command_runner.go:130] > # and will be replaced by the SeccompDefault FeatureGate in Kubernetes.
	I0603 11:32:13.567971   44162 command_runner.go:130] > seccomp_use_default_when_empty = false
	I0603 11:32:13.567994   44162 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I0603 11:32:13.568003   44162 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I0603 11:32:13.568011   44162 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I0603 11:32:13.568017   44162 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I0603 11:32:13.568029   44162 command_runner.go:130] > # This option supports live configuration reload.
	I0603 11:32:13.568035   44162 command_runner.go:130] > # apparmor_profile = "crio-default"
	I0603 11:32:13.568040   44162 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I0603 11:32:13.568051   44162 command_runner.go:130] > # the cgroup blockio controller.
	I0603 11:32:13.568059   44162 command_runner.go:130] > # blockio_config_file = ""
	I0603 11:32:13.568068   44162 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I0603 11:32:13.568072   44162 command_runner.go:130] > # blockio parameters.
	I0603 11:32:13.568076   44162 command_runner.go:130] > # blockio_reload = false
	I0603 11:32:13.568082   44162 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I0603 11:32:13.568089   44162 command_runner.go:130] > # irqbalance daemon.
	I0603 11:32:13.568095   44162 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I0603 11:32:13.568103   44162 command_runner.go:130] > # irqbalance_config_restore_file allows to set a cpu mask CRI-O should
	I0603 11:32:13.568111   44162 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I0603 11:32:13.568120   44162 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I0603 11:32:13.568126   44162 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I0603 11:32:13.568134   44162 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I0603 11:32:13.568142   44162 command_runner.go:130] > # This option supports live configuration reload.
	I0603 11:32:13.568146   44162 command_runner.go:130] > # rdt_config_file = ""
	I0603 11:32:13.568153   44162 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I0603 11:32:13.568157   44162 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I0603 11:32:13.568185   44162 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I0603 11:32:13.568192   44162 command_runner.go:130] > # separate_pull_cgroup = ""
	I0603 11:32:13.568198   44162 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I0603 11:32:13.568204   44162 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I0603 11:32:13.568212   44162 command_runner.go:130] > # will be added.
	I0603 11:32:13.568219   44162 command_runner.go:130] > # default_capabilities = [
	I0603 11:32:13.568248   44162 command_runner.go:130] > # 	"CHOWN",
	I0603 11:32:13.568259   44162 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I0603 11:32:13.568266   44162 command_runner.go:130] > # 	"FSETID",
	I0603 11:32:13.568274   44162 command_runner.go:130] > # 	"FOWNER",
	I0603 11:32:13.568281   44162 command_runner.go:130] > # 	"SETGID",
	I0603 11:32:13.568292   44162 command_runner.go:130] > # 	"SETUID",
	I0603 11:32:13.568314   44162 command_runner.go:130] > # 	"SETPCAP",
	I0603 11:32:13.568324   44162 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I0603 11:32:13.568331   44162 command_runner.go:130] > # 	"KILL",
	I0603 11:32:13.568337   44162 command_runner.go:130] > # ]
	I0603 11:32:13.568344   44162 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I0603 11:32:13.568357   44162 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I0603 11:32:13.568376   44162 command_runner.go:130] > # add_inheritable_capabilities = false
	I0603 11:32:13.568387   44162 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I0603 11:32:13.568392   44162 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0603 11:32:13.568399   44162 command_runner.go:130] > default_sysctls = [
	I0603 11:32:13.568404   44162 command_runner.go:130] > 	"net.ipv4.ip_unprivileged_port_start=0",
	I0603 11:32:13.568410   44162 command_runner.go:130] > ]
	I0603 11:32:13.568414   44162 command_runner.go:130] > # List of devices on the host that a
	I0603 11:32:13.568422   44162 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I0603 11:32:13.568429   44162 command_runner.go:130] > # allowed_devices = [
	I0603 11:32:13.568433   44162 command_runner.go:130] > # 	"/dev/fuse",
	I0603 11:32:13.568439   44162 command_runner.go:130] > # ]
	I0603 11:32:13.568443   44162 command_runner.go:130] > # List of additional devices. specified as
	I0603 11:32:13.568452   44162 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I0603 11:32:13.568460   44162 command_runner.go:130] > # If it is empty or commented out, only the devices
	I0603 11:32:13.568465   44162 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0603 11:32:13.568471   44162 command_runner.go:130] > # additional_devices = [
	I0603 11:32:13.568475   44162 command_runner.go:130] > # ]
	I0603 11:32:13.568480   44162 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I0603 11:32:13.568487   44162 command_runner.go:130] > # cdi_spec_dirs = [
	I0603 11:32:13.568594   44162 command_runner.go:130] > # 	"/etc/cdi",
	I0603 11:32:13.568610   44162 command_runner.go:130] > # 	"/var/run/cdi",
	I0603 11:32:13.568615   44162 command_runner.go:130] > # ]
	I0603 11:32:13.568627   44162 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I0603 11:32:13.568647   44162 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I0603 11:32:13.568657   44162 command_runner.go:130] > # Defaults to false.
	I0603 11:32:13.568669   44162 command_runner.go:130] > # device_ownership_from_security_context = false
	I0603 11:32:13.568682   44162 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I0603 11:32:13.568695   44162 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I0603 11:32:13.568704   44162 command_runner.go:130] > # hooks_dir = [
	I0603 11:32:13.568713   44162 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I0603 11:32:13.568721   44162 command_runner.go:130] > # ]
	I0603 11:32:13.568731   44162 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I0603 11:32:13.568744   44162 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I0603 11:32:13.568756   44162 command_runner.go:130] > # its default mounts from the following two files:
	I0603 11:32:13.568762   44162 command_runner.go:130] > #
	I0603 11:32:13.568772   44162 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I0603 11:32:13.568795   44162 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I0603 11:32:13.568808   44162 command_runner.go:130] > #      override the default mounts shipped with the package.
	I0603 11:32:13.568815   44162 command_runner.go:130] > #
	I0603 11:32:13.568827   44162 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I0603 11:32:13.568841   44162 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I0603 11:32:13.568855   44162 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I0603 11:32:13.568866   44162 command_runner.go:130] > #      only add mounts it finds in this file.
	I0603 11:32:13.568875   44162 command_runner.go:130] > #
	I0603 11:32:13.568882   44162 command_runner.go:130] > # default_mounts_file = ""
	I0603 11:32:13.568895   44162 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I0603 11:32:13.568908   44162 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I0603 11:32:13.568917   44162 command_runner.go:130] > pids_limit = 1024
	I0603 11:32:13.568927   44162 command_runner.go:130] > # Maximum sized allowed for the container log file. Negative numbers indicate
	I0603 11:32:13.568941   44162 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I0603 11:32:13.568954   44162 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I0603 11:32:13.568971   44162 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I0603 11:32:13.568981   44162 command_runner.go:130] > # log_size_max = -1
	I0603 11:32:13.568992   44162 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I0603 11:32:13.569004   44162 command_runner.go:130] > # log_to_journald = false
	I0603 11:32:13.569014   44162 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I0603 11:32:13.569022   44162 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I0603 11:32:13.569030   44162 command_runner.go:130] > # Path to directory for container attach sockets.
	I0603 11:32:13.569041   44162 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I0603 11:32:13.569050   44162 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I0603 11:32:13.569062   44162 command_runner.go:130] > # bind_mount_prefix = ""
	I0603 11:32:13.569078   44162 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I0603 11:32:13.569089   44162 command_runner.go:130] > # read_only = false
	I0603 11:32:13.569099   44162 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I0603 11:32:13.569111   44162 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I0603 11:32:13.569118   44162 command_runner.go:130] > # live configuration reload.
	I0603 11:32:13.569128   44162 command_runner.go:130] > # log_level = "info"
	I0603 11:32:13.569138   44162 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I0603 11:32:13.569150   44162 command_runner.go:130] > # This option supports live configuration reload.
	I0603 11:32:13.569159   44162 command_runner.go:130] > # log_filter = ""
	I0603 11:32:13.569169   44162 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I0603 11:32:13.569182   44162 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I0603 11:32:13.569201   44162 command_runner.go:130] > # separated by comma.
	I0603 11:32:13.569219   44162 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0603 11:32:13.569228   44162 command_runner.go:130] > # uid_mappings = ""
	I0603 11:32:13.569240   44162 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I0603 11:32:13.569253   44162 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I0603 11:32:13.569262   44162 command_runner.go:130] > # separated by comma.
	I0603 11:32:13.569275   44162 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0603 11:32:13.569285   44162 command_runner.go:130] > # gid_mappings = ""
	I0603 11:32:13.569296   44162 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I0603 11:32:13.569309   44162 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0603 11:32:13.569319   44162 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0603 11:32:13.569333   44162 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0603 11:32:13.569343   44162 command_runner.go:130] > # minimum_mappable_uid = -1
	I0603 11:32:13.569353   44162 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I0603 11:32:13.569365   44162 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0603 11:32:13.569374   44162 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0603 11:32:13.569389   44162 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0603 11:32:13.569400   44162 command_runner.go:130] > # minimum_mappable_gid = -1
	I0603 11:32:13.569410   44162 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I0603 11:32:13.569423   44162 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I0603 11:32:13.569436   44162 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I0603 11:32:13.569459   44162 command_runner.go:130] > # ctr_stop_timeout = 30
	I0603 11:32:13.569475   44162 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I0603 11:32:13.569487   44162 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I0603 11:32:13.569498   44162 command_runner.go:130] > # a kernel separating runtime (like kata).
	I0603 11:32:13.569508   44162 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I0603 11:32:13.569514   44162 command_runner.go:130] > drop_infra_ctr = false
	I0603 11:32:13.569524   44162 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I0603 11:32:13.569536   44162 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I0603 11:32:13.569549   44162 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I0603 11:32:13.569559   44162 command_runner.go:130] > # infra_ctr_cpuset = ""
	I0603 11:32:13.569576   44162 command_runner.go:130] > # shared_cpuset  determines the CPU set which is allowed to be shared between guaranteed containers,
	I0603 11:32:13.569589   44162 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I0603 11:32:13.569602   44162 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I0603 11:32:13.569614   44162 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I0603 11:32:13.569624   44162 command_runner.go:130] > # shared_cpuset = ""
	I0603 11:32:13.569643   44162 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I0603 11:32:13.569657   44162 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I0603 11:32:13.569664   44162 command_runner.go:130] > # namespaces_dir = "/var/run"
	I0603 11:32:13.569677   44162 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I0603 11:32:13.569687   44162 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I0603 11:32:13.569699   44162 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I0603 11:32:13.569712   44162 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I0603 11:32:13.569722   44162 command_runner.go:130] > # enable_criu_support = false
	I0603 11:32:13.569730   44162 command_runner.go:130] > # Enable/disable the generation of the container,
	I0603 11:32:13.569741   44162 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG
	I0603 11:32:13.569752   44162 command_runner.go:130] > # enable_pod_events = false
	I0603 11:32:13.569766   44162 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0603 11:32:13.569780   44162 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0603 11:32:13.569792   44162 command_runner.go:130] > # The name is matched against the runtimes map below.
	I0603 11:32:13.569803   44162 command_runner.go:130] > # default_runtime = "runc"
	I0603 11:32:13.569815   44162 command_runner.go:130] > # A list of paths that, when absent from the host,
	I0603 11:32:13.569830   44162 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I0603 11:32:13.569847   44162 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I0603 11:32:13.569857   44162 command_runner.go:130] > # creation as a file is not desired either.
	I0603 11:32:13.569878   44162 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I0603 11:32:13.569890   44162 command_runner.go:130] > # the hostname is being managed dynamically.
	I0603 11:32:13.569900   44162 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I0603 11:32:13.569907   44162 command_runner.go:130] > # ]
	I0603 11:32:13.569917   44162 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I0603 11:32:13.569929   44162 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I0603 11:32:13.569939   44162 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I0603 11:32:13.569951   44162 command_runner.go:130] > # Each entry in the table should follow the format:
	I0603 11:32:13.569960   44162 command_runner.go:130] > #
	I0603 11:32:13.569971   44162 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I0603 11:32:13.569980   44162 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I0603 11:32:13.570042   44162 command_runner.go:130] > # runtime_type = "oci"
	I0603 11:32:13.570056   44162 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I0603 11:32:13.570064   44162 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I0603 11:32:13.570079   44162 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I0603 11:32:13.570090   44162 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I0603 11:32:13.570096   44162 command_runner.go:130] > # monitor_env = []
	I0603 11:32:13.570115   44162 command_runner.go:130] > # privileged_without_host_devices = false
	I0603 11:32:13.570125   44162 command_runner.go:130] > # allowed_annotations = []
	I0603 11:32:13.570134   44162 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I0603 11:32:13.570143   44162 command_runner.go:130] > # Where:
	I0603 11:32:13.570151   44162 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I0603 11:32:13.570162   44162 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I0603 11:32:13.570171   44162 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I0603 11:32:13.570186   44162 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I0603 11:32:13.570196   44162 command_runner.go:130] > #   in $PATH.
	I0603 11:32:13.570208   44162 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I0603 11:32:13.570219   44162 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I0603 11:32:13.570231   44162 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I0603 11:32:13.570239   44162 command_runner.go:130] > #   state.
	I0603 11:32:13.570252   44162 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I0603 11:32:13.570264   44162 command_runner.go:130] > #   file. This can only be used when using the VM runtime_type.
	I0603 11:32:13.570276   44162 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I0603 11:32:13.570288   44162 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I0603 11:32:13.570301   44162 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I0603 11:32:13.570315   44162 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I0603 11:32:13.570325   44162 command_runner.go:130] > #   The currently recognized values are:
	I0603 11:32:13.570335   44162 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I0603 11:32:13.570348   44162 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I0603 11:32:13.570357   44162 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I0603 11:32:13.570370   44162 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I0603 11:32:13.570384   44162 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I0603 11:32:13.570397   44162 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I0603 11:32:13.570409   44162 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I0603 11:32:13.570422   44162 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I0603 11:32:13.570451   44162 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I0603 11:32:13.570468   44162 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I0603 11:32:13.570479   44162 command_runner.go:130] > #   deprecated option "conmon".
	I0603 11:32:13.570493   44162 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I0603 11:32:13.570506   44162 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I0603 11:32:13.570519   44162 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I0603 11:32:13.570531   44162 command_runner.go:130] > #   should be moved to the container's cgroup
	I0603 11:32:13.570544   44162 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the monitor.
	I0603 11:32:13.570563   44162 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I0603 11:32:13.570577   44162 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I0603 11:32:13.570589   44162 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I0603 11:32:13.570597   44162 command_runner.go:130] > #
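	Illustrative example only (not taken from this cluster's config): following the format described above, a hypothetical additional handler named "crun" could be declared as
	  [crio.runtime.runtimes.crun]
	  # Assumed install location of the crun binary; adjust to the host.
	  runtime_path = "/usr/bin/crun"
	  runtime_type = "oci"
	  runtime_root = "/run/crun"
	  monitor_path = "/usr/libexec/crio/conmon"
	  monitor_cgroup = "pod"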
	I0603 11:32:13.570605   44162 command_runner.go:130] > # Using the seccomp notifier feature:
	I0603 11:32:13.570613   44162 command_runner.go:130] > #
	I0603 11:32:13.570624   44162 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I0603 11:32:13.570637   44162 command_runner.go:130] > # blocked syscalls (permission denied errors) have negative impact on the workload.
	I0603 11:32:13.570645   44162 command_runner.go:130] > #
	I0603 11:32:13.570658   44162 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I0603 11:32:13.570671   44162 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I0603 11:32:13.570678   44162 command_runner.go:130] > #
	I0603 11:32:13.570688   44162 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I0603 11:32:13.570696   44162 command_runner.go:130] > # feature.
	I0603 11:32:13.570705   44162 command_runner.go:130] > #
	I0603 11:32:13.570718   44162 command_runner.go:130] > # If everything is setup, CRI-O will modify chosen seccomp profiles for
	I0603 11:32:13.570731   44162 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I0603 11:32:13.570744   44162 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I0603 11:32:13.570757   44162 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I0603 11:32:13.570770   44162 command_runner.go:130] > # seconds if the value of "io.kubernetes.cri-o.seccompNotifierAction=stop".
	I0603 11:32:13.570778   44162 command_runner.go:130] > #
	I0603 11:32:13.570790   44162 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I0603 11:32:13.570803   44162 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I0603 11:32:13.570811   44162 command_runner.go:130] > #
	I0603 11:32:13.570822   44162 command_runner.go:130] > # This also means that the Pod's "restartPolicy" has to be set to "Never",
	I0603 11:32:13.570834   44162 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I0603 11:32:13.570841   44162 command_runner.go:130] > #
	I0603 11:32:13.570853   44162 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I0603 11:32:13.570862   44162 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I0603 11:32:13.570871   44162 command_runner.go:130] > # limitation.
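	A minimal sketch of enabling the notifier, assuming it should be allowed for the default runc handler: add the annotation to that handler's allowed_annotations, e.g.
	  allowed_annotations = [
	  	"io.kubernetes.cri-o.seccompNotifierAction",
	  ]
	and then set "io.kubernetes.cri-o.seccompNotifierAction=stop" as an annotation on the Pod sandbox, with the Pod's restartPolicy set to "Never" as described above.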
	I0603 11:32:13.570880   44162 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I0603 11:32:13.570888   44162 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I0603 11:32:13.570897   44162 command_runner.go:130] > runtime_type = "oci"
	I0603 11:32:13.570906   44162 command_runner.go:130] > runtime_root = "/run/runc"
	I0603 11:32:13.570916   44162 command_runner.go:130] > runtime_config_path = ""
	I0603 11:32:13.570926   44162 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I0603 11:32:13.570944   44162 command_runner.go:130] > monitor_cgroup = "pod"
	I0603 11:32:13.570954   44162 command_runner.go:130] > monitor_exec_cgroup = ""
	I0603 11:32:13.570964   44162 command_runner.go:130] > monitor_env = [
	I0603 11:32:13.570976   44162 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0603 11:32:13.570984   44162 command_runner.go:130] > ]
	I0603 11:32:13.570994   44162 command_runner.go:130] > privileged_without_host_devices = false
	I0603 11:32:13.571008   44162 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I0603 11:32:13.571020   44162 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I0603 11:32:13.571033   44162 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I0603 11:32:13.571061   44162 command_runner.go:130] > # Each workload has a name, activation_annotation, annotation_prefix and a set of resources it supports mutating.
	I0603 11:32:13.571083   44162 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I0603 11:32:13.571095   44162 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I0603 11:32:13.571111   44162 command_runner.go:130] > # For a container to opt in to this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I0603 11:32:13.571124   44162 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I0603 11:32:13.571135   44162 command_runner.go:130] > # signifying for that resource type to override the default value.
	I0603 11:32:13.571148   44162 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I0603 11:32:13.571156   44162 command_runner.go:130] > # Example:
	I0603 11:32:13.571163   44162 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I0603 11:32:13.571170   44162 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I0603 11:32:13.571176   44162 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I0603 11:32:13.571183   44162 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I0603 11:32:13.571188   44162 command_runner.go:130] > # cpuset = 0
	I0603 11:32:13.571193   44162 command_runner.go:130] > # cpushares = "0-1"
	I0603 11:32:13.571198   44162 command_runner.go:130] > # Where:
	I0603 11:32:13.571204   44162 command_runner.go:130] > # The workload name is workload-type.
	I0603 11:32:13.571214   44162 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I0603 11:32:13.571221   44162 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I0603 11:32:13.571230   44162 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I0603 11:32:13.571241   44162 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I0603 11:32:13.571249   44162 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I0603 11:32:13.571256   44162 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I0603 11:32:13.571266   44162 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I0603 11:32:13.571273   44162 command_runner.go:130] > # Default value is set to true
	I0603 11:32:13.571279   44162 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I0603 11:32:13.571288   44162 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I0603 11:32:13.571295   44162 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I0603 11:32:13.571309   44162 command_runner.go:130] > # Default value is set to 'false'
	I0603 11:32:13.571315   44162 command_runner.go:130] > # disable_hostport_mapping = false
	I0603 11:32:13.571325   44162 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I0603 11:32:13.571330   44162 command_runner.go:130] > #
	I0603 11:32:13.571343   44162 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I0603 11:32:13.571354   44162 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I0603 11:32:13.571367   44162 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I0603 11:32:13.571379   44162 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I0603 11:32:13.571391   44162 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I0603 11:32:13.571401   44162 command_runner.go:130] > [crio.image]
	I0603 11:32:13.571416   44162 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I0603 11:32:13.571426   44162 command_runner.go:130] > # default_transport = "docker://"
	I0603 11:32:13.571439   44162 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I0603 11:32:13.571452   44162 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I0603 11:32:13.571460   44162 command_runner.go:130] > # global_auth_file = ""
	I0603 11:32:13.571469   44162 command_runner.go:130] > # The image used to instantiate infra containers.
	I0603 11:32:13.571478   44162 command_runner.go:130] > # This option supports live configuration reload.
	I0603 11:32:13.571488   44162 command_runner.go:130] > # pause_image = "registry.k8s.io/pause:3.9"
	I0603 11:32:13.571505   44162 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I0603 11:32:13.571518   44162 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I0603 11:32:13.571528   44162 command_runner.go:130] > # This option supports live configuration reload.
	I0603 11:32:13.571538   44162 command_runner.go:130] > # pause_image_auth_file = ""
	I0603 11:32:13.571551   44162 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I0603 11:32:13.571562   44162 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I0603 11:32:13.571574   44162 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I0603 11:32:13.571584   44162 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I0603 11:32:13.571593   44162 command_runner.go:130] > # pause_command = "/pause"
	I0603 11:32:13.571605   44162 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I0603 11:32:13.571617   44162 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I0603 11:32:13.571630   44162 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I0603 11:32:13.571643   44162 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I0603 11:32:13.571654   44162 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I0603 11:32:13.571665   44162 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I0603 11:32:13.571674   44162 command_runner.go:130] > # pinned_images = [
	I0603 11:32:13.571682   44162 command_runner.go:130] > # ]
	I0603 11:32:13.571693   44162 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I0603 11:32:13.571714   44162 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I0603 11:32:13.571728   44162 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I0603 11:32:13.571739   44162 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I0603 11:32:13.571749   44162 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I0603 11:32:13.571758   44162 command_runner.go:130] > # signature_policy = ""
	I0603 11:32:13.571769   44162 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I0603 11:32:13.571781   44162 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I0603 11:32:13.571793   44162 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I0603 11:32:13.571804   44162 command_runner.go:130] > # or the concatenated path is non existent, then the signature_policy or system
	I0603 11:32:13.571815   44162 command_runner.go:130] > # wide policy will be used as fallback. Must be an absolute path.
	I0603 11:32:13.571826   44162 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
	I0603 11:32:13.571838   44162 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I0603 11:32:13.571850   44162 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I0603 11:32:13.571860   44162 command_runner.go:130] > # changing them here.
	I0603 11:32:13.571870   44162 command_runner.go:130] > # insecure_registries = [
	I0603 11:32:13.571878   44162 command_runner.go:130] > # ]
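	As an illustrative sketch only (the local registry name is hypothetical, and registries are normally configured in /etc/containers/registries.conf as noted above), the image-related keys inside [crio.image] could be set like
	  pause_image = "registry.k8s.io/pause:3.9"
	  pinned_images = [
	  	"registry.k8s.io/pause:3.9",
	  ]
	  insecure_registries = [
	  	"registry.local:5000",
	  ]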
	I0603 11:32:13.571890   44162 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I0603 11:32:13.571900   44162 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I0603 11:32:13.571910   44162 command_runner.go:130] > # image_volumes = "mkdir"
	I0603 11:32:13.571920   44162 command_runner.go:130] > # Temporary directory to use for storing big files
	I0603 11:32:13.571930   44162 command_runner.go:130] > # big_files_temporary_dir = ""
	I0603 11:32:13.571942   44162 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I0603 11:32:13.571951   44162 command_runner.go:130] > # CNI plugins.
	I0603 11:32:13.571959   44162 command_runner.go:130] > [crio.network]
	I0603 11:32:13.571972   44162 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I0603 11:32:13.571985   44162 command_runner.go:130] > # CRI-O will pick up the first one found in network_dir.
	I0603 11:32:13.571993   44162 command_runner.go:130] > # cni_default_network = ""
	I0603 11:32:13.572002   44162 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I0603 11:32:13.572009   44162 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I0603 11:32:13.572014   44162 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I0603 11:32:13.572021   44162 command_runner.go:130] > # plugin_dirs = [
	I0603 11:32:13.572025   44162 command_runner.go:130] > # 	"/opt/cni/bin/",
	I0603 11:32:13.572030   44162 command_runner.go:130] > # ]
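	For illustration, assuming a CNI config named "kindnet" in the default directories (the network name is an assumption, not read from this host), the section could be filled in as
	  cni_default_network = "kindnet"
	  network_dir = "/etc/cni/net.d/"
	  plugin_dirs = [
	  	"/opt/cni/bin/",
	  ]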
	I0603 11:32:13.572036   44162 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I0603 11:32:13.572043   44162 command_runner.go:130] > [crio.metrics]
	I0603 11:32:13.572051   44162 command_runner.go:130] > # Globally enable or disable metrics support.
	I0603 11:32:13.572073   44162 command_runner.go:130] > enable_metrics = true
	I0603 11:32:13.572084   44162 command_runner.go:130] > # Specify enabled metrics collectors.
	I0603 11:32:13.572091   44162 command_runner.go:130] > # Per default all metrics are enabled.
	I0603 11:32:13.572105   44162 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I0603 11:32:13.572117   44162 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I0603 11:32:13.572128   44162 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I0603 11:32:13.572137   44162 command_runner.go:130] > # metrics_collectors = [
	I0603 11:32:13.572143   44162 command_runner.go:130] > # 	"operations",
	I0603 11:32:13.572154   44162 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I0603 11:32:13.572162   44162 command_runner.go:130] > # 	"operations_latency_microseconds",
	I0603 11:32:13.572170   44162 command_runner.go:130] > # 	"operations_errors",
	I0603 11:32:13.572178   44162 command_runner.go:130] > # 	"image_pulls_by_digest",
	I0603 11:32:13.572187   44162 command_runner.go:130] > # 	"image_pulls_by_name",
	I0603 11:32:13.572196   44162 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I0603 11:32:13.572206   44162 command_runner.go:130] > # 	"image_pulls_failures",
	I0603 11:32:13.572215   44162 command_runner.go:130] > # 	"image_pulls_successes",
	I0603 11:32:13.572224   44162 command_runner.go:130] > # 	"image_pulls_layer_size",
	I0603 11:32:13.572233   44162 command_runner.go:130] > # 	"image_layer_reuse",
	I0603 11:32:13.572243   44162 command_runner.go:130] > # 	"containers_events_dropped_total",
	I0603 11:32:13.572249   44162 command_runner.go:130] > # 	"containers_oom_total",
	I0603 11:32:13.572257   44162 command_runner.go:130] > # 	"containers_oom",
	I0603 11:32:13.572261   44162 command_runner.go:130] > # 	"processes_defunct",
	I0603 11:32:13.572267   44162 command_runner.go:130] > # 	"operations_total",
	I0603 11:32:13.572272   44162 command_runner.go:130] > # 	"operations_latency_seconds",
	I0603 11:32:13.572280   44162 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I0603 11:32:13.572286   44162 command_runner.go:130] > # 	"operations_errors_total",
	I0603 11:32:13.572291   44162 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I0603 11:32:13.572297   44162 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I0603 11:32:13.572302   44162 command_runner.go:130] > # 	"image_pulls_failure_total",
	I0603 11:32:13.572309   44162 command_runner.go:130] > # 	"image_pulls_success_total",
	I0603 11:32:13.572313   44162 command_runner.go:130] > # 	"image_layer_reuse_total",
	I0603 11:32:13.572317   44162 command_runner.go:130] > # 	"containers_oom_count_total",
	I0603 11:32:13.572324   44162 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I0603 11:32:13.572330   44162 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I0603 11:32:13.572334   44162 command_runner.go:130] > # ]
	I0603 11:32:13.572341   44162 command_runner.go:130] > # The port on which the metrics server will listen.
	I0603 11:32:13.572354   44162 command_runner.go:130] > # metrics_port = 9090
	I0603 11:32:13.572365   44162 command_runner.go:130] > # Local socket path to bind the metrics server to
	I0603 11:32:13.572375   44162 command_runner.go:130] > # metrics_socket = ""
	I0603 11:32:13.572385   44162 command_runner.go:130] > # The certificate for the secure metrics server.
	I0603 11:32:13.572398   44162 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I0603 11:32:13.572411   44162 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I0603 11:32:13.572422   44162 command_runner.go:130] > # certificate on any modification event.
	I0603 11:32:13.572429   44162 command_runner.go:130] > # metrics_cert = ""
	I0603 11:32:13.572440   44162 command_runner.go:130] > # The certificate key for the secure metrics server.
	I0603 11:32:13.572452   44162 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I0603 11:32:13.572462   44162 command_runner.go:130] > # metrics_key = ""
	I0603 11:32:13.572473   44162 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I0603 11:32:13.572482   44162 command_runner.go:130] > [crio.tracing]
	I0603 11:32:13.572493   44162 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I0603 11:32:13.572501   44162 command_runner.go:130] > # enable_tracing = false
	I0603 11:32:13.572506   44162 command_runner.go:130] > # Address on which the gRPC trace collector listens.
	I0603 11:32:13.572513   44162 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I0603 11:32:13.572519   44162 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I0603 11:32:13.572526   44162 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I0603 11:32:13.572530   44162 command_runner.go:130] > # CRI-O NRI configuration.
	I0603 11:32:13.572537   44162 command_runner.go:130] > [crio.nri]
	I0603 11:32:13.572541   44162 command_runner.go:130] > # Globally enable or disable NRI.
	I0603 11:32:13.572547   44162 command_runner.go:130] > # enable_nri = false
	I0603 11:32:13.572552   44162 command_runner.go:130] > # NRI socket to listen on.
	I0603 11:32:13.572559   44162 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I0603 11:32:13.572563   44162 command_runner.go:130] > # NRI plugin directory to use.
	I0603 11:32:13.572570   44162 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I0603 11:32:13.572575   44162 command_runner.go:130] > # NRI plugin configuration directory to use.
	I0603 11:32:13.572581   44162 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I0603 11:32:13.572587   44162 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I0603 11:32:13.572594   44162 command_runner.go:130] > # nri_disable_connections = false
	I0603 11:32:13.572599   44162 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I0603 11:32:13.572605   44162 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I0603 11:32:13.572611   44162 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I0603 11:32:13.572617   44162 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
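	A sketch of turning NRI on with the commented defaults shown above (enabling it here is an assumption for illustration, not this node's actual setting):
	  enable_nri = true
	  nri_listen = "/var/run/nri/nri.sock"
	  nri_plugin_dir = "/opt/nri/plugins"
	  nri_plugin_config_dir = "/etc/nri/conf.d"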
	I0603 11:32:13.572623   44162 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I0603 11:32:13.572634   44162 command_runner.go:130] > [crio.stats]
	I0603 11:32:13.572641   44162 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I0603 11:32:13.572647   44162 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I0603 11:32:13.572653   44162 command_runner.go:130] > # stats_collection_period = 0
	I0603 11:32:13.572683   44162 command_runner.go:130] ! time="2024-06-03 11:32:13.534399797Z" level=info msg="Starting CRI-O, version: 1.29.1, git: unknown(clean)"
	I0603 11:32:13.572695   44162 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I0603 11:32:13.572786   44162 cni.go:84] Creating CNI manager for ""
	I0603 11:32:13.572793   44162 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0603 11:32:13.572803   44162 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0603 11:32:13.572824   44162 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.232 APIServerPort:8443 KubernetesVersion:v1.30.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-505550 NodeName:multinode-505550 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.232"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.232 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:
/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0603 11:32:13.572946   44162 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.232
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-505550"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.232
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.232"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0603 11:32:13.572997   44162 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.1
	I0603 11:32:13.583135   44162 command_runner.go:130] > kubeadm
	I0603 11:32:13.583147   44162 command_runner.go:130] > kubectl
	I0603 11:32:13.583151   44162 command_runner.go:130] > kubelet
	I0603 11:32:13.583258   44162 binaries.go:44] Found k8s binaries, skipping transfer
	I0603 11:32:13.583323   44162 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0603 11:32:13.592639   44162 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (316 bytes)
	I0603 11:32:13.609004   44162 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0603 11:32:13.625214   44162 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2160 bytes)
	I0603 11:32:13.641269   44162 ssh_runner.go:195] Run: grep 192.168.39.232	control-plane.minikube.internal$ /etc/hosts
	I0603 11:32:13.645167   44162 command_runner.go:130] > 192.168.39.232	control-plane.minikube.internal
	I0603 11:32:13.645231   44162 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0603 11:32:13.783390   44162 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0603 11:32:13.798463   44162 certs.go:68] Setting up /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/multinode-505550 for IP: 192.168.39.232
	I0603 11:32:13.798485   44162 certs.go:194] generating shared ca certs ...
	I0603 11:32:13.798498   44162 certs.go:226] acquiring lock for ca certs: {Name:mk50092df3bdecbf959926adc377ee06b75aacd9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 11:32:13.798682   44162 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19008-7755/.minikube/ca.key
	I0603 11:32:13.798745   44162 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19008-7755/.minikube/proxy-client-ca.key
	I0603 11:32:13.798759   44162 certs.go:256] generating profile certs ...
	I0603 11:32:13.798858   44162 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/multinode-505550/client.key
	I0603 11:32:13.798942   44162 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/multinode-505550/apiserver.key.5ddf5b8c
	I0603 11:32:13.798990   44162 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/multinode-505550/proxy-client.key
	I0603 11:32:13.799004   44162 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19008-7755/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0603 11:32:13.799023   44162 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19008-7755/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0603 11:32:13.799082   44162 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19008-7755/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0603 11:32:13.799102   44162 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19008-7755/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0603 11:32:13.799116   44162 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/multinode-505550/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0603 11:32:13.799134   44162 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/multinode-505550/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0603 11:32:13.799150   44162 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/multinode-505550/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0603 11:32:13.799166   44162 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/multinode-505550/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0603 11:32:13.799230   44162 certs.go:484] found cert: /home/jenkins/minikube-integration/19008-7755/.minikube/certs/15028.pem (1338 bytes)
	W0603 11:32:13.799268   44162 certs.go:480] ignoring /home/jenkins/minikube-integration/19008-7755/.minikube/certs/15028_empty.pem, impossibly tiny 0 bytes
	I0603 11:32:13.799280   44162 certs.go:484] found cert: /home/jenkins/minikube-integration/19008-7755/.minikube/certs/ca-key.pem (1679 bytes)
	I0603 11:32:13.799313   44162 certs.go:484] found cert: /home/jenkins/minikube-integration/19008-7755/.minikube/certs/ca.pem (1082 bytes)
	I0603 11:32:13.799366   44162 certs.go:484] found cert: /home/jenkins/minikube-integration/19008-7755/.minikube/certs/cert.pem (1123 bytes)
	I0603 11:32:13.799406   44162 certs.go:484] found cert: /home/jenkins/minikube-integration/19008-7755/.minikube/certs/key.pem (1679 bytes)
	I0603 11:32:13.799458   44162 certs.go:484] found cert: /home/jenkins/minikube-integration/19008-7755/.minikube/files/etc/ssl/certs/150282.pem (1708 bytes)
	I0603 11:32:13.799498   44162 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19008-7755/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0603 11:32:13.799518   44162 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19008-7755/.minikube/certs/15028.pem -> /usr/share/ca-certificates/15028.pem
	I0603 11:32:13.799536   44162 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19008-7755/.minikube/files/etc/ssl/certs/150282.pem -> /usr/share/ca-certificates/150282.pem
	I0603 11:32:13.800062   44162 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0603 11:32:13.824578   44162 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0603 11:32:13.847736   44162 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0603 11:32:13.871958   44162 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0603 11:32:13.895332   44162 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/multinode-505550/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0603 11:32:13.918172   44162 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/multinode-505550/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0603 11:32:13.941077   44162 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/multinode-505550/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0603 11:32:13.964480   44162 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/multinode-505550/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0603 11:32:13.988110   44162 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0603 11:32:14.011440   44162 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/certs/15028.pem --> /usr/share/ca-certificates/15028.pem (1338 bytes)
	I0603 11:32:14.034547   44162 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/files/etc/ssl/certs/150282.pem --> /usr/share/ca-certificates/150282.pem (1708 bytes)
	I0603 11:32:14.057428   44162 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0603 11:32:14.073974   44162 ssh_runner.go:195] Run: openssl version
	I0603 11:32:14.079523   44162 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0603 11:32:14.079687   44162 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0603 11:32:14.090250   44162 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0603 11:32:14.094427   44162 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Jun  3 10:39 /usr/share/ca-certificates/minikubeCA.pem
	I0603 11:32:14.094646   44162 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jun  3 10:39 /usr/share/ca-certificates/minikubeCA.pem
	I0603 11:32:14.094690   44162 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0603 11:32:14.100162   44162 command_runner.go:130] > b5213941
	I0603 11:32:14.100234   44162 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0603 11:32:14.109493   44162 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15028.pem && ln -fs /usr/share/ca-certificates/15028.pem /etc/ssl/certs/15028.pem"
	I0603 11:32:14.119950   44162 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15028.pem
	I0603 11:32:14.124300   44162 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Jun  3 10:51 /usr/share/ca-certificates/15028.pem
	I0603 11:32:14.124357   44162 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jun  3 10:51 /usr/share/ca-certificates/15028.pem
	I0603 11:32:14.124403   44162 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15028.pem
	I0603 11:32:14.129757   44162 command_runner.go:130] > 51391683
	I0603 11:32:14.129880   44162 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/15028.pem /etc/ssl/certs/51391683.0"
	I0603 11:32:14.138935   44162 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/150282.pem && ln -fs /usr/share/ca-certificates/150282.pem /etc/ssl/certs/150282.pem"
	I0603 11:32:14.149946   44162 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/150282.pem
	I0603 11:32:14.154189   44162 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Jun  3 10:51 /usr/share/ca-certificates/150282.pem
	I0603 11:32:14.154334   44162 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jun  3 10:51 /usr/share/ca-certificates/150282.pem
	I0603 11:32:14.154377   44162 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/150282.pem
	I0603 11:32:14.159818   44162 command_runner.go:130] > 3ec20f2e
	I0603 11:32:14.159998   44162 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/150282.pem /etc/ssl/certs/3ec20f2e.0"
	I0603 11:32:14.169122   44162 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0603 11:32:14.173390   44162 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0603 11:32:14.173409   44162 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I0603 11:32:14.173414   44162 command_runner.go:130] > Device: 253,1	Inode: 8386582     Links: 1
	I0603 11:32:14.173421   44162 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0603 11:32:14.173427   44162 command_runner.go:130] > Access: 2024-06-03 11:25:53.328128273 +0000
	I0603 11:32:14.173435   44162 command_runner.go:130] > Modify: 2024-06-03 11:25:53.328128273 +0000
	I0603 11:32:14.173447   44162 command_runner.go:130] > Change: 2024-06-03 11:25:53.328128273 +0000
	I0603 11:32:14.173456   44162 command_runner.go:130] >  Birth: 2024-06-03 11:25:53.328128273 +0000
	I0603 11:32:14.173543   44162 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0603 11:32:14.178999   44162 command_runner.go:130] > Certificate will not expire
	I0603 11:32:14.179057   44162 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0603 11:32:14.184371   44162 command_runner.go:130] > Certificate will not expire
	I0603 11:32:14.184413   44162 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0603 11:32:14.189727   44162 command_runner.go:130] > Certificate will not expire
	I0603 11:32:14.189775   44162 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0603 11:32:14.194905   44162 command_runner.go:130] > Certificate will not expire
	I0603 11:32:14.195085   44162 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0603 11:32:14.200370   44162 command_runner.go:130] > Certificate will not expire
	I0603 11:32:14.200621   44162 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0603 11:32:14.205998   44162 command_runner.go:130] > Certificate will not expire
	I0603 11:32:14.206057   44162 kubeadm.go:391] StartCluster: {Name:multinode-505550 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.
1 ClusterName:multinode-505550 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.232 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.227 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.172 Port:0 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false
inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: Disable
Optimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0603 11:32:14.206171   44162 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0603 11:32:14.206233   44162 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0603 11:32:14.244064   44162 command_runner.go:130] > 3e620850e58c82e87316b8c1ff84a833176235ba76dd48543684d19b0982d37d
	I0603 11:32:14.244088   44162 command_runner.go:130] > 4e706590e463e059ba314f8383faf1ff1548d0370c0f10dabde12a8dd107c284
	I0603 11:32:14.244094   44162 command_runner.go:130] > 43e352950fd35bb947f3ab7aaf02e79570246ddf2cac8d458867155296100368
	I0603 11:32:14.244100   44162 command_runner.go:130] > d6635384a19f3973b8ebdd125fd196355c7f163f405241bcdcb3848c0ae5bfc8
	I0603 11:32:14.244106   44162 command_runner.go:130] > e609ee17b90fa82d5d04fe16520a0c6782e7dea24d30dbb0e9379f9249c34dd0
	I0603 11:32:14.244115   44162 command_runner.go:130] > 9829e2309203856bfbdd1f4b1b8799484a5e0888c43841f2f409be895f44ac40
	I0603 11:32:14.244125   44162 command_runner.go:130] > 37aee72ac00be32936d32e337e9e01a378fb4992a9cf7ed31775dcbfa8ef8d20
	I0603 11:32:14.244146   44162 command_runner.go:130] > 9bc2d863a2009fba4ad23b3993c51be79fa80cc8da9b5c150ce013d6fd17f6c9
	I0603 11:32:14.244168   44162 cri.go:89] found id: "3e620850e58c82e87316b8c1ff84a833176235ba76dd48543684d19b0982d37d"
	I0603 11:32:14.244178   44162 cri.go:89] found id: "4e706590e463e059ba314f8383faf1ff1548d0370c0f10dabde12a8dd107c284"
	I0603 11:32:14.244183   44162 cri.go:89] found id: "43e352950fd35bb947f3ab7aaf02e79570246ddf2cac8d458867155296100368"
	I0603 11:32:14.244185   44162 cri.go:89] found id: "d6635384a19f3973b8ebdd125fd196355c7f163f405241bcdcb3848c0ae5bfc8"
	I0603 11:32:14.244188   44162 cri.go:89] found id: "e609ee17b90fa82d5d04fe16520a0c6782e7dea24d30dbb0e9379f9249c34dd0"
	I0603 11:32:14.244191   44162 cri.go:89] found id: "9829e2309203856bfbdd1f4b1b8799484a5e0888c43841f2f409be895f44ac40"
	I0603 11:32:14.244194   44162 cri.go:89] found id: "37aee72ac00be32936d32e337e9e01a378fb4992a9cf7ed31775dcbfa8ef8d20"
	I0603 11:32:14.244196   44162 cri.go:89] found id: "9bc2d863a2009fba4ad23b3993c51be79fa80cc8da9b5c150ce013d6fd17f6c9"
	I0603 11:32:14.244199   44162 cri.go:89] found id: ""
	I0603 11:32:14.244247   44162 ssh_runner.go:195] Run: sudo runc list -f json
	
	
	==> CRI-O <==
	Jun 03 11:33:40 multinode-505550 crio[2864]: time="2024-06-03 11:33:40.028798961Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1717414420028774214,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143025,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=53f913d4-352a-46ff-8892-0e1d29f14705 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 03 11:33:40 multinode-505550 crio[2864]: time="2024-06-03 11:33:40.029279245Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d5ea4695-c649-41e2-89e9-cf7e69c68008 name=/runtime.v1.RuntimeService/ListContainers
	Jun 03 11:33:40 multinode-505550 crio[2864]: time="2024-06-03 11:33:40.029366495Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d5ea4695-c649-41e2-89e9-cf7e69c68008 name=/runtime.v1.RuntimeService/ListContainers
	Jun 03 11:33:40 multinode-505550 crio[2864]: time="2024-06-03 11:33:40.029796040Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:df483073fa3fb785c493fc6afdd2c6f0888dbffc9cbf1dbb06e011bb502c9cab,PodSandboxId:0318de4e55dcd8d686e734d0076108297ae1571ea735347a8c24c6922955eed8,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1717414374153235949,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-nrpnb,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 39d1f4e2-260f-4fd2-9989-c77d0dd21049,},Annotations:map[string]string{io.kubernetes.container.hash: 88effacd,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cac8e61c821989854b2f55119cfd9761a0a47f8ea2393d5c18efb4b8ae23279a,PodSandboxId:2e30d008cc8ccbd818be8736680ac3eccab6b30250e97da63b0ca3670a803e69,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CONTAINER_RUNNING,CreatedAt:1717414340631225936,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-x9tml,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8009dbea-f826-44c0-87e5-229b6efdfadc,},Annotations:map[string]string{io.kubernetes.container.hash: c0501522,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:00339123e1f21e4c4c01ccd77117bb918711c7d5531b771de53ffc77481ca343,PodSandboxId:99df45d71c3a54824df5fc90b173bf85b2fea7e4dabcb0c2ffb7fb80727681ec,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1717414340490844649,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-ljnxn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 28236795-201d-4d98-a57f-3ec7dda17017,},Annotations:map[string]string{io.kubernetes.container.hash: 5547a5e4,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:36f3491249f81e7d1b784343518b31a2d13b59ef1c3d80808a24954de1ad75cb,PodSandboxId:e4acd7567116c1893451e0bf24f2df35f071b5ae59c04be178981941e2f21c62,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1717414340422133311,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cdb43188-2f13-4ea2-b906-3428f776eeb4,},An
notations:map[string]string{io.kubernetes.container.hash: d3389f8b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7a7dc7ea2138c737fb8cb1375c84e7cbe5eda8ccfff2a0abd6c6e6098e38901e,PodSandboxId:429c3c650d7e535703c47a86f29d1ecbc9e9a79f435a4dd561125b9f80e103b9,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:1717414340422169113,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-nsx2s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 261dd21c-29c2-4178-8c07-95f680e12cd1,},Annotations:map[string]string{io.ku
bernetes.container.hash: c3d14b68,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1aa2017e346a1a9e3efe275c258488513afc245438f371561147ec9432b5222a,PodSandboxId:c41a19e00620abf4a74ce01e6a0479dfc236e4c007b05706681acd71813084e6,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1717414336573681079,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-505550,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3bc7b935b457720b0098c72b13f32f50,},Annotations:map[string]string{io.kubernetes.conta
iner.hash: 200064a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:33e99de01a6dc667301bf4e986f05c6cd755b871f915be9f69a980829aa428ff,PodSandboxId:18f35e40ddee74760c5ca185d39a0601523d8e5672148d93849b03c01e6af5df,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1717414336566274323,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-505550,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4def9b2659615cee892e7dc3ae4825b3,},Annotations:map[string]string{io.kub
ernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae066b6e74205c8a0af0914a8f63f08a78aaa9c743feba1bfc202950fafd0320,PodSandboxId:db96f61e1e178f691c6c64bffce49b947075b1942cb13b9b10d6d1b9b214f2ef,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1717414336610713783,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-505550,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3379ca91c8329ad29561c7813158eed3,},Annotations:map[string]string{io.kubernetes.container.hash: e34c9fbf,io.kubernet
es.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b65f722b1ce16783ecadc9ea08611a29cb1fbe8ca0ae7bffea150a18f7d41e12,PodSandboxId:2790e7ea8fac8eb7804c50a5bb864ddbcc8406921c2b3e7c733ce782ecd46fad,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1717414336529205306,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-505550,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 58994f26dfe73bd8f7134c529936f9c5,},Annotations:map[string]string{io.kubernetes.container.hash: 1fc178a0,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5f5e11f7649665346942da51c6082b8b0e21c85bc22d44be1b62a19136498974,PodSandboxId:9538cd6a41f17007b73b77040899ffc2261108543c3ffb04eb9a4a321981a547,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1717414027300894718,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-nrpnb,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 39d1f4e2-260f-4fd2-9989-c77d0dd21049,},Annotations:map[string]string{io.kubernetes.container.hash: 88effacd,io.kubernetes.container.res
tartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3e620850e58c82e87316b8c1ff84a833176235ba76dd48543684d19b0982d37d,PodSandboxId:36bdd67bb32f9347b543b85ec5a923b5fb4134c0c0d1f98c516273e7908dacb7,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1717413982008712281,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-ljnxn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 28236795-201d-4d98-a57f-3ec7dda17017,},Annotations:map[string]string{io.kubernetes.container.hash: 5547a5e4,io.kubernetes.container.ports: [{\"name\":\"dns\",\"contain
erPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4e706590e463e059ba314f8383faf1ff1548d0370c0f10dabde12a8dd107c284,PodSandboxId:5b00ced87c1743c7fcd6304a070e457c42f0cd216ef38e31fa2179b765434ec7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1717413981949435798,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.
namespace: kube-system,io.kubernetes.pod.uid: cdb43188-2f13-4ea2-b906-3428f776eeb4,},Annotations:map[string]string{io.kubernetes.container.hash: d3389f8b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:43e352950fd35bb947f3ab7aaf02e79570246ddf2cac8d458867155296100368,PodSandboxId:b1e8e3910984c00a9485c5a70a755e09739d9ea8b73bc5d3f37c687ffba7821d,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:2b34f64609858041e706963bcd73273c087360ca240f1f9b37db6f148edb1266,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CONTAINER_EXITED,CreatedAt:1717413980591164672,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-x9tml,io.kubernetes.pod.n
amespace: kube-system,io.kubernetes.pod.uid: 8009dbea-f826-44c0-87e5-229b6efdfadc,},Annotations:map[string]string{io.kubernetes.container.hash: c0501522,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d6635384a19f3973b8ebdd125fd196355c7f163f405241bcdcb3848c0ae5bfc8,PodSandboxId:468d52378470e2b2ba8ffe2cb083d299648478407c4d7bb03735309754c26790,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_EXITED,CreatedAt:1717413976936298698,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-nsx2s,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: 261dd21c-29c2-4178-8c07-95f680e12cd1,},Annotations:map[string]string{io.kubernetes.container.hash: c3d14b68,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e609ee17b90fa82d5d04fe16520a0c6782e7dea24d30dbb0e9379f9249c34dd0,PodSandboxId:4d2cf60baa750bb0f90944666a04886f85c05186e0784976c70e7a4cb2b365c3,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1717413957555829944,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-505550,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3379ca91c8329ad29561c7813158ee
d3,},Annotations:map[string]string{io.kubernetes.container.hash: e34c9fbf,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9829e2309203856bfbdd1f4b1b8799484a5e0888c43841f2f409be895f44ac40,PodSandboxId:cce119ae28b41c6ce401c01081afe7dccfb05e3ef7de661201666752b7a86005,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_EXITED,CreatedAt:1717413957551239924,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-505550,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 58994f26dfe73bd8f7134c529936f9c5,},Annotation
s:map[string]string{io.kubernetes.container.hash: 1fc178a0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:37aee72ac00be32936d32e337e9e01a378fb4992a9cf7ed31775dcbfa8ef8d20,PodSandboxId:0ea297b461475a2aafb682da4a17e6c0b4b8dc25cba335537491a76d74504a87,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_EXITED,CreatedAt:1717413957509108832,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-505550,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3bc7b935b457720b0098c72b13f32f50,},Annotations:map[string]st
ring{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9bc2d863a2009fba4ad23b3993c51be79fa80cc8da9b5c150ce013d6fd17f6c9,PodSandboxId:ec1b07c24772f203fdf3378b46b28e4530edc764397be33bbe3147225551baa2,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_EXITED,CreatedAt:1717413957487977482,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-505550,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4def9b2659615cee892e7dc3ae4825b3,},Annotations:m
ap[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=d5ea4695-c649-41e2-89e9-cf7e69c68008 name=/runtime.v1.RuntimeService/ListContainers
	Jun 03 11:33:40 multinode-505550 crio[2864]: time="2024-06-03 11:33:40.075383615Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=eab59a92-600d-4de9-9ad5-139163f35522 name=/runtime.v1.RuntimeService/Version
	Jun 03 11:33:40 multinode-505550 crio[2864]: time="2024-06-03 11:33:40.075458955Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=eab59a92-600d-4de9-9ad5-139163f35522 name=/runtime.v1.RuntimeService/Version
	Jun 03 11:33:40 multinode-505550 crio[2864]: time="2024-06-03 11:33:40.076531814Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=26f601e9-a813-4699-91d9-61504106d324 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 03 11:33:40 multinode-505550 crio[2864]: time="2024-06-03 11:33:40.077226759Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1717414420077201403,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143025,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=26f601e9-a813-4699-91d9-61504106d324 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 03 11:33:40 multinode-505550 crio[2864]: time="2024-06-03 11:33:40.077858157Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=1ce0c8fe-b920-4d88-b18e-ab8f42155592 name=/runtime.v1.RuntimeService/ListContainers
	Jun 03 11:33:40 multinode-505550 crio[2864]: time="2024-06-03 11:33:40.077929529Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=1ce0c8fe-b920-4d88-b18e-ab8f42155592 name=/runtime.v1.RuntimeService/ListContainers
	Jun 03 11:33:40 multinode-505550 crio[2864]: time="2024-06-03 11:33:40.078280380Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:df483073fa3fb785c493fc6afdd2c6f0888dbffc9cbf1dbb06e011bb502c9cab,PodSandboxId:0318de4e55dcd8d686e734d0076108297ae1571ea735347a8c24c6922955eed8,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1717414374153235949,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-nrpnb,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 39d1f4e2-260f-4fd2-9989-c77d0dd21049,},Annotations:map[string]string{io.kubernetes.container.hash: 88effacd,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cac8e61c821989854b2f55119cfd9761a0a47f8ea2393d5c18efb4b8ae23279a,PodSandboxId:2e30d008cc8ccbd818be8736680ac3eccab6b30250e97da63b0ca3670a803e69,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CONTAINER_RUNNING,CreatedAt:1717414340631225936,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-x9tml,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8009dbea-f826-44c0-87e5-229b6efdfadc,},Annotations:map[string]string{io.kubernetes.container.hash: c0501522,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:00339123e1f21e4c4c01ccd77117bb918711c7d5531b771de53ffc77481ca343,PodSandboxId:99df45d71c3a54824df5fc90b173bf85b2fea7e4dabcb0c2ffb7fb80727681ec,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1717414340490844649,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-ljnxn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 28236795-201d-4d98-a57f-3ec7dda17017,},Annotations:map[string]string{io.kubernetes.container.hash: 5547a5e4,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:36f3491249f81e7d1b784343518b31a2d13b59ef1c3d80808a24954de1ad75cb,PodSandboxId:e4acd7567116c1893451e0bf24f2df35f071b5ae59c04be178981941e2f21c62,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1717414340422133311,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cdb43188-2f13-4ea2-b906-3428f776eeb4,},An
notations:map[string]string{io.kubernetes.container.hash: d3389f8b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7a7dc7ea2138c737fb8cb1375c84e7cbe5eda8ccfff2a0abd6c6e6098e38901e,PodSandboxId:429c3c650d7e535703c47a86f29d1ecbc9e9a79f435a4dd561125b9f80e103b9,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:1717414340422169113,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-nsx2s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 261dd21c-29c2-4178-8c07-95f680e12cd1,},Annotations:map[string]string{io.ku
bernetes.container.hash: c3d14b68,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1aa2017e346a1a9e3efe275c258488513afc245438f371561147ec9432b5222a,PodSandboxId:c41a19e00620abf4a74ce01e6a0479dfc236e4c007b05706681acd71813084e6,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1717414336573681079,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-505550,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3bc7b935b457720b0098c72b13f32f50,},Annotations:map[string]string{io.kubernetes.conta
iner.hash: 200064a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:33e99de01a6dc667301bf4e986f05c6cd755b871f915be9f69a980829aa428ff,PodSandboxId:18f35e40ddee74760c5ca185d39a0601523d8e5672148d93849b03c01e6af5df,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1717414336566274323,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-505550,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4def9b2659615cee892e7dc3ae4825b3,},Annotations:map[string]string{io.kub
ernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae066b6e74205c8a0af0914a8f63f08a78aaa9c743feba1bfc202950fafd0320,PodSandboxId:db96f61e1e178f691c6c64bffce49b947075b1942cb13b9b10d6d1b9b214f2ef,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1717414336610713783,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-505550,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3379ca91c8329ad29561c7813158eed3,},Annotations:map[string]string{io.kubernetes.container.hash: e34c9fbf,io.kubernet
es.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b65f722b1ce16783ecadc9ea08611a29cb1fbe8ca0ae7bffea150a18f7d41e12,PodSandboxId:2790e7ea8fac8eb7804c50a5bb864ddbcc8406921c2b3e7c733ce782ecd46fad,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1717414336529205306,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-505550,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 58994f26dfe73bd8f7134c529936f9c5,},Annotations:map[string]string{io.kubernetes.container.hash: 1fc178a0,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5f5e11f7649665346942da51c6082b8b0e21c85bc22d44be1b62a19136498974,PodSandboxId:9538cd6a41f17007b73b77040899ffc2261108543c3ffb04eb9a4a321981a547,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1717414027300894718,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-nrpnb,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 39d1f4e2-260f-4fd2-9989-c77d0dd21049,},Annotations:map[string]string{io.kubernetes.container.hash: 88effacd,io.kubernetes.container.res
tartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3e620850e58c82e87316b8c1ff84a833176235ba76dd48543684d19b0982d37d,PodSandboxId:36bdd67bb32f9347b543b85ec5a923b5fb4134c0c0d1f98c516273e7908dacb7,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1717413982008712281,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-ljnxn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 28236795-201d-4d98-a57f-3ec7dda17017,},Annotations:map[string]string{io.kubernetes.container.hash: 5547a5e4,io.kubernetes.container.ports: [{\"name\":\"dns\",\"contain
erPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4e706590e463e059ba314f8383faf1ff1548d0370c0f10dabde12a8dd107c284,PodSandboxId:5b00ced87c1743c7fcd6304a070e457c42f0cd216ef38e31fa2179b765434ec7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1717413981949435798,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.
namespace: kube-system,io.kubernetes.pod.uid: cdb43188-2f13-4ea2-b906-3428f776eeb4,},Annotations:map[string]string{io.kubernetes.container.hash: d3389f8b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:43e352950fd35bb947f3ab7aaf02e79570246ddf2cac8d458867155296100368,PodSandboxId:b1e8e3910984c00a9485c5a70a755e09739d9ea8b73bc5d3f37c687ffba7821d,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:2b34f64609858041e706963bcd73273c087360ca240f1f9b37db6f148edb1266,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CONTAINER_EXITED,CreatedAt:1717413980591164672,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-x9tml,io.kubernetes.pod.n
amespace: kube-system,io.kubernetes.pod.uid: 8009dbea-f826-44c0-87e5-229b6efdfadc,},Annotations:map[string]string{io.kubernetes.container.hash: c0501522,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d6635384a19f3973b8ebdd125fd196355c7f163f405241bcdcb3848c0ae5bfc8,PodSandboxId:468d52378470e2b2ba8ffe2cb083d299648478407c4d7bb03735309754c26790,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_EXITED,CreatedAt:1717413976936298698,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-nsx2s,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: 261dd21c-29c2-4178-8c07-95f680e12cd1,},Annotations:map[string]string{io.kubernetes.container.hash: c3d14b68,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e609ee17b90fa82d5d04fe16520a0c6782e7dea24d30dbb0e9379f9249c34dd0,PodSandboxId:4d2cf60baa750bb0f90944666a04886f85c05186e0784976c70e7a4cb2b365c3,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1717413957555829944,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-505550,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3379ca91c8329ad29561c7813158ee
d3,},Annotations:map[string]string{io.kubernetes.container.hash: e34c9fbf,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9829e2309203856bfbdd1f4b1b8799484a5e0888c43841f2f409be895f44ac40,PodSandboxId:cce119ae28b41c6ce401c01081afe7dccfb05e3ef7de661201666752b7a86005,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_EXITED,CreatedAt:1717413957551239924,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-505550,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 58994f26dfe73bd8f7134c529936f9c5,},Annotation
s:map[string]string{io.kubernetes.container.hash: 1fc178a0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:37aee72ac00be32936d32e337e9e01a378fb4992a9cf7ed31775dcbfa8ef8d20,PodSandboxId:0ea297b461475a2aafb682da4a17e6c0b4b8dc25cba335537491a76d74504a87,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_EXITED,CreatedAt:1717413957509108832,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-505550,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3bc7b935b457720b0098c72b13f32f50,},Annotations:map[string]st
ring{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9bc2d863a2009fba4ad23b3993c51be79fa80cc8da9b5c150ce013d6fd17f6c9,PodSandboxId:ec1b07c24772f203fdf3378b46b28e4530edc764397be33bbe3147225551baa2,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_EXITED,CreatedAt:1717413957487977482,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-505550,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4def9b2659615cee892e7dc3ae4825b3,},Annotations:m
ap[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=1ce0c8fe-b920-4d88-b18e-ab8f42155592 name=/runtime.v1.RuntimeService/ListContainers
	Jun 03 11:33:40 multinode-505550 crio[2864]: time="2024-06-03 11:33:40.124146899Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=95f78ccc-e343-4816-b6df-7783873b689f name=/runtime.v1.RuntimeService/Version
	Jun 03 11:33:40 multinode-505550 crio[2864]: time="2024-06-03 11:33:40.124239556Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=95f78ccc-e343-4816-b6df-7783873b689f name=/runtime.v1.RuntimeService/Version
	Jun 03 11:33:40 multinode-505550 crio[2864]: time="2024-06-03 11:33:40.125292154Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=f6be02f5-64f6-42cc-b418-67fa4082f826 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 03 11:33:40 multinode-505550 crio[2864]: time="2024-06-03 11:33:40.125777046Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1717414420125753040,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143025,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f6be02f5-64f6-42cc-b418-67fa4082f826 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 03 11:33:40 multinode-505550 crio[2864]: time="2024-06-03 11:33:40.126678297Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=dc2cd472-cace-4389-9ecd-ceadabdf8219 name=/runtime.v1.RuntimeService/ListContainers
	Jun 03 11:33:40 multinode-505550 crio[2864]: time="2024-06-03 11:33:40.126738460Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=dc2cd472-cace-4389-9ecd-ceadabdf8219 name=/runtime.v1.RuntimeService/ListContainers
	Jun 03 11:33:40 multinode-505550 crio[2864]: time="2024-06-03 11:33:40.128078609Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:df483073fa3fb785c493fc6afdd2c6f0888dbffc9cbf1dbb06e011bb502c9cab,PodSandboxId:0318de4e55dcd8d686e734d0076108297ae1571ea735347a8c24c6922955eed8,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1717414374153235949,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-nrpnb,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 39d1f4e2-260f-4fd2-9989-c77d0dd21049,},Annotations:map[string]string{io.kubernetes.container.hash: 88effacd,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cac8e61c821989854b2f55119cfd9761a0a47f8ea2393d5c18efb4b8ae23279a,PodSandboxId:2e30d008cc8ccbd818be8736680ac3eccab6b30250e97da63b0ca3670a803e69,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CONTAINER_RUNNING,CreatedAt:1717414340631225936,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-x9tml,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8009dbea-f826-44c0-87e5-229b6efdfadc,},Annotations:map[string]string{io.kubernetes.container.hash: c0501522,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:00339123e1f21e4c4c01ccd77117bb918711c7d5531b771de53ffc77481ca343,PodSandboxId:99df45d71c3a54824df5fc90b173bf85b2fea7e4dabcb0c2ffb7fb80727681ec,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1717414340490844649,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-ljnxn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 28236795-201d-4d98-a57f-3ec7dda17017,},Annotations:map[string]string{io.kubernetes.container.hash: 5547a5e4,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:36f3491249f81e7d1b784343518b31a2d13b59ef1c3d80808a24954de1ad75cb,PodSandboxId:e4acd7567116c1893451e0bf24f2df35f071b5ae59c04be178981941e2f21c62,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1717414340422133311,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cdb43188-2f13-4ea2-b906-3428f776eeb4,},An
notations:map[string]string{io.kubernetes.container.hash: d3389f8b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7a7dc7ea2138c737fb8cb1375c84e7cbe5eda8ccfff2a0abd6c6e6098e38901e,PodSandboxId:429c3c650d7e535703c47a86f29d1ecbc9e9a79f435a4dd561125b9f80e103b9,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:1717414340422169113,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-nsx2s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 261dd21c-29c2-4178-8c07-95f680e12cd1,},Annotations:map[string]string{io.ku
bernetes.container.hash: c3d14b68,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1aa2017e346a1a9e3efe275c258488513afc245438f371561147ec9432b5222a,PodSandboxId:c41a19e00620abf4a74ce01e6a0479dfc236e4c007b05706681acd71813084e6,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1717414336573681079,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-505550,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3bc7b935b457720b0098c72b13f32f50,},Annotations:map[string]string{io.kubernetes.conta
iner.hash: 200064a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:33e99de01a6dc667301bf4e986f05c6cd755b871f915be9f69a980829aa428ff,PodSandboxId:18f35e40ddee74760c5ca185d39a0601523d8e5672148d93849b03c01e6af5df,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1717414336566274323,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-505550,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4def9b2659615cee892e7dc3ae4825b3,},Annotations:map[string]string{io.kub
ernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae066b6e74205c8a0af0914a8f63f08a78aaa9c743feba1bfc202950fafd0320,PodSandboxId:db96f61e1e178f691c6c64bffce49b947075b1942cb13b9b10d6d1b9b214f2ef,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1717414336610713783,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-505550,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3379ca91c8329ad29561c7813158eed3,},Annotations:map[string]string{io.kubernetes.container.hash: e34c9fbf,io.kubernet
es.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b65f722b1ce16783ecadc9ea08611a29cb1fbe8ca0ae7bffea150a18f7d41e12,PodSandboxId:2790e7ea8fac8eb7804c50a5bb864ddbcc8406921c2b3e7c733ce782ecd46fad,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1717414336529205306,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-505550,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 58994f26dfe73bd8f7134c529936f9c5,},Annotations:map[string]string{io.kubernetes.container.hash: 1fc178a0,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5f5e11f7649665346942da51c6082b8b0e21c85bc22d44be1b62a19136498974,PodSandboxId:9538cd6a41f17007b73b77040899ffc2261108543c3ffb04eb9a4a321981a547,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1717414027300894718,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-nrpnb,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 39d1f4e2-260f-4fd2-9989-c77d0dd21049,},Annotations:map[string]string{io.kubernetes.container.hash: 88effacd,io.kubernetes.container.res
tartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3e620850e58c82e87316b8c1ff84a833176235ba76dd48543684d19b0982d37d,PodSandboxId:36bdd67bb32f9347b543b85ec5a923b5fb4134c0c0d1f98c516273e7908dacb7,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1717413982008712281,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-ljnxn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 28236795-201d-4d98-a57f-3ec7dda17017,},Annotations:map[string]string{io.kubernetes.container.hash: 5547a5e4,io.kubernetes.container.ports: [{\"name\":\"dns\",\"contain
erPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4e706590e463e059ba314f8383faf1ff1548d0370c0f10dabde12a8dd107c284,PodSandboxId:5b00ced87c1743c7fcd6304a070e457c42f0cd216ef38e31fa2179b765434ec7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1717413981949435798,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.
namespace: kube-system,io.kubernetes.pod.uid: cdb43188-2f13-4ea2-b906-3428f776eeb4,},Annotations:map[string]string{io.kubernetes.container.hash: d3389f8b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:43e352950fd35bb947f3ab7aaf02e79570246ddf2cac8d458867155296100368,PodSandboxId:b1e8e3910984c00a9485c5a70a755e09739d9ea8b73bc5d3f37c687ffba7821d,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:2b34f64609858041e706963bcd73273c087360ca240f1f9b37db6f148edb1266,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CONTAINER_EXITED,CreatedAt:1717413980591164672,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-x9tml,io.kubernetes.pod.n
amespace: kube-system,io.kubernetes.pod.uid: 8009dbea-f826-44c0-87e5-229b6efdfadc,},Annotations:map[string]string{io.kubernetes.container.hash: c0501522,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d6635384a19f3973b8ebdd125fd196355c7f163f405241bcdcb3848c0ae5bfc8,PodSandboxId:468d52378470e2b2ba8ffe2cb083d299648478407c4d7bb03735309754c26790,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_EXITED,CreatedAt:1717413976936298698,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-nsx2s,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: 261dd21c-29c2-4178-8c07-95f680e12cd1,},Annotations:map[string]string{io.kubernetes.container.hash: c3d14b68,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e609ee17b90fa82d5d04fe16520a0c6782e7dea24d30dbb0e9379f9249c34dd0,PodSandboxId:4d2cf60baa750bb0f90944666a04886f85c05186e0784976c70e7a4cb2b365c3,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1717413957555829944,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-505550,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3379ca91c8329ad29561c7813158ee
d3,},Annotations:map[string]string{io.kubernetes.container.hash: e34c9fbf,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9829e2309203856bfbdd1f4b1b8799484a5e0888c43841f2f409be895f44ac40,PodSandboxId:cce119ae28b41c6ce401c01081afe7dccfb05e3ef7de661201666752b7a86005,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_EXITED,CreatedAt:1717413957551239924,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-505550,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 58994f26dfe73bd8f7134c529936f9c5,},Annotation
s:map[string]string{io.kubernetes.container.hash: 1fc178a0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:37aee72ac00be32936d32e337e9e01a378fb4992a9cf7ed31775dcbfa8ef8d20,PodSandboxId:0ea297b461475a2aafb682da4a17e6c0b4b8dc25cba335537491a76d74504a87,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_EXITED,CreatedAt:1717413957509108832,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-505550,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3bc7b935b457720b0098c72b13f32f50,},Annotations:map[string]st
ring{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9bc2d863a2009fba4ad23b3993c51be79fa80cc8da9b5c150ce013d6fd17f6c9,PodSandboxId:ec1b07c24772f203fdf3378b46b28e4530edc764397be33bbe3147225551baa2,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_EXITED,CreatedAt:1717413957487977482,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-505550,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4def9b2659615cee892e7dc3ae4825b3,},Annotations:m
ap[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=dc2cd472-cace-4389-9ecd-ceadabdf8219 name=/runtime.v1.RuntimeService/ListContainers
	Jun 03 11:33:40 multinode-505550 crio[2864]: time="2024-06-03 11:33:40.176237426Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=85cca00e-1076-42be-a476-0365fdbeb970 name=/runtime.v1.RuntimeService/Version
	Jun 03 11:33:40 multinode-505550 crio[2864]: time="2024-06-03 11:33:40.176638913Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=85cca00e-1076-42be-a476-0365fdbeb970 name=/runtime.v1.RuntimeService/Version
	Jun 03 11:33:40 multinode-505550 crio[2864]: time="2024-06-03 11:33:40.178118666Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=a42a4f56-b73d-4613-a65b-7efcb684ce5d name=/runtime.v1.ImageService/ImageFsInfo
	Jun 03 11:33:40 multinode-505550 crio[2864]: time="2024-06-03 11:33:40.178633887Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1717414420178498305,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143025,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=a42a4f56-b73d-4613-a65b-7efcb684ce5d name=/runtime.v1.ImageService/ImageFsInfo
	Jun 03 11:33:40 multinode-505550 crio[2864]: time="2024-06-03 11:33:40.179311999Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=9b6fd1b0-0799-45e3-a948-620a8e575e85 name=/runtime.v1.RuntimeService/ListContainers
	Jun 03 11:33:40 multinode-505550 crio[2864]: time="2024-06-03 11:33:40.179374714Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=9b6fd1b0-0799-45e3-a948-620a8e575e85 name=/runtime.v1.RuntimeService/ListContainers
	Jun 03 11:33:40 multinode-505550 crio[2864]: time="2024-06-03 11:33:40.179787755Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:df483073fa3fb785c493fc6afdd2c6f0888dbffc9cbf1dbb06e011bb502c9cab,PodSandboxId:0318de4e55dcd8d686e734d0076108297ae1571ea735347a8c24c6922955eed8,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1717414374153235949,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-nrpnb,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 39d1f4e2-260f-4fd2-9989-c77d0dd21049,},Annotations:map[string]string{io.kubernetes.container.hash: 88effacd,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cac8e61c821989854b2f55119cfd9761a0a47f8ea2393d5c18efb4b8ae23279a,PodSandboxId:2e30d008cc8ccbd818be8736680ac3eccab6b30250e97da63b0ca3670a803e69,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CONTAINER_RUNNING,CreatedAt:1717414340631225936,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-x9tml,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8009dbea-f826-44c0-87e5-229b6efdfadc,},Annotations:map[string]string{io.kubernetes.container.hash: c0501522,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:00339123e1f21e4c4c01ccd77117bb918711c7d5531b771de53ffc77481ca343,PodSandboxId:99df45d71c3a54824df5fc90b173bf85b2fea7e4dabcb0c2ffb7fb80727681ec,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1717414340490844649,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-ljnxn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 28236795-201d-4d98-a57f-3ec7dda17017,},Annotations:map[string]string{io.kubernetes.container.hash: 5547a5e4,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:36f3491249f81e7d1b784343518b31a2d13b59ef1c3d80808a24954de1ad75cb,PodSandboxId:e4acd7567116c1893451e0bf24f2df35f071b5ae59c04be178981941e2f21c62,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1717414340422133311,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cdb43188-2f13-4ea2-b906-3428f776eeb4,},An
notations:map[string]string{io.kubernetes.container.hash: d3389f8b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7a7dc7ea2138c737fb8cb1375c84e7cbe5eda8ccfff2a0abd6c6e6098e38901e,PodSandboxId:429c3c650d7e535703c47a86f29d1ecbc9e9a79f435a4dd561125b9f80e103b9,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:1717414340422169113,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-nsx2s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 261dd21c-29c2-4178-8c07-95f680e12cd1,},Annotations:map[string]string{io.ku
bernetes.container.hash: c3d14b68,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1aa2017e346a1a9e3efe275c258488513afc245438f371561147ec9432b5222a,PodSandboxId:c41a19e00620abf4a74ce01e6a0479dfc236e4c007b05706681acd71813084e6,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1717414336573681079,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-505550,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3bc7b935b457720b0098c72b13f32f50,},Annotations:map[string]string{io.kubernetes.conta
iner.hash: 200064a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:33e99de01a6dc667301bf4e986f05c6cd755b871f915be9f69a980829aa428ff,PodSandboxId:18f35e40ddee74760c5ca185d39a0601523d8e5672148d93849b03c01e6af5df,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1717414336566274323,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-505550,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4def9b2659615cee892e7dc3ae4825b3,},Annotations:map[string]string{io.kub
ernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae066b6e74205c8a0af0914a8f63f08a78aaa9c743feba1bfc202950fafd0320,PodSandboxId:db96f61e1e178f691c6c64bffce49b947075b1942cb13b9b10d6d1b9b214f2ef,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1717414336610713783,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-505550,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3379ca91c8329ad29561c7813158eed3,},Annotations:map[string]string{io.kubernetes.container.hash: e34c9fbf,io.kubernet
es.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b65f722b1ce16783ecadc9ea08611a29cb1fbe8ca0ae7bffea150a18f7d41e12,PodSandboxId:2790e7ea8fac8eb7804c50a5bb864ddbcc8406921c2b3e7c733ce782ecd46fad,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1717414336529205306,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-505550,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 58994f26dfe73bd8f7134c529936f9c5,},Annotations:map[string]string{io.kubernetes.container.hash: 1fc178a0,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5f5e11f7649665346942da51c6082b8b0e21c85bc22d44be1b62a19136498974,PodSandboxId:9538cd6a41f17007b73b77040899ffc2261108543c3ffb04eb9a4a321981a547,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1717414027300894718,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-nrpnb,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 39d1f4e2-260f-4fd2-9989-c77d0dd21049,},Annotations:map[string]string{io.kubernetes.container.hash: 88effacd,io.kubernetes.container.res
tartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3e620850e58c82e87316b8c1ff84a833176235ba76dd48543684d19b0982d37d,PodSandboxId:36bdd67bb32f9347b543b85ec5a923b5fb4134c0c0d1f98c516273e7908dacb7,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1717413982008712281,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-ljnxn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 28236795-201d-4d98-a57f-3ec7dda17017,},Annotations:map[string]string{io.kubernetes.container.hash: 5547a5e4,io.kubernetes.container.ports: [{\"name\":\"dns\",\"contain
erPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4e706590e463e059ba314f8383faf1ff1548d0370c0f10dabde12a8dd107c284,PodSandboxId:5b00ced87c1743c7fcd6304a070e457c42f0cd216ef38e31fa2179b765434ec7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1717413981949435798,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.
namespace: kube-system,io.kubernetes.pod.uid: cdb43188-2f13-4ea2-b906-3428f776eeb4,},Annotations:map[string]string{io.kubernetes.container.hash: d3389f8b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:43e352950fd35bb947f3ab7aaf02e79570246ddf2cac8d458867155296100368,PodSandboxId:b1e8e3910984c00a9485c5a70a755e09739d9ea8b73bc5d3f37c687ffba7821d,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:2b34f64609858041e706963bcd73273c087360ca240f1f9b37db6f148edb1266,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CONTAINER_EXITED,CreatedAt:1717413980591164672,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-x9tml,io.kubernetes.pod.n
amespace: kube-system,io.kubernetes.pod.uid: 8009dbea-f826-44c0-87e5-229b6efdfadc,},Annotations:map[string]string{io.kubernetes.container.hash: c0501522,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d6635384a19f3973b8ebdd125fd196355c7f163f405241bcdcb3848c0ae5bfc8,PodSandboxId:468d52378470e2b2ba8ffe2cb083d299648478407c4d7bb03735309754c26790,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_EXITED,CreatedAt:1717413976936298698,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-nsx2s,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: 261dd21c-29c2-4178-8c07-95f680e12cd1,},Annotations:map[string]string{io.kubernetes.container.hash: c3d14b68,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e609ee17b90fa82d5d04fe16520a0c6782e7dea24d30dbb0e9379f9249c34dd0,PodSandboxId:4d2cf60baa750bb0f90944666a04886f85c05186e0784976c70e7a4cb2b365c3,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1717413957555829944,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-505550,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3379ca91c8329ad29561c7813158ee
d3,},Annotations:map[string]string{io.kubernetes.container.hash: e34c9fbf,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9829e2309203856bfbdd1f4b1b8799484a5e0888c43841f2f409be895f44ac40,PodSandboxId:cce119ae28b41c6ce401c01081afe7dccfb05e3ef7de661201666752b7a86005,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_EXITED,CreatedAt:1717413957551239924,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-505550,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 58994f26dfe73bd8f7134c529936f9c5,},Annotation
s:map[string]string{io.kubernetes.container.hash: 1fc178a0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:37aee72ac00be32936d32e337e9e01a378fb4992a9cf7ed31775dcbfa8ef8d20,PodSandboxId:0ea297b461475a2aafb682da4a17e6c0b4b8dc25cba335537491a76d74504a87,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_EXITED,CreatedAt:1717413957509108832,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-505550,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3bc7b935b457720b0098c72b13f32f50,},Annotations:map[string]st
ring{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9bc2d863a2009fba4ad23b3993c51be79fa80cc8da9b5c150ce013d6fd17f6c9,PodSandboxId:ec1b07c24772f203fdf3378b46b28e4530edc764397be33bbe3147225551baa2,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_EXITED,CreatedAt:1717413957487977482,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-505550,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4def9b2659615cee892e7dc3ae4825b3,},Annotations:m
ap[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=9b6fd1b0-0799-45e3-a948-620a8e575e85 name=/runtime.v1.RuntimeService/ListContainers
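The cri-o debug entries above are the Version, ImageFsInfo and ListContainers RPCs answered over the CRI socket while these logs were being collected. As a minimal sketch (assuming crictl is present on the node and that sudo access over minikube ssh works as elsewhere in this report), the same data can be pulled by hand:

  out/minikube-linux-amd64 -p multinode-505550 ssh "sudo crictl version"
  out/minikube-linux-amd64 -p multinode-505550 ssh "sudo crictl imagefsinfo"

crictl should answer from the same unix:///var/run/crio/crio.sock endpoint named in the node annotations further down, so its output should mirror these log lines.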
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	df483073fa3fb       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      46 seconds ago       Running             busybox                   1                   0318de4e55dcd       busybox-fc5497c4f-nrpnb
	cac8e61c82198       ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f                                      About a minute ago   Running             kindnet-cni               1                   2e30d008cc8cc       kindnet-x9tml
	00339123e1f21       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      About a minute ago   Running             coredns                   1                   99df45d71c3a5       coredns-7db6d8ff4d-ljnxn
	7a7dc7ea2138c       747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd                                      About a minute ago   Running             kube-proxy                1                   429c3c650d7e5       kube-proxy-nsx2s
	36f3491249f81       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      About a minute ago   Running             storage-provisioner       1                   e4acd7567116c       storage-provisioner
	ae066b6e74205       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      About a minute ago   Running             etcd                      1                   db96f61e1e178       etcd-multinode-505550
	1aa2017e346a1       a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035                                      About a minute ago   Running             kube-scheduler            1                   c41a19e00620a       kube-scheduler-multinode-505550
	33e99de01a6dc       25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c                                      About a minute ago   Running             kube-controller-manager   1                   18f35e40ddee7       kube-controller-manager-multinode-505550
	b65f722b1ce16       91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a                                      About a minute ago   Running             kube-apiserver            1                   2790e7ea8fac8       kube-apiserver-multinode-505550
	5f5e11f764966       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   6 minutes ago        Exited              busybox                   0                   9538cd6a41f17       busybox-fc5497c4f-nrpnb
	3e620850e58c8       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      7 minutes ago        Exited              coredns                   0                   36bdd67bb32f9       coredns-7db6d8ff4d-ljnxn
	4e706590e463e       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      7 minutes ago        Exited              storage-provisioner       0                   5b00ced87c174       storage-provisioner
	43e352950fd35       docker.io/kindest/kindnetd@sha256:2b34f64609858041e706963bcd73273c087360ca240f1f9b37db6f148edb1266    7 minutes ago        Exited              kindnet-cni               0                   b1e8e3910984c       kindnet-x9tml
	d6635384a19f3       747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd                                      7 minutes ago        Exited              kube-proxy                0                   468d52378470e       kube-proxy-nsx2s
	e609ee17b90fa       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      7 minutes ago        Exited              etcd                      0                   4d2cf60baa750       etcd-multinode-505550
	9829e23092038       91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a                                      7 minutes ago        Exited              kube-apiserver            0                   cce119ae28b41       kube-apiserver-multinode-505550
	37aee72ac00be       a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035                                      7 minutes ago        Exited              kube-scheduler            0                   0ea297b461475       kube-scheduler-multinode-505550
	9bc2d863a2009       25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c                                      7 minutes ago        Exited              kube-controller-manager   0                   ec1b07c24772f       kube-controller-manager-multinode-505550
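The status table above is the runtime's own container listing: one Running replacement for each component (restart count 1) plus the Exited originals from before the node restart. A hedged way to reproduce it directly on the node, under the same crictl/sudo assumptions as above:

  out/minikube-linux-amd64 -p multinode-505550 ssh "sudo crictl ps -a"

Dropping -a limits the listing to the currently running containers.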
	
	
	==> coredns [00339123e1f21e4c4c01ccd77117bb918711c7d5531b771de53ffc77481ca343] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:56385 - 33422 "HINFO IN 3413088731930338785.2608471205893518960. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.043500879s
	
	
	==> coredns [3e620850e58c82e87316b8c1ff84a833176235ba76dd48543684d19b0982d37d] <==
	[INFO] 10.244.0.3:33000 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001908249s
	[INFO] 10.244.0.3:43787 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000094769s
	[INFO] 10.244.0.3:60770 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00006635s
	[INFO] 10.244.0.3:49510 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001470507s
	[INFO] 10.244.0.3:40767 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000059024s
	[INFO] 10.244.0.3:47550 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000097486s
	[INFO] 10.244.0.3:60616 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000051459s
	[INFO] 10.244.1.2:38540 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000129029s
	[INFO] 10.244.1.2:47437 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00012363s
	[INFO] 10.244.1.2:56690 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000097222s
	[INFO] 10.244.1.2:41948 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000064402s
	[INFO] 10.244.0.3:54434 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000115133s
	[INFO] 10.244.0.3:44435 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000056331s
	[INFO] 10.244.0.3:42535 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000058351s
	[INFO] 10.244.0.3:47369 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00006685s
	[INFO] 10.244.1.2:39250 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000158419s
	[INFO] 10.244.1.2:41088 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000135208s
	[INFO] 10.244.1.2:41901 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000131272s
	[INFO] 10.244.1.2:59936 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000074949s
	[INFO] 10.244.0.3:42361 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000153313s
	[INFO] 10.244.0.3:42372 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000047758s
	[INFO] 10.244.0.3:49151 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000060567s
	[INFO] 10.244.0.3:33056 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000032465s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
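The exited coredns instance above was answering the usual kubernetes.default and host.minikube.internal lookups until it received SIGTERM. A quick in-cluster probe that exercises the same resolution path is sketched below; the pod name dns-probe is illustrative and the busybox image is simply the one already used elsewhere in this test:

  kubectl --context multinode-505550 run dns-probe --rm -it --restart=Never \
    --image=gcr.io/k8s-minikube/busybox -- nslookup kubernetes.default.svc.cluster.local

A successful lookup of the service address would indicate that the replacement coredns pod (00339123e1f21 above) is serving queries.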
	
	
	==> describe nodes <==
	Name:               multinode-505550
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-505550
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=599070631c2216ebc936292d491e4fe10e15b9d8
	                    minikube.k8s.io/name=multinode-505550
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_06_03T11_26_03_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 03 Jun 2024 11:26:00 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-505550
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 03 Jun 2024 11:33:30 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 03 Jun 2024 11:32:19 +0000   Mon, 03 Jun 2024 11:25:58 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 03 Jun 2024 11:32:19 +0000   Mon, 03 Jun 2024 11:25:58 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 03 Jun 2024 11:32:19 +0000   Mon, 03 Jun 2024 11:25:58 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 03 Jun 2024 11:32:19 +0000   Mon, 03 Jun 2024 11:26:21 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.232
	  Hostname:    multinode-505550
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 712f4261d61f4e67b23a2fd880b5e68d
	  System UUID:                712f4261-d61f-4e67-b23a-2fd880b5e68d
	  Boot ID:                    22b55f13-f8d5-4bac-ac0b-f32e25000366
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.1
	  Kube-Proxy Version:         v1.30.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-nrpnb                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m37s
	  kube-system                 coredns-7db6d8ff4d-ljnxn                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     7m25s
	  kube-system                 etcd-multinode-505550                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         7m38s
	  kube-system                 kindnet-x9tml                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      7m24s
	  kube-system                 kube-apiserver-multinode-505550             250m (12%)    0 (0%)      0 (0%)           0 (0%)         7m38s
	  kube-system                 kube-controller-manager-multinode-505550    200m (10%)    0 (0%)      0 (0%)           0 (0%)         7m38s
	  kube-system                 kube-proxy-nsx2s                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m24s
	  kube-system                 kube-scheduler-multinode-505550             100m (5%)     0 (0%)      0 (0%)           0 (0%)         7m39s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m23s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 7m23s              kube-proxy       
	  Normal  Starting                 79s                kube-proxy       
	  Normal  NodeHasSufficientPID     7m38s              kubelet          Node multinode-505550 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  7m38s              kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  7m38s              kubelet          Node multinode-505550 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7m38s              kubelet          Node multinode-505550 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 7m38s              kubelet          Starting kubelet.
	  Normal  RegisteredNode           7m24s              node-controller  Node multinode-505550 event: Registered Node multinode-505550 in Controller
	  Normal  NodeReady                7m19s              kubelet          Node multinode-505550 status is now: NodeReady
	  Normal  Starting                 85s                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  85s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  84s (x8 over 85s)  kubelet          Node multinode-505550 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    84s (x8 over 85s)  kubelet          Node multinode-505550 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     84s (x7 over 85s)  kubelet          Node multinode-505550 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           68s                node-controller  Node multinode-505550 event: Registered Node multinode-505550 in Controller
	
	
	Name:               multinode-505550-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-505550-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=599070631c2216ebc936292d491e4fe10e15b9d8
	                    minikube.k8s.io/name=multinode-505550
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_06_03T11_33_00_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 03 Jun 2024 11:32:59 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-505550-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 03 Jun 2024 11:33:40 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 03 Jun 2024 11:33:30 +0000   Mon, 03 Jun 2024 11:32:59 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 03 Jun 2024 11:33:30 +0000   Mon, 03 Jun 2024 11:32:59 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 03 Jun 2024 11:33:30 +0000   Mon, 03 Jun 2024 11:32:59 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 03 Jun 2024 11:33:30 +0000   Mon, 03 Jun 2024 11:33:08 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.227
	  Hostname:    multinode-505550-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 a2475f98ffcc4239871aee6fac2e077e
	  System UUID:                a2475f98-ffcc-4239-871a-ee6fac2e077e
	  Boot ID:                    4b5ea6fa-dc35-4675-9b3f-59c29a763ee0
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.1
	  Kube-Proxy Version:         v1.30.1
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-85kb9    0 (0%)        0 (0%)      0 (0%)           0 (0%)         45s
	  kube-system                 kindnet-tgk6j              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m49s
	  kube-system                 kube-proxy-65rk5           0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m49s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From        Message
	  ----    ------                   ----                   ----        -------
	  Normal  Starting                 6m43s                  kube-proxy  
	  Normal  Starting                 36s                    kube-proxy  
	  Normal  NodeHasSufficientMemory  6m49s (x2 over 6m49s)  kubelet     Node multinode-505550-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m49s (x2 over 6m49s)  kubelet     Node multinode-505550-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m49s (x2 over 6m49s)  kubelet     Node multinode-505550-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  6m49s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                6m39s                  kubelet     Node multinode-505550-m02 status is now: NodeReady
	  Normal  NodeHasSufficientMemory  41s (x2 over 41s)      kubelet     Node multinode-505550-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    41s (x2 over 41s)      kubelet     Node multinode-505550-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     41s (x2 over 41s)      kubelet     Node multinode-505550-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  41s                    kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                32s                    kubelet     Node multinode-505550-m02 status is now: NodeReady
	
	
	Name:               multinode-505550-m03
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-505550-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=599070631c2216ebc936292d491e4fe10e15b9d8
	                    minikube.k8s.io/name=multinode-505550
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_06_03T11_33_28_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 03 Jun 2024 11:33:27 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-505550-m03
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 03 Jun 2024 11:33:38 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 03 Jun 2024 11:33:37 +0000   Mon, 03 Jun 2024 11:33:27 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 03 Jun 2024 11:33:37 +0000   Mon, 03 Jun 2024 11:33:27 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 03 Jun 2024 11:33:37 +0000   Mon, 03 Jun 2024 11:33:27 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 03 Jun 2024 11:33:37 +0000   Mon, 03 Jun 2024 11:33:37 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.172
	  Hostname:    multinode-505550-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 176f732bd1384cac95a93df149d038f5
	  System UUID:                176f732b-d138-4cac-95a9-3df149d038f5
	  Boot ID:                    750e8c06-3247-479f-a712-593d8a1846e2
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.1
	  Kube-Proxy Version:         v1.30.1
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-bbh8q       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m1s
	  kube-system                 kube-proxy-xmrf4    0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m1s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 5m18s                  kube-proxy       
	  Normal  Starting                 5m56s                  kube-proxy       
	  Normal  Starting                 8s                     kube-proxy       
	  Normal  NodeHasSufficientMemory  6m2s (x2 over 6m2s)    kubelet          Node multinode-505550-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m2s (x2 over 6m2s)    kubelet          Node multinode-505550-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m2s (x2 over 6m2s)    kubelet          Node multinode-505550-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  6m1s                   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                5m52s                  kubelet          Node multinode-505550-m03 status is now: NodeReady
	  Normal  NodeHasNoDiskPressure    5m23s (x2 over 5m23s)  kubelet          Node multinode-505550-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m23s (x2 over 5m23s)  kubelet          Node multinode-505550-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m23s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  5m23s (x2 over 5m23s)  kubelet          Node multinode-505550-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeReady                5m13s                  kubelet          Node multinode-505550-m03 status is now: NodeReady
	  Normal  NodeHasSufficientMemory  13s (x2 over 13s)      kubelet          Node multinode-505550-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    13s (x2 over 13s)      kubelet          Node multinode-505550-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     13s (x2 over 13s)      kubelet          Node multinode-505550-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  13s                    kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           8s                     node-controller  Node multinode-505550-m03 event: Registered Node multinode-505550-m03 in Controller
	  Normal  NodeReady                3s                     kubelet          Node multinode-505550-m03 status is now: NodeReady
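The three node objects above (the control plane plus the m02/m03 workers, each Ready with its own 10.244.x.0/24 PodCIDR) appear to come from a kubectl describe of the nodes. A shorter view of the same state, assuming the test's multinode-505550 kubeconfig context:

  kubectl --context multinode-505550 get nodes -o wide
  kubectl --context multinode-505550 describe node multinode-505550-m03

The wide listing shows each node's InternalIP plus kubelet and runtime versions in one table, which is handy when cross-checking the per-node sections above.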
	
	
	==> dmesg <==
	[  +0.062039] systemd-fstab-generator[608]: Ignoring "noauto" option for root device
	[  +0.183313] systemd-fstab-generator[622]: Ignoring "noauto" option for root device
	[  +0.110159] systemd-fstab-generator[634]: Ignoring "noauto" option for root device
	[  +0.272008] systemd-fstab-generator[663]: Ignoring "noauto" option for root device
	[  +4.077426] systemd-fstab-generator[760]: Ignoring "noauto" option for root device
	[  +4.988776] systemd-fstab-generator[948]: Ignoring "noauto" option for root device
	[  +0.056935] kauditd_printk_skb: 158 callbacks suppressed
	[Jun 3 11:26] systemd-fstab-generator[1283]: Ignoring "noauto" option for root device
	[  +0.085957] kauditd_printk_skb: 69 callbacks suppressed
	[  +5.097188] kauditd_printk_skb: 18 callbacks suppressed
	[  +8.477717] systemd-fstab-generator[1466]: Ignoring "noauto" option for root device
	[  +5.195871] kauditd_printk_skb: 57 callbacks suppressed
	[Jun 3 11:27] kauditd_printk_skb: 17 callbacks suppressed
	[Jun 3 11:32] systemd-fstab-generator[2782]: Ignoring "noauto" option for root device
	[  +0.149446] systemd-fstab-generator[2794]: Ignoring "noauto" option for root device
	[  +0.174460] systemd-fstab-generator[2808]: Ignoring "noauto" option for root device
	[  +0.144642] systemd-fstab-generator[2820]: Ignoring "noauto" option for root device
	[  +0.270599] systemd-fstab-generator[2848]: Ignoring "noauto" option for root device
	[  +8.308450] systemd-fstab-generator[2948]: Ignoring "noauto" option for root device
	[  +0.083136] kauditd_printk_skb: 100 callbacks suppressed
	[  +1.879760] systemd-fstab-generator[3075]: Ignoring "noauto" option for root device
	[  +4.696325] kauditd_printk_skb: 74 callbacks suppressed
	[ +12.468579] kauditd_printk_skb: 32 callbacks suppressed
	[  +2.228723] systemd-fstab-generator[3890]: Ignoring "noauto" option for root device
	[ +19.051296] kauditd_printk_skb: 14 callbacks suppressed
	
	
	==> etcd [ae066b6e74205c8a0af0914a8f63f08a78aaa9c743feba1bfc202950fafd0320] <==
	{"level":"info","ts":"2024-06-03T11:32:17.104292Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-06-03T11:32:17.08276Z","caller":"etcdserver/server.go:760","msg":"starting initial election tick advance","election-ticks":10}
	{"level":"info","ts":"2024-06-03T11:32:17.082905Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-06-03T11:32:17.107678Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-06-03T11:32:17.107712Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-06-03T11:32:17.083326Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"457e62b9766c4f6a switched to configuration voters=(5007548384377851754)"}
	{"level":"info","ts":"2024-06-03T11:32:17.107903Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"6f6de64b207a208a","local-member-id":"457e62b9766c4f6a","added-peer-id":"457e62b9766c4f6a","added-peer-peer-urls":["https://192.168.39.232:2380"]}
	{"level":"info","ts":"2024-06-03T11:32:17.083336Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.39.232:2380"}
	{"level":"info","ts":"2024-06-03T11:32:17.109666Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.39.232:2380"}
	{"level":"info","ts":"2024-06-03T11:32:17.109813Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"6f6de64b207a208a","local-member-id":"457e62b9766c4f6a","cluster-version":"3.5"}
	{"level":"info","ts":"2024-06-03T11:32:17.109861Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-06-03T11:32:18.490798Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"457e62b9766c4f6a is starting a new election at term 2"}
	{"level":"info","ts":"2024-06-03T11:32:18.490896Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"457e62b9766c4f6a became pre-candidate at term 2"}
	{"level":"info","ts":"2024-06-03T11:32:18.490959Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"457e62b9766c4f6a received MsgPreVoteResp from 457e62b9766c4f6a at term 2"}
	{"level":"info","ts":"2024-06-03T11:32:18.491004Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"457e62b9766c4f6a became candidate at term 3"}
	{"level":"info","ts":"2024-06-03T11:32:18.491028Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"457e62b9766c4f6a received MsgVoteResp from 457e62b9766c4f6a at term 3"}
	{"level":"info","ts":"2024-06-03T11:32:18.49106Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"457e62b9766c4f6a became leader at term 3"}
	{"level":"info","ts":"2024-06-03T11:32:18.491096Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 457e62b9766c4f6a elected leader 457e62b9766c4f6a at term 3"}
	{"level":"info","ts":"2024-06-03T11:32:18.496282Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-06-03T11:32:18.496207Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"457e62b9766c4f6a","local-member-attributes":"{Name:multinode-505550 ClientURLs:[https://192.168.39.232:2379]}","request-path":"/0/members/457e62b9766c4f6a/attributes","cluster-id":"6f6de64b207a208a","publish-timeout":"7s"}
	{"level":"info","ts":"2024-06-03T11:32:18.497082Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-06-03T11:32:18.497696Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-06-03T11:32:18.497734Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-06-03T11:32:18.499468Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-06-03T11:32:18.501283Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.232:2379"}
	
	
	==> etcd [e609ee17b90fa82d5d04fe16520a0c6782e7dea24d30dbb0e9379f9249c34dd0] <==
	{"level":"info","ts":"2024-06-03T11:25:58.416356Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-06-03T11:25:58.416393Z","caller":"etcdserver/server.go:2602","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-06-03T11:25:58.433953Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-06-03T11:25:58.434009Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-06-03T11:25:58.450671Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.232:2379"}
	{"level":"info","ts":"2024-06-03T11:26:51.900068Z","caller":"traceutil/trace.go:171","msg":"trace[1861990371] transaction","detail":"{read_only:false; response_revision:438; number_of_response:1; }","duration":"248.858979ms","start":"2024-06-03T11:26:51.651162Z","end":"2024-06-03T11:26:51.900021Z","steps":["trace[1861990371] 'process raft request'  (duration: 248.71955ms)"],"step_count":1}
	{"level":"info","ts":"2024-06-03T11:26:51.901706Z","caller":"traceutil/trace.go:171","msg":"trace[1620774965] transaction","detail":"{read_only:false; response_revision:439; number_of_response:1; }","duration":"167.377627ms","start":"2024-06-03T11:26:51.734316Z","end":"2024-06-03T11:26:51.901694Z","steps":["trace[1620774965] 'process raft request'  (duration: 167.209836ms)"],"step_count":1}
	{"level":"info","ts":"2024-06-03T11:27:38.99007Z","caller":"traceutil/trace.go:171","msg":"trace[676529011] transaction","detail":"{read_only:false; response_revision:567; number_of_response:1; }","duration":"232.734135ms","start":"2024-06-03T11:27:38.75732Z","end":"2024-06-03T11:27:38.990054Z","steps":["trace[676529011] 'process raft request'  (duration: 232.608636ms)"],"step_count":1}
	{"level":"info","ts":"2024-06-03T11:27:38.990517Z","caller":"traceutil/trace.go:171","msg":"trace[1676020132] linearizableReadLoop","detail":"{readStateIndex:605; appliedIndex:605; }","duration":"170.13301ms","start":"2024-06-03T11:27:38.820358Z","end":"2024-06-03T11:27:38.990491Z","steps":["trace[1676020132] 'read index received'  (duration: 170.128073ms)","trace[1676020132] 'applied index is now lower than readState.Index'  (duration: 4.048µs)"],"step_count":2}
	{"level":"warn","ts":"2024-06-03T11:27:38.990821Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"170.380797ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/multinode-505550-m03\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-06-03T11:27:38.990922Z","caller":"traceutil/trace.go:171","msg":"trace[310359396] range","detail":"{range_begin:/registry/minions/multinode-505550-m03; range_end:; response_count:0; response_revision:567; }","duration":"170.573602ms","start":"2024-06-03T11:27:38.820333Z","end":"2024-06-03T11:27:38.990907Z","steps":["trace[310359396] 'agreement among raft nodes before linearized reading'  (duration: 170.349278ms)"],"step_count":1}
	{"level":"info","ts":"2024-06-03T11:27:39.004672Z","caller":"traceutil/trace.go:171","msg":"trace[171574390] transaction","detail":"{read_only:false; response_revision:568; number_of_response:1; }","duration":"172.877263ms","start":"2024-06-03T11:27:38.83178Z","end":"2024-06-03T11:27:39.004657Z","steps":["trace[171574390] 'process raft request'  (duration: 172.273394ms)"],"step_count":1}
	{"level":"warn","ts":"2024-06-03T11:27:44.914002Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"105.602437ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/kube-proxy-xmrf4\" ","response":"range_response_count:1 size:4657"}
	{"level":"info","ts":"2024-06-03T11:27:44.914075Z","caller":"traceutil/trace.go:171","msg":"trace[1736341909] range","detail":"{range_begin:/registry/pods/kube-system/kube-proxy-xmrf4; range_end:; response_count:1; response_revision:609; }","duration":"105.692204ms","start":"2024-06-03T11:27:44.80836Z","end":"2024-06-03T11:27:44.914053Z","steps":["trace[1736341909] 'range keys from in-memory index tree'  (duration: 105.464536ms)"],"step_count":1}
	{"level":"info","ts":"2024-06-03T11:27:45.166729Z","caller":"traceutil/trace.go:171","msg":"trace[1628375869] transaction","detail":"{read_only:false; response_revision:610; number_of_response:1; }","duration":"244.652446ms","start":"2024-06-03T11:27:44.92206Z","end":"2024-06-03T11:27:45.166713Z","steps":["trace[1628375869] 'process raft request'  (duration: 244.442804ms)"],"step_count":1}
	{"level":"info","ts":"2024-06-03T11:30:33.374409Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-06-03T11:30:33.374689Z","caller":"embed/etcd.go:375","msg":"closing etcd server","name":"multinode-505550","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.232:2380"],"advertise-client-urls":["https://192.168.39.232:2379"]}
	{"level":"warn","ts":"2024-06-03T11:30:33.374846Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-06-03T11:30:33.37503Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-06-03T11:30:33.469012Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.232:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-06-03T11:30:33.469287Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.232:2379: use of closed network connection"}
	{"level":"info","ts":"2024-06-03T11:30:33.469401Z","caller":"etcdserver/server.go:1471","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"457e62b9766c4f6a","current-leader-member-id":"457e62b9766c4f6a"}
	{"level":"info","ts":"2024-06-03T11:30:33.471855Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.39.232:2380"}
	{"level":"info","ts":"2024-06-03T11:30:33.471999Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.39.232:2380"}
	{"level":"info","ts":"2024-06-03T11:30:33.472034Z","caller":"embed/etcd.go:377","msg":"closed etcd server","name":"multinode-505550","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.232:2380"],"advertise-client-urls":["https://192.168.39.232:2379"]}
	
	
	==> kernel <==
	 11:33:40 up 8 min,  0 users,  load average: 0.45, 0.36, 0.19
	Linux multinode-505550 5.10.207 #1 SMP Wed May 22 22:17:16 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [43e352950fd35bb947f3ab7aaf02e79570246ddf2cac8d458867155296100368] <==
	I0603 11:29:51.486417       1 main.go:250] Node multinode-505550-m03 has CIDR [10.244.3.0/24] 
	I0603 11:30:01.493653       1 main.go:223] Handling node with IPs: map[192.168.39.232:{}]
	I0603 11:30:01.493694       1 main.go:227] handling current node
	I0603 11:30:01.493708       1 main.go:223] Handling node with IPs: map[192.168.39.227:{}]
	I0603 11:30:01.493714       1 main.go:250] Node multinode-505550-m02 has CIDR [10.244.1.0/24] 
	I0603 11:30:01.493825       1 main.go:223] Handling node with IPs: map[192.168.39.172:{}]
	I0603 11:30:01.493851       1 main.go:250] Node multinode-505550-m03 has CIDR [10.244.3.0/24] 
	I0603 11:30:11.507635       1 main.go:223] Handling node with IPs: map[192.168.39.232:{}]
	I0603 11:30:11.507755       1 main.go:227] handling current node
	I0603 11:30:11.507840       1 main.go:223] Handling node with IPs: map[192.168.39.227:{}]
	I0603 11:30:11.507847       1 main.go:250] Node multinode-505550-m02 has CIDR [10.244.1.0/24] 
	I0603 11:30:11.508093       1 main.go:223] Handling node with IPs: map[192.168.39.172:{}]
	I0603 11:30:11.508121       1 main.go:250] Node multinode-505550-m03 has CIDR [10.244.3.0/24] 
	I0603 11:30:21.522146       1 main.go:223] Handling node with IPs: map[192.168.39.232:{}]
	I0603 11:30:21.522197       1 main.go:227] handling current node
	I0603 11:30:21.522209       1 main.go:223] Handling node with IPs: map[192.168.39.227:{}]
	I0603 11:30:21.522213       1 main.go:250] Node multinode-505550-m02 has CIDR [10.244.1.0/24] 
	I0603 11:30:21.522353       1 main.go:223] Handling node with IPs: map[192.168.39.172:{}]
	I0603 11:30:21.522379       1 main.go:250] Node multinode-505550-m03 has CIDR [10.244.3.0/24] 
	I0603 11:30:31.534521       1 main.go:223] Handling node with IPs: map[192.168.39.232:{}]
	I0603 11:30:31.534847       1 main.go:227] handling current node
	I0603 11:30:31.534894       1 main.go:223] Handling node with IPs: map[192.168.39.227:{}]
	I0603 11:30:31.534925       1 main.go:250] Node multinode-505550-m02 has CIDR [10.244.1.0/24] 
	I0603 11:30:31.535139       1 main.go:223] Handling node with IPs: map[192.168.39.172:{}]
	I0603 11:30:31.535171       1 main.go:250] Node multinode-505550-m03 has CIDR [10.244.3.0/24] 
	
	
	==> kindnet [cac8e61c821989854b2f55119cfd9761a0a47f8ea2393d5c18efb4b8ae23279a] <==
	I0603 11:32:51.508759       1 main.go:250] Node multinode-505550-m03 has CIDR [10.244.3.0/24] 
	I0603 11:33:01.513181       1 main.go:223] Handling node with IPs: map[192.168.39.232:{}]
	I0603 11:33:01.513273       1 main.go:227] handling current node
	I0603 11:33:01.513299       1 main.go:223] Handling node with IPs: map[192.168.39.227:{}]
	I0603 11:33:01.513328       1 main.go:250] Node multinode-505550-m02 has CIDR [10.244.1.0/24] 
	I0603 11:33:01.513444       1 main.go:223] Handling node with IPs: map[192.168.39.172:{}]
	I0603 11:33:01.513465       1 main.go:250] Node multinode-505550-m03 has CIDR [10.244.3.0/24] 
	I0603 11:33:11.521074       1 main.go:223] Handling node with IPs: map[192.168.39.232:{}]
	I0603 11:33:11.521195       1 main.go:227] handling current node
	I0603 11:33:11.521226       1 main.go:223] Handling node with IPs: map[192.168.39.227:{}]
	I0603 11:33:11.521244       1 main.go:250] Node multinode-505550-m02 has CIDR [10.244.1.0/24] 
	I0603 11:33:11.521377       1 main.go:223] Handling node with IPs: map[192.168.39.172:{}]
	I0603 11:33:11.521401       1 main.go:250] Node multinode-505550-m03 has CIDR [10.244.3.0/24] 
	I0603 11:33:21.573962       1 main.go:223] Handling node with IPs: map[192.168.39.232:{}]
	I0603 11:33:21.574040       1 main.go:227] handling current node
	I0603 11:33:21.574063       1 main.go:223] Handling node with IPs: map[192.168.39.227:{}]
	I0603 11:33:21.574079       1 main.go:250] Node multinode-505550-m02 has CIDR [10.244.1.0/24] 
	I0603 11:33:21.574186       1 main.go:223] Handling node with IPs: map[192.168.39.172:{}]
	I0603 11:33:21.574205       1 main.go:250] Node multinode-505550-m03 has CIDR [10.244.3.0/24] 
	I0603 11:33:31.587675       1 main.go:223] Handling node with IPs: map[192.168.39.232:{}]
	I0603 11:33:31.587964       1 main.go:227] handling current node
	I0603 11:33:31.588070       1 main.go:223] Handling node with IPs: map[192.168.39.227:{}]
	I0603 11:33:31.588099       1 main.go:250] Node multinode-505550-m02 has CIDR [10.244.1.0/24] 
	I0603 11:33:31.588332       1 main.go:223] Handling node with IPs: map[192.168.39.172:{}]
	I0603 11:33:31.588391       1 main.go:250] Node multinode-505550-m03 has CIDR [10.244.2.0/24] 
	
	
	==> kube-apiserver [9829e2309203856bfbdd1f4b1b8799484a5e0888c43841f2f409be895f44ac40] <==
	W0603 11:30:33.405371       1 logging.go:59] [core] [Channel #124 SubChannel #125] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0603 11:30:33.405423       1 logging.go:59] [core] [Channel #70 SubChannel #71] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0603 11:30:33.405469       1 logging.go:59] [core] [Channel #88 SubChannel #89] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0603 11:30:33.405516       1 logging.go:59] [core] [Channel #181 SubChannel #182] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0603 11:30:33.405773       1 logging.go:59] [core] [Channel #19 SubChannel #20] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0603 11:30:33.405844       1 logging.go:59] [core] [Channel #22 SubChannel #23] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0603 11:30:33.405888       1 logging.go:59] [core] [Channel #64 SubChannel #65] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0603 11:30:33.405931       1 logging.go:59] [core] [Channel #118 SubChannel #119] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0603 11:30:33.405978       1 logging.go:59] [core] [Channel #43 SubChannel #44] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0603 11:30:33.406017       1 logging.go:59] [core] [Channel #52 SubChannel #53] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0603 11:30:33.406058       1 logging.go:59] [core] [Channel #73 SubChannel #74] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0603 11:30:33.406121       1 logging.go:59] [core] [Channel #112 SubChannel #113] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0603 11:30:33.408153       1 logging.go:59] [core] [Channel #115 SubChannel #116] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0603 11:30:33.408238       1 logging.go:59] [core] [Channel #139 SubChannel #140] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0603 11:30:33.413525       1 logging.go:59] [core] [Channel #121 SubChannel #122] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0603 11:30:33.414409       1 logging.go:59] [core] [Channel #91 SubChannel #92] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0603 11:30:33.414798       1 logging.go:59] [core] [Channel #175 SubChannel #176] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0603 11:30:33.414867       1 logging.go:59] [core] [Channel #157 SubChannel #158] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0603 11:30:33.414902       1 logging.go:59] [core] [Channel #34 SubChannel #35] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0603 11:30:33.414937       1 logging.go:59] [core] [Channel #178 SubChannel #179] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0603 11:30:33.414964       1 logging.go:59] [core] [Channel #109 SubChannel #110] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0603 11:30:33.415000       1 logging.go:59] [core] [Channel #100 SubChannel #101] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0603 11:30:33.415028       1 logging.go:59] [core] [Channel #25 SubChannel #26] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0603 11:30:33.415055       1 logging.go:59] [core] [Channel #28 SubChannel #29] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0603 11:30:33.415230       1 logging.go:59] [core] [Channel #103 SubChannel #104] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-apiserver [b65f722b1ce16783ecadc9ea08611a29cb1fbe8ca0ae7bffea150a18f7d41e12] <==
	I0603 11:32:19.844624       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0603 11:32:19.847631       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0603 11:32:19.850467       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0603 11:32:19.850514       1 policy_source.go:224] refreshing policies
	I0603 11:32:19.865371       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0603 11:32:19.865526       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0603 11:32:19.870169       1 shared_informer.go:320] Caches are synced for configmaps
	I0603 11:32:19.870300       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0603 11:32:19.870327       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0603 11:32:19.881856       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0603 11:32:19.882257       1 aggregator.go:165] initial CRD sync complete...
	I0603 11:32:19.882325       1 autoregister_controller.go:141] Starting autoregister controller
	I0603 11:32:19.882351       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0603 11:32:19.882434       1 cache.go:39] Caches are synced for autoregister controller
	I0603 11:32:19.888720       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	I0603 11:32:19.909409       1 shared_informer.go:320] Caches are synced for node_authorizer
	E0603 11:32:19.979200       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0603 11:32:20.749299       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0603 11:32:21.835527       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0603 11:32:21.978282       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0603 11:32:21.990304       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0603 11:32:22.067151       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0603 11:32:22.076288       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0603 11:32:32.771268       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0603 11:32:32.825884       1 controller.go:615] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [33e99de01a6dc667301bf4e986f05c6cd755b871f915be9f69a980829aa428ff] <==
	I0603 11:32:33.414022       1 shared_informer.go:320] Caches are synced for garbage collector
	I0603 11:32:33.431375       1 shared_informer.go:320] Caches are synced for garbage collector
	I0603 11:32:33.431441       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0603 11:32:55.267991       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="28.929733ms"
	I0603 11:32:55.282525       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="14.472597ms"
	I0603 11:32:55.282695       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="35.078µs"
	I0603 11:32:59.747673       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-505550-m02\" does not exist"
	I0603 11:32:59.757335       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-505550-m02" podCIDRs=["10.244.1.0/24"]
	I0603 11:33:01.643408       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="37.791µs"
	I0603 11:33:01.654391       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="37.749µs"
	I0603 11:33:01.665978       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="92.636µs"
	I0603 11:33:01.710615       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="91.037µs"
	I0603 11:33:01.718761       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="49.736µs"
	I0603 11:33:01.723900       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="50.328µs"
	I0603 11:33:02.691559       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="76.65µs"
	I0603 11:33:08.753629       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-505550-m02"
	I0603 11:33:08.771201       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="35.524µs"
	I0603 11:33:08.784907       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="57.257µs"
	I0603 11:33:12.006702       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="7.986072ms"
	I0603 11:33:12.006815       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="52.532µs"
	I0603 11:33:26.973132       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-505550-m02"
	I0603 11:33:27.963676       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-505550-m03\" does not exist"
	I0603 11:33:27.964213       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-505550-m02"
	I0603 11:33:27.986517       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-505550-m03" podCIDRs=["10.244.2.0/24"]
	I0603 11:33:37.277149       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-505550-m02"
	
	
	==> kube-controller-manager [9bc2d863a2009fba4ad23b3993c51be79fa80cc8da9b5c150ce013d6fd17f6c9] <==
	I0603 11:26:26.092640       1 node_lifecycle_controller.go:1050] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	I0603 11:26:51.905111       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-505550-m02\" does not exist"
	I0603 11:26:51.962084       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-505550-m02" podCIDRs=["10.244.1.0/24"]
	I0603 11:26:56.096951       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-505550-m02"
	I0603 11:27:01.350160       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-505550-m02"
	I0603 11:27:03.458412       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="55.847302ms"
	I0603 11:27:03.475203       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="15.309737ms"
	I0603 11:27:03.475287       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="43.045µs"
	I0603 11:27:06.733847       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="7.728027ms"
	I0603 11:27:06.734809       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="78.399µs"
	I0603 11:27:08.040351       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="5.383936ms"
	I0603 11:27:08.040439       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="38.863µs"
	I0603 11:27:39.008920       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-505550-m02"
	I0603 11:27:39.012344       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-505550-m03\" does not exist"
	I0603 11:27:39.058619       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-505550-m03" podCIDRs=["10.244.2.0/24"]
	I0603 11:27:41.117050       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-505550-m03"
	I0603 11:27:48.737553       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-505550-m03"
	I0603 11:28:16.736699       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-505550-m02"
	I0603 11:28:17.803337       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-505550-m02"
	I0603 11:28:17.803969       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-505550-m03\" does not exist"
	I0603 11:28:17.824617       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-505550-m03" podCIDRs=["10.244.3.0/24"]
	I0603 11:28:27.118694       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-505550-m02"
	I0603 11:29:06.164934       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-505550-m03"
	I0603 11:29:06.217204       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="10.275346ms"
	I0603 11:29:06.218184       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="27.844µs"
	
	
	==> kube-proxy [7a7dc7ea2138c737fb8cb1375c84e7cbe5eda8ccfff2a0abd6c6e6098e38901e] <==
	I0603 11:32:20.709424       1 server_linux.go:69] "Using iptables proxy"
	I0603 11:32:20.720743       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.232"]
	I0603 11:32:20.772788       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0603 11:32:20.772914       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0603 11:32:20.772993       1 server_linux.go:165] "Using iptables Proxier"
	I0603 11:32:20.785665       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0603 11:32:20.786033       1 server.go:872] "Version info" version="v1.30.1"
	I0603 11:32:20.786117       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0603 11:32:20.789065       1 config.go:192] "Starting service config controller"
	I0603 11:32:20.789110       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0603 11:32:20.789149       1 config.go:101] "Starting endpoint slice config controller"
	I0603 11:32:20.789168       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0603 11:32:20.789912       1 config.go:319] "Starting node config controller"
	I0603 11:32:20.789943       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0603 11:32:20.890100       1 shared_informer.go:320] Caches are synced for node config
	I0603 11:32:20.890225       1 shared_informer.go:320] Caches are synced for service config
	I0603 11:32:20.890322       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-proxy [d6635384a19f3973b8ebdd125fd196355c7f163f405241bcdcb3848c0ae5bfc8] <==
	I0603 11:26:17.127305       1 server_linux.go:69] "Using iptables proxy"
	I0603 11:26:17.148936       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.232"]
	I0603 11:26:17.267892       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0603 11:26:17.268055       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0603 11:26:17.268130       1 server_linux.go:165] "Using iptables Proxier"
	I0603 11:26:17.272267       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0603 11:26:17.272655       1 server.go:872] "Version info" version="v1.30.1"
	I0603 11:26:17.272949       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0603 11:26:17.276372       1 config.go:192] "Starting service config controller"
	I0603 11:26:17.276486       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0603 11:26:17.276669       1 config.go:101] "Starting endpoint slice config controller"
	I0603 11:26:17.276742       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0603 11:26:17.278117       1 config.go:319] "Starting node config controller"
	I0603 11:26:17.278219       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0603 11:26:17.377373       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0603 11:26:17.377362       1 shared_informer.go:320] Caches are synced for service config
	I0603 11:26:17.378981       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [1aa2017e346a1a9e3efe275c258488513afc245438f371561147ec9432b5222a] <==
	I0603 11:32:17.489771       1 serving.go:380] Generated self-signed cert in-memory
	W0603 11:32:19.848085       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0603 11:32:19.850650       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0603 11:32:19.850788       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0603 11:32:19.850817       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0603 11:32:19.907848       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.1"
	I0603 11:32:19.907930       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0603 11:32:19.909515       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0603 11:32:19.909794       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0603 11:32:19.912944       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0603 11:32:19.909822       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0603 11:32:20.013653       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [37aee72ac00be32936d32e337e9e01a378fb4992a9cf7ed31775dcbfa8ef8d20] <==
	E0603 11:26:00.007551       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0603 11:26:00.002733       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0603 11:26:00.007707       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0603 11:26:00.002878       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0603 11:26:00.007814       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0603 11:26:00.003067       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0603 11:26:00.007918       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0603 11:26:00.003164       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0603 11:26:00.008029       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0603 11:26:00.006917       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0603 11:26:00.008133       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0603 11:26:00.820825       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0603 11:26:00.820993       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0603 11:26:00.950508       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0603 11:26:00.950706       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0603 11:26:00.994479       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0603 11:26:00.994630       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0603 11:26:01.017014       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0603 11:26:01.017097       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0603 11:26:01.116717       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0603 11:26:01.116845       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0603 11:26:01.157179       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0603 11:26:01.157326       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I0603 11:26:03.169906       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0603 11:30:33.385968       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Jun 03 11:32:16 multinode-505550 kubelet[3082]: E0603 11:32:16.587079    3082 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.39.232:8443: connect: connection refused" node="multinode-505550"
	Jun 03 11:32:17 multinode-505550 kubelet[3082]: I0603 11:32:17.389290    3082 kubelet_node_status.go:73] "Attempting to register node" node="multinode-505550"
	Jun 03 11:32:19 multinode-505550 kubelet[3082]: I0603 11:32:19.860557    3082 apiserver.go:52] "Watching apiserver"
	Jun 03 11:32:19 multinode-505550 kubelet[3082]: I0603 11:32:19.866043    3082 topology_manager.go:215] "Topology Admit Handler" podUID="28236795-201d-4d98-a57f-3ec7dda17017" podNamespace="kube-system" podName="coredns-7db6d8ff4d-ljnxn"
	Jun 03 11:32:19 multinode-505550 kubelet[3082]: I0603 11:32:19.866169    3082 topology_manager.go:215] "Topology Admit Handler" podUID="261dd21c-29c2-4178-8c07-95f680e12cd1" podNamespace="kube-system" podName="kube-proxy-nsx2s"
	Jun 03 11:32:19 multinode-505550 kubelet[3082]: I0603 11:32:19.866242    3082 topology_manager.go:215] "Topology Admit Handler" podUID="8009dbea-f826-44c0-87e5-229b6efdfadc" podNamespace="kube-system" podName="kindnet-x9tml"
	Jun 03 11:32:19 multinode-505550 kubelet[3082]: I0603 11:32:19.866306    3082 topology_manager.go:215] "Topology Admit Handler" podUID="cdb43188-2f13-4ea2-b906-3428f776eeb4" podNamespace="kube-system" podName="storage-provisioner"
	Jun 03 11:32:19 multinode-505550 kubelet[3082]: I0603 11:32:19.866349    3082 topology_manager.go:215] "Topology Admit Handler" podUID="39d1f4e2-260f-4fd2-9989-c77d0dd21049" podNamespace="default" podName="busybox-fc5497c4f-nrpnb"
	Jun 03 11:32:19 multinode-505550 kubelet[3082]: I0603 11:32:19.872750    3082 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
	Jun 03 11:32:19 multinode-505550 kubelet[3082]: I0603 11:32:19.918052    3082 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8009dbea-f826-44c0-87e5-229b6efdfadc-xtables-lock\") pod \"kindnet-x9tml\" (UID: \"8009dbea-f826-44c0-87e5-229b6efdfadc\") " pod="kube-system/kindnet-x9tml"
	Jun 03 11:32:19 multinode-505550 kubelet[3082]: I0603 11:32:19.918105    3082 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8009dbea-f826-44c0-87e5-229b6efdfadc-lib-modules\") pod \"kindnet-x9tml\" (UID: \"8009dbea-f826-44c0-87e5-229b6efdfadc\") " pod="kube-system/kindnet-x9tml"
	Jun 03 11:32:19 multinode-505550 kubelet[3082]: I0603 11:32:19.918163    3082 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/261dd21c-29c2-4178-8c07-95f680e12cd1-lib-modules\") pod \"kube-proxy-nsx2s\" (UID: \"261dd21c-29c2-4178-8c07-95f680e12cd1\") " pod="kube-system/kube-proxy-nsx2s"
	Jun 03 11:32:19 multinode-505550 kubelet[3082]: I0603 11:32:19.918179    3082 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/8009dbea-f826-44c0-87e5-229b6efdfadc-cni-cfg\") pod \"kindnet-x9tml\" (UID: \"8009dbea-f826-44c0-87e5-229b6efdfadc\") " pod="kube-system/kindnet-x9tml"
	Jun 03 11:32:19 multinode-505550 kubelet[3082]: I0603 11:32:19.918193    3082 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/cdb43188-2f13-4ea2-b906-3428f776eeb4-tmp\") pod \"storage-provisioner\" (UID: \"cdb43188-2f13-4ea2-b906-3428f776eeb4\") " pod="kube-system/storage-provisioner"
	Jun 03 11:32:19 multinode-505550 kubelet[3082]: I0603 11:32:19.918232    3082 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/261dd21c-29c2-4178-8c07-95f680e12cd1-xtables-lock\") pod \"kube-proxy-nsx2s\" (UID: \"261dd21c-29c2-4178-8c07-95f680e12cd1\") " pod="kube-system/kube-proxy-nsx2s"
	Jun 03 11:32:19 multinode-505550 kubelet[3082]: I0603 11:32:19.992473    3082 kubelet_node_status.go:112] "Node was previously registered" node="multinode-505550"
	Jun 03 11:32:19 multinode-505550 kubelet[3082]: I0603 11:32:19.992626    3082 kubelet_node_status.go:76] "Successfully registered node" node="multinode-505550"
	Jun 03 11:32:19 multinode-505550 kubelet[3082]: I0603 11:32:19.994665    3082 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Jun 03 11:32:19 multinode-505550 kubelet[3082]: I0603 11:32:19.995564    3082 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Jun 03 11:32:25 multinode-505550 kubelet[3082]: I0603 11:32:25.262751    3082 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Jun 03 11:33:15 multinode-505550 kubelet[3082]: E0603 11:33:15.935462    3082 iptables.go:577] "Could not set up iptables canary" err=<
	Jun 03 11:33:15 multinode-505550 kubelet[3082]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jun 03 11:33:15 multinode-505550 kubelet[3082]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jun 03 11:33:15 multinode-505550 kubelet[3082]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 03 11:33:15 multinode-505550 kubelet[3082]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0603 11:33:39.734455   45205 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/19008-7755/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-505550 -n multinode-505550
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-505550 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/RestartKeepsNodes (311.88s)
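The "bufio.Scanner: token too long" error in the stderr block above comes from Go's bufio.Scanner giving up on any single line longer than its buffer, which defaults to bufio.MaxScanTokenSize (64 KiB); reading lastStart.txt fails as soon as one log line exceeds that limit. The sketch below is a minimal illustration of the usual workaround (pre-sizing the scanner buffer), not minikube's actual logs.go code; the readLog helper, the file path, and the 1 MiB cap are assumptions chosen for the example.

package main

import (
	"bufio"
	"fmt"
	"os"
)

// readLog scans a file line by line with an enlarged buffer so that very long
// lines (for example single-line JSON log entries) do not trigger
// "bufio.Scanner: token too long". Hypothetical helper, not minikube code.
func readLog(path string) error {
	f, err := os.Open(path)
	if err != nil {
		return err
	}
	defer f.Close()

	scanner := bufio.NewScanner(f)
	// The default max token size is bufio.MaxScanTokenSize (64 KiB);
	// raise the cap to 1 MiB so oversized lines are still scanned.
	scanner.Buffer(make([]byte, 0, 64*1024), 1024*1024)
	for scanner.Scan() {
		fmt.Println(scanner.Text())
	}
	return scanner.Err()
}

func main() {
	if err := readLog("lastStart.txt"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}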

                                                
                                    
x
+
TestMultiNode/serial/StopMultiNode (141.19s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-505550 stop
E0603 11:35:19.212964   15028 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/functional-835483/client.crt: no such file or directory
multinode_test.go:345: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-505550 stop: exit status 82 (2m0.455898866s)

                                                
                                                
-- stdout --
	* Stopping node "multinode-505550-m02"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:347: failed to stop cluster. args "out/minikube-linux-amd64 -p multinode-505550 stop": exit status 82
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-505550 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-505550 status: exit status 3 (18.641062752s)

                                                
                                                
-- stdout --
	multinode-505550
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-505550-m02
	type: Worker
	host: Error
	kubelet: Nonexistent
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0603 11:36:02.867327   45881 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.227:22: connect: no route to host
	E0603 11:36:02.867367   45881 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.227:22: connect: no route to host

                                                
                                                
** /stderr **
multinode_test.go:354: failed to run minikube status. args "out/minikube-linux-amd64 -p multinode-505550 status" : exit status 3
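The "no route to host" errors in the status output above mean the status check could not even open an SSH session to multinode-505550-m02, which is why the node is reported as host: Error / kubelet: Nonexistent. A rough way to reproduce that symptom independently of minikube is a plain TCP probe of the node's SSH port; this is only a sketch, with the address taken from the log above and the 5-second timeout an arbitrary choice for the example.

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Probe the worker node's SSH port; when the guest VM is unreachable this
	// fails with "no route to host" or a timeout, matching the status error above.
	addr := "192.168.39.227:22"
	conn, err := net.DialTimeout("tcp", addr, 5*time.Second)
	if err != nil {
		fmt.Println("unreachable:", err)
		return
	}
	defer conn.Close()
	fmt.Println("reachable")
}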
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-505550 -n multinode-505550
helpers_test.go:244: <<< TestMultiNode/serial/StopMultiNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/StopMultiNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p multinode-505550 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p multinode-505550 logs -n 25: (1.456493257s)
helpers_test.go:252: TestMultiNode/serial/StopMultiNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |     Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| ssh     | multinode-505550 ssh -n                                                                 | multinode-505550 | jenkins | v1.33.1 | 03 Jun 24 11:27 UTC | 03 Jun 24 11:27 UTC |
	|         | multinode-505550-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-505550 cp multinode-505550-m02:/home/docker/cp-test.txt                       | multinode-505550 | jenkins | v1.33.1 | 03 Jun 24 11:27 UTC | 03 Jun 24 11:27 UTC |
	|         | multinode-505550:/home/docker/cp-test_multinode-505550-m02_multinode-505550.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-505550 ssh -n                                                                 | multinode-505550 | jenkins | v1.33.1 | 03 Jun 24 11:27 UTC | 03 Jun 24 11:27 UTC |
	|         | multinode-505550-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-505550 ssh -n multinode-505550 sudo cat                                       | multinode-505550 | jenkins | v1.33.1 | 03 Jun 24 11:27 UTC | 03 Jun 24 11:27 UTC |
	|         | /home/docker/cp-test_multinode-505550-m02_multinode-505550.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-505550 cp multinode-505550-m02:/home/docker/cp-test.txt                       | multinode-505550 | jenkins | v1.33.1 | 03 Jun 24 11:27 UTC | 03 Jun 24 11:27 UTC |
	|         | multinode-505550-m03:/home/docker/cp-test_multinode-505550-m02_multinode-505550-m03.txt |                  |         |         |                     |                     |
	| ssh     | multinode-505550 ssh -n                                                                 | multinode-505550 | jenkins | v1.33.1 | 03 Jun 24 11:27 UTC | 03 Jun 24 11:27 UTC |
	|         | multinode-505550-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-505550 ssh -n multinode-505550-m03 sudo cat                                   | multinode-505550 | jenkins | v1.33.1 | 03 Jun 24 11:27 UTC | 03 Jun 24 11:27 UTC |
	|         | /home/docker/cp-test_multinode-505550-m02_multinode-505550-m03.txt                      |                  |         |         |                     |                     |
	| cp      | multinode-505550 cp testdata/cp-test.txt                                                | multinode-505550 | jenkins | v1.33.1 | 03 Jun 24 11:27 UTC | 03 Jun 24 11:27 UTC |
	|         | multinode-505550-m03:/home/docker/cp-test.txt                                           |                  |         |         |                     |                     |
	| ssh     | multinode-505550 ssh -n                                                                 | multinode-505550 | jenkins | v1.33.1 | 03 Jun 24 11:27 UTC | 03 Jun 24 11:27 UTC |
	|         | multinode-505550-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-505550 cp multinode-505550-m03:/home/docker/cp-test.txt                       | multinode-505550 | jenkins | v1.33.1 | 03 Jun 24 11:27 UTC | 03 Jun 24 11:27 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile3202875871/001/cp-test_multinode-505550-m03.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-505550 ssh -n                                                                 | multinode-505550 | jenkins | v1.33.1 | 03 Jun 24 11:27 UTC | 03 Jun 24 11:27 UTC |
	|         | multinode-505550-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-505550 cp multinode-505550-m03:/home/docker/cp-test.txt                       | multinode-505550 | jenkins | v1.33.1 | 03 Jun 24 11:27 UTC | 03 Jun 24 11:27 UTC |
	|         | multinode-505550:/home/docker/cp-test_multinode-505550-m03_multinode-505550.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-505550 ssh -n                                                                 | multinode-505550 | jenkins | v1.33.1 | 03 Jun 24 11:27 UTC | 03 Jun 24 11:27 UTC |
	|         | multinode-505550-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-505550 ssh -n multinode-505550 sudo cat                                       | multinode-505550 | jenkins | v1.33.1 | 03 Jun 24 11:27 UTC | 03 Jun 24 11:27 UTC |
	|         | /home/docker/cp-test_multinode-505550-m03_multinode-505550.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-505550 cp multinode-505550-m03:/home/docker/cp-test.txt                       | multinode-505550 | jenkins | v1.33.1 | 03 Jun 24 11:27 UTC | 03 Jun 24 11:27 UTC |
	|         | multinode-505550-m02:/home/docker/cp-test_multinode-505550-m03_multinode-505550-m02.txt |                  |         |         |                     |                     |
	| ssh     | multinode-505550 ssh -n                                                                 | multinode-505550 | jenkins | v1.33.1 | 03 Jun 24 11:27 UTC | 03 Jun 24 11:27 UTC |
	|         | multinode-505550-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-505550 ssh -n multinode-505550-m02 sudo cat                                   | multinode-505550 | jenkins | v1.33.1 | 03 Jun 24 11:27 UTC | 03 Jun 24 11:27 UTC |
	|         | /home/docker/cp-test_multinode-505550-m03_multinode-505550-m02.txt                      |                  |         |         |                     |                     |
	| node    | multinode-505550 node stop m03                                                          | multinode-505550 | jenkins | v1.33.1 | 03 Jun 24 11:27 UTC | 03 Jun 24 11:27 UTC |
	| node    | multinode-505550 node start                                                             | multinode-505550 | jenkins | v1.33.1 | 03 Jun 24 11:28 UTC | 03 Jun 24 11:28 UTC |
	|         | m03 -v=7 --alsologtostderr                                                              |                  |         |         |                     |                     |
	| node    | list -p multinode-505550                                                                | multinode-505550 | jenkins | v1.33.1 | 03 Jun 24 11:28 UTC |                     |
	| stop    | -p multinode-505550                                                                     | multinode-505550 | jenkins | v1.33.1 | 03 Jun 24 11:28 UTC |                     |
	| start   | -p multinode-505550                                                                     | multinode-505550 | jenkins | v1.33.1 | 03 Jun 24 11:30 UTC | 03 Jun 24 11:33 UTC |
	|         | --wait=true -v=8                                                                        |                  |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                  |         |         |                     |                     |
	| node    | list -p multinode-505550                                                                | multinode-505550 | jenkins | v1.33.1 | 03 Jun 24 11:33 UTC |                     |
	| node    | multinode-505550 node delete                                                            | multinode-505550 | jenkins | v1.33.1 | 03 Jun 24 11:33 UTC | 03 Jun 24 11:33 UTC |
	|         | m03                                                                                     |                  |         |         |                     |                     |
	| stop    | multinode-505550 stop                                                                   | multinode-505550 | jenkins | v1.33.1 | 03 Jun 24 11:33 UTC |                     |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/06/03 11:30:32
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0603 11:30:32.461727   44162 out.go:291] Setting OutFile to fd 1 ...
	I0603 11:30:32.461955   44162 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0603 11:30:32.461964   44162 out.go:304] Setting ErrFile to fd 2...
	I0603 11:30:32.461968   44162 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0603 11:30:32.462114   44162 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19008-7755/.minikube/bin
	I0603 11:30:32.462613   44162 out.go:298] Setting JSON to false
	I0603 11:30:32.463552   44162 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":4377,"bootTime":1717409855,"procs":185,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1060-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0603 11:30:32.463606   44162 start.go:139] virtualization: kvm guest
	I0603 11:30:32.465924   44162 out.go:177] * [multinode-505550] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0603 11:30:32.467333   44162 out.go:177]   - MINIKUBE_LOCATION=19008
	I0603 11:30:32.467327   44162 notify.go:220] Checking for updates...
	I0603 11:30:32.468759   44162 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0603 11:30:32.470198   44162 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19008-7755/kubeconfig
	I0603 11:30:32.471416   44162 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19008-7755/.minikube
	I0603 11:30:32.472545   44162 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0603 11:30:32.473807   44162 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0603 11:30:32.475399   44162 config.go:182] Loaded profile config "multinode-505550": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0603 11:30:32.475515   44162 driver.go:392] Setting default libvirt URI to qemu:///system
	I0603 11:30:32.475900   44162 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 11:30:32.475942   44162 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 11:30:32.490765   44162 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43655
	I0603 11:30:32.491247   44162 main.go:141] libmachine: () Calling .GetVersion
	I0603 11:30:32.491953   44162 main.go:141] libmachine: Using API Version  1
	I0603 11:30:32.492000   44162 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 11:30:32.492299   44162 main.go:141] libmachine: () Calling .GetMachineName
	I0603 11:30:32.492488   44162 main.go:141] libmachine: (multinode-505550) Calling .DriverName
	I0603 11:30:32.528449   44162 out.go:177] * Using the kvm2 driver based on existing profile
	I0603 11:30:32.529715   44162 start.go:297] selected driver: kvm2
	I0603 11:30:32.529738   44162 start.go:901] validating driver "kvm2" against &{Name:multinode-505550 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:multinode-505550 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.232 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.227 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.172 Port:0 KubernetesVersion:v1.30.1 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0603 11:30:32.529900   44162 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0603 11:30:32.530255   44162 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0603 11:30:32.530357   44162 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19008-7755/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0603 11:30:32.544545   44162 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0603 11:30:32.545222   44162 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0603 11:30:32.545275   44162 cni.go:84] Creating CNI manager for ""
	I0603 11:30:32.545286   44162 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0603 11:30:32.545343   44162 start.go:340] cluster config:
	{Name:multinode-505550 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:multinode-505550 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.232 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.227 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.172 Port:0 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0603 11:30:32.545468   44162 iso.go:125] acquiring lock: {Name:mkdc8e745fc6a0fd8e502f6ad2510510ae9abf27 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0603 11:30:32.547219   44162 out.go:177] * Starting "multinode-505550" primary control-plane node in "multinode-505550" cluster
	I0603 11:30:32.548552   44162 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime crio
	I0603 11:30:32.548587   44162 preload.go:147] Found local preload: /home/jenkins/minikube-integration/19008-7755/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4
	I0603 11:30:32.548599   44162 cache.go:56] Caching tarball of preloaded images
	I0603 11:30:32.548704   44162 preload.go:173] Found /home/jenkins/minikube-integration/19008-7755/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0603 11:30:32.548718   44162 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on crio
	I0603 11:30:32.548841   44162 profile.go:143] Saving config to /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/multinode-505550/config.json ...
	I0603 11:30:32.549014   44162 start.go:360] acquireMachinesLock for multinode-505550: {Name:mk7389fd163f37e28f7d4d842bab151f7e27bc7c Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0603 11:30:32.549053   44162 start.go:364] duration metric: took 21.169µs to acquireMachinesLock for "multinode-505550"
	I0603 11:30:32.549070   44162 start.go:96] Skipping create...Using existing machine configuration
	I0603 11:30:32.549080   44162 fix.go:54] fixHost starting: 
	I0603 11:30:32.549326   44162 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 11:30:32.549359   44162 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 11:30:32.563189   44162 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46527
	I0603 11:30:32.563575   44162 main.go:141] libmachine: () Calling .GetVersion
	I0603 11:30:32.564022   44162 main.go:141] libmachine: Using API Version  1
	I0603 11:30:32.564040   44162 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 11:30:32.564392   44162 main.go:141] libmachine: () Calling .GetMachineName
	I0603 11:30:32.564570   44162 main.go:141] libmachine: (multinode-505550) Calling .DriverName
	I0603 11:30:32.564716   44162 main.go:141] libmachine: (multinode-505550) Calling .GetState
	I0603 11:30:32.566063   44162 fix.go:112] recreateIfNeeded on multinode-505550: state=Running err=<nil>
	W0603 11:30:32.566080   44162 fix.go:138] unexpected machine state, will restart: <nil>
	I0603 11:30:32.567913   44162 out.go:177] * Updating the running kvm2 "multinode-505550" VM ...
	I0603 11:30:32.569138   44162 machine.go:94] provisionDockerMachine start ...
	I0603 11:30:32.569155   44162 main.go:141] libmachine: (multinode-505550) Calling .DriverName
	I0603 11:30:32.569373   44162 main.go:141] libmachine: (multinode-505550) Calling .GetSSHHostname
	I0603 11:30:32.571598   44162 main.go:141] libmachine: (multinode-505550) DBG | domain multinode-505550 has defined MAC address 52:54:00:d9:78:ff in network mk-multinode-505550
	I0603 11:30:32.572020   44162 main.go:141] libmachine: (multinode-505550) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:78:ff", ip: ""} in network mk-multinode-505550: {Iface:virbr1 ExpiryTime:2024-06-03 12:25:37 +0000 UTC Type:0 Mac:52:54:00:d9:78:ff Iaid: IPaddr:192.168.39.232 Prefix:24 Hostname:multinode-505550 Clientid:01:52:54:00:d9:78:ff}
	I0603 11:30:32.572061   44162 main.go:141] libmachine: (multinode-505550) DBG | domain multinode-505550 has defined IP address 192.168.39.232 and MAC address 52:54:00:d9:78:ff in network mk-multinode-505550
	I0603 11:30:32.572138   44162 main.go:141] libmachine: (multinode-505550) Calling .GetSSHPort
	I0603 11:30:32.572287   44162 main.go:141] libmachine: (multinode-505550) Calling .GetSSHKeyPath
	I0603 11:30:32.572452   44162 main.go:141] libmachine: (multinode-505550) Calling .GetSSHKeyPath
	I0603 11:30:32.572574   44162 main.go:141] libmachine: (multinode-505550) Calling .GetSSHUsername
	I0603 11:30:32.572720   44162 main.go:141] libmachine: Using SSH client type: native
	I0603 11:30:32.572943   44162 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.232 22 <nil> <nil>}
	I0603 11:30:32.572958   44162 main.go:141] libmachine: About to run SSH command:
	hostname
	I0603 11:30:32.676109   44162 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-505550
	
	I0603 11:30:32.676133   44162 main.go:141] libmachine: (multinode-505550) Calling .GetMachineName
	I0603 11:30:32.676340   44162 buildroot.go:166] provisioning hostname "multinode-505550"
	I0603 11:30:32.676364   44162 main.go:141] libmachine: (multinode-505550) Calling .GetMachineName
	I0603 11:30:32.676540   44162 main.go:141] libmachine: (multinode-505550) Calling .GetSSHHostname
	I0603 11:30:32.678867   44162 main.go:141] libmachine: (multinode-505550) DBG | domain multinode-505550 has defined MAC address 52:54:00:d9:78:ff in network mk-multinode-505550
	I0603 11:30:32.679193   44162 main.go:141] libmachine: (multinode-505550) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:78:ff", ip: ""} in network mk-multinode-505550: {Iface:virbr1 ExpiryTime:2024-06-03 12:25:37 +0000 UTC Type:0 Mac:52:54:00:d9:78:ff Iaid: IPaddr:192.168.39.232 Prefix:24 Hostname:multinode-505550 Clientid:01:52:54:00:d9:78:ff}
	I0603 11:30:32.679220   44162 main.go:141] libmachine: (multinode-505550) DBG | domain multinode-505550 has defined IP address 192.168.39.232 and MAC address 52:54:00:d9:78:ff in network mk-multinode-505550
	I0603 11:30:32.679338   44162 main.go:141] libmachine: (multinode-505550) Calling .GetSSHPort
	I0603 11:30:32.679491   44162 main.go:141] libmachine: (multinode-505550) Calling .GetSSHKeyPath
	I0603 11:30:32.679656   44162 main.go:141] libmachine: (multinode-505550) Calling .GetSSHKeyPath
	I0603 11:30:32.679798   44162 main.go:141] libmachine: (multinode-505550) Calling .GetSSHUsername
	I0603 11:30:32.679941   44162 main.go:141] libmachine: Using SSH client type: native
	I0603 11:30:32.680135   44162 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.232 22 <nil> <nil>}
	I0603 11:30:32.680149   44162 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-505550 && echo "multinode-505550" | sudo tee /etc/hostname
	I0603 11:30:32.798924   44162 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-505550
	
	I0603 11:30:32.798950   44162 main.go:141] libmachine: (multinode-505550) Calling .GetSSHHostname
	I0603 11:30:32.801439   44162 main.go:141] libmachine: (multinode-505550) DBG | domain multinode-505550 has defined MAC address 52:54:00:d9:78:ff in network mk-multinode-505550
	I0603 11:30:32.801800   44162 main.go:141] libmachine: (multinode-505550) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:78:ff", ip: ""} in network mk-multinode-505550: {Iface:virbr1 ExpiryTime:2024-06-03 12:25:37 +0000 UTC Type:0 Mac:52:54:00:d9:78:ff Iaid: IPaddr:192.168.39.232 Prefix:24 Hostname:multinode-505550 Clientid:01:52:54:00:d9:78:ff}
	I0603 11:30:32.801828   44162 main.go:141] libmachine: (multinode-505550) DBG | domain multinode-505550 has defined IP address 192.168.39.232 and MAC address 52:54:00:d9:78:ff in network mk-multinode-505550
	I0603 11:30:32.801990   44162 main.go:141] libmachine: (multinode-505550) Calling .GetSSHPort
	I0603 11:30:32.802190   44162 main.go:141] libmachine: (multinode-505550) Calling .GetSSHKeyPath
	I0603 11:30:32.802322   44162 main.go:141] libmachine: (multinode-505550) Calling .GetSSHKeyPath
	I0603 11:30:32.802444   44162 main.go:141] libmachine: (multinode-505550) Calling .GetSSHUsername
	I0603 11:30:32.802582   44162 main.go:141] libmachine: Using SSH client type: native
	I0603 11:30:32.802760   44162 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.232 22 <nil> <nil>}
	I0603 11:30:32.802777   44162 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-505550' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-505550/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-505550' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0603 11:30:32.908591   44162 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0603 11:30:32.908625   44162 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19008-7755/.minikube CaCertPath:/home/jenkins/minikube-integration/19008-7755/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19008-7755/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19008-7755/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19008-7755/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19008-7755/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19008-7755/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19008-7755/.minikube}
	I0603 11:30:32.908657   44162 buildroot.go:174] setting up certificates
	I0603 11:30:32.908668   44162 provision.go:84] configureAuth start
	I0603 11:30:32.908680   44162 main.go:141] libmachine: (multinode-505550) Calling .GetMachineName
	I0603 11:30:32.908942   44162 main.go:141] libmachine: (multinode-505550) Calling .GetIP
	I0603 11:30:32.911399   44162 main.go:141] libmachine: (multinode-505550) DBG | domain multinode-505550 has defined MAC address 52:54:00:d9:78:ff in network mk-multinode-505550
	I0603 11:30:32.911779   44162 main.go:141] libmachine: (multinode-505550) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:78:ff", ip: ""} in network mk-multinode-505550: {Iface:virbr1 ExpiryTime:2024-06-03 12:25:37 +0000 UTC Type:0 Mac:52:54:00:d9:78:ff Iaid: IPaddr:192.168.39.232 Prefix:24 Hostname:multinode-505550 Clientid:01:52:54:00:d9:78:ff}
	I0603 11:30:32.911807   44162 main.go:141] libmachine: (multinode-505550) DBG | domain multinode-505550 has defined IP address 192.168.39.232 and MAC address 52:54:00:d9:78:ff in network mk-multinode-505550
	I0603 11:30:32.911911   44162 main.go:141] libmachine: (multinode-505550) Calling .GetSSHHostname
	I0603 11:30:32.913912   44162 main.go:141] libmachine: (multinode-505550) DBG | domain multinode-505550 has defined MAC address 52:54:00:d9:78:ff in network mk-multinode-505550
	I0603 11:30:32.914245   44162 main.go:141] libmachine: (multinode-505550) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:78:ff", ip: ""} in network mk-multinode-505550: {Iface:virbr1 ExpiryTime:2024-06-03 12:25:37 +0000 UTC Type:0 Mac:52:54:00:d9:78:ff Iaid: IPaddr:192.168.39.232 Prefix:24 Hostname:multinode-505550 Clientid:01:52:54:00:d9:78:ff}
	I0603 11:30:32.914272   44162 main.go:141] libmachine: (multinode-505550) DBG | domain multinode-505550 has defined IP address 192.168.39.232 and MAC address 52:54:00:d9:78:ff in network mk-multinode-505550
	I0603 11:30:32.914356   44162 provision.go:143] copyHostCerts
	I0603 11:30:32.914386   44162 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19008-7755/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19008-7755/.minikube/ca.pem
	I0603 11:30:32.914433   44162 exec_runner.go:144] found /home/jenkins/minikube-integration/19008-7755/.minikube/ca.pem, removing ...
	I0603 11:30:32.914449   44162 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19008-7755/.minikube/ca.pem
	I0603 11:30:32.914518   44162 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19008-7755/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19008-7755/.minikube/ca.pem (1082 bytes)
	I0603 11:30:32.914613   44162 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19008-7755/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19008-7755/.minikube/cert.pem
	I0603 11:30:32.914637   44162 exec_runner.go:144] found /home/jenkins/minikube-integration/19008-7755/.minikube/cert.pem, removing ...
	I0603 11:30:32.914647   44162 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19008-7755/.minikube/cert.pem
	I0603 11:30:32.914684   44162 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19008-7755/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19008-7755/.minikube/cert.pem (1123 bytes)
	I0603 11:30:32.914745   44162 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19008-7755/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19008-7755/.minikube/key.pem
	I0603 11:30:32.914768   44162 exec_runner.go:144] found /home/jenkins/minikube-integration/19008-7755/.minikube/key.pem, removing ...
	I0603 11:30:32.914778   44162 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19008-7755/.minikube/key.pem
	I0603 11:30:32.914810   44162 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19008-7755/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19008-7755/.minikube/key.pem (1679 bytes)
	I0603 11:30:32.914869   44162 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19008-7755/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19008-7755/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19008-7755/.minikube/certs/ca-key.pem org=jenkins.multinode-505550 san=[127.0.0.1 192.168.39.232 localhost minikube multinode-505550]
	I0603 11:30:33.076772   44162 provision.go:177] copyRemoteCerts
	I0603 11:30:33.076836   44162 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0603 11:30:33.076866   44162 main.go:141] libmachine: (multinode-505550) Calling .GetSSHHostname
	I0603 11:30:33.079423   44162 main.go:141] libmachine: (multinode-505550) DBG | domain multinode-505550 has defined MAC address 52:54:00:d9:78:ff in network mk-multinode-505550
	I0603 11:30:33.079711   44162 main.go:141] libmachine: (multinode-505550) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:78:ff", ip: ""} in network mk-multinode-505550: {Iface:virbr1 ExpiryTime:2024-06-03 12:25:37 +0000 UTC Type:0 Mac:52:54:00:d9:78:ff Iaid: IPaddr:192.168.39.232 Prefix:24 Hostname:multinode-505550 Clientid:01:52:54:00:d9:78:ff}
	I0603 11:30:33.079744   44162 main.go:141] libmachine: (multinode-505550) DBG | domain multinode-505550 has defined IP address 192.168.39.232 and MAC address 52:54:00:d9:78:ff in network mk-multinode-505550
	I0603 11:30:33.079909   44162 main.go:141] libmachine: (multinode-505550) Calling .GetSSHPort
	I0603 11:30:33.080106   44162 main.go:141] libmachine: (multinode-505550) Calling .GetSSHKeyPath
	I0603 11:30:33.080253   44162 main.go:141] libmachine: (multinode-505550) Calling .GetSSHUsername
	I0603 11:30:33.080374   44162 sshutil.go:53] new ssh client: &{IP:192.168.39.232 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19008-7755/.minikube/machines/multinode-505550/id_rsa Username:docker}
	I0603 11:30:33.161596   44162 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19008-7755/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0603 11:30:33.161667   44162 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0603 11:30:33.187247   44162 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19008-7755/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0603 11:30:33.187303   44162 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0603 11:30:33.211964   44162 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19008-7755/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0603 11:30:33.212024   44162 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0603 11:30:33.235945   44162 provision.go:87] duration metric: took 327.266479ms to configureAuth
	I0603 11:30:33.235970   44162 buildroot.go:189] setting minikube options for container-runtime
	I0603 11:30:33.236200   44162 config.go:182] Loaded profile config "multinode-505550": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0603 11:30:33.236286   44162 main.go:141] libmachine: (multinode-505550) Calling .GetSSHHostname
	I0603 11:30:33.238856   44162 main.go:141] libmachine: (multinode-505550) DBG | domain multinode-505550 has defined MAC address 52:54:00:d9:78:ff in network mk-multinode-505550
	I0603 11:30:33.239207   44162 main.go:141] libmachine: (multinode-505550) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:78:ff", ip: ""} in network mk-multinode-505550: {Iface:virbr1 ExpiryTime:2024-06-03 12:25:37 +0000 UTC Type:0 Mac:52:54:00:d9:78:ff Iaid: IPaddr:192.168.39.232 Prefix:24 Hostname:multinode-505550 Clientid:01:52:54:00:d9:78:ff}
	I0603 11:30:33.239240   44162 main.go:141] libmachine: (multinode-505550) DBG | domain multinode-505550 has defined IP address 192.168.39.232 and MAC address 52:54:00:d9:78:ff in network mk-multinode-505550
	I0603 11:30:33.239367   44162 main.go:141] libmachine: (multinode-505550) Calling .GetSSHPort
	I0603 11:30:33.239526   44162 main.go:141] libmachine: (multinode-505550) Calling .GetSSHKeyPath
	I0603 11:30:33.239693   44162 main.go:141] libmachine: (multinode-505550) Calling .GetSSHKeyPath
	I0603 11:30:33.239841   44162 main.go:141] libmachine: (multinode-505550) Calling .GetSSHUsername
	I0603 11:30:33.240000   44162 main.go:141] libmachine: Using SSH client type: native
	I0603 11:30:33.240205   44162 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.232 22 <nil> <nil>}
	I0603 11:30:33.240221   44162 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0603 11:32:04.030917   44162 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0603 11:32:04.030938   44162 machine.go:97] duration metric: took 1m31.461789042s to provisionDockerMachine
	I0603 11:32:04.030948   44162 start.go:293] postStartSetup for "multinode-505550" (driver="kvm2")
	I0603 11:32:04.030957   44162 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0603 11:32:04.030979   44162 main.go:141] libmachine: (multinode-505550) Calling .DriverName
	I0603 11:32:04.031326   44162 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0603 11:32:04.031348   44162 main.go:141] libmachine: (multinode-505550) Calling .GetSSHHostname
	I0603 11:32:04.034334   44162 main.go:141] libmachine: (multinode-505550) DBG | domain multinode-505550 has defined MAC address 52:54:00:d9:78:ff in network mk-multinode-505550
	I0603 11:32:04.034769   44162 main.go:141] libmachine: (multinode-505550) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:78:ff", ip: ""} in network mk-multinode-505550: {Iface:virbr1 ExpiryTime:2024-06-03 12:25:37 +0000 UTC Type:0 Mac:52:54:00:d9:78:ff Iaid: IPaddr:192.168.39.232 Prefix:24 Hostname:multinode-505550 Clientid:01:52:54:00:d9:78:ff}
	I0603 11:32:04.034797   44162 main.go:141] libmachine: (multinode-505550) DBG | domain multinode-505550 has defined IP address 192.168.39.232 and MAC address 52:54:00:d9:78:ff in network mk-multinode-505550
	I0603 11:32:04.034914   44162 main.go:141] libmachine: (multinode-505550) Calling .GetSSHPort
	I0603 11:32:04.035156   44162 main.go:141] libmachine: (multinode-505550) Calling .GetSSHKeyPath
	I0603 11:32:04.035342   44162 main.go:141] libmachine: (multinode-505550) Calling .GetSSHUsername
	I0603 11:32:04.035476   44162 sshutil.go:53] new ssh client: &{IP:192.168.39.232 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19008-7755/.minikube/machines/multinode-505550/id_rsa Username:docker}
	I0603 11:32:04.118465   44162 ssh_runner.go:195] Run: cat /etc/os-release
	I0603 11:32:04.122615   44162 command_runner.go:130] > NAME=Buildroot
	I0603 11:32:04.122632   44162 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0603 11:32:04.122643   44162 command_runner.go:130] > ID=buildroot
	I0603 11:32:04.122648   44162 command_runner.go:130] > VERSION_ID=2023.02.9
	I0603 11:32:04.122653   44162 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0603 11:32:04.122696   44162 info.go:137] Remote host: Buildroot 2023.02.9
	I0603 11:32:04.122712   44162 filesync.go:126] Scanning /home/jenkins/minikube-integration/19008-7755/.minikube/addons for local assets ...
	I0603 11:32:04.122786   44162 filesync.go:126] Scanning /home/jenkins/minikube-integration/19008-7755/.minikube/files for local assets ...
	I0603 11:32:04.122878   44162 filesync.go:149] local asset: /home/jenkins/minikube-integration/19008-7755/.minikube/files/etc/ssl/certs/150282.pem -> 150282.pem in /etc/ssl/certs
	I0603 11:32:04.122888   44162 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19008-7755/.minikube/files/etc/ssl/certs/150282.pem -> /etc/ssl/certs/150282.pem
	I0603 11:32:04.122988   44162 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0603 11:32:04.132630   44162 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/files/etc/ssl/certs/150282.pem --> /etc/ssl/certs/150282.pem (1708 bytes)
	I0603 11:32:04.155730   44162 start.go:296] duration metric: took 124.772546ms for postStartSetup
	I0603 11:32:04.155758   44162 fix.go:56] duration metric: took 1m31.606678549s for fixHost
	I0603 11:32:04.155778   44162 main.go:141] libmachine: (multinode-505550) Calling .GetSSHHostname
	I0603 11:32:04.158836   44162 main.go:141] libmachine: (multinode-505550) DBG | domain multinode-505550 has defined MAC address 52:54:00:d9:78:ff in network mk-multinode-505550
	I0603 11:32:04.159305   44162 main.go:141] libmachine: (multinode-505550) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:78:ff", ip: ""} in network mk-multinode-505550: {Iface:virbr1 ExpiryTime:2024-06-03 12:25:37 +0000 UTC Type:0 Mac:52:54:00:d9:78:ff Iaid: IPaddr:192.168.39.232 Prefix:24 Hostname:multinode-505550 Clientid:01:52:54:00:d9:78:ff}
	I0603 11:32:04.159331   44162 main.go:141] libmachine: (multinode-505550) DBG | domain multinode-505550 has defined IP address 192.168.39.232 and MAC address 52:54:00:d9:78:ff in network mk-multinode-505550
	I0603 11:32:04.159488   44162 main.go:141] libmachine: (multinode-505550) Calling .GetSSHPort
	I0603 11:32:04.159654   44162 main.go:141] libmachine: (multinode-505550) Calling .GetSSHKeyPath
	I0603 11:32:04.159816   44162 main.go:141] libmachine: (multinode-505550) Calling .GetSSHKeyPath
	I0603 11:32:04.159933   44162 main.go:141] libmachine: (multinode-505550) Calling .GetSSHUsername
	I0603 11:32:04.160090   44162 main.go:141] libmachine: Using SSH client type: native
	I0603 11:32:04.160252   44162 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.232 22 <nil> <nil>}
	I0603 11:32:04.160263   44162 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0603 11:32:04.259729   44162 main.go:141] libmachine: SSH cmd err, output: <nil>: 1717414324.241010823
	
	I0603 11:32:04.259751   44162 fix.go:216] guest clock: 1717414324.241010823
	I0603 11:32:04.259760   44162 fix.go:229] Guest: 2024-06-03 11:32:04.241010823 +0000 UTC Remote: 2024-06-03 11:32:04.15576097 +0000 UTC m=+91.728185948 (delta=85.249853ms)
	I0603 11:32:04.259784   44162 fix.go:200] guest clock delta is within tolerance: 85.249853ms
	I0603 11:32:04.259791   44162 start.go:83] releasing machines lock for "multinode-505550", held for 1m31.710727419s
	I0603 11:32:04.259811   44162 main.go:141] libmachine: (multinode-505550) Calling .DriverName
	I0603 11:32:04.260061   44162 main.go:141] libmachine: (multinode-505550) Calling .GetIP
	I0603 11:32:04.263129   44162 main.go:141] libmachine: (multinode-505550) DBG | domain multinode-505550 has defined MAC address 52:54:00:d9:78:ff in network mk-multinode-505550
	I0603 11:32:04.263509   44162 main.go:141] libmachine: (multinode-505550) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:78:ff", ip: ""} in network mk-multinode-505550: {Iface:virbr1 ExpiryTime:2024-06-03 12:25:37 +0000 UTC Type:0 Mac:52:54:00:d9:78:ff Iaid: IPaddr:192.168.39.232 Prefix:24 Hostname:multinode-505550 Clientid:01:52:54:00:d9:78:ff}
	I0603 11:32:04.263530   44162 main.go:141] libmachine: (multinode-505550) DBG | domain multinode-505550 has defined IP address 192.168.39.232 and MAC address 52:54:00:d9:78:ff in network mk-multinode-505550
	I0603 11:32:04.263910   44162 main.go:141] libmachine: (multinode-505550) Calling .DriverName
	I0603 11:32:04.264435   44162 main.go:141] libmachine: (multinode-505550) Calling .DriverName
	I0603 11:32:04.264623   44162 main.go:141] libmachine: (multinode-505550) Calling .DriverName
	I0603 11:32:04.264727   44162 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0603 11:32:04.264773   44162 main.go:141] libmachine: (multinode-505550) Calling .GetSSHHostname
	I0603 11:32:04.264869   44162 ssh_runner.go:195] Run: cat /version.json
	I0603 11:32:04.264893   44162 main.go:141] libmachine: (multinode-505550) Calling .GetSSHHostname
	I0603 11:32:04.267507   44162 main.go:141] libmachine: (multinode-505550) DBG | domain multinode-505550 has defined MAC address 52:54:00:d9:78:ff in network mk-multinode-505550
	I0603 11:32:04.267653   44162 main.go:141] libmachine: (multinode-505550) DBG | domain multinode-505550 has defined MAC address 52:54:00:d9:78:ff in network mk-multinode-505550
	I0603 11:32:04.267914   44162 main.go:141] libmachine: (multinode-505550) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:78:ff", ip: ""} in network mk-multinode-505550: {Iface:virbr1 ExpiryTime:2024-06-03 12:25:37 +0000 UTC Type:0 Mac:52:54:00:d9:78:ff Iaid: IPaddr:192.168.39.232 Prefix:24 Hostname:multinode-505550 Clientid:01:52:54:00:d9:78:ff}
	I0603 11:32:04.267942   44162 main.go:141] libmachine: (multinode-505550) DBG | domain multinode-505550 has defined IP address 192.168.39.232 and MAC address 52:54:00:d9:78:ff in network mk-multinode-505550
	I0603 11:32:04.267986   44162 main.go:141] libmachine: (multinode-505550) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:78:ff", ip: ""} in network mk-multinode-505550: {Iface:virbr1 ExpiryTime:2024-06-03 12:25:37 +0000 UTC Type:0 Mac:52:54:00:d9:78:ff Iaid: IPaddr:192.168.39.232 Prefix:24 Hostname:multinode-505550 Clientid:01:52:54:00:d9:78:ff}
	I0603 11:32:04.268005   44162 main.go:141] libmachine: (multinode-505550) DBG | domain multinode-505550 has defined IP address 192.168.39.232 and MAC address 52:54:00:d9:78:ff in network mk-multinode-505550
	I0603 11:32:04.268070   44162 main.go:141] libmachine: (multinode-505550) Calling .GetSSHPort
	I0603 11:32:04.268305   44162 main.go:141] libmachine: (multinode-505550) Calling .GetSSHKeyPath
	I0603 11:32:04.268306   44162 main.go:141] libmachine: (multinode-505550) Calling .GetSSHPort
	I0603 11:32:04.268485   44162 main.go:141] libmachine: (multinode-505550) Calling .GetSSHUsername
	I0603 11:32:04.268489   44162 main.go:141] libmachine: (multinode-505550) Calling .GetSSHKeyPath
	I0603 11:32:04.268673   44162 sshutil.go:53] new ssh client: &{IP:192.168.39.232 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19008-7755/.minikube/machines/multinode-505550/id_rsa Username:docker}
	I0603 11:32:04.268688   44162 main.go:141] libmachine: (multinode-505550) Calling .GetSSHUsername
	I0603 11:32:04.268857   44162 sshutil.go:53] new ssh client: &{IP:192.168.39.232 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19008-7755/.minikube/machines/multinode-505550/id_rsa Username:docker}
	I0603 11:32:04.344125   44162 command_runner.go:130] > {"iso_version": "v1.33.1-1716398070-18934", "kicbase_version": "v0.0.44-1716228441-18934", "minikube_version": "v1.33.1", "commit": "7bc64cce06153f72c1bf9cbcf2114663ad5af3b7"}
	I0603 11:32:04.344398   44162 ssh_runner.go:195] Run: systemctl --version
	I0603 11:32:04.368059   44162 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0603 11:32:04.368107   44162 command_runner.go:130] > systemd 252 (252)
	I0603 11:32:04.368124   44162 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0603 11:32:04.368170   44162 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0603 11:32:04.524793   44162 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0603 11:32:04.532491   44162 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0603 11:32:04.532851   44162 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0603 11:32:04.532936   44162 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0603 11:32:04.541990   44162 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0603 11:32:04.542013   44162 start.go:494] detecting cgroup driver to use...
	I0603 11:32:04.542073   44162 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0603 11:32:04.557630   44162 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0603 11:32:04.571051   44162 docker.go:217] disabling cri-docker service (if available) ...
	I0603 11:32:04.571094   44162 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0603 11:32:04.584150   44162 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0603 11:32:04.597247   44162 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0603 11:32:04.740042   44162 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0603 11:32:04.885107   44162 docker.go:233] disabling docker service ...
	I0603 11:32:04.885190   44162 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0603 11:32:04.903567   44162 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0603 11:32:04.916865   44162 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0603 11:32:05.062613   44162 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0603 11:32:05.200844   44162 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0603 11:32:05.215625   44162 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0603 11:32:05.234463   44162 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I0603 11:32:05.235075   44162 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0603 11:32:05.235139   44162 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 11:32:05.245568   44162 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0603 11:32:05.245616   44162 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 11:32:05.255670   44162 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 11:32:05.265547   44162 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 11:32:05.275561   44162 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0603 11:32:05.285789   44162 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 11:32:05.295482   44162 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 11:32:05.307151   44162 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 11:32:05.317424   44162 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0603 11:32:05.326629   44162 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0603 11:32:05.326722   44162 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0603 11:32:05.335701   44162 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0603 11:32:05.472073   44162 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0603 11:32:13.326919   44162 ssh_runner.go:235] Completed: sudo systemctl restart crio: (7.854807981s)
	I0603 11:32:13.326949   44162 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0603 11:32:13.327008   44162 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0603 11:32:13.332392   44162 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I0603 11:32:13.332420   44162 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0603 11:32:13.332430   44162 command_runner.go:130] > Device: 0,22	Inode: 1357        Links: 1
	I0603 11:32:13.332441   44162 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I0603 11:32:13.332449   44162 command_runner.go:130] > Access: 2024-06-03 11:32:13.196776342 +0000
	I0603 11:32:13.332457   44162 command_runner.go:130] > Modify: 2024-06-03 11:32:13.196776342 +0000
	I0603 11:32:13.332464   44162 command_runner.go:130] > Change: 2024-06-03 11:32:13.196776342 +0000
	I0603 11:32:13.332478   44162 command_runner.go:130] >  Birth: -
	I0603 11:32:13.332500   44162 start.go:562] Will wait 60s for crictl version
	I0603 11:32:13.332545   44162 ssh_runner.go:195] Run: which crictl
	I0603 11:32:13.336552   44162 command_runner.go:130] > /usr/bin/crictl
	I0603 11:32:13.336621   44162 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0603 11:32:13.374349   44162 command_runner.go:130] > Version:  0.1.0
	I0603 11:32:13.374370   44162 command_runner.go:130] > RuntimeName:  cri-o
	I0603 11:32:13.374375   44162 command_runner.go:130] > RuntimeVersion:  1.29.1
	I0603 11:32:13.374380   44162 command_runner.go:130] > RuntimeApiVersion:  v1
	I0603 11:32:13.374395   44162 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0603 11:32:13.374443   44162 ssh_runner.go:195] Run: crio --version
	I0603 11:32:13.402140   44162 command_runner.go:130] > crio version 1.29.1
	I0603 11:32:13.402172   44162 command_runner.go:130] > Version:        1.29.1
	I0603 11:32:13.402177   44162 command_runner.go:130] > GitCommit:      unknown
	I0603 11:32:13.402182   44162 command_runner.go:130] > GitCommitDate:  unknown
	I0603 11:32:13.402185   44162 command_runner.go:130] > GitTreeState:   clean
	I0603 11:32:13.402191   44162 command_runner.go:130] > BuildDate:      2024-05-22T23:02:45Z
	I0603 11:32:13.402195   44162 command_runner.go:130] > GoVersion:      go1.21.6
	I0603 11:32:13.402199   44162 command_runner.go:130] > Compiler:       gc
	I0603 11:32:13.402203   44162 command_runner.go:130] > Platform:       linux/amd64
	I0603 11:32:13.402207   44162 command_runner.go:130] > Linkmode:       dynamic
	I0603 11:32:13.402211   44162 command_runner.go:130] > BuildTags:      
	I0603 11:32:13.402215   44162 command_runner.go:130] >   containers_image_ostree_stub
	I0603 11:32:13.402219   44162 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0603 11:32:13.402223   44162 command_runner.go:130] >   btrfs_noversion
	I0603 11:32:13.402226   44162 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0603 11:32:13.402231   44162 command_runner.go:130] >   libdm_no_deferred_remove
	I0603 11:32:13.402234   44162 command_runner.go:130] >   seccomp
	I0603 11:32:13.402239   44162 command_runner.go:130] > LDFlags:          unknown
	I0603 11:32:13.402246   44162 command_runner.go:130] > SeccompEnabled:   true
	I0603 11:32:13.402250   44162 command_runner.go:130] > AppArmorEnabled:  false
	I0603 11:32:13.403362   44162 ssh_runner.go:195] Run: crio --version
	I0603 11:32:13.432407   44162 command_runner.go:130] > crio version 1.29.1
	I0603 11:32:13.432436   44162 command_runner.go:130] > Version:        1.29.1
	I0603 11:32:13.432445   44162 command_runner.go:130] > GitCommit:      unknown
	I0603 11:32:13.432452   44162 command_runner.go:130] > GitCommitDate:  unknown
	I0603 11:32:13.432458   44162 command_runner.go:130] > GitTreeState:   clean
	I0603 11:32:13.432466   44162 command_runner.go:130] > BuildDate:      2024-05-22T23:02:45Z
	I0603 11:32:13.432473   44162 command_runner.go:130] > GoVersion:      go1.21.6
	I0603 11:32:13.432478   44162 command_runner.go:130] > Compiler:       gc
	I0603 11:32:13.432485   44162 command_runner.go:130] > Platform:       linux/amd64
	I0603 11:32:13.432491   44162 command_runner.go:130] > Linkmode:       dynamic
	I0603 11:32:13.432498   44162 command_runner.go:130] > BuildTags:      
	I0603 11:32:13.432509   44162 command_runner.go:130] >   containers_image_ostree_stub
	I0603 11:32:13.432515   44162 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0603 11:32:13.432519   44162 command_runner.go:130] >   btrfs_noversion
	I0603 11:32:13.432523   44162 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0603 11:32:13.432528   44162 command_runner.go:130] >   libdm_no_deferred_remove
	I0603 11:32:13.432534   44162 command_runner.go:130] >   seccomp
	I0603 11:32:13.432539   44162 command_runner.go:130] > LDFlags:          unknown
	I0603 11:32:13.432545   44162 command_runner.go:130] > SeccompEnabled:   true
	I0603 11:32:13.432549   44162 command_runner.go:130] > AppArmorEnabled:  false
	I0603 11:32:13.434334   44162 out.go:177] * Preparing Kubernetes v1.30.1 on CRI-O 1.29.1 ...
	I0603 11:32:13.435565   44162 main.go:141] libmachine: (multinode-505550) Calling .GetIP
	I0603 11:32:13.438155   44162 main.go:141] libmachine: (multinode-505550) DBG | domain multinode-505550 has defined MAC address 52:54:00:d9:78:ff in network mk-multinode-505550
	I0603 11:32:13.438456   44162 main.go:141] libmachine: (multinode-505550) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:78:ff", ip: ""} in network mk-multinode-505550: {Iface:virbr1 ExpiryTime:2024-06-03 12:25:37 +0000 UTC Type:0 Mac:52:54:00:d9:78:ff Iaid: IPaddr:192.168.39.232 Prefix:24 Hostname:multinode-505550 Clientid:01:52:54:00:d9:78:ff}
	I0603 11:32:13.438484   44162 main.go:141] libmachine: (multinode-505550) DBG | domain multinode-505550 has defined IP address 192.168.39.232 and MAC address 52:54:00:d9:78:ff in network mk-multinode-505550
	I0603 11:32:13.438676   44162 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0603 11:32:13.442811   44162 command_runner.go:130] > 192.168.39.1	host.minikube.internal
	I0603 11:32:13.443011   44162 kubeadm.go:877] updating cluster {Name:multinode-505550 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:multinode-505550 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.232 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.227 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.172 Port:0 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0603 11:32:13.443188   44162 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime crio
	I0603 11:32:13.443233   44162 ssh_runner.go:195] Run: sudo crictl images --output json
	I0603 11:32:13.486914   44162 command_runner.go:130] > {
	I0603 11:32:13.486933   44162 command_runner.go:130] >   "images": [
	I0603 11:32:13.486937   44162 command_runner.go:130] >     {
	I0603 11:32:13.486944   44162 command_runner.go:130] >       "id": "4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5",
	I0603 11:32:13.486951   44162 command_runner.go:130] >       "repoTags": [
	I0603 11:32:13.486957   44162 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240202-8f1494ea"
	I0603 11:32:13.486960   44162 command_runner.go:130] >       ],
	I0603 11:32:13.486964   44162 command_runner.go:130] >       "repoDigests": [
	I0603 11:32:13.486974   44162 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988",
	I0603 11:32:13.486981   44162 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:bdddbe20c61d325166b48dd517059f5b93c21526eb74c5c80d86cd6d37236bac"
	I0603 11:32:13.486984   44162 command_runner.go:130] >       ],
	I0603 11:32:13.486989   44162 command_runner.go:130] >       "size": "65291810",
	I0603 11:32:13.486996   44162 command_runner.go:130] >       "uid": null,
	I0603 11:32:13.487003   44162 command_runner.go:130] >       "username": "",
	I0603 11:32:13.487008   44162 command_runner.go:130] >       "spec": null,
	I0603 11:32:13.487015   44162 command_runner.go:130] >       "pinned": false
	I0603 11:32:13.487019   44162 command_runner.go:130] >     },
	I0603 11:32:13.487024   44162 command_runner.go:130] >     {
	I0603 11:32:13.487030   44162 command_runner.go:130] >       "id": "ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f",
	I0603 11:32:13.487050   44162 command_runner.go:130] >       "repoTags": [
	I0603 11:32:13.487055   44162 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240513-cd2ac642"
	I0603 11:32:13.487062   44162 command_runner.go:130] >       ],
	I0603 11:32:13.487067   44162 command_runner.go:130] >       "repoDigests": [
	I0603 11:32:13.487078   44162 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:2b34f64609858041e706963bcd73273c087360ca240f1f9b37db6f148edb1266",
	I0603 11:32:13.487085   44162 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:9c2b5fcda3cb5a9725ecb893f3c8998a92d51a87465a886eb563e18d649383a8"
	I0603 11:32:13.487090   44162 command_runner.go:130] >       ],
	I0603 11:32:13.487094   44162 command_runner.go:130] >       "size": "65908273",
	I0603 11:32:13.487098   44162 command_runner.go:130] >       "uid": null,
	I0603 11:32:13.487104   44162 command_runner.go:130] >       "username": "",
	I0603 11:32:13.487111   44162 command_runner.go:130] >       "spec": null,
	I0603 11:32:13.487116   44162 command_runner.go:130] >       "pinned": false
	I0603 11:32:13.487121   44162 command_runner.go:130] >     },
	I0603 11:32:13.487125   44162 command_runner.go:130] >     {
	I0603 11:32:13.487140   44162 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0603 11:32:13.487146   44162 command_runner.go:130] >       "repoTags": [
	I0603 11:32:13.487151   44162 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0603 11:32:13.487157   44162 command_runner.go:130] >       ],
	I0603 11:32:13.487160   44162 command_runner.go:130] >       "repoDigests": [
	I0603 11:32:13.487168   44162 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0603 11:32:13.487177   44162 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0603 11:32:13.487181   44162 command_runner.go:130] >       ],
	I0603 11:32:13.487187   44162 command_runner.go:130] >       "size": "1363676",
	I0603 11:32:13.487191   44162 command_runner.go:130] >       "uid": null,
	I0603 11:32:13.487197   44162 command_runner.go:130] >       "username": "",
	I0603 11:32:13.487202   44162 command_runner.go:130] >       "spec": null,
	I0603 11:32:13.487208   44162 command_runner.go:130] >       "pinned": false
	I0603 11:32:13.487211   44162 command_runner.go:130] >     },
	I0603 11:32:13.487217   44162 command_runner.go:130] >     {
	I0603 11:32:13.487223   44162 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0603 11:32:13.487229   44162 command_runner.go:130] >       "repoTags": [
	I0603 11:32:13.487234   44162 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0603 11:32:13.487240   44162 command_runner.go:130] >       ],
	I0603 11:32:13.487248   44162 command_runner.go:130] >       "repoDigests": [
	I0603 11:32:13.487257   44162 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0603 11:32:13.487270   44162 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0603 11:32:13.487273   44162 command_runner.go:130] >       ],
	I0603 11:32:13.487277   44162 command_runner.go:130] >       "size": "31470524",
	I0603 11:32:13.487280   44162 command_runner.go:130] >       "uid": null,
	I0603 11:32:13.487284   44162 command_runner.go:130] >       "username": "",
	I0603 11:32:13.487288   44162 command_runner.go:130] >       "spec": null,
	I0603 11:32:13.487292   44162 command_runner.go:130] >       "pinned": false
	I0603 11:32:13.487295   44162 command_runner.go:130] >     },
	I0603 11:32:13.487298   44162 command_runner.go:130] >     {
	I0603 11:32:13.487311   44162 command_runner.go:130] >       "id": "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4",
	I0603 11:32:13.487315   44162 command_runner.go:130] >       "repoTags": [
	I0603 11:32:13.487320   44162 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.1"
	I0603 11:32:13.487323   44162 command_runner.go:130] >       ],
	I0603 11:32:13.487327   44162 command_runner.go:130] >       "repoDigests": [
	I0603 11:32:13.487334   44162 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1",
	I0603 11:32:13.487346   44162 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870"
	I0603 11:32:13.487349   44162 command_runner.go:130] >       ],
	I0603 11:32:13.487353   44162 command_runner.go:130] >       "size": "61245718",
	I0603 11:32:13.487356   44162 command_runner.go:130] >       "uid": null,
	I0603 11:32:13.487360   44162 command_runner.go:130] >       "username": "nonroot",
	I0603 11:32:13.487363   44162 command_runner.go:130] >       "spec": null,
	I0603 11:32:13.487367   44162 command_runner.go:130] >       "pinned": false
	I0603 11:32:13.487370   44162 command_runner.go:130] >     },
	I0603 11:32:13.487373   44162 command_runner.go:130] >     {
	I0603 11:32:13.487379   44162 command_runner.go:130] >       "id": "3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899",
	I0603 11:32:13.487383   44162 command_runner.go:130] >       "repoTags": [
	I0603 11:32:13.487387   44162 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.12-0"
	I0603 11:32:13.487391   44162 command_runner.go:130] >       ],
	I0603 11:32:13.487395   44162 command_runner.go:130] >       "repoDigests": [
	I0603 11:32:13.487402   44162 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:2e6b9c67730f1f1dce4c6e16d60135e00608728567f537e8ff70c244756cbb62",
	I0603 11:32:13.487411   44162 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b"
	I0603 11:32:13.487415   44162 command_runner.go:130] >       ],
	I0603 11:32:13.487419   44162 command_runner.go:130] >       "size": "150779692",
	I0603 11:32:13.487425   44162 command_runner.go:130] >       "uid": {
	I0603 11:32:13.487429   44162 command_runner.go:130] >         "value": "0"
	I0603 11:32:13.487433   44162 command_runner.go:130] >       },
	I0603 11:32:13.487436   44162 command_runner.go:130] >       "username": "",
	I0603 11:32:13.487440   44162 command_runner.go:130] >       "spec": null,
	I0603 11:32:13.487444   44162 command_runner.go:130] >       "pinned": false
	I0603 11:32:13.487447   44162 command_runner.go:130] >     },
	I0603 11:32:13.487450   44162 command_runner.go:130] >     {
	I0603 11:32:13.487456   44162 command_runner.go:130] >       "id": "91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a",
	I0603 11:32:13.487462   44162 command_runner.go:130] >       "repoTags": [
	I0603 11:32:13.487467   44162 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.30.1"
	I0603 11:32:13.487471   44162 command_runner.go:130] >       ],
	I0603 11:32:13.487475   44162 command_runner.go:130] >       "repoDigests": [
	I0603 11:32:13.487482   44162 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:0d4a3051234387b78affbcde283dcde5df21e0d6d740c80c363db1cbb973b4ea",
	I0603 11:32:13.487491   44162 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:a9cf4f4eb92ef02b0a8ba4148f50b4a1b2bd3e9b28a8f9913ea8c3bcc08e610c"
	I0603 11:32:13.487495   44162 command_runner.go:130] >       ],
	I0603 11:32:13.487499   44162 command_runner.go:130] >       "size": "117601759",
	I0603 11:32:13.487506   44162 command_runner.go:130] >       "uid": {
	I0603 11:32:13.487514   44162 command_runner.go:130] >         "value": "0"
	I0603 11:32:13.487520   44162 command_runner.go:130] >       },
	I0603 11:32:13.487523   44162 command_runner.go:130] >       "username": "",
	I0603 11:32:13.487527   44162 command_runner.go:130] >       "spec": null,
	I0603 11:32:13.487531   44162 command_runner.go:130] >       "pinned": false
	I0603 11:32:13.487536   44162 command_runner.go:130] >     },
	I0603 11:32:13.487540   44162 command_runner.go:130] >     {
	I0603 11:32:13.487545   44162 command_runner.go:130] >       "id": "25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c",
	I0603 11:32:13.487550   44162 command_runner.go:130] >       "repoTags": [
	I0603 11:32:13.487555   44162 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.30.1"
	I0603 11:32:13.487561   44162 command_runner.go:130] >       ],
	I0603 11:32:13.487565   44162 command_runner.go:130] >       "repoDigests": [
	I0603 11:32:13.487583   44162 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:0c34190fbf807746f6584104811ed5cda72fb30ce30a036c132dea692d55ec52",
	I0603 11:32:13.487594   44162 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:110a010162e119e768e13bb104c0883fb4aceb894659787744abf115fcc56027"
	I0603 11:32:13.487597   44162 command_runner.go:130] >       ],
	I0603 11:32:13.487600   44162 command_runner.go:130] >       "size": "112170310",
	I0603 11:32:13.487604   44162 command_runner.go:130] >       "uid": {
	I0603 11:32:13.487608   44162 command_runner.go:130] >         "value": "0"
	I0603 11:32:13.487612   44162 command_runner.go:130] >       },
	I0603 11:32:13.487615   44162 command_runner.go:130] >       "username": "",
	I0603 11:32:13.487619   44162 command_runner.go:130] >       "spec": null,
	I0603 11:32:13.487623   44162 command_runner.go:130] >       "pinned": false
	I0603 11:32:13.487626   44162 command_runner.go:130] >     },
	I0603 11:32:13.487629   44162 command_runner.go:130] >     {
	I0603 11:32:13.487635   44162 command_runner.go:130] >       "id": "747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd",
	I0603 11:32:13.487639   44162 command_runner.go:130] >       "repoTags": [
	I0603 11:32:13.487643   44162 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.30.1"
	I0603 11:32:13.487646   44162 command_runner.go:130] >       ],
	I0603 11:32:13.487649   44162 command_runner.go:130] >       "repoDigests": [
	I0603 11:32:13.487656   44162 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:2eec8116ed9b8f46b6a90a46434711354d2222575ab50a4aca42bb6ab19989fa",
	I0603 11:32:13.487663   44162 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:a1754e5a33878878e78dd0141167e7c529d91eb9b36ffbbf91a6052257b3179c"
	I0603 11:32:13.487666   44162 command_runner.go:130] >       ],
	I0603 11:32:13.487670   44162 command_runner.go:130] >       "size": "85933465",
	I0603 11:32:13.487673   44162 command_runner.go:130] >       "uid": null,
	I0603 11:32:13.487676   44162 command_runner.go:130] >       "username": "",
	I0603 11:32:13.487680   44162 command_runner.go:130] >       "spec": null,
	I0603 11:32:13.487688   44162 command_runner.go:130] >       "pinned": false
	I0603 11:32:13.487692   44162 command_runner.go:130] >     },
	I0603 11:32:13.487695   44162 command_runner.go:130] >     {
	I0603 11:32:13.487700   44162 command_runner.go:130] >       "id": "a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035",
	I0603 11:32:13.487704   44162 command_runner.go:130] >       "repoTags": [
	I0603 11:32:13.487709   44162 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.30.1"
	I0603 11:32:13.487711   44162 command_runner.go:130] >       ],
	I0603 11:32:13.487715   44162 command_runner.go:130] >       "repoDigests": [
	I0603 11:32:13.487722   44162 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:74d02f6debc5ff3d3bc03f96ae029fb9c72ec1ea94c14e2cdf279939d8e0e036",
	I0603 11:32:13.487728   44162 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:8ebcbcb8ecc9fc76029ac1dc12f3f15e33e6d26f018d49d5db4437f3d4b34973"
	I0603 11:32:13.487733   44162 command_runner.go:130] >       ],
	I0603 11:32:13.487737   44162 command_runner.go:130] >       "size": "63026504",
	I0603 11:32:13.487742   44162 command_runner.go:130] >       "uid": {
	I0603 11:32:13.487746   44162 command_runner.go:130] >         "value": "0"
	I0603 11:32:13.487752   44162 command_runner.go:130] >       },
	I0603 11:32:13.487755   44162 command_runner.go:130] >       "username": "",
	I0603 11:32:13.487760   44162 command_runner.go:130] >       "spec": null,
	I0603 11:32:13.487765   44162 command_runner.go:130] >       "pinned": false
	I0603 11:32:13.487769   44162 command_runner.go:130] >     },
	I0603 11:32:13.487772   44162 command_runner.go:130] >     {
	I0603 11:32:13.487779   44162 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I0603 11:32:13.487783   44162 command_runner.go:130] >       "repoTags": [
	I0603 11:32:13.487787   44162 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I0603 11:32:13.487793   44162 command_runner.go:130] >       ],
	I0603 11:32:13.487797   44162 command_runner.go:130] >       "repoDigests": [
	I0603 11:32:13.487803   44162 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I0603 11:32:13.487812   44162 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I0603 11:32:13.487815   44162 command_runner.go:130] >       ],
	I0603 11:32:13.487819   44162 command_runner.go:130] >       "size": "750414",
	I0603 11:32:13.487824   44162 command_runner.go:130] >       "uid": {
	I0603 11:32:13.487828   44162 command_runner.go:130] >         "value": "65535"
	I0603 11:32:13.487834   44162 command_runner.go:130] >       },
	I0603 11:32:13.487838   44162 command_runner.go:130] >       "username": "",
	I0603 11:32:13.487841   44162 command_runner.go:130] >       "spec": null,
	I0603 11:32:13.487845   44162 command_runner.go:130] >       "pinned": true
	I0603 11:32:13.487850   44162 command_runner.go:130] >     }
	I0603 11:32:13.487860   44162 command_runner.go:130] >   ]
	I0603 11:32:13.487865   44162 command_runner.go:130] > }
	I0603 11:32:13.488390   44162 crio.go:514] all images are preloaded for cri-o runtime.
	I0603 11:32:13.488407   44162 crio.go:433] Images already preloaded, skipping extraction
	I0603 11:32:13.488448   44162 ssh_runner.go:195] Run: sudo crictl images --output json
	I0603 11:32:13.520322   44162 command_runner.go:130] > {
	I0603 11:32:13.520346   44162 command_runner.go:130] >   "images": [
	I0603 11:32:13.520351   44162 command_runner.go:130] >     {
	I0603 11:32:13.520358   44162 command_runner.go:130] >       "id": "4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5",
	I0603 11:32:13.520364   44162 command_runner.go:130] >       "repoTags": [
	I0603 11:32:13.520370   44162 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240202-8f1494ea"
	I0603 11:32:13.520374   44162 command_runner.go:130] >       ],
	I0603 11:32:13.520378   44162 command_runner.go:130] >       "repoDigests": [
	I0603 11:32:13.520386   44162 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988",
	I0603 11:32:13.520393   44162 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:bdddbe20c61d325166b48dd517059f5b93c21526eb74c5c80d86cd6d37236bac"
	I0603 11:32:13.520399   44162 command_runner.go:130] >       ],
	I0603 11:32:13.520404   44162 command_runner.go:130] >       "size": "65291810",
	I0603 11:32:13.520408   44162 command_runner.go:130] >       "uid": null,
	I0603 11:32:13.520414   44162 command_runner.go:130] >       "username": "",
	I0603 11:32:13.520420   44162 command_runner.go:130] >       "spec": null,
	I0603 11:32:13.520426   44162 command_runner.go:130] >       "pinned": false
	I0603 11:32:13.520430   44162 command_runner.go:130] >     },
	I0603 11:32:13.520433   44162 command_runner.go:130] >     {
	I0603 11:32:13.520439   44162 command_runner.go:130] >       "id": "ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f",
	I0603 11:32:13.520444   44162 command_runner.go:130] >       "repoTags": [
	I0603 11:32:13.520449   44162 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240513-cd2ac642"
	I0603 11:32:13.520453   44162 command_runner.go:130] >       ],
	I0603 11:32:13.520456   44162 command_runner.go:130] >       "repoDigests": [
	I0603 11:32:13.520463   44162 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:2b34f64609858041e706963bcd73273c087360ca240f1f9b37db6f148edb1266",
	I0603 11:32:13.520471   44162 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:9c2b5fcda3cb5a9725ecb893f3c8998a92d51a87465a886eb563e18d649383a8"
	I0603 11:32:13.520475   44162 command_runner.go:130] >       ],
	I0603 11:32:13.520480   44162 command_runner.go:130] >       "size": "65908273",
	I0603 11:32:13.520483   44162 command_runner.go:130] >       "uid": null,
	I0603 11:32:13.520489   44162 command_runner.go:130] >       "username": "",
	I0603 11:32:13.520495   44162 command_runner.go:130] >       "spec": null,
	I0603 11:32:13.520498   44162 command_runner.go:130] >       "pinned": false
	I0603 11:32:13.520504   44162 command_runner.go:130] >     },
	I0603 11:32:13.520509   44162 command_runner.go:130] >     {
	I0603 11:32:13.520516   44162 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0603 11:32:13.520520   44162 command_runner.go:130] >       "repoTags": [
	I0603 11:32:13.520527   44162 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0603 11:32:13.520530   44162 command_runner.go:130] >       ],
	I0603 11:32:13.520536   44162 command_runner.go:130] >       "repoDigests": [
	I0603 11:32:13.520544   44162 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0603 11:32:13.520553   44162 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0603 11:32:13.520557   44162 command_runner.go:130] >       ],
	I0603 11:32:13.520561   44162 command_runner.go:130] >       "size": "1363676",
	I0603 11:32:13.520565   44162 command_runner.go:130] >       "uid": null,
	I0603 11:32:13.520571   44162 command_runner.go:130] >       "username": "",
	I0603 11:32:13.520575   44162 command_runner.go:130] >       "spec": null,
	I0603 11:32:13.520580   44162 command_runner.go:130] >       "pinned": false
	I0603 11:32:13.520584   44162 command_runner.go:130] >     },
	I0603 11:32:13.520589   44162 command_runner.go:130] >     {
	I0603 11:32:13.520595   44162 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0603 11:32:13.520601   44162 command_runner.go:130] >       "repoTags": [
	I0603 11:32:13.520606   44162 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0603 11:32:13.520612   44162 command_runner.go:130] >       ],
	I0603 11:32:13.520616   44162 command_runner.go:130] >       "repoDigests": [
	I0603 11:32:13.520624   44162 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0603 11:32:13.520636   44162 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0603 11:32:13.520642   44162 command_runner.go:130] >       ],
	I0603 11:32:13.520647   44162 command_runner.go:130] >       "size": "31470524",
	I0603 11:32:13.520653   44162 command_runner.go:130] >       "uid": null,
	I0603 11:32:13.520657   44162 command_runner.go:130] >       "username": "",
	I0603 11:32:13.520665   44162 command_runner.go:130] >       "spec": null,
	I0603 11:32:13.520671   44162 command_runner.go:130] >       "pinned": false
	I0603 11:32:13.520675   44162 command_runner.go:130] >     },
	I0603 11:32:13.520681   44162 command_runner.go:130] >     {
	I0603 11:32:13.520686   44162 command_runner.go:130] >       "id": "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4",
	I0603 11:32:13.520690   44162 command_runner.go:130] >       "repoTags": [
	I0603 11:32:13.520695   44162 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.1"
	I0603 11:32:13.520701   44162 command_runner.go:130] >       ],
	I0603 11:32:13.520708   44162 command_runner.go:130] >       "repoDigests": [
	I0603 11:32:13.520718   44162 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1",
	I0603 11:32:13.520725   44162 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870"
	I0603 11:32:13.520731   44162 command_runner.go:130] >       ],
	I0603 11:32:13.520735   44162 command_runner.go:130] >       "size": "61245718",
	I0603 11:32:13.520741   44162 command_runner.go:130] >       "uid": null,
	I0603 11:32:13.520745   44162 command_runner.go:130] >       "username": "nonroot",
	I0603 11:32:13.520749   44162 command_runner.go:130] >       "spec": null,
	I0603 11:32:13.520753   44162 command_runner.go:130] >       "pinned": false
	I0603 11:32:13.520758   44162 command_runner.go:130] >     },
	I0603 11:32:13.520762   44162 command_runner.go:130] >     {
	I0603 11:32:13.520770   44162 command_runner.go:130] >       "id": "3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899",
	I0603 11:32:13.520774   44162 command_runner.go:130] >       "repoTags": [
	I0603 11:32:13.520779   44162 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.12-0"
	I0603 11:32:13.520782   44162 command_runner.go:130] >       ],
	I0603 11:32:13.520786   44162 command_runner.go:130] >       "repoDigests": [
	I0603 11:32:13.520793   44162 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:2e6b9c67730f1f1dce4c6e16d60135e00608728567f537e8ff70c244756cbb62",
	I0603 11:32:13.520802   44162 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b"
	I0603 11:32:13.520806   44162 command_runner.go:130] >       ],
	I0603 11:32:13.520811   44162 command_runner.go:130] >       "size": "150779692",
	I0603 11:32:13.520815   44162 command_runner.go:130] >       "uid": {
	I0603 11:32:13.520820   44162 command_runner.go:130] >         "value": "0"
	I0603 11:32:13.520824   44162 command_runner.go:130] >       },
	I0603 11:32:13.520830   44162 command_runner.go:130] >       "username": "",
	I0603 11:32:13.520834   44162 command_runner.go:130] >       "spec": null,
	I0603 11:32:13.520838   44162 command_runner.go:130] >       "pinned": false
	I0603 11:32:13.520841   44162 command_runner.go:130] >     },
	I0603 11:32:13.520844   44162 command_runner.go:130] >     {
	I0603 11:32:13.520850   44162 command_runner.go:130] >       "id": "91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a",
	I0603 11:32:13.520855   44162 command_runner.go:130] >       "repoTags": [
	I0603 11:32:13.520860   44162 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.30.1"
	I0603 11:32:13.520866   44162 command_runner.go:130] >       ],
	I0603 11:32:13.520870   44162 command_runner.go:130] >       "repoDigests": [
	I0603 11:32:13.520879   44162 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:0d4a3051234387b78affbcde283dcde5df21e0d6d740c80c363db1cbb973b4ea",
	I0603 11:32:13.520886   44162 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:a9cf4f4eb92ef02b0a8ba4148f50b4a1b2bd3e9b28a8f9913ea8c3bcc08e610c"
	I0603 11:32:13.520892   44162 command_runner.go:130] >       ],
	I0603 11:32:13.520898   44162 command_runner.go:130] >       "size": "117601759",
	I0603 11:32:13.520904   44162 command_runner.go:130] >       "uid": {
	I0603 11:32:13.520908   44162 command_runner.go:130] >         "value": "0"
	I0603 11:32:13.520911   44162 command_runner.go:130] >       },
	I0603 11:32:13.520915   44162 command_runner.go:130] >       "username": "",
	I0603 11:32:13.520919   44162 command_runner.go:130] >       "spec": null,
	I0603 11:32:13.520923   44162 command_runner.go:130] >       "pinned": false
	I0603 11:32:13.520926   44162 command_runner.go:130] >     },
	I0603 11:32:13.520930   44162 command_runner.go:130] >     {
	I0603 11:32:13.520935   44162 command_runner.go:130] >       "id": "25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c",
	I0603 11:32:13.520942   44162 command_runner.go:130] >       "repoTags": [
	I0603 11:32:13.520947   44162 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.30.1"
	I0603 11:32:13.520952   44162 command_runner.go:130] >       ],
	I0603 11:32:13.520957   44162 command_runner.go:130] >       "repoDigests": [
	I0603 11:32:13.520970   44162 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:0c34190fbf807746f6584104811ed5cda72fb30ce30a036c132dea692d55ec52",
	I0603 11:32:13.520980   44162 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:110a010162e119e768e13bb104c0883fb4aceb894659787744abf115fcc56027"
	I0603 11:32:13.520983   44162 command_runner.go:130] >       ],
	I0603 11:32:13.520988   44162 command_runner.go:130] >       "size": "112170310",
	I0603 11:32:13.520992   44162 command_runner.go:130] >       "uid": {
	I0603 11:32:13.520995   44162 command_runner.go:130] >         "value": "0"
	I0603 11:32:13.520999   44162 command_runner.go:130] >       },
	I0603 11:32:13.521004   44162 command_runner.go:130] >       "username": "",
	I0603 11:32:13.521008   44162 command_runner.go:130] >       "spec": null,
	I0603 11:32:13.521014   44162 command_runner.go:130] >       "pinned": false
	I0603 11:32:13.521017   44162 command_runner.go:130] >     },
	I0603 11:32:13.521020   44162 command_runner.go:130] >     {
	I0603 11:32:13.521026   44162 command_runner.go:130] >       "id": "747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd",
	I0603 11:32:13.521032   44162 command_runner.go:130] >       "repoTags": [
	I0603 11:32:13.521036   44162 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.30.1"
	I0603 11:32:13.521042   44162 command_runner.go:130] >       ],
	I0603 11:32:13.521046   44162 command_runner.go:130] >       "repoDigests": [
	I0603 11:32:13.521056   44162 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:2eec8116ed9b8f46b6a90a46434711354d2222575ab50a4aca42bb6ab19989fa",
	I0603 11:32:13.521068   44162 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:a1754e5a33878878e78dd0141167e7c529d91eb9b36ffbbf91a6052257b3179c"
	I0603 11:32:13.521072   44162 command_runner.go:130] >       ],
	I0603 11:32:13.521094   44162 command_runner.go:130] >       "size": "85933465",
	I0603 11:32:13.521103   44162 command_runner.go:130] >       "uid": null,
	I0603 11:32:13.521108   44162 command_runner.go:130] >       "username": "",
	I0603 11:32:13.521112   44162 command_runner.go:130] >       "spec": null,
	I0603 11:32:13.521116   44162 command_runner.go:130] >       "pinned": false
	I0603 11:32:13.521119   44162 command_runner.go:130] >     },
	I0603 11:32:13.521122   44162 command_runner.go:130] >     {
	I0603 11:32:13.521128   44162 command_runner.go:130] >       "id": "a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035",
	I0603 11:32:13.521134   44162 command_runner.go:130] >       "repoTags": [
	I0603 11:32:13.521139   44162 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.30.1"
	I0603 11:32:13.521145   44162 command_runner.go:130] >       ],
	I0603 11:32:13.521149   44162 command_runner.go:130] >       "repoDigests": [
	I0603 11:32:13.521156   44162 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:74d02f6debc5ff3d3bc03f96ae029fb9c72ec1ea94c14e2cdf279939d8e0e036",
	I0603 11:32:13.521165   44162 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:8ebcbcb8ecc9fc76029ac1dc12f3f15e33e6d26f018d49d5db4437f3d4b34973"
	I0603 11:32:13.521169   44162 command_runner.go:130] >       ],
	I0603 11:32:13.521173   44162 command_runner.go:130] >       "size": "63026504",
	I0603 11:32:13.521176   44162 command_runner.go:130] >       "uid": {
	I0603 11:32:13.521180   44162 command_runner.go:130] >         "value": "0"
	I0603 11:32:13.521183   44162 command_runner.go:130] >       },
	I0603 11:32:13.521187   44162 command_runner.go:130] >       "username": "",
	I0603 11:32:13.521191   44162 command_runner.go:130] >       "spec": null,
	I0603 11:32:13.521195   44162 command_runner.go:130] >       "pinned": false
	I0603 11:32:13.521198   44162 command_runner.go:130] >     },
	I0603 11:32:13.521204   44162 command_runner.go:130] >     {
	I0603 11:32:13.521210   44162 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I0603 11:32:13.521216   44162 command_runner.go:130] >       "repoTags": [
	I0603 11:32:13.521220   44162 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I0603 11:32:13.521226   44162 command_runner.go:130] >       ],
	I0603 11:32:13.521230   44162 command_runner.go:130] >       "repoDigests": [
	I0603 11:32:13.521237   44162 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I0603 11:32:13.521245   44162 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I0603 11:32:13.521249   44162 command_runner.go:130] >       ],
	I0603 11:32:13.521255   44162 command_runner.go:130] >       "size": "750414",
	I0603 11:32:13.521258   44162 command_runner.go:130] >       "uid": {
	I0603 11:32:13.521263   44162 command_runner.go:130] >         "value": "65535"
	I0603 11:32:13.521269   44162 command_runner.go:130] >       },
	I0603 11:32:13.521273   44162 command_runner.go:130] >       "username": "",
	I0603 11:32:13.521276   44162 command_runner.go:130] >       "spec": null,
	I0603 11:32:13.521282   44162 command_runner.go:130] >       "pinned": true
	I0603 11:32:13.521287   44162 command_runner.go:130] >     }
	I0603 11:32:13.521291   44162 command_runner.go:130] >   ]
	I0603 11:32:13.521294   44162 command_runner.go:130] > }
	I0603 11:32:13.521410   44162 crio.go:514] all images are preloaded for cri-o runtime.
	I0603 11:32:13.521419   44162 cache_images.go:84] Images are preloaded, skipping loading
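	The two crictl listings above drive the preload decision: the image tags already present in CRI-O's store are compared against the images required for Kubernetes v1.30.1, and because every required tag is listed, extraction and cache loading are skipped. A minimal sketch of consuming that JSON, assuming only the fields visible in the log output (struct and helper names are illustrative, not minikube's own):

	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	// crictlImages declares just the fields visible in the
	// `sudo crictl images --output json` output logged above.
	type crictlImages struct {
		Images []struct {
			ID          string   `json:"id"`
			RepoTags    []string `json:"repoTags"`
			RepoDigests []string `json:"repoDigests"`
			Size        string   `json:"size"`
			Pinned      bool     `json:"pinned"`
		} `json:"images"`
	}

	// listImageTags runs crictl and returns the set of repo tags present in the
	// CRI-O image store (illustrative helper; requires crictl on the node).
	func listImageTags() (map[string]bool, error) {
		out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
		if err != nil {
			return nil, err
		}
		var parsed crictlImages
		if err := json.Unmarshal(out, &parsed); err != nil {
			return nil, err
		}
		tags := map[string]bool{}
		for _, img := range parsed.Images {
			for _, t := range img.RepoTags {
				tags[t] = true
			}
		}
		return tags, nil
	}

	func main() {
		tags, err := listImageTags()
		if err != nil {
			fmt.Println("crictl failed:", err)
			return
		}
		// Each tag below appears in the listing above; when all required tags are
		// present, the "all images are preloaded" / "skipping loading" path is taken.
		for _, want := range []string{
			"registry.k8s.io/kube-apiserver:v1.30.1",
			"registry.k8s.io/etcd:3.5.12-0",
			"registry.k8s.io/coredns/coredns:v1.11.1",
		} {
			fmt.Println(want, tags[want])
		}
	}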
	I0603 11:32:13.521427   44162 kubeadm.go:928] updating node { 192.168.39.232 8443 v1.30.1 crio true true} ...
	I0603 11:32:13.521514   44162 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-505550 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.232
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.1 ClusterName:multinode-505550 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
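	The kubelet systemd drop-in above is rendered per node from the cluster config echoed on the previous line. A minimal, hypothetical Go sketch of assembling that ExecStart line from the logged node parameters (the template and function name are illustrative, not minikube's own):

	package main

	import (
		"fmt"
		"strings"
	)

	// kubeletUnit renders a drop-in shaped like the one logged above from the
	// node's Kubernetes version, name and IP. Illustrative only.
	func kubeletUnit(k8sVersion, nodeName, nodeIP string) string {
		flags := []string{
			"--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf",
			"--config=/var/lib/kubelet/config.yaml",
			"--hostname-override=" + nodeName,
			"--kubeconfig=/etc/kubernetes/kubelet.conf",
			"--node-ip=" + nodeIP,
		}
		return fmt.Sprintf(`[Unit]
	Wants=crio.service

	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/%s/kubelet %s

	[Install]
	`, k8sVersion, strings.Join(flags, " "))
	}

	func main() {
		fmt.Print(kubeletUnit("v1.30.1", "multinode-505550", "192.168.39.232"))
	}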
	I0603 11:32:13.521571   44162 ssh_runner.go:195] Run: crio config
	I0603 11:32:13.561373   44162 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I0603 11:32:13.561398   44162 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I0603 11:32:13.561404   44162 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I0603 11:32:13.561407   44162 command_runner.go:130] > #
	I0603 11:32:13.561414   44162 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I0603 11:32:13.561420   44162 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I0603 11:32:13.561430   44162 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I0603 11:32:13.561453   44162 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I0603 11:32:13.561460   44162 command_runner.go:130] > # reload'.
	I0603 11:32:13.561473   44162 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I0603 11:32:13.561488   44162 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I0603 11:32:13.561498   44162 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I0603 11:32:13.561510   44162 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I0603 11:32:13.561517   44162 command_runner.go:130] > [crio]
	I0603 11:32:13.561532   44162 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I0603 11:32:13.561537   44162 command_runner.go:130] > # containers images, in this directory.
	I0603 11:32:13.561546   44162 command_runner.go:130] > root = "/var/lib/containers/storage"
	I0603 11:32:13.561561   44162 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I0603 11:32:13.561674   44162 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I0603 11:32:13.561698   44162 command_runner.go:130] > # Path to the "imagestore". If CRI-O stores all of its images in this directory differently than Root.
	I0603 11:32:13.561899   44162 command_runner.go:130] > # imagestore = ""
	I0603 11:32:13.561915   44162 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I0603 11:32:13.561926   44162 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I0603 11:32:13.562031   44162 command_runner.go:130] > storage_driver = "overlay"
	I0603 11:32:13.562083   44162 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I0603 11:32:13.562100   44162 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I0603 11:32:13.562106   44162 command_runner.go:130] > storage_option = [
	I0603 11:32:13.562207   44162 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I0603 11:32:13.562284   44162 command_runner.go:130] > ]
	I0603 11:32:13.562299   44162 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I0603 11:32:13.562309   44162 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I0603 11:32:13.562590   44162 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I0603 11:32:13.562605   44162 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I0603 11:32:13.562615   44162 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I0603 11:32:13.562623   44162 command_runner.go:130] > # always happen on a node reboot
	I0603 11:32:13.562871   44162 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I0603 11:32:13.562893   44162 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I0603 11:32:13.562904   44162 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I0603 11:32:13.562913   44162 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I0603 11:32:13.562996   44162 command_runner.go:130] > version_file_persist = "/var/lib/crio/version"
	I0603 11:32:13.563015   44162 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I0603 11:32:13.563027   44162 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I0603 11:32:13.563252   44162 command_runner.go:130] > # internal_wipe = true
	I0603 11:32:13.563270   44162 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I0603 11:32:13.563284   44162 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I0603 11:32:13.563622   44162 command_runner.go:130] > # internal_repair = false
	I0603 11:32:13.563643   44162 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I0603 11:32:13.563655   44162 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I0603 11:32:13.563664   44162 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I0603 11:32:13.563862   44162 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I0603 11:32:13.563879   44162 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I0603 11:32:13.563885   44162 command_runner.go:130] > [crio.api]
	I0603 11:32:13.563894   44162 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I0603 11:32:13.564109   44162 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I0603 11:32:13.564120   44162 command_runner.go:130] > # IP address on which the stream server will listen.
	I0603 11:32:13.564382   44162 command_runner.go:130] > # stream_address = "127.0.0.1"
	I0603 11:32:13.564400   44162 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I0603 11:32:13.564410   44162 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I0603 11:32:13.564816   44162 command_runner.go:130] > # stream_port = "0"
	I0603 11:32:13.564832   44162 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I0603 11:32:13.565097   44162 command_runner.go:130] > # stream_enable_tls = false
	I0603 11:32:13.565111   44162 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I0603 11:32:13.565327   44162 command_runner.go:130] > # stream_idle_timeout = ""
	I0603 11:32:13.565344   44162 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I0603 11:32:13.565354   44162 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I0603 11:32:13.565364   44162 command_runner.go:130] > # minutes.
	I0603 11:32:13.565750   44162 command_runner.go:130] > # stream_tls_cert = ""
	I0603 11:32:13.565772   44162 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I0603 11:32:13.565780   44162 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I0603 11:32:13.565934   44162 command_runner.go:130] > # stream_tls_key = ""
	I0603 11:32:13.565950   44162 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I0603 11:32:13.565960   44162 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I0603 11:32:13.565979   44162 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I0603 11:32:13.566163   44162 command_runner.go:130] > # stream_tls_ca = ""
	I0603 11:32:13.566181   44162 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I0603 11:32:13.566335   44162 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I0603 11:32:13.566353   44162 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I0603 11:32:13.566558   44162 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
	I0603 11:32:13.566574   44162 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I0603 11:32:13.566583   44162 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I0603 11:32:13.566591   44162 command_runner.go:130] > [crio.runtime]
	I0603 11:32:13.566600   44162 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I0603 11:32:13.566612   44162 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I0603 11:32:13.566622   44162 command_runner.go:130] > # "nofile=1024:2048"
	I0603 11:32:13.566635   44162 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I0603 11:32:13.566787   44162 command_runner.go:130] > # default_ulimits = [
	I0603 11:32:13.567103   44162 command_runner.go:130] > # ]
	I0603 11:32:13.567123   44162 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I0603 11:32:13.567372   44162 command_runner.go:130] > # no_pivot = false
	I0603 11:32:13.567388   44162 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I0603 11:32:13.567400   44162 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I0603 11:32:13.567599   44162 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I0603 11:32:13.567613   44162 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I0603 11:32:13.567620   44162 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I0603 11:32:13.567630   44162 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0603 11:32:13.567642   44162 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I0603 11:32:13.567649   44162 command_runner.go:130] > # Cgroup setting for conmon
	I0603 11:32:13.567659   44162 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I0603 11:32:13.567664   44162 command_runner.go:130] > conmon_cgroup = "pod"
	I0603 11:32:13.567670   44162 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I0603 11:32:13.567678   44162 command_runner.go:130] > # environment variables to conmon or the runtime.
	I0603 11:32:13.567684   44162 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0603 11:32:13.567689   44162 command_runner.go:130] > conmon_env = [
	I0603 11:32:13.567756   44162 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0603 11:32:13.567768   44162 command_runner.go:130] > ]
	I0603 11:32:13.567776   44162 command_runner.go:130] > # Additional environment variables to set for all the
	I0603 11:32:13.567785   44162 command_runner.go:130] > # containers. These are overridden if set in the
	I0603 11:32:13.567797   44162 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I0603 11:32:13.567808   44162 command_runner.go:130] > # default_env = [
	I0603 11:32:13.567814   44162 command_runner.go:130] > # ]
	I0603 11:32:13.567824   44162 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I0603 11:32:13.567838   44162 command_runner.go:130] > # This option is deprecated, and be interpreted from whether SELinux is enabled on the host in the future.
	I0603 11:32:13.567847   44162 command_runner.go:130] > # selinux = false
	I0603 11:32:13.567857   44162 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I0603 11:32:13.567870   44162 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I0603 11:32:13.567884   44162 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I0603 11:32:13.567891   44162 command_runner.go:130] > # seccomp_profile = ""
	I0603 11:32:13.567903   44162 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I0603 11:32:13.567917   44162 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I0603 11:32:13.567930   44162 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I0603 11:32:13.567941   44162 command_runner.go:130] > # which might increase security.
	I0603 11:32:13.567949   44162 command_runner.go:130] > # This option is currently deprecated,
	I0603 11:32:13.567961   44162 command_runner.go:130] > # and will be replaced by the SeccompDefault FeatureGate in Kubernetes.
	I0603 11:32:13.567971   44162 command_runner.go:130] > seccomp_use_default_when_empty = false
	I0603 11:32:13.567994   44162 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I0603 11:32:13.568003   44162 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I0603 11:32:13.568011   44162 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I0603 11:32:13.568017   44162 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I0603 11:32:13.568029   44162 command_runner.go:130] > # This option supports live configuration reload.
	I0603 11:32:13.568035   44162 command_runner.go:130] > # apparmor_profile = "crio-default"
	I0603 11:32:13.568040   44162 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I0603 11:32:13.568051   44162 command_runner.go:130] > # the cgroup blockio controller.
	I0603 11:32:13.568059   44162 command_runner.go:130] > # blockio_config_file = ""
	I0603 11:32:13.568068   44162 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I0603 11:32:13.568072   44162 command_runner.go:130] > # blockio parameters.
	I0603 11:32:13.568076   44162 command_runner.go:130] > # blockio_reload = false
	I0603 11:32:13.568082   44162 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I0603 11:32:13.568089   44162 command_runner.go:130] > # irqbalance daemon.
	I0603 11:32:13.568095   44162 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I0603 11:32:13.568103   44162 command_runner.go:130] > # irqbalance_config_restore_file allows to set a cpu mask CRI-O should
	I0603 11:32:13.568111   44162 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I0603 11:32:13.568120   44162 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I0603 11:32:13.568126   44162 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I0603 11:32:13.568134   44162 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I0603 11:32:13.568142   44162 command_runner.go:130] > # This option supports live configuration reload.
	I0603 11:32:13.568146   44162 command_runner.go:130] > # rdt_config_file = ""
	I0603 11:32:13.568153   44162 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I0603 11:32:13.568157   44162 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I0603 11:32:13.568185   44162 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I0603 11:32:13.568192   44162 command_runner.go:130] > # separate_pull_cgroup = ""
	I0603 11:32:13.568198   44162 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I0603 11:32:13.568204   44162 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I0603 11:32:13.568212   44162 command_runner.go:130] > # will be added.
	I0603 11:32:13.568219   44162 command_runner.go:130] > # default_capabilities = [
	I0603 11:32:13.568248   44162 command_runner.go:130] > # 	"CHOWN",
	I0603 11:32:13.568259   44162 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I0603 11:32:13.568266   44162 command_runner.go:130] > # 	"FSETID",
	I0603 11:32:13.568274   44162 command_runner.go:130] > # 	"FOWNER",
	I0603 11:32:13.568281   44162 command_runner.go:130] > # 	"SETGID",
	I0603 11:32:13.568292   44162 command_runner.go:130] > # 	"SETUID",
	I0603 11:32:13.568314   44162 command_runner.go:130] > # 	"SETPCAP",
	I0603 11:32:13.568324   44162 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I0603 11:32:13.568331   44162 command_runner.go:130] > # 	"KILL",
	I0603 11:32:13.568337   44162 command_runner.go:130] > # ]
	I0603 11:32:13.568344   44162 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I0603 11:32:13.568357   44162 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I0603 11:32:13.568376   44162 command_runner.go:130] > # add_inheritable_capabilities = false
	I0603 11:32:13.568387   44162 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I0603 11:32:13.568392   44162 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0603 11:32:13.568399   44162 command_runner.go:130] > default_sysctls = [
	I0603 11:32:13.568404   44162 command_runner.go:130] > 	"net.ipv4.ip_unprivileged_port_start=0",
	I0603 11:32:13.568410   44162 command_runner.go:130] > ]
	I0603 11:32:13.568414   44162 command_runner.go:130] > # List of devices on the host that a
	I0603 11:32:13.568422   44162 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I0603 11:32:13.568429   44162 command_runner.go:130] > # allowed_devices = [
	I0603 11:32:13.568433   44162 command_runner.go:130] > # 	"/dev/fuse",
	I0603 11:32:13.568439   44162 command_runner.go:130] > # ]
	I0603 11:32:13.568443   44162 command_runner.go:130] > # List of additional devices. specified as
	I0603 11:32:13.568452   44162 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I0603 11:32:13.568460   44162 command_runner.go:130] > # If it is empty or commented out, only the devices
	I0603 11:32:13.568465   44162 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0603 11:32:13.568471   44162 command_runner.go:130] > # additional_devices = [
	I0603 11:32:13.568475   44162 command_runner.go:130] > # ]
	I0603 11:32:13.568480   44162 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I0603 11:32:13.568487   44162 command_runner.go:130] > # cdi_spec_dirs = [
	I0603 11:32:13.568594   44162 command_runner.go:130] > # 	"/etc/cdi",
	I0603 11:32:13.568610   44162 command_runner.go:130] > # 	"/var/run/cdi",
	I0603 11:32:13.568615   44162 command_runner.go:130] > # ]
	I0603 11:32:13.568627   44162 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I0603 11:32:13.568647   44162 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I0603 11:32:13.568657   44162 command_runner.go:130] > # Defaults to false.
	I0603 11:32:13.568669   44162 command_runner.go:130] > # device_ownership_from_security_context = false
	I0603 11:32:13.568682   44162 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I0603 11:32:13.568695   44162 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I0603 11:32:13.568704   44162 command_runner.go:130] > # hooks_dir = [
	I0603 11:32:13.568713   44162 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I0603 11:32:13.568721   44162 command_runner.go:130] > # ]
	I0603 11:32:13.568731   44162 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I0603 11:32:13.568744   44162 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I0603 11:32:13.568756   44162 command_runner.go:130] > # its default mounts from the following two files:
	I0603 11:32:13.568762   44162 command_runner.go:130] > #
	I0603 11:32:13.568772   44162 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I0603 11:32:13.568795   44162 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I0603 11:32:13.568808   44162 command_runner.go:130] > #      override the default mounts shipped with the package.
	I0603 11:32:13.568815   44162 command_runner.go:130] > #
	I0603 11:32:13.568827   44162 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I0603 11:32:13.568841   44162 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I0603 11:32:13.568855   44162 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I0603 11:32:13.568866   44162 command_runner.go:130] > #      only add mounts it finds in this file.
	I0603 11:32:13.568875   44162 command_runner.go:130] > #
	I0603 11:32:13.568882   44162 command_runner.go:130] > # default_mounts_file = ""
	I0603 11:32:13.568895   44162 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I0603 11:32:13.568908   44162 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I0603 11:32:13.568917   44162 command_runner.go:130] > pids_limit = 1024
	I0603 11:32:13.568927   44162 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I0603 11:32:13.568941   44162 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I0603 11:32:13.568954   44162 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I0603 11:32:13.568971   44162 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I0603 11:32:13.568981   44162 command_runner.go:130] > # log_size_max = -1
	I0603 11:32:13.568992   44162 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I0603 11:32:13.569004   44162 command_runner.go:130] > # log_to_journald = false
	I0603 11:32:13.569014   44162 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I0603 11:32:13.569022   44162 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I0603 11:32:13.569030   44162 command_runner.go:130] > # Path to directory for container attach sockets.
	I0603 11:32:13.569041   44162 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I0603 11:32:13.569050   44162 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I0603 11:32:13.569062   44162 command_runner.go:130] > # bind_mount_prefix = ""
	I0603 11:32:13.569078   44162 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I0603 11:32:13.569089   44162 command_runner.go:130] > # read_only = false
	I0603 11:32:13.569099   44162 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I0603 11:32:13.569111   44162 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I0603 11:32:13.569118   44162 command_runner.go:130] > # live configuration reload.
	I0603 11:32:13.569128   44162 command_runner.go:130] > # log_level = "info"
	I0603 11:32:13.569138   44162 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I0603 11:32:13.569150   44162 command_runner.go:130] > # This option supports live configuration reload.
	I0603 11:32:13.569159   44162 command_runner.go:130] > # log_filter = ""
	I0603 11:32:13.569169   44162 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I0603 11:32:13.569182   44162 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I0603 11:32:13.569201   44162 command_runner.go:130] > # separated by comma.
	I0603 11:32:13.569219   44162 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0603 11:32:13.569228   44162 command_runner.go:130] > # uid_mappings = ""
	I0603 11:32:13.569240   44162 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I0603 11:32:13.569253   44162 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I0603 11:32:13.569262   44162 command_runner.go:130] > # separated by comma.
	I0603 11:32:13.569275   44162 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0603 11:32:13.569285   44162 command_runner.go:130] > # gid_mappings = ""
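The containerUID:HostUID:Size syntax described above is easiest to see with concrete numbers. A minimal sketch of what these (deprecated) options could look like if uncommented follows; the ranges are illustrative assumptions, not values from this test run:

	# Map container UID/GID 0 onto host ID 100000 for a range of 65536 IDs.
	# Multiple ranges would be comma-separated.
	uid_mappings = "0:100000:65536"
	gid_mappings = "0:100000:65536"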
	I0603 11:32:13.569296   44162 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I0603 11:32:13.569309   44162 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0603 11:32:13.569319   44162 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0603 11:32:13.569333   44162 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0603 11:32:13.569343   44162 command_runner.go:130] > # minimum_mappable_uid = -1
	I0603 11:32:13.569353   44162 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I0603 11:32:13.569365   44162 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0603 11:32:13.569374   44162 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0603 11:32:13.569389   44162 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0603 11:32:13.569400   44162 command_runner.go:130] > # minimum_mappable_gid = -1
	I0603 11:32:13.569410   44162 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I0603 11:32:13.569423   44162 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I0603 11:32:13.569436   44162 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I0603 11:32:13.569459   44162 command_runner.go:130] > # ctr_stop_timeout = 30
	I0603 11:32:13.569475   44162 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I0603 11:32:13.569487   44162 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I0603 11:32:13.569498   44162 command_runner.go:130] > # a kernel separating runtime (like kata).
	I0603 11:32:13.569508   44162 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I0603 11:32:13.569514   44162 command_runner.go:130] > drop_infra_ctr = false
	I0603 11:32:13.569524   44162 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I0603 11:32:13.569536   44162 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I0603 11:32:13.569549   44162 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I0603 11:32:13.569559   44162 command_runner.go:130] > # infra_ctr_cpuset = ""
	I0603 11:32:13.569576   44162 command_runner.go:130] > # shared_cpuset  determines the CPU set which is allowed to be shared between guaranteed containers,
	I0603 11:32:13.569589   44162 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I0603 11:32:13.569602   44162 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I0603 11:32:13.569614   44162 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I0603 11:32:13.569624   44162 command_runner.go:130] > # shared_cpuset = ""
	I0603 11:32:13.569643   44162 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I0603 11:32:13.569657   44162 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I0603 11:32:13.569664   44162 command_runner.go:130] > # namespaces_dir = "/var/run"
	I0603 11:32:13.569677   44162 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I0603 11:32:13.569687   44162 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I0603 11:32:13.569699   44162 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I0603 11:32:13.569712   44162 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I0603 11:32:13.569722   44162 command_runner.go:130] > # enable_criu_support = false
	I0603 11:32:13.569730   44162 command_runner.go:130] > # Enable/disable the generation of the container,
	I0603 11:32:13.569741   44162 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG
	I0603 11:32:13.569752   44162 command_runner.go:130] > # enable_pod_events = false
	I0603 11:32:13.569766   44162 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0603 11:32:13.569792   44162 command_runner.go:130] > # The name is matched against the runtimes map below.
	I0603 11:32:13.569803   44162 command_runner.go:130] > # default_runtime = "runc"
	I0603 11:32:13.569815   44162 command_runner.go:130] > # A list of paths that, when absent from the host,
	I0603 11:32:13.569830   44162 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I0603 11:32:13.569847   44162 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I0603 11:32:13.569857   44162 command_runner.go:130] > # creation as a file is not desired either.
	I0603 11:32:13.569878   44162 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I0603 11:32:13.569890   44162 command_runner.go:130] > # the hostname is being managed dynamically.
	I0603 11:32:13.569900   44162 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I0603 11:32:13.569907   44162 command_runner.go:130] > # ]
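The comments above name /etc/hostname as the motivating case for rejecting absent mount sources. A sketch of how that could be expressed in this config, shown only for illustration, is:

	# Fail container creation if /etc/hostname is requested as a bind-mount source
	# but is missing on the host, instead of creating it as a directory.
	absent_mount_sources_to_reject = [
		"/etc/hostname",
	]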
	I0603 11:32:13.569917   44162 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I0603 11:32:13.569929   44162 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I0603 11:32:13.569939   44162 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I0603 11:32:13.569951   44162 command_runner.go:130] > # Each entry in the table should follow the format:
	I0603 11:32:13.569960   44162 command_runner.go:130] > #
	I0603 11:32:13.569971   44162 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I0603 11:32:13.569980   44162 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I0603 11:32:13.570042   44162 command_runner.go:130] > # runtime_type = "oci"
	I0603 11:32:13.570056   44162 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I0603 11:32:13.570064   44162 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I0603 11:32:13.570079   44162 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I0603 11:32:13.570090   44162 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I0603 11:32:13.570096   44162 command_runner.go:130] > # monitor_env = []
	I0603 11:32:13.570115   44162 command_runner.go:130] > # privileged_without_host_devices = false
	I0603 11:32:13.570125   44162 command_runner.go:130] > # allowed_annotations = []
	I0603 11:32:13.570134   44162 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I0603 11:32:13.570143   44162 command_runner.go:130] > # Where:
	I0603 11:32:13.570151   44162 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I0603 11:32:13.570162   44162 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I0603 11:32:13.570171   44162 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I0603 11:32:13.570186   44162 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I0603 11:32:13.570196   44162 command_runner.go:130] > #   in $PATH.
	I0603 11:32:13.570208   44162 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I0603 11:32:13.570219   44162 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I0603 11:32:13.570231   44162 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I0603 11:32:13.570239   44162 command_runner.go:130] > #   state.
	I0603 11:32:13.570252   44162 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I0603 11:32:13.570264   44162 command_runner.go:130] > #   file. This can only be used with when using the VM runtime_type.
	I0603 11:32:13.570276   44162 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I0603 11:32:13.570288   44162 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I0603 11:32:13.570301   44162 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I0603 11:32:13.570315   44162 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I0603 11:32:13.570325   44162 command_runner.go:130] > #   The currently recognized values are:
	I0603 11:32:13.570335   44162 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I0603 11:32:13.570348   44162 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I0603 11:32:13.570357   44162 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I0603 11:32:13.570370   44162 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I0603 11:32:13.570384   44162 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I0603 11:32:13.570397   44162 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I0603 11:32:13.570409   44162 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I0603 11:32:13.570422   44162 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I0603 11:32:13.570451   44162 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I0603 11:32:13.570468   44162 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I0603 11:32:13.570479   44162 command_runner.go:130] > #   deprecated option "conmon".
	I0603 11:32:13.570493   44162 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I0603 11:32:13.570506   44162 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I0603 11:32:13.570519   44162 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I0603 11:32:13.570531   44162 command_runner.go:130] > #   should be moved to the container's cgroup
	I0603 11:32:13.570544   44162 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the monitor.
	I0603 11:32:13.570563   44162 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I0603 11:32:13.570577   44162 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I0603 11:32:13.570589   44162 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I0603 11:32:13.570597   44162 command_runner.go:130] > #
	I0603 11:32:13.570605   44162 command_runner.go:130] > # Using the seccomp notifier feature:
	I0603 11:32:13.570613   44162 command_runner.go:130] > #
	I0603 11:32:13.570624   44162 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I0603 11:32:13.570637   44162 command_runner.go:130] > # blocked syscalls (permission denied errors) have negative impact on the workload.
	I0603 11:32:13.570645   44162 command_runner.go:130] > #
	I0603 11:32:13.570658   44162 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I0603 11:32:13.570671   44162 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I0603 11:32:13.570678   44162 command_runner.go:130] > #
	I0603 11:32:13.570688   44162 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I0603 11:32:13.570696   44162 command_runner.go:130] > # feature.
	I0603 11:32:13.570705   44162 command_runner.go:130] > #
	I0603 11:32:13.570718   44162 command_runner.go:130] > # If everything is setup, CRI-O will modify chosen seccomp profiles for
	I0603 11:32:13.570731   44162 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I0603 11:32:13.570744   44162 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I0603 11:32:13.570757   44162 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I0603 11:32:13.570770   44162 command_runner.go:130] > # seconds if the value of "io.kubernetes.cri-o.seccompNotifierAction=stop".
	I0603 11:32:13.570778   44162 command_runner.go:130] > #
	I0603 11:32:13.570790   44162 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I0603 11:32:13.570803   44162 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I0603 11:32:13.570811   44162 command_runner.go:130] > #
	I0603 11:32:13.570822   44162 command_runner.go:130] > # This also means that the Pods "restartPolicy" has to be set to "Never",
	I0603 11:32:13.570834   44162 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I0603 11:32:13.570841   44162 command_runner.go:130] > #
	I0603 11:32:13.570853   44162 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I0603 11:32:13.570862   44162 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I0603 11:32:13.570871   44162 command_runner.go:130] > # limitation.
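Putting the seccomp notifier description above into config terms: a runtime handler must list the annotation in allowed_annotations before CRI-O will honor it on a pod. The handler name below is hypothetical and is not part of this cluster's configuration:

	[crio.runtime.runtimes.runc-notifier]
	runtime_path = "/usr/bin/runc"
	# Permit the notifier annotation so CRI-O can react to blocked syscalls.
	allowed_annotations = [
		"io.kubernetes.cri-o.seccompNotifierAction",
	]

A pod opting in would then carry the annotation io.kubernetes.cri-o.seccompNotifierAction=stop and set restartPolicy to Never, as noted above.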
	I0603 11:32:13.570880   44162 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I0603 11:32:13.570888   44162 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I0603 11:32:13.570897   44162 command_runner.go:130] > runtime_type = "oci"
	I0603 11:32:13.570906   44162 command_runner.go:130] > runtime_root = "/run/runc"
	I0603 11:32:13.570916   44162 command_runner.go:130] > runtime_config_path = ""
	I0603 11:32:13.570926   44162 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I0603 11:32:13.570944   44162 command_runner.go:130] > monitor_cgroup = "pod"
	I0603 11:32:13.570954   44162 command_runner.go:130] > monitor_exec_cgroup = ""
	I0603 11:32:13.570964   44162 command_runner.go:130] > monitor_env = [
	I0603 11:32:13.570976   44162 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0603 11:32:13.570984   44162 command_runner.go:130] > ]
	I0603 11:32:13.570994   44162 command_runner.go:130] > privileged_without_host_devices = false
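For comparison with the runc entry above, the runtime table format documented earlier would let an additional handler be declared along these lines; the crun name and paths are assumptions for illustration only:

	[crio.runtime.runtimes.crun]
	runtime_path = "/usr/bin/crun"   # assumed location; must exist on the host, otherwise omit and rely on $PATH
	runtime_type = "oci"
	runtime_root = "/run/crun"
	monitor_path = "/usr/libexec/crio/conmon"
	monitor_env = [
		"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	]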
	I0603 11:32:13.571008   44162 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I0603 11:32:13.571020   44162 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I0603 11:32:13.571033   44162 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I0603 11:32:13.571061   44162 command_runner.go:130] > # Each workload, has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I0603 11:32:13.571083   44162 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I0603 11:32:13.571095   44162 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I0603 11:32:13.571111   44162 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I0603 11:32:13.571124   44162 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I0603 11:32:13.571135   44162 command_runner.go:130] > # signifying for that resource type to override the default value.
	I0603 11:32:13.571148   44162 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I0603 11:32:13.571156   44162 command_runner.go:130] > # Example:
	I0603 11:32:13.571163   44162 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I0603 11:32:13.571170   44162 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I0603 11:32:13.571176   44162 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I0603 11:32:13.571183   44162 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I0603 11:32:13.571188   44162 command_runner.go:130] > # cpuset = 0
	I0603 11:32:13.571193   44162 command_runner.go:130] > # cpushares = "0-1"
	I0603 11:32:13.571198   44162 command_runner.go:130] > # Where:
	I0603 11:32:13.571204   44162 command_runner.go:130] > # The workload name is workload-type.
	I0603 11:32:13.571214   44162 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I0603 11:32:13.571221   44162 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I0603 11:32:13.571230   44162 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I0603 11:32:13.571241   44162 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I0603 11:32:13.571249   44162 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I0603 11:32:13.571256   44162 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I0603 11:32:13.571266   44162 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I0603 11:32:13.571273   44162 command_runner.go:130] > # Default value is set to true
	I0603 11:32:13.571279   44162 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I0603 11:32:13.571288   44162 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I0603 11:32:13.571295   44162 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I0603 11:32:13.571309   44162 command_runner.go:130] > # Default value is set to 'false'
	I0603 11:32:13.571315   44162 command_runner.go:130] > # disable_hostport_mapping = false
	I0603 11:32:13.571325   44162 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I0603 11:32:13.571330   44162 command_runner.go:130] > #
	I0603 11:32:13.571343   44162 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I0603 11:32:13.571354   44162 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I0603 11:32:13.571367   44162 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I0603 11:32:13.571379   44162 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I0603 11:32:13.571391   44162 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I0603 11:32:13.571401   44162 command_runner.go:130] > [crio.image]
	I0603 11:32:13.571416   44162 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I0603 11:32:13.571426   44162 command_runner.go:130] > # default_transport = "docker://"
	I0603 11:32:13.571439   44162 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I0603 11:32:13.571452   44162 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I0603 11:32:13.571460   44162 command_runner.go:130] > # global_auth_file = ""
	I0603 11:32:13.571469   44162 command_runner.go:130] > # The image used to instantiate infra containers.
	I0603 11:32:13.571478   44162 command_runner.go:130] > # This option supports live configuration reload.
	I0603 11:32:13.571488   44162 command_runner.go:130] > # pause_image = "registry.k8s.io/pause:3.9"
	I0603 11:32:13.571505   44162 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I0603 11:32:13.571518   44162 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I0603 11:32:13.571528   44162 command_runner.go:130] > # This option supports live configuration reload.
	I0603 11:32:13.571538   44162 command_runner.go:130] > # pause_image_auth_file = ""
	I0603 11:32:13.571551   44162 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I0603 11:32:13.571562   44162 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I0603 11:32:13.571574   44162 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I0603 11:32:13.571584   44162 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I0603 11:32:13.571593   44162 command_runner.go:130] > # pause_command = "/pause"
	I0603 11:32:13.571605   44162 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I0603 11:32:13.571617   44162 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I0603 11:32:13.571630   44162 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I0603 11:32:13.571643   44162 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I0603 11:32:13.571654   44162 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I0603 11:32:13.571665   44162 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I0603 11:32:13.571674   44162 command_runner.go:130] > # pinned_images = [
	I0603 11:32:13.571682   44162 command_runner.go:130] > # ]
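The exact/glob/keyword matching rules for pinned_images described above could be exercised with entries such as the following; the image names are placeholders for illustration, not images pinned in this run:

	pinned_images = [
		"registry.k8s.io/pause:3.9",        # exact match: must equal the full image name
		"registry.k8s.io/kube-apiserver*",  # glob match: wildcard only at the end
		"*coredns*",                        # keyword match: wildcards on both ends
	]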
	I0603 11:32:13.571693   44162 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I0603 11:32:13.571714   44162 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I0603 11:32:13.571728   44162 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I0603 11:32:13.571739   44162 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I0603 11:32:13.571749   44162 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I0603 11:32:13.571758   44162 command_runner.go:130] > # signature_policy = ""
	I0603 11:32:13.571769   44162 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I0603 11:32:13.571781   44162 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I0603 11:32:13.571793   44162 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I0603 11:32:13.571804   44162 command_runner.go:130] > # or the concatenated path is non existent, then the signature_policy or system
	I0603 11:32:13.571815   44162 command_runner.go:130] > # wide policy will be used as fallback. Must be an absolute path.
	I0603 11:32:13.571826   44162 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
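Following the <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json rule above, a sketch of how a per-namespace policy would be resolved, with the namespace chosen purely for illustration, is:

	signature_policy_dir = "/etc/crio/policies"
	# A pull for a pod in namespace "kube-system" would consult
	#   /etc/crio/policies/kube-system.json
	# and fall back to signature_policy or the system-wide policy if that file is absent.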
	I0603 11:32:13.571838   44162 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I0603 11:32:13.571850   44162 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I0603 11:32:13.571860   44162 command_runner.go:130] > # changing them here.
	I0603 11:32:13.571870   44162 command_runner.go:130] > # insecure_registries = [
	I0603 11:32:13.571878   44162 command_runner.go:130] > # ]
	I0603 11:32:13.571890   44162 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I0603 11:32:13.571900   44162 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I0603 11:32:13.571910   44162 command_runner.go:130] > # image_volumes = "mkdir"
	I0603 11:32:13.571920   44162 command_runner.go:130] > # Temporary directory to use for storing big files
	I0603 11:32:13.571930   44162 command_runner.go:130] > # big_files_temporary_dir = ""
	I0603 11:32:13.571942   44162 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I0603 11:32:13.571951   44162 command_runner.go:130] > # CNI plugins.
	I0603 11:32:13.571959   44162 command_runner.go:130] > [crio.network]
	I0603 11:32:13.571972   44162 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I0603 11:32:13.571985   44162 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I0603 11:32:13.571993   44162 command_runner.go:130] > # cni_default_network = ""
	I0603 11:32:13.572002   44162 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I0603 11:32:13.572009   44162 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I0603 11:32:13.572014   44162 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I0603 11:32:13.572021   44162 command_runner.go:130] > # plugin_dirs = [
	I0603 11:32:13.572025   44162 command_runner.go:130] > # 	"/opt/cni/bin/",
	I0603 11:32:13.572030   44162 command_runner.go:130] > # ]
	I0603 11:32:13.572036   44162 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I0603 11:32:13.572043   44162 command_runner.go:130] > [crio.metrics]
	I0603 11:32:13.572051   44162 command_runner.go:130] > # Globally enable or disable metrics support.
	I0603 11:32:13.572073   44162 command_runner.go:130] > enable_metrics = true
	I0603 11:32:13.572084   44162 command_runner.go:130] > # Specify enabled metrics collectors.
	I0603 11:32:13.572091   44162 command_runner.go:130] > # Per default all metrics are enabled.
	I0603 11:32:13.572105   44162 command_runner.go:130] > # It is possible, to prefix the metrics with "container_runtime_" and "crio_".
	I0603 11:32:13.572117   44162 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I0603 11:32:13.572128   44162 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I0603 11:32:13.572137   44162 command_runner.go:130] > # metrics_collectors = [
	I0603 11:32:13.572143   44162 command_runner.go:130] > # 	"operations",
	I0603 11:32:13.572154   44162 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I0603 11:32:13.572162   44162 command_runner.go:130] > # 	"operations_latency_microseconds",
	I0603 11:32:13.572170   44162 command_runner.go:130] > # 	"operations_errors",
	I0603 11:32:13.572178   44162 command_runner.go:130] > # 	"image_pulls_by_digest",
	I0603 11:32:13.572187   44162 command_runner.go:130] > # 	"image_pulls_by_name",
	I0603 11:32:13.572196   44162 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I0603 11:32:13.572206   44162 command_runner.go:130] > # 	"image_pulls_failures",
	I0603 11:32:13.572215   44162 command_runner.go:130] > # 	"image_pulls_successes",
	I0603 11:32:13.572224   44162 command_runner.go:130] > # 	"image_pulls_layer_size",
	I0603 11:32:13.572233   44162 command_runner.go:130] > # 	"image_layer_reuse",
	I0603 11:32:13.572243   44162 command_runner.go:130] > # 	"containers_events_dropped_total",
	I0603 11:32:13.572249   44162 command_runner.go:130] > # 	"containers_oom_total",
	I0603 11:32:13.572257   44162 command_runner.go:130] > # 	"containers_oom",
	I0603 11:32:13.572261   44162 command_runner.go:130] > # 	"processes_defunct",
	I0603 11:32:13.572267   44162 command_runner.go:130] > # 	"operations_total",
	I0603 11:32:13.572272   44162 command_runner.go:130] > # 	"operations_latency_seconds",
	I0603 11:32:13.572280   44162 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I0603 11:32:13.572286   44162 command_runner.go:130] > # 	"operations_errors_total",
	I0603 11:32:13.572291   44162 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I0603 11:32:13.572297   44162 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I0603 11:32:13.572302   44162 command_runner.go:130] > # 	"image_pulls_failure_total",
	I0603 11:32:13.572309   44162 command_runner.go:130] > # 	"image_pulls_success_total",
	I0603 11:32:13.572313   44162 command_runner.go:130] > # 	"image_layer_reuse_total",
	I0603 11:32:13.572317   44162 command_runner.go:130] > # 	"containers_oom_count_total",
	I0603 11:32:13.572324   44162 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I0603 11:32:13.572330   44162 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I0603 11:32:13.572334   44162 command_runner.go:130] > # ]
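Given the naming rule above ("operations", "crio_operations" and "container_runtime_crio_operations" are treated alike), restricting the exporter to a few collectors could look like the sketch below; this run keeps the default of all collectors enabled:

	[crio.metrics]
	enable_metrics = true
	metrics_port = 9090
	metrics_collectors = [
		"operations",
		"image_pulls_failures",
		"containers_oom_total",
	]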
	I0603 11:32:13.572341   44162 command_runner.go:130] > # The port on which the metrics server will listen.
	I0603 11:32:13.572354   44162 command_runner.go:130] > # metrics_port = 9090
	I0603 11:32:13.572365   44162 command_runner.go:130] > # Local socket path to bind the metrics server to
	I0603 11:32:13.572375   44162 command_runner.go:130] > # metrics_socket = ""
	I0603 11:32:13.572385   44162 command_runner.go:130] > # The certificate for the secure metrics server.
	I0603 11:32:13.572398   44162 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I0603 11:32:13.572411   44162 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I0603 11:32:13.572422   44162 command_runner.go:130] > # certificate on any modification event.
	I0603 11:32:13.572429   44162 command_runner.go:130] > # metrics_cert = ""
	I0603 11:32:13.572440   44162 command_runner.go:130] > # The certificate key for the secure metrics server.
	I0603 11:32:13.572452   44162 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I0603 11:32:13.572462   44162 command_runner.go:130] > # metrics_key = ""
	I0603 11:32:13.572473   44162 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I0603 11:32:13.572482   44162 command_runner.go:130] > [crio.tracing]
	I0603 11:32:13.572493   44162 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I0603 11:32:13.572501   44162 command_runner.go:130] > # enable_tracing = false
	I0603 11:32:13.572506   44162 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I0603 11:32:13.572513   44162 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I0603 11:32:13.572519   44162 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I0603 11:32:13.572526   44162 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I0603 11:32:13.572530   44162 command_runner.go:130] > # CRI-O NRI configuration.
	I0603 11:32:13.572537   44162 command_runner.go:130] > [crio.nri]
	I0603 11:32:13.572541   44162 command_runner.go:130] > # Globally enable or disable NRI.
	I0603 11:32:13.572547   44162 command_runner.go:130] > # enable_nri = false
	I0603 11:32:13.572552   44162 command_runner.go:130] > # NRI socket to listen on.
	I0603 11:32:13.572559   44162 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I0603 11:32:13.572563   44162 command_runner.go:130] > # NRI plugin directory to use.
	I0603 11:32:13.572570   44162 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I0603 11:32:13.572575   44162 command_runner.go:130] > # NRI plugin configuration directory to use.
	I0603 11:32:13.572581   44162 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I0603 11:32:13.572587   44162 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I0603 11:32:13.572594   44162 command_runner.go:130] > # nri_disable_connections = false
	I0603 11:32:13.572599   44162 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I0603 11:32:13.572605   44162 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I0603 11:32:13.572611   44162 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I0603 11:32:13.572617   44162 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
	I0603 11:32:13.572623   44162 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I0603 11:32:13.572634   44162 command_runner.go:130] > [crio.stats]
	I0603 11:32:13.572641   44162 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I0603 11:32:13.572647   44162 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I0603 11:32:13.572653   44162 command_runner.go:130] > # stats_collection_period = 0
	I0603 11:32:13.572683   44162 command_runner.go:130] ! time="2024-06-03 11:32:13.534399797Z" level=info msg="Starting CRI-O, version: 1.29.1, git: unknown(clean)"
	I0603 11:32:13.572695   44162 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I0603 11:32:13.572786   44162 cni.go:84] Creating CNI manager for ""
	I0603 11:32:13.572793   44162 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0603 11:32:13.572803   44162 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0603 11:32:13.572824   44162 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.232 APIServerPort:8443 KubernetesVersion:v1.30.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-505550 NodeName:multinode-505550 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.232"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.232 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:
/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0603 11:32:13.572946   44162 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.232
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-505550"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.232
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.232"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0603 11:32:13.572997   44162 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.1
	I0603 11:32:13.583135   44162 command_runner.go:130] > kubeadm
	I0603 11:32:13.583147   44162 command_runner.go:130] > kubectl
	I0603 11:32:13.583151   44162 command_runner.go:130] > kubelet
	I0603 11:32:13.583258   44162 binaries.go:44] Found k8s binaries, skipping transfer
	I0603 11:32:13.583323   44162 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0603 11:32:13.592639   44162 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (316 bytes)
	I0603 11:32:13.609004   44162 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0603 11:32:13.625214   44162 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2160 bytes)
	I0603 11:32:13.641269   44162 ssh_runner.go:195] Run: grep 192.168.39.232	control-plane.minikube.internal$ /etc/hosts
	I0603 11:32:13.645167   44162 command_runner.go:130] > 192.168.39.232	control-plane.minikube.internal
	I0603 11:32:13.645231   44162 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0603 11:32:13.783390   44162 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0603 11:32:13.798463   44162 certs.go:68] Setting up /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/multinode-505550 for IP: 192.168.39.232
	I0603 11:32:13.798485   44162 certs.go:194] generating shared ca certs ...
	I0603 11:32:13.798498   44162 certs.go:226] acquiring lock for ca certs: {Name:mk50092df3bdecbf959926adc377ee06b75aacd9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 11:32:13.798682   44162 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19008-7755/.minikube/ca.key
	I0603 11:32:13.798745   44162 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19008-7755/.minikube/proxy-client-ca.key
	I0603 11:32:13.798759   44162 certs.go:256] generating profile certs ...
	I0603 11:32:13.798858   44162 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/multinode-505550/client.key
	I0603 11:32:13.798942   44162 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/multinode-505550/apiserver.key.5ddf5b8c
	I0603 11:32:13.798990   44162 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/multinode-505550/proxy-client.key
	I0603 11:32:13.799004   44162 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19008-7755/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0603 11:32:13.799023   44162 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19008-7755/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0603 11:32:13.799082   44162 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19008-7755/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0603 11:32:13.799102   44162 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19008-7755/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0603 11:32:13.799116   44162 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/multinode-505550/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0603 11:32:13.799134   44162 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/multinode-505550/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0603 11:32:13.799150   44162 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/multinode-505550/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0603 11:32:13.799166   44162 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/multinode-505550/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0603 11:32:13.799230   44162 certs.go:484] found cert: /home/jenkins/minikube-integration/19008-7755/.minikube/certs/15028.pem (1338 bytes)
	W0603 11:32:13.799268   44162 certs.go:480] ignoring /home/jenkins/minikube-integration/19008-7755/.minikube/certs/15028_empty.pem, impossibly tiny 0 bytes
	I0603 11:32:13.799280   44162 certs.go:484] found cert: /home/jenkins/minikube-integration/19008-7755/.minikube/certs/ca-key.pem (1679 bytes)
	I0603 11:32:13.799313   44162 certs.go:484] found cert: /home/jenkins/minikube-integration/19008-7755/.minikube/certs/ca.pem (1082 bytes)
	I0603 11:32:13.799366   44162 certs.go:484] found cert: /home/jenkins/minikube-integration/19008-7755/.minikube/certs/cert.pem (1123 bytes)
	I0603 11:32:13.799406   44162 certs.go:484] found cert: /home/jenkins/minikube-integration/19008-7755/.minikube/certs/key.pem (1679 bytes)
	I0603 11:32:13.799458   44162 certs.go:484] found cert: /home/jenkins/minikube-integration/19008-7755/.minikube/files/etc/ssl/certs/150282.pem (1708 bytes)
	I0603 11:32:13.799498   44162 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19008-7755/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0603 11:32:13.799518   44162 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19008-7755/.minikube/certs/15028.pem -> /usr/share/ca-certificates/15028.pem
	I0603 11:32:13.799536   44162 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19008-7755/.minikube/files/etc/ssl/certs/150282.pem -> /usr/share/ca-certificates/150282.pem
	I0603 11:32:13.800062   44162 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0603 11:32:13.824578   44162 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0603 11:32:13.847736   44162 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0603 11:32:13.871958   44162 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0603 11:32:13.895332   44162 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/multinode-505550/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0603 11:32:13.918172   44162 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/multinode-505550/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0603 11:32:13.941077   44162 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/multinode-505550/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0603 11:32:13.964480   44162 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/multinode-505550/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0603 11:32:13.988110   44162 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0603 11:32:14.011440   44162 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/certs/15028.pem --> /usr/share/ca-certificates/15028.pem (1338 bytes)
	I0603 11:32:14.034547   44162 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/files/etc/ssl/certs/150282.pem --> /usr/share/ca-certificates/150282.pem (1708 bytes)
	I0603 11:32:14.057428   44162 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0603 11:32:14.073974   44162 ssh_runner.go:195] Run: openssl version
	I0603 11:32:14.079523   44162 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0603 11:32:14.079687   44162 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0603 11:32:14.090250   44162 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0603 11:32:14.094427   44162 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Jun  3 10:39 /usr/share/ca-certificates/minikubeCA.pem
	I0603 11:32:14.094646   44162 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jun  3 10:39 /usr/share/ca-certificates/minikubeCA.pem
	I0603 11:32:14.094690   44162 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0603 11:32:14.100162   44162 command_runner.go:130] > b5213941
	I0603 11:32:14.100234   44162 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0603 11:32:14.109493   44162 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15028.pem && ln -fs /usr/share/ca-certificates/15028.pem /etc/ssl/certs/15028.pem"
	I0603 11:32:14.119950   44162 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15028.pem
	I0603 11:32:14.124300   44162 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Jun  3 10:51 /usr/share/ca-certificates/15028.pem
	I0603 11:32:14.124357   44162 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jun  3 10:51 /usr/share/ca-certificates/15028.pem
	I0603 11:32:14.124403   44162 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15028.pem
	I0603 11:32:14.129757   44162 command_runner.go:130] > 51391683
	I0603 11:32:14.129880   44162 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/15028.pem /etc/ssl/certs/51391683.0"
	I0603 11:32:14.138935   44162 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/150282.pem && ln -fs /usr/share/ca-certificates/150282.pem /etc/ssl/certs/150282.pem"
	I0603 11:32:14.149946   44162 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/150282.pem
	I0603 11:32:14.154189   44162 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Jun  3 10:51 /usr/share/ca-certificates/150282.pem
	I0603 11:32:14.154334   44162 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jun  3 10:51 /usr/share/ca-certificates/150282.pem
	I0603 11:32:14.154377   44162 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/150282.pem
	I0603 11:32:14.159818   44162 command_runner.go:130] > 3ec20f2e
	I0603 11:32:14.159998   44162 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/150282.pem /etc/ssl/certs/3ec20f2e.0"
	I0603 11:32:14.169122   44162 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0603 11:32:14.173390   44162 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0603 11:32:14.173409   44162 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I0603 11:32:14.173414   44162 command_runner.go:130] > Device: 253,1	Inode: 8386582     Links: 1
	I0603 11:32:14.173421   44162 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0603 11:32:14.173427   44162 command_runner.go:130] > Access: 2024-06-03 11:25:53.328128273 +0000
	I0603 11:32:14.173435   44162 command_runner.go:130] > Modify: 2024-06-03 11:25:53.328128273 +0000
	I0603 11:32:14.173447   44162 command_runner.go:130] > Change: 2024-06-03 11:25:53.328128273 +0000
	I0603 11:32:14.173456   44162 command_runner.go:130] >  Birth: 2024-06-03 11:25:53.328128273 +0000
	I0603 11:32:14.173543   44162 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0603 11:32:14.178999   44162 command_runner.go:130] > Certificate will not expire
	I0603 11:32:14.179057   44162 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0603 11:32:14.184371   44162 command_runner.go:130] > Certificate will not expire
	I0603 11:32:14.184413   44162 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0603 11:32:14.189727   44162 command_runner.go:130] > Certificate will not expire
	I0603 11:32:14.189775   44162 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0603 11:32:14.194905   44162 command_runner.go:130] > Certificate will not expire
	I0603 11:32:14.195085   44162 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0603 11:32:14.200370   44162 command_runner.go:130] > Certificate will not expire
	I0603 11:32:14.200621   44162 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0603 11:32:14.205998   44162 command_runner.go:130] > Certificate will not expire
	I0603 11:32:14.206057   44162 kubeadm.go:391] StartCluster: {Name:multinode-505550 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.
1 ClusterName:multinode-505550 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.232 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.227 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.172 Port:0 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false
inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: Disable
Optimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0603 11:32:14.206171   44162 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0603 11:32:14.206233   44162 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0603 11:32:14.244064   44162 command_runner.go:130] > 3e620850e58c82e87316b8c1ff84a833176235ba76dd48543684d19b0982d37d
	I0603 11:32:14.244088   44162 command_runner.go:130] > 4e706590e463e059ba314f8383faf1ff1548d0370c0f10dabde12a8dd107c284
	I0603 11:32:14.244094   44162 command_runner.go:130] > 43e352950fd35bb947f3ab7aaf02e79570246ddf2cac8d458867155296100368
	I0603 11:32:14.244100   44162 command_runner.go:130] > d6635384a19f3973b8ebdd125fd196355c7f163f405241bcdcb3848c0ae5bfc8
	I0603 11:32:14.244106   44162 command_runner.go:130] > e609ee17b90fa82d5d04fe16520a0c6782e7dea24d30dbb0e9379f9249c34dd0
	I0603 11:32:14.244115   44162 command_runner.go:130] > 9829e2309203856bfbdd1f4b1b8799484a5e0888c43841f2f409be895f44ac40
	I0603 11:32:14.244125   44162 command_runner.go:130] > 37aee72ac00be32936d32e337e9e01a378fb4992a9cf7ed31775dcbfa8ef8d20
	I0603 11:32:14.244146   44162 command_runner.go:130] > 9bc2d863a2009fba4ad23b3993c51be79fa80cc8da9b5c150ce013d6fd17f6c9
	I0603 11:32:14.244168   44162 cri.go:89] found id: "3e620850e58c82e87316b8c1ff84a833176235ba76dd48543684d19b0982d37d"
	I0603 11:32:14.244178   44162 cri.go:89] found id: "4e706590e463e059ba314f8383faf1ff1548d0370c0f10dabde12a8dd107c284"
	I0603 11:32:14.244183   44162 cri.go:89] found id: "43e352950fd35bb947f3ab7aaf02e79570246ddf2cac8d458867155296100368"
	I0603 11:32:14.244185   44162 cri.go:89] found id: "d6635384a19f3973b8ebdd125fd196355c7f163f405241bcdcb3848c0ae5bfc8"
	I0603 11:32:14.244188   44162 cri.go:89] found id: "e609ee17b90fa82d5d04fe16520a0c6782e7dea24d30dbb0e9379f9249c34dd0"
	I0603 11:32:14.244191   44162 cri.go:89] found id: "9829e2309203856bfbdd1f4b1b8799484a5e0888c43841f2f409be895f44ac40"
	I0603 11:32:14.244194   44162 cri.go:89] found id: "37aee72ac00be32936d32e337e9e01a378fb4992a9cf7ed31775dcbfa8ef8d20"
	I0603 11:32:14.244196   44162 cri.go:89] found id: "9bc2d863a2009fba4ad23b3993c51be79fa80cc8da9b5c150ce013d6fd17f6c9"
	I0603 11:32:14.244199   44162 cri.go:89] found id: ""
	I0603 11:32:14.244247   44162 ssh_runner.go:195] Run: sudo runc list -f json
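	The container IDs listed above come from `crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system`, which prints only the IDs of containers (running or exited) whose pods are in the kube-system namespace. A hedged sketch of issuing the same command from Go, assuming crictl is installed and configured for the local CRI-O socket:

	// listids.go: list kube-system container IDs via crictl, roughly mirroring
	// the cri.go invocation shown in the log above.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// kubeSystemContainerIDs shells out to crictl and returns one ID per line.
	func kubeSystemContainerIDs() ([]string, error) {
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
			"--label", "io.kubernetes.pod.namespace=kube-system").Output()
		if err != nil {
			return nil, err
		}
		var ids []string
		for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
			if line != "" {
				ids = append(ids, line)
			}
		}
		return ids, nil
	}

	func main() {
		ids, err := kubeSystemContainerIDs()
		if err != nil {
			fmt.Println("error:", err)
			return
		}
		for _, id := range ids {
			fmt.Println("found id:", id)
		}
	}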
	
	
	==> CRI-O <==
	Jun 03 11:36:03 multinode-505550 crio[2864]: time="2024-06-03 11:36:03.511368125Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=cd2f100d-f029-4e5b-a79c-744e5c44d0e4 name=/runtime.v1.RuntimeService/Version
	Jun 03 11:36:03 multinode-505550 crio[2864]: time="2024-06-03 11:36:03.512721079Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=59376c16-f5d9-4cf8-9d40-479caa7caa12 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 03 11:36:03 multinode-505550 crio[2864]: time="2024-06-03 11:36:03.513150255Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1717414563513128877,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143025,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=59376c16-f5d9-4cf8-9d40-479caa7caa12 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 03 11:36:03 multinode-505550 crio[2864]: time="2024-06-03 11:36:03.514234961Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a73e6eb3-11c4-41d3-86ce-7dbbc98df69a name=/runtime.v1.RuntimeService/ListContainers
	Jun 03 11:36:03 multinode-505550 crio[2864]: time="2024-06-03 11:36:03.514320274Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a73e6eb3-11c4-41d3-86ce-7dbbc98df69a name=/runtime.v1.RuntimeService/ListContainers
	Jun 03 11:36:03 multinode-505550 crio[2864]: time="2024-06-03 11:36:03.515147659Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:df483073fa3fb785c493fc6afdd2c6f0888dbffc9cbf1dbb06e011bb502c9cab,PodSandboxId:0318de4e55dcd8d686e734d0076108297ae1571ea735347a8c24c6922955eed8,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1717414374153235949,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-nrpnb,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 39d1f4e2-260f-4fd2-9989-c77d0dd21049,},Annotations:map[string]string{io.kubernetes.container.hash: 88effacd,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cac8e61c821989854b2f55119cfd9761a0a47f8ea2393d5c18efb4b8ae23279a,PodSandboxId:2e30d008cc8ccbd818be8736680ac3eccab6b30250e97da63b0ca3670a803e69,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CONTAINER_RUNNING,CreatedAt:1717414340631225936,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-x9tml,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8009dbea-f826-44c0-87e5-229b6efdfadc,},Annotations:map[string]string{io.kubernetes.container.hash: c0501522,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:00339123e1f21e4c4c01ccd77117bb918711c7d5531b771de53ffc77481ca343,PodSandboxId:99df45d71c3a54824df5fc90b173bf85b2fea7e4dabcb0c2ffb7fb80727681ec,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1717414340490844649,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-ljnxn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 28236795-201d-4d98-a57f-3ec7dda17017,},Annotations:map[string]string{io.kubernetes.container.hash: 5547a5e4,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:36f3491249f81e7d1b784343518b31a2d13b59ef1c3d80808a24954de1ad75cb,PodSandboxId:e4acd7567116c1893451e0bf24f2df35f071b5ae59c04be178981941e2f21c62,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1717414340422133311,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cdb43188-2f13-4ea2-b906-3428f776eeb4,},An
notations:map[string]string{io.kubernetes.container.hash: d3389f8b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7a7dc7ea2138c737fb8cb1375c84e7cbe5eda8ccfff2a0abd6c6e6098e38901e,PodSandboxId:429c3c650d7e535703c47a86f29d1ecbc9e9a79f435a4dd561125b9f80e103b9,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:1717414340422169113,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-nsx2s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 261dd21c-29c2-4178-8c07-95f680e12cd1,},Annotations:map[string]string{io.ku
bernetes.container.hash: c3d14b68,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1aa2017e346a1a9e3efe275c258488513afc245438f371561147ec9432b5222a,PodSandboxId:c41a19e00620abf4a74ce01e6a0479dfc236e4c007b05706681acd71813084e6,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1717414336573681079,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-505550,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3bc7b935b457720b0098c72b13f32f50,},Annotations:map[string]string{io.kubernetes.conta
iner.hash: 200064a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:33e99de01a6dc667301bf4e986f05c6cd755b871f915be9f69a980829aa428ff,PodSandboxId:18f35e40ddee74760c5ca185d39a0601523d8e5672148d93849b03c01e6af5df,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1717414336566274323,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-505550,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4def9b2659615cee892e7dc3ae4825b3,},Annotations:map[string]string{io.kub
ernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae066b6e74205c8a0af0914a8f63f08a78aaa9c743feba1bfc202950fafd0320,PodSandboxId:db96f61e1e178f691c6c64bffce49b947075b1942cb13b9b10d6d1b9b214f2ef,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1717414336610713783,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-505550,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3379ca91c8329ad29561c7813158eed3,},Annotations:map[string]string{io.kubernetes.container.hash: e34c9fbf,io.kubernet
es.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b65f722b1ce16783ecadc9ea08611a29cb1fbe8ca0ae7bffea150a18f7d41e12,PodSandboxId:2790e7ea8fac8eb7804c50a5bb864ddbcc8406921c2b3e7c733ce782ecd46fad,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1717414336529205306,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-505550,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 58994f26dfe73bd8f7134c529936f9c5,},Annotations:map[string]string{io.kubernetes.container.hash: 1fc178a0,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5f5e11f7649665346942da51c6082b8b0e21c85bc22d44be1b62a19136498974,PodSandboxId:9538cd6a41f17007b73b77040899ffc2261108543c3ffb04eb9a4a321981a547,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1717414027300894718,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-nrpnb,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 39d1f4e2-260f-4fd2-9989-c77d0dd21049,},Annotations:map[string]string{io.kubernetes.container.hash: 88effacd,io.kubernetes.container.res
tartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3e620850e58c82e87316b8c1ff84a833176235ba76dd48543684d19b0982d37d,PodSandboxId:36bdd67bb32f9347b543b85ec5a923b5fb4134c0c0d1f98c516273e7908dacb7,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1717413982008712281,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-ljnxn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 28236795-201d-4d98-a57f-3ec7dda17017,},Annotations:map[string]string{io.kubernetes.container.hash: 5547a5e4,io.kubernetes.container.ports: [{\"name\":\"dns\",\"contain
erPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4e706590e463e059ba314f8383faf1ff1548d0370c0f10dabde12a8dd107c284,PodSandboxId:5b00ced87c1743c7fcd6304a070e457c42f0cd216ef38e31fa2179b765434ec7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1717413981949435798,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.
namespace: kube-system,io.kubernetes.pod.uid: cdb43188-2f13-4ea2-b906-3428f776eeb4,},Annotations:map[string]string{io.kubernetes.container.hash: d3389f8b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:43e352950fd35bb947f3ab7aaf02e79570246ddf2cac8d458867155296100368,PodSandboxId:b1e8e3910984c00a9485c5a70a755e09739d9ea8b73bc5d3f37c687ffba7821d,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:2b34f64609858041e706963bcd73273c087360ca240f1f9b37db6f148edb1266,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CONTAINER_EXITED,CreatedAt:1717413980591164672,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-x9tml,io.kubernetes.pod.n
amespace: kube-system,io.kubernetes.pod.uid: 8009dbea-f826-44c0-87e5-229b6efdfadc,},Annotations:map[string]string{io.kubernetes.container.hash: c0501522,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d6635384a19f3973b8ebdd125fd196355c7f163f405241bcdcb3848c0ae5bfc8,PodSandboxId:468d52378470e2b2ba8ffe2cb083d299648478407c4d7bb03735309754c26790,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_EXITED,CreatedAt:1717413976936298698,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-nsx2s,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: 261dd21c-29c2-4178-8c07-95f680e12cd1,},Annotations:map[string]string{io.kubernetes.container.hash: c3d14b68,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e609ee17b90fa82d5d04fe16520a0c6782e7dea24d30dbb0e9379f9249c34dd0,PodSandboxId:4d2cf60baa750bb0f90944666a04886f85c05186e0784976c70e7a4cb2b365c3,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1717413957555829944,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-505550,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3379ca91c8329ad29561c7813158ee
d3,},Annotations:map[string]string{io.kubernetes.container.hash: e34c9fbf,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9829e2309203856bfbdd1f4b1b8799484a5e0888c43841f2f409be895f44ac40,PodSandboxId:cce119ae28b41c6ce401c01081afe7dccfb05e3ef7de661201666752b7a86005,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_EXITED,CreatedAt:1717413957551239924,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-505550,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 58994f26dfe73bd8f7134c529936f9c5,},Annotation
s:map[string]string{io.kubernetes.container.hash: 1fc178a0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:37aee72ac00be32936d32e337e9e01a378fb4992a9cf7ed31775dcbfa8ef8d20,PodSandboxId:0ea297b461475a2aafb682da4a17e6c0b4b8dc25cba335537491a76d74504a87,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_EXITED,CreatedAt:1717413957509108832,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-505550,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3bc7b935b457720b0098c72b13f32f50,},Annotations:map[string]st
ring{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9bc2d863a2009fba4ad23b3993c51be79fa80cc8da9b5c150ce013d6fd17f6c9,PodSandboxId:ec1b07c24772f203fdf3378b46b28e4530edc764397be33bbe3147225551baa2,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_EXITED,CreatedAt:1717413957487977482,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-505550,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4def9b2659615cee892e7dc3ae4825b3,},Annotations:m
ap[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=a73e6eb3-11c4-41d3-86ce-7dbbc98df69a name=/runtime.v1.RuntimeService/ListContainers
	Jun 03 11:36:03 multinode-505550 crio[2864]: time="2024-06-03 11:36:03.557974883Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:nil,}" file="otel-collector/interceptors.go:62" id=db052421-5a99-439e-9beb-add5a5927269 name=/runtime.v1.RuntimeService/ListPodSandbox
	Jun 03 11:36:03 multinode-505550 crio[2864]: time="2024-06-03 11:36:03.558547508Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:0318de4e55dcd8d686e734d0076108297ae1571ea735347a8c24c6922955eed8,Metadata:&PodSandboxMetadata{Name:busybox-fc5497c4f-nrpnb,Uid:39d1f4e2-260f-4fd2-9989-c77d0dd21049,Namespace:default,Attempt:1,},State:SANDBOX_READY,CreatedAt:1717414373987374461,Labels:map[string]string{app: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox-fc5497c4f-nrpnb,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 39d1f4e2-260f-4fd2-9989-c77d0dd21049,pod-template-hash: fc5497c4f,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-06-03T11:32:19.865746615Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:99df45d71c3a54824df5fc90b173bf85b2fea7e4dabcb0c2ffb7fb80727681ec,Metadata:&PodSandboxMetadata{Name:coredns-7db6d8ff4d-ljnxn,Uid:28236795-201d-4d98-a57f-3ec7dda17017,Namespace:kube-system,Attempt:1,}
,State:SANDBOX_READY,CreatedAt:1717414340250648263,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7db6d8ff4d-ljnxn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 28236795-201d-4d98-a57f-3ec7dda17017,k8s-app: kube-dns,pod-template-hash: 7db6d8ff4d,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-06-03T11:32:19.865747664Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:e4acd7567116c1893451e0bf24f2df35f071b5ae59c04be178981941e2f21c62,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:cdb43188-2f13-4ea2-b906-3428f776eeb4,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1717414340212028776,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cdb43188-2f13-4ea2-b906-3428f776eeb4,},Annotations:map[string]stri
ng{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-06-03T11:32:19.865745223Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:429c3c650d7e535703c47a86f29d1ecbc9e9a79f435a4dd561125b9f80e103b9,Metadata:&PodSandboxMetadata{Name:kube-proxy-nsx2s,Uid:261dd21c-29c2-4178-8c07-95f680e12cd1,Namespace:kube-system,Atte
mpt:1,},State:SANDBOX_READY,CreatedAt:1717414340202797209,Labels:map[string]string{controller-revision-hash: 5dbf89796d,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-nsx2s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 261dd21c-29c2-4178-8c07-95f680e12cd1,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-06-03T11:32:19.865744076Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:2e30d008cc8ccbd818be8736680ac3eccab6b30250e97da63b0ca3670a803e69,Metadata:&PodSandboxMetadata{Name:kindnet-x9tml,Uid:8009dbea-f826-44c0-87e5-229b6efdfadc,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1717414340202314664,Labels:map[string]string{app: kindnet,controller-revision-hash: 84c66bd94d,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kindnet-x9tml,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8009dbea-f826-44c0-87e5-229b6efdfadc,k8s-app: kindnet,pod-template-generat
ion: 1,tier: node,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-06-03T11:32:19.865748586Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:db96f61e1e178f691c6c64bffce49b947075b1942cb13b9b10d6d1b9b214f2ef,Metadata:&PodSandboxMetadata{Name:etcd-multinode-505550,Uid:3379ca91c8329ad29561c7813158eed3,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1717414336366051091,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-multinode-505550,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3379ca91c8329ad29561c7813158eed3,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.232:2379,kubernetes.io/config.hash: 3379ca91c8329ad29561c7813158eed3,kubernetes.io/config.seen: 2024-06-03T11:32:15.859027862Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:2790e7ea8fac8eb7804c50a5bb864ddbcc8406921c2b3e7c733ce782ecd46fad,Metada
ta:&PodSandboxMetadata{Name:kube-apiserver-multinode-505550,Uid:58994f26dfe73bd8f7134c529936f9c5,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1717414336364260299,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-multinode-505550,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 58994f26dfe73bd8f7134c529936f9c5,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.232:8443,kubernetes.io/config.hash: 58994f26dfe73bd8f7134c529936f9c5,kubernetes.io/config.seen: 2024-06-03T11:32:15.859031753Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:c41a19e00620abf4a74ce01e6a0479dfc236e4c007b05706681acd71813084e6,Metadata:&PodSandboxMetadata{Name:kube-scheduler-multinode-505550,Uid:3bc7b935b457720b0098c72b13f32f50,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1717414336363065020,Labels:map[string]string{comp
onent: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-multinode-505550,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3bc7b935b457720b0098c72b13f32f50,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 3bc7b935b457720b0098c72b13f32f50,kubernetes.io/config.seen: 2024-06-03T11:32:15.859033974Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:18f35e40ddee74760c5ca185d39a0601523d8e5672148d93849b03c01e6af5df,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-multinode-505550,Uid:4def9b2659615cee892e7dc3ae4825b3,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1717414336358502582,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-multinode-505550,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4def9b2659615cee892e7dc3ae4825b3,tier: control-plane,},Annotations:map[string]string{kuberne
tes.io/config.hash: 4def9b2659615cee892e7dc3ae4825b3,kubernetes.io/config.seen: 2024-06-03T11:32:15.859032918Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:9538cd6a41f17007b73b77040899ffc2261108543c3ffb04eb9a4a321981a547,Metadata:&PodSandboxMetadata{Name:busybox-fc5497c4f-nrpnb,Uid:39d1f4e2-260f-4fd2-9989-c77d0dd21049,Namespace:default,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1717414024653395517,Labels:map[string]string{app: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox-fc5497c4f-nrpnb,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 39d1f4e2-260f-4fd2-9989-c77d0dd21049,pod-template-hash: fc5497c4f,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-06-03T11:27:03.441031953Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:36bdd67bb32f9347b543b85ec5a923b5fb4134c0c0d1f98c516273e7908dacb7,Metadata:&PodSandboxMetadata{Name:coredns-7db6d8ff4d-ljnxn,Uid:28236795-201d-4d98-a57f-3ec7dda17017,Namespace:kube-system,Attemp
t:0,},State:SANDBOX_NOTREADY,CreatedAt:1717413981824807087,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7db6d8ff4d-ljnxn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 28236795-201d-4d98-a57f-3ec7dda17017,k8s-app: kube-dns,pod-template-hash: 7db6d8ff4d,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-06-03T11:26:21.513710912Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:5b00ced87c1743c7fcd6304a070e457c42f0cd216ef38e31fa2179b765434ec7,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:cdb43188-2f13-4ea2-b906-3428f776eeb4,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1717413981816150915,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cdb43188-2f13-4ea2-b906-3428f776eeb4,},Annotations:map[
string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-06-03T11:26:21.509789776Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:468d52378470e2b2ba8ffe2cb083d299648478407c4d7bb03735309754c26790,Metadata:&PodSandboxMetadata{Name:kube-proxy-nsx2s,Uid:261dd21c-29c2-4178-8c07-95f680e12cd1,Namespace:kube-
system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1717413976633702984,Labels:map[string]string{controller-revision-hash: 5dbf89796d,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-nsx2s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 261dd21c-29c2-4178-8c07-95f680e12cd1,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-06-03T11:26:16.326470940Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:b1e8e3910984c00a9485c5a70a755e09739d9ea8b73bc5d3f37c687ffba7821d,Metadata:&PodSandboxMetadata{Name:kindnet-x9tml,Uid:8009dbea-f826-44c0-87e5-229b6efdfadc,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1717413976575383626,Labels:map[string]string{app: kindnet,controller-revision-hash: 84c66bd94d,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kindnet-x9tml,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8009dbea-f826-44c0-87e5-229b6efdfadc,k8s-app: kindnet,pod
-template-generation: 1,tier: node,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-06-03T11:26:16.262644386Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:cce119ae28b41c6ce401c01081afe7dccfb05e3ef7de661201666752b7a86005,Metadata:&PodSandboxMetadata{Name:kube-apiserver-multinode-505550,Uid:58994f26dfe73bd8f7134c529936f9c5,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1717413957338034444,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-multinode-505550,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 58994f26dfe73bd8f7134c529936f9c5,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.232:8443,kubernetes.io/config.hash: 58994f26dfe73bd8f7134c529936f9c5,kubernetes.io/config.seen: 2024-06-03T11:25:56.866297525Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:ec1b07c24772f2
03fdf3378b46b28e4530edc764397be33bbe3147225551baa2,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-multinode-505550,Uid:4def9b2659615cee892e7dc3ae4825b3,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1717413957325326429,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-multinode-505550,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4def9b2659615cee892e7dc3ae4825b3,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 4def9b2659615cee892e7dc3ae4825b3,kubernetes.io/config.seen: 2024-06-03T11:25:56.866298672Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:0ea297b461475a2aafb682da4a17e6c0b4b8dc25cba335537491a76d74504a87,Metadata:&PodSandboxMetadata{Name:kube-scheduler-multinode-505550,Uid:3bc7b935b457720b0098c72b13f32f50,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1717413957315878396,Labels:map[string]string
{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-multinode-505550,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3bc7b935b457720b0098c72b13f32f50,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 3bc7b935b457720b0098c72b13f32f50,kubernetes.io/config.seen: 2024-06-03T11:25:56.866299721Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:4d2cf60baa750bb0f90944666a04886f85c05186e0784976c70e7a4cb2b365c3,Metadata:&PodSandboxMetadata{Name:etcd-multinode-505550,Uid:3379ca91c8329ad29561c7813158eed3,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1717413957315449401,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-multinode-505550,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3379ca91c8329ad29561c7813158eed3,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https:
//192.168.39.232:2379,kubernetes.io/config.hash: 3379ca91c8329ad29561c7813158eed3,kubernetes.io/config.seen: 2024-06-03T11:25:56.866293550Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=db052421-5a99-439e-9beb-add5a5927269 name=/runtime.v1.RuntimeService/ListPodSandbox
	Jun 03 11:36:03 multinode-505550 crio[2864]: time="2024-06-03 11:36:03.559529807Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=cabfa33d-850f-4184-b0e3-dab269c7e36f name=/runtime.v1.RuntimeService/Version
	Jun 03 11:36:03 multinode-505550 crio[2864]: time="2024-06-03 11:36:03.559651160Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=cabfa33d-850f-4184-b0e3-dab269c7e36f name=/runtime.v1.RuntimeService/Version
	Jun 03 11:36:03 multinode-505550 crio[2864]: time="2024-06-03 11:36:03.559920887Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=fd1d474c-8b63-4d13-a882-404432800bc8 name=/runtime.v1.RuntimeService/ListContainers
	Jun 03 11:36:03 multinode-505550 crio[2864]: time="2024-06-03 11:36:03.559978221Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=fd1d474c-8b63-4d13-a882-404432800bc8 name=/runtime.v1.RuntimeService/ListContainers
	Jun 03 11:36:03 multinode-505550 crio[2864]: time="2024-06-03 11:36:03.560532088Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:df483073fa3fb785c493fc6afdd2c6f0888dbffc9cbf1dbb06e011bb502c9cab,PodSandboxId:0318de4e55dcd8d686e734d0076108297ae1571ea735347a8c24c6922955eed8,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1717414374153235949,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-nrpnb,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 39d1f4e2-260f-4fd2-9989-c77d0dd21049,},Annotations:map[string]string{io.kubernetes.container.hash: 88effacd,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cac8e61c821989854b2f55119cfd9761a0a47f8ea2393d5c18efb4b8ae23279a,PodSandboxId:2e30d008cc8ccbd818be8736680ac3eccab6b30250e97da63b0ca3670a803e69,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CONTAINER_RUNNING,CreatedAt:1717414340631225936,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-x9tml,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8009dbea-f826-44c0-87e5-229b6efdfadc,},Annotations:map[string]string{io.kubernetes.container.hash: c0501522,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:00339123e1f21e4c4c01ccd77117bb918711c7d5531b771de53ffc77481ca343,PodSandboxId:99df45d71c3a54824df5fc90b173bf85b2fea7e4dabcb0c2ffb7fb80727681ec,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1717414340490844649,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-ljnxn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 28236795-201d-4d98-a57f-3ec7dda17017,},Annotations:map[string]string{io.kubernetes.container.hash: 5547a5e4,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:36f3491249f81e7d1b784343518b31a2d13b59ef1c3d80808a24954de1ad75cb,PodSandboxId:e4acd7567116c1893451e0bf24f2df35f071b5ae59c04be178981941e2f21c62,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1717414340422133311,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cdb43188-2f13-4ea2-b906-3428f776eeb4,},An
notations:map[string]string{io.kubernetes.container.hash: d3389f8b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7a7dc7ea2138c737fb8cb1375c84e7cbe5eda8ccfff2a0abd6c6e6098e38901e,PodSandboxId:429c3c650d7e535703c47a86f29d1ecbc9e9a79f435a4dd561125b9f80e103b9,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:1717414340422169113,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-nsx2s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 261dd21c-29c2-4178-8c07-95f680e12cd1,},Annotations:map[string]string{io.ku
bernetes.container.hash: c3d14b68,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1aa2017e346a1a9e3efe275c258488513afc245438f371561147ec9432b5222a,PodSandboxId:c41a19e00620abf4a74ce01e6a0479dfc236e4c007b05706681acd71813084e6,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1717414336573681079,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-505550,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3bc7b935b457720b0098c72b13f32f50,},Annotations:map[string]string{io.kubernetes.conta
iner.hash: 200064a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:33e99de01a6dc667301bf4e986f05c6cd755b871f915be9f69a980829aa428ff,PodSandboxId:18f35e40ddee74760c5ca185d39a0601523d8e5672148d93849b03c01e6af5df,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1717414336566274323,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-505550,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4def9b2659615cee892e7dc3ae4825b3,},Annotations:map[string]string{io.kub
ernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae066b6e74205c8a0af0914a8f63f08a78aaa9c743feba1bfc202950fafd0320,PodSandboxId:db96f61e1e178f691c6c64bffce49b947075b1942cb13b9b10d6d1b9b214f2ef,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1717414336610713783,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-505550,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3379ca91c8329ad29561c7813158eed3,},Annotations:map[string]string{io.kubernetes.container.hash: e34c9fbf,io.kubernet
es.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b65f722b1ce16783ecadc9ea08611a29cb1fbe8ca0ae7bffea150a18f7d41e12,PodSandboxId:2790e7ea8fac8eb7804c50a5bb864ddbcc8406921c2b3e7c733ce782ecd46fad,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1717414336529205306,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-505550,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 58994f26dfe73bd8f7134c529936f9c5,},Annotations:map[string]string{io.kubernetes.container.hash: 1fc178a0,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5f5e11f7649665346942da51c6082b8b0e21c85bc22d44be1b62a19136498974,PodSandboxId:9538cd6a41f17007b73b77040899ffc2261108543c3ffb04eb9a4a321981a547,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1717414027300894718,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-nrpnb,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 39d1f4e2-260f-4fd2-9989-c77d0dd21049,},Annotations:map[string]string{io.kubernetes.container.hash: 88effacd,io.kubernetes.container.res
tartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3e620850e58c82e87316b8c1ff84a833176235ba76dd48543684d19b0982d37d,PodSandboxId:36bdd67bb32f9347b543b85ec5a923b5fb4134c0c0d1f98c516273e7908dacb7,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1717413982008712281,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-ljnxn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 28236795-201d-4d98-a57f-3ec7dda17017,},Annotations:map[string]string{io.kubernetes.container.hash: 5547a5e4,io.kubernetes.container.ports: [{\"name\":\"dns\",\"contain
erPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4e706590e463e059ba314f8383faf1ff1548d0370c0f10dabde12a8dd107c284,PodSandboxId:5b00ced87c1743c7fcd6304a070e457c42f0cd216ef38e31fa2179b765434ec7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1717413981949435798,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.
namespace: kube-system,io.kubernetes.pod.uid: cdb43188-2f13-4ea2-b906-3428f776eeb4,},Annotations:map[string]string{io.kubernetes.container.hash: d3389f8b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:43e352950fd35bb947f3ab7aaf02e79570246ddf2cac8d458867155296100368,PodSandboxId:b1e8e3910984c00a9485c5a70a755e09739d9ea8b73bc5d3f37c687ffba7821d,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:2b34f64609858041e706963bcd73273c087360ca240f1f9b37db6f148edb1266,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CONTAINER_EXITED,CreatedAt:1717413980591164672,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-x9tml,io.kubernetes.pod.n
amespace: kube-system,io.kubernetes.pod.uid: 8009dbea-f826-44c0-87e5-229b6efdfadc,},Annotations:map[string]string{io.kubernetes.container.hash: c0501522,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d6635384a19f3973b8ebdd125fd196355c7f163f405241bcdcb3848c0ae5bfc8,PodSandboxId:468d52378470e2b2ba8ffe2cb083d299648478407c4d7bb03735309754c26790,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_EXITED,CreatedAt:1717413976936298698,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-nsx2s,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: 261dd21c-29c2-4178-8c07-95f680e12cd1,},Annotations:map[string]string{io.kubernetes.container.hash: c3d14b68,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e609ee17b90fa82d5d04fe16520a0c6782e7dea24d30dbb0e9379f9249c34dd0,PodSandboxId:4d2cf60baa750bb0f90944666a04886f85c05186e0784976c70e7a4cb2b365c3,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1717413957555829944,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-505550,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3379ca91c8329ad29561c7813158ee
d3,},Annotations:map[string]string{io.kubernetes.container.hash: e34c9fbf,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9829e2309203856bfbdd1f4b1b8799484a5e0888c43841f2f409be895f44ac40,PodSandboxId:cce119ae28b41c6ce401c01081afe7dccfb05e3ef7de661201666752b7a86005,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_EXITED,CreatedAt:1717413957551239924,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-505550,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 58994f26dfe73bd8f7134c529936f9c5,},Annotation
s:map[string]string{io.kubernetes.container.hash: 1fc178a0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:37aee72ac00be32936d32e337e9e01a378fb4992a9cf7ed31775dcbfa8ef8d20,PodSandboxId:0ea297b461475a2aafb682da4a17e6c0b4b8dc25cba335537491a76d74504a87,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_EXITED,CreatedAt:1717413957509108832,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-505550,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3bc7b935b457720b0098c72b13f32f50,},Annotations:map[string]st
ring{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9bc2d863a2009fba4ad23b3993c51be79fa80cc8da9b5c150ce013d6fd17f6c9,PodSandboxId:ec1b07c24772f203fdf3378b46b28e4530edc764397be33bbe3147225551baa2,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_EXITED,CreatedAt:1717413957487977482,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-505550,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4def9b2659615cee892e7dc3ae4825b3,},Annotations:m
ap[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=fd1d474c-8b63-4d13-a882-404432800bc8 name=/runtime.v1.RuntimeService/ListContainers
	Jun 03 11:36:03 multinode-505550 crio[2864]: time="2024-06-03 11:36:03.560643135Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=80ae1ef7-ce5d-4f34-ac45-074297cd9844 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 03 11:36:03 multinode-505550 crio[2864]: time="2024-06-03 11:36:03.561781141Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1717414563561760794,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143025,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=80ae1ef7-ce5d-4f34-ac45-074297cd9844 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 03 11:36:03 multinode-505550 crio[2864]: time="2024-06-03 11:36:03.562459774Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=6211f307-2d44-496d-b449-7d24b53b5c11 name=/runtime.v1.RuntimeService/ListContainers
	Jun 03 11:36:03 multinode-505550 crio[2864]: time="2024-06-03 11:36:03.562526014Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=6211f307-2d44-496d-b449-7d24b53b5c11 name=/runtime.v1.RuntimeService/ListContainers
	Jun 03 11:36:03 multinode-505550 crio[2864]: time="2024-06-03 11:36:03.562902884Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:df483073fa3fb785c493fc6afdd2c6f0888dbffc9cbf1dbb06e011bb502c9cab,PodSandboxId:0318de4e55dcd8d686e734d0076108297ae1571ea735347a8c24c6922955eed8,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1717414374153235949,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-nrpnb,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 39d1f4e2-260f-4fd2-9989-c77d0dd21049,},Annotations:map[string]string{io.kubernetes.container.hash: 88effacd,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cac8e61c821989854b2f55119cfd9761a0a47f8ea2393d5c18efb4b8ae23279a,PodSandboxId:2e30d008cc8ccbd818be8736680ac3eccab6b30250e97da63b0ca3670a803e69,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CONTAINER_RUNNING,CreatedAt:1717414340631225936,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-x9tml,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8009dbea-f826-44c0-87e5-229b6efdfadc,},Annotations:map[string]string{io.kubernetes.container.hash: c0501522,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:00339123e1f21e4c4c01ccd77117bb918711c7d5531b771de53ffc77481ca343,PodSandboxId:99df45d71c3a54824df5fc90b173bf85b2fea7e4dabcb0c2ffb7fb80727681ec,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1717414340490844649,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-ljnxn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 28236795-201d-4d98-a57f-3ec7dda17017,},Annotations:map[string]string{io.kubernetes.container.hash: 5547a5e4,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:36f3491249f81e7d1b784343518b31a2d13b59ef1c3d80808a24954de1ad75cb,PodSandboxId:e4acd7567116c1893451e0bf24f2df35f071b5ae59c04be178981941e2f21c62,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1717414340422133311,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cdb43188-2f13-4ea2-b906-3428f776eeb4,},An
notations:map[string]string{io.kubernetes.container.hash: d3389f8b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7a7dc7ea2138c737fb8cb1375c84e7cbe5eda8ccfff2a0abd6c6e6098e38901e,PodSandboxId:429c3c650d7e535703c47a86f29d1ecbc9e9a79f435a4dd561125b9f80e103b9,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:1717414340422169113,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-nsx2s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 261dd21c-29c2-4178-8c07-95f680e12cd1,},Annotations:map[string]string{io.ku
bernetes.container.hash: c3d14b68,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1aa2017e346a1a9e3efe275c258488513afc245438f371561147ec9432b5222a,PodSandboxId:c41a19e00620abf4a74ce01e6a0479dfc236e4c007b05706681acd71813084e6,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1717414336573681079,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-505550,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3bc7b935b457720b0098c72b13f32f50,},Annotations:map[string]string{io.kubernetes.conta
iner.hash: 200064a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:33e99de01a6dc667301bf4e986f05c6cd755b871f915be9f69a980829aa428ff,PodSandboxId:18f35e40ddee74760c5ca185d39a0601523d8e5672148d93849b03c01e6af5df,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1717414336566274323,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-505550,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4def9b2659615cee892e7dc3ae4825b3,},Annotations:map[string]string{io.kub
ernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae066b6e74205c8a0af0914a8f63f08a78aaa9c743feba1bfc202950fafd0320,PodSandboxId:db96f61e1e178f691c6c64bffce49b947075b1942cb13b9b10d6d1b9b214f2ef,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1717414336610713783,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-505550,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3379ca91c8329ad29561c7813158eed3,},Annotations:map[string]string{io.kubernetes.container.hash: e34c9fbf,io.kubernet
es.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b65f722b1ce16783ecadc9ea08611a29cb1fbe8ca0ae7bffea150a18f7d41e12,PodSandboxId:2790e7ea8fac8eb7804c50a5bb864ddbcc8406921c2b3e7c733ce782ecd46fad,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1717414336529205306,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-505550,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 58994f26dfe73bd8f7134c529936f9c5,},Annotations:map[string]string{io.kubernetes.container.hash: 1fc178a0,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5f5e11f7649665346942da51c6082b8b0e21c85bc22d44be1b62a19136498974,PodSandboxId:9538cd6a41f17007b73b77040899ffc2261108543c3ffb04eb9a4a321981a547,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1717414027300894718,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-nrpnb,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 39d1f4e2-260f-4fd2-9989-c77d0dd21049,},Annotations:map[string]string{io.kubernetes.container.hash: 88effacd,io.kubernetes.container.res
tartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3e620850e58c82e87316b8c1ff84a833176235ba76dd48543684d19b0982d37d,PodSandboxId:36bdd67bb32f9347b543b85ec5a923b5fb4134c0c0d1f98c516273e7908dacb7,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1717413982008712281,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-ljnxn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 28236795-201d-4d98-a57f-3ec7dda17017,},Annotations:map[string]string{io.kubernetes.container.hash: 5547a5e4,io.kubernetes.container.ports: [{\"name\":\"dns\",\"contain
erPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4e706590e463e059ba314f8383faf1ff1548d0370c0f10dabde12a8dd107c284,PodSandboxId:5b00ced87c1743c7fcd6304a070e457c42f0cd216ef38e31fa2179b765434ec7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1717413981949435798,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.
namespace: kube-system,io.kubernetes.pod.uid: cdb43188-2f13-4ea2-b906-3428f776eeb4,},Annotations:map[string]string{io.kubernetes.container.hash: d3389f8b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:43e352950fd35bb947f3ab7aaf02e79570246ddf2cac8d458867155296100368,PodSandboxId:b1e8e3910984c00a9485c5a70a755e09739d9ea8b73bc5d3f37c687ffba7821d,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:2b34f64609858041e706963bcd73273c087360ca240f1f9b37db6f148edb1266,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CONTAINER_EXITED,CreatedAt:1717413980591164672,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-x9tml,io.kubernetes.pod.n
amespace: kube-system,io.kubernetes.pod.uid: 8009dbea-f826-44c0-87e5-229b6efdfadc,},Annotations:map[string]string{io.kubernetes.container.hash: c0501522,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d6635384a19f3973b8ebdd125fd196355c7f163f405241bcdcb3848c0ae5bfc8,PodSandboxId:468d52378470e2b2ba8ffe2cb083d299648478407c4d7bb03735309754c26790,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_EXITED,CreatedAt:1717413976936298698,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-nsx2s,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: 261dd21c-29c2-4178-8c07-95f680e12cd1,},Annotations:map[string]string{io.kubernetes.container.hash: c3d14b68,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e609ee17b90fa82d5d04fe16520a0c6782e7dea24d30dbb0e9379f9249c34dd0,PodSandboxId:4d2cf60baa750bb0f90944666a04886f85c05186e0784976c70e7a4cb2b365c3,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1717413957555829944,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-505550,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3379ca91c8329ad29561c7813158ee
d3,},Annotations:map[string]string{io.kubernetes.container.hash: e34c9fbf,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9829e2309203856bfbdd1f4b1b8799484a5e0888c43841f2f409be895f44ac40,PodSandboxId:cce119ae28b41c6ce401c01081afe7dccfb05e3ef7de661201666752b7a86005,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_EXITED,CreatedAt:1717413957551239924,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-505550,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 58994f26dfe73bd8f7134c529936f9c5,},Annotation
s:map[string]string{io.kubernetes.container.hash: 1fc178a0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:37aee72ac00be32936d32e337e9e01a378fb4992a9cf7ed31775dcbfa8ef8d20,PodSandboxId:0ea297b461475a2aafb682da4a17e6c0b4b8dc25cba335537491a76d74504a87,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_EXITED,CreatedAt:1717413957509108832,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-505550,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3bc7b935b457720b0098c72b13f32f50,},Annotations:map[string]st
ring{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9bc2d863a2009fba4ad23b3993c51be79fa80cc8da9b5c150ce013d6fd17f6c9,PodSandboxId:ec1b07c24772f203fdf3378b46b28e4530edc764397be33bbe3147225551baa2,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_EXITED,CreatedAt:1717413957487977482,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-505550,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4def9b2659615cee892e7dc3ae4825b3,},Annotations:m
ap[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=6211f307-2d44-496d-b449-7d24b53b5c11 name=/runtime.v1.RuntimeService/ListContainers
	Jun 03 11:36:03 multinode-505550 crio[2864]: time="2024-06-03 11:36:03.610754435Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=f331c3ba-e472-4df4-9676-ef184aed259d name=/runtime.v1.RuntimeService/Version
	Jun 03 11:36:03 multinode-505550 crio[2864]: time="2024-06-03 11:36:03.610851297Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=f331c3ba-e472-4df4-9676-ef184aed259d name=/runtime.v1.RuntimeService/Version
	Jun 03 11:36:03 multinode-505550 crio[2864]: time="2024-06-03 11:36:03.614488387Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=cfffb924-b75f-4864-b1c9-df40497da12a name=/runtime.v1.ImageService/ImageFsInfo
	Jun 03 11:36:03 multinode-505550 crio[2864]: time="2024-06-03 11:36:03.615169751Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1717414563615107063,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143025,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=cfffb924-b75f-4864-b1c9-df40497da12a name=/runtime.v1.ImageService/ImageFsInfo
	Jun 03 11:36:03 multinode-505550 crio[2864]: time="2024-06-03 11:36:03.618827751Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=061a245a-b549-47fc-a1c0-5f275ede25a9 name=/runtime.v1.RuntimeService/ListContainers
	Jun 03 11:36:03 multinode-505550 crio[2864]: time="2024-06-03 11:36:03.618938340Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=061a245a-b549-47fc-a1c0-5f275ede25a9 name=/runtime.v1.RuntimeService/ListContainers
	Jun 03 11:36:03 multinode-505550 crio[2864]: time="2024-06-03 11:36:03.619343584Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:df483073fa3fb785c493fc6afdd2c6f0888dbffc9cbf1dbb06e011bb502c9cab,PodSandboxId:0318de4e55dcd8d686e734d0076108297ae1571ea735347a8c24c6922955eed8,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1717414374153235949,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-nrpnb,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 39d1f4e2-260f-4fd2-9989-c77d0dd21049,},Annotations:map[string]string{io.kubernetes.container.hash: 88effacd,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cac8e61c821989854b2f55119cfd9761a0a47f8ea2393d5c18efb4b8ae23279a,PodSandboxId:2e30d008cc8ccbd818be8736680ac3eccab6b30250e97da63b0ca3670a803e69,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CONTAINER_RUNNING,CreatedAt:1717414340631225936,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-x9tml,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8009dbea-f826-44c0-87e5-229b6efdfadc,},Annotations:map[string]string{io.kubernetes.container.hash: c0501522,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:00339123e1f21e4c4c01ccd77117bb918711c7d5531b771de53ffc77481ca343,PodSandboxId:99df45d71c3a54824df5fc90b173bf85b2fea7e4dabcb0c2ffb7fb80727681ec,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1717414340490844649,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-ljnxn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 28236795-201d-4d98-a57f-3ec7dda17017,},Annotations:map[string]string{io.kubernetes.container.hash: 5547a5e4,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:36f3491249f81e7d1b784343518b31a2d13b59ef1c3d80808a24954de1ad75cb,PodSandboxId:e4acd7567116c1893451e0bf24f2df35f071b5ae59c04be178981941e2f21c62,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1717414340422133311,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cdb43188-2f13-4ea2-b906-3428f776eeb4,},An
notations:map[string]string{io.kubernetes.container.hash: d3389f8b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7a7dc7ea2138c737fb8cb1375c84e7cbe5eda8ccfff2a0abd6c6e6098e38901e,PodSandboxId:429c3c650d7e535703c47a86f29d1ecbc9e9a79f435a4dd561125b9f80e103b9,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:1717414340422169113,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-nsx2s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 261dd21c-29c2-4178-8c07-95f680e12cd1,},Annotations:map[string]string{io.ku
bernetes.container.hash: c3d14b68,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1aa2017e346a1a9e3efe275c258488513afc245438f371561147ec9432b5222a,PodSandboxId:c41a19e00620abf4a74ce01e6a0479dfc236e4c007b05706681acd71813084e6,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1717414336573681079,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-505550,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3bc7b935b457720b0098c72b13f32f50,},Annotations:map[string]string{io.kubernetes.conta
iner.hash: 200064a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:33e99de01a6dc667301bf4e986f05c6cd755b871f915be9f69a980829aa428ff,PodSandboxId:18f35e40ddee74760c5ca185d39a0601523d8e5672148d93849b03c01e6af5df,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1717414336566274323,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-505550,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4def9b2659615cee892e7dc3ae4825b3,},Annotations:map[string]string{io.kub
ernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae066b6e74205c8a0af0914a8f63f08a78aaa9c743feba1bfc202950fafd0320,PodSandboxId:db96f61e1e178f691c6c64bffce49b947075b1942cb13b9b10d6d1b9b214f2ef,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1717414336610713783,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-505550,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3379ca91c8329ad29561c7813158eed3,},Annotations:map[string]string{io.kubernetes.container.hash: e34c9fbf,io.kubernet
es.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b65f722b1ce16783ecadc9ea08611a29cb1fbe8ca0ae7bffea150a18f7d41e12,PodSandboxId:2790e7ea8fac8eb7804c50a5bb864ddbcc8406921c2b3e7c733ce782ecd46fad,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1717414336529205306,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-505550,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 58994f26dfe73bd8f7134c529936f9c5,},Annotations:map[string]string{io.kubernetes.container.hash: 1fc178a0,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5f5e11f7649665346942da51c6082b8b0e21c85bc22d44be1b62a19136498974,PodSandboxId:9538cd6a41f17007b73b77040899ffc2261108543c3ffb04eb9a4a321981a547,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1717414027300894718,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-nrpnb,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 39d1f4e2-260f-4fd2-9989-c77d0dd21049,},Annotations:map[string]string{io.kubernetes.container.hash: 88effacd,io.kubernetes.container.res
tartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3e620850e58c82e87316b8c1ff84a833176235ba76dd48543684d19b0982d37d,PodSandboxId:36bdd67bb32f9347b543b85ec5a923b5fb4134c0c0d1f98c516273e7908dacb7,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1717413982008712281,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-ljnxn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 28236795-201d-4d98-a57f-3ec7dda17017,},Annotations:map[string]string{io.kubernetes.container.hash: 5547a5e4,io.kubernetes.container.ports: [{\"name\":\"dns\",\"contain
erPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4e706590e463e059ba314f8383faf1ff1548d0370c0f10dabde12a8dd107c284,PodSandboxId:5b00ced87c1743c7fcd6304a070e457c42f0cd216ef38e31fa2179b765434ec7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1717413981949435798,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.
namespace: kube-system,io.kubernetes.pod.uid: cdb43188-2f13-4ea2-b906-3428f776eeb4,},Annotations:map[string]string{io.kubernetes.container.hash: d3389f8b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:43e352950fd35bb947f3ab7aaf02e79570246ddf2cac8d458867155296100368,PodSandboxId:b1e8e3910984c00a9485c5a70a755e09739d9ea8b73bc5d3f37c687ffba7821d,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:2b34f64609858041e706963bcd73273c087360ca240f1f9b37db6f148edb1266,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CONTAINER_EXITED,CreatedAt:1717413980591164672,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-x9tml,io.kubernetes.pod.n
amespace: kube-system,io.kubernetes.pod.uid: 8009dbea-f826-44c0-87e5-229b6efdfadc,},Annotations:map[string]string{io.kubernetes.container.hash: c0501522,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d6635384a19f3973b8ebdd125fd196355c7f163f405241bcdcb3848c0ae5bfc8,PodSandboxId:468d52378470e2b2ba8ffe2cb083d299648478407c4d7bb03735309754c26790,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_EXITED,CreatedAt:1717413976936298698,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-nsx2s,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: 261dd21c-29c2-4178-8c07-95f680e12cd1,},Annotations:map[string]string{io.kubernetes.container.hash: c3d14b68,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e609ee17b90fa82d5d04fe16520a0c6782e7dea24d30dbb0e9379f9249c34dd0,PodSandboxId:4d2cf60baa750bb0f90944666a04886f85c05186e0784976c70e7a4cb2b365c3,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1717413957555829944,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-505550,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3379ca91c8329ad29561c7813158ee
d3,},Annotations:map[string]string{io.kubernetes.container.hash: e34c9fbf,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9829e2309203856bfbdd1f4b1b8799484a5e0888c43841f2f409be895f44ac40,PodSandboxId:cce119ae28b41c6ce401c01081afe7dccfb05e3ef7de661201666752b7a86005,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_EXITED,CreatedAt:1717413957551239924,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-505550,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 58994f26dfe73bd8f7134c529936f9c5,},Annotation
s:map[string]string{io.kubernetes.container.hash: 1fc178a0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:37aee72ac00be32936d32e337e9e01a378fb4992a9cf7ed31775dcbfa8ef8d20,PodSandboxId:0ea297b461475a2aafb682da4a17e6c0b4b8dc25cba335537491a76d74504a87,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_EXITED,CreatedAt:1717413957509108832,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-505550,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3bc7b935b457720b0098c72b13f32f50,},Annotations:map[string]st
ring{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9bc2d863a2009fba4ad23b3993c51be79fa80cc8da9b5c150ce013d6fd17f6c9,PodSandboxId:ec1b07c24772f203fdf3378b46b28e4530edc764397be33bbe3147225551baa2,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_EXITED,CreatedAt:1717413957487977482,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-505550,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4def9b2659615cee892e7dc3ae4825b3,},Annotations:m
ap[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=061a245a-b549-47fc-a1c0-5f275ede25a9 name=/runtime.v1.RuntimeService/ListContainers
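
	The repeated ListContainersRequest/ListContainersResponse pairs above are CRI-O's debug-level traces of the kubelet polling the full container list over the CRI socket; each round-trip carries its own request id, which is why the same inventory is printed several times. As a sketch (not part of the captured run), the same listing can be pulled manually on the node with crictl against the socket path recorded in the node annotations:

	  $ sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a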
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	df483073fa3fb       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      3 minutes ago       Running             busybox                   1                   0318de4e55dcd       busybox-fc5497c4f-nrpnb
	cac8e61c82198       ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f                                      3 minutes ago       Running             kindnet-cni               1                   2e30d008cc8cc       kindnet-x9tml
	00339123e1f21       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      3 minutes ago       Running             coredns                   1                   99df45d71c3a5       coredns-7db6d8ff4d-ljnxn
	7a7dc7ea2138c       747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd                                      3 minutes ago       Running             kube-proxy                1                   429c3c650d7e5       kube-proxy-nsx2s
	36f3491249f81       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      3 minutes ago       Running             storage-provisioner       1                   e4acd7567116c       storage-provisioner
	ae066b6e74205       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      3 minutes ago       Running             etcd                      1                   db96f61e1e178       etcd-multinode-505550
	1aa2017e346a1       a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035                                      3 minutes ago       Running             kube-scheduler            1                   c41a19e00620a       kube-scheduler-multinode-505550
	33e99de01a6dc       25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c                                      3 minutes ago       Running             kube-controller-manager   1                   18f35e40ddee7       kube-controller-manager-multinode-505550
	b65f722b1ce16       91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a                                      3 minutes ago       Running             kube-apiserver            1                   2790e7ea8fac8       kube-apiserver-multinode-505550
	5f5e11f764966       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   8 minutes ago       Exited              busybox                   0                   9538cd6a41f17       busybox-fc5497c4f-nrpnb
	3e620850e58c8       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      9 minutes ago       Exited              coredns                   0                   36bdd67bb32f9       coredns-7db6d8ff4d-ljnxn
	4e706590e463e       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      9 minutes ago       Exited              storage-provisioner       0                   5b00ced87c174       storage-provisioner
	43e352950fd35       docker.io/kindest/kindnetd@sha256:2b34f64609858041e706963bcd73273c087360ca240f1f9b37db6f148edb1266    9 minutes ago       Exited              kindnet-cni               0                   b1e8e3910984c       kindnet-x9tml
	d6635384a19f3       747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd                                      9 minutes ago       Exited              kube-proxy                0                   468d52378470e       kube-proxy-nsx2s
	e609ee17b90fa       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      10 minutes ago      Exited              etcd                      0                   4d2cf60baa750       etcd-multinode-505550
	9829e23092038       91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a                                      10 minutes ago      Exited              kube-apiserver            0                   cce119ae28b41       kube-apiserver-multinode-505550
	37aee72ac00be       a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035                                      10 minutes ago      Exited              kube-scheduler            0                   0ea297b461475       kube-scheduler-multinode-505550
	9bc2d863a2009       25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c                                      10 minutes ago      Exited              kube-controller-manager   0                   ec1b07c24772f       kube-controller-manager-multinode-505550
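
	Each workload appears twice in the table above: ATTEMPT 0 in the Exited state from before the node restart, and ATTEMPT 1 Running afterwards, which matches the restartCount annotations in the CRI-O listing. As a hedged sketch, the pod-level view can be confirmed from outside the VM (the --context value is the profile name used throughout this run):

	  $ kubectl --context multinode-505550 get pods -A -o wide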
	
	
	==> coredns [00339123e1f21e4c4c01ccd77117bb918711c7d5531b771de53ffc77481ca343] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:56385 - 33422 "HINFO IN 3413088731930338785.2608471205893518960. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.043500879s
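
	The single NXDOMAIN answer to the random-name HINFO query is CoreDNS's loop-detection probe and is expected at startup. The same log can also be retrieved through the API server rather than from the node (a sketch using the pod name shown above):

	  $ kubectl --context multinode-505550 -n kube-system logs coredns-7db6d8ff4d-ljnxn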
	
	
	==> coredns [3e620850e58c82e87316b8c1ff84a833176235ba76dd48543684d19b0982d37d] <==
	[INFO] 10.244.0.3:33000 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001908249s
	[INFO] 10.244.0.3:43787 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000094769s
	[INFO] 10.244.0.3:60770 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00006635s
	[INFO] 10.244.0.3:49510 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001470507s
	[INFO] 10.244.0.3:40767 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000059024s
	[INFO] 10.244.0.3:47550 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000097486s
	[INFO] 10.244.0.3:60616 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000051459s
	[INFO] 10.244.1.2:38540 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000129029s
	[INFO] 10.244.1.2:47437 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00012363s
	[INFO] 10.244.1.2:56690 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000097222s
	[INFO] 10.244.1.2:41948 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000064402s
	[INFO] 10.244.0.3:54434 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000115133s
	[INFO] 10.244.0.3:44435 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000056331s
	[INFO] 10.244.0.3:42535 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000058351s
	[INFO] 10.244.0.3:47369 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00006685s
	[INFO] 10.244.1.2:39250 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000158419s
	[INFO] 10.244.1.2:41088 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000135208s
	[INFO] 10.244.1.2:41901 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000131272s
	[INFO] 10.244.1.2:59936 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000074949s
	[INFO] 10.244.0.3:42361 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000153313s
	[INFO] 10.244.0.3:42372 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000047758s
	[INFO] 10.244.0.3:49151 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000060567s
	[INFO] 10.244.0.3:33056 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000032465s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
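
	This earlier CoreDNS instance (attempt 0) served the busybox lookups for kubernetes.default and host.minikube.internal and then received SIGTERM when the node was restarted. In-cluster resolution after the restart can be spot-checked from the busybox pod itself, assuming nslookup is present in the gcr.io/k8s-minikube/busybox image (a sketch, not output from this run):

	  $ kubectl --context multinode-505550 exec busybox-fc5497c4f-nrpnb -- nslookup kubernetes.default.svc.cluster.local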
	
	
	==> describe nodes <==
	Name:               multinode-505550
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-505550
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=599070631c2216ebc936292d491e4fe10e15b9d8
	                    minikube.k8s.io/name=multinode-505550
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_06_03T11_26_03_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 03 Jun 2024 11:26:00 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-505550
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 03 Jun 2024 11:36:03 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 03 Jun 2024 11:32:19 +0000   Mon, 03 Jun 2024 11:25:58 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 03 Jun 2024 11:32:19 +0000   Mon, 03 Jun 2024 11:25:58 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 03 Jun 2024 11:32:19 +0000   Mon, 03 Jun 2024 11:25:58 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 03 Jun 2024 11:32:19 +0000   Mon, 03 Jun 2024 11:26:21 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.232
	  Hostname:    multinode-505550
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 712f4261d61f4e67b23a2fd880b5e68d
	  System UUID:                712f4261-d61f-4e67-b23a-2fd880b5e68d
	  Boot ID:                    22b55f13-f8d5-4bac-ac0b-f32e25000366
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.1
	  Kube-Proxy Version:         v1.30.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-nrpnb                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m
	  kube-system                 coredns-7db6d8ff4d-ljnxn                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m48s
	  kube-system                 etcd-multinode-505550                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         10m
	  kube-system                 kindnet-x9tml                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      9m47s
	  kube-system                 kube-apiserver-multinode-505550             250m (12%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-controller-manager-multinode-505550    200m (10%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-proxy-nsx2s                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m47s
	  kube-system                 kube-scheduler-multinode-505550             100m (5%)     0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m46s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 9m46s                  kube-proxy       
	  Normal  Starting                 3m43s                  kube-proxy       
	  Normal  NodeHasSufficientPID     10m                    kubelet          Node multinode-505550 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  10m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  10m                    kubelet          Node multinode-505550 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    10m                    kubelet          Node multinode-505550 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 10m                    kubelet          Starting kubelet.
	  Normal  RegisteredNode           9m47s                  node-controller  Node multinode-505550 event: Registered Node multinode-505550 in Controller
	  Normal  NodeReady                9m42s                  kubelet          Node multinode-505550 status is now: NodeReady
	  Normal  Starting                 3m48s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  3m48s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  3m47s (x8 over 3m48s)  kubelet          Node multinode-505550 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m47s (x8 over 3m48s)  kubelet          Node multinode-505550 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m47s (x7 over 3m48s)  kubelet          Node multinode-505550 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           3m31s                  node-controller  Node multinode-505550 event: Registered Node multinode-505550 in Controller
	
	
	Name:               multinode-505550-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-505550-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=599070631c2216ebc936292d491e4fe10e15b9d8
	                    minikube.k8s.io/name=multinode-505550
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_06_03T11_33_00_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 03 Jun 2024 11:32:59 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-505550-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 03 Jun 2024 11:33:40 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Mon, 03 Jun 2024 11:33:30 +0000   Mon, 03 Jun 2024 11:34:22 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Mon, 03 Jun 2024 11:33:30 +0000   Mon, 03 Jun 2024 11:34:22 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Mon, 03 Jun 2024 11:33:30 +0000   Mon, 03 Jun 2024 11:34:22 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Mon, 03 Jun 2024 11:33:30 +0000   Mon, 03 Jun 2024 11:34:22 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.227
	  Hostname:    multinode-505550-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 a2475f98ffcc4239871aee6fac2e077e
	  System UUID:                a2475f98-ffcc-4239-871a-ee6fac2e077e
	  Boot ID:                    4b5ea6fa-dc35-4675-9b3f-59c29a763ee0
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.1
	  Kube-Proxy Version:         v1.30.1
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-85kb9    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m8s
	  kube-system                 kindnet-tgk6j              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      9m12s
	  kube-system                 kube-proxy-65rk5           0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m12s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 2m59s                  kube-proxy       
	  Normal  Starting                 9m7s                   kube-proxy       
	  Normal  NodeHasSufficientMemory  9m12s (x2 over 9m12s)  kubelet          Node multinode-505550-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m12s (x2 over 9m12s)  kubelet          Node multinode-505550-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m12s (x2 over 9m12s)  kubelet          Node multinode-505550-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  9m12s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                9m2s                   kubelet          Node multinode-505550-m02 status is now: NodeReady
	  Normal  NodeHasSufficientMemory  3m4s (x2 over 3m4s)    kubelet          Node multinode-505550-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m4s (x2 over 3m4s)    kubelet          Node multinode-505550-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m4s (x2 over 3m4s)    kubelet          Node multinode-505550-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m4s                   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                2m55s                  kubelet          Node multinode-505550-m02 status is now: NodeReady
	  Normal  NodeNotReady             101s                   node-controller  Node multinode-505550-m02 status is now: NodeNotReady
	
	
	==> dmesg <==
	[  +0.062039] systemd-fstab-generator[608]: Ignoring "noauto" option for root device
	[  +0.183313] systemd-fstab-generator[622]: Ignoring "noauto" option for root device
	[  +0.110159] systemd-fstab-generator[634]: Ignoring "noauto" option for root device
	[  +0.272008] systemd-fstab-generator[663]: Ignoring "noauto" option for root device
	[  +4.077426] systemd-fstab-generator[760]: Ignoring "noauto" option for root device
	[  +4.988776] systemd-fstab-generator[948]: Ignoring "noauto" option for root device
	[  +0.056935] kauditd_printk_skb: 158 callbacks suppressed
	[Jun 3 11:26] systemd-fstab-generator[1283]: Ignoring "noauto" option for root device
	[  +0.085957] kauditd_printk_skb: 69 callbacks suppressed
	[  +5.097188] kauditd_printk_skb: 18 callbacks suppressed
	[  +8.477717] systemd-fstab-generator[1466]: Ignoring "noauto" option for root device
	[  +5.195871] kauditd_printk_skb: 57 callbacks suppressed
	[Jun 3 11:27] kauditd_printk_skb: 17 callbacks suppressed
	[Jun 3 11:32] systemd-fstab-generator[2782]: Ignoring "noauto" option for root device
	[  +0.149446] systemd-fstab-generator[2794]: Ignoring "noauto" option for root device
	[  +0.174460] systemd-fstab-generator[2808]: Ignoring "noauto" option for root device
	[  +0.144642] systemd-fstab-generator[2820]: Ignoring "noauto" option for root device
	[  +0.270599] systemd-fstab-generator[2848]: Ignoring "noauto" option for root device
	[  +8.308450] systemd-fstab-generator[2948]: Ignoring "noauto" option for root device
	[  +0.083136] kauditd_printk_skb: 100 callbacks suppressed
	[  +1.879760] systemd-fstab-generator[3075]: Ignoring "noauto" option for root device
	[  +4.696325] kauditd_printk_skb: 74 callbacks suppressed
	[ +12.468579] kauditd_printk_skb: 32 callbacks suppressed
	[  +2.228723] systemd-fstab-generator[3890]: Ignoring "noauto" option for root device
	[ +19.051296] kauditd_printk_skb: 14 callbacks suppressed
	
	
	==> etcd [ae066b6e74205c8a0af0914a8f63f08a78aaa9c743feba1bfc202950fafd0320] <==
	{"level":"info","ts":"2024-06-03T11:32:17.104292Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-06-03T11:32:17.08276Z","caller":"etcdserver/server.go:760","msg":"starting initial election tick advance","election-ticks":10}
	{"level":"info","ts":"2024-06-03T11:32:17.082905Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-06-03T11:32:17.107678Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-06-03T11:32:17.107712Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-06-03T11:32:17.083326Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"457e62b9766c4f6a switched to configuration voters=(5007548384377851754)"}
	{"level":"info","ts":"2024-06-03T11:32:17.107903Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"6f6de64b207a208a","local-member-id":"457e62b9766c4f6a","added-peer-id":"457e62b9766c4f6a","added-peer-peer-urls":["https://192.168.39.232:2380"]}
	{"level":"info","ts":"2024-06-03T11:32:17.083336Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.39.232:2380"}
	{"level":"info","ts":"2024-06-03T11:32:17.109666Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.39.232:2380"}
	{"level":"info","ts":"2024-06-03T11:32:17.109813Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"6f6de64b207a208a","local-member-id":"457e62b9766c4f6a","cluster-version":"3.5"}
	{"level":"info","ts":"2024-06-03T11:32:17.109861Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-06-03T11:32:18.490798Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"457e62b9766c4f6a is starting a new election at term 2"}
	{"level":"info","ts":"2024-06-03T11:32:18.490896Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"457e62b9766c4f6a became pre-candidate at term 2"}
	{"level":"info","ts":"2024-06-03T11:32:18.490959Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"457e62b9766c4f6a received MsgPreVoteResp from 457e62b9766c4f6a at term 2"}
	{"level":"info","ts":"2024-06-03T11:32:18.491004Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"457e62b9766c4f6a became candidate at term 3"}
	{"level":"info","ts":"2024-06-03T11:32:18.491028Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"457e62b9766c4f6a received MsgVoteResp from 457e62b9766c4f6a at term 3"}
	{"level":"info","ts":"2024-06-03T11:32:18.49106Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"457e62b9766c4f6a became leader at term 3"}
	{"level":"info","ts":"2024-06-03T11:32:18.491096Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 457e62b9766c4f6a elected leader 457e62b9766c4f6a at term 3"}
	{"level":"info","ts":"2024-06-03T11:32:18.496282Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-06-03T11:32:18.496207Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"457e62b9766c4f6a","local-member-attributes":"{Name:multinode-505550 ClientURLs:[https://192.168.39.232:2379]}","request-path":"/0/members/457e62b9766c4f6a/attributes","cluster-id":"6f6de64b207a208a","publish-timeout":"7s"}
	{"level":"info","ts":"2024-06-03T11:32:18.497082Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-06-03T11:32:18.497696Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-06-03T11:32:18.497734Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-06-03T11:32:18.499468Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-06-03T11:32:18.501283Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.232:2379"}
	
	
	==> etcd [e609ee17b90fa82d5d04fe16520a0c6782e7dea24d30dbb0e9379f9249c34dd0] <==
	{"level":"info","ts":"2024-06-03T11:25:58.416356Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-06-03T11:25:58.416393Z","caller":"etcdserver/server.go:2602","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-06-03T11:25:58.433953Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-06-03T11:25:58.434009Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-06-03T11:25:58.450671Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.232:2379"}
	{"level":"info","ts":"2024-06-03T11:26:51.900068Z","caller":"traceutil/trace.go:171","msg":"trace[1861990371] transaction","detail":"{read_only:false; response_revision:438; number_of_response:1; }","duration":"248.858979ms","start":"2024-06-03T11:26:51.651162Z","end":"2024-06-03T11:26:51.900021Z","steps":["trace[1861990371] 'process raft request'  (duration: 248.71955ms)"],"step_count":1}
	{"level":"info","ts":"2024-06-03T11:26:51.901706Z","caller":"traceutil/trace.go:171","msg":"trace[1620774965] transaction","detail":"{read_only:false; response_revision:439; number_of_response:1; }","duration":"167.377627ms","start":"2024-06-03T11:26:51.734316Z","end":"2024-06-03T11:26:51.901694Z","steps":["trace[1620774965] 'process raft request'  (duration: 167.209836ms)"],"step_count":1}
	{"level":"info","ts":"2024-06-03T11:27:38.99007Z","caller":"traceutil/trace.go:171","msg":"trace[676529011] transaction","detail":"{read_only:false; response_revision:567; number_of_response:1; }","duration":"232.734135ms","start":"2024-06-03T11:27:38.75732Z","end":"2024-06-03T11:27:38.990054Z","steps":["trace[676529011] 'process raft request'  (duration: 232.608636ms)"],"step_count":1}
	{"level":"info","ts":"2024-06-03T11:27:38.990517Z","caller":"traceutil/trace.go:171","msg":"trace[1676020132] linearizableReadLoop","detail":"{readStateIndex:605; appliedIndex:605; }","duration":"170.13301ms","start":"2024-06-03T11:27:38.820358Z","end":"2024-06-03T11:27:38.990491Z","steps":["trace[1676020132] 'read index received'  (duration: 170.128073ms)","trace[1676020132] 'applied index is now lower than readState.Index'  (duration: 4.048µs)"],"step_count":2}
	{"level":"warn","ts":"2024-06-03T11:27:38.990821Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"170.380797ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/multinode-505550-m03\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-06-03T11:27:38.990922Z","caller":"traceutil/trace.go:171","msg":"trace[310359396] range","detail":"{range_begin:/registry/minions/multinode-505550-m03; range_end:; response_count:0; response_revision:567; }","duration":"170.573602ms","start":"2024-06-03T11:27:38.820333Z","end":"2024-06-03T11:27:38.990907Z","steps":["trace[310359396] 'agreement among raft nodes before linearized reading'  (duration: 170.349278ms)"],"step_count":1}
	{"level":"info","ts":"2024-06-03T11:27:39.004672Z","caller":"traceutil/trace.go:171","msg":"trace[171574390] transaction","detail":"{read_only:false; response_revision:568; number_of_response:1; }","duration":"172.877263ms","start":"2024-06-03T11:27:38.83178Z","end":"2024-06-03T11:27:39.004657Z","steps":["trace[171574390] 'process raft request'  (duration: 172.273394ms)"],"step_count":1}
	{"level":"warn","ts":"2024-06-03T11:27:44.914002Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"105.602437ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/kube-proxy-xmrf4\" ","response":"range_response_count:1 size:4657"}
	{"level":"info","ts":"2024-06-03T11:27:44.914075Z","caller":"traceutil/trace.go:171","msg":"trace[1736341909] range","detail":"{range_begin:/registry/pods/kube-system/kube-proxy-xmrf4; range_end:; response_count:1; response_revision:609; }","duration":"105.692204ms","start":"2024-06-03T11:27:44.80836Z","end":"2024-06-03T11:27:44.914053Z","steps":["trace[1736341909] 'range keys from in-memory index tree'  (duration: 105.464536ms)"],"step_count":1}
	{"level":"info","ts":"2024-06-03T11:27:45.166729Z","caller":"traceutil/trace.go:171","msg":"trace[1628375869] transaction","detail":"{read_only:false; response_revision:610; number_of_response:1; }","duration":"244.652446ms","start":"2024-06-03T11:27:44.92206Z","end":"2024-06-03T11:27:45.166713Z","steps":["trace[1628375869] 'process raft request'  (duration: 244.442804ms)"],"step_count":1}
	{"level":"info","ts":"2024-06-03T11:30:33.374409Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-06-03T11:30:33.374689Z","caller":"embed/etcd.go:375","msg":"closing etcd server","name":"multinode-505550","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.232:2380"],"advertise-client-urls":["https://192.168.39.232:2379"]}
	{"level":"warn","ts":"2024-06-03T11:30:33.374846Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-06-03T11:30:33.37503Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-06-03T11:30:33.469012Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.232:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-06-03T11:30:33.469287Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.232:2379: use of closed network connection"}
	{"level":"info","ts":"2024-06-03T11:30:33.469401Z","caller":"etcdserver/server.go:1471","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"457e62b9766c4f6a","current-leader-member-id":"457e62b9766c4f6a"}
	{"level":"info","ts":"2024-06-03T11:30:33.471855Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.39.232:2380"}
	{"level":"info","ts":"2024-06-03T11:30:33.471999Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.39.232:2380"}
	{"level":"info","ts":"2024-06-03T11:30:33.472034Z","caller":"embed/etcd.go:377","msg":"closed etcd server","name":"multinode-505550","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.232:2380"],"advertise-client-urls":["https://192.168.39.232:2379"]}
	
	
	==> kernel <==
	 11:36:04 up 10 min,  0 users,  load average: 0.16, 0.26, 0.17
	Linux multinode-505550 5.10.207 #1 SMP Wed May 22 22:17:16 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [43e352950fd35bb947f3ab7aaf02e79570246ddf2cac8d458867155296100368] <==
	I0603 11:29:51.486417       1 main.go:250] Node multinode-505550-m03 has CIDR [10.244.3.0/24] 
	I0603 11:30:01.493653       1 main.go:223] Handling node with IPs: map[192.168.39.232:{}]
	I0603 11:30:01.493694       1 main.go:227] handling current node
	I0603 11:30:01.493708       1 main.go:223] Handling node with IPs: map[192.168.39.227:{}]
	I0603 11:30:01.493714       1 main.go:250] Node multinode-505550-m02 has CIDR [10.244.1.0/24] 
	I0603 11:30:01.493825       1 main.go:223] Handling node with IPs: map[192.168.39.172:{}]
	I0603 11:30:01.493851       1 main.go:250] Node multinode-505550-m03 has CIDR [10.244.3.0/24] 
	I0603 11:30:11.507635       1 main.go:223] Handling node with IPs: map[192.168.39.232:{}]
	I0603 11:30:11.507755       1 main.go:227] handling current node
	I0603 11:30:11.507840       1 main.go:223] Handling node with IPs: map[192.168.39.227:{}]
	I0603 11:30:11.507847       1 main.go:250] Node multinode-505550-m02 has CIDR [10.244.1.0/24] 
	I0603 11:30:11.508093       1 main.go:223] Handling node with IPs: map[192.168.39.172:{}]
	I0603 11:30:11.508121       1 main.go:250] Node multinode-505550-m03 has CIDR [10.244.3.0/24] 
	I0603 11:30:21.522146       1 main.go:223] Handling node with IPs: map[192.168.39.232:{}]
	I0603 11:30:21.522197       1 main.go:227] handling current node
	I0603 11:30:21.522209       1 main.go:223] Handling node with IPs: map[192.168.39.227:{}]
	I0603 11:30:21.522213       1 main.go:250] Node multinode-505550-m02 has CIDR [10.244.1.0/24] 
	I0603 11:30:21.522353       1 main.go:223] Handling node with IPs: map[192.168.39.172:{}]
	I0603 11:30:21.522379       1 main.go:250] Node multinode-505550-m03 has CIDR [10.244.3.0/24] 
	I0603 11:30:31.534521       1 main.go:223] Handling node with IPs: map[192.168.39.232:{}]
	I0603 11:30:31.534847       1 main.go:227] handling current node
	I0603 11:30:31.534894       1 main.go:223] Handling node with IPs: map[192.168.39.227:{}]
	I0603 11:30:31.534925       1 main.go:250] Node multinode-505550-m02 has CIDR [10.244.1.0/24] 
	I0603 11:30:31.535139       1 main.go:223] Handling node with IPs: map[192.168.39.172:{}]
	I0603 11:30:31.535171       1 main.go:250] Node multinode-505550-m03 has CIDR [10.244.3.0/24] 
	
	
	==> kindnet [cac8e61c821989854b2f55119cfd9761a0a47f8ea2393d5c18efb4b8ae23279a] <==
	I0603 11:35:01.712334       1 main.go:250] Node multinode-505550-m02 has CIDR [10.244.1.0/24] 
	I0603 11:35:11.718457       1 main.go:223] Handling node with IPs: map[192.168.39.232:{}]
	I0603 11:35:11.718561       1 main.go:227] handling current node
	I0603 11:35:11.718673       1 main.go:223] Handling node with IPs: map[192.168.39.227:{}]
	I0603 11:35:11.718719       1 main.go:250] Node multinode-505550-m02 has CIDR [10.244.1.0/24] 
	I0603 11:35:21.723976       1 main.go:223] Handling node with IPs: map[192.168.39.232:{}]
	I0603 11:35:21.724013       1 main.go:227] handling current node
	I0603 11:35:21.724024       1 main.go:223] Handling node with IPs: map[192.168.39.227:{}]
	I0603 11:35:21.724029       1 main.go:250] Node multinode-505550-m02 has CIDR [10.244.1.0/24] 
	I0603 11:35:31.727967       1 main.go:223] Handling node with IPs: map[192.168.39.232:{}]
	I0603 11:35:31.728044       1 main.go:227] handling current node
	I0603 11:35:31.728056       1 main.go:223] Handling node with IPs: map[192.168.39.227:{}]
	I0603 11:35:31.728061       1 main.go:250] Node multinode-505550-m02 has CIDR [10.244.1.0/24] 
	I0603 11:35:41.741467       1 main.go:223] Handling node with IPs: map[192.168.39.232:{}]
	I0603 11:35:41.741661       1 main.go:227] handling current node
	I0603 11:35:41.741708       1 main.go:223] Handling node with IPs: map[192.168.39.227:{}]
	I0603 11:35:41.741728       1 main.go:250] Node multinode-505550-m02 has CIDR [10.244.1.0/24] 
	I0603 11:35:51.754436       1 main.go:223] Handling node with IPs: map[192.168.39.232:{}]
	I0603 11:35:51.754522       1 main.go:227] handling current node
	I0603 11:35:51.754551       1 main.go:223] Handling node with IPs: map[192.168.39.227:{}]
	I0603 11:35:51.754632       1 main.go:250] Node multinode-505550-m02 has CIDR [10.244.1.0/24] 
	I0603 11:36:01.758695       1 main.go:223] Handling node with IPs: map[192.168.39.232:{}]
	I0603 11:36:01.758803       1 main.go:227] handling current node
	I0603 11:36:01.758827       1 main.go:223] Handling node with IPs: map[192.168.39.227:{}]
	I0603 11:36:01.758843       1 main.go:250] Node multinode-505550-m02 has CIDR [10.244.1.0/24] 
	
	
	==> kube-apiserver [9829e2309203856bfbdd1f4b1b8799484a5e0888c43841f2f409be895f44ac40] <==
	W0603 11:30:33.405371       1 logging.go:59] [core] [Channel #124 SubChannel #125] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0603 11:30:33.405423       1 logging.go:59] [core] [Channel #70 SubChannel #71] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0603 11:30:33.405469       1 logging.go:59] [core] [Channel #88 SubChannel #89] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0603 11:30:33.405516       1 logging.go:59] [core] [Channel #181 SubChannel #182] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0603 11:30:33.405773       1 logging.go:59] [core] [Channel #19 SubChannel #20] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0603 11:30:33.405844       1 logging.go:59] [core] [Channel #22 SubChannel #23] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0603 11:30:33.405888       1 logging.go:59] [core] [Channel #64 SubChannel #65] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0603 11:30:33.405931       1 logging.go:59] [core] [Channel #118 SubChannel #119] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0603 11:30:33.405978       1 logging.go:59] [core] [Channel #43 SubChannel #44] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0603 11:30:33.406017       1 logging.go:59] [core] [Channel #52 SubChannel #53] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0603 11:30:33.406058       1 logging.go:59] [core] [Channel #73 SubChannel #74] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0603 11:30:33.406121       1 logging.go:59] [core] [Channel #112 SubChannel #113] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0603 11:30:33.408153       1 logging.go:59] [core] [Channel #115 SubChannel #116] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0603 11:30:33.408238       1 logging.go:59] [core] [Channel #139 SubChannel #140] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0603 11:30:33.413525       1 logging.go:59] [core] [Channel #121 SubChannel #122] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0603 11:30:33.414409       1 logging.go:59] [core] [Channel #91 SubChannel #92] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0603 11:30:33.414798       1 logging.go:59] [core] [Channel #175 SubChannel #176] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0603 11:30:33.414867       1 logging.go:59] [core] [Channel #157 SubChannel #158] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0603 11:30:33.414902       1 logging.go:59] [core] [Channel #34 SubChannel #35] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0603 11:30:33.414937       1 logging.go:59] [core] [Channel #178 SubChannel #179] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0603 11:30:33.414964       1 logging.go:59] [core] [Channel #109 SubChannel #110] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0603 11:30:33.415000       1 logging.go:59] [core] [Channel #100 SubChannel #101] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0603 11:30:33.415028       1 logging.go:59] [core] [Channel #25 SubChannel #26] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0603 11:30:33.415055       1 logging.go:59] [core] [Channel #28 SubChannel #29] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0603 11:30:33.415230       1 logging.go:59] [core] [Channel #103 SubChannel #104] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-apiserver [b65f722b1ce16783ecadc9ea08611a29cb1fbe8ca0ae7bffea150a18f7d41e12] <==
	I0603 11:32:19.844624       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0603 11:32:19.847631       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0603 11:32:19.850467       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0603 11:32:19.850514       1 policy_source.go:224] refreshing policies
	I0603 11:32:19.865371       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0603 11:32:19.865526       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0603 11:32:19.870169       1 shared_informer.go:320] Caches are synced for configmaps
	I0603 11:32:19.870300       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0603 11:32:19.870327       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0603 11:32:19.881856       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0603 11:32:19.882257       1 aggregator.go:165] initial CRD sync complete...
	I0603 11:32:19.882325       1 autoregister_controller.go:141] Starting autoregister controller
	I0603 11:32:19.882351       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0603 11:32:19.882434       1 cache.go:39] Caches are synced for autoregister controller
	I0603 11:32:19.888720       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	I0603 11:32:19.909409       1 shared_informer.go:320] Caches are synced for node_authorizer
	E0603 11:32:19.979200       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0603 11:32:20.749299       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0603 11:32:21.835527       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0603 11:32:21.978282       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0603 11:32:21.990304       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0603 11:32:22.067151       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0603 11:32:22.076288       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0603 11:32:32.771268       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0603 11:32:32.825884       1 controller.go:615] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [33e99de01a6dc667301bf4e986f05c6cd755b871f915be9f69a980829aa428ff] <==
	I0603 11:32:59.757335       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-505550-m02" podCIDRs=["10.244.1.0/24"]
	I0603 11:33:01.643408       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="37.791µs"
	I0603 11:33:01.654391       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="37.749µs"
	I0603 11:33:01.665978       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="92.636µs"
	I0603 11:33:01.710615       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="91.037µs"
	I0603 11:33:01.718761       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="49.736µs"
	I0603 11:33:01.723900       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="50.328µs"
	I0603 11:33:02.691559       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="76.65µs"
	I0603 11:33:08.753629       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-505550-m02"
	I0603 11:33:08.771201       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="35.524µs"
	I0603 11:33:08.784907       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="57.257µs"
	I0603 11:33:12.006702       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="7.986072ms"
	I0603 11:33:12.006815       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="52.532µs"
	I0603 11:33:26.973132       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-505550-m02"
	I0603 11:33:27.963676       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-505550-m03\" does not exist"
	I0603 11:33:27.964213       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-505550-m02"
	I0603 11:33:27.986517       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-505550-m03" podCIDRs=["10.244.2.0/24"]
	I0603 11:33:37.277149       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-505550-m02"
	I0603 11:33:42.698638       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-505550-m02"
	I0603 11:34:22.929081       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="13.838431ms"
	I0603 11:34:22.929178       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="27.675µs"
	I0603 11:34:32.774104       1 gc_controller.go:344] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kindnet-bbh8q"
	I0603 11:34:32.803866       1 gc_controller.go:260] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kindnet-bbh8q"
	I0603 11:34:32.803948       1 gc_controller.go:344] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kube-proxy-xmrf4"
	I0603 11:34:32.828417       1 gc_controller.go:260] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kube-proxy-xmrf4"
	
	
	==> kube-controller-manager [9bc2d863a2009fba4ad23b3993c51be79fa80cc8da9b5c150ce013d6fd17f6c9] <==
	I0603 11:26:26.092640       1 node_lifecycle_controller.go:1050] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	I0603 11:26:51.905111       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-505550-m02\" does not exist"
	I0603 11:26:51.962084       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-505550-m02" podCIDRs=["10.244.1.0/24"]
	I0603 11:26:56.096951       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-505550-m02"
	I0603 11:27:01.350160       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-505550-m02"
	I0603 11:27:03.458412       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="55.847302ms"
	I0603 11:27:03.475203       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="15.309737ms"
	I0603 11:27:03.475287       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="43.045µs"
	I0603 11:27:06.733847       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="7.728027ms"
	I0603 11:27:06.734809       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="78.399µs"
	I0603 11:27:08.040351       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="5.383936ms"
	I0603 11:27:08.040439       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="38.863µs"
	I0603 11:27:39.008920       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-505550-m02"
	I0603 11:27:39.012344       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-505550-m03\" does not exist"
	I0603 11:27:39.058619       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-505550-m03" podCIDRs=["10.244.2.0/24"]
	I0603 11:27:41.117050       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-505550-m03"
	I0603 11:27:48.737553       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-505550-m03"
	I0603 11:28:16.736699       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-505550-m02"
	I0603 11:28:17.803337       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-505550-m02"
	I0603 11:28:17.803969       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-505550-m03\" does not exist"
	I0603 11:28:17.824617       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-505550-m03" podCIDRs=["10.244.3.0/24"]
	I0603 11:28:27.118694       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-505550-m02"
	I0603 11:29:06.164934       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-505550-m03"
	I0603 11:29:06.217204       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="10.275346ms"
	I0603 11:29:06.218184       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="27.844µs"
	
	
	==> kube-proxy [7a7dc7ea2138c737fb8cb1375c84e7cbe5eda8ccfff2a0abd6c6e6098e38901e] <==
	I0603 11:32:20.709424       1 server_linux.go:69] "Using iptables proxy"
	I0603 11:32:20.720743       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.232"]
	I0603 11:32:20.772788       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0603 11:32:20.772914       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0603 11:32:20.772993       1 server_linux.go:165] "Using iptables Proxier"
	I0603 11:32:20.785665       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0603 11:32:20.786033       1 server.go:872] "Version info" version="v1.30.1"
	I0603 11:32:20.786117       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0603 11:32:20.789065       1 config.go:192] "Starting service config controller"
	I0603 11:32:20.789110       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0603 11:32:20.789149       1 config.go:101] "Starting endpoint slice config controller"
	I0603 11:32:20.789168       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0603 11:32:20.789912       1 config.go:319] "Starting node config controller"
	I0603 11:32:20.789943       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0603 11:32:20.890100       1 shared_informer.go:320] Caches are synced for node config
	I0603 11:32:20.890225       1 shared_informer.go:320] Caches are synced for service config
	I0603 11:32:20.890322       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-proxy [d6635384a19f3973b8ebdd125fd196355c7f163f405241bcdcb3848c0ae5bfc8] <==
	I0603 11:26:17.127305       1 server_linux.go:69] "Using iptables proxy"
	I0603 11:26:17.148936       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.232"]
	I0603 11:26:17.267892       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0603 11:26:17.268055       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0603 11:26:17.268130       1 server_linux.go:165] "Using iptables Proxier"
	I0603 11:26:17.272267       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0603 11:26:17.272655       1 server.go:872] "Version info" version="v1.30.1"
	I0603 11:26:17.272949       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0603 11:26:17.276372       1 config.go:192] "Starting service config controller"
	I0603 11:26:17.276486       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0603 11:26:17.276669       1 config.go:101] "Starting endpoint slice config controller"
	I0603 11:26:17.276742       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0603 11:26:17.278117       1 config.go:319] "Starting node config controller"
	I0603 11:26:17.278219       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0603 11:26:17.377373       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0603 11:26:17.377362       1 shared_informer.go:320] Caches are synced for service config
	I0603 11:26:17.378981       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [1aa2017e346a1a9e3efe275c258488513afc245438f371561147ec9432b5222a] <==
	I0603 11:32:17.489771       1 serving.go:380] Generated self-signed cert in-memory
	W0603 11:32:19.848085       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0603 11:32:19.850650       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0603 11:32:19.850788       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0603 11:32:19.850817       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0603 11:32:19.907848       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.1"
	I0603 11:32:19.907930       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0603 11:32:19.909515       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0603 11:32:19.909794       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0603 11:32:19.912944       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0603 11:32:19.909822       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0603 11:32:20.013653       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [37aee72ac00be32936d32e337e9e01a378fb4992a9cf7ed31775dcbfa8ef8d20] <==
	E0603 11:26:00.007551       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0603 11:26:00.002733       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0603 11:26:00.007707       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0603 11:26:00.002878       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0603 11:26:00.007814       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0603 11:26:00.003067       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0603 11:26:00.007918       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0603 11:26:00.003164       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0603 11:26:00.008029       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0603 11:26:00.006917       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0603 11:26:00.008133       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0603 11:26:00.820825       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0603 11:26:00.820993       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0603 11:26:00.950508       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0603 11:26:00.950706       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0603 11:26:00.994479       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0603 11:26:00.994630       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0603 11:26:01.017014       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0603 11:26:01.017097       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0603 11:26:01.116717       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0603 11:26:01.116845       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0603 11:26:01.157179       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0603 11:26:01.157326       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I0603 11:26:03.169906       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0603 11:30:33.385968       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Jun 03 11:32:19 multinode-505550 kubelet[3082]: I0603 11:32:19.918105    3082 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8009dbea-f826-44c0-87e5-229b6efdfadc-lib-modules\") pod \"kindnet-x9tml\" (UID: \"8009dbea-f826-44c0-87e5-229b6efdfadc\") " pod="kube-system/kindnet-x9tml"
	Jun 03 11:32:19 multinode-505550 kubelet[3082]: I0603 11:32:19.918163    3082 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/261dd21c-29c2-4178-8c07-95f680e12cd1-lib-modules\") pod \"kube-proxy-nsx2s\" (UID: \"261dd21c-29c2-4178-8c07-95f680e12cd1\") " pod="kube-system/kube-proxy-nsx2s"
	Jun 03 11:32:19 multinode-505550 kubelet[3082]: I0603 11:32:19.918179    3082 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/8009dbea-f826-44c0-87e5-229b6efdfadc-cni-cfg\") pod \"kindnet-x9tml\" (UID: \"8009dbea-f826-44c0-87e5-229b6efdfadc\") " pod="kube-system/kindnet-x9tml"
	Jun 03 11:32:19 multinode-505550 kubelet[3082]: I0603 11:32:19.918193    3082 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/cdb43188-2f13-4ea2-b906-3428f776eeb4-tmp\") pod \"storage-provisioner\" (UID: \"cdb43188-2f13-4ea2-b906-3428f776eeb4\") " pod="kube-system/storage-provisioner"
	Jun 03 11:32:19 multinode-505550 kubelet[3082]: I0603 11:32:19.918232    3082 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/261dd21c-29c2-4178-8c07-95f680e12cd1-xtables-lock\") pod \"kube-proxy-nsx2s\" (UID: \"261dd21c-29c2-4178-8c07-95f680e12cd1\") " pod="kube-system/kube-proxy-nsx2s"
	Jun 03 11:32:19 multinode-505550 kubelet[3082]: I0603 11:32:19.992473    3082 kubelet_node_status.go:112] "Node was previously registered" node="multinode-505550"
	Jun 03 11:32:19 multinode-505550 kubelet[3082]: I0603 11:32:19.992626    3082 kubelet_node_status.go:76] "Successfully registered node" node="multinode-505550"
	Jun 03 11:32:19 multinode-505550 kubelet[3082]: I0603 11:32:19.994665    3082 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Jun 03 11:32:19 multinode-505550 kubelet[3082]: I0603 11:32:19.995564    3082 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Jun 03 11:32:25 multinode-505550 kubelet[3082]: I0603 11:32:25.262751    3082 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Jun 03 11:33:15 multinode-505550 kubelet[3082]: E0603 11:33:15.935462    3082 iptables.go:577] "Could not set up iptables canary" err=<
	Jun 03 11:33:15 multinode-505550 kubelet[3082]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jun 03 11:33:15 multinode-505550 kubelet[3082]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jun 03 11:33:15 multinode-505550 kubelet[3082]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 03 11:33:15 multinode-505550 kubelet[3082]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jun 03 11:34:15 multinode-505550 kubelet[3082]: E0603 11:34:15.935346    3082 iptables.go:577] "Could not set up iptables canary" err=<
	Jun 03 11:34:15 multinode-505550 kubelet[3082]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jun 03 11:34:15 multinode-505550 kubelet[3082]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jun 03 11:34:15 multinode-505550 kubelet[3082]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 03 11:34:15 multinode-505550 kubelet[3082]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jun 03 11:35:15 multinode-505550 kubelet[3082]: E0603 11:35:15.935057    3082 iptables.go:577] "Could not set up iptables canary" err=<
	Jun 03 11:35:15 multinode-505550 kubelet[3082]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jun 03 11:35:15 multinode-505550 kubelet[3082]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jun 03 11:35:15 multinode-505550 kubelet[3082]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 03 11:35:15 multinode-505550 kubelet[3082]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0603 11:36:03.192238   46029 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/19008-7755/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-505550 -n multinode-505550
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-505550 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/StopMultiNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/StopMultiNode (141.19s)
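Note: the "failed to read file .../lastStart.txt: bufio.Scanner: token too long" line in the stderr block above is Go's bufio.Scanner hitting its default 64 KiB per-token limit on a very long log line. A minimal standalone sketch of reading such a file with an enlarged scanner buffer (illustration only, not minikube's actual logs.go code; the file path is hypothetical):

package main

import (
	"bufio"
	"fmt"
	"log"
	"os"
)

func main() {
	f, err := os.Open("lastStart.txt") // illustrative path
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	sc := bufio.NewScanner(f)
	// The default limit is bufio.MaxScanTokenSize (64 KiB); raise it so a
	// single very long log line does not trigger "token too long".
	sc.Buffer(make([]byte, 0, 64*1024), 10*1024*1024)

	for sc.Scan() {
		fmt.Println(sc.Text())
	}
	if err := sc.Err(); err != nil {
		log.Fatalf("scan failed: %v", err)
	}
}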

                                                
                                    
TestPreload (265.99s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-206663 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.24.4
E0603 11:40:19.213362   15028 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/functional-835483/client.crt: no such file or directory
E0603 11:41:55.089459   15028 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/addons-926744/client.crt: no such file or directory
E0603 11:42:12.037661   15028 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/addons-926744/client.crt: no such file or directory
preload_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-206663 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.24.4: (3m1.993797405s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-206663 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-amd64 -p test-preload-206663 image pull gcr.io/k8s-minikube/busybox: (2.676134766s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-206663
preload_test.go:58: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-206663: (7.276644001s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-206663 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio
preload_test.go:66: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-206663 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio: (1m11.286794844s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-206663 image list
preload_test.go:76: Expected to find gcr.io/k8s-minikube/busybox in image list output, instead got 
-- stdout --
	registry.k8s.io/pause:3.7
	registry.k8s.io/kube-scheduler:v1.24.4
	registry.k8s.io/kube-proxy:v1.24.4
	registry.k8s.io/kube-controller-manager:v1.24.4
	registry.k8s.io/kube-apiserver:v1.24.4
	registry.k8s.io/etcd:3.5.3-0
	registry.k8s.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/kube-scheduler:v1.24.4
	k8s.gcr.io/kube-proxy:v1.24.4
	k8s.gcr.io/kube-controller-manager:v1.24.4
	k8s.gcr.io/kube-apiserver:v1.24.4
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/coredns/coredns:v1.8.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	docker.io/kindest/kindnetd:v20220726-ed811e41

                                                
                                                
-- /stdout --
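The assertion at preload_test.go:76 expects the busybox image pulled before the stop/start cycle to still be present after the restart, but the list above contains only the preloaded Kubernetes images. A rough sketch of that style of check (hypothetical helper and names, not the actual minikube test code):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// imageListContains runs `minikube -p <profile> image list` and reports
// whether the output mentions the wanted image reference.
func imageListContains(profile, image string) (bool, error) {
	out, err := exec.Command("minikube", "-p", profile, "image", "list").CombinedOutput()
	if err != nil {
		return false, fmt.Errorf("image list failed: %v\n%s", err, out)
	}
	return strings.Contains(string(out), image), nil
}

func main() {
	ok, err := imageListContains("test-preload-206663", "gcr.io/k8s-minikube/busybox")
	if err != nil {
		fmt.Println(err)
		return
	}
	if !ok {
		fmt.Println("expected gcr.io/k8s-minikube/busybox in image list output")
	}
}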
panic.go:626: *** TestPreload FAILED at 2024-06-03 11:44:39.924884887 +0000 UTC m=+3979.829304762
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p test-preload-206663 -n test-preload-206663
helpers_test.go:244: <<< TestPreload FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPreload]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-206663 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p test-preload-206663 logs -n 25: (1.024492109s)
helpers_test.go:252: TestPreload logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| ssh     | multinode-505550 ssh -n                                                                 | multinode-505550     | jenkins | v1.33.1 | 03 Jun 24 11:27 UTC | 03 Jun 24 11:27 UTC |
	|         | multinode-505550-m03 sudo cat                                                           |                      |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                      |         |         |                     |                     |
	| ssh     | multinode-505550 ssh -n multinode-505550 sudo cat                                       | multinode-505550     | jenkins | v1.33.1 | 03 Jun 24 11:27 UTC | 03 Jun 24 11:27 UTC |
	|         | /home/docker/cp-test_multinode-505550-m03_multinode-505550.txt                          |                      |         |         |                     |                     |
	| cp      | multinode-505550 cp multinode-505550-m03:/home/docker/cp-test.txt                       | multinode-505550     | jenkins | v1.33.1 | 03 Jun 24 11:27 UTC | 03 Jun 24 11:27 UTC |
	|         | multinode-505550-m02:/home/docker/cp-test_multinode-505550-m03_multinode-505550-m02.txt |                      |         |         |                     |                     |
	| ssh     | multinode-505550 ssh -n                                                                 | multinode-505550     | jenkins | v1.33.1 | 03 Jun 24 11:27 UTC | 03 Jun 24 11:27 UTC |
	|         | multinode-505550-m03 sudo cat                                                           |                      |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                      |         |         |                     |                     |
	| ssh     | multinode-505550 ssh -n multinode-505550-m02 sudo cat                                   | multinode-505550     | jenkins | v1.33.1 | 03 Jun 24 11:27 UTC | 03 Jun 24 11:27 UTC |
	|         | /home/docker/cp-test_multinode-505550-m03_multinode-505550-m02.txt                      |                      |         |         |                     |                     |
	| node    | multinode-505550 node stop m03                                                          | multinode-505550     | jenkins | v1.33.1 | 03 Jun 24 11:27 UTC | 03 Jun 24 11:27 UTC |
	| node    | multinode-505550 node start                                                             | multinode-505550     | jenkins | v1.33.1 | 03 Jun 24 11:28 UTC | 03 Jun 24 11:28 UTC |
	|         | m03 -v=7 --alsologtostderr                                                              |                      |         |         |                     |                     |
	| node    | list -p multinode-505550                                                                | multinode-505550     | jenkins | v1.33.1 | 03 Jun 24 11:28 UTC |                     |
	| stop    | -p multinode-505550                                                                     | multinode-505550     | jenkins | v1.33.1 | 03 Jun 24 11:28 UTC |                     |
	| start   | -p multinode-505550                                                                     | multinode-505550     | jenkins | v1.33.1 | 03 Jun 24 11:30 UTC | 03 Jun 24 11:33 UTC |
	|         | --wait=true -v=8                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                      |         |         |                     |                     |
	| node    | list -p multinode-505550                                                                | multinode-505550     | jenkins | v1.33.1 | 03 Jun 24 11:33 UTC |                     |
	| node    | multinode-505550 node delete                                                            | multinode-505550     | jenkins | v1.33.1 | 03 Jun 24 11:33 UTC | 03 Jun 24 11:33 UTC |
	|         | m03                                                                                     |                      |         |         |                     |                     |
	| stop    | multinode-505550 stop                                                                   | multinode-505550     | jenkins | v1.33.1 | 03 Jun 24 11:33 UTC |                     |
	| start   | -p multinode-505550                                                                     | multinode-505550     | jenkins | v1.33.1 | 03 Jun 24 11:36 UTC | 03 Jun 24 11:39 UTC |
	|         | --wait=true -v=8                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                           |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| node    | list -p multinode-505550                                                                | multinode-505550     | jenkins | v1.33.1 | 03 Jun 24 11:39 UTC |                     |
	| start   | -p multinode-505550-m02                                                                 | multinode-505550-m02 | jenkins | v1.33.1 | 03 Jun 24 11:39 UTC |                     |
	|         | --driver=kvm2                                                                           |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| start   | -p multinode-505550-m03                                                                 | multinode-505550-m03 | jenkins | v1.33.1 | 03 Jun 24 11:39 UTC | 03 Jun 24 11:40 UTC |
	|         | --driver=kvm2                                                                           |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| node    | add -p multinode-505550                                                                 | multinode-505550     | jenkins | v1.33.1 | 03 Jun 24 11:40 UTC |                     |
	| delete  | -p multinode-505550-m03                                                                 | multinode-505550-m03 | jenkins | v1.33.1 | 03 Jun 24 11:40 UTC | 03 Jun 24 11:40 UTC |
	| delete  | -p multinode-505550                                                                     | multinode-505550     | jenkins | v1.33.1 | 03 Jun 24 11:40 UTC | 03 Jun 24 11:40 UTC |
	| start   | -p test-preload-206663                                                                  | test-preload-206663  | jenkins | v1.33.1 | 03 Jun 24 11:40 UTC | 03 Jun 24 11:43 UTC |
	|         | --memory=2200                                                                           |                      |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                                                           |                      |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                                                           |                      |         |         |                     |                     |
	|         |  --container-runtime=crio                                                               |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.24.4                                                            |                      |         |         |                     |                     |
	| image   | test-preload-206663 image pull                                                          | test-preload-206663  | jenkins | v1.33.1 | 03 Jun 24 11:43 UTC | 03 Jun 24 11:43 UTC |
	|         | gcr.io/k8s-minikube/busybox                                                             |                      |         |         |                     |                     |
	| stop    | -p test-preload-206663                                                                  | test-preload-206663  | jenkins | v1.33.1 | 03 Jun 24 11:43 UTC | 03 Jun 24 11:43 UTC |
	| start   | -p test-preload-206663                                                                  | test-preload-206663  | jenkins | v1.33.1 | 03 Jun 24 11:43 UTC | 03 Jun 24 11:44 UTC |
	|         | --memory=2200                                                                           |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                  |                      |         |         |                     |                     |
	|         | --wait=true --driver=kvm2                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| image   | test-preload-206663 image list                                                          | test-preload-206663  | jenkins | v1.33.1 | 03 Jun 24 11:44 UTC | 03 Jun 24 11:44 UTC |
	|---------|-----------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/06/03 11:43:28
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0603 11:43:28.458625   48771 out.go:291] Setting OutFile to fd 1 ...
	I0603 11:43:28.458862   48771 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0603 11:43:28.458871   48771 out.go:304] Setting ErrFile to fd 2...
	I0603 11:43:28.458875   48771 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0603 11:43:28.459085   48771 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19008-7755/.minikube/bin
	I0603 11:43:28.459589   48771 out.go:298] Setting JSON to false
	I0603 11:43:28.460436   48771 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":5153,"bootTime":1717409855,"procs":182,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1060-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0603 11:43:28.460491   48771 start.go:139] virtualization: kvm guest
	I0603 11:43:28.462788   48771 out.go:177] * [test-preload-206663] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0603 11:43:28.464082   48771 notify.go:220] Checking for updates...
	I0603 11:43:28.464087   48771 out.go:177]   - MINIKUBE_LOCATION=19008
	I0603 11:43:28.465336   48771 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0603 11:43:28.466710   48771 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19008-7755/kubeconfig
	I0603 11:43:28.467851   48771 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19008-7755/.minikube
	I0603 11:43:28.468976   48771 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0603 11:43:28.470050   48771 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0603 11:43:28.471515   48771 config.go:182] Loaded profile config "test-preload-206663": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.4
	I0603 11:43:28.471888   48771 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 11:43:28.471928   48771 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 11:43:28.486035   48771 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34895
	I0603 11:43:28.486400   48771 main.go:141] libmachine: () Calling .GetVersion
	I0603 11:43:28.486860   48771 main.go:141] libmachine: Using API Version  1
	I0603 11:43:28.486882   48771 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 11:43:28.487244   48771 main.go:141] libmachine: () Calling .GetMachineName
	I0603 11:43:28.487462   48771 main.go:141] libmachine: (test-preload-206663) Calling .DriverName
	I0603 11:43:28.489302   48771 out.go:177] * Kubernetes 1.30.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.1
	I0603 11:43:28.490651   48771 driver.go:392] Setting default libvirt URI to qemu:///system
	I0603 11:43:28.490917   48771 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 11:43:28.490956   48771 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 11:43:28.504722   48771 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34901
	I0603 11:43:28.505093   48771 main.go:141] libmachine: () Calling .GetVersion
	I0603 11:43:28.505499   48771 main.go:141] libmachine: Using API Version  1
	I0603 11:43:28.505520   48771 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 11:43:28.505805   48771 main.go:141] libmachine: () Calling .GetMachineName
	I0603 11:43:28.505960   48771 main.go:141] libmachine: (test-preload-206663) Calling .DriverName
	I0603 11:43:28.538158   48771 out.go:177] * Using the kvm2 driver based on existing profile
	I0603 11:43:28.539351   48771 start.go:297] selected driver: kvm2
	I0603 11:43:28.539368   48771 start.go:901] validating driver "kvm2" against &{Name:test-preload-206663 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kub
ernetesVersion:v1.24.4 ClusterName:test-preload-206663 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.137 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L
MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0603 11:43:28.539452   48771 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0603 11:43:28.540097   48771 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0603 11:43:28.540162   48771 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19008-7755/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0603 11:43:28.554021   48771 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0603 11:43:28.554303   48771 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0603 11:43:28.554325   48771 cni.go:84] Creating CNI manager for ""
	I0603 11:43:28.554332   48771 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0603 11:43:28.554396   48771 start.go:340] cluster config:
	{Name:test-preload-206663 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-206663 Namespace:default APISe
rverHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.137 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountTy
pe:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0603 11:43:28.554480   48771 iso.go:125] acquiring lock: {Name:mkdc8e745fc6a0fd8e502f6ad2510510ae9abf27 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0603 11:43:28.555994   48771 out.go:177] * Starting "test-preload-206663" primary control-plane node in "test-preload-206663" cluster
	I0603 11:43:28.557158   48771 preload.go:132] Checking if preload exists for k8s version v1.24.4 and runtime crio
	I0603 11:43:28.662336   48771 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.24.4/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4
	I0603 11:43:28.662371   48771 cache.go:56] Caching tarball of preloaded images
	I0603 11:43:28.662519   48771 preload.go:132] Checking if preload exists for k8s version v1.24.4 and runtime crio
	I0603 11:43:28.664141   48771 out.go:177] * Downloading Kubernetes v1.24.4 preload ...
	I0603 11:43:28.665272   48771 preload.go:237] getting checksum for preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 ...
	I0603 11:43:28.775002   48771 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.24.4/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4?checksum=md5:b2ee0ab83ed99f9e7ff71cb0cf27e8f9 -> /home/jenkins/minikube-integration/19008-7755/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4
	I0603 11:43:41.108445   48771 preload.go:248] saving checksum for preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 ...
	I0603 11:43:41.108535   48771 preload.go:255] verifying checksum of /home/jenkins/minikube-integration/19008-7755/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 ...
	I0603 11:43:41.943927   48771 cache.go:59] Finished verifying existence of preloaded tar for v1.24.4 on crio
	I0603 11:43:41.944058   48771 profile.go:143] Saving config to /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/test-preload-206663/config.json ...
	I0603 11:43:41.944273   48771 start.go:360] acquireMachinesLock for test-preload-206663: {Name:mk7389fd163f37e28f7d4d842bab151f7e27bc7c Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0603 11:43:41.944331   48771 start.go:364] duration metric: took 39.215µs to acquireMachinesLock for "test-preload-206663"
	I0603 11:43:41.944346   48771 start.go:96] Skipping create...Using existing machine configuration
	I0603 11:43:41.944351   48771 fix.go:54] fixHost starting: 
	I0603 11:43:41.944629   48771 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 11:43:41.944659   48771 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 11:43:41.958857   48771 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43431
	I0603 11:43:41.959350   48771 main.go:141] libmachine: () Calling .GetVersion
	I0603 11:43:41.959807   48771 main.go:141] libmachine: Using API Version  1
	I0603 11:43:41.959830   48771 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 11:43:41.960188   48771 main.go:141] libmachine: () Calling .GetMachineName
	I0603 11:43:41.960407   48771 main.go:141] libmachine: (test-preload-206663) Calling .DriverName
	I0603 11:43:41.960554   48771 main.go:141] libmachine: (test-preload-206663) Calling .GetState
	I0603 11:43:41.962208   48771 fix.go:112] recreateIfNeeded on test-preload-206663: state=Stopped err=<nil>
	I0603 11:43:41.962229   48771 main.go:141] libmachine: (test-preload-206663) Calling .DriverName
	W0603 11:43:41.962388   48771 fix.go:138] unexpected machine state, will restart: <nil>
	I0603 11:43:41.964620   48771 out.go:177] * Restarting existing kvm2 VM for "test-preload-206663" ...
	I0603 11:43:41.965959   48771 main.go:141] libmachine: (test-preload-206663) Calling .Start
	I0603 11:43:41.966155   48771 main.go:141] libmachine: (test-preload-206663) Ensuring networks are active...
	I0603 11:43:41.966811   48771 main.go:141] libmachine: (test-preload-206663) Ensuring network default is active
	I0603 11:43:41.967112   48771 main.go:141] libmachine: (test-preload-206663) Ensuring network mk-test-preload-206663 is active
	I0603 11:43:41.967485   48771 main.go:141] libmachine: (test-preload-206663) Getting domain xml...
	I0603 11:43:41.968075   48771 main.go:141] libmachine: (test-preload-206663) Creating domain...
	I0603 11:43:43.141876   48771 main.go:141] libmachine: (test-preload-206663) Waiting to get IP...
	I0603 11:43:43.142828   48771 main.go:141] libmachine: (test-preload-206663) DBG | domain test-preload-206663 has defined MAC address 52:54:00:27:a3:a5 in network mk-test-preload-206663
	I0603 11:43:43.143234   48771 main.go:141] libmachine: (test-preload-206663) DBG | unable to find current IP address of domain test-preload-206663 in network mk-test-preload-206663
	I0603 11:43:43.143298   48771 main.go:141] libmachine: (test-preload-206663) DBG | I0603 11:43:43.143212   48838 retry.go:31] will retry after 190.5104ms: waiting for machine to come up
	I0603 11:43:43.335536   48771 main.go:141] libmachine: (test-preload-206663) DBG | domain test-preload-206663 has defined MAC address 52:54:00:27:a3:a5 in network mk-test-preload-206663
	I0603 11:43:43.335954   48771 main.go:141] libmachine: (test-preload-206663) DBG | unable to find current IP address of domain test-preload-206663 in network mk-test-preload-206663
	I0603 11:43:43.335983   48771 main.go:141] libmachine: (test-preload-206663) DBG | I0603 11:43:43.335898   48838 retry.go:31] will retry after 336.591411ms: waiting for machine to come up
	I0603 11:43:43.674412   48771 main.go:141] libmachine: (test-preload-206663) DBG | domain test-preload-206663 has defined MAC address 52:54:00:27:a3:a5 in network mk-test-preload-206663
	I0603 11:43:43.674877   48771 main.go:141] libmachine: (test-preload-206663) DBG | unable to find current IP address of domain test-preload-206663 in network mk-test-preload-206663
	I0603 11:43:43.674911   48771 main.go:141] libmachine: (test-preload-206663) DBG | I0603 11:43:43.674838   48838 retry.go:31] will retry after 482.174307ms: waiting for machine to come up
	I0603 11:43:44.158416   48771 main.go:141] libmachine: (test-preload-206663) DBG | domain test-preload-206663 has defined MAC address 52:54:00:27:a3:a5 in network mk-test-preload-206663
	I0603 11:43:44.158798   48771 main.go:141] libmachine: (test-preload-206663) DBG | unable to find current IP address of domain test-preload-206663 in network mk-test-preload-206663
	I0603 11:43:44.158831   48771 main.go:141] libmachine: (test-preload-206663) DBG | I0603 11:43:44.158748   48838 retry.go:31] will retry after 562.240607ms: waiting for machine to come up
	I0603 11:43:44.722382   48771 main.go:141] libmachine: (test-preload-206663) DBG | domain test-preload-206663 has defined MAC address 52:54:00:27:a3:a5 in network mk-test-preload-206663
	I0603 11:43:44.722789   48771 main.go:141] libmachine: (test-preload-206663) DBG | unable to find current IP address of domain test-preload-206663 in network mk-test-preload-206663
	I0603 11:43:44.722810   48771 main.go:141] libmachine: (test-preload-206663) DBG | I0603 11:43:44.722741   48838 retry.go:31] will retry after 761.932117ms: waiting for machine to come up
	I0603 11:43:45.486610   48771 main.go:141] libmachine: (test-preload-206663) DBG | domain test-preload-206663 has defined MAC address 52:54:00:27:a3:a5 in network mk-test-preload-206663
	I0603 11:43:45.486988   48771 main.go:141] libmachine: (test-preload-206663) DBG | unable to find current IP address of domain test-preload-206663 in network mk-test-preload-206663
	I0603 11:43:45.487018   48771 main.go:141] libmachine: (test-preload-206663) DBG | I0603 11:43:45.486936   48838 retry.go:31] will retry after 841.457725ms: waiting for machine to come up
	I0603 11:43:46.329746   48771 main.go:141] libmachine: (test-preload-206663) DBG | domain test-preload-206663 has defined MAC address 52:54:00:27:a3:a5 in network mk-test-preload-206663
	I0603 11:43:46.330129   48771 main.go:141] libmachine: (test-preload-206663) DBG | unable to find current IP address of domain test-preload-206663 in network mk-test-preload-206663
	I0603 11:43:46.330156   48771 main.go:141] libmachine: (test-preload-206663) DBG | I0603 11:43:46.330059   48838 retry.go:31] will retry after 921.784616ms: waiting for machine to come up
	I0603 11:43:47.253500   48771 main.go:141] libmachine: (test-preload-206663) DBG | domain test-preload-206663 has defined MAC address 52:54:00:27:a3:a5 in network mk-test-preload-206663
	I0603 11:43:47.253905   48771 main.go:141] libmachine: (test-preload-206663) DBG | unable to find current IP address of domain test-preload-206663 in network mk-test-preload-206663
	I0603 11:43:47.253936   48771 main.go:141] libmachine: (test-preload-206663) DBG | I0603 11:43:47.253840   48838 retry.go:31] will retry after 1.298450155s: waiting for machine to come up
	I0603 11:43:48.554110   48771 main.go:141] libmachine: (test-preload-206663) DBG | domain test-preload-206663 has defined MAC address 52:54:00:27:a3:a5 in network mk-test-preload-206663
	I0603 11:43:48.554487   48771 main.go:141] libmachine: (test-preload-206663) DBG | unable to find current IP address of domain test-preload-206663 in network mk-test-preload-206663
	I0603 11:43:48.554514   48771 main.go:141] libmachine: (test-preload-206663) DBG | I0603 11:43:48.554449   48838 retry.go:31] will retry after 1.248866426s: waiting for machine to come up
	I0603 11:43:49.804795   48771 main.go:141] libmachine: (test-preload-206663) DBG | domain test-preload-206663 has defined MAC address 52:54:00:27:a3:a5 in network mk-test-preload-206663
	I0603 11:43:49.805175   48771 main.go:141] libmachine: (test-preload-206663) DBG | unable to find current IP address of domain test-preload-206663 in network mk-test-preload-206663
	I0603 11:43:49.805198   48771 main.go:141] libmachine: (test-preload-206663) DBG | I0603 11:43:49.805115   48838 retry.go:31] will retry after 1.920555509s: waiting for machine to come up
	I0603 11:43:51.728088   48771 main.go:141] libmachine: (test-preload-206663) DBG | domain test-preload-206663 has defined MAC address 52:54:00:27:a3:a5 in network mk-test-preload-206663
	I0603 11:43:51.728562   48771 main.go:141] libmachine: (test-preload-206663) DBG | unable to find current IP address of domain test-preload-206663 in network mk-test-preload-206663
	I0603 11:43:51.728598   48771 main.go:141] libmachine: (test-preload-206663) DBG | I0603 11:43:51.728507   48838 retry.go:31] will retry after 2.746827914s: waiting for machine to come up
	I0603 11:43:54.478832   48771 main.go:141] libmachine: (test-preload-206663) DBG | domain test-preload-206663 has defined MAC address 52:54:00:27:a3:a5 in network mk-test-preload-206663
	I0603 11:43:54.479324   48771 main.go:141] libmachine: (test-preload-206663) DBG | unable to find current IP address of domain test-preload-206663 in network mk-test-preload-206663
	I0603 11:43:54.479348   48771 main.go:141] libmachine: (test-preload-206663) DBG | I0603 11:43:54.479276   48838 retry.go:31] will retry after 2.81829907s: waiting for machine to come up
	I0603 11:43:57.299525   48771 main.go:141] libmachine: (test-preload-206663) DBG | domain test-preload-206663 has defined MAC address 52:54:00:27:a3:a5 in network mk-test-preload-206663
	I0603 11:43:57.299833   48771 main.go:141] libmachine: (test-preload-206663) DBG | unable to find current IP address of domain test-preload-206663 in network mk-test-preload-206663
	I0603 11:43:57.299855   48771 main.go:141] libmachine: (test-preload-206663) DBG | I0603 11:43:57.299792   48838 retry.go:31] will retry after 3.683586199s: waiting for machine to come up
	I0603 11:44:00.987621   48771 main.go:141] libmachine: (test-preload-206663) DBG | domain test-preload-206663 has defined MAC address 52:54:00:27:a3:a5 in network mk-test-preload-206663
	I0603 11:44:00.988120   48771 main.go:141] libmachine: (test-preload-206663) Found IP for machine: 192.168.39.137
	I0603 11:44:00.988148   48771 main.go:141] libmachine: (test-preload-206663) Reserving static IP address...
	I0603 11:44:00.988164   48771 main.go:141] libmachine: (test-preload-206663) DBG | domain test-preload-206663 has current primary IP address 192.168.39.137 and MAC address 52:54:00:27:a3:a5 in network mk-test-preload-206663
	I0603 11:44:00.988560   48771 main.go:141] libmachine: (test-preload-206663) DBG | found host DHCP lease matching {name: "test-preload-206663", mac: "52:54:00:27:a3:a5", ip: "192.168.39.137"} in network mk-test-preload-206663: {Iface:virbr1 ExpiryTime:2024-06-03 12:40:30 +0000 UTC Type:0 Mac:52:54:00:27:a3:a5 Iaid: IPaddr:192.168.39.137 Prefix:24 Hostname:test-preload-206663 Clientid:01:52:54:00:27:a3:a5}
	I0603 11:44:00.988580   48771 main.go:141] libmachine: (test-preload-206663) DBG | skip adding static IP to network mk-test-preload-206663 - found existing host DHCP lease matching {name: "test-preload-206663", mac: "52:54:00:27:a3:a5", ip: "192.168.39.137"}
	I0603 11:44:00.988590   48771 main.go:141] libmachine: (test-preload-206663) Reserved static IP address: 192.168.39.137
	I0603 11:44:00.988603   48771 main.go:141] libmachine: (test-preload-206663) Waiting for SSH to be available...
	I0603 11:44:00.988618   48771 main.go:141] libmachine: (test-preload-206663) DBG | Getting to WaitForSSH function...
	I0603 11:44:00.990565   48771 main.go:141] libmachine: (test-preload-206663) DBG | domain test-preload-206663 has defined MAC address 52:54:00:27:a3:a5 in network mk-test-preload-206663
	I0603 11:44:00.990877   48771 main.go:141] libmachine: (test-preload-206663) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:27:a3:a5", ip: ""} in network mk-test-preload-206663: {Iface:virbr1 ExpiryTime:2024-06-03 12:40:30 +0000 UTC Type:0 Mac:52:54:00:27:a3:a5 Iaid: IPaddr:192.168.39.137 Prefix:24 Hostname:test-preload-206663 Clientid:01:52:54:00:27:a3:a5}
	I0603 11:44:00.990909   48771 main.go:141] libmachine: (test-preload-206663) DBG | domain test-preload-206663 has defined IP address 192.168.39.137 and MAC address 52:54:00:27:a3:a5 in network mk-test-preload-206663
	I0603 11:44:00.990994   48771 main.go:141] libmachine: (test-preload-206663) DBG | Using SSH client type: external
	I0603 11:44:00.991016   48771 main.go:141] libmachine: (test-preload-206663) DBG | Using SSH private key: /home/jenkins/minikube-integration/19008-7755/.minikube/machines/test-preload-206663/id_rsa (-rw-------)
	I0603 11:44:00.991069   48771 main.go:141] libmachine: (test-preload-206663) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.137 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19008-7755/.minikube/machines/test-preload-206663/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0603 11:44:00.991087   48771 main.go:141] libmachine: (test-preload-206663) DBG | About to run SSH command:
	I0603 11:44:00.991111   48771 main.go:141] libmachine: (test-preload-206663) DBG | exit 0
	I0603 11:44:01.119019   48771 main.go:141] libmachine: (test-preload-206663) DBG | SSH cmd err, output: <nil>: 
	I0603 11:44:01.119367   48771 main.go:141] libmachine: (test-preload-206663) Calling .GetConfigRaw
	I0603 11:44:01.119985   48771 main.go:141] libmachine: (test-preload-206663) Calling .GetIP
	I0603 11:44:01.122280   48771 main.go:141] libmachine: (test-preload-206663) DBG | domain test-preload-206663 has defined MAC address 52:54:00:27:a3:a5 in network mk-test-preload-206663
	I0603 11:44:01.122613   48771 main.go:141] libmachine: (test-preload-206663) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:27:a3:a5", ip: ""} in network mk-test-preload-206663: {Iface:virbr1 ExpiryTime:2024-06-03 12:40:30 +0000 UTC Type:0 Mac:52:54:00:27:a3:a5 Iaid: IPaddr:192.168.39.137 Prefix:24 Hostname:test-preload-206663 Clientid:01:52:54:00:27:a3:a5}
	I0603 11:44:01.122639   48771 main.go:141] libmachine: (test-preload-206663) DBG | domain test-preload-206663 has defined IP address 192.168.39.137 and MAC address 52:54:00:27:a3:a5 in network mk-test-preload-206663
	I0603 11:44:01.122903   48771 profile.go:143] Saving config to /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/test-preload-206663/config.json ...
	I0603 11:44:01.123123   48771 machine.go:94] provisionDockerMachine start ...
	I0603 11:44:01.123142   48771 main.go:141] libmachine: (test-preload-206663) Calling .DriverName
	I0603 11:44:01.123332   48771 main.go:141] libmachine: (test-preload-206663) Calling .GetSSHHostname
	I0603 11:44:01.125389   48771 main.go:141] libmachine: (test-preload-206663) DBG | domain test-preload-206663 has defined MAC address 52:54:00:27:a3:a5 in network mk-test-preload-206663
	I0603 11:44:01.125720   48771 main.go:141] libmachine: (test-preload-206663) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:27:a3:a5", ip: ""} in network mk-test-preload-206663: {Iface:virbr1 ExpiryTime:2024-06-03 12:40:30 +0000 UTC Type:0 Mac:52:54:00:27:a3:a5 Iaid: IPaddr:192.168.39.137 Prefix:24 Hostname:test-preload-206663 Clientid:01:52:54:00:27:a3:a5}
	I0603 11:44:01.125749   48771 main.go:141] libmachine: (test-preload-206663) DBG | domain test-preload-206663 has defined IP address 192.168.39.137 and MAC address 52:54:00:27:a3:a5 in network mk-test-preload-206663
	I0603 11:44:01.125848   48771 main.go:141] libmachine: (test-preload-206663) Calling .GetSSHPort
	I0603 11:44:01.125988   48771 main.go:141] libmachine: (test-preload-206663) Calling .GetSSHKeyPath
	I0603 11:44:01.126186   48771 main.go:141] libmachine: (test-preload-206663) Calling .GetSSHKeyPath
	I0603 11:44:01.126298   48771 main.go:141] libmachine: (test-preload-206663) Calling .GetSSHUsername
	I0603 11:44:01.126452   48771 main.go:141] libmachine: Using SSH client type: native
	I0603 11:44:01.126628   48771 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.137 22 <nil> <nil>}
	I0603 11:44:01.126640   48771 main.go:141] libmachine: About to run SSH command:
	hostname
	I0603 11:44:01.239213   48771 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0603 11:44:01.239243   48771 main.go:141] libmachine: (test-preload-206663) Calling .GetMachineName
	I0603 11:44:01.239497   48771 buildroot.go:166] provisioning hostname "test-preload-206663"
	I0603 11:44:01.239523   48771 main.go:141] libmachine: (test-preload-206663) Calling .GetMachineName
	I0603 11:44:01.239729   48771 main.go:141] libmachine: (test-preload-206663) Calling .GetSSHHostname
	I0603 11:44:01.242340   48771 main.go:141] libmachine: (test-preload-206663) DBG | domain test-preload-206663 has defined MAC address 52:54:00:27:a3:a5 in network mk-test-preload-206663
	I0603 11:44:01.242684   48771 main.go:141] libmachine: (test-preload-206663) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:27:a3:a5", ip: ""} in network mk-test-preload-206663: {Iface:virbr1 ExpiryTime:2024-06-03 12:40:30 +0000 UTC Type:0 Mac:52:54:00:27:a3:a5 Iaid: IPaddr:192.168.39.137 Prefix:24 Hostname:test-preload-206663 Clientid:01:52:54:00:27:a3:a5}
	I0603 11:44:01.242718   48771 main.go:141] libmachine: (test-preload-206663) DBG | domain test-preload-206663 has defined IP address 192.168.39.137 and MAC address 52:54:00:27:a3:a5 in network mk-test-preload-206663
	I0603 11:44:01.242856   48771 main.go:141] libmachine: (test-preload-206663) Calling .GetSSHPort
	I0603 11:44:01.243218   48771 main.go:141] libmachine: (test-preload-206663) Calling .GetSSHKeyPath
	I0603 11:44:01.243395   48771 main.go:141] libmachine: (test-preload-206663) Calling .GetSSHKeyPath
	I0603 11:44:01.243548   48771 main.go:141] libmachine: (test-preload-206663) Calling .GetSSHUsername
	I0603 11:44:01.243741   48771 main.go:141] libmachine: Using SSH client type: native
	I0603 11:44:01.243901   48771 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.137 22 <nil> <nil>}
	I0603 11:44:01.243913   48771 main.go:141] libmachine: About to run SSH command:
	sudo hostname test-preload-206663 && echo "test-preload-206663" | sudo tee /etc/hostname
	I0603 11:44:01.368757   48771 main.go:141] libmachine: SSH cmd err, output: <nil>: test-preload-206663
	
	I0603 11:44:01.368780   48771 main.go:141] libmachine: (test-preload-206663) Calling .GetSSHHostname
	I0603 11:44:01.371700   48771 main.go:141] libmachine: (test-preload-206663) DBG | domain test-preload-206663 has defined MAC address 52:54:00:27:a3:a5 in network mk-test-preload-206663
	I0603 11:44:01.372095   48771 main.go:141] libmachine: (test-preload-206663) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:27:a3:a5", ip: ""} in network mk-test-preload-206663: {Iface:virbr1 ExpiryTime:2024-06-03 12:40:30 +0000 UTC Type:0 Mac:52:54:00:27:a3:a5 Iaid: IPaddr:192.168.39.137 Prefix:24 Hostname:test-preload-206663 Clientid:01:52:54:00:27:a3:a5}
	I0603 11:44:01.372137   48771 main.go:141] libmachine: (test-preload-206663) DBG | domain test-preload-206663 has defined IP address 192.168.39.137 and MAC address 52:54:00:27:a3:a5 in network mk-test-preload-206663
	I0603 11:44:01.372297   48771 main.go:141] libmachine: (test-preload-206663) Calling .GetSSHPort
	I0603 11:44:01.372498   48771 main.go:141] libmachine: (test-preload-206663) Calling .GetSSHKeyPath
	I0603 11:44:01.372631   48771 main.go:141] libmachine: (test-preload-206663) Calling .GetSSHKeyPath
	I0603 11:44:01.372729   48771 main.go:141] libmachine: (test-preload-206663) Calling .GetSSHUsername
	I0603 11:44:01.372855   48771 main.go:141] libmachine: Using SSH client type: native
	I0603 11:44:01.373040   48771 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.137 22 <nil> <nil>}
	I0603 11:44:01.373058   48771 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\stest-preload-206663' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 test-preload-206663/g' /etc/hosts;
				else 
					echo '127.0.1.1 test-preload-206663' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0603 11:44:01.491670   48771 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0603 11:44:01.491699   48771 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19008-7755/.minikube CaCertPath:/home/jenkins/minikube-integration/19008-7755/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19008-7755/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19008-7755/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19008-7755/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19008-7755/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19008-7755/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19008-7755/.minikube}
	I0603 11:44:01.491716   48771 buildroot.go:174] setting up certificates
	I0603 11:44:01.491727   48771 provision.go:84] configureAuth start
	I0603 11:44:01.491735   48771 main.go:141] libmachine: (test-preload-206663) Calling .GetMachineName
	I0603 11:44:01.491982   48771 main.go:141] libmachine: (test-preload-206663) Calling .GetIP
	I0603 11:44:01.494754   48771 main.go:141] libmachine: (test-preload-206663) DBG | domain test-preload-206663 has defined MAC address 52:54:00:27:a3:a5 in network mk-test-preload-206663
	I0603 11:44:01.495102   48771 main.go:141] libmachine: (test-preload-206663) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:27:a3:a5", ip: ""} in network mk-test-preload-206663: {Iface:virbr1 ExpiryTime:2024-06-03 12:40:30 +0000 UTC Type:0 Mac:52:54:00:27:a3:a5 Iaid: IPaddr:192.168.39.137 Prefix:24 Hostname:test-preload-206663 Clientid:01:52:54:00:27:a3:a5}
	I0603 11:44:01.495129   48771 main.go:141] libmachine: (test-preload-206663) DBG | domain test-preload-206663 has defined IP address 192.168.39.137 and MAC address 52:54:00:27:a3:a5 in network mk-test-preload-206663
	I0603 11:44:01.495234   48771 main.go:141] libmachine: (test-preload-206663) Calling .GetSSHHostname
	I0603 11:44:01.497244   48771 main.go:141] libmachine: (test-preload-206663) DBG | domain test-preload-206663 has defined MAC address 52:54:00:27:a3:a5 in network mk-test-preload-206663
	I0603 11:44:01.497549   48771 main.go:141] libmachine: (test-preload-206663) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:27:a3:a5", ip: ""} in network mk-test-preload-206663: {Iface:virbr1 ExpiryTime:2024-06-03 12:40:30 +0000 UTC Type:0 Mac:52:54:00:27:a3:a5 Iaid: IPaddr:192.168.39.137 Prefix:24 Hostname:test-preload-206663 Clientid:01:52:54:00:27:a3:a5}
	I0603 11:44:01.497590   48771 main.go:141] libmachine: (test-preload-206663) DBG | domain test-preload-206663 has defined IP address 192.168.39.137 and MAC address 52:54:00:27:a3:a5 in network mk-test-preload-206663
	I0603 11:44:01.497679   48771 provision.go:143] copyHostCerts
	I0603 11:44:01.497754   48771 exec_runner.go:144] found /home/jenkins/minikube-integration/19008-7755/.minikube/ca.pem, removing ...
	I0603 11:44:01.497773   48771 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19008-7755/.minikube/ca.pem
	I0603 11:44:01.497853   48771 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19008-7755/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19008-7755/.minikube/ca.pem (1082 bytes)
	I0603 11:44:01.497955   48771 exec_runner.go:144] found /home/jenkins/minikube-integration/19008-7755/.minikube/cert.pem, removing ...
	I0603 11:44:01.497965   48771 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19008-7755/.minikube/cert.pem
	I0603 11:44:01.498004   48771 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19008-7755/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19008-7755/.minikube/cert.pem (1123 bytes)
	I0603 11:44:01.498078   48771 exec_runner.go:144] found /home/jenkins/minikube-integration/19008-7755/.minikube/key.pem, removing ...
	I0603 11:44:01.498088   48771 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19008-7755/.minikube/key.pem
	I0603 11:44:01.498121   48771 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19008-7755/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19008-7755/.minikube/key.pem (1679 bytes)
	I0603 11:44:01.498194   48771 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19008-7755/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19008-7755/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19008-7755/.minikube/certs/ca-key.pem org=jenkins.test-preload-206663 san=[127.0.0.1 192.168.39.137 localhost minikube test-preload-206663]
	I0603 11:44:01.738394   48771 provision.go:177] copyRemoteCerts
	I0603 11:44:01.738453   48771 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0603 11:44:01.738489   48771 main.go:141] libmachine: (test-preload-206663) Calling .GetSSHHostname
	I0603 11:44:01.741105   48771 main.go:141] libmachine: (test-preload-206663) DBG | domain test-preload-206663 has defined MAC address 52:54:00:27:a3:a5 in network mk-test-preload-206663
	I0603 11:44:01.741491   48771 main.go:141] libmachine: (test-preload-206663) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:27:a3:a5", ip: ""} in network mk-test-preload-206663: {Iface:virbr1 ExpiryTime:2024-06-03 12:40:30 +0000 UTC Type:0 Mac:52:54:00:27:a3:a5 Iaid: IPaddr:192.168.39.137 Prefix:24 Hostname:test-preload-206663 Clientid:01:52:54:00:27:a3:a5}
	I0603 11:44:01.741514   48771 main.go:141] libmachine: (test-preload-206663) DBG | domain test-preload-206663 has defined IP address 192.168.39.137 and MAC address 52:54:00:27:a3:a5 in network mk-test-preload-206663
	I0603 11:44:01.741663   48771 main.go:141] libmachine: (test-preload-206663) Calling .GetSSHPort
	I0603 11:44:01.741862   48771 main.go:141] libmachine: (test-preload-206663) Calling .GetSSHKeyPath
	I0603 11:44:01.741998   48771 main.go:141] libmachine: (test-preload-206663) Calling .GetSSHUsername
	I0603 11:44:01.742107   48771 sshutil.go:53] new ssh client: &{IP:192.168.39.137 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19008-7755/.minikube/machines/test-preload-206663/id_rsa Username:docker}
	I0603 11:44:01.830770   48771 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0603 11:44:01.856984   48771 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0603 11:44:01.882227   48771 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0603 11:44:01.907247   48771 provision.go:87] duration metric: took 415.510993ms to configureAuth
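
The configureAuth step above copies the shared CA material and issues a server certificate signed by the minikube CA with the SANs listed in the log (127.0.0.1, 192.168.39.137, localhost, minikube, test-preload-206663). A minimal Go sketch of that kind of issuance, standard library only, with placeholder file names and an assumed RSA (PKCS#1) CA key; it is not minikube's actual implementation:

    // Sketch: issue a server cert signed by an existing CA, with the SANs from the log.
    package main

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"encoding/pem"
    	"log"
    	"math/big"
    	"net"
    	"os"
    	"time"
    )

    func main() {
    	// Placeholder paths; the real files live under .minikube/certs on the host.
    	caCertPEM, err := os.ReadFile("ca.pem")
    	if err != nil {
    		log.Fatal(err)
    	}
    	caKeyPEM, err := os.ReadFile("ca-key.pem")
    	if err != nil {
    		log.Fatal(err)
    	}
    	caBlock, _ := pem.Decode(caCertPEM)
    	keyBlock, _ := pem.Decode(caKeyPEM)
    	if caBlock == nil || keyBlock == nil {
    		log.Fatal("CA cert or key is not valid PEM")
    	}
    	caCert, err := x509.ParseCertificate(caBlock.Bytes)
    	if err != nil {
    		log.Fatal(err)
    	}
    	caKey, err := x509.ParsePKCS1PrivateKey(keyBlock.Bytes) // assumes an RSA (PKCS#1) CA key
    	if err != nil {
    		log.Fatal(err)
    	}

    	serverKey, err := rsa.GenerateKey(rand.Reader, 2048) // fresh key pair for the server cert
    	if err != nil {
    		log.Fatal(err)
    	}

    	tmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(time.Now().UnixNano()),
    		Subject:      pkix.Name{Organization: []string{"jenkins.test-preload-206663"}},
    		NotBefore:    time.Now().Add(-time.Hour),
    		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    		// SANs as reported in the provision.go log line above.
    		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.137")},
    		DNSNames:    []string{"localhost", "minikube", "test-preload-206663"},
    	}

    	der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &serverKey.PublicKey, caKey)
    	if err != nil {
    		log.Fatal(err)
    	}
    	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    	pem.Encode(os.Stdout, &pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(serverKey)})
    }
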
	I0603 11:44:01.907270   48771 buildroot.go:189] setting minikube options for container-runtime
	I0603 11:44:01.907427   48771 config.go:182] Loaded profile config "test-preload-206663": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.4
	I0603 11:44:01.907486   48771 main.go:141] libmachine: (test-preload-206663) Calling .GetSSHHostname
	I0603 11:44:01.910508   48771 main.go:141] libmachine: (test-preload-206663) DBG | domain test-preload-206663 has defined MAC address 52:54:00:27:a3:a5 in network mk-test-preload-206663
	I0603 11:44:01.910909   48771 main.go:141] libmachine: (test-preload-206663) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:27:a3:a5", ip: ""} in network mk-test-preload-206663: {Iface:virbr1 ExpiryTime:2024-06-03 12:40:30 +0000 UTC Type:0 Mac:52:54:00:27:a3:a5 Iaid: IPaddr:192.168.39.137 Prefix:24 Hostname:test-preload-206663 Clientid:01:52:54:00:27:a3:a5}
	I0603 11:44:01.910938   48771 main.go:141] libmachine: (test-preload-206663) DBG | domain test-preload-206663 has defined IP address 192.168.39.137 and MAC address 52:54:00:27:a3:a5 in network mk-test-preload-206663
	I0603 11:44:01.911162   48771 main.go:141] libmachine: (test-preload-206663) Calling .GetSSHPort
	I0603 11:44:01.911365   48771 main.go:141] libmachine: (test-preload-206663) Calling .GetSSHKeyPath
	I0603 11:44:01.911526   48771 main.go:141] libmachine: (test-preload-206663) Calling .GetSSHKeyPath
	I0603 11:44:01.911684   48771 main.go:141] libmachine: (test-preload-206663) Calling .GetSSHUsername
	I0603 11:44:01.911865   48771 main.go:141] libmachine: Using SSH client type: native
	I0603 11:44:01.912061   48771 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.137 22 <nil> <nil>}
	I0603 11:44:01.912076   48771 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0603 11:44:02.183276   48771 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0603 11:44:02.183303   48771 machine.go:97] duration metric: took 1.060166859s to provisionDockerMachine
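
The SSH command above drops a CRIO_MINIKUBE_OPTIONS file under /etc/sysconfig and restarts CRI-O so the insecure-registry flag takes effect. A rough local equivalent in Go (same file contents and systemctl restart as the log; needs root; a sketch, not minikube's code):

    // Sketch: write the CRI-O options drop-in and restart the service.
    package main

    import (
    	"log"
    	"os"
    	"os/exec"
    )

    func main() {
    	const dropIn = "CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '\n"
    	if err := os.MkdirAll("/etc/sysconfig", 0o755); err != nil {
    		log.Fatal(err)
    	}
    	if err := os.WriteFile("/etc/sysconfig/crio.minikube", []byte(dropIn), 0o644); err != nil {
    		log.Fatal(err)
    	}
    	// Restart CRI-O so the new registry option is picked up.
    	if out, err := exec.Command("systemctl", "restart", "crio").CombinedOutput(); err != nil {
    		log.Fatalf("restart crio: %v\n%s", err, out)
    	}
    }
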
	I0603 11:44:02.183313   48771 start.go:293] postStartSetup for "test-preload-206663" (driver="kvm2")
	I0603 11:44:02.183323   48771 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0603 11:44:02.183348   48771 main.go:141] libmachine: (test-preload-206663) Calling .DriverName
	I0603 11:44:02.183632   48771 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0603 11:44:02.183654   48771 main.go:141] libmachine: (test-preload-206663) Calling .GetSSHHostname
	I0603 11:44:02.186125   48771 main.go:141] libmachine: (test-preload-206663) DBG | domain test-preload-206663 has defined MAC address 52:54:00:27:a3:a5 in network mk-test-preload-206663
	I0603 11:44:02.186517   48771 main.go:141] libmachine: (test-preload-206663) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:27:a3:a5", ip: ""} in network mk-test-preload-206663: {Iface:virbr1 ExpiryTime:2024-06-03 12:40:30 +0000 UTC Type:0 Mac:52:54:00:27:a3:a5 Iaid: IPaddr:192.168.39.137 Prefix:24 Hostname:test-preload-206663 Clientid:01:52:54:00:27:a3:a5}
	I0603 11:44:02.186546   48771 main.go:141] libmachine: (test-preload-206663) DBG | domain test-preload-206663 has defined IP address 192.168.39.137 and MAC address 52:54:00:27:a3:a5 in network mk-test-preload-206663
	I0603 11:44:02.186702   48771 main.go:141] libmachine: (test-preload-206663) Calling .GetSSHPort
	I0603 11:44:02.186885   48771 main.go:141] libmachine: (test-preload-206663) Calling .GetSSHKeyPath
	I0603 11:44:02.187064   48771 main.go:141] libmachine: (test-preload-206663) Calling .GetSSHUsername
	I0603 11:44:02.187252   48771 sshutil.go:53] new ssh client: &{IP:192.168.39.137 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19008-7755/.minikube/machines/test-preload-206663/id_rsa Username:docker}
	I0603 11:44:02.274012   48771 ssh_runner.go:195] Run: cat /etc/os-release
	I0603 11:44:02.278087   48771 info.go:137] Remote host: Buildroot 2023.02.9
	I0603 11:44:02.278111   48771 filesync.go:126] Scanning /home/jenkins/minikube-integration/19008-7755/.minikube/addons for local assets ...
	I0603 11:44:02.278180   48771 filesync.go:126] Scanning /home/jenkins/minikube-integration/19008-7755/.minikube/files for local assets ...
	I0603 11:44:02.278253   48771 filesync.go:149] local asset: /home/jenkins/minikube-integration/19008-7755/.minikube/files/etc/ssl/certs/150282.pem -> 150282.pem in /etc/ssl/certs
	I0603 11:44:02.278334   48771 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0603 11:44:02.287775   48771 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/files/etc/ssl/certs/150282.pem --> /etc/ssl/certs/150282.pem (1708 bytes)
	I0603 11:44:02.310654   48771 start.go:296] duration metric: took 127.330222ms for postStartSetup
	I0603 11:44:02.310688   48771 fix.go:56] duration metric: took 20.366336303s for fixHost
	I0603 11:44:02.310712   48771 main.go:141] libmachine: (test-preload-206663) Calling .GetSSHHostname
	I0603 11:44:02.313180   48771 main.go:141] libmachine: (test-preload-206663) DBG | domain test-preload-206663 has defined MAC address 52:54:00:27:a3:a5 in network mk-test-preload-206663
	I0603 11:44:02.313493   48771 main.go:141] libmachine: (test-preload-206663) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:27:a3:a5", ip: ""} in network mk-test-preload-206663: {Iface:virbr1 ExpiryTime:2024-06-03 12:40:30 +0000 UTC Type:0 Mac:52:54:00:27:a3:a5 Iaid: IPaddr:192.168.39.137 Prefix:24 Hostname:test-preload-206663 Clientid:01:52:54:00:27:a3:a5}
	I0603 11:44:02.313521   48771 main.go:141] libmachine: (test-preload-206663) DBG | domain test-preload-206663 has defined IP address 192.168.39.137 and MAC address 52:54:00:27:a3:a5 in network mk-test-preload-206663
	I0603 11:44:02.313656   48771 main.go:141] libmachine: (test-preload-206663) Calling .GetSSHPort
	I0603 11:44:02.313853   48771 main.go:141] libmachine: (test-preload-206663) Calling .GetSSHKeyPath
	I0603 11:44:02.314051   48771 main.go:141] libmachine: (test-preload-206663) Calling .GetSSHKeyPath
	I0603 11:44:02.314213   48771 main.go:141] libmachine: (test-preload-206663) Calling .GetSSHUsername
	I0603 11:44:02.314337   48771 main.go:141] libmachine: Using SSH client type: native
	I0603 11:44:02.314489   48771 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.137 22 <nil> <nil>}
	I0603 11:44:02.314499   48771 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0603 11:44:02.427780   48771 main.go:141] libmachine: SSH cmd err, output: <nil>: 1717415042.406011013
	
	I0603 11:44:02.427804   48771 fix.go:216] guest clock: 1717415042.406011013
	I0603 11:44:02.427815   48771 fix.go:229] Guest: 2024-06-03 11:44:02.406011013 +0000 UTC Remote: 2024-06-03 11:44:02.310693052 +0000 UTC m=+33.884789547 (delta=95.317961ms)
	I0603 11:44:02.427837   48771 fix.go:200] guest clock delta is within tolerance: 95.317961ms
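
The fix-host step above reads the guest clock with `date +%s.%N` and accepts the machine when the delta to the host clock is small. A small Go sketch of the same comparison, using the timestamp value from the log and an assumed one-second tolerance:

    // Sketch: parse a `date +%s.%N` style timestamp and compare it to the host clock.
    package main

    import (
    	"fmt"
    	"strconv"
    	"strings"
    	"time"
    )

    func parseUnixDotNanos(s string) (time.Time, error) {
    	// Assumes the fractional part is 9 digits (nanoseconds), as %N prints.
    	parts := strings.SplitN(strings.TrimSpace(s), ".", 2)
    	sec, err := strconv.ParseInt(parts[0], 10, 64)
    	if err != nil {
    		return time.Time{}, err
    	}
    	var nsec int64
    	if len(parts) == 2 {
    		nsec, err = strconv.ParseInt(parts[1], 10, 64)
    		if err != nil {
    			return time.Time{}, err
    		}
    	}
    	return time.Unix(sec, nsec), nil
    }

    func main() {
    	guest, err := parseUnixDotNanos("1717415042.406011013") // value from the log above
    	if err != nil {
    		panic(err)
    	}
    	delta := time.Since(guest)
    	if delta < 0 {
    		delta = -delta
    	}
    	const tolerance = time.Second // assumed threshold, for illustration only
    	fmt.Printf("guest clock delta %v, within tolerance: %v\n", delta, delta <= tolerance)
    }
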
	I0603 11:44:02.427844   48771 start.go:83] releasing machines lock for "test-preload-206663", held for 20.483502453s
	I0603 11:44:02.427870   48771 main.go:141] libmachine: (test-preload-206663) Calling .DriverName
	I0603 11:44:02.428146   48771 main.go:141] libmachine: (test-preload-206663) Calling .GetIP
	I0603 11:44:02.430713   48771 main.go:141] libmachine: (test-preload-206663) DBG | domain test-preload-206663 has defined MAC address 52:54:00:27:a3:a5 in network mk-test-preload-206663
	I0603 11:44:02.431061   48771 main.go:141] libmachine: (test-preload-206663) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:27:a3:a5", ip: ""} in network mk-test-preload-206663: {Iface:virbr1 ExpiryTime:2024-06-03 12:40:30 +0000 UTC Type:0 Mac:52:54:00:27:a3:a5 Iaid: IPaddr:192.168.39.137 Prefix:24 Hostname:test-preload-206663 Clientid:01:52:54:00:27:a3:a5}
	I0603 11:44:02.431096   48771 main.go:141] libmachine: (test-preload-206663) DBG | domain test-preload-206663 has defined IP address 192.168.39.137 and MAC address 52:54:00:27:a3:a5 in network mk-test-preload-206663
	I0603 11:44:02.431253   48771 main.go:141] libmachine: (test-preload-206663) Calling .DriverName
	I0603 11:44:02.431705   48771 main.go:141] libmachine: (test-preload-206663) Calling .DriverName
	I0603 11:44:02.431866   48771 main.go:141] libmachine: (test-preload-206663) Calling .DriverName
	I0603 11:44:02.431951   48771 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0603 11:44:02.431997   48771 main.go:141] libmachine: (test-preload-206663) Calling .GetSSHHostname
	I0603 11:44:02.432081   48771 ssh_runner.go:195] Run: cat /version.json
	I0603 11:44:02.432113   48771 main.go:141] libmachine: (test-preload-206663) Calling .GetSSHHostname
	I0603 11:44:02.434602   48771 main.go:141] libmachine: (test-preload-206663) DBG | domain test-preload-206663 has defined MAC address 52:54:00:27:a3:a5 in network mk-test-preload-206663
	I0603 11:44:02.434946   48771 main.go:141] libmachine: (test-preload-206663) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:27:a3:a5", ip: ""} in network mk-test-preload-206663: {Iface:virbr1 ExpiryTime:2024-06-03 12:40:30 +0000 UTC Type:0 Mac:52:54:00:27:a3:a5 Iaid: IPaddr:192.168.39.137 Prefix:24 Hostname:test-preload-206663 Clientid:01:52:54:00:27:a3:a5}
	I0603 11:44:02.434968   48771 main.go:141] libmachine: (test-preload-206663) DBG | domain test-preload-206663 has defined IP address 192.168.39.137 and MAC address 52:54:00:27:a3:a5 in network mk-test-preload-206663
	I0603 11:44:02.434988   48771 main.go:141] libmachine: (test-preload-206663) DBG | domain test-preload-206663 has defined MAC address 52:54:00:27:a3:a5 in network mk-test-preload-206663
	I0603 11:44:02.435145   48771 main.go:141] libmachine: (test-preload-206663) Calling .GetSSHPort
	I0603 11:44:02.435321   48771 main.go:141] libmachine: (test-preload-206663) Calling .GetSSHKeyPath
	I0603 11:44:02.435481   48771 main.go:141] libmachine: (test-preload-206663) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:27:a3:a5", ip: ""} in network mk-test-preload-206663: {Iface:virbr1 ExpiryTime:2024-06-03 12:40:30 +0000 UTC Type:0 Mac:52:54:00:27:a3:a5 Iaid: IPaddr:192.168.39.137 Prefix:24 Hostname:test-preload-206663 Clientid:01:52:54:00:27:a3:a5}
	I0603 11:44:02.435499   48771 main.go:141] libmachine: (test-preload-206663) Calling .GetSSHUsername
	I0603 11:44:02.435510   48771 main.go:141] libmachine: (test-preload-206663) DBG | domain test-preload-206663 has defined IP address 192.168.39.137 and MAC address 52:54:00:27:a3:a5 in network mk-test-preload-206663
	I0603 11:44:02.435627   48771 main.go:141] libmachine: (test-preload-206663) Calling .GetSSHPort
	I0603 11:44:02.435647   48771 sshutil.go:53] new ssh client: &{IP:192.168.39.137 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19008-7755/.minikube/machines/test-preload-206663/id_rsa Username:docker}
	I0603 11:44:02.435761   48771 main.go:141] libmachine: (test-preload-206663) Calling .GetSSHKeyPath
	I0603 11:44:02.435901   48771 main.go:141] libmachine: (test-preload-206663) Calling .GetSSHUsername
	I0603 11:44:02.436043   48771 sshutil.go:53] new ssh client: &{IP:192.168.39.137 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19008-7755/.minikube/machines/test-preload-206663/id_rsa Username:docker}
	I0603 11:44:02.524161   48771 ssh_runner.go:195] Run: systemctl --version
	I0603 11:44:02.541546   48771 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0603 11:44:02.682820   48771 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0603 11:44:02.689828   48771 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0603 11:44:02.689902   48771 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0603 11:44:02.705695   48771 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0603 11:44:02.705716   48771 start.go:494] detecting cgroup driver to use...
	I0603 11:44:02.705777   48771 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0603 11:44:02.720792   48771 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0603 11:44:02.733726   48771 docker.go:217] disabling cri-docker service (if available) ...
	I0603 11:44:02.733773   48771 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0603 11:44:02.746403   48771 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0603 11:44:02.759102   48771 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0603 11:44:02.865360   48771 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0603 11:44:02.998920   48771 docker.go:233] disabling docker service ...
	I0603 11:44:02.998996   48771 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0603 11:44:03.013906   48771 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0603 11:44:03.026232   48771 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0603 11:44:03.153713   48771 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0603 11:44:03.274482   48771 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0603 11:44:03.288003   48771 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0603 11:44:03.305823   48771 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.7" pause image...
	I0603 11:44:03.305873   48771 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.7"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 11:44:03.315568   48771 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0603 11:44:03.315628   48771 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 11:44:03.325939   48771 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 11:44:03.336222   48771 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 11:44:03.346268   48771 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0603 11:44:03.356390   48771 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 11:44:03.366548   48771 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 11:44:03.382716   48771 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
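
The block above points crictl at the CRI-O socket and rewrites /etc/crio/crio.conf.d/02-crio.conf for the pause image and the cgroupfs driver. A condensed Go sketch of those edits, reusing the file paths and sed expressions shown in the log (a sketch, not minikube's code; needs root):

    // Sketch: configure crictl and CRI-O the same way the log does.
    package main

    import (
    	"log"
    	"os"
    	"os/exec"
    )

    func main() {
    	// crictl endpoint, as written to /etc/crictl.yaml above.
    	if err := os.WriteFile("/etc/crictl.yaml",
    		[]byte("runtime-endpoint: unix:///var/run/crio/crio.sock\n"), 0o644); err != nil {
    		log.Fatal(err)
    	}
    	// Pause image and cgroup driver edits, done with sed in the log.
    	for _, expr := range []string{
    		`s|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.7"|`,
    		`s|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|`,
    	} {
    		out, err := exec.Command("sed", "-i", expr, "/etc/crio/crio.conf.d/02-crio.conf").CombinedOutput()
    		if err != nil {
    			log.Fatalf("sed %q: %v\n%s", expr, err, out)
    		}
    	}
    }
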
	I0603 11:44:03.392847   48771 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0603 11:44:03.401907   48771 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0603 11:44:03.401947   48771 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0603 11:44:03.414158   48771 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
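
Above, the bridge netfilter sysctl is missing until br_netfilter is loaded, after which IP forwarding is enabled. A short Go sketch of that check-and-fallback (needs root; not minikube's code):

    // Sketch: load br_netfilter if its sysctl is absent, then enable IP forwarding.
    package main

    import (
    	"log"
    	"os"
    	"os/exec"
    )

    func main() {
    	if _, err := os.Stat("/proc/sys/net/bridge/bridge-nf-call-iptables"); err != nil {
    		// The sysctl only exists once the module is loaded, as the log shows.
    		if out, err := exec.Command("modprobe", "br_netfilter").CombinedOutput(); err != nil {
    			log.Fatalf("modprobe br_netfilter: %v\n%s", err, out)
    		}
    	}
    	if err := os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1"), 0o644); err != nil {
    		log.Fatal(err)
    	}
    }
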
	I0603 11:44:03.423189   48771 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0603 11:44:03.533972   48771 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0603 11:44:03.667163   48771 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0603 11:44:03.667253   48771 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0603 11:44:03.672417   48771 start.go:562] Will wait 60s for crictl version
	I0603 11:44:03.672458   48771 ssh_runner.go:195] Run: which crictl
	I0603 11:44:03.676236   48771 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0603 11:44:03.712518   48771 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0603 11:44:03.712603   48771 ssh_runner.go:195] Run: crio --version
	I0603 11:44:03.743413   48771 ssh_runner.go:195] Run: crio --version
	I0603 11:44:03.771597   48771 out.go:177] * Preparing Kubernetes v1.24.4 on CRI-O 1.29.1 ...
	I0603 11:44:03.772843   48771 main.go:141] libmachine: (test-preload-206663) Calling .GetIP
	I0603 11:44:03.775214   48771 main.go:141] libmachine: (test-preload-206663) DBG | domain test-preload-206663 has defined MAC address 52:54:00:27:a3:a5 in network mk-test-preload-206663
	I0603 11:44:03.775621   48771 main.go:141] libmachine: (test-preload-206663) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:27:a3:a5", ip: ""} in network mk-test-preload-206663: {Iface:virbr1 ExpiryTime:2024-06-03 12:40:30 +0000 UTC Type:0 Mac:52:54:00:27:a3:a5 Iaid: IPaddr:192.168.39.137 Prefix:24 Hostname:test-preload-206663 Clientid:01:52:54:00:27:a3:a5}
	I0603 11:44:03.775653   48771 main.go:141] libmachine: (test-preload-206663) DBG | domain test-preload-206663 has defined IP address 192.168.39.137 and MAC address 52:54:00:27:a3:a5 in network mk-test-preload-206663
	I0603 11:44:03.775828   48771 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0603 11:44:03.779765   48771 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0603 11:44:03.792231   48771 kubeadm.go:877] updating cluster {Name:test-preload-206663 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-206663 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.137 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0603 11:44:03.792331   48771 preload.go:132] Checking if preload exists for k8s version v1.24.4 and runtime crio
	I0603 11:44:03.792371   48771 ssh_runner.go:195] Run: sudo crictl images --output json
	I0603 11:44:03.828179   48771 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.24.4". assuming images are not preloaded.
	I0603 11:44:03.828236   48771 ssh_runner.go:195] Run: which lz4
	I0603 11:44:03.832101   48771 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0603 11:44:03.836101   48771 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0603 11:44:03.836122   48771 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (459355427 bytes)
	I0603 11:44:05.380311   48771 crio.go:462] duration metric: took 1.548230735s to copy over tarball
	I0603 11:44:05.380407   48771 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0603 11:44:07.652354   48771 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.271908997s)
	I0603 11:44:07.652392   48771 crio.go:469] duration metric: took 2.272051467s to extract the tarball
	I0603 11:44:07.652400   48771 ssh_runner.go:146] rm: /preloaded.tar.lz4
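
The preload step above checks for /preloaded.tar.lz4 on the guest, copies it over, extracts it into /var with xattrs preserved, and then removes it. A minimal Go sketch of the extract-and-cleanup part, using the same tar invocation as the log:

    // Sketch: extract the preloaded image tarball into /var, then delete it.
    package main

    import (
    	"log"
    	"os"
    	"os/exec"
    )

    func main() {
    	const tarball = "/preloaded.tar.lz4"
    	if _, err := os.Stat(tarball); err != nil {
    		log.Fatalf("preload tarball not found (it would be copied over first): %v", err)
    	}
    	cmd := exec.Command("tar", "--xattrs", "--xattrs-include", "security.capability",
    		"-I", "lz4", "-C", "/var", "-xf", tarball)
    	if out, err := cmd.CombinedOutput(); err != nil {
    		log.Fatalf("extract: %v\n%s", err, out)
    	}
    	// Remove the tarball afterwards to free disk space, as the log does.
    	if err := os.Remove(tarball); err != nil {
    		log.Fatal(err)
    	}
    }
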
	I0603 11:44:07.692732   48771 ssh_runner.go:195] Run: sudo crictl images --output json
	I0603 11:44:07.737358   48771 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.24.4". assuming images are not preloaded.
	I0603 11:44:07.737385   48771 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.4 registry.k8s.io/kube-controller-manager:v1.24.4 registry.k8s.io/kube-scheduler:v1.24.4 registry.k8s.io/kube-proxy:v1.24.4 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0603 11:44:07.737447   48771 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0603 11:44:07.737485   48771 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.24.4
	I0603 11:44:07.737503   48771 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0603 11:44:07.737526   48771 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0603 11:44:07.737468   48771 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.24.4
	I0603 11:44:07.737632   48771 image.go:134] retrieving image: registry.k8s.io/pause:3.7
	I0603 11:44:07.737654   48771 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.24.4
	I0603 11:44:07.737755   48771 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0603 11:44:07.738960   48771 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0603 11:44:07.738979   48771 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0603 11:44:07.739026   48771 image.go:177] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0603 11:44:07.738960   48771 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.4
	I0603 11:44:07.739029   48771 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.4
	I0603 11:44:07.738960   48771 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0603 11:44:07.739094   48771 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.4
	I0603 11:44:07.739228   48771 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0603 11:44:07.908829   48771 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I0603 11:44:07.914908   48771 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I0603 11:44:07.920708   48771 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I0603 11:44:07.924607   48771 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.4
	I0603 11:44:07.935738   48771 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.4
	I0603 11:44:07.977797   48771 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03" in container runtime
	I0603 11:44:07.977835   48771 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I0603 11:44:07.977875   48771 ssh_runner.go:195] Run: which crictl
	I0603 11:44:08.001371   48771 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.4
	I0603 11:44:08.007437   48771 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.4
	I0603 11:44:08.031538   48771 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b" in container runtime
	I0603 11:44:08.031584   48771 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.3-0
	I0603 11:44:08.031627   48771 ssh_runner.go:195] Run: which crictl
	I0603 11:44:08.074079   48771 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "221177c6082a88ea4f6240ab2450d540955ac6f4d5454f0e15751b653ebda165" in container runtime
	I0603 11:44:08.074103   48771 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.4" needs transfer: "registry.k8s.io/kube-proxy:v1.24.4" does not exist at hash "7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7" in container runtime
	I0603 11:44:08.074130   48771 cri.go:218] Removing image: registry.k8s.io/pause:3.7
	I0603 11:44:08.074138   48771 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.24.4
	I0603 11:44:08.074176   48771 ssh_runner.go:195] Run: which crictl
	I0603 11:44:08.074176   48771 ssh_runner.go:195] Run: which crictl
	I0603 11:44:08.081895   48771 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0603 11:44:08.082120   48771 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.4" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.4" does not exist at hash "1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48" in container runtime
	I0603 11:44:08.082156   48771 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0603 11:44:08.082190   48771 ssh_runner.go:195] Run: which crictl
	I0603 11:44:08.104399   48771 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.4" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.4" does not exist at hash "03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9" in container runtime
	I0603 11:44:08.104438   48771 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.24.4
	I0603 11:44:08.104484   48771 ssh_runner.go:195] Run: which crictl
	I0603 11:44:08.114353   48771 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.4" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.4" does not exist at hash "6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d" in container runtime
	I0603 11:44:08.114393   48771 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.24.4
	I0603 11:44:08.114399   48771 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.3-0
	I0603 11:44:08.114431   48771 ssh_runner.go:195] Run: which crictl
	I0603 11:44:08.114451   48771 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.7
	I0603 11:44:08.114528   48771 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.24.4
	I0603 11:44:08.142426   48771 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/19008-7755/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.8.6
	I0603 11:44:08.142500   48771 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.24.4
	I0603 11:44:08.142542   48771 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/coredns_v1.8.6
	I0603 11:44:08.142553   48771 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.24.4
	I0603 11:44:08.201581   48771 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.24.4
	I0603 11:44:08.201827   48771 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/19008-7755/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.3-0
	I0603 11:44:08.201924   48771 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/etcd_3.5.3-0
	I0603 11:44:08.233629   48771 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/19008-7755/.minikube/cache/images/amd64/registry.k8s.io/pause_3.7
	I0603 11:44:08.233690   48771 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/19008-7755/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.24.4
	I0603 11:44:08.233748   48771 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/pause_3.7
	I0603 11:44:08.233768   48771 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.24.4
	I0603 11:44:08.259821   48771 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.8.6 (exists)
	I0603 11:44:08.259839   48771 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I0603 11:44:08.259877   48771 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.8.6
	I0603 11:44:08.259913   48771 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/19008-7755/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.24.4
	I0603 11:44:08.259987   48771 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/19008-7755/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.24.4
	I0603 11:44:08.260013   48771 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-scheduler_v1.24.4
	I0603 11:44:08.260076   48771 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-controller-manager_v1.24.4
	I0603 11:44:08.287844   48771 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/19008-7755/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.24.4
	I0603 11:44:08.287875   48771 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.3-0 (exists)
	I0603 11:44:08.287938   48771 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.24.4 (exists)
	I0603 11:44:08.287943   48771 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-apiserver_v1.24.4
	I0603 11:44:08.287985   48771 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/pause_3.7 (exists)
	I0603 11:44:08.645420   48771 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0603 11:44:11.126799   48771 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.8.6: (2.866897469s)
	I0603 11:44:11.126835   48771 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/19008-7755/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	I0603 11:44:11.126853   48771 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-scheduler_v1.24.4: (2.866823371s)
	I0603 11:44:11.126884   48771 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.24.4 (exists)
	I0603 11:44:11.126859   48771 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.3-0
	I0603 11:44:11.126906   48771 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-controller-manager_v1.24.4: (2.866810171s)
	I0603 11:44:11.126928   48771 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.24.4 (exists)
	I0603 11:44:11.126939   48771 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.3-0
	I0603 11:44:11.126973   48771 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-apiserver_v1.24.4: (2.839009595s)
	I0603 11:44:11.126989   48771 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.24.4 (exists)
	I0603 11:44:11.127031   48771 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (2.481581268s)
	I0603 11:44:13.277986   48771 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.3-0: (2.151023816s)
	I0603 11:44:13.278029   48771 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/19008-7755/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.3-0 from cache
	I0603 11:44:13.278051   48771 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.24.4
	I0603 11:44:13.278088   48771 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.24.4
	I0603 11:44:14.134753   48771 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/19008-7755/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.24.4 from cache
	I0603 11:44:14.134799   48771 crio.go:275] Loading image: /var/lib/minikube/images/pause_3.7
	I0603 11:44:14.134865   48771 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/pause_3.7
	I0603 11:44:14.281600   48771 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/19008-7755/.minikube/cache/images/amd64/registry.k8s.io/pause_3.7 from cache
	I0603 11:44:14.281654   48771 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.24.4
	I0603 11:44:14.281707   48771 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.24.4
	I0603 11:44:14.729775   48771 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/19008-7755/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.24.4 from cache
	I0603 11:44:14.729829   48771 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.24.4
	I0603 11:44:14.729883   48771 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.24.4
	I0603 11:44:15.477146   48771 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/19008-7755/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.24.4 from cache
	I0603 11:44:15.477199   48771 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.24.4
	I0603 11:44:15.477253   48771 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.24.4
	I0603 11:44:16.222595   48771 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/19008-7755/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.24.4 from cache
	I0603 11:44:16.222642   48771 cache_images.go:123] Successfully loaded all cached images
	I0603 11:44:16.222649   48771 cache_images.go:92] duration metric: took 8.485251462s to LoadCachedImages
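
LoadCachedImages above stats each cached tarball under /var/lib/minikube/images, skips copies that already exist, and loads them into CRI-O storage one at a time with `podman load`. A small Go sketch of that loading loop (not minikube's code):

    // Sketch: load every cached image tarball into the container runtime via podman.
    package main

    import (
    	"log"
    	"os/exec"
    	"path/filepath"
    )

    func main() {
    	tarballs, err := filepath.Glob("/var/lib/minikube/images/*")
    	if err != nil {
    		log.Fatal(err)
    	}
    	for _, t := range tarballs {
    		out, err := exec.Command("sudo", "podman", "load", "-i", t).CombinedOutput()
    		if err != nil {
    			log.Fatalf("podman load %s: %v\n%s", t, err, out)
    		}
    		log.Printf("loaded %s", t)
    	}
    }
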
	I0603 11:44:16.222678   48771 kubeadm.go:928] updating node { 192.168.39.137 8443 v1.24.4 crio true true} ...
	I0603 11:44:16.222792   48771 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=test-preload-206663 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.137
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.4 ClusterName:test-preload-206663 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0603 11:44:16.222853   48771 ssh_runner.go:195] Run: crio config
	I0603 11:44:16.269622   48771 cni.go:84] Creating CNI manager for ""
	I0603 11:44:16.269645   48771 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0603 11:44:16.269657   48771 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0603 11:44:16.269674   48771 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.137 APIServerPort:8443 KubernetesVersion:v1.24.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:test-preload-206663 NodeName:test-preload-206663 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.137"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.137 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}

	I0603 11:44:16.269843   48771 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.137
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "test-preload-206663"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.137
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.137"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0603 11:44:16.269915   48771 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.4
	I0603 11:44:16.280199   48771 binaries.go:44] Found k8s binaries, skipping transfer
	I0603 11:44:16.280256   48771 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0603 11:44:16.289533   48771 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (379 bytes)
	I0603 11:44:16.305906   48771 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0603 11:44:16.321720   48771 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2106 bytes)
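
The kubeadm and kubelet configuration written above is rendered from the cluster settings (node name, IP, Kubernetes version, CIDRs). A toy Go text/template sketch of that kind of rendering; the ClusterSettings struct and its field names here are made up for illustration and are not minikube's types:

    // Sketch: render a kubeadm-style config from cluster settings with text/template.
    package main

    import (
    	"os"
    	"text/template"
    )

    // ClusterSettings is a hypothetical struct, for illustration only.
    type ClusterSettings struct {
    	NodeName          string
    	NodeIP            string
    	KubernetesVersion string
    	PodSubnet         string
    	ServiceSubnet     string
    }

    const kubeadmTmpl = `apiVersion: kubeadm.k8s.io/v1beta3
    kind: InitConfiguration
    localAPIEndpoint:
      advertiseAddress: {{.NodeIP}}
      bindPort: 8443
    nodeRegistration:
      criSocket: unix:///var/run/crio/crio.sock
      name: "{{.NodeName}}"
      kubeletExtraArgs:
        node-ip: {{.NodeIP}}
    ---
    apiVersion: kubeadm.k8s.io/v1beta3
    kind: ClusterConfiguration
    kubernetesVersion: {{.KubernetesVersion}}
    networking:
      podSubnet: "{{.PodSubnet}}"
      serviceSubnet: {{.ServiceSubnet}}
    `

    func main() {
    	t := template.Must(template.New("kubeadm").Parse(kubeadmTmpl))
    	_ = t.Execute(os.Stdout, ClusterSettings{
    		NodeName:          "test-preload-206663",
    		NodeIP:            "192.168.39.137",
    		KubernetesVersion: "v1.24.4",
    		PodSubnet:         "10.244.0.0/16",
    		ServiceSubnet:     "10.96.0.0/12",
    	})
    }
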
	I0603 11:44:16.337861   48771 ssh_runner.go:195] Run: grep 192.168.39.137	control-plane.minikube.internal$ /etc/hosts
	I0603 11:44:16.341462   48771 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.137	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0603 11:44:16.353290   48771 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0603 11:44:16.481043   48771 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0603 11:44:16.498013   48771 certs.go:68] Setting up /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/test-preload-206663 for IP: 192.168.39.137
	I0603 11:44:16.498039   48771 certs.go:194] generating shared ca certs ...
	I0603 11:44:16.498058   48771 certs.go:226] acquiring lock for ca certs: {Name:mk50092df3bdecbf959926adc377ee06b75aacd9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 11:44:16.498241   48771 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19008-7755/.minikube/ca.key
	I0603 11:44:16.498294   48771 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19008-7755/.minikube/proxy-client-ca.key
	I0603 11:44:16.498304   48771 certs.go:256] generating profile certs ...
	I0603 11:44:16.498411   48771 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/test-preload-206663/client.key
	I0603 11:44:16.498493   48771 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/test-preload-206663/apiserver.key.85be90aa
	I0603 11:44:16.498568   48771 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/test-preload-206663/proxy-client.key
	I0603 11:44:16.498725   48771 certs.go:484] found cert: /home/jenkins/minikube-integration/19008-7755/.minikube/certs/15028.pem (1338 bytes)
	W0603 11:44:16.498775   48771 certs.go:480] ignoring /home/jenkins/minikube-integration/19008-7755/.minikube/certs/15028_empty.pem, impossibly tiny 0 bytes
	I0603 11:44:16.498791   48771 certs.go:484] found cert: /home/jenkins/minikube-integration/19008-7755/.minikube/certs/ca-key.pem (1679 bytes)
	I0603 11:44:16.498822   48771 certs.go:484] found cert: /home/jenkins/minikube-integration/19008-7755/.minikube/certs/ca.pem (1082 bytes)
	I0603 11:44:16.498861   48771 certs.go:484] found cert: /home/jenkins/minikube-integration/19008-7755/.minikube/certs/cert.pem (1123 bytes)
	I0603 11:44:16.498892   48771 certs.go:484] found cert: /home/jenkins/minikube-integration/19008-7755/.minikube/certs/key.pem (1679 bytes)
	I0603 11:44:16.498965   48771 certs.go:484] found cert: /home/jenkins/minikube-integration/19008-7755/.minikube/files/etc/ssl/certs/150282.pem (1708 bytes)
	I0603 11:44:16.499881   48771 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0603 11:44:16.541043   48771 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0603 11:44:16.574304   48771 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0603 11:44:16.601239   48771 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0603 11:44:16.627992   48771 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/test-preload-206663/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0603 11:44:16.655760   48771 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/test-preload-206663/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0603 11:44:16.690983   48771 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/test-preload-206663/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0603 11:44:16.722520   48771 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/test-preload-206663/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0603 11:44:16.745011   48771 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/certs/15028.pem --> /usr/share/ca-certificates/15028.pem (1338 bytes)
	I0603 11:44:16.766943   48771 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/files/etc/ssl/certs/150282.pem --> /usr/share/ca-certificates/150282.pem (1708 bytes)
	I0603 11:44:16.789018   48771 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0603 11:44:16.811174   48771 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0603 11:44:16.826989   48771 ssh_runner.go:195] Run: openssl version
	I0603 11:44:16.832665   48771 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15028.pem && ln -fs /usr/share/ca-certificates/15028.pem /etc/ssl/certs/15028.pem"
	I0603 11:44:16.843131   48771 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15028.pem
	I0603 11:44:16.847646   48771 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jun  3 10:51 /usr/share/ca-certificates/15028.pem
	I0603 11:44:16.847705   48771 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15028.pem
	I0603 11:44:16.853458   48771 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/15028.pem /etc/ssl/certs/51391683.0"
	I0603 11:44:16.863859   48771 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/150282.pem && ln -fs /usr/share/ca-certificates/150282.pem /etc/ssl/certs/150282.pem"
	I0603 11:44:16.874070   48771 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/150282.pem
	I0603 11:44:16.878480   48771 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jun  3 10:51 /usr/share/ca-certificates/150282.pem
	I0603 11:44:16.878525   48771 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/150282.pem
	I0603 11:44:16.883951   48771 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/150282.pem /etc/ssl/certs/3ec20f2e.0"
	I0603 11:44:16.894624   48771 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0603 11:44:16.905078   48771 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0603 11:44:16.909261   48771 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jun  3 10:39 /usr/share/ca-certificates/minikubeCA.pem
	I0603 11:44:16.909316   48771 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0603 11:44:16.914771   48771 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
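
The loop above installs each CA bundle by computing its OpenSSL subject hash and pointing /etc/ssl/certs/<hash>.0 at it. A compact Go sketch of one such install, shelling out to openssl exactly as the log does (needs root; not minikube's code):

    // Sketch: hash a CA certificate and create the /etc/ssl/certs/<hash>.0 symlink.
    package main

    import (
    	"fmt"
    	"log"
    	"os"
    	"os/exec"
    	"strings"
    )

    func main() {
    	const cert = "/usr/share/ca-certificates/minikubeCA.pem"
    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", cert).Output()
    	if err != nil {
    		log.Fatal(err)
    	}
    	hash := strings.TrimSpace(string(out))
    	link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
    	_ = os.Remove(link) // replace any stale link
    	if err := os.Symlink(cert, link); err != nil {
    		log.Fatal(err)
    	}
    	log.Printf("linked %s -> %s", link, cert)
    }
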
	I0603 11:44:16.925530   48771 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0603 11:44:16.929924   48771 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0603 11:44:16.935677   48771 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0603 11:44:16.941461   48771 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0603 11:44:16.947326   48771 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0603 11:44:16.953010   48771 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0603 11:44:16.958640   48771 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
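
The `openssl x509 -checkend 86400` calls above verify that each control-plane certificate is still valid 24 hours from now. An equivalent check in Go with crypto/x509 (a sketch, reading one of the paths from the log):

    // Sketch: check that a certificate will still be valid 24 hours from now.
    package main

    import (
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"log"
    	"os"
    	"time"
    )

    func main() {
    	data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
    	if err != nil {
    		log.Fatal(err)
    	}
    	block, _ := pem.Decode(data)
    	if block == nil {
    		log.Fatal("no PEM block found")
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		log.Fatal(err)
    	}
    	deadline := time.Now().Add(24 * time.Hour) // 86400 seconds, as in the log
    	fmt.Printf("valid for the next 24h: %v (NotAfter=%s)\n",
    		deadline.Before(cert.NotAfter), cert.NotAfter)
    }
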
	I0603 11:44:16.964183   48771 kubeadm.go:391] StartCluster: {Name:test-preload-206663 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-206663 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.137 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0603 11:44:16.964252   48771 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0603 11:44:16.964281   48771 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0603 11:44:17.003119   48771 cri.go:89] found id: ""
	I0603 11:44:17.003196   48771 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0603 11:44:17.013344   48771 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0603 11:44:17.013365   48771 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0603 11:44:17.013370   48771 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0603 11:44:17.013423   48771 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0603 11:44:17.023246   48771 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0603 11:44:17.023681   48771 kubeconfig.go:47] verify endpoint returned: get endpoint: "test-preload-206663" does not appear in /home/jenkins/minikube-integration/19008-7755/kubeconfig
	I0603 11:44:17.023808   48771 kubeconfig.go:62] /home/jenkins/minikube-integration/19008-7755/kubeconfig needs updating (will repair): [kubeconfig missing "test-preload-206663" cluster setting kubeconfig missing "test-preload-206663" context setting]
	I0603 11:44:17.024218   48771 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19008-7755/kubeconfig: {Name:mk2f45bf5da44c5570e14124ae482cac309d97b6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 11:44:17.024835   48771 kapi.go:59] client config for test-preload-206663: &rest.Config{Host:"https://192.168.39.137:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19008-7755/.minikube/profiles/test-preload-206663/client.crt", KeyFile:"/home/jenkins/minikube-integration/19008-7755/.minikube/profiles/test-preload-206663/client.key", CAFile:"/home/jenkins/minikube-integration/19008-7755/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil
), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1cfa500), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0603 11:44:17.025471   48771 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0603 11:44:17.034514   48771 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.39.137
	I0603 11:44:17.034543   48771 kubeadm.go:1154] stopping kube-system containers ...
	I0603 11:44:17.034565   48771 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0603 11:44:17.034597   48771 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0603 11:44:17.073333   48771 cri.go:89] found id: ""
	I0603 11:44:17.073410   48771 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0603 11:44:17.089320   48771 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0603 11:44:17.099045   48771 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0603 11:44:17.099071   48771 kubeadm.go:156] found existing configuration files:
	
	I0603 11:44:17.099123   48771 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0603 11:44:17.108238   48771 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0603 11:44:17.108281   48771 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0603 11:44:17.117552   48771 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0603 11:44:17.126256   48771 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0603 11:44:17.126295   48771 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0603 11:44:17.135373   48771 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0603 11:44:17.143958   48771 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0603 11:44:17.143995   48771 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0603 11:44:17.153837   48771 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0603 11:44:17.162519   48771 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0603 11:44:17.162572   48771 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
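
Stale-config cleanup keeps a kubeconfig under /etc/kubernetes only if it already points at the expected control-plane endpoint; here none of the four files exist yet on the freshly restarted preload VM, so each grep exits with status 2 and the (nonexistent) file is removed before being regenerated. A condensed sketch of the same loop, assuming the endpoint shown in the log:

    for f in admin kubelet controller-manager scheduler; do
      sudo grep -q "https://control-plane.minikube.internal:8443" /etc/kubernetes/$f.conf \
        || sudo rm -f /etc/kubernetes/$f.conf
    done
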
	I0603 11:44:17.171662   48771 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0603 11:44:17.180998   48771 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0603 11:44:17.277378   48771 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0603 11:44:18.068536   48771 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0603 11:44:18.308669   48771 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0603 11:44:18.381739   48771 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
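
Rather than a full kubeadm init, restartPrimaryControlPlane re-runs the individual init phases against the freshly copied /var/tmp/minikube/kubeadm.yaml: certificates, kubeconfigs, kubelet bootstrap, the control-plane static-pod manifests, and local etcd (the addon phase follows later, once the API server is healthy). Collapsed into a sketch using the version-pinned binaries directory from the log:

    CFG=/var/tmp/minikube/kubeadm.yaml
    BIN=/var/lib/minikube/binaries/v1.24.4
    for phase in "certs all" "kubeconfig all" kubelet-start "control-plane all" "etcd local"; do
      # word splitting on $phase is intentional: "certs all" becomes two arguments
      sudo env PATH="$BIN:$PATH" kubeadm init phase $phase --config "$CFG"
    done
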
	I0603 11:44:18.474846   48771 api_server.go:52] waiting for apiserver process to appear ...
	I0603 11:44:18.474909   48771 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 11:44:18.975562   48771 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 11:44:19.475749   48771 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 11:44:19.514832   48771 api_server.go:72] duration metric: took 1.039999395s to wait for apiserver process to appear ...
	I0603 11:44:19.514862   48771 api_server.go:88] waiting for apiserver healthz status ...
	I0603 11:44:19.514882   48771 api_server.go:253] Checking apiserver healthz at https://192.168.39.137:8443/healthz ...
	I0603 11:44:19.515358   48771 api_server.go:269] stopped: https://192.168.39.137:8443/healthz: Get "https://192.168.39.137:8443/healthz": dial tcp 192.168.39.137:8443: connect: connection refused
	I0603 11:44:20.015116   48771 api_server.go:253] Checking apiserver healthz at https://192.168.39.137:8443/healthz ...
	I0603 11:44:23.718295   48771 api_server.go:279] https://192.168.39.137:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0603 11:44:23.718324   48771 api_server.go:103] status: https://192.168.39.137:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0603 11:44:23.718339   48771 api_server.go:253] Checking apiserver healthz at https://192.168.39.137:8443/healthz ...
	I0603 11:44:23.773222   48771 api_server.go:279] https://192.168.39.137:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0603 11:44:23.773258   48771 api_server.go:103] status: https://192.168.39.137:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0603 11:44:24.015596   48771 api_server.go:253] Checking apiserver healthz at https://192.168.39.137:8443/healthz ...
	I0603 11:44:24.021670   48771 api_server.go:279] https://192.168.39.137:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0603 11:44:24.021699   48771 api_server.go:103] status: https://192.168.39.137:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0603 11:44:24.515229   48771 api_server.go:253] Checking apiserver healthz at https://192.168.39.137:8443/healthz ...
	I0603 11:44:24.522833   48771 api_server.go:279] https://192.168.39.137:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0603 11:44:24.522858   48771 api_server.go:103] status: https://192.168.39.137:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0603 11:44:25.015059   48771 api_server.go:253] Checking apiserver healthz at https://192.168.39.137:8443/healthz ...
	I0603 11:44:25.020409   48771 api_server.go:279] https://192.168.39.137:8443/healthz returned 200:
	ok
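
The /healthz poll above only treats an HTTP 200 as ready: the initial connection-refused and 403 responses arrive while the API server is still starting and anonymous access to /healthz is denied, and the 500 responses enumerate the post-start hooks (rbac/bootstrap-roles, the scheduling priority classes, apiservice registration) that have not finished yet. The same endpoint can be probed by hand; the curl invocation below is illustrative, and ?verbose asks for the per-check breakdown even on success:

    # with the profile's client certs (paths as printed earlier in this log);
    # -k can be used instead to skip TLS verification entirely
    curl --cacert /home/jenkins/minikube-integration/19008-7755/.minikube/ca.crt \
         --cert /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/test-preload-206663/client.crt \
         --key  /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/test-preload-206663/client.key \
         "https://192.168.39.137:8443/healthz?verbose"
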
	I0603 11:44:25.026775   48771 api_server.go:141] control plane version: v1.24.4
	I0603 11:44:25.026799   48771 api_server.go:131] duration metric: took 5.511931212s to wait for apiserver health ...
	I0603 11:44:25.026808   48771 cni.go:84] Creating CNI manager for ""
	I0603 11:44:25.026814   48771 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0603 11:44:25.028606   48771 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0603 11:44:25.029985   48771 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0603 11:44:25.061617   48771 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
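
Because the kvm2 driver is paired with the crio runtime, minikube falls back to its built-in bridge CNI and writes a 496-byte conflist to /etc/cni/net.d/1-k8s.conflist for CRI-O to pick up. The log does not show the file's contents; the snippet below is a representative bridge-plus-portmap conflist of that kind, an assumption rather than the exact bytes minikube generated:

    sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
    {
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        { "type": "bridge", "bridge": "bridge", "addIf": "true",
          "isDefaultGateway": true, "ipMasq": true, "hairpinMode": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" } },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }
    EOF
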
	I0603 11:44:25.099150   48771 system_pods.go:43] waiting for kube-system pods to appear ...
	I0603 11:44:25.108851   48771 system_pods.go:59] 7 kube-system pods found
	I0603 11:44:25.108884   48771 system_pods.go:61] "coredns-6d4b75cb6d-sshn8" [ef0e6792-9df3-4535-8699-ef7ac766f17e] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0603 11:44:25.108890   48771 system_pods.go:61] "etcd-test-preload-206663" [16be736a-904f-4eab-a2a4-c6e768c6c8c2] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0603 11:44:25.108904   48771 system_pods.go:61] "kube-apiserver-test-preload-206663" [17ed5348-2884-48a4-9b08-5175e64cf67b] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0603 11:44:25.108912   48771 system_pods.go:61] "kube-controller-manager-test-preload-206663" [27084e67-4b7f-407a-bee3-bec3475c1e37] Running
	I0603 11:44:25.108921   48771 system_pods.go:61] "kube-proxy-2hftt" [8be9ba27-295e-4e8e-a8db-bf5fb20d19d9] Running
	I0603 11:44:25.108929   48771 system_pods.go:61] "kube-scheduler-test-preload-206663" [bb307b3e-d26f-4d11-9284-8392aafcf228] Running
	I0603 11:44:25.108936   48771 system_pods.go:61] "storage-provisioner" [17f52a67-13c4-489f-85c2-da5cf5381dd4] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0603 11:44:25.108946   48771 system_pods.go:74] duration metric: took 9.773844ms to wait for pod list to return data ...
	I0603 11:44:25.108956   48771 node_conditions.go:102] verifying NodePressure condition ...
	I0603 11:44:25.113489   48771 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0603 11:44:25.113528   48771 node_conditions.go:123] node cpu capacity is 2
	I0603 11:44:25.113541   48771 node_conditions.go:105] duration metric: took 4.580728ms to run NodePressure ...
	I0603 11:44:25.113562   48771 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0603 11:44:25.393824   48771 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0603 11:44:25.398458   48771 kubeadm.go:733] kubelet initialised
	I0603 11:44:25.398484   48771 kubeadm.go:734] duration metric: took 4.630566ms waiting for restarted kubelet to initialise ...
	I0603 11:44:25.398495   48771 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0603 11:44:25.403714   48771 pod_ready.go:78] waiting up to 4m0s for pod "coredns-6d4b75cb6d-sshn8" in "kube-system" namespace to be "Ready" ...
	I0603 11:44:25.408873   48771 pod_ready.go:97] node "test-preload-206663" hosting pod "coredns-6d4b75cb6d-sshn8" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-206663" has status "Ready":"False"
	I0603 11:44:25.408901   48771 pod_ready.go:81] duration metric: took 5.164989ms for pod "coredns-6d4b75cb6d-sshn8" in "kube-system" namespace to be "Ready" ...
	E0603 11:44:25.408913   48771 pod_ready.go:66] WaitExtra: waitPodCondition: node "test-preload-206663" hosting pod "coredns-6d4b75cb6d-sshn8" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-206663" has status "Ready":"False"
	I0603 11:44:25.408922   48771 pod_ready.go:78] waiting up to 4m0s for pod "etcd-test-preload-206663" in "kube-system" namespace to be "Ready" ...
	I0603 11:44:25.413371   48771 pod_ready.go:97] node "test-preload-206663" hosting pod "etcd-test-preload-206663" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-206663" has status "Ready":"False"
	I0603 11:44:25.413408   48771 pod_ready.go:81] duration metric: took 4.469517ms for pod "etcd-test-preload-206663" in "kube-system" namespace to be "Ready" ...
	E0603 11:44:25.413419   48771 pod_ready.go:66] WaitExtra: waitPodCondition: node "test-preload-206663" hosting pod "etcd-test-preload-206663" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-206663" has status "Ready":"False"
	I0603 11:44:25.413431   48771 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-test-preload-206663" in "kube-system" namespace to be "Ready" ...
	I0603 11:44:25.418058   48771 pod_ready.go:97] node "test-preload-206663" hosting pod "kube-apiserver-test-preload-206663" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-206663" has status "Ready":"False"
	I0603 11:44:25.418080   48771 pod_ready.go:81] duration metric: took 4.639016ms for pod "kube-apiserver-test-preload-206663" in "kube-system" namespace to be "Ready" ...
	E0603 11:44:25.418090   48771 pod_ready.go:66] WaitExtra: waitPodCondition: node "test-preload-206663" hosting pod "kube-apiserver-test-preload-206663" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-206663" has status "Ready":"False"
	I0603 11:44:25.418098   48771 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-test-preload-206663" in "kube-system" namespace to be "Ready" ...
	I0603 11:44:25.503373   48771 pod_ready.go:97] node "test-preload-206663" hosting pod "kube-controller-manager-test-preload-206663" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-206663" has status "Ready":"False"
	I0603 11:44:25.503418   48771 pod_ready.go:81] duration metric: took 85.305168ms for pod "kube-controller-manager-test-preload-206663" in "kube-system" namespace to be "Ready" ...
	E0603 11:44:25.503431   48771 pod_ready.go:66] WaitExtra: waitPodCondition: node "test-preload-206663" hosting pod "kube-controller-manager-test-preload-206663" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-206663" has status "Ready":"False"
	I0603 11:44:25.503440   48771 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-2hftt" in "kube-system" namespace to be "Ready" ...
	I0603 11:44:25.903183   48771 pod_ready.go:97] node "test-preload-206663" hosting pod "kube-proxy-2hftt" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-206663" has status "Ready":"False"
	I0603 11:44:25.903217   48771 pod_ready.go:81] duration metric: took 399.765462ms for pod "kube-proxy-2hftt" in "kube-system" namespace to be "Ready" ...
	E0603 11:44:25.903230   48771 pod_ready.go:66] WaitExtra: waitPodCondition: node "test-preload-206663" hosting pod "kube-proxy-2hftt" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-206663" has status "Ready":"False"
	I0603 11:44:25.903239   48771 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-test-preload-206663" in "kube-system" namespace to be "Ready" ...
	I0603 11:44:26.302779   48771 pod_ready.go:97] node "test-preload-206663" hosting pod "kube-scheduler-test-preload-206663" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-206663" has status "Ready":"False"
	I0603 11:44:26.302812   48771 pod_ready.go:81] duration metric: took 399.564621ms for pod "kube-scheduler-test-preload-206663" in "kube-system" namespace to be "Ready" ...
	E0603 11:44:26.302824   48771 pod_ready.go:66] WaitExtra: waitPodCondition: node "test-preload-206663" hosting pod "kube-scheduler-test-preload-206663" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-206663" has status "Ready":"False"
	I0603 11:44:26.302833   48771 pod_ready.go:38] duration metric: took 904.328256ms for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0603 11:44:26.302858   48771 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0603 11:44:26.316156   48771 ops.go:34] apiserver oom_adj: -16
	I0603 11:44:26.316176   48771 kubeadm.go:591] duration metric: took 9.302800041s to restartPrimaryControlPlane
	I0603 11:44:26.316186   48771 kubeadm.go:393] duration metric: took 9.352005789s to StartCluster
	I0603 11:44:26.316206   48771 settings.go:142] acquiring lock: {Name:mkda1bdbbfe91266270f1d999e6d56fc2830d6f8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 11:44:26.316280   48771 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19008-7755/kubeconfig
	I0603 11:44:26.317158   48771 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19008-7755/kubeconfig: {Name:mk2f45bf5da44c5570e14124ae482cac309d97b6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 11:44:26.317435   48771 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.39.137 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0603 11:44:26.319988   48771 out.go:177] * Verifying Kubernetes components...
	I0603 11:44:26.317503   48771 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0603 11:44:26.317669   48771 config.go:182] Loaded profile config "test-preload-206663": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.4
	I0603 11:44:26.320027   48771 addons.go:69] Setting storage-provisioner=true in profile "test-preload-206663"
	I0603 11:44:26.320046   48771 addons.go:234] Setting addon storage-provisioner=true in "test-preload-206663"
	W0603 11:44:26.320055   48771 addons.go:243] addon storage-provisioner should already be in state true
	I0603 11:44:26.321303   48771 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0603 11:44:26.320093   48771 addons.go:69] Setting default-storageclass=true in profile "test-preload-206663"
	I0603 11:44:26.321345   48771 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "test-preload-206663"
	I0603 11:44:26.320094   48771 host.go:66] Checking if "test-preload-206663" exists ...
	I0603 11:44:26.321647   48771 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 11:44:26.321691   48771 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 11:44:26.321767   48771 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 11:44:26.321807   48771 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 11:44:26.336969   48771 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43745
	I0603 11:44:26.336969   48771 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45125
	I0603 11:44:26.337460   48771 main.go:141] libmachine: () Calling .GetVersion
	I0603 11:44:26.337516   48771 main.go:141] libmachine: () Calling .GetVersion
	I0603 11:44:26.337969   48771 main.go:141] libmachine: Using API Version  1
	I0603 11:44:26.337993   48771 main.go:141] libmachine: Using API Version  1
	I0603 11:44:26.338009   48771 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 11:44:26.338054   48771 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 11:44:26.338316   48771 main.go:141] libmachine: () Calling .GetMachineName
	I0603 11:44:26.338378   48771 main.go:141] libmachine: () Calling .GetMachineName
	I0603 11:44:26.338557   48771 main.go:141] libmachine: (test-preload-206663) Calling .GetState
	I0603 11:44:26.338856   48771 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 11:44:26.338904   48771 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 11:44:26.341014   48771 kapi.go:59] client config for test-preload-206663: &rest.Config{Host:"https://192.168.39.137:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19008-7755/.minikube/profiles/test-preload-206663/client.crt", KeyFile:"/home/jenkins/minikube-integration/19008-7755/.minikube/profiles/test-preload-206663/client.key", CAFile:"/home/jenkins/minikube-integration/19008-7755/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil
), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1cfa500), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0603 11:44:26.341355   48771 addons.go:234] Setting addon default-storageclass=true in "test-preload-206663"
	W0603 11:44:26.341380   48771 addons.go:243] addon default-storageclass should already be in state true
	I0603 11:44:26.341405   48771 host.go:66] Checking if "test-preload-206663" exists ...
	I0603 11:44:26.341804   48771 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 11:44:26.341848   48771 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 11:44:26.352838   48771 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45159
	I0603 11:44:26.353257   48771 main.go:141] libmachine: () Calling .GetVersion
	I0603 11:44:26.353716   48771 main.go:141] libmachine: Using API Version  1
	I0603 11:44:26.353745   48771 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 11:44:26.354035   48771 main.go:141] libmachine: () Calling .GetMachineName
	I0603 11:44:26.354256   48771 main.go:141] libmachine: (test-preload-206663) Calling .GetState
	I0603 11:44:26.355912   48771 main.go:141] libmachine: (test-preload-206663) Calling .DriverName
	I0603 11:44:26.355972   48771 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40091
	I0603 11:44:26.357940   48771 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0603 11:44:26.356364   48771 main.go:141] libmachine: () Calling .GetVersion
	I0603 11:44:26.359427   48771 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0603 11:44:26.359448   48771 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0603 11:44:26.359465   48771 main.go:141] libmachine: (test-preload-206663) Calling .GetSSHHostname
	I0603 11:44:26.359785   48771 main.go:141] libmachine: Using API Version  1
	I0603 11:44:26.359804   48771 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 11:44:26.360161   48771 main.go:141] libmachine: () Calling .GetMachineName
	I0603 11:44:26.360752   48771 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 11:44:26.360796   48771 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 11:44:26.362318   48771 main.go:141] libmachine: (test-preload-206663) DBG | domain test-preload-206663 has defined MAC address 52:54:00:27:a3:a5 in network mk-test-preload-206663
	I0603 11:44:26.362783   48771 main.go:141] libmachine: (test-preload-206663) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:27:a3:a5", ip: ""} in network mk-test-preload-206663: {Iface:virbr1 ExpiryTime:2024-06-03 12:40:30 +0000 UTC Type:0 Mac:52:54:00:27:a3:a5 Iaid: IPaddr:192.168.39.137 Prefix:24 Hostname:test-preload-206663 Clientid:01:52:54:00:27:a3:a5}
	I0603 11:44:26.362812   48771 main.go:141] libmachine: (test-preload-206663) DBG | domain test-preload-206663 has defined IP address 192.168.39.137 and MAC address 52:54:00:27:a3:a5 in network mk-test-preload-206663
	I0603 11:44:26.362984   48771 main.go:141] libmachine: (test-preload-206663) Calling .GetSSHPort
	I0603 11:44:26.363171   48771 main.go:141] libmachine: (test-preload-206663) Calling .GetSSHKeyPath
	I0603 11:44:26.363345   48771 main.go:141] libmachine: (test-preload-206663) Calling .GetSSHUsername
	I0603 11:44:26.363515   48771 sshutil.go:53] new ssh client: &{IP:192.168.39.137 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19008-7755/.minikube/machines/test-preload-206663/id_rsa Username:docker}
	I0603 11:44:26.375207   48771 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34517
	I0603 11:44:26.375579   48771 main.go:141] libmachine: () Calling .GetVersion
	I0603 11:44:26.376056   48771 main.go:141] libmachine: Using API Version  1
	I0603 11:44:26.376078   48771 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 11:44:26.376353   48771 main.go:141] libmachine: () Calling .GetMachineName
	I0603 11:44:26.376559   48771 main.go:141] libmachine: (test-preload-206663) Calling .GetState
	I0603 11:44:26.377812   48771 main.go:141] libmachine: (test-preload-206663) Calling .DriverName
	I0603 11:44:26.378038   48771 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0603 11:44:26.378056   48771 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0603 11:44:26.378074   48771 main.go:141] libmachine: (test-preload-206663) Calling .GetSSHHostname
	I0603 11:44:26.380508   48771 main.go:141] libmachine: (test-preload-206663) DBG | domain test-preload-206663 has defined MAC address 52:54:00:27:a3:a5 in network mk-test-preload-206663
	I0603 11:44:26.380872   48771 main.go:141] libmachine: (test-preload-206663) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:27:a3:a5", ip: ""} in network mk-test-preload-206663: {Iface:virbr1 ExpiryTime:2024-06-03 12:40:30 +0000 UTC Type:0 Mac:52:54:00:27:a3:a5 Iaid: IPaddr:192.168.39.137 Prefix:24 Hostname:test-preload-206663 Clientid:01:52:54:00:27:a3:a5}
	I0603 11:44:26.380900   48771 main.go:141] libmachine: (test-preload-206663) DBG | domain test-preload-206663 has defined IP address 192.168.39.137 and MAC address 52:54:00:27:a3:a5 in network mk-test-preload-206663
	I0603 11:44:26.380993   48771 main.go:141] libmachine: (test-preload-206663) Calling .GetSSHPort
	I0603 11:44:26.381159   48771 main.go:141] libmachine: (test-preload-206663) Calling .GetSSHKeyPath
	I0603 11:44:26.381294   48771 main.go:141] libmachine: (test-preload-206663) Calling .GetSSHUsername
	I0603 11:44:26.381413   48771 sshutil.go:53] new ssh client: &{IP:192.168.39.137 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19008-7755/.minikube/machines/test-preload-206663/id_rsa Username:docker}
	I0603 11:44:26.506563   48771 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0603 11:44:26.522925   48771 node_ready.go:35] waiting up to 6m0s for node "test-preload-206663" to be "Ready" ...
	I0603 11:44:26.614721   48771 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0603 11:44:26.636422   48771 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
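
Both addon manifests are applied with the kubectl staged under /var/lib/minikube/binaries/v1.24.4 and the in-VM kubeconfig, so the client applying them always matches the cluster's Kubernetes version. A quick follow-up check from the host (hypothetical commands; minikube's default-storageclass addon conventionally names its StorageClass "standard"):

    kubectl --context test-preload-206663 -n kube-system get pod storage-provisioner
    kubectl --context test-preload-206663 get storageclass standard
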
	I0603 11:44:27.574932   48771 main.go:141] libmachine: Making call to close driver server
	I0603 11:44:27.574956   48771 main.go:141] libmachine: (test-preload-206663) Calling .Close
	I0603 11:44:27.575003   48771 main.go:141] libmachine: Making call to close driver server
	I0603 11:44:27.575022   48771 main.go:141] libmachine: (test-preload-206663) Calling .Close
	I0603 11:44:27.575277   48771 main.go:141] libmachine: (test-preload-206663) DBG | Closing plugin on server side
	I0603 11:44:27.575305   48771 main.go:141] libmachine: Successfully made call to close driver server
	I0603 11:44:27.575316   48771 main.go:141] libmachine: Making call to close connection to plugin binary
	I0603 11:44:27.575318   48771 main.go:141] libmachine: (test-preload-206663) DBG | Closing plugin on server side
	I0603 11:44:27.575321   48771 main.go:141] libmachine: Successfully made call to close driver server
	I0603 11:44:27.575342   48771 main.go:141] libmachine: Making call to close connection to plugin binary
	I0603 11:44:27.575354   48771 main.go:141] libmachine: Making call to close driver server
	I0603 11:44:27.575364   48771 main.go:141] libmachine: (test-preload-206663) Calling .Close
	I0603 11:44:27.575330   48771 main.go:141] libmachine: Making call to close driver server
	I0603 11:44:27.575413   48771 main.go:141] libmachine: (test-preload-206663) Calling .Close
	I0603 11:44:27.575601   48771 main.go:141] libmachine: Successfully made call to close driver server
	I0603 11:44:27.575618   48771 main.go:141] libmachine: Making call to close connection to plugin binary
	I0603 11:44:27.575607   48771 main.go:141] libmachine: Successfully made call to close driver server
	I0603 11:44:27.575631   48771 main.go:141] libmachine: (test-preload-206663) DBG | Closing plugin on server side
	I0603 11:44:27.575638   48771 main.go:141] libmachine: Making call to close connection to plugin binary
	I0603 11:44:27.575630   48771 main.go:141] libmachine: (test-preload-206663) DBG | Closing plugin on server side
	I0603 11:44:27.580834   48771 main.go:141] libmachine: Making call to close driver server
	I0603 11:44:27.580856   48771 main.go:141] libmachine: (test-preload-206663) Calling .Close
	I0603 11:44:27.581070   48771 main.go:141] libmachine: Successfully made call to close driver server
	I0603 11:44:27.581085   48771 main.go:141] libmachine: Making call to close connection to plugin binary
	I0603 11:44:27.581101   48771 main.go:141] libmachine: (test-preload-206663) DBG | Closing plugin on server side
	I0603 11:44:27.583258   48771 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0603 11:44:27.584472   48771 addons.go:510] duration metric: took 1.266981733s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0603 11:44:28.526576   48771 node_ready.go:53] node "test-preload-206663" has status "Ready":"False"
	I0603 11:44:30.526792   48771 node_ready.go:53] node "test-preload-206663" has status "Ready":"False"
	I0603 11:44:33.026790   48771 node_ready.go:53] node "test-preload-206663" has status "Ready":"False"
	I0603 11:44:34.026887   48771 node_ready.go:49] node "test-preload-206663" has status "Ready":"True"
	I0603 11:44:34.026911   48771 node_ready.go:38] duration metric: took 7.503950163s for node "test-preload-206663" to be "Ready" ...
	I0603 11:44:34.026920   48771 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0603 11:44:34.031403   48771 pod_ready.go:78] waiting up to 6m0s for pod "coredns-6d4b75cb6d-sshn8" in "kube-system" namespace to be "Ready" ...
	I0603 11:44:34.036546   48771 pod_ready.go:92] pod "coredns-6d4b75cb6d-sshn8" in "kube-system" namespace has status "Ready":"True"
	I0603 11:44:34.036568   48771 pod_ready.go:81] duration metric: took 5.142689ms for pod "coredns-6d4b75cb6d-sshn8" in "kube-system" namespace to be "Ready" ...
	I0603 11:44:34.036577   48771 pod_ready.go:78] waiting up to 6m0s for pod "etcd-test-preload-206663" in "kube-system" namespace to be "Ready" ...
	I0603 11:44:35.043395   48771 pod_ready.go:92] pod "etcd-test-preload-206663" in "kube-system" namespace has status "Ready":"True"
	I0603 11:44:35.043422   48771 pod_ready.go:81] duration metric: took 1.006837307s for pod "etcd-test-preload-206663" in "kube-system" namespace to be "Ready" ...
	I0603 11:44:35.043431   48771 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-test-preload-206663" in "kube-system" namespace to be "Ready" ...
	I0603 11:44:37.051123   48771 pod_ready.go:102] pod "kube-apiserver-test-preload-206663" in "kube-system" namespace has status "Ready":"False"
	I0603 11:44:39.050476   48771 pod_ready.go:92] pod "kube-apiserver-test-preload-206663" in "kube-system" namespace has status "Ready":"True"
	I0603 11:44:39.050497   48771 pod_ready.go:81] duration metric: took 4.007059694s for pod "kube-apiserver-test-preload-206663" in "kube-system" namespace to be "Ready" ...
	I0603 11:44:39.050506   48771 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-test-preload-206663" in "kube-system" namespace to be "Ready" ...
	I0603 11:44:39.055288   48771 pod_ready.go:92] pod "kube-controller-manager-test-preload-206663" in "kube-system" namespace has status "Ready":"True"
	I0603 11:44:39.055306   48771 pod_ready.go:81] duration metric: took 4.794571ms for pod "kube-controller-manager-test-preload-206663" in "kube-system" namespace to be "Ready" ...
	I0603 11:44:39.055314   48771 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-2hftt" in "kube-system" namespace to be "Ready" ...
	I0603 11:44:39.060071   48771 pod_ready.go:92] pod "kube-proxy-2hftt" in "kube-system" namespace has status "Ready":"True"
	I0603 11:44:39.060085   48771 pod_ready.go:81] duration metric: took 4.755227ms for pod "kube-proxy-2hftt" in "kube-system" namespace to be "Ready" ...
	I0603 11:44:39.060091   48771 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-test-preload-206663" in "kube-system" namespace to be "Ready" ...
	I0603 11:44:39.064364   48771 pod_ready.go:92] pod "kube-scheduler-test-preload-206663" in "kube-system" namespace has status "Ready":"True"
	I0603 11:44:39.064379   48771 pod_ready.go:81] duration metric: took 4.281404ms for pod "kube-scheduler-test-preload-206663" in "kube-system" namespace to be "Ready" ...
	I0603 11:44:39.064389   48771 pod_ready.go:38] duration metric: took 5.037460519s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0603 11:44:39.064405   48771 api_server.go:52] waiting for apiserver process to appear ...
	I0603 11:44:39.064453   48771 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 11:44:39.081113   48771 api_server.go:72] duration metric: took 12.763647656s to wait for apiserver process to appear ...
	I0603 11:44:39.081136   48771 api_server.go:88] waiting for apiserver healthz status ...
	I0603 11:44:39.081147   48771 api_server.go:253] Checking apiserver healthz at https://192.168.39.137:8443/healthz ...
	I0603 11:44:39.089180   48771 api_server.go:279] https://192.168.39.137:8443/healthz returned 200:
	ok
	I0603 11:44:39.089997   48771 api_server.go:141] control plane version: v1.24.4
	I0603 11:44:39.090017   48771 api_server.go:131] duration metric: took 8.87463ms to wait for apiserver health ...
	I0603 11:44:39.090026   48771 system_pods.go:43] waiting for kube-system pods to appear ...
	I0603 11:44:39.095760   48771 system_pods.go:59] 7 kube-system pods found
	I0603 11:44:39.095784   48771 system_pods.go:61] "coredns-6d4b75cb6d-sshn8" [ef0e6792-9df3-4535-8699-ef7ac766f17e] Running
	I0603 11:44:39.095791   48771 system_pods.go:61] "etcd-test-preload-206663" [16be736a-904f-4eab-a2a4-c6e768c6c8c2] Running
	I0603 11:44:39.095796   48771 system_pods.go:61] "kube-apiserver-test-preload-206663" [17ed5348-2884-48a4-9b08-5175e64cf67b] Running
	I0603 11:44:39.095802   48771 system_pods.go:61] "kube-controller-manager-test-preload-206663" [27084e67-4b7f-407a-bee3-bec3475c1e37] Running
	I0603 11:44:39.095806   48771 system_pods.go:61] "kube-proxy-2hftt" [8be9ba27-295e-4e8e-a8db-bf5fb20d19d9] Running
	I0603 11:44:39.095810   48771 system_pods.go:61] "kube-scheduler-test-preload-206663" [bb307b3e-d26f-4d11-9284-8392aafcf228] Running
	I0603 11:44:39.095815   48771 system_pods.go:61] "storage-provisioner" [17f52a67-13c4-489f-85c2-da5cf5381dd4] Running
	I0603 11:44:39.095821   48771 system_pods.go:74] duration metric: took 5.789231ms to wait for pod list to return data ...
	I0603 11:44:39.095831   48771 default_sa.go:34] waiting for default service account to be created ...
	I0603 11:44:39.226961   48771 default_sa.go:45] found service account: "default"
	I0603 11:44:39.226987   48771 default_sa.go:55] duration metric: took 131.148024ms for default service account to be created ...
	I0603 11:44:39.226995   48771 system_pods.go:116] waiting for k8s-apps to be running ...
	I0603 11:44:39.430920   48771 system_pods.go:86] 7 kube-system pods found
	I0603 11:44:39.430945   48771 system_pods.go:89] "coredns-6d4b75cb6d-sshn8" [ef0e6792-9df3-4535-8699-ef7ac766f17e] Running
	I0603 11:44:39.430950   48771 system_pods.go:89] "etcd-test-preload-206663" [16be736a-904f-4eab-a2a4-c6e768c6c8c2] Running
	I0603 11:44:39.430954   48771 system_pods.go:89] "kube-apiserver-test-preload-206663" [17ed5348-2884-48a4-9b08-5175e64cf67b] Running
	I0603 11:44:39.430959   48771 system_pods.go:89] "kube-controller-manager-test-preload-206663" [27084e67-4b7f-407a-bee3-bec3475c1e37] Running
	I0603 11:44:39.430963   48771 system_pods.go:89] "kube-proxy-2hftt" [8be9ba27-295e-4e8e-a8db-bf5fb20d19d9] Running
	I0603 11:44:39.430967   48771 system_pods.go:89] "kube-scheduler-test-preload-206663" [bb307b3e-d26f-4d11-9284-8392aafcf228] Running
	I0603 11:44:39.430972   48771 system_pods.go:89] "storage-provisioner" [17f52a67-13c4-489f-85c2-da5cf5381dd4] Running
	I0603 11:44:39.430979   48771 system_pods.go:126] duration metric: took 203.978985ms to wait for k8s-apps to be running ...
	I0603 11:44:39.430985   48771 system_svc.go:44] waiting for kubelet service to be running ....
	I0603 11:44:39.431025   48771 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0603 11:44:39.448374   48771 system_svc.go:56] duration metric: took 17.380119ms WaitForService to wait for kubelet
	I0603 11:44:39.448401   48771 kubeadm.go:576] duration metric: took 13.130938569s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0603 11:44:39.448416   48771 node_conditions.go:102] verifying NodePressure condition ...
	I0603 11:44:39.627944   48771 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0603 11:44:39.628006   48771 node_conditions.go:123] node cpu capacity is 2
	I0603 11:44:39.628018   48771 node_conditions.go:105] duration metric: took 179.598531ms to run NodePressure ...
	I0603 11:44:39.628028   48771 start.go:240] waiting for startup goroutines ...
	I0603 11:44:39.628059   48771 start.go:245] waiting for cluster config update ...
	I0603 11:44:39.628068   48771 start.go:254] writing updated cluster config ...
	I0603 11:44:39.628316   48771 ssh_runner.go:195] Run: rm -f paused
	I0603 11:44:39.677454   48771 start.go:600] kubectl: 1.30.1, cluster: 1.24.4 (minor skew: 6)
	I0603 11:44:39.679295   48771 out.go:177] 
	W0603 11:44:39.680429   48771 out.go:239] ! /usr/local/bin/kubectl is version 1.30.1, which may have incompatibilities with Kubernetes 1.24.4.
	I0603 11:44:39.681561   48771 out.go:177]   - Want kubectl v1.24.4? Try 'minikube kubectl -- get pods -A'
	I0603 11:44:39.682718   48771 out.go:177] * Done! kubectl is now configured to use "test-preload-206663" cluster and "default" namespace by default
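
The closing warning is purely about client/server version skew: the host's kubectl 1.30.1 is six minor versions ahead of the v1.24.4 control plane. The suggestion printed in the log avoids the skew by routing through minikube's bundled client, which fetches a matching kubectl on first use, e.g.:

    minikube -p test-preload-206663 kubectl -- get pods -A
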
	
	
	==> CRI-O <==
	Jun 03 11:44:40 test-preload-206663 crio[680]: time="2024-06-03 11:44:40.581692553Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=9ccc7f42-2008-44a0-86b1-14a33614f3ef name=/runtime.v1.RuntimeService/Version
	Jun 03 11:44:40 test-preload-206663 crio[680]: time="2024-06-03 11:44:40.583048231Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=fce53c03-2f61-4f64-ade4-9b2e52d8e674 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 03 11:44:40 test-preload-206663 crio[680]: time="2024-06-03 11:44:40.583575066Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1717415080583448370,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:119830,},InodesUsed:&UInt64Value{Value:76,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=fce53c03-2f61-4f64-ade4-9b2e52d8e674 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 03 11:44:40 test-preload-206663 crio[680]: time="2024-06-03 11:44:40.584159121Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=662f014d-a5df-4f28-863b-510264fbd593 name=/runtime.v1.RuntimeService/ListContainers
	Jun 03 11:44:40 test-preload-206663 crio[680]: time="2024-06-03 11:44:40.584207410Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=662f014d-a5df-4f28-863b-510264fbd593 name=/runtime.v1.RuntimeService/ListContainers
	Jun 03 11:44:40 test-preload-206663 crio[680]: time="2024-06-03 11:44:40.584415980Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:4f3fbaa57972149b746a1a76e8d14fc6cf2440466a7e17be36934b5908ee5b9b,PodSandboxId:42ab9d0687019b0c090ea07897f997e1e8d29842e9438ff63fda5b399880f0bb,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,State:CONTAINER_RUNNING,CreatedAt:1717415072678628220,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-sshn8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ef0e6792-9df3-4535-8699-ef7ac766f17e,},Annotations:map[string]string{io.kubernetes.container.hash: faaafe07,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fd379e5eda2a0d8346b5dce1f5b9395e8e45e4e306b08749b78d3cc98662ea39,PodSandboxId:bb7c89b15e9327978a6afe6d25e23cb985ea9b1f49c837fe6cf4b55562ab390d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1717415065461944752,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube
-system,io.kubernetes.pod.uid: 17f52a67-13c4-489f-85c2-da5cf5381dd4,},Annotations:map[string]string{io.kubernetes.container.hash: eae73447,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8f8068aeb7a3e3ef119f78966573982f1541ca239a12759bdec2a2c8028c3270,PodSandboxId:71e5190d5e8b850d34493096fa8d4b95079c73236c71a732b3190308210e86f3,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,State:CONTAINER_RUNNING,CreatedAt:1717415065173888778,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-2hftt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8b
e9ba27-295e-4e8e-a8db-bf5fb20d19d9,},Annotations:map[string]string{io.kubernetes.container.hash: 6d7a121f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:34ab593074b87227744ec8f0c7ccd988e19a381d1bd61f551cf7acfea0e0ecc4,PodSandboxId:20734f1cc0b3cba0ffe0ea28cd4c16d8b2d60502ff1b2b0b09142364a56bf2b6,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,State:CONTAINER_RUNNING,CreatedAt:1717415059212741265,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-206663,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aa9d82330b7513ce2cf7b8924e555c42,},Anno
tations:map[string]string{io.kubernetes.container.hash: 918d90d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:69c6257b54751376e3d64b6859b30668c50f9412f141cb4c8115a6580b0a622e,PodSandboxId:8615e4ae64a0c325714c1d555235781b036e6f695a53ce4a4c1ed645914e64ef,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_RUNNING,CreatedAt:1717415059206758275,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-206663,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 101c296ccd756f14cd0b4c18
bee783f6,},Annotations:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9e2400caf85af915c92c3989c7caa1cdd51e2e6b14dbd223cefb248706e4b58d,PodSandboxId:831cb4a6e30d0cbc94023afedf7abcd65f8afcc0b40ee141b737b89533e7589f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,State:CONTAINER_RUNNING,CreatedAt:1717415059185420696,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-206663,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2ca06cf3129b7be618893182f5ec0bfe,},
Annotations:map[string]string{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7cbade7ae5d1dba7e7623eb5bb8385b70b094c449dd7f3eeb3390086aaf96921,PodSandboxId:c51c203a73c35271cf9e3ffa732d5ab60b82cc8b1515114f57f13977c3656213,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_RUNNING,CreatedAt:1717415059147438852,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-206663,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0cd6bf8d5b0dd05156551c0e70541a58,},Annotations
:map[string]string{io.kubernetes.container.hash: 4a0d3a4b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=662f014d-a5df-4f28-863b-510264fbd593 name=/runtime.v1.RuntimeService/ListContainers
	Jun 03 11:44:40 test-preload-206663 crio[680]: time="2024-06-03 11:44:40.621248861Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=4685a708-cb01-4ee5-9ed9-5a2166b5e467 name=/runtime.v1.RuntimeService/Version
	Jun 03 11:44:40 test-preload-206663 crio[680]: time="2024-06-03 11:44:40.621357918Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=4685a708-cb01-4ee5-9ed9-5a2166b5e467 name=/runtime.v1.RuntimeService/Version
	Jun 03 11:44:40 test-preload-206663 crio[680]: time="2024-06-03 11:44:40.622552832Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=37320f55-ac00-4ad0-b0e4-e3fe96be758e name=/runtime.v1.ImageService/ImageFsInfo
	Jun 03 11:44:40 test-preload-206663 crio[680]: time="2024-06-03 11:44:40.623015441Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1717415080622993889,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:119830,},InodesUsed:&UInt64Value{Value:76,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=37320f55-ac00-4ad0-b0e4-e3fe96be758e name=/runtime.v1.ImageService/ImageFsInfo
	Jun 03 11:44:40 test-preload-206663 crio[680]: time="2024-06-03 11:44:40.623416898Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=189f19da-4e8d-4949-96e1-c1bd8e093057 name=/runtime.v1.RuntimeService/ListContainers
	Jun 03 11:44:40 test-preload-206663 crio[680]: time="2024-06-03 11:44:40.623549510Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=189f19da-4e8d-4949-96e1-c1bd8e093057 name=/runtime.v1.RuntimeService/ListContainers
	Jun 03 11:44:40 test-preload-206663 crio[680]: time="2024-06-03 11:44:40.623764440Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:4f3fbaa57972149b746a1a76e8d14fc6cf2440466a7e17be36934b5908ee5b9b,PodSandboxId:42ab9d0687019b0c090ea07897f997e1e8d29842e9438ff63fda5b399880f0bb,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,State:CONTAINER_RUNNING,CreatedAt:1717415072678628220,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-sshn8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ef0e6792-9df3-4535-8699-ef7ac766f17e,},Annotations:map[string]string{io.kubernetes.container.hash: faaafe07,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fd379e5eda2a0d8346b5dce1f5b9395e8e45e4e306b08749b78d3cc98662ea39,PodSandboxId:bb7c89b15e9327978a6afe6d25e23cb985ea9b1f49c837fe6cf4b55562ab390d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1717415065461944752,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube
-system,io.kubernetes.pod.uid: 17f52a67-13c4-489f-85c2-da5cf5381dd4,},Annotations:map[string]string{io.kubernetes.container.hash: eae73447,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8f8068aeb7a3e3ef119f78966573982f1541ca239a12759bdec2a2c8028c3270,PodSandboxId:71e5190d5e8b850d34493096fa8d4b95079c73236c71a732b3190308210e86f3,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,State:CONTAINER_RUNNING,CreatedAt:1717415065173888778,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-2hftt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8b
e9ba27-295e-4e8e-a8db-bf5fb20d19d9,},Annotations:map[string]string{io.kubernetes.container.hash: 6d7a121f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:34ab593074b87227744ec8f0c7ccd988e19a381d1bd61f551cf7acfea0e0ecc4,PodSandboxId:20734f1cc0b3cba0ffe0ea28cd4c16d8b2d60502ff1b2b0b09142364a56bf2b6,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,State:CONTAINER_RUNNING,CreatedAt:1717415059212741265,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-206663,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aa9d82330b7513ce2cf7b8924e555c42,},Anno
tations:map[string]string{io.kubernetes.container.hash: 918d90d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:69c6257b54751376e3d64b6859b30668c50f9412f141cb4c8115a6580b0a622e,PodSandboxId:8615e4ae64a0c325714c1d555235781b036e6f695a53ce4a4c1ed645914e64ef,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_RUNNING,CreatedAt:1717415059206758275,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-206663,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 101c296ccd756f14cd0b4c18
bee783f6,},Annotations:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9e2400caf85af915c92c3989c7caa1cdd51e2e6b14dbd223cefb248706e4b58d,PodSandboxId:831cb4a6e30d0cbc94023afedf7abcd65f8afcc0b40ee141b737b89533e7589f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,State:CONTAINER_RUNNING,CreatedAt:1717415059185420696,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-206663,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2ca06cf3129b7be618893182f5ec0bfe,},
Annotations:map[string]string{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7cbade7ae5d1dba7e7623eb5bb8385b70b094c449dd7f3eeb3390086aaf96921,PodSandboxId:c51c203a73c35271cf9e3ffa732d5ab60b82cc8b1515114f57f13977c3656213,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_RUNNING,CreatedAt:1717415059147438852,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-206663,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0cd6bf8d5b0dd05156551c0e70541a58,},Annotations
:map[string]string{io.kubernetes.container.hash: 4a0d3a4b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=189f19da-4e8d-4949-96e1-c1bd8e093057 name=/runtime.v1.RuntimeService/ListContainers
	Jun 03 11:44:40 test-preload-206663 crio[680]: time="2024-06-03 11:44:40.632025461Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:nil,}" file="otel-collector/interceptors.go:62" id=10ca0ad9-78cf-4361-a970-b0335ebdc206 name=/runtime.v1.RuntimeService/ListPodSandbox
	Jun 03 11:44:40 test-preload-206663 crio[680]: time="2024-06-03 11:44:40.632226437Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:42ab9d0687019b0c090ea07897f997e1e8d29842e9438ff63fda5b399880f0bb,Metadata:&PodSandboxMetadata{Name:coredns-6d4b75cb6d-sshn8,Uid:ef0e6792-9df3-4535-8699-ef7ac766f17e,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1717415072455960584,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-6d4b75cb6d-sshn8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ef0e6792-9df3-4535-8699-ef7ac766f17e,k8s-app: kube-dns,pod-template-hash: 6d4b75cb6d,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-06-03T11:44:24.427084327Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:bb7c89b15e9327978a6afe6d25e23cb985ea9b1f49c837fe6cf4b55562ab390d,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:17f52a67-13c4-489f-85c2-da5cf5381dd4,Namespace:kube-syste
m,Attempt:0,},State:SANDBOX_READY,CreatedAt:1717415065333592446,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 17f52a67-13c4-489f-85c2-da5cf5381dd4,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath
\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-06-03T11:44:24.427067551Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:71e5190d5e8b850d34493096fa8d4b95079c73236c71a732b3190308210e86f3,Metadata:&PodSandboxMetadata{Name:kube-proxy-2hftt,Uid:8be9ba27-295e-4e8e-a8db-bf5fb20d19d9,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1717415065041163520,Labels:map[string]string{controller-revision-hash: 6fd4744df8,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-2hftt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8be9ba27-295e-4e8e-a8db-bf5fb20d19d9,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-06-03T11:44:24.427089278Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:c51c203a73c35271cf9e3ffa732d5ab60b82cc8b1515114f57f13977c3656213,Metadata:&PodSandboxMetadata{Name:kube-apiserver-test-preload-206663,Uid:0cd6bf8
d5b0dd05156551c0e70541a58,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1717415058991872995,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-test-preload-206663,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0cd6bf8d5b0dd05156551c0e70541a58,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.137:8443,kubernetes.io/config.hash: 0cd6bf8d5b0dd05156551c0e70541a58,kubernetes.io/config.seen: 2024-06-03T11:44:18.440971808Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:831cb4a6e30d0cbc94023afedf7abcd65f8afcc0b40ee141b737b89533e7589f,Metadata:&PodSandboxMetadata{Name:kube-scheduler-test-preload-206663,Uid:2ca06cf3129b7be618893182f5ec0bfe,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1717415058991260442,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubern
etes.pod.name: kube-scheduler-test-preload-206663,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2ca06cf3129b7be618893182f5ec0bfe,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 2ca06cf3129b7be618893182f5ec0bfe,kubernetes.io/config.seen: 2024-06-03T11:44:18.441002546Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:8615e4ae64a0c325714c1d555235781b036e6f695a53ce4a4c1ed645914e64ef,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-test-preload-206663,Uid:101c296ccd756f14cd0b4c18bee783f6,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1717415058990979667,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-test-preload-206663,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 101c296ccd756f14cd0b4c18bee783f6,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 101c296ccd756f14cd0b4c18bee783f6,kub
ernetes.io/config.seen: 2024-06-03T11:44:18.441001421Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:20734f1cc0b3cba0ffe0ea28cd4c16d8b2d60502ff1b2b0b09142364a56bf2b6,Metadata:&PodSandboxMetadata{Name:etcd-test-preload-206663,Uid:aa9d82330b7513ce2cf7b8924e555c42,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1717415058989960099,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-test-preload-206663,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aa9d82330b7513ce2cf7b8924e555c42,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.137:2379,kubernetes.io/config.hash: aa9d82330b7513ce2cf7b8924e555c42,kubernetes.io/config.seen: 2024-06-03T11:44:18.469699318Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=10ca0ad9-78cf-4361-a970-b0335ebdc206 name=/runtime.v1.RuntimeService/ListPodSandbox
	Jun 03 11:44:40 test-preload-206663 crio[680]: time="2024-06-03 11:44:40.633013016Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=807ccbca-9916-49bd-b478-98b8149bbd89 name=/runtime.v1.RuntimeService/ListContainers
	Jun 03 11:44:40 test-preload-206663 crio[680]: time="2024-06-03 11:44:40.633060814Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=807ccbca-9916-49bd-b478-98b8149bbd89 name=/runtime.v1.RuntimeService/ListContainers
	Jun 03 11:44:40 test-preload-206663 crio[680]: time="2024-06-03 11:44:40.634918109Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:4f3fbaa57972149b746a1a76e8d14fc6cf2440466a7e17be36934b5908ee5b9b,PodSandboxId:42ab9d0687019b0c090ea07897f997e1e8d29842e9438ff63fda5b399880f0bb,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,State:CONTAINER_RUNNING,CreatedAt:1717415072678628220,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-sshn8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ef0e6792-9df3-4535-8699-ef7ac766f17e,},Annotations:map[string]string{io.kubernetes.container.hash: faaafe07,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fd379e5eda2a0d8346b5dce1f5b9395e8e45e4e306b08749b78d3cc98662ea39,PodSandboxId:bb7c89b15e9327978a6afe6d25e23cb985ea9b1f49c837fe6cf4b55562ab390d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1717415065461944752,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube
-system,io.kubernetes.pod.uid: 17f52a67-13c4-489f-85c2-da5cf5381dd4,},Annotations:map[string]string{io.kubernetes.container.hash: eae73447,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8f8068aeb7a3e3ef119f78966573982f1541ca239a12759bdec2a2c8028c3270,PodSandboxId:71e5190d5e8b850d34493096fa8d4b95079c73236c71a732b3190308210e86f3,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,State:CONTAINER_RUNNING,CreatedAt:1717415065173888778,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-2hftt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8b
e9ba27-295e-4e8e-a8db-bf5fb20d19d9,},Annotations:map[string]string{io.kubernetes.container.hash: 6d7a121f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:34ab593074b87227744ec8f0c7ccd988e19a381d1bd61f551cf7acfea0e0ecc4,PodSandboxId:20734f1cc0b3cba0ffe0ea28cd4c16d8b2d60502ff1b2b0b09142364a56bf2b6,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,State:CONTAINER_RUNNING,CreatedAt:1717415059212741265,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-206663,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aa9d82330b7513ce2cf7b8924e555c42,},Anno
tations:map[string]string{io.kubernetes.container.hash: 918d90d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:69c6257b54751376e3d64b6859b30668c50f9412f141cb4c8115a6580b0a622e,PodSandboxId:8615e4ae64a0c325714c1d555235781b036e6f695a53ce4a4c1ed645914e64ef,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_RUNNING,CreatedAt:1717415059206758275,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-206663,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 101c296ccd756f14cd0b4c18
bee783f6,},Annotations:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9e2400caf85af915c92c3989c7caa1cdd51e2e6b14dbd223cefb248706e4b58d,PodSandboxId:831cb4a6e30d0cbc94023afedf7abcd65f8afcc0b40ee141b737b89533e7589f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,State:CONTAINER_RUNNING,CreatedAt:1717415059185420696,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-206663,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2ca06cf3129b7be618893182f5ec0bfe,},
Annotations:map[string]string{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7cbade7ae5d1dba7e7623eb5bb8385b70b094c449dd7f3eeb3390086aaf96921,PodSandboxId:c51c203a73c35271cf9e3ffa732d5ab60b82cc8b1515114f57f13977c3656213,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_RUNNING,CreatedAt:1717415059147438852,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-206663,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0cd6bf8d5b0dd05156551c0e70541a58,},Annotations
:map[string]string{io.kubernetes.container.hash: 4a0d3a4b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=807ccbca-9916-49bd-b478-98b8149bbd89 name=/runtime.v1.RuntimeService/ListContainers
	Jun 03 11:44:40 test-preload-206663 crio[680]: time="2024-06-03 11:44:40.658269513Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=cced11c7-c5ae-40e1-adaf-c5be9e10ae7c name=/runtime.v1.RuntimeService/Version
	Jun 03 11:44:40 test-preload-206663 crio[680]: time="2024-06-03 11:44:40.658338066Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=cced11c7-c5ae-40e1-adaf-c5be9e10ae7c name=/runtime.v1.RuntimeService/Version
	Jun 03 11:44:40 test-preload-206663 crio[680]: time="2024-06-03 11:44:40.659258119Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=0b150f27-4454-45f7-8dab-c046d0998aea name=/runtime.v1.ImageService/ImageFsInfo
	Jun 03 11:44:40 test-preload-206663 crio[680]: time="2024-06-03 11:44:40.659737861Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1717415080659719153,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:119830,},InodesUsed:&UInt64Value{Value:76,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=0b150f27-4454-45f7-8dab-c046d0998aea name=/runtime.v1.ImageService/ImageFsInfo
	Jun 03 11:44:40 test-preload-206663 crio[680]: time="2024-06-03 11:44:40.660195352Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f8f22968-d893-4983-957b-30ed72abdaf8 name=/runtime.v1.RuntimeService/ListContainers
	Jun 03 11:44:40 test-preload-206663 crio[680]: time="2024-06-03 11:44:40.660242291Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f8f22968-d893-4983-957b-30ed72abdaf8 name=/runtime.v1.RuntimeService/ListContainers
	Jun 03 11:44:40 test-preload-206663 crio[680]: time="2024-06-03 11:44:40.660642568Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:4f3fbaa57972149b746a1a76e8d14fc6cf2440466a7e17be36934b5908ee5b9b,PodSandboxId:42ab9d0687019b0c090ea07897f997e1e8d29842e9438ff63fda5b399880f0bb,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,State:CONTAINER_RUNNING,CreatedAt:1717415072678628220,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-sshn8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ef0e6792-9df3-4535-8699-ef7ac766f17e,},Annotations:map[string]string{io.kubernetes.container.hash: faaafe07,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fd379e5eda2a0d8346b5dce1f5b9395e8e45e4e306b08749b78d3cc98662ea39,PodSandboxId:bb7c89b15e9327978a6afe6d25e23cb985ea9b1f49c837fe6cf4b55562ab390d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1717415065461944752,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube
-system,io.kubernetes.pod.uid: 17f52a67-13c4-489f-85c2-da5cf5381dd4,},Annotations:map[string]string{io.kubernetes.container.hash: eae73447,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8f8068aeb7a3e3ef119f78966573982f1541ca239a12759bdec2a2c8028c3270,PodSandboxId:71e5190d5e8b850d34493096fa8d4b95079c73236c71a732b3190308210e86f3,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,State:CONTAINER_RUNNING,CreatedAt:1717415065173888778,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-2hftt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8b
e9ba27-295e-4e8e-a8db-bf5fb20d19d9,},Annotations:map[string]string{io.kubernetes.container.hash: 6d7a121f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:34ab593074b87227744ec8f0c7ccd988e19a381d1bd61f551cf7acfea0e0ecc4,PodSandboxId:20734f1cc0b3cba0ffe0ea28cd4c16d8b2d60502ff1b2b0b09142364a56bf2b6,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,State:CONTAINER_RUNNING,CreatedAt:1717415059212741265,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-206663,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aa9d82330b7513ce2cf7b8924e555c42,},Anno
tations:map[string]string{io.kubernetes.container.hash: 918d90d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:69c6257b54751376e3d64b6859b30668c50f9412f141cb4c8115a6580b0a622e,PodSandboxId:8615e4ae64a0c325714c1d555235781b036e6f695a53ce4a4c1ed645914e64ef,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_RUNNING,CreatedAt:1717415059206758275,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-206663,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 101c296ccd756f14cd0b4c18
bee783f6,},Annotations:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9e2400caf85af915c92c3989c7caa1cdd51e2e6b14dbd223cefb248706e4b58d,PodSandboxId:831cb4a6e30d0cbc94023afedf7abcd65f8afcc0b40ee141b737b89533e7589f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,State:CONTAINER_RUNNING,CreatedAt:1717415059185420696,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-206663,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2ca06cf3129b7be618893182f5ec0bfe,},
Annotations:map[string]string{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7cbade7ae5d1dba7e7623eb5bb8385b70b094c449dd7f3eeb3390086aaf96921,PodSandboxId:c51c203a73c35271cf9e3ffa732d5ab60b82cc8b1515114f57f13977c3656213,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_RUNNING,CreatedAt:1717415059147438852,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-206663,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0cd6bf8d5b0dd05156551c0e70541a58,},Annotations
:map[string]string{io.kubernetes.container.hash: 4a0d3a4b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=f8f22968-d893-4983-957b-30ed72abdaf8 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	4f3fbaa579721       a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03   8 seconds ago       Running             coredns                   1                   42ab9d0687019       coredns-6d4b75cb6d-sshn8
	fd379e5eda2a0       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   15 seconds ago      Running             storage-provisioner       1                   bb7c89b15e932       storage-provisioner
	8f8068aeb7a3e       7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7   15 seconds ago      Running             kube-proxy                1                   71e5190d5e8b8       kube-proxy-2hftt
	34ab593074b87       aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b   21 seconds ago      Running             etcd                      1                   20734f1cc0b3c       etcd-test-preload-206663
	69c6257b54751       1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48   21 seconds ago      Running             kube-controller-manager   1                   8615e4ae64a0c       kube-controller-manager-test-preload-206663
	9e2400caf85af       03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9   21 seconds ago      Running             kube-scheduler            1                   831cb4a6e30d0       kube-scheduler-test-preload-206663
	7cbade7ae5d1d       6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d   21 seconds ago      Running             kube-apiserver            1                   c51c203a73c35       kube-apiserver-test-preload-206663
	
	
	==> coredns [4f3fbaa57972149b746a1a76e8d14fc6cf2440466a7e17be36934b5908ee5b9b] <==
	.:53
	[INFO] plugin/reload: Running configuration MD5 = bbeeddb09682f41960fef01b05cb3a3d
	CoreDNS-1.8.6
	linux/amd64, go1.17.1, 13a9191
	[INFO] 127.0.0.1:44831 - 17205 "HINFO IN 4277779524429112376.6926445905622771728. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.014540506s
	
	
	==> describe nodes <==
	Name:               test-preload-206663
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=test-preload-206663
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=599070631c2216ebc936292d491e4fe10e15b9d8
	                    minikube.k8s.io/name=test-preload-206663
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_06_03T11_42_50_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 03 Jun 2024 11:42:47 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  test-preload-206663
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 03 Jun 2024 11:44:33 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 03 Jun 2024 11:44:33 +0000   Mon, 03 Jun 2024 11:42:44 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 03 Jun 2024 11:44:33 +0000   Mon, 03 Jun 2024 11:42:44 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 03 Jun 2024 11:44:33 +0000   Mon, 03 Jun 2024 11:42:44 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 03 Jun 2024 11:44:33 +0000   Mon, 03 Jun 2024 11:44:33 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.137
	  Hostname:    test-preload-206663
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 c9a98fb4ab3347a384e8678a98fef6a7
	  System UUID:                c9a98fb4-ab33-47a3-84e8-678a98fef6a7
	  Boot ID:                    213e17d3-4943-4c59-ac1e-13cd8a9a0574
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.24.4
	  Kube-Proxy Version:         v1.24.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                           CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                           ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-6d4b75cb6d-sshn8                       100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     97s
	  kube-system                 etcd-test-preload-206663                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         109s
	  kube-system                 kube-apiserver-test-preload-206663             250m (12%)    0 (0%)      0 (0%)           0 (0%)         109s
	  kube-system                 kube-controller-manager-test-preload-206663    200m (10%)    0 (0%)      0 (0%)           0 (0%)         109s
	  kube-system                 kube-proxy-2hftt                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         97s
	  kube-system                 kube-scheduler-test-preload-206663             100m (5%)     0 (0%)      0 (0%)           0 (0%)         109s
	  kube-system                 storage-provisioner                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         95s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (8%)  170Mi (8%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 15s                  kube-proxy       
	  Normal  Starting                 96s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  117s (x5 over 117s)  kubelet          Node test-preload-206663 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    117s (x4 over 117s)  kubelet          Node test-preload-206663 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     117s (x4 over 117s)  kubelet          Node test-preload-206663 status is now: NodeHasSufficientPID
	  Normal  Starting                 110s                 kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  110s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  110s                 kubelet          Node test-preload-206663 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    110s                 kubelet          Node test-preload-206663 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     110s                 kubelet          Node test-preload-206663 status is now: NodeHasSufficientPID
	  Normal  NodeReady                100s                 kubelet          Node test-preload-206663 status is now: NodeReady
	  Normal  RegisteredNode           98s                  node-controller  Node test-preload-206663 event: Registered Node test-preload-206663 in Controller
	  Normal  Starting                 22s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  22s (x8 over 22s)    kubelet          Node test-preload-206663 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    22s (x8 over 22s)    kubelet          Node test-preload-206663 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     22s (x7 over 22s)    kubelet          Node test-preload-206663 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  22s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4s                   node-controller  Node test-preload-206663 event: Registered Node test-preload-206663 in Controller
	
	
	==> dmesg <==
	[Jun 3 11:43] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.051489] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.041302] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.489449] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.296602] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.592587] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[Jun 3 11:44] systemd-fstab-generator[595]: Ignoring "noauto" option for root device
	[  +0.053984] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.054322] systemd-fstab-generator[607]: Ignoring "noauto" option for root device
	[  +0.158195] systemd-fstab-generator[621]: Ignoring "noauto" option for root device
	[  +0.137218] systemd-fstab-generator[633]: Ignoring "noauto" option for root device
	[  +0.261511] systemd-fstab-generator[663]: Ignoring "noauto" option for root device
	[ +12.937400] systemd-fstab-generator[942]: Ignoring "noauto" option for root device
	[  +0.063806] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.760794] systemd-fstab-generator[1073]: Ignoring "noauto" option for root device
	[  +4.763830] kauditd_printk_skb: 105 callbacks suppressed
	[  +3.404391] systemd-fstab-generator[1713]: Ignoring "noauto" option for root device
	[  +6.070864] kauditd_printk_skb: 53 callbacks suppressed
	
	
	==> etcd [34ab593074b87227744ec8f0c7ccd988e19a381d1bd61f551cf7acfea0e0ecc4] <==
	{"level":"info","ts":"2024-06-03T11:44:19.625Z","caller":"etcdserver/server.go:851","msg":"starting etcd server","local-member-id":"5527995f6263874a","local-server-version":"3.5.3","cluster-version":"to_be_decided"}
	{"level":"info","ts":"2024-06-03T11:44:19.628Z","caller":"etcdserver/server.go:752","msg":"starting initial election tick advance","election-ticks":10}
	{"level":"info","ts":"2024-06-03T11:44:19.631Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"5527995f6263874a switched to configuration voters=(6136041652267222858)"}
	{"level":"info","ts":"2024-06-03T11:44:19.631Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"8623b2a8b011233f","local-member-id":"5527995f6263874a","added-peer-id":"5527995f6263874a","added-peer-peer-urls":["https://192.168.39.137:2380"]}
	{"level":"info","ts":"2024-06-03T11:44:19.632Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"8623b2a8b011233f","local-member-id":"5527995f6263874a","cluster-version":"3.5"}
	{"level":"info","ts":"2024-06-03T11:44:19.632Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-06-03T11:44:19.635Z","caller":"embed/etcd.go:688","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-06-03T11:44:19.637Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"5527995f6263874a","initial-advertise-peer-urls":["https://192.168.39.137:2380"],"listen-peer-urls":["https://192.168.39.137:2380"],"advertise-client-urls":["https://192.168.39.137:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.137:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-06-03T11:44:19.637Z","caller":"embed/etcd.go:763","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-06-03T11:44:19.638Z","caller":"embed/etcd.go:581","msg":"serving peer traffic","address":"192.168.39.137:2380"}
	{"level":"info","ts":"2024-06-03T11:44:19.638Z","caller":"embed/etcd.go:553","msg":"cmux::serve","address":"192.168.39.137:2380"}
	{"level":"info","ts":"2024-06-03T11:44:21.294Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"5527995f6263874a is starting a new election at term 2"}
	{"level":"info","ts":"2024-06-03T11:44:21.294Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"5527995f6263874a became pre-candidate at term 2"}
	{"level":"info","ts":"2024-06-03T11:44:21.294Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"5527995f6263874a received MsgPreVoteResp from 5527995f6263874a at term 2"}
	{"level":"info","ts":"2024-06-03T11:44:21.294Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"5527995f6263874a became candidate at term 3"}
	{"level":"info","ts":"2024-06-03T11:44:21.294Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"5527995f6263874a received MsgVoteResp from 5527995f6263874a at term 3"}
	{"level":"info","ts":"2024-06-03T11:44:21.294Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"5527995f6263874a became leader at term 3"}
	{"level":"info","ts":"2024-06-03T11:44:21.294Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 5527995f6263874a elected leader 5527995f6263874a at term 3"}
	{"level":"info","ts":"2024-06-03T11:44:21.295Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"5527995f6263874a","local-member-attributes":"{Name:test-preload-206663 ClientURLs:[https://192.168.39.137:2379]}","request-path":"/0/members/5527995f6263874a/attributes","cluster-id":"8623b2a8b011233f","publish-timeout":"7s"}
	{"level":"info","ts":"2024-06-03T11:44:21.295Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-06-03T11:44:21.296Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-06-03T11:44:21.297Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.39.137:2379"}
	{"level":"info","ts":"2024-06-03T11:44:21.297Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-06-03T11:44:21.297Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-06-03T11:44:21.297Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 11:44:40 up 0 min,  0 users,  load average: 0.96, 0.27, 0.09
	Linux test-preload-206663 5.10.207 #1 SMP Wed May 22 22:17:16 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [7cbade7ae5d1dba7e7623eb5bb8385b70b094c449dd7f3eeb3390086aaf96921] <==
	I0603 11:44:23.685889       1 controller.go:85] Starting OpenAPI controller
	I0603 11:44:23.685935       1 controller.go:85] Starting OpenAPI V3 controller
	I0603 11:44:23.685963       1 naming_controller.go:291] Starting NamingConditionController
	I0603 11:44:23.686173       1 establishing_controller.go:76] Starting EstablishingController
	I0603 11:44:23.686226       1 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController
	I0603 11:44:23.686260       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I0603 11:44:23.686305       1 crd_finalizer.go:266] Starting CRDFinalizer
	I0603 11:44:23.763800       1 cache.go:39] Caches are synced for autoregister controller
	I0603 11:44:23.764553       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0603 11:44:23.773321       1 shared_informer.go:262] Caches are synced for cluster_authentication_trust_controller
	I0603 11:44:23.778033       1 apf_controller.go:322] Running API Priority and Fairness config worker
	I0603 11:44:23.779647       1 shared_informer.go:262] Caches are synced for crd-autoregister
	I0603 11:44:23.780373       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0603 11:44:23.784947       1 shared_informer.go:262] Caches are synced for node_authorizer
	I0603 11:44:23.834242       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I0603 11:44:24.336226       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0603 11:44:24.675537       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0603 11:44:25.265531       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I0603 11:44:25.280141       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I0603 11:44:25.344301       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I0603 11:44:25.369149       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0603 11:44:25.380778       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0603 11:44:25.631677       1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
	I0603 11:44:36.861377       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0603 11:44:36.973604       1 controller.go:611] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [69c6257b54751376e3d64b6859b30668c50f9412f141cb4c8115a6580b0a622e] <==
	I0603 11:44:36.856846       1 shared_informer.go:262] Caches are synced for TTL
	I0603 11:44:36.859162       1 shared_informer.go:262] Caches are synced for ephemeral
	I0603 11:44:36.859213       1 shared_informer.go:262] Caches are synced for PVC protection
	I0603 11:44:36.866639       1 shared_informer.go:262] Caches are synced for namespace
	I0603 11:44:36.869995       1 shared_informer.go:262] Caches are synced for expand
	I0603 11:44:36.875809       1 shared_informer.go:262] Caches are synced for daemon sets
	I0603 11:44:36.885145       1 shared_informer.go:262] Caches are synced for deployment
	I0603 11:44:36.885240       1 shared_informer.go:262] Caches are synced for HPA
	I0603 11:44:36.886526       1 shared_informer.go:262] Caches are synced for taint
	I0603 11:44:36.886642       1 node_lifecycle_controller.go:1399] Initializing eviction metric for zone: 
	W0603 11:44:36.886811       1 node_lifecycle_controller.go:1014] Missing timestamp for Node test-preload-206663. Assuming now as a timestamp.
	I0603 11:44:36.886865       1 node_lifecycle_controller.go:1215] Controller detected that zone  is now in state Normal.
	I0603 11:44:36.887239       1 shared_informer.go:262] Caches are synced for ReplicaSet
	I0603 11:44:36.887566       1 taint_manager.go:187] "Starting NoExecuteTaintManager"
	I0603 11:44:36.887987       1 event.go:294] "Event occurred" object="test-preload-206663" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node test-preload-206663 event: Registered Node test-preload-206663 in Controller"
	I0603 11:44:36.893083       1 shared_informer.go:262] Caches are synced for attach detach
	I0603 11:44:36.935373       1 shared_informer.go:262] Caches are synced for endpoint_slice_mirroring
	I0603 11:44:36.963094       1 shared_informer.go:262] Caches are synced for endpoint
	I0603 11:44:36.985265       1 shared_informer.go:262] Caches are synced for crt configmap
	I0603 11:44:36.985329       1 shared_informer.go:262] Caches are synced for bootstrap_signer
	I0603 11:44:37.021802       1 shared_informer.go:262] Caches are synced for resource quota
	I0603 11:44:37.074303       1 shared_informer.go:262] Caches are synced for resource quota
	I0603 11:44:37.473059       1 shared_informer.go:262] Caches are synced for garbage collector
	I0603 11:44:37.473134       1 garbagecollector.go:158] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0603 11:44:37.500169       1 shared_informer.go:262] Caches are synced for garbage collector
	
	
	==> kube-proxy [8f8068aeb7a3e3ef119f78966573982f1541ca239a12759bdec2a2c8028c3270] <==
	I0603 11:44:25.582829       1 node.go:163] Successfully retrieved node IP: 192.168.39.137
	I0603 11:44:25.582954       1 server_others.go:138] "Detected node IP" address="192.168.39.137"
	I0603 11:44:25.583001       1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0603 11:44:25.617998       1 server_others.go:199] "kube-proxy running in single-stack mode, this ipFamily is not supported" ipFamily=IPv6
	I0603 11:44:25.618015       1 server_others.go:206] "Using iptables Proxier"
	I0603 11:44:25.618057       1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0603 11:44:25.618881       1 server.go:661] "Version info" version="v1.24.4"
	I0603 11:44:25.618935       1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0603 11:44:25.620731       1 config.go:317] "Starting service config controller"
	I0603 11:44:25.621104       1 shared_informer.go:255] Waiting for caches to sync for service config
	I0603 11:44:25.621217       1 config.go:226] "Starting endpoint slice config controller"
	I0603 11:44:25.621258       1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
	I0603 11:44:25.626644       1 config.go:444] "Starting node config controller"
	I0603 11:44:25.626674       1 shared_informer.go:255] Waiting for caches to sync for node config
	I0603 11:44:25.721963       1 shared_informer.go:262] Caches are synced for endpoint slice config
	I0603 11:44:25.722026       1 shared_informer.go:262] Caches are synced for service config
	I0603 11:44:25.727889       1 shared_informer.go:262] Caches are synced for node config
	
	
	==> kube-scheduler [9e2400caf85af915c92c3989c7caa1cdd51e2e6b14dbd223cefb248706e4b58d] <==
	I0603 11:44:20.524934       1 serving.go:348] Generated self-signed cert in-memory
	W0603 11:44:23.775550       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0603 11:44:23.775649       1 authentication.go:346] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0603 11:44:23.775677       1 authentication.go:347] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0603 11:44:23.775684       1 authentication.go:348] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0603 11:44:23.801213       1 server.go:147] "Starting Kubernetes Scheduler" version="v1.24.4"
	I0603 11:44:23.801371       1 server.go:149] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0603 11:44:23.804916       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I0603 11:44:23.804996       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0603 11:44:23.806422       1 shared_informer.go:255] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0603 11:44:23.805023       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0603 11:44:23.907355       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jun 03 11:44:23 test-preload-206663 kubelet[1080]: I0603 11:44:23.827431    1080 setters.go:532] "Node became not ready" node="test-preload-206663" condition={Type:Ready Status:False LastHeartbeatTime:2024-06-03 11:44:23.827371641 +0000 UTC m=+5.522093388 LastTransitionTime:2024-06-03 11:44:23.827371641 +0000 UTC m=+5.522093388 Reason:KubeletNotReady Message:container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?}
	Jun 03 11:44:23 test-preload-206663 kubelet[1080]: E0603 11:44:23.988270    1080 kubelet.go:1690] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-test-preload-206663\" already exists" pod="kube-system/kube-controller-manager-test-preload-206663"
	Jun 03 11:44:24 test-preload-206663 kubelet[1080]: I0603 11:44:24.422840    1080 apiserver.go:52] "Watching apiserver"
	Jun 03 11:44:24 test-preload-206663 kubelet[1080]: I0603 11:44:24.427233    1080 topology_manager.go:200] "Topology Admit Handler"
	Jun 03 11:44:24 test-preload-206663 kubelet[1080]: I0603 11:44:24.427313    1080 topology_manager.go:200] "Topology Admit Handler"
	Jun 03 11:44:24 test-preload-206663 kubelet[1080]: I0603 11:44:24.427348    1080 topology_manager.go:200] "Topology Admit Handler"
	Jun 03 11:44:24 test-preload-206663 kubelet[1080]: E0603 11:44:24.430277    1080 pod_workers.go:951] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-6d4b75cb6d-sshn8" podUID=ef0e6792-9df3-4535-8699-ef7ac766f17e
	Jun 03 11:44:24 test-preload-206663 kubelet[1080]: I0603 11:44:24.496348    1080 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8be9ba27-295e-4e8e-a8db-bf5fb20d19d9-lib-modules\") pod \"kube-proxy-2hftt\" (UID: \"8be9ba27-295e-4e8e-a8db-bf5fb20d19d9\") " pod="kube-system/kube-proxy-2hftt"
	Jun 03 11:44:24 test-preload-206663 kubelet[1080]: I0603 11:44:24.496409    1080 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/8be9ba27-295e-4e8e-a8db-bf5fb20d19d9-kube-proxy\") pod \"kube-proxy-2hftt\" (UID: \"8be9ba27-295e-4e8e-a8db-bf5fb20d19d9\") " pod="kube-system/kube-proxy-2hftt"
	Jun 03 11:44:24 test-preload-206663 kubelet[1080]: I0603 11:44:24.496433    1080 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p5xsb\" (UniqueName: \"kubernetes.io/projected/17f52a67-13c4-489f-85c2-da5cf5381dd4-kube-api-access-p5xsb\") pod \"storage-provisioner\" (UID: \"17f52a67-13c4-489f-85c2-da5cf5381dd4\") " pod="kube-system/storage-provisioner"
	Jun 03 11:44:24 test-preload-206663 kubelet[1080]: I0603 11:44:24.496495    1080 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/17f52a67-13c4-489f-85c2-da5cf5381dd4-tmp\") pod \"storage-provisioner\" (UID: \"17f52a67-13c4-489f-85c2-da5cf5381dd4\") " pod="kube-system/storage-provisioner"
	Jun 03 11:44:24 test-preload-206663 kubelet[1080]: I0603 11:44:24.496540    1080 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ef0e6792-9df3-4535-8699-ef7ac766f17e-config-volume\") pod \"coredns-6d4b75cb6d-sshn8\" (UID: \"ef0e6792-9df3-4535-8699-ef7ac766f17e\") " pod="kube-system/coredns-6d4b75cb6d-sshn8"
	Jun 03 11:44:24 test-preload-206663 kubelet[1080]: I0603 11:44:24.496568    1080 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kvmrk\" (UniqueName: \"kubernetes.io/projected/ef0e6792-9df3-4535-8699-ef7ac766f17e-kube-api-access-kvmrk\") pod \"coredns-6d4b75cb6d-sshn8\" (UID: \"ef0e6792-9df3-4535-8699-ef7ac766f17e\") " pod="kube-system/coredns-6d4b75cb6d-sshn8"
	Jun 03 11:44:24 test-preload-206663 kubelet[1080]: I0603 11:44:24.496585    1080 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8be9ba27-295e-4e8e-a8db-bf5fb20d19d9-xtables-lock\") pod \"kube-proxy-2hftt\" (UID: \"8be9ba27-295e-4e8e-a8db-bf5fb20d19d9\") " pod="kube-system/kube-proxy-2hftt"
	Jun 03 11:44:24 test-preload-206663 kubelet[1080]: I0603 11:44:24.496608    1080 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s8wkx\" (UniqueName: \"kubernetes.io/projected/8be9ba27-295e-4e8e-a8db-bf5fb20d19d9-kube-api-access-s8wkx\") pod \"kube-proxy-2hftt\" (UID: \"8be9ba27-295e-4e8e-a8db-bf5fb20d19d9\") " pod="kube-system/kube-proxy-2hftt"
	Jun 03 11:44:24 test-preload-206663 kubelet[1080]: I0603 11:44:24.496624    1080 reconciler.go:159] "Reconciler: start to sync state"
	Jun 03 11:44:24 test-preload-206663 kubelet[1080]: E0603 11:44:24.602439    1080 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Jun 03 11:44:24 test-preload-206663 kubelet[1080]: E0603 11:44:24.602596    1080 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/ef0e6792-9df3-4535-8699-ef7ac766f17e-config-volume podName:ef0e6792-9df3-4535-8699-ef7ac766f17e nodeName:}" failed. No retries permitted until 2024-06-03 11:44:25.10253701 +0000 UTC m=+6.797258775 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/ef0e6792-9df3-4535-8699-ef7ac766f17e-config-volume") pod "coredns-6d4b75cb6d-sshn8" (UID: "ef0e6792-9df3-4535-8699-ef7ac766f17e") : object "kube-system"/"coredns" not registered
	Jun 03 11:44:25 test-preload-206663 kubelet[1080]: E0603 11:44:25.105205    1080 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Jun 03 11:44:25 test-preload-206663 kubelet[1080]: E0603 11:44:25.105331    1080 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/ef0e6792-9df3-4535-8699-ef7ac766f17e-config-volume podName:ef0e6792-9df3-4535-8699-ef7ac766f17e nodeName:}" failed. No retries permitted until 2024-06-03 11:44:26.105315576 +0000 UTC m=+7.800037335 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/ef0e6792-9df3-4535-8699-ef7ac766f17e-config-volume") pod "coredns-6d4b75cb6d-sshn8" (UID: "ef0e6792-9df3-4535-8699-ef7ac766f17e") : object "kube-system"/"coredns" not registered
	Jun 03 11:44:26 test-preload-206663 kubelet[1080]: E0603 11:44:26.112134    1080 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Jun 03 11:44:26 test-preload-206663 kubelet[1080]: E0603 11:44:26.112229    1080 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/ef0e6792-9df3-4535-8699-ef7ac766f17e-config-volume podName:ef0e6792-9df3-4535-8699-ef7ac766f17e nodeName:}" failed. No retries permitted until 2024-06-03 11:44:28.112207504 +0000 UTC m=+9.806929251 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/ef0e6792-9df3-4535-8699-ef7ac766f17e-config-volume") pod "coredns-6d4b75cb6d-sshn8" (UID: "ef0e6792-9df3-4535-8699-ef7ac766f17e") : object "kube-system"/"coredns" not registered
	Jun 03 11:44:26 test-preload-206663 kubelet[1080]: E0603 11:44:26.548645    1080 pod_workers.go:951] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-6d4b75cb6d-sshn8" podUID=ef0e6792-9df3-4535-8699-ef7ac766f17e
	Jun 03 11:44:28 test-preload-206663 kubelet[1080]: E0603 11:44:28.125003    1080 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Jun 03 11:44:28 test-preload-206663 kubelet[1080]: E0603 11:44:28.125130    1080 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/ef0e6792-9df3-4535-8699-ef7ac766f17e-config-volume podName:ef0e6792-9df3-4535-8699-ef7ac766f17e nodeName:}" failed. No retries permitted until 2024-06-03 11:44:32.125115321 +0000 UTC m=+13.819837071 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/ef0e6792-9df3-4535-8699-ef7ac766f17e-config-volume") pod "coredns-6d4b75cb6d-sshn8" (UID: "ef0e6792-9df3-4535-8699-ef7ac766f17e") : object "kube-system"/"coredns" not registered
	
	
	==> storage-provisioner [fd379e5eda2a0d8346b5dce1f5b9395e8e45e4e306b08749b78d3cc98662ea39] <==
	I0603 11:44:25.569162       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p test-preload-206663 -n test-preload-206663
helpers_test.go:261: (dbg) Run:  kubectl --context test-preload-206663 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPreload FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
helpers_test.go:175: Cleaning up "test-preload-206663" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-206663
--- FAIL: TestPreload (265.99s)

                                                
                                    
x
+
TestKubernetesUpgrade (390.11s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-179482 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:222: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-179482 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 109 (5m17.940267335s)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-179482] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19008
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19008-7755/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19008-7755/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	* Starting "kubernetes-upgrade-179482" primary control-plane node in "kubernetes-upgrade-179482" cluster
	* Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0603 11:50:38.202731   56023 out.go:291] Setting OutFile to fd 1 ...
	I0603 11:50:38.202956   56023 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0603 11:50:38.202964   56023 out.go:304] Setting ErrFile to fd 2...
	I0603 11:50:38.202968   56023 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0603 11:50:38.203181   56023 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19008-7755/.minikube/bin
	I0603 11:50:38.203688   56023 out.go:298] Setting JSON to false
	I0603 11:50:38.204537   56023 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":5583,"bootTime":1717409855,"procs":209,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1060-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0603 11:50:38.204586   56023 start.go:139] virtualization: kvm guest
	I0603 11:50:38.206609   56023 out.go:177] * [kubernetes-upgrade-179482] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0603 11:50:38.207829   56023 out.go:177]   - MINIKUBE_LOCATION=19008
	I0603 11:50:38.207843   56023 notify.go:220] Checking for updates...
	I0603 11:50:38.209702   56023 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0603 11:50:38.211236   56023 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19008-7755/kubeconfig
	I0603 11:50:38.212591   56023 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19008-7755/.minikube
	I0603 11:50:38.214165   56023 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0603 11:50:38.215532   56023 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0603 11:50:38.217452   56023 config.go:182] Loaded profile config "cert-expiration-949809": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0603 11:50:38.217594   56023 config.go:182] Loaded profile config "cert-options-430151": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0603 11:50:38.217711   56023 config.go:182] Loaded profile config "force-systemd-flag-339689": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0603 11:50:38.217843   56023 driver.go:392] Setting default libvirt URI to qemu:///system
	I0603 11:50:38.254867   56023 out.go:177] * Using the kvm2 driver based on user configuration
	I0603 11:50:38.256232   56023 start.go:297] selected driver: kvm2
	I0603 11:50:38.256260   56023 start.go:901] validating driver "kvm2" against <nil>
	I0603 11:50:38.256270   56023 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0603 11:50:38.257010   56023 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0603 11:50:38.257098   56023 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19008-7755/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0603 11:50:38.272477   56023 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0603 11:50:38.272528   56023 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0603 11:50:38.272814   56023 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0603 11:50:38.272882   56023 cni.go:84] Creating CNI manager for ""
	I0603 11:50:38.272899   56023 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0603 11:50:38.272908   56023 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0603 11:50:38.272975   56023 start.go:340] cluster config:
	{Name:kubernetes-upgrade-179482 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-179482 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0603 11:50:38.273101   56023 iso.go:125] acquiring lock: {Name:mkdc8e745fc6a0fd8e502f6ad2510510ae9abf27 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0603 11:50:38.274767   56023 out.go:177] * Starting "kubernetes-upgrade-179482" primary control-plane node in "kubernetes-upgrade-179482" cluster
	I0603 11:50:38.276049   56023 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0603 11:50:38.276080   56023 preload.go:147] Found local preload: /home/jenkins/minikube-integration/19008-7755/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0603 11:50:38.276096   56023 cache.go:56] Caching tarball of preloaded images
	I0603 11:50:38.276173   56023 preload.go:173] Found /home/jenkins/minikube-integration/19008-7755/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0603 11:50:38.276184   56023 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0603 11:50:38.276254   56023 profile.go:143] Saving config to /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/kubernetes-upgrade-179482/config.json ...
	I0603 11:50:38.276272   56023 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/kubernetes-upgrade-179482/config.json: {Name:mk0f4e3ceada421fd0139fc75a34234ff56684ad Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 11:50:38.276401   56023 start.go:360] acquireMachinesLock for kubernetes-upgrade-179482: {Name:mk7389fd163f37e28f7d4d842bab151f7e27bc7c Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0603 11:51:24.227923   56023 start.go:364] duration metric: took 45.951497493s to acquireMachinesLock for "kubernetes-upgrade-179482"
	I0603 11:51:24.227999   56023 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-179482 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-179482 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0603 11:51:24.228114   56023 start.go:125] createHost starting for "" (driver="kvm2")
	I0603 11:51:24.230439   56023 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0603 11:51:24.230687   56023 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 11:51:24.230746   56023 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 11:51:24.246810   56023 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38557
	I0603 11:51:24.247193   56023 main.go:141] libmachine: () Calling .GetVersion
	I0603 11:51:24.247723   56023 main.go:141] libmachine: Using API Version  1
	I0603 11:51:24.247744   56023 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 11:51:24.248134   56023 main.go:141] libmachine: () Calling .GetMachineName
	I0603 11:51:24.248371   56023 main.go:141] libmachine: (kubernetes-upgrade-179482) Calling .GetMachineName
	I0603 11:51:24.248531   56023 main.go:141] libmachine: (kubernetes-upgrade-179482) Calling .DriverName
	I0603 11:51:24.248703   56023 start.go:159] libmachine.API.Create for "kubernetes-upgrade-179482" (driver="kvm2")
	I0603 11:51:24.248732   56023 client.go:168] LocalClient.Create starting
	I0603 11:51:24.248772   56023 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19008-7755/.minikube/certs/ca.pem
	I0603 11:51:24.248809   56023 main.go:141] libmachine: Decoding PEM data...
	I0603 11:51:24.248830   56023 main.go:141] libmachine: Parsing certificate...
	I0603 11:51:24.248913   56023 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19008-7755/.minikube/certs/cert.pem
	I0603 11:51:24.248939   56023 main.go:141] libmachine: Decoding PEM data...
	I0603 11:51:24.248958   56023 main.go:141] libmachine: Parsing certificate...
	I0603 11:51:24.248979   56023 main.go:141] libmachine: Running pre-create checks...
	I0603 11:51:24.248991   56023 main.go:141] libmachine: (kubernetes-upgrade-179482) Calling .PreCreateCheck
	I0603 11:51:24.249395   56023 main.go:141] libmachine: (kubernetes-upgrade-179482) Calling .GetConfigRaw
	I0603 11:51:24.249773   56023 main.go:141] libmachine: Creating machine...
	I0603 11:51:24.249784   56023 main.go:141] libmachine: (kubernetes-upgrade-179482) Calling .Create
	I0603 11:51:24.249914   56023 main.go:141] libmachine: (kubernetes-upgrade-179482) Creating KVM machine...
	I0603 11:51:24.251070   56023 main.go:141] libmachine: (kubernetes-upgrade-179482) DBG | found existing default KVM network
	I0603 11:51:24.252053   56023 main.go:141] libmachine: (kubernetes-upgrade-179482) DBG | I0603 11:51:24.251880   56625 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:72:d1:d4} reservation:<nil>}
	I0603 11:51:24.254348   56023 main.go:141] libmachine: (kubernetes-upgrade-179482) DBG | I0603 11:51:24.254199   56625 network.go:209] skipping subnet 192.168.50.0/24 that is reserved: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0603 11:51:24.255189   56023 main.go:141] libmachine: (kubernetes-upgrade-179482) DBG | I0603 11:51:24.255100   56625 network.go:211] skipping subnet 192.168.61.0/24 that is taken: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName:virbr3 IfaceIPv4:192.168.61.1 IfaceMTU:1500 IfaceMAC:52:54:00:98:7e:e4} reservation:<nil>}
	I0603 11:51:24.256020   56023 main.go:141] libmachine: (kubernetes-upgrade-179482) DBG | I0603 11:51:24.255942   56625 network.go:206] using free private subnet 192.168.72.0/24: &{IP:192.168.72.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.72.0/24 Gateway:192.168.72.1 ClientMin:192.168.72.2 ClientMax:192.168.72.254 Broadcast:192.168.72.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000305220}
	I0603 11:51:24.256049   56023 main.go:141] libmachine: (kubernetes-upgrade-179482) DBG | created network xml: 
	I0603 11:51:24.256061   56023 main.go:141] libmachine: (kubernetes-upgrade-179482) DBG | <network>
	I0603 11:51:24.256083   56023 main.go:141] libmachine: (kubernetes-upgrade-179482) DBG |   <name>mk-kubernetes-upgrade-179482</name>
	I0603 11:51:24.256092   56023 main.go:141] libmachine: (kubernetes-upgrade-179482) DBG |   <dns enable='no'/>
	I0603 11:51:24.256103   56023 main.go:141] libmachine: (kubernetes-upgrade-179482) DBG |   
	I0603 11:51:24.256116   56023 main.go:141] libmachine: (kubernetes-upgrade-179482) DBG |   <ip address='192.168.72.1' netmask='255.255.255.0'>
	I0603 11:51:24.256130   56023 main.go:141] libmachine: (kubernetes-upgrade-179482) DBG |     <dhcp>
	I0603 11:51:24.256146   56023 main.go:141] libmachine: (kubernetes-upgrade-179482) DBG |       <range start='192.168.72.2' end='192.168.72.253'/>
	I0603 11:51:24.256159   56023 main.go:141] libmachine: (kubernetes-upgrade-179482) DBG |     </dhcp>
	I0603 11:51:24.256174   56023 main.go:141] libmachine: (kubernetes-upgrade-179482) DBG |   </ip>
	I0603 11:51:24.256189   56023 main.go:141] libmachine: (kubernetes-upgrade-179482) DBG |   
	I0603 11:51:24.256196   56023 main.go:141] libmachine: (kubernetes-upgrade-179482) DBG | </network>
	I0603 11:51:24.256206   56023 main.go:141] libmachine: (kubernetes-upgrade-179482) DBG | 
	I0603 11:51:24.261562   56023 main.go:141] libmachine: (kubernetes-upgrade-179482) DBG | trying to create private KVM network mk-kubernetes-upgrade-179482 192.168.72.0/24...
	I0603 11:51:24.332151   56023 main.go:141] libmachine: (kubernetes-upgrade-179482) DBG | private KVM network mk-kubernetes-upgrade-179482 192.168.72.0/24 created
	I0603 11:51:24.332182   56023 main.go:141] libmachine: (kubernetes-upgrade-179482) Setting up store path in /home/jenkins/minikube-integration/19008-7755/.minikube/machines/kubernetes-upgrade-179482 ...
	I0603 11:51:24.332196   56023 main.go:141] libmachine: (kubernetes-upgrade-179482) DBG | I0603 11:51:24.332120   56625 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19008-7755/.minikube
	I0603 11:51:24.332279   56023 main.go:141] libmachine: (kubernetes-upgrade-179482) Building disk image from file:///home/jenkins/minikube-integration/19008-7755/.minikube/cache/iso/amd64/minikube-v1.33.1-1716398070-18934-amd64.iso
	I0603 11:51:24.332350   56023 main.go:141] libmachine: (kubernetes-upgrade-179482) Downloading /home/jenkins/minikube-integration/19008-7755/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19008-7755/.minikube/cache/iso/amd64/minikube-v1.33.1-1716398070-18934-amd64.iso...
	I0603 11:51:24.562888   56023 main.go:141] libmachine: (kubernetes-upgrade-179482) DBG | I0603 11:51:24.562751   56625 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19008-7755/.minikube/machines/kubernetes-upgrade-179482/id_rsa...
	I0603 11:51:24.602295   56023 main.go:141] libmachine: (kubernetes-upgrade-179482) DBG | I0603 11:51:24.602163   56625 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19008-7755/.minikube/machines/kubernetes-upgrade-179482/kubernetes-upgrade-179482.rawdisk...
	I0603 11:51:24.602329   56023 main.go:141] libmachine: (kubernetes-upgrade-179482) DBG | Writing magic tar header
	I0603 11:51:24.602350   56023 main.go:141] libmachine: (kubernetes-upgrade-179482) DBG | Writing SSH key tar header
	I0603 11:51:24.602369   56023 main.go:141] libmachine: (kubernetes-upgrade-179482) DBG | I0603 11:51:24.602305   56625 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19008-7755/.minikube/machines/kubernetes-upgrade-179482 ...
	I0603 11:51:24.602471   56023 main.go:141] libmachine: (kubernetes-upgrade-179482) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19008-7755/.minikube/machines/kubernetes-upgrade-179482
	I0603 11:51:24.602501   56023 main.go:141] libmachine: (kubernetes-upgrade-179482) Setting executable bit set on /home/jenkins/minikube-integration/19008-7755/.minikube/machines/kubernetes-upgrade-179482 (perms=drwx------)
	I0603 11:51:24.602517   56023 main.go:141] libmachine: (kubernetes-upgrade-179482) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19008-7755/.minikube/machines
	I0603 11:51:24.602537   56023 main.go:141] libmachine: (kubernetes-upgrade-179482) Setting executable bit set on /home/jenkins/minikube-integration/19008-7755/.minikube/machines (perms=drwxr-xr-x)
	I0603 11:51:24.602556   56023 main.go:141] libmachine: (kubernetes-upgrade-179482) Setting executable bit set on /home/jenkins/minikube-integration/19008-7755/.minikube (perms=drwxr-xr-x)
	I0603 11:51:24.602572   56023 main.go:141] libmachine: (kubernetes-upgrade-179482) Setting executable bit set on /home/jenkins/minikube-integration/19008-7755 (perms=drwxrwxr-x)
	I0603 11:51:24.602586   56023 main.go:141] libmachine: (kubernetes-upgrade-179482) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19008-7755/.minikube
	I0603 11:51:24.602599   56023 main.go:141] libmachine: (kubernetes-upgrade-179482) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0603 11:51:24.602619   56023 main.go:141] libmachine: (kubernetes-upgrade-179482) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19008-7755
	I0603 11:51:24.602635   56023 main.go:141] libmachine: (kubernetes-upgrade-179482) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0603 11:51:24.602648   56023 main.go:141] libmachine: (kubernetes-upgrade-179482) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0603 11:51:24.602673   56023 main.go:141] libmachine: (kubernetes-upgrade-179482) Creating domain...
	I0603 11:51:24.602689   56023 main.go:141] libmachine: (kubernetes-upgrade-179482) DBG | Checking permissions on dir: /home/jenkins
	I0603 11:51:24.602697   56023 main.go:141] libmachine: (kubernetes-upgrade-179482) DBG | Checking permissions on dir: /home
	I0603 11:51:24.602710   56023 main.go:141] libmachine: (kubernetes-upgrade-179482) DBG | Skipping /home - not owner
	I0603 11:51:24.603817   56023 main.go:141] libmachine: (kubernetes-upgrade-179482) define libvirt domain using xml: 
	I0603 11:51:24.603840   56023 main.go:141] libmachine: (kubernetes-upgrade-179482) <domain type='kvm'>
	I0603 11:51:24.603852   56023 main.go:141] libmachine: (kubernetes-upgrade-179482)   <name>kubernetes-upgrade-179482</name>
	I0603 11:51:24.603860   56023 main.go:141] libmachine: (kubernetes-upgrade-179482)   <memory unit='MiB'>2200</memory>
	I0603 11:51:24.603869   56023 main.go:141] libmachine: (kubernetes-upgrade-179482)   <vcpu>2</vcpu>
	I0603 11:51:24.603880   56023 main.go:141] libmachine: (kubernetes-upgrade-179482)   <features>
	I0603 11:51:24.603894   56023 main.go:141] libmachine: (kubernetes-upgrade-179482)     <acpi/>
	I0603 11:51:24.603905   56023 main.go:141] libmachine: (kubernetes-upgrade-179482)     <apic/>
	I0603 11:51:24.603918   56023 main.go:141] libmachine: (kubernetes-upgrade-179482)     <pae/>
	I0603 11:51:24.603927   56023 main.go:141] libmachine: (kubernetes-upgrade-179482)     
	I0603 11:51:24.603933   56023 main.go:141] libmachine: (kubernetes-upgrade-179482)   </features>
	I0603 11:51:24.603944   56023 main.go:141] libmachine: (kubernetes-upgrade-179482)   <cpu mode='host-passthrough'>
	I0603 11:51:24.603949   56023 main.go:141] libmachine: (kubernetes-upgrade-179482)   
	I0603 11:51:24.603955   56023 main.go:141] libmachine: (kubernetes-upgrade-179482)   </cpu>
	I0603 11:51:24.603961   56023 main.go:141] libmachine: (kubernetes-upgrade-179482)   <os>
	I0603 11:51:24.603968   56023 main.go:141] libmachine: (kubernetes-upgrade-179482)     <type>hvm</type>
	I0603 11:51:24.603974   56023 main.go:141] libmachine: (kubernetes-upgrade-179482)     <boot dev='cdrom'/>
	I0603 11:51:24.603985   56023 main.go:141] libmachine: (kubernetes-upgrade-179482)     <boot dev='hd'/>
	I0603 11:51:24.603998   56023 main.go:141] libmachine: (kubernetes-upgrade-179482)     <bootmenu enable='no'/>
	I0603 11:51:24.604009   56023 main.go:141] libmachine: (kubernetes-upgrade-179482)   </os>
	I0603 11:51:24.604030   56023 main.go:141] libmachine: (kubernetes-upgrade-179482)   <devices>
	I0603 11:51:24.604064   56023 main.go:141] libmachine: (kubernetes-upgrade-179482)     <disk type='file' device='cdrom'>
	I0603 11:51:24.604074   56023 main.go:141] libmachine: (kubernetes-upgrade-179482)       <source file='/home/jenkins/minikube-integration/19008-7755/.minikube/machines/kubernetes-upgrade-179482/boot2docker.iso'/>
	I0603 11:51:24.604083   56023 main.go:141] libmachine: (kubernetes-upgrade-179482)       <target dev='hdc' bus='scsi'/>
	I0603 11:51:24.604090   56023 main.go:141] libmachine: (kubernetes-upgrade-179482)       <readonly/>
	I0603 11:51:24.604097   56023 main.go:141] libmachine: (kubernetes-upgrade-179482)     </disk>
	I0603 11:51:24.604104   56023 main.go:141] libmachine: (kubernetes-upgrade-179482)     <disk type='file' device='disk'>
	I0603 11:51:24.604113   56023 main.go:141] libmachine: (kubernetes-upgrade-179482)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0603 11:51:24.604122   56023 main.go:141] libmachine: (kubernetes-upgrade-179482)       <source file='/home/jenkins/minikube-integration/19008-7755/.minikube/machines/kubernetes-upgrade-179482/kubernetes-upgrade-179482.rawdisk'/>
	I0603 11:51:24.604129   56023 main.go:141] libmachine: (kubernetes-upgrade-179482)       <target dev='hda' bus='virtio'/>
	I0603 11:51:24.604135   56023 main.go:141] libmachine: (kubernetes-upgrade-179482)     </disk>
	I0603 11:51:24.604152   56023 main.go:141] libmachine: (kubernetes-upgrade-179482)     <interface type='network'>
	I0603 11:51:24.604164   56023 main.go:141] libmachine: (kubernetes-upgrade-179482)       <source network='mk-kubernetes-upgrade-179482'/>
	I0603 11:51:24.604187   56023 main.go:141] libmachine: (kubernetes-upgrade-179482)       <model type='virtio'/>
	I0603 11:51:24.604197   56023 main.go:141] libmachine: (kubernetes-upgrade-179482)     </interface>
	I0603 11:51:24.604207   56023 main.go:141] libmachine: (kubernetes-upgrade-179482)     <interface type='network'>
	I0603 11:51:24.604219   56023 main.go:141] libmachine: (kubernetes-upgrade-179482)       <source network='default'/>
	I0603 11:51:24.604232   56023 main.go:141] libmachine: (kubernetes-upgrade-179482)       <model type='virtio'/>
	I0603 11:51:24.604245   56023 main.go:141] libmachine: (kubernetes-upgrade-179482)     </interface>
	I0603 11:51:24.604266   56023 main.go:141] libmachine: (kubernetes-upgrade-179482)     <serial type='pty'>
	I0603 11:51:24.604286   56023 main.go:141] libmachine: (kubernetes-upgrade-179482)       <target port='0'/>
	I0603 11:51:24.604298   56023 main.go:141] libmachine: (kubernetes-upgrade-179482)     </serial>
	I0603 11:51:24.604309   56023 main.go:141] libmachine: (kubernetes-upgrade-179482)     <console type='pty'>
	I0603 11:51:24.604320   56023 main.go:141] libmachine: (kubernetes-upgrade-179482)       <target type='serial' port='0'/>
	I0603 11:51:24.604330   56023 main.go:141] libmachine: (kubernetes-upgrade-179482)     </console>
	I0603 11:51:24.604339   56023 main.go:141] libmachine: (kubernetes-upgrade-179482)     <rng model='virtio'>
	I0603 11:51:24.604352   56023 main.go:141] libmachine: (kubernetes-upgrade-179482)       <backend model='random'>/dev/random</backend>
	I0603 11:51:24.604363   56023 main.go:141] libmachine: (kubernetes-upgrade-179482)     </rng>
	I0603 11:51:24.604376   56023 main.go:141] libmachine: (kubernetes-upgrade-179482)     
	I0603 11:51:24.604386   56023 main.go:141] libmachine: (kubernetes-upgrade-179482)     
	I0603 11:51:24.604397   56023 main.go:141] libmachine: (kubernetes-upgrade-179482)   </devices>
	I0603 11:51:24.604407   56023 main.go:141] libmachine: (kubernetes-upgrade-179482) </domain>
	I0603 11:51:24.604418   56023 main.go:141] libmachine: (kubernetes-upgrade-179482) 
	I0603 11:51:24.608530   56023 main.go:141] libmachine: (kubernetes-upgrade-179482) DBG | domain kubernetes-upgrade-179482 has defined MAC address 52:54:00:46:15:08 in network default
	I0603 11:51:24.609240   56023 main.go:141] libmachine: (kubernetes-upgrade-179482) Ensuring networks are active...
	I0603 11:51:24.609264   56023 main.go:141] libmachine: (kubernetes-upgrade-179482) DBG | domain kubernetes-upgrade-179482 has defined MAC address 52:54:00:4e:c9:d2 in network mk-kubernetes-upgrade-179482
	I0603 11:51:24.609982   56023 main.go:141] libmachine: (kubernetes-upgrade-179482) Ensuring network default is active
	I0603 11:51:24.610391   56023 main.go:141] libmachine: (kubernetes-upgrade-179482) Ensuring network mk-kubernetes-upgrade-179482 is active
	I0603 11:51:24.611102   56023 main.go:141] libmachine: (kubernetes-upgrade-179482) Getting domain xml...
	I0603 11:51:24.611912   56023 main.go:141] libmachine: (kubernetes-upgrade-179482) Creating domain...
	I0603 11:51:25.873032   56023 main.go:141] libmachine: (kubernetes-upgrade-179482) Waiting to get IP...
	I0603 11:51:25.874048   56023 main.go:141] libmachine: (kubernetes-upgrade-179482) DBG | domain kubernetes-upgrade-179482 has defined MAC address 52:54:00:4e:c9:d2 in network mk-kubernetes-upgrade-179482
	I0603 11:51:25.874640   56023 main.go:141] libmachine: (kubernetes-upgrade-179482) DBG | unable to find current IP address of domain kubernetes-upgrade-179482 in network mk-kubernetes-upgrade-179482
	I0603 11:51:25.874670   56023 main.go:141] libmachine: (kubernetes-upgrade-179482) DBG | I0603 11:51:25.874618   56625 retry.go:31] will retry after 309.295966ms: waiting for machine to come up
	I0603 11:51:26.185358   56023 main.go:141] libmachine: (kubernetes-upgrade-179482) DBG | domain kubernetes-upgrade-179482 has defined MAC address 52:54:00:4e:c9:d2 in network mk-kubernetes-upgrade-179482
	I0603 11:51:26.185933   56023 main.go:141] libmachine: (kubernetes-upgrade-179482) DBG | unable to find current IP address of domain kubernetes-upgrade-179482 in network mk-kubernetes-upgrade-179482
	I0603 11:51:26.185956   56023 main.go:141] libmachine: (kubernetes-upgrade-179482) DBG | I0603 11:51:26.185886   56625 retry.go:31] will retry after 361.076422ms: waiting for machine to come up
	I0603 11:51:26.548107   56023 main.go:141] libmachine: (kubernetes-upgrade-179482) DBG | domain kubernetes-upgrade-179482 has defined MAC address 52:54:00:4e:c9:d2 in network mk-kubernetes-upgrade-179482
	I0603 11:51:26.548749   56023 main.go:141] libmachine: (kubernetes-upgrade-179482) DBG | unable to find current IP address of domain kubernetes-upgrade-179482 in network mk-kubernetes-upgrade-179482
	I0603 11:51:26.548771   56023 main.go:141] libmachine: (kubernetes-upgrade-179482) DBG | I0603 11:51:26.548697   56625 retry.go:31] will retry after 387.897339ms: waiting for machine to come up
	I0603 11:51:26.938303   56023 main.go:141] libmachine: (kubernetes-upgrade-179482) DBG | domain kubernetes-upgrade-179482 has defined MAC address 52:54:00:4e:c9:d2 in network mk-kubernetes-upgrade-179482
	I0603 11:51:26.938928   56023 main.go:141] libmachine: (kubernetes-upgrade-179482) DBG | unable to find current IP address of domain kubernetes-upgrade-179482 in network mk-kubernetes-upgrade-179482
	I0603 11:51:26.938958   56023 main.go:141] libmachine: (kubernetes-upgrade-179482) DBG | I0603 11:51:26.938884   56625 retry.go:31] will retry after 533.221228ms: waiting for machine to come up
	I0603 11:51:27.473478   56023 main.go:141] libmachine: (kubernetes-upgrade-179482) DBG | domain kubernetes-upgrade-179482 has defined MAC address 52:54:00:4e:c9:d2 in network mk-kubernetes-upgrade-179482
	I0603 11:51:27.473871   56023 main.go:141] libmachine: (kubernetes-upgrade-179482) DBG | unable to find current IP address of domain kubernetes-upgrade-179482 in network mk-kubernetes-upgrade-179482
	I0603 11:51:27.473899   56023 main.go:141] libmachine: (kubernetes-upgrade-179482) DBG | I0603 11:51:27.473803   56625 retry.go:31] will retry after 552.336198ms: waiting for machine to come up
	I0603 11:51:28.027659   56023 main.go:141] libmachine: (kubernetes-upgrade-179482) DBG | domain kubernetes-upgrade-179482 has defined MAC address 52:54:00:4e:c9:d2 in network mk-kubernetes-upgrade-179482
	I0603 11:51:28.028259   56023 main.go:141] libmachine: (kubernetes-upgrade-179482) DBG | unable to find current IP address of domain kubernetes-upgrade-179482 in network mk-kubernetes-upgrade-179482
	I0603 11:51:28.028318   56023 main.go:141] libmachine: (kubernetes-upgrade-179482) DBG | I0603 11:51:28.028209   56625 retry.go:31] will retry after 846.982973ms: waiting for machine to come up
	I0603 11:51:28.877570   56023 main.go:141] libmachine: (kubernetes-upgrade-179482) DBG | domain kubernetes-upgrade-179482 has defined MAC address 52:54:00:4e:c9:d2 in network mk-kubernetes-upgrade-179482
	I0603 11:51:28.878089   56023 main.go:141] libmachine: (kubernetes-upgrade-179482) DBG | unable to find current IP address of domain kubernetes-upgrade-179482 in network mk-kubernetes-upgrade-179482
	I0603 11:51:28.878123   56023 main.go:141] libmachine: (kubernetes-upgrade-179482) DBG | I0603 11:51:28.878024   56625 retry.go:31] will retry after 1.012329785s: waiting for machine to come up
	I0603 11:51:29.891891   56023 main.go:141] libmachine: (kubernetes-upgrade-179482) DBG | domain kubernetes-upgrade-179482 has defined MAC address 52:54:00:4e:c9:d2 in network mk-kubernetes-upgrade-179482
	I0603 11:51:29.892474   56023 main.go:141] libmachine: (kubernetes-upgrade-179482) DBG | unable to find current IP address of domain kubernetes-upgrade-179482 in network mk-kubernetes-upgrade-179482
	I0603 11:51:29.892519   56023 main.go:141] libmachine: (kubernetes-upgrade-179482) DBG | I0603 11:51:29.892456   56625 retry.go:31] will retry after 1.353920476s: waiting for machine to come up
	I0603 11:51:31.247690   56023 main.go:141] libmachine: (kubernetes-upgrade-179482) DBG | domain kubernetes-upgrade-179482 has defined MAC address 52:54:00:4e:c9:d2 in network mk-kubernetes-upgrade-179482
	I0603 11:51:31.248251   56023 main.go:141] libmachine: (kubernetes-upgrade-179482) DBG | unable to find current IP address of domain kubernetes-upgrade-179482 in network mk-kubernetes-upgrade-179482
	I0603 11:51:31.248281   56023 main.go:141] libmachine: (kubernetes-upgrade-179482) DBG | I0603 11:51:31.248197   56625 retry.go:31] will retry after 1.160871848s: waiting for machine to come up
	I0603 11:51:32.411928   56023 main.go:141] libmachine: (kubernetes-upgrade-179482) DBG | domain kubernetes-upgrade-179482 has defined MAC address 52:54:00:4e:c9:d2 in network mk-kubernetes-upgrade-179482
	I0603 11:51:32.412475   56023 main.go:141] libmachine: (kubernetes-upgrade-179482) DBG | unable to find current IP address of domain kubernetes-upgrade-179482 in network mk-kubernetes-upgrade-179482
	I0603 11:51:32.412496   56023 main.go:141] libmachine: (kubernetes-upgrade-179482) DBG | I0603 11:51:32.412437   56625 retry.go:31] will retry after 1.638343131s: waiting for machine to come up
	I0603 11:51:34.053216   56023 main.go:141] libmachine: (kubernetes-upgrade-179482) DBG | domain kubernetes-upgrade-179482 has defined MAC address 52:54:00:4e:c9:d2 in network mk-kubernetes-upgrade-179482
	I0603 11:51:34.053727   56023 main.go:141] libmachine: (kubernetes-upgrade-179482) DBG | unable to find current IP address of domain kubernetes-upgrade-179482 in network mk-kubernetes-upgrade-179482
	I0603 11:51:34.053756   56023 main.go:141] libmachine: (kubernetes-upgrade-179482) DBG | I0603 11:51:34.053685   56625 retry.go:31] will retry after 2.050232907s: waiting for machine to come up
	I0603 11:51:36.106291   56023 main.go:141] libmachine: (kubernetes-upgrade-179482) DBG | domain kubernetes-upgrade-179482 has defined MAC address 52:54:00:4e:c9:d2 in network mk-kubernetes-upgrade-179482
	I0603 11:51:36.106962   56023 main.go:141] libmachine: (kubernetes-upgrade-179482) DBG | unable to find current IP address of domain kubernetes-upgrade-179482 in network mk-kubernetes-upgrade-179482
	I0603 11:51:36.106985   56023 main.go:141] libmachine: (kubernetes-upgrade-179482) DBG | I0603 11:51:36.106921   56625 retry.go:31] will retry after 2.88457286s: waiting for machine to come up
	I0603 11:51:38.993472   56023 main.go:141] libmachine: (kubernetes-upgrade-179482) DBG | domain kubernetes-upgrade-179482 has defined MAC address 52:54:00:4e:c9:d2 in network mk-kubernetes-upgrade-179482
	I0603 11:51:38.993939   56023 main.go:141] libmachine: (kubernetes-upgrade-179482) DBG | unable to find current IP address of domain kubernetes-upgrade-179482 in network mk-kubernetes-upgrade-179482
	I0603 11:51:38.993964   56023 main.go:141] libmachine: (kubernetes-upgrade-179482) DBG | I0603 11:51:38.993886   56625 retry.go:31] will retry after 3.829484884s: waiting for machine to come up
	I0603 11:51:42.825152   56023 main.go:141] libmachine: (kubernetes-upgrade-179482) DBG | domain kubernetes-upgrade-179482 has defined MAC address 52:54:00:4e:c9:d2 in network mk-kubernetes-upgrade-179482
	I0603 11:51:42.825708   56023 main.go:141] libmachine: (kubernetes-upgrade-179482) DBG | unable to find current IP address of domain kubernetes-upgrade-179482 in network mk-kubernetes-upgrade-179482
	I0603 11:51:42.825741   56023 main.go:141] libmachine: (kubernetes-upgrade-179482) DBG | I0603 11:51:42.825657   56625 retry.go:31] will retry after 5.471890021s: waiting for machine to come up
	I0603 11:51:48.302589   56023 main.go:141] libmachine: (kubernetes-upgrade-179482) DBG | domain kubernetes-upgrade-179482 has defined MAC address 52:54:00:4e:c9:d2 in network mk-kubernetes-upgrade-179482
	I0603 11:51:48.303175   56023 main.go:141] libmachine: (kubernetes-upgrade-179482) DBG | domain kubernetes-upgrade-179482 has current primary IP address 192.168.72.223 and MAC address 52:54:00:4e:c9:d2 in network mk-kubernetes-upgrade-179482
	I0603 11:51:48.303197   56023 main.go:141] libmachine: (kubernetes-upgrade-179482) Found IP for machine: 192.168.72.223
	I0603 11:51:48.303209   56023 main.go:141] libmachine: (kubernetes-upgrade-179482) Reserving static IP address...
	I0603 11:51:48.303553   56023 main.go:141] libmachine: (kubernetes-upgrade-179482) DBG | unable to find host DHCP lease matching {name: "kubernetes-upgrade-179482", mac: "52:54:00:4e:c9:d2", ip: "192.168.72.223"} in network mk-kubernetes-upgrade-179482
	I0603 11:51:48.377067   56023 main.go:141] libmachine: (kubernetes-upgrade-179482) Reserved static IP address: 192.168.72.223
	I0603 11:51:48.377102   56023 main.go:141] libmachine: (kubernetes-upgrade-179482) Waiting for SSH to be available...
	I0603 11:51:48.377112   56023 main.go:141] libmachine: (kubernetes-upgrade-179482) DBG | Getting to WaitForSSH function...
	I0603 11:51:48.380038   56023 main.go:141] libmachine: (kubernetes-upgrade-179482) DBG | domain kubernetes-upgrade-179482 has defined MAC address 52:54:00:4e:c9:d2 in network mk-kubernetes-upgrade-179482
	I0603 11:51:48.380510   56023 main.go:141] libmachine: (kubernetes-upgrade-179482) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:c9:d2", ip: ""} in network mk-kubernetes-upgrade-179482: {Iface:virbr2 ExpiryTime:2024-06-03 12:51:38 +0000 UTC Type:0 Mac:52:54:00:4e:c9:d2 Iaid: IPaddr:192.168.72.223 Prefix:24 Hostname:minikube Clientid:01:52:54:00:4e:c9:d2}
	I0603 11:51:48.380558   56023 main.go:141] libmachine: (kubernetes-upgrade-179482) DBG | domain kubernetes-upgrade-179482 has defined IP address 192.168.72.223 and MAC address 52:54:00:4e:c9:d2 in network mk-kubernetes-upgrade-179482
	I0603 11:51:48.380696   56023 main.go:141] libmachine: (kubernetes-upgrade-179482) DBG | Using SSH client type: external
	I0603 11:51:48.380726   56023 main.go:141] libmachine: (kubernetes-upgrade-179482) DBG | Using SSH private key: /home/jenkins/minikube-integration/19008-7755/.minikube/machines/kubernetes-upgrade-179482/id_rsa (-rw-------)
	I0603 11:51:48.380761   56023 main.go:141] libmachine: (kubernetes-upgrade-179482) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.223 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19008-7755/.minikube/machines/kubernetes-upgrade-179482/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0603 11:51:48.380780   56023 main.go:141] libmachine: (kubernetes-upgrade-179482) DBG | About to run SSH command:
	I0603 11:51:48.380795   56023 main.go:141] libmachine: (kubernetes-upgrade-179482) DBG | exit 0
	I0603 11:51:48.502891   56023 main.go:141] libmachine: (kubernetes-upgrade-179482) DBG | SSH cmd err, output: <nil>: 
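	(Side note: the probe above simply runs "exit 0" through the external ssh client whose arguments are logged a few lines earlier. A minimal sketch of repeating that probe by hand; the key path, user and IP are the ones from this run and will differ elsewhere:
	  ssh -F /dev/null -o ConnectTimeout=10 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null \
	      -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19008-7755/.minikube/machines/kubernetes-upgrade-179482/id_rsa \
	      -p 22 docker@192.168.72.223 'exit 0' && echo "guest SSH is up")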
	I0603 11:51:48.503128   56023 main.go:141] libmachine: (kubernetes-upgrade-179482) KVM machine creation complete!
	I0603 11:51:48.503425   56023 main.go:141] libmachine: (kubernetes-upgrade-179482) Calling .GetConfigRaw
	I0603 11:51:48.504102   56023 main.go:141] libmachine: (kubernetes-upgrade-179482) Calling .DriverName
	I0603 11:51:48.504288   56023 main.go:141] libmachine: (kubernetes-upgrade-179482) Calling .DriverName
	I0603 11:51:48.504471   56023 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0603 11:51:48.504488   56023 main.go:141] libmachine: (kubernetes-upgrade-179482) Calling .GetState
	I0603 11:51:48.505744   56023 main.go:141] libmachine: Detecting operating system of created instance...
	I0603 11:51:48.505756   56023 main.go:141] libmachine: Waiting for SSH to be available...
	I0603 11:51:48.505761   56023 main.go:141] libmachine: Getting to WaitForSSH function...
	I0603 11:51:48.505767   56023 main.go:141] libmachine: (kubernetes-upgrade-179482) Calling .GetSSHHostname
	I0603 11:51:48.508388   56023 main.go:141] libmachine: (kubernetes-upgrade-179482) DBG | domain kubernetes-upgrade-179482 has defined MAC address 52:54:00:4e:c9:d2 in network mk-kubernetes-upgrade-179482
	I0603 11:51:48.508797   56023 main.go:141] libmachine: (kubernetes-upgrade-179482) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:c9:d2", ip: ""} in network mk-kubernetes-upgrade-179482: {Iface:virbr2 ExpiryTime:2024-06-03 12:51:38 +0000 UTC Type:0 Mac:52:54:00:4e:c9:d2 Iaid: IPaddr:192.168.72.223 Prefix:24 Hostname:kubernetes-upgrade-179482 Clientid:01:52:54:00:4e:c9:d2}
	I0603 11:51:48.508818   56023 main.go:141] libmachine: (kubernetes-upgrade-179482) DBG | domain kubernetes-upgrade-179482 has defined IP address 192.168.72.223 and MAC address 52:54:00:4e:c9:d2 in network mk-kubernetes-upgrade-179482
	I0603 11:51:48.508930   56023 main.go:141] libmachine: (kubernetes-upgrade-179482) Calling .GetSSHPort
	I0603 11:51:48.509121   56023 main.go:141] libmachine: (kubernetes-upgrade-179482) Calling .GetSSHKeyPath
	I0603 11:51:48.509276   56023 main.go:141] libmachine: (kubernetes-upgrade-179482) Calling .GetSSHKeyPath
	I0603 11:51:48.509424   56023 main.go:141] libmachine: (kubernetes-upgrade-179482) Calling .GetSSHUsername
	I0603 11:51:48.509584   56023 main.go:141] libmachine: Using SSH client type: native
	I0603 11:51:48.509827   56023 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.72.223 22 <nil> <nil>}
	I0603 11:51:48.509846   56023 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0603 11:51:48.606427   56023 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0603 11:51:48.606453   56023 main.go:141] libmachine: Detecting the provisioner...
	I0603 11:51:48.606460   56023 main.go:141] libmachine: (kubernetes-upgrade-179482) Calling .GetSSHHostname
	I0603 11:51:48.609193   56023 main.go:141] libmachine: (kubernetes-upgrade-179482) DBG | domain kubernetes-upgrade-179482 has defined MAC address 52:54:00:4e:c9:d2 in network mk-kubernetes-upgrade-179482
	I0603 11:51:48.609706   56023 main.go:141] libmachine: (kubernetes-upgrade-179482) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:c9:d2", ip: ""} in network mk-kubernetes-upgrade-179482: {Iface:virbr2 ExpiryTime:2024-06-03 12:51:38 +0000 UTC Type:0 Mac:52:54:00:4e:c9:d2 Iaid: IPaddr:192.168.72.223 Prefix:24 Hostname:kubernetes-upgrade-179482 Clientid:01:52:54:00:4e:c9:d2}
	I0603 11:51:48.609734   56023 main.go:141] libmachine: (kubernetes-upgrade-179482) DBG | domain kubernetes-upgrade-179482 has defined IP address 192.168.72.223 and MAC address 52:54:00:4e:c9:d2 in network mk-kubernetes-upgrade-179482
	I0603 11:51:48.609897   56023 main.go:141] libmachine: (kubernetes-upgrade-179482) Calling .GetSSHPort
	I0603 11:51:48.610087   56023 main.go:141] libmachine: (kubernetes-upgrade-179482) Calling .GetSSHKeyPath
	I0603 11:51:48.610268   56023 main.go:141] libmachine: (kubernetes-upgrade-179482) Calling .GetSSHKeyPath
	I0603 11:51:48.610449   56023 main.go:141] libmachine: (kubernetes-upgrade-179482) Calling .GetSSHUsername
	I0603 11:51:48.610626   56023 main.go:141] libmachine: Using SSH client type: native
	I0603 11:51:48.610793   56023 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.72.223 22 <nil> <nil>}
	I0603 11:51:48.610809   56023 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0603 11:51:48.707637   56023 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0603 11:51:48.707726   56023 main.go:141] libmachine: found compatible host: buildroot
	I0603 11:51:48.707735   56023 main.go:141] libmachine: Provisioning with buildroot...
	I0603 11:51:48.707743   56023 main.go:141] libmachine: (kubernetes-upgrade-179482) Calling .GetMachineName
	I0603 11:51:48.707988   56023 buildroot.go:166] provisioning hostname "kubernetes-upgrade-179482"
	I0603 11:51:48.708016   56023 main.go:141] libmachine: (kubernetes-upgrade-179482) Calling .GetMachineName
	I0603 11:51:48.708226   56023 main.go:141] libmachine: (kubernetes-upgrade-179482) Calling .GetSSHHostname
	I0603 11:51:48.710754   56023 main.go:141] libmachine: (kubernetes-upgrade-179482) DBG | domain kubernetes-upgrade-179482 has defined MAC address 52:54:00:4e:c9:d2 in network mk-kubernetes-upgrade-179482
	I0603 11:51:48.711154   56023 main.go:141] libmachine: (kubernetes-upgrade-179482) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:c9:d2", ip: ""} in network mk-kubernetes-upgrade-179482: {Iface:virbr2 ExpiryTime:2024-06-03 12:51:38 +0000 UTC Type:0 Mac:52:54:00:4e:c9:d2 Iaid: IPaddr:192.168.72.223 Prefix:24 Hostname:kubernetes-upgrade-179482 Clientid:01:52:54:00:4e:c9:d2}
	I0603 11:51:48.711193   56023 main.go:141] libmachine: (kubernetes-upgrade-179482) DBG | domain kubernetes-upgrade-179482 has defined IP address 192.168.72.223 and MAC address 52:54:00:4e:c9:d2 in network mk-kubernetes-upgrade-179482
	I0603 11:51:48.711358   56023 main.go:141] libmachine: (kubernetes-upgrade-179482) Calling .GetSSHPort
	I0603 11:51:48.711529   56023 main.go:141] libmachine: (kubernetes-upgrade-179482) Calling .GetSSHKeyPath
	I0603 11:51:48.711709   56023 main.go:141] libmachine: (kubernetes-upgrade-179482) Calling .GetSSHKeyPath
	I0603 11:51:48.711876   56023 main.go:141] libmachine: (kubernetes-upgrade-179482) Calling .GetSSHUsername
	I0603 11:51:48.712053   56023 main.go:141] libmachine: Using SSH client type: native
	I0603 11:51:48.712245   56023 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.72.223 22 <nil> <nil>}
	I0603 11:51:48.712258   56023 main.go:141] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-179482 && echo "kubernetes-upgrade-179482" | sudo tee /etc/hostname
	I0603 11:51:48.827549   56023 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-179482
	
	I0603 11:51:48.827581   56023 main.go:141] libmachine: (kubernetes-upgrade-179482) Calling .GetSSHHostname
	I0603 11:51:48.830558   56023 main.go:141] libmachine: (kubernetes-upgrade-179482) DBG | domain kubernetes-upgrade-179482 has defined MAC address 52:54:00:4e:c9:d2 in network mk-kubernetes-upgrade-179482
	I0603 11:51:48.830930   56023 main.go:141] libmachine: (kubernetes-upgrade-179482) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:c9:d2", ip: ""} in network mk-kubernetes-upgrade-179482: {Iface:virbr2 ExpiryTime:2024-06-03 12:51:38 +0000 UTC Type:0 Mac:52:54:00:4e:c9:d2 Iaid: IPaddr:192.168.72.223 Prefix:24 Hostname:kubernetes-upgrade-179482 Clientid:01:52:54:00:4e:c9:d2}
	I0603 11:51:48.830959   56023 main.go:141] libmachine: (kubernetes-upgrade-179482) DBG | domain kubernetes-upgrade-179482 has defined IP address 192.168.72.223 and MAC address 52:54:00:4e:c9:d2 in network mk-kubernetes-upgrade-179482
	I0603 11:51:48.831183   56023 main.go:141] libmachine: (kubernetes-upgrade-179482) Calling .GetSSHPort
	I0603 11:51:48.831358   56023 main.go:141] libmachine: (kubernetes-upgrade-179482) Calling .GetSSHKeyPath
	I0603 11:51:48.831523   56023 main.go:141] libmachine: (kubernetes-upgrade-179482) Calling .GetSSHKeyPath
	I0603 11:51:48.831661   56023 main.go:141] libmachine: (kubernetes-upgrade-179482) Calling .GetSSHUsername
	I0603 11:51:48.831810   56023 main.go:141] libmachine: Using SSH client type: native
	I0603 11:51:48.831972   56023 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.72.223 22 <nil> <nil>}
	I0603 11:51:48.831989   56023 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-179482' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-179482/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-179482' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0603 11:51:48.935613   56023 main.go:141] libmachine: SSH cmd err, output: <nil>: 
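	(Side note: the shell block above only guarantees a 127.0.1.1 mapping for the new hostname. After it runs, /etc/hosts on the guest should contain the line below, taken directly from the command itself, and a plain "grep kubernetes-upgrade-179482 /etc/hosts" would confirm it:
	  127.0.1.1 kubernetes-upgrade-179482)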
	I0603 11:51:48.935640   56023 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19008-7755/.minikube CaCertPath:/home/jenkins/minikube-integration/19008-7755/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19008-7755/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19008-7755/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19008-7755/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19008-7755/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19008-7755/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19008-7755/.minikube}
	I0603 11:51:48.935671   56023 buildroot.go:174] setting up certificates
	I0603 11:51:48.935679   56023 provision.go:84] configureAuth start
	I0603 11:51:48.935687   56023 main.go:141] libmachine: (kubernetes-upgrade-179482) Calling .GetMachineName
	I0603 11:51:48.935953   56023 main.go:141] libmachine: (kubernetes-upgrade-179482) Calling .GetIP
	I0603 11:51:48.938674   56023 main.go:141] libmachine: (kubernetes-upgrade-179482) DBG | domain kubernetes-upgrade-179482 has defined MAC address 52:54:00:4e:c9:d2 in network mk-kubernetes-upgrade-179482
	I0603 11:51:48.939019   56023 main.go:141] libmachine: (kubernetes-upgrade-179482) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:c9:d2", ip: ""} in network mk-kubernetes-upgrade-179482: {Iface:virbr2 ExpiryTime:2024-06-03 12:51:38 +0000 UTC Type:0 Mac:52:54:00:4e:c9:d2 Iaid: IPaddr:192.168.72.223 Prefix:24 Hostname:kubernetes-upgrade-179482 Clientid:01:52:54:00:4e:c9:d2}
	I0603 11:51:48.939055   56023 main.go:141] libmachine: (kubernetes-upgrade-179482) DBG | domain kubernetes-upgrade-179482 has defined IP address 192.168.72.223 and MAC address 52:54:00:4e:c9:d2 in network mk-kubernetes-upgrade-179482
	I0603 11:51:48.939256   56023 main.go:141] libmachine: (kubernetes-upgrade-179482) Calling .GetSSHHostname
	I0603 11:51:48.941706   56023 main.go:141] libmachine: (kubernetes-upgrade-179482) DBG | domain kubernetes-upgrade-179482 has defined MAC address 52:54:00:4e:c9:d2 in network mk-kubernetes-upgrade-179482
	I0603 11:51:48.942085   56023 main.go:141] libmachine: (kubernetes-upgrade-179482) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:c9:d2", ip: ""} in network mk-kubernetes-upgrade-179482: {Iface:virbr2 ExpiryTime:2024-06-03 12:51:38 +0000 UTC Type:0 Mac:52:54:00:4e:c9:d2 Iaid: IPaddr:192.168.72.223 Prefix:24 Hostname:kubernetes-upgrade-179482 Clientid:01:52:54:00:4e:c9:d2}
	I0603 11:51:48.942117   56023 main.go:141] libmachine: (kubernetes-upgrade-179482) DBG | domain kubernetes-upgrade-179482 has defined IP address 192.168.72.223 and MAC address 52:54:00:4e:c9:d2 in network mk-kubernetes-upgrade-179482
	I0603 11:51:48.942246   56023 provision.go:143] copyHostCerts
	I0603 11:51:48.942327   56023 exec_runner.go:144] found /home/jenkins/minikube-integration/19008-7755/.minikube/cert.pem, removing ...
	I0603 11:51:48.942344   56023 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19008-7755/.minikube/cert.pem
	I0603 11:51:48.942398   56023 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19008-7755/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19008-7755/.minikube/cert.pem (1123 bytes)
	I0603 11:51:48.942489   56023 exec_runner.go:144] found /home/jenkins/minikube-integration/19008-7755/.minikube/key.pem, removing ...
	I0603 11:51:48.942500   56023 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19008-7755/.minikube/key.pem
	I0603 11:51:48.942519   56023 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19008-7755/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19008-7755/.minikube/key.pem (1679 bytes)
	I0603 11:51:48.942584   56023 exec_runner.go:144] found /home/jenkins/minikube-integration/19008-7755/.minikube/ca.pem, removing ...
	I0603 11:51:48.942594   56023 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19008-7755/.minikube/ca.pem
	I0603 11:51:48.942619   56023 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19008-7755/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19008-7755/.minikube/ca.pem (1082 bytes)
	I0603 11:51:48.942696   56023 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19008-7755/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19008-7755/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19008-7755/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-179482 san=[127.0.0.1 192.168.72.223 kubernetes-upgrade-179482 localhost minikube]
	I0603 11:51:49.079712   56023 provision.go:177] copyRemoteCerts
	I0603 11:51:49.079773   56023 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0603 11:51:49.079806   56023 main.go:141] libmachine: (kubernetes-upgrade-179482) Calling .GetSSHHostname
	I0603 11:51:49.082698   56023 main.go:141] libmachine: (kubernetes-upgrade-179482) DBG | domain kubernetes-upgrade-179482 has defined MAC address 52:54:00:4e:c9:d2 in network mk-kubernetes-upgrade-179482
	I0603 11:51:49.083073   56023 main.go:141] libmachine: (kubernetes-upgrade-179482) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:c9:d2", ip: ""} in network mk-kubernetes-upgrade-179482: {Iface:virbr2 ExpiryTime:2024-06-03 12:51:38 +0000 UTC Type:0 Mac:52:54:00:4e:c9:d2 Iaid: IPaddr:192.168.72.223 Prefix:24 Hostname:kubernetes-upgrade-179482 Clientid:01:52:54:00:4e:c9:d2}
	I0603 11:51:49.083108   56023 main.go:141] libmachine: (kubernetes-upgrade-179482) DBG | domain kubernetes-upgrade-179482 has defined IP address 192.168.72.223 and MAC address 52:54:00:4e:c9:d2 in network mk-kubernetes-upgrade-179482
	I0603 11:51:49.083290   56023 main.go:141] libmachine: (kubernetes-upgrade-179482) Calling .GetSSHPort
	I0603 11:51:49.083433   56023 main.go:141] libmachine: (kubernetes-upgrade-179482) Calling .GetSSHKeyPath
	I0603 11:51:49.083541   56023 main.go:141] libmachine: (kubernetes-upgrade-179482) Calling .GetSSHUsername
	I0603 11:51:49.083684   56023 sshutil.go:53] new ssh client: &{IP:192.168.72.223 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19008-7755/.minikube/machines/kubernetes-upgrade-179482/id_rsa Username:docker}
	I0603 11:51:49.161613   56023 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0603 11:51:49.187101   56023 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0603 11:51:49.212224   56023 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
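	(Side note: the three scp steps above push the CA and the freshly generated server cert/key to /etc/docker on the guest. An assumed manual check, not part of the test run, that the server cert carries the SANs requested during generation (127.0.0.1, 192.168.72.223, kubernetes-upgrade-179482, localhost, minikube):
	  sudo openssl x509 -in /etc/docker/server.pem -noout -text | grep -A1 'Subject Alternative Name')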
	I0603 11:51:49.236658   56023 provision.go:87] duration metric: took 300.966346ms to configureAuth
	I0603 11:51:49.236686   56023 buildroot.go:189] setting minikube options for container-runtime
	I0603 11:51:49.236848   56023 config.go:182] Loaded profile config "kubernetes-upgrade-179482": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0603 11:51:49.236909   56023 main.go:141] libmachine: (kubernetes-upgrade-179482) Calling .GetSSHHostname
	I0603 11:51:49.239919   56023 main.go:141] libmachine: (kubernetes-upgrade-179482) DBG | domain kubernetes-upgrade-179482 has defined MAC address 52:54:00:4e:c9:d2 in network mk-kubernetes-upgrade-179482
	I0603 11:51:49.240270   56023 main.go:141] libmachine: (kubernetes-upgrade-179482) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:c9:d2", ip: ""} in network mk-kubernetes-upgrade-179482: {Iface:virbr2 ExpiryTime:2024-06-03 12:51:38 +0000 UTC Type:0 Mac:52:54:00:4e:c9:d2 Iaid: IPaddr:192.168.72.223 Prefix:24 Hostname:kubernetes-upgrade-179482 Clientid:01:52:54:00:4e:c9:d2}
	I0603 11:51:49.240300   56023 main.go:141] libmachine: (kubernetes-upgrade-179482) DBG | domain kubernetes-upgrade-179482 has defined IP address 192.168.72.223 and MAC address 52:54:00:4e:c9:d2 in network mk-kubernetes-upgrade-179482
	I0603 11:51:49.240449   56023 main.go:141] libmachine: (kubernetes-upgrade-179482) Calling .GetSSHPort
	I0603 11:51:49.240652   56023 main.go:141] libmachine: (kubernetes-upgrade-179482) Calling .GetSSHKeyPath
	I0603 11:51:49.240835   56023 main.go:141] libmachine: (kubernetes-upgrade-179482) Calling .GetSSHKeyPath
	I0603 11:51:49.240995   56023 main.go:141] libmachine: (kubernetes-upgrade-179482) Calling .GetSSHUsername
	I0603 11:51:49.241153   56023 main.go:141] libmachine: Using SSH client type: native
	I0603 11:51:49.241366   56023 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.72.223 22 <nil> <nil>}
	I0603 11:51:49.241381   56023 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0603 11:51:49.490378   56023 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0603 11:51:49.490405   56023 main.go:141] libmachine: Checking connection to Docker...
	I0603 11:51:49.490413   56023 main.go:141] libmachine: (kubernetes-upgrade-179482) Calling .GetURL
	I0603 11:51:49.491694   56023 main.go:141] libmachine: (kubernetes-upgrade-179482) DBG | Using libvirt version 6000000
	I0603 11:51:49.494020   56023 main.go:141] libmachine: (kubernetes-upgrade-179482) DBG | domain kubernetes-upgrade-179482 has defined MAC address 52:54:00:4e:c9:d2 in network mk-kubernetes-upgrade-179482
	I0603 11:51:49.494448   56023 main.go:141] libmachine: (kubernetes-upgrade-179482) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:c9:d2", ip: ""} in network mk-kubernetes-upgrade-179482: {Iface:virbr2 ExpiryTime:2024-06-03 12:51:38 +0000 UTC Type:0 Mac:52:54:00:4e:c9:d2 Iaid: IPaddr:192.168.72.223 Prefix:24 Hostname:kubernetes-upgrade-179482 Clientid:01:52:54:00:4e:c9:d2}
	I0603 11:51:49.494479   56023 main.go:141] libmachine: (kubernetes-upgrade-179482) DBG | domain kubernetes-upgrade-179482 has defined IP address 192.168.72.223 and MAC address 52:54:00:4e:c9:d2 in network mk-kubernetes-upgrade-179482
	I0603 11:51:49.494644   56023 main.go:141] libmachine: Docker is up and running!
	I0603 11:51:49.494662   56023 main.go:141] libmachine: Reticulating splines...
	I0603 11:51:49.494669   56023 client.go:171] duration metric: took 25.245927246s to LocalClient.Create
	I0603 11:51:49.494693   56023 start.go:167] duration metric: took 25.245993168s to libmachine.API.Create "kubernetes-upgrade-179482"
	I0603 11:51:49.494702   56023 start.go:293] postStartSetup for "kubernetes-upgrade-179482" (driver="kvm2")
	I0603 11:51:49.494714   56023 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0603 11:51:49.494730   56023 main.go:141] libmachine: (kubernetes-upgrade-179482) Calling .DriverName
	I0603 11:51:49.494959   56023 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0603 11:51:49.494981   56023 main.go:141] libmachine: (kubernetes-upgrade-179482) Calling .GetSSHHostname
	I0603 11:51:49.497089   56023 main.go:141] libmachine: (kubernetes-upgrade-179482) DBG | domain kubernetes-upgrade-179482 has defined MAC address 52:54:00:4e:c9:d2 in network mk-kubernetes-upgrade-179482
	I0603 11:51:49.497412   56023 main.go:141] libmachine: (kubernetes-upgrade-179482) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:c9:d2", ip: ""} in network mk-kubernetes-upgrade-179482: {Iface:virbr2 ExpiryTime:2024-06-03 12:51:38 +0000 UTC Type:0 Mac:52:54:00:4e:c9:d2 Iaid: IPaddr:192.168.72.223 Prefix:24 Hostname:kubernetes-upgrade-179482 Clientid:01:52:54:00:4e:c9:d2}
	I0603 11:51:49.497443   56023 main.go:141] libmachine: (kubernetes-upgrade-179482) DBG | domain kubernetes-upgrade-179482 has defined IP address 192.168.72.223 and MAC address 52:54:00:4e:c9:d2 in network mk-kubernetes-upgrade-179482
	I0603 11:51:49.497569   56023 main.go:141] libmachine: (kubernetes-upgrade-179482) Calling .GetSSHPort
	I0603 11:51:49.497784   56023 main.go:141] libmachine: (kubernetes-upgrade-179482) Calling .GetSSHKeyPath
	I0603 11:51:49.497946   56023 main.go:141] libmachine: (kubernetes-upgrade-179482) Calling .GetSSHUsername
	I0603 11:51:49.498084   56023 sshutil.go:53] new ssh client: &{IP:192.168.72.223 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19008-7755/.minikube/machines/kubernetes-upgrade-179482/id_rsa Username:docker}
	I0603 11:51:49.577086   56023 ssh_runner.go:195] Run: cat /etc/os-release
	I0603 11:51:49.581686   56023 info.go:137] Remote host: Buildroot 2023.02.9
	I0603 11:51:49.581710   56023 filesync.go:126] Scanning /home/jenkins/minikube-integration/19008-7755/.minikube/addons for local assets ...
	I0603 11:51:49.581775   56023 filesync.go:126] Scanning /home/jenkins/minikube-integration/19008-7755/.minikube/files for local assets ...
	I0603 11:51:49.581876   56023 filesync.go:149] local asset: /home/jenkins/minikube-integration/19008-7755/.minikube/files/etc/ssl/certs/150282.pem -> 150282.pem in /etc/ssl/certs
	I0603 11:51:49.581989   56023 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0603 11:51:49.591503   56023 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/files/etc/ssl/certs/150282.pem --> /etc/ssl/certs/150282.pem (1708 bytes)
	I0603 11:51:49.615787   56023 start.go:296] duration metric: took 121.073536ms for postStartSetup
	I0603 11:51:49.615826   56023 main.go:141] libmachine: (kubernetes-upgrade-179482) Calling .GetConfigRaw
	I0603 11:51:49.616383   56023 main.go:141] libmachine: (kubernetes-upgrade-179482) Calling .GetIP
	I0603 11:51:49.619013   56023 main.go:141] libmachine: (kubernetes-upgrade-179482) DBG | domain kubernetes-upgrade-179482 has defined MAC address 52:54:00:4e:c9:d2 in network mk-kubernetes-upgrade-179482
	I0603 11:51:49.619385   56023 main.go:141] libmachine: (kubernetes-upgrade-179482) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:c9:d2", ip: ""} in network mk-kubernetes-upgrade-179482: {Iface:virbr2 ExpiryTime:2024-06-03 12:51:38 +0000 UTC Type:0 Mac:52:54:00:4e:c9:d2 Iaid: IPaddr:192.168.72.223 Prefix:24 Hostname:kubernetes-upgrade-179482 Clientid:01:52:54:00:4e:c9:d2}
	I0603 11:51:49.619408   56023 main.go:141] libmachine: (kubernetes-upgrade-179482) DBG | domain kubernetes-upgrade-179482 has defined IP address 192.168.72.223 and MAC address 52:54:00:4e:c9:d2 in network mk-kubernetes-upgrade-179482
	I0603 11:51:49.619621   56023 profile.go:143] Saving config to /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/kubernetes-upgrade-179482/config.json ...
	I0603 11:51:49.619804   56023 start.go:128] duration metric: took 25.391679819s to createHost
	I0603 11:51:49.619824   56023 main.go:141] libmachine: (kubernetes-upgrade-179482) Calling .GetSSHHostname
	I0603 11:51:49.621877   56023 main.go:141] libmachine: (kubernetes-upgrade-179482) DBG | domain kubernetes-upgrade-179482 has defined MAC address 52:54:00:4e:c9:d2 in network mk-kubernetes-upgrade-179482
	I0603 11:51:49.622167   56023 main.go:141] libmachine: (kubernetes-upgrade-179482) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:c9:d2", ip: ""} in network mk-kubernetes-upgrade-179482: {Iface:virbr2 ExpiryTime:2024-06-03 12:51:38 +0000 UTC Type:0 Mac:52:54:00:4e:c9:d2 Iaid: IPaddr:192.168.72.223 Prefix:24 Hostname:kubernetes-upgrade-179482 Clientid:01:52:54:00:4e:c9:d2}
	I0603 11:51:49.622186   56023 main.go:141] libmachine: (kubernetes-upgrade-179482) DBG | domain kubernetes-upgrade-179482 has defined IP address 192.168.72.223 and MAC address 52:54:00:4e:c9:d2 in network mk-kubernetes-upgrade-179482
	I0603 11:51:49.622477   56023 main.go:141] libmachine: (kubernetes-upgrade-179482) Calling .GetSSHPort
	I0603 11:51:49.622645   56023 main.go:141] libmachine: (kubernetes-upgrade-179482) Calling .GetSSHKeyPath
	I0603 11:51:49.622900   56023 main.go:141] libmachine: (kubernetes-upgrade-179482) Calling .GetSSHKeyPath
	I0603 11:51:49.623092   56023 main.go:141] libmachine: (kubernetes-upgrade-179482) Calling .GetSSHUsername
	I0603 11:51:49.623272   56023 main.go:141] libmachine: Using SSH client type: native
	I0603 11:51:49.623478   56023 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.72.223 22 <nil> <nil>}
	I0603 11:51:49.623494   56023 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0603 11:51:49.719411   56023 main.go:141] libmachine: SSH cmd err, output: <nil>: 1717415509.685336505
	
	I0603 11:51:49.719435   56023 fix.go:216] guest clock: 1717415509.685336505
	I0603 11:51:49.719447   56023 fix.go:229] Guest: 2024-06-03 11:51:49.685336505 +0000 UTC Remote: 2024-06-03 11:51:49.619815155 +0000 UTC m=+71.451540611 (delta=65.52135ms)
	I0603 11:51:49.719470   56023 fix.go:200] guest clock delta is within tolerance: 65.52135ms
	I0603 11:51:49.719475   56023 start.go:83] releasing machines lock for "kubernetes-upgrade-179482", held for 25.491512813s
	I0603 11:51:49.719503   56023 main.go:141] libmachine: (kubernetes-upgrade-179482) Calling .DriverName
	I0603 11:51:49.719781   56023 main.go:141] libmachine: (kubernetes-upgrade-179482) Calling .GetIP
	I0603 11:51:49.722572   56023 main.go:141] libmachine: (kubernetes-upgrade-179482) DBG | domain kubernetes-upgrade-179482 has defined MAC address 52:54:00:4e:c9:d2 in network mk-kubernetes-upgrade-179482
	I0603 11:51:49.722950   56023 main.go:141] libmachine: (kubernetes-upgrade-179482) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:c9:d2", ip: ""} in network mk-kubernetes-upgrade-179482: {Iface:virbr2 ExpiryTime:2024-06-03 12:51:38 +0000 UTC Type:0 Mac:52:54:00:4e:c9:d2 Iaid: IPaddr:192.168.72.223 Prefix:24 Hostname:kubernetes-upgrade-179482 Clientid:01:52:54:00:4e:c9:d2}
	I0603 11:51:49.722979   56023 main.go:141] libmachine: (kubernetes-upgrade-179482) DBG | domain kubernetes-upgrade-179482 has defined IP address 192.168.72.223 and MAC address 52:54:00:4e:c9:d2 in network mk-kubernetes-upgrade-179482
	I0603 11:51:49.723136   56023 main.go:141] libmachine: (kubernetes-upgrade-179482) Calling .DriverName
	I0603 11:51:49.723663   56023 main.go:141] libmachine: (kubernetes-upgrade-179482) Calling .DriverName
	I0603 11:51:49.723859   56023 main.go:141] libmachine: (kubernetes-upgrade-179482) Calling .DriverName
	I0603 11:51:49.723965   56023 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0603 11:51:49.724018   56023 main.go:141] libmachine: (kubernetes-upgrade-179482) Calling .GetSSHHostname
	I0603 11:51:49.724089   56023 ssh_runner.go:195] Run: cat /version.json
	I0603 11:51:49.724114   56023 main.go:141] libmachine: (kubernetes-upgrade-179482) Calling .GetSSHHostname
	I0603 11:51:49.726646   56023 main.go:141] libmachine: (kubernetes-upgrade-179482) DBG | domain kubernetes-upgrade-179482 has defined MAC address 52:54:00:4e:c9:d2 in network mk-kubernetes-upgrade-179482
	I0603 11:51:49.726913   56023 main.go:141] libmachine: (kubernetes-upgrade-179482) DBG | domain kubernetes-upgrade-179482 has defined MAC address 52:54:00:4e:c9:d2 in network mk-kubernetes-upgrade-179482
	I0603 11:51:49.726943   56023 main.go:141] libmachine: (kubernetes-upgrade-179482) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:c9:d2", ip: ""} in network mk-kubernetes-upgrade-179482: {Iface:virbr2 ExpiryTime:2024-06-03 12:51:38 +0000 UTC Type:0 Mac:52:54:00:4e:c9:d2 Iaid: IPaddr:192.168.72.223 Prefix:24 Hostname:kubernetes-upgrade-179482 Clientid:01:52:54:00:4e:c9:d2}
	I0603 11:51:49.726967   56023 main.go:141] libmachine: (kubernetes-upgrade-179482) DBG | domain kubernetes-upgrade-179482 has defined IP address 192.168.72.223 and MAC address 52:54:00:4e:c9:d2 in network mk-kubernetes-upgrade-179482
	I0603 11:51:49.727129   56023 main.go:141] libmachine: (kubernetes-upgrade-179482) Calling .GetSSHPort
	I0603 11:51:49.727300   56023 main.go:141] libmachine: (kubernetes-upgrade-179482) Calling .GetSSHKeyPath
	I0603 11:51:49.727340   56023 main.go:141] libmachine: (kubernetes-upgrade-179482) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:c9:d2", ip: ""} in network mk-kubernetes-upgrade-179482: {Iface:virbr2 ExpiryTime:2024-06-03 12:51:38 +0000 UTC Type:0 Mac:52:54:00:4e:c9:d2 Iaid: IPaddr:192.168.72.223 Prefix:24 Hostname:kubernetes-upgrade-179482 Clientid:01:52:54:00:4e:c9:d2}
	I0603 11:51:49.727368   56023 main.go:141] libmachine: (kubernetes-upgrade-179482) DBG | domain kubernetes-upgrade-179482 has defined IP address 192.168.72.223 and MAC address 52:54:00:4e:c9:d2 in network mk-kubernetes-upgrade-179482
	I0603 11:51:49.727456   56023 main.go:141] libmachine: (kubernetes-upgrade-179482) Calling .GetSSHUsername
	I0603 11:51:49.727548   56023 main.go:141] libmachine: (kubernetes-upgrade-179482) Calling .GetSSHPort
	I0603 11:51:49.727628   56023 sshutil.go:53] new ssh client: &{IP:192.168.72.223 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19008-7755/.minikube/machines/kubernetes-upgrade-179482/id_rsa Username:docker}
	I0603 11:51:49.727697   56023 main.go:141] libmachine: (kubernetes-upgrade-179482) Calling .GetSSHKeyPath
	I0603 11:51:49.727838   56023 main.go:141] libmachine: (kubernetes-upgrade-179482) Calling .GetSSHUsername
	I0603 11:51:49.727977   56023 sshutil.go:53] new ssh client: &{IP:192.168.72.223 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19008-7755/.minikube/machines/kubernetes-upgrade-179482/id_rsa Username:docker}
	I0603 11:51:49.804729   56023 ssh_runner.go:195] Run: systemctl --version
	I0603 11:51:49.830176   56023 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0603 11:51:49.992211   56023 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0603 11:51:49.998485   56023 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0603 11:51:49.998563   56023 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0603 11:51:50.016108   56023 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0603 11:51:50.016136   56023 start.go:494] detecting cgroup driver to use...
	I0603 11:51:50.016200   56023 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0603 11:51:50.032639   56023 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0603 11:51:50.047634   56023 docker.go:217] disabling cri-docker service (if available) ...
	I0603 11:51:50.047703   56023 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0603 11:51:50.063730   56023 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0603 11:51:50.078002   56023 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0603 11:51:50.187474   56023 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0603 11:51:50.346117   56023 docker.go:233] disabling docker service ...
	I0603 11:51:50.346181   56023 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0603 11:51:50.360351   56023 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0603 11:51:50.373081   56023 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0603 11:51:50.502025   56023 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0603 11:51:50.620452   56023 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0603 11:51:50.635008   56023 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0603 11:51:50.654038   56023 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0603 11:51:50.654087   56023 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 11:51:50.664825   56023 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0603 11:51:50.664876   56023 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 11:51:50.675467   56023 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 11:51:50.686088   56023 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
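	(Side note: the sed edits above adjust the CRI-O drop-in so the runtime uses the expected pause image and the cgroupfs driver. A sketch of the relevant keys in /etc/crio/crio.conf.d/02-crio.conf after the edits, showing only the lines touched here; the real file contains more:
	  pause_image = "registry.k8s.io/pause:3.2"
	  cgroup_manager = "cgroupfs"
	  conmon_cgroup = "pod")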
	I0603 11:51:50.696402   56023 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0603 11:51:50.707682   56023 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0603 11:51:50.716993   56023 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0603 11:51:50.717059   56023 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0603 11:51:50.734516   56023 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0603 11:51:50.744118   56023 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0603 11:51:50.867304   56023 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0603 11:51:51.021736   56023 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0603 11:51:51.021831   56023 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0603 11:51:51.027390   56023 start.go:562] Will wait 60s for crictl version
	I0603 11:51:51.027451   56023 ssh_runner.go:195] Run: which crictl
	I0603 11:51:51.031802   56023 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0603 11:51:51.082860   56023 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0603 11:51:51.082939   56023 ssh_runner.go:195] Run: crio --version
	I0603 11:51:51.111153   56023 ssh_runner.go:195] Run: crio --version
	I0603 11:51:51.146647   56023 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0603 11:51:51.148108   56023 main.go:141] libmachine: (kubernetes-upgrade-179482) Calling .GetIP
	I0603 11:51:51.153151   56023 main.go:141] libmachine: (kubernetes-upgrade-179482) DBG | domain kubernetes-upgrade-179482 has defined MAC address 52:54:00:4e:c9:d2 in network mk-kubernetes-upgrade-179482
	I0603 11:51:51.153726   56023 main.go:141] libmachine: (kubernetes-upgrade-179482) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:c9:d2", ip: ""} in network mk-kubernetes-upgrade-179482: {Iface:virbr2 ExpiryTime:2024-06-03 12:51:38 +0000 UTC Type:0 Mac:52:54:00:4e:c9:d2 Iaid: IPaddr:192.168.72.223 Prefix:24 Hostname:kubernetes-upgrade-179482 Clientid:01:52:54:00:4e:c9:d2}
	I0603 11:51:51.153761   56023 main.go:141] libmachine: (kubernetes-upgrade-179482) DBG | domain kubernetes-upgrade-179482 has defined IP address 192.168.72.223 and MAC address 52:54:00:4e:c9:d2 in network mk-kubernetes-upgrade-179482
	I0603 11:51:51.153982   56023 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0603 11:51:51.158625   56023 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0603 11:51:51.173386   56023 kubeadm.go:877] updating cluster {Name:kubernetes-upgrade-179482 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-179482 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.223 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0603 11:51:51.173475   56023 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0603 11:51:51.173527   56023 ssh_runner.go:195] Run: sudo crictl images --output json
	I0603 11:51:51.207983   56023 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0603 11:51:51.208055   56023 ssh_runner.go:195] Run: which lz4
	I0603 11:51:51.212140   56023 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0603 11:51:51.217099   56023 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0603 11:51:51.217128   56023 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0603 11:51:52.978026   56023 crio.go:462] duration metric: took 1.765923707s to copy over tarball
	I0603 11:51:52.978101   56023 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0603 11:51:55.563605   56023 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.585474204s)
	I0603 11:51:55.563634   56023 crio.go:469] duration metric: took 2.585580386s to extract the tarball
	I0603 11:51:55.563643   56023 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0603 11:51:55.612451   56023 ssh_runner.go:195] Run: sudo crictl images --output json
	I0603 11:51:55.658039   56023 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0603 11:51:55.658071   56023 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0603 11:51:55.658157   56023 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0603 11:51:55.658194   56023 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0603 11:51:55.658228   56023 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0603 11:51:55.658265   56023 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0603 11:51:55.658218   56023 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0603 11:51:55.658273   56023 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0603 11:51:55.658241   56023 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0603 11:51:55.658157   56023 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0603 11:51:55.659457   56023 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0603 11:51:55.659862   56023 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0603 11:51:55.659867   56023 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0603 11:51:55.659877   56023 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0603 11:51:55.659877   56023 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0603 11:51:55.659862   56023 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0603 11:51:55.659910   56023 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0603 11:51:55.659869   56023 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0603 11:51:55.854836   56023 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0603 11:51:55.861248   56023 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0603 11:51:55.870740   56023 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0603 11:51:55.880873   56023 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0603 11:51:55.886682   56023 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0603 11:51:55.920087   56023 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0603 11:51:55.943961   56023 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0603 11:51:55.944001   56023 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0603 11:51:55.944035   56023 ssh_runner.go:195] Run: which crictl
	I0603 11:51:55.945527   56023 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0603 11:51:55.949535   56023 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0603 11:51:55.949573   56023 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0603 11:51:55.949611   56023 ssh_runner.go:195] Run: which crictl
	I0603 11:51:56.010153   56023 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0603 11:51:56.010200   56023 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0603 11:51:56.010249   56023 ssh_runner.go:195] Run: which crictl
	I0603 11:51:56.028103   56023 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0603 11:51:56.028149   56023 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0603 11:51:56.028197   56023 ssh_runner.go:195] Run: which crictl
	I0603 11:51:56.032734   56023 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0603 11:51:56.032768   56023 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0603 11:51:56.032801   56023 ssh_runner.go:195] Run: which crictl
	I0603 11:51:56.057205   56023 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0603 11:51:56.057252   56023 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0603 11:51:56.057267   56023 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0603 11:51:56.057290   56023 ssh_runner.go:195] Run: which crictl
	I0603 11:51:56.074922   56023 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0603 11:51:56.074972   56023 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0603 11:51:56.075019   56023 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0603 11:51:56.075074   56023 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0603 11:51:56.075114   56023 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0603 11:51:56.075293   56023 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0603 11:51:56.075353   56023 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0603 11:51:56.075391   56023 ssh_runner.go:195] Run: which crictl
	I0603 11:51:56.166743   56023 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/19008-7755/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0603 11:51:56.218097   56023 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/19008-7755/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0603 11:51:56.218176   56023 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/19008-7755/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0603 11:51:56.218207   56023 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/19008-7755/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0603 11:51:56.218234   56023 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/19008-7755/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0603 11:51:56.218209   56023 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/19008-7755/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0603 11:51:56.218332   56023 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0603 11:51:56.249821   56023 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/19008-7755/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0603 11:51:56.604185   56023 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0603 11:51:56.753974   56023 cache_images.go:92] duration metric: took 1.095881383s to LoadCachedImages
	W0603 11:51:56.754079   56023 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/19008-7755/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0: no such file or directory
	X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/19008-7755/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0: no such file or directory
	I0603 11:51:56.754104   56023 kubeadm.go:928] updating node { 192.168.72.223 8443 v1.20.0 crio true true} ...
	I0603 11:51:56.754235   56023 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=kubernetes-upgrade-179482 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.72.223
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-179482 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0603 11:51:56.754331   56023 ssh_runner.go:195] Run: crio config
	I0603 11:51:56.801449   56023 cni.go:84] Creating CNI manager for ""
	I0603 11:51:56.801475   56023 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0603 11:51:56.801487   56023 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0603 11:51:56.801511   56023 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.223 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-179482 NodeName:kubernetes-upgrade-179482 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.223"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.223 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0603 11:51:56.801696   56023 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.223
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "kubernetes-upgrade-179482"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.223
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.223"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0603 11:51:56.801770   56023 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0603 11:51:56.812448   56023 binaries.go:44] Found k8s binaries, skipping transfer
	I0603 11:51:56.812512   56023 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0603 11:51:56.822157   56023 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (433 bytes)
	I0603 11:51:56.838809   56023 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0603 11:51:56.855449   56023 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2126 bytes)
	I0603 11:51:56.872093   56023 ssh_runner.go:195] Run: grep 192.168.72.223	control-plane.minikube.internal$ /etc/hosts
	I0603 11:51:56.876512   56023 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.223	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0603 11:51:56.892194   56023 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0603 11:51:57.029642   56023 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0603 11:51:57.049328   56023 certs.go:68] Setting up /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/kubernetes-upgrade-179482 for IP: 192.168.72.223
	I0603 11:51:57.049356   56023 certs.go:194] generating shared ca certs ...
	I0603 11:51:57.049375   56023 certs.go:226] acquiring lock for ca certs: {Name:mk50092df3bdecbf959926adc377ee06b75aacd9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 11:51:57.049553   56023 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19008-7755/.minikube/ca.key
	I0603 11:51:57.049610   56023 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19008-7755/.minikube/proxy-client-ca.key
	I0603 11:51:57.049623   56023 certs.go:256] generating profile certs ...
	I0603 11:51:57.049686   56023 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/kubernetes-upgrade-179482/client.key
	I0603 11:51:57.049704   56023 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/kubernetes-upgrade-179482/client.crt with IP's: []
	I0603 11:51:57.493681   56023 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/kubernetes-upgrade-179482/client.crt ...
	I0603 11:51:57.493717   56023 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/kubernetes-upgrade-179482/client.crt: {Name:mk90826535c68ebfe9a5e4c00fd60d61b349a3e9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 11:51:57.493869   56023 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/kubernetes-upgrade-179482/client.key ...
	I0603 11:51:57.493883   56023 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/kubernetes-upgrade-179482/client.key: {Name:mk43a62075cf5d2f641abb795124780e6f7e441a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 11:51:57.493957   56023 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/kubernetes-upgrade-179482/apiserver.key.c167f4fa
	I0603 11:51:57.493973   56023 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/kubernetes-upgrade-179482/apiserver.crt.c167f4fa with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.72.223]
	I0603 11:51:57.621078   56023 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/kubernetes-upgrade-179482/apiserver.crt.c167f4fa ...
	I0603 11:51:57.621109   56023 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/kubernetes-upgrade-179482/apiserver.crt.c167f4fa: {Name:mkf9ce9bbf3a988eedf16f2722533023768ca3d2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 11:51:57.621290   56023 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/kubernetes-upgrade-179482/apiserver.key.c167f4fa ...
	I0603 11:51:57.621311   56023 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/kubernetes-upgrade-179482/apiserver.key.c167f4fa: {Name:mkb5a60ac922562219df587e8b5d50e1544992b1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 11:51:57.621404   56023 certs.go:381] copying /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/kubernetes-upgrade-179482/apiserver.crt.c167f4fa -> /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/kubernetes-upgrade-179482/apiserver.crt
	I0603 11:51:57.621496   56023 certs.go:385] copying /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/kubernetes-upgrade-179482/apiserver.key.c167f4fa -> /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/kubernetes-upgrade-179482/apiserver.key
	I0603 11:51:57.621568   56023 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/kubernetes-upgrade-179482/proxy-client.key
	I0603 11:51:57.621589   56023 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/kubernetes-upgrade-179482/proxy-client.crt with IP's: []
	I0603 11:51:57.730977   56023 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/kubernetes-upgrade-179482/proxy-client.crt ...
	I0603 11:51:57.731007   56023 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/kubernetes-upgrade-179482/proxy-client.crt: {Name:mkaf45f46b320467b7895b0d10845fec02ec0675 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 11:51:57.731194   56023 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/kubernetes-upgrade-179482/proxy-client.key ...
	I0603 11:51:57.731211   56023 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/kubernetes-upgrade-179482/proxy-client.key: {Name:mk550a1344d963f930ff13209c9c83bf70fa03ad Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 11:51:57.731429   56023 certs.go:484] found cert: /home/jenkins/minikube-integration/19008-7755/.minikube/certs/15028.pem (1338 bytes)
	W0603 11:51:57.731513   56023 certs.go:480] ignoring /home/jenkins/minikube-integration/19008-7755/.minikube/certs/15028_empty.pem, impossibly tiny 0 bytes
	I0603 11:51:57.731528   56023 certs.go:484] found cert: /home/jenkins/minikube-integration/19008-7755/.minikube/certs/ca-key.pem (1679 bytes)
	I0603 11:51:57.731562   56023 certs.go:484] found cert: /home/jenkins/minikube-integration/19008-7755/.minikube/certs/ca.pem (1082 bytes)
	I0603 11:51:57.731597   56023 certs.go:484] found cert: /home/jenkins/minikube-integration/19008-7755/.minikube/certs/cert.pem (1123 bytes)
	I0603 11:51:57.731626   56023 certs.go:484] found cert: /home/jenkins/minikube-integration/19008-7755/.minikube/certs/key.pem (1679 bytes)
	I0603 11:51:57.731677   56023 certs.go:484] found cert: /home/jenkins/minikube-integration/19008-7755/.minikube/files/etc/ssl/certs/150282.pem (1708 bytes)
	I0603 11:51:57.732469   56023 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0603 11:51:57.764765   56023 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0603 11:51:57.791702   56023 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0603 11:51:57.819527   56023 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0603 11:51:57.845792   56023 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/kubernetes-upgrade-179482/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0603 11:51:57.872091   56023 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/kubernetes-upgrade-179482/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0603 11:51:57.914988   56023 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/kubernetes-upgrade-179482/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0603 11:51:57.955624   56023 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/kubernetes-upgrade-179482/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0603 11:51:57.983100   56023 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0603 11:51:58.010834   56023 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/certs/15028.pem --> /usr/share/ca-certificates/15028.pem (1338 bytes)
	I0603 11:51:58.040653   56023 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/files/etc/ssl/certs/150282.pem --> /usr/share/ca-certificates/150282.pem (1708 bytes)
	I0603 11:51:58.068049   56023 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0603 11:51:58.092213   56023 ssh_runner.go:195] Run: openssl version
	I0603 11:51:58.100145   56023 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/150282.pem && ln -fs /usr/share/ca-certificates/150282.pem /etc/ssl/certs/150282.pem"
	I0603 11:51:58.115898   56023 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/150282.pem
	I0603 11:51:58.120759   56023 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jun  3 10:51 /usr/share/ca-certificates/150282.pem
	I0603 11:51:58.120811   56023 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/150282.pem
	I0603 11:51:58.126867   56023 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/150282.pem /etc/ssl/certs/3ec20f2e.0"
	I0603 11:51:58.137975   56023 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0603 11:51:58.149141   56023 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0603 11:51:58.153702   56023 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jun  3 10:39 /usr/share/ca-certificates/minikubeCA.pem
	I0603 11:51:58.153751   56023 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0603 11:51:58.159391   56023 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0603 11:51:58.170137   56023 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15028.pem && ln -fs /usr/share/ca-certificates/15028.pem /etc/ssl/certs/15028.pem"
	I0603 11:51:58.181958   56023 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15028.pem
	I0603 11:51:58.186721   56023 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jun  3 10:51 /usr/share/ca-certificates/15028.pem
	I0603 11:51:58.186780   56023 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15028.pem
	I0603 11:51:58.192788   56023 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/15028.pem /etc/ssl/certs/51391683.0"
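Note: the openssl/ln pairs above follow the standard OpenSSL CA directory convention: each trusted certificate is linked under /etc/ssl/certs by its subject hash plus a ".0" suffix (b5213941.0 for minikubeCA.pem, 51391683.0 for 15028.pem, and so on), which is how TLS verification on the node finds the CA by hash. A minimal sketch of the same step done by hand, assuming the minikubeCA.pem path from this run:

    # subject hash OpenSSL uses to look up CA certificates
    HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    # expose the cert under that hash so verification on the node can resolve it
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"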
	I0603 11:51:58.204451   56023 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0603 11:51:58.353460   56023 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0603 11:51:58.353520   56023 kubeadm.go:391] StartCluster: {Name:kubernetes-upgrade-179482 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-179482 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.223 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0603 11:51:58.353612   56023 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0603 11:51:58.353694   56023 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0603 11:51:58.401748   56023 cri.go:89] found id: ""
	I0603 11:51:58.401825   56023 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0603 11:51:58.413954   56023 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0603 11:51:58.424948   56023 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0603 11:51:58.435257   56023 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0603 11:51:58.435281   56023 kubeadm.go:156] found existing configuration files:
	
	I0603 11:51:58.435379   56023 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0603 11:51:58.444990   56023 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0603 11:51:58.445049   56023 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0603 11:51:58.455522   56023 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0603 11:51:58.465412   56023 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0603 11:51:58.465469   56023 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0603 11:51:58.475349   56023 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0603 11:51:58.488611   56023 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0603 11:51:58.488668   56023 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0603 11:51:58.503428   56023 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0603 11:51:58.515862   56023 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0603 11:51:58.515925   56023 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0603 11:51:58.527703   56023 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0603 11:51:58.684768   56023 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0603 11:51:58.684844   56023 kubeadm.go:309] [preflight] Running pre-flight checks
	I0603 11:51:58.872674   56023 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0603 11:51:58.872909   56023 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0603 11:51:58.873083   56023 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0603 11:51:59.063844   56023 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0603 11:51:59.121070   56023 out.go:204]   - Generating certificates and keys ...
	I0603 11:51:59.121187   56023 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0603 11:51:59.121306   56023 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0603 11:51:59.230586   56023 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0603 11:51:59.523110   56023 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0603 11:51:59.646457   56023 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0603 11:51:59.754200   56023 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0603 11:51:59.910450   56023 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0603 11:51:59.910619   56023 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-179482 localhost] and IPs [192.168.72.223 127.0.0.1 ::1]
	I0603 11:51:59.999381   56023 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0603 11:51:59.999682   56023 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-179482 localhost] and IPs [192.168.72.223 127.0.0.1 ::1]
	I0603 11:52:00.198603   56023 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0603 11:52:00.361221   56023 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0603 11:52:00.717276   56023 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0603 11:52:00.717652   56023 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0603 11:52:00.837963   56023 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0603 11:52:00.931125   56023 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0603 11:52:01.090171   56023 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0603 11:52:01.208373   56023 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0603 11:52:01.224463   56023 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0603 11:52:01.225565   56023 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0603 11:52:01.226245   56023 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0603 11:52:01.345431   56023 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0603 11:52:01.348336   56023 out.go:204]   - Booting up control plane ...
	I0603 11:52:01.348497   56023 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0603 11:52:01.356850   56023 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0603 11:52:01.358033   56023 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0603 11:52:01.358915   56023 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0603 11:52:01.363324   56023 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0603 11:52:41.350690   56023 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0603 11:52:41.351265   56023 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0603 11:52:41.351704   56023 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0603 11:52:46.351972   56023 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0603 11:52:46.352142   56023 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0603 11:52:56.351750   56023 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0603 11:52:56.352043   56023 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0603 11:53:16.351765   56023 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0603 11:53:16.351963   56023 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0603 11:53:56.353377   56023 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0603 11:53:56.353651   56023 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0603 11:53:56.353664   56023 kubeadm.go:309] 
	I0603 11:53:56.353712   56023 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0603 11:53:56.353760   56023 kubeadm.go:309] 		timed out waiting for the condition
	I0603 11:53:56.353772   56023 kubeadm.go:309] 
	I0603 11:53:56.353811   56023 kubeadm.go:309] 	This error is likely caused by:
	I0603 11:53:56.353856   56023 kubeadm.go:309] 		- The kubelet is not running
	I0603 11:53:56.353987   56023 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0603 11:53:56.354000   56023 kubeadm.go:309] 
	I0603 11:53:56.354141   56023 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0603 11:53:56.354193   56023 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0603 11:53:56.354222   56023 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0603 11:53:56.354230   56023 kubeadm.go:309] 
	I0603 11:53:56.354326   56023 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0603 11:53:56.354395   56023 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0603 11:53:56.354402   56023 kubeadm.go:309] 
	I0603 11:53:56.354483   56023 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0603 11:53:56.354566   56023 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0603 11:53:56.354628   56023 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0603 11:53:56.354688   56023 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0603 11:53:56.354694   56023 kubeadm.go:309] 
	I0603 11:53:56.355574   56023 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0603 11:53:56.355668   56023 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0603 11:53:56.355730   56023 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	W0603 11:53:56.355893   56023 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-179482 localhost] and IPs [192.168.72.223 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-179482 localhost] and IPs [192.168.72.223 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-179482 localhost] and IPs [192.168.72.223 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-179482 localhost] and IPs [192.168.72.223 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
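Note: at this point the first kubeadm init has given up because the kubelet never answered the healthz probe on 127.0.0.1:10248, so the control-plane static pods were never admitted. Minikube's recovery, visible in the next lines, is to wipe the partial state with kubeadm reset and rerun init with the same config. A hedged sketch of the equivalent manual loop on the node (the reset flags are copied from this log; the remaining commands are the checks kubeadm itself suggests above):

    # discard the partially-initialized control plane before retrying
    sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" \
      kubeadm reset --cri-socket /var/run/crio/crio.sock --force
    # the probe kubeadm polls; it should return "ok" once the kubelet is healthy
    curl -sSL http://localhost:10248/healthz
    # find out why the kubelet never came up before retrying the init
    systemctl status kubelet
    journalctl -xeu kubelet | tail -n 100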
	I0603 11:53:56.355949   56023 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0603 11:53:58.554084   56023 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (2.198101025s)
	I0603 11:53:58.554176   56023 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0603 11:53:58.570233   56023 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0603 11:53:58.580516   56023 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0603 11:53:58.580540   56023 kubeadm.go:156] found existing configuration files:
	
	I0603 11:53:58.580586   56023 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0603 11:53:58.591125   56023 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0603 11:53:58.591228   56023 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0603 11:53:58.603785   56023 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0603 11:53:58.616125   56023 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0603 11:53:58.616186   56023 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0603 11:53:58.630542   56023 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0603 11:53:58.640463   56023 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0603 11:53:58.640532   56023 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0603 11:53:58.650916   56023 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0603 11:53:58.664451   56023 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0603 11:53:58.664522   56023 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0603 11:53:58.678446   56023 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0603 11:53:58.808149   56023 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0603 11:53:58.808224   56023 kubeadm.go:309] [preflight] Running pre-flight checks
	I0603 11:53:59.015573   56023 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0603 11:53:59.015718   56023 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0603 11:53:59.015848   56023 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0603 11:53:59.320115   56023 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0603 11:53:59.321970   56023 out.go:204]   - Generating certificates and keys ...
	I0603 11:53:59.322075   56023 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0603 11:53:59.322145   56023 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0603 11:53:59.322227   56023 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0603 11:53:59.322296   56023 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0603 11:53:59.322375   56023 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0603 11:53:59.322429   56023 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0603 11:53:59.322492   56023 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0603 11:53:59.322573   56023 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0603 11:53:59.322655   56023 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0603 11:53:59.322747   56023 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0603 11:53:59.322787   56023 kubeadm.go:309] [certs] Using the existing "sa" key
	I0603 11:53:59.322846   56023 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0603 11:53:59.619271   56023 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0603 11:53:59.737284   56023 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0603 11:53:59.944289   56023 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0603 11:54:00.189963   56023 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0603 11:54:00.207973   56023 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0603 11:54:00.208505   56023 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0603 11:54:00.208724   56023 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0603 11:54:00.398850   56023 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0603 11:54:00.401498   56023 out.go:204]   - Booting up control plane ...
	I0603 11:54:00.401622   56023 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0603 11:54:00.402469   56023 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0603 11:54:00.404440   56023 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0603 11:54:00.406610   56023 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0603 11:54:00.410749   56023 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0603 11:54:40.409356   56023 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0603 11:54:40.410337   56023 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0603 11:54:40.410519   56023 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0603 11:54:45.411478   56023 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0603 11:54:45.411758   56023 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0603 11:54:55.411782   56023 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0603 11:54:55.412083   56023 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0603 11:55:15.412292   56023 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0603 11:55:15.412579   56023 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0603 11:55:55.413998   56023 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0603 11:55:55.414311   56023 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0603 11:55:55.414328   56023 kubeadm.go:309] 
	I0603 11:55:55.414383   56023 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0603 11:55:55.414464   56023 kubeadm.go:309] 		timed out waiting for the condition
	I0603 11:55:55.414474   56023 kubeadm.go:309] 
	I0603 11:55:55.414541   56023 kubeadm.go:309] 	This error is likely caused by:
	I0603 11:55:55.414588   56023 kubeadm.go:309] 		- The kubelet is not running
	I0603 11:55:55.414726   56023 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0603 11:55:55.414737   56023 kubeadm.go:309] 
	I0603 11:55:55.414887   56023 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0603 11:55:55.415070   56023 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0603 11:55:55.415149   56023 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0603 11:55:55.415159   56023 kubeadm.go:309] 
	I0603 11:55:55.415310   56023 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0603 11:55:55.415435   56023 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0603 11:55:55.415445   56023 kubeadm.go:309] 
	I0603 11:55:55.415617   56023 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0603 11:55:55.415746   56023 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0603 11:55:55.415856   56023 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0603 11:55:55.415919   56023 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0603 11:55:55.415929   56023 kubeadm.go:309] 
	I0603 11:55:55.416159   56023 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0603 11:55:55.416255   56023 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0603 11:55:55.416333   56023 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	I0603 11:55:55.416401   56023 kubeadm.go:393] duration metric: took 3m57.062886267s to StartCluster
	I0603 11:55:55.416438   56023 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 11:55:55.416498   56023 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 11:55:55.463376   56023 cri.go:89] found id: ""
	I0603 11:55:55.463409   56023 logs.go:276] 0 containers: []
	W0603 11:55:55.463420   56023 logs.go:278] No container was found matching "kube-apiserver"
	I0603 11:55:55.463429   56023 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 11:55:55.463500   56023 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 11:55:55.499669   56023 cri.go:89] found id: ""
	I0603 11:55:55.499698   56023 logs.go:276] 0 containers: []
	W0603 11:55:55.499708   56023 logs.go:278] No container was found matching "etcd"
	I0603 11:55:55.499714   56023 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 11:55:55.499766   56023 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 11:55:55.537447   56023 cri.go:89] found id: ""
	I0603 11:55:55.537474   56023 logs.go:276] 0 containers: []
	W0603 11:55:55.537485   56023 logs.go:278] No container was found matching "coredns"
	I0603 11:55:55.537493   56023 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 11:55:55.537556   56023 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 11:55:55.574978   56023 cri.go:89] found id: ""
	I0603 11:55:55.575005   56023 logs.go:276] 0 containers: []
	W0603 11:55:55.575015   56023 logs.go:278] No container was found matching "kube-scheduler"
	I0603 11:55:55.575023   56023 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 11:55:55.575107   56023 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 11:55:55.607620   56023 cri.go:89] found id: ""
	I0603 11:55:55.607650   56023 logs.go:276] 0 containers: []
	W0603 11:55:55.607662   56023 logs.go:278] No container was found matching "kube-proxy"
	I0603 11:55:55.607670   56023 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 11:55:55.607726   56023 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 11:55:55.645542   56023 cri.go:89] found id: ""
	I0603 11:55:55.645566   56023 logs.go:276] 0 containers: []
	W0603 11:55:55.645573   56023 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 11:55:55.645578   56023 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 11:55:55.645623   56023 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 11:55:55.683902   56023 cri.go:89] found id: ""
	I0603 11:55:55.683926   56023 logs.go:276] 0 containers: []
	W0603 11:55:55.683933   56023 logs.go:278] No container was found matching "kindnet"
	I0603 11:55:55.683941   56023 logs.go:123] Gathering logs for kubelet ...
	I0603 11:55:55.683953   56023 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 11:55:55.745472   56023 logs.go:123] Gathering logs for dmesg ...
	I0603 11:55:55.745509   56023 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 11:55:55.761138   56023 logs.go:123] Gathering logs for describe nodes ...
	I0603 11:55:55.761184   56023 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 11:55:55.914532   56023 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 11:55:55.914556   56023 logs.go:123] Gathering logs for CRI-O ...
	I0603 11:55:55.914570   56023 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 11:55:56.034595   56023 logs.go:123] Gathering logs for container status ...
	I0603 11:55:56.034631   56023 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
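Note: with the retry also stuck on the kubelet health check, minikube falls back to collecting diagnostics over SSH in the Run: lines above: the kubelet and CRI-O journals, dmesg, kubectl describe nodes (which fails because the API server never came up), and a crictl/docker container listing. The same evidence can be pulled from the host; a sketch assuming the profile name from this run and standard minikube flags:

    # bundle the full log set for this profile into a file
    minikube logs -p kubernetes-upgrade-179482 --file=kubernetes-upgrade.log
    # or rerun the individual probes minikube used above
    minikube ssh -p kubernetes-upgrade-179482 -- sudo journalctl -u kubelet -n 400
    minikube ssh -p kubernetes-upgrade-179482 -- sudo journalctl -u crio -n 400
    minikube ssh -p kubernetes-upgrade-179482 -- sudo crictl ps -a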
	W0603 11:55:56.090344   56023 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0603 11:55:56.090395   56023 out.go:239] * 
	W0603 11:55:56.090459   56023 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0603 11:55:56.090487   56023 out.go:239] * 
	W0603 11:55:56.091577   56023 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0603 11:55:56.095530   56023 out.go:177] 
	W0603 11:55:56.096918   56023 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0603 11:55:56.096995   56023 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0603 11:55:56.097034   56023 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0603 11:55:56.098499   56023 out.go:177] 

                                                
                                                
** /stderr **
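The failure captured above is minikube's K8S_KUBELET_NOT_RUNNING path: kubeadm gives up after the kubelet never answers http://localhost:10248/healthz. A minimal triage sketch built only from the commands the log itself recommends (run inside the VM, e.g. via minikube ssh -p kubernetes-upgrade-179482; this script is illustrative and not part of the test suite):

    #!/usr/bin/env bash
    # Illustrative follow-up for the kubelet health failure shown above.
    # 1. Is the kubelet answering its healthz port at all?
    curl -sS --max-time 2 http://localhost:10248/healthz || echo "kubelet healthz unreachable"
    # 2. Service state and recent logs, as the kubeadm output suggests.
    sudo systemctl status kubelet --no-pager
    sudo journalctl -xeu kubelet --no-pager | tail -n 50
    # 3. Did any control-plane container start and crash under CRI-O?
    sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause

If the kubelet logs point at a cgroup-driver mismatch, the suggestion logged above (passing --extra-config=kubelet.cgroup-driver=systemd to minikube start) is the next thing to try.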
version_upgrade_test.go:224: failed to start minikube HEAD with oldest k8s version: out/minikube-linux-amd64 start -p kubernetes-upgrade-179482 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 109
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-179482
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-179482: (2.311039956s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-179482 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-179482 status --format={{.Host}}: exit status 7 (65.753067ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
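The "(may be ok)" note is because the profile was deliberately stopped two steps earlier, so a non-zero exit from the status check is expected here. A hedged sketch of that tolerance (profile name and binary path taken from this run; the exit-code handling is illustrative shell, not the test's actual Go code):

    # After a deliberate stop, status reports the host as Stopped and exits non-zero;
    # only treat other outcomes as fatal before attempting the upgrade start.
    host=$(out/minikube-linux-amd64 -p kubernetes-upgrade-179482 status --format='{{.Host}}')
    rc=$?
    if [ "$rc" -ne 0 ] && [ "$host" != "Stopped" ]; then
        echo "unexpected status failure (exit $rc): $host" >&2
        exit 1
    fi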
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-179482 --memory=2200 --kubernetes-version=v1.30.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-179482 --memory=2200 --kubernetes-version=v1.30.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (39.46418075s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-179482 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-179482 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-179482 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2  --container-runtime=crio: exit status 106 (76.160252ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-179482] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19008
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19008-7755/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19008-7755/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.30.1 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-179482
	    minikube start -p kubernetes-upgrade-179482 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-1794822 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.30.1, by running:
	    
	    minikube start -p kubernetes-upgrade-179482 --kubernetes-version=v1.30.1
	    

                                                
                                                
** /stderr **
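The downgrade is rejected by design (exit status 106, K8S_DOWNGRADE_UNSUPPORTED), and the stderr above lists three ways forward. The test below takes the third path and simply restarts at v1.30.1; for reference, option 1 from the suggestion would look like the sketch below (destructive: it deletes the existing profile; the driver and runtime flags are carried over from the test invocation as an assumption):

    # Option 1 from the suggestion above: recreate the profile at the older version.
    minikube delete -p kubernetes-upgrade-179482
    minikube start -p kubernetes-upgrade-179482 --kubernetes-version=v1.20.0 \
        --driver=kvm2 --container-runtime=crio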
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-179482 --memory=2200 --kubernetes-version=v1.30.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-179482 --memory=2200 --kubernetes-version=v1.30.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (26.797150323s)
version_upgrade_test.go:279: *** TestKubernetesUpgrade FAILED at 2024-06-03 11:57:04.93485251 +0000 UTC m=+4724.839272416
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p kubernetes-upgrade-179482 -n kubernetes-upgrade-179482
helpers_test.go:244: <<< TestKubernetesUpgrade FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestKubernetesUpgrade]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-179482 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p kubernetes-upgrade-179482 logs -n 25: (1.391579648s)
helpers_test.go:252: TestKubernetesUpgrade logs: 
-- stdout --
	
	==> Audit <==
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                         Args                         |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p enable-default-cni-034991                         | enable-default-cni-034991 | jenkins | v1.33.1 | 03 Jun 24 11:56 UTC | 03 Jun 24 11:56 UTC |
	|         | sudo systemctl cat kubelet                           |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-034991                         | enable-default-cni-034991 | jenkins | v1.33.1 | 03 Jun 24 11:56 UTC | 03 Jun 24 11:56 UTC |
	|         | sudo journalctl -xeu kubelet                         |                           |         |         |                     |                     |
	|         | --all --full --no-pager                              |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-034991                         | enable-default-cni-034991 | jenkins | v1.33.1 | 03 Jun 24 11:56 UTC | 03 Jun 24 11:56 UTC |
	|         | sudo cat                                             |                           |         |         |                     |                     |
	|         | /etc/kubernetes/kubelet.conf                         |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-034991                         | enable-default-cni-034991 | jenkins | v1.33.1 | 03 Jun 24 11:56 UTC | 03 Jun 24 11:56 UTC |
	|         | sudo cat                                             |                           |         |         |                     |                     |
	|         | /var/lib/kubelet/config.yaml                         |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-034991                         | enable-default-cni-034991 | jenkins | v1.33.1 | 03 Jun 24 11:56 UTC |                     |
	|         | sudo systemctl status docker                         |                           |         |         |                     |                     |
	|         | --all --full --no-pager                              |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-034991                         | enable-default-cni-034991 | jenkins | v1.33.1 | 03 Jun 24 11:56 UTC | 03 Jun 24 11:56 UTC |
	|         | sudo systemctl cat docker                            |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-034991                         | enable-default-cni-034991 | jenkins | v1.33.1 | 03 Jun 24 11:56 UTC | 03 Jun 24 11:56 UTC |
	|         | sudo cat                                             |                           |         |         |                     |                     |
	|         | /etc/docker/daemon.json                              |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-034991                         | enable-default-cni-034991 | jenkins | v1.33.1 | 03 Jun 24 11:56 UTC |                     |
	|         | sudo docker system info                              |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-034991                         | enable-default-cni-034991 | jenkins | v1.33.1 | 03 Jun 24 11:56 UTC |                     |
	|         | sudo systemctl status                                |                           |         |         |                     |                     |
	|         | cri-docker --all --full                              |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-034991                         | enable-default-cni-034991 | jenkins | v1.33.1 | 03 Jun 24 11:56 UTC | 03 Jun 24 11:56 UTC |
	|         | sudo systemctl cat cri-docker                        |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-034991 sudo cat                | enable-default-cni-034991 | jenkins | v1.33.1 | 03 Jun 24 11:56 UTC |                     |
	|         | /etc/systemd/system/cri-docker.service.d/10-cni.conf |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-034991 sudo cat                | enable-default-cni-034991 | jenkins | v1.33.1 | 03 Jun 24 11:56 UTC | 03 Jun 24 11:56 UTC |
	|         | /usr/lib/systemd/system/cri-docker.service           |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-034991                         | enable-default-cni-034991 | jenkins | v1.33.1 | 03 Jun 24 11:56 UTC | 03 Jun 24 11:56 UTC |
	|         | sudo cri-dockerd --version                           |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-034991                         | enable-default-cni-034991 | jenkins | v1.33.1 | 03 Jun 24 11:56 UTC |                     |
	|         | sudo systemctl status                                |                           |         |         |                     |                     |
	|         | containerd --all --full                              |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-034991                         | enable-default-cni-034991 | jenkins | v1.33.1 | 03 Jun 24 11:56 UTC | 03 Jun 24 11:56 UTC |
	|         | sudo systemctl cat containerd                        |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-034991 sudo cat                | enable-default-cni-034991 | jenkins | v1.33.1 | 03 Jun 24 11:56 UTC | 03 Jun 24 11:56 UTC |
	|         | /lib/systemd/system/containerd.service               |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-034991                         | enable-default-cni-034991 | jenkins | v1.33.1 | 03 Jun 24 11:56 UTC | 03 Jun 24 11:56 UTC |
	|         | sudo cat                                             |                           |         |         |                     |                     |
	|         | /etc/containerd/config.toml                          |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-034991                         | enable-default-cni-034991 | jenkins | v1.33.1 | 03 Jun 24 11:56 UTC | 03 Jun 24 11:56 UTC |
	|         | sudo containerd config dump                          |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-034991                         | enable-default-cni-034991 | jenkins | v1.33.1 | 03 Jun 24 11:56 UTC | 03 Jun 24 11:56 UTC |
	|         | sudo systemctl status crio                           |                           |         |         |                     |                     |
	|         | --all --full --no-pager                              |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-034991                         | enable-default-cni-034991 | jenkins | v1.33.1 | 03 Jun 24 11:56 UTC | 03 Jun 24 11:56 UTC |
	|         | sudo systemctl cat crio                              |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-034991                         | enable-default-cni-034991 | jenkins | v1.33.1 | 03 Jun 24 11:56 UTC | 03 Jun 24 11:56 UTC |
	|         | sudo find /etc/crio -type f                          |                           |         |         |                     |                     |
	|         | -exec sh -c 'echo {}; cat {}'                        |                           |         |         |                     |                     |
	|         | \;                                                   |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-034991                         | enable-default-cni-034991 | jenkins | v1.33.1 | 03 Jun 24 11:56 UTC | 03 Jun 24 11:56 UTC |
	|         | sudo crio config                                     |                           |         |         |                     |                     |
	| ssh     | -p bridge-034991 pgrep -a                            | bridge-034991             | jenkins | v1.33.1 | 03 Jun 24 11:56 UTC | 03 Jun 24 11:56 UTC |
	|         | kubelet                                              |                           |         |         |                     |                     |
	| delete  | -p enable-default-cni-034991                         | enable-default-cni-034991 | jenkins | v1.33.1 | 03 Jun 24 11:56 UTC | 03 Jun 24 11:56 UTC |
	| start   | -p old-k8s-version-905554                            | old-k8s-version-905554    | jenkins | v1.33.1 | 03 Jun 24 11:56 UTC |                     |
	|         | --memory=2200                                        |                           |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                        |                           |         |         |                     |                     |
	|         | --kvm-network=default                                |                           |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                        |                           |         |         |                     |                     |
	|         | --disable-driver-mounts                              |                           |         |         |                     |                     |
	|         | --keep-context=false                                 |                           |         |         |                     |                     |
	|         | --driver=kvm2                                        |                           |         |         |                     |                     |
	|         | --container-runtime=crio                             |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                         |                           |         |         |                     |                     |
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/06/03 11:56:59
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0603 11:56:59.954096   67501 out.go:291] Setting OutFile to fd 1 ...
	I0603 11:56:59.954231   67501 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0603 11:56:59.954242   67501 out.go:304] Setting ErrFile to fd 2...
	I0603 11:56:59.954249   67501 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0603 11:56:59.954503   67501 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19008-7755/.minikube/bin
	I0603 11:56:59.955232   67501 out.go:298] Setting JSON to false
	I0603 11:56:59.956626   67501 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":5965,"bootTime":1717409855,"procs":298,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1060-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0603 11:56:59.956704   67501 start.go:139] virtualization: kvm guest
	I0603 11:56:59.959087   67501 out.go:177] * [old-k8s-version-905554] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0603 11:56:59.960889   67501 out.go:177]   - MINIKUBE_LOCATION=19008
	I0603 11:56:59.962159   67501 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0603 11:56:59.960942   67501 notify.go:220] Checking for updates...
	I0603 11:56:59.964732   67501 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19008-7755/kubeconfig
	I0603 11:56:59.966070   67501 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19008-7755/.minikube
	I0603 11:56:59.967269   67501 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0603 11:56:59.968428   67501 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0603 11:56:59.970236   67501 config.go:182] Loaded profile config "bridge-034991": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0603 11:56:59.970340   67501 config.go:182] Loaded profile config "calico-034991": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0603 11:56:59.970420   67501 config.go:182] Loaded profile config "kubernetes-upgrade-179482": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0603 11:56:59.970520   67501 driver.go:392] Setting default libvirt URI to qemu:///system
	I0603 11:57:00.018052   67501 out.go:177] * Using the kvm2 driver based on user configuration
	I0603 11:57:00.019276   67501 start.go:297] selected driver: kvm2
	I0603 11:57:00.019304   67501 start.go:901] validating driver "kvm2" against <nil>
	I0603 11:57:00.019320   67501 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0603 11:57:00.020249   67501 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0603 11:57:00.020352   67501 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19008-7755/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0603 11:57:00.038824   67501 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0603 11:57:00.038881   67501 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0603 11:57:00.039212   67501 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0603 11:57:00.039252   67501 cni.go:84] Creating CNI manager for ""
	I0603 11:57:00.039264   67501 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0603 11:57:00.039274   67501 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0603 11:57:00.039344   67501 start.go:340] cluster config:
	{Name:old-k8s-version-905554 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-905554 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0603 11:57:00.039474   67501 iso.go:125] acquiring lock: {Name:mkdc8e745fc6a0fd8e502f6ad2510510ae9abf27 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0603 11:57:00.041379   67501 out.go:177] * Starting "old-k8s-version-905554" primary control-plane node in "old-k8s-version-905554" cluster
	I0603 11:57:00.043029   67501 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0603 11:57:00.043101   67501 preload.go:147] Found local preload: /home/jenkins/minikube-integration/19008-7755/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0603 11:57:00.043110   67501 cache.go:56] Caching tarball of preloaded images
	I0603 11:57:00.043191   67501 preload.go:173] Found /home/jenkins/minikube-integration/19008-7755/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0603 11:57:00.043199   67501 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0603 11:57:00.043298   67501 profile.go:143] Saving config to /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/old-k8s-version-905554/config.json ...
	I0603 11:57:00.043320   67501 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/old-k8s-version-905554/config.json: {Name:mk53959f68545452763d1b73ef91e0947b64a6ab Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 11:57:00.043447   67501 start.go:360] acquireMachinesLock for old-k8s-version-905554: {Name:mk7389fd163f37e28f7d4d842bab151f7e27bc7c Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0603 11:57:00.043488   67501 start.go:364] duration metric: took 21.579µs to acquireMachinesLock for "old-k8s-version-905554"
	I0603 11:57:00.043503   67501 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-905554 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-905554 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0603 11:57:00.043559   67501 start.go:125] createHost starting for "" (driver="kvm2")
	I0603 11:56:58.432244   66006 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 11:56:58.448463   66006 api_server.go:72] duration metric: took 1.017119689s to wait for apiserver process to appear ...
	I0603 11:56:58.448485   66006 api_server.go:88] waiting for apiserver healthz status ...
	I0603 11:56:58.448506   66006 api_server.go:253] Checking apiserver healthz at https://192.168.72.223:8443/healthz ...
	I0603 11:57:01.330797   66006 api_server.go:279] https://192.168.72.223:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0603 11:57:01.330830   66006 api_server.go:103] status: https://192.168.72.223:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0603 11:57:01.330847   66006 api_server.go:253] Checking apiserver healthz at https://192.168.72.223:8443/healthz ...
	I0603 11:57:01.419638   66006 api_server.go:279] https://192.168.72.223:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0603 11:57:01.419671   66006 api_server.go:103] status: https://192.168.72.223:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0603 11:57:01.448838   66006 api_server.go:253] Checking apiserver healthz at https://192.168.72.223:8443/healthz ...
	I0603 11:57:01.458074   66006 api_server.go:279] https://192.168.72.223:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0603 11:57:01.458097   66006 api_server.go:103] status: https://192.168.72.223:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0603 11:57:01.948622   66006 api_server.go:253] Checking apiserver healthz at https://192.168.72.223:8443/healthz ...
	I0603 11:57:01.955919   66006 api_server.go:279] https://192.168.72.223:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0603 11:57:01.955945   66006 api_server.go:103] status: https://192.168.72.223:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0603 11:57:02.449163   66006 api_server.go:253] Checking apiserver healthz at https://192.168.72.223:8443/healthz ...
	I0603 11:57:02.460619   66006 api_server.go:279] https://192.168.72.223:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0603 11:57:02.460649   66006 api_server.go:103] status: https://192.168.72.223:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0603 11:57:02.949319   66006 api_server.go:253] Checking apiserver healthz at https://192.168.72.223:8443/healthz ...
	I0603 11:57:02.954840   66006 api_server.go:279] https://192.168.72.223:8443/healthz returned 200:
	ok
	I0603 11:57:02.963802   66006 api_server.go:141] control plane version: v1.30.1
	I0603 11:57:02.963828   66006 api_server.go:131] duration metric: took 4.515335663s to wait for apiserver health ...
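	The 403 -> 500 -> 200 progression above is the apiserver coming up: the anonymous probe is rejected until the RBAC bootstrap roles exist, /healthz then reports 500 while poststarthooks are still pending, and finally returns ok. A minimal sketch of that kind of polling loop follows (not minikube's actual api_server.go; the helper name, retry interval, and the InsecureSkipVerify shortcut are illustrative assumptions):

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	// waitForHealthz polls https://<apiserver>/healthz until it returns 200 or the
	// timeout expires. Assumption: TLS verification is skipped for brevity; a real
	// client would trust the cluster CA and present client certificates instead.
	func waitForHealthz(endpoint string, timeout time.Duration) error {
		client := &http.Client{
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
			Timeout:   5 * time.Second,
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(endpoint + "/healthz")
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil // "ok"
				}
				// 403 (anonymous rejected) or 500 (poststarthooks pending): retry.
				fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("apiserver not healthy within %s", timeout)
	}

	func main() {
		if err := waitForHealthz("https://192.168.72.223:8443", 4*time.Minute); err != nil {
			fmt.Println(err)
		}
	}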
	I0603 11:57:02.963838   66006 cni.go:84] Creating CNI manager for ""
	I0603 11:57:02.963846   66006 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0603 11:57:02.965679   66006 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0603 11:57:02.967531   66006 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0603 11:57:02.982918   66006 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
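	For reference, the step above drops a bridge conflist into /etc/cni/net.d. The sketch below writes a generic bridge + host-local config of that kind; it is illustrative only, not the exact 496-byte 1-k8s.conflist minikube generates (the subnet and plugin options are assumptions):

	package main

	import "os"

	// An illustrative bridge + host-local CNI config; minikube's real template differs.
	const bridgeConflist = `{
	  "cniVersion": "0.3.1",
	  "name": "bridge",
	  "plugins": [
	    {
	      "type": "bridge",
	      "bridge": "bridge",
	      "isDefaultGateway": true,
	      "ipMasq": true,
	      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
	    },
	    { "type": "portmap", "capabilities": { "portMappings": true } }
	  ]
	}
	`

	func main() {
		if err := os.MkdirAll("/etc/cni/net.d", 0o755); err != nil {
			panic(err)
		}
		if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(bridgeConflist), 0o644); err != nil {
			panic(err)
		}
	}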
	I0603 11:57:03.008037   66006 system_pods.go:43] waiting for kube-system pods to appear ...
	I0603 11:57:03.018286   66006 system_pods.go:59] 5 kube-system pods found
	I0603 11:57:03.018394   66006 system_pods.go:61] "etcd-kubernetes-upgrade-179482" [dbc93967-33bc-48aa-be18-c8ab7fe2b6d0] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0603 11:57:03.018417   66006 system_pods.go:61] "kube-apiserver-kubernetes-upgrade-179482" [241d5b33-2c4a-4044-88a8-236687668808] Running
	I0603 11:57:03.018450   66006 system_pods.go:61] "kube-controller-manager-kubernetes-upgrade-179482" [a4ce1bb6-24bb-4615-a2f0-3e6765be5584] Running
	I0603 11:57:03.018466   66006 system_pods.go:61] "kube-scheduler-kubernetes-upgrade-179482" [3f0472ee-54e7-4ecd-a859-a0816e171d3b] Running
	I0603 11:57:03.018482   66006 system_pods.go:61] "storage-provisioner" [7dc14950-e348-4cc0-8b46-533e6ff70bb5] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I0603 11:57:03.018506   66006 system_pods.go:74] duration metric: took 10.442565ms to wait for pod list to return data ...
	I0603 11:57:03.018540   66006 node_conditions.go:102] verifying NodePressure condition ...
	I0603 11:57:03.022895   66006 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0603 11:57:03.022968   66006 node_conditions.go:123] node cpu capacity is 2
	I0603 11:57:03.022994   66006 node_conditions.go:105] duration metric: took 4.438329ms to run NodePressure ...
	I0603 11:57:03.023022   66006 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0603 11:57:03.509715   66006 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0603 11:57:03.525736   66006 ops.go:34] apiserver oom_adj: -16
	I0603 11:57:03.525758   66006 kubeadm.go:591] duration metric: took 9.010327139s to restartPrimaryControlPlane
	I0603 11:57:03.525768   66006 kubeadm.go:393] duration metric: took 9.112129118s to StartCluster
	I0603 11:57:03.525788   66006 settings.go:142] acquiring lock: {Name:mkda1bdbbfe91266270f1d999e6d56fc2830d6f8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 11:57:03.525851   66006 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19008-7755/kubeconfig
	I0603 11:57:03.527019   66006 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19008-7755/kubeconfig: {Name:mk2f45bf5da44c5570e14124ae482cac309d97b6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 11:57:03.527273   66006 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.72.223 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0603 11:57:03.528928   66006 out.go:177] * Verifying Kubernetes components...
	I0603 11:57:03.527586   66006 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0603 11:57:03.527809   66006 config.go:182] Loaded profile config "kubernetes-upgrade-179482": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0603 11:57:03.530373   66006 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0603 11:57:03.530451   66006 addons.go:69] Setting storage-provisioner=true in profile "kubernetes-upgrade-179482"
	I0603 11:57:03.530480   66006 addons.go:234] Setting addon storage-provisioner=true in "kubernetes-upgrade-179482"
	W0603 11:57:03.530489   66006 addons.go:243] addon storage-provisioner should already be in state true
	I0603 11:57:03.530514   66006 host.go:66] Checking if "kubernetes-upgrade-179482" exists ...
	I0603 11:57:03.530914   66006 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 11:57:03.530931   66006 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 11:57:03.531147   66006 addons.go:69] Setting default-storageclass=true in profile "kubernetes-upgrade-179482"
	I0603 11:57:03.531179   66006 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "kubernetes-upgrade-179482"
	I0603 11:57:03.531560   66006 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 11:57:03.531575   66006 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 11:57:03.561338   66006 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37943
	I0603 11:57:03.564390   66006 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43841
	I0603 11:57:03.564523   66006 main.go:141] libmachine: () Calling .GetVersion
	I0603 11:57:03.564956   66006 main.go:141] libmachine: () Calling .GetVersion
	I0603 11:57:03.565531   66006 main.go:141] libmachine: Using API Version  1
	I0603 11:57:03.565550   66006 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 11:57:03.565666   66006 main.go:141] libmachine: Using API Version  1
	I0603 11:57:03.565675   66006 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 11:57:03.566049   66006 main.go:141] libmachine: () Calling .GetMachineName
	I0603 11:57:03.566468   66006 main.go:141] libmachine: () Calling .GetMachineName
	I0603 11:57:03.566683   66006 main.go:141] libmachine: (kubernetes-upgrade-179482) Calling .GetState
	I0603 11:57:03.567540   66006 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 11:57:03.567571   66006 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 11:57:03.574039   66006 kapi.go:59] client config for kubernetes-upgrade-179482: &rest.Config{Host:"https://192.168.72.223:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19008-7755/.minikube/profiles/kubernetes-upgrade-179482/client.crt", KeyFile:"/home/jenkins/minikube-integration/19008-7755/.minikube/profiles/kubernetes-upgrade-179482/client.key", CAFile:"/home/jenkins/minikube-integration/19008-7755/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1cfa500), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
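	A compact sketch of what that rest.Config amounts to in practice: a typed client-go clientset backed by the profile's client certificate and the cluster CA. This is not minikube's kapi.go, just an illustration; the pod-listing call mirrors the "waiting for kube-system pods" step below.

	package main

	import (
		"context"
		"fmt"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/rest"
	)

	func main() {
		cfg := &rest.Config{
			Host: "https://192.168.72.223:8443",
			TLSClientConfig: rest.TLSClientConfig{
				CertFile: "/home/jenkins/minikube-integration/19008-7755/.minikube/profiles/kubernetes-upgrade-179482/client.crt",
				KeyFile:  "/home/jenkins/minikube-integration/19008-7755/.minikube/profiles/kubernetes-upgrade-179482/client.key",
				CAFile:   "/home/jenkins/minikube-integration/19008-7755/.minikube/ca.crt",
			},
		}
		clientset, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		pods, err := clientset.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
		if err != nil {
			panic(err)
		}
		fmt.Printf("%d kube-system pods found\n", len(pods.Items))
	}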
	I0603 11:57:03.574343   66006 addons.go:234] Setting addon default-storageclass=true in "kubernetes-upgrade-179482"
	W0603 11:57:03.574360   66006 addons.go:243] addon default-storageclass should already be in state true
	I0603 11:57:03.574388   66006 host.go:66] Checking if "kubernetes-upgrade-179482" exists ...
	I0603 11:57:03.574976   66006 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 11:57:03.575008   66006 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 11:57:03.595167   66006 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43337
	I0603 11:57:03.595177   66006 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46881
	I0603 11:57:03.599193   66006 main.go:141] libmachine: () Calling .GetVersion
	I0603 11:57:03.599264   66006 main.go:141] libmachine: () Calling .GetVersion
	I0603 11:57:03.599777   66006 main.go:141] libmachine: Using API Version  1
	I0603 11:57:03.599794   66006 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 11:57:03.599914   66006 main.go:141] libmachine: Using API Version  1
	I0603 11:57:03.599923   66006 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 11:57:03.600212   66006 main.go:141] libmachine: () Calling .GetMachineName
	I0603 11:57:03.600273   66006 main.go:141] libmachine: () Calling .GetMachineName
	I0603 11:57:03.600733   66006 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 11:57:03.600764   66006 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 11:57:03.601202   66006 main.go:141] libmachine: (kubernetes-upgrade-179482) Calling .GetState
	I0603 11:57:03.603327   66006 main.go:141] libmachine: (kubernetes-upgrade-179482) Calling .DriverName
	I0603 11:57:03.609358   66006 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0603 11:57:03.610840   66006 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0603 11:57:03.610858   66006 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0603 11:57:03.610876   66006 main.go:141] libmachine: (kubernetes-upgrade-179482) Calling .GetSSHHostname
	I0603 11:57:03.614107   66006 main.go:141] libmachine: (kubernetes-upgrade-179482) DBG | domain kubernetes-upgrade-179482 has defined MAC address 52:54:00:4e:c9:d2 in network mk-kubernetes-upgrade-179482
	I0603 11:57:03.614683   66006 main.go:141] libmachine: (kubernetes-upgrade-179482) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:c9:d2", ip: ""} in network mk-kubernetes-upgrade-179482: {Iface:virbr2 ExpiryTime:2024-06-03 12:56:10 +0000 UTC Type:0 Mac:52:54:00:4e:c9:d2 Iaid: IPaddr:192.168.72.223 Prefix:24 Hostname:kubernetes-upgrade-179482 Clientid:01:52:54:00:4e:c9:d2}
	I0603 11:57:03.614802   66006 main.go:141] libmachine: (kubernetes-upgrade-179482) DBG | domain kubernetes-upgrade-179482 has defined IP address 192.168.72.223 and MAC address 52:54:00:4e:c9:d2 in network mk-kubernetes-upgrade-179482
	I0603 11:57:03.615203   66006 main.go:141] libmachine: (kubernetes-upgrade-179482) Calling .GetSSHPort
	I0603 11:57:03.615437   66006 main.go:141] libmachine: (kubernetes-upgrade-179482) Calling .GetSSHKeyPath
	I0603 11:57:03.615592   66006 main.go:141] libmachine: (kubernetes-upgrade-179482) Calling .GetSSHUsername
	I0603 11:57:03.615739   66006 sshutil.go:53] new ssh client: &{IP:192.168.72.223 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19008-7755/.minikube/machines/kubernetes-upgrade-179482/id_rsa Username:docker}
	I0603 11:57:03.623129   66006 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46577
	I0603 11:57:03.627120   66006 main.go:141] libmachine: () Calling .GetVersion
	I0603 11:57:03.627873   66006 main.go:141] libmachine: Using API Version  1
	I0603 11:57:03.627890   66006 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 11:57:03.628272   66006 main.go:141] libmachine: () Calling .GetMachineName
	I0603 11:57:03.628453   66006 main.go:141] libmachine: (kubernetes-upgrade-179482) Calling .GetState
	I0603 11:57:03.630214   66006 main.go:141] libmachine: (kubernetes-upgrade-179482) Calling .DriverName
	I0603 11:57:03.631342   66006 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0603 11:57:03.631365   66006 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0603 11:57:03.631383   66006 main.go:141] libmachine: (kubernetes-upgrade-179482) Calling .GetSSHHostname
	I0603 11:57:03.634740   66006 main.go:141] libmachine: (kubernetes-upgrade-179482) DBG | domain kubernetes-upgrade-179482 has defined MAC address 52:54:00:4e:c9:d2 in network mk-kubernetes-upgrade-179482
	I0603 11:57:03.634823   66006 main.go:141] libmachine: (kubernetes-upgrade-179482) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:c9:d2", ip: ""} in network mk-kubernetes-upgrade-179482: {Iface:virbr2 ExpiryTime:2024-06-03 12:56:10 +0000 UTC Type:0 Mac:52:54:00:4e:c9:d2 Iaid: IPaddr:192.168.72.223 Prefix:24 Hostname:kubernetes-upgrade-179482 Clientid:01:52:54:00:4e:c9:d2}
	I0603 11:57:03.634848   66006 main.go:141] libmachine: (kubernetes-upgrade-179482) DBG | domain kubernetes-upgrade-179482 has defined IP address 192.168.72.223 and MAC address 52:54:00:4e:c9:d2 in network mk-kubernetes-upgrade-179482
	I0603 11:57:03.638783   66006 main.go:141] libmachine: (kubernetes-upgrade-179482) Calling .GetSSHPort
	I0603 11:57:03.638987   66006 main.go:141] libmachine: (kubernetes-upgrade-179482) Calling .GetSSHKeyPath
	I0603 11:57:03.639167   66006 main.go:141] libmachine: (kubernetes-upgrade-179482) Calling .GetSSHUsername
	I0603 11:57:03.639354   66006 sshutil.go:53] new ssh client: &{IP:192.168.72.223 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19008-7755/.minikube/machines/kubernetes-upgrade-179482/id_rsa Username:docker}
	I0603 11:57:03.821920   66006 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0603 11:57:03.842733   66006 api_server.go:52] waiting for apiserver process to appear ...
	I0603 11:57:03.842812   66006 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 11:57:03.858261   66006 api_server.go:72] duration metric: took 330.95724ms to wait for apiserver process to appear ...
	I0603 11:57:03.858289   66006 api_server.go:88] waiting for apiserver healthz status ...
	I0603 11:57:03.858310   66006 api_server.go:253] Checking apiserver healthz at https://192.168.72.223:8443/healthz ...
	I0603 11:57:03.864977   66006 api_server.go:279] https://192.168.72.223:8443/healthz returned 200:
	ok
	I0603 11:57:03.866000   66006 api_server.go:141] control plane version: v1.30.1
	I0603 11:57:03.866024   66006 api_server.go:131] duration metric: took 7.726877ms to wait for apiserver health ...
	I0603 11:57:03.866034   66006 system_pods.go:43] waiting for kube-system pods to appear ...
	I0603 11:57:03.873328   66006 system_pods.go:59] 5 kube-system pods found
	I0603 11:57:03.873362   66006 system_pods.go:61] "etcd-kubernetes-upgrade-179482" [dbc93967-33bc-48aa-be18-c8ab7fe2b6d0] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0603 11:57:03.873371   66006 system_pods.go:61] "kube-apiserver-kubernetes-upgrade-179482" [241d5b33-2c4a-4044-88a8-236687668808] Running
	I0603 11:57:03.873381   66006 system_pods.go:61] "kube-controller-manager-kubernetes-upgrade-179482" [a4ce1bb6-24bb-4615-a2f0-3e6765be5584] Running
	I0603 11:57:03.873405   66006 system_pods.go:61] "kube-scheduler-kubernetes-upgrade-179482" [3f0472ee-54e7-4ecd-a859-a0816e171d3b] Running
	I0603 11:57:03.873439   66006 system_pods.go:61] "storage-provisioner" [7dc14950-e348-4cc0-8b46-533e6ff70bb5] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I0603 11:57:03.873451   66006 system_pods.go:74] duration metric: took 7.410715ms to wait for pod list to return data ...
	I0603 11:57:03.873478   66006 kubeadm.go:576] duration metric: took 346.165358ms to wait for: map[apiserver:true system_pods:true]
	I0603 11:57:03.873499   66006 node_conditions.go:102] verifying NodePressure condition ...
	I0603 11:57:03.876412   66006 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0603 11:57:03.876437   66006 node_conditions.go:123] node cpu capacity is 2
	I0603 11:57:03.876447   66006 node_conditions.go:105] duration metric: took 2.942262ms to run NodePressure ...
	I0603 11:57:03.876460   66006 start.go:240] waiting for startup goroutines ...
	I0603 11:57:03.959822   66006 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0603 11:57:03.979624   66006 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
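	The ssh_runner.go lines above all reduce to "run this command inside the VM over SSH using the generated machine key". A simplified stand-in is sketched below (not minikube's ssh_runner; the function name and the InsecureIgnoreHostKey shortcut are assumptions) using golang.org/x/crypto/ssh:

	package main

	import (
		"fmt"
		"os"

		"golang.org/x/crypto/ssh"
	)

	// runRemote executes cmd on addr as user, authenticating with the private key at keyPath.
	func runRemote(addr, user, keyPath, cmd string) (string, error) {
		key, err := os.ReadFile(keyPath)
		if err != nil {
			return "", err
		}
		signer, err := ssh.ParsePrivateKey(key)
		if err != nil {
			return "", err
		}
		cfg := &ssh.ClientConfig{
			User:            user,
			Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
			HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable only for throwaway test VMs
		}
		client, err := ssh.Dial("tcp", addr, cfg)
		if err != nil {
			return "", err
		}
		defer client.Close()

		session, err := client.NewSession()
		if err != nil {
			return "", err
		}
		defer session.Close()

		out, err := session.CombinedOutput(cmd)
		return string(out), err
	}

	func main() {
		out, err := runRemote("192.168.72.223:22", "docker",
			"/home/jenkins/minikube-integration/19008-7755/.minikube/machines/kubernetes-upgrade-179482/id_rsa",
			"sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml")
		fmt.Println(out, err)
	}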
	I0603 11:57:04.851285   66006 main.go:141] libmachine: Making call to close driver server
	I0603 11:57:04.851317   66006 main.go:141] libmachine: (kubernetes-upgrade-179482) Calling .Close
	I0603 11:57:04.851387   66006 main.go:141] libmachine: Making call to close driver server
	I0603 11:57:04.851412   66006 main.go:141] libmachine: (kubernetes-upgrade-179482) Calling .Close
	I0603 11:57:04.851777   66006 main.go:141] libmachine: Successfully made call to close driver server
	I0603 11:57:04.851785   66006 main.go:141] libmachine: (kubernetes-upgrade-179482) DBG | Closing plugin on server side
	I0603 11:57:04.851787   66006 main.go:141] libmachine: (kubernetes-upgrade-179482) DBG | Closing plugin on server side
	I0603 11:57:04.851793   66006 main.go:141] libmachine: Making call to close connection to plugin binary
	I0603 11:57:04.851803   66006 main.go:141] libmachine: Making call to close driver server
	I0603 11:57:04.851816   66006 main.go:141] libmachine: (kubernetes-upgrade-179482) Calling .Close
	I0603 11:57:04.851915   66006 main.go:141] libmachine: Successfully made call to close driver server
	I0603 11:57:04.851941   66006 main.go:141] libmachine: Making call to close connection to plugin binary
	I0603 11:57:04.851950   66006 main.go:141] libmachine: Making call to close driver server
	I0603 11:57:04.851959   66006 main.go:141] libmachine: (kubernetes-upgrade-179482) Calling .Close
	I0603 11:57:04.852028   66006 main.go:141] libmachine: Successfully made call to close driver server
	I0603 11:57:04.852049   66006 main.go:141] libmachine: Making call to close connection to plugin binary
	I0603 11:57:04.852067   66006 main.go:141] libmachine: (kubernetes-upgrade-179482) DBG | Closing plugin on server side
	I0603 11:57:04.853848   66006 main.go:141] libmachine: (kubernetes-upgrade-179482) DBG | Closing plugin on server side
	I0603 11:57:04.853853   66006 main.go:141] libmachine: Successfully made call to close driver server
	I0603 11:57:04.853868   66006 main.go:141] libmachine: Making call to close connection to plugin binary
	I0603 11:57:04.860346   66006 main.go:141] libmachine: Making call to close driver server
	I0603 11:57:04.860366   66006 main.go:141] libmachine: (kubernetes-upgrade-179482) Calling .Close
	I0603 11:57:04.860589   66006 main.go:141] libmachine: Successfully made call to close driver server
	I0603 11:57:04.860606   66006 main.go:141] libmachine: Making call to close connection to plugin binary
	I0603 11:57:04.860630   66006 main.go:141] libmachine: (kubernetes-upgrade-179482) DBG | Closing plugin on server side
	I0603 11:57:04.863231   66006 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0603 11:57:04.864512   66006 addons.go:510] duration metric: took 1.336925854s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0603 11:57:04.864564   66006 start.go:245] waiting for cluster config update ...
	I0603 11:57:04.864578   66006 start.go:254] writing updated cluster config ...
	I0603 11:57:04.864848   66006 ssh_runner.go:195] Run: rm -f paused
	I0603 11:57:04.917305   66006 start.go:600] kubectl: 1.30.1, cluster: 1.30.1 (minor skew: 0)
	I0603 11:57:04.919312   66006 out.go:177] * Done! kubectl is now configured to use "kubernetes-upgrade-179482" cluster and "default" namespace by default
	I0603 11:57:00.045879   67501 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0603 11:57:00.046050   67501 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 11:57:00.046102   67501 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 11:57:00.064746   67501 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45861
	I0603 11:57:00.065231   67501 main.go:141] libmachine: () Calling .GetVersion
	I0603 11:57:00.065884   67501 main.go:141] libmachine: Using API Version  1
	I0603 11:57:00.065911   67501 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 11:57:00.066266   67501 main.go:141] libmachine: () Calling .GetMachineName
	I0603 11:57:00.066472   67501 main.go:141] libmachine: (old-k8s-version-905554) Calling .GetMachineName
	I0603 11:57:00.066654   67501 main.go:141] libmachine: (old-k8s-version-905554) Calling .DriverName
	I0603 11:57:00.066901   67501 start.go:159] libmachine.API.Create for "old-k8s-version-905554" (driver="kvm2")
	I0603 11:57:00.066927   67501 client.go:168] LocalClient.Create starting
	I0603 11:57:00.066960   67501 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19008-7755/.minikube/certs/ca.pem
	I0603 11:57:00.067005   67501 main.go:141] libmachine: Decoding PEM data...
	I0603 11:57:00.067024   67501 main.go:141] libmachine: Parsing certificate...
	I0603 11:57:00.067116   67501 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19008-7755/.minikube/certs/cert.pem
	I0603 11:57:00.067148   67501 main.go:141] libmachine: Decoding PEM data...
	I0603 11:57:00.067167   67501 main.go:141] libmachine: Parsing certificate...
	I0603 11:57:00.067194   67501 main.go:141] libmachine: Running pre-create checks...
	I0603 11:57:00.067206   67501 main.go:141] libmachine: (old-k8s-version-905554) Calling .PreCreateCheck
	I0603 11:57:00.067647   67501 main.go:141] libmachine: (old-k8s-version-905554) Calling .GetConfigRaw
	I0603 11:57:00.068172   67501 main.go:141] libmachine: Creating machine...
	I0603 11:57:00.068190   67501 main.go:141] libmachine: (old-k8s-version-905554) Calling .Create
	I0603 11:57:00.068748   67501 main.go:141] libmachine: (old-k8s-version-905554) Creating KVM machine...
	I0603 11:57:00.069870   67501 main.go:141] libmachine: (old-k8s-version-905554) DBG | found existing default KVM network
	I0603 11:57:00.071931   67501 main.go:141] libmachine: (old-k8s-version-905554) DBG | I0603 11:57:00.071636   67523 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000015920}
	I0603 11:57:00.071950   67501 main.go:141] libmachine: (old-k8s-version-905554) DBG | created network xml: 
	I0603 11:57:00.071962   67501 main.go:141] libmachine: (old-k8s-version-905554) DBG | <network>
	I0603 11:57:00.071971   67501 main.go:141] libmachine: (old-k8s-version-905554) DBG |   <name>mk-old-k8s-version-905554</name>
	I0603 11:57:00.071981   67501 main.go:141] libmachine: (old-k8s-version-905554) DBG |   <dns enable='no'/>
	I0603 11:57:00.071988   67501 main.go:141] libmachine: (old-k8s-version-905554) DBG |   
	I0603 11:57:00.071996   67501 main.go:141] libmachine: (old-k8s-version-905554) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0603 11:57:00.072009   67501 main.go:141] libmachine: (old-k8s-version-905554) DBG |     <dhcp>
	I0603 11:57:00.072016   67501 main.go:141] libmachine: (old-k8s-version-905554) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0603 11:57:00.072023   67501 main.go:141] libmachine: (old-k8s-version-905554) DBG |     </dhcp>
	I0603 11:57:00.072035   67501 main.go:141] libmachine: (old-k8s-version-905554) DBG |   </ip>
	I0603 11:57:00.072042   67501 main.go:141] libmachine: (old-k8s-version-905554) DBG |   
	I0603 11:57:00.072048   67501 main.go:141] libmachine: (old-k8s-version-905554) DBG | </network>
	I0603 11:57:00.072057   67501 main.go:141] libmachine: (old-k8s-version-905554) DBG | 
	I0603 11:57:00.076916   67501 main.go:141] libmachine: (old-k8s-version-905554) DBG | trying to create private KVM network mk-old-k8s-version-905554 192.168.39.0/24...
	I0603 11:57:00.183767   67501 main.go:141] libmachine: (old-k8s-version-905554) Setting up store path in /home/jenkins/minikube-integration/19008-7755/.minikube/machines/old-k8s-version-905554 ...
	I0603 11:57:00.183808   67501 main.go:141] libmachine: (old-k8s-version-905554) DBG | private KVM network mk-old-k8s-version-905554 192.168.39.0/24 created
	I0603 11:57:00.183835   67501 main.go:141] libmachine: (old-k8s-version-905554) Building disk image from file:///home/jenkins/minikube-integration/19008-7755/.minikube/cache/iso/amd64/minikube-v1.33.1-1716398070-18934-amd64.iso
	I0603 11:57:00.183848   67501 main.go:141] libmachine: (old-k8s-version-905554) DBG | I0603 11:57:00.182104   67523 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19008-7755/.minikube
	I0603 11:57:00.183865   67501 main.go:141] libmachine: (old-k8s-version-905554) Downloading /home/jenkins/minikube-integration/19008-7755/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19008-7755/.minikube/cache/iso/amd64/minikube-v1.33.1-1716398070-18934-amd64.iso...
	I0603 11:57:00.476438   67501 main.go:141] libmachine: (old-k8s-version-905554) DBG | I0603 11:57:00.476249   67523 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19008-7755/.minikube/machines/old-k8s-version-905554/id_rsa...
	I0603 11:57:00.619353   67501 main.go:141] libmachine: (old-k8s-version-905554) DBG | I0603 11:57:00.619208   67523 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19008-7755/.minikube/machines/old-k8s-version-905554/old-k8s-version-905554.rawdisk...
	I0603 11:57:00.619382   67501 main.go:141] libmachine: (old-k8s-version-905554) DBG | Writing magic tar header
	I0603 11:57:00.619405   67501 main.go:141] libmachine: (old-k8s-version-905554) DBG | Writing SSH key tar header
	I0603 11:57:00.619418   67501 main.go:141] libmachine: (old-k8s-version-905554) DBG | I0603 11:57:00.619371   67523 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19008-7755/.minikube/machines/old-k8s-version-905554 ...
	I0603 11:57:00.619557   67501 main.go:141] libmachine: (old-k8s-version-905554) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19008-7755/.minikube/machines/old-k8s-version-905554
	I0603 11:57:00.619594   67501 main.go:141] libmachine: (old-k8s-version-905554) Setting executable bit set on /home/jenkins/minikube-integration/19008-7755/.minikube/machines/old-k8s-version-905554 (perms=drwx------)
	I0603 11:57:00.619606   67501 main.go:141] libmachine: (old-k8s-version-905554) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19008-7755/.minikube/machines
	I0603 11:57:00.619624   67501 main.go:141] libmachine: (old-k8s-version-905554) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19008-7755/.minikube
	I0603 11:57:00.619637   67501 main.go:141] libmachine: (old-k8s-version-905554) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19008-7755
	I0603 11:57:00.619651   67501 main.go:141] libmachine: (old-k8s-version-905554) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0603 11:57:00.619659   67501 main.go:141] libmachine: (old-k8s-version-905554) DBG | Checking permissions on dir: /home/jenkins
	I0603 11:57:00.619672   67501 main.go:141] libmachine: (old-k8s-version-905554) DBG | Checking permissions on dir: /home
	I0603 11:57:00.619681   67501 main.go:141] libmachine: (old-k8s-version-905554) DBG | Skipping /home - not owner
	I0603 11:57:00.619693   67501 main.go:141] libmachine: (old-k8s-version-905554) Setting executable bit set on /home/jenkins/minikube-integration/19008-7755/.minikube/machines (perms=drwxr-xr-x)
	I0603 11:57:00.619704   67501 main.go:141] libmachine: (old-k8s-version-905554) Setting executable bit set on /home/jenkins/minikube-integration/19008-7755/.minikube (perms=drwxr-xr-x)
	I0603 11:57:00.619720   67501 main.go:141] libmachine: (old-k8s-version-905554) Setting executable bit set on /home/jenkins/minikube-integration/19008-7755 (perms=drwxrwxr-x)
	I0603 11:57:00.619730   67501 main.go:141] libmachine: (old-k8s-version-905554) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0603 11:57:00.619743   67501 main.go:141] libmachine: (old-k8s-version-905554) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0603 11:57:00.619751   67501 main.go:141] libmachine: (old-k8s-version-905554) Creating domain...
	I0603 11:57:00.621020   67501 main.go:141] libmachine: (old-k8s-version-905554) define libvirt domain using xml: 
	I0603 11:57:00.621036   67501 main.go:141] libmachine: (old-k8s-version-905554) <domain type='kvm'>
	I0603 11:57:00.621047   67501 main.go:141] libmachine: (old-k8s-version-905554)   <name>old-k8s-version-905554</name>
	I0603 11:57:00.621055   67501 main.go:141] libmachine: (old-k8s-version-905554)   <memory unit='MiB'>2200</memory>
	I0603 11:57:00.621064   67501 main.go:141] libmachine: (old-k8s-version-905554)   <vcpu>2</vcpu>
	I0603 11:57:00.621075   67501 main.go:141] libmachine: (old-k8s-version-905554)   <features>
	I0603 11:57:00.621083   67501 main.go:141] libmachine: (old-k8s-version-905554)     <acpi/>
	I0603 11:57:00.621090   67501 main.go:141] libmachine: (old-k8s-version-905554)     <apic/>
	I0603 11:57:00.621099   67501 main.go:141] libmachine: (old-k8s-version-905554)     <pae/>
	I0603 11:57:00.621118   67501 main.go:141] libmachine: (old-k8s-version-905554)     
	I0603 11:57:00.621139   67501 main.go:141] libmachine: (old-k8s-version-905554)   </features>
	I0603 11:57:00.621149   67501 main.go:141] libmachine: (old-k8s-version-905554)   <cpu mode='host-passthrough'>
	I0603 11:57:00.621156   67501 main.go:141] libmachine: (old-k8s-version-905554)   
	I0603 11:57:00.621162   67501 main.go:141] libmachine: (old-k8s-version-905554)   </cpu>
	I0603 11:57:00.621170   67501 main.go:141] libmachine: (old-k8s-version-905554)   <os>
	I0603 11:57:00.621178   67501 main.go:141] libmachine: (old-k8s-version-905554)     <type>hvm</type>
	I0603 11:57:00.621190   67501 main.go:141] libmachine: (old-k8s-version-905554)     <boot dev='cdrom'/>
	I0603 11:57:00.621200   67501 main.go:141] libmachine: (old-k8s-version-905554)     <boot dev='hd'/>
	I0603 11:57:00.621210   67501 main.go:141] libmachine: (old-k8s-version-905554)     <bootmenu enable='no'/>
	I0603 11:57:00.621219   67501 main.go:141] libmachine: (old-k8s-version-905554)   </os>
	I0603 11:57:00.621228   67501 main.go:141] libmachine: (old-k8s-version-905554)   <devices>
	I0603 11:57:00.621239   67501 main.go:141] libmachine: (old-k8s-version-905554)     <disk type='file' device='cdrom'>
	I0603 11:57:00.621257   67501 main.go:141] libmachine: (old-k8s-version-905554)       <source file='/home/jenkins/minikube-integration/19008-7755/.minikube/machines/old-k8s-version-905554/boot2docker.iso'/>
	I0603 11:57:00.621269   67501 main.go:141] libmachine: (old-k8s-version-905554)       <target dev='hdc' bus='scsi'/>
	I0603 11:57:00.621281   67501 main.go:141] libmachine: (old-k8s-version-905554)       <readonly/>
	I0603 11:57:00.621296   67501 main.go:141] libmachine: (old-k8s-version-905554)     </disk>
	I0603 11:57:00.621309   67501 main.go:141] libmachine: (old-k8s-version-905554)     <disk type='file' device='disk'>
	I0603 11:57:00.621322   67501 main.go:141] libmachine: (old-k8s-version-905554)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0603 11:57:00.621336   67501 main.go:141] libmachine: (old-k8s-version-905554)       <source file='/home/jenkins/minikube-integration/19008-7755/.minikube/machines/old-k8s-version-905554/old-k8s-version-905554.rawdisk'/>
	I0603 11:57:00.621344   67501 main.go:141] libmachine: (old-k8s-version-905554)       <target dev='hda' bus='virtio'/>
	I0603 11:57:00.621353   67501 main.go:141] libmachine: (old-k8s-version-905554)     </disk>
	I0603 11:57:00.621367   67501 main.go:141] libmachine: (old-k8s-version-905554)     <interface type='network'>
	I0603 11:57:00.621380   67501 main.go:141] libmachine: (old-k8s-version-905554)       <source network='mk-old-k8s-version-905554'/>
	I0603 11:57:00.621391   67501 main.go:141] libmachine: (old-k8s-version-905554)       <model type='virtio'/>
	I0603 11:57:00.621402   67501 main.go:141] libmachine: (old-k8s-version-905554)     </interface>
	I0603 11:57:00.621410   67501 main.go:141] libmachine: (old-k8s-version-905554)     <interface type='network'>
	I0603 11:57:00.621419   67501 main.go:141] libmachine: (old-k8s-version-905554)       <source network='default'/>
	I0603 11:57:00.621429   67501 main.go:141] libmachine: (old-k8s-version-905554)       <model type='virtio'/>
	I0603 11:57:00.621441   67501 main.go:141] libmachine: (old-k8s-version-905554)     </interface>
	I0603 11:57:00.621450   67501 main.go:141] libmachine: (old-k8s-version-905554)     <serial type='pty'>
	I0603 11:57:00.621463   67501 main.go:141] libmachine: (old-k8s-version-905554)       <target port='0'/>
	I0603 11:57:00.621473   67501 main.go:141] libmachine: (old-k8s-version-905554)     </serial>
	I0603 11:57:00.621485   67501 main.go:141] libmachine: (old-k8s-version-905554)     <console type='pty'>
	I0603 11:57:00.621493   67501 main.go:141] libmachine: (old-k8s-version-905554)       <target type='serial' port='0'/>
	I0603 11:57:00.621501   67501 main.go:141] libmachine: (old-k8s-version-905554)     </console>
	I0603 11:57:00.621509   67501 main.go:141] libmachine: (old-k8s-version-905554)     <rng model='virtio'>
	I0603 11:57:00.621525   67501 main.go:141] libmachine: (old-k8s-version-905554)       <backend model='random'>/dev/random</backend>
	I0603 11:57:00.621535   67501 main.go:141] libmachine: (old-k8s-version-905554)     </rng>
	I0603 11:57:00.621544   67501 main.go:141] libmachine: (old-k8s-version-905554)     
	I0603 11:57:00.621553   67501 main.go:141] libmachine: (old-k8s-version-905554)     
	I0603 11:57:00.621562   67501 main.go:141] libmachine: (old-k8s-version-905554)   </devices>
	I0603 11:57:00.621572   67501 main.go:141] libmachine: (old-k8s-version-905554) </domain>
	I0603 11:57:00.621582   67501 main.go:141] libmachine: (old-k8s-version-905554) 
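	The XML block above is what the kvm2 driver hands to libvirt. Roughly, defining and booting that domain through the libvirt Go bindings looks like the sketch below; it is a simplification of the driver, assuming libvirt.org/go/libvirt and the qemu:///system URI from the machine config:

	package main

	import (
		"fmt"

		libvirt "libvirt.org/go/libvirt"
	)

	// createDomain defines a persistent domain from the generated XML and starts it.
	func createDomain(domainXML string) error {
		conn, err := libvirt.NewConnect("qemu:///system")
		if err != nil {
			return fmt.Errorf("connecting to libvirt: %w", err)
		}
		defer conn.Close()

		// "define libvirt domain using xml" in the log above.
		dom, err := conn.DomainDefineXML(domainXML)
		if err != nil {
			return fmt.Errorf("defining domain: %w", err)
		}
		defer dom.Free()

		// "Creating domain..." boots the defined VM.
		if err := dom.Create(); err != nil {
			return fmt.Errorf("starting domain: %w", err)
		}
		return nil
	}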
	I0603 11:57:00.626445   67501 main.go:141] libmachine: (old-k8s-version-905554) DBG | domain old-k8s-version-905554 has defined MAC address 52:54:00:56:c2:3d in network default
	I0603 11:57:00.627310   67501 main.go:141] libmachine: (old-k8s-version-905554) Ensuring networks are active...
	I0603 11:57:00.627333   67501 main.go:141] libmachine: (old-k8s-version-905554) DBG | domain old-k8s-version-905554 has defined MAC address 52:54:00:3d:ed:07 in network mk-old-k8s-version-905554
	I0603 11:57:00.628331   67501 main.go:141] libmachine: (old-k8s-version-905554) Ensuring network default is active
	I0603 11:57:00.628780   67501 main.go:141] libmachine: (old-k8s-version-905554) Ensuring network mk-old-k8s-version-905554 is active
	I0603 11:57:00.629459   67501 main.go:141] libmachine: (old-k8s-version-905554) Getting domain xml...
	I0603 11:57:00.630489   67501 main.go:141] libmachine: (old-k8s-version-905554) Creating domain...
	I0603 11:57:02.202982   67501 main.go:141] libmachine: (old-k8s-version-905554) Waiting to get IP...
	I0603 11:57:02.204116   67501 main.go:141] libmachine: (old-k8s-version-905554) DBG | domain old-k8s-version-905554 has defined MAC address 52:54:00:3d:ed:07 in network mk-old-k8s-version-905554
	I0603 11:57:02.204877   67501 main.go:141] libmachine: (old-k8s-version-905554) DBG | unable to find current IP address of domain old-k8s-version-905554 in network mk-old-k8s-version-905554
	I0603 11:57:02.205060   67501 main.go:141] libmachine: (old-k8s-version-905554) DBG | I0603 11:57:02.204976   67523 retry.go:31] will retry after 222.199907ms: waiting for machine to come up
	I0603 11:57:02.428791   67501 main.go:141] libmachine: (old-k8s-version-905554) DBG | domain old-k8s-version-905554 has defined MAC address 52:54:00:3d:ed:07 in network mk-old-k8s-version-905554
	I0603 11:57:02.429461   67501 main.go:141] libmachine: (old-k8s-version-905554) DBG | unable to find current IP address of domain old-k8s-version-905554 in network mk-old-k8s-version-905554
	I0603 11:57:02.429490   67501 main.go:141] libmachine: (old-k8s-version-905554) DBG | I0603 11:57:02.429419   67523 retry.go:31] will retry after 327.649197ms: waiting for machine to come up
	I0603 11:57:02.759133   67501 main.go:141] libmachine: (old-k8s-version-905554) DBG | domain old-k8s-version-905554 has defined MAC address 52:54:00:3d:ed:07 in network mk-old-k8s-version-905554
	I0603 11:57:02.759765   67501 main.go:141] libmachine: (old-k8s-version-905554) DBG | unable to find current IP address of domain old-k8s-version-905554 in network mk-old-k8s-version-905554
	I0603 11:57:02.759794   67501 main.go:141] libmachine: (old-k8s-version-905554) DBG | I0603 11:57:02.759718   67523 retry.go:31] will retry after 314.31801ms: waiting for machine to come up
	I0603 11:57:03.075988   67501 main.go:141] libmachine: (old-k8s-version-905554) DBG | domain old-k8s-version-905554 has defined MAC address 52:54:00:3d:ed:07 in network mk-old-k8s-version-905554
	I0603 11:57:03.076794   67501 main.go:141] libmachine: (old-k8s-version-905554) DBG | unable to find current IP address of domain old-k8s-version-905554 in network mk-old-k8s-version-905554
	I0603 11:57:03.076825   67501 main.go:141] libmachine: (old-k8s-version-905554) DBG | I0603 11:57:03.076751   67523 retry.go:31] will retry after 471.996565ms: waiting for machine to come up
	I0603 11:57:03.550773   67501 main.go:141] libmachine: (old-k8s-version-905554) DBG | domain old-k8s-version-905554 has defined MAC address 52:54:00:3d:ed:07 in network mk-old-k8s-version-905554
	I0603 11:57:03.551407   67501 main.go:141] libmachine: (old-k8s-version-905554) DBG | unable to find current IP address of domain old-k8s-version-905554 in network mk-old-k8s-version-905554
	I0603 11:57:03.551429   67501 main.go:141] libmachine: (old-k8s-version-905554) DBG | I0603 11:57:03.551338   67523 retry.go:31] will retry after 743.277114ms: waiting for machine to come up
	I0603 11:57:04.296512   67501 main.go:141] libmachine: (old-k8s-version-905554) DBG | domain old-k8s-version-905554 has defined MAC address 52:54:00:3d:ed:07 in network mk-old-k8s-version-905554
	I0603 11:57:04.297221   67501 main.go:141] libmachine: (old-k8s-version-905554) DBG | unable to find current IP address of domain old-k8s-version-905554 in network mk-old-k8s-version-905554
	I0603 11:57:04.297246   67501 main.go:141] libmachine: (old-k8s-version-905554) DBG | I0603 11:57:04.297157   67523 retry.go:31] will retry after 787.192028ms: waiting for machine to come up
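	The "Waiting to get IP..." retries above poll the private network's DHCP leases until one matches the domain's MAC address; the lease struct logged earlier for kubernetes-upgrade-179482 shows the fields that come back. A rough sketch of that loop (illustrative names, same libvirt bindings as the previous sketch):

	package main

	import (
		"fmt"
		"time"

		libvirt "libvirt.org/go/libvirt"
	)

	// waitForIP returns the DHCP-assigned address for mac on networkName, or an error after timeout.
	func waitForIP(conn *libvirt.Connect, networkName, mac string, timeout time.Duration) (string, error) {
		net, err := conn.LookupNetworkByName(networkName)
		if err != nil {
			return "", err
		}
		defer net.Free()

		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			leases, err := net.GetDHCPLeases()
			if err == nil {
				for _, lease := range leases {
					if lease.Mac == mac {
						return lease.IPaddr, nil // e.g. 192.168.39.x
					}
				}
			}
			// Back off and retry, like the retry.go lines in the log.
			time.Sleep(500 * time.Millisecond)
		}
		return "", fmt.Errorf("no DHCP lease for %s on %s within %s", mac, networkName, timeout)
	}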
	
	
	==> CRI-O <==
	Jun 03 11:57:05 kubernetes-upgrade-179482 crio[1877]: time="2024-06-03 11:57:05.723765217Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1717415825723736860,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124340,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=c7fd9ae0-fb91-4d6a-ac25-93fbd1a4e471 name=/runtime.v1.ImageService/ImageFsInfo
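	These crio[1877] debug lines are CRI calls (ImageFsInfo, ListContainers) arriving over CRI-O's gRPC socket. A minimal client issuing the same ListContainers call is sketched below, assuming k8s.io/cri-api and CRI-O's default socket path:

	package main

	import (
		"context"
		"fmt"

		"google.golang.org/grpc"
		"google.golang.org/grpc/credentials/insecure"
		runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
	)

	func main() {
		// CRI-O's default endpoint; adjust if the runtime is configured differently.
		conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
			grpc.WithTransportCredentials(insecure.NewCredentials()))
		if err != nil {
			panic(err)
		}
		defer conn.Close()

		client := runtimeapi.NewRuntimeServiceClient(conn)
		// No filter fields set, so the full container list comes back,
		// matching the "No filters were applied" response below.
		resp, err := client.ListContainers(context.Background(), &runtimeapi.ListContainersRequest{})
		if err != nil {
			panic(err)
		}
		for _, c := range resp.Containers {
			fmt.Println(c.Id, c.Metadata.Name, c.State)
		}
	}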
	Jun 03 11:57:05 kubernetes-upgrade-179482 crio[1877]: time="2024-06-03 11:57:05.725027016Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=605552e4-9d60-4794-a998-d94d0c10f3bf name=/runtime.v1.RuntimeService/ListContainers
	Jun 03 11:57:05 kubernetes-upgrade-179482 crio[1877]: time="2024-06-03 11:57:05.725083836Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=605552e4-9d60-4794-a998-d94d0c10f3bf name=/runtime.v1.RuntimeService/ListContainers
	Jun 03 11:57:05 kubernetes-upgrade-179482 crio[1877]: time="2024-06-03 11:57:05.725311050Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:01b37cc03619ce1659bf28b93b5e9eb3f5baa5bb2c39a16e51ba33325c202c41,PodSandboxId:9d87bf6a937263f35bf5d9436fa8c992e70d1e910895b7c389400c7ab78e0623,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1717415817916288275,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-179482,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2ef691adf19f0315ffcdacd8a25d59a4,},Annotations:map[string]string{io.kubernetes.container.hash: 3b22c24e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminat
ionMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b0e5c0859b78be2bffa650790aa779662748efc4c18b862e2583d4cea4920bef,PodSandboxId:9247353b9c4c6e9a0e0be78445707ab7ed875989e0277100fa368ea485e098ba,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1717415817938414778,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-179482,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a69b4e624ea65f29475320c9ae3f637e,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 2,io.kuberne
tes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7cbc129036d45553bb48a38cc803ab2c03daf6d58d6266d82554816dea799410,PodSandboxId:c330323a434217ab14dbd42ba907dfacd0816836e9ff1a37703ca347a7395073,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1717415817901107112,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-179482,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bc0fe3a4c0d99ebb3d5edb54763360dd,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 2,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:65b3f18bb288b05884a88e5395ecc88a95d46ebdf3666af459257ab95996ad15,PodSandboxId:c763ca67712f3b14c7d2f00ea306651fffab40ab20c4a646f57159d9f2b73ea0,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1717415817924979703,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-179482,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4232830eca4867e0e52b511b9adafaac,},Annotations:map[string]string{io.kubernetes.container.hash: 42021fc0,io.kubernetes.container.restartCount: 2,io.kubernetes.contai
ner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2f3943e07ab262c875a05ff6b359b607003c025080255ad64aaf751e7fdff1d2,PodSandboxId:84b22cd62bf32f7a5b2cf8d30614d32e0be4a04d6e006ba0a52798e404d30989,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_EXITED,CreatedAt:1717415810186722076,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-179482,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4232830eca4867e0e52b511b9adafaac,},Annotations:map[string]string{io.kubernetes.container.hash: 42021fc0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.te
rminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:496068f2ad5f0e4f88afc9f3191657dde11b1800d1b00151b0f38d7c47869d65,PodSandboxId:b439cb841e927f5592b7adbfab6873dc09fb7a03ef13c9b9c068d55a8adf01b4,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1717415810233738585,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-179482,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2ef691adf19f0315ffcdacd8a25d59a4,},Annotations:map[string]string{io.kubernetes.container.hash: 3b22c24e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/terminati
on-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9a4b9e58c50318b12681eae9f98ccfa6e79a3e580882dff361f999578e8c91c2,PodSandboxId:a52ab030206724c108f1d08af0fea48f4e568b274f0ab37812a2360f02196cce,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_EXITED,CreatedAt:1717415810139340375,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-179482,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a69b4e624ea65f29475320c9ae3f637e,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessag
ePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b6b3abe451698a92fd0b66056c3f81e1d0da2db4a1a07ebb2e4fc39026b922d7,PodSandboxId:86334d6a2538ef9e8c5b997b7351387c89dc0e094c5b024cb1e866bc8c16b24a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_EXITED,CreatedAt:1717415810073872159,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-179482,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bc0fe3a4c0d99ebb3d5edb54763360dd,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath:
/dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=605552e4-9d60-4794-a998-d94d0c10f3bf name=/runtime.v1.RuntimeService/ListContainers
	Jun 03 11:57:05 kubernetes-upgrade-179482 crio[1877]: time="2024-06-03 11:57:05.797054473Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=a8027bef-c9f9-434f-8329-6b71f5350619 name=/runtime.v1.RuntimeService/Version
	Jun 03 11:57:05 kubernetes-upgrade-179482 crio[1877]: time="2024-06-03 11:57:05.797173503Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=a8027bef-c9f9-434f-8329-6b71f5350619 name=/runtime.v1.RuntimeService/Version
	Jun 03 11:57:05 kubernetes-upgrade-179482 crio[1877]: time="2024-06-03 11:57:05.799149603Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=135062a9-942c-4431-b76c-4acbc18d7443 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 03 11:57:05 kubernetes-upgrade-179482 crio[1877]: time="2024-06-03 11:57:05.799841134Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1717415825799803573,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124340,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=135062a9-942c-4431-b76c-4acbc18d7443 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 03 11:57:05 kubernetes-upgrade-179482 crio[1877]: time="2024-06-03 11:57:05.801667180Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e021d728-1666-468f-944a-9d224b3f2cba name=/runtime.v1.RuntimeService/ListContainers
	Jun 03 11:57:05 kubernetes-upgrade-179482 crio[1877]: time="2024-06-03 11:57:05.801757952Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e021d728-1666-468f-944a-9d224b3f2cba name=/runtime.v1.RuntimeService/ListContainers
	Jun 03 11:57:05 kubernetes-upgrade-179482 crio[1877]: time="2024-06-03 11:57:05.802091462Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:01b37cc03619ce1659bf28b93b5e9eb3f5baa5bb2c39a16e51ba33325c202c41,PodSandboxId:9d87bf6a937263f35bf5d9436fa8c992e70d1e910895b7c389400c7ab78e0623,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1717415817916288275,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-179482,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2ef691adf19f0315ffcdacd8a25d59a4,},Annotations:map[string]string{io.kubernetes.container.hash: 3b22c24e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminat
ionMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b0e5c0859b78be2bffa650790aa779662748efc4c18b862e2583d4cea4920bef,PodSandboxId:9247353b9c4c6e9a0e0be78445707ab7ed875989e0277100fa368ea485e098ba,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1717415817938414778,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-179482,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a69b4e624ea65f29475320c9ae3f637e,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 2,io.kuberne
tes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7cbc129036d45553bb48a38cc803ab2c03daf6d58d6266d82554816dea799410,PodSandboxId:c330323a434217ab14dbd42ba907dfacd0816836e9ff1a37703ca347a7395073,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1717415817901107112,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-179482,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bc0fe3a4c0d99ebb3d5edb54763360dd,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 2,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:65b3f18bb288b05884a88e5395ecc88a95d46ebdf3666af459257ab95996ad15,PodSandboxId:c763ca67712f3b14c7d2f00ea306651fffab40ab20c4a646f57159d9f2b73ea0,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1717415817924979703,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-179482,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4232830eca4867e0e52b511b9adafaac,},Annotations:map[string]string{io.kubernetes.container.hash: 42021fc0,io.kubernetes.container.restartCount: 2,io.kubernetes.contai
ner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2f3943e07ab262c875a05ff6b359b607003c025080255ad64aaf751e7fdff1d2,PodSandboxId:84b22cd62bf32f7a5b2cf8d30614d32e0be4a04d6e006ba0a52798e404d30989,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_EXITED,CreatedAt:1717415810186722076,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-179482,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4232830eca4867e0e52b511b9adafaac,},Annotations:map[string]string{io.kubernetes.container.hash: 42021fc0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.te
rminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:496068f2ad5f0e4f88afc9f3191657dde11b1800d1b00151b0f38d7c47869d65,PodSandboxId:b439cb841e927f5592b7adbfab6873dc09fb7a03ef13c9b9c068d55a8adf01b4,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1717415810233738585,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-179482,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2ef691adf19f0315ffcdacd8a25d59a4,},Annotations:map[string]string{io.kubernetes.container.hash: 3b22c24e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/terminati
on-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9a4b9e58c50318b12681eae9f98ccfa6e79a3e580882dff361f999578e8c91c2,PodSandboxId:a52ab030206724c108f1d08af0fea48f4e568b274f0ab37812a2360f02196cce,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_EXITED,CreatedAt:1717415810139340375,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-179482,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a69b4e624ea65f29475320c9ae3f637e,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessag
ePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b6b3abe451698a92fd0b66056c3f81e1d0da2db4a1a07ebb2e4fc39026b922d7,PodSandboxId:86334d6a2538ef9e8c5b997b7351387c89dc0e094c5b024cb1e866bc8c16b24a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_EXITED,CreatedAt:1717415810073872159,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-179482,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bc0fe3a4c0d99ebb3d5edb54763360dd,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath:
/dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=e021d728-1666-468f-944a-9d224b3f2cba name=/runtime.v1.RuntimeService/ListContainers
	Jun 03 11:57:05 kubernetes-upgrade-179482 crio[1877]: time="2024-06-03 11:57:05.896165680Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=d0940708-c743-4057-af8e-e1022caa7003 name=/runtime.v1.RuntimeService/Version
	Jun 03 11:57:05 kubernetes-upgrade-179482 crio[1877]: time="2024-06-03 11:57:05.896338863Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=d0940708-c743-4057-af8e-e1022caa7003 name=/runtime.v1.RuntimeService/Version
	Jun 03 11:57:05 kubernetes-upgrade-179482 crio[1877]: time="2024-06-03 11:57:05.898542900Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=1e9c3c9d-4ccf-416a-92a8-4c01d51e01fe name=/runtime.v1.ImageService/ImageFsInfo
	Jun 03 11:57:05 kubernetes-upgrade-179482 crio[1877]: time="2024-06-03 11:57:05.899032992Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1717415825899004580,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124340,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=1e9c3c9d-4ccf-416a-92a8-4c01d51e01fe name=/runtime.v1.ImageService/ImageFsInfo
	Jun 03 11:57:05 kubernetes-upgrade-179482 crio[1877]: time="2024-06-03 11:57:05.901540865Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=32087bb1-b5c0-45e7-b04e-de73c8f08ed5 name=/runtime.v1.RuntimeService/ListContainers
	Jun 03 11:57:05 kubernetes-upgrade-179482 crio[1877]: time="2024-06-03 11:57:05.901593078Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=32087bb1-b5c0-45e7-b04e-de73c8f08ed5 name=/runtime.v1.RuntimeService/ListContainers
	Jun 03 11:57:05 kubernetes-upgrade-179482 crio[1877]: time="2024-06-03 11:57:05.901777729Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:01b37cc03619ce1659bf28b93b5e9eb3f5baa5bb2c39a16e51ba33325c202c41,PodSandboxId:9d87bf6a937263f35bf5d9436fa8c992e70d1e910895b7c389400c7ab78e0623,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1717415817916288275,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-179482,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2ef691adf19f0315ffcdacd8a25d59a4,},Annotations:map[string]string{io.kubernetes.container.hash: 3b22c24e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminat
ionMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b0e5c0859b78be2bffa650790aa779662748efc4c18b862e2583d4cea4920bef,PodSandboxId:9247353b9c4c6e9a0e0be78445707ab7ed875989e0277100fa368ea485e098ba,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1717415817938414778,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-179482,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a69b4e624ea65f29475320c9ae3f637e,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 2,io.kuberne
tes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7cbc129036d45553bb48a38cc803ab2c03daf6d58d6266d82554816dea799410,PodSandboxId:c330323a434217ab14dbd42ba907dfacd0816836e9ff1a37703ca347a7395073,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1717415817901107112,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-179482,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bc0fe3a4c0d99ebb3d5edb54763360dd,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 2,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:65b3f18bb288b05884a88e5395ecc88a95d46ebdf3666af459257ab95996ad15,PodSandboxId:c763ca67712f3b14c7d2f00ea306651fffab40ab20c4a646f57159d9f2b73ea0,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1717415817924979703,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-179482,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4232830eca4867e0e52b511b9adafaac,},Annotations:map[string]string{io.kubernetes.container.hash: 42021fc0,io.kubernetes.container.restartCount: 2,io.kubernetes.contai
ner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2f3943e07ab262c875a05ff6b359b607003c025080255ad64aaf751e7fdff1d2,PodSandboxId:84b22cd62bf32f7a5b2cf8d30614d32e0be4a04d6e006ba0a52798e404d30989,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_EXITED,CreatedAt:1717415810186722076,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-179482,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4232830eca4867e0e52b511b9adafaac,},Annotations:map[string]string{io.kubernetes.container.hash: 42021fc0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.te
rminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:496068f2ad5f0e4f88afc9f3191657dde11b1800d1b00151b0f38d7c47869d65,PodSandboxId:b439cb841e927f5592b7adbfab6873dc09fb7a03ef13c9b9c068d55a8adf01b4,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1717415810233738585,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-179482,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2ef691adf19f0315ffcdacd8a25d59a4,},Annotations:map[string]string{io.kubernetes.container.hash: 3b22c24e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/terminati
on-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9a4b9e58c50318b12681eae9f98ccfa6e79a3e580882dff361f999578e8c91c2,PodSandboxId:a52ab030206724c108f1d08af0fea48f4e568b274f0ab37812a2360f02196cce,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_EXITED,CreatedAt:1717415810139340375,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-179482,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a69b4e624ea65f29475320c9ae3f637e,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessag
ePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b6b3abe451698a92fd0b66056c3f81e1d0da2db4a1a07ebb2e4fc39026b922d7,PodSandboxId:86334d6a2538ef9e8c5b997b7351387c89dc0e094c5b024cb1e866bc8c16b24a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_EXITED,CreatedAt:1717415810073872159,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-179482,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bc0fe3a4c0d99ebb3d5edb54763360dd,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath:
/dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=32087bb1-b5c0-45e7-b04e-de73c8f08ed5 name=/runtime.v1.RuntimeService/ListContainers
	Jun 03 11:57:05 kubernetes-upgrade-179482 crio[1877]: time="2024-06-03 11:57:05.953300527Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=e39851af-0509-4000-a86a-2ba47a2781b4 name=/runtime.v1.RuntimeService/Version
	Jun 03 11:57:05 kubernetes-upgrade-179482 crio[1877]: time="2024-06-03 11:57:05.953452200Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=e39851af-0509-4000-a86a-2ba47a2781b4 name=/runtime.v1.RuntimeService/Version
	Jun 03 11:57:05 kubernetes-upgrade-179482 crio[1877]: time="2024-06-03 11:57:05.954779614Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=71fec56f-fd89-4736-8436-a8cb67ba4b85 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 03 11:57:05 kubernetes-upgrade-179482 crio[1877]: time="2024-06-03 11:57:05.955173465Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1717415825955151769,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124340,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=71fec56f-fd89-4736-8436-a8cb67ba4b85 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 03 11:57:05 kubernetes-upgrade-179482 crio[1877]: time="2024-06-03 11:57:05.956105090Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ccbf5787-dea9-437e-a1cf-6fe40ba505b0 name=/runtime.v1.RuntimeService/ListContainers
	Jun 03 11:57:05 kubernetes-upgrade-179482 crio[1877]: time="2024-06-03 11:57:05.956159946Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ccbf5787-dea9-437e-a1cf-6fe40ba505b0 name=/runtime.v1.RuntimeService/ListContainers
	Jun 03 11:57:05 kubernetes-upgrade-179482 crio[1877]: time="2024-06-03 11:57:05.956337943Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:01b37cc03619ce1659bf28b93b5e9eb3f5baa5bb2c39a16e51ba33325c202c41,PodSandboxId:9d87bf6a937263f35bf5d9436fa8c992e70d1e910895b7c389400c7ab78e0623,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1717415817916288275,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-179482,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2ef691adf19f0315ffcdacd8a25d59a4,},Annotations:map[string]string{io.kubernetes.container.hash: 3b22c24e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminat
ionMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b0e5c0859b78be2bffa650790aa779662748efc4c18b862e2583d4cea4920bef,PodSandboxId:9247353b9c4c6e9a0e0be78445707ab7ed875989e0277100fa368ea485e098ba,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1717415817938414778,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-179482,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a69b4e624ea65f29475320c9ae3f637e,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 2,io.kuberne
tes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7cbc129036d45553bb48a38cc803ab2c03daf6d58d6266d82554816dea799410,PodSandboxId:c330323a434217ab14dbd42ba907dfacd0816836e9ff1a37703ca347a7395073,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1717415817901107112,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-179482,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bc0fe3a4c0d99ebb3d5edb54763360dd,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 2,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:65b3f18bb288b05884a88e5395ecc88a95d46ebdf3666af459257ab95996ad15,PodSandboxId:c763ca67712f3b14c7d2f00ea306651fffab40ab20c4a646f57159d9f2b73ea0,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1717415817924979703,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-179482,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4232830eca4867e0e52b511b9adafaac,},Annotations:map[string]string{io.kubernetes.container.hash: 42021fc0,io.kubernetes.container.restartCount: 2,io.kubernetes.contai
ner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2f3943e07ab262c875a05ff6b359b607003c025080255ad64aaf751e7fdff1d2,PodSandboxId:84b22cd62bf32f7a5b2cf8d30614d32e0be4a04d6e006ba0a52798e404d30989,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_EXITED,CreatedAt:1717415810186722076,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-179482,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4232830eca4867e0e52b511b9adafaac,},Annotations:map[string]string{io.kubernetes.container.hash: 42021fc0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.te
rminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:496068f2ad5f0e4f88afc9f3191657dde11b1800d1b00151b0f38d7c47869d65,PodSandboxId:b439cb841e927f5592b7adbfab6873dc09fb7a03ef13c9b9c068d55a8adf01b4,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1717415810233738585,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-179482,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2ef691adf19f0315ffcdacd8a25d59a4,},Annotations:map[string]string{io.kubernetes.container.hash: 3b22c24e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/terminati
on-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9a4b9e58c50318b12681eae9f98ccfa6e79a3e580882dff361f999578e8c91c2,PodSandboxId:a52ab030206724c108f1d08af0fea48f4e568b274f0ab37812a2360f02196cce,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_EXITED,CreatedAt:1717415810139340375,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-179482,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a69b4e624ea65f29475320c9ae3f637e,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessag
ePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b6b3abe451698a92fd0b66056c3f81e1d0da2db4a1a07ebb2e4fc39026b922d7,PodSandboxId:86334d6a2538ef9e8c5b997b7351387c89dc0e094c5b024cb1e866bc8c16b24a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_EXITED,CreatedAt:1717415810073872159,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-179482,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bc0fe3a4c0d99ebb3d5edb54763360dd,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath:
/dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=ccbf5787-dea9-437e-a1cf-6fe40ba505b0 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	b0e5c0859b78b       25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c   8 seconds ago       Running             kube-controller-manager   2                   9247353b9c4c6       kube-controller-manager-kubernetes-upgrade-179482
	65b3f18bb288b       91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a   8 seconds ago       Running             kube-apiserver            2                   c763ca67712f3       kube-apiserver-kubernetes-upgrade-179482
	01b37cc03619c       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899   8 seconds ago       Running             etcd                      2                   9d87bf6a93726       etcd-kubernetes-upgrade-179482
	7cbc129036d45       a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035   8 seconds ago       Running             kube-scheduler            2                   c330323a43421       kube-scheduler-kubernetes-upgrade-179482
	496068f2ad5f0       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899   15 seconds ago      Exited              etcd                      1                   b439cb841e927       etcd-kubernetes-upgrade-179482
	2f3943e07ab26       91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a   15 seconds ago      Exited              kube-apiserver            1                   84b22cd62bf32       kube-apiserver-kubernetes-upgrade-179482
	9a4b9e58c5031       25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c   15 seconds ago      Exited              kube-controller-manager   1                   a52ab03020672       kube-controller-manager-kubernetes-upgrade-179482
	b6b3abe451698       a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035   15 seconds ago      Exited              kube-scheduler            1                   86334d6a2538e       kube-scheduler-kubernetes-upgrade-179482
	
	
	==> describe nodes <==
	Name:               kubernetes-upgrade-179482
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=kubernetes-upgrade-179482
	                    kubernetes.io/os=linux
	Annotations:        volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 03 Jun 2024 11:56:34 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  kubernetes-upgrade-179482
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 03 Jun 2024 11:57:01 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 03 Jun 2024 11:57:01 +0000   Mon, 03 Jun 2024 11:56:31 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 03 Jun 2024 11:57:01 +0000   Mon, 03 Jun 2024 11:56:31 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 03 Jun 2024 11:57:01 +0000   Mon, 03 Jun 2024 11:56:31 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 03 Jun 2024 11:57:01 +0000   Mon, 03 Jun 2024 11:56:37 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.72.223
	  Hostname:    kubernetes-upgrade-179482
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 3a65310c9f854314a595fc57a940c49b
	  System UUID:                3a65310c-9f85-4314-a595-fc57a940c49b
	  Boot ID:                    389da376-a3f5-4a1c-aebe-3a2d00aecb37
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.1
	  Kube-Proxy Version:         v1.30.1
	Non-terminated Pods:          (4 in total)
	  Namespace                   Name                                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                 ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-kubernetes-upgrade-179482                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         26s
	  kube-system                 kube-apiserver-kubernetes-upgrade-179482             250m (12%)    0 (0%)      0 (0%)           0 (0%)         30s
	  kube-system                 kube-controller-manager-kubernetes-upgrade-179482    200m (10%)    0 (0%)      0 (0%)           0 (0%)         25s
	  kube-system                 kube-scheduler-kubernetes-upgrade-179482             100m (5%)     0 (0%)      0 (0%)           0 (0%)         30s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                650m (32%)  0 (0%)
	  memory             100Mi (4%)  0 (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From     Message
	  ----    ------                   ----               ----     -------
	  Normal  Starting                 36s                kubelet  Starting kubelet.
	  Normal  NodeHasSufficientMemory  36s (x8 over 36s)  kubelet  Node kubernetes-upgrade-179482 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    36s (x8 over 36s)  kubelet  Node kubernetes-upgrade-179482 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     36s (x7 over 36s)  kubelet  Node kubernetes-upgrade-179482 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  36s                kubelet  Updated Node Allocatable limit across pods
	  Normal  Starting                 9s                 kubelet  Starting kubelet.
	  Normal  NodeHasSufficientMemory  9s (x8 over 9s)    kubelet  Node kubernetes-upgrade-179482 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9s (x8 over 9s)    kubelet  Node kubernetes-upgrade-179482 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9s (x7 over 9s)    kubelet  Node kubernetes-upgrade-179482 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  9s                 kubelet  Updated Node Allocatable limit across pods
	
	
	==> dmesg <==
	[  +2.477128] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000010] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +9.904080] systemd-fstab-generator[567]: Ignoring "noauto" option for root device
	[  +0.070023] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.058800] systemd-fstab-generator[579]: Ignoring "noauto" option for root device
	[  +0.176656] systemd-fstab-generator[593]: Ignoring "noauto" option for root device
	[  +0.156356] systemd-fstab-generator[605]: Ignoring "noauto" option for root device
	[  +0.299662] systemd-fstab-generator[638]: Ignoring "noauto" option for root device
	[  +4.546943] systemd-fstab-generator[734]: Ignoring "noauto" option for root device
	[  +0.062416] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.270539] systemd-fstab-generator[858]: Ignoring "noauto" option for root device
	[  +6.986789] systemd-fstab-generator[1243]: Ignoring "noauto" option for root device
	[  +0.101470] kauditd_printk_skb: 97 callbacks suppressed
	[ +12.569288] kauditd_printk_skb: 21 callbacks suppressed
	[  +1.279556] systemd-fstab-generator[1792]: Ignoring "noauto" option for root device
	[  +0.238828] systemd-fstab-generator[1806]: Ignoring "noauto" option for root device
	[  +0.245116] systemd-fstab-generator[1820]: Ignoring "noauto" option for root device
	[  +0.187348] systemd-fstab-generator[1832]: Ignoring "noauto" option for root device
	[  +0.466179] systemd-fstab-generator[1863]: Ignoring "noauto" option for root device
	[  +1.082295] systemd-fstab-generator[2050]: Ignoring "noauto" option for root device
	[  +3.766718] systemd-fstab-generator[2316]: Ignoring "noauto" option for root device
	[  +0.079930] kauditd_printk_skb: 186 callbacks suppressed
	[Jun 3 11:57] systemd-fstab-generator[2594]: Ignoring "noauto" option for root device
	[  +0.149045] kauditd_printk_skb: 36 callbacks suppressed
	
	
	==> etcd [01b37cc03619ce1659bf28b93b5e9eb3f5baa5bb2c39a16e51ba33325c202c41] <==
	{"level":"info","ts":"2024-06-03T11:56:58.469884Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-06-03T11:56:58.46991Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-06-03T11:56:58.479545Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"5072550c343bb357 switched to configuration voters=(5796789181283545943)"}
	{"level":"info","ts":"2024-06-03T11:56:58.479675Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"d0d4b5aa9c0518f1","local-member-id":"5072550c343bb357","added-peer-id":"5072550c343bb357","added-peer-peer-urls":["https://192.168.72.223:2380"]}
	{"level":"info","ts":"2024-06-03T11:56:58.479841Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"d0d4b5aa9c0518f1","local-member-id":"5072550c343bb357","cluster-version":"3.5"}
	{"level":"info","ts":"2024-06-03T11:56:58.479925Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-06-03T11:56:58.515969Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-06-03T11:56:58.522512Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"5072550c343bb357","initial-advertise-peer-urls":["https://192.168.72.223:2380"],"listen-peer-urls":["https://192.168.72.223:2380"],"advertise-client-urls":["https://192.168.72.223:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.72.223:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-06-03T11:56:58.522708Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-06-03T11:56:58.521575Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.72.223:2380"}
	{"level":"info","ts":"2024-06-03T11:56:58.523013Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.72.223:2380"}
	{"level":"info","ts":"2024-06-03T11:56:59.513504Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"5072550c343bb357 is starting a new election at term 3"}
	{"level":"info","ts":"2024-06-03T11:56:59.513657Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"5072550c343bb357 became pre-candidate at term 3"}
	{"level":"info","ts":"2024-06-03T11:56:59.513717Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"5072550c343bb357 received MsgPreVoteResp from 5072550c343bb357 at term 3"}
	{"level":"info","ts":"2024-06-03T11:56:59.51376Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"5072550c343bb357 became candidate at term 4"}
	{"level":"info","ts":"2024-06-03T11:56:59.513793Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"5072550c343bb357 received MsgVoteResp from 5072550c343bb357 at term 4"}
	{"level":"info","ts":"2024-06-03T11:56:59.513828Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"5072550c343bb357 became leader at term 4"}
	{"level":"info","ts":"2024-06-03T11:56:59.51386Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 5072550c343bb357 elected leader 5072550c343bb357 at term 4"}
	{"level":"info","ts":"2024-06-03T11:56:59.521716Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"5072550c343bb357","local-member-attributes":"{Name:kubernetes-upgrade-179482 ClientURLs:[https://192.168.72.223:2379]}","request-path":"/0/members/5072550c343bb357/attributes","cluster-id":"d0d4b5aa9c0518f1","publish-timeout":"7s"}
	{"level":"info","ts":"2024-06-03T11:56:59.523463Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-06-03T11:56:59.525153Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.72.223:2379"}
	{"level":"info","ts":"2024-06-03T11:56:59.528489Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-06-03T11:56:59.52881Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-06-03T11:56:59.528868Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-06-03T11:56:59.533154Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> etcd [496068f2ad5f0e4f88afc9f3191657dde11b1800d1b00151b0f38d7c47869d65] <==
	{"level":"info","ts":"2024-06-03T11:56:52.212528Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"5072550c343bb357 is starting a new election at term 2"}
	{"level":"info","ts":"2024-06-03T11:56:52.212635Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"5072550c343bb357 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-06-03T11:56:52.2127Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"5072550c343bb357 received MsgPreVoteResp from 5072550c343bb357 at term 2"}
	{"level":"info","ts":"2024-06-03T11:56:52.212751Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"5072550c343bb357 became candidate at term 3"}
	{"level":"info","ts":"2024-06-03T11:56:52.212782Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"5072550c343bb357 received MsgVoteResp from 5072550c343bb357 at term 3"}
	{"level":"info","ts":"2024-06-03T11:56:52.212817Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"5072550c343bb357 became leader at term 3"}
	{"level":"info","ts":"2024-06-03T11:56:52.21285Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 5072550c343bb357 elected leader 5072550c343bb357 at term 3"}
	{"level":"info","ts":"2024-06-03T11:56:52.222837Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"5072550c343bb357","local-member-attributes":"{Name:kubernetes-upgrade-179482 ClientURLs:[https://192.168.72.223:2379]}","request-path":"/0/members/5072550c343bb357/attributes","cluster-id":"d0d4b5aa9c0518f1","publish-timeout":"7s"}
	{"level":"info","ts":"2024-06-03T11:56:52.22364Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-06-03T11:56:52.224049Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-06-03T11:56:52.226047Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-06-03T11:56:52.227466Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-06-03T11:56:52.235826Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-06-03T11:56:52.300762Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.72.223:2379"}
	{"level":"info","ts":"2024-06-03T11:56:52.443673Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-06-03T11:56:52.443932Z","caller":"embed/etcd.go:375","msg":"closing etcd server","name":"kubernetes-upgrade-179482","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.72.223:2380"],"advertise-client-urls":["https://192.168.72.223:2379"]}
	{"level":"warn","ts":"2024-06-03T11:56:52.444186Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-06-03T11:56:52.4446Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-06-03T11:56:52.457436Z","caller":"embed/config_logging.go:169","msg":"rejected connection","remote-addr":"127.0.0.1:34310","server-name":"","error":"write tcp 127.0.0.1:2379->127.0.0.1:34310: use of closed network connection"}
	{"level":"warn","ts":"2024-06-03T11:56:52.461339Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.72.223:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-06-03T11:56:52.461529Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.72.223:2379: use of closed network connection"}
	{"level":"info","ts":"2024-06-03T11:56:52.461873Z","caller":"etcdserver/server.go:1471","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"5072550c343bb357","current-leader-member-id":"5072550c343bb357"}
	{"level":"info","ts":"2024-06-03T11:56:52.471211Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.72.223:2380"}
	{"level":"info","ts":"2024-06-03T11:56:52.471582Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.72.223:2380"}
	{"level":"info","ts":"2024-06-03T11:56:52.471667Z","caller":"embed/etcd.go:377","msg":"closed etcd server","name":"kubernetes-upgrade-179482","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.72.223:2380"],"advertise-client-urls":["https://192.168.72.223:2379"]}
	
	
	==> kernel <==
	 11:57:06 up 1 min,  0 users,  load average: 2.21, 0.59, 0.20
	Linux kubernetes-upgrade-179482 5.10.207 #1 SMP Wed May 22 22:17:16 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [2f3943e07ab262c875a05ff6b359b607003c025080255ad64aaf751e7fdff1d2] <==
	I0603 11:56:50.726642       1 options.go:221] external host was not specified, using 192.168.72.223
	I0603 11:56:50.729534       1 server.go:148] Version: v1.30.1
	I0603 11:56:50.729609       1 server.go:150] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0603 11:56:52.207919       1 shared_informer.go:313] Waiting for caches to sync for node_authorizer
	I0603 11:56:52.254544       1 shared_informer.go:313] Waiting for caches to sync for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0603 11:56:52.264673       1 plugins.go:157] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0603 11:56:52.284553       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0603 11:56:52.284838       1 instance.go:299] Using reconciler: lease
	W0603 11:56:52.451486       1 logging.go:59] [core] [Channel #10 SubChannel #11] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: EOF"
	W0603 11:56:52.459620       1 logging.go:59] [core] [Channel #1 SubChannel #3] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0603 11:56:52.459916       1 logging.go:59] [core] [Channel #5 SubChannel #6] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0603 11:56:52.460162       1 logging.go:59] [core] [Channel #2 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-apiserver [65b3f18bb288b05884a88e5395ecc88a95d46ebdf3666af459257ab95996ad15] <==
	I0603 11:57:01.331928       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0603 11:57:01.332069       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0603 11:57:01.392843       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0603 11:57:01.400322       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0603 11:57:01.400468       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0603 11:57:01.400693       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0603 11:57:01.400781       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	I0603 11:57:01.402478       1 aggregator.go:165] initial CRD sync complete...
	I0603 11:57:01.402552       1 autoregister_controller.go:141] Starting autoregister controller
	I0603 11:57:01.402582       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0603 11:57:01.402611       1 cache.go:39] Caches are synced for autoregister controller
	I0603 11:57:01.434976       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0603 11:57:01.458129       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0603 11:57:01.458186       1 policy_source.go:224] refreshing policies
	I0603 11:57:01.478065       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0603 11:57:01.478721       1 shared_informer.go:320] Caches are synced for configmaps
	I0603 11:57:01.478895       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	E0603 11:57:01.488106       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0603 11:57:01.510239       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0603 11:57:02.279751       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0603 11:57:03.346328       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0603 11:57:03.384075       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0603 11:57:03.435203       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0603 11:57:03.473306       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0603 11:57:03.487436       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	
	
	==> kube-controller-manager [9a4b9e58c50318b12681eae9f98ccfa6e79a3e580882dff361f999578e8c91c2] <==
	
	
	==> kube-controller-manager [b0e5c0859b78be2bffa650790aa779662748efc4c18b862e2583d4cea4920bef] <==
	I0603 11:57:03.777326       1 cleaner.go:83] "Starting CSR cleaner controller" logger="certificatesigningrequest-cleaner-controller"
	E0603 11:57:03.779950       1 core.go:274] "Failed to start cloud node lifecycle controller" err="no cloud provider provided" logger="cloud-node-lifecycle-controller"
	I0603 11:57:03.780046       1 controllermanager.go:739] "Warning: skipping controller" controller="cloud-node-lifecycle-controller"
	I0603 11:57:03.818191       1 controllermanager.go:761] "Started controller" controller="replicationcontroller-controller"
	I0603 11:57:03.818290       1 replica_set.go:214] "Starting controller" logger="replicationcontroller-controller" name="replicationcontroller"
	I0603 11:57:03.818300       1 shared_informer.go:313] Waiting for caches to sync for ReplicationController
	I0603 11:57:03.969309       1 controllermanager.go:761] "Started controller" controller="garbage-collector-controller"
	I0603 11:57:03.969559       1 garbagecollector.go:146] "Starting controller" logger="garbage-collector-controller" controller="garbagecollector"
	I0603 11:57:03.969574       1 shared_informer.go:313] Waiting for caches to sync for garbage collector
	I0603 11:57:03.969601       1 graph_builder.go:336] "Running" logger="garbage-collector-controller" component="GraphBuilder"
	I0603 11:57:04.018744       1 controllermanager.go:761] "Started controller" controller="persistentvolume-expander-controller"
	I0603 11:57:04.018823       1 expand_controller.go:329] "Starting expand controller" logger="persistentvolume-expander-controller"
	I0603 11:57:04.018834       1 shared_informer.go:313] Waiting for caches to sync for expand
	I0603 11:57:04.119786       1 controllermanager.go:761] "Started controller" controller="clusterrole-aggregation-controller"
	I0603 11:57:04.119869       1 clusterroleaggregation_controller.go:189] "Starting ClusterRoleAggregator controller" logger="clusterrole-aggregation-controller"
	I0603 11:57:04.119881       1 shared_informer.go:313] Waiting for caches to sync for ClusterRoleAggregator
	I0603 11:57:04.218085       1 controllermanager.go:761] "Started controller" controller="persistentvolume-protection-controller"
	I0603 11:57:04.218169       1 pv_protection_controller.go:78] "Starting PV protection controller" logger="persistentvolume-protection-controller"
	I0603 11:57:04.218182       1 shared_informer.go:313] Waiting for caches to sync for PV protection
	I0603 11:57:04.268724       1 controllermanager.go:761] "Started controller" controller="serviceaccount-controller"
	I0603 11:57:04.268785       1 serviceaccounts_controller.go:111] "Starting service account controller" logger="serviceaccount-controller"
	I0603 11:57:04.268796       1 shared_informer.go:313] Waiting for caches to sync for service account
	I0603 11:57:04.320639       1 controllermanager.go:761] "Started controller" controller="job-controller"
	I0603 11:57:04.320817       1 job_controller.go:224] "Starting job controller" logger="job-controller"
	I0603 11:57:04.320830       1 shared_informer.go:313] Waiting for caches to sync for job
	
	
	==> kube-scheduler [7cbc129036d45553bb48a38cc803ab2c03daf6d58d6266d82554816dea799410] <==
	I0603 11:56:59.791968       1 serving.go:380] Generated self-signed cert in-memory
	I0603 11:57:01.441888       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.1"
	I0603 11:57:01.441931       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0603 11:57:01.447510       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0603 11:57:01.447628       1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController
	I0603 11:57:01.447637       1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController
	I0603 11:57:01.447685       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0603 11:57:01.457972       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0603 11:57:01.458019       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0603 11:57:01.458043       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I0603 11:57:01.458051       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I0603 11:57:01.548182       1 shared_informer.go:320] Caches are synced for RequestHeaderAuthRequestController
	I0603 11:57:01.558851       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I0603 11:57:01.558971       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [b6b3abe451698a92fd0b66056c3f81e1d0da2db4a1a07ebb2e4fc39026b922d7] <==
	I0603 11:56:52.102048       1 serving.go:380] Generated self-signed cert in-memory
	
	
	==> kubelet <==
	Jun 03 11:56:57 kubernetes-upgrade-179482 kubelet[2323]: I0603 11:56:57.730033    2323 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-certs\" (UniqueName: \"kubernetes.io/host-path/2ef691adf19f0315ffcdacd8a25d59a4-etcd-certs\") pod \"etcd-kubernetes-upgrade-179482\" (UID: \"2ef691adf19f0315ffcdacd8a25d59a4\") " pod="kube-system/etcd-kubernetes-upgrade-179482"
	Jun 03 11:56:57 kubernetes-upgrade-179482 kubelet[2323]: I0603 11:56:57.730071    2323 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4232830eca4867e0e52b511b9adafaac-k8s-certs\") pod \"kube-apiserver-kubernetes-upgrade-179482\" (UID: \"4232830eca4867e0e52b511b9adafaac\") " pod="kube-system/kube-apiserver-kubernetes-upgrade-179482"
	Jun 03 11:56:57 kubernetes-upgrade-179482 kubelet[2323]: I0603 11:56:57.730098    2323 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4232830eca4867e0e52b511b9adafaac-usr-share-ca-certificates\") pod \"kube-apiserver-kubernetes-upgrade-179482\" (UID: \"4232830eca4867e0e52b511b9adafaac\") " pod="kube-system/kube-apiserver-kubernetes-upgrade-179482"
	Jun 03 11:56:57 kubernetes-upgrade-179482 kubelet[2323]: I0603 11:56:57.730140    2323 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a69b4e624ea65f29475320c9ae3f637e-ca-certs\") pod \"kube-controller-manager-kubernetes-upgrade-179482\" (UID: \"a69b4e624ea65f29475320c9ae3f637e\") " pod="kube-system/kube-controller-manager-kubernetes-upgrade-179482"
	Jun 03 11:56:57 kubernetes-upgrade-179482 kubelet[2323]: I0603 11:56:57.730171    2323 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/a69b4e624ea65f29475320c9ae3f637e-flexvolume-dir\") pod \"kube-controller-manager-kubernetes-upgrade-179482\" (UID: \"a69b4e624ea65f29475320c9ae3f637e\") " pod="kube-system/kube-controller-manager-kubernetes-upgrade-179482"
	Jun 03 11:56:57 kubernetes-upgrade-179482 kubelet[2323]: I0603 11:56:57.730222    2323 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a69b4e624ea65f29475320c9ae3f637e-kubeconfig\") pod \"kube-controller-manager-kubernetes-upgrade-179482\" (UID: \"a69b4e624ea65f29475320c9ae3f637e\") " pod="kube-system/kube-controller-manager-kubernetes-upgrade-179482"
	Jun 03 11:56:57 kubernetes-upgrade-179482 kubelet[2323]: I0603 11:56:57.730247    2323 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/bc0fe3a4c0d99ebb3d5edb54763360dd-kubeconfig\") pod \"kube-scheduler-kubernetes-upgrade-179482\" (UID: \"bc0fe3a4c0d99ebb3d5edb54763360dd\") " pod="kube-system/kube-scheduler-kubernetes-upgrade-179482"
	Jun 03 11:56:57 kubernetes-upgrade-179482 kubelet[2323]: I0603 11:56:57.730268    2323 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-data\" (UniqueName: \"kubernetes.io/host-path/2ef691adf19f0315ffcdacd8a25d59a4-etcd-data\") pod \"etcd-kubernetes-upgrade-179482\" (UID: \"2ef691adf19f0315ffcdacd8a25d59a4\") " pod="kube-system/etcd-kubernetes-upgrade-179482"
	Jun 03 11:56:57 kubernetes-upgrade-179482 kubelet[2323]: I0603 11:56:57.730356    2323 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4232830eca4867e0e52b511b9adafaac-ca-certs\") pod \"kube-apiserver-kubernetes-upgrade-179482\" (UID: \"4232830eca4867e0e52b511b9adafaac\") " pod="kube-system/kube-apiserver-kubernetes-upgrade-179482"
	Jun 03 11:56:57 kubernetes-upgrade-179482 kubelet[2323]: I0603 11:56:57.730446    2323 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a69b4e624ea65f29475320c9ae3f637e-k8s-certs\") pod \"kube-controller-manager-kubernetes-upgrade-179482\" (UID: \"a69b4e624ea65f29475320c9ae3f637e\") " pod="kube-system/kube-controller-manager-kubernetes-upgrade-179482"
	Jun 03 11:56:57 kubernetes-upgrade-179482 kubelet[2323]: I0603 11:56:57.730472    2323 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a69b4e624ea65f29475320c9ae3f637e-usr-share-ca-certificates\") pod \"kube-controller-manager-kubernetes-upgrade-179482\" (UID: \"a69b4e624ea65f29475320c9ae3f637e\") " pod="kube-system/kube-controller-manager-kubernetes-upgrade-179482"
	Jun 03 11:56:57 kubernetes-upgrade-179482 kubelet[2323]: I0603 11:56:57.730836    2323 kubelet_node_status.go:73] "Attempting to register node" node="kubernetes-upgrade-179482"
	Jun 03 11:56:57 kubernetes-upgrade-179482 kubelet[2323]: E0603 11:56:57.732035    2323 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.72.223:8443: connect: connection refused" node="kubernetes-upgrade-179482"
	Jun 03 11:56:57 kubernetes-upgrade-179482 kubelet[2323]: I0603 11:56:57.876242    2323 scope.go:117] "RemoveContainer" containerID="b6b3abe451698a92fd0b66056c3f81e1d0da2db4a1a07ebb2e4fc39026b922d7"
	Jun 03 11:56:57 kubernetes-upgrade-179482 kubelet[2323]: I0603 11:56:57.877649    2323 scope.go:117] "RemoveContainer" containerID="496068f2ad5f0e4f88afc9f3191657dde11b1800d1b00151b0f38d7c47869d65"
	Jun 03 11:56:57 kubernetes-upgrade-179482 kubelet[2323]: I0603 11:56:57.878995    2323 scope.go:117] "RemoveContainer" containerID="2f3943e07ab262c875a05ff6b359b607003c025080255ad64aaf751e7fdff1d2"
	Jun 03 11:56:57 kubernetes-upgrade-179482 kubelet[2323]: I0603 11:56:57.879748    2323 scope.go:117] "RemoveContainer" containerID="9a4b9e58c50318b12681eae9f98ccfa6e79a3e580882dff361f999578e8c91c2"
	Jun 03 11:56:58 kubernetes-upgrade-179482 kubelet[2323]: E0603 11:56:58.034090    2323 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/kubernetes-upgrade-179482?timeout=10s\": dial tcp 192.168.72.223:8443: connect: connection refused" interval="800ms"
	Jun 03 11:56:58 kubernetes-upgrade-179482 kubelet[2323]: I0603 11:56:58.133641    2323 kubelet_node_status.go:73] "Attempting to register node" node="kubernetes-upgrade-179482"
	Jun 03 11:56:58 kubernetes-upgrade-179482 kubelet[2323]: E0603 11:56:58.140079    2323 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.72.223:8443: connect: connection refused" node="kubernetes-upgrade-179482"
	Jun 03 11:56:58 kubernetes-upgrade-179482 kubelet[2323]: I0603 11:56:58.941830    2323 kubelet_node_status.go:73] "Attempting to register node" node="kubernetes-upgrade-179482"
	Jun 03 11:57:01 kubernetes-upgrade-179482 kubelet[2323]: I0603 11:57:01.404996    2323 apiserver.go:52] "Watching apiserver"
	Jun 03 11:57:01 kubernetes-upgrade-179482 kubelet[2323]: I0603 11:57:01.429610    2323 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
	Jun 03 11:57:01 kubernetes-upgrade-179482 kubelet[2323]: I0603 11:57:01.514444    2323 kubelet_node_status.go:112] "Node was previously registered" node="kubernetes-upgrade-179482"
	Jun 03 11:57:01 kubernetes-upgrade-179482 kubelet[2323]: I0603 11:57:01.514919    2323 kubelet_node_status.go:76] "Successfully registered node" node="kubernetes-upgrade-179482"
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p kubernetes-upgrade-179482 -n kubernetes-upgrade-179482
helpers_test.go:261: (dbg) Run:  kubectl --context kubernetes-upgrade-179482 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: storage-provisioner
helpers_test.go:274: ======> post-mortem[TestKubernetesUpgrade]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context kubernetes-upgrade-179482 describe pod storage-provisioner
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context kubernetes-upgrade-179482 describe pod storage-provisioner: exit status 1 (71.273799ms)

** stderr ** 
	Error from server (NotFound): pods "storage-provisioner" not found

** /stderr **
helpers_test.go:279: kubectl --context kubernetes-upgrade-179482 describe pod storage-provisioner: exit status 1
helpers_test.go:175: Cleaning up "kubernetes-upgrade-179482" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-179482
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-179482: (1.206091894s)
--- FAIL: TestKubernetesUpgrade (390.11s)

x
+
TestStartStop/group/old-k8s-version/serial/FirstStart (273.52s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-905554 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p old-k8s-version-905554 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0: exit status 109 (4m33.245428736s)

-- stdout --
	* [old-k8s-version-905554] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19008
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19008-7755/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19008-7755/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	* Starting "old-k8s-version-905554" primary control-plane node in "old-k8s-version-905554" cluster
	* Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

-- /stdout --
** stderr ** 
	I0603 11:56:59.954096   67501 out.go:291] Setting OutFile to fd 1 ...
	I0603 11:56:59.954231   67501 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0603 11:56:59.954242   67501 out.go:304] Setting ErrFile to fd 2...
	I0603 11:56:59.954249   67501 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0603 11:56:59.954503   67501 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19008-7755/.minikube/bin
	I0603 11:56:59.955232   67501 out.go:298] Setting JSON to false
	I0603 11:56:59.956626   67501 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":5965,"bootTime":1717409855,"procs":298,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1060-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0603 11:56:59.956704   67501 start.go:139] virtualization: kvm guest
	I0603 11:56:59.959087   67501 out.go:177] * [old-k8s-version-905554] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0603 11:56:59.960889   67501 out.go:177]   - MINIKUBE_LOCATION=19008
	I0603 11:56:59.962159   67501 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0603 11:56:59.960942   67501 notify.go:220] Checking for updates...
	I0603 11:56:59.964732   67501 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19008-7755/kubeconfig
	I0603 11:56:59.966070   67501 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19008-7755/.minikube
	I0603 11:56:59.967269   67501 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0603 11:56:59.968428   67501 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0603 11:56:59.970236   67501 config.go:182] Loaded profile config "bridge-034991": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0603 11:56:59.970340   67501 config.go:182] Loaded profile config "calico-034991": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0603 11:56:59.970420   67501 config.go:182] Loaded profile config "kubernetes-upgrade-179482": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0603 11:56:59.970520   67501 driver.go:392] Setting default libvirt URI to qemu:///system
	I0603 11:57:00.018052   67501 out.go:177] * Using the kvm2 driver based on user configuration
	I0603 11:57:00.019276   67501 start.go:297] selected driver: kvm2
	I0603 11:57:00.019304   67501 start.go:901] validating driver "kvm2" against <nil>
	I0603 11:57:00.019320   67501 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0603 11:57:00.020249   67501 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0603 11:57:00.020352   67501 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19008-7755/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0603 11:57:00.038824   67501 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0603 11:57:00.038881   67501 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0603 11:57:00.039212   67501 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0603 11:57:00.039252   67501 cni.go:84] Creating CNI manager for ""
	I0603 11:57:00.039264   67501 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0603 11:57:00.039274   67501 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0603 11:57:00.039344   67501 start.go:340] cluster config:
	{Name:old-k8s-version-905554 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-905554 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0603 11:57:00.039474   67501 iso.go:125] acquiring lock: {Name:mkdc8e745fc6a0fd8e502f6ad2510510ae9abf27 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0603 11:57:00.041379   67501 out.go:177] * Starting "old-k8s-version-905554" primary control-plane node in "old-k8s-version-905554" cluster
	I0603 11:57:00.043029   67501 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0603 11:57:00.043101   67501 preload.go:147] Found local preload: /home/jenkins/minikube-integration/19008-7755/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0603 11:57:00.043110   67501 cache.go:56] Caching tarball of preloaded images
	I0603 11:57:00.043191   67501 preload.go:173] Found /home/jenkins/minikube-integration/19008-7755/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0603 11:57:00.043199   67501 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0603 11:57:00.043298   67501 profile.go:143] Saving config to /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/old-k8s-version-905554/config.json ...
	I0603 11:57:00.043320   67501 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/old-k8s-version-905554/config.json: {Name:mk53959f68545452763d1b73ef91e0947b64a6ab Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 11:57:00.043447   67501 start.go:360] acquireMachinesLock for old-k8s-version-905554: {Name:mk7389fd163f37e28f7d4d842bab151f7e27bc7c Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0603 11:57:00.043488   67501 start.go:364] duration metric: took 21.579µs to acquireMachinesLock for "old-k8s-version-905554"
	I0603 11:57:00.043503   67501 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-905554 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-905554 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0603 11:57:00.043559   67501 start.go:125] createHost starting for "" (driver="kvm2")
	I0603 11:57:00.045879   67501 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0603 11:57:00.046050   67501 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 11:57:00.046102   67501 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 11:57:00.064746   67501 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45861
	I0603 11:57:00.065231   67501 main.go:141] libmachine: () Calling .GetVersion
	I0603 11:57:00.065884   67501 main.go:141] libmachine: Using API Version  1
	I0603 11:57:00.065911   67501 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 11:57:00.066266   67501 main.go:141] libmachine: () Calling .GetMachineName
	I0603 11:57:00.066472   67501 main.go:141] libmachine: (old-k8s-version-905554) Calling .GetMachineName
	I0603 11:57:00.066654   67501 main.go:141] libmachine: (old-k8s-version-905554) Calling .DriverName
	I0603 11:57:00.066901   67501 start.go:159] libmachine.API.Create for "old-k8s-version-905554" (driver="kvm2")
	I0603 11:57:00.066927   67501 client.go:168] LocalClient.Create starting
	I0603 11:57:00.066960   67501 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19008-7755/.minikube/certs/ca.pem
	I0603 11:57:00.067005   67501 main.go:141] libmachine: Decoding PEM data...
	I0603 11:57:00.067024   67501 main.go:141] libmachine: Parsing certificate...
	I0603 11:57:00.067116   67501 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19008-7755/.minikube/certs/cert.pem
	I0603 11:57:00.067148   67501 main.go:141] libmachine: Decoding PEM data...
	I0603 11:57:00.067167   67501 main.go:141] libmachine: Parsing certificate...
	I0603 11:57:00.067194   67501 main.go:141] libmachine: Running pre-create checks...
	I0603 11:57:00.067206   67501 main.go:141] libmachine: (old-k8s-version-905554) Calling .PreCreateCheck
	I0603 11:57:00.067647   67501 main.go:141] libmachine: (old-k8s-version-905554) Calling .GetConfigRaw
	I0603 11:57:00.068172   67501 main.go:141] libmachine: Creating machine...
	I0603 11:57:00.068190   67501 main.go:141] libmachine: (old-k8s-version-905554) Calling .Create
	I0603 11:57:00.068748   67501 main.go:141] libmachine: (old-k8s-version-905554) Creating KVM machine...
	I0603 11:57:00.069870   67501 main.go:141] libmachine: (old-k8s-version-905554) DBG | found existing default KVM network
	I0603 11:57:00.071931   67501 main.go:141] libmachine: (old-k8s-version-905554) DBG | I0603 11:57:00.071636   67523 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000015920}
	I0603 11:57:00.071950   67501 main.go:141] libmachine: (old-k8s-version-905554) DBG | created network xml: 
	I0603 11:57:00.071962   67501 main.go:141] libmachine: (old-k8s-version-905554) DBG | <network>
	I0603 11:57:00.071971   67501 main.go:141] libmachine: (old-k8s-version-905554) DBG |   <name>mk-old-k8s-version-905554</name>
	I0603 11:57:00.071981   67501 main.go:141] libmachine: (old-k8s-version-905554) DBG |   <dns enable='no'/>
	I0603 11:57:00.071988   67501 main.go:141] libmachine: (old-k8s-version-905554) DBG |   
	I0603 11:57:00.071996   67501 main.go:141] libmachine: (old-k8s-version-905554) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0603 11:57:00.072009   67501 main.go:141] libmachine: (old-k8s-version-905554) DBG |     <dhcp>
	I0603 11:57:00.072016   67501 main.go:141] libmachine: (old-k8s-version-905554) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0603 11:57:00.072023   67501 main.go:141] libmachine: (old-k8s-version-905554) DBG |     </dhcp>
	I0603 11:57:00.072035   67501 main.go:141] libmachine: (old-k8s-version-905554) DBG |   </ip>
	I0603 11:57:00.072042   67501 main.go:141] libmachine: (old-k8s-version-905554) DBG |   
	I0603 11:57:00.072048   67501 main.go:141] libmachine: (old-k8s-version-905554) DBG | </network>
	I0603 11:57:00.072057   67501 main.go:141] libmachine: (old-k8s-version-905554) DBG | 
	I0603 11:57:00.076916   67501 main.go:141] libmachine: (old-k8s-version-905554) DBG | trying to create private KVM network mk-old-k8s-version-905554 192.168.39.0/24...
	I0603 11:57:00.183767   67501 main.go:141] libmachine: (old-k8s-version-905554) Setting up store path in /home/jenkins/minikube-integration/19008-7755/.minikube/machines/old-k8s-version-905554 ...
	I0603 11:57:00.183808   67501 main.go:141] libmachine: (old-k8s-version-905554) DBG | private KVM network mk-old-k8s-version-905554 192.168.39.0/24 created
	I0603 11:57:00.183835   67501 main.go:141] libmachine: (old-k8s-version-905554) Building disk image from file:///home/jenkins/minikube-integration/19008-7755/.minikube/cache/iso/amd64/minikube-v1.33.1-1716398070-18934-amd64.iso
	I0603 11:57:00.183848   67501 main.go:141] libmachine: (old-k8s-version-905554) DBG | I0603 11:57:00.182104   67523 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19008-7755/.minikube
	I0603 11:57:00.183865   67501 main.go:141] libmachine: (old-k8s-version-905554) Downloading /home/jenkins/minikube-integration/19008-7755/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19008-7755/.minikube/cache/iso/amd64/minikube-v1.33.1-1716398070-18934-amd64.iso...
	I0603 11:57:00.476438   67501 main.go:141] libmachine: (old-k8s-version-905554) DBG | I0603 11:57:00.476249   67523 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19008-7755/.minikube/machines/old-k8s-version-905554/id_rsa...
	I0603 11:57:00.619353   67501 main.go:141] libmachine: (old-k8s-version-905554) DBG | I0603 11:57:00.619208   67523 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19008-7755/.minikube/machines/old-k8s-version-905554/old-k8s-version-905554.rawdisk...
	I0603 11:57:00.619382   67501 main.go:141] libmachine: (old-k8s-version-905554) DBG | Writing magic tar header
	I0603 11:57:00.619405   67501 main.go:141] libmachine: (old-k8s-version-905554) DBG | Writing SSH key tar header
	I0603 11:57:00.619418   67501 main.go:141] libmachine: (old-k8s-version-905554) DBG | I0603 11:57:00.619371   67523 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19008-7755/.minikube/machines/old-k8s-version-905554 ...
	I0603 11:57:00.619557   67501 main.go:141] libmachine: (old-k8s-version-905554) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19008-7755/.minikube/machines/old-k8s-version-905554
	I0603 11:57:00.619594   67501 main.go:141] libmachine: (old-k8s-version-905554) Setting executable bit set on /home/jenkins/minikube-integration/19008-7755/.minikube/machines/old-k8s-version-905554 (perms=drwx------)
	I0603 11:57:00.619606   67501 main.go:141] libmachine: (old-k8s-version-905554) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19008-7755/.minikube/machines
	I0603 11:57:00.619624   67501 main.go:141] libmachine: (old-k8s-version-905554) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19008-7755/.minikube
	I0603 11:57:00.619637   67501 main.go:141] libmachine: (old-k8s-version-905554) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19008-7755
	I0603 11:57:00.619651   67501 main.go:141] libmachine: (old-k8s-version-905554) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0603 11:57:00.619659   67501 main.go:141] libmachine: (old-k8s-version-905554) DBG | Checking permissions on dir: /home/jenkins
	I0603 11:57:00.619672   67501 main.go:141] libmachine: (old-k8s-version-905554) DBG | Checking permissions on dir: /home
	I0603 11:57:00.619681   67501 main.go:141] libmachine: (old-k8s-version-905554) DBG | Skipping /home - not owner
	I0603 11:57:00.619693   67501 main.go:141] libmachine: (old-k8s-version-905554) Setting executable bit set on /home/jenkins/minikube-integration/19008-7755/.minikube/machines (perms=drwxr-xr-x)
	I0603 11:57:00.619704   67501 main.go:141] libmachine: (old-k8s-version-905554) Setting executable bit set on /home/jenkins/minikube-integration/19008-7755/.minikube (perms=drwxr-xr-x)
	I0603 11:57:00.619720   67501 main.go:141] libmachine: (old-k8s-version-905554) Setting executable bit set on /home/jenkins/minikube-integration/19008-7755 (perms=drwxrwxr-x)
	I0603 11:57:00.619730   67501 main.go:141] libmachine: (old-k8s-version-905554) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0603 11:57:00.619743   67501 main.go:141] libmachine: (old-k8s-version-905554) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0603 11:57:00.619751   67501 main.go:141] libmachine: (old-k8s-version-905554) Creating domain...
	I0603 11:57:00.621020   67501 main.go:141] libmachine: (old-k8s-version-905554) define libvirt domain using xml: 
	I0603 11:57:00.621036   67501 main.go:141] libmachine: (old-k8s-version-905554) <domain type='kvm'>
	I0603 11:57:00.621047   67501 main.go:141] libmachine: (old-k8s-version-905554)   <name>old-k8s-version-905554</name>
	I0603 11:57:00.621055   67501 main.go:141] libmachine: (old-k8s-version-905554)   <memory unit='MiB'>2200</memory>
	I0603 11:57:00.621064   67501 main.go:141] libmachine: (old-k8s-version-905554)   <vcpu>2</vcpu>
	I0603 11:57:00.621075   67501 main.go:141] libmachine: (old-k8s-version-905554)   <features>
	I0603 11:57:00.621083   67501 main.go:141] libmachine: (old-k8s-version-905554)     <acpi/>
	I0603 11:57:00.621090   67501 main.go:141] libmachine: (old-k8s-version-905554)     <apic/>
	I0603 11:57:00.621099   67501 main.go:141] libmachine: (old-k8s-version-905554)     <pae/>
	I0603 11:57:00.621118   67501 main.go:141] libmachine: (old-k8s-version-905554)     
	I0603 11:57:00.621139   67501 main.go:141] libmachine: (old-k8s-version-905554)   </features>
	I0603 11:57:00.621149   67501 main.go:141] libmachine: (old-k8s-version-905554)   <cpu mode='host-passthrough'>
	I0603 11:57:00.621156   67501 main.go:141] libmachine: (old-k8s-version-905554)   
	I0603 11:57:00.621162   67501 main.go:141] libmachine: (old-k8s-version-905554)   </cpu>
	I0603 11:57:00.621170   67501 main.go:141] libmachine: (old-k8s-version-905554)   <os>
	I0603 11:57:00.621178   67501 main.go:141] libmachine: (old-k8s-version-905554)     <type>hvm</type>
	I0603 11:57:00.621190   67501 main.go:141] libmachine: (old-k8s-version-905554)     <boot dev='cdrom'/>
	I0603 11:57:00.621200   67501 main.go:141] libmachine: (old-k8s-version-905554)     <boot dev='hd'/>
	I0603 11:57:00.621210   67501 main.go:141] libmachine: (old-k8s-version-905554)     <bootmenu enable='no'/>
	I0603 11:57:00.621219   67501 main.go:141] libmachine: (old-k8s-version-905554)   </os>
	I0603 11:57:00.621228   67501 main.go:141] libmachine: (old-k8s-version-905554)   <devices>
	I0603 11:57:00.621239   67501 main.go:141] libmachine: (old-k8s-version-905554)     <disk type='file' device='cdrom'>
	I0603 11:57:00.621257   67501 main.go:141] libmachine: (old-k8s-version-905554)       <source file='/home/jenkins/minikube-integration/19008-7755/.minikube/machines/old-k8s-version-905554/boot2docker.iso'/>
	I0603 11:57:00.621269   67501 main.go:141] libmachine: (old-k8s-version-905554)       <target dev='hdc' bus='scsi'/>
	I0603 11:57:00.621281   67501 main.go:141] libmachine: (old-k8s-version-905554)       <readonly/>
	I0603 11:57:00.621296   67501 main.go:141] libmachine: (old-k8s-version-905554)     </disk>
	I0603 11:57:00.621309   67501 main.go:141] libmachine: (old-k8s-version-905554)     <disk type='file' device='disk'>
	I0603 11:57:00.621322   67501 main.go:141] libmachine: (old-k8s-version-905554)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0603 11:57:00.621336   67501 main.go:141] libmachine: (old-k8s-version-905554)       <source file='/home/jenkins/minikube-integration/19008-7755/.minikube/machines/old-k8s-version-905554/old-k8s-version-905554.rawdisk'/>
	I0603 11:57:00.621344   67501 main.go:141] libmachine: (old-k8s-version-905554)       <target dev='hda' bus='virtio'/>
	I0603 11:57:00.621353   67501 main.go:141] libmachine: (old-k8s-version-905554)     </disk>
	I0603 11:57:00.621367   67501 main.go:141] libmachine: (old-k8s-version-905554)     <interface type='network'>
	I0603 11:57:00.621380   67501 main.go:141] libmachine: (old-k8s-version-905554)       <source network='mk-old-k8s-version-905554'/>
	I0603 11:57:00.621391   67501 main.go:141] libmachine: (old-k8s-version-905554)       <model type='virtio'/>
	I0603 11:57:00.621402   67501 main.go:141] libmachine: (old-k8s-version-905554)     </interface>
	I0603 11:57:00.621410   67501 main.go:141] libmachine: (old-k8s-version-905554)     <interface type='network'>
	I0603 11:57:00.621419   67501 main.go:141] libmachine: (old-k8s-version-905554)       <source network='default'/>
	I0603 11:57:00.621429   67501 main.go:141] libmachine: (old-k8s-version-905554)       <model type='virtio'/>
	I0603 11:57:00.621441   67501 main.go:141] libmachine: (old-k8s-version-905554)     </interface>
	I0603 11:57:00.621450   67501 main.go:141] libmachine: (old-k8s-version-905554)     <serial type='pty'>
	I0603 11:57:00.621463   67501 main.go:141] libmachine: (old-k8s-version-905554)       <target port='0'/>
	I0603 11:57:00.621473   67501 main.go:141] libmachine: (old-k8s-version-905554)     </serial>
	I0603 11:57:00.621485   67501 main.go:141] libmachine: (old-k8s-version-905554)     <console type='pty'>
	I0603 11:57:00.621493   67501 main.go:141] libmachine: (old-k8s-version-905554)       <target type='serial' port='0'/>
	I0603 11:57:00.621501   67501 main.go:141] libmachine: (old-k8s-version-905554)     </console>
	I0603 11:57:00.621509   67501 main.go:141] libmachine: (old-k8s-version-905554)     <rng model='virtio'>
	I0603 11:57:00.621525   67501 main.go:141] libmachine: (old-k8s-version-905554)       <backend model='random'>/dev/random</backend>
	I0603 11:57:00.621535   67501 main.go:141] libmachine: (old-k8s-version-905554)     </rng>
	I0603 11:57:00.621544   67501 main.go:141] libmachine: (old-k8s-version-905554)     
	I0603 11:57:00.621553   67501 main.go:141] libmachine: (old-k8s-version-905554)     
	I0603 11:57:00.621562   67501 main.go:141] libmachine: (old-k8s-version-905554)   </devices>
	I0603 11:57:00.621572   67501 main.go:141] libmachine: (old-k8s-version-905554) </domain>
	I0603 11:57:00.621582   67501 main.go:141] libmachine: (old-k8s-version-905554) 
	I0603 11:57:00.626445   67501 main.go:141] libmachine: (old-k8s-version-905554) DBG | domain old-k8s-version-905554 has defined MAC address 52:54:00:56:c2:3d in network default
	I0603 11:57:00.627310   67501 main.go:141] libmachine: (old-k8s-version-905554) Ensuring networks are active...
	I0603 11:57:00.627333   67501 main.go:141] libmachine: (old-k8s-version-905554) DBG | domain old-k8s-version-905554 has defined MAC address 52:54:00:3d:ed:07 in network mk-old-k8s-version-905554
	I0603 11:57:00.628331   67501 main.go:141] libmachine: (old-k8s-version-905554) Ensuring network default is active
	I0603 11:57:00.628780   67501 main.go:141] libmachine: (old-k8s-version-905554) Ensuring network mk-old-k8s-version-905554 is active
	I0603 11:57:00.629459   67501 main.go:141] libmachine: (old-k8s-version-905554) Getting domain xml...
	I0603 11:57:00.630489   67501 main.go:141] libmachine: (old-k8s-version-905554) Creating domain...
	I0603 11:57:02.202982   67501 main.go:141] libmachine: (old-k8s-version-905554) Waiting to get IP...
	I0603 11:57:02.204116   67501 main.go:141] libmachine: (old-k8s-version-905554) DBG | domain old-k8s-version-905554 has defined MAC address 52:54:00:3d:ed:07 in network mk-old-k8s-version-905554
	I0603 11:57:02.204877   67501 main.go:141] libmachine: (old-k8s-version-905554) DBG | unable to find current IP address of domain old-k8s-version-905554 in network mk-old-k8s-version-905554
	I0603 11:57:02.205060   67501 main.go:141] libmachine: (old-k8s-version-905554) DBG | I0603 11:57:02.204976   67523 retry.go:31] will retry after 222.199907ms: waiting for machine to come up
	I0603 11:57:02.428791   67501 main.go:141] libmachine: (old-k8s-version-905554) DBG | domain old-k8s-version-905554 has defined MAC address 52:54:00:3d:ed:07 in network mk-old-k8s-version-905554
	I0603 11:57:02.429461   67501 main.go:141] libmachine: (old-k8s-version-905554) DBG | unable to find current IP address of domain old-k8s-version-905554 in network mk-old-k8s-version-905554
	I0603 11:57:02.429490   67501 main.go:141] libmachine: (old-k8s-version-905554) DBG | I0603 11:57:02.429419   67523 retry.go:31] will retry after 327.649197ms: waiting for machine to come up
	I0603 11:57:02.759133   67501 main.go:141] libmachine: (old-k8s-version-905554) DBG | domain old-k8s-version-905554 has defined MAC address 52:54:00:3d:ed:07 in network mk-old-k8s-version-905554
	I0603 11:57:02.759765   67501 main.go:141] libmachine: (old-k8s-version-905554) DBG | unable to find current IP address of domain old-k8s-version-905554 in network mk-old-k8s-version-905554
	I0603 11:57:02.759794   67501 main.go:141] libmachine: (old-k8s-version-905554) DBG | I0603 11:57:02.759718   67523 retry.go:31] will retry after 314.31801ms: waiting for machine to come up
	I0603 11:57:03.075988   67501 main.go:141] libmachine: (old-k8s-version-905554) DBG | domain old-k8s-version-905554 has defined MAC address 52:54:00:3d:ed:07 in network mk-old-k8s-version-905554
	I0603 11:57:03.076794   67501 main.go:141] libmachine: (old-k8s-version-905554) DBG | unable to find current IP address of domain old-k8s-version-905554 in network mk-old-k8s-version-905554
	I0603 11:57:03.076825   67501 main.go:141] libmachine: (old-k8s-version-905554) DBG | I0603 11:57:03.076751   67523 retry.go:31] will retry after 471.996565ms: waiting for machine to come up
	I0603 11:57:03.550773   67501 main.go:141] libmachine: (old-k8s-version-905554) DBG | domain old-k8s-version-905554 has defined MAC address 52:54:00:3d:ed:07 in network mk-old-k8s-version-905554
	I0603 11:57:03.551407   67501 main.go:141] libmachine: (old-k8s-version-905554) DBG | unable to find current IP address of domain old-k8s-version-905554 in network mk-old-k8s-version-905554
	I0603 11:57:03.551429   67501 main.go:141] libmachine: (old-k8s-version-905554) DBG | I0603 11:57:03.551338   67523 retry.go:31] will retry after 743.277114ms: waiting for machine to come up
	I0603 11:57:04.296512   67501 main.go:141] libmachine: (old-k8s-version-905554) DBG | domain old-k8s-version-905554 has defined MAC address 52:54:00:3d:ed:07 in network mk-old-k8s-version-905554
	I0603 11:57:04.297221   67501 main.go:141] libmachine: (old-k8s-version-905554) DBG | unable to find current IP address of domain old-k8s-version-905554 in network mk-old-k8s-version-905554
	I0603 11:57:04.297246   67501 main.go:141] libmachine: (old-k8s-version-905554) DBG | I0603 11:57:04.297157   67523 retry.go:31] will retry after 787.192028ms: waiting for machine to come up
	I0603 11:57:05.086344   67501 main.go:141] libmachine: (old-k8s-version-905554) DBG | domain old-k8s-version-905554 has defined MAC address 52:54:00:3d:ed:07 in network mk-old-k8s-version-905554
	I0603 11:57:05.086926   67501 main.go:141] libmachine: (old-k8s-version-905554) DBG | unable to find current IP address of domain old-k8s-version-905554 in network mk-old-k8s-version-905554
	I0603 11:57:05.086968   67501 main.go:141] libmachine: (old-k8s-version-905554) DBG | I0603 11:57:05.086872   67523 retry.go:31] will retry after 786.846254ms: waiting for machine to come up
	I0603 11:57:05.875276   67501 main.go:141] libmachine: (old-k8s-version-905554) DBG | domain old-k8s-version-905554 has defined MAC address 52:54:00:3d:ed:07 in network mk-old-k8s-version-905554
	I0603 11:57:05.875982   67501 main.go:141] libmachine: (old-k8s-version-905554) DBG | unable to find current IP address of domain old-k8s-version-905554 in network mk-old-k8s-version-905554
	I0603 11:57:05.876010   67501 main.go:141] libmachine: (old-k8s-version-905554) DBG | I0603 11:57:05.875896   67523 retry.go:31] will retry after 1.13439787s: waiting for machine to come up
	I0603 11:57:07.012396   67501 main.go:141] libmachine: (old-k8s-version-905554) DBG | domain old-k8s-version-905554 has defined MAC address 52:54:00:3d:ed:07 in network mk-old-k8s-version-905554
	I0603 11:57:07.012888   67501 main.go:141] libmachine: (old-k8s-version-905554) DBG | unable to find current IP address of domain old-k8s-version-905554 in network mk-old-k8s-version-905554
	I0603 11:57:07.012903   67501 main.go:141] libmachine: (old-k8s-version-905554) DBG | I0603 11:57:07.012822   67523 retry.go:31] will retry after 1.707289997s: waiting for machine to come up
	I0603 11:57:08.722042   67501 main.go:141] libmachine: (old-k8s-version-905554) DBG | domain old-k8s-version-905554 has defined MAC address 52:54:00:3d:ed:07 in network mk-old-k8s-version-905554
	I0603 11:57:08.722572   67501 main.go:141] libmachine: (old-k8s-version-905554) DBG | unable to find current IP address of domain old-k8s-version-905554 in network mk-old-k8s-version-905554
	I0603 11:57:08.722608   67501 main.go:141] libmachine: (old-k8s-version-905554) DBG | I0603 11:57:08.722524   67523 retry.go:31] will retry after 2.284609183s: waiting for machine to come up
	I0603 11:57:11.009444   67501 main.go:141] libmachine: (old-k8s-version-905554) DBG | domain old-k8s-version-905554 has defined MAC address 52:54:00:3d:ed:07 in network mk-old-k8s-version-905554
	I0603 11:57:11.010060   67501 main.go:141] libmachine: (old-k8s-version-905554) DBG | unable to find current IP address of domain old-k8s-version-905554 in network mk-old-k8s-version-905554
	I0603 11:57:11.010099   67501 main.go:141] libmachine: (old-k8s-version-905554) DBG | I0603 11:57:11.010043   67523 retry.go:31] will retry after 1.936109259s: waiting for machine to come up
	I0603 11:57:12.948408   67501 main.go:141] libmachine: (old-k8s-version-905554) DBG | domain old-k8s-version-905554 has defined MAC address 52:54:00:3d:ed:07 in network mk-old-k8s-version-905554
	I0603 11:57:12.949049   67501 main.go:141] libmachine: (old-k8s-version-905554) DBG | unable to find current IP address of domain old-k8s-version-905554 in network mk-old-k8s-version-905554
	I0603 11:57:12.949084   67501 main.go:141] libmachine: (old-k8s-version-905554) DBG | I0603 11:57:12.948992   67523 retry.go:31] will retry after 2.945231897s: waiting for machine to come up
	I0603 11:57:15.895527   67501 main.go:141] libmachine: (old-k8s-version-905554) DBG | domain old-k8s-version-905554 has defined MAC address 52:54:00:3d:ed:07 in network mk-old-k8s-version-905554
	I0603 11:57:15.896038   67501 main.go:141] libmachine: (old-k8s-version-905554) DBG | unable to find current IP address of domain old-k8s-version-905554 in network mk-old-k8s-version-905554
	I0603 11:57:15.896068   67501 main.go:141] libmachine: (old-k8s-version-905554) DBG | I0603 11:57:15.895992   67523 retry.go:31] will retry after 4.36932539s: waiting for machine to come up
	I0603 11:57:20.267069   67501 main.go:141] libmachine: (old-k8s-version-905554) DBG | domain old-k8s-version-905554 has defined MAC address 52:54:00:3d:ed:07 in network mk-old-k8s-version-905554
	I0603 11:57:20.267541   67501 main.go:141] libmachine: (old-k8s-version-905554) DBG | unable to find current IP address of domain old-k8s-version-905554 in network mk-old-k8s-version-905554
	I0603 11:57:20.267566   67501 main.go:141] libmachine: (old-k8s-version-905554) DBG | I0603 11:57:20.267500   67523 retry.go:31] will retry after 3.920481877s: waiting for machine to come up
	I0603 11:57:24.192086   67501 main.go:141] libmachine: (old-k8s-version-905554) DBG | domain old-k8s-version-905554 has defined MAC address 52:54:00:3d:ed:07 in network mk-old-k8s-version-905554
	I0603 11:57:24.192596   67501 main.go:141] libmachine: (old-k8s-version-905554) Found IP for machine: 192.168.39.155
	I0603 11:57:24.192616   67501 main.go:141] libmachine: (old-k8s-version-905554) Reserving static IP address...
	I0603 11:57:24.192632   67501 main.go:141] libmachine: (old-k8s-version-905554) DBG | domain old-k8s-version-905554 has current primary IP address 192.168.39.155 and MAC address 52:54:00:3d:ed:07 in network mk-old-k8s-version-905554
	I0603 11:57:24.193033   67501 main.go:141] libmachine: (old-k8s-version-905554) DBG | unable to find host DHCP lease matching {name: "old-k8s-version-905554", mac: "52:54:00:3d:ed:07", ip: "192.168.39.155"} in network mk-old-k8s-version-905554
	I0603 11:57:24.270776   67501 main.go:141] libmachine: (old-k8s-version-905554) DBG | Getting to WaitForSSH function...
	I0603 11:57:24.270806   67501 main.go:141] libmachine: (old-k8s-version-905554) Reserved static IP address: 192.168.39.155
	I0603 11:57:24.270818   67501 main.go:141] libmachine: (old-k8s-version-905554) Waiting for SSH to be available...
	I0603 11:57:24.273603   67501 main.go:141] libmachine: (old-k8s-version-905554) DBG | domain old-k8s-version-905554 has defined MAC address 52:54:00:3d:ed:07 in network mk-old-k8s-version-905554
	I0603 11:57:24.274042   67501 main.go:141] libmachine: (old-k8s-version-905554) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:ed:07", ip: ""} in network mk-old-k8s-version-905554: {Iface:virbr1 ExpiryTime:2024-06-03 12:57:16 +0000 UTC Type:0 Mac:52:54:00:3d:ed:07 Iaid: IPaddr:192.168.39.155 Prefix:24 Hostname:minikube Clientid:01:52:54:00:3d:ed:07}
	I0603 11:57:24.274074   67501 main.go:141] libmachine: (old-k8s-version-905554) DBG | domain old-k8s-version-905554 has defined IP address 192.168.39.155 and MAC address 52:54:00:3d:ed:07 in network mk-old-k8s-version-905554
	I0603 11:57:24.274249   67501 main.go:141] libmachine: (old-k8s-version-905554) DBG | Using SSH client type: external
	I0603 11:57:24.274277   67501 main.go:141] libmachine: (old-k8s-version-905554) DBG | Using SSH private key: /home/jenkins/minikube-integration/19008-7755/.minikube/machines/old-k8s-version-905554/id_rsa (-rw-------)
	I0603 11:57:24.274310   67501 main.go:141] libmachine: (old-k8s-version-905554) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.155 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19008-7755/.minikube/machines/old-k8s-version-905554/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0603 11:57:24.274336   67501 main.go:141] libmachine: (old-k8s-version-905554) DBG | About to run SSH command:
	I0603 11:57:24.274375   67501 main.go:141] libmachine: (old-k8s-version-905554) DBG | exit 0
	I0603 11:57:24.399423   67501 main.go:141] libmachine: (old-k8s-version-905554) DBG | SSH cmd err, output: <nil>: 
	I0603 11:57:24.399574   67501 main.go:141] libmachine: (old-k8s-version-905554) KVM machine creation complete!
	I0603 11:57:24.399937   67501 main.go:141] libmachine: (old-k8s-version-905554) Calling .GetConfigRaw
	I0603 11:57:24.400492   67501 main.go:141] libmachine: (old-k8s-version-905554) Calling .DriverName
	I0603 11:57:24.400695   67501 main.go:141] libmachine: (old-k8s-version-905554) Calling .DriverName
	I0603 11:57:24.400878   67501 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0603 11:57:24.400893   67501 main.go:141] libmachine: (old-k8s-version-905554) Calling .GetState
	I0603 11:57:24.402436   67501 main.go:141] libmachine: Detecting operating system of created instance...
	I0603 11:57:24.402451   67501 main.go:141] libmachine: Waiting for SSH to be available...
	I0603 11:57:24.402459   67501 main.go:141] libmachine: Getting to WaitForSSH function...
	I0603 11:57:24.402468   67501 main.go:141] libmachine: (old-k8s-version-905554) Calling .GetSSHHostname
	I0603 11:57:24.405519   67501 main.go:141] libmachine: (old-k8s-version-905554) DBG | domain old-k8s-version-905554 has defined MAC address 52:54:00:3d:ed:07 in network mk-old-k8s-version-905554
	I0603 11:57:24.405980   67501 main.go:141] libmachine: (old-k8s-version-905554) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:ed:07", ip: ""} in network mk-old-k8s-version-905554: {Iface:virbr1 ExpiryTime:2024-06-03 12:57:16 +0000 UTC Type:0 Mac:52:54:00:3d:ed:07 Iaid: IPaddr:192.168.39.155 Prefix:24 Hostname:old-k8s-version-905554 Clientid:01:52:54:00:3d:ed:07}
	I0603 11:57:24.406006   67501 main.go:141] libmachine: (old-k8s-version-905554) DBG | domain old-k8s-version-905554 has defined IP address 192.168.39.155 and MAC address 52:54:00:3d:ed:07 in network mk-old-k8s-version-905554
	I0603 11:57:24.406206   67501 main.go:141] libmachine: (old-k8s-version-905554) Calling .GetSSHPort
	I0603 11:57:24.406382   67501 main.go:141] libmachine: (old-k8s-version-905554) Calling .GetSSHKeyPath
	I0603 11:57:24.406529   67501 main.go:141] libmachine: (old-k8s-version-905554) Calling .GetSSHKeyPath
	I0603 11:57:24.406698   67501 main.go:141] libmachine: (old-k8s-version-905554) Calling .GetSSHUsername
	I0603 11:57:24.406886   67501 main.go:141] libmachine: Using SSH client type: native
	I0603 11:57:24.407129   67501 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.155 22 <nil> <nil>}
	I0603 11:57:24.407146   67501 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0603 11:57:24.518560   67501 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0603 11:57:24.518603   67501 main.go:141] libmachine: Detecting the provisioner...
	I0603 11:57:24.518615   67501 main.go:141] libmachine: (old-k8s-version-905554) Calling .GetSSHHostname
	I0603 11:57:24.521524   67501 main.go:141] libmachine: (old-k8s-version-905554) DBG | domain old-k8s-version-905554 has defined MAC address 52:54:00:3d:ed:07 in network mk-old-k8s-version-905554
	I0603 11:57:24.521948   67501 main.go:141] libmachine: (old-k8s-version-905554) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:ed:07", ip: ""} in network mk-old-k8s-version-905554: {Iface:virbr1 ExpiryTime:2024-06-03 12:57:16 +0000 UTC Type:0 Mac:52:54:00:3d:ed:07 Iaid: IPaddr:192.168.39.155 Prefix:24 Hostname:old-k8s-version-905554 Clientid:01:52:54:00:3d:ed:07}
	I0603 11:57:24.521977   67501 main.go:141] libmachine: (old-k8s-version-905554) DBG | domain old-k8s-version-905554 has defined IP address 192.168.39.155 and MAC address 52:54:00:3d:ed:07 in network mk-old-k8s-version-905554
	I0603 11:57:24.522167   67501 main.go:141] libmachine: (old-k8s-version-905554) Calling .GetSSHPort
	I0603 11:57:24.522378   67501 main.go:141] libmachine: (old-k8s-version-905554) Calling .GetSSHKeyPath
	I0603 11:57:24.522560   67501 main.go:141] libmachine: (old-k8s-version-905554) Calling .GetSSHKeyPath
	I0603 11:57:24.522744   67501 main.go:141] libmachine: (old-k8s-version-905554) Calling .GetSSHUsername
	I0603 11:57:24.522935   67501 main.go:141] libmachine: Using SSH client type: native
	I0603 11:57:24.523146   67501 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.155 22 <nil> <nil>}
	I0603 11:57:24.523159   67501 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0603 11:57:24.627926   67501 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0603 11:57:24.628017   67501 main.go:141] libmachine: found compatible host: buildroot
	I0603 11:57:24.628032   67501 main.go:141] libmachine: Provisioning with buildroot...
	I0603 11:57:24.628048   67501 main.go:141] libmachine: (old-k8s-version-905554) Calling .GetMachineName
	I0603 11:57:24.628306   67501 buildroot.go:166] provisioning hostname "old-k8s-version-905554"
	I0603 11:57:24.628350   67501 main.go:141] libmachine: (old-k8s-version-905554) Calling .GetMachineName
	I0603 11:57:24.628566   67501 main.go:141] libmachine: (old-k8s-version-905554) Calling .GetSSHHostname
	I0603 11:57:24.631768   67501 main.go:141] libmachine: (old-k8s-version-905554) DBG | domain old-k8s-version-905554 has defined MAC address 52:54:00:3d:ed:07 in network mk-old-k8s-version-905554
	I0603 11:57:24.632137   67501 main.go:141] libmachine: (old-k8s-version-905554) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:ed:07", ip: ""} in network mk-old-k8s-version-905554: {Iface:virbr1 ExpiryTime:2024-06-03 12:57:16 +0000 UTC Type:0 Mac:52:54:00:3d:ed:07 Iaid: IPaddr:192.168.39.155 Prefix:24 Hostname:old-k8s-version-905554 Clientid:01:52:54:00:3d:ed:07}
	I0603 11:57:24.632186   67501 main.go:141] libmachine: (old-k8s-version-905554) DBG | domain old-k8s-version-905554 has defined IP address 192.168.39.155 and MAC address 52:54:00:3d:ed:07 in network mk-old-k8s-version-905554
	I0603 11:57:24.632384   67501 main.go:141] libmachine: (old-k8s-version-905554) Calling .GetSSHPort
	I0603 11:57:24.632604   67501 main.go:141] libmachine: (old-k8s-version-905554) Calling .GetSSHKeyPath
	I0603 11:57:24.632758   67501 main.go:141] libmachine: (old-k8s-version-905554) Calling .GetSSHKeyPath
	I0603 11:57:24.632919   67501 main.go:141] libmachine: (old-k8s-version-905554) Calling .GetSSHUsername
	I0603 11:57:24.633094   67501 main.go:141] libmachine: Using SSH client type: native
	I0603 11:57:24.633330   67501 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.155 22 <nil> <nil>}
	I0603 11:57:24.633355   67501 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-905554 && echo "old-k8s-version-905554" | sudo tee /etc/hostname
	I0603 11:57:24.774975   67501 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-905554
	
	I0603 11:57:24.775000   67501 main.go:141] libmachine: (old-k8s-version-905554) Calling .GetSSHHostname
	I0603 11:57:24.777855   67501 main.go:141] libmachine: (old-k8s-version-905554) DBG | domain old-k8s-version-905554 has defined MAC address 52:54:00:3d:ed:07 in network mk-old-k8s-version-905554
	I0603 11:57:24.778303   67501 main.go:141] libmachine: (old-k8s-version-905554) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:ed:07", ip: ""} in network mk-old-k8s-version-905554: {Iface:virbr1 ExpiryTime:2024-06-03 12:57:16 +0000 UTC Type:0 Mac:52:54:00:3d:ed:07 Iaid: IPaddr:192.168.39.155 Prefix:24 Hostname:old-k8s-version-905554 Clientid:01:52:54:00:3d:ed:07}
	I0603 11:57:24.778340   67501 main.go:141] libmachine: (old-k8s-version-905554) DBG | domain old-k8s-version-905554 has defined IP address 192.168.39.155 and MAC address 52:54:00:3d:ed:07 in network mk-old-k8s-version-905554
	I0603 11:57:24.778536   67501 main.go:141] libmachine: (old-k8s-version-905554) Calling .GetSSHPort
	I0603 11:57:24.778725   67501 main.go:141] libmachine: (old-k8s-version-905554) Calling .GetSSHKeyPath
	I0603 11:57:24.778897   67501 main.go:141] libmachine: (old-k8s-version-905554) Calling .GetSSHKeyPath
	I0603 11:57:24.779056   67501 main.go:141] libmachine: (old-k8s-version-905554) Calling .GetSSHUsername
	I0603 11:57:24.779268   67501 main.go:141] libmachine: Using SSH client type: native
	I0603 11:57:24.779430   67501 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.155 22 <nil> <nil>}
	I0603 11:57:24.779447   67501 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-905554' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-905554/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-905554' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0603 11:57:24.901766   67501 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0603 11:57:24.901799   67501 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19008-7755/.minikube CaCertPath:/home/jenkins/minikube-integration/19008-7755/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19008-7755/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19008-7755/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19008-7755/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19008-7755/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19008-7755/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19008-7755/.minikube}
	I0603 11:57:24.901887   67501 buildroot.go:174] setting up certificates
	I0603 11:57:24.901900   67501 provision.go:84] configureAuth start
	I0603 11:57:24.901917   67501 main.go:141] libmachine: (old-k8s-version-905554) Calling .GetMachineName
	I0603 11:57:24.902242   67501 main.go:141] libmachine: (old-k8s-version-905554) Calling .GetIP
	I0603 11:57:24.905252   67501 main.go:141] libmachine: (old-k8s-version-905554) DBG | domain old-k8s-version-905554 has defined MAC address 52:54:00:3d:ed:07 in network mk-old-k8s-version-905554
	I0603 11:57:24.905815   67501 main.go:141] libmachine: (old-k8s-version-905554) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:ed:07", ip: ""} in network mk-old-k8s-version-905554: {Iface:virbr1 ExpiryTime:2024-06-03 12:57:16 +0000 UTC Type:0 Mac:52:54:00:3d:ed:07 Iaid: IPaddr:192.168.39.155 Prefix:24 Hostname:old-k8s-version-905554 Clientid:01:52:54:00:3d:ed:07}
	I0603 11:57:24.905845   67501 main.go:141] libmachine: (old-k8s-version-905554) DBG | domain old-k8s-version-905554 has defined IP address 192.168.39.155 and MAC address 52:54:00:3d:ed:07 in network mk-old-k8s-version-905554
	I0603 11:57:24.905989   67501 main.go:141] libmachine: (old-k8s-version-905554) Calling .GetSSHHostname
	I0603 11:57:24.908639   67501 main.go:141] libmachine: (old-k8s-version-905554) DBG | domain old-k8s-version-905554 has defined MAC address 52:54:00:3d:ed:07 in network mk-old-k8s-version-905554
	I0603 11:57:24.908997   67501 main.go:141] libmachine: (old-k8s-version-905554) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:ed:07", ip: ""} in network mk-old-k8s-version-905554: {Iface:virbr1 ExpiryTime:2024-06-03 12:57:16 +0000 UTC Type:0 Mac:52:54:00:3d:ed:07 Iaid: IPaddr:192.168.39.155 Prefix:24 Hostname:old-k8s-version-905554 Clientid:01:52:54:00:3d:ed:07}
	I0603 11:57:24.909022   67501 main.go:141] libmachine: (old-k8s-version-905554) DBG | domain old-k8s-version-905554 has defined IP address 192.168.39.155 and MAC address 52:54:00:3d:ed:07 in network mk-old-k8s-version-905554
	I0603 11:57:24.909207   67501 provision.go:143] copyHostCerts
	I0603 11:57:24.909273   67501 exec_runner.go:144] found /home/jenkins/minikube-integration/19008-7755/.minikube/ca.pem, removing ...
	I0603 11:57:24.909290   67501 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19008-7755/.minikube/ca.pem
	I0603 11:57:24.909352   67501 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19008-7755/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19008-7755/.minikube/ca.pem (1082 bytes)
	I0603 11:57:24.909445   67501 exec_runner.go:144] found /home/jenkins/minikube-integration/19008-7755/.minikube/cert.pem, removing ...
	I0603 11:57:24.909454   67501 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19008-7755/.minikube/cert.pem
	I0603 11:57:24.909476   67501 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19008-7755/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19008-7755/.minikube/cert.pem (1123 bytes)
	I0603 11:57:24.909540   67501 exec_runner.go:144] found /home/jenkins/minikube-integration/19008-7755/.minikube/key.pem, removing ...
	I0603 11:57:24.909547   67501 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19008-7755/.minikube/key.pem
	I0603 11:57:24.909589   67501 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19008-7755/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19008-7755/.minikube/key.pem (1679 bytes)
	I0603 11:57:24.909686   67501 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19008-7755/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19008-7755/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19008-7755/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-905554 san=[127.0.0.1 192.168.39.155 localhost minikube old-k8s-version-905554]
	I0603 11:57:25.004055   67501 provision.go:177] copyRemoteCerts
	I0603 11:57:25.004105   67501 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0603 11:57:25.004129   67501 main.go:141] libmachine: (old-k8s-version-905554) Calling .GetSSHHostname
	I0603 11:57:25.332900   67501 main.go:141] libmachine: (old-k8s-version-905554) DBG | domain old-k8s-version-905554 has defined MAC address 52:54:00:3d:ed:07 in network mk-old-k8s-version-905554
	I0603 11:57:25.333450   67501 main.go:141] libmachine: (old-k8s-version-905554) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:ed:07", ip: ""} in network mk-old-k8s-version-905554: {Iface:virbr1 ExpiryTime:2024-06-03 12:57:16 +0000 UTC Type:0 Mac:52:54:00:3d:ed:07 Iaid: IPaddr:192.168.39.155 Prefix:24 Hostname:old-k8s-version-905554 Clientid:01:52:54:00:3d:ed:07}
	I0603 11:57:25.333482   67501 main.go:141] libmachine: (old-k8s-version-905554) DBG | domain old-k8s-version-905554 has defined IP address 192.168.39.155 and MAC address 52:54:00:3d:ed:07 in network mk-old-k8s-version-905554
	I0603 11:57:25.333750   67501 main.go:141] libmachine: (old-k8s-version-905554) Calling .GetSSHPort
	I0603 11:57:25.333977   67501 main.go:141] libmachine: (old-k8s-version-905554) Calling .GetSSHKeyPath
	I0603 11:57:25.334232   67501 main.go:141] libmachine: (old-k8s-version-905554) Calling .GetSSHUsername
	I0603 11:57:25.334438   67501 sshutil.go:53] new ssh client: &{IP:192.168.39.155 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19008-7755/.minikube/machines/old-k8s-version-905554/id_rsa Username:docker}
	I0603 11:57:25.426155   67501 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0603 11:57:25.451975   67501 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0603 11:57:25.476391   67501 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0603 11:57:25.503284   67501 provision.go:87] duration metric: took 601.369285ms to configureAuth
	I0603 11:57:25.503319   67501 buildroot.go:189] setting minikube options for container-runtime
	I0603 11:57:25.503492   67501 config.go:182] Loaded profile config "old-k8s-version-905554": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0603 11:57:25.503556   67501 main.go:141] libmachine: (old-k8s-version-905554) Calling .GetSSHHostname
	I0603 11:57:25.506452   67501 main.go:141] libmachine: (old-k8s-version-905554) DBG | domain old-k8s-version-905554 has defined MAC address 52:54:00:3d:ed:07 in network mk-old-k8s-version-905554
	I0603 11:57:25.506883   67501 main.go:141] libmachine: (old-k8s-version-905554) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:ed:07", ip: ""} in network mk-old-k8s-version-905554: {Iface:virbr1 ExpiryTime:2024-06-03 12:57:16 +0000 UTC Type:0 Mac:52:54:00:3d:ed:07 Iaid: IPaddr:192.168.39.155 Prefix:24 Hostname:old-k8s-version-905554 Clientid:01:52:54:00:3d:ed:07}
	I0603 11:57:25.506917   67501 main.go:141] libmachine: (old-k8s-version-905554) DBG | domain old-k8s-version-905554 has defined IP address 192.168.39.155 and MAC address 52:54:00:3d:ed:07 in network mk-old-k8s-version-905554
	I0603 11:57:25.507213   67501 main.go:141] libmachine: (old-k8s-version-905554) Calling .GetSSHPort
	I0603 11:57:25.507424   67501 main.go:141] libmachine: (old-k8s-version-905554) Calling .GetSSHKeyPath
	I0603 11:57:25.507674   67501 main.go:141] libmachine: (old-k8s-version-905554) Calling .GetSSHKeyPath
	I0603 11:57:25.507881   67501 main.go:141] libmachine: (old-k8s-version-905554) Calling .GetSSHUsername
	I0603 11:57:25.508102   67501 main.go:141] libmachine: Using SSH client type: native
	I0603 11:57:25.508332   67501 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.155 22 <nil> <nil>}
	I0603 11:57:25.508351   67501 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0603 11:57:25.823108   67501 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0603 11:57:25.823144   67501 main.go:141] libmachine: Checking connection to Docker...
	I0603 11:57:25.823155   67501 main.go:141] libmachine: (old-k8s-version-905554) Calling .GetURL
	I0603 11:57:25.824751   67501 main.go:141] libmachine: (old-k8s-version-905554) DBG | Using libvirt version 6000000
	I0603 11:57:25.827773   67501 main.go:141] libmachine: (old-k8s-version-905554) DBG | domain old-k8s-version-905554 has defined MAC address 52:54:00:3d:ed:07 in network mk-old-k8s-version-905554
	I0603 11:57:25.828178   67501 main.go:141] libmachine: (old-k8s-version-905554) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:ed:07", ip: ""} in network mk-old-k8s-version-905554: {Iface:virbr1 ExpiryTime:2024-06-03 12:57:16 +0000 UTC Type:0 Mac:52:54:00:3d:ed:07 Iaid: IPaddr:192.168.39.155 Prefix:24 Hostname:old-k8s-version-905554 Clientid:01:52:54:00:3d:ed:07}
	I0603 11:57:25.828211   67501 main.go:141] libmachine: (old-k8s-version-905554) DBG | domain old-k8s-version-905554 has defined IP address 192.168.39.155 and MAC address 52:54:00:3d:ed:07 in network mk-old-k8s-version-905554
	I0603 11:57:25.828370   67501 main.go:141] libmachine: Docker is up and running!
	I0603 11:57:25.828385   67501 main.go:141] libmachine: Reticulating splines...
	I0603 11:57:25.828394   67501 client.go:171] duration metric: took 25.761458329s to LocalClient.Create
	I0603 11:57:25.828421   67501 start.go:167] duration metric: took 25.761520113s to libmachine.API.Create "old-k8s-version-905554"
	I0603 11:57:25.828444   67501 start.go:293] postStartSetup for "old-k8s-version-905554" (driver="kvm2")
	I0603 11:57:25.828469   67501 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0603 11:57:25.828492   67501 main.go:141] libmachine: (old-k8s-version-905554) Calling .DriverName
	I0603 11:57:25.828829   67501 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0603 11:57:25.828895   67501 main.go:141] libmachine: (old-k8s-version-905554) Calling .GetSSHHostname
	I0603 11:57:25.831492   67501 main.go:141] libmachine: (old-k8s-version-905554) DBG | domain old-k8s-version-905554 has defined MAC address 52:54:00:3d:ed:07 in network mk-old-k8s-version-905554
	I0603 11:57:25.831922   67501 main.go:141] libmachine: (old-k8s-version-905554) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:ed:07", ip: ""} in network mk-old-k8s-version-905554: {Iface:virbr1 ExpiryTime:2024-06-03 12:57:16 +0000 UTC Type:0 Mac:52:54:00:3d:ed:07 Iaid: IPaddr:192.168.39.155 Prefix:24 Hostname:old-k8s-version-905554 Clientid:01:52:54:00:3d:ed:07}
	I0603 11:57:25.831952   67501 main.go:141] libmachine: (old-k8s-version-905554) DBG | domain old-k8s-version-905554 has defined IP address 192.168.39.155 and MAC address 52:54:00:3d:ed:07 in network mk-old-k8s-version-905554
	I0603 11:57:25.832162   67501 main.go:141] libmachine: (old-k8s-version-905554) Calling .GetSSHPort
	I0603 11:57:25.832352   67501 main.go:141] libmachine: (old-k8s-version-905554) Calling .GetSSHKeyPath
	I0603 11:57:25.832541   67501 main.go:141] libmachine: (old-k8s-version-905554) Calling .GetSSHUsername
	I0603 11:57:25.832694   67501 sshutil.go:53] new ssh client: &{IP:192.168.39.155 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19008-7755/.minikube/machines/old-k8s-version-905554/id_rsa Username:docker}
	I0603 11:57:25.921680   67501 ssh_runner.go:195] Run: cat /etc/os-release
	I0603 11:57:25.927265   67501 info.go:137] Remote host: Buildroot 2023.02.9
	I0603 11:57:25.927288   67501 filesync.go:126] Scanning /home/jenkins/minikube-integration/19008-7755/.minikube/addons for local assets ...
	I0603 11:57:25.927357   67501 filesync.go:126] Scanning /home/jenkins/minikube-integration/19008-7755/.minikube/files for local assets ...
	I0603 11:57:25.927449   67501 filesync.go:149] local asset: /home/jenkins/minikube-integration/19008-7755/.minikube/files/etc/ssl/certs/150282.pem -> 150282.pem in /etc/ssl/certs
	I0603 11:57:25.927573   67501 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0603 11:57:25.937300   67501 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/files/etc/ssl/certs/150282.pem --> /etc/ssl/certs/150282.pem (1708 bytes)
	I0603 11:57:25.969058   67501 start.go:296] duration metric: took 140.597298ms for postStartSetup
	I0603 11:57:25.969107   67501 main.go:141] libmachine: (old-k8s-version-905554) Calling .GetConfigRaw
	I0603 11:57:25.969694   67501 main.go:141] libmachine: (old-k8s-version-905554) Calling .GetIP
	I0603 11:57:25.972689   67501 main.go:141] libmachine: (old-k8s-version-905554) DBG | domain old-k8s-version-905554 has defined MAC address 52:54:00:3d:ed:07 in network mk-old-k8s-version-905554
	I0603 11:57:25.973047   67501 main.go:141] libmachine: (old-k8s-version-905554) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:ed:07", ip: ""} in network mk-old-k8s-version-905554: {Iface:virbr1 ExpiryTime:2024-06-03 12:57:16 +0000 UTC Type:0 Mac:52:54:00:3d:ed:07 Iaid: IPaddr:192.168.39.155 Prefix:24 Hostname:old-k8s-version-905554 Clientid:01:52:54:00:3d:ed:07}
	I0603 11:57:25.973089   67501 main.go:141] libmachine: (old-k8s-version-905554) DBG | domain old-k8s-version-905554 has defined IP address 192.168.39.155 and MAC address 52:54:00:3d:ed:07 in network mk-old-k8s-version-905554
	I0603 11:57:25.973404   67501 profile.go:143] Saving config to /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/old-k8s-version-905554/config.json ...
	I0603 11:57:25.973614   67501 start.go:128] duration metric: took 25.930045279s to createHost
	I0603 11:57:25.973641   67501 main.go:141] libmachine: (old-k8s-version-905554) Calling .GetSSHHostname
	I0603 11:57:25.975966   67501 main.go:141] libmachine: (old-k8s-version-905554) DBG | domain old-k8s-version-905554 has defined MAC address 52:54:00:3d:ed:07 in network mk-old-k8s-version-905554
	I0603 11:57:25.976307   67501 main.go:141] libmachine: (old-k8s-version-905554) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:ed:07", ip: ""} in network mk-old-k8s-version-905554: {Iface:virbr1 ExpiryTime:2024-06-03 12:57:16 +0000 UTC Type:0 Mac:52:54:00:3d:ed:07 Iaid: IPaddr:192.168.39.155 Prefix:24 Hostname:old-k8s-version-905554 Clientid:01:52:54:00:3d:ed:07}
	I0603 11:57:25.976334   67501 main.go:141] libmachine: (old-k8s-version-905554) DBG | domain old-k8s-version-905554 has defined IP address 192.168.39.155 and MAC address 52:54:00:3d:ed:07 in network mk-old-k8s-version-905554
	I0603 11:57:25.976485   67501 main.go:141] libmachine: (old-k8s-version-905554) Calling .GetSSHPort
	I0603 11:57:25.976666   67501 main.go:141] libmachine: (old-k8s-version-905554) Calling .GetSSHKeyPath
	I0603 11:57:25.976832   67501 main.go:141] libmachine: (old-k8s-version-905554) Calling .GetSSHKeyPath
	I0603 11:57:25.976980   67501 main.go:141] libmachine: (old-k8s-version-905554) Calling .GetSSHUsername
	I0603 11:57:25.977152   67501 main.go:141] libmachine: Using SSH client type: native
	I0603 11:57:25.977366   67501 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.155 22 <nil> <nil>}
	I0603 11:57:25.977392   67501 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0603 11:57:26.087958   67501 main.go:141] libmachine: SSH cmd err, output: <nil>: 1717415846.063980045
	
	I0603 11:57:26.087992   67501 fix.go:216] guest clock: 1717415846.063980045
	I0603 11:57:26.088001   67501 fix.go:229] Guest: 2024-06-03 11:57:26.063980045 +0000 UTC Remote: 2024-06-03 11:57:25.973627636 +0000 UTC m=+26.059353297 (delta=90.352409ms)
	I0603 11:57:26.088024   67501 fix.go:200] guest clock delta is within tolerance: 90.352409ms
	I0603 11:57:26.088031   67501 start.go:83] releasing machines lock for "old-k8s-version-905554", held for 26.044535634s
	I0603 11:57:26.088061   67501 main.go:141] libmachine: (old-k8s-version-905554) Calling .DriverName
	I0603 11:57:26.088309   67501 main.go:141] libmachine: (old-k8s-version-905554) Calling .GetIP
	I0603 11:57:26.091294   67501 main.go:141] libmachine: (old-k8s-version-905554) DBG | domain old-k8s-version-905554 has defined MAC address 52:54:00:3d:ed:07 in network mk-old-k8s-version-905554
	I0603 11:57:26.091789   67501 main.go:141] libmachine: (old-k8s-version-905554) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:ed:07", ip: ""} in network mk-old-k8s-version-905554: {Iface:virbr1 ExpiryTime:2024-06-03 12:57:16 +0000 UTC Type:0 Mac:52:54:00:3d:ed:07 Iaid: IPaddr:192.168.39.155 Prefix:24 Hostname:old-k8s-version-905554 Clientid:01:52:54:00:3d:ed:07}
	I0603 11:57:26.091814   67501 main.go:141] libmachine: (old-k8s-version-905554) DBG | domain old-k8s-version-905554 has defined IP address 192.168.39.155 and MAC address 52:54:00:3d:ed:07 in network mk-old-k8s-version-905554
	I0603 11:57:26.092041   67501 main.go:141] libmachine: (old-k8s-version-905554) Calling .DriverName
	I0603 11:57:26.092538   67501 main.go:141] libmachine: (old-k8s-version-905554) Calling .DriverName
	I0603 11:57:26.092738   67501 main.go:141] libmachine: (old-k8s-version-905554) Calling .DriverName
	I0603 11:57:26.092880   67501 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0603 11:57:26.092944   67501 main.go:141] libmachine: (old-k8s-version-905554) Calling .GetSSHHostname
	I0603 11:57:26.092949   67501 ssh_runner.go:195] Run: cat /version.json
	I0603 11:57:26.092982   67501 main.go:141] libmachine: (old-k8s-version-905554) Calling .GetSSHHostname
	I0603 11:57:26.095755   67501 main.go:141] libmachine: (old-k8s-version-905554) DBG | domain old-k8s-version-905554 has defined MAC address 52:54:00:3d:ed:07 in network mk-old-k8s-version-905554
	I0603 11:57:26.096149   67501 main.go:141] libmachine: (old-k8s-version-905554) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:ed:07", ip: ""} in network mk-old-k8s-version-905554: {Iface:virbr1 ExpiryTime:2024-06-03 12:57:16 +0000 UTC Type:0 Mac:52:54:00:3d:ed:07 Iaid: IPaddr:192.168.39.155 Prefix:24 Hostname:old-k8s-version-905554 Clientid:01:52:54:00:3d:ed:07}
	I0603 11:57:26.096170   67501 main.go:141] libmachine: (old-k8s-version-905554) DBG | domain old-k8s-version-905554 has defined MAC address 52:54:00:3d:ed:07 in network mk-old-k8s-version-905554
	I0603 11:57:26.096195   67501 main.go:141] libmachine: (old-k8s-version-905554) DBG | domain old-k8s-version-905554 has defined IP address 192.168.39.155 and MAC address 52:54:00:3d:ed:07 in network mk-old-k8s-version-905554
	I0603 11:57:26.096354   67501 main.go:141] libmachine: (old-k8s-version-905554) Calling .GetSSHPort
	I0603 11:57:26.096556   67501 main.go:141] libmachine: (old-k8s-version-905554) Calling .GetSSHKeyPath
	I0603 11:57:26.096613   67501 main.go:141] libmachine: (old-k8s-version-905554) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:ed:07", ip: ""} in network mk-old-k8s-version-905554: {Iface:virbr1 ExpiryTime:2024-06-03 12:57:16 +0000 UTC Type:0 Mac:52:54:00:3d:ed:07 Iaid: IPaddr:192.168.39.155 Prefix:24 Hostname:old-k8s-version-905554 Clientid:01:52:54:00:3d:ed:07}
	I0603 11:57:26.096649   67501 main.go:141] libmachine: (old-k8s-version-905554) DBG | domain old-k8s-version-905554 has defined IP address 192.168.39.155 and MAC address 52:54:00:3d:ed:07 in network mk-old-k8s-version-905554
	I0603 11:57:26.096749   67501 main.go:141] libmachine: (old-k8s-version-905554) Calling .GetSSHUsername
	I0603 11:57:26.096827   67501 main.go:141] libmachine: (old-k8s-version-905554) Calling .GetSSHPort
	I0603 11:57:26.096898   67501 sshutil.go:53] new ssh client: &{IP:192.168.39.155 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19008-7755/.minikube/machines/old-k8s-version-905554/id_rsa Username:docker}
	I0603 11:57:26.097019   67501 main.go:141] libmachine: (old-k8s-version-905554) Calling .GetSSHKeyPath
	I0603 11:57:26.097209   67501 main.go:141] libmachine: (old-k8s-version-905554) Calling .GetSSHUsername
	I0603 11:57:26.097381   67501 sshutil.go:53] new ssh client: &{IP:192.168.39.155 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19008-7755/.minikube/machines/old-k8s-version-905554/id_rsa Username:docker}
	I0603 11:57:26.184068   67501 ssh_runner.go:195] Run: systemctl --version
	I0603 11:57:26.212346   67501 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0603 11:57:26.379823   67501 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0603 11:57:26.388393   67501 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0603 11:57:26.388464   67501 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0603 11:57:26.409856   67501 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0603 11:57:26.409883   67501 start.go:494] detecting cgroup driver to use...
	I0603 11:57:26.409943   67501 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0603 11:57:26.430556   67501 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0603 11:57:26.450463   67501 docker.go:217] disabling cri-docker service (if available) ...
	I0603 11:57:26.450522   67501 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0603 11:57:26.469657   67501 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0603 11:57:26.485661   67501 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0603 11:57:26.616441   67501 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0603 11:57:26.796006   67501 docker.go:233] disabling docker service ...
	I0603 11:57:26.796077   67501 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0603 11:57:26.813068   67501 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0603 11:57:26.827998   67501 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0603 11:57:26.997481   67501 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0603 11:57:27.143507   67501 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0603 11:57:27.158438   67501 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0603 11:57:27.181400   67501 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0603 11:57:27.181471   67501 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 11:57:27.192656   67501 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0603 11:57:27.192721   67501 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 11:57:27.204311   67501 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 11:57:27.220042   67501 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 11:57:27.232270   67501 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0603 11:57:27.245596   67501 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0603 11:57:27.258475   67501 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0603 11:57:27.258527   67501 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0603 11:57:27.276714   67501 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0603 11:57:27.288448   67501 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0603 11:57:27.446486   67501 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0603 11:57:27.601235   67501 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0603 11:57:27.601312   67501 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0603 11:57:27.606223   67501 start.go:562] Will wait 60s for crictl version
	I0603 11:57:27.606293   67501 ssh_runner.go:195] Run: which crictl
	I0603 11:57:27.610252   67501 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0603 11:57:27.649362   67501 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0603 11:57:27.649427   67501 ssh_runner.go:195] Run: crio --version
	I0603 11:57:27.688735   67501 ssh_runner.go:195] Run: crio --version
	I0603 11:57:27.732683   67501 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0603 11:57:27.734059   67501 main.go:141] libmachine: (old-k8s-version-905554) Calling .GetIP
	I0603 11:57:27.737630   67501 main.go:141] libmachine: (old-k8s-version-905554) DBG | domain old-k8s-version-905554 has defined MAC address 52:54:00:3d:ed:07 in network mk-old-k8s-version-905554
	I0603 11:57:27.738026   67501 main.go:141] libmachine: (old-k8s-version-905554) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:ed:07", ip: ""} in network mk-old-k8s-version-905554: {Iface:virbr1 ExpiryTime:2024-06-03 12:57:16 +0000 UTC Type:0 Mac:52:54:00:3d:ed:07 Iaid: IPaddr:192.168.39.155 Prefix:24 Hostname:old-k8s-version-905554 Clientid:01:52:54:00:3d:ed:07}
	I0603 11:57:27.738054   67501 main.go:141] libmachine: (old-k8s-version-905554) DBG | domain old-k8s-version-905554 has defined IP address 192.168.39.155 and MAC address 52:54:00:3d:ed:07 in network mk-old-k8s-version-905554
	I0603 11:57:27.738265   67501 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0603 11:57:27.742774   67501 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0603 11:57:27.755517   67501 kubeadm.go:877] updating cluster {Name:old-k8s-version-905554 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-905554 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.155 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0603 11:57:27.755658   67501 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0603 11:57:27.755718   67501 ssh_runner.go:195] Run: sudo crictl images --output json
	I0603 11:57:27.791869   67501 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0603 11:57:27.791940   67501 ssh_runner.go:195] Run: which lz4
	I0603 11:57:27.797230   67501 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0603 11:57:27.802601   67501 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0603 11:57:27.802626   67501 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0603 11:57:29.542981   67501 crio.go:462] duration metric: took 1.745786404s to copy over tarball
	I0603 11:57:29.543113   67501 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0603 11:57:32.393830   67501 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.850680448s)
	I0603 11:57:32.393861   67501 crio.go:469] duration metric: took 2.85084713s to extract the tarball
	I0603 11:57:32.393905   67501 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0603 11:57:32.441839   67501 ssh_runner.go:195] Run: sudo crictl images --output json
	I0603 11:57:32.500290   67501 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0603 11:57:32.500315   67501 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0603 11:57:32.500366   67501 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0603 11:57:32.500470   67501 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0603 11:57:32.500488   67501 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0603 11:57:32.500544   67501 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0603 11:57:32.500687   67501 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0603 11:57:32.500707   67501 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0603 11:57:32.500730   67501 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0603 11:57:32.500917   67501 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0603 11:57:32.503186   67501 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0603 11:57:32.503352   67501 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0603 11:57:32.504120   67501 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0603 11:57:32.504331   67501 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0603 11:57:32.504430   67501 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0603 11:57:32.504507   67501 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0603 11:57:32.505241   67501 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0603 11:57:32.505379   67501 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0603 11:57:32.645323   67501 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0603 11:57:32.659429   67501 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0603 11:57:32.662189   67501 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0603 11:57:32.673343   67501 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0603 11:57:32.675103   67501 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0603 11:57:32.700734   67501 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0603 11:57:32.735970   67501 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0603 11:57:32.749473   67501 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0603 11:57:32.749537   67501 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0603 11:57:32.749586   67501 ssh_runner.go:195] Run: which crictl
	I0603 11:57:32.799091   67501 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0603 11:57:32.799138   67501 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0603 11:57:32.799186   67501 ssh_runner.go:195] Run: which crictl
	I0603 11:57:32.831542   67501 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0603 11:57:32.831594   67501 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0603 11:57:32.831644   67501 ssh_runner.go:195] Run: which crictl
	I0603 11:57:32.849381   67501 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0603 11:57:32.849423   67501 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0603 11:57:32.849472   67501 ssh_runner.go:195] Run: which crictl
	I0603 11:57:32.849646   67501 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0603 11:57:32.849688   67501 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0603 11:57:32.849729   67501 ssh_runner.go:195] Run: which crictl
	I0603 11:57:32.894657   67501 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0603 11:57:32.894696   67501 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0603 11:57:32.894731   67501 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0603 11:57:32.894750   67501 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0603 11:57:32.894767   67501 ssh_runner.go:195] Run: which crictl
	I0603 11:57:32.894769   67501 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0603 11:57:32.894704   67501 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0603 11:57:32.894829   67501 ssh_runner.go:195] Run: which crictl
	I0603 11:57:32.894846   67501 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0603 11:57:32.894857   67501 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0603 11:57:32.894789   67501 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0603 11:57:33.010527   67501 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/19008-7755/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0603 11:57:33.013112   67501 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0603 11:57:33.013169   67501 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/19008-7755/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0603 11:57:33.013237   67501 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/19008-7755/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0603 11:57:33.013243   67501 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0603 11:57:33.013334   67501 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/19008-7755/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0603 11:57:33.013352   67501 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/19008-7755/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0603 11:57:33.054524   67501 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/19008-7755/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0603 11:57:33.065005   67501 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/19008-7755/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0603 11:57:33.400851   67501 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0603 11:57:33.543154   67501 cache_images.go:92] duration metric: took 1.042825431s to LoadCachedImages
	W0603 11:57:33.543247   67501 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/19008-7755/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0: no such file or directory
	X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/19008-7755/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0: no such file or directory
	I0603 11:57:33.543265   67501 kubeadm.go:928] updating node { 192.168.39.155 8443 v1.20.0 crio true true} ...
	I0603 11:57:33.543393   67501 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-905554 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.39.155
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-905554 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0603 11:57:33.543481   67501 ssh_runner.go:195] Run: crio config
	I0603 11:57:33.601534   67501 cni.go:84] Creating CNI manager for ""
	I0603 11:57:33.601556   67501 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0603 11:57:33.601567   67501 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0603 11:57:33.601591   67501 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.155 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-905554 NodeName:old-k8s-version-905554 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.155"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.155 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0603 11:57:33.601741   67501 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.155
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-905554"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.155
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.155"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
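	The YAML above is the kubeadm/kubelet/kube-proxy configuration minikube generates for this profile; the scp step below stages it at /var/tmp/minikube/kubeadm.yaml.new, and it is later copied to /var/tmp/minikube/kubeadm.yaml for kubeadm init. A minimal sketch, assuming the VM is still up, for pulling the staged file back off the node to diff it against this log:

	# Hedged sketch: read the staged kubeadm config off the node for comparison.
	minikube ssh -p old-k8s-version-905554 -- sudo cat /var/tmp/minikube/kubeadm.yaml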
	
	I0603 11:57:33.601794   67501 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0603 11:57:33.612419   67501 binaries.go:44] Found k8s binaries, skipping transfer
	I0603 11:57:33.612490   67501 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0603 11:57:33.622435   67501 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0603 11:57:33.642463   67501 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0603 11:57:33.660971   67501 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I0603 11:57:33.679459   67501 ssh_runner.go:195] Run: grep 192.168.39.155	control-plane.minikube.internal$ /etc/hosts
	I0603 11:57:33.683886   67501 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.155	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0603 11:57:33.700083   67501 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0603 11:57:33.839966   67501 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0603 11:57:33.861758   67501 certs.go:68] Setting up /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/old-k8s-version-905554 for IP: 192.168.39.155
	I0603 11:57:33.861788   67501 certs.go:194] generating shared ca certs ...
	I0603 11:57:33.861809   67501 certs.go:226] acquiring lock for ca certs: {Name:mk50092df3bdecbf959926adc377ee06b75aacd9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 11:57:33.861995   67501 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19008-7755/.minikube/ca.key
	I0603 11:57:33.862046   67501 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19008-7755/.minikube/proxy-client-ca.key
	I0603 11:57:33.862057   67501 certs.go:256] generating profile certs ...
	I0603 11:57:33.862118   67501 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/old-k8s-version-905554/client.key
	I0603 11:57:33.862137   67501 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/old-k8s-version-905554/client.crt with IP's: []
	I0603 11:57:34.083356   67501 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/old-k8s-version-905554/client.crt ...
	I0603 11:57:34.083382   67501 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/old-k8s-version-905554/client.crt: {Name:mked7dcab8cc451284b6aab86d8d13a79c65f950 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 11:57:34.083535   67501 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/old-k8s-version-905554/client.key ...
	I0603 11:57:34.083549   67501 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/old-k8s-version-905554/client.key: {Name:mk5fe553d0b7d97cfc1882e5e8116cbc37d4c94b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 11:57:34.083626   67501 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/old-k8s-version-905554/apiserver.key.0d34b22c
	I0603 11:57:34.083642   67501 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/old-k8s-version-905554/apiserver.crt.0d34b22c with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.155]
	I0603 11:57:34.233246   67501 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/old-k8s-version-905554/apiserver.crt.0d34b22c ...
	I0603 11:57:34.233270   67501 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/old-k8s-version-905554/apiserver.crt.0d34b22c: {Name:mk93236c85c699afbd41e7bf78859b8df1c89639 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 11:57:34.233425   67501 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/old-k8s-version-905554/apiserver.key.0d34b22c ...
	I0603 11:57:34.233441   67501 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/old-k8s-version-905554/apiserver.key.0d34b22c: {Name:mk18935e0d628c67cc68a882622267d954dd3587 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 11:57:34.233539   67501 certs.go:381] copying /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/old-k8s-version-905554/apiserver.crt.0d34b22c -> /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/old-k8s-version-905554/apiserver.crt
	I0603 11:57:34.233632   67501 certs.go:385] copying /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/old-k8s-version-905554/apiserver.key.0d34b22c -> /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/old-k8s-version-905554/apiserver.key
	I0603 11:57:34.233690   67501 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/old-k8s-version-905554/proxy-client.key
	I0603 11:57:34.233703   67501 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/old-k8s-version-905554/proxy-client.crt with IP's: []
	I0603 11:57:34.352144   67501 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/old-k8s-version-905554/proxy-client.crt ...
	I0603 11:57:34.352170   67501 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/old-k8s-version-905554/proxy-client.crt: {Name:mk0a796d633a749c9ea20aed46e97bae303cdb56 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 11:57:34.352325   67501 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/old-k8s-version-905554/proxy-client.key ...
	I0603 11:57:34.352342   67501 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/old-k8s-version-905554/proxy-client.key: {Name:mka2f28b78615c030e22fe112371868b3cdd54ea Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 11:57:34.352505   67501 certs.go:484] found cert: /home/jenkins/minikube-integration/19008-7755/.minikube/certs/15028.pem (1338 bytes)
	W0603 11:57:34.352539   67501 certs.go:480] ignoring /home/jenkins/minikube-integration/19008-7755/.minikube/certs/15028_empty.pem, impossibly tiny 0 bytes
	I0603 11:57:34.352550   67501 certs.go:484] found cert: /home/jenkins/minikube-integration/19008-7755/.minikube/certs/ca-key.pem (1679 bytes)
	I0603 11:57:34.352570   67501 certs.go:484] found cert: /home/jenkins/minikube-integration/19008-7755/.minikube/certs/ca.pem (1082 bytes)
	I0603 11:57:34.352594   67501 certs.go:484] found cert: /home/jenkins/minikube-integration/19008-7755/.minikube/certs/cert.pem (1123 bytes)
	I0603 11:57:34.352615   67501 certs.go:484] found cert: /home/jenkins/minikube-integration/19008-7755/.minikube/certs/key.pem (1679 bytes)
	I0603 11:57:34.352653   67501 certs.go:484] found cert: /home/jenkins/minikube-integration/19008-7755/.minikube/files/etc/ssl/certs/150282.pem (1708 bytes)
	I0603 11:57:34.353228   67501 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0603 11:57:34.380410   67501 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0603 11:57:34.405636   67501 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0603 11:57:34.430900   67501 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0603 11:57:34.455156   67501 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/old-k8s-version-905554/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0603 11:57:34.480483   67501 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/old-k8s-version-905554/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0603 11:57:34.506059   67501 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/old-k8s-version-905554/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0603 11:57:34.530542   67501 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/old-k8s-version-905554/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0603 11:57:34.554979   67501 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/files/etc/ssl/certs/150282.pem --> /usr/share/ca-certificates/150282.pem (1708 bytes)
	I0603 11:57:34.580415   67501 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0603 11:57:34.605410   67501 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/certs/15028.pem --> /usr/share/ca-certificates/15028.pem (1338 bytes)
	I0603 11:57:34.630516   67501 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0603 11:57:34.648040   67501 ssh_runner.go:195] Run: openssl version
	I0603 11:57:34.653968   67501 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0603 11:57:34.667523   67501 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0603 11:57:34.672336   67501 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jun  3 10:39 /usr/share/ca-certificates/minikubeCA.pem
	I0603 11:57:34.672380   67501 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0603 11:57:34.678485   67501 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0603 11:57:34.694067   67501 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15028.pem && ln -fs /usr/share/ca-certificates/15028.pem /etc/ssl/certs/15028.pem"
	I0603 11:57:34.714807   67501 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15028.pem
	I0603 11:57:34.721330   67501 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jun  3 10:51 /usr/share/ca-certificates/15028.pem
	I0603 11:57:34.721401   67501 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15028.pem
	I0603 11:57:34.733135   67501 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/15028.pem /etc/ssl/certs/51391683.0"
	I0603 11:57:34.758371   67501 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/150282.pem && ln -fs /usr/share/ca-certificates/150282.pem /etc/ssl/certs/150282.pem"
	I0603 11:57:34.776705   67501 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/150282.pem
	I0603 11:57:34.783936   67501 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jun  3 10:51 /usr/share/ca-certificates/150282.pem
	I0603 11:57:34.783992   67501 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/150282.pem
	I0603 11:57:34.791136   67501 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/150282.pem /etc/ssl/certs/3ec20f2e.0"
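	The three test/ln/hash sequences above all follow the same pattern: openssl x509 -hash -noout prints the subject-name hash that OpenSSL expects as the /etc/ssl/certs/<hash>.0 symlink name, which is why minikubeCA.pem is linked as b5213941.0, 15028.pem as 51391683.0, and 150282.pem as 3ec20f2e.0. A minimal sketch of the same trust-store linking step, using one of the certificates from this run:

	# Hedged sketch: reproduce the CA trust-store symlink pattern seen above.
	# The hash names the /etc/ssl/certs/<hash>.0 link that OpenSSL resolves.
	hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${hash}.0"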
	I0603 11:57:34.803008   67501 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0603 11:57:34.807823   67501 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0603 11:57:34.807883   67501 kubeadm.go:391] StartCluster: {Name:old-k8s-version-905554 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-905554 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.155 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0603 11:57:34.807988   67501 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0603 11:57:34.808029   67501 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0603 11:57:34.845370   67501 cri.go:89] found id: ""
	I0603 11:57:34.845449   67501 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0603 11:57:34.857817   67501 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0603 11:57:34.870019   67501 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0603 11:57:34.882690   67501 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0603 11:57:34.882712   67501 kubeadm.go:156] found existing configuration files:
	
	I0603 11:57:34.882757   67501 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0603 11:57:34.892803   67501 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0603 11:57:34.892859   67501 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0603 11:57:34.902927   67501 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0603 11:57:34.913330   67501 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0603 11:57:34.913407   67501 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0603 11:57:34.924208   67501 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0603 11:57:34.934402   67501 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0603 11:57:34.934456   67501 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0603 11:57:34.945213   67501 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0603 11:57:34.955108   67501 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0603 11:57:35.028269   67501 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0603 11:57:35.039738   67501 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0603 11:57:35.319991   67501 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0603 11:59:34.416801   67501 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0603 11:59:34.416980   67501 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	I0603 11:59:34.418750   67501 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0603 11:59:34.418847   67501 kubeadm.go:309] [preflight] Running pre-flight checks
	I0603 11:59:34.419011   67501 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0603 11:59:34.419273   67501 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0603 11:59:34.419472   67501 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0603 11:59:34.419611   67501 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0603 11:59:34.421265   67501 out.go:204]   - Generating certificates and keys ...
	I0603 11:59:34.421367   67501 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0603 11:59:34.421434   67501 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0603 11:59:34.421507   67501 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0603 11:59:34.421562   67501 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0603 11:59:34.421641   67501 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0603 11:59:34.421716   67501 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0603 11:59:34.421800   67501 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0603 11:59:34.421997   67501 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-905554] and IPs [192.168.39.155 127.0.0.1 ::1]
	I0603 11:59:34.422079   67501 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0603 11:59:34.422242   67501 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-905554] and IPs [192.168.39.155 127.0.0.1 ::1]
	I0603 11:59:34.422339   67501 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0603 11:59:34.422418   67501 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0603 11:59:34.422480   67501 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0603 11:59:34.422550   67501 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0603 11:59:34.422624   67501 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0603 11:59:34.422727   67501 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0603 11:59:34.422830   67501 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0603 11:59:34.422912   67501 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0603 11:59:34.423098   67501 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0603 11:59:34.423220   67501 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0603 11:59:34.423279   67501 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0603 11:59:34.423384   67501 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0603 11:59:34.424801   67501 out.go:204]   - Booting up control plane ...
	I0603 11:59:34.424881   67501 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0603 11:59:34.424968   67501 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0603 11:59:34.425054   67501 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0603 11:59:34.425140   67501 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0603 11:59:34.425320   67501 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0603 11:59:34.425385   67501 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0603 11:59:34.425472   67501 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0603 11:59:34.425709   67501 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0603 11:59:34.425799   67501 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0603 11:59:34.426033   67501 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0603 11:59:34.426125   67501 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0603 11:59:34.426371   67501 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0603 11:59:34.426458   67501 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0603 11:59:34.426693   67501 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0603 11:59:34.426787   67501 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0603 11:59:34.427023   67501 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0603 11:59:34.427051   67501 kubeadm.go:309] 
	I0603 11:59:34.427102   67501 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0603 11:59:34.427170   67501 kubeadm.go:309] 		timed out waiting for the condition
	I0603 11:59:34.427192   67501 kubeadm.go:309] 
	I0603 11:59:34.427245   67501 kubeadm.go:309] 	This error is likely caused by:
	I0603 11:59:34.427294   67501 kubeadm.go:309] 		- The kubelet is not running
	I0603 11:59:34.427422   67501 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0603 11:59:34.427433   67501 kubeadm.go:309] 
	I0603 11:59:34.427581   67501 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0603 11:59:34.427637   67501 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0603 11:59:34.427685   67501 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0603 11:59:34.427694   67501 kubeadm.go:309] 
	I0603 11:59:34.427865   67501 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0603 11:59:34.428000   67501 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0603 11:59:34.428017   67501 kubeadm.go:309] 
	I0603 11:59:34.428148   67501 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0603 11:59:34.428263   67501 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0603 11:59:34.428376   67501 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0603 11:59:34.428503   67501 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0603 11:59:34.428537   67501 kubeadm.go:309] 
	W0603 11:59:34.428641   67501 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-905554] and IPs [192.168.39.155 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-905554] and IPs [192.168.39.155 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-905554] and IPs [192.168.39.155 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-905554] and IPs [192.168.39.155 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
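	This first kubeadm init attempt fails because the kubelet never answers on http://localhost:10248/healthz, so the wait-control-plane phase times out. The commands kubeadm itself suggests can be run from the host against this profile; a minimal sketch, assuming the VM is still up:

	# Hedged sketch: run the checks kubeadm recommends, via minikube ssh.
	minikube ssh -p old-k8s-version-905554 -- sudo systemctl status kubelet
	minikube ssh -p old-k8s-version-905554 -- sudo journalctl -xeu kubelet --no-pager
	minikube ssh -p old-k8s-version-905554 -- "sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause"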
	
	I0603 11:59:34.428689   67501 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0603 11:59:36.013070   67501 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (1.584345913s)
	I0603 11:59:36.013147   67501 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0603 11:59:36.028345   67501 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0603 11:59:36.039965   67501 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0603 11:59:36.039984   67501 kubeadm.go:156] found existing configuration files:
	
	I0603 11:59:36.040030   67501 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0603 11:59:36.049575   67501 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0603 11:59:36.049629   67501 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0603 11:59:36.058967   67501 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0603 11:59:36.068249   67501 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0603 11:59:36.068297   67501 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0603 11:59:36.078170   67501 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0603 11:59:36.089046   67501 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0603 11:59:36.089104   67501 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0603 11:59:36.100249   67501 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0603 11:59:36.110787   67501 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0603 11:59:36.110833   67501 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0603 11:59:36.120643   67501 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0603 11:59:36.188926   67501 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0603 11:59:36.188999   67501 kubeadm.go:309] [preflight] Running pre-flight checks
	I0603 11:59:36.319598   67501 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0603 11:59:36.319738   67501 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0603 11:59:36.319861   67501 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0603 11:59:36.505528   67501 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0603 11:59:36.507522   67501 out.go:204]   - Generating certificates and keys ...
	I0603 11:59:36.507623   67501 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0603 11:59:36.507698   67501 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0603 11:59:36.507814   67501 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0603 11:59:36.507920   67501 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0603 11:59:36.508004   67501 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0603 11:59:36.508064   67501 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0603 11:59:36.508161   67501 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0603 11:59:36.508226   67501 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0603 11:59:36.508288   67501 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0603 11:59:36.508361   67501 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0603 11:59:36.508395   67501 kubeadm.go:309] [certs] Using the existing "sa" key
	I0603 11:59:36.508442   67501 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0603 11:59:36.839296   67501 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0603 11:59:36.993903   67501 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0603 11:59:37.150612   67501 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0603 11:59:37.347133   67501 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0603 11:59:37.362936   67501 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0603 11:59:37.363083   67501 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0603 11:59:37.363123   67501 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0603 11:59:37.496535   67501 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0603 11:59:37.498434   67501 out.go:204]   - Booting up control plane ...
	I0603 11:59:37.498550   67501 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0603 11:59:37.503866   67501 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0603 11:59:37.505150   67501 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0603 11:59:37.506382   67501 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0603 11:59:37.513487   67501 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0603 12:00:17.516189   67501 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0603 12:00:17.516758   67501 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0603 12:00:17.516956   67501 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0603 12:00:22.517753   67501 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0603 12:00:22.517966   67501 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0603 12:00:32.518023   67501 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0603 12:00:32.518231   67501 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0603 12:00:52.517530   67501 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0603 12:00:52.517742   67501 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0603 12:01:32.517949   67501 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0603 12:01:32.518231   67501 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0603 12:01:32.518256   67501 kubeadm.go:309] 
	I0603 12:01:32.518312   67501 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0603 12:01:32.518424   67501 kubeadm.go:309] 		timed out waiting for the condition
	I0603 12:01:32.518443   67501 kubeadm.go:309] 
	I0603 12:01:32.518490   67501 kubeadm.go:309] 	This error is likely caused by:
	I0603 12:01:32.518542   67501 kubeadm.go:309] 		- The kubelet is not running
	I0603 12:01:32.518683   67501 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0603 12:01:32.518694   67501 kubeadm.go:309] 
	I0603 12:01:32.518829   67501 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0603 12:01:32.518879   67501 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0603 12:01:32.518927   67501 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0603 12:01:32.518936   67501 kubeadm.go:309] 
	I0603 12:01:32.519089   67501 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0603 12:01:32.519204   67501 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0603 12:01:32.519216   67501 kubeadm.go:309] 
	I0603 12:01:32.519362   67501 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0603 12:01:32.519473   67501 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0603 12:01:32.519573   67501 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0603 12:01:32.519667   67501 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0603 12:01:32.519678   67501 kubeadm.go:309] 
	I0603 12:01:32.520284   67501 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0603 12:01:32.520390   67501 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0603 12:01:32.520494   67501 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	I0603 12:01:32.520552   67501 kubeadm.go:393] duration metric: took 3m57.712672742s to StartCluster
	I0603 12:01:32.520605   67501 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 12:01:32.520654   67501 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 12:01:32.567001   67501 cri.go:89] found id: ""
	I0603 12:01:32.567030   67501 logs.go:276] 0 containers: []
	W0603 12:01:32.567063   67501 logs.go:278] No container was found matching "kube-apiserver"
	I0603 12:01:32.567071   67501 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 12:01:32.567124   67501 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 12:01:32.613701   67501 cri.go:89] found id: ""
	I0603 12:01:32.613725   67501 logs.go:276] 0 containers: []
	W0603 12:01:32.613733   67501 logs.go:278] No container was found matching "etcd"
	I0603 12:01:32.613741   67501 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 12:01:32.613798   67501 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 12:01:32.649865   67501 cri.go:89] found id: ""
	I0603 12:01:32.649891   67501 logs.go:276] 0 containers: []
	W0603 12:01:32.649897   67501 logs.go:278] No container was found matching "coredns"
	I0603 12:01:32.649903   67501 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 12:01:32.649949   67501 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 12:01:32.684188   67501 cri.go:89] found id: ""
	I0603 12:01:32.684221   67501 logs.go:276] 0 containers: []
	W0603 12:01:32.684231   67501 logs.go:278] No container was found matching "kube-scheduler"
	I0603 12:01:32.684238   67501 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 12:01:32.684287   67501 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 12:01:32.723702   67501 cri.go:89] found id: ""
	I0603 12:01:32.723730   67501 logs.go:276] 0 containers: []
	W0603 12:01:32.723741   67501 logs.go:278] No container was found matching "kube-proxy"
	I0603 12:01:32.723747   67501 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 12:01:32.723799   67501 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 12:01:32.761870   67501 cri.go:89] found id: ""
	I0603 12:01:32.761889   67501 logs.go:276] 0 containers: []
	W0603 12:01:32.761899   67501 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 12:01:32.761907   67501 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 12:01:32.761957   67501 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 12:01:32.800342   67501 cri.go:89] found id: ""
	I0603 12:01:32.800366   67501 logs.go:276] 0 containers: []
	W0603 12:01:32.800373   67501 logs.go:278] No container was found matching "kindnet"
	I0603 12:01:32.800388   67501 logs.go:123] Gathering logs for CRI-O ...
	I0603 12:01:32.800398   67501 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 12:01:32.899626   67501 logs.go:123] Gathering logs for container status ...
	I0603 12:01:32.899663   67501 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 12:01:32.943691   67501 logs.go:123] Gathering logs for kubelet ...
	I0603 12:01:32.943720   67501 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 12:01:32.992302   67501 logs.go:123] Gathering logs for dmesg ...
	I0603 12:01:32.992330   67501 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 12:01:33.007971   67501 logs.go:123] Gathering logs for describe nodes ...
	I0603 12:01:33.007995   67501 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 12:01:33.144029   67501 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
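	The describe-nodes step fails for the same underlying reason: nothing is listening on localhost:8443 because the kube-apiserver static pod never started (the crictl listings above found no kube-apiserver, etcd, or other control-plane containers). A minimal sketch, assuming the node is still reachable, for confirming that directly:

	# Hedged sketch: confirm no apiserver container exists, mirroring the crictl call above.
	minikube ssh -p old-k8s-version-905554 -- sudo crictl ps -a --name=kube-apiserver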
	W0603 12:01:33.144076   67501 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0603 12:01:33.144120   67501 out.go:239] * 
	* 
	W0603 12:01:33.144174   67501 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0603 12:01:33.144202   67501 out.go:239] * 
	* 
	W0603 12:01:33.145154   67501 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0603 12:01:33.148206   67501 out.go:177] 
	W0603 12:01:33.149466   67501 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0603 12:01:33.149511   67501 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0603 12:01:33.149534   67501 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	* Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0603 12:01:33.150956   67501 out.go:177] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-linux-amd64 start -p old-k8s-version-905554 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0": exit status 109
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-905554 -n old-k8s-version-905554
E0603 12:01:33.293662   15028 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/enable-default-cni-034991/client.crt: no such file or directory
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-905554 -n old-k8s-version-905554: exit status 6 (224.281333ms)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E0603 12:01:33.414727   72677 status.go:417] kubeconfig endpoint: get endpoint: "old-k8s-version-905554" does not appear in /home/jenkins/minikube-integration/19008-7755/kubeconfig

** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-905554" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/FirstStart (273.52s)
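A minimal follow-up sketch based on the suggestion printed above, not part of the recorded run: the profile name, driver, runtime, and Kubernetes version are copied from the failing command, and the --extra-config value is the one the log itself recommends, so this is a starting point for reproduction rather than a verified fix.

	# inspect the kubelet inside the VM, per the kubeadm troubleshooting advice above
	out/minikube-linux-amd64 -p old-k8s-version-905554 ssh "sudo systemctl status kubelet; sudo journalctl -xeu kubelet | tail -n 100"
	# retry the first start with the cgroup-driver override suggested by the log
	out/minikube-linux-amd64 start -p old-k8s-version-905554 --driver=kvm2 --container-runtime=crio --kubernetes-version=v1.20.0 --extra-config=kubelet.cgroup-driver=systemd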

x
+
TestStartStop/group/embed-certs/serial/Stop (139.1s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-725022 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p embed-certs-725022 --alsologtostderr -v=3: exit status 82 (2m0.498198734s)

-- stdout --
	* Stopping node "embed-certs-725022"  ...
	
	

-- /stdout --
** stderr ** 
	I0603 11:59:13.413961   71858 out.go:291] Setting OutFile to fd 1 ...
	I0603 11:59:13.414110   71858 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0603 11:59:13.414122   71858 out.go:304] Setting ErrFile to fd 2...
	I0603 11:59:13.414141   71858 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0603 11:59:13.414444   71858 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19008-7755/.minikube/bin
	I0603 11:59:13.414676   71858 out.go:298] Setting JSON to false
	I0603 11:59:13.414758   71858 mustload.go:65] Loading cluster: embed-certs-725022
	I0603 11:59:13.415166   71858 config.go:182] Loaded profile config "embed-certs-725022": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0603 11:59:13.415272   71858 profile.go:143] Saving config to /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/embed-certs-725022/config.json ...
	I0603 11:59:13.415477   71858 mustload.go:65] Loading cluster: embed-certs-725022
	I0603 11:59:13.415632   71858 config.go:182] Loaded profile config "embed-certs-725022": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0603 11:59:13.415666   71858 stop.go:39] StopHost: embed-certs-725022
	I0603 11:59:13.416217   71858 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 11:59:13.416275   71858 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 11:59:13.430967   71858 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42759
	I0603 11:59:13.431891   71858 main.go:141] libmachine: () Calling .GetVersion
	I0603 11:59:13.432595   71858 main.go:141] libmachine: Using API Version  1
	I0603 11:59:13.432625   71858 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 11:59:13.432969   71858 main.go:141] libmachine: () Calling .GetMachineName
	I0603 11:59:13.436834   71858 out.go:177] * Stopping node "embed-certs-725022"  ...
	I0603 11:59:13.438166   71858 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0603 11:59:13.438215   71858 main.go:141] libmachine: (embed-certs-725022) Calling .DriverName
	I0603 11:59:13.438452   71858 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0603 11:59:13.438487   71858 main.go:141] libmachine: (embed-certs-725022) Calling .GetSSHHostname
	I0603 11:59:13.441866   71858 main.go:141] libmachine: (embed-certs-725022) DBG | domain embed-certs-725022 has defined MAC address 52:54:00:ba:41:8c in network mk-embed-certs-725022
	I0603 11:59:13.442418   71858 main.go:141] libmachine: (embed-certs-725022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:41:8c", ip: ""} in network mk-embed-certs-725022: {Iface:virbr3 ExpiryTime:2024-06-03 12:58:10 +0000 UTC Type:0 Mac:52:54:00:ba:41:8c Iaid: IPaddr:192.168.72.245 Prefix:24 Hostname:embed-certs-725022 Clientid:01:52:54:00:ba:41:8c}
	I0603 11:59:13.442444   71858 main.go:141] libmachine: (embed-certs-725022) DBG | domain embed-certs-725022 has defined IP address 192.168.72.245 and MAC address 52:54:00:ba:41:8c in network mk-embed-certs-725022
	I0603 11:59:13.442620   71858 main.go:141] libmachine: (embed-certs-725022) Calling .GetSSHPort
	I0603 11:59:13.442803   71858 main.go:141] libmachine: (embed-certs-725022) Calling .GetSSHKeyPath
	I0603 11:59:13.442993   71858 main.go:141] libmachine: (embed-certs-725022) Calling .GetSSHUsername
	I0603 11:59:13.443171   71858 sshutil.go:53] new ssh client: &{IP:192.168.72.245 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19008-7755/.minikube/machines/embed-certs-725022/id_rsa Username:docker}
	I0603 11:59:13.545150   71858 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0603 11:59:13.606752   71858 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0603 11:59:13.665987   71858 main.go:141] libmachine: Stopping "embed-certs-725022"...
	I0603 11:59:13.666031   71858 main.go:141] libmachine: (embed-certs-725022) Calling .GetState
	I0603 11:59:13.667649   71858 main.go:141] libmachine: (embed-certs-725022) Calling .Stop
	I0603 11:59:13.671567   71858 main.go:141] libmachine: (embed-certs-725022) Waiting for machine to stop 0/120
	I0603 11:59:14.673709   71858 main.go:141] libmachine: (embed-certs-725022) Waiting for machine to stop 1/120
	I0603 11:59:15.675192   71858 main.go:141] libmachine: (embed-certs-725022) Waiting for machine to stop 2/120
	I0603 11:59:16.676520   71858 main.go:141] libmachine: (embed-certs-725022) Waiting for machine to stop 3/120
	I0603 11:59:17.678291   71858 main.go:141] libmachine: (embed-certs-725022) Waiting for machine to stop 4/120
	I0603 11:59:18.679540   71858 main.go:141] libmachine: (embed-certs-725022) Waiting for machine to stop 5/120
	I0603 11:59:19.681682   71858 main.go:141] libmachine: (embed-certs-725022) Waiting for machine to stop 6/120
	I0603 11:59:20.683186   71858 main.go:141] libmachine: (embed-certs-725022) Waiting for machine to stop 7/120
	I0603 11:59:21.684340   71858 main.go:141] libmachine: (embed-certs-725022) Waiting for machine to stop 8/120
	I0603 11:59:22.686463   71858 main.go:141] libmachine: (embed-certs-725022) Waiting for machine to stop 9/120
	I0603 11:59:23.688553   71858 main.go:141] libmachine: (embed-certs-725022) Waiting for machine to stop 10/120
	I0603 11:59:24.690486   71858 main.go:141] libmachine: (embed-certs-725022) Waiting for machine to stop 11/120
	I0603 11:59:25.691907   71858 main.go:141] libmachine: (embed-certs-725022) Waiting for machine to stop 12/120
	I0603 11:59:26.693313   71858 main.go:141] libmachine: (embed-certs-725022) Waiting for machine to stop 13/120
	I0603 11:59:27.694847   71858 main.go:141] libmachine: (embed-certs-725022) Waiting for machine to stop 14/120
	I0603 11:59:28.696704   71858 main.go:141] libmachine: (embed-certs-725022) Waiting for machine to stop 15/120
	I0603 11:59:29.698268   71858 main.go:141] libmachine: (embed-certs-725022) Waiting for machine to stop 16/120
	I0603 11:59:30.700065   71858 main.go:141] libmachine: (embed-certs-725022) Waiting for machine to stop 17/120
	I0603 11:59:31.701200   71858 main.go:141] libmachine: (embed-certs-725022) Waiting for machine to stop 18/120
	I0603 11:59:32.702525   71858 main.go:141] libmachine: (embed-certs-725022) Waiting for machine to stop 19/120
	I0603 11:59:33.704601   71858 main.go:141] libmachine: (embed-certs-725022) Waiting for machine to stop 20/120
	I0603 11:59:34.706059   71858 main.go:141] libmachine: (embed-certs-725022) Waiting for machine to stop 21/120
	I0603 11:59:35.707309   71858 main.go:141] libmachine: (embed-certs-725022) Waiting for machine to stop 22/120
	I0603 11:59:36.709496   71858 main.go:141] libmachine: (embed-certs-725022) Waiting for machine to stop 23/120
	I0603 11:59:37.710663   71858 main.go:141] libmachine: (embed-certs-725022) Waiting for machine to stop 24/120
	I0603 11:59:38.712570   71858 main.go:141] libmachine: (embed-certs-725022) Waiting for machine to stop 25/120
	I0603 11:59:39.714658   71858 main.go:141] libmachine: (embed-certs-725022) Waiting for machine to stop 26/120
	I0603 11:59:40.716054   71858 main.go:141] libmachine: (embed-certs-725022) Waiting for machine to stop 27/120
	I0603 11:59:41.717332   71858 main.go:141] libmachine: (embed-certs-725022) Waiting for machine to stop 28/120
	I0603 11:59:42.718574   71858 main.go:141] libmachine: (embed-certs-725022) Waiting for machine to stop 29/120
	I0603 11:59:43.720615   71858 main.go:141] libmachine: (embed-certs-725022) Waiting for machine to stop 30/120
	I0603 11:59:44.722073   71858 main.go:141] libmachine: (embed-certs-725022) Waiting for machine to stop 31/120
	I0603 11:59:45.723410   71858 main.go:141] libmachine: (embed-certs-725022) Waiting for machine to stop 32/120
	I0603 11:59:46.724813   71858 main.go:141] libmachine: (embed-certs-725022) Waiting for machine to stop 33/120
	I0603 11:59:47.726511   71858 main.go:141] libmachine: (embed-certs-725022) Waiting for machine to stop 34/120
	I0603 11:59:48.728366   71858 main.go:141] libmachine: (embed-certs-725022) Waiting for machine to stop 35/120
	I0603 11:59:49.729730   71858 main.go:141] libmachine: (embed-certs-725022) Waiting for machine to stop 36/120
	I0603 11:59:50.731018   71858 main.go:141] libmachine: (embed-certs-725022) Waiting for machine to stop 37/120
	I0603 11:59:51.732345   71858 main.go:141] libmachine: (embed-certs-725022) Waiting for machine to stop 38/120
	I0603 11:59:52.733711   71858 main.go:141] libmachine: (embed-certs-725022) Waiting for machine to stop 39/120
	I0603 11:59:53.735696   71858 main.go:141] libmachine: (embed-certs-725022) Waiting for machine to stop 40/120
	I0603 11:59:54.736842   71858 main.go:141] libmachine: (embed-certs-725022) Waiting for machine to stop 41/120
	I0603 11:59:55.737957   71858 main.go:141] libmachine: (embed-certs-725022) Waiting for machine to stop 42/120
	I0603 11:59:56.739131   71858 main.go:141] libmachine: (embed-certs-725022) Waiting for machine to stop 43/120
	I0603 11:59:57.740220   71858 main.go:141] libmachine: (embed-certs-725022) Waiting for machine to stop 44/120
	I0603 11:59:58.742050   71858 main.go:141] libmachine: (embed-certs-725022) Waiting for machine to stop 45/120
	I0603 11:59:59.743241   71858 main.go:141] libmachine: (embed-certs-725022) Waiting for machine to stop 46/120
	I0603 12:00:00.745101   71858 main.go:141] libmachine: (embed-certs-725022) Waiting for machine to stop 47/120
	I0603 12:00:01.746308   71858 main.go:141] libmachine: (embed-certs-725022) Waiting for machine to stop 48/120
	I0603 12:00:02.747725   71858 main.go:141] libmachine: (embed-certs-725022) Waiting for machine to stop 49/120
	I0603 12:00:03.749159   71858 main.go:141] libmachine: (embed-certs-725022) Waiting for machine to stop 50/120
	I0603 12:00:04.750485   71858 main.go:141] libmachine: (embed-certs-725022) Waiting for machine to stop 51/120
	I0603 12:00:05.751952   71858 main.go:141] libmachine: (embed-certs-725022) Waiting for machine to stop 52/120
	I0603 12:00:06.753379   71858 main.go:141] libmachine: (embed-certs-725022) Waiting for machine to stop 53/120
	I0603 12:00:07.754783   71858 main.go:141] libmachine: (embed-certs-725022) Waiting for machine to stop 54/120
	I0603 12:00:08.756957   71858 main.go:141] libmachine: (embed-certs-725022) Waiting for machine to stop 55/120
	I0603 12:00:09.758238   71858 main.go:141] libmachine: (embed-certs-725022) Waiting for machine to stop 56/120
	I0603 12:00:10.759949   71858 main.go:141] libmachine: (embed-certs-725022) Waiting for machine to stop 57/120
	I0603 12:00:11.761318   71858 main.go:141] libmachine: (embed-certs-725022) Waiting for machine to stop 58/120
	I0603 12:00:12.762868   71858 main.go:141] libmachine: (embed-certs-725022) Waiting for machine to stop 59/120
	I0603 12:00:13.765183   71858 main.go:141] libmachine: (embed-certs-725022) Waiting for machine to stop 60/120
	I0603 12:00:14.766567   71858 main.go:141] libmachine: (embed-certs-725022) Waiting for machine to stop 61/120
	I0603 12:00:15.768080   71858 main.go:141] libmachine: (embed-certs-725022) Waiting for machine to stop 62/120
	I0603 12:00:16.769457   71858 main.go:141] libmachine: (embed-certs-725022) Waiting for machine to stop 63/120
	I0603 12:00:17.770780   71858 main.go:141] libmachine: (embed-certs-725022) Waiting for machine to stop 64/120
	I0603 12:00:18.772700   71858 main.go:141] libmachine: (embed-certs-725022) Waiting for machine to stop 65/120
	I0603 12:00:19.774096   71858 main.go:141] libmachine: (embed-certs-725022) Waiting for machine to stop 66/120
	I0603 12:00:20.775332   71858 main.go:141] libmachine: (embed-certs-725022) Waiting for machine to stop 67/120
	I0603 12:00:21.776740   71858 main.go:141] libmachine: (embed-certs-725022) Waiting for machine to stop 68/120
	I0603 12:00:22.778192   71858 main.go:141] libmachine: (embed-certs-725022) Waiting for machine to stop 69/120
	I0603 12:00:23.780326   71858 main.go:141] libmachine: (embed-certs-725022) Waiting for machine to stop 70/120
	I0603 12:00:24.781688   71858 main.go:141] libmachine: (embed-certs-725022) Waiting for machine to stop 71/120
	I0603 12:00:25.783024   71858 main.go:141] libmachine: (embed-certs-725022) Waiting for machine to stop 72/120
	I0603 12:00:26.784398   71858 main.go:141] libmachine: (embed-certs-725022) Waiting for machine to stop 73/120
	I0603 12:00:27.785737   71858 main.go:141] libmachine: (embed-certs-725022) Waiting for machine to stop 74/120
	I0603 12:00:28.787586   71858 main.go:141] libmachine: (embed-certs-725022) Waiting for machine to stop 75/120
	I0603 12:00:29.788955   71858 main.go:141] libmachine: (embed-certs-725022) Waiting for machine to stop 76/120
	I0603 12:00:30.790454   71858 main.go:141] libmachine: (embed-certs-725022) Waiting for machine to stop 77/120
	I0603 12:00:31.791985   71858 main.go:141] libmachine: (embed-certs-725022) Waiting for machine to stop 78/120
	I0603 12:00:32.793371   71858 main.go:141] libmachine: (embed-certs-725022) Waiting for machine to stop 79/120
	I0603 12:00:33.795633   71858 main.go:141] libmachine: (embed-certs-725022) Waiting for machine to stop 80/120
	I0603 12:00:34.796961   71858 main.go:141] libmachine: (embed-certs-725022) Waiting for machine to stop 81/120
	I0603 12:00:35.798186   71858 main.go:141] libmachine: (embed-certs-725022) Waiting for machine to stop 82/120
	I0603 12:00:36.799730   71858 main.go:141] libmachine: (embed-certs-725022) Waiting for machine to stop 83/120
	I0603 12:00:37.801017   71858 main.go:141] libmachine: (embed-certs-725022) Waiting for machine to stop 84/120
	I0603 12:00:38.802762   71858 main.go:141] libmachine: (embed-certs-725022) Waiting for machine to stop 85/120
	I0603 12:00:39.804221   71858 main.go:141] libmachine: (embed-certs-725022) Waiting for machine to stop 86/120
	I0603 12:00:40.805444   71858 main.go:141] libmachine: (embed-certs-725022) Waiting for machine to stop 87/120
	I0603 12:00:41.806747   71858 main.go:141] libmachine: (embed-certs-725022) Waiting for machine to stop 88/120
	I0603 12:00:42.808083   71858 main.go:141] libmachine: (embed-certs-725022) Waiting for machine to stop 89/120
	I0603 12:00:43.810240   71858 main.go:141] libmachine: (embed-certs-725022) Waiting for machine to stop 90/120
	I0603 12:00:44.811575   71858 main.go:141] libmachine: (embed-certs-725022) Waiting for machine to stop 91/120
	I0603 12:00:45.812870   71858 main.go:141] libmachine: (embed-certs-725022) Waiting for machine to stop 92/120
	I0603 12:00:46.814215   71858 main.go:141] libmachine: (embed-certs-725022) Waiting for machine to stop 93/120
	I0603 12:00:47.815590   71858 main.go:141] libmachine: (embed-certs-725022) Waiting for machine to stop 94/120
	I0603 12:00:48.817451   71858 main.go:141] libmachine: (embed-certs-725022) Waiting for machine to stop 95/120
	I0603 12:00:49.818914   71858 main.go:141] libmachine: (embed-certs-725022) Waiting for machine to stop 96/120
	I0603 12:00:50.820211   71858 main.go:141] libmachine: (embed-certs-725022) Waiting for machine to stop 97/120
	I0603 12:00:51.821628   71858 main.go:141] libmachine: (embed-certs-725022) Waiting for machine to stop 98/120
	I0603 12:00:52.822876   71858 main.go:141] libmachine: (embed-certs-725022) Waiting for machine to stop 99/120
	I0603 12:00:53.825201   71858 main.go:141] libmachine: (embed-certs-725022) Waiting for machine to stop 100/120
	I0603 12:00:54.826336   71858 main.go:141] libmachine: (embed-certs-725022) Waiting for machine to stop 101/120
	I0603 12:00:55.827720   71858 main.go:141] libmachine: (embed-certs-725022) Waiting for machine to stop 102/120
	I0603 12:00:56.829123   71858 main.go:141] libmachine: (embed-certs-725022) Waiting for machine to stop 103/120
	I0603 12:00:57.830561   71858 main.go:141] libmachine: (embed-certs-725022) Waiting for machine to stop 104/120
	I0603 12:00:58.832481   71858 main.go:141] libmachine: (embed-certs-725022) Waiting for machine to stop 105/120
	I0603 12:00:59.833925   71858 main.go:141] libmachine: (embed-certs-725022) Waiting for machine to stop 106/120
	I0603 12:01:00.835226   71858 main.go:141] libmachine: (embed-certs-725022) Waiting for machine to stop 107/120
	I0603 12:01:01.836528   71858 main.go:141] libmachine: (embed-certs-725022) Waiting for machine to stop 108/120
	I0603 12:01:02.838043   71858 main.go:141] libmachine: (embed-certs-725022) Waiting for machine to stop 109/120
	I0603 12:01:03.840140   71858 main.go:141] libmachine: (embed-certs-725022) Waiting for machine to stop 110/120
	I0603 12:01:04.841534   71858 main.go:141] libmachine: (embed-certs-725022) Waiting for machine to stop 111/120
	I0603 12:01:05.842834   71858 main.go:141] libmachine: (embed-certs-725022) Waiting for machine to stop 112/120
	I0603 12:01:06.844183   71858 main.go:141] libmachine: (embed-certs-725022) Waiting for machine to stop 113/120
	I0603 12:01:07.845448   71858 main.go:141] libmachine: (embed-certs-725022) Waiting for machine to stop 114/120
	I0603 12:01:08.847396   71858 main.go:141] libmachine: (embed-certs-725022) Waiting for machine to stop 115/120
	I0603 12:01:09.848715   71858 main.go:141] libmachine: (embed-certs-725022) Waiting for machine to stop 116/120
	I0603 12:01:10.850051   71858 main.go:141] libmachine: (embed-certs-725022) Waiting for machine to stop 117/120
	I0603 12:01:11.851476   71858 main.go:141] libmachine: (embed-certs-725022) Waiting for machine to stop 118/120
	I0603 12:01:12.852848   71858 main.go:141] libmachine: (embed-certs-725022) Waiting for machine to stop 119/120
	I0603 12:01:13.853883   71858 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0603 12:01:13.853954   71858 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0603 12:01:13.855678   71858 out.go:177] 
	W0603 12:01:13.857089   71858 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0603 12:01:13.857111   71858 out.go:239] * 
	* 
	W0603 12:01:13.859875   71858 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0603 12:01:13.861190   71858 out.go:177] 

** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop-. args "out/minikube-linux-amd64 stop -p embed-certs-725022 --alsologtostderr -v=3" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-725022 -n embed-certs-725022
E0603 12:01:19.286737   15028 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/flannel-034991/client.crt: no such file or directory
E0603 12:01:32.015685   15028 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/enable-default-cni-034991/client.crt: no such file or directory
E0603 12:01:32.020934   15028 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/enable-default-cni-034991/client.crt: no such file or directory
E0603 12:01:32.031179   15028 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/enable-default-cni-034991/client.crt: no such file or directory
E0603 12:01:32.051586   15028 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/enable-default-cni-034991/client.crt: no such file or directory
E0603 12:01:32.091836   15028 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/enable-default-cni-034991/client.crt: no such file or directory
E0603 12:01:32.172133   15028 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/enable-default-cni-034991/client.crt: no such file or directory
E0603 12:01:32.332540   15028 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/enable-default-cni-034991/client.crt: no such file or directory
E0603 12:01:32.432800   15028 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/auto-034991/client.crt: no such file or directory
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-725022 -n embed-certs-725022: exit status 3 (18.600679373s)

-- stdout --
	Error

-- /stdout --
** stderr ** 
	E0603 12:01:32.463350   72547 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.72.245:22: connect: no route to host
	E0603 12:01:32.463370   72547 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.72.245:22: connect: no route to host

** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "embed-certs-725022" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/embed-certs/serial/Stop (139.10s)
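A hedged follow-up sketch for the GUEST_STOP_TIMEOUT above, assuming the libvirt domain carries the profile name (the DBG lines earlier in this section show the domain embed-certs-725022): confirm the guest state with virsh and collect the logs the failure box asks for. These commands were not part of the recorded run.

	# confirm the libvirt domain state for the profile's VM
	sudo virsh list --all | grep embed-certs-725022
	# gather the stop log and the full minikube logs referenced in the message box above
	cat /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log
	out/minikube-linux-amd64 -p embed-certs-725022 logs --file=logs.txt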

x
+
TestStartStop/group/no-preload/serial/Stop (138.99s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-602118 --alsologtostderr -v=3
E0603 11:59:34.177769   15028 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/kindnet-034991/client.crt: no such file or directory
E0603 11:59:34.183110   15028 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/kindnet-034991/client.crt: no such file or directory
E0603 11:59:34.193387   15028 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/kindnet-034991/client.crt: no such file or directory
E0603 11:59:34.213664   15028 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/kindnet-034991/client.crt: no such file or directory
E0603 11:59:34.253914   15028 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/kindnet-034991/client.crt: no such file or directory
E0603 11:59:34.334284   15028 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/kindnet-034991/client.crt: no such file or directory
E0603 11:59:34.494953   15028 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/kindnet-034991/client.crt: no such file or directory
E0603 11:59:34.646162   15028 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/custom-flannel-034991/client.crt: no such file or directory
E0603 11:59:34.815487   15028 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/kindnet-034991/client.crt: no such file or directory
E0603 11:59:35.456511   15028 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/kindnet-034991/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p no-preload-602118 --alsologtostderr -v=3: exit status 82 (2m0.47471409s)

                                                
                                                
-- stdout --
	* Stopping node "no-preload-602118"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0603 11:59:33.996925   72080 out.go:291] Setting OutFile to fd 1 ...
	I0603 11:59:33.997180   72080 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0603 11:59:33.997189   72080 out.go:304] Setting ErrFile to fd 2...
	I0603 11:59:33.997193   72080 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0603 11:59:33.997379   72080 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19008-7755/.minikube/bin
	I0603 11:59:33.997630   72080 out.go:298] Setting JSON to false
	I0603 11:59:33.997704   72080 mustload.go:65] Loading cluster: no-preload-602118
	I0603 11:59:33.997999   72080 config.go:182] Loaded profile config "no-preload-602118": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0603 11:59:33.998098   72080 profile.go:143] Saving config to /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/no-preload-602118/config.json ...
	I0603 11:59:33.998311   72080 mustload.go:65] Loading cluster: no-preload-602118
	I0603 11:59:33.998425   72080 config.go:182] Loaded profile config "no-preload-602118": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0603 11:59:33.998446   72080 stop.go:39] StopHost: no-preload-602118
	I0603 11:59:33.998900   72080 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 11:59:33.998967   72080 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 11:59:34.013918   72080 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40005
	I0603 11:59:34.014398   72080 main.go:141] libmachine: () Calling .GetVersion
	I0603 11:59:34.015085   72080 main.go:141] libmachine: Using API Version  1
	I0603 11:59:34.015119   72080 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 11:59:34.015528   72080 main.go:141] libmachine: () Calling .GetMachineName
	I0603 11:59:34.018069   72080 out.go:177] * Stopping node "no-preload-602118"  ...
	I0603 11:59:34.019633   72080 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0603 11:59:34.019659   72080 main.go:141] libmachine: (no-preload-602118) Calling .DriverName
	I0603 11:59:34.019883   72080 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0603 11:59:34.019914   72080 main.go:141] libmachine: (no-preload-602118) Calling .GetSSHHostname
	I0603 11:59:34.023088   72080 main.go:141] libmachine: (no-preload-602118) DBG | domain no-preload-602118 has defined MAC address 52:54:00:ac:6c:91 in network mk-no-preload-602118
	I0603 11:59:34.023530   72080 main.go:141] libmachine: (no-preload-602118) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:6c:91", ip: ""} in network mk-no-preload-602118: {Iface:virbr2 ExpiryTime:2024-06-03 12:57:42 +0000 UTC Type:0 Mac:52:54:00:ac:6c:91 Iaid: IPaddr:192.168.50.245 Prefix:24 Hostname:no-preload-602118 Clientid:01:52:54:00:ac:6c:91}
	I0603 11:59:34.023564   72080 main.go:141] libmachine: (no-preload-602118) DBG | domain no-preload-602118 has defined IP address 192.168.50.245 and MAC address 52:54:00:ac:6c:91 in network mk-no-preload-602118
	I0603 11:59:34.023692   72080 main.go:141] libmachine: (no-preload-602118) Calling .GetSSHPort
	I0603 11:59:34.023879   72080 main.go:141] libmachine: (no-preload-602118) Calling .GetSSHKeyPath
	I0603 11:59:34.024032   72080 main.go:141] libmachine: (no-preload-602118) Calling .GetSSHUsername
	I0603 11:59:34.024170   72080 sshutil.go:53] new ssh client: &{IP:192.168.50.245 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19008-7755/.minikube/machines/no-preload-602118/id_rsa Username:docker}
	I0603 11:59:34.110941   72080 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0603 11:59:34.174486   72080 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0603 11:59:34.232896   72080 main.go:141] libmachine: Stopping "no-preload-602118"...
	I0603 11:59:34.232931   72080 main.go:141] libmachine: (no-preload-602118) Calling .GetState
	I0603 11:59:34.234684   72080 main.go:141] libmachine: (no-preload-602118) Calling .Stop
	I0603 11:59:34.238658   72080 main.go:141] libmachine: (no-preload-602118) Waiting for machine to stop 0/120
	I0603 11:59:35.239961   72080 main.go:141] libmachine: (no-preload-602118) Waiting for machine to stop 1/120
	I0603 11:59:36.241412   72080 main.go:141] libmachine: (no-preload-602118) Waiting for machine to stop 2/120
	I0603 11:59:37.242645   72080 main.go:141] libmachine: (no-preload-602118) Waiting for machine to stop 3/120
	I0603 11:59:38.243997   72080 main.go:141] libmachine: (no-preload-602118) Waiting for machine to stop 4/120
	I0603 11:59:39.245854   72080 main.go:141] libmachine: (no-preload-602118) Waiting for machine to stop 5/120
	I0603 11:59:40.247303   72080 main.go:141] libmachine: (no-preload-602118) Waiting for machine to stop 6/120
	I0603 11:59:41.248941   72080 main.go:141] libmachine: (no-preload-602118) Waiting for machine to stop 7/120
	I0603 11:59:42.250423   72080 main.go:141] libmachine: (no-preload-602118) Waiting for machine to stop 8/120
	I0603 11:59:43.251781   72080 main.go:141] libmachine: (no-preload-602118) Waiting for machine to stop 9/120
	I0603 11:59:44.254025   72080 main.go:141] libmachine: (no-preload-602118) Waiting for machine to stop 10/120
	I0603 11:59:45.255431   72080 main.go:141] libmachine: (no-preload-602118) Waiting for machine to stop 11/120
	I0603 11:59:46.256655   72080 main.go:141] libmachine: (no-preload-602118) Waiting for machine to stop 12/120
	I0603 11:59:47.257958   72080 main.go:141] libmachine: (no-preload-602118) Waiting for machine to stop 13/120
	I0603 11:59:48.259417   72080 main.go:141] libmachine: (no-preload-602118) Waiting for machine to stop 14/120
	I0603 11:59:49.261300   72080 main.go:141] libmachine: (no-preload-602118) Waiting for machine to stop 15/120
	I0603 11:59:50.262794   72080 main.go:141] libmachine: (no-preload-602118) Waiting for machine to stop 16/120
	I0603 11:59:51.264133   72080 main.go:141] libmachine: (no-preload-602118) Waiting for machine to stop 17/120
	I0603 11:59:52.265464   72080 main.go:141] libmachine: (no-preload-602118) Waiting for machine to stop 18/120
	I0603 11:59:53.266957   72080 main.go:141] libmachine: (no-preload-602118) Waiting for machine to stop 19/120
	I0603 11:59:54.269112   72080 main.go:141] libmachine: (no-preload-602118) Waiting for machine to stop 20/120
	I0603 11:59:55.270392   72080 main.go:141] libmachine: (no-preload-602118) Waiting for machine to stop 21/120
	I0603 11:59:56.271755   72080 main.go:141] libmachine: (no-preload-602118) Waiting for machine to stop 22/120
	I0603 11:59:57.272908   72080 main.go:141] libmachine: (no-preload-602118) Waiting for machine to stop 23/120
	I0603 11:59:58.274358   72080 main.go:141] libmachine: (no-preload-602118) Waiting for machine to stop 24/120
	I0603 11:59:59.276260   72080 main.go:141] libmachine: (no-preload-602118) Waiting for machine to stop 25/120
	I0603 12:00:00.277996   72080 main.go:141] libmachine: (no-preload-602118) Waiting for machine to stop 26/120
	I0603 12:00:01.279601   72080 main.go:141] libmachine: (no-preload-602118) Waiting for machine to stop 27/120
	I0603 12:00:02.280957   72080 main.go:141] libmachine: (no-preload-602118) Waiting for machine to stop 28/120
	I0603 12:00:03.282339   72080 main.go:141] libmachine: (no-preload-602118) Waiting for machine to stop 29/120
	I0603 12:00:04.284494   72080 main.go:141] libmachine: (no-preload-602118) Waiting for machine to stop 30/120
	I0603 12:00:05.286046   72080 main.go:141] libmachine: (no-preload-602118) Waiting for machine to stop 31/120
	I0603 12:00:06.287640   72080 main.go:141] libmachine: (no-preload-602118) Waiting for machine to stop 32/120
	I0603 12:00:07.289688   72080 main.go:141] libmachine: (no-preload-602118) Waiting for machine to stop 33/120
	I0603 12:00:08.291077   72080 main.go:141] libmachine: (no-preload-602118) Waiting for machine to stop 34/120
	I0603 12:00:09.293522   72080 main.go:141] libmachine: (no-preload-602118) Waiting for machine to stop 35/120
	I0603 12:00:10.294938   72080 main.go:141] libmachine: (no-preload-602118) Waiting for machine to stop 36/120
	I0603 12:00:11.296358   72080 main.go:141] libmachine: (no-preload-602118) Waiting for machine to stop 37/120
	I0603 12:00:12.298041   72080 main.go:141] libmachine: (no-preload-602118) Waiting for machine to stop 38/120
	I0603 12:00:13.299555   72080 main.go:141] libmachine: (no-preload-602118) Waiting for machine to stop 39/120
	I0603 12:00:14.300846   72080 main.go:141] libmachine: (no-preload-602118) Waiting for machine to stop 40/120
	I0603 12:00:15.302246   72080 main.go:141] libmachine: (no-preload-602118) Waiting for machine to stop 41/120
	I0603 12:00:16.303777   72080 main.go:141] libmachine: (no-preload-602118) Waiting for machine to stop 42/120
	I0603 12:00:17.305426   72080 main.go:141] libmachine: (no-preload-602118) Waiting for machine to stop 43/120
	I0603 12:00:18.306915   72080 main.go:141] libmachine: (no-preload-602118) Waiting for machine to stop 44/120
	I0603 12:00:19.309015   72080 main.go:141] libmachine: (no-preload-602118) Waiting for machine to stop 45/120
	I0603 12:00:20.310623   72080 main.go:141] libmachine: (no-preload-602118) Waiting for machine to stop 46/120
	I0603 12:00:21.312190   72080 main.go:141] libmachine: (no-preload-602118) Waiting for machine to stop 47/120
	I0603 12:00:22.314021   72080 main.go:141] libmachine: (no-preload-602118) Waiting for machine to stop 48/120
	I0603 12:00:23.315551   72080 main.go:141] libmachine: (no-preload-602118) Waiting for machine to stop 49/120
	I0603 12:00:24.317666   72080 main.go:141] libmachine: (no-preload-602118) Waiting for machine to stop 50/120
	I0603 12:00:25.319176   72080 main.go:141] libmachine: (no-preload-602118) Waiting for machine to stop 51/120
	I0603 12:00:26.320563   72080 main.go:141] libmachine: (no-preload-602118) Waiting for machine to stop 52/120
	I0603 12:00:27.321968   72080 main.go:141] libmachine: (no-preload-602118) Waiting for machine to stop 53/120
	I0603 12:00:28.323438   72080 main.go:141] libmachine: (no-preload-602118) Waiting for machine to stop 54/120
	I0603 12:00:29.325349   72080 main.go:141] libmachine: (no-preload-602118) Waiting for machine to stop 55/120
	I0603 12:00:30.326752   72080 main.go:141] libmachine: (no-preload-602118) Waiting for machine to stop 56/120
	I0603 12:00:31.328263   72080 main.go:141] libmachine: (no-preload-602118) Waiting for machine to stop 57/120
	I0603 12:00:32.329552   72080 main.go:141] libmachine: (no-preload-602118) Waiting for machine to stop 58/120
	I0603 12:00:33.330958   72080 main.go:141] libmachine: (no-preload-602118) Waiting for machine to stop 59/120
	I0603 12:00:34.333153   72080 main.go:141] libmachine: (no-preload-602118) Waiting for machine to stop 60/120
	I0603 12:00:35.334624   72080 main.go:141] libmachine: (no-preload-602118) Waiting for machine to stop 61/120
	I0603 12:00:36.336077   72080 main.go:141] libmachine: (no-preload-602118) Waiting for machine to stop 62/120
	I0603 12:00:37.337412   72080 main.go:141] libmachine: (no-preload-602118) Waiting for machine to stop 63/120
	I0603 12:00:38.338815   72080 main.go:141] libmachine: (no-preload-602118) Waiting for machine to stop 64/120
	I0603 12:00:39.340674   72080 main.go:141] libmachine: (no-preload-602118) Waiting for machine to stop 65/120
	I0603 12:00:40.342037   72080 main.go:141] libmachine: (no-preload-602118) Waiting for machine to stop 66/120
	I0603 12:00:41.343650   72080 main.go:141] libmachine: (no-preload-602118) Waiting for machine to stop 67/120
	I0603 12:00:42.345135   72080 main.go:141] libmachine: (no-preload-602118) Waiting for machine to stop 68/120
	I0603 12:00:43.346639   72080 main.go:141] libmachine: (no-preload-602118) Waiting for machine to stop 69/120
	I0603 12:00:44.348675   72080 main.go:141] libmachine: (no-preload-602118) Waiting for machine to stop 70/120
	I0603 12:00:45.350125   72080 main.go:141] libmachine: (no-preload-602118) Waiting for machine to stop 71/120
	I0603 12:00:46.351615   72080 main.go:141] libmachine: (no-preload-602118) Waiting for machine to stop 72/120
	I0603 12:00:47.353039   72080 main.go:141] libmachine: (no-preload-602118) Waiting for machine to stop 73/120
	I0603 12:00:48.354348   72080 main.go:141] libmachine: (no-preload-602118) Waiting for machine to stop 74/120
	I0603 12:00:49.356225   72080 main.go:141] libmachine: (no-preload-602118) Waiting for machine to stop 75/120
	I0603 12:00:50.357665   72080 main.go:141] libmachine: (no-preload-602118) Waiting for machine to stop 76/120
	I0603 12:00:51.359029   72080 main.go:141] libmachine: (no-preload-602118) Waiting for machine to stop 77/120
	I0603 12:00:52.360453   72080 main.go:141] libmachine: (no-preload-602118) Waiting for machine to stop 78/120
	I0603 12:00:53.361849   72080 main.go:141] libmachine: (no-preload-602118) Waiting for machine to stop 79/120
	I0603 12:00:54.363996   72080 main.go:141] libmachine: (no-preload-602118) Waiting for machine to stop 80/120
	I0603 12:00:55.365383   72080 main.go:141] libmachine: (no-preload-602118) Waiting for machine to stop 81/120
	I0603 12:00:56.366841   72080 main.go:141] libmachine: (no-preload-602118) Waiting for machine to stop 82/120
	I0603 12:00:57.368136   72080 main.go:141] libmachine: (no-preload-602118) Waiting for machine to stop 83/120
	I0603 12:00:58.369504   72080 main.go:141] libmachine: (no-preload-602118) Waiting for machine to stop 84/120
	I0603 12:00:59.371496   72080 main.go:141] libmachine: (no-preload-602118) Waiting for machine to stop 85/120
	I0603 12:01:00.372787   72080 main.go:141] libmachine: (no-preload-602118) Waiting for machine to stop 86/120
	I0603 12:01:01.374056   72080 main.go:141] libmachine: (no-preload-602118) Waiting for machine to stop 87/120
	I0603 12:01:02.375415   72080 main.go:141] libmachine: (no-preload-602118) Waiting for machine to stop 88/120
	I0603 12:01:03.376862   72080 main.go:141] libmachine: (no-preload-602118) Waiting for machine to stop 89/120
	I0603 12:01:04.379079   72080 main.go:141] libmachine: (no-preload-602118) Waiting for machine to stop 90/120
	I0603 12:01:05.380757   72080 main.go:141] libmachine: (no-preload-602118) Waiting for machine to stop 91/120
	I0603 12:01:06.382077   72080 main.go:141] libmachine: (no-preload-602118) Waiting for machine to stop 92/120
	I0603 12:01:07.383548   72080 main.go:141] libmachine: (no-preload-602118) Waiting for machine to stop 93/120
	I0603 12:01:08.385221   72080 main.go:141] libmachine: (no-preload-602118) Waiting for machine to stop 94/120
	I0603 12:01:09.387224   72080 main.go:141] libmachine: (no-preload-602118) Waiting for machine to stop 95/120
	I0603 12:01:10.388853   72080 main.go:141] libmachine: (no-preload-602118) Waiting for machine to stop 96/120
	I0603 12:01:11.390150   72080 main.go:141] libmachine: (no-preload-602118) Waiting for machine to stop 97/120
	I0603 12:01:12.391878   72080 main.go:141] libmachine: (no-preload-602118) Waiting for machine to stop 98/120
	I0603 12:01:13.393400   72080 main.go:141] libmachine: (no-preload-602118) Waiting for machine to stop 99/120
	I0603 12:01:14.395487   72080 main.go:141] libmachine: (no-preload-602118) Waiting for machine to stop 100/120
	I0603 12:01:15.397208   72080 main.go:141] libmachine: (no-preload-602118) Waiting for machine to stop 101/120
	I0603 12:01:16.398635   72080 main.go:141] libmachine: (no-preload-602118) Waiting for machine to stop 102/120
	I0603 12:01:17.400141   72080 main.go:141] libmachine: (no-preload-602118) Waiting for machine to stop 103/120
	I0603 12:01:18.401548   72080 main.go:141] libmachine: (no-preload-602118) Waiting for machine to stop 104/120
	I0603 12:01:19.403478   72080 main.go:141] libmachine: (no-preload-602118) Waiting for machine to stop 105/120
	I0603 12:01:20.404917   72080 main.go:141] libmachine: (no-preload-602118) Waiting for machine to stop 106/120
	I0603 12:01:21.406428   72080 main.go:141] libmachine: (no-preload-602118) Waiting for machine to stop 107/120
	I0603 12:01:22.407824   72080 main.go:141] libmachine: (no-preload-602118) Waiting for machine to stop 108/120
	I0603 12:01:23.409246   72080 main.go:141] libmachine: (no-preload-602118) Waiting for machine to stop 109/120
	I0603 12:01:24.411222   72080 main.go:141] libmachine: (no-preload-602118) Waiting for machine to stop 110/120
	I0603 12:01:25.412527   72080 main.go:141] libmachine: (no-preload-602118) Waiting for machine to stop 111/120
	I0603 12:01:26.413901   72080 main.go:141] libmachine: (no-preload-602118) Waiting for machine to stop 112/120
	I0603 12:01:27.415497   72080 main.go:141] libmachine: (no-preload-602118) Waiting for machine to stop 113/120
	I0603 12:01:28.416761   72080 main.go:141] libmachine: (no-preload-602118) Waiting for machine to stop 114/120
	I0603 12:01:29.418512   72080 main.go:141] libmachine: (no-preload-602118) Waiting for machine to stop 115/120
	I0603 12:01:30.420792   72080 main.go:141] libmachine: (no-preload-602118) Waiting for machine to stop 116/120
	I0603 12:01:31.422009   72080 main.go:141] libmachine: (no-preload-602118) Waiting for machine to stop 117/120
	I0603 12:01:32.423549   72080 main.go:141] libmachine: (no-preload-602118) Waiting for machine to stop 118/120
	I0603 12:01:33.424596   72080 main.go:141] libmachine: (no-preload-602118) Waiting for machine to stop 119/120
	I0603 12:01:34.425032   72080 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0603 12:01:34.425110   72080 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0603 12:01:34.427102   72080 out.go:177] 
	W0603 12:01:34.428499   72080 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0603 12:01:34.428517   72080 out.go:239] * 
	* 
	W0603 12:01:34.430959   72080 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0603 12:01:34.432093   72080 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop-. args "out/minikube-linux-amd64 stop -p no-preload-602118 --alsologtostderr -v=3" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-602118 -n no-preload-602118
E0603 12:01:34.574683   15028 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/enable-default-cni-034991/client.crt: no such file or directory
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-602118 -n no-preload-602118: exit status 3 (18.510289487s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0603 12:01:52.943448   72822 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.50.245:22: connect: no route to host
	E0603 12:01:52.943469   72822 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.50.245:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "no-preload-602118" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/no-preload/serial/Stop (138.99s)
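The stderr above shows the shape of this failure: after the stop request, the driver is polled roughly once per second for up to 120 attempts ("Waiting for machine to stop 0/120" … "119/120"), and when the VM still reports "Running" the command exits 82 with GUEST_STOP_TIMEOUT. The following is a minimal, illustrative sketch of that retry pattern only — it is not minikube's actual implementation, and the names waitForStop and stateFn are hypothetical.

// Sketch of the poll-until-stopped pattern visible in the log above.
// Assumptions: waitForStop and stateFn are hypothetical names; the real
// driver call and error plumbing in minikube differ.
package main

import (
	"fmt"
	"time"
)

func waitForStop(stateFn func() string, attempts int) error {
	for i := 0; i < attempts; i++ {
		if stateFn() == "Stopped" {
			return nil
		}
		// Mirrors the "Waiting for machine to stop i/120" lines in the log.
		fmt.Printf("Waiting for machine to stop %d/%d\n", i, attempts)
		time.Sleep(time.Second)
	}
	return fmt.Errorf("unable to stop vm, current state %q", stateFn())
}

func main() {
	// Stand-in for a driver that never reaches "Stopped", as in this run.
	// The real test run used 120 attempts (~2 minutes); 3 keeps the sketch quick.
	err := waitForStop(func() string { return "Running" }, 3)
	fmt.Println("stop err:", err)
}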

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/Stop (139.04s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-196710 --alsologtostderr -v=3
E0603 11:59:44.417656   15028 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/kindnet-034991/client.crt: no such file or directory
E0603 11:59:44.886597   15028 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/custom-flannel-034991/client.crt: no such file or directory
E0603 11:59:54.658284   15028 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/kindnet-034991/client.crt: no such file or directory
E0603 12:00:05.367483   15028 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/custom-flannel-034991/client.crt: no such file or directory
E0603 12:00:10.512096   15028 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/auto-034991/client.crt: no such file or directory
E0603 12:00:15.139460   15028 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/kindnet-034991/client.crt: no such file or directory
E0603 12:00:19.213799   15028 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/functional-835483/client.crt: no such file or directory
E0603 12:00:38.324875   15028 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/flannel-034991/client.crt: no such file or directory
E0603 12:00:38.330200   15028 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/flannel-034991/client.crt: no such file or directory
E0603 12:00:38.341092   15028 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/flannel-034991/client.crt: no such file or directory
E0603 12:00:38.361371   15028 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/flannel-034991/client.crt: no such file or directory
E0603 12:00:38.401676   15028 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/flannel-034991/client.crt: no such file or directory
E0603 12:00:38.481996   15028 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/flannel-034991/client.crt: no such file or directory
E0603 12:00:38.642283   15028 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/flannel-034991/client.crt: no such file or directory
E0603 12:00:38.963191   15028 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/flannel-034991/client.crt: no such file or directory
E0603 12:00:39.604359   15028 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/flannel-034991/client.crt: no such file or directory
E0603 12:00:40.884787   15028 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/flannel-034991/client.crt: no such file or directory
E0603 12:00:43.444899   15028 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/flannel-034991/client.crt: no such file or directory
E0603 12:00:46.327744   15028 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/custom-flannel-034991/client.crt: no such file or directory
E0603 12:00:48.565125   15028 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/flannel-034991/client.crt: no such file or directory
E0603 12:00:56.100238   15028 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/kindnet-034991/client.crt: no such file or directory
E0603 12:00:58.805975   15028 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/flannel-034991/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p default-k8s-diff-port-196710 --alsologtostderr -v=3: exit status 82 (2m0.488114862s)

                                                
                                                
-- stdout --
	* Stopping node "default-k8s-diff-port-196710"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0603 11:59:43.161136   72180 out.go:291] Setting OutFile to fd 1 ...
	I0603 11:59:43.161357   72180 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0603 11:59:43.161369   72180 out.go:304] Setting ErrFile to fd 2...
	I0603 11:59:43.161373   72180 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0603 11:59:43.161619   72180 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19008-7755/.minikube/bin
	I0603 11:59:43.161883   72180 out.go:298] Setting JSON to false
	I0603 11:59:43.161976   72180 mustload.go:65] Loading cluster: default-k8s-diff-port-196710
	I0603 11:59:43.162316   72180 config.go:182] Loaded profile config "default-k8s-diff-port-196710": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0603 11:59:43.162398   72180 profile.go:143] Saving config to /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/default-k8s-diff-port-196710/config.json ...
	I0603 11:59:43.162563   72180 mustload.go:65] Loading cluster: default-k8s-diff-port-196710
	I0603 11:59:43.162702   72180 config.go:182] Loaded profile config "default-k8s-diff-port-196710": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0603 11:59:43.162744   72180 stop.go:39] StopHost: default-k8s-diff-port-196710
	I0603 11:59:43.163161   72180 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 11:59:43.163210   72180 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 11:59:43.181137   72180 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36311
	I0603 11:59:43.181577   72180 main.go:141] libmachine: () Calling .GetVersion
	I0603 11:59:43.182203   72180 main.go:141] libmachine: Using API Version  1
	I0603 11:59:43.182227   72180 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 11:59:43.182621   72180 main.go:141] libmachine: () Calling .GetMachineName
	I0603 11:59:43.184827   72180 out.go:177] * Stopping node "default-k8s-diff-port-196710"  ...
	I0603 11:59:43.186179   72180 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0603 11:59:43.186219   72180 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .DriverName
	I0603 11:59:43.186452   72180 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0603 11:59:43.186482   72180 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .GetSSHHostname
	I0603 11:59:43.189811   72180 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | domain default-k8s-diff-port-196710 has defined MAC address 52:54:00:9c:61:49 in network mk-default-k8s-diff-port-196710
	I0603 11:59:43.190311   72180 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:61:49", ip: ""} in network mk-default-k8s-diff-port-196710: {Iface:virbr4 ExpiryTime:2024-06-03 12:58:47 +0000 UTC Type:0 Mac:52:54:00:9c:61:49 Iaid: IPaddr:192.168.61.60 Prefix:24 Hostname:default-k8s-diff-port-196710 Clientid:01:52:54:00:9c:61:49}
	I0603 11:59:43.190339   72180 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | domain default-k8s-diff-port-196710 has defined IP address 192.168.61.60 and MAC address 52:54:00:9c:61:49 in network mk-default-k8s-diff-port-196710
	I0603 11:59:43.190527   72180 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .GetSSHPort
	I0603 11:59:43.190691   72180 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .GetSSHKeyPath
	I0603 11:59:43.190845   72180 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .GetSSHUsername
	I0603 11:59:43.191034   72180 sshutil.go:53] new ssh client: &{IP:192.168.61.60 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19008-7755/.minikube/machines/default-k8s-diff-port-196710/id_rsa Username:docker}
	I0603 11:59:43.293962   72180 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0603 11:59:43.350428   72180 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0603 11:59:43.407905   72180 main.go:141] libmachine: Stopping "default-k8s-diff-port-196710"...
	I0603 11:59:43.407937   72180 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .GetState
	I0603 11:59:43.409482   72180 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .Stop
	I0603 11:59:43.413024   72180 main.go:141] libmachine: (default-k8s-diff-port-196710) Waiting for machine to stop 0/120
	I0603 11:59:44.414424   72180 main.go:141] libmachine: (default-k8s-diff-port-196710) Waiting for machine to stop 1/120
	I0603 11:59:45.415762   72180 main.go:141] libmachine: (default-k8s-diff-port-196710) Waiting for machine to stop 2/120
	I0603 11:59:46.417470   72180 main.go:141] libmachine: (default-k8s-diff-port-196710) Waiting for machine to stop 3/120
	I0603 11:59:47.418724   72180 main.go:141] libmachine: (default-k8s-diff-port-196710) Waiting for machine to stop 4/120
	I0603 11:59:48.420764   72180 main.go:141] libmachine: (default-k8s-diff-port-196710) Waiting for machine to stop 5/120
	I0603 11:59:49.422130   72180 main.go:141] libmachine: (default-k8s-diff-port-196710) Waiting for machine to stop 6/120
	I0603 11:59:50.423552   72180 main.go:141] libmachine: (default-k8s-diff-port-196710) Waiting for machine to stop 7/120
	I0603 11:59:51.424798   72180 main.go:141] libmachine: (default-k8s-diff-port-196710) Waiting for machine to stop 8/120
	I0603 11:59:52.426205   72180 main.go:141] libmachine: (default-k8s-diff-port-196710) Waiting for machine to stop 9/120
	I0603 11:59:53.428486   72180 main.go:141] libmachine: (default-k8s-diff-port-196710) Waiting for machine to stop 10/120
	I0603 11:59:54.429745   72180 main.go:141] libmachine: (default-k8s-diff-port-196710) Waiting for machine to stop 11/120
	I0603 11:59:55.431443   72180 main.go:141] libmachine: (default-k8s-diff-port-196710) Waiting for machine to stop 12/120
	I0603 11:59:56.433612   72180 main.go:141] libmachine: (default-k8s-diff-port-196710) Waiting for machine to stop 13/120
	I0603 11:59:57.435476   72180 main.go:141] libmachine: (default-k8s-diff-port-196710) Waiting for machine to stop 14/120
	I0603 11:59:58.437507   72180 main.go:141] libmachine: (default-k8s-diff-port-196710) Waiting for machine to stop 15/120
	I0603 11:59:59.439026   72180 main.go:141] libmachine: (default-k8s-diff-port-196710) Waiting for machine to stop 16/120
	I0603 12:00:00.440612   72180 main.go:141] libmachine: (default-k8s-diff-port-196710) Waiting for machine to stop 17/120
	I0603 12:00:01.442170   72180 main.go:141] libmachine: (default-k8s-diff-port-196710) Waiting for machine to stop 18/120
	I0603 12:00:02.443540   72180 main.go:141] libmachine: (default-k8s-diff-port-196710) Waiting for machine to stop 19/120
	I0603 12:00:03.445502   72180 main.go:141] libmachine: (default-k8s-diff-port-196710) Waiting for machine to stop 20/120
	I0603 12:00:04.446841   72180 main.go:141] libmachine: (default-k8s-diff-port-196710) Waiting for machine to stop 21/120
	I0603 12:00:05.448327   72180 main.go:141] libmachine: (default-k8s-diff-port-196710) Waiting for machine to stop 22/120
	I0603 12:00:06.449908   72180 main.go:141] libmachine: (default-k8s-diff-port-196710) Waiting for machine to stop 23/120
	I0603 12:00:07.451417   72180 main.go:141] libmachine: (default-k8s-diff-port-196710) Waiting for machine to stop 24/120
	I0603 12:00:08.453654   72180 main.go:141] libmachine: (default-k8s-diff-port-196710) Waiting for machine to stop 25/120
	I0603 12:00:09.455144   72180 main.go:141] libmachine: (default-k8s-diff-port-196710) Waiting for machine to stop 26/120
	I0603 12:00:10.456665   72180 main.go:141] libmachine: (default-k8s-diff-port-196710) Waiting for machine to stop 27/120
	I0603 12:00:11.458247   72180 main.go:141] libmachine: (default-k8s-diff-port-196710) Waiting for machine to stop 28/120
	I0603 12:00:12.459922   72180 main.go:141] libmachine: (default-k8s-diff-port-196710) Waiting for machine to stop 29/120
	I0603 12:00:13.462306   72180 main.go:141] libmachine: (default-k8s-diff-port-196710) Waiting for machine to stop 30/120
	I0603 12:00:14.463934   72180 main.go:141] libmachine: (default-k8s-diff-port-196710) Waiting for machine to stop 31/120
	I0603 12:00:15.465455   72180 main.go:141] libmachine: (default-k8s-diff-port-196710) Waiting for machine to stop 32/120
	I0603 12:00:16.467019   72180 main.go:141] libmachine: (default-k8s-diff-port-196710) Waiting for machine to stop 33/120
	I0603 12:00:17.468828   72180 main.go:141] libmachine: (default-k8s-diff-port-196710) Waiting for machine to stop 34/120
	I0603 12:00:18.471300   72180 main.go:141] libmachine: (default-k8s-diff-port-196710) Waiting for machine to stop 35/120
	I0603 12:00:19.472743   72180 main.go:141] libmachine: (default-k8s-diff-port-196710) Waiting for machine to stop 36/120
	I0603 12:00:20.474125   72180 main.go:141] libmachine: (default-k8s-diff-port-196710) Waiting for machine to stop 37/120
	I0603 12:00:21.475538   72180 main.go:141] libmachine: (default-k8s-diff-port-196710) Waiting for machine to stop 38/120
	I0603 12:00:22.476934   72180 main.go:141] libmachine: (default-k8s-diff-port-196710) Waiting for machine to stop 39/120
	I0603 12:00:23.479375   72180 main.go:141] libmachine: (default-k8s-diff-port-196710) Waiting for machine to stop 40/120
	I0603 12:00:24.480844   72180 main.go:141] libmachine: (default-k8s-diff-port-196710) Waiting for machine to stop 41/120
	I0603 12:00:25.482292   72180 main.go:141] libmachine: (default-k8s-diff-port-196710) Waiting for machine to stop 42/120
	I0603 12:00:26.483776   72180 main.go:141] libmachine: (default-k8s-diff-port-196710) Waiting for machine to stop 43/120
	I0603 12:00:27.485281   72180 main.go:141] libmachine: (default-k8s-diff-port-196710) Waiting for machine to stop 44/120
	I0603 12:00:28.487328   72180 main.go:141] libmachine: (default-k8s-diff-port-196710) Waiting for machine to stop 45/120
	I0603 12:00:29.489431   72180 main.go:141] libmachine: (default-k8s-diff-port-196710) Waiting for machine to stop 46/120
	I0603 12:00:30.490926   72180 main.go:141] libmachine: (default-k8s-diff-port-196710) Waiting for machine to stop 47/120
	I0603 12:00:31.492411   72180 main.go:141] libmachine: (default-k8s-diff-port-196710) Waiting for machine to stop 48/120
	I0603 12:00:32.493883   72180 main.go:141] libmachine: (default-k8s-diff-port-196710) Waiting for machine to stop 49/120
	I0603 12:00:33.496136   72180 main.go:141] libmachine: (default-k8s-diff-port-196710) Waiting for machine to stop 50/120
	I0603 12:00:34.497340   72180 main.go:141] libmachine: (default-k8s-diff-port-196710) Waiting for machine to stop 51/120
	I0603 12:00:35.498802   72180 main.go:141] libmachine: (default-k8s-diff-port-196710) Waiting for machine to stop 52/120
	I0603 12:00:36.500175   72180 main.go:141] libmachine: (default-k8s-diff-port-196710) Waiting for machine to stop 53/120
	I0603 12:00:37.501630   72180 main.go:141] libmachine: (default-k8s-diff-port-196710) Waiting for machine to stop 54/120
	I0603 12:00:38.503660   72180 main.go:141] libmachine: (default-k8s-diff-port-196710) Waiting for machine to stop 55/120
	I0603 12:00:39.505015   72180 main.go:141] libmachine: (default-k8s-diff-port-196710) Waiting for machine to stop 56/120
	I0603 12:00:40.506595   72180 main.go:141] libmachine: (default-k8s-diff-port-196710) Waiting for machine to stop 57/120
	I0603 12:00:41.507918   72180 main.go:141] libmachine: (default-k8s-diff-port-196710) Waiting for machine to stop 58/120
	I0603 12:00:42.509347   72180 main.go:141] libmachine: (default-k8s-diff-port-196710) Waiting for machine to stop 59/120
	I0603 12:00:43.511551   72180 main.go:141] libmachine: (default-k8s-diff-port-196710) Waiting for machine to stop 60/120
	I0603 12:00:44.513502   72180 main.go:141] libmachine: (default-k8s-diff-port-196710) Waiting for machine to stop 61/120
	I0603 12:00:45.514766   72180 main.go:141] libmachine: (default-k8s-diff-port-196710) Waiting for machine to stop 62/120
	I0603 12:00:46.516490   72180 main.go:141] libmachine: (default-k8s-diff-port-196710) Waiting for machine to stop 63/120
	I0603 12:00:47.517993   72180 main.go:141] libmachine: (default-k8s-diff-port-196710) Waiting for machine to stop 64/120
	I0603 12:00:48.519859   72180 main.go:141] libmachine: (default-k8s-diff-port-196710) Waiting for machine to stop 65/120
	I0603 12:00:49.521516   72180 main.go:141] libmachine: (default-k8s-diff-port-196710) Waiting for machine to stop 66/120
	I0603 12:00:50.522865   72180 main.go:141] libmachine: (default-k8s-diff-port-196710) Waiting for machine to stop 67/120
	I0603 12:00:51.524145   72180 main.go:141] libmachine: (default-k8s-diff-port-196710) Waiting for machine to stop 68/120
	I0603 12:00:52.525509   72180 main.go:141] libmachine: (default-k8s-diff-port-196710) Waiting for machine to stop 69/120
	I0603 12:00:53.527847   72180 main.go:141] libmachine: (default-k8s-diff-port-196710) Waiting for machine to stop 70/120
	I0603 12:00:54.529239   72180 main.go:141] libmachine: (default-k8s-diff-port-196710) Waiting for machine to stop 71/120
	I0603 12:00:55.530518   72180 main.go:141] libmachine: (default-k8s-diff-port-196710) Waiting for machine to stop 72/120
	I0603 12:00:56.531943   72180 main.go:141] libmachine: (default-k8s-diff-port-196710) Waiting for machine to stop 73/120
	I0603 12:00:57.533328   72180 main.go:141] libmachine: (default-k8s-diff-port-196710) Waiting for machine to stop 74/120
	I0603 12:00:58.535158   72180 main.go:141] libmachine: (default-k8s-diff-port-196710) Waiting for machine to stop 75/120
	I0603 12:00:59.536478   72180 main.go:141] libmachine: (default-k8s-diff-port-196710) Waiting for machine to stop 76/120
	I0603 12:01:00.537924   72180 main.go:141] libmachine: (default-k8s-diff-port-196710) Waiting for machine to stop 77/120
	I0603 12:01:01.539497   72180 main.go:141] libmachine: (default-k8s-diff-port-196710) Waiting for machine to stop 78/120
	I0603 12:01:02.540737   72180 main.go:141] libmachine: (default-k8s-diff-port-196710) Waiting for machine to stop 79/120
	I0603 12:01:03.542946   72180 main.go:141] libmachine: (default-k8s-diff-port-196710) Waiting for machine to stop 80/120
	I0603 12:01:04.544304   72180 main.go:141] libmachine: (default-k8s-diff-port-196710) Waiting for machine to stop 81/120
	I0603 12:01:05.545699   72180 main.go:141] libmachine: (default-k8s-diff-port-196710) Waiting for machine to stop 82/120
	I0603 12:01:06.547073   72180 main.go:141] libmachine: (default-k8s-diff-port-196710) Waiting for machine to stop 83/120
	I0603 12:01:07.548504   72180 main.go:141] libmachine: (default-k8s-diff-port-196710) Waiting for machine to stop 84/120
	I0603 12:01:08.550695   72180 main.go:141] libmachine: (default-k8s-diff-port-196710) Waiting for machine to stop 85/120
	I0603 12:01:09.552081   72180 main.go:141] libmachine: (default-k8s-diff-port-196710) Waiting for machine to stop 86/120
	I0603 12:01:10.553570   72180 main.go:141] libmachine: (default-k8s-diff-port-196710) Waiting for machine to stop 87/120
	I0603 12:01:11.554903   72180 main.go:141] libmachine: (default-k8s-diff-port-196710) Waiting for machine to stop 88/120
	I0603 12:01:12.556338   72180 main.go:141] libmachine: (default-k8s-diff-port-196710) Waiting for machine to stop 89/120
	I0603 12:01:13.558695   72180 main.go:141] libmachine: (default-k8s-diff-port-196710) Waiting for machine to stop 90/120
	I0603 12:01:14.560333   72180 main.go:141] libmachine: (default-k8s-diff-port-196710) Waiting for machine to stop 91/120
	I0603 12:01:15.561660   72180 main.go:141] libmachine: (default-k8s-diff-port-196710) Waiting for machine to stop 92/120
	I0603 12:01:16.563081   72180 main.go:141] libmachine: (default-k8s-diff-port-196710) Waiting for machine to stop 93/120
	I0603 12:01:17.564340   72180 main.go:141] libmachine: (default-k8s-diff-port-196710) Waiting for machine to stop 94/120
	I0603 12:01:18.566296   72180 main.go:141] libmachine: (default-k8s-diff-port-196710) Waiting for machine to stop 95/120
	I0603 12:01:19.567631   72180 main.go:141] libmachine: (default-k8s-diff-port-196710) Waiting for machine to stop 96/120
	I0603 12:01:20.569004   72180 main.go:141] libmachine: (default-k8s-diff-port-196710) Waiting for machine to stop 97/120
	I0603 12:01:21.570242   72180 main.go:141] libmachine: (default-k8s-diff-port-196710) Waiting for machine to stop 98/120
	I0603 12:01:22.571510   72180 main.go:141] libmachine: (default-k8s-diff-port-196710) Waiting for machine to stop 99/120
	I0603 12:01:23.573655   72180 main.go:141] libmachine: (default-k8s-diff-port-196710) Waiting for machine to stop 100/120
	I0603 12:01:24.575205   72180 main.go:141] libmachine: (default-k8s-diff-port-196710) Waiting for machine to stop 101/120
	I0603 12:01:25.576527   72180 main.go:141] libmachine: (default-k8s-diff-port-196710) Waiting for machine to stop 102/120
	I0603 12:01:26.578502   72180 main.go:141] libmachine: (default-k8s-diff-port-196710) Waiting for machine to stop 103/120
	I0603 12:01:27.579764   72180 main.go:141] libmachine: (default-k8s-diff-port-196710) Waiting for machine to stop 104/120
	I0603 12:01:28.581609   72180 main.go:141] libmachine: (default-k8s-diff-port-196710) Waiting for machine to stop 105/120
	I0603 12:01:29.582884   72180 main.go:141] libmachine: (default-k8s-diff-port-196710) Waiting for machine to stop 106/120
	I0603 12:01:30.584330   72180 main.go:141] libmachine: (default-k8s-diff-port-196710) Waiting for machine to stop 107/120
	I0603 12:01:31.585892   72180 main.go:141] libmachine: (default-k8s-diff-port-196710) Waiting for machine to stop 108/120
	I0603 12:01:32.587351   72180 main.go:141] libmachine: (default-k8s-diff-port-196710) Waiting for machine to stop 109/120
	I0603 12:01:33.588476   72180 main.go:141] libmachine: (default-k8s-diff-port-196710) Waiting for machine to stop 110/120
	I0603 12:01:34.589602   72180 main.go:141] libmachine: (default-k8s-diff-port-196710) Waiting for machine to stop 111/120
	I0603 12:01:35.590882   72180 main.go:141] libmachine: (default-k8s-diff-port-196710) Waiting for machine to stop 112/120
	I0603 12:01:36.592282   72180 main.go:141] libmachine: (default-k8s-diff-port-196710) Waiting for machine to stop 113/120
	I0603 12:01:37.593503   72180 main.go:141] libmachine: (default-k8s-diff-port-196710) Waiting for machine to stop 114/120
	I0603 12:01:38.595590   72180 main.go:141] libmachine: (default-k8s-diff-port-196710) Waiting for machine to stop 115/120
	I0603 12:01:39.596896   72180 main.go:141] libmachine: (default-k8s-diff-port-196710) Waiting for machine to stop 116/120
	I0603 12:01:40.598363   72180 main.go:141] libmachine: (default-k8s-diff-port-196710) Waiting for machine to stop 117/120
	I0603 12:01:41.600032   72180 main.go:141] libmachine: (default-k8s-diff-port-196710) Waiting for machine to stop 118/120
	I0603 12:01:42.601388   72180 main.go:141] libmachine: (default-k8s-diff-port-196710) Waiting for machine to stop 119/120
	I0603 12:01:43.602108   72180 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0603 12:01:43.602156   72180 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0603 12:01:43.604321   72180 out.go:177] 
	W0603 12:01:43.605796   72180 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0603 12:01:43.605819   72180 out.go:239] * 
	* 
	W0603 12:01:43.608464   72180 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0603 12:01:43.609655   72180 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop-. args "out/minikube-linux-amd64 stop -p default-k8s-diff-port-196710 --alsologtostderr -v=3" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-196710 -n default-k8s-diff-port-196710
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-196710 -n default-k8s-diff-port-196710: exit status 3 (18.548591531s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0603 12:02:02.159406   72916 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.61.60:22: connect: no route to host
	E0603 12:02:02.159427   72916 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.61.60:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-196710" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/Stop (139.04s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (12.39s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-725022 -n embed-certs-725022
E0603 12:01:32.653161   15028 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/enable-default-cni-034991/client.crt: no such file or directory
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-725022 -n embed-certs-725022: exit status 3 (3.167523669s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0603 12:01:35.631435   72645 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.72.245:22: connect: no route to host
	E0603 12:01:35.631456   72645 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.72.245:22: connect: no route to host

                                                
                                                
** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-725022 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
E0603 12:01:37.135002   15028 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/enable-default-cni-034991/client.crt: no such file or directory
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p embed-certs-725022 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.15238633s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.72.245:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p embed-certs-725022 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-725022 -n embed-certs-725022
E0603 12:01:42.256069   15028 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/enable-default-cni-034991/client.crt: no such file or directory
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-725022 -n embed-certs-725022: exit status 3 (3.067533787s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0603 12:01:44.851411   72886 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.72.245:22: connect: no route to host
	E0603 12:01:44.851436   72886 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.72.245:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "embed-certs-725022" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (12.39s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/DeployApp (0.48s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-905554 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context old-k8s-version-905554 create -f testdata/busybox.yaml: exit status 1 (43.385346ms)

                                                
                                                
** stderr ** 
	error: context "old-k8s-version-905554" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:196: kubectl --context old-k8s-version-905554 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-905554 -n old-k8s-version-905554
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-905554 -n old-k8s-version-905554: exit status 6 (214.113661ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0603 12:01:33.674137   72716 status.go:417] kubeconfig endpoint: get endpoint: "old-k8s-version-905554" does not appear in /home/jenkins/minikube-integration/19008-7755/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-905554" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-905554 -n old-k8s-version-905554
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-905554 -n old-k8s-version-905554: exit status 6 (220.832768ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0603 12:01:33.893572   72746 status.go:417] kubeconfig endpoint: get endpoint: "old-k8s-version-905554" does not appear in /home/jenkins/minikube-integration/19008-7755/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-905554" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/DeployApp (0.48s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (83.52s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-905554 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-905554 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 10 (1m23.26137986s)

                                                
                                                
-- stdout --
	* metrics-server is an addon maintained by Kubernetes. For any concerns contact minikube on GitHub.
	You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
	  - Using image fake.domain/registry.k8s.io/echoserver:1.4
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE: enable failed: run callbacks: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	]
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:207: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-905554 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 10
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-905554 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context old-k8s-version-905554 describe deploy/metrics-server -n kube-system: exit status 1 (41.821373ms)

                                                
                                                
** stderr ** 
	error: context "old-k8s-version-905554" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-905554 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-905554 -n old-k8s-version-905554
E0603 12:02:57.373928   15028 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/calico-034991/client.crt: no such file or directory
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-905554 -n old-k8s-version-905554: exit status 6 (217.846189ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0603 12:02:57.415420   73535 status.go:417] kubeconfig endpoint: get endpoint: "old-k8s-version-905554" does not appear in /home/jenkins/minikube-integration/19008-7755/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-905554" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (83.52s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (12.38s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-602118 -n no-preload-602118
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-602118 -n no-preload-602118: exit status 3 (3.167758904s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0603 12:01:56.111425   73021 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.50.245:22: connect: no route to host
	E0603 12:01:56.111459   73021 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.50.245:22: connect: no route to host

                                                
                                                
** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-602118 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
E0603 12:01:59.127978   15028 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/bridge-034991/client.crt: no such file or directory
E0603 12:01:59.134085   15028 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/bridge-034991/client.crt: no such file or directory
E0603 12:01:59.144349   15028 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/bridge-034991/client.crt: no such file or directory
E0603 12:01:59.164571   15028 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/bridge-034991/client.crt: no such file or directory
E0603 12:01:59.204818   15028 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/bridge-034991/client.crt: no such file or directory
E0603 12:01:59.285213   15028 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/bridge-034991/client.crt: no such file or directory
E0603 12:01:59.445587   15028 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/bridge-034991/client.crt: no such file or directory
E0603 12:01:59.766187   15028 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/bridge-034991/client.crt: no such file or directory
E0603 12:02:00.247733   15028 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/flannel-034991/client.crt: no such file or directory
E0603 12:02:00.407096   15028 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/bridge-034991/client.crt: no such file or directory
E0603 12:02:01.687802   15028 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/bridge-034991/client.crt: no such file or directory
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p no-preload-602118 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.1525062s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.50.245:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p no-preload-602118 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-602118 -n no-preload-602118
E0603 12:02:04.248850   15028 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/bridge-034991/client.crt: no such file or directory
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-602118 -n no-preload-602118: exit status 3 (3.0634872s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0603 12:02:05.327402   73132 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.50.245:22: connect: no route to host
	E0603 12:02:05.327427   73132 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.50.245:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "no-preload-602118" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (12.38s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (12.38s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-196710 -n default-k8s-diff-port-196710
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-196710 -n default-k8s-diff-port-196710: exit status 3 (3.16796626s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0603 12:02:05.327424   73102 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.61.60:22: connect: no route to host
	E0603 12:02:05.327440   73102 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.61.60:22: connect: no route to host

                                                
                                                
** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-196710 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-196710 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.152106912s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.61.60:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-196710 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-196710 -n default-k8s-diff-port-196710
E0603 12:02:12.038059   15028 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/addons-926744/client.crt: no such file or directory
E0603 12:02:12.977287   15028 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/enable-default-cni-034991/client.crt: no such file or directory
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-196710 -n default-k8s-diff-port-196710: exit status 3 (3.063228589s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0603 12:02:14.543347   73248 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.61.60:22: connect: no route to host
	E0603 12:02:14.543377   73248 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.61.60:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-196710" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (12.38s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/SecondStart (753.07s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-905554 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0
E0603 12:03:05.054954   15028 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/calico-034991/client.crt: no such file or directory
E0603 12:03:15.296130   15028 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/calico-034991/client.crt: no such file or directory
E0603 12:03:21.050997   15028 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/bridge-034991/client.crt: no such file or directory
E0603 12:03:22.168899   15028 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/flannel-034991/client.crt: no such file or directory
E0603 12:03:35.776834   15028 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/calico-034991/client.crt: no such file or directory
E0603 12:03:48.590411   15028 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/auto-034991/client.crt: no such file or directory
E0603 12:04:15.859203   15028 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/enable-default-cni-034991/client.crt: no such file or directory
E0603 12:04:16.274006   15028 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/auto-034991/client.crt: no such file or directory
E0603 12:04:16.737610   15028 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/calico-034991/client.crt: no such file or directory
E0603 12:04:24.404711   15028 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/custom-flannel-034991/client.crt: no such file or directory
E0603 12:04:34.178076   15028 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/kindnet-034991/client.crt: no such file or directory
E0603 12:04:42.972181   15028 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/bridge-034991/client.crt: no such file or directory
E0603 12:04:52.090281   15028 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/custom-flannel-034991/client.crt: no such file or directory
E0603 12:05:01.862572   15028 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/kindnet-034991/client.crt: no such file or directory
E0603 12:05:19.213191   15028 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/functional-835483/client.crt: no such file or directory
E0603 12:05:38.325461   15028 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/flannel-034991/client.crt: no such file or directory
E0603 12:05:38.658191   15028 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/calico-034991/client.crt: no such file or directory
E0603 12:06:06.009638   15028 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/flannel-034991/client.crt: no such file or directory
E0603 12:06:32.014763   15028 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/enable-default-cni-034991/client.crt: no such file or directory
E0603 12:06:42.261084   15028 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/functional-835483/client.crt: no such file or directory
E0603 12:06:59.128513   15028 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/bridge-034991/client.crt: no such file or directory
E0603 12:06:59.699521   15028 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/enable-default-cni-034991/client.crt: no such file or directory
E0603 12:07:12.037971   15028 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/addons-926744/client.crt: no such file or directory
E0603 12:07:26.813022   15028 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/bridge-034991/client.crt: no such file or directory
E0603 12:07:54.815498   15028 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/calico-034991/client.crt: no such file or directory
E0603 12:08:22.498742   15028 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/calico-034991/client.crt: no such file or directory
E0603 12:08:48.589976   15028 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/auto-034991/client.crt: no such file or directory
E0603 12:09:24.405183   15028 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/custom-flannel-034991/client.crt: no such file or directory
E0603 12:09:34.177424   15028 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/kindnet-034991/client.crt: no such file or directory
E0603 12:10:19.213083   15028 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/functional-835483/client.crt: no such file or directory
E0603 12:10:38.324041   15028 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/flannel-034991/client.crt: no such file or directory
E0603 12:11:32.015406   15028 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/enable-default-cni-034991/client.crt: no such file or directory
E0603 12:11:59.128749   15028 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/bridge-034991/client.crt: no such file or directory
E0603 12:12:12.037478   15028 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/addons-926744/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p old-k8s-version-905554 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0: exit status 109 (12m29.531702379s)

                                                
                                                
-- stdout --
	* [old-k8s-version-905554] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19008
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19008-7755/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19008-7755/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.30.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.1
	* Using the kvm2 driver based on existing profile
	* Starting "old-k8s-version-905554" primary control-plane node in "old-k8s-version-905554" cluster
	* Restarting existing kvm2 VM for "old-k8s-version-905554" ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0603 12:03:00.091233   73662 out.go:291] Setting OutFile to fd 1 ...
	I0603 12:03:00.091511   73662 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0603 12:03:00.091522   73662 out.go:304] Setting ErrFile to fd 2...
	I0603 12:03:00.091533   73662 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0603 12:03:00.091747   73662 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19008-7755/.minikube/bin
	I0603 12:03:00.092302   73662 out.go:298] Setting JSON to false
	I0603 12:03:00.093203   73662 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":6325,"bootTime":1717409855,"procs":200,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1060-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0603 12:03:00.093258   73662 start.go:139] virtualization: kvm guest
	I0603 12:03:00.095496   73662 out.go:177] * [old-k8s-version-905554] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0603 12:03:00.097136   73662 out.go:177]   - MINIKUBE_LOCATION=19008
	I0603 12:03:00.097143   73662 notify.go:220] Checking for updates...
	I0603 12:03:00.098729   73662 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0603 12:03:00.100123   73662 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19008-7755/kubeconfig
	I0603 12:03:00.101401   73662 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19008-7755/.minikube
	I0603 12:03:00.102776   73662 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0603 12:03:00.104123   73662 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0603 12:03:00.105823   73662 config.go:182] Loaded profile config "old-k8s-version-905554": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0603 12:03:00.106265   73662 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 12:03:00.106313   73662 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 12:03:00.120941   73662 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43635
	I0603 12:03:00.121275   73662 main.go:141] libmachine: () Calling .GetVersion
	I0603 12:03:00.121783   73662 main.go:141] libmachine: Using API Version  1
	I0603 12:03:00.121807   73662 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 12:03:00.122090   73662 main.go:141] libmachine: () Calling .GetMachineName
	I0603 12:03:00.122253   73662 main.go:141] libmachine: (old-k8s-version-905554) Calling .DriverName
	I0603 12:03:00.124037   73662 out.go:177] * Kubernetes 1.30.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.1
	I0603 12:03:00.125329   73662 driver.go:392] Setting default libvirt URI to qemu:///system
	I0603 12:03:00.125608   73662 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 12:03:00.125644   73662 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 12:03:00.139840   73662 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46571
	I0603 12:03:00.140215   73662 main.go:141] libmachine: () Calling .GetVersion
	I0603 12:03:00.140599   73662 main.go:141] libmachine: Using API Version  1
	I0603 12:03:00.140623   73662 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 12:03:00.140906   73662 main.go:141] libmachine: () Calling .GetMachineName
	I0603 12:03:00.141069   73662 main.go:141] libmachine: (old-k8s-version-905554) Calling .DriverName
	I0603 12:03:00.174375   73662 out.go:177] * Using the kvm2 driver based on existing profile
	I0603 12:03:00.175650   73662 start.go:297] selected driver: kvm2
	I0603 12:03:00.175667   73662 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-905554 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-905554 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.155 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0603 12:03:00.175770   73662 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0603 12:03:00.176396   73662 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0603 12:03:00.176476   73662 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19008-7755/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0603 12:03:00.191380   73662 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0603 12:03:00.191738   73662 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0603 12:03:00.191796   73662 cni.go:84] Creating CNI manager for ""
	I0603 12:03:00.191809   73662 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0603 12:03:00.191847   73662 start.go:340] cluster config:
	{Name:old-k8s-version-905554 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-905554 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.155 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0603 12:03:00.191975   73662 iso.go:125] acquiring lock: {Name:mkdc8e745fc6a0fd8e502f6ad2510510ae9abf27 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0603 12:03:00.193899   73662 out.go:177] * Starting "old-k8s-version-905554" primary control-plane node in "old-k8s-version-905554" cluster
	I0603 12:03:00.195191   73662 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0603 12:03:00.195231   73662 preload.go:147] Found local preload: /home/jenkins/minikube-integration/19008-7755/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0603 12:03:00.195240   73662 cache.go:56] Caching tarball of preloaded images
	I0603 12:03:00.195331   73662 preload.go:173] Found /home/jenkins/minikube-integration/19008-7755/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0603 12:03:00.195345   73662 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0603 12:03:00.195441   73662 profile.go:143] Saving config to /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/old-k8s-version-905554/config.json ...
	I0603 12:03:00.195620   73662 start.go:360] acquireMachinesLock for old-k8s-version-905554: {Name:mk7389fd163f37e28f7d4d842bab151f7e27bc7c Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0603 12:07:02.883984   73662 start.go:364] duration metric: took 4m2.688332749s to acquireMachinesLock for "old-k8s-version-905554"
	I0603 12:07:02.884045   73662 start.go:96] Skipping create...Using existing machine configuration
	I0603 12:07:02.884052   73662 fix.go:54] fixHost starting: 
	I0603 12:07:02.884482   73662 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 12:07:02.884520   73662 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 12:07:02.905120   73662 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45229
	I0603 12:07:02.905571   73662 main.go:141] libmachine: () Calling .GetVersion
	I0603 12:07:02.906128   73662 main.go:141] libmachine: Using API Version  1
	I0603 12:07:02.906157   73662 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 12:07:02.906519   73662 main.go:141] libmachine: () Calling .GetMachineName
	I0603 12:07:02.906709   73662 main.go:141] libmachine: (old-k8s-version-905554) Calling .DriverName
	I0603 12:07:02.906852   73662 main.go:141] libmachine: (old-k8s-version-905554) Calling .GetState
	I0603 12:07:02.908371   73662 fix.go:112] recreateIfNeeded on old-k8s-version-905554: state=Stopped err=<nil>
	I0603 12:07:02.908412   73662 main.go:141] libmachine: (old-k8s-version-905554) Calling .DriverName
	W0603 12:07:02.908577   73662 fix.go:138] unexpected machine state, will restart: <nil>
	I0603 12:07:02.910440   73662 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-905554" ...
	I0603 12:07:02.911700   73662 main.go:141] libmachine: (old-k8s-version-905554) Calling .Start
	I0603 12:07:02.911842   73662 main.go:141] libmachine: (old-k8s-version-905554) Ensuring networks are active...
	I0603 12:07:02.912570   73662 main.go:141] libmachine: (old-k8s-version-905554) Ensuring network default is active
	I0603 12:07:02.912896   73662 main.go:141] libmachine: (old-k8s-version-905554) Ensuring network mk-old-k8s-version-905554 is active
	I0603 12:07:02.913324   73662 main.go:141] libmachine: (old-k8s-version-905554) Getting domain xml...
	I0603 12:07:02.914147   73662 main.go:141] libmachine: (old-k8s-version-905554) Creating domain...
	I0603 12:07:04.233691   73662 main.go:141] libmachine: (old-k8s-version-905554) Waiting to get IP...
	I0603 12:07:04.234800   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | domain old-k8s-version-905554 has defined MAC address 52:54:00:3d:ed:07 in network mk-old-k8s-version-905554
	I0603 12:07:04.235276   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | unable to find current IP address of domain old-k8s-version-905554 in network mk-old-k8s-version-905554
	I0603 12:07:04.235378   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | I0603 12:07:04.235243   74674 retry.go:31] will retry after 297.546447ms: waiting for machine to come up
	I0603 12:07:04.534942   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | domain old-k8s-version-905554 has defined MAC address 52:54:00:3d:ed:07 in network mk-old-k8s-version-905554
	I0603 12:07:04.535492   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | unable to find current IP address of domain old-k8s-version-905554 in network mk-old-k8s-version-905554
	I0603 12:07:04.535522   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | I0603 12:07:04.535456   74674 retry.go:31] will retry after 385.160833ms: waiting for machine to come up
	I0603 12:07:04.922824   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | domain old-k8s-version-905554 has defined MAC address 52:54:00:3d:ed:07 in network mk-old-k8s-version-905554
	I0603 12:07:04.923312   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | unable to find current IP address of domain old-k8s-version-905554 in network mk-old-k8s-version-905554
	I0603 12:07:04.923336   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | I0603 12:07:04.923267   74674 retry.go:31] will retry after 363.309555ms: waiting for machine to come up
	I0603 12:07:05.287726   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | domain old-k8s-version-905554 has defined MAC address 52:54:00:3d:ed:07 in network mk-old-k8s-version-905554
	I0603 12:07:05.288228   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | unable to find current IP address of domain old-k8s-version-905554 in network mk-old-k8s-version-905554
	I0603 12:07:05.288264   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | I0603 12:07:05.288180   74674 retry.go:31] will retry after 401.575259ms: waiting for machine to come up
	I0603 12:07:05.691523   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | domain old-k8s-version-905554 has defined MAC address 52:54:00:3d:ed:07 in network mk-old-k8s-version-905554
	I0603 12:07:05.691945   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | unable to find current IP address of domain old-k8s-version-905554 in network mk-old-k8s-version-905554
	I0603 12:07:05.691977   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | I0603 12:07:05.691899   74674 retry.go:31] will retry after 473.67071ms: waiting for machine to come up
	I0603 12:07:06.167720   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | domain old-k8s-version-905554 has defined MAC address 52:54:00:3d:ed:07 in network mk-old-k8s-version-905554
	I0603 12:07:06.168286   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | unable to find current IP address of domain old-k8s-version-905554 in network mk-old-k8s-version-905554
	I0603 12:07:06.168317   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | I0603 12:07:06.168229   74674 retry.go:31] will retry after 610.631851ms: waiting for machine to come up
	I0603 12:07:06.780253   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | domain old-k8s-version-905554 has defined MAC address 52:54:00:3d:ed:07 in network mk-old-k8s-version-905554
	I0603 12:07:06.780747   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | unable to find current IP address of domain old-k8s-version-905554 in network mk-old-k8s-version-905554
	I0603 12:07:06.780771   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | I0603 12:07:06.780699   74674 retry.go:31] will retry after 1.150068976s: waiting for machine to come up
	I0603 12:07:07.932831   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | domain old-k8s-version-905554 has defined MAC address 52:54:00:3d:ed:07 in network mk-old-k8s-version-905554
	I0603 12:07:07.933375   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | unable to find current IP address of domain old-k8s-version-905554 in network mk-old-k8s-version-905554
	I0603 12:07:07.933409   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | I0603 12:07:07.933282   74674 retry.go:31] will retry after 900.546424ms: waiting for machine to come up
	I0603 12:07:08.835303   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | domain old-k8s-version-905554 has defined MAC address 52:54:00:3d:ed:07 in network mk-old-k8s-version-905554
	I0603 12:07:08.835794   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | unable to find current IP address of domain old-k8s-version-905554 in network mk-old-k8s-version-905554
	I0603 12:07:08.835827   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | I0603 12:07:08.835739   74674 retry.go:31] will retry after 1.64990511s: waiting for machine to come up
	I0603 12:07:10.487141   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | domain old-k8s-version-905554 has defined MAC address 52:54:00:3d:ed:07 in network mk-old-k8s-version-905554
	I0603 12:07:10.564570   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | unable to find current IP address of domain old-k8s-version-905554 in network mk-old-k8s-version-905554
	I0603 12:07:10.564611   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | I0603 12:07:10.487617   74674 retry.go:31] will retry after 1.948227414s: waiting for machine to come up
	I0603 12:07:12.438091   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | domain old-k8s-version-905554 has defined MAC address 52:54:00:3d:ed:07 in network mk-old-k8s-version-905554
	I0603 12:07:12.438596   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | unable to find current IP address of domain old-k8s-version-905554 in network mk-old-k8s-version-905554
	I0603 12:07:12.438620   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | I0603 12:07:12.438540   74674 retry.go:31] will retry after 2.378980516s: waiting for machine to come up
	I0603 12:07:14.819161   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | domain old-k8s-version-905554 has defined MAC address 52:54:00:3d:ed:07 in network mk-old-k8s-version-905554
	I0603 12:07:14.819782   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | unable to find current IP address of domain old-k8s-version-905554 in network mk-old-k8s-version-905554
	I0603 12:07:14.819806   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | I0603 12:07:14.819722   74674 retry.go:31] will retry after 2.362614226s: waiting for machine to come up
	I0603 12:07:17.184410   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | domain old-k8s-version-905554 has defined MAC address 52:54:00:3d:ed:07 in network mk-old-k8s-version-905554
	I0603 12:07:17.184937   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | unable to find current IP address of domain old-k8s-version-905554 in network mk-old-k8s-version-905554
	I0603 12:07:17.184967   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | I0603 12:07:17.184893   74674 retry.go:31] will retry after 3.787322948s: waiting for machine to come up
	I0603 12:07:20.975695   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | domain old-k8s-version-905554 has defined MAC address 52:54:00:3d:ed:07 in network mk-old-k8s-version-905554
	I0603 12:07:20.976290   73662 main.go:141] libmachine: (old-k8s-version-905554) Found IP for machine: 192.168.39.155
	I0603 12:07:20.976345   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | domain old-k8s-version-905554 has current primary IP address 192.168.39.155 and MAC address 52:54:00:3d:ed:07 in network mk-old-k8s-version-905554
	I0603 12:07:20.976358   73662 main.go:141] libmachine: (old-k8s-version-905554) Reserving static IP address...
	I0603 12:07:20.976837   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | found host DHCP lease matching {name: "old-k8s-version-905554", mac: "52:54:00:3d:ed:07", ip: "192.168.39.155"} in network mk-old-k8s-version-905554: {Iface:virbr1 ExpiryTime:2024-06-03 13:07:14 +0000 UTC Type:0 Mac:52:54:00:3d:ed:07 Iaid: IPaddr:192.168.39.155 Prefix:24 Hostname:old-k8s-version-905554 Clientid:01:52:54:00:3d:ed:07}
	I0603 12:07:20.976864   73662 main.go:141] libmachine: (old-k8s-version-905554) Reserved static IP address: 192.168.39.155
	I0603 12:07:20.976883   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | skip adding static IP to network mk-old-k8s-version-905554 - found existing host DHCP lease matching {name: "old-k8s-version-905554", mac: "52:54:00:3d:ed:07", ip: "192.168.39.155"}
	I0603 12:07:20.976894   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | Getting to WaitForSSH function...
	I0603 12:07:20.976902   73662 main.go:141] libmachine: (old-k8s-version-905554) Waiting for SSH to be available...
	I0603 12:07:20.978969   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | domain old-k8s-version-905554 has defined MAC address 52:54:00:3d:ed:07 in network mk-old-k8s-version-905554
	I0603 12:07:20.979326   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:ed:07", ip: ""} in network mk-old-k8s-version-905554: {Iface:virbr1 ExpiryTime:2024-06-03 13:07:14 +0000 UTC Type:0 Mac:52:54:00:3d:ed:07 Iaid: IPaddr:192.168.39.155 Prefix:24 Hostname:old-k8s-version-905554 Clientid:01:52:54:00:3d:ed:07}
	I0603 12:07:20.979361   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | domain old-k8s-version-905554 has defined IP address 192.168.39.155 and MAC address 52:54:00:3d:ed:07 in network mk-old-k8s-version-905554
	I0603 12:07:20.979458   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | Using SSH client type: external
	I0603 12:07:20.979488   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | Using SSH private key: /home/jenkins/minikube-integration/19008-7755/.minikube/machines/old-k8s-version-905554/id_rsa (-rw-------)
	I0603 12:07:20.979525   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.155 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19008-7755/.minikube/machines/old-k8s-version-905554/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0603 12:07:20.979540   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | About to run SSH command:
	I0603 12:07:20.979564   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | exit 0
	I0603 12:07:21.103178   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | SSH cmd err, output: <nil>: 
	I0603 12:07:21.103557   73662 main.go:141] libmachine: (old-k8s-version-905554) Calling .GetConfigRaw
	I0603 12:07:21.104215   73662 main.go:141] libmachine: (old-k8s-version-905554) Calling .GetIP
	I0603 12:07:21.107017   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | domain old-k8s-version-905554 has defined MAC address 52:54:00:3d:ed:07 in network mk-old-k8s-version-905554
	I0603 12:07:21.107397   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:ed:07", ip: ""} in network mk-old-k8s-version-905554: {Iface:virbr1 ExpiryTime:2024-06-03 13:07:14 +0000 UTC Type:0 Mac:52:54:00:3d:ed:07 Iaid: IPaddr:192.168.39.155 Prefix:24 Hostname:old-k8s-version-905554 Clientid:01:52:54:00:3d:ed:07}
	I0603 12:07:21.107424   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | domain old-k8s-version-905554 has defined IP address 192.168.39.155 and MAC address 52:54:00:3d:ed:07 in network mk-old-k8s-version-905554
	I0603 12:07:21.107619   73662 profile.go:143] Saving config to /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/old-k8s-version-905554/config.json ...
	I0603 12:07:21.107782   73662 machine.go:94] provisionDockerMachine start ...
	I0603 12:07:21.107809   73662 main.go:141] libmachine: (old-k8s-version-905554) Calling .DriverName
	I0603 12:07:21.107979   73662 main.go:141] libmachine: (old-k8s-version-905554) Calling .GetSSHHostname
	I0603 12:07:21.110021   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | domain old-k8s-version-905554 has defined MAC address 52:54:00:3d:ed:07 in network mk-old-k8s-version-905554
	I0603 12:07:21.110389   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:ed:07", ip: ""} in network mk-old-k8s-version-905554: {Iface:virbr1 ExpiryTime:2024-06-03 13:07:14 +0000 UTC Type:0 Mac:52:54:00:3d:ed:07 Iaid: IPaddr:192.168.39.155 Prefix:24 Hostname:old-k8s-version-905554 Clientid:01:52:54:00:3d:ed:07}
	I0603 12:07:21.110414   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | domain old-k8s-version-905554 has defined IP address 192.168.39.155 and MAC address 52:54:00:3d:ed:07 in network mk-old-k8s-version-905554
	I0603 12:07:21.110540   73662 main.go:141] libmachine: (old-k8s-version-905554) Calling .GetSSHPort
	I0603 12:07:21.110719   73662 main.go:141] libmachine: (old-k8s-version-905554) Calling .GetSSHKeyPath
	I0603 12:07:21.110880   73662 main.go:141] libmachine: (old-k8s-version-905554) Calling .GetSSHKeyPath
	I0603 12:07:21.111026   73662 main.go:141] libmachine: (old-k8s-version-905554) Calling .GetSSHUsername
	I0603 12:07:21.111239   73662 main.go:141] libmachine: Using SSH client type: native
	I0603 12:07:21.111467   73662 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.155 22 <nil> <nil>}
	I0603 12:07:21.111484   73662 main.go:141] libmachine: About to run SSH command:
	hostname
	I0603 12:07:21.219123   73662 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0603 12:07:21.219148   73662 main.go:141] libmachine: (old-k8s-version-905554) Calling .GetMachineName
	I0603 12:07:21.219379   73662 buildroot.go:166] provisioning hostname "old-k8s-version-905554"
	I0603 12:07:21.219403   73662 main.go:141] libmachine: (old-k8s-version-905554) Calling .GetMachineName
	I0603 12:07:21.219571   73662 main.go:141] libmachine: (old-k8s-version-905554) Calling .GetSSHHostname
	I0603 12:07:21.222603   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | domain old-k8s-version-905554 has defined MAC address 52:54:00:3d:ed:07 in network mk-old-k8s-version-905554
	I0603 12:07:21.223000   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:ed:07", ip: ""} in network mk-old-k8s-version-905554: {Iface:virbr1 ExpiryTime:2024-06-03 13:07:14 +0000 UTC Type:0 Mac:52:54:00:3d:ed:07 Iaid: IPaddr:192.168.39.155 Prefix:24 Hostname:old-k8s-version-905554 Clientid:01:52:54:00:3d:ed:07}
	I0603 12:07:21.223058   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | domain old-k8s-version-905554 has defined IP address 192.168.39.155 and MAC address 52:54:00:3d:ed:07 in network mk-old-k8s-version-905554
	I0603 12:07:21.223210   73662 main.go:141] libmachine: (old-k8s-version-905554) Calling .GetSSHPort
	I0603 12:07:21.223406   73662 main.go:141] libmachine: (old-k8s-version-905554) Calling .GetSSHKeyPath
	I0603 12:07:21.223573   73662 main.go:141] libmachine: (old-k8s-version-905554) Calling .GetSSHKeyPath
	I0603 12:07:21.223741   73662 main.go:141] libmachine: (old-k8s-version-905554) Calling .GetSSHUsername
	I0603 12:07:21.223926   73662 main.go:141] libmachine: Using SSH client type: native
	I0603 12:07:21.224087   73662 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.155 22 <nil> <nil>}
	I0603 12:07:21.224099   73662 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-905554 && echo "old-k8s-version-905554" | sudo tee /etc/hostname
	I0603 12:07:21.346108   73662 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-905554
	
	I0603 12:07:21.346135   73662 main.go:141] libmachine: (old-k8s-version-905554) Calling .GetSSHHostname
	I0603 12:07:21.348801   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | domain old-k8s-version-905554 has defined MAC address 52:54:00:3d:ed:07 in network mk-old-k8s-version-905554
	I0603 12:07:21.349099   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:ed:07", ip: ""} in network mk-old-k8s-version-905554: {Iface:virbr1 ExpiryTime:2024-06-03 13:07:14 +0000 UTC Type:0 Mac:52:54:00:3d:ed:07 Iaid: IPaddr:192.168.39.155 Prefix:24 Hostname:old-k8s-version-905554 Clientid:01:52:54:00:3d:ed:07}
	I0603 12:07:21.349129   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | domain old-k8s-version-905554 has defined IP address 192.168.39.155 and MAC address 52:54:00:3d:ed:07 in network mk-old-k8s-version-905554
	I0603 12:07:21.349295   73662 main.go:141] libmachine: (old-k8s-version-905554) Calling .GetSSHPort
	I0603 12:07:21.349498   73662 main.go:141] libmachine: (old-k8s-version-905554) Calling .GetSSHKeyPath
	I0603 12:07:21.349680   73662 main.go:141] libmachine: (old-k8s-version-905554) Calling .GetSSHKeyPath
	I0603 12:07:21.349849   73662 main.go:141] libmachine: (old-k8s-version-905554) Calling .GetSSHUsername
	I0603 12:07:21.350036   73662 main.go:141] libmachine: Using SSH client type: native
	I0603 12:07:21.350187   73662 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.155 22 <nil> <nil>}
	I0603 12:07:21.350204   73662 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-905554' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-905554/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-905554' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0603 12:07:21.467941   73662 main.go:141] libmachine: SSH cmd err, output: <nil>: 
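	[editor's sketch] The shell fragment above makes the hostname entry in /etc/hosts idempotent: only if no existing line already names the host does it either rewrite a 127.0.1.1 entry or append one. A rough Go equivalent of the same decision logic, operating on the file contents as a string (illustrative sketch, not the real provisioner):

    // etchosts_sketch.go — illustrative only; mirrors the grep/sed/tee logic above.
    package main

    import (
        "fmt"
        "strings"
    )

    // ensureHostname returns hosts with a 127.0.1.1 entry for name, adding or
    // rewriting one only when no line already ends with the hostname.
    func ensureHostname(hosts, name string) string {
        lines := strings.Split(hosts, "\n")
        for _, l := range lines {
            f := strings.Fields(l)
            if len(f) >= 2 && f[len(f)-1] == name {
                return hosts // already mapped, nothing to do
            }
        }
        for i, l := range lines {
            if strings.HasPrefix(strings.TrimSpace(l), "127.0.1.1") {
                lines[i] = "127.0.1.1 " + name // rewrite the existing loopback alias
                return strings.Join(lines, "\n")
            }
        }
        return hosts + "\n127.0.1.1 " + name // no alias line yet: append one
    }

    func main() {
        sample := "127.0.0.1 localhost\n127.0.1.1 minikube"
        fmt.Println(ensureHostname(sample, "old-k8s-version-905554"))
    }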
	I0603 12:07:21.467970   73662 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19008-7755/.minikube CaCertPath:/home/jenkins/minikube-integration/19008-7755/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19008-7755/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19008-7755/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19008-7755/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19008-7755/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19008-7755/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19008-7755/.minikube}
	I0603 12:07:21.467999   73662 buildroot.go:174] setting up certificates
	I0603 12:07:21.468008   73662 provision.go:84] configureAuth start
	I0603 12:07:21.468021   73662 main.go:141] libmachine: (old-k8s-version-905554) Calling .GetMachineName
	I0603 12:07:21.468308   73662 main.go:141] libmachine: (old-k8s-version-905554) Calling .GetIP
	I0603 12:07:21.470801   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | domain old-k8s-version-905554 has defined MAC address 52:54:00:3d:ed:07 in network mk-old-k8s-version-905554
	I0603 12:07:21.471158   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:ed:07", ip: ""} in network mk-old-k8s-version-905554: {Iface:virbr1 ExpiryTime:2024-06-03 13:07:14 +0000 UTC Type:0 Mac:52:54:00:3d:ed:07 Iaid: IPaddr:192.168.39.155 Prefix:24 Hostname:old-k8s-version-905554 Clientid:01:52:54:00:3d:ed:07}
	I0603 12:07:21.471185   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | domain old-k8s-version-905554 has defined IP address 192.168.39.155 and MAC address 52:54:00:3d:ed:07 in network mk-old-k8s-version-905554
	I0603 12:07:21.471336   73662 main.go:141] libmachine: (old-k8s-version-905554) Calling .GetSSHHostname
	I0603 12:07:21.473733   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | domain old-k8s-version-905554 has defined MAC address 52:54:00:3d:ed:07 in network mk-old-k8s-version-905554
	I0603 12:07:21.474058   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:ed:07", ip: ""} in network mk-old-k8s-version-905554: {Iface:virbr1 ExpiryTime:2024-06-03 13:07:14 +0000 UTC Type:0 Mac:52:54:00:3d:ed:07 Iaid: IPaddr:192.168.39.155 Prefix:24 Hostname:old-k8s-version-905554 Clientid:01:52:54:00:3d:ed:07}
	I0603 12:07:21.474092   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | domain old-k8s-version-905554 has defined IP address 192.168.39.155 and MAC address 52:54:00:3d:ed:07 in network mk-old-k8s-version-905554
	I0603 12:07:21.474276   73662 provision.go:143] copyHostCerts
	I0603 12:07:21.474355   73662 exec_runner.go:144] found /home/jenkins/minikube-integration/19008-7755/.minikube/ca.pem, removing ...
	I0603 12:07:21.474370   73662 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19008-7755/.minikube/ca.pem
	I0603 12:07:21.474429   73662 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19008-7755/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19008-7755/.minikube/ca.pem (1082 bytes)
	I0603 12:07:21.474534   73662 exec_runner.go:144] found /home/jenkins/minikube-integration/19008-7755/.minikube/cert.pem, removing ...
	I0603 12:07:21.474546   73662 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19008-7755/.minikube/cert.pem
	I0603 12:07:21.474577   73662 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19008-7755/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19008-7755/.minikube/cert.pem (1123 bytes)
	I0603 12:07:21.474645   73662 exec_runner.go:144] found /home/jenkins/minikube-integration/19008-7755/.minikube/key.pem, removing ...
	I0603 12:07:21.474654   73662 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19008-7755/.minikube/key.pem
	I0603 12:07:21.474680   73662 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19008-7755/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19008-7755/.minikube/key.pem (1679 bytes)
	I0603 12:07:21.474738   73662 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19008-7755/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19008-7755/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19008-7755/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-905554 san=[127.0.0.1 192.168.39.155 localhost minikube old-k8s-version-905554]
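	[editor's sketch] The server certificate generated above carries both IP and DNS SANs (127.0.0.1, 192.168.39.155, localhost, minikube, old-k8s-version-905554). Below is a minimal Go sketch of issuing a certificate with that SAN list; the real flow signs it with the CA key referenced in the log, whereas this sketch self-signs purely to stay self-contained:

    // servercert_sketch.go — illustrative only; self-signed instead of CA-signed.
    package main

    import (
        "crypto/ecdsa"
        "crypto/elliptic"
        "crypto/rand"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "fmt"
        "math/big"
        "net"
        "time"
    )

    func main() {
        key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
        if err != nil {
            panic(err)
        }
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(1),
            Subject:      pkix.Name{Organization: []string{"jenkins.old-k8s-version-905554"}},
            NotBefore:    time.Now(),
            // 26280h matches the CertExpiration value in the cluster config below.
            NotAfter:    time.Now().Add(26280 * time.Hour),
            KeyUsage:    x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage: []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            // SANs taken from the log line above.
            DNSNames:    []string{"localhost", "minikube", "old-k8s-version-905554"},
            IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.155")},
        }
        der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
        if err != nil {
            panic(err)
        }
        fmt.Print(string(pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der})))
    }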
	I0603 12:07:21.720184   73662 provision.go:177] copyRemoteCerts
	I0603 12:07:21.720251   73662 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0603 12:07:21.720284   73662 main.go:141] libmachine: (old-k8s-version-905554) Calling .GetSSHHostname
	I0603 12:07:21.723338   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | domain old-k8s-version-905554 has defined MAC address 52:54:00:3d:ed:07 in network mk-old-k8s-version-905554
	I0603 12:07:21.723752   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:ed:07", ip: ""} in network mk-old-k8s-version-905554: {Iface:virbr1 ExpiryTime:2024-06-03 13:07:14 +0000 UTC Type:0 Mac:52:54:00:3d:ed:07 Iaid: IPaddr:192.168.39.155 Prefix:24 Hostname:old-k8s-version-905554 Clientid:01:52:54:00:3d:ed:07}
	I0603 12:07:21.723786   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | domain old-k8s-version-905554 has defined IP address 192.168.39.155 and MAC address 52:54:00:3d:ed:07 in network mk-old-k8s-version-905554
	I0603 12:07:21.723993   73662 main.go:141] libmachine: (old-k8s-version-905554) Calling .GetSSHPort
	I0603 12:07:21.724208   73662 main.go:141] libmachine: (old-k8s-version-905554) Calling .GetSSHKeyPath
	I0603 12:07:21.724394   73662 main.go:141] libmachine: (old-k8s-version-905554) Calling .GetSSHUsername
	I0603 12:07:21.724615   73662 sshutil.go:53] new ssh client: &{IP:192.168.39.155 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19008-7755/.minikube/machines/old-k8s-version-905554/id_rsa Username:docker}
	I0603 12:07:21.809640   73662 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0603 12:07:21.834750   73662 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0603 12:07:21.858691   73662 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0603 12:07:21.887839   73662 provision.go:87] duration metric: took 419.817381ms to configureAuth
	I0603 12:07:21.887871   73662 buildroot.go:189] setting minikube options for container-runtime
	I0603 12:07:21.888061   73662 config.go:182] Loaded profile config "old-k8s-version-905554": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0603 12:07:21.888145   73662 main.go:141] libmachine: (old-k8s-version-905554) Calling .GetSSHHostname
	I0603 12:07:21.891350   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | domain old-k8s-version-905554 has defined MAC address 52:54:00:3d:ed:07 in network mk-old-k8s-version-905554
	I0603 12:07:21.891737   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:ed:07", ip: ""} in network mk-old-k8s-version-905554: {Iface:virbr1 ExpiryTime:2024-06-03 13:07:14 +0000 UTC Type:0 Mac:52:54:00:3d:ed:07 Iaid: IPaddr:192.168.39.155 Prefix:24 Hostname:old-k8s-version-905554 Clientid:01:52:54:00:3d:ed:07}
	I0603 12:07:21.891773   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | domain old-k8s-version-905554 has defined IP address 192.168.39.155 and MAC address 52:54:00:3d:ed:07 in network mk-old-k8s-version-905554
	I0603 12:07:21.891933   73662 main.go:141] libmachine: (old-k8s-version-905554) Calling .GetSSHPort
	I0603 12:07:21.892084   73662 main.go:141] libmachine: (old-k8s-version-905554) Calling .GetSSHKeyPath
	I0603 12:07:21.892278   73662 main.go:141] libmachine: (old-k8s-version-905554) Calling .GetSSHKeyPath
	I0603 12:07:21.892447   73662 main.go:141] libmachine: (old-k8s-version-905554) Calling .GetSSHUsername
	I0603 12:07:21.892608   73662 main.go:141] libmachine: Using SSH client type: native
	I0603 12:07:21.892822   73662 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.155 22 <nil> <nil>}
	I0603 12:07:21.892845   73662 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0603 12:07:22.173662   73662 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0603 12:07:22.173691   73662 machine.go:97] duration metric: took 1.065894044s to provisionDockerMachine
	I0603 12:07:22.173705   73662 start.go:293] postStartSetup for "old-k8s-version-905554" (driver="kvm2")
	I0603 12:07:22.173718   73662 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0603 12:07:22.173738   73662 main.go:141] libmachine: (old-k8s-version-905554) Calling .DriverName
	I0603 12:07:22.174119   73662 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0603 12:07:22.174154   73662 main.go:141] libmachine: (old-k8s-version-905554) Calling .GetSSHHostname
	I0603 12:07:22.176861   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | domain old-k8s-version-905554 has defined MAC address 52:54:00:3d:ed:07 in network mk-old-k8s-version-905554
	I0603 12:07:22.177152   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:ed:07", ip: ""} in network mk-old-k8s-version-905554: {Iface:virbr1 ExpiryTime:2024-06-03 13:07:14 +0000 UTC Type:0 Mac:52:54:00:3d:ed:07 Iaid: IPaddr:192.168.39.155 Prefix:24 Hostname:old-k8s-version-905554 Clientid:01:52:54:00:3d:ed:07}
	I0603 12:07:22.177184   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | domain old-k8s-version-905554 has defined IP address 192.168.39.155 and MAC address 52:54:00:3d:ed:07 in network mk-old-k8s-version-905554
	I0603 12:07:22.177325   73662 main.go:141] libmachine: (old-k8s-version-905554) Calling .GetSSHPort
	I0603 12:07:22.177505   73662 main.go:141] libmachine: (old-k8s-version-905554) Calling .GetSSHKeyPath
	I0603 12:07:22.177632   73662 main.go:141] libmachine: (old-k8s-version-905554) Calling .GetSSHUsername
	I0603 12:07:22.177764   73662 sshutil.go:53] new ssh client: &{IP:192.168.39.155 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19008-7755/.minikube/machines/old-k8s-version-905554/id_rsa Username:docker}
	I0603 12:07:22.263119   73662 ssh_runner.go:195] Run: cat /etc/os-release
	I0603 12:07:22.269815   73662 info.go:137] Remote host: Buildroot 2023.02.9
	I0603 12:07:22.269844   73662 filesync.go:126] Scanning /home/jenkins/minikube-integration/19008-7755/.minikube/addons for local assets ...
	I0603 12:07:22.269937   73662 filesync.go:126] Scanning /home/jenkins/minikube-integration/19008-7755/.minikube/files for local assets ...
	I0603 12:07:22.270041   73662 filesync.go:149] local asset: /home/jenkins/minikube-integration/19008-7755/.minikube/files/etc/ssl/certs/150282.pem -> 150282.pem in /etc/ssl/certs
	I0603 12:07:22.270320   73662 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0603 12:07:22.284032   73662 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/files/etc/ssl/certs/150282.pem --> /etc/ssl/certs/150282.pem (1708 bytes)
	I0603 12:07:22.309226   73662 start.go:296] duration metric: took 135.507592ms for postStartSetup
	I0603 12:07:22.309267   73662 fix.go:56] duration metric: took 19.425215079s for fixHost
	I0603 12:07:22.309291   73662 main.go:141] libmachine: (old-k8s-version-905554) Calling .GetSSHHostname
	I0603 12:07:22.311759   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | domain old-k8s-version-905554 has defined MAC address 52:54:00:3d:ed:07 in network mk-old-k8s-version-905554
	I0603 12:07:22.312031   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:ed:07", ip: ""} in network mk-old-k8s-version-905554: {Iface:virbr1 ExpiryTime:2024-06-03 13:07:14 +0000 UTC Type:0 Mac:52:54:00:3d:ed:07 Iaid: IPaddr:192.168.39.155 Prefix:24 Hostname:old-k8s-version-905554 Clientid:01:52:54:00:3d:ed:07}
	I0603 12:07:22.312062   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | domain old-k8s-version-905554 has defined IP address 192.168.39.155 and MAC address 52:54:00:3d:ed:07 in network mk-old-k8s-version-905554
	I0603 12:07:22.312244   73662 main.go:141] libmachine: (old-k8s-version-905554) Calling .GetSSHPort
	I0603 12:07:22.312436   73662 main.go:141] libmachine: (old-k8s-version-905554) Calling .GetSSHKeyPath
	I0603 12:07:22.312602   73662 main.go:141] libmachine: (old-k8s-version-905554) Calling .GetSSHKeyPath
	I0603 12:07:22.312740   73662 main.go:141] libmachine: (old-k8s-version-905554) Calling .GetSSHUsername
	I0603 12:07:22.312877   73662 main.go:141] libmachine: Using SSH client type: native
	I0603 12:07:22.313072   73662 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.155 22 <nil> <nil>}
	I0603 12:07:22.313088   73662 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0603 12:07:22.423838   73662 main.go:141] libmachine: SSH cmd err, output: <nil>: 1717416442.379680785
	
	I0603 12:07:22.423857   73662 fix.go:216] guest clock: 1717416442.379680785
	I0603 12:07:22.423864   73662 fix.go:229] Guest: 2024-06-03 12:07:22.379680785 +0000 UTC Remote: 2024-06-03 12:07:22.30927263 +0000 UTC m=+262.252197630 (delta=70.408155ms)
	I0603 12:07:22.423886   73662 fix.go:200] guest clock delta is within tolerance: 70.408155ms
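	[editor's sketch] The clock check above runs `date +%s.%N` on the guest, converts the output to a timestamp, and compares it with the host's idea of "now"; here the ~70ms delta is inside tolerance, so no resync is needed. A small Go sketch of that comparison (illustrative; the 2-second threshold is an assumed value, not taken from the log):

    // clockdelta_sketch.go — illustrative only.
    package main

    import (
        "fmt"
        "math"
        "strconv"
        "strings"
        "time"
    )

    // guestClockDelta parses `date +%s.%N` output and returns guest minus host.
    // Float parsing loses a little sub-microsecond precision, which is fine here.
    func guestClockDelta(dateOutput string, host time.Time) (time.Duration, error) {
        secs, err := strconv.ParseFloat(strings.TrimSpace(dateOutput), 64)
        if err != nil {
            return 0, err
        }
        guest := time.Unix(0, int64(secs*float64(time.Second)))
        return guest.Sub(host), nil
    }

    func main() {
        // Values taken from the log lines above.
        delta, err := guestClockDelta("1717416442.379680785",
            time.Date(2024, 6, 3, 12, 7, 22, 309272630, time.UTC))
        if err != nil {
            panic(err)
        }
        fmt.Printf("delta=%v within tolerance: %v\n",
            delta, math.Abs(delta.Seconds()) < 2.0)
    }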
	I0603 12:07:22.423892   73662 start.go:83] releasing machines lock for "old-k8s-version-905554", held for 19.539865965s
	I0603 12:07:22.423927   73662 main.go:141] libmachine: (old-k8s-version-905554) Calling .DriverName
	I0603 12:07:22.424202   73662 main.go:141] libmachine: (old-k8s-version-905554) Calling .GetIP
	I0603 12:07:22.427358   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | domain old-k8s-version-905554 has defined MAC address 52:54:00:3d:ed:07 in network mk-old-k8s-version-905554
	I0603 12:07:22.427799   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:ed:07", ip: ""} in network mk-old-k8s-version-905554: {Iface:virbr1 ExpiryTime:2024-06-03 13:07:14 +0000 UTC Type:0 Mac:52:54:00:3d:ed:07 Iaid: IPaddr:192.168.39.155 Prefix:24 Hostname:old-k8s-version-905554 Clientid:01:52:54:00:3d:ed:07}
	I0603 12:07:22.427833   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | domain old-k8s-version-905554 has defined IP address 192.168.39.155 and MAC address 52:54:00:3d:ed:07 in network mk-old-k8s-version-905554
	I0603 12:07:22.428006   73662 main.go:141] libmachine: (old-k8s-version-905554) Calling .DriverName
	I0603 12:07:22.428619   73662 main.go:141] libmachine: (old-k8s-version-905554) Calling .DriverName
	I0603 12:07:22.428817   73662 main.go:141] libmachine: (old-k8s-version-905554) Calling .DriverName
	I0603 12:07:22.428898   73662 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0603 12:07:22.428951   73662 main.go:141] libmachine: (old-k8s-version-905554) Calling .GetSSHHostname
	I0603 12:07:22.429242   73662 ssh_runner.go:195] Run: cat /version.json
	I0603 12:07:22.429269   73662 main.go:141] libmachine: (old-k8s-version-905554) Calling .GetSSHHostname
	I0603 12:07:22.431998   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | domain old-k8s-version-905554 has defined MAC address 52:54:00:3d:ed:07 in network mk-old-k8s-version-905554
	I0603 12:07:22.432244   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | domain old-k8s-version-905554 has defined MAC address 52:54:00:3d:ed:07 in network mk-old-k8s-version-905554
	I0603 12:07:22.432333   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:ed:07", ip: ""} in network mk-old-k8s-version-905554: {Iface:virbr1 ExpiryTime:2024-06-03 13:07:14 +0000 UTC Type:0 Mac:52:54:00:3d:ed:07 Iaid: IPaddr:192.168.39.155 Prefix:24 Hostname:old-k8s-version-905554 Clientid:01:52:54:00:3d:ed:07}
	I0603 12:07:22.432365   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | domain old-k8s-version-905554 has defined IP address 192.168.39.155 and MAC address 52:54:00:3d:ed:07 in network mk-old-k8s-version-905554
	I0603 12:07:22.432608   73662 main.go:141] libmachine: (old-k8s-version-905554) Calling .GetSSHPort
	I0603 12:07:22.432779   73662 main.go:141] libmachine: (old-k8s-version-905554) Calling .GetSSHKeyPath
	I0603 12:07:22.432797   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:ed:07", ip: ""} in network mk-old-k8s-version-905554: {Iface:virbr1 ExpiryTime:2024-06-03 13:07:14 +0000 UTC Type:0 Mac:52:54:00:3d:ed:07 Iaid: IPaddr:192.168.39.155 Prefix:24 Hostname:old-k8s-version-905554 Clientid:01:52:54:00:3d:ed:07}
	I0603 12:07:22.432818   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | domain old-k8s-version-905554 has defined IP address 192.168.39.155 and MAC address 52:54:00:3d:ed:07 in network mk-old-k8s-version-905554
	I0603 12:07:22.433032   73662 main.go:141] libmachine: (old-k8s-version-905554) Calling .GetSSHPort
	I0603 12:07:22.433044   73662 main.go:141] libmachine: (old-k8s-version-905554) Calling .GetSSHUsername
	I0603 12:07:22.433244   73662 sshutil.go:53] new ssh client: &{IP:192.168.39.155 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19008-7755/.minikube/machines/old-k8s-version-905554/id_rsa Username:docker}
	I0603 12:07:22.433260   73662 main.go:141] libmachine: (old-k8s-version-905554) Calling .GetSSHKeyPath
	I0603 12:07:22.433489   73662 main.go:141] libmachine: (old-k8s-version-905554) Calling .GetSSHUsername
	I0603 12:07:22.433629   73662 sshutil.go:53] new ssh client: &{IP:192.168.39.155 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19008-7755/.minikube/machines/old-k8s-version-905554/id_rsa Username:docker}
	I0603 12:07:22.512743   73662 ssh_runner.go:195] Run: systemctl --version
	I0603 12:07:22.538343   73662 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0603 12:07:22.691125   73662 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0603 12:07:22.697547   73662 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0603 12:07:22.697594   73662 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0603 12:07:22.714213   73662 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0603 12:07:22.714237   73662 start.go:494] detecting cgroup driver to use...
	I0603 12:07:22.714302   73662 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0603 12:07:22.735173   73662 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0603 12:07:22.749345   73662 docker.go:217] disabling cri-docker service (if available) ...
	I0603 12:07:22.749403   73662 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0603 12:07:22.763133   73662 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0603 12:07:22.776844   73662 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0603 12:07:22.906859   73662 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0603 12:07:23.071700   73662 docker.go:233] disabling docker service ...
	I0603 12:07:23.071767   73662 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0603 12:07:23.088439   73662 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0603 12:07:23.102097   73662 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0603 12:07:23.238693   73662 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0603 12:07:23.390561   73662 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0603 12:07:23.410039   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0603 12:07:23.434983   73662 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0603 12:07:23.435125   73662 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 12:07:23.448358   73662 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0603 12:07:23.448430   73662 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 12:07:23.460973   73662 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 12:07:23.473384   73662 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
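	[editor's sketch] The steps above rewrite /etc/crio/crio.conf.d/02-crio.conf with sed: pin the pause image to registry.k8s.io/pause:3.2 and switch the cgroup manager to cgroupfs (with conmon placed in the pod cgroup). The same two substitutions expressed in Go against an in-memory config string — illustrative only, since the real edits run remotely over SSH, and the starting values below are made up:

    // criocfg_sketch.go — illustrative only.
    package main

    import (
        "fmt"
        "regexp"
    )

    func main() {
        // Hypothetical original contents of 02-crio.conf.
        conf := `[crio.image]
    pause_image = "registry.k8s.io/pause:3.9"
    [crio.runtime]
    cgroup_manager = "systemd"
    `
        // Pin the pause image, like the first sed call above.
        conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
            ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.2"`)
        // Switch to cgroupfs, like the second sed call above.
        conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
            ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)
        fmt.Print(conf)
    }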
	I0603 12:07:23.486096   73662 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0603 12:07:23.498744   73662 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0603 12:07:23.510913   73662 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0603 12:07:23.510968   73662 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0603 12:07:23.527740   73662 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
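	[editor's sketch] The sequence above is a fallback: the sysctl probe for net.bridge.bridge-nf-call-iptables fails with status 255 because br_netfilter is not loaded yet, so the module is loaded explicitly and IPv4 forwarding is enabled. Sketched in Go with the same commands (illustrative; assumes it runs as root on the guest):

    // netfilter_sketch.go — illustrative only.
    package main

    import (
        "fmt"
        "os/exec"
    )

    func run(name string, args ...string) error {
        out, err := exec.Command(name, args...).CombinedOutput()
        if err != nil {
            return fmt.Errorf("%s %v: %v: %s", name, args, err, out)
        }
        return nil
    }

    func main() {
        // The probe is allowed to fail, exactly as the log notes ("which might be okay").
        if err := run("sysctl", "net.bridge.bridge-nf-call-iptables"); err != nil {
            fmt.Println("probe failed, loading br_netfilter:", err)
            if err := run("modprobe", "br_netfilter"); err != nil {
                panic(err)
            }
        }
        if err := run("sh", "-c", "echo 1 > /proc/sys/net/ipv4/ip_forward"); err != nil {
            panic(err)
        }
        fmt.Println("bridge netfilter and IPv4 forwarding configured")
    }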
	I0603 12:07:23.542547   73662 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0603 12:07:23.719963   73662 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0603 12:07:23.875772   73662 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0603 12:07:23.875843   73662 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0603 12:07:23.882164   73662 start.go:562] Will wait 60s for crictl version
	I0603 12:07:23.882250   73662 ssh_runner.go:195] Run: which crictl
	I0603 12:07:23.886841   73662 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0603 12:07:23.933867   73662 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0603 12:07:23.933952   73662 ssh_runner.go:195] Run: crio --version
	I0603 12:07:23.965258   73662 ssh_runner.go:195] Run: crio --version
	I0603 12:07:23.995457   73662 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0603 12:07:23.996608   73662 main.go:141] libmachine: (old-k8s-version-905554) Calling .GetIP
	I0603 12:07:23.999648   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | domain old-k8s-version-905554 has defined MAC address 52:54:00:3d:ed:07 in network mk-old-k8s-version-905554
	I0603 12:07:23.999982   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:ed:07", ip: ""} in network mk-old-k8s-version-905554: {Iface:virbr1 ExpiryTime:2024-06-03 13:07:14 +0000 UTC Type:0 Mac:52:54:00:3d:ed:07 Iaid: IPaddr:192.168.39.155 Prefix:24 Hostname:old-k8s-version-905554 Clientid:01:52:54:00:3d:ed:07}
	I0603 12:07:24.000010   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | domain old-k8s-version-905554 has defined IP address 192.168.39.155 and MAC address 52:54:00:3d:ed:07 in network mk-old-k8s-version-905554
	I0603 12:07:24.000257   73662 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0603 12:07:24.004569   73662 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0603 12:07:24.019027   73662 kubeadm.go:877] updating cluster {Name:old-k8s-version-905554 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersio
n:v1.20.0 ClusterName:old-k8s-version-905554 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.155 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fa

lse MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0603 12:07:24.019206   73662 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0603 12:07:24.019257   73662 ssh_runner.go:195] Run: sudo crictl images --output json
	I0603 12:07:24.068916   73662 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0603 12:07:24.069007   73662 ssh_runner.go:195] Run: which lz4
	I0603 12:07:24.074831   73662 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0603 12:07:24.081154   73662 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0603 12:07:24.081186   73662 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0603 12:07:25.794532   73662 crio.go:462] duration metric: took 1.71973848s to copy over tarball
	I0603 12:07:25.794618   73662 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0603 12:07:28.897711   73662 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.103055845s)
	I0603 12:07:28.897742   73662 crio.go:469] duration metric: took 3.103177549s to extract the tarball
	I0603 12:07:28.897752   73662 ssh_runner.go:146] rm: /preloaded.tar.lz4
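	[editor's sketch] The preload path above runs: stat /preloaded.tar.lz4 fails, so the ~473 MB preloaded-images tarball is copied over, unpacked into /var with xattrs preserved, and then removed. A compact Go sketch of the check–extract–cleanup portion (illustrative; the scp step is elided and needs the same SSH plumbing as earlier, and running this for real requires root plus lz4):

    // preload_sketch.go — illustrative only.
    package main

    import (
        "fmt"
        "os"
        "os/exec"
    )

    func main() {
        const tarball = "/preloaded.tar.lz4"
        if _, err := os.Stat(tarball); os.IsNotExist(err) {
            fmt.Println("tarball missing; this is where the preload would be copied over first")
            return
        }
        // Same flags as the extraction step in the log.
        cmd := exec.Command("tar", "--xattrs", "--xattrs-include", "security.capability",
            "-I", "lz4", "-C", "/var", "-xf", tarball)
        if out, err := cmd.CombinedOutput(); err != nil {
            panic(fmt.Sprintf("extract failed: %v: %s", err, out))
        }
        fmt.Println("preloaded images extracted; removing", tarball)
        _ = os.Remove(tarball)
    }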
	I0603 12:07:28.945269   73662 ssh_runner.go:195] Run: sudo crictl images --output json
	I0603 12:07:28.982973   73662 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0603 12:07:28.982998   73662 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0603 12:07:28.983068   73662 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0603 12:07:28.983099   73662 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0603 12:07:28.983134   73662 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0603 12:07:28.983191   73662 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0603 12:07:28.983104   73662 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0603 12:07:28.983282   73662 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0603 12:07:28.983280   73662 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0603 12:07:28.983525   73662 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0603 12:07:28.984988   73662 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0603 12:07:28.985005   73662 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0603 12:07:28.984997   73662 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0603 12:07:28.985007   73662 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0603 12:07:28.985026   73662 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0603 12:07:28.985190   73662 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0603 12:07:28.985244   73662 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0603 12:07:28.985288   73662 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0603 12:07:29.136387   73662 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0603 12:07:29.155867   73662 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0603 12:07:29.173686   73662 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0603 12:07:29.181970   73662 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0603 12:07:29.185877   73662 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0603 12:07:29.188684   73662 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0603 12:07:29.201080   73662 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0603 12:07:29.201134   73662 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0603 12:07:29.201174   73662 ssh_runner.go:195] Run: which crictl
	I0603 12:07:29.252186   73662 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0603 12:07:29.252232   73662 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0603 12:07:29.252308   73662 ssh_runner.go:195] Run: which crictl
	I0603 12:07:29.272578   73662 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0603 12:07:29.306804   73662 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0603 12:07:29.306856   73662 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0603 12:07:29.306880   73662 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0603 12:07:29.306901   73662 ssh_runner.go:195] Run: which crictl
	I0603 12:07:29.306915   73662 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0603 12:07:29.306928   73662 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0603 12:07:29.306952   73662 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0603 12:07:29.306961   73662 ssh_runner.go:195] Run: which crictl
	I0603 12:07:29.306988   73662 ssh_runner.go:195] Run: which crictl
	I0603 12:07:29.322141   73662 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0603 12:07:29.322220   73662 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0603 12:07:29.322238   73662 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0603 12:07:29.322264   73662 ssh_runner.go:195] Run: which crictl
	I0603 12:07:29.322210   73662 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0603 12:07:29.378678   73662 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0603 12:07:29.378717   73662 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0603 12:07:29.378726   73662 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0603 12:07:29.378775   73662 ssh_runner.go:195] Run: which crictl
	I0603 12:07:29.378831   73662 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0603 12:07:29.378898   73662 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0603 12:07:29.401173   73662 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/19008-7755/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0603 12:07:29.401229   73662 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0603 12:07:29.401396   73662 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/19008-7755/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0603 12:07:29.450497   73662 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/19008-7755/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0603 12:07:29.450531   73662 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0603 12:07:29.488109   73662 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/19008-7755/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0603 12:07:29.488191   73662 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/19008-7755/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0603 12:07:29.488191   73662 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/19008-7755/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0603 12:07:29.504909   73662 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/19008-7755/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0603 12:07:29.931311   73662 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0603 12:07:30.078311   73662 cache_images.go:92] duration metric: took 1.095295059s to LoadCachedImages
	W0603 12:07:30.078412   73662 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/19008-7755/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0: no such file or directory
	X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/19008-7755/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0: no such file or directory
	I0603 12:07:30.078431   73662 kubeadm.go:928] updating node { 192.168.39.155 8443 v1.20.0 crio true true} ...
	I0603 12:07:30.078568   73662 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-905554 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.39.155
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-905554 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0603 12:07:30.078660   73662 ssh_runner.go:195] Run: crio config
	I0603 12:07:30.129601   73662 cni.go:84] Creating CNI manager for ""
	I0603 12:07:30.180858   73662 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0603 12:07:30.180884   73662 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0603 12:07:30.180918   73662 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.155 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-905554 NodeName:old-k8s-version-905554 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.155"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.155 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0603 12:07:30.181104   73662 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.155
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-905554"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.155
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.155"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
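	[editor's sketch] The kubeadm config printed above is rendered from the option set in the preceding "kubeadm options" line. A minimal sketch of how such a document can be produced from a small options struct with text/template — illustrative only, covering just the InitConfiguration stanza, with field names invented for the example:

    // kubeadmconfig_sketch.go — illustrative only; not minikube's template.
    package main

    import (
        "os"
        "text/template"
    )

    type opts struct {
        AdvertiseAddress string
        APIServerPort    int
        NodeName         string
        CRISocket        string
    }

    const tpl = `apiVersion: kubeadm.k8s.io/v1beta2
    kind: InitConfiguration
    localAPIEndpoint:
      advertiseAddress: {{.AdvertiseAddress}}
      bindPort: {{.APIServerPort}}
    nodeRegistration:
      criSocket: {{.CRISocket}}
      name: "{{.NodeName}}"
      kubeletExtraArgs:
        node-ip: {{.AdvertiseAddress}}
      taints: []
    `

    func main() {
        // Values taken from the log above.
        o := opts{
            AdvertiseAddress: "192.168.39.155",
            APIServerPort:    8443,
            NodeName:         "old-k8s-version-905554",
            CRISocket:        "/var/run/crio/crio.sock",
        }
        template.Must(template.New("kubeadm").Parse(tpl)).Execute(os.Stdout, o)
    }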
	I0603 12:07:30.181180   73662 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0603 12:07:30.192139   73662 binaries.go:44] Found k8s binaries, skipping transfer
	I0603 12:07:30.192202   73662 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0603 12:07:30.202078   73662 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0603 12:07:30.222968   73662 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0603 12:07:30.242794   73662 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I0603 12:07:30.263578   73662 ssh_runner.go:195] Run: grep 192.168.39.155	control-plane.minikube.internal$ /etc/hosts
	I0603 12:07:30.267535   73662 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.155	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0603 12:07:30.280543   73662 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0603 12:07:30.421251   73662 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0603 12:07:30.441243   73662 certs.go:68] Setting up /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/old-k8s-version-905554 for IP: 192.168.39.155
	I0603 12:07:30.441269   73662 certs.go:194] generating shared ca certs ...
	I0603 12:07:30.441299   73662 certs.go:226] acquiring lock for ca certs: {Name:mk50092df3bdecbf959926adc377ee06b75aacd9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 12:07:30.441485   73662 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19008-7755/.minikube/ca.key
	I0603 12:07:30.441546   73662 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19008-7755/.minikube/proxy-client-ca.key
	I0603 12:07:30.441559   73662 certs.go:256] generating profile certs ...
	I0603 12:07:30.441675   73662 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/old-k8s-version-905554/client.key
	I0603 12:07:30.465464   73662 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/old-k8s-version-905554/apiserver.key.0d34b22c
	I0603 12:07:30.465562   73662 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/old-k8s-version-905554/proxy-client.key
	I0603 12:07:30.465730   73662 certs.go:484] found cert: /home/jenkins/minikube-integration/19008-7755/.minikube/certs/15028.pem (1338 bytes)
	W0603 12:07:30.465775   73662 certs.go:480] ignoring /home/jenkins/minikube-integration/19008-7755/.minikube/certs/15028_empty.pem, impossibly tiny 0 bytes
	I0603 12:07:30.465787   73662 certs.go:484] found cert: /home/jenkins/minikube-integration/19008-7755/.minikube/certs/ca-key.pem (1679 bytes)
	I0603 12:07:30.465818   73662 certs.go:484] found cert: /home/jenkins/minikube-integration/19008-7755/.minikube/certs/ca.pem (1082 bytes)
	I0603 12:07:30.465855   73662 certs.go:484] found cert: /home/jenkins/minikube-integration/19008-7755/.minikube/certs/cert.pem (1123 bytes)
	I0603 12:07:30.465884   73662 certs.go:484] found cert: /home/jenkins/minikube-integration/19008-7755/.minikube/certs/key.pem (1679 bytes)
	I0603 12:07:30.465941   73662 certs.go:484] found cert: /home/jenkins/minikube-integration/19008-7755/.minikube/files/etc/ssl/certs/150282.pem (1708 bytes)
	I0603 12:07:30.466831   73662 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0603 12:07:30.517957   73662 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0603 12:07:30.554072   73662 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0603 12:07:30.610727   73662 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0603 12:07:30.663149   73662 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/old-k8s-version-905554/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0603 12:07:30.702313   73662 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/old-k8s-version-905554/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0603 12:07:30.735841   73662 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/old-k8s-version-905554/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0603 12:07:30.761517   73662 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/old-k8s-version-905554/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0603 12:07:30.793872   73662 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/files/etc/ssl/certs/150282.pem --> /usr/share/ca-certificates/150282.pem (1708 bytes)
	I0603 12:07:30.821613   73662 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0603 12:07:30.848030   73662 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/certs/15028.pem --> /usr/share/ca-certificates/15028.pem (1338 bytes)
	I0603 12:07:30.875016   73662 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0603 12:07:30.901749   73662 ssh_runner.go:195] Run: openssl version
	I0603 12:07:30.911485   73662 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15028.pem && ln -fs /usr/share/ca-certificates/15028.pem /etc/ssl/certs/15028.pem"
	I0603 12:07:30.923791   73662 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15028.pem
	I0603 12:07:30.928808   73662 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jun  3 10:51 /usr/share/ca-certificates/15028.pem
	I0603 12:07:30.928858   73662 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15028.pem
	I0603 12:07:30.934925   73662 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/15028.pem /etc/ssl/certs/51391683.0"
	I0603 12:07:30.946930   73662 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/150282.pem && ln -fs /usr/share/ca-certificates/150282.pem /etc/ssl/certs/150282.pem"
	I0603 12:07:30.958809   73662 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/150282.pem
	I0603 12:07:30.963687   73662 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jun  3 10:51 /usr/share/ca-certificates/150282.pem
	I0603 12:07:30.963748   73662 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/150282.pem
	I0603 12:07:30.969671   73662 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/150282.pem /etc/ssl/certs/3ec20f2e.0"
	I0603 12:07:30.981918   73662 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0603 12:07:30.994005   73662 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0603 12:07:30.999126   73662 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jun  3 10:39 /usr/share/ca-certificates/minikubeCA.pem
	I0603 12:07:30.999190   73662 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0603 12:07:31.005828   73662 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0603 12:07:31.017320   73662 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0603 12:07:31.021993   73662 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0603 12:07:31.028420   73662 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0603 12:07:31.034719   73662 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0603 12:07:31.041565   73662 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0603 12:07:31.048142   73662 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0603 12:07:31.053992   73662 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0603 12:07:31.060197   73662 kubeadm.go:391] StartCluster: {Name:old-k8s-version-905554 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-905554 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.155 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0603 12:07:31.060324   73662 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0603 12:07:31.060361   73662 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0603 12:07:31.102996   73662 cri.go:89] found id: ""
	I0603 12:07:31.103083   73662 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0603 12:07:31.114546   73662 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0603 12:07:31.114566   73662 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0603 12:07:31.114573   73662 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0603 12:07:31.114619   73662 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0603 12:07:31.126042   73662 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0603 12:07:31.127358   73662 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-905554" does not appear in /home/jenkins/minikube-integration/19008-7755/kubeconfig
	I0603 12:07:31.128029   73662 kubeconfig.go:62] /home/jenkins/minikube-integration/19008-7755/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-905554" cluster setting kubeconfig missing "old-k8s-version-905554" context setting]
	I0603 12:07:31.128862   73662 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19008-7755/kubeconfig: {Name:mk2f45bf5da44c5570e14124ae482cac309d97b6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 12:07:31.247021   73662 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0603 12:07:31.258013   73662 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.39.155
	I0603 12:07:31.258054   73662 kubeadm.go:1154] stopping kube-system containers ...
	I0603 12:07:31.258065   73662 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0603 12:07:31.258119   73662 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0603 12:07:31.301991   73662 cri.go:89] found id: ""
	I0603 12:07:31.302065   73662 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0603 12:07:31.326132   73662 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0603 12:07:31.337333   73662 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0603 12:07:31.337355   73662 kubeadm.go:156] found existing configuration files:
	
	I0603 12:07:31.337396   73662 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0603 12:07:31.347256   73662 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0603 12:07:31.347300   73662 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0603 12:07:31.357463   73662 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0603 12:07:31.367810   73662 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0603 12:07:31.367867   73662 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0603 12:07:31.378092   73662 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0603 12:07:31.388911   73662 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0603 12:07:31.388959   73662 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0603 12:07:31.400327   73662 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0603 12:07:31.411937   73662 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0603 12:07:31.411984   73662 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0603 12:07:31.423929   73662 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0603 12:07:31.435914   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0603 12:07:31.563621   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0603 12:07:32.980144   73662 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.416481613s)
	I0603 12:07:32.980178   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0603 12:07:33.219383   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0603 12:07:33.320755   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0603 12:07:33.437964   73662 api_server.go:52] waiting for apiserver process to appear ...
	I0603 12:07:33.438070   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:07:33.938124   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:07:34.439012   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:07:34.938293   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:07:35.438655   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:07:35.938894   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:07:36.438790   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:07:36.938720   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:07:37.438183   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:07:37.938442   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:07:38.438341   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:07:38.938738   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:07:39.438262   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:07:39.938743   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:07:40.438270   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:07:40.938253   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:07:41.438610   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:07:41.938408   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:07:42.438825   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:07:42.938492   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:07:43.439013   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:07:43.938232   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:07:44.438816   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:07:44.938476   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:07:45.438738   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:07:45.939144   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:07:46.438431   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:07:46.938360   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:07:47.438811   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:07:47.938857   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:07:48.438849   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:07:48.938531   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:07:49.438876   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:07:49.938908   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:07:50.438966   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:07:50.938952   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:07:51.439179   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:07:51.938804   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:07:52.438327   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:07:52.938677   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:07:53.438995   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:07:53.938976   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:07:54.438174   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:07:54.938412   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:07:55.438798   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:07:55.938263   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:07:56.438870   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:07:56.938915   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:07:57.438799   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:07:57.938972   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:07:58.438367   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:07:58.939045   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:07:59.439020   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:07:59.938716   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:00.438789   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:00.938973   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:01.439098   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:01.938892   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:02.438978   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:02.938317   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:03.438969   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:03.938274   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:04.438255   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:04.938545   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:05.438368   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:05.938174   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:06.438995   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:06.939167   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:07.438451   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:07.938651   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:08.438892   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:08.938182   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:09.438548   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:09.938352   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:10.438932   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:10.938156   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:11.438911   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:11.939064   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:12.438578   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:12.938389   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:13.438469   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:13.939000   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:14.438219   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:14.938949   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:15.438709   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:15.938471   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:16.438909   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:16.939131   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:17.438995   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:17.938810   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:18.438615   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:18.938920   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:19.438966   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:19.938696   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:20.438818   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:20.938625   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:21.439129   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:21.938488   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:22.438452   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:22.938328   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:23.438557   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:23.938427   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:24.438391   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:24.939088   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:25.439153   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:25.939073   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:26.438157   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:26.938755   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:27.438244   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:27.938149   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:28.439131   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:28.938855   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:29.439027   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:29.938159   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:30.438727   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:30.938281   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:31.438203   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:31.938903   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:32.438731   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:32.938479   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:33.438133   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 12:08:33.438202   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 12:08:33.480006   73662 cri.go:89] found id: ""
	I0603 12:08:33.480044   73662 logs.go:276] 0 containers: []
	W0603 12:08:33.480056   73662 logs.go:278] No container was found matching "kube-apiserver"
	I0603 12:08:33.480066   73662 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 12:08:33.480126   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 12:08:33.519446   73662 cri.go:89] found id: ""
	I0603 12:08:33.519469   73662 logs.go:276] 0 containers: []
	W0603 12:08:33.519476   73662 logs.go:278] No container was found matching "etcd"
	I0603 12:08:33.519480   73662 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 12:08:33.519536   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 12:08:33.553602   73662 cri.go:89] found id: ""
	I0603 12:08:33.553624   73662 logs.go:276] 0 containers: []
	W0603 12:08:33.553631   73662 logs.go:278] No container was found matching "coredns"
	I0603 12:08:33.553637   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 12:08:33.553692   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 12:08:33.588061   73662 cri.go:89] found id: ""
	I0603 12:08:33.588085   73662 logs.go:276] 0 containers: []
	W0603 12:08:33.588094   73662 logs.go:278] No container was found matching "kube-scheduler"
	I0603 12:08:33.588103   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 12:08:33.588155   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 12:08:33.623960   73662 cri.go:89] found id: ""
	I0603 12:08:33.623983   73662 logs.go:276] 0 containers: []
	W0603 12:08:33.623993   73662 logs.go:278] No container was found matching "kube-proxy"
	I0603 12:08:33.624000   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 12:08:33.624071   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 12:08:33.658829   73662 cri.go:89] found id: ""
	I0603 12:08:33.658873   73662 logs.go:276] 0 containers: []
	W0603 12:08:33.658885   73662 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 12:08:33.658893   73662 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 12:08:33.658956   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 12:08:33.699501   73662 cri.go:89] found id: ""
	I0603 12:08:33.699526   73662 logs.go:276] 0 containers: []
	W0603 12:08:33.699536   73662 logs.go:278] No container was found matching "kindnet"
	I0603 12:08:33.699544   73662 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 12:08:33.699601   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 12:08:33.732293   73662 cri.go:89] found id: ""
	I0603 12:08:33.732327   73662 logs.go:276] 0 containers: []
	W0603 12:08:33.732338   73662 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 12:08:33.732348   73662 logs.go:123] Gathering logs for kubelet ...
	I0603 12:08:33.732361   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 12:08:33.783990   73662 logs.go:123] Gathering logs for dmesg ...
	I0603 12:08:33.784027   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 12:08:33.800684   73662 logs.go:123] Gathering logs for describe nodes ...
	I0603 12:08:33.800711   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 12:08:33.939661   73662 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 12:08:33.939685   73662 logs.go:123] Gathering logs for CRI-O ...
	I0603 12:08:33.939699   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 12:08:34.006442   73662 logs.go:123] Gathering logs for container status ...
	I0603 12:08:34.006473   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 12:08:36.549129   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:36.562476   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 12:08:36.562536   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 12:08:36.600035   73662 cri.go:89] found id: ""
	I0603 12:08:36.600074   73662 logs.go:276] 0 containers: []
	W0603 12:08:36.600084   73662 logs.go:278] No container was found matching "kube-apiserver"
	I0603 12:08:36.600091   73662 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 12:08:36.600147   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 12:08:36.661954   73662 cri.go:89] found id: ""
	I0603 12:08:36.661981   73662 logs.go:276] 0 containers: []
	W0603 12:08:36.661989   73662 logs.go:278] No container was found matching "etcd"
	I0603 12:08:36.661996   73662 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 12:08:36.662082   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 12:08:36.699538   73662 cri.go:89] found id: ""
	I0603 12:08:36.699561   73662 logs.go:276] 0 containers: []
	W0603 12:08:36.699569   73662 logs.go:278] No container was found matching "coredns"
	I0603 12:08:36.699574   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 12:08:36.699619   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 12:08:36.735256   73662 cri.go:89] found id: ""
	I0603 12:08:36.735283   73662 logs.go:276] 0 containers: []
	W0603 12:08:36.735291   73662 logs.go:278] No container was found matching "kube-scheduler"
	I0603 12:08:36.735296   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 12:08:36.735356   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 12:08:36.779862   73662 cri.go:89] found id: ""
	I0603 12:08:36.779888   73662 logs.go:276] 0 containers: []
	W0603 12:08:36.779895   73662 logs.go:278] No container was found matching "kube-proxy"
	I0603 12:08:36.779900   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 12:08:36.779946   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 12:08:36.818146   73662 cri.go:89] found id: ""
	I0603 12:08:36.818180   73662 logs.go:276] 0 containers: []
	W0603 12:08:36.818190   73662 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 12:08:36.818198   73662 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 12:08:36.818256   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 12:08:36.855408   73662 cri.go:89] found id: ""
	I0603 12:08:36.855436   73662 logs.go:276] 0 containers: []
	W0603 12:08:36.855447   73662 logs.go:278] No container was found matching "kindnet"
	I0603 12:08:36.855455   73662 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 12:08:36.855521   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 12:08:36.891656   73662 cri.go:89] found id: ""
	I0603 12:08:36.891686   73662 logs.go:276] 0 containers: []
	W0603 12:08:36.891697   73662 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 12:08:36.891709   73662 logs.go:123] Gathering logs for container status ...
	I0603 12:08:36.891725   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 12:08:36.937992   73662 logs.go:123] Gathering logs for kubelet ...
	I0603 12:08:36.938025   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 12:08:36.992422   73662 logs.go:123] Gathering logs for dmesg ...
	I0603 12:08:36.992456   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 12:08:37.007064   73662 logs.go:123] Gathering logs for describe nodes ...
	I0603 12:08:37.007093   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 12:08:37.088103   73662 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 12:08:37.088124   73662 logs.go:123] Gathering logs for CRI-O ...
	I0603 12:08:37.088136   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 12:08:39.660794   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:39.674617   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 12:08:39.674694   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 12:08:39.711446   73662 cri.go:89] found id: ""
	I0603 12:08:39.711482   73662 logs.go:276] 0 containers: []
	W0603 12:08:39.711493   73662 logs.go:278] No container was found matching "kube-apiserver"
	I0603 12:08:39.711501   73662 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 12:08:39.711565   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 12:08:39.745918   73662 cri.go:89] found id: ""
	I0603 12:08:39.745947   73662 logs.go:276] 0 containers: []
	W0603 12:08:39.745957   73662 logs.go:278] No container was found matching "etcd"
	I0603 12:08:39.745964   73662 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 12:08:39.746013   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 12:08:39.780713   73662 cri.go:89] found id: ""
	I0603 12:08:39.780739   73662 logs.go:276] 0 containers: []
	W0603 12:08:39.780760   73662 logs.go:278] No container was found matching "coredns"
	I0603 12:08:39.780777   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 12:08:39.780839   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 12:08:39.815657   73662 cri.go:89] found id: ""
	I0603 12:08:39.815685   73662 logs.go:276] 0 containers: []
	W0603 12:08:39.815696   73662 logs.go:278] No container was found matching "kube-scheduler"
	I0603 12:08:39.815703   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 12:08:39.815769   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 12:08:39.849403   73662 cri.go:89] found id: ""
	I0603 12:08:39.849439   73662 logs.go:276] 0 containers: []
	W0603 12:08:39.849449   73662 logs.go:278] No container was found matching "kube-proxy"
	I0603 12:08:39.849456   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 12:08:39.849524   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 12:08:39.884830   73662 cri.go:89] found id: ""
	I0603 12:08:39.884876   73662 logs.go:276] 0 containers: []
	W0603 12:08:39.884887   73662 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 12:08:39.884894   73662 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 12:08:39.884954   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 12:08:39.917820   73662 cri.go:89] found id: ""
	I0603 12:08:39.917853   73662 logs.go:276] 0 containers: []
	W0603 12:08:39.917863   73662 logs.go:278] No container was found matching "kindnet"
	I0603 12:08:39.917871   73662 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 12:08:39.917928   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 12:08:39.955294   73662 cri.go:89] found id: ""
	I0603 12:08:39.955330   73662 logs.go:276] 0 containers: []
	W0603 12:08:39.955340   73662 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 12:08:39.955350   73662 logs.go:123] Gathering logs for container status ...
	I0603 12:08:39.955364   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 12:08:39.997553   73662 logs.go:123] Gathering logs for kubelet ...
	I0603 12:08:39.997577   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 12:08:40.052216   73662 logs.go:123] Gathering logs for dmesg ...
	I0603 12:08:40.052251   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 12:08:40.066377   73662 logs.go:123] Gathering logs for describe nodes ...
	I0603 12:08:40.066405   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 12:08:40.145631   73662 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 12:08:40.145653   73662 logs.go:123] Gathering logs for CRI-O ...
	I0603 12:08:40.145668   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 12:08:42.718782   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:42.732121   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 12:08:42.732197   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 12:08:42.766418   73662 cri.go:89] found id: ""
	I0603 12:08:42.766443   73662 logs.go:276] 0 containers: []
	W0603 12:08:42.766451   73662 logs.go:278] No container was found matching "kube-apiserver"
	I0603 12:08:42.766456   73662 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 12:08:42.766503   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 12:08:42.809790   73662 cri.go:89] found id: ""
	I0603 12:08:42.809821   73662 logs.go:276] 0 containers: []
	W0603 12:08:42.809830   73662 logs.go:278] No container was found matching "etcd"
	I0603 12:08:42.809836   73662 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 12:08:42.809893   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 12:08:42.843410   73662 cri.go:89] found id: ""
	I0603 12:08:42.843439   73662 logs.go:276] 0 containers: []
	W0603 12:08:42.843446   73662 logs.go:278] No container was found matching "coredns"
	I0603 12:08:42.843456   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 12:08:42.843510   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 12:08:42.879150   73662 cri.go:89] found id: ""
	I0603 12:08:42.879177   73662 logs.go:276] 0 containers: []
	W0603 12:08:42.879186   73662 logs.go:278] No container was found matching "kube-scheduler"
	I0603 12:08:42.879193   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 12:08:42.879256   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 12:08:42.914565   73662 cri.go:89] found id: ""
	I0603 12:08:42.914598   73662 logs.go:276] 0 containers: []
	W0603 12:08:42.914609   73662 logs.go:278] No container was found matching "kube-proxy"
	I0603 12:08:42.914616   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 12:08:42.914680   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 12:08:42.949467   73662 cri.go:89] found id: ""
	I0603 12:08:42.949496   73662 logs.go:276] 0 containers: []
	W0603 12:08:42.949506   73662 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 12:08:42.949513   73662 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 12:08:42.949563   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 12:08:42.984235   73662 cri.go:89] found id: ""
	I0603 12:08:42.984257   73662 logs.go:276] 0 containers: []
	W0603 12:08:42.984264   73662 logs.go:278] No container was found matching "kindnet"
	I0603 12:08:42.984269   73662 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 12:08:42.984314   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 12:08:43.027786   73662 cri.go:89] found id: ""
	I0603 12:08:43.027816   73662 logs.go:276] 0 containers: []
	W0603 12:08:43.027827   73662 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 12:08:43.027838   73662 logs.go:123] Gathering logs for kubelet ...
	I0603 12:08:43.027852   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 12:08:43.099184   73662 logs.go:123] Gathering logs for dmesg ...
	I0603 12:08:43.099212   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 12:08:43.124733   73662 logs.go:123] Gathering logs for describe nodes ...
	I0603 12:08:43.124755   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 12:08:43.194716   73662 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 12:08:43.194741   73662 logs.go:123] Gathering logs for CRI-O ...
	I0603 12:08:43.194759   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 12:08:43.275948   73662 logs.go:123] Gathering logs for container status ...
	I0603 12:08:43.275982   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 12:08:45.819178   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:45.832301   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 12:08:45.832391   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 12:08:45.867947   73662 cri.go:89] found id: ""
	I0603 12:08:45.867979   73662 logs.go:276] 0 containers: []
	W0603 12:08:45.867990   73662 logs.go:278] No container was found matching "kube-apiserver"
	I0603 12:08:45.867998   73662 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 12:08:45.868050   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 12:08:45.909498   73662 cri.go:89] found id: ""
	I0603 12:08:45.909529   73662 logs.go:276] 0 containers: []
	W0603 12:08:45.909541   73662 logs.go:278] No container was found matching "etcd"
	I0603 12:08:45.909552   73662 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 12:08:45.909614   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 12:08:45.942313   73662 cri.go:89] found id: ""
	I0603 12:08:45.942343   73662 logs.go:276] 0 containers: []
	W0603 12:08:45.942353   73662 logs.go:278] No container was found matching "coredns"
	I0603 12:08:45.942361   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 12:08:45.942425   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 12:08:45.976217   73662 cri.go:89] found id: ""
	I0603 12:08:45.976246   73662 logs.go:276] 0 containers: []
	W0603 12:08:45.976254   73662 logs.go:278] No container was found matching "kube-scheduler"
	I0603 12:08:45.976260   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 12:08:45.976306   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 12:08:46.010553   73662 cri.go:89] found id: ""
	I0603 12:08:46.010583   73662 logs.go:276] 0 containers: []
	W0603 12:08:46.010593   73662 logs.go:278] No container was found matching "kube-proxy"
	I0603 12:08:46.010599   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 12:08:46.010675   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 12:08:46.048459   73662 cri.go:89] found id: ""
	I0603 12:08:46.048481   73662 logs.go:276] 0 containers: []
	W0603 12:08:46.048489   73662 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 12:08:46.048495   73662 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 12:08:46.048540   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 12:08:46.084823   73662 cri.go:89] found id: ""
	I0603 12:08:46.084852   73662 logs.go:276] 0 containers: []
	W0603 12:08:46.084862   73662 logs.go:278] No container was found matching "kindnet"
	I0603 12:08:46.084869   73662 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 12:08:46.084920   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 12:08:46.129011   73662 cri.go:89] found id: ""
	I0603 12:08:46.129036   73662 logs.go:276] 0 containers: []
	W0603 12:08:46.129046   73662 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 12:08:46.129055   73662 logs.go:123] Gathering logs for dmesg ...
	I0603 12:08:46.129069   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 12:08:46.144145   73662 logs.go:123] Gathering logs for describe nodes ...
	I0603 12:08:46.144179   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 12:08:46.213800   73662 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 12:08:46.213826   73662 logs.go:123] Gathering logs for CRI-O ...
	I0603 12:08:46.213841   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 12:08:46.294423   73662 logs.go:123] Gathering logs for container status ...
	I0603 12:08:46.294453   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 12:08:46.334408   73662 logs.go:123] Gathering logs for kubelet ...
	I0603 12:08:46.334436   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 12:08:48.888798   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:48.901815   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 12:08:48.901876   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 12:08:48.935266   73662 cri.go:89] found id: ""
	I0603 12:08:48.935290   73662 logs.go:276] 0 containers: []
	W0603 12:08:48.935301   73662 logs.go:278] No container was found matching "kube-apiserver"
	I0603 12:08:48.935308   73662 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 12:08:48.935375   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 12:08:48.969640   73662 cri.go:89] found id: ""
	I0603 12:08:48.969666   73662 logs.go:276] 0 containers: []
	W0603 12:08:48.969673   73662 logs.go:278] No container was found matching "etcd"
	I0603 12:08:48.969678   73662 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 12:08:48.969739   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 12:08:49.003697   73662 cri.go:89] found id: ""
	I0603 12:08:49.003725   73662 logs.go:276] 0 containers: []
	W0603 12:08:49.003736   73662 logs.go:278] No container was found matching "coredns"
	I0603 12:08:49.003743   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 12:08:49.003800   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 12:08:49.037808   73662 cri.go:89] found id: ""
	I0603 12:08:49.037837   73662 logs.go:276] 0 containers: []
	W0603 12:08:49.037847   73662 logs.go:278] No container was found matching "kube-scheduler"
	I0603 12:08:49.037879   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 12:08:49.037947   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 12:08:49.071844   73662 cri.go:89] found id: ""
	I0603 12:08:49.071875   73662 logs.go:276] 0 containers: []
	W0603 12:08:49.071885   73662 logs.go:278] No container was found matching "kube-proxy"
	I0603 12:08:49.071892   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 12:08:49.071952   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 12:08:49.107907   73662 cri.go:89] found id: ""
	I0603 12:08:49.107934   73662 logs.go:276] 0 containers: []
	W0603 12:08:49.107945   73662 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 12:08:49.107952   73662 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 12:08:49.108012   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 12:08:49.144847   73662 cri.go:89] found id: ""
	I0603 12:08:49.144869   73662 logs.go:276] 0 containers: []
	W0603 12:08:49.144876   73662 logs.go:278] No container was found matching "kindnet"
	I0603 12:08:49.144882   73662 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 12:08:49.144944   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 12:08:49.183910   73662 cri.go:89] found id: ""
	I0603 12:08:49.183931   73662 logs.go:276] 0 containers: []
	W0603 12:08:49.183940   73662 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 12:08:49.183951   73662 logs.go:123] Gathering logs for kubelet ...
	I0603 12:08:49.183964   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 12:08:49.237344   73662 logs.go:123] Gathering logs for dmesg ...
	I0603 12:08:49.237376   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 12:08:49.251612   73662 logs.go:123] Gathering logs for describe nodes ...
	I0603 12:08:49.251636   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 12:08:49.317211   73662 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 12:08:49.317236   73662 logs.go:123] Gathering logs for CRI-O ...
	I0603 12:08:49.317251   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 12:08:49.394414   73662 logs.go:123] Gathering logs for container status ...
	I0603 12:08:49.394455   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 12:08:51.937686   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:51.950390   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 12:08:51.950466   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 12:08:51.984341   73662 cri.go:89] found id: ""
	I0603 12:08:51.984365   73662 logs.go:276] 0 containers: []
	W0603 12:08:51.984372   73662 logs.go:278] No container was found matching "kube-apiserver"
	I0603 12:08:51.984378   73662 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 12:08:51.984426   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 12:08:52.017828   73662 cri.go:89] found id: ""
	I0603 12:08:52.017857   73662 logs.go:276] 0 containers: []
	W0603 12:08:52.017866   73662 logs.go:278] No container was found matching "etcd"
	I0603 12:08:52.017872   73662 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 12:08:52.017918   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 12:08:52.057283   73662 cri.go:89] found id: ""
	I0603 12:08:52.057314   73662 logs.go:276] 0 containers: []
	W0603 12:08:52.057324   73662 logs.go:278] No container was found matching "coredns"
	I0603 12:08:52.057331   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 12:08:52.057391   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 12:08:52.102270   73662 cri.go:89] found id: ""
	I0603 12:08:52.102303   73662 logs.go:276] 0 containers: []
	W0603 12:08:52.102313   73662 logs.go:278] No container was found matching "kube-scheduler"
	I0603 12:08:52.102321   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 12:08:52.102383   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 12:08:52.137361   73662 cri.go:89] found id: ""
	I0603 12:08:52.137386   73662 logs.go:276] 0 containers: []
	W0603 12:08:52.137393   73662 logs.go:278] No container was found matching "kube-proxy"
	I0603 12:08:52.137399   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 12:08:52.137463   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 12:08:52.171765   73662 cri.go:89] found id: ""
	I0603 12:08:52.171791   73662 logs.go:276] 0 containers: []
	W0603 12:08:52.171800   73662 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 12:08:52.171807   73662 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 12:08:52.171854   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 12:08:52.204688   73662 cri.go:89] found id: ""
	I0603 12:08:52.204715   73662 logs.go:276] 0 containers: []
	W0603 12:08:52.204722   73662 logs.go:278] No container was found matching "kindnet"
	I0603 12:08:52.204728   73662 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 12:08:52.204780   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 12:08:52.242547   73662 cri.go:89] found id: ""
	I0603 12:08:52.242571   73662 logs.go:276] 0 containers: []
	W0603 12:08:52.242579   73662 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 12:08:52.242586   73662 logs.go:123] Gathering logs for CRI-O ...
	I0603 12:08:52.242599   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 12:08:52.319089   73662 logs.go:123] Gathering logs for container status ...
	I0603 12:08:52.319122   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 12:08:52.360879   73662 logs.go:123] Gathering logs for kubelet ...
	I0603 12:08:52.360910   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 12:08:52.413601   73662 logs.go:123] Gathering logs for dmesg ...
	I0603 12:08:52.413641   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 12:08:52.428336   73662 logs.go:123] Gathering logs for describe nodes ...
	I0603 12:08:52.428370   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 12:08:52.500089   73662 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 12:08:55.001244   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:55.015217   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 12:08:55.015286   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 12:08:55.055825   73662 cri.go:89] found id: ""
	I0603 12:08:55.055906   73662 logs.go:276] 0 containers: []
	W0603 12:08:55.055922   73662 logs.go:278] No container was found matching "kube-apiserver"
	I0603 12:08:55.055930   73662 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 12:08:55.055993   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 12:08:55.092456   73662 cri.go:89] found id: ""
	I0603 12:08:55.093688   73662 logs.go:276] 0 containers: []
	W0603 12:08:55.093711   73662 logs.go:278] No container was found matching "etcd"
	I0603 12:08:55.093723   73662 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 12:08:55.093787   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 12:08:55.131165   73662 cri.go:89] found id: ""
	I0603 12:08:55.131193   73662 logs.go:276] 0 containers: []
	W0603 12:08:55.131203   73662 logs.go:278] No container was found matching "coredns"
	I0603 12:08:55.131210   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 12:08:55.131260   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 12:08:55.168170   73662 cri.go:89] found id: ""
	I0603 12:08:55.168188   73662 logs.go:276] 0 containers: []
	W0603 12:08:55.168194   73662 logs.go:278] No container was found matching "kube-scheduler"
	I0603 12:08:55.168200   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 12:08:55.168247   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 12:08:55.203409   73662 cri.go:89] found id: ""
	I0603 12:08:55.203434   73662 logs.go:276] 0 containers: []
	W0603 12:08:55.203441   73662 logs.go:278] No container was found matching "kube-proxy"
	I0603 12:08:55.203446   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 12:08:55.203491   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 12:08:55.239971   73662 cri.go:89] found id: ""
	I0603 12:08:55.239997   73662 logs.go:276] 0 containers: []
	W0603 12:08:55.240009   73662 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 12:08:55.240016   73662 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 12:08:55.240077   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 12:08:55.275115   73662 cri.go:89] found id: ""
	I0603 12:08:55.275144   73662 logs.go:276] 0 containers: []
	W0603 12:08:55.275154   73662 logs.go:278] No container was found matching "kindnet"
	I0603 12:08:55.275162   73662 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 12:08:55.275221   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 12:08:55.309384   73662 cri.go:89] found id: ""
	I0603 12:08:55.309414   73662 logs.go:276] 0 containers: []
	W0603 12:08:55.309425   73662 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 12:08:55.309435   73662 logs.go:123] Gathering logs for dmesg ...
	I0603 12:08:55.309451   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 12:08:55.323455   73662 logs.go:123] Gathering logs for describe nodes ...
	I0603 12:08:55.323485   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 12:08:55.397581   73662 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 12:08:55.397606   73662 logs.go:123] Gathering logs for CRI-O ...
	I0603 12:08:55.397617   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 12:08:55.473046   73662 logs.go:123] Gathering logs for container status ...
	I0603 12:08:55.473079   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 12:08:55.515248   73662 logs.go:123] Gathering logs for kubelet ...
	I0603 12:08:55.515282   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 12:08:58.067416   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:58.081175   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 12:08:58.081241   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 12:08:58.121654   73662 cri.go:89] found id: ""
	I0603 12:08:58.121680   73662 logs.go:276] 0 containers: []
	W0603 12:08:58.121691   73662 logs.go:278] No container was found matching "kube-apiserver"
	I0603 12:08:58.121698   73662 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 12:08:58.121774   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 12:08:58.159599   73662 cri.go:89] found id: ""
	I0603 12:08:58.159623   73662 logs.go:276] 0 containers: []
	W0603 12:08:58.159631   73662 logs.go:278] No container was found matching "etcd"
	I0603 12:08:58.159636   73662 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 12:08:58.159689   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 12:08:58.197518   73662 cri.go:89] found id: ""
	I0603 12:08:58.197545   73662 logs.go:276] 0 containers: []
	W0603 12:08:58.197553   73662 logs.go:278] No container was found matching "coredns"
	I0603 12:08:58.197558   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 12:08:58.197603   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 12:08:58.232433   73662 cri.go:89] found id: ""
	I0603 12:08:58.232463   73662 logs.go:276] 0 containers: []
	W0603 12:08:58.232474   73662 logs.go:278] No container was found matching "kube-scheduler"
	I0603 12:08:58.232479   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 12:08:58.232529   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 12:08:58.268209   73662 cri.go:89] found id: ""
	I0603 12:08:58.268234   73662 logs.go:276] 0 containers: []
	W0603 12:08:58.268242   73662 logs.go:278] No container was found matching "kube-proxy"
	I0603 12:08:58.268248   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 12:08:58.268307   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 12:08:58.302091   73662 cri.go:89] found id: ""
	I0603 12:08:58.302118   73662 logs.go:276] 0 containers: []
	W0603 12:08:58.302129   73662 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 12:08:58.302136   73662 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 12:08:58.302195   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 12:08:58.336539   73662 cri.go:89] found id: ""
	I0603 12:08:58.336567   73662 logs.go:276] 0 containers: []
	W0603 12:08:58.336574   73662 logs.go:278] No container was found matching "kindnet"
	I0603 12:08:58.336579   73662 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 12:08:58.336627   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 12:08:58.369263   73662 cri.go:89] found id: ""
	I0603 12:08:58.369294   73662 logs.go:276] 0 containers: []
	W0603 12:08:58.369305   73662 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 12:08:58.369316   73662 logs.go:123] Gathering logs for container status ...
	I0603 12:08:58.369329   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 12:08:58.408651   73662 logs.go:123] Gathering logs for kubelet ...
	I0603 12:08:58.408683   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 12:08:58.463551   73662 logs.go:123] Gathering logs for dmesg ...
	I0603 12:08:58.463578   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 12:08:58.478781   73662 logs.go:123] Gathering logs for describe nodes ...
	I0603 12:08:58.478808   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 12:08:58.556604   73662 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 12:08:58.556631   73662 logs.go:123] Gathering logs for CRI-O ...
	I0603 12:08:58.556646   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 12:09:01.135368   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:09:01.148448   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 12:09:01.148517   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 12:09:01.184913   73662 cri.go:89] found id: ""
	I0603 12:09:01.184936   73662 logs.go:276] 0 containers: []
	W0603 12:09:01.184947   73662 logs.go:278] No container was found matching "kube-apiserver"
	I0603 12:09:01.184955   73662 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 12:09:01.185017   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 12:09:01.221508   73662 cri.go:89] found id: ""
	I0603 12:09:01.221538   73662 logs.go:276] 0 containers: []
	W0603 12:09:01.221547   73662 logs.go:278] No container was found matching "etcd"
	I0603 12:09:01.221552   73662 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 12:09:01.221613   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 12:09:01.256588   73662 cri.go:89] found id: ""
	I0603 12:09:01.256617   73662 logs.go:276] 0 containers: []
	W0603 12:09:01.256627   73662 logs.go:278] No container was found matching "coredns"
	I0603 12:09:01.256634   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 12:09:01.256696   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 12:09:01.292874   73662 cri.go:89] found id: ""
	I0603 12:09:01.292898   73662 logs.go:276] 0 containers: []
	W0603 12:09:01.292906   73662 logs.go:278] No container was found matching "kube-scheduler"
	I0603 12:09:01.292913   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 12:09:01.292957   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 12:09:01.330607   73662 cri.go:89] found id: ""
	I0603 12:09:01.330636   73662 logs.go:276] 0 containers: []
	W0603 12:09:01.330646   73662 logs.go:278] No container was found matching "kube-proxy"
	I0603 12:09:01.330652   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 12:09:01.330698   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 12:09:01.366053   73662 cri.go:89] found id: ""
	I0603 12:09:01.366090   73662 logs.go:276] 0 containers: []
	W0603 12:09:01.366102   73662 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 12:09:01.366110   73662 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 12:09:01.366168   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 12:09:01.403446   73662 cri.go:89] found id: ""
	I0603 12:09:01.403476   73662 logs.go:276] 0 containers: []
	W0603 12:09:01.403489   73662 logs.go:278] No container was found matching "kindnet"
	I0603 12:09:01.403495   73662 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 12:09:01.403558   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 12:09:01.445413   73662 cri.go:89] found id: ""
	I0603 12:09:01.445444   73662 logs.go:276] 0 containers: []
	W0603 12:09:01.445456   73662 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 12:09:01.445467   73662 logs.go:123] Gathering logs for describe nodes ...
	I0603 12:09:01.445485   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 12:09:01.521804   73662 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 12:09:01.521831   73662 logs.go:123] Gathering logs for CRI-O ...
	I0603 12:09:01.521846   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 12:09:01.601841   73662 logs.go:123] Gathering logs for container status ...
	I0603 12:09:01.601869   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 12:09:01.642642   73662 logs.go:123] Gathering logs for kubelet ...
	I0603 12:09:01.642685   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 12:09:01.700512   73662 logs.go:123] Gathering logs for dmesg ...
	I0603 12:09:01.700547   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 12:09:04.216853   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:09:04.229827   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 12:09:04.229910   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 12:09:04.265194   73662 cri.go:89] found id: ""
	I0603 12:09:04.265223   73662 logs.go:276] 0 containers: []
	W0603 12:09:04.265230   73662 logs.go:278] No container was found matching "kube-apiserver"
	I0603 12:09:04.265235   73662 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 12:09:04.265294   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 12:09:04.301157   73662 cri.go:89] found id: ""
	I0603 12:09:04.301186   73662 logs.go:276] 0 containers: []
	W0603 12:09:04.301193   73662 logs.go:278] No container was found matching "etcd"
	I0603 12:09:04.301199   73662 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 12:09:04.301249   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 12:09:04.335992   73662 cri.go:89] found id: ""
	I0603 12:09:04.336014   73662 logs.go:276] 0 containers: []
	W0603 12:09:04.336024   73662 logs.go:278] No container was found matching "coredns"
	I0603 12:09:04.336031   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 12:09:04.336090   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 12:09:04.371342   73662 cri.go:89] found id: ""
	I0603 12:09:04.371375   73662 logs.go:276] 0 containers: []
	W0603 12:09:04.371386   73662 logs.go:278] No container was found matching "kube-scheduler"
	I0603 12:09:04.371393   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 12:09:04.371452   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 12:09:04.406439   73662 cri.go:89] found id: ""
	I0603 12:09:04.406466   73662 logs.go:276] 0 containers: []
	W0603 12:09:04.406476   73662 logs.go:278] No container was found matching "kube-proxy"
	I0603 12:09:04.406483   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 12:09:04.406540   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 12:09:04.438426   73662 cri.go:89] found id: ""
	I0603 12:09:04.438448   73662 logs.go:276] 0 containers: []
	W0603 12:09:04.438458   73662 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 12:09:04.438467   73662 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 12:09:04.438525   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 12:09:04.471465   73662 cri.go:89] found id: ""
	I0603 12:09:04.471494   73662 logs.go:276] 0 containers: []
	W0603 12:09:04.471504   73662 logs.go:278] No container was found matching "kindnet"
	I0603 12:09:04.471512   73662 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 12:09:04.471576   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 12:09:04.507994   73662 cri.go:89] found id: ""
	I0603 12:09:04.508016   73662 logs.go:276] 0 containers: []
	W0603 12:09:04.508023   73662 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 12:09:04.508031   73662 logs.go:123] Gathering logs for kubelet ...
	I0603 12:09:04.508042   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 12:09:04.558973   73662 logs.go:123] Gathering logs for dmesg ...
	I0603 12:09:04.559007   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 12:09:04.576157   73662 logs.go:123] Gathering logs for describe nodes ...
	I0603 12:09:04.576190   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 12:09:04.653262   73662 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 12:09:04.653282   73662 logs.go:123] Gathering logs for CRI-O ...
	I0603 12:09:04.653293   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 12:09:04.732195   73662 logs.go:123] Gathering logs for container status ...
	I0603 12:09:04.732228   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 12:09:07.282253   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:09:07.296478   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 12:09:07.296549   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 12:09:07.331591   73662 cri.go:89] found id: ""
	I0603 12:09:07.331614   73662 logs.go:276] 0 containers: []
	W0603 12:09:07.331621   73662 logs.go:278] No container was found matching "kube-apiserver"
	I0603 12:09:07.331626   73662 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 12:09:07.331676   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 12:09:07.367333   73662 cri.go:89] found id: ""
	I0603 12:09:07.367356   73662 logs.go:276] 0 containers: []
	W0603 12:09:07.367363   73662 logs.go:278] No container was found matching "etcd"
	I0603 12:09:07.367369   73662 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 12:09:07.367426   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 12:09:07.406446   73662 cri.go:89] found id: ""
	I0603 12:09:07.406471   73662 logs.go:276] 0 containers: []
	W0603 12:09:07.406479   73662 logs.go:278] No container was found matching "coredns"
	I0603 12:09:07.406485   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 12:09:07.406544   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 12:09:07.441610   73662 cri.go:89] found id: ""
	I0603 12:09:07.441632   73662 logs.go:276] 0 containers: []
	W0603 12:09:07.441640   73662 logs.go:278] No container was found matching "kube-scheduler"
	I0603 12:09:07.441646   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 12:09:07.441699   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 12:09:07.476479   73662 cri.go:89] found id: ""
	I0603 12:09:07.476501   73662 logs.go:276] 0 containers: []
	W0603 12:09:07.476508   73662 logs.go:278] No container was found matching "kube-proxy"
	I0603 12:09:07.476513   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 12:09:07.476586   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 12:09:07.513712   73662 cri.go:89] found id: ""
	I0603 12:09:07.513740   73662 logs.go:276] 0 containers: []
	W0603 12:09:07.513750   73662 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 12:09:07.513758   73662 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 12:09:07.513816   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 12:09:07.552169   73662 cri.go:89] found id: ""
	I0603 12:09:07.552195   73662 logs.go:276] 0 containers: []
	W0603 12:09:07.552206   73662 logs.go:278] No container was found matching "kindnet"
	I0603 12:09:07.552213   73662 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 12:09:07.552274   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 12:09:07.591926   73662 cri.go:89] found id: ""
	I0603 12:09:07.591950   73662 logs.go:276] 0 containers: []
	W0603 12:09:07.591956   73662 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 12:09:07.591963   73662 logs.go:123] Gathering logs for describe nodes ...
	I0603 12:09:07.591974   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 12:09:07.672408   73662 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 12:09:07.672429   73662 logs.go:123] Gathering logs for CRI-O ...
	I0603 12:09:07.672440   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 12:09:07.752948   73662 logs.go:123] Gathering logs for container status ...
	I0603 12:09:07.752980   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 12:09:07.791942   73662 logs.go:123] Gathering logs for kubelet ...
	I0603 12:09:07.791975   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 12:09:07.849187   73662 logs.go:123] Gathering logs for dmesg ...
	I0603 12:09:07.849222   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 12:09:10.364466   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:09:10.377895   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 12:09:10.377967   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 12:09:10.412039   73662 cri.go:89] found id: ""
	I0603 12:09:10.412062   73662 logs.go:276] 0 containers: []
	W0603 12:09:10.412070   73662 logs.go:278] No container was found matching "kube-apiserver"
	I0603 12:09:10.412082   73662 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 12:09:10.412137   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 12:09:10.444562   73662 cri.go:89] found id: ""
	I0603 12:09:10.444585   73662 logs.go:276] 0 containers: []
	W0603 12:09:10.444594   73662 logs.go:278] No container was found matching "etcd"
	I0603 12:09:10.444602   73662 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 12:09:10.444657   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 12:09:10.479651   73662 cri.go:89] found id: ""
	I0603 12:09:10.479674   73662 logs.go:276] 0 containers: []
	W0603 12:09:10.479681   73662 logs.go:278] No container was found matching "coredns"
	I0603 12:09:10.479687   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 12:09:10.479742   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 12:09:10.518978   73662 cri.go:89] found id: ""
	I0603 12:09:10.519000   73662 logs.go:276] 0 containers: []
	W0603 12:09:10.519011   73662 logs.go:278] No container was found matching "kube-scheduler"
	I0603 12:09:10.519019   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 12:09:10.519100   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 12:09:10.553848   73662 cri.go:89] found id: ""
	I0603 12:09:10.553873   73662 logs.go:276] 0 containers: []
	W0603 12:09:10.553880   73662 logs.go:278] No container was found matching "kube-proxy"
	I0603 12:09:10.553885   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 12:09:10.553933   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 12:09:10.592081   73662 cri.go:89] found id: ""
	I0603 12:09:10.592107   73662 logs.go:276] 0 containers: []
	W0603 12:09:10.592116   73662 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 12:09:10.592124   73662 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 12:09:10.592176   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 12:09:10.629138   73662 cri.go:89] found id: ""
	I0603 12:09:10.629164   73662 logs.go:276] 0 containers: []
	W0603 12:09:10.629175   73662 logs.go:278] No container was found matching "kindnet"
	I0603 12:09:10.629181   73662 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 12:09:10.629233   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 12:09:10.666660   73662 cri.go:89] found id: ""
	I0603 12:09:10.666686   73662 logs.go:276] 0 containers: []
	W0603 12:09:10.666695   73662 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 12:09:10.666705   73662 logs.go:123] Gathering logs for CRI-O ...
	I0603 12:09:10.666723   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 12:09:10.747856   73662 logs.go:123] Gathering logs for container status ...
	I0603 12:09:10.747892   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 12:09:10.792403   73662 logs.go:123] Gathering logs for kubelet ...
	I0603 12:09:10.792442   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 12:09:10.844484   73662 logs.go:123] Gathering logs for dmesg ...
	I0603 12:09:10.844520   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 12:09:10.857822   73662 logs.go:123] Gathering logs for describe nodes ...
	I0603 12:09:10.857848   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 12:09:10.927434   73662 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 12:09:13.428260   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:09:13.442354   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 12:09:13.442418   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 12:09:13.480908   73662 cri.go:89] found id: ""
	I0603 12:09:13.480938   73662 logs.go:276] 0 containers: []
	W0603 12:09:13.480948   73662 logs.go:278] No container was found matching "kube-apiserver"
	I0603 12:09:13.480953   73662 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 12:09:13.481002   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 12:09:13.513942   73662 cri.go:89] found id: ""
	I0603 12:09:13.513966   73662 logs.go:276] 0 containers: []
	W0603 12:09:13.513979   73662 logs.go:278] No container was found matching "etcd"
	I0603 12:09:13.513985   73662 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 12:09:13.514042   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 12:09:13.548849   73662 cri.go:89] found id: ""
	I0603 12:09:13.548881   73662 logs.go:276] 0 containers: []
	W0603 12:09:13.548892   73662 logs.go:278] No container was found matching "coredns"
	I0603 12:09:13.548900   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 12:09:13.548961   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 12:09:13.587857   73662 cri.go:89] found id: ""
	I0603 12:09:13.587880   73662 logs.go:276] 0 containers: []
	W0603 12:09:13.587887   73662 logs.go:278] No container was found matching "kube-scheduler"
	I0603 12:09:13.587893   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 12:09:13.587941   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 12:09:13.623386   73662 cri.go:89] found id: ""
	I0603 12:09:13.623408   73662 logs.go:276] 0 containers: []
	W0603 12:09:13.623415   73662 logs.go:278] No container was found matching "kube-proxy"
	I0603 12:09:13.623421   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 12:09:13.623473   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 12:09:13.662721   73662 cri.go:89] found id: ""
	I0603 12:09:13.662755   73662 logs.go:276] 0 containers: []
	W0603 12:09:13.662774   73662 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 12:09:13.662782   73662 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 12:09:13.662847   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 12:09:13.697244   73662 cri.go:89] found id: ""
	I0603 12:09:13.697272   73662 logs.go:276] 0 containers: []
	W0603 12:09:13.697279   73662 logs.go:278] No container was found matching "kindnet"
	I0603 12:09:13.697284   73662 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 12:09:13.697342   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 12:09:13.734987   73662 cri.go:89] found id: ""
	I0603 12:09:13.735014   73662 logs.go:276] 0 containers: []
	W0603 12:09:13.735020   73662 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 12:09:13.735030   73662 logs.go:123] Gathering logs for kubelet ...
	I0603 12:09:13.735055   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 12:09:13.792422   73662 logs.go:123] Gathering logs for dmesg ...
	I0603 12:09:13.792463   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 12:09:13.807174   73662 logs.go:123] Gathering logs for describe nodes ...
	I0603 12:09:13.807220   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 12:09:13.880940   73662 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 12:09:13.880962   73662 logs.go:123] Gathering logs for CRI-O ...
	I0603 12:09:13.880976   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 12:09:13.970760   73662 logs.go:123] Gathering logs for container status ...
	I0603 12:09:13.970800   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 12:09:16.519306   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:09:16.534161   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 12:09:16.534213   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 12:09:16.571503   73662 cri.go:89] found id: ""
	I0603 12:09:16.571533   73662 logs.go:276] 0 containers: []
	W0603 12:09:16.571544   73662 logs.go:278] No container was found matching "kube-apiserver"
	I0603 12:09:16.571553   73662 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 12:09:16.571603   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 12:09:16.610388   73662 cri.go:89] found id: ""
	I0603 12:09:16.610425   73662 logs.go:276] 0 containers: []
	W0603 12:09:16.610434   73662 logs.go:278] No container was found matching "etcd"
	I0603 12:09:16.610442   73662 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 12:09:16.610501   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 12:09:16.654132   73662 cri.go:89] found id: ""
	I0603 12:09:16.654173   73662 logs.go:276] 0 containers: []
	W0603 12:09:16.654184   73662 logs.go:278] No container was found matching "coredns"
	I0603 12:09:16.654196   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 12:09:16.654288   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 12:09:16.695091   73662 cri.go:89] found id: ""
	I0603 12:09:16.695120   73662 logs.go:276] 0 containers: []
	W0603 12:09:16.695130   73662 logs.go:278] No container was found matching "kube-scheduler"
	I0603 12:09:16.695137   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 12:09:16.695198   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 12:09:16.729916   73662 cri.go:89] found id: ""
	I0603 12:09:16.729941   73662 logs.go:276] 0 containers: []
	W0603 12:09:16.729950   73662 logs.go:278] No container was found matching "kube-proxy"
	I0603 12:09:16.729958   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 12:09:16.730019   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 12:09:16.763653   73662 cri.go:89] found id: ""
	I0603 12:09:16.763675   73662 logs.go:276] 0 containers: []
	W0603 12:09:16.763683   73662 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 12:09:16.763688   73662 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 12:09:16.763734   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 12:09:16.801834   73662 cri.go:89] found id: ""
	I0603 12:09:16.801867   73662 logs.go:276] 0 containers: []
	W0603 12:09:16.801877   73662 logs.go:278] No container was found matching "kindnet"
	I0603 12:09:16.801885   73662 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 12:09:16.801946   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 12:09:16.836959   73662 cri.go:89] found id: ""
	I0603 12:09:16.836983   73662 logs.go:276] 0 containers: []
	W0603 12:09:16.836995   73662 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 12:09:16.837006   73662 logs.go:123] Gathering logs for dmesg ...
	I0603 12:09:16.837023   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 12:09:16.850264   73662 logs.go:123] Gathering logs for describe nodes ...
	I0603 12:09:16.850294   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 12:09:16.943870   73662 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 12:09:16.943897   73662 logs.go:123] Gathering logs for CRI-O ...
	I0603 12:09:16.943914   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 12:09:17.028230   73662 logs.go:123] Gathering logs for container status ...
	I0603 12:09:17.028269   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 12:09:17.071944   73662 logs.go:123] Gathering logs for kubelet ...
	I0603 12:09:17.071975   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 12:09:19.627246   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:09:19.641441   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 12:09:19.641513   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 12:09:19.680111   73662 cri.go:89] found id: ""
	I0603 12:09:19.680135   73662 logs.go:276] 0 containers: []
	W0603 12:09:19.680144   73662 logs.go:278] No container was found matching "kube-apiserver"
	I0603 12:09:19.680152   73662 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 12:09:19.680210   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 12:09:19.717357   73662 cri.go:89] found id: ""
	I0603 12:09:19.717386   73662 logs.go:276] 0 containers: []
	W0603 12:09:19.717396   73662 logs.go:278] No container was found matching "etcd"
	I0603 12:09:19.717403   73662 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 12:09:19.717467   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 12:09:19.753540   73662 cri.go:89] found id: ""
	I0603 12:09:19.753567   73662 logs.go:276] 0 containers: []
	W0603 12:09:19.753575   73662 logs.go:278] No container was found matching "coredns"
	I0603 12:09:19.753581   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 12:09:19.753627   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 12:09:19.790421   73662 cri.go:89] found id: ""
	I0603 12:09:19.790454   73662 logs.go:276] 0 containers: []
	W0603 12:09:19.790466   73662 logs.go:278] No container was found matching "kube-scheduler"
	I0603 12:09:19.790474   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 12:09:19.790532   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 12:09:19.828908   73662 cri.go:89] found id: ""
	I0603 12:09:19.828932   73662 logs.go:276] 0 containers: []
	W0603 12:09:19.828940   73662 logs.go:278] No container was found matching "kube-proxy"
	I0603 12:09:19.828946   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 12:09:19.829007   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 12:09:19.864576   73662 cri.go:89] found id: ""
	I0603 12:09:19.864609   73662 logs.go:276] 0 containers: []
	W0603 12:09:19.864618   73662 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 12:09:19.864624   73662 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 12:09:19.864679   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 12:09:19.899294   73662 cri.go:89] found id: ""
	I0603 12:09:19.899317   73662 logs.go:276] 0 containers: []
	W0603 12:09:19.899324   73662 logs.go:278] No container was found matching "kindnet"
	I0603 12:09:19.899330   73662 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 12:09:19.899397   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 12:09:19.933855   73662 cri.go:89] found id: ""
	I0603 12:09:19.933883   73662 logs.go:276] 0 containers: []
	W0603 12:09:19.933894   73662 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 12:09:19.933905   73662 logs.go:123] Gathering logs for container status ...
	I0603 12:09:19.933920   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 12:09:19.972676   73662 logs.go:123] Gathering logs for kubelet ...
	I0603 12:09:19.972703   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 12:09:20.025882   73662 logs.go:123] Gathering logs for dmesg ...
	I0603 12:09:20.025913   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 12:09:20.040706   73662 logs.go:123] Gathering logs for describe nodes ...
	I0603 12:09:20.040733   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 12:09:20.115483   73662 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 12:09:20.115506   73662 logs.go:123] Gathering logs for CRI-O ...
	I0603 12:09:20.115521   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 12:09:22.692138   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:09:22.706079   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 12:09:22.706155   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 12:09:22.742755   73662 cri.go:89] found id: ""
	I0603 12:09:22.742776   73662 logs.go:276] 0 containers: []
	W0603 12:09:22.742784   73662 logs.go:278] No container was found matching "kube-apiserver"
	I0603 12:09:22.742789   73662 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 12:09:22.742845   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 12:09:22.779522   73662 cri.go:89] found id: ""
	I0603 12:09:22.779549   73662 logs.go:276] 0 containers: []
	W0603 12:09:22.779557   73662 logs.go:278] No container was found matching "etcd"
	I0603 12:09:22.779563   73662 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 12:09:22.779615   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 12:09:22.813864   73662 cri.go:89] found id: ""
	I0603 12:09:22.813892   73662 logs.go:276] 0 containers: []
	W0603 12:09:22.813902   73662 logs.go:278] No container was found matching "coredns"
	I0603 12:09:22.813909   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 12:09:22.813967   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 12:09:22.848111   73662 cri.go:89] found id: ""
	I0603 12:09:22.848138   73662 logs.go:276] 0 containers: []
	W0603 12:09:22.848149   73662 logs.go:278] No container was found matching "kube-scheduler"
	I0603 12:09:22.848157   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 12:09:22.848213   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 12:09:22.899733   73662 cri.go:89] found id: ""
	I0603 12:09:22.899765   73662 logs.go:276] 0 containers: []
	W0603 12:09:22.899775   73662 logs.go:278] No container was found matching "kube-proxy"
	I0603 12:09:22.899781   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 12:09:22.899846   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 12:09:22.941237   73662 cri.go:89] found id: ""
	I0603 12:09:22.941266   73662 logs.go:276] 0 containers: []
	W0603 12:09:22.941276   73662 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 12:09:22.941282   73662 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 12:09:22.941330   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 12:09:22.981500   73662 cri.go:89] found id: ""
	I0603 12:09:22.981523   73662 logs.go:276] 0 containers: []
	W0603 12:09:22.981531   73662 logs.go:278] No container was found matching "kindnet"
	I0603 12:09:22.981536   73662 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 12:09:22.981580   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 12:09:23.016893   73662 cri.go:89] found id: ""
	I0603 12:09:23.016921   73662 logs.go:276] 0 containers: []
	W0603 12:09:23.016933   73662 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 12:09:23.016943   73662 logs.go:123] Gathering logs for container status ...
	I0603 12:09:23.016958   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 12:09:23.056019   73662 logs.go:123] Gathering logs for kubelet ...
	I0603 12:09:23.056052   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 12:09:23.112565   73662 logs.go:123] Gathering logs for dmesg ...
	I0603 12:09:23.112594   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 12:09:23.127475   73662 logs.go:123] Gathering logs for describe nodes ...
	I0603 12:09:23.127504   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 12:09:23.204939   73662 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 12:09:23.204959   73662 logs.go:123] Gathering logs for CRI-O ...
	I0603 12:09:23.204971   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 12:09:25.781506   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:09:25.794896   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 12:09:25.794971   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 12:09:25.831669   73662 cri.go:89] found id: ""
	I0603 12:09:25.831699   73662 logs.go:276] 0 containers: []
	W0603 12:09:25.831710   73662 logs.go:278] No container was found matching "kube-apiserver"
	I0603 12:09:25.831718   73662 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 12:09:25.831775   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 12:09:25.865198   73662 cri.go:89] found id: ""
	I0603 12:09:25.865224   73662 logs.go:276] 0 containers: []
	W0603 12:09:25.865233   73662 logs.go:278] No container was found matching "etcd"
	I0603 12:09:25.865241   73662 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 12:09:25.865296   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 12:09:25.900280   73662 cri.go:89] found id: ""
	I0603 12:09:25.900316   73662 logs.go:276] 0 containers: []
	W0603 12:09:25.900339   73662 logs.go:278] No container was found matching "coredns"
	I0603 12:09:25.900347   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 12:09:25.900409   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 12:09:25.934727   73662 cri.go:89] found id: ""
	I0603 12:09:25.934759   73662 logs.go:276] 0 containers: []
	W0603 12:09:25.934770   73662 logs.go:278] No container was found matching "kube-scheduler"
	I0603 12:09:25.934778   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 12:09:25.934837   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 12:09:25.970760   73662 cri.go:89] found id: ""
	I0603 12:09:25.970785   73662 logs.go:276] 0 containers: []
	W0603 12:09:25.970795   73662 logs.go:278] No container was found matching "kube-proxy"
	I0603 12:09:25.970800   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 12:09:25.970846   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 12:09:26.005580   73662 cri.go:89] found id: ""
	I0603 12:09:26.005608   73662 logs.go:276] 0 containers: []
	W0603 12:09:26.005617   73662 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 12:09:26.005622   73662 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 12:09:26.005670   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 12:09:26.042168   73662 cri.go:89] found id: ""
	I0603 12:09:26.042192   73662 logs.go:276] 0 containers: []
	W0603 12:09:26.042200   73662 logs.go:278] No container was found matching "kindnet"
	I0603 12:09:26.042206   73662 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 12:09:26.042256   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 12:09:26.081180   73662 cri.go:89] found id: ""
	I0603 12:09:26.081211   73662 logs.go:276] 0 containers: []
	W0603 12:09:26.081226   73662 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 12:09:26.081237   73662 logs.go:123] Gathering logs for describe nodes ...
	I0603 12:09:26.081252   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 12:09:26.156298   73662 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 12:09:26.156320   73662 logs.go:123] Gathering logs for CRI-O ...
	I0603 12:09:26.156333   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 12:09:26.241945   73662 logs.go:123] Gathering logs for container status ...
	I0603 12:09:26.241976   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 12:09:26.282363   73662 logs.go:123] Gathering logs for kubelet ...
	I0603 12:09:26.282391   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 12:09:26.336717   73662 logs.go:123] Gathering logs for dmesg ...
	I0603 12:09:26.336747   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 12:09:28.851601   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:09:28.865866   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 12:09:28.865930   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 12:09:28.901850   73662 cri.go:89] found id: ""
	I0603 12:09:28.901877   73662 logs.go:276] 0 containers: []
	W0603 12:09:28.901884   73662 logs.go:278] No container was found matching "kube-apiserver"
	I0603 12:09:28.901890   73662 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 12:09:28.901953   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 12:09:28.939384   73662 cri.go:89] found id: ""
	I0603 12:09:28.939414   73662 logs.go:276] 0 containers: []
	W0603 12:09:28.939431   73662 logs.go:278] No container was found matching "etcd"
	I0603 12:09:28.939438   73662 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 12:09:28.939501   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 12:09:28.974836   73662 cri.go:89] found id: ""
	I0603 12:09:28.974859   73662 logs.go:276] 0 containers: []
	W0603 12:09:28.974866   73662 logs.go:278] No container was found matching "coredns"
	I0603 12:09:28.974872   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 12:09:28.974929   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 12:09:29.020057   73662 cri.go:89] found id: ""
	I0603 12:09:29.020082   73662 logs.go:276] 0 containers: []
	W0603 12:09:29.020090   73662 logs.go:278] No container was found matching "kube-scheduler"
	I0603 12:09:29.020095   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 12:09:29.020154   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 12:09:29.065836   73662 cri.go:89] found id: ""
	I0603 12:09:29.065868   73662 logs.go:276] 0 containers: []
	W0603 12:09:29.065880   73662 logs.go:278] No container was found matching "kube-proxy"
	I0603 12:09:29.065887   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 12:09:29.065948   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 12:09:29.103326   73662 cri.go:89] found id: ""
	I0603 12:09:29.103352   73662 logs.go:276] 0 containers: []
	W0603 12:09:29.103362   73662 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 12:09:29.103369   73662 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 12:09:29.103432   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 12:09:29.141516   73662 cri.go:89] found id: ""
	I0603 12:09:29.141543   73662 logs.go:276] 0 containers: []
	W0603 12:09:29.141554   73662 logs.go:278] No container was found matching "kindnet"
	I0603 12:09:29.141561   73662 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 12:09:29.141615   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 12:09:29.177881   73662 cri.go:89] found id: ""
	I0603 12:09:29.177906   73662 logs.go:276] 0 containers: []
	W0603 12:09:29.177916   73662 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 12:09:29.177923   73662 logs.go:123] Gathering logs for kubelet ...
	I0603 12:09:29.177934   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 12:09:29.231307   73662 logs.go:123] Gathering logs for dmesg ...
	I0603 12:09:29.231338   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 12:09:29.248629   73662 logs.go:123] Gathering logs for describe nodes ...
	I0603 12:09:29.248676   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 12:09:29.348230   73662 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 12:09:29.348255   73662 logs.go:123] Gathering logs for CRI-O ...
	I0603 12:09:29.348272   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 12:09:29.433016   73662 logs.go:123] Gathering logs for container status ...
	I0603 12:09:29.433049   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 12:09:31.973677   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:09:31.988457   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 12:09:31.988518   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 12:09:32.028424   73662 cri.go:89] found id: ""
	I0603 12:09:32.028450   73662 logs.go:276] 0 containers: []
	W0603 12:09:32.028458   73662 logs.go:278] No container was found matching "kube-apiserver"
	I0603 12:09:32.028464   73662 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 12:09:32.028518   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 12:09:32.069388   73662 cri.go:89] found id: ""
	I0603 12:09:32.069413   73662 logs.go:276] 0 containers: []
	W0603 12:09:32.069421   73662 logs.go:278] No container was found matching "etcd"
	I0603 12:09:32.069427   73662 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 12:09:32.069480   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 12:09:32.106557   73662 cri.go:89] found id: ""
	I0603 12:09:32.106590   73662 logs.go:276] 0 containers: []
	W0603 12:09:32.106601   73662 logs.go:278] No container was found matching "coredns"
	I0603 12:09:32.106608   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 12:09:32.106677   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 12:09:32.142460   73662 cri.go:89] found id: ""
	I0603 12:09:32.142488   73662 logs.go:276] 0 containers: []
	W0603 12:09:32.142499   73662 logs.go:278] No container was found matching "kube-scheduler"
	I0603 12:09:32.142507   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 12:09:32.142560   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 12:09:32.177513   73662 cri.go:89] found id: ""
	I0603 12:09:32.177540   73662 logs.go:276] 0 containers: []
	W0603 12:09:32.177553   73662 logs.go:278] No container was found matching "kube-proxy"
	I0603 12:09:32.177559   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 12:09:32.177620   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 12:09:32.212011   73662 cri.go:89] found id: ""
	I0603 12:09:32.212038   73662 logs.go:276] 0 containers: []
	W0603 12:09:32.212048   73662 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 12:09:32.212055   73662 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 12:09:32.212121   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 12:09:32.247928   73662 cri.go:89] found id: ""
	I0603 12:09:32.247953   73662 logs.go:276] 0 containers: []
	W0603 12:09:32.247960   73662 logs.go:278] No container was found matching "kindnet"
	I0603 12:09:32.247965   73662 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 12:09:32.248020   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 12:09:32.287818   73662 cri.go:89] found id: ""
	I0603 12:09:32.287845   73662 logs.go:276] 0 containers: []
	W0603 12:09:32.287852   73662 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 12:09:32.287859   73662 logs.go:123] Gathering logs for kubelet ...
	I0603 12:09:32.287874   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 12:09:32.340406   73662 logs.go:123] Gathering logs for dmesg ...
	I0603 12:09:32.340439   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 12:09:32.355148   73662 logs.go:123] Gathering logs for describe nodes ...
	I0603 12:09:32.355178   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 12:09:32.429270   73662 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 12:09:32.429299   73662 logs.go:123] Gathering logs for CRI-O ...
	I0603 12:09:32.429314   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 12:09:32.505607   73662 logs.go:123] Gathering logs for container status ...
	I0603 12:09:32.505635   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 12:09:35.044751   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:09:35.067197   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 12:09:35.067273   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 12:09:35.130828   73662 cri.go:89] found id: ""
	I0603 12:09:35.130853   73662 logs.go:276] 0 containers: []
	W0603 12:09:35.130911   73662 logs.go:278] No container was found matching "kube-apiserver"
	I0603 12:09:35.130929   73662 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 12:09:35.130987   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 12:09:35.168321   73662 cri.go:89] found id: ""
	I0603 12:09:35.168348   73662 logs.go:276] 0 containers: []
	W0603 12:09:35.168355   73662 logs.go:278] No container was found matching "etcd"
	I0603 12:09:35.168360   73662 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 12:09:35.168403   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 12:09:35.200918   73662 cri.go:89] found id: ""
	I0603 12:09:35.200942   73662 logs.go:276] 0 containers: []
	W0603 12:09:35.200952   73662 logs.go:278] No container was found matching "coredns"
	I0603 12:09:35.200960   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 12:09:35.201020   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 12:09:35.235667   73662 cri.go:89] found id: ""
	I0603 12:09:35.235694   73662 logs.go:276] 0 containers: []
	W0603 12:09:35.235705   73662 logs.go:278] No container was found matching "kube-scheduler"
	I0603 12:09:35.235713   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 12:09:35.235773   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 12:09:35.269565   73662 cri.go:89] found id: ""
	I0603 12:09:35.269600   73662 logs.go:276] 0 containers: []
	W0603 12:09:35.269608   73662 logs.go:278] No container was found matching "kube-proxy"
	I0603 12:09:35.269613   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 12:09:35.269670   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 12:09:35.304452   73662 cri.go:89] found id: ""
	I0603 12:09:35.304480   73662 logs.go:276] 0 containers: []
	W0603 12:09:35.304488   73662 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 12:09:35.304495   73662 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 12:09:35.304560   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 12:09:35.337756   73662 cri.go:89] found id: ""
	I0603 12:09:35.337782   73662 logs.go:276] 0 containers: []
	W0603 12:09:35.337789   73662 logs.go:278] No container was found matching "kindnet"
	I0603 12:09:35.337794   73662 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 12:09:35.337844   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 12:09:35.374738   73662 cri.go:89] found id: ""
	I0603 12:09:35.374762   73662 logs.go:276] 0 containers: []
	W0603 12:09:35.374773   73662 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 12:09:35.374804   73662 logs.go:123] Gathering logs for dmesg ...
	I0603 12:09:35.374831   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 12:09:35.389588   73662 logs.go:123] Gathering logs for describe nodes ...
	I0603 12:09:35.389618   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 12:09:35.470162   73662 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 12:09:35.470184   73662 logs.go:123] Gathering logs for CRI-O ...
	I0603 12:09:35.470200   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 12:09:35.554518   73662 logs.go:123] Gathering logs for container status ...
	I0603 12:09:35.554560   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 12:09:35.594727   73662 logs.go:123] Gathering logs for kubelet ...
	I0603 12:09:35.594763   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 12:09:38.154151   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:09:38.169099   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 12:09:38.169165   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 12:09:38.205410   73662 cri.go:89] found id: ""
	I0603 12:09:38.205437   73662 logs.go:276] 0 containers: []
	W0603 12:09:38.205444   73662 logs.go:278] No container was found matching "kube-apiserver"
	I0603 12:09:38.205450   73662 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 12:09:38.205502   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 12:09:38.238950   73662 cri.go:89] found id: ""
	I0603 12:09:38.238979   73662 logs.go:276] 0 containers: []
	W0603 12:09:38.238990   73662 logs.go:278] No container was found matching "etcd"
	I0603 12:09:38.238997   73662 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 12:09:38.239072   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 12:09:38.272117   73662 cri.go:89] found id: ""
	I0603 12:09:38.272146   73662 logs.go:276] 0 containers: []
	W0603 12:09:38.272157   73662 logs.go:278] No container was found matching "coredns"
	I0603 12:09:38.272164   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 12:09:38.272232   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 12:09:38.306778   73662 cri.go:89] found id: ""
	I0603 12:09:38.306815   73662 logs.go:276] 0 containers: []
	W0603 12:09:38.306826   73662 logs.go:278] No container was found matching "kube-scheduler"
	I0603 12:09:38.306834   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 12:09:38.306894   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 12:09:38.344438   73662 cri.go:89] found id: ""
	I0603 12:09:38.344464   73662 logs.go:276] 0 containers: []
	W0603 12:09:38.344471   73662 logs.go:278] No container was found matching "kube-proxy"
	I0603 12:09:38.344476   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 12:09:38.344528   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 12:09:38.384347   73662 cri.go:89] found id: ""
	I0603 12:09:38.384373   73662 logs.go:276] 0 containers: []
	W0603 12:09:38.384384   73662 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 12:09:38.384392   73662 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 12:09:38.384440   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 12:09:38.424500   73662 cri.go:89] found id: ""
	I0603 12:09:38.424526   73662 logs.go:276] 0 containers: []
	W0603 12:09:38.424536   73662 logs.go:278] No container was found matching "kindnet"
	I0603 12:09:38.424543   73662 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 12:09:38.424601   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 12:09:38.459649   73662 cri.go:89] found id: ""
	I0603 12:09:38.459678   73662 logs.go:276] 0 containers: []
	W0603 12:09:38.459685   73662 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 12:09:38.459693   73662 logs.go:123] Gathering logs for kubelet ...
	I0603 12:09:38.459705   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 12:09:38.511193   73662 logs.go:123] Gathering logs for dmesg ...
	I0603 12:09:38.511226   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 12:09:38.525367   73662 logs.go:123] Gathering logs for describe nodes ...
	I0603 12:09:38.525394   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 12:09:38.596534   73662 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 12:09:38.596555   73662 logs.go:123] Gathering logs for CRI-O ...
	I0603 12:09:38.596568   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 12:09:38.675204   73662 logs.go:123] Gathering logs for container status ...
	I0603 12:09:38.675233   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 12:09:41.217825   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:09:41.232019   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 12:09:41.232077   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 12:09:41.267920   73662 cri.go:89] found id: ""
	I0603 12:09:41.267944   73662 logs.go:276] 0 containers: []
	W0603 12:09:41.267951   73662 logs.go:278] No container was found matching "kube-apiserver"
	I0603 12:09:41.267956   73662 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 12:09:41.268002   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 12:09:41.306326   73662 cri.go:89] found id: ""
	I0603 12:09:41.306353   73662 logs.go:276] 0 containers: []
	W0603 12:09:41.306364   73662 logs.go:278] No container was found matching "etcd"
	I0603 12:09:41.306371   73662 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 12:09:41.306439   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 12:09:41.339922   73662 cri.go:89] found id: ""
	I0603 12:09:41.339950   73662 logs.go:276] 0 containers: []
	W0603 12:09:41.339960   73662 logs.go:278] No container was found matching "coredns"
	I0603 12:09:41.339968   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 12:09:41.340030   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 12:09:41.374394   73662 cri.go:89] found id: ""
	I0603 12:09:41.374424   73662 logs.go:276] 0 containers: []
	W0603 12:09:41.374432   73662 logs.go:278] No container was found matching "kube-scheduler"
	I0603 12:09:41.374438   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 12:09:41.374490   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 12:09:41.412699   73662 cri.go:89] found id: ""
	I0603 12:09:41.412725   73662 logs.go:276] 0 containers: []
	W0603 12:09:41.412733   73662 logs.go:278] No container was found matching "kube-proxy"
	I0603 12:09:41.412738   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 12:09:41.412792   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 12:09:41.455158   73662 cri.go:89] found id: ""
	I0603 12:09:41.455186   73662 logs.go:276] 0 containers: []
	W0603 12:09:41.455195   73662 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 12:09:41.455201   73662 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 12:09:41.455250   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 12:09:41.493873   73662 cri.go:89] found id: ""
	I0603 12:09:41.493899   73662 logs.go:276] 0 containers: []
	W0603 12:09:41.493907   73662 logs.go:278] No container was found matching "kindnet"
	I0603 12:09:41.493912   73662 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 12:09:41.493961   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 12:09:41.533128   73662 cri.go:89] found id: ""
	I0603 12:09:41.533157   73662 logs.go:276] 0 containers: []
	W0603 12:09:41.533168   73662 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 12:09:41.533179   73662 logs.go:123] Gathering logs for container status ...
	I0603 12:09:41.533192   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 12:09:41.569504   73662 logs.go:123] Gathering logs for kubelet ...
	I0603 12:09:41.569532   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 12:09:41.623155   73662 logs.go:123] Gathering logs for dmesg ...
	I0603 12:09:41.623182   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 12:09:41.637320   73662 logs.go:123] Gathering logs for describe nodes ...
	I0603 12:09:41.637344   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 12:09:41.717063   73662 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 12:09:41.717080   73662 logs.go:123] Gathering logs for CRI-O ...
	I0603 12:09:41.717091   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 12:09:44.301694   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:09:44.317073   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 12:09:44.317128   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 12:09:44.359170   73662 cri.go:89] found id: ""
	I0603 12:09:44.359220   73662 logs.go:276] 0 containers: []
	W0603 12:09:44.359230   73662 logs.go:278] No container was found matching "kube-apiserver"
	I0603 12:09:44.359239   73662 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 12:09:44.359294   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 12:09:44.399820   73662 cri.go:89] found id: ""
	I0603 12:09:44.399844   73662 logs.go:276] 0 containers: []
	W0603 12:09:44.399854   73662 logs.go:278] No container was found matching "etcd"
	I0603 12:09:44.399862   73662 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 12:09:44.399928   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 12:09:44.439447   73662 cri.go:89] found id: ""
	I0603 12:09:44.439474   73662 logs.go:276] 0 containers: []
	W0603 12:09:44.439481   73662 logs.go:278] No container was found matching "coredns"
	I0603 12:09:44.439487   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 12:09:44.439540   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 12:09:44.475880   73662 cri.go:89] found id: ""
	I0603 12:09:44.475906   73662 logs.go:276] 0 containers: []
	W0603 12:09:44.475917   73662 logs.go:278] No container was found matching "kube-scheduler"
	I0603 12:09:44.475922   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 12:09:44.475980   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 12:09:44.511294   73662 cri.go:89] found id: ""
	I0603 12:09:44.511330   73662 logs.go:276] 0 containers: []
	W0603 12:09:44.511341   73662 logs.go:278] No container was found matching "kube-proxy"
	I0603 12:09:44.511348   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 12:09:44.511401   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 12:09:44.547348   73662 cri.go:89] found id: ""
	I0603 12:09:44.547373   73662 logs.go:276] 0 containers: []
	W0603 12:09:44.547380   73662 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 12:09:44.547385   73662 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 12:09:44.547430   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 12:09:44.586452   73662 cri.go:89] found id: ""
	I0603 12:09:44.586476   73662 logs.go:276] 0 containers: []
	W0603 12:09:44.586483   73662 logs.go:278] No container was found matching "kindnet"
	I0603 12:09:44.586488   73662 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 12:09:44.586543   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 12:09:44.625804   73662 cri.go:89] found id: ""
	I0603 12:09:44.625824   73662 logs.go:276] 0 containers: []
	W0603 12:09:44.625831   73662 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 12:09:44.625839   73662 logs.go:123] Gathering logs for kubelet ...
	I0603 12:09:44.625848   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 12:09:44.680963   73662 logs.go:123] Gathering logs for dmesg ...
	I0603 12:09:44.680996   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 12:09:44.695920   73662 logs.go:123] Gathering logs for describe nodes ...
	I0603 12:09:44.695945   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 12:09:44.766704   73662 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 12:09:44.766735   73662 logs.go:123] Gathering logs for CRI-O ...
	I0603 12:09:44.766750   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 12:09:44.849452   73662 logs.go:123] Gathering logs for container status ...
	I0603 12:09:44.849484   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 12:09:47.391851   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:09:47.406886   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 12:09:47.406941   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 12:09:47.441654   73662 cri.go:89] found id: ""
	I0603 12:09:47.441676   73662 logs.go:276] 0 containers: []
	W0603 12:09:47.441686   73662 logs.go:278] No container was found matching "kube-apiserver"
	I0603 12:09:47.441692   73662 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 12:09:47.441739   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 12:09:47.475605   73662 cri.go:89] found id: ""
	I0603 12:09:47.475634   73662 logs.go:276] 0 containers: []
	W0603 12:09:47.475644   73662 logs.go:278] No container was found matching "etcd"
	I0603 12:09:47.475651   73662 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 12:09:47.475707   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 12:09:47.511558   73662 cri.go:89] found id: ""
	I0603 12:09:47.511582   73662 logs.go:276] 0 containers: []
	W0603 12:09:47.511590   73662 logs.go:278] No container was found matching "coredns"
	I0603 12:09:47.511595   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 12:09:47.511653   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 12:09:47.545327   73662 cri.go:89] found id: ""
	I0603 12:09:47.545359   73662 logs.go:276] 0 containers: []
	W0603 12:09:47.545370   73662 logs.go:278] No container was found matching "kube-scheduler"
	I0603 12:09:47.545378   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 12:09:47.545442   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 12:09:47.581846   73662 cri.go:89] found id: ""
	I0603 12:09:47.581875   73662 logs.go:276] 0 containers: []
	W0603 12:09:47.581884   73662 logs.go:278] No container was found matching "kube-proxy"
	I0603 12:09:47.581892   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 12:09:47.581953   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 12:09:47.618872   73662 cri.go:89] found id: ""
	I0603 12:09:47.618893   73662 logs.go:276] 0 containers: []
	W0603 12:09:47.618901   73662 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 12:09:47.618908   73662 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 12:09:47.618964   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 12:09:47.663659   73662 cri.go:89] found id: ""
	I0603 12:09:47.663689   73662 logs.go:276] 0 containers: []
	W0603 12:09:47.663700   73662 logs.go:278] No container was found matching "kindnet"
	I0603 12:09:47.663708   73662 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 12:09:47.663766   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 12:09:47.697189   73662 cri.go:89] found id: ""
	I0603 12:09:47.697217   73662 logs.go:276] 0 containers: []
	W0603 12:09:47.697228   73662 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 12:09:47.697238   73662 logs.go:123] Gathering logs for dmesg ...
	I0603 12:09:47.697254   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 12:09:47.711787   73662 logs.go:123] Gathering logs for describe nodes ...
	I0603 12:09:47.711812   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 12:09:47.784073   73662 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 12:09:47.784095   73662 logs.go:123] Gathering logs for CRI-O ...
	I0603 12:09:47.784106   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 12:09:47.866792   73662 logs.go:123] Gathering logs for container status ...
	I0603 12:09:47.866824   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 12:09:47.907650   73662 logs.go:123] Gathering logs for kubelet ...
	I0603 12:09:47.907701   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 12:09:50.458815   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:09:50.473498   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 12:09:50.473561   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 12:09:50.514762   73662 cri.go:89] found id: ""
	I0603 12:09:50.514788   73662 logs.go:276] 0 containers: []
	W0603 12:09:50.514796   73662 logs.go:278] No container was found matching "kube-apiserver"
	I0603 12:09:50.514801   73662 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 12:09:50.514877   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 12:09:50.548449   73662 cri.go:89] found id: ""
	I0603 12:09:50.548481   73662 logs.go:276] 0 containers: []
	W0603 12:09:50.548492   73662 logs.go:278] No container was found matching "etcd"
	I0603 12:09:50.548498   73662 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 12:09:50.548560   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 12:09:50.584636   73662 cri.go:89] found id: ""
	I0603 12:09:50.584658   73662 logs.go:276] 0 containers: []
	W0603 12:09:50.584665   73662 logs.go:278] No container was found matching "coredns"
	I0603 12:09:50.584671   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 12:09:50.584718   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 12:09:50.619934   73662 cri.go:89] found id: ""
	I0603 12:09:50.619964   73662 logs.go:276] 0 containers: []
	W0603 12:09:50.619974   73662 logs.go:278] No container was found matching "kube-scheduler"
	I0603 12:09:50.619983   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 12:09:50.620041   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 12:09:50.656062   73662 cri.go:89] found id: ""
	I0603 12:09:50.656093   73662 logs.go:276] 0 containers: []
	W0603 12:09:50.656105   73662 logs.go:278] No container was found matching "kube-proxy"
	I0603 12:09:50.656117   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 12:09:50.656166   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 12:09:50.693539   73662 cri.go:89] found id: ""
	I0603 12:09:50.693566   73662 logs.go:276] 0 containers: []
	W0603 12:09:50.693573   73662 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 12:09:50.693582   73662 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 12:09:50.693637   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 12:09:50.727999   73662 cri.go:89] found id: ""
	I0603 12:09:50.728029   73662 logs.go:276] 0 containers: []
	W0603 12:09:50.728049   73662 logs.go:278] No container was found matching "kindnet"
	I0603 12:09:50.728057   73662 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 12:09:50.728118   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 12:09:50.767370   73662 cri.go:89] found id: ""
	I0603 12:09:50.767417   73662 logs.go:276] 0 containers: []
	W0603 12:09:50.767434   73662 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 12:09:50.767444   73662 logs.go:123] Gathering logs for describe nodes ...
	I0603 12:09:50.767460   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 12:09:50.844078   73662 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 12:09:50.844098   73662 logs.go:123] Gathering logs for CRI-O ...
	I0603 12:09:50.844111   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 12:09:50.922082   73662 logs.go:123] Gathering logs for container status ...
	I0603 12:09:50.922119   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 12:09:50.964841   73662 logs.go:123] Gathering logs for kubelet ...
	I0603 12:09:50.964878   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 12:09:51.016783   73662 logs.go:123] Gathering logs for dmesg ...
	I0603 12:09:51.016823   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 12:09:53.533274   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:09:53.547218   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 12:09:53.547272   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 12:09:53.584537   73662 cri.go:89] found id: ""
	I0603 12:09:53.584561   73662 logs.go:276] 0 containers: []
	W0603 12:09:53.584571   73662 logs.go:278] No container was found matching "kube-apiserver"
	I0603 12:09:53.584578   73662 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 12:09:53.584634   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 12:09:53.618652   73662 cri.go:89] found id: ""
	I0603 12:09:53.618678   73662 logs.go:276] 0 containers: []
	W0603 12:09:53.618688   73662 logs.go:278] No container was found matching "etcd"
	I0603 12:09:53.618695   73662 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 12:09:53.618749   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 12:09:53.654094   73662 cri.go:89] found id: ""
	I0603 12:09:53.654120   73662 logs.go:276] 0 containers: []
	W0603 12:09:53.654127   73662 logs.go:278] No container was found matching "coredns"
	I0603 12:09:53.654140   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 12:09:53.654196   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 12:09:53.691381   73662 cri.go:89] found id: ""
	I0603 12:09:53.691409   73662 logs.go:276] 0 containers: []
	W0603 12:09:53.691420   73662 logs.go:278] No container was found matching "kube-scheduler"
	I0603 12:09:53.691428   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 12:09:53.691493   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 12:09:53.728294   73662 cri.go:89] found id: ""
	I0603 12:09:53.728331   73662 logs.go:276] 0 containers: []
	W0603 12:09:53.728341   73662 logs.go:278] No container was found matching "kube-proxy"
	I0603 12:09:53.728349   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 12:09:53.728426   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 12:09:53.764973   73662 cri.go:89] found id: ""
	I0603 12:09:53.765005   73662 logs.go:276] 0 containers: []
	W0603 12:09:53.765016   73662 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 12:09:53.765023   73662 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 12:09:53.765087   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 12:09:53.803694   73662 cri.go:89] found id: ""
	I0603 12:09:53.803717   73662 logs.go:276] 0 containers: []
	W0603 12:09:53.803724   73662 logs.go:278] No container was found matching "kindnet"
	I0603 12:09:53.803729   73662 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 12:09:53.803776   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 12:09:53.841924   73662 cri.go:89] found id: ""
	I0603 12:09:53.841949   73662 logs.go:276] 0 containers: []
	W0603 12:09:53.841957   73662 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 12:09:53.841964   73662 logs.go:123] Gathering logs for kubelet ...
	I0603 12:09:53.841982   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 12:09:53.895701   73662 logs.go:123] Gathering logs for dmesg ...
	I0603 12:09:53.895738   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 12:09:53.909498   73662 logs.go:123] Gathering logs for describe nodes ...
	I0603 12:09:53.909524   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 12:09:53.985195   73662 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 12:09:53.985218   73662 logs.go:123] Gathering logs for CRI-O ...
	I0603 12:09:53.985234   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 12:09:54.065799   73662 logs.go:123] Gathering logs for container status ...
	I0603 12:09:54.065831   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 12:09:56.606887   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:09:56.621376   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 12:09:56.621437   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 12:09:56.660334   73662 cri.go:89] found id: ""
	I0603 12:09:56.660358   73662 logs.go:276] 0 containers: []
	W0603 12:09:56.660368   73662 logs.go:278] No container was found matching "kube-apiserver"
	I0603 12:09:56.660375   73662 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 12:09:56.660434   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 12:09:56.695706   73662 cri.go:89] found id: ""
	I0603 12:09:56.695734   73662 logs.go:276] 0 containers: []
	W0603 12:09:56.695742   73662 logs.go:278] No container was found matching "etcd"
	I0603 12:09:56.695747   73662 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 12:09:56.695791   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 12:09:56.730634   73662 cri.go:89] found id: ""
	I0603 12:09:56.730656   73662 logs.go:276] 0 containers: []
	W0603 12:09:56.730664   73662 logs.go:278] No container was found matching "coredns"
	I0603 12:09:56.730670   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 12:09:56.730715   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 12:09:56.765374   73662 cri.go:89] found id: ""
	I0603 12:09:56.765407   73662 logs.go:276] 0 containers: []
	W0603 12:09:56.765414   73662 logs.go:278] No container was found matching "kube-scheduler"
	I0603 12:09:56.765420   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 12:09:56.765467   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 12:09:56.801230   73662 cri.go:89] found id: ""
	I0603 12:09:56.801254   73662 logs.go:276] 0 containers: []
	W0603 12:09:56.801262   73662 logs.go:278] No container was found matching "kube-proxy"
	I0603 12:09:56.801267   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 12:09:56.801335   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 12:09:56.835988   73662 cri.go:89] found id: ""
	I0603 12:09:56.836015   73662 logs.go:276] 0 containers: []
	W0603 12:09:56.836026   73662 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 12:09:56.836034   73662 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 12:09:56.836093   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 12:09:56.870099   73662 cri.go:89] found id: ""
	I0603 12:09:56.870124   73662 logs.go:276] 0 containers: []
	W0603 12:09:56.870131   73662 logs.go:278] No container was found matching "kindnet"
	I0603 12:09:56.870136   73662 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 12:09:56.870183   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 12:09:56.904755   73662 cri.go:89] found id: ""
	I0603 12:09:56.904780   73662 logs.go:276] 0 containers: []
	W0603 12:09:56.904790   73662 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 12:09:56.904801   73662 logs.go:123] Gathering logs for kubelet ...
	I0603 12:09:56.904812   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 12:09:56.956824   73662 logs.go:123] Gathering logs for dmesg ...
	I0603 12:09:56.956854   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 12:09:56.971675   73662 logs.go:123] Gathering logs for describe nodes ...
	I0603 12:09:56.971702   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 12:09:57.042337   73662 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 12:09:57.042359   73662 logs.go:123] Gathering logs for CRI-O ...
	I0603 12:09:57.042375   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 12:09:57.129450   73662 logs.go:123] Gathering logs for container status ...
	I0603 12:09:57.129480   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 12:09:59.669256   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:09:59.683392   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 12:09:59.683452   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 12:09:59.718035   73662 cri.go:89] found id: ""
	I0603 12:09:59.718062   73662 logs.go:276] 0 containers: []
	W0603 12:09:59.718073   73662 logs.go:278] No container was found matching "kube-apiserver"
	I0603 12:09:59.718081   73662 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 12:09:59.718141   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 12:09:59.756638   73662 cri.go:89] found id: ""
	I0603 12:09:59.756666   73662 logs.go:276] 0 containers: []
	W0603 12:09:59.756678   73662 logs.go:278] No container was found matching "etcd"
	I0603 12:09:59.756686   73662 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 12:09:59.756751   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 12:09:59.794710   73662 cri.go:89] found id: ""
	I0603 12:09:59.794753   73662 logs.go:276] 0 containers: []
	W0603 12:09:59.794764   73662 logs.go:278] No container was found matching "coredns"
	I0603 12:09:59.794771   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 12:09:59.794835   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 12:09:59.829717   73662 cri.go:89] found id: ""
	I0603 12:09:59.829745   73662 logs.go:276] 0 containers: []
	W0603 12:09:59.829755   73662 logs.go:278] No container was found matching "kube-scheduler"
	I0603 12:09:59.829763   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 12:09:59.829819   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 12:09:59.863959   73662 cri.go:89] found id: ""
	I0603 12:09:59.863996   73662 logs.go:276] 0 containers: []
	W0603 12:09:59.864005   73662 logs.go:278] No container was found matching "kube-proxy"
	I0603 12:09:59.864010   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 12:09:59.864070   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 12:09:59.900553   73662 cri.go:89] found id: ""
	I0603 12:09:59.900577   73662 logs.go:276] 0 containers: []
	W0603 12:09:59.900585   73662 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 12:09:59.900590   73662 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 12:09:59.900664   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 12:09:59.935702   73662 cri.go:89] found id: ""
	I0603 12:09:59.935727   73662 logs.go:276] 0 containers: []
	W0603 12:09:59.935735   73662 logs.go:278] No container was found matching "kindnet"
	I0603 12:09:59.935741   73662 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 12:09:59.935800   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 12:09:59.971017   73662 cri.go:89] found id: ""
	I0603 12:09:59.971064   73662 logs.go:276] 0 containers: []
	W0603 12:09:59.971076   73662 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 12:09:59.971086   73662 logs.go:123] Gathering logs for dmesg ...
	I0603 12:09:59.971102   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 12:09:59.985406   73662 logs.go:123] Gathering logs for describe nodes ...
	I0603 12:09:59.985431   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 12:10:00.064341   73662 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 12:10:00.064372   73662 logs.go:123] Gathering logs for CRI-O ...
	I0603 12:10:00.064388   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 12:10:00.152803   73662 logs.go:123] Gathering logs for container status ...
	I0603 12:10:00.152850   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 12:10:00.198301   73662 logs.go:123] Gathering logs for kubelet ...
	I0603 12:10:00.198341   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 12:10:02.749662   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:10:02.762938   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 12:10:02.762999   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 12:10:02.800269   73662 cri.go:89] found id: ""
	I0603 12:10:02.800296   73662 logs.go:276] 0 containers: []
	W0603 12:10:02.800305   73662 logs.go:278] No container was found matching "kube-apiserver"
	I0603 12:10:02.800311   73662 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 12:10:02.800373   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 12:10:02.841326   73662 cri.go:89] found id: ""
	I0603 12:10:02.841350   73662 logs.go:276] 0 containers: []
	W0603 12:10:02.841357   73662 logs.go:278] No container was found matching "etcd"
	I0603 12:10:02.841363   73662 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 12:10:02.841409   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 12:10:02.879309   73662 cri.go:89] found id: ""
	I0603 12:10:02.879343   73662 logs.go:276] 0 containers: []
	W0603 12:10:02.879353   73662 logs.go:278] No container was found matching "coredns"
	I0603 12:10:02.879361   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 12:10:02.879423   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 12:10:02.919666   73662 cri.go:89] found id: ""
	I0603 12:10:02.919695   73662 logs.go:276] 0 containers: []
	W0603 12:10:02.919707   73662 logs.go:278] No container was found matching "kube-scheduler"
	I0603 12:10:02.919714   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 12:10:02.919761   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 12:10:02.954790   73662 cri.go:89] found id: ""
	I0603 12:10:02.954814   73662 logs.go:276] 0 containers: []
	W0603 12:10:02.954822   73662 logs.go:278] No container was found matching "kube-proxy"
	I0603 12:10:02.954827   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 12:10:02.954884   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 12:10:02.994472   73662 cri.go:89] found id: ""
	I0603 12:10:02.994515   73662 logs.go:276] 0 containers: []
	W0603 12:10:02.994527   73662 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 12:10:02.994535   73662 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 12:10:02.994598   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 12:10:03.034482   73662 cri.go:89] found id: ""
	I0603 12:10:03.034509   73662 logs.go:276] 0 containers: []
	W0603 12:10:03.034520   73662 logs.go:278] No container was found matching "kindnet"
	I0603 12:10:03.034526   73662 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 12:10:03.034591   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 12:10:03.072971   73662 cri.go:89] found id: ""
	I0603 12:10:03.073002   73662 logs.go:276] 0 containers: []
	W0603 12:10:03.073011   73662 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 12:10:03.073025   73662 logs.go:123] Gathering logs for dmesg ...
	I0603 12:10:03.073043   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 12:10:03.088043   73662 logs.go:123] Gathering logs for describe nodes ...
	I0603 12:10:03.088074   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 12:10:03.186799   73662 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 12:10:03.186829   73662 logs.go:123] Gathering logs for CRI-O ...
	I0603 12:10:03.186842   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 12:10:03.266685   73662 logs.go:123] Gathering logs for container status ...
	I0603 12:10:03.266724   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 12:10:03.317400   73662 logs.go:123] Gathering logs for kubelet ...
	I0603 12:10:03.317433   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 12:10:05.870335   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:10:05.884377   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 12:10:05.884469   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 12:10:05.924617   73662 cri.go:89] found id: ""
	I0603 12:10:05.924647   73662 logs.go:276] 0 containers: []
	W0603 12:10:05.924659   73662 logs.go:278] No container was found matching "kube-apiserver"
	I0603 12:10:05.924667   73662 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 12:10:05.924724   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 12:10:05.971569   73662 cri.go:89] found id: ""
	I0603 12:10:05.971605   73662 logs.go:276] 0 containers: []
	W0603 12:10:05.971615   73662 logs.go:278] No container was found matching "etcd"
	I0603 12:10:05.971623   73662 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 12:10:05.971683   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 12:10:06.010190   73662 cri.go:89] found id: ""
	I0603 12:10:06.010211   73662 logs.go:276] 0 containers: []
	W0603 12:10:06.010218   73662 logs.go:278] No container was found matching "coredns"
	I0603 12:10:06.010223   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 12:10:06.010270   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 12:10:06.056228   73662 cri.go:89] found id: ""
	I0603 12:10:06.056258   73662 logs.go:276] 0 containers: []
	W0603 12:10:06.056269   73662 logs.go:278] No container was found matching "kube-scheduler"
	I0603 12:10:06.056276   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 12:10:06.056338   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 12:10:06.096139   73662 cri.go:89] found id: ""
	I0603 12:10:06.096171   73662 logs.go:276] 0 containers: []
	W0603 12:10:06.096182   73662 logs.go:278] No container was found matching "kube-proxy"
	I0603 12:10:06.096192   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 12:10:06.096261   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 12:10:06.135290   73662 cri.go:89] found id: ""
	I0603 12:10:06.135327   73662 logs.go:276] 0 containers: []
	W0603 12:10:06.135338   73662 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 12:10:06.135346   73662 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 12:10:06.135412   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 12:10:06.177281   73662 cri.go:89] found id: ""
	I0603 12:10:06.177311   73662 logs.go:276] 0 containers: []
	W0603 12:10:06.177328   73662 logs.go:278] No container was found matching "kindnet"
	I0603 12:10:06.177335   73662 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 12:10:06.177395   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 12:10:06.216791   73662 cri.go:89] found id: ""
	I0603 12:10:06.216823   73662 logs.go:276] 0 containers: []
	W0603 12:10:06.216835   73662 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 12:10:06.216845   73662 logs.go:123] Gathering logs for kubelet ...
	I0603 12:10:06.216874   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 12:10:06.272731   73662 logs.go:123] Gathering logs for dmesg ...
	I0603 12:10:06.272772   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 12:10:06.289080   73662 logs.go:123] Gathering logs for describe nodes ...
	I0603 12:10:06.289118   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 12:10:06.358105   73662 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 12:10:06.358134   73662 logs.go:123] Gathering logs for CRI-O ...
	I0603 12:10:06.358148   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 12:10:06.433071   73662 logs.go:123] Gathering logs for container status ...
	I0603 12:10:06.433107   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 12:10:08.974934   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:10:08.988808   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 12:10:08.988883   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 12:10:09.023595   73662 cri.go:89] found id: ""
	I0603 12:10:09.023620   73662 logs.go:276] 0 containers: []
	W0603 12:10:09.023627   73662 logs.go:278] No container was found matching "kube-apiserver"
	I0603 12:10:09.023633   73662 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 12:10:09.023683   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 12:10:09.060962   73662 cri.go:89] found id: ""
	I0603 12:10:09.060990   73662 logs.go:276] 0 containers: []
	W0603 12:10:09.061000   73662 logs.go:278] No container was found matching "etcd"
	I0603 12:10:09.061006   73662 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 12:10:09.061080   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 12:10:09.099923   73662 cri.go:89] found id: ""
	I0603 12:10:09.099952   73662 logs.go:276] 0 containers: []
	W0603 12:10:09.099961   73662 logs.go:278] No container was found matching "coredns"
	I0603 12:10:09.099970   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 12:10:09.100030   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 12:10:09.138521   73662 cri.go:89] found id: ""
	I0603 12:10:09.138547   73662 logs.go:276] 0 containers: []
	W0603 12:10:09.138555   73662 logs.go:278] No container was found matching "kube-scheduler"
	I0603 12:10:09.138561   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 12:10:09.138614   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 12:10:09.178492   73662 cri.go:89] found id: ""
	I0603 12:10:09.178519   73662 logs.go:276] 0 containers: []
	W0603 12:10:09.178529   73662 logs.go:278] No container was found matching "kube-proxy"
	I0603 12:10:09.178537   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 12:10:09.178603   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 12:10:09.215779   73662 cri.go:89] found id: ""
	I0603 12:10:09.215812   73662 logs.go:276] 0 containers: []
	W0603 12:10:09.215819   73662 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 12:10:09.215832   73662 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 12:10:09.215894   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 12:10:09.250800   73662 cri.go:89] found id: ""
	I0603 12:10:09.250835   73662 logs.go:276] 0 containers: []
	W0603 12:10:09.250845   73662 logs.go:278] No container was found matching "kindnet"
	I0603 12:10:09.250852   73662 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 12:10:09.250913   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 12:10:09.286742   73662 cri.go:89] found id: ""
	I0603 12:10:09.286773   73662 logs.go:276] 0 containers: []
	W0603 12:10:09.286784   73662 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 12:10:09.286794   73662 logs.go:123] Gathering logs for kubelet ...
	I0603 12:10:09.286808   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 12:10:09.341156   73662 logs.go:123] Gathering logs for dmesg ...
	I0603 12:10:09.341189   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 12:10:09.356237   73662 logs.go:123] Gathering logs for describe nodes ...
	I0603 12:10:09.356273   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 12:10:09.436633   73662 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 12:10:09.436654   73662 logs.go:123] Gathering logs for CRI-O ...
	I0603 12:10:09.436666   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 12:10:09.519296   73662 logs.go:123] Gathering logs for container status ...
	I0603 12:10:09.519336   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 12:10:12.090458   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:10:12.105250   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 12:10:12.105324   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 12:10:12.143229   73662 cri.go:89] found id: ""
	I0603 12:10:12.143257   73662 logs.go:276] 0 containers: []
	W0603 12:10:12.143268   73662 logs.go:278] No container was found matching "kube-apiserver"
	I0603 12:10:12.143276   73662 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 12:10:12.143345   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 12:10:12.183319   73662 cri.go:89] found id: ""
	I0603 12:10:12.183343   73662 logs.go:276] 0 containers: []
	W0603 12:10:12.183353   73662 logs.go:278] No container was found matching "etcd"
	I0603 12:10:12.183361   73662 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 12:10:12.183421   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 12:10:12.221154   73662 cri.go:89] found id: ""
	I0603 12:10:12.221178   73662 logs.go:276] 0 containers: []
	W0603 12:10:12.221186   73662 logs.go:278] No container was found matching "coredns"
	I0603 12:10:12.221191   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 12:10:12.221252   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 12:10:12.256387   73662 cri.go:89] found id: ""
	I0603 12:10:12.256417   73662 logs.go:276] 0 containers: []
	W0603 12:10:12.256428   73662 logs.go:278] No container was found matching "kube-scheduler"
	I0603 12:10:12.256436   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 12:10:12.256492   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 12:10:12.298777   73662 cri.go:89] found id: ""
	I0603 12:10:12.298807   73662 logs.go:276] 0 containers: []
	W0603 12:10:12.298817   73662 logs.go:278] No container was found matching "kube-proxy"
	I0603 12:10:12.298825   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 12:10:12.298883   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 12:10:12.337031   73662 cri.go:89] found id: ""
	I0603 12:10:12.337060   73662 logs.go:276] 0 containers: []
	W0603 12:10:12.337070   73662 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 12:10:12.337077   73662 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 12:10:12.337136   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 12:10:12.373729   73662 cri.go:89] found id: ""
	I0603 12:10:12.373759   73662 logs.go:276] 0 containers: []
	W0603 12:10:12.373766   73662 logs.go:278] No container was found matching "kindnet"
	I0603 12:10:12.373772   73662 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 12:10:12.373823   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 12:10:12.408295   73662 cri.go:89] found id: ""
	I0603 12:10:12.408337   73662 logs.go:276] 0 containers: []
	W0603 12:10:12.408346   73662 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 12:10:12.408357   73662 logs.go:123] Gathering logs for kubelet ...
	I0603 12:10:12.408371   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 12:10:12.458814   73662 logs.go:123] Gathering logs for dmesg ...
	I0603 12:10:12.458844   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 12:10:12.471995   73662 logs.go:123] Gathering logs for describe nodes ...
	I0603 12:10:12.472020   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 12:10:12.542342   73662 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 12:10:12.542364   73662 logs.go:123] Gathering logs for CRI-O ...
	I0603 12:10:12.542379   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 12:10:12.620295   73662 logs.go:123] Gathering logs for container status ...
	I0603 12:10:12.620328   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 12:10:15.162145   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:10:15.178057   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 12:10:15.178110   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 12:10:15.217189   73662 cri.go:89] found id: ""
	I0603 12:10:15.217218   73662 logs.go:276] 0 containers: []
	W0603 12:10:15.217228   73662 logs.go:278] No container was found matching "kube-apiserver"
	I0603 12:10:15.217235   73662 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 12:10:15.217291   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 12:10:15.265380   73662 cri.go:89] found id: ""
	I0603 12:10:15.265419   73662 logs.go:276] 0 containers: []
	W0603 12:10:15.265430   73662 logs.go:278] No container was found matching "etcd"
	I0603 12:10:15.265438   73662 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 12:10:15.265500   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 12:10:15.310671   73662 cri.go:89] found id: ""
	I0603 12:10:15.310736   73662 logs.go:276] 0 containers: []
	W0603 12:10:15.310772   73662 logs.go:278] No container was found matching "coredns"
	I0603 12:10:15.310787   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 12:10:15.310884   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 12:10:15.377888   73662 cri.go:89] found id: ""
	I0603 12:10:15.377914   73662 logs.go:276] 0 containers: []
	W0603 12:10:15.377921   73662 logs.go:278] No container was found matching "kube-scheduler"
	I0603 12:10:15.377928   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 12:10:15.377972   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 12:10:15.415472   73662 cri.go:89] found id: ""
	I0603 12:10:15.415502   73662 logs.go:276] 0 containers: []
	W0603 12:10:15.415510   73662 logs.go:278] No container was found matching "kube-proxy"
	I0603 12:10:15.415516   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 12:10:15.415563   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 12:10:15.450721   73662 cri.go:89] found id: ""
	I0603 12:10:15.450748   73662 logs.go:276] 0 containers: []
	W0603 12:10:15.450755   73662 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 12:10:15.450760   73662 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 12:10:15.450814   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 12:10:15.484329   73662 cri.go:89] found id: ""
	I0603 12:10:15.484356   73662 logs.go:276] 0 containers: []
	W0603 12:10:15.484363   73662 logs.go:278] No container was found matching "kindnet"
	I0603 12:10:15.484368   73662 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 12:10:15.484426   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 12:10:15.516976   73662 cri.go:89] found id: ""
	I0603 12:10:15.517005   73662 logs.go:276] 0 containers: []
	W0603 12:10:15.517015   73662 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 12:10:15.517025   73662 logs.go:123] Gathering logs for kubelet ...
	I0603 12:10:15.517038   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 12:10:15.569023   73662 logs.go:123] Gathering logs for dmesg ...
	I0603 12:10:15.569053   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 12:10:15.583710   73662 logs.go:123] Gathering logs for describe nodes ...
	I0603 12:10:15.583737   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 12:10:15.656403   73662 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 12:10:15.656426   73662 logs.go:123] Gathering logs for CRI-O ...
	I0603 12:10:15.656438   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 12:10:15.745585   73662 logs.go:123] Gathering logs for container status ...
	I0603 12:10:15.745619   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 12:10:18.290608   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:10:18.305165   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 12:10:18.305238   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 12:10:18.341905   73662 cri.go:89] found id: ""
	I0603 12:10:18.341929   73662 logs.go:276] 0 containers: []
	W0603 12:10:18.341939   73662 logs.go:278] No container was found matching "kube-apiserver"
	I0603 12:10:18.341945   73662 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 12:10:18.342001   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 12:10:18.378313   73662 cri.go:89] found id: ""
	I0603 12:10:18.378341   73662 logs.go:276] 0 containers: []
	W0603 12:10:18.378348   73662 logs.go:278] No container was found matching "etcd"
	I0603 12:10:18.378354   73662 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 12:10:18.378401   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 12:10:18.413366   73662 cri.go:89] found id: ""
	I0603 12:10:18.413414   73662 logs.go:276] 0 containers: []
	W0603 12:10:18.413424   73662 logs.go:278] No container was found matching "coredns"
	I0603 12:10:18.413432   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 12:10:18.413492   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 12:10:18.448694   73662 cri.go:89] found id: ""
	I0603 12:10:18.448727   73662 logs.go:276] 0 containers: []
	W0603 12:10:18.448738   73662 logs.go:278] No container was found matching "kube-scheduler"
	I0603 12:10:18.448745   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 12:10:18.448802   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 12:10:18.482640   73662 cri.go:89] found id: ""
	I0603 12:10:18.482678   73662 logs.go:276] 0 containers: []
	W0603 12:10:18.482689   73662 logs.go:278] No container was found matching "kube-proxy"
	I0603 12:10:18.482696   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 12:10:18.482757   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 12:10:18.520929   73662 cri.go:89] found id: ""
	I0603 12:10:18.520962   73662 logs.go:276] 0 containers: []
	W0603 12:10:18.520975   73662 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 12:10:18.520983   73662 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 12:10:18.521045   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 12:10:18.558678   73662 cri.go:89] found id: ""
	I0603 12:10:18.558712   73662 logs.go:276] 0 containers: []
	W0603 12:10:18.558723   73662 logs.go:278] No container was found matching "kindnet"
	I0603 12:10:18.558730   73662 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 12:10:18.558788   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 12:10:18.597574   73662 cri.go:89] found id: ""
	I0603 12:10:18.597599   73662 logs.go:276] 0 containers: []
	W0603 12:10:18.597609   73662 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 12:10:18.597619   73662 logs.go:123] Gathering logs for kubelet ...
	I0603 12:10:18.597633   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 12:10:18.652569   73662 logs.go:123] Gathering logs for dmesg ...
	I0603 12:10:18.652596   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 12:10:18.667829   73662 logs.go:123] Gathering logs for describe nodes ...
	I0603 12:10:18.667861   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 12:10:18.740869   73662 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 12:10:18.740888   73662 logs.go:123] Gathering logs for CRI-O ...
	I0603 12:10:18.740899   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 12:10:18.822108   73662 logs.go:123] Gathering logs for container status ...
	I0603 12:10:18.822143   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 12:10:21.363741   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:10:21.377941   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 12:10:21.378011   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 12:10:21.414406   73662 cri.go:89] found id: ""
	I0603 12:10:21.414434   73662 logs.go:276] 0 containers: []
	W0603 12:10:21.414446   73662 logs.go:278] No container was found matching "kube-apiserver"
	I0603 12:10:21.414454   73662 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 12:10:21.414513   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 12:10:21.449028   73662 cri.go:89] found id: ""
	I0603 12:10:21.449065   73662 logs.go:276] 0 containers: []
	W0603 12:10:21.449074   73662 logs.go:278] No container was found matching "etcd"
	I0603 12:10:21.449080   73662 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 12:10:21.449126   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 12:10:21.483017   73662 cri.go:89] found id: ""
	I0603 12:10:21.483052   73662 logs.go:276] 0 containers: []
	W0603 12:10:21.483064   73662 logs.go:278] No container was found matching "coredns"
	I0603 12:10:21.483071   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 12:10:21.483120   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 12:10:21.519195   73662 cri.go:89] found id: ""
	I0603 12:10:21.519227   73662 logs.go:276] 0 containers: []
	W0603 12:10:21.519237   73662 logs.go:278] No container was found matching "kube-scheduler"
	I0603 12:10:21.519245   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 12:10:21.519304   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 12:10:21.556228   73662 cri.go:89] found id: ""
	I0603 12:10:21.556257   73662 logs.go:276] 0 containers: []
	W0603 12:10:21.556270   73662 logs.go:278] No container was found matching "kube-proxy"
	I0603 12:10:21.556276   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 12:10:21.556337   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 12:10:21.594772   73662 cri.go:89] found id: ""
	I0603 12:10:21.594798   73662 logs.go:276] 0 containers: []
	W0603 12:10:21.594808   73662 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 12:10:21.594817   73662 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 12:10:21.594875   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 12:10:21.629808   73662 cri.go:89] found id: ""
	I0603 12:10:21.629830   73662 logs.go:276] 0 containers: []
	W0603 12:10:21.629837   73662 logs.go:278] No container was found matching "kindnet"
	I0603 12:10:21.629843   73662 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 12:10:21.629891   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 12:10:21.675237   73662 cri.go:89] found id: ""
	I0603 12:10:21.675263   73662 logs.go:276] 0 containers: []
	W0603 12:10:21.675272   73662 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 12:10:21.675282   73662 logs.go:123] Gathering logs for kubelet ...
	I0603 12:10:21.675295   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 12:10:21.730416   73662 logs.go:123] Gathering logs for dmesg ...
	I0603 12:10:21.730445   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 12:10:21.744442   73662 logs.go:123] Gathering logs for describe nodes ...
	I0603 12:10:21.744467   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 12:10:21.826282   73662 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 12:10:21.826308   73662 logs.go:123] Gathering logs for CRI-O ...
	I0603 12:10:21.826324   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 12:10:21.911387   73662 logs.go:123] Gathering logs for container status ...
	I0603 12:10:21.911422   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 12:10:24.454912   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:10:24.469992   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 12:10:24.470069   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 12:10:24.509462   73662 cri.go:89] found id: ""
	I0603 12:10:24.509501   73662 logs.go:276] 0 containers: []
	W0603 12:10:24.509516   73662 logs.go:278] No container was found matching "kube-apiserver"
	I0603 12:10:24.509523   73662 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 12:10:24.509588   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 12:10:24.543878   73662 cri.go:89] found id: ""
	I0603 12:10:24.543902   73662 logs.go:276] 0 containers: []
	W0603 12:10:24.543910   73662 logs.go:278] No container was found matching "etcd"
	I0603 12:10:24.543916   73662 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 12:10:24.543969   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 12:10:24.582712   73662 cri.go:89] found id: ""
	I0603 12:10:24.582741   73662 logs.go:276] 0 containers: []
	W0603 12:10:24.582752   73662 logs.go:278] No container was found matching "coredns"
	I0603 12:10:24.582759   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 12:10:24.582824   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 12:10:24.620533   73662 cri.go:89] found id: ""
	I0603 12:10:24.620560   73662 logs.go:276] 0 containers: []
	W0603 12:10:24.620571   73662 logs.go:278] No container was found matching "kube-scheduler"
	I0603 12:10:24.620577   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 12:10:24.620629   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 12:10:24.658750   73662 cri.go:89] found id: ""
	I0603 12:10:24.658774   73662 logs.go:276] 0 containers: []
	W0603 12:10:24.658781   73662 logs.go:278] No container was found matching "kube-proxy"
	I0603 12:10:24.658787   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 12:10:24.658830   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 12:10:24.697870   73662 cri.go:89] found id: ""
	I0603 12:10:24.697898   73662 logs.go:276] 0 containers: []
	W0603 12:10:24.697914   73662 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 12:10:24.697922   73662 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 12:10:24.697982   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 12:10:24.733557   73662 cri.go:89] found id: ""
	I0603 12:10:24.733583   73662 logs.go:276] 0 containers: []
	W0603 12:10:24.733593   73662 logs.go:278] No container was found matching "kindnet"
	I0603 12:10:24.733601   73662 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 12:10:24.733658   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 12:10:24.767874   73662 cri.go:89] found id: ""
	I0603 12:10:24.767901   73662 logs.go:276] 0 containers: []
	W0603 12:10:24.767910   73662 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 12:10:24.767920   73662 logs.go:123] Gathering logs for kubelet ...
	I0603 12:10:24.767934   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 12:10:24.821155   73662 logs.go:123] Gathering logs for dmesg ...
	I0603 12:10:24.821188   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 12:10:24.835506   73662 logs.go:123] Gathering logs for describe nodes ...
	I0603 12:10:24.835533   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 12:10:24.911295   73662 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 12:10:24.911317   73662 logs.go:123] Gathering logs for CRI-O ...
	I0603 12:10:24.911331   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 12:10:24.998831   73662 logs.go:123] Gathering logs for container status ...
	I0603 12:10:24.998870   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 12:10:27.547553   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:10:27.562219   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 12:10:27.562283   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 12:10:27.604320   73662 cri.go:89] found id: ""
	I0603 12:10:27.604354   73662 logs.go:276] 0 containers: []
	W0603 12:10:27.604362   73662 logs.go:278] No container was found matching "kube-apiserver"
	I0603 12:10:27.604368   73662 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 12:10:27.604431   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 12:10:27.645069   73662 cri.go:89] found id: ""
	I0603 12:10:27.645093   73662 logs.go:276] 0 containers: []
	W0603 12:10:27.645100   73662 logs.go:278] No container was found matching "etcd"
	I0603 12:10:27.645105   73662 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 12:10:27.645208   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 12:10:27.682961   73662 cri.go:89] found id: ""
	I0603 12:10:27.682984   73662 logs.go:276] 0 containers: []
	W0603 12:10:27.682992   73662 logs.go:278] No container was found matching "coredns"
	I0603 12:10:27.682997   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 12:10:27.683065   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 12:10:27.716279   73662 cri.go:89] found id: ""
	I0603 12:10:27.716310   73662 logs.go:276] 0 containers: []
	W0603 12:10:27.716321   73662 logs.go:278] No container was found matching "kube-scheduler"
	I0603 12:10:27.716330   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 12:10:27.716405   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 12:10:27.758347   73662 cri.go:89] found id: ""
	I0603 12:10:27.758380   73662 logs.go:276] 0 containers: []
	W0603 12:10:27.758390   73662 logs.go:278] No container was found matching "kube-proxy"
	I0603 12:10:27.758397   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 12:10:27.758446   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 12:10:27.798212   73662 cri.go:89] found id: ""
	I0603 12:10:27.798240   73662 logs.go:276] 0 containers: []
	W0603 12:10:27.798249   73662 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 12:10:27.798258   73662 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 12:10:27.798318   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 12:10:27.831688   73662 cri.go:89] found id: ""
	I0603 12:10:27.831709   73662 logs.go:276] 0 containers: []
	W0603 12:10:27.831716   73662 logs.go:278] No container was found matching "kindnet"
	I0603 12:10:27.831722   73662 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 12:10:27.831776   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 12:10:27.864395   73662 cri.go:89] found id: ""
	I0603 12:10:27.864423   73662 logs.go:276] 0 containers: []
	W0603 12:10:27.864433   73662 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 12:10:27.864444   73662 logs.go:123] Gathering logs for kubelet ...
	I0603 12:10:27.864463   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 12:10:27.915528   73662 logs.go:123] Gathering logs for dmesg ...
	I0603 12:10:27.915556   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 12:10:27.929783   73662 logs.go:123] Gathering logs for describe nodes ...
	I0603 12:10:27.929806   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 12:10:28.005168   73662 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 12:10:28.005245   73662 logs.go:123] Gathering logs for CRI-O ...
	I0603 12:10:28.005267   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 12:10:28.090748   73662 logs.go:123] Gathering logs for container status ...
	I0603 12:10:28.090779   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 12:10:30.631148   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:10:30.645518   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 12:10:30.645590   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 12:10:30.684016   73662 cri.go:89] found id: ""
	I0603 12:10:30.684044   73662 logs.go:276] 0 containers: []
	W0603 12:10:30.684054   73662 logs.go:278] No container was found matching "kube-apiserver"
	I0603 12:10:30.684062   73662 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 12:10:30.684129   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 12:10:30.720344   73662 cri.go:89] found id: ""
	I0603 12:10:30.720371   73662 logs.go:276] 0 containers: []
	W0603 12:10:30.720379   73662 logs.go:278] No container was found matching "etcd"
	I0603 12:10:30.720384   73662 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 12:10:30.720437   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 12:10:30.754123   73662 cri.go:89] found id: ""
	I0603 12:10:30.754156   73662 logs.go:276] 0 containers: []
	W0603 12:10:30.754167   73662 logs.go:278] No container was found matching "coredns"
	I0603 12:10:30.754175   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 12:10:30.754228   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 12:10:30.788398   73662 cri.go:89] found id: ""
	I0603 12:10:30.788425   73662 logs.go:276] 0 containers: []
	W0603 12:10:30.788436   73662 logs.go:278] No container was found matching "kube-scheduler"
	I0603 12:10:30.788455   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 12:10:30.788523   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 12:10:30.826122   73662 cri.go:89] found id: ""
	I0603 12:10:30.826149   73662 logs.go:276] 0 containers: []
	W0603 12:10:30.826157   73662 logs.go:278] No container was found matching "kube-proxy"
	I0603 12:10:30.826163   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 12:10:30.826221   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 12:10:30.862886   73662 cri.go:89] found id: ""
	I0603 12:10:30.862917   73662 logs.go:276] 0 containers: []
	W0603 12:10:30.862930   73662 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 12:10:30.862938   73662 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 12:10:30.862995   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 12:10:30.897587   73662 cri.go:89] found id: ""
	I0603 12:10:30.897616   73662 logs.go:276] 0 containers: []
	W0603 12:10:30.897628   73662 logs.go:278] No container was found matching "kindnet"
	I0603 12:10:30.897635   73662 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 12:10:30.897692   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 12:10:30.936463   73662 cri.go:89] found id: ""
	I0603 12:10:30.936493   73662 logs.go:276] 0 containers: []
	W0603 12:10:30.936510   73662 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 12:10:30.936521   73662 logs.go:123] Gathering logs for kubelet ...
	I0603 12:10:30.936535   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 12:10:30.987304   73662 logs.go:123] Gathering logs for dmesg ...
	I0603 12:10:30.987341   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 12:10:31.001608   73662 logs.go:123] Gathering logs for describe nodes ...
	I0603 12:10:31.001636   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 12:10:31.079366   73662 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 12:10:31.079385   73662 logs.go:123] Gathering logs for CRI-O ...
	I0603 12:10:31.079398   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 12:10:31.158814   73662 logs.go:123] Gathering logs for container status ...
	I0603 12:10:31.158851   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 12:10:33.699524   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:10:33.713194   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 12:10:33.713256   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 12:10:33.747030   73662 cri.go:89] found id: ""
	I0603 12:10:33.747073   73662 logs.go:276] 0 containers: []
	W0603 12:10:33.747084   73662 logs.go:278] No container was found matching "kube-apiserver"
	I0603 12:10:33.747092   73662 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 12:10:33.747151   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 12:10:33.781873   73662 cri.go:89] found id: ""
	I0603 12:10:33.781909   73662 logs.go:276] 0 containers: []
	W0603 12:10:33.781920   73662 logs.go:278] No container was found matching "etcd"
	I0603 12:10:33.781927   73662 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 12:10:33.781992   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 12:10:33.828337   73662 cri.go:89] found id: ""
	I0603 12:10:33.828366   73662 logs.go:276] 0 containers: []
	W0603 12:10:33.828374   73662 logs.go:278] No container was found matching "coredns"
	I0603 12:10:33.828380   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 12:10:33.828433   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 12:10:33.868051   73662 cri.go:89] found id: ""
	I0603 12:10:33.868089   73662 logs.go:276] 0 containers: []
	W0603 12:10:33.868101   73662 logs.go:278] No container was found matching "kube-scheduler"
	I0603 12:10:33.868109   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 12:10:33.868168   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 12:10:33.913693   73662 cri.go:89] found id: ""
	I0603 12:10:33.913725   73662 logs.go:276] 0 containers: []
	W0603 12:10:33.913736   73662 logs.go:278] No container was found matching "kube-proxy"
	I0603 12:10:33.913743   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 12:10:33.913824   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 12:10:33.952082   73662 cri.go:89] found id: ""
	I0603 12:10:33.952111   73662 logs.go:276] 0 containers: []
	W0603 12:10:33.952122   73662 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 12:10:33.952129   73662 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 12:10:33.952183   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 12:10:33.994921   73662 cri.go:89] found id: ""
	I0603 12:10:33.994944   73662 logs.go:276] 0 containers: []
	W0603 12:10:33.994952   73662 logs.go:278] No container was found matching "kindnet"
	I0603 12:10:33.994959   73662 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 12:10:33.995008   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 12:10:34.033315   73662 cri.go:89] found id: ""
	I0603 12:10:34.033346   73662 logs.go:276] 0 containers: []
	W0603 12:10:34.033357   73662 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 12:10:34.033368   73662 logs.go:123] Gathering logs for kubelet ...
	I0603 12:10:34.033381   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 12:10:34.087719   73662 logs.go:123] Gathering logs for dmesg ...
	I0603 12:10:34.087746   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 12:10:34.101109   73662 logs.go:123] Gathering logs for describe nodes ...
	I0603 12:10:34.101134   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 12:10:34.180100   73662 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 12:10:34.180121   73662 logs.go:123] Gathering logs for CRI-O ...
	I0603 12:10:34.180135   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 12:10:34.255838   73662 logs.go:123] Gathering logs for container status ...
	I0603 12:10:34.255870   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 12:10:36.800845   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:10:36.815775   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 12:10:36.815834   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 12:10:36.849970   73662 cri.go:89] found id: ""
	I0603 12:10:36.849999   73662 logs.go:276] 0 containers: []
	W0603 12:10:36.850009   73662 logs.go:278] No container was found matching "kube-apiserver"
	I0603 12:10:36.850015   73662 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 12:10:36.850063   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 12:10:36.886418   73662 cri.go:89] found id: ""
	I0603 12:10:36.886448   73662 logs.go:276] 0 containers: []
	W0603 12:10:36.886456   73662 logs.go:278] No container was found matching "etcd"
	I0603 12:10:36.886461   73662 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 12:10:36.886506   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 12:10:36.919671   73662 cri.go:89] found id: ""
	I0603 12:10:36.919696   73662 logs.go:276] 0 containers: []
	W0603 12:10:36.919703   73662 logs.go:278] No container was found matching "coredns"
	I0603 12:10:36.919710   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 12:10:36.919766   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 12:10:36.954412   73662 cri.go:89] found id: ""
	I0603 12:10:36.954436   73662 logs.go:276] 0 containers: []
	W0603 12:10:36.954446   73662 logs.go:278] No container was found matching "kube-scheduler"
	I0603 12:10:36.954453   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 12:10:36.954513   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 12:10:36.989805   73662 cri.go:89] found id: ""
	I0603 12:10:36.989836   73662 logs.go:276] 0 containers: []
	W0603 12:10:36.989848   73662 logs.go:278] No container was found matching "kube-proxy"
	I0603 12:10:36.989856   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 12:10:36.989930   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 12:10:37.023883   73662 cri.go:89] found id: ""
	I0603 12:10:37.023913   73662 logs.go:276] 0 containers: []
	W0603 12:10:37.023922   73662 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 12:10:37.023930   73662 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 12:10:37.023995   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 12:10:37.058617   73662 cri.go:89] found id: ""
	I0603 12:10:37.058646   73662 logs.go:276] 0 containers: []
	W0603 12:10:37.058654   73662 logs.go:278] No container was found matching "kindnet"
	I0603 12:10:37.058661   73662 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 12:10:37.058719   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 12:10:37.093143   73662 cri.go:89] found id: ""
	I0603 12:10:37.093167   73662 logs.go:276] 0 containers: []
	W0603 12:10:37.093177   73662 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 12:10:37.093192   73662 logs.go:123] Gathering logs for container status ...
	I0603 12:10:37.093208   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 12:10:37.133117   73662 logs.go:123] Gathering logs for kubelet ...
	I0603 12:10:37.133147   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 12:10:37.188143   73662 logs.go:123] Gathering logs for dmesg ...
	I0603 12:10:37.188174   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 12:10:37.202654   73662 logs.go:123] Gathering logs for describe nodes ...
	I0603 12:10:37.202687   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 12:10:37.276401   73662 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 12:10:37.276429   73662 logs.go:123] Gathering logs for CRI-O ...
	I0603 12:10:37.276443   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 12:10:39.855590   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:10:39.870119   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 12:10:39.870189   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 12:10:39.907496   73662 cri.go:89] found id: ""
	I0603 12:10:39.907527   73662 logs.go:276] 0 containers: []
	W0603 12:10:39.907537   73662 logs.go:278] No container was found matching "kube-apiserver"
	I0603 12:10:39.907545   73662 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 12:10:39.907607   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 12:10:39.942745   73662 cri.go:89] found id: ""
	I0603 12:10:39.942774   73662 logs.go:276] 0 containers: []
	W0603 12:10:39.942784   73662 logs.go:278] No container was found matching "etcd"
	I0603 12:10:39.942791   73662 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 12:10:39.942853   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 12:10:39.981620   73662 cri.go:89] found id: ""
	I0603 12:10:39.981649   73662 logs.go:276] 0 containers: []
	W0603 12:10:39.981660   73662 logs.go:278] No container was found matching "coredns"
	I0603 12:10:39.981667   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 12:10:39.981718   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 12:10:40.020121   73662 cri.go:89] found id: ""
	I0603 12:10:40.020155   73662 logs.go:276] 0 containers: []
	W0603 12:10:40.020167   73662 logs.go:278] No container was found matching "kube-scheduler"
	I0603 12:10:40.020175   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 12:10:40.020240   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 12:10:40.059547   73662 cri.go:89] found id: ""
	I0603 12:10:40.059580   73662 logs.go:276] 0 containers: []
	W0603 12:10:40.059591   73662 logs.go:278] No container was found matching "kube-proxy"
	I0603 12:10:40.059598   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 12:10:40.059659   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 12:10:40.097365   73662 cri.go:89] found id: ""
	I0603 12:10:40.097386   73662 logs.go:276] 0 containers: []
	W0603 12:10:40.097393   73662 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 12:10:40.097400   73662 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 12:10:40.097441   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 12:10:40.132635   73662 cri.go:89] found id: ""
	I0603 12:10:40.132657   73662 logs.go:276] 0 containers: []
	W0603 12:10:40.132664   73662 logs.go:278] No container was found matching "kindnet"
	I0603 12:10:40.132670   73662 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 12:10:40.132725   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 12:10:40.165849   73662 cri.go:89] found id: ""
	I0603 12:10:40.165875   73662 logs.go:276] 0 containers: []
	W0603 12:10:40.165885   73662 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 12:10:40.165895   73662 logs.go:123] Gathering logs for kubelet ...
	I0603 12:10:40.165910   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 12:10:40.218842   73662 logs.go:123] Gathering logs for dmesg ...
	I0603 12:10:40.218871   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 12:10:40.232800   73662 logs.go:123] Gathering logs for describe nodes ...
	I0603 12:10:40.232825   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 12:10:40.300026   73662 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 12:10:40.300050   73662 logs.go:123] Gathering logs for CRI-O ...
	I0603 12:10:40.300065   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 12:10:40.376985   73662 logs.go:123] Gathering logs for container status ...
	I0603 12:10:40.377017   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 12:10:42.916093   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:10:42.930099   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 12:10:42.930157   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 12:10:42.965541   73662 cri.go:89] found id: ""
	I0603 12:10:42.965565   73662 logs.go:276] 0 containers: []
	W0603 12:10:42.965575   73662 logs.go:278] No container was found matching "kube-apiserver"
	I0603 12:10:42.965582   73662 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 12:10:42.965639   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 12:10:43.000837   73662 cri.go:89] found id: ""
	I0603 12:10:43.000863   73662 logs.go:276] 0 containers: []
	W0603 12:10:43.000871   73662 logs.go:278] No container was found matching "etcd"
	I0603 12:10:43.000877   73662 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 12:10:43.000930   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 12:10:43.036557   73662 cri.go:89] found id: ""
	I0603 12:10:43.036593   73662 logs.go:276] 0 containers: []
	W0603 12:10:43.036605   73662 logs.go:278] No container was found matching "coredns"
	I0603 12:10:43.036626   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 12:10:43.036695   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 12:10:43.076479   73662 cri.go:89] found id: ""
	I0603 12:10:43.076507   73662 logs.go:276] 0 containers: []
	W0603 12:10:43.076515   73662 logs.go:278] No container was found matching "kube-scheduler"
	I0603 12:10:43.076521   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 12:10:43.076571   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 12:10:43.116301   73662 cri.go:89] found id: ""
	I0603 12:10:43.116328   73662 logs.go:276] 0 containers: []
	W0603 12:10:43.116338   73662 logs.go:278] No container was found matching "kube-proxy"
	I0603 12:10:43.116345   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 12:10:43.116393   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 12:10:43.150538   73662 cri.go:89] found id: ""
	I0603 12:10:43.150576   73662 logs.go:276] 0 containers: []
	W0603 12:10:43.150587   73662 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 12:10:43.150594   73662 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 12:10:43.150662   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 12:10:43.183948   73662 cri.go:89] found id: ""
	I0603 12:10:43.183976   73662 logs.go:276] 0 containers: []
	W0603 12:10:43.183987   73662 logs.go:278] No container was found matching "kindnet"
	I0603 12:10:43.183996   73662 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 12:10:43.184048   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 12:10:43.217610   73662 cri.go:89] found id: ""
	I0603 12:10:43.217636   73662 logs.go:276] 0 containers: []
	W0603 12:10:43.217643   73662 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 12:10:43.217651   73662 logs.go:123] Gathering logs for dmesg ...
	I0603 12:10:43.217669   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 12:10:43.231630   73662 logs.go:123] Gathering logs for describe nodes ...
	I0603 12:10:43.231655   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 12:10:43.298061   73662 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 12:10:43.298079   73662 logs.go:123] Gathering logs for CRI-O ...
	I0603 12:10:43.298092   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 12:10:43.388176   73662 logs.go:123] Gathering logs for container status ...
	I0603 12:10:43.388212   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 12:10:43.426277   73662 logs.go:123] Gathering logs for kubelet ...
	I0603 12:10:43.426303   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 12:10:45.977882   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:10:45.991655   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 12:10:45.991716   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 12:10:46.030455   73662 cri.go:89] found id: ""
	I0603 12:10:46.030483   73662 logs.go:276] 0 containers: []
	W0603 12:10:46.030492   73662 logs.go:278] No container was found matching "kube-apiserver"
	I0603 12:10:46.030497   73662 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 12:10:46.030542   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 12:10:46.065983   73662 cri.go:89] found id: ""
	I0603 12:10:46.066019   73662 logs.go:276] 0 containers: []
	W0603 12:10:46.066028   73662 logs.go:278] No container was found matching "etcd"
	I0603 12:10:46.066037   73662 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 12:10:46.066089   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 12:10:46.102788   73662 cri.go:89] found id: ""
	I0603 12:10:46.102816   73662 logs.go:276] 0 containers: []
	W0603 12:10:46.102824   73662 logs.go:278] No container was found matching "coredns"
	I0603 12:10:46.102830   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 12:10:46.102878   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 12:10:46.141588   73662 cri.go:89] found id: ""
	I0603 12:10:46.141615   73662 logs.go:276] 0 containers: []
	W0603 12:10:46.141625   73662 logs.go:278] No container was found matching "kube-scheduler"
	I0603 12:10:46.141634   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 12:10:46.141686   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 12:10:46.176109   73662 cri.go:89] found id: ""
	I0603 12:10:46.176133   73662 logs.go:276] 0 containers: []
	W0603 12:10:46.176140   73662 logs.go:278] No container was found matching "kube-proxy"
	I0603 12:10:46.176146   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 12:10:46.176199   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 12:10:46.211660   73662 cri.go:89] found id: ""
	I0603 12:10:46.211687   73662 logs.go:276] 0 containers: []
	W0603 12:10:46.211699   73662 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 12:10:46.211706   73662 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 12:10:46.211766   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 12:10:46.247703   73662 cri.go:89] found id: ""
	I0603 12:10:46.247724   73662 logs.go:276] 0 containers: []
	W0603 12:10:46.247731   73662 logs.go:278] No container was found matching "kindnet"
	I0603 12:10:46.247737   73662 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 12:10:46.247780   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 12:10:46.280647   73662 cri.go:89] found id: ""
	I0603 12:10:46.280666   73662 logs.go:276] 0 containers: []
	W0603 12:10:46.280673   73662 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 12:10:46.280681   73662 logs.go:123] Gathering logs for CRI-O ...
	I0603 12:10:46.280692   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 12:10:46.358965   73662 logs.go:123] Gathering logs for container status ...
	I0603 12:10:46.359007   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 12:10:46.402361   73662 logs.go:123] Gathering logs for kubelet ...
	I0603 12:10:46.402393   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 12:10:46.455346   73662 logs.go:123] Gathering logs for dmesg ...
	I0603 12:10:46.455378   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 12:10:46.468953   73662 logs.go:123] Gathering logs for describe nodes ...
	I0603 12:10:46.468979   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 12:10:46.543642   73662 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 12:10:49.044028   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:10:49.059160   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 12:10:49.059237   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 12:10:49.094538   73662 cri.go:89] found id: ""
	I0603 12:10:49.094562   73662 logs.go:276] 0 containers: []
	W0603 12:10:49.094572   73662 logs.go:278] No container was found matching "kube-apiserver"
	I0603 12:10:49.094579   73662 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 12:10:49.094639   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 12:10:49.152691   73662 cri.go:89] found id: ""
	I0603 12:10:49.152718   73662 logs.go:276] 0 containers: []
	W0603 12:10:49.152729   73662 logs.go:278] No container was found matching "etcd"
	I0603 12:10:49.152736   73662 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 12:10:49.152794   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 12:10:49.190598   73662 cri.go:89] found id: ""
	I0603 12:10:49.190624   73662 logs.go:276] 0 containers: []
	W0603 12:10:49.190632   73662 logs.go:278] No container was found matching "coredns"
	I0603 12:10:49.190637   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 12:10:49.190696   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 12:10:49.224713   73662 cri.go:89] found id: ""
	I0603 12:10:49.224735   73662 logs.go:276] 0 containers: []
	W0603 12:10:49.224746   73662 logs.go:278] No container was found matching "kube-scheduler"
	I0603 12:10:49.224752   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 12:10:49.224814   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 12:10:49.261124   73662 cri.go:89] found id: ""
	I0603 12:10:49.261151   73662 logs.go:276] 0 containers: []
	W0603 12:10:49.261159   73662 logs.go:278] No container was found matching "kube-proxy"
	I0603 12:10:49.261164   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 12:10:49.261218   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 12:10:49.297702   73662 cri.go:89] found id: ""
	I0603 12:10:49.297727   73662 logs.go:276] 0 containers: []
	W0603 12:10:49.297734   73662 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 12:10:49.297739   73662 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 12:10:49.297788   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 12:10:49.337168   73662 cri.go:89] found id: ""
	I0603 12:10:49.337194   73662 logs.go:276] 0 containers: []
	W0603 12:10:49.337202   73662 logs.go:278] No container was found matching "kindnet"
	I0603 12:10:49.337208   73662 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 12:10:49.337273   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 12:10:49.378570   73662 cri.go:89] found id: ""
	I0603 12:10:49.378594   73662 logs.go:276] 0 containers: []
	W0603 12:10:49.378602   73662 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 12:10:49.378611   73662 logs.go:123] Gathering logs for kubelet ...
	I0603 12:10:49.378623   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 12:10:49.431727   73662 logs.go:123] Gathering logs for dmesg ...
	I0603 12:10:49.431761   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 12:10:49.446359   73662 logs.go:123] Gathering logs for describe nodes ...
	I0603 12:10:49.446383   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 12:10:49.515520   73662 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 12:10:49.515539   73662 logs.go:123] Gathering logs for CRI-O ...
	I0603 12:10:49.515551   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 12:10:49.600658   73662 logs.go:123] Gathering logs for container status ...
	I0603 12:10:49.600697   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 12:10:52.146131   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:10:52.159370   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 12:10:52.159441   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 12:10:52.200541   73662 cri.go:89] found id: ""
	I0603 12:10:52.200571   73662 logs.go:276] 0 containers: []
	W0603 12:10:52.200578   73662 logs.go:278] No container was found matching "kube-apiserver"
	I0603 12:10:52.200583   73662 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 12:10:52.200643   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 12:10:52.243779   73662 cri.go:89] found id: ""
	I0603 12:10:52.243808   73662 logs.go:276] 0 containers: []
	W0603 12:10:52.243819   73662 logs.go:278] No container was found matching "etcd"
	I0603 12:10:52.243827   73662 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 12:10:52.243885   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 12:10:52.278098   73662 cri.go:89] found id: ""
	I0603 12:10:52.278133   73662 logs.go:276] 0 containers: []
	W0603 12:10:52.278142   73662 logs.go:278] No container was found matching "coredns"
	I0603 12:10:52.278148   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 12:10:52.278201   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 12:10:52.310844   73662 cri.go:89] found id: ""
	I0603 12:10:52.310873   73662 logs.go:276] 0 containers: []
	W0603 12:10:52.310884   73662 logs.go:278] No container was found matching "kube-scheduler"
	I0603 12:10:52.310892   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 12:10:52.310947   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 12:10:52.346131   73662 cri.go:89] found id: ""
	I0603 12:10:52.346160   73662 logs.go:276] 0 containers: []
	W0603 12:10:52.346170   73662 logs.go:278] No container was found matching "kube-proxy"
	I0603 12:10:52.346186   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 12:10:52.346252   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 12:10:52.383384   73662 cri.go:89] found id: ""
	I0603 12:10:52.383412   73662 logs.go:276] 0 containers: []
	W0603 12:10:52.383420   73662 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 12:10:52.383426   73662 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 12:10:52.383472   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 12:10:52.415110   73662 cri.go:89] found id: ""
	I0603 12:10:52.415141   73662 logs.go:276] 0 containers: []
	W0603 12:10:52.415152   73662 logs.go:278] No container was found matching "kindnet"
	I0603 12:10:52.415159   73662 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 12:10:52.415228   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 12:10:52.449473   73662 cri.go:89] found id: ""
	I0603 12:10:52.449503   73662 logs.go:276] 0 containers: []
	W0603 12:10:52.449511   73662 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 12:10:52.449520   73662 logs.go:123] Gathering logs for kubelet ...
	I0603 12:10:52.449535   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 12:10:52.501303   73662 logs.go:123] Gathering logs for dmesg ...
	I0603 12:10:52.501331   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 12:10:52.515125   73662 logs.go:123] Gathering logs for describe nodes ...
	I0603 12:10:52.515155   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 12:10:52.587250   73662 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 12:10:52.587273   73662 logs.go:123] Gathering logs for CRI-O ...
	I0603 12:10:52.587289   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 12:10:52.677387   73662 logs.go:123] Gathering logs for container status ...
	I0603 12:10:52.677417   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 12:10:55.216868   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:10:55.231081   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 12:10:55.231148   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 12:10:55.269023   73662 cri.go:89] found id: ""
	I0603 12:10:55.269060   73662 logs.go:276] 0 containers: []
	W0603 12:10:55.269071   73662 logs.go:278] No container was found matching "kube-apiserver"
	I0603 12:10:55.269078   73662 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 12:10:55.269140   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 12:10:55.304553   73662 cri.go:89] found id: ""
	I0603 12:10:55.304584   73662 logs.go:276] 0 containers: []
	W0603 12:10:55.304594   73662 logs.go:278] No container was found matching "etcd"
	I0603 12:10:55.304602   73662 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 12:10:55.304653   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 12:10:55.337397   73662 cri.go:89] found id: ""
	I0603 12:10:55.337417   73662 logs.go:276] 0 containers: []
	W0603 12:10:55.337426   73662 logs.go:278] No container was found matching "coredns"
	I0603 12:10:55.337431   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 12:10:55.337477   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 12:10:55.378338   73662 cri.go:89] found id: ""
	I0603 12:10:55.378360   73662 logs.go:276] 0 containers: []
	W0603 12:10:55.378369   73662 logs.go:278] No container was found matching "kube-scheduler"
	I0603 12:10:55.378376   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 12:10:55.378434   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 12:10:55.419463   73662 cri.go:89] found id: ""
	I0603 12:10:55.419488   73662 logs.go:276] 0 containers: []
	W0603 12:10:55.419506   73662 logs.go:278] No container was found matching "kube-proxy"
	I0603 12:10:55.419513   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 12:10:55.419570   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 12:10:55.459581   73662 cri.go:89] found id: ""
	I0603 12:10:55.459609   73662 logs.go:276] 0 containers: []
	W0603 12:10:55.459616   73662 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 12:10:55.459622   73662 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 12:10:55.459686   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 12:10:55.496314   73662 cri.go:89] found id: ""
	I0603 12:10:55.496345   73662 logs.go:276] 0 containers: []
	W0603 12:10:55.496355   73662 logs.go:278] No container was found matching "kindnet"
	I0603 12:10:55.496362   73662 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 12:10:55.496412   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 12:10:55.539728   73662 cri.go:89] found id: ""
	I0603 12:10:55.539756   73662 logs.go:276] 0 containers: []
	W0603 12:10:55.539768   73662 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 12:10:55.539779   73662 logs.go:123] Gathering logs for container status ...
	I0603 12:10:55.539794   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 12:10:55.603474   73662 logs.go:123] Gathering logs for kubelet ...
	I0603 12:10:55.603502   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 12:10:55.668368   73662 logs.go:123] Gathering logs for dmesg ...
	I0603 12:10:55.668405   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 12:10:55.683121   73662 logs.go:123] Gathering logs for describe nodes ...
	I0603 12:10:55.683151   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 12:10:55.751059   73662 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 12:10:55.751096   73662 logs.go:123] Gathering logs for CRI-O ...
	I0603 12:10:55.751113   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 12:10:58.325699   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:10:58.340070   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 12:10:58.340142   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 12:10:58.376205   73662 cri.go:89] found id: ""
	I0603 12:10:58.376240   73662 logs.go:276] 0 containers: []
	W0603 12:10:58.376251   73662 logs.go:278] No container was found matching "kube-apiserver"
	I0603 12:10:58.376258   73662 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 12:10:58.376325   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 12:10:58.409491   73662 cri.go:89] found id: ""
	I0603 12:10:58.409521   73662 logs.go:276] 0 containers: []
	W0603 12:10:58.409533   73662 logs.go:278] No container was found matching "etcd"
	I0603 12:10:58.409540   73662 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 12:10:58.409601   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 12:10:58.442738   73662 cri.go:89] found id: ""
	I0603 12:10:58.442768   73662 logs.go:276] 0 containers: []
	W0603 12:10:58.442779   73662 logs.go:278] No container was found matching "coredns"
	I0603 12:10:58.442787   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 12:10:58.442849   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 12:10:58.478390   73662 cri.go:89] found id: ""
	I0603 12:10:58.478417   73662 logs.go:276] 0 containers: []
	W0603 12:10:58.478425   73662 logs.go:278] No container was found matching "kube-scheduler"
	I0603 12:10:58.478430   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 12:10:58.478477   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 12:10:58.513652   73662 cri.go:89] found id: ""
	I0603 12:10:58.513683   73662 logs.go:276] 0 containers: []
	W0603 12:10:58.513694   73662 logs.go:278] No container was found matching "kube-proxy"
	I0603 12:10:58.513702   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 12:10:58.513762   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 12:10:58.546490   73662 cri.go:89] found id: ""
	I0603 12:10:58.546513   73662 logs.go:276] 0 containers: []
	W0603 12:10:58.546526   73662 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 12:10:58.546532   73662 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 12:10:58.546578   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 12:10:58.585772   73662 cri.go:89] found id: ""
	I0603 12:10:58.585796   73662 logs.go:276] 0 containers: []
	W0603 12:10:58.585803   73662 logs.go:278] No container was found matching "kindnet"
	I0603 12:10:58.585809   73662 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 12:10:58.585852   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 12:10:58.623108   73662 cri.go:89] found id: ""
	I0603 12:10:58.623126   73662 logs.go:276] 0 containers: []
	W0603 12:10:58.623133   73662 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 12:10:58.623140   73662 logs.go:123] Gathering logs for dmesg ...
	I0603 12:10:58.623150   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 12:10:58.636866   73662 logs.go:123] Gathering logs for describe nodes ...
	I0603 12:10:58.636892   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 12:10:58.709496   73662 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 12:10:58.709537   73662 logs.go:123] Gathering logs for CRI-O ...
	I0603 12:10:58.709549   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 12:10:58.785370   73662 logs.go:123] Gathering logs for container status ...
	I0603 12:10:58.785401   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 12:10:58.826456   73662 logs.go:123] Gathering logs for kubelet ...
	I0603 12:10:58.826482   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 12:11:01.379144   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:11:01.396357   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 12:11:01.396423   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 12:11:01.459762   73662 cri.go:89] found id: ""
	I0603 12:11:01.459798   73662 logs.go:276] 0 containers: []
	W0603 12:11:01.459809   73662 logs.go:278] No container was found matching "kube-apiserver"
	I0603 12:11:01.459817   73662 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 12:11:01.459877   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 12:11:01.517986   73662 cri.go:89] found id: ""
	I0603 12:11:01.518019   73662 logs.go:276] 0 containers: []
	W0603 12:11:01.518034   73662 logs.go:278] No container was found matching "etcd"
	I0603 12:11:01.518048   73662 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 12:11:01.518111   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 12:11:01.550571   73662 cri.go:89] found id: ""
	I0603 12:11:01.550599   73662 logs.go:276] 0 containers: []
	W0603 12:11:01.550611   73662 logs.go:278] No container was found matching "coredns"
	I0603 12:11:01.550618   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 12:11:01.550670   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 12:11:01.585185   73662 cri.go:89] found id: ""
	I0603 12:11:01.585210   73662 logs.go:276] 0 containers: []
	W0603 12:11:01.585221   73662 logs.go:278] No container was found matching "kube-scheduler"
	I0603 12:11:01.585230   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 12:11:01.585288   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 12:11:01.629706   73662 cri.go:89] found id: ""
	I0603 12:11:01.629734   73662 logs.go:276] 0 containers: []
	W0603 12:11:01.629744   73662 logs.go:278] No container was found matching "kube-proxy"
	I0603 12:11:01.629751   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 12:11:01.629815   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 12:11:01.667272   73662 cri.go:89] found id: ""
	I0603 12:11:01.667310   73662 logs.go:276] 0 containers: []
	W0603 12:11:01.667321   73662 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 12:11:01.667332   73662 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 12:11:01.667390   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 12:11:01.703379   73662 cri.go:89] found id: ""
	I0603 12:11:01.703409   73662 logs.go:276] 0 containers: []
	W0603 12:11:01.703419   73662 logs.go:278] No container was found matching "kindnet"
	I0603 12:11:01.703426   73662 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 12:11:01.703480   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 12:11:01.737944   73662 cri.go:89] found id: ""
	I0603 12:11:01.737972   73662 logs.go:276] 0 containers: []
	W0603 12:11:01.737979   73662 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 12:11:01.737987   73662 logs.go:123] Gathering logs for kubelet ...
	I0603 12:11:01.737997   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 12:11:01.786485   73662 logs.go:123] Gathering logs for dmesg ...
	I0603 12:11:01.786513   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 12:11:01.799760   73662 logs.go:123] Gathering logs for describe nodes ...
	I0603 12:11:01.799783   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 12:11:01.875617   73662 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 12:11:01.875639   73662 logs.go:123] Gathering logs for CRI-O ...
	I0603 12:11:01.875651   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 12:11:01.963485   73662 logs.go:123] Gathering logs for container status ...
	I0603 12:11:01.963529   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 12:11:04.507299   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:11:04.522138   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 12:11:04.522190   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 12:11:04.558117   73662 cri.go:89] found id: ""
	I0603 12:11:04.558145   73662 logs.go:276] 0 containers: []
	W0603 12:11:04.558155   73662 logs.go:278] No container was found matching "kube-apiserver"
	I0603 12:11:04.558162   73662 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 12:11:04.558222   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 12:11:04.595700   73662 cri.go:89] found id: ""
	I0603 12:11:04.595726   73662 logs.go:276] 0 containers: []
	W0603 12:11:04.595737   73662 logs.go:278] No container was found matching "etcd"
	I0603 12:11:04.595748   73662 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 12:11:04.595806   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 12:11:04.631793   73662 cri.go:89] found id: ""
	I0603 12:11:04.631823   73662 logs.go:276] 0 containers: []
	W0603 12:11:04.631832   73662 logs.go:278] No container was found matching "coredns"
	I0603 12:11:04.631839   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 12:11:04.631897   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 12:11:04.666362   73662 cri.go:89] found id: ""
	I0603 12:11:04.666392   73662 logs.go:276] 0 containers: []
	W0603 12:11:04.666401   73662 logs.go:278] No container was found matching "kube-scheduler"
	I0603 12:11:04.666408   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 12:11:04.666471   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 12:11:04.701446   73662 cri.go:89] found id: ""
	I0603 12:11:04.701476   73662 logs.go:276] 0 containers: []
	W0603 12:11:04.701487   73662 logs.go:278] No container was found matching "kube-proxy"
	I0603 12:11:04.701495   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 12:11:04.701555   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 12:11:04.736290   73662 cri.go:89] found id: ""
	I0603 12:11:04.736311   73662 logs.go:276] 0 containers: []
	W0603 12:11:04.736322   73662 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 12:11:04.736330   73662 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 12:11:04.736389   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 12:11:04.769705   73662 cri.go:89] found id: ""
	I0603 12:11:04.769725   73662 logs.go:276] 0 containers: []
	W0603 12:11:04.769732   73662 logs.go:278] No container was found matching "kindnet"
	I0603 12:11:04.769737   73662 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 12:11:04.769779   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 12:11:04.804875   73662 cri.go:89] found id: ""
	I0603 12:11:04.804898   73662 logs.go:276] 0 containers: []
	W0603 12:11:04.804909   73662 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 12:11:04.804927   73662 logs.go:123] Gathering logs for dmesg ...
	I0603 12:11:04.804941   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 12:11:04.818083   73662 logs.go:123] Gathering logs for describe nodes ...
	I0603 12:11:04.818112   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 12:11:04.890971   73662 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 12:11:04.891002   73662 logs.go:123] Gathering logs for CRI-O ...
	I0603 12:11:04.891017   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 12:11:04.970710   73662 logs.go:123] Gathering logs for container status ...
	I0603 12:11:04.970755   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 12:11:05.012247   73662 logs.go:123] Gathering logs for kubelet ...
	I0603 12:11:05.012282   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 12:11:07.567462   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:11:07.583533   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 12:11:07.583628   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 12:11:07.621078   73662 cri.go:89] found id: ""
	I0603 12:11:07.621102   73662 logs.go:276] 0 containers: []
	W0603 12:11:07.621110   73662 logs.go:278] No container was found matching "kube-apiserver"
	I0603 12:11:07.621119   73662 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 12:11:07.621178   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 12:11:07.656011   73662 cri.go:89] found id: ""
	I0603 12:11:07.656040   73662 logs.go:276] 0 containers: []
	W0603 12:11:07.656049   73662 logs.go:278] No container was found matching "etcd"
	I0603 12:11:07.656056   73662 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 12:11:07.656117   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 12:11:07.694711   73662 cri.go:89] found id: ""
	I0603 12:11:07.694741   73662 logs.go:276] 0 containers: []
	W0603 12:11:07.694751   73662 logs.go:278] No container was found matching "coredns"
	I0603 12:11:07.694759   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 12:11:07.694816   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 12:11:07.731139   73662 cri.go:89] found id: ""
	I0603 12:11:07.731168   73662 logs.go:276] 0 containers: []
	W0603 12:11:07.731178   73662 logs.go:278] No container was found matching "kube-scheduler"
	I0603 12:11:07.731185   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 12:11:07.731242   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 12:11:07.769734   73662 cri.go:89] found id: ""
	I0603 12:11:07.769763   73662 logs.go:276] 0 containers: []
	W0603 12:11:07.769772   73662 logs.go:278] No container was found matching "kube-proxy"
	I0603 12:11:07.769780   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 12:11:07.769838   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 12:11:07.804874   73662 cri.go:89] found id: ""
	I0603 12:11:07.804905   73662 logs.go:276] 0 containers: []
	W0603 12:11:07.804917   73662 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 12:11:07.804925   73662 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 12:11:07.804984   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 12:11:07.843901   73662 cri.go:89] found id: ""
	I0603 12:11:07.843931   73662 logs.go:276] 0 containers: []
	W0603 12:11:07.843941   73662 logs.go:278] No container was found matching "kindnet"
	I0603 12:11:07.843949   73662 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 12:11:07.844001   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 12:11:07.878763   73662 cri.go:89] found id: ""
	I0603 12:11:07.878792   73662 logs.go:276] 0 containers: []
	W0603 12:11:07.878803   73662 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 12:11:07.878814   73662 logs.go:123] Gathering logs for CRI-O ...
	I0603 12:11:07.878829   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 12:11:07.958064   73662 logs.go:123] Gathering logs for container status ...
	I0603 12:11:07.958095   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 12:11:08.000115   73662 logs.go:123] Gathering logs for kubelet ...
	I0603 12:11:08.000144   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 12:11:08.057652   73662 logs.go:123] Gathering logs for dmesg ...
	I0603 12:11:08.057685   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 12:11:08.071731   73662 logs.go:123] Gathering logs for describe nodes ...
	I0603 12:11:08.071759   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 12:11:08.148184   73662 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 12:11:10.649338   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:11:10.662870   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 12:11:10.662945   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 12:11:10.698461   73662 cri.go:89] found id: ""
	I0603 12:11:10.698492   73662 logs.go:276] 0 containers: []
	W0603 12:11:10.698500   73662 logs.go:278] No container was found matching "kube-apiserver"
	I0603 12:11:10.698507   73662 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 12:11:10.698560   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 12:11:10.733955   73662 cri.go:89] found id: ""
	I0603 12:11:10.733987   73662 logs.go:276] 0 containers: []
	W0603 12:11:10.733999   73662 logs.go:278] No container was found matching "etcd"
	I0603 12:11:10.734006   73662 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 12:11:10.734064   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 12:11:10.769578   73662 cri.go:89] found id: ""
	I0603 12:11:10.769605   73662 logs.go:276] 0 containers: []
	W0603 12:11:10.769615   73662 logs.go:278] No container was found matching "coredns"
	I0603 12:11:10.769622   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 12:11:10.769682   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 12:11:10.803353   73662 cri.go:89] found id: ""
	I0603 12:11:10.803382   73662 logs.go:276] 0 containers: []
	W0603 12:11:10.803393   73662 logs.go:278] No container was found matching "kube-scheduler"
	I0603 12:11:10.803401   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 12:11:10.803459   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 12:11:10.839791   73662 cri.go:89] found id: ""
	I0603 12:11:10.839819   73662 logs.go:276] 0 containers: []
	W0603 12:11:10.839828   73662 logs.go:278] No container was found matching "kube-proxy"
	I0603 12:11:10.839835   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 12:11:10.839894   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 12:11:10.878216   73662 cri.go:89] found id: ""
	I0603 12:11:10.878249   73662 logs.go:276] 0 containers: []
	W0603 12:11:10.878259   73662 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 12:11:10.878265   73662 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 12:11:10.878333   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 12:11:10.912606   73662 cri.go:89] found id: ""
	I0603 12:11:10.912637   73662 logs.go:276] 0 containers: []
	W0603 12:11:10.912645   73662 logs.go:278] No container was found matching "kindnet"
	I0603 12:11:10.912650   73662 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 12:11:10.912709   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 12:11:10.946669   73662 cri.go:89] found id: ""
	I0603 12:11:10.946699   73662 logs.go:276] 0 containers: []
	W0603 12:11:10.946708   73662 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 12:11:10.946718   73662 logs.go:123] Gathering logs for kubelet ...
	I0603 12:11:10.946733   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 12:11:10.996044   73662 logs.go:123] Gathering logs for dmesg ...
	I0603 12:11:10.996077   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 12:11:11.009522   73662 logs.go:123] Gathering logs for describe nodes ...
	I0603 12:11:11.009573   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 12:11:11.081623   73662 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 12:11:11.081642   73662 logs.go:123] Gathering logs for CRI-O ...
	I0603 12:11:11.081652   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 12:11:11.162795   73662 logs.go:123] Gathering logs for container status ...
	I0603 12:11:11.162826   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 12:11:13.704492   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:11:13.718870   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 12:11:13.718939   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 12:11:13.757818   73662 cri.go:89] found id: ""
	I0603 12:11:13.757842   73662 logs.go:276] 0 containers: []
	W0603 12:11:13.757850   73662 logs.go:278] No container was found matching "kube-apiserver"
	I0603 12:11:13.757859   73662 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 12:11:13.757904   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 12:11:13.791959   73662 cri.go:89] found id: ""
	I0603 12:11:13.791989   73662 logs.go:276] 0 containers: []
	W0603 12:11:13.792003   73662 logs.go:278] No container was found matching "etcd"
	I0603 12:11:13.792010   73662 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 12:11:13.792072   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 12:11:13.827443   73662 cri.go:89] found id: ""
	I0603 12:11:13.827471   73662 logs.go:276] 0 containers: []
	W0603 12:11:13.827478   73662 logs.go:278] No container was found matching "coredns"
	I0603 12:11:13.827484   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 12:11:13.827538   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 12:11:13.862237   73662 cri.go:89] found id: ""
	I0603 12:11:13.862267   73662 logs.go:276] 0 containers: []
	W0603 12:11:13.862277   73662 logs.go:278] No container was found matching "kube-scheduler"
	I0603 12:11:13.862284   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 12:11:13.862375   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 12:11:13.898873   73662 cri.go:89] found id: ""
	I0603 12:11:13.898906   73662 logs.go:276] 0 containers: []
	W0603 12:11:13.898917   73662 logs.go:278] No container was found matching "kube-proxy"
	I0603 12:11:13.898924   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 12:11:13.898981   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 12:11:13.932870   73662 cri.go:89] found id: ""
	I0603 12:11:13.932899   73662 logs.go:276] 0 containers: []
	W0603 12:11:13.932908   73662 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 12:11:13.932913   73662 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 12:11:13.932960   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 12:11:13.968575   73662 cri.go:89] found id: ""
	I0603 12:11:13.968597   73662 logs.go:276] 0 containers: []
	W0603 12:11:13.968605   73662 logs.go:278] No container was found matching "kindnet"
	I0603 12:11:13.968610   73662 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 12:11:13.968663   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 12:11:14.007252   73662 cri.go:89] found id: ""
	I0603 12:11:14.007281   73662 logs.go:276] 0 containers: []
	W0603 12:11:14.007291   73662 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 12:11:14.007302   73662 logs.go:123] Gathering logs for describe nodes ...
	I0603 12:11:14.007317   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 12:11:14.080572   73662 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 12:11:14.080595   73662 logs.go:123] Gathering logs for CRI-O ...
	I0603 12:11:14.080607   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 12:11:14.171851   73662 logs.go:123] Gathering logs for container status ...
	I0603 12:11:14.171886   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 12:11:14.212697   73662 logs.go:123] Gathering logs for kubelet ...
	I0603 12:11:14.212726   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 12:11:14.264925   73662 logs.go:123] Gathering logs for dmesg ...
	I0603 12:11:14.264958   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 12:11:16.780783   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:11:16.795029   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 12:11:16.795127   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 12:11:16.833178   73662 cri.go:89] found id: ""
	I0603 12:11:16.833208   73662 logs.go:276] 0 containers: []
	W0603 12:11:16.833218   73662 logs.go:278] No container was found matching "kube-apiserver"
	I0603 12:11:16.833226   73662 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 12:11:16.833287   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 12:11:16.869318   73662 cri.go:89] found id: ""
	I0603 12:11:16.869349   73662 logs.go:276] 0 containers: []
	W0603 12:11:16.869359   73662 logs.go:278] No container was found matching "etcd"
	I0603 12:11:16.869366   73662 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 12:11:16.869429   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 12:11:16.902810   73662 cri.go:89] found id: ""
	I0603 12:11:16.902836   73662 logs.go:276] 0 containers: []
	W0603 12:11:16.902843   73662 logs.go:278] No container was found matching "coredns"
	I0603 12:11:16.902849   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 12:11:16.902893   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 12:11:16.936404   73662 cri.go:89] found id: ""
	I0603 12:11:16.936432   73662 logs.go:276] 0 containers: []
	W0603 12:11:16.936442   73662 logs.go:278] No container was found matching "kube-scheduler"
	I0603 12:11:16.936449   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 12:11:16.936505   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 12:11:16.971056   73662 cri.go:89] found id: ""
	I0603 12:11:16.971083   73662 logs.go:276] 0 containers: []
	W0603 12:11:16.971092   73662 logs.go:278] No container was found matching "kube-proxy"
	I0603 12:11:16.971097   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 12:11:16.971147   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 12:11:17.005389   73662 cri.go:89] found id: ""
	I0603 12:11:17.005416   73662 logs.go:276] 0 containers: []
	W0603 12:11:17.005427   73662 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 12:11:17.005435   73662 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 12:11:17.005491   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 12:11:17.047093   73662 cri.go:89] found id: ""
	I0603 12:11:17.047118   73662 logs.go:276] 0 containers: []
	W0603 12:11:17.047126   73662 logs.go:278] No container was found matching "kindnet"
	I0603 12:11:17.047131   73662 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 12:11:17.047187   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 12:11:17.093020   73662 cri.go:89] found id: ""
	I0603 12:11:17.093049   73662 logs.go:276] 0 containers: []
	W0603 12:11:17.093057   73662 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 12:11:17.093068   73662 logs.go:123] Gathering logs for CRI-O ...
	I0603 12:11:17.093081   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 12:11:17.177970   73662 logs.go:123] Gathering logs for container status ...
	I0603 12:11:17.178001   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 12:11:17.219530   73662 logs.go:123] Gathering logs for kubelet ...
	I0603 12:11:17.219563   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 12:11:17.272776   73662 logs.go:123] Gathering logs for dmesg ...
	I0603 12:11:17.272808   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 12:11:17.287573   73662 logs.go:123] Gathering logs for describe nodes ...
	I0603 12:11:17.287610   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 12:11:17.361020   73662 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 12:11:19.861599   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:11:19.874988   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 12:11:19.875075   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 12:11:19.910641   73662 cri.go:89] found id: ""
	I0603 12:11:19.910664   73662 logs.go:276] 0 containers: []
	W0603 12:11:19.910672   73662 logs.go:278] No container was found matching "kube-apiserver"
	I0603 12:11:19.910678   73662 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 12:11:19.910738   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 12:11:19.947432   73662 cri.go:89] found id: ""
	I0603 12:11:19.947457   73662 logs.go:276] 0 containers: []
	W0603 12:11:19.947465   73662 logs.go:278] No container was found matching "etcd"
	I0603 12:11:19.947475   73662 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 12:11:19.947528   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 12:11:19.986254   73662 cri.go:89] found id: ""
	I0603 12:11:19.986284   73662 logs.go:276] 0 containers: []
	W0603 12:11:19.986296   73662 logs.go:278] No container was found matching "coredns"
	I0603 12:11:19.986303   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 12:11:19.986370   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 12:11:20.022447   73662 cri.go:89] found id: ""
	I0603 12:11:20.022477   73662 logs.go:276] 0 containers: []
	W0603 12:11:20.022488   73662 logs.go:278] No container was found matching "kube-scheduler"
	I0603 12:11:20.022496   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 12:11:20.022555   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 12:11:20.056731   73662 cri.go:89] found id: ""
	I0603 12:11:20.056755   73662 logs.go:276] 0 containers: []
	W0603 12:11:20.056763   73662 logs.go:278] No container was found matching "kube-proxy"
	I0603 12:11:20.056769   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 12:11:20.056819   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 12:11:20.095511   73662 cri.go:89] found id: ""
	I0603 12:11:20.095537   73662 logs.go:276] 0 containers: []
	W0603 12:11:20.095547   73662 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 12:11:20.095552   73662 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 12:11:20.095595   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 12:11:20.130562   73662 cri.go:89] found id: ""
	I0603 12:11:20.130581   73662 logs.go:276] 0 containers: []
	W0603 12:11:20.130589   73662 logs.go:278] No container was found matching "kindnet"
	I0603 12:11:20.130594   73662 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 12:11:20.130648   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 12:11:20.165231   73662 cri.go:89] found id: ""
	I0603 12:11:20.165257   73662 logs.go:276] 0 containers: []
	W0603 12:11:20.165267   73662 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 12:11:20.165276   73662 logs.go:123] Gathering logs for kubelet ...
	I0603 12:11:20.165290   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 12:11:20.221790   73662 logs.go:123] Gathering logs for dmesg ...
	I0603 12:11:20.221826   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 12:11:20.237415   73662 logs.go:123] Gathering logs for describe nodes ...
	I0603 12:11:20.237440   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 12:11:20.310615   73662 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 12:11:20.310641   73662 logs.go:123] Gathering logs for CRI-O ...
	I0603 12:11:20.310657   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 12:11:20.385667   73662 logs.go:123] Gathering logs for container status ...
	I0603 12:11:20.385701   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 12:11:22.925911   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:11:22.938958   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 12:11:22.939047   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 12:11:22.981898   73662 cri.go:89] found id: ""
	I0603 12:11:22.981928   73662 logs.go:276] 0 containers: []
	W0603 12:11:22.981939   73662 logs.go:278] No container was found matching "kube-apiserver"
	I0603 12:11:22.981954   73662 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 12:11:22.982026   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 12:11:23.025590   73662 cri.go:89] found id: ""
	I0603 12:11:23.025624   73662 logs.go:276] 0 containers: []
	W0603 12:11:23.025632   73662 logs.go:278] No container was found matching "etcd"
	I0603 12:11:23.025638   73662 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 12:11:23.025691   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 12:11:23.072938   73662 cri.go:89] found id: ""
	I0603 12:11:23.072968   73662 logs.go:276] 0 containers: []
	W0603 12:11:23.072980   73662 logs.go:278] No container was found matching "coredns"
	I0603 12:11:23.072988   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 12:11:23.073057   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 12:11:23.114546   73662 cri.go:89] found id: ""
	I0603 12:11:23.114573   73662 logs.go:276] 0 containers: []
	W0603 12:11:23.114582   73662 logs.go:278] No container was found matching "kube-scheduler"
	I0603 12:11:23.114589   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 12:11:23.114654   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 12:11:23.152203   73662 cri.go:89] found id: ""
	I0603 12:11:23.152229   73662 logs.go:276] 0 containers: []
	W0603 12:11:23.152236   73662 logs.go:278] No container was found matching "kube-proxy"
	I0603 12:11:23.152242   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 12:11:23.152289   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 12:11:23.204179   73662 cri.go:89] found id: ""
	I0603 12:11:23.204228   73662 logs.go:276] 0 containers: []
	W0603 12:11:23.204240   73662 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 12:11:23.204247   73662 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 12:11:23.204308   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 12:11:23.244217   73662 cri.go:89] found id: ""
	I0603 12:11:23.244246   73662 logs.go:276] 0 containers: []
	W0603 12:11:23.244256   73662 logs.go:278] No container was found matching "kindnet"
	I0603 12:11:23.244264   73662 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 12:11:23.244326   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 12:11:23.286094   73662 cri.go:89] found id: ""
	I0603 12:11:23.286173   73662 logs.go:276] 0 containers: []
	W0603 12:11:23.286190   73662 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 12:11:23.286201   73662 logs.go:123] Gathering logs for kubelet ...
	I0603 12:11:23.286215   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 12:11:23.357802   73662 logs.go:123] Gathering logs for dmesg ...
	I0603 12:11:23.357850   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 12:11:23.376808   73662 logs.go:123] Gathering logs for describe nodes ...
	I0603 12:11:23.376839   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 12:11:23.470658   73662 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 12:11:23.470691   73662 logs.go:123] Gathering logs for CRI-O ...
	I0603 12:11:23.470705   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 12:11:23.584192   73662 logs.go:123] Gathering logs for container status ...
	I0603 12:11:23.584241   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 12:11:26.132511   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:11:26.150549   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 12:11:26.150619   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 12:11:26.196791   73662 cri.go:89] found id: ""
	I0603 12:11:26.196817   73662 logs.go:276] 0 containers: []
	W0603 12:11:26.196827   73662 logs.go:278] No container was found matching "kube-apiserver"
	I0603 12:11:26.196834   73662 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 12:11:26.196912   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 12:11:26.233584   73662 cri.go:89] found id: ""
	I0603 12:11:26.233614   73662 logs.go:276] 0 containers: []
	W0603 12:11:26.233624   73662 logs.go:278] No container was found matching "etcd"
	I0603 12:11:26.233631   73662 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 12:11:26.233692   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 12:11:26.272648   73662 cri.go:89] found id: ""
	I0603 12:11:26.272677   73662 logs.go:276] 0 containers: []
	W0603 12:11:26.272688   73662 logs.go:278] No container was found matching "coredns"
	I0603 12:11:26.272696   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 12:11:26.272758   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 12:11:26.313775   73662 cri.go:89] found id: ""
	I0603 12:11:26.313806   73662 logs.go:276] 0 containers: []
	W0603 12:11:26.313817   73662 logs.go:278] No container was found matching "kube-scheduler"
	I0603 12:11:26.313824   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 12:11:26.313883   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 12:11:26.355591   73662 cri.go:89] found id: ""
	I0603 12:11:26.355626   73662 logs.go:276] 0 containers: []
	W0603 12:11:26.355638   73662 logs.go:278] No container was found matching "kube-proxy"
	I0603 12:11:26.355646   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 12:11:26.355711   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 12:11:26.406265   73662 cri.go:89] found id: ""
	I0603 12:11:26.406299   73662 logs.go:276] 0 containers: []
	W0603 12:11:26.406306   73662 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 12:11:26.406318   73662 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 12:11:26.406378   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 12:11:26.443279   73662 cri.go:89] found id: ""
	I0603 12:11:26.443321   73662 logs.go:276] 0 containers: []
	W0603 12:11:26.443333   73662 logs.go:278] No container was found matching "kindnet"
	I0603 12:11:26.443340   73662 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 12:11:26.443403   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 12:11:26.479300   73662 cri.go:89] found id: ""
	I0603 12:11:26.479334   73662 logs.go:276] 0 containers: []
	W0603 12:11:26.479346   73662 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 12:11:26.479358   73662 logs.go:123] Gathering logs for kubelet ...
	I0603 12:11:26.479371   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 12:11:26.531360   73662 logs.go:123] Gathering logs for dmesg ...
	I0603 12:11:26.531394   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 12:11:26.547939   73662 logs.go:123] Gathering logs for describe nodes ...
	I0603 12:11:26.547973   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 12:11:26.625987   73662 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 12:11:26.626016   73662 logs.go:123] Gathering logs for CRI-O ...
	I0603 12:11:26.626032   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 12:11:26.714014   73662 logs.go:123] Gathering logs for container status ...
	I0603 12:11:26.714054   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 12:11:29.267203   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:11:29.281448   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 12:11:29.281522   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 12:11:29.315484   73662 cri.go:89] found id: ""
	I0603 12:11:29.315512   73662 logs.go:276] 0 containers: []
	W0603 12:11:29.315519   73662 logs.go:278] No container was found matching "kube-apiserver"
	I0603 12:11:29.315530   73662 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 12:11:29.315586   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 12:11:29.357054   73662 cri.go:89] found id: ""
	I0603 12:11:29.357084   73662 logs.go:276] 0 containers: []
	W0603 12:11:29.357095   73662 logs.go:278] No container was found matching "etcd"
	I0603 12:11:29.357103   73662 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 12:11:29.357163   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 12:11:29.402434   73662 cri.go:89] found id: ""
	I0603 12:11:29.402461   73662 logs.go:276] 0 containers: []
	W0603 12:11:29.402471   73662 logs.go:278] No container was found matching "coredns"
	I0603 12:11:29.402478   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 12:11:29.402520   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 12:11:29.437822   73662 cri.go:89] found id: ""
	I0603 12:11:29.437854   73662 logs.go:276] 0 containers: []
	W0603 12:11:29.437865   73662 logs.go:278] No container was found matching "kube-scheduler"
	I0603 12:11:29.437871   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 12:11:29.437917   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 12:11:29.474637   73662 cri.go:89] found id: ""
	I0603 12:11:29.474658   73662 logs.go:276] 0 containers: []
	W0603 12:11:29.474665   73662 logs.go:278] No container was found matching "kube-proxy"
	I0603 12:11:29.474671   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 12:11:29.474725   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 12:11:29.508547   73662 cri.go:89] found id: ""
	I0603 12:11:29.508573   73662 logs.go:276] 0 containers: []
	W0603 12:11:29.508580   73662 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 12:11:29.508586   73662 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 12:11:29.508630   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 12:11:29.544524   73662 cri.go:89] found id: ""
	I0603 12:11:29.544553   73662 logs.go:276] 0 containers: []
	W0603 12:11:29.544561   73662 logs.go:278] No container was found matching "kindnet"
	I0603 12:11:29.544567   73662 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 12:11:29.544621   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 12:11:29.582549   73662 cri.go:89] found id: ""
	I0603 12:11:29.582582   73662 logs.go:276] 0 containers: []
	W0603 12:11:29.582593   73662 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 12:11:29.582604   73662 logs.go:123] Gathering logs for kubelet ...
	I0603 12:11:29.582618   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 12:11:29.641931   73662 logs.go:123] Gathering logs for dmesg ...
	I0603 12:11:29.641977   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 12:11:29.664918   73662 logs.go:123] Gathering logs for describe nodes ...
	I0603 12:11:29.664948   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 12:11:29.740591   73662 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 12:11:29.740615   73662 logs.go:123] Gathering logs for CRI-O ...
	I0603 12:11:29.740629   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 12:11:29.814456   73662 logs.go:123] Gathering logs for container status ...
	I0603 12:11:29.814489   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 12:11:32.359122   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:11:32.373552   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 12:11:32.373623   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 12:11:32.408431   73662 cri.go:89] found id: ""
	I0603 12:11:32.408461   73662 logs.go:276] 0 containers: []
	W0603 12:11:32.408471   73662 logs.go:278] No container was found matching "kube-apiserver"
	I0603 12:11:32.408479   73662 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 12:11:32.408533   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 12:11:32.444242   73662 cri.go:89] found id: ""
	I0603 12:11:32.444266   73662 logs.go:276] 0 containers: []
	W0603 12:11:32.444273   73662 logs.go:278] No container was found matching "etcd"
	I0603 12:11:32.444279   73662 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 12:11:32.444323   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 12:11:32.477205   73662 cri.go:89] found id: ""
	I0603 12:11:32.477230   73662 logs.go:276] 0 containers: []
	W0603 12:11:32.477237   73662 logs.go:278] No container was found matching "coredns"
	I0603 12:11:32.477243   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 12:11:32.477298   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 12:11:32.512434   73662 cri.go:89] found id: ""
	I0603 12:11:32.512482   73662 logs.go:276] 0 containers: []
	W0603 12:11:32.512494   73662 logs.go:278] No container was found matching "kube-scheduler"
	I0603 12:11:32.512501   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 12:11:32.512559   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 12:11:32.545619   73662 cri.go:89] found id: ""
	I0603 12:11:32.545645   73662 logs.go:276] 0 containers: []
	W0603 12:11:32.545655   73662 logs.go:278] No container was found matching "kube-proxy"
	I0603 12:11:32.545662   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 12:11:32.545715   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 12:11:32.579093   73662 cri.go:89] found id: ""
	I0603 12:11:32.579121   73662 logs.go:276] 0 containers: []
	W0603 12:11:32.579131   73662 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 12:11:32.579138   73662 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 12:11:32.579196   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 12:11:32.616826   73662 cri.go:89] found id: ""
	I0603 12:11:32.616851   73662 logs.go:276] 0 containers: []
	W0603 12:11:32.616858   73662 logs.go:278] No container was found matching "kindnet"
	I0603 12:11:32.616864   73662 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 12:11:32.616917   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 12:11:32.660083   73662 cri.go:89] found id: ""
	I0603 12:11:32.660113   73662 logs.go:276] 0 containers: []
	W0603 12:11:32.660124   73662 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 12:11:32.660132   73662 logs.go:123] Gathering logs for container status ...
	I0603 12:11:32.660143   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 12:11:32.697974   73662 logs.go:123] Gathering logs for kubelet ...
	I0603 12:11:32.698002   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 12:11:32.748797   73662 logs.go:123] Gathering logs for dmesg ...
	I0603 12:11:32.748835   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 12:11:32.762517   73662 logs.go:123] Gathering logs for describe nodes ...
	I0603 12:11:32.762580   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 12:11:32.838358   73662 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 12:11:32.838383   73662 logs.go:123] Gathering logs for CRI-O ...
	I0603 12:11:32.838397   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
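
The probe that minikube repeats in the cycles above reduces to a handful of shell commands, each executed inside the guest over SSH. A minimal sketch, using only the commands and paths that appear in this log (the v1.20.0 kubectl path and the kubeconfig location are specific to this run, and the loop over component names is an illustrative condensation, not minikube's actual code):

  # check for a running apiserver process, then for control-plane containers via CRI-O
  sudo pgrep -xnf 'kube-apiserver.*minikube.*'
  for name in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet kubernetes-dashboard; do
    sudo crictl ps -a --quiet --name="$name"
  done
  # gather the supporting logs that minikube collects on each pass
  sudo journalctl -u kubelet -n 400
  sudo journalctl -u crio -n 400
  sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
  # this is the step that keeps failing while the apiserver is down (connection refused on localhost:8443)
  sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig

Every pass finds no control-plane containers and cannot reach localhost:8443, which is why the loop eventually gives up and falls back to a full cluster reset below.
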
	I0603 12:11:35.419197   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:11:35.432481   73662 kubeadm.go:591] duration metric: took 4m4.317900598s to restartPrimaryControlPlane
	W0603 12:11:35.432560   73662 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0603 12:11:35.432591   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0603 12:11:35.895615   73662 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0603 12:11:35.910673   73662 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0603 12:11:35.921333   73662 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0603 12:11:35.931736   73662 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0603 12:11:35.931750   73662 kubeadm.go:156] found existing configuration files:
	
	I0603 12:11:35.931783   73662 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0603 12:11:35.940883   73662 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0603 12:11:35.940924   73662 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0603 12:11:35.950780   73662 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0603 12:11:35.959947   73662 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0603 12:11:35.959999   73662 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0603 12:11:35.969824   73662 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0603 12:11:35.979347   73662 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0603 12:11:35.979393   73662 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0603 12:11:35.988704   73662 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0603 12:11:35.997726   73662 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0603 12:11:35.997785   73662 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
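
The stale-config cleanup above follows one pattern per kubeconfig under /etc/kubernetes: the file is kept only if it already references the expected control-plane endpoint, otherwise it is removed so kubeadm init can regenerate it. A minimal sketch with the endpoint and file names taken from this log (the loop is illustrative; minikube runs the four grep/rm pairs individually):

  endpoint='https://control-plane.minikube.internal:8443'
  for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
    # keep the file only if it points at the expected endpoint; otherwise delete it
    sudo grep -q "$endpoint" "/etc/kubernetes/$f" || sudo rm -f "/etc/kubernetes/$f"
  done

In this run none of the files exist (the grep commands exit with status 2), so all four are simply removed before kubeadm init is invoked.
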
	I0603 12:11:36.007165   73662 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0603 12:11:36.080667   73662 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0603 12:11:36.080794   73662 kubeadm.go:309] [preflight] Running pre-flight checks
	I0603 12:11:36.220642   73662 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0603 12:11:36.220814   73662 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0603 12:11:36.220967   73662 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0603 12:11:36.421569   73662 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0603 12:11:36.423141   73662 out.go:204]   - Generating certificates and keys ...
	I0603 12:11:36.423237   73662 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0603 12:11:36.423328   73662 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0603 12:11:36.423428   73662 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0603 12:11:36.423535   73662 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0603 12:11:36.423630   73662 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0603 12:11:36.423713   73662 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0603 12:11:36.423795   73662 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0603 12:11:36.423880   73662 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0603 12:11:36.423985   73662 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0603 12:11:36.424079   73662 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0603 12:11:36.424140   73662 kubeadm.go:309] [certs] Using the existing "sa" key
	I0603 12:11:36.424218   73662 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0603 12:11:36.576702   73662 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0603 12:11:36.704239   73662 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0603 12:11:36.981759   73662 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0603 12:11:37.031992   73662 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0603 12:11:37.052994   73662 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0603 12:11:37.054403   73662 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0603 12:11:37.054471   73662 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0603 12:11:37.196201   73662 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0603 12:11:37.198112   73662 out.go:204]   - Booting up control plane ...
	I0603 12:11:37.198252   73662 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0603 12:11:37.202872   73662 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0603 12:11:37.203965   73662 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0603 12:11:37.204734   73662 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0603 12:11:37.207204   73662 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0603 12:12:17.205144   73662 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0603 12:12:17.215420   73662 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0603 12:12:17.215687   73662 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0603 12:12:22.215864   73662 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0603 12:12:22.216210   73662 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0603 12:12:32.215921   73662 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0603 12:12:32.216130   73662 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0603 12:12:52.215684   73662 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0603 12:12:52.215951   73662 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0603 12:13:32.215819   73662 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0603 12:13:32.216031   73662 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0603 12:13:32.216075   73662 kubeadm.go:309] 
	I0603 12:13:32.216149   73662 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0603 12:13:32.216254   73662 kubeadm.go:309] 		timed out waiting for the condition
	I0603 12:13:32.216284   73662 kubeadm.go:309] 
	I0603 12:13:32.216349   73662 kubeadm.go:309] 	This error is likely caused by:
	I0603 12:13:32.216394   73662 kubeadm.go:309] 		- The kubelet is not running
	I0603 12:13:32.216554   73662 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0603 12:13:32.216577   73662 kubeadm.go:309] 
	I0603 12:13:32.216688   73662 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0603 12:13:32.216722   73662 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0603 12:13:32.216764   73662 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0603 12:13:32.216773   73662 kubeadm.go:309] 
	I0603 12:13:32.216888   73662 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0603 12:13:32.217006   73662 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0603 12:13:32.217031   73662 kubeadm.go:309] 
	I0603 12:13:32.217165   73662 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0603 12:13:32.217278   73662 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0603 12:13:32.217412   73662 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0603 12:13:32.217594   73662 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0603 12:13:32.217618   73662 kubeadm.go:309] 
	I0603 12:13:32.218376   73662 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0603 12:13:32.218449   73662 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0603 12:13:32.218578   73662 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	W0603 12:13:32.218719   73662 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0603 12:13:32.218776   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0603 12:13:32.678357   73662 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0603 12:13:32.693276   73662 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0603 12:13:32.702964   73662 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0603 12:13:32.702986   73662 kubeadm.go:156] found existing configuration files:
	
	I0603 12:13:32.703025   73662 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0603 12:13:32.712508   73662 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0603 12:13:32.712555   73662 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0603 12:13:32.722219   73662 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0603 12:13:32.731648   73662 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0603 12:13:32.731702   73662 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0603 12:13:32.741195   73662 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0603 12:13:32.750711   73662 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0603 12:13:32.750764   73662 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0603 12:13:32.760654   73662 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0603 12:13:32.769838   73662 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0603 12:13:32.769881   73662 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0603 12:13:32.780973   73662 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0603 12:13:32.850830   73662 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0603 12:13:32.850883   73662 kubeadm.go:309] [preflight] Running pre-flight checks
	I0603 12:13:32.999201   73662 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0603 12:13:32.999328   73662 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0603 12:13:32.999428   73662 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0603 12:13:33.184771   73662 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0603 12:13:33.187327   73662 out.go:204]   - Generating certificates and keys ...
	I0603 12:13:33.187398   73662 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0603 12:13:33.187487   73662 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0603 12:13:33.187586   73662 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0603 12:13:33.187682   73662 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0603 12:13:33.187788   73662 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0603 12:13:33.187887   73662 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0603 12:13:33.187981   73662 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0603 12:13:33.188107   73662 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0603 12:13:33.188522   73662 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0603 12:13:33.188801   73662 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0603 12:13:33.188880   73662 kubeadm.go:309] [certs] Using the existing "sa" key
	I0603 12:13:33.188991   73662 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0603 12:13:33.334289   73662 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0603 12:13:33.523806   73662 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0603 12:13:33.699531   73662 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0603 12:13:33.750555   73662 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0603 12:13:33.769976   73662 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0603 12:13:33.770924   73662 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0603 12:13:33.770986   73662 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0603 12:13:33.921095   73662 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0603 12:13:33.923915   73662 out.go:204]   - Booting up control plane ...
	I0603 12:13:33.924071   73662 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0603 12:13:33.930998   73662 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0603 12:13:33.934088   73662 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0603 12:13:33.935783   73662 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0603 12:13:33.939727   73662 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0603 12:14:13.940542   73662 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0603 12:14:13.940993   73662 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0603 12:14:13.941324   73662 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0603 12:14:18.941485   73662 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0603 12:14:18.941730   73662 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0603 12:14:28.942021   73662 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0603 12:14:28.942229   73662 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0603 12:14:48.942823   73662 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0603 12:14:48.943115   73662 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0603 12:15:28.944455   73662 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0603 12:15:28.944758   73662 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0603 12:15:28.944781   73662 kubeadm.go:309] 
	I0603 12:15:28.944835   73662 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0603 12:15:28.944914   73662 kubeadm.go:309] 		timed out waiting for the condition
	I0603 12:15:28.944925   73662 kubeadm.go:309] 
	I0603 12:15:28.944965   73662 kubeadm.go:309] 	This error is likely caused by:
	I0603 12:15:28.945008   73662 kubeadm.go:309] 		- The kubelet is not running
	I0603 12:15:28.945152   73662 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0603 12:15:28.945168   73662 kubeadm.go:309] 
	I0603 12:15:28.945322   73662 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0603 12:15:28.945378   73662 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0603 12:15:28.945423   73662 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0603 12:15:28.945433   73662 kubeadm.go:309] 
	I0603 12:15:28.945568   73662 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0603 12:15:28.945695   73662 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0603 12:15:28.945717   73662 kubeadm.go:309] 
	I0603 12:15:28.945883   73662 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0603 12:15:28.946014   73662 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0603 12:15:28.946123   73662 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0603 12:15:28.946234   73662 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0603 12:15:28.946263   73662 kubeadm.go:309] 
	I0603 12:15:28.947236   73662 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0603 12:15:28.947323   73662 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0603 12:15:28.947455   73662 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	I0603 12:15:28.947531   73662 kubeadm.go:393] duration metric: took 7m57.88734097s to StartCluster
	I0603 12:15:28.947585   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 12:15:28.947638   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 12:15:28.993664   73662 cri.go:89] found id: ""
	I0603 12:15:28.993694   73662 logs.go:276] 0 containers: []
	W0603 12:15:28.993705   73662 logs.go:278] No container was found matching "kube-apiserver"
	I0603 12:15:28.993712   73662 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 12:15:28.993774   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 12:15:29.030686   73662 cri.go:89] found id: ""
	I0603 12:15:29.030720   73662 logs.go:276] 0 containers: []
	W0603 12:15:29.030730   73662 logs.go:278] No container was found matching "etcd"
	I0603 12:15:29.030738   73662 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 12:15:29.030803   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 12:15:29.067047   73662 cri.go:89] found id: ""
	I0603 12:15:29.067076   73662 logs.go:276] 0 containers: []
	W0603 12:15:29.067086   73662 logs.go:278] No container was found matching "coredns"
	I0603 12:15:29.067092   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 12:15:29.067154   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 12:15:29.107392   73662 cri.go:89] found id: ""
	I0603 12:15:29.107416   73662 logs.go:276] 0 containers: []
	W0603 12:15:29.107424   73662 logs.go:278] No container was found matching "kube-scheduler"
	I0603 12:15:29.107430   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 12:15:29.107483   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 12:15:29.159886   73662 cri.go:89] found id: ""
	I0603 12:15:29.159916   73662 logs.go:276] 0 containers: []
	W0603 12:15:29.159925   73662 logs.go:278] No container was found matching "kube-proxy"
	I0603 12:15:29.159934   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 12:15:29.159994   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 12:15:29.195187   73662 cri.go:89] found id: ""
	I0603 12:15:29.195218   73662 logs.go:276] 0 containers: []
	W0603 12:15:29.195229   73662 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 12:15:29.195236   73662 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 12:15:29.195295   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 12:15:29.233622   73662 cri.go:89] found id: ""
	I0603 12:15:29.233648   73662 logs.go:276] 0 containers: []
	W0603 12:15:29.233656   73662 logs.go:278] No container was found matching "kindnet"
	I0603 12:15:29.233662   73662 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 12:15:29.233717   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 12:15:29.272849   73662 cri.go:89] found id: ""
	I0603 12:15:29.272874   73662 logs.go:276] 0 containers: []
	W0603 12:15:29.272882   73662 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 12:15:29.272891   73662 logs.go:123] Gathering logs for CRI-O ...
	I0603 12:15:29.272901   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 12:15:29.383220   73662 logs.go:123] Gathering logs for container status ...
	I0603 12:15:29.383256   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 12:15:29.424045   73662 logs.go:123] Gathering logs for kubelet ...
	I0603 12:15:29.424076   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 12:15:29.475712   73662 logs.go:123] Gathering logs for dmesg ...
	I0603 12:15:29.475743   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 12:15:29.489841   73662 logs.go:123] Gathering logs for describe nodes ...
	I0603 12:15:29.489868   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 12:15:29.572988   73662 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	W0603 12:15:29.573030   73662 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0603 12:15:29.573068   73662 out.go:239] * 
	* 
	W0603 12:15:29.573117   73662 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0603 12:15:29.573138   73662 out.go:239] * 
	* 
	W0603 12:15:29.573869   73662 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0603 12:15:29.577458   73662 out.go:177] 
	W0603 12:15:29.578659   73662 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0603 12:15:29.578700   73662 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0603 12:15:29.578716   73662 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	* Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0603 12:15:29.580176   73662 out.go:177] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-linux-amd64 start -p old-k8s-version-905554 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0": exit status 109
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-905554 -n old-k8s-version-905554
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-905554 -n old-k8s-version-905554: exit status 2 (230.007642ms)

-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/SecondStart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-905554 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-905554 logs -n 25: (1.568596564s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/SecondStart logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p calico-034991 sudo cat                              | calico-034991                | jenkins | v1.33.1 | 03 Jun 24 11:58 UTC | 03 Jun 24 11:58 UTC |
	|         | /etc/containerd/config.toml                            |                              |         |         |                     |                     |
	| ssh     | -p calico-034991 sudo                                  | calico-034991                | jenkins | v1.33.1 | 03 Jun 24 11:58 UTC | 03 Jun 24 11:58 UTC |
	|         | containerd config dump                                 |                              |         |         |                     |                     |
	| ssh     | -p calico-034991 sudo                                  | calico-034991                | jenkins | v1.33.1 | 03 Jun 24 11:58 UTC | 03 Jun 24 11:58 UTC |
	|         | systemctl status crio --all                            |                              |         |         |                     |                     |
	|         | --full --no-pager                                      |                              |         |         |                     |                     |
	| ssh     | -p calico-034991 sudo                                  | calico-034991                | jenkins | v1.33.1 | 03 Jun 24 11:58 UTC | 03 Jun 24 11:58 UTC |
	|         | systemctl cat crio --no-pager                          |                              |         |         |                     |                     |
	| ssh     | -p calico-034991 sudo find                             | calico-034991                | jenkins | v1.33.1 | 03 Jun 24 11:58 UTC | 03 Jun 24 11:58 UTC |
	|         | /etc/crio -type f -exec sh -c                          |                              |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                   |                              |         |         |                     |                     |
	| ssh     | -p calico-034991 sudo crio                             | calico-034991                | jenkins | v1.33.1 | 03 Jun 24 11:58 UTC | 03 Jun 24 11:58 UTC |
	|         | config                                                 |                              |         |         |                     |                     |
	| delete  | -p calico-034991                                       | calico-034991                | jenkins | v1.33.1 | 03 Jun 24 11:58 UTC | 03 Jun 24 11:58 UTC |
	| delete  | -p                                                     | disable-driver-mounts-231568 | jenkins | v1.33.1 | 03 Jun 24 11:58 UTC | 03 Jun 24 11:58 UTC |
	|         | disable-driver-mounts-231568                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-196710 | jenkins | v1.33.1 | 03 Jun 24 11:58 UTC | 03 Jun 24 11:59 UTC |
	|         | default-k8s-diff-port-196710                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-725022            | embed-certs-725022           | jenkins | v1.33.1 | 03 Jun 24 11:59 UTC | 03 Jun 24 11:59 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-725022                                  | embed-certs-725022           | jenkins | v1.33.1 | 03 Jun 24 11:59 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-602118             | no-preload-602118            | jenkins | v1.33.1 | 03 Jun 24 11:59 UTC | 03 Jun 24 11:59 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-602118                                   | no-preload-602118            | jenkins | v1.33.1 | 03 Jun 24 11:59 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-196710  | default-k8s-diff-port-196710 | jenkins | v1.33.1 | 03 Jun 24 11:59 UTC | 03 Jun 24 11:59 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-196710 | jenkins | v1.33.1 | 03 Jun 24 11:59 UTC |                     |
	|         | default-k8s-diff-port-196710                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-905554        | old-k8s-version-905554       | jenkins | v1.33.1 | 03 Jun 24 12:01 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-725022                 | embed-certs-725022           | jenkins | v1.33.1 | 03 Jun 24 12:01 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-725022                                  | embed-certs-725022           | jenkins | v1.33.1 | 03 Jun 24 12:01 UTC | 03 Jun 24 12:13 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.1                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-602118                  | no-preload-602118            | jenkins | v1.33.1 | 03 Jun 24 12:01 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-602118                                   | no-preload-602118            | jenkins | v1.33.1 | 03 Jun 24 12:02 UTC | 03 Jun 24 12:12 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.1                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-196710       | default-k8s-diff-port-196710 | jenkins | v1.33.1 | 03 Jun 24 12:02 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-196710 | jenkins | v1.33.1 | 03 Jun 24 12:02 UTC | 03 Jun 24 12:12 UTC |
	|         | default-k8s-diff-port-196710                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.1                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-905554                              | old-k8s-version-905554       | jenkins | v1.33.1 | 03 Jun 24 12:02 UTC | 03 Jun 24 12:02 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-905554             | old-k8s-version-905554       | jenkins | v1.33.1 | 03 Jun 24 12:02 UTC | 03 Jun 24 12:03 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-905554                              | old-k8s-version-905554       | jenkins | v1.33.1 | 03 Jun 24 12:03 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/06/03 12:03:00
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0603 12:03:00.091233   73662 out.go:291] Setting OutFile to fd 1 ...
	I0603 12:03:00.091511   73662 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0603 12:03:00.091522   73662 out.go:304] Setting ErrFile to fd 2...
	I0603 12:03:00.091533   73662 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0603 12:03:00.091747   73662 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19008-7755/.minikube/bin
	I0603 12:03:00.092302   73662 out.go:298] Setting JSON to false
	I0603 12:03:00.093203   73662 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":6325,"bootTime":1717409855,"procs":200,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1060-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0603 12:03:00.093258   73662 start.go:139] virtualization: kvm guest
	I0603 12:03:00.095496   73662 out.go:177] * [old-k8s-version-905554] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0603 12:03:00.097136   73662 out.go:177]   - MINIKUBE_LOCATION=19008
	I0603 12:03:00.097143   73662 notify.go:220] Checking for updates...
	I0603 12:03:00.098729   73662 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0603 12:03:00.100123   73662 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19008-7755/kubeconfig
	I0603 12:03:00.101401   73662 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19008-7755/.minikube
	I0603 12:03:00.102776   73662 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0603 12:03:00.104123   73662 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0603 12:03:00.105823   73662 config.go:182] Loaded profile config "old-k8s-version-905554": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0603 12:03:00.106265   73662 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 12:03:00.106313   73662 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 12:03:00.120941   73662 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43635
	I0603 12:03:00.121275   73662 main.go:141] libmachine: () Calling .GetVersion
	I0603 12:03:00.121783   73662 main.go:141] libmachine: Using API Version  1
	I0603 12:03:00.121807   73662 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 12:03:00.122090   73662 main.go:141] libmachine: () Calling .GetMachineName
	I0603 12:03:00.122253   73662 main.go:141] libmachine: (old-k8s-version-905554) Calling .DriverName
	I0603 12:03:00.124037   73662 out.go:177] * Kubernetes 1.30.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.1
	I0603 12:03:00.125329   73662 driver.go:392] Setting default libvirt URI to qemu:///system
	I0603 12:03:00.125608   73662 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 12:03:00.125644   73662 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 12:03:00.139840   73662 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46571
	I0603 12:03:00.140215   73662 main.go:141] libmachine: () Calling .GetVersion
	I0603 12:03:00.140599   73662 main.go:141] libmachine: Using API Version  1
	I0603 12:03:00.140623   73662 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 12:03:00.140906   73662 main.go:141] libmachine: () Calling .GetMachineName
	I0603 12:03:00.141069   73662 main.go:141] libmachine: (old-k8s-version-905554) Calling .DriverName
	I0603 12:03:00.174375   73662 out.go:177] * Using the kvm2 driver based on existing profile
	I0603 12:03:00.175650   73662 start.go:297] selected driver: kvm2
	I0603 12:03:00.175667   73662 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-905554 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{K
ubernetesVersion:v1.20.0 ClusterName:old-k8s-version-905554 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.155 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:2628
0h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0603 12:03:00.175770   73662 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0603 12:03:00.176396   73662 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0603 12:03:00.176476   73662 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19008-7755/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0603 12:03:00.191380   73662 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0603 12:03:00.191738   73662 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0603 12:03:00.191796   73662 cni.go:84] Creating CNI manager for ""
	I0603 12:03:00.191809   73662 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0603 12:03:00.191847   73662 start.go:340] cluster config:
	{Name:old-k8s-version-905554 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-905554 Namespace:default
APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.155 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2
000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0603 12:03:00.191975   73662 iso.go:125] acquiring lock: {Name:mkdc8e745fc6a0fd8e502f6ad2510510ae9abf27 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0603 12:03:00.193899   73662 out.go:177] * Starting "old-k8s-version-905554" primary control-plane node in "old-k8s-version-905554" cluster
	I0603 12:03:04.175308   72964 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.245:22: connect: no route to host
	I0603 12:03:00.195191   73662 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0603 12:03:00.195231   73662 preload.go:147] Found local preload: /home/jenkins/minikube-integration/19008-7755/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0603 12:03:00.195240   73662 cache.go:56] Caching tarball of preloaded images
	I0603 12:03:00.195331   73662 preload.go:173] Found /home/jenkins/minikube-integration/19008-7755/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0603 12:03:00.195345   73662 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0603 12:03:00.195441   73662 profile.go:143] Saving config to /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/old-k8s-version-905554/config.json ...
	I0603 12:03:00.195620   73662 start.go:360] acquireMachinesLock for old-k8s-version-905554: {Name:mk7389fd163f37e28f7d4d842bab151f7e27bc7c Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0603 12:03:07.247321   72964 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.245:22: connect: no route to host
	I0603 12:03:13.327307   72964 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.245:22: connect: no route to host
	I0603 12:03:16.399349   72964 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.245:22: connect: no route to host
	I0603 12:03:22.479291   72964 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.245:22: connect: no route to host
	I0603 12:03:25.551304   72964 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.245:22: connect: no route to host
	I0603 12:03:31.631290   72964 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.245:22: connect: no route to host
	I0603 12:03:34.703297   72964 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.245:22: connect: no route to host
	I0603 12:03:40.783313   72964 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.245:22: connect: no route to host
	I0603 12:03:43.855312   72964 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.245:22: connect: no route to host
	I0603 12:03:49.935253   72964 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.245:22: connect: no route to host
	I0603 12:03:53.007321   72964 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.245:22: connect: no route to host
	I0603 12:03:59.087310   72964 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.245:22: connect: no route to host
	I0603 12:04:02.159408   72964 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.245:22: connect: no route to host
	I0603 12:04:08.239374   72964 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.245:22: connect: no route to host
	I0603 12:04:11.311346   72964 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.245:22: connect: no route to host
	I0603 12:04:17.391313   72964 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.245:22: connect: no route to host
	I0603 12:04:20.463280   72964 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.245:22: connect: no route to host
	I0603 12:04:26.543359   72964 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.245:22: connect: no route to host
	I0603 12:04:29.615273   72964 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.245:22: connect: no route to host
	I0603 12:04:35.695325   72964 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.245:22: connect: no route to host
	I0603 12:04:38.767328   72964 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.245:22: connect: no route to host
	I0603 12:04:44.847321   72964 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.245:22: connect: no route to host
	I0603 12:04:47.919323   72964 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.245:22: connect: no route to host
	I0603 12:04:53.999275   72964 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.245:22: connect: no route to host
	I0603 12:04:57.071278   72964 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.245:22: connect: no route to host
	I0603 12:05:03.151359   72964 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.245:22: connect: no route to host
	I0603 12:05:06.223409   72964 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.245:22: connect: no route to host
	I0603 12:05:12.303278   72964 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.245:22: connect: no route to host
	I0603 12:05:15.375349   72964 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.245:22: connect: no route to host
	I0603 12:05:21.455288   72964 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.245:22: connect: no route to host
	I0603 12:05:24.527374   72964 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.245:22: connect: no route to host
	I0603 12:05:30.607297   72964 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.245:22: connect: no route to host
	I0603 12:05:33.679325   72964 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.245:22: connect: no route to host
	I0603 12:05:39.759247   72964 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.245:22: connect: no route to host
	I0603 12:05:42.831304   72964 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.245:22: connect: no route to host
	I0603 12:05:48.911327   72964 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.245:22: connect: no route to host
	I0603 12:05:51.983403   72964 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.245:22: connect: no route to host
	I0603 12:05:58.063364   72964 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.245:22: connect: no route to host
	I0603 12:06:01.135268   72964 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.245:22: connect: no route to host
	I0603 12:06:07.215311   72964 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.245:22: connect: no route to host
	I0603 12:06:10.287358   72964 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.245:22: connect: no route to host
	I0603 12:06:16.367324   72964 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.245:22: connect: no route to host
	I0603 12:06:19.439350   72964 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.245:22: connect: no route to host
	I0603 12:06:22.443361   73179 start.go:364] duration metric: took 4m16.965076383s to acquireMachinesLock for "no-preload-602118"
	I0603 12:06:22.443417   73179 start.go:96] Skipping create...Using existing machine configuration
	I0603 12:06:22.443423   73179 fix.go:54] fixHost starting: 
	I0603 12:06:22.443783   73179 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 12:06:22.443812   73179 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 12:06:22.458838   73179 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35011
	I0603 12:06:22.459247   73179 main.go:141] libmachine: () Calling .GetVersion
	I0603 12:06:22.459645   73179 main.go:141] libmachine: Using API Version  1
	I0603 12:06:22.459662   73179 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 12:06:22.459991   73179 main.go:141] libmachine: () Calling .GetMachineName
	I0603 12:06:22.460181   73179 main.go:141] libmachine: (no-preload-602118) Calling .DriverName
	I0603 12:06:22.460333   73179 main.go:141] libmachine: (no-preload-602118) Calling .GetState
	I0603 12:06:22.461743   73179 fix.go:112] recreateIfNeeded on no-preload-602118: state=Stopped err=<nil>
	I0603 12:06:22.461765   73179 main.go:141] libmachine: (no-preload-602118) Calling .DriverName
	W0603 12:06:22.461946   73179 fix.go:138] unexpected machine state, will restart: <nil>
	I0603 12:06:22.463492   73179 out.go:177] * Restarting existing kvm2 VM for "no-preload-602118" ...
	I0603 12:06:22.440994   72964 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0603 12:06:22.441029   72964 main.go:141] libmachine: (embed-certs-725022) Calling .GetMachineName
	I0603 12:06:22.441366   72964 buildroot.go:166] provisioning hostname "embed-certs-725022"
	I0603 12:06:22.441382   72964 main.go:141] libmachine: (embed-certs-725022) Calling .GetMachineName
	I0603 12:06:22.441594   72964 main.go:141] libmachine: (embed-certs-725022) Calling .GetSSHHostname
	I0603 12:06:22.443211   72964 machine.go:97] duration metric: took 4m37.428820472s to provisionDockerMachine
	I0603 12:06:22.443252   72964 fix.go:56] duration metric: took 4m37.449227063s for fixHost
	I0603 12:06:22.443258   72964 start.go:83] releasing machines lock for "embed-certs-725022", held for 4m37.449246886s
	W0603 12:06:22.443279   72964 start.go:713] error starting host: provision: host is not running
	W0603 12:06:22.443377   72964 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0603 12:06:22.443391   72964 start.go:728] Will try again in 5 seconds ...
	I0603 12:06:22.464734   73179 main.go:141] libmachine: (no-preload-602118) Calling .Start
	I0603 12:06:22.464901   73179 main.go:141] libmachine: (no-preload-602118) Ensuring networks are active...
	I0603 12:06:22.465632   73179 main.go:141] libmachine: (no-preload-602118) Ensuring network default is active
	I0603 12:06:22.465908   73179 main.go:141] libmachine: (no-preload-602118) Ensuring network mk-no-preload-602118 is active
	I0603 12:06:22.466273   73179 main.go:141] libmachine: (no-preload-602118) Getting domain xml...
	I0603 12:06:22.466923   73179 main.go:141] libmachine: (no-preload-602118) Creating domain...
	I0603 12:06:23.644255   73179 main.go:141] libmachine: (no-preload-602118) Waiting to get IP...
	I0603 12:06:23.645290   73179 main.go:141] libmachine: (no-preload-602118) DBG | domain no-preload-602118 has defined MAC address 52:54:00:ac:6c:91 in network mk-no-preload-602118
	I0603 12:06:23.645661   73179 main.go:141] libmachine: (no-preload-602118) DBG | unable to find current IP address of domain no-preload-602118 in network mk-no-preload-602118
	I0603 12:06:23.645846   73179 main.go:141] libmachine: (no-preload-602118) DBG | I0603 12:06:23.645673   74346 retry.go:31] will retry after 270.126449ms: waiting for machine to come up
	I0603 12:06:23.917313   73179 main.go:141] libmachine: (no-preload-602118) DBG | domain no-preload-602118 has defined MAC address 52:54:00:ac:6c:91 in network mk-no-preload-602118
	I0603 12:06:23.917691   73179 main.go:141] libmachine: (no-preload-602118) DBG | unable to find current IP address of domain no-preload-602118 in network mk-no-preload-602118
	I0603 12:06:23.917724   73179 main.go:141] libmachine: (no-preload-602118) DBG | I0603 12:06:23.917635   74346 retry.go:31] will retry after 385.827167ms: waiting for machine to come up
	I0603 12:06:24.305342   73179 main.go:141] libmachine: (no-preload-602118) DBG | domain no-preload-602118 has defined MAC address 52:54:00:ac:6c:91 in network mk-no-preload-602118
	I0603 12:06:24.305787   73179 main.go:141] libmachine: (no-preload-602118) DBG | unable to find current IP address of domain no-preload-602118 in network mk-no-preload-602118
	I0603 12:06:24.305809   73179 main.go:141] libmachine: (no-preload-602118) DBG | I0603 12:06:24.305756   74346 retry.go:31] will retry after 361.435978ms: waiting for machine to come up
	I0603 12:06:24.669132   73179 main.go:141] libmachine: (no-preload-602118) DBG | domain no-preload-602118 has defined MAC address 52:54:00:ac:6c:91 in network mk-no-preload-602118
	I0603 12:06:24.669489   73179 main.go:141] libmachine: (no-preload-602118) DBG | unable to find current IP address of domain no-preload-602118 in network mk-no-preload-602118
	I0603 12:06:24.669510   73179 main.go:141] libmachine: (no-preload-602118) DBG | I0603 12:06:24.669460   74346 retry.go:31] will retry after 420.041485ms: waiting for machine to come up
	I0603 12:06:25.090925   73179 main.go:141] libmachine: (no-preload-602118) DBG | domain no-preload-602118 has defined MAC address 52:54:00:ac:6c:91 in network mk-no-preload-602118
	I0603 12:06:25.091348   73179 main.go:141] libmachine: (no-preload-602118) DBG | unable to find current IP address of domain no-preload-602118 in network mk-no-preload-602118
	I0603 12:06:25.091378   73179 main.go:141] libmachine: (no-preload-602118) DBG | I0603 12:06:25.091293   74346 retry.go:31] will retry after 624.215107ms: waiting for machine to come up
	I0603 12:06:27.445060   72964 start.go:360] acquireMachinesLock for embed-certs-725022: {Name:mk7389fd163f37e28f7d4d842bab151f7e27bc7c Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0603 12:06:25.717004   73179 main.go:141] libmachine: (no-preload-602118) DBG | domain no-preload-602118 has defined MAC address 52:54:00:ac:6c:91 in network mk-no-preload-602118
	I0603 12:06:25.717428   73179 main.go:141] libmachine: (no-preload-602118) DBG | unable to find current IP address of domain no-preload-602118 in network mk-no-preload-602118
	I0603 12:06:25.717459   73179 main.go:141] libmachine: (no-preload-602118) DBG | I0603 12:06:25.717376   74346 retry.go:31] will retry after 589.80788ms: waiting for machine to come up
	I0603 12:06:26.309117   73179 main.go:141] libmachine: (no-preload-602118) DBG | domain no-preload-602118 has defined MAC address 52:54:00:ac:6c:91 in network mk-no-preload-602118
	I0603 12:06:26.309553   73179 main.go:141] libmachine: (no-preload-602118) DBG | unable to find current IP address of domain no-preload-602118 in network mk-no-preload-602118
	I0603 12:06:26.309573   73179 main.go:141] libmachine: (no-preload-602118) DBG | I0603 12:06:26.309525   74346 retry.go:31] will retry after 1.045937243s: waiting for machine to come up
	I0603 12:06:27.356628   73179 main.go:141] libmachine: (no-preload-602118) DBG | domain no-preload-602118 has defined MAC address 52:54:00:ac:6c:91 in network mk-no-preload-602118
	I0603 12:06:27.357021   73179 main.go:141] libmachine: (no-preload-602118) DBG | unable to find current IP address of domain no-preload-602118 in network mk-no-preload-602118
	I0603 12:06:27.357091   73179 main.go:141] libmachine: (no-preload-602118) DBG | I0603 12:06:27.357005   74346 retry.go:31] will retry after 1.111448638s: waiting for machine to come up
	I0603 12:06:28.469530   73179 main.go:141] libmachine: (no-preload-602118) DBG | domain no-preload-602118 has defined MAC address 52:54:00:ac:6c:91 in network mk-no-preload-602118
	I0603 12:06:28.469988   73179 main.go:141] libmachine: (no-preload-602118) DBG | unable to find current IP address of domain no-preload-602118 in network mk-no-preload-602118
	I0603 12:06:28.470019   73179 main.go:141] libmachine: (no-preload-602118) DBG | I0603 12:06:28.469937   74346 retry.go:31] will retry after 1.80245369s: waiting for machine to come up
	I0603 12:06:30.274889   73179 main.go:141] libmachine: (no-preload-602118) DBG | domain no-preload-602118 has defined MAC address 52:54:00:ac:6c:91 in network mk-no-preload-602118
	I0603 12:06:30.275389   73179 main.go:141] libmachine: (no-preload-602118) DBG | unable to find current IP address of domain no-preload-602118 in network mk-no-preload-602118
	I0603 12:06:30.275422   73179 main.go:141] libmachine: (no-preload-602118) DBG | I0603 12:06:30.275339   74346 retry.go:31] will retry after 1.896022361s: waiting for machine to come up
	I0603 12:06:32.173697   73179 main.go:141] libmachine: (no-preload-602118) DBG | domain no-preload-602118 has defined MAC address 52:54:00:ac:6c:91 in network mk-no-preload-602118
	I0603 12:06:32.174116   73179 main.go:141] libmachine: (no-preload-602118) DBG | unable to find current IP address of domain no-preload-602118 in network mk-no-preload-602118
	I0603 12:06:32.174147   73179 main.go:141] libmachine: (no-preload-602118) DBG | I0603 12:06:32.174065   74346 retry.go:31] will retry after 2.13920116s: waiting for machine to come up
	I0603 12:06:34.315196   73179 main.go:141] libmachine: (no-preload-602118) DBG | domain no-preload-602118 has defined MAC address 52:54:00:ac:6c:91 in network mk-no-preload-602118
	I0603 12:06:34.315598   73179 main.go:141] libmachine: (no-preload-602118) DBG | unable to find current IP address of domain no-preload-602118 in network mk-no-preload-602118
	I0603 12:06:34.315629   73179 main.go:141] libmachine: (no-preload-602118) DBG | I0603 12:06:34.315556   74346 retry.go:31] will retry after 3.168755933s: waiting for machine to come up
	I0603 12:06:37.485424   73179 main.go:141] libmachine: (no-preload-602118) DBG | domain no-preload-602118 has defined MAC address 52:54:00:ac:6c:91 in network mk-no-preload-602118
	I0603 12:06:37.485804   73179 main.go:141] libmachine: (no-preload-602118) DBG | unable to find current IP address of domain no-preload-602118 in network mk-no-preload-602118
	I0603 12:06:37.485840   73179 main.go:141] libmachine: (no-preload-602118) DBG | I0603 12:06:37.485767   74346 retry.go:31] will retry after 3.278336467s: waiting for machine to come up
	I0603 12:06:42.080144   73294 start.go:364] duration metric: took 4m27.397961658s to acquireMachinesLock for "default-k8s-diff-port-196710"
	I0603 12:06:42.080213   73294 start.go:96] Skipping create...Using existing machine configuration
	I0603 12:06:42.080220   73294 fix.go:54] fixHost starting: 
	I0603 12:06:42.080611   73294 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 12:06:42.080640   73294 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 12:06:42.096874   73294 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35397
	I0603 12:06:42.097286   73294 main.go:141] libmachine: () Calling .GetVersion
	I0603 12:06:42.097763   73294 main.go:141] libmachine: Using API Version  1
	I0603 12:06:42.097789   73294 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 12:06:42.098191   73294 main.go:141] libmachine: () Calling .GetMachineName
	I0603 12:06:42.098383   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .DriverName
	I0603 12:06:42.098513   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .GetState
	I0603 12:06:42.099866   73294 fix.go:112] recreateIfNeeded on default-k8s-diff-port-196710: state=Stopped err=<nil>
	I0603 12:06:42.099890   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .DriverName
	W0603 12:06:42.100034   73294 fix.go:138] unexpected machine state, will restart: <nil>
	I0603 12:06:42.102388   73294 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-196710" ...
	I0603 12:06:40.768113   73179 main.go:141] libmachine: (no-preload-602118) DBG | domain no-preload-602118 has defined MAC address 52:54:00:ac:6c:91 in network mk-no-preload-602118
	I0603 12:06:40.768689   73179 main.go:141] libmachine: (no-preload-602118) Found IP for machine: 192.168.50.245
	I0603 12:06:40.768705   73179 main.go:141] libmachine: (no-preload-602118) Reserving static IP address...
	I0603 12:06:40.768717   73179 main.go:141] libmachine: (no-preload-602118) DBG | domain no-preload-602118 has current primary IP address 192.168.50.245 and MAC address 52:54:00:ac:6c:91 in network mk-no-preload-602118
	I0603 12:06:40.769262   73179 main.go:141] libmachine: (no-preload-602118) Reserved static IP address: 192.168.50.245
	I0603 12:06:40.769291   73179 main.go:141] libmachine: (no-preload-602118) Waiting for SSH to be available...
	I0603 12:06:40.769306   73179 main.go:141] libmachine: (no-preload-602118) DBG | found host DHCP lease matching {name: "no-preload-602118", mac: "52:54:00:ac:6c:91", ip: "192.168.50.245"} in network mk-no-preload-602118: {Iface:virbr2 ExpiryTime:2024-06-03 13:06:33 +0000 UTC Type:0 Mac:52:54:00:ac:6c:91 Iaid: IPaddr:192.168.50.245 Prefix:24 Hostname:no-preload-602118 Clientid:01:52:54:00:ac:6c:91}
	I0603 12:06:40.769324   73179 main.go:141] libmachine: (no-preload-602118) DBG | skip adding static IP to network mk-no-preload-602118 - found existing host DHCP lease matching {name: "no-preload-602118", mac: "52:54:00:ac:6c:91", ip: "192.168.50.245"}
	I0603 12:06:40.769336   73179 main.go:141] libmachine: (no-preload-602118) DBG | Getting to WaitForSSH function...
	I0603 12:06:40.771708   73179 main.go:141] libmachine: (no-preload-602118) DBG | domain no-preload-602118 has defined MAC address 52:54:00:ac:6c:91 in network mk-no-preload-602118
	I0603 12:06:40.772029   73179 main.go:141] libmachine: (no-preload-602118) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:6c:91", ip: ""} in network mk-no-preload-602118: {Iface:virbr2 ExpiryTime:2024-06-03 13:06:33 +0000 UTC Type:0 Mac:52:54:00:ac:6c:91 Iaid: IPaddr:192.168.50.245 Prefix:24 Hostname:no-preload-602118 Clientid:01:52:54:00:ac:6c:91}
	I0603 12:06:40.772056   73179 main.go:141] libmachine: (no-preload-602118) DBG | domain no-preload-602118 has defined IP address 192.168.50.245 and MAC address 52:54:00:ac:6c:91 in network mk-no-preload-602118
	I0603 12:06:40.772179   73179 main.go:141] libmachine: (no-preload-602118) DBG | Using SSH client type: external
	I0603 12:06:40.772203   73179 main.go:141] libmachine: (no-preload-602118) DBG | Using SSH private key: /home/jenkins/minikube-integration/19008-7755/.minikube/machines/no-preload-602118/id_rsa (-rw-------)
	I0603 12:06:40.772247   73179 main.go:141] libmachine: (no-preload-602118) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.245 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19008-7755/.minikube/machines/no-preload-602118/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0603 12:06:40.772276   73179 main.go:141] libmachine: (no-preload-602118) DBG | About to run SSH command:
	I0603 12:06:40.772292   73179 main.go:141] libmachine: (no-preload-602118) DBG | exit 0
	I0603 12:06:40.898941   73179 main.go:141] libmachine: (no-preload-602118) DBG | SSH cmd err, output: <nil>: 
	I0603 12:06:40.899308   73179 main.go:141] libmachine: (no-preload-602118) Calling .GetConfigRaw
	I0603 12:06:40.899900   73179 main.go:141] libmachine: (no-preload-602118) Calling .GetIP
	I0603 12:06:40.902486   73179 main.go:141] libmachine: (no-preload-602118) DBG | domain no-preload-602118 has defined MAC address 52:54:00:ac:6c:91 in network mk-no-preload-602118
	I0603 12:06:40.902835   73179 main.go:141] libmachine: (no-preload-602118) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:6c:91", ip: ""} in network mk-no-preload-602118: {Iface:virbr2 ExpiryTime:2024-06-03 13:06:33 +0000 UTC Type:0 Mac:52:54:00:ac:6c:91 Iaid: IPaddr:192.168.50.245 Prefix:24 Hostname:no-preload-602118 Clientid:01:52:54:00:ac:6c:91}
	I0603 12:06:40.902863   73179 main.go:141] libmachine: (no-preload-602118) DBG | domain no-preload-602118 has defined IP address 192.168.50.245 and MAC address 52:54:00:ac:6c:91 in network mk-no-preload-602118
	I0603 12:06:40.903133   73179 profile.go:143] Saving config to /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/no-preload-602118/config.json ...
	I0603 12:06:40.903331   73179 machine.go:94] provisionDockerMachine start ...
	I0603 12:06:40.903348   73179 main.go:141] libmachine: (no-preload-602118) Calling .DriverName
	I0603 12:06:40.903530   73179 main.go:141] libmachine: (no-preload-602118) Calling .GetSSHHostname
	I0603 12:06:40.905503   73179 main.go:141] libmachine: (no-preload-602118) DBG | domain no-preload-602118 has defined MAC address 52:54:00:ac:6c:91 in network mk-no-preload-602118
	I0603 12:06:40.905783   73179 main.go:141] libmachine: (no-preload-602118) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:6c:91", ip: ""} in network mk-no-preload-602118: {Iface:virbr2 ExpiryTime:2024-06-03 13:06:33 +0000 UTC Type:0 Mac:52:54:00:ac:6c:91 Iaid: IPaddr:192.168.50.245 Prefix:24 Hostname:no-preload-602118 Clientid:01:52:54:00:ac:6c:91}
	I0603 12:06:40.905816   73179 main.go:141] libmachine: (no-preload-602118) DBG | domain no-preload-602118 has defined IP address 192.168.50.245 and MAC address 52:54:00:ac:6c:91 in network mk-no-preload-602118
	I0603 12:06:40.905911   73179 main.go:141] libmachine: (no-preload-602118) Calling .GetSSHPort
	I0603 12:06:40.906094   73179 main.go:141] libmachine: (no-preload-602118) Calling .GetSSHKeyPath
	I0603 12:06:40.906253   73179 main.go:141] libmachine: (no-preload-602118) Calling .GetSSHKeyPath
	I0603 12:06:40.906416   73179 main.go:141] libmachine: (no-preload-602118) Calling .GetSSHUsername
	I0603 12:06:40.906579   73179 main.go:141] libmachine: Using SSH client type: native
	I0603 12:06:40.906760   73179 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.50.245 22 <nil> <nil>}
	I0603 12:06:40.906771   73179 main.go:141] libmachine: About to run SSH command:
	hostname
	I0603 12:06:41.015416   73179 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0603 12:06:41.015443   73179 main.go:141] libmachine: (no-preload-602118) Calling .GetMachineName
	I0603 12:06:41.015832   73179 buildroot.go:166] provisioning hostname "no-preload-602118"
	I0603 12:06:41.015861   73179 main.go:141] libmachine: (no-preload-602118) Calling .GetMachineName
	I0603 12:06:41.016078   73179 main.go:141] libmachine: (no-preload-602118) Calling .GetSSHHostname
	I0603 12:06:41.018606   73179 main.go:141] libmachine: (no-preload-602118) DBG | domain no-preload-602118 has defined MAC address 52:54:00:ac:6c:91 in network mk-no-preload-602118
	I0603 12:06:41.018898   73179 main.go:141] libmachine: (no-preload-602118) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:6c:91", ip: ""} in network mk-no-preload-602118: {Iface:virbr2 ExpiryTime:2024-06-03 13:06:33 +0000 UTC Type:0 Mac:52:54:00:ac:6c:91 Iaid: IPaddr:192.168.50.245 Prefix:24 Hostname:no-preload-602118 Clientid:01:52:54:00:ac:6c:91}
	I0603 12:06:41.018928   73179 main.go:141] libmachine: (no-preload-602118) DBG | domain no-preload-602118 has defined IP address 192.168.50.245 and MAC address 52:54:00:ac:6c:91 in network mk-no-preload-602118
	I0603 12:06:41.019125   73179 main.go:141] libmachine: (no-preload-602118) Calling .GetSSHPort
	I0603 12:06:41.019310   73179 main.go:141] libmachine: (no-preload-602118) Calling .GetSSHKeyPath
	I0603 12:06:41.019476   73179 main.go:141] libmachine: (no-preload-602118) Calling .GetSSHKeyPath
	I0603 12:06:41.019597   73179 main.go:141] libmachine: (no-preload-602118) Calling .GetSSHUsername
	I0603 12:06:41.019753   73179 main.go:141] libmachine: Using SSH client type: native
	I0603 12:06:41.019948   73179 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.50.245 22 <nil> <nil>}
	I0603 12:06:41.019961   73179 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-602118 && echo "no-preload-602118" | sudo tee /etc/hostname
	I0603 12:06:41.145267   73179 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-602118
	
	I0603 12:06:41.145298   73179 main.go:141] libmachine: (no-preload-602118) Calling .GetSSHHostname
	I0603 12:06:41.148117   73179 main.go:141] libmachine: (no-preload-602118) DBG | domain no-preload-602118 has defined MAC address 52:54:00:ac:6c:91 in network mk-no-preload-602118
	I0603 12:06:41.148416   73179 main.go:141] libmachine: (no-preload-602118) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:6c:91", ip: ""} in network mk-no-preload-602118: {Iface:virbr2 ExpiryTime:2024-06-03 13:06:33 +0000 UTC Type:0 Mac:52:54:00:ac:6c:91 Iaid: IPaddr:192.168.50.245 Prefix:24 Hostname:no-preload-602118 Clientid:01:52:54:00:ac:6c:91}
	I0603 12:06:41.148444   73179 main.go:141] libmachine: (no-preload-602118) DBG | domain no-preload-602118 has defined IP address 192.168.50.245 and MAC address 52:54:00:ac:6c:91 in network mk-no-preload-602118
	I0603 12:06:41.148692   73179 main.go:141] libmachine: (no-preload-602118) Calling .GetSSHPort
	I0603 12:06:41.148914   73179 main.go:141] libmachine: (no-preload-602118) Calling .GetSSHKeyPath
	I0603 12:06:41.149068   73179 main.go:141] libmachine: (no-preload-602118) Calling .GetSSHKeyPath
	I0603 12:06:41.149199   73179 main.go:141] libmachine: (no-preload-602118) Calling .GetSSHUsername
	I0603 12:06:41.149316   73179 main.go:141] libmachine: Using SSH client type: native
	I0603 12:06:41.149475   73179 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.50.245 22 <nil> <nil>}
	I0603 12:06:41.149490   73179 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-602118' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-602118/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-602118' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0603 12:06:41.267803   73179 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0603 12:06:41.267841   73179 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19008-7755/.minikube CaCertPath:/home/jenkins/minikube-integration/19008-7755/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19008-7755/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19008-7755/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19008-7755/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19008-7755/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19008-7755/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19008-7755/.minikube}
	I0603 12:06:41.267859   73179 buildroot.go:174] setting up certificates
	I0603 12:06:41.267869   73179 provision.go:84] configureAuth start
	I0603 12:06:41.267877   73179 main.go:141] libmachine: (no-preload-602118) Calling .GetMachineName
	I0603 12:06:41.268155   73179 main.go:141] libmachine: (no-preload-602118) Calling .GetIP
	I0603 12:06:41.270862   73179 main.go:141] libmachine: (no-preload-602118) DBG | domain no-preload-602118 has defined MAC address 52:54:00:ac:6c:91 in network mk-no-preload-602118
	I0603 12:06:41.271249   73179 main.go:141] libmachine: (no-preload-602118) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:6c:91", ip: ""} in network mk-no-preload-602118: {Iface:virbr2 ExpiryTime:2024-06-03 13:06:33 +0000 UTC Type:0 Mac:52:54:00:ac:6c:91 Iaid: IPaddr:192.168.50.245 Prefix:24 Hostname:no-preload-602118 Clientid:01:52:54:00:ac:6c:91}
	I0603 12:06:41.271271   73179 main.go:141] libmachine: (no-preload-602118) DBG | domain no-preload-602118 has defined IP address 192.168.50.245 and MAC address 52:54:00:ac:6c:91 in network mk-no-preload-602118
	I0603 12:06:41.271414   73179 main.go:141] libmachine: (no-preload-602118) Calling .GetSSHHostname
	I0603 12:06:41.273376   73179 main.go:141] libmachine: (no-preload-602118) DBG | domain no-preload-602118 has defined MAC address 52:54:00:ac:6c:91 in network mk-no-preload-602118
	I0603 12:06:41.273689   73179 main.go:141] libmachine: (no-preload-602118) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:6c:91", ip: ""} in network mk-no-preload-602118: {Iface:virbr2 ExpiryTime:2024-06-03 13:06:33 +0000 UTC Type:0 Mac:52:54:00:ac:6c:91 Iaid: IPaddr:192.168.50.245 Prefix:24 Hostname:no-preload-602118 Clientid:01:52:54:00:ac:6c:91}
	I0603 12:06:41.273715   73179 main.go:141] libmachine: (no-preload-602118) DBG | domain no-preload-602118 has defined IP address 192.168.50.245 and MAC address 52:54:00:ac:6c:91 in network mk-no-preload-602118
	I0603 12:06:41.273831   73179 provision.go:143] copyHostCerts
	I0603 12:06:41.273907   73179 exec_runner.go:144] found /home/jenkins/minikube-integration/19008-7755/.minikube/ca.pem, removing ...
	I0603 12:06:41.273926   73179 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19008-7755/.minikube/ca.pem
	I0603 12:06:41.274002   73179 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19008-7755/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19008-7755/.minikube/ca.pem (1082 bytes)
	I0603 12:06:41.274128   73179 exec_runner.go:144] found /home/jenkins/minikube-integration/19008-7755/.minikube/cert.pem, removing ...
	I0603 12:06:41.274138   73179 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19008-7755/.minikube/cert.pem
	I0603 12:06:41.274173   73179 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19008-7755/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19008-7755/.minikube/cert.pem (1123 bytes)
	I0603 12:06:41.274248   73179 exec_runner.go:144] found /home/jenkins/minikube-integration/19008-7755/.minikube/key.pem, removing ...
	I0603 12:06:41.274259   73179 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19008-7755/.minikube/key.pem
	I0603 12:06:41.274296   73179 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19008-7755/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19008-7755/.minikube/key.pem (1679 bytes)
	I0603 12:06:41.274369   73179 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19008-7755/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19008-7755/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19008-7755/.minikube/certs/ca-key.pem org=jenkins.no-preload-602118 san=[127.0.0.1 192.168.50.245 localhost minikube no-preload-602118]
	I0603 12:06:41.377976   73179 provision.go:177] copyRemoteCerts
	I0603 12:06:41.378029   73179 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0603 12:06:41.378053   73179 main.go:141] libmachine: (no-preload-602118) Calling .GetSSHHostname
	I0603 12:06:41.380502   73179 main.go:141] libmachine: (no-preload-602118) DBG | domain no-preload-602118 has defined MAC address 52:54:00:ac:6c:91 in network mk-no-preload-602118
	I0603 12:06:41.380818   73179 main.go:141] libmachine: (no-preload-602118) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:6c:91", ip: ""} in network mk-no-preload-602118: {Iface:virbr2 ExpiryTime:2024-06-03 13:06:33 +0000 UTC Type:0 Mac:52:54:00:ac:6c:91 Iaid: IPaddr:192.168.50.245 Prefix:24 Hostname:no-preload-602118 Clientid:01:52:54:00:ac:6c:91}
	I0603 12:06:41.380839   73179 main.go:141] libmachine: (no-preload-602118) DBG | domain no-preload-602118 has defined IP address 192.168.50.245 and MAC address 52:54:00:ac:6c:91 in network mk-no-preload-602118
	I0603 12:06:41.381002   73179 main.go:141] libmachine: (no-preload-602118) Calling .GetSSHPort
	I0603 12:06:41.381171   73179 main.go:141] libmachine: (no-preload-602118) Calling .GetSSHKeyPath
	I0603 12:06:41.381345   73179 main.go:141] libmachine: (no-preload-602118) Calling .GetSSHUsername
	I0603 12:06:41.381462   73179 sshutil.go:53] new ssh client: &{IP:192.168.50.245 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19008-7755/.minikube/machines/no-preload-602118/id_rsa Username:docker}
	I0603 12:06:41.465434   73179 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0603 12:06:41.492636   73179 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0603 12:06:41.516229   73179 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0603 12:06:41.538729   73179 provision.go:87] duration metric: took 270.850705ms to configureAuth
	I0603 12:06:41.538751   73179 buildroot.go:189] setting minikube options for container-runtime
	I0603 12:06:41.538913   73179 config.go:182] Loaded profile config "no-preload-602118": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0603 12:06:41.538998   73179 main.go:141] libmachine: (no-preload-602118) Calling .GetSSHHostname
	I0603 12:06:41.541514   73179 main.go:141] libmachine: (no-preload-602118) DBG | domain no-preload-602118 has defined MAC address 52:54:00:ac:6c:91 in network mk-no-preload-602118
	I0603 12:06:41.541818   73179 main.go:141] libmachine: (no-preload-602118) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:6c:91", ip: ""} in network mk-no-preload-602118: {Iface:virbr2 ExpiryTime:2024-06-03 13:06:33 +0000 UTC Type:0 Mac:52:54:00:ac:6c:91 Iaid: IPaddr:192.168.50.245 Prefix:24 Hostname:no-preload-602118 Clientid:01:52:54:00:ac:6c:91}
	I0603 12:06:41.541843   73179 main.go:141] libmachine: (no-preload-602118) DBG | domain no-preload-602118 has defined IP address 192.168.50.245 and MAC address 52:54:00:ac:6c:91 in network mk-no-preload-602118
	I0603 12:06:41.541966   73179 main.go:141] libmachine: (no-preload-602118) Calling .GetSSHPort
	I0603 12:06:41.542166   73179 main.go:141] libmachine: (no-preload-602118) Calling .GetSSHKeyPath
	I0603 12:06:41.542350   73179 main.go:141] libmachine: (no-preload-602118) Calling .GetSSHKeyPath
	I0603 12:06:41.542483   73179 main.go:141] libmachine: (no-preload-602118) Calling .GetSSHUsername
	I0603 12:06:41.542666   73179 main.go:141] libmachine: Using SSH client type: native
	I0603 12:06:41.542809   73179 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.50.245 22 <nil> <nil>}
	I0603 12:06:41.542823   73179 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0603 12:06:41.837735   73179 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0603 12:06:41.837766   73179 machine.go:97] duration metric: took 934.421104ms to provisionDockerMachine
	I0603 12:06:41.837780   73179 start.go:293] postStartSetup for "no-preload-602118" (driver="kvm2")
	I0603 12:06:41.837791   73179 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0603 12:06:41.837808   73179 main.go:141] libmachine: (no-preload-602118) Calling .DriverName
	I0603 12:06:41.838173   73179 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0603 12:06:41.838200   73179 main.go:141] libmachine: (no-preload-602118) Calling .GetSSHHostname
	I0603 12:06:41.840498   73179 main.go:141] libmachine: (no-preload-602118) DBG | domain no-preload-602118 has defined MAC address 52:54:00:ac:6c:91 in network mk-no-preload-602118
	I0603 12:06:41.840832   73179 main.go:141] libmachine: (no-preload-602118) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:6c:91", ip: ""} in network mk-no-preload-602118: {Iface:virbr2 ExpiryTime:2024-06-03 13:06:33 +0000 UTC Type:0 Mac:52:54:00:ac:6c:91 Iaid: IPaddr:192.168.50.245 Prefix:24 Hostname:no-preload-602118 Clientid:01:52:54:00:ac:6c:91}
	I0603 12:06:41.840861   73179 main.go:141] libmachine: (no-preload-602118) DBG | domain no-preload-602118 has defined IP address 192.168.50.245 and MAC address 52:54:00:ac:6c:91 in network mk-no-preload-602118
	I0603 12:06:41.840990   73179 main.go:141] libmachine: (no-preload-602118) Calling .GetSSHPort
	I0603 12:06:41.841179   73179 main.go:141] libmachine: (no-preload-602118) Calling .GetSSHKeyPath
	I0603 12:06:41.841351   73179 main.go:141] libmachine: (no-preload-602118) Calling .GetSSHUsername
	I0603 12:06:41.841473   73179 sshutil.go:53] new ssh client: &{IP:192.168.50.245 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19008-7755/.minikube/machines/no-preload-602118/id_rsa Username:docker}
	I0603 12:06:41.926168   73179 ssh_runner.go:195] Run: cat /etc/os-release
	I0603 12:06:41.930420   73179 info.go:137] Remote host: Buildroot 2023.02.9
	I0603 12:06:41.930450   73179 filesync.go:126] Scanning /home/jenkins/minikube-integration/19008-7755/.minikube/addons for local assets ...
	I0603 12:06:41.930509   73179 filesync.go:126] Scanning /home/jenkins/minikube-integration/19008-7755/.minikube/files for local assets ...
	I0603 12:06:41.930583   73179 filesync.go:149] local asset: /home/jenkins/minikube-integration/19008-7755/.minikube/files/etc/ssl/certs/150282.pem -> 150282.pem in /etc/ssl/certs
	I0603 12:06:41.930661   73179 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0603 12:06:41.940412   73179 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/files/etc/ssl/certs/150282.pem --> /etc/ssl/certs/150282.pem (1708 bytes)
	I0603 12:06:41.963912   73179 start.go:296] duration metric: took 126.115944ms for postStartSetup
	I0603 12:06:41.963949   73179 fix.go:56] duration metric: took 19.520525784s for fixHost
	I0603 12:06:41.963991   73179 main.go:141] libmachine: (no-preload-602118) Calling .GetSSHHostname
	I0603 12:06:41.966591   73179 main.go:141] libmachine: (no-preload-602118) DBG | domain no-preload-602118 has defined MAC address 52:54:00:ac:6c:91 in network mk-no-preload-602118
	I0603 12:06:41.966928   73179 main.go:141] libmachine: (no-preload-602118) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:6c:91", ip: ""} in network mk-no-preload-602118: {Iface:virbr2 ExpiryTime:2024-06-03 13:06:33 +0000 UTC Type:0 Mac:52:54:00:ac:6c:91 Iaid: IPaddr:192.168.50.245 Prefix:24 Hostname:no-preload-602118 Clientid:01:52:54:00:ac:6c:91}
	I0603 12:06:41.966946   73179 main.go:141] libmachine: (no-preload-602118) DBG | domain no-preload-602118 has defined IP address 192.168.50.245 and MAC address 52:54:00:ac:6c:91 in network mk-no-preload-602118
	I0603 12:06:41.967081   73179 main.go:141] libmachine: (no-preload-602118) Calling .GetSSHPort
	I0603 12:06:41.967272   73179 main.go:141] libmachine: (no-preload-602118) Calling .GetSSHKeyPath
	I0603 12:06:41.967423   73179 main.go:141] libmachine: (no-preload-602118) Calling .GetSSHKeyPath
	I0603 12:06:41.967608   73179 main.go:141] libmachine: (no-preload-602118) Calling .GetSSHUsername
	I0603 12:06:41.967762   73179 main.go:141] libmachine: Using SSH client type: native
	I0603 12:06:41.967918   73179 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.50.245 22 <nil> <nil>}
	I0603 12:06:41.967927   73179 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0603 12:06:42.079982   73179 main.go:141] libmachine: SSH cmd err, output: <nil>: 1717416402.057236225
	
	I0603 12:06:42.080009   73179 fix.go:216] guest clock: 1717416402.057236225
	I0603 12:06:42.080015   73179 fix.go:229] Guest: 2024-06-03 12:06:42.057236225 +0000 UTC Remote: 2024-06-03 12:06:41.963952729 +0000 UTC m=+276.629989589 (delta=93.283496ms)
	I0603 12:06:42.080041   73179 fix.go:200] guest clock delta is within tolerance: 93.283496ms
	I0603 12:06:42.080045   73179 start.go:83] releasing machines lock for "no-preload-602118", held for 19.636648914s
	I0603 12:06:42.080070   73179 main.go:141] libmachine: (no-preload-602118) Calling .DriverName
	I0603 12:06:42.080311   73179 main.go:141] libmachine: (no-preload-602118) Calling .GetIP
	I0603 12:06:42.083162   73179 main.go:141] libmachine: (no-preload-602118) DBG | domain no-preload-602118 has defined MAC address 52:54:00:ac:6c:91 in network mk-no-preload-602118
	I0603 12:06:42.083519   73179 main.go:141] libmachine: (no-preload-602118) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:6c:91", ip: ""} in network mk-no-preload-602118: {Iface:virbr2 ExpiryTime:2024-06-03 13:06:33 +0000 UTC Type:0 Mac:52:54:00:ac:6c:91 Iaid: IPaddr:192.168.50.245 Prefix:24 Hostname:no-preload-602118 Clientid:01:52:54:00:ac:6c:91}
	I0603 12:06:42.083544   73179 main.go:141] libmachine: (no-preload-602118) DBG | domain no-preload-602118 has defined IP address 192.168.50.245 and MAC address 52:54:00:ac:6c:91 in network mk-no-preload-602118
	I0603 12:06:42.083733   73179 main.go:141] libmachine: (no-preload-602118) Calling .DriverName
	I0603 12:06:42.084238   73179 main.go:141] libmachine: (no-preload-602118) Calling .DriverName
	I0603 12:06:42.084405   73179 main.go:141] libmachine: (no-preload-602118) Calling .DriverName
	I0603 12:06:42.084458   73179 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0603 12:06:42.084528   73179 main.go:141] libmachine: (no-preload-602118) Calling .GetSSHHostname
	I0603 12:06:42.084607   73179 ssh_runner.go:195] Run: cat /version.json
	I0603 12:06:42.084632   73179 main.go:141] libmachine: (no-preload-602118) Calling .GetSSHHostname
	I0603 12:06:42.087630   73179 main.go:141] libmachine: (no-preload-602118) DBG | domain no-preload-602118 has defined MAC address 52:54:00:ac:6c:91 in network mk-no-preload-602118
	I0603 12:06:42.087927   73179 main.go:141] libmachine: (no-preload-602118) DBG | domain no-preload-602118 has defined MAC address 52:54:00:ac:6c:91 in network mk-no-preload-602118
	I0603 12:06:42.087958   73179 main.go:141] libmachine: (no-preload-602118) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:6c:91", ip: ""} in network mk-no-preload-602118: {Iface:virbr2 ExpiryTime:2024-06-03 13:06:33 +0000 UTC Type:0 Mac:52:54:00:ac:6c:91 Iaid: IPaddr:192.168.50.245 Prefix:24 Hostname:no-preload-602118 Clientid:01:52:54:00:ac:6c:91}
	I0603 12:06:42.087981   73179 main.go:141] libmachine: (no-preload-602118) DBG | domain no-preload-602118 has defined IP address 192.168.50.245 and MAC address 52:54:00:ac:6c:91 in network mk-no-preload-602118
	I0603 12:06:42.088083   73179 main.go:141] libmachine: (no-preload-602118) Calling .GetSSHPort
	I0603 12:06:42.088261   73179 main.go:141] libmachine: (no-preload-602118) Calling .GetSSHKeyPath
	I0603 12:06:42.088441   73179 main.go:141] libmachine: (no-preload-602118) Calling .GetSSHUsername
	I0603 12:06:42.088463   73179 main.go:141] libmachine: (no-preload-602118) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:6c:91", ip: ""} in network mk-no-preload-602118: {Iface:virbr2 ExpiryTime:2024-06-03 13:06:33 +0000 UTC Type:0 Mac:52:54:00:ac:6c:91 Iaid: IPaddr:192.168.50.245 Prefix:24 Hostname:no-preload-602118 Clientid:01:52:54:00:ac:6c:91}
	I0603 12:06:42.088507   73179 main.go:141] libmachine: (no-preload-602118) DBG | domain no-preload-602118 has defined IP address 192.168.50.245 and MAC address 52:54:00:ac:6c:91 in network mk-no-preload-602118
	I0603 12:06:42.088592   73179 sshutil.go:53] new ssh client: &{IP:192.168.50.245 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19008-7755/.minikube/machines/no-preload-602118/id_rsa Username:docker}
	I0603 12:06:42.088666   73179 main.go:141] libmachine: (no-preload-602118) Calling .GetSSHPort
	I0603 12:06:42.088800   73179 main.go:141] libmachine: (no-preload-602118) Calling .GetSSHKeyPath
	I0603 12:06:42.088961   73179 main.go:141] libmachine: (no-preload-602118) Calling .GetSSHUsername
	I0603 12:06:42.089101   73179 sshutil.go:53] new ssh client: &{IP:192.168.50.245 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19008-7755/.minikube/machines/no-preload-602118/id_rsa Username:docker}
	I0603 12:06:42.192400   73179 ssh_runner.go:195] Run: systemctl --version
	I0603 12:06:42.198773   73179 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0603 12:06:42.345931   73179 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0603 12:06:42.351818   73179 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0603 12:06:42.351877   73179 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0603 12:06:42.368582   73179 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0603 12:06:42.368609   73179 start.go:494] detecting cgroup driver to use...
	I0603 12:06:42.368680   73179 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0603 12:06:42.384411   73179 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0603 12:06:42.398006   73179 docker.go:217] disabling cri-docker service (if available) ...
	I0603 12:06:42.398052   73179 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0603 12:06:42.412680   73179 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0603 12:06:42.427157   73179 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0603 12:06:42.537162   73179 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0603 12:06:42.683438   73179 docker.go:233] disabling docker service ...
	I0603 12:06:42.683505   73179 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0603 12:06:42.697969   73179 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0603 12:06:42.711164   73179 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0603 12:06:42.835194   73179 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0603 12:06:42.947116   73179 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0603 12:06:42.961398   73179 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0603 12:06:42.980179   73179 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0603 12:06:42.980227   73179 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 12:06:42.990583   73179 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0603 12:06:42.990642   73179 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 12:06:43.001031   73179 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 12:06:43.012124   73179 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 12:06:43.023143   73179 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0603 12:06:43.034535   73179 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 12:06:43.045854   73179 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 12:06:43.063071   73179 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
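Taken together, the sed edits above leave /etc/crio/crio.conf.d/02-crio.conf with roughly the following settings (reconstructed from the commands; the file will contain other lines as well):

	pause_image = "registry.k8s.io/pause:3.9"
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]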
	I0603 12:06:43.074257   73179 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0603 12:06:43.083914   73179 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0603 12:06:43.083965   73179 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0603 12:06:43.098285   73179 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
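The failed sysctl probe above is the expected path on a freshly booted guest: br_netfilter is not loaded yet, so the /proc entry does not exist. Condensed, the sequence amounts to the following (commands as already shown in the log):

	sudo sysctl net.bridge.bridge-nf-call-iptables || sudo modprobe br_netfilter
	sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"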
	I0603 12:06:43.108034   73179 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0603 12:06:43.219068   73179 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0603 12:06:43.376591   73179 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0603 12:06:43.376655   73179 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0603 12:06:43.381868   73179 start.go:562] Will wait 60s for crictl version
	I0603 12:06:43.381939   73179 ssh_runner.go:195] Run: which crictl
	I0603 12:06:43.385730   73179 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0603 12:06:43.423331   73179 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0603 12:06:43.423428   73179 ssh_runner.go:195] Run: crio --version
	I0603 12:06:43.450760   73179 ssh_runner.go:195] Run: crio --version
	I0603 12:06:43.479690   73179 out.go:177] * Preparing Kubernetes v1.30.1 on CRI-O 1.29.1 ...
	I0603 12:06:42.103653   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .Start
	I0603 12:06:42.103818   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Ensuring networks are active...
	I0603 12:06:42.104660   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Ensuring network default is active
	I0603 12:06:42.104985   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Ensuring network mk-default-k8s-diff-port-196710 is active
	I0603 12:06:42.105332   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Getting domain xml...
	I0603 12:06:42.106264   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Creating domain...
	I0603 12:06:43.347118   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Waiting to get IP...
	I0603 12:06:43.347855   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | domain default-k8s-diff-port-196710 has defined MAC address 52:54:00:9c:61:49 in network mk-default-k8s-diff-port-196710
	I0603 12:06:43.348279   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | unable to find current IP address of domain default-k8s-diff-port-196710 in network mk-default-k8s-diff-port-196710
	I0603 12:06:43.348337   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | I0603 12:06:43.348249   74483 retry.go:31] will retry after 307.61274ms: waiting for machine to come up
	I0603 12:06:43.657720   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | domain default-k8s-diff-port-196710 has defined MAC address 52:54:00:9c:61:49 in network mk-default-k8s-diff-port-196710
	I0603 12:06:43.658162   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | unable to find current IP address of domain default-k8s-diff-port-196710 in network mk-default-k8s-diff-port-196710
	I0603 12:06:43.658188   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | I0603 12:06:43.658129   74483 retry.go:31] will retry after 387.079794ms: waiting for machine to come up
	I0603 12:06:44.046798   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | domain default-k8s-diff-port-196710 has defined MAC address 52:54:00:9c:61:49 in network mk-default-k8s-diff-port-196710
	I0603 12:06:44.047345   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | unable to find current IP address of domain default-k8s-diff-port-196710 in network mk-default-k8s-diff-port-196710
	I0603 12:06:44.047376   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | I0603 12:06:44.047279   74483 retry.go:31] will retry after 482.224139ms: waiting for machine to come up
	I0603 12:06:44.531107   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | domain default-k8s-diff-port-196710 has defined MAC address 52:54:00:9c:61:49 in network mk-default-k8s-diff-port-196710
	I0603 12:06:44.531588   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | unable to find current IP address of domain default-k8s-diff-port-196710 in network mk-default-k8s-diff-port-196710
	I0603 12:06:44.531615   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | I0603 12:06:44.531542   74483 retry.go:31] will retry after 438.288195ms: waiting for machine to come up
	I0603 12:06:43.481020   73179 main.go:141] libmachine: (no-preload-602118) Calling .GetIP
	I0603 12:06:43.483887   73179 main.go:141] libmachine: (no-preload-602118) DBG | domain no-preload-602118 has defined MAC address 52:54:00:ac:6c:91 in network mk-no-preload-602118
	I0603 12:06:43.484297   73179 main.go:141] libmachine: (no-preload-602118) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:6c:91", ip: ""} in network mk-no-preload-602118: {Iface:virbr2 ExpiryTime:2024-06-03 13:06:33 +0000 UTC Type:0 Mac:52:54:00:ac:6c:91 Iaid: IPaddr:192.168.50.245 Prefix:24 Hostname:no-preload-602118 Clientid:01:52:54:00:ac:6c:91}
	I0603 12:06:43.484324   73179 main.go:141] libmachine: (no-preload-602118) DBG | domain no-preload-602118 has defined IP address 192.168.50.245 and MAC address 52:54:00:ac:6c:91 in network mk-no-preload-602118
	I0603 12:06:43.484533   73179 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0603 12:06:43.488769   73179 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
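The /etc/hosts edit here uses a filter-and-replace pattern: strip any existing entry for the name, append the fresh one to a temp file, then copy the temp file over /etc/hosts in a single privileged step. A minimal sketch of the pattern (values taken from the log line above):

	ENTRY_IP=192.168.50.1; ENTRY_NAME=host.minikube.internal
	{ grep -v $'\t'"$ENTRY_NAME"'$' /etc/hosts; printf '%s\t%s\n' "$ENTRY_IP" "$ENTRY_NAME"; } > /tmp/h.$$
	sudo cp /tmp/h.$$ /etc/hosts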
	I0603 12:06:43.501433   73179 kubeadm.go:877] updating cluster {Name:no-preload-602118 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.30.1 ClusterName:no-preload-602118 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.245 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:f
alse MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0603 12:06:43.501583   73179 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime crio
	I0603 12:06:43.501644   73179 ssh_runner.go:195] Run: sudo crictl images --output json
	I0603 12:06:43.537382   73179 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.1". assuming images are not preloaded.
	I0603 12:06:43.537407   73179 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.30.1 registry.k8s.io/kube-controller-manager:v1.30.1 registry.k8s.io/kube-scheduler:v1.30.1 registry.k8s.io/kube-proxy:v1.30.1 registry.k8s.io/pause:3.9 registry.k8s.io/etcd:3.5.12-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0603 12:06:43.537504   73179 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.30.1
	I0603 12:06:43.537483   73179 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.30.1
	I0603 12:06:43.537484   73179 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.12-0
	I0603 12:06:43.537597   73179 image.go:134] retrieving image: registry.k8s.io/pause:3.9
	I0603 12:06:43.537483   73179 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0603 12:06:43.537618   73179 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.30.1
	I0603 12:06:43.537612   73179 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.30.1
	I0603 12:06:43.537771   73179 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0603 12:06:43.539200   73179 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.30.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.30.1
	I0603 12:06:43.539472   73179 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.30.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.30.1
	I0603 12:06:43.539491   73179 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.30.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.30.1
	I0603 12:06:43.539504   73179 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.30.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.30.1
	I0603 12:06:43.539530   73179 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.12-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.12-0
	I0603 12:06:43.539565   73179 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0603 12:06:43.539472   73179 image.go:177] daemon lookup for registry.k8s.io/pause:3.9: Error response from daemon: No such image: registry.k8s.io/pause:3.9
	I0603 12:06:43.539934   73179 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0603 12:06:43.694144   73179 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.12-0
	I0603 12:06:43.714990   73179 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.30.1
	I0603 12:06:43.720270   73179 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.30.1
	I0603 12:06:43.734481   73179 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.30.1
	I0603 12:06:43.751928   73179 cache_images.go:116] "registry.k8s.io/etcd:3.5.12-0" needs transfer: "registry.k8s.io/etcd:3.5.12-0" does not exist at hash "3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899" in container runtime
	I0603 12:06:43.751970   73179 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.12-0
	I0603 12:06:43.752018   73179 ssh_runner.go:195] Run: which crictl
	I0603 12:06:43.780362   73179 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.30.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.30.1" does not exist at hash "91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a" in container runtime
	I0603 12:06:43.780408   73179 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.30.1
	I0603 12:06:43.780455   73179 ssh_runner.go:195] Run: which crictl
	I0603 12:06:43.798376   73179 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.30.1" needs transfer: "registry.k8s.io/kube-proxy:v1.30.1" does not exist at hash "747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd" in container runtime
	I0603 12:06:43.798415   73179 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.30.1
	I0603 12:06:43.798465   73179 ssh_runner.go:195] Run: which crictl
	I0603 12:06:43.801422   73179 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.9
	I0603 12:06:43.811338   73179 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0603 12:06:43.823969   73179 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.30.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.30.1" does not exist at hash "25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c" in container runtime
	I0603 12:06:43.824052   73179 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.30.1
	I0603 12:06:43.823979   73179 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.12-0
	I0603 12:06:43.824096   73179 ssh_runner.go:195] Run: which crictl
	I0603 12:06:43.824106   73179 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.30.1
	I0603 12:06:43.824088   73179 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.30.1
	I0603 12:06:43.861957   73179 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.30.1
	I0603 12:06:44.001291   73179 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0603 12:06:44.001312   73179 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/19008-7755/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.30.1
	I0603 12:06:44.001344   73179 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0603 12:06:44.001390   73179 ssh_runner.go:195] Run: which crictl
	I0603 12:06:44.001454   73179 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.30.1
	I0603 12:06:44.001472   73179 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/19008-7755/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.12-0
	I0603 12:06:44.001405   73179 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.30.1
	I0603 12:06:44.001544   73179 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.12-0
	I0603 12:06:44.001405   73179 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/19008-7755/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.30.1
	I0603 12:06:44.001520   73179 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.30.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.30.1" does not exist at hash "a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035" in container runtime
	I0603 12:06:44.001622   73179 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.30.1
	I0603 12:06:44.001627   73179 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.30.1
	I0603 12:06:44.001685   73179 ssh_runner.go:195] Run: which crictl
	I0603 12:06:44.014801   73179 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.30.1 (exists)
	I0603 12:06:44.014820   73179 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.30.1
	I0603 12:06:44.014858   73179 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.30.1
	I0603 12:06:44.049018   73179 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.12-0 (exists)
	I0603 12:06:44.049106   73179 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/19008-7755/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.30.1
	I0603 12:06:44.049138   73179 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0603 12:06:44.049149   73179 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.30.1
	I0603 12:06:44.049193   73179 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.30.1
	I0603 12:06:44.049202   73179 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.30.1 (exists)
	I0603 12:06:44.414960   73179 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0603 12:06:44.971603   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | domain default-k8s-diff-port-196710 has defined MAC address 52:54:00:9c:61:49 in network mk-default-k8s-diff-port-196710
	I0603 12:06:44.971986   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | unable to find current IP address of domain default-k8s-diff-port-196710 in network mk-default-k8s-diff-port-196710
	I0603 12:06:44.972027   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | I0603 12:06:44.971941   74483 retry.go:31] will retry after 696.415219ms: waiting for machine to come up
	I0603 12:06:45.669711   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | domain default-k8s-diff-port-196710 has defined MAC address 52:54:00:9c:61:49 in network mk-default-k8s-diff-port-196710
	I0603 12:06:45.670032   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | unable to find current IP address of domain default-k8s-diff-port-196710 in network mk-default-k8s-diff-port-196710
	I0603 12:06:45.670064   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | I0603 12:06:45.670011   74483 retry.go:31] will retry after 706.751938ms: waiting for machine to come up
	I0603 12:06:46.378097   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | domain default-k8s-diff-port-196710 has defined MAC address 52:54:00:9c:61:49 in network mk-default-k8s-diff-port-196710
	I0603 12:06:46.378510   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | unable to find current IP address of domain default-k8s-diff-port-196710 in network mk-default-k8s-diff-port-196710
	I0603 12:06:46.378552   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | I0603 12:06:46.378484   74483 retry.go:31] will retry after 1.039219665s: waiting for machine to come up
	I0603 12:06:47.419138   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | domain default-k8s-diff-port-196710 has defined MAC address 52:54:00:9c:61:49 in network mk-default-k8s-diff-port-196710
	I0603 12:06:47.419573   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | unable to find current IP address of domain default-k8s-diff-port-196710 in network mk-default-k8s-diff-port-196710
	I0603 12:06:47.419601   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | I0603 12:06:47.419520   74483 retry.go:31] will retry after 1.138110516s: waiting for machine to come up
	I0603 12:06:48.559728   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | domain default-k8s-diff-port-196710 has defined MAC address 52:54:00:9c:61:49 in network mk-default-k8s-diff-port-196710
	I0603 12:06:48.560297   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | unable to find current IP address of domain default-k8s-diff-port-196710 in network mk-default-k8s-diff-port-196710
	I0603 12:06:48.560320   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | I0603 12:06:48.560259   74483 retry.go:31] will retry after 1.175521014s: waiting for machine to come up
	I0603 12:06:46.011238   73179 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.30.1: (1.996357708s)
	I0603 12:06:46.011274   73179 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/19008-7755/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.30.1 from cache
	I0603 12:06:46.011313   73179 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.12-0
	I0603 12:06:46.011322   73179 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.30.1: (1.96210268s)
	I0603 12:06:46.011332   73179 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.30.1: (1.962169544s)
	I0603 12:06:46.011353   73179 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.30.1 (exists)
	I0603 12:06:46.011367   73179 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/19008-7755/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.30.1
	I0603 12:06:46.011386   73179 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.12-0
	I0603 12:06:46.011397   73179 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1: (1.962226902s)
	I0603 12:06:46.011424   73179 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/19008-7755/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0603 12:06:46.011426   73179 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (1.596439345s)
	I0603 12:06:46.011451   73179 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.30.1
	I0603 12:06:46.011474   73179 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0603 12:06:46.011483   73179 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.1
	I0603 12:06:46.011508   73179 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0603 12:06:46.011545   73179 ssh_runner.go:195] Run: which crictl
	I0603 12:06:46.020596   73179 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.30.1 (exists)
	I0603 12:06:46.020599   73179 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0603 12:06:46.020749   73179 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I0603 12:06:49.747952   73179 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (3.727320079s)
	I0603 12:06:49.748008   73179 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/19008-7755/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0603 12:06:49.748024   73179 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.12-0: (3.736616522s)
	I0603 12:06:49.748048   73179 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/19008-7755/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.12-0 from cache
	I0603 12:06:49.748074   73179 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.30.1
	I0603 12:06:49.748108   73179 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0603 12:06:49.748120   73179 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.30.1
	I0603 12:06:49.753125   73179 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0603 12:06:49.737515   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | domain default-k8s-diff-port-196710 has defined MAC address 52:54:00:9c:61:49 in network mk-default-k8s-diff-port-196710
	I0603 12:06:49.738009   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | unable to find current IP address of domain default-k8s-diff-port-196710 in network mk-default-k8s-diff-port-196710
	I0603 12:06:49.738036   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | I0603 12:06:49.737954   74483 retry.go:31] will retry after 2.132134762s: waiting for machine to come up
	I0603 12:06:51.872423   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | domain default-k8s-diff-port-196710 has defined MAC address 52:54:00:9c:61:49 in network mk-default-k8s-diff-port-196710
	I0603 12:06:51.872917   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | unable to find current IP address of domain default-k8s-diff-port-196710 in network mk-default-k8s-diff-port-196710
	I0603 12:06:51.872945   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | I0603 12:06:51.872857   74483 retry.go:31] will retry after 2.778528878s: waiting for machine to come up
	I0603 12:06:52.416845   73179 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.30.1: (2.668695263s)
	I0603 12:06:52.416881   73179 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/19008-7755/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.30.1 from cache
	I0603 12:06:52.416909   73179 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.30.1
	I0603 12:06:52.417012   73179 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.30.1
	I0603 12:06:54.588430   73179 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.30.1: (2.171386022s)
	I0603 12:06:54.588455   73179 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/19008-7755/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.30.1 from cache
	I0603 12:06:54.588480   73179 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.30.1
	I0603 12:06:54.588528   73179 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.30.1
	I0603 12:06:54.653098   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | domain default-k8s-diff-port-196710 has defined MAC address 52:54:00:9c:61:49 in network mk-default-k8s-diff-port-196710
	I0603 12:06:54.653566   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | unable to find current IP address of domain default-k8s-diff-port-196710 in network mk-default-k8s-diff-port-196710
	I0603 12:06:54.653596   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | I0603 12:06:54.653504   74483 retry.go:31] will retry after 2.88020763s: waiting for machine to come up
	I0603 12:06:57.535688   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | domain default-k8s-diff-port-196710 has defined MAC address 52:54:00:9c:61:49 in network mk-default-k8s-diff-port-196710
	I0603 12:06:57.536303   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | unable to find current IP address of domain default-k8s-diff-port-196710 in network mk-default-k8s-diff-port-196710
	I0603 12:06:57.536331   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | I0603 12:06:57.536246   74483 retry.go:31] will retry after 4.007108619s: waiting for machine to come up
	I0603 12:06:55.946565   73179 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.30.1: (1.358013442s)
	I0603 12:06:55.946595   73179 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/19008-7755/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.30.1 from cache
	I0603 12:06:55.946618   73179 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0603 12:06:55.946654   73179 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I0603 12:06:57.739662   73179 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (1.792982594s)
	I0603 12:06:57.739693   73179 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/19008-7755/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0603 12:06:57.739720   73179 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0603 12:06:57.739766   73179 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0603 12:06:58.592007   73179 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/19008-7755/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0603 12:06:58.592049   73179 cache_images.go:123] Successfully loaded all cached images
	I0603 12:06:58.592075   73179 cache_images.go:92] duration metric: took 15.054652125s to LoadCachedImages
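Each missing image follows the same load path visible above: the stale tag is removed with crictl rmi, the cached tarball under /var/lib/minikube/images is imported with podman load (the tarballs would be scp'd first if absent; here the copies were skipped as "exists"), and the transfer is recorded. For a single image the steps reduce to (commands as logged; etcd used as the example):

	sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.12-0
	sudo podman load -i /var/lib/minikube/images/etcd_3.5.12-0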
	I0603 12:06:58.592096   73179 kubeadm.go:928] updating node { 192.168.50.245 8443 v1.30.1 crio true true} ...
	I0603 12:06:58.592210   73179 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-602118 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.245
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.1 ClusterName:no-preload-602118 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
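The [Unit]/[Service] fragment above is the kubelet drop-in; the scp lines further down write it to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf before the daemon-reload. Generic systemd commands (not part of the harness) to confirm what the kubelet actually runs with on the guest:

	sudo systemctl cat kubelet
	sudo systemctl status kubelet --no-pager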
	I0603 12:06:58.592278   73179 ssh_runner.go:195] Run: crio config
	I0603 12:06:58.637533   73179 cni.go:84] Creating CNI manager for ""
	I0603 12:06:58.637561   73179 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0603 12:06:58.637583   73179 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0603 12:06:58.637620   73179 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.245 APIServerPort:8443 KubernetesVersion:v1.30.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-602118 NodeName:no-preload-602118 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.245"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.245 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPat
h:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0603 12:06:58.637822   73179 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.245
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-602118"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.245
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.245"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0603 12:06:58.637918   73179 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.1
	I0603 12:06:58.649096   73179 binaries.go:44] Found k8s binaries, skipping transfer
	I0603 12:06:58.649150   73179 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0603 12:06:58.658815   73179 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0603 12:06:58.675538   73179 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0603 12:06:58.692443   73179 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2161 bytes)
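The rendered kubeadm config is staged as /var/tmp/minikube/kubeadm.yaml.new; presumably minikube later compares or moves it over the existing kubeadm.yaml before invoking kubeadm, though that step is not shown in this excerpt. To see what was staged:

	sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new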
	I0603 12:06:58.709416   73179 ssh_runner.go:195] Run: grep 192.168.50.245	control-plane.minikube.internal$ /etc/hosts
	I0603 12:06:58.713241   73179 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.245	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0603 12:06:58.725522   73179 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0603 12:06:58.846624   73179 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0603 12:06:58.864101   73179 certs.go:68] Setting up /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/no-preload-602118 for IP: 192.168.50.245
	I0603 12:06:58.864129   73179 certs.go:194] generating shared ca certs ...
	I0603 12:06:58.864149   73179 certs.go:226] acquiring lock for ca certs: {Name:mk50092df3bdecbf959926adc377ee06b75aacd9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 12:06:58.864311   73179 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19008-7755/.minikube/ca.key
	I0603 12:06:58.864362   73179 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19008-7755/.minikube/proxy-client-ca.key
	I0603 12:06:58.864376   73179 certs.go:256] generating profile certs ...
	I0603 12:06:58.864473   73179 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/no-preload-602118/client.key
	I0603 12:06:58.864551   73179 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/no-preload-602118/apiserver.key.eef28f92
	I0603 12:06:58.864602   73179 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/no-preload-602118/proxy-client.key
	I0603 12:06:58.864744   73179 certs.go:484] found cert: /home/jenkins/minikube-integration/19008-7755/.minikube/certs/15028.pem (1338 bytes)
	W0603 12:06:58.864786   73179 certs.go:480] ignoring /home/jenkins/minikube-integration/19008-7755/.minikube/certs/15028_empty.pem, impossibly tiny 0 bytes
	I0603 12:06:58.864800   73179 certs.go:484] found cert: /home/jenkins/minikube-integration/19008-7755/.minikube/certs/ca-key.pem (1679 bytes)
	I0603 12:06:58.864836   73179 certs.go:484] found cert: /home/jenkins/minikube-integration/19008-7755/.minikube/certs/ca.pem (1082 bytes)
	I0603 12:06:58.864869   73179 certs.go:484] found cert: /home/jenkins/minikube-integration/19008-7755/.minikube/certs/cert.pem (1123 bytes)
	I0603 12:06:58.864900   73179 certs.go:484] found cert: /home/jenkins/minikube-integration/19008-7755/.minikube/certs/key.pem (1679 bytes)
	I0603 12:06:58.865039   73179 certs.go:484] found cert: /home/jenkins/minikube-integration/19008-7755/.minikube/files/etc/ssl/certs/150282.pem (1708 bytes)
	I0603 12:06:58.865705   73179 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0603 12:06:58.898291   73179 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0603 12:06:58.923481   73179 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0603 12:06:58.955249   73179 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0603 12:06:58.986524   73179 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/no-preload-602118/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0603 12:06:59.037456   73179 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/no-preload-602118/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0603 12:06:59.061989   73179 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/no-preload-602118/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0603 12:06:59.085738   73179 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/no-preload-602118/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0603 12:06:59.109202   73179 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0603 12:06:59.132149   73179 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/certs/15028.pem --> /usr/share/ca-certificates/15028.pem (1338 bytes)
	I0603 12:06:59.154957   73179 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/files/etc/ssl/certs/150282.pem --> /usr/share/ca-certificates/150282.pem (1708 bytes)
	I0603 12:06:59.177797   73179 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0603 12:06:59.194816   73179 ssh_runner.go:195] Run: openssl version
	I0603 12:06:59.200714   73179 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0603 12:06:59.211392   73179 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0603 12:06:59.215900   73179 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jun  3 10:39 /usr/share/ca-certificates/minikubeCA.pem
	I0603 12:06:59.215950   73179 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0603 12:06:59.221796   73179 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0603 12:06:59.232655   73179 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15028.pem && ln -fs /usr/share/ca-certificates/15028.pem /etc/ssl/certs/15028.pem"
	I0603 12:06:59.243679   73179 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15028.pem
	I0603 12:06:59.248120   73179 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jun  3 10:51 /usr/share/ca-certificates/15028.pem
	I0603 12:06:59.248168   73179 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15028.pem
	I0603 12:06:59.253816   73179 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/15028.pem /etc/ssl/certs/51391683.0"
	I0603 12:06:59.264416   73179 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/150282.pem && ln -fs /usr/share/ca-certificates/150282.pem /etc/ssl/certs/150282.pem"
	I0603 12:06:59.275143   73179 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/150282.pem
	I0603 12:06:59.279393   73179 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jun  3 10:51 /usr/share/ca-certificates/150282.pem
	I0603 12:06:59.279431   73179 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/150282.pem
	I0603 12:06:59.285269   73179 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/150282.pem /etc/ssl/certs/3ec20f2e.0"
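Each of the three certificate installs above follows the same pattern: copy the PEM into /usr/share/ca-certificates, compute its OpenSSL subject hash, and link it into /etc/ssl/certs under that hash (b5213941.0, 51391683.0, 3ec20f2e.0). A minimal sketch of the pattern for one file, deriving the hash instead of hard-coding it.

    # Hedged sketch: trust one extra CA cert the way the log does for minikubeCA.pem.
    CERT=/usr/share/ca-certificates/minikubeCA.pem
    HASH=$(openssl x509 -hash -noout -in "$CERT")    # e.g. b5213941
    sudo ln -fs "$CERT" "/etc/ssl/certs/${HASH}.0"   # name OpenSSL looks up at verify time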
	I0603 12:06:59.295789   73179 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0603 12:06:59.300138   73179 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0603 12:06:59.305722   73179 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0603 12:06:59.311381   73179 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0603 12:06:59.317037   73179 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0603 12:06:59.322539   73179 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0603 12:06:59.328067   73179 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
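The six `-checkend 86400` calls above each ask whether a certificate will still be valid 24 hours from now; a non-zero exit means it is about to expire and would force regeneration. A minimal sketch that loops over the same files so a single failing cert stands out.

    # Hedged sketch: flag any control-plane cert that expires within 24h (86400s).
    for c in apiserver-etcd-client apiserver-kubelet-client etcd/server \
             etcd/healthcheck-client etcd/peer front-proxy-client; do
      sudo openssl x509 -noout -checkend 86400 \
        -in "/var/lib/minikube/certs/${c}.crt" || echo "expiring soon: ${c}.crt"
    done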
	I0603 12:06:59.333575   73179 kubeadm.go:391] StartCluster: {Name:no-preload-602118 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30
.1 ClusterName:no-preload-602118 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.245 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fals
e MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0603 12:06:59.333659   73179 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0603 12:06:59.333712   73179 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0603 12:06:59.374413   73179 cri.go:89] found id: ""
	I0603 12:06:59.374471   73179 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0603 12:06:59.384802   73179 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0603 12:06:59.384819   73179 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0603 12:06:59.384832   73179 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0603 12:06:59.384878   73179 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0603 12:06:59.394669   73179 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0603 12:06:59.395564   73179 kubeconfig.go:125] found "no-preload-602118" server: "https://192.168.50.245:8443"
	I0603 12:06:59.397420   73179 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0603 12:06:59.407251   73179 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.50.245
	I0603 12:06:59.407281   73179 kubeadm.go:1154] stopping kube-system containers ...
	I0603 12:06:59.407295   73179 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0603 12:06:59.407347   73179 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0603 12:06:59.452986   73179 cri.go:89] found id: ""
	I0603 12:06:59.453067   73179 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0603 12:06:59.470164   73179 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0603 12:06:59.480228   73179 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0603 12:06:59.480249   73179 kubeadm.go:156] found existing configuration files:
	
	I0603 12:06:59.480291   73179 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0603 12:06:59.489923   73179 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0603 12:06:59.489968   73179 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0603 12:06:59.499530   73179 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0603 12:06:59.508336   73179 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0603 12:06:59.508376   73179 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0603 12:06:59.517665   73179 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0603 12:06:59.526660   73179 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0603 12:06:59.526697   73179 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0603 12:06:59.535973   73179 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0603 12:06:59.544846   73179 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0603 12:06:59.544885   73179 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
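The four grep/rm pairs above are the stale-config sweep: any kubeconfig under /etc/kubernetes that does not point at https://control-plane.minikube.internal:8443 is deleted so the kubeadm phases below can regenerate it. A minimal sketch of that loop.

    # Hedged sketch of the stale kubeconfig sweep shown above.
    ENDPOINT='https://control-plane.minikube.internal:8443'
    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
      sudo grep -q "$ENDPOINT" "/etc/kubernetes/$f" 2>/dev/null \
        || sudo rm -f "/etc/kubernetes/$f"
    done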
	I0603 12:06:59.554342   73179 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0603 12:06:59.563632   73179 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0603 12:06:59.673234   73179 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
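Rather than a full `kubeadm init`, the restart path regenerates only the certs and kubeconfig phases against the staged config. A minimal sketch replaying the second phase and confirming that the files the earlier `ls -la` found missing are now in place.

    # Hedged sketch: re-run the kubeconfig phase invoked above and verify the output files.
    sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" \
      kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml
    sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf \
                /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf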
	I0603 12:07:02.883984   73662 start.go:364] duration metric: took 4m2.688332749s to acquireMachinesLock for "old-k8s-version-905554"
	I0603 12:07:02.884045   73662 start.go:96] Skipping create...Using existing machine configuration
	I0603 12:07:02.884052   73662 fix.go:54] fixHost starting: 
	I0603 12:07:02.884482   73662 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 12:07:02.884520   73662 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 12:07:02.905120   73662 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45229
	I0603 12:07:02.905571   73662 main.go:141] libmachine: () Calling .GetVersion
	I0603 12:07:02.906128   73662 main.go:141] libmachine: Using API Version  1
	I0603 12:07:02.906157   73662 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 12:07:02.906519   73662 main.go:141] libmachine: () Calling .GetMachineName
	I0603 12:07:02.906709   73662 main.go:141] libmachine: (old-k8s-version-905554) Calling .DriverName
	I0603 12:07:02.906852   73662 main.go:141] libmachine: (old-k8s-version-905554) Calling .GetState
	I0603 12:07:02.908371   73662 fix.go:112] recreateIfNeeded on old-k8s-version-905554: state=Stopped err=<nil>
	I0603 12:07:02.908412   73662 main.go:141] libmachine: (old-k8s-version-905554) Calling .DriverName
	W0603 12:07:02.908577   73662 fix.go:138] unexpected machine state, will restart: <nil>
	I0603 12:07:02.910440   73662 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-905554" ...
	I0603 12:07:01.548241   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | domain default-k8s-diff-port-196710 has defined MAC address 52:54:00:9c:61:49 in network mk-default-k8s-diff-port-196710
	I0603 12:07:01.548698   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Found IP for machine: 192.168.61.60
	I0603 12:07:01.548720   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Reserving static IP address...
	I0603 12:07:01.548734   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | domain default-k8s-diff-port-196710 has current primary IP address 192.168.61.60 and MAC address 52:54:00:9c:61:49 in network mk-default-k8s-diff-port-196710
	I0603 12:07:01.549093   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-196710", mac: "52:54:00:9c:61:49", ip: "192.168.61.60"} in network mk-default-k8s-diff-port-196710: {Iface:virbr4 ExpiryTime:2024-06-03 12:58:47 +0000 UTC Type:0 Mac:52:54:00:9c:61:49 Iaid: IPaddr:192.168.61.60 Prefix:24 Hostname:default-k8s-diff-port-196710 Clientid:01:52:54:00:9c:61:49}
	I0603 12:07:01.549127   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | skip adding static IP to network mk-default-k8s-diff-port-196710 - found existing host DHCP lease matching {name: "default-k8s-diff-port-196710", mac: "52:54:00:9c:61:49", ip: "192.168.61.60"}
	I0603 12:07:01.549141   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Reserved static IP address: 192.168.61.60
	I0603 12:07:01.549161   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | Getting to WaitForSSH function...
	I0603 12:07:01.549171   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Waiting for SSH to be available...
	I0603 12:07:01.551680   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | domain default-k8s-diff-port-196710 has defined MAC address 52:54:00:9c:61:49 in network mk-default-k8s-diff-port-196710
	I0603 12:07:01.551959   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:61:49", ip: ""} in network mk-default-k8s-diff-port-196710: {Iface:virbr4 ExpiryTime:2024-06-03 12:58:47 +0000 UTC Type:0 Mac:52:54:00:9c:61:49 Iaid: IPaddr:192.168.61.60 Prefix:24 Hostname:default-k8s-diff-port-196710 Clientid:01:52:54:00:9c:61:49}
	I0603 12:07:01.551996   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | domain default-k8s-diff-port-196710 has defined IP address 192.168.61.60 and MAC address 52:54:00:9c:61:49 in network mk-default-k8s-diff-port-196710
	I0603 12:07:01.552051   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | Using SSH client type: external
	I0603 12:07:01.552124   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | Using SSH private key: /home/jenkins/minikube-integration/19008-7755/.minikube/machines/default-k8s-diff-port-196710/id_rsa (-rw-------)
	I0603 12:07:01.552160   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.60 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19008-7755/.minikube/machines/default-k8s-diff-port-196710/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0603 12:07:01.552181   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | About to run SSH command:
	I0603 12:07:01.552194   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | exit 0
	I0603 12:07:01.674944   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | SSH cmd err, output: <nil>: 
	I0603 12:07:01.675373   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .GetConfigRaw
	I0603 12:07:01.676105   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .GetIP
	I0603 12:07:01.678480   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | domain default-k8s-diff-port-196710 has defined MAC address 52:54:00:9c:61:49 in network mk-default-k8s-diff-port-196710
	I0603 12:07:01.678823   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:61:49", ip: ""} in network mk-default-k8s-diff-port-196710: {Iface:virbr4 ExpiryTime:2024-06-03 12:58:47 +0000 UTC Type:0 Mac:52:54:00:9c:61:49 Iaid: IPaddr:192.168.61.60 Prefix:24 Hostname:default-k8s-diff-port-196710 Clientid:01:52:54:00:9c:61:49}
	I0603 12:07:01.678854   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | domain default-k8s-diff-port-196710 has defined IP address 192.168.61.60 and MAC address 52:54:00:9c:61:49 in network mk-default-k8s-diff-port-196710
	I0603 12:07:01.679088   73294 profile.go:143] Saving config to /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/default-k8s-diff-port-196710/config.json ...
	I0603 12:07:01.679311   73294 machine.go:94] provisionDockerMachine start ...
	I0603 12:07:01.679332   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .DriverName
	I0603 12:07:01.679525   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .GetSSHHostname
	I0603 12:07:01.681641   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | domain default-k8s-diff-port-196710 has defined MAC address 52:54:00:9c:61:49 in network mk-default-k8s-diff-port-196710
	I0603 12:07:01.681931   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:61:49", ip: ""} in network mk-default-k8s-diff-port-196710: {Iface:virbr4 ExpiryTime:2024-06-03 12:58:47 +0000 UTC Type:0 Mac:52:54:00:9c:61:49 Iaid: IPaddr:192.168.61.60 Prefix:24 Hostname:default-k8s-diff-port-196710 Clientid:01:52:54:00:9c:61:49}
	I0603 12:07:01.681964   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | domain default-k8s-diff-port-196710 has defined IP address 192.168.61.60 and MAC address 52:54:00:9c:61:49 in network mk-default-k8s-diff-port-196710
	I0603 12:07:01.682121   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .GetSSHPort
	I0603 12:07:01.682291   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .GetSSHKeyPath
	I0603 12:07:01.682466   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .GetSSHKeyPath
	I0603 12:07:01.682611   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .GetSSHUsername
	I0603 12:07:01.682753   73294 main.go:141] libmachine: Using SSH client type: native
	I0603 12:07:01.682949   73294 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.61.60 22 <nil> <nil>}
	I0603 12:07:01.682962   73294 main.go:141] libmachine: About to run SSH command:
	hostname
	I0603 12:07:01.787146   73294 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0603 12:07:01.787176   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .GetMachineName
	I0603 12:07:01.787425   73294 buildroot.go:166] provisioning hostname "default-k8s-diff-port-196710"
	I0603 12:07:01.787448   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .GetMachineName
	I0603 12:07:01.787638   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .GetSSHHostname
	I0603 12:07:01.790151   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | domain default-k8s-diff-port-196710 has defined MAC address 52:54:00:9c:61:49 in network mk-default-k8s-diff-port-196710
	I0603 12:07:01.790487   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:61:49", ip: ""} in network mk-default-k8s-diff-port-196710: {Iface:virbr4 ExpiryTime:2024-06-03 12:58:47 +0000 UTC Type:0 Mac:52:54:00:9c:61:49 Iaid: IPaddr:192.168.61.60 Prefix:24 Hostname:default-k8s-diff-port-196710 Clientid:01:52:54:00:9c:61:49}
	I0603 12:07:01.790512   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | domain default-k8s-diff-port-196710 has defined IP address 192.168.61.60 and MAC address 52:54:00:9c:61:49 in network mk-default-k8s-diff-port-196710
	I0603 12:07:01.790646   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .GetSSHPort
	I0603 12:07:01.790812   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .GetSSHKeyPath
	I0603 12:07:01.790964   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .GetSSHKeyPath
	I0603 12:07:01.791133   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .GetSSHUsername
	I0603 12:07:01.791272   73294 main.go:141] libmachine: Using SSH client type: native
	I0603 12:07:01.791477   73294 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.61.60 22 <nil> <nil>}
	I0603 12:07:01.791496   73294 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-196710 && echo "default-k8s-diff-port-196710" | sudo tee /etc/hostname
	I0603 12:07:01.916785   73294 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-196710
	
	I0603 12:07:01.916820   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .GetSSHHostname
	I0603 12:07:01.919809   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | domain default-k8s-diff-port-196710 has defined MAC address 52:54:00:9c:61:49 in network mk-default-k8s-diff-port-196710
	I0603 12:07:01.920225   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:61:49", ip: ""} in network mk-default-k8s-diff-port-196710: {Iface:virbr4 ExpiryTime:2024-06-03 12:58:47 +0000 UTC Type:0 Mac:52:54:00:9c:61:49 Iaid: IPaddr:192.168.61.60 Prefix:24 Hostname:default-k8s-diff-port-196710 Clientid:01:52:54:00:9c:61:49}
	I0603 12:07:01.920264   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | domain default-k8s-diff-port-196710 has defined IP address 192.168.61.60 and MAC address 52:54:00:9c:61:49 in network mk-default-k8s-diff-port-196710
	I0603 12:07:01.920552   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .GetSSHPort
	I0603 12:07:01.920756   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .GetSSHKeyPath
	I0603 12:07:01.920947   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .GetSSHKeyPath
	I0603 12:07:01.921145   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .GetSSHUsername
	I0603 12:07:01.921363   73294 main.go:141] libmachine: Using SSH client type: native
	I0603 12:07:01.921645   73294 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.61.60 22 <nil> <nil>}
	I0603 12:07:01.921671   73294 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-196710' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-196710/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-196710' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0603 12:07:02.048767   73294 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0603 12:07:02.048797   73294 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19008-7755/.minikube CaCertPath:/home/jenkins/minikube-integration/19008-7755/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19008-7755/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19008-7755/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19008-7755/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19008-7755/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19008-7755/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19008-7755/.minikube}
	I0603 12:07:02.048851   73294 buildroot.go:174] setting up certificates
	I0603 12:07:02.048866   73294 provision.go:84] configureAuth start
	I0603 12:07:02.048883   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .GetMachineName
	I0603 12:07:02.049168   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .GetIP
	I0603 12:07:02.051709   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | domain default-k8s-diff-port-196710 has defined MAC address 52:54:00:9c:61:49 in network mk-default-k8s-diff-port-196710
	I0603 12:07:02.052111   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:61:49", ip: ""} in network mk-default-k8s-diff-port-196710: {Iface:virbr4 ExpiryTime:2024-06-03 12:58:47 +0000 UTC Type:0 Mac:52:54:00:9c:61:49 Iaid: IPaddr:192.168.61.60 Prefix:24 Hostname:default-k8s-diff-port-196710 Clientid:01:52:54:00:9c:61:49}
	I0603 12:07:02.052151   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | domain default-k8s-diff-port-196710 has defined IP address 192.168.61.60 and MAC address 52:54:00:9c:61:49 in network mk-default-k8s-diff-port-196710
	I0603 12:07:02.052295   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .GetSSHHostname
	I0603 12:07:02.054716   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | domain default-k8s-diff-port-196710 has defined MAC address 52:54:00:9c:61:49 in network mk-default-k8s-diff-port-196710
	I0603 12:07:02.055073   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:61:49", ip: ""} in network mk-default-k8s-diff-port-196710: {Iface:virbr4 ExpiryTime:2024-06-03 12:58:47 +0000 UTC Type:0 Mac:52:54:00:9c:61:49 Iaid: IPaddr:192.168.61.60 Prefix:24 Hostname:default-k8s-diff-port-196710 Clientid:01:52:54:00:9c:61:49}
	I0603 12:07:02.055106   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | domain default-k8s-diff-port-196710 has defined IP address 192.168.61.60 and MAC address 52:54:00:9c:61:49 in network mk-default-k8s-diff-port-196710
	I0603 12:07:02.055262   73294 provision.go:143] copyHostCerts
	I0603 12:07:02.055334   73294 exec_runner.go:144] found /home/jenkins/minikube-integration/19008-7755/.minikube/ca.pem, removing ...
	I0603 12:07:02.055349   73294 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19008-7755/.minikube/ca.pem
	I0603 12:07:02.055408   73294 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19008-7755/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19008-7755/.minikube/ca.pem (1082 bytes)
	I0603 12:07:02.055527   73294 exec_runner.go:144] found /home/jenkins/minikube-integration/19008-7755/.minikube/cert.pem, removing ...
	I0603 12:07:02.055539   73294 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19008-7755/.minikube/cert.pem
	I0603 12:07:02.055568   73294 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19008-7755/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19008-7755/.minikube/cert.pem (1123 bytes)
	I0603 12:07:02.055648   73294 exec_runner.go:144] found /home/jenkins/minikube-integration/19008-7755/.minikube/key.pem, removing ...
	I0603 12:07:02.055659   73294 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19008-7755/.minikube/key.pem
	I0603 12:07:02.055684   73294 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19008-7755/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19008-7755/.minikube/key.pem (1679 bytes)
	I0603 12:07:02.055753   73294 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19008-7755/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19008-7755/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19008-7755/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-196710 san=[127.0.0.1 192.168.61.60 default-k8s-diff-port-196710 localhost minikube]
	I0603 12:07:02.172134   73294 provision.go:177] copyRemoteCerts
	I0603 12:07:02.172192   73294 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0603 12:07:02.172217   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .GetSSHHostname
	I0603 12:07:02.175333   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | domain default-k8s-diff-port-196710 has defined MAC address 52:54:00:9c:61:49 in network mk-default-k8s-diff-port-196710
	I0603 12:07:02.175724   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:61:49", ip: ""} in network mk-default-k8s-diff-port-196710: {Iface:virbr4 ExpiryTime:2024-06-03 12:58:47 +0000 UTC Type:0 Mac:52:54:00:9c:61:49 Iaid: IPaddr:192.168.61.60 Prefix:24 Hostname:default-k8s-diff-port-196710 Clientid:01:52:54:00:9c:61:49}
	I0603 12:07:02.175749   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | domain default-k8s-diff-port-196710 has defined IP address 192.168.61.60 and MAC address 52:54:00:9c:61:49 in network mk-default-k8s-diff-port-196710
	I0603 12:07:02.175996   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .GetSSHPort
	I0603 12:07:02.176203   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .GetSSHKeyPath
	I0603 12:07:02.176405   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .GetSSHUsername
	I0603 12:07:02.176599   73294 sshutil.go:53] new ssh client: &{IP:192.168.61.60 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19008-7755/.minikube/machines/default-k8s-diff-port-196710/id_rsa Username:docker}
	I0603 12:07:02.273410   73294 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0603 12:07:02.302337   73294 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0603 12:07:02.326471   73294 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0603 12:07:02.350709   73294 provision.go:87] duration metric: took 301.827273ms to configureAuth
	I0603 12:07:02.350742   73294 buildroot.go:189] setting minikube options for container-runtime
	I0603 12:07:02.350977   73294 config.go:182] Loaded profile config "default-k8s-diff-port-196710": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0603 12:07:02.351086   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .GetSSHHostname
	I0603 12:07:02.354023   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | domain default-k8s-diff-port-196710 has defined MAC address 52:54:00:9c:61:49 in network mk-default-k8s-diff-port-196710
	I0603 12:07:02.354434   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:61:49", ip: ""} in network mk-default-k8s-diff-port-196710: {Iface:virbr4 ExpiryTime:2024-06-03 12:58:47 +0000 UTC Type:0 Mac:52:54:00:9c:61:49 Iaid: IPaddr:192.168.61.60 Prefix:24 Hostname:default-k8s-diff-port-196710 Clientid:01:52:54:00:9c:61:49}
	I0603 12:07:02.354465   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | domain default-k8s-diff-port-196710 has defined IP address 192.168.61.60 and MAC address 52:54:00:9c:61:49 in network mk-default-k8s-diff-port-196710
	I0603 12:07:02.354613   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .GetSSHPort
	I0603 12:07:02.354813   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .GetSSHKeyPath
	I0603 12:07:02.354996   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .GetSSHKeyPath
	I0603 12:07:02.355176   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .GetSSHUsername
	I0603 12:07:02.355385   73294 main.go:141] libmachine: Using SSH client type: native
	I0603 12:07:02.355603   73294 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.61.60 22 <nil> <nil>}
	I0603 12:07:02.355633   73294 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0603 12:07:02.636420   73294 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0603 12:07:02.636453   73294 machine.go:97] duration metric: took 957.127741ms to provisionDockerMachine
	I0603 12:07:02.636467   73294 start.go:293] postStartSetup for "default-k8s-diff-port-196710" (driver="kvm2")
	I0603 12:07:02.636480   73294 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0603 12:07:02.636507   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .DriverName
	I0603 12:07:02.636828   73294 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0603 12:07:02.636860   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .GetSSHHostname
	I0603 12:07:02.639699   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | domain default-k8s-diff-port-196710 has defined MAC address 52:54:00:9c:61:49 in network mk-default-k8s-diff-port-196710
	I0603 12:07:02.640122   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:61:49", ip: ""} in network mk-default-k8s-diff-port-196710: {Iface:virbr4 ExpiryTime:2024-06-03 12:58:47 +0000 UTC Type:0 Mac:52:54:00:9c:61:49 Iaid: IPaddr:192.168.61.60 Prefix:24 Hostname:default-k8s-diff-port-196710 Clientid:01:52:54:00:9c:61:49}
	I0603 12:07:02.640155   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | domain default-k8s-diff-port-196710 has defined IP address 192.168.61.60 and MAC address 52:54:00:9c:61:49 in network mk-default-k8s-diff-port-196710
	I0603 12:07:02.640282   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .GetSSHPort
	I0603 12:07:02.640462   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .GetSSHKeyPath
	I0603 12:07:02.640647   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .GetSSHUsername
	I0603 12:07:02.640907   73294 sshutil.go:53] new ssh client: &{IP:192.168.61.60 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19008-7755/.minikube/machines/default-k8s-diff-port-196710/id_rsa Username:docker}
	I0603 12:07:02.729745   73294 ssh_runner.go:195] Run: cat /etc/os-release
	I0603 12:07:02.734393   73294 info.go:137] Remote host: Buildroot 2023.02.9
	I0603 12:07:02.734414   73294 filesync.go:126] Scanning /home/jenkins/minikube-integration/19008-7755/.minikube/addons for local assets ...
	I0603 12:07:02.734476   73294 filesync.go:126] Scanning /home/jenkins/minikube-integration/19008-7755/.minikube/files for local assets ...
	I0603 12:07:02.734545   73294 filesync.go:149] local asset: /home/jenkins/minikube-integration/19008-7755/.minikube/files/etc/ssl/certs/150282.pem -> 150282.pem in /etc/ssl/certs
	I0603 12:07:02.734623   73294 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0603 12:07:02.744239   73294 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/files/etc/ssl/certs/150282.pem --> /etc/ssl/certs/150282.pem (1708 bytes)
	I0603 12:07:02.770883   73294 start.go:296] duration metric: took 134.402064ms for postStartSetup
	I0603 12:07:02.770918   73294 fix.go:56] duration metric: took 20.69069756s for fixHost
	I0603 12:07:02.770940   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .GetSSHHostname
	I0603 12:07:02.773675   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | domain default-k8s-diff-port-196710 has defined MAC address 52:54:00:9c:61:49 in network mk-default-k8s-diff-port-196710
	I0603 12:07:02.773977   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:61:49", ip: ""} in network mk-default-k8s-diff-port-196710: {Iface:virbr4 ExpiryTime:2024-06-03 12:58:47 +0000 UTC Type:0 Mac:52:54:00:9c:61:49 Iaid: IPaddr:192.168.61.60 Prefix:24 Hostname:default-k8s-diff-port-196710 Clientid:01:52:54:00:9c:61:49}
	I0603 12:07:02.774010   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | domain default-k8s-diff-port-196710 has defined IP address 192.168.61.60 and MAC address 52:54:00:9c:61:49 in network mk-default-k8s-diff-port-196710
	I0603 12:07:02.774111   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .GetSSHPort
	I0603 12:07:02.774329   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .GetSSHKeyPath
	I0603 12:07:02.774482   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .GetSSHKeyPath
	I0603 12:07:02.774635   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .GetSSHUsername
	I0603 12:07:02.774814   73294 main.go:141] libmachine: Using SSH client type: native
	I0603 12:07:02.774984   73294 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.61.60 22 <nil> <nil>}
	I0603 12:07:02.774998   73294 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0603 12:07:02.883831   73294 main.go:141] libmachine: SSH cmd err, output: <nil>: 1717416422.860813739
	
	I0603 12:07:02.883859   73294 fix.go:216] guest clock: 1717416422.860813739
	I0603 12:07:02.883870   73294 fix.go:229] Guest: 2024-06-03 12:07:02.860813739 +0000 UTC Remote: 2024-06-03 12:07:02.770922212 +0000 UTC m=+288.221479764 (delta=89.891527ms)
	I0603 12:07:02.883896   73294 fix.go:200] guest clock delta is within tolerance: 89.891527ms
	I0603 12:07:02.883902   73294 start.go:83] releasing machines lock for "default-k8s-diff-port-196710", held for 20.803713434s
	I0603 12:07:02.883935   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .DriverName
	I0603 12:07:02.884217   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .GetIP
	I0603 12:07:02.887393   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | domain default-k8s-diff-port-196710 has defined MAC address 52:54:00:9c:61:49 in network mk-default-k8s-diff-port-196710
	I0603 12:07:02.887758   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:61:49", ip: ""} in network mk-default-k8s-diff-port-196710: {Iface:virbr4 ExpiryTime:2024-06-03 12:58:47 +0000 UTC Type:0 Mac:52:54:00:9c:61:49 Iaid: IPaddr:192.168.61.60 Prefix:24 Hostname:default-k8s-diff-port-196710 Clientid:01:52:54:00:9c:61:49}
	I0603 12:07:02.887789   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | domain default-k8s-diff-port-196710 has defined IP address 192.168.61.60 and MAC address 52:54:00:9c:61:49 in network mk-default-k8s-diff-port-196710
	I0603 12:07:02.887954   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .DriverName
	I0603 12:07:02.888465   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .DriverName
	I0603 12:07:02.888616   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .DriverName
	I0603 12:07:02.888698   73294 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0603 12:07:02.888770   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .GetSSHHostname
	I0603 12:07:02.888871   73294 ssh_runner.go:195] Run: cat /version.json
	I0603 12:07:02.888913   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .GetSSHHostname
	I0603 12:07:02.891596   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | domain default-k8s-diff-port-196710 has defined MAC address 52:54:00:9c:61:49 in network mk-default-k8s-diff-port-196710
	I0603 12:07:02.891957   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | domain default-k8s-diff-port-196710 has defined MAC address 52:54:00:9c:61:49 in network mk-default-k8s-diff-port-196710
	I0603 12:07:02.892009   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:61:49", ip: ""} in network mk-default-k8s-diff-port-196710: {Iface:virbr4 ExpiryTime:2024-06-03 12:58:47 +0000 UTC Type:0 Mac:52:54:00:9c:61:49 Iaid: IPaddr:192.168.61.60 Prefix:24 Hostname:default-k8s-diff-port-196710 Clientid:01:52:54:00:9c:61:49}
	I0603 12:07:02.892051   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | domain default-k8s-diff-port-196710 has defined IP address 192.168.61.60 and MAC address 52:54:00:9c:61:49 in network mk-default-k8s-diff-port-196710
	I0603 12:07:02.892250   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .GetSSHPort
	I0603 12:07:02.892422   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .GetSSHKeyPath
	I0603 12:07:02.892436   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:61:49", ip: ""} in network mk-default-k8s-diff-port-196710: {Iface:virbr4 ExpiryTime:2024-06-03 12:58:47 +0000 UTC Type:0 Mac:52:54:00:9c:61:49 Iaid: IPaddr:192.168.61.60 Prefix:24 Hostname:default-k8s-diff-port-196710 Clientid:01:52:54:00:9c:61:49}
	I0603 12:07:02.892453   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | domain default-k8s-diff-port-196710 has defined IP address 192.168.61.60 and MAC address 52:54:00:9c:61:49 in network mk-default-k8s-diff-port-196710
	I0603 12:07:02.892601   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .GetSSHPort
	I0603 12:07:02.892636   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .GetSSHUsername
	I0603 12:07:02.892758   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .GetSSHKeyPath
	I0603 12:07:02.892777   73294 sshutil.go:53] new ssh client: &{IP:192.168.61.60 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19008-7755/.minikube/machines/default-k8s-diff-port-196710/id_rsa Username:docker}
	I0603 12:07:02.892941   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .GetSSHUsername
	I0603 12:07:02.893092   73294 sshutil.go:53] new ssh client: &{IP:192.168.61.60 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19008-7755/.minikube/machines/default-k8s-diff-port-196710/id_rsa Username:docker}
	I0603 12:07:02.998124   73294 ssh_runner.go:195] Run: systemctl --version
	I0603 12:07:03.005653   73294 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0603 12:07:03.152446   73294 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0603 12:07:03.160607   73294 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0603 12:07:03.160674   73294 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0603 12:07:03.176490   73294 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0603 12:07:03.176513   73294 start.go:494] detecting cgroup driver to use...
	I0603 12:07:03.176581   73294 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0603 12:07:03.195427   73294 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0603 12:07:03.211343   73294 docker.go:217] disabling cri-docker service (if available) ...
	I0603 12:07:03.211398   73294 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0603 12:07:03.227943   73294 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0603 12:07:03.245409   73294 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0603 12:07:03.384124   73294 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0603 12:07:03.529899   73294 docker.go:233] disabling docker service ...
	I0603 12:07:03.529984   73294 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0603 12:07:03.545971   73294 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0603 12:07:03.559981   73294 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0603 12:07:03.726303   73294 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0603 12:07:03.850915   73294 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0603 12:07:03.865591   73294 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0603 12:07:03.884498   73294 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0603 12:07:03.884558   73294 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 12:07:03.897708   73294 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0603 12:07:03.897772   73294 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 12:07:03.912146   73294 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 12:07:03.926435   73294 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 12:07:03.940520   73294 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0603 12:07:03.955122   73294 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 12:07:03.972518   73294 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 12:07:03.997707   73294 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 12:07:04.009020   73294 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0603 12:07:04.024118   73294 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0603 12:07:04.024185   73294 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0603 12:07:04.043959   73294 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0603 12:07:04.057417   73294 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0603 12:07:04.195354   73294 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0603 12:07:04.365103   73294 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0603 12:07:04.365195   73294 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0603 12:07:04.370764   73294 start.go:562] Will wait 60s for crictl version
	I0603 12:07:04.370822   73294 ssh_runner.go:195] Run: which crictl
	I0603 12:07:04.375203   73294 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0603 12:07:04.430761   73294 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0603 12:07:04.430843   73294 ssh_runner.go:195] Run: crio --version
	I0603 12:07:04.471171   73294 ssh_runner.go:195] Run: crio --version
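The block starting at 12:07:03.884 rewrites /etc/crio/crio.conf.d/02-crio.conf in place (pause image, cgroupfs cgroup manager, conmon cgroup, unprivileged-port sysctl), loads br_netfilter, enables IPv4 forwarding, and restarts CRI-O. A minimal sketch of the core of that reconfiguration collapsed into one pass; the drop-in path and values are the ones shown in the log.

    # Hedged sketch: the core of the CRI-O reconfiguration performed above, as a single script.
    CONF=/etc/crio/crio.conf.d/02-crio.conf
    sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' "$CONF"
    sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' "$CONF"
    sudo modprobe br_netfilter
    echo 1 | sudo tee /proc/sys/net/ipv4/ip_forward >/dev/null
    sudo systemctl daemon-reload && sudo systemctl restart crio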
	I0603 12:07:04.506684   73294 out.go:177] * Preparing Kubernetes v1.30.1 on CRI-O 1.29.1 ...
	I0603 12:07:04.508144   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .GetIP
	I0603 12:07:04.510945   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | domain default-k8s-diff-port-196710 has defined MAC address 52:54:00:9c:61:49 in network mk-default-k8s-diff-port-196710
	I0603 12:07:04.511375   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:61:49", ip: ""} in network mk-default-k8s-diff-port-196710: {Iface:virbr4 ExpiryTime:2024-06-03 12:58:47 +0000 UTC Type:0 Mac:52:54:00:9c:61:49 Iaid: IPaddr:192.168.61.60 Prefix:24 Hostname:default-k8s-diff-port-196710 Clientid:01:52:54:00:9c:61:49}
	I0603 12:07:04.511406   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | domain default-k8s-diff-port-196710 has defined IP address 192.168.61.60 and MAC address 52:54:00:9c:61:49 in network mk-default-k8s-diff-port-196710
	I0603 12:07:04.511607   73294 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0603 12:07:04.516367   73294 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0603 12:07:04.532203   73294 kubeadm.go:877] updating cluster {Name:default-k8s-diff-port-196710 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernete
sVersion:v1.30.1 ClusterName:default-k8s-diff-port-196710 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.60 Port:8444 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpirati
on:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0603 12:07:04.532326   73294 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime crio
	I0603 12:07:04.532409   73294 ssh_runner.go:195] Run: sudo crictl images --output json
	I0603 12:07:04.576446   73294 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.1". assuming images are not preloaded.
	I0603 12:07:04.576523   73294 ssh_runner.go:195] Run: which lz4
	I0603 12:07:04.580901   73294 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0603 12:07:02.911700   73662 main.go:141] libmachine: (old-k8s-version-905554) Calling .Start
	I0603 12:07:02.911842   73662 main.go:141] libmachine: (old-k8s-version-905554) Ensuring networks are active...
	I0603 12:07:02.912570   73662 main.go:141] libmachine: (old-k8s-version-905554) Ensuring network default is active
	I0603 12:07:02.912896   73662 main.go:141] libmachine: (old-k8s-version-905554) Ensuring network mk-old-k8s-version-905554 is active
	I0603 12:07:02.913324   73662 main.go:141] libmachine: (old-k8s-version-905554) Getting domain xml...
	I0603 12:07:02.914147   73662 main.go:141] libmachine: (old-k8s-version-905554) Creating domain...
	I0603 12:07:04.233691   73662 main.go:141] libmachine: (old-k8s-version-905554) Waiting to get IP...
	I0603 12:07:04.234800   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | domain old-k8s-version-905554 has defined MAC address 52:54:00:3d:ed:07 in network mk-old-k8s-version-905554
	I0603 12:07:04.235276   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | unable to find current IP address of domain old-k8s-version-905554 in network mk-old-k8s-version-905554
	I0603 12:07:04.235378   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | I0603 12:07:04.235243   74674 retry.go:31] will retry after 297.546447ms: waiting for machine to come up
	I0603 12:07:04.534942   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | domain old-k8s-version-905554 has defined MAC address 52:54:00:3d:ed:07 in network mk-old-k8s-version-905554
	I0603 12:07:04.535492   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | unable to find current IP address of domain old-k8s-version-905554 in network mk-old-k8s-version-905554
	I0603 12:07:04.535522   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | I0603 12:07:04.535456   74674 retry.go:31] will retry after 385.160833ms: waiting for machine to come up
	I0603 12:07:04.922824   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | domain old-k8s-version-905554 has defined MAC address 52:54:00:3d:ed:07 in network mk-old-k8s-version-905554
	I0603 12:07:04.923312   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | unable to find current IP address of domain old-k8s-version-905554 in network mk-old-k8s-version-905554
	I0603 12:07:04.923336   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | I0603 12:07:04.923267   74674 retry.go:31] will retry after 363.309555ms: waiting for machine to come up
	I0603 12:07:01.017968   73179 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.344700881s)
	I0603 12:07:01.017993   73179 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0603 12:07:01.214414   73179 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0603 12:07:01.291063   73179 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0603 12:07:01.420874   73179 api_server.go:52] waiting for apiserver process to appear ...
	I0603 12:07:01.420977   73179 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:07:01.921439   73179 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:07:02.421904   73179 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:07:02.445051   73179 api_server.go:72] duration metric: took 1.024176056s to wait for apiserver process to appear ...
	I0603 12:07:02.445083   73179 api_server.go:88] waiting for apiserver healthz status ...
	I0603 12:07:02.445112   73179 api_server.go:253] Checking apiserver healthz at https://192.168.50.245:8443/healthz ...
	I0603 12:07:02.445614   73179 api_server.go:269] stopped: https://192.168.50.245:8443/healthz: Get "https://192.168.50.245:8443/healthz": dial tcp 192.168.50.245:8443: connect: connection refused
	I0603 12:07:02.945547   73179 api_server.go:253] Checking apiserver healthz at https://192.168.50.245:8443/healthz ...
	I0603 12:07:05.426682   73179 api_server.go:279] https://192.168.50.245:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0603 12:07:05.426713   73179 api_server.go:103] status: https://192.168.50.245:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0603 12:07:05.426726   73179 api_server.go:253] Checking apiserver healthz at https://192.168.50.245:8443/healthz ...
	I0603 12:07:05.474343   73179 api_server.go:279] https://192.168.50.245:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0603 12:07:05.474380   73179 api_server.go:103] status: https://192.168.50.245:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0603 12:07:05.474399   73179 api_server.go:253] Checking apiserver healthz at https://192.168.50.245:8443/healthz ...
	I0603 12:07:05.578473   73179 api_server.go:279] https://192.168.50.245:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0603 12:07:05.578520   73179 api_server.go:103] status: https://192.168.50.245:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0603 12:07:05.945708   73179 api_server.go:253] Checking apiserver healthz at https://192.168.50.245:8443/healthz ...
	I0603 12:07:05.952298   73179 api_server.go:279] https://192.168.50.245:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0603 12:07:05.952338   73179 api_server.go:103] status: https://192.168.50.245:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0603 12:07:06.445920   73179 api_server.go:253] Checking apiserver healthz at https://192.168.50.245:8443/healthz ...
	I0603 12:07:06.454769   73179 api_server.go:279] https://192.168.50.245:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0603 12:07:06.454805   73179 api_server.go:103] status: https://192.168.50.245:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0603 12:07:06.945370   73179 api_server.go:253] Checking apiserver healthz at https://192.168.50.245:8443/healthz ...
	I0603 12:07:06.952157   73179 api_server.go:279] https://192.168.50.245:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0603 12:07:06.952193   73179 api_server.go:103] status: https://192.168.50.245:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0603 12:07:07.445973   73179 api_server.go:253] Checking apiserver healthz at https://192.168.50.245:8443/healthz ...
	I0603 12:07:07.457436   73179 api_server.go:279] https://192.168.50.245:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0603 12:07:07.457471   73179 api_server.go:103] status: https://192.168.50.245:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0603 12:07:07.945237   73179 api_server.go:253] Checking apiserver healthz at https://192.168.50.245:8443/healthz ...
	I0603 12:07:07.952135   73179 api_server.go:279] https://192.168.50.245:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0603 12:07:07.952168   73179 api_server.go:103] status: https://192.168.50.245:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0603 12:07:08.445763   73179 api_server.go:253] Checking apiserver healthz at https://192.168.50.245:8443/healthz ...
	I0603 12:07:08.450319   73179 api_server.go:279] https://192.168.50.245:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0603 12:07:08.450346   73179 api_server.go:103] status: https://192.168.50.245:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0603 12:07:08.945476   73179 api_server.go:253] Checking apiserver healthz at https://192.168.50.245:8443/healthz ...
	I0603 12:07:08.950139   73179 api_server.go:279] https://192.168.50.245:8443/healthz returned 200:
	ok
	I0603 12:07:08.956975   73179 api_server.go:141] control plane version: v1.30.1
	I0603 12:07:08.957002   73179 api_server.go:131] duration metric: took 6.511911305s to wait for apiserver health ...
	I0603 12:07:08.957012   73179 cni.go:84] Creating CNI manager for ""
	I0603 12:07:08.957020   73179 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0603 12:07:08.958965   73179 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0603 12:07:04.585614   73294 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0603 12:07:04.585642   73294 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (394537501 bytes)
	I0603 12:07:06.088296   73294 crio.go:462] duration metric: took 1.507429412s to copy over tarball
	I0603 12:07:06.088376   73294 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0603 12:07:08.432866   73294 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.344418631s)
	I0603 12:07:08.432898   73294 crio.go:469] duration metric: took 2.344572918s to extract the tarball
	I0603 12:07:08.432921   73294 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0603 12:07:08.472509   73294 ssh_runner.go:195] Run: sudo crictl images --output json
	I0603 12:07:08.529017   73294 crio.go:514] all images are preloaded for cri-o runtime.
	I0603 12:07:08.529040   73294 cache_images.go:84] Images are preloaded, skipping loading
	I0603 12:07:08.529052   73294 kubeadm.go:928] updating node { 192.168.61.60 8444 v1.30.1 crio true true} ...
	I0603 12:07:08.529180   73294 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-196710 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.60
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.1 ClusterName:default-k8s-diff-port-196710 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0603 12:07:08.529244   73294 ssh_runner.go:195] Run: crio config
	I0603 12:07:08.581601   73294 cni.go:84] Creating CNI manager for ""
	I0603 12:07:08.581625   73294 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0603 12:07:08.581641   73294 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0603 12:07:08.581667   73294 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.60 APIServerPort:8444 KubernetesVersion:v1.30.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-196710 NodeName:default-k8s-diff-port-196710 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.60"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.60 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/
ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0603 12:07:08.581854   73294 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.60
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-196710"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.60
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.60"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0603 12:07:08.581931   73294 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.1
	I0603 12:07:08.595708   73294 binaries.go:44] Found k8s binaries, skipping transfer
	I0603 12:07:08.595778   73294 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0603 12:07:08.608914   73294 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (327 bytes)
	I0603 12:07:08.627009   73294 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0603 12:07:08.643755   73294 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2169 bytes)
	I0603 12:07:08.661803   73294 ssh_runner.go:195] Run: grep 192.168.61.60	control-plane.minikube.internal$ /etc/hosts
	I0603 12:07:08.665764   73294 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.60	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0603 12:07:08.678440   73294 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0603 12:07:08.797052   73294 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0603 12:07:08.814618   73294 certs.go:68] Setting up /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/default-k8s-diff-port-196710 for IP: 192.168.61.60
	I0603 12:07:08.814645   73294 certs.go:194] generating shared ca certs ...
	I0603 12:07:08.814665   73294 certs.go:226] acquiring lock for ca certs: {Name:mk50092df3bdecbf959926adc377ee06b75aacd9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 12:07:08.814863   73294 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19008-7755/.minikube/ca.key
	I0603 12:07:08.814931   73294 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19008-7755/.minikube/proxy-client-ca.key
	I0603 12:07:08.814945   73294 certs.go:256] generating profile certs ...
	I0603 12:07:08.815072   73294 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/default-k8s-diff-port-196710/client.key
	I0603 12:07:08.815150   73294 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/default-k8s-diff-port-196710/apiserver.key.fd40708e
	I0603 12:07:08.815210   73294 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/default-k8s-diff-port-196710/proxy-client.key
	I0603 12:07:08.815370   73294 certs.go:484] found cert: /home/jenkins/minikube-integration/19008-7755/.minikube/certs/15028.pem (1338 bytes)
	W0603 12:07:08.815408   73294 certs.go:480] ignoring /home/jenkins/minikube-integration/19008-7755/.minikube/certs/15028_empty.pem, impossibly tiny 0 bytes
	I0603 12:07:08.815421   73294 certs.go:484] found cert: /home/jenkins/minikube-integration/19008-7755/.minikube/certs/ca-key.pem (1679 bytes)
	I0603 12:07:08.815467   73294 certs.go:484] found cert: /home/jenkins/minikube-integration/19008-7755/.minikube/certs/ca.pem (1082 bytes)
	I0603 12:07:08.815501   73294 certs.go:484] found cert: /home/jenkins/minikube-integration/19008-7755/.minikube/certs/cert.pem (1123 bytes)
	I0603 12:07:08.815529   73294 certs.go:484] found cert: /home/jenkins/minikube-integration/19008-7755/.minikube/certs/key.pem (1679 bytes)
	I0603 12:07:08.815581   73294 certs.go:484] found cert: /home/jenkins/minikube-integration/19008-7755/.minikube/files/etc/ssl/certs/150282.pem (1708 bytes)
	I0603 12:07:08.816420   73294 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0603 12:07:08.852241   73294 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0603 12:07:08.892369   73294 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0603 12:07:08.924242   73294 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0603 12:07:08.952908   73294 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/default-k8s-diff-port-196710/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0603 12:07:09.002060   73294 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/default-k8s-diff-port-196710/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0603 12:07:09.035617   73294 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/default-k8s-diff-port-196710/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0603 12:07:09.063304   73294 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/default-k8s-diff-port-196710/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0603 12:07:09.090994   73294 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/certs/15028.pem --> /usr/share/ca-certificates/15028.pem (1338 bytes)
	I0603 12:07:09.122568   73294 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/files/etc/ssl/certs/150282.pem --> /usr/share/ca-certificates/150282.pem (1708 bytes)
	I0603 12:07:09.150432   73294 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0603 12:07:09.178940   73294 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0603 12:07:09.202491   73294 ssh_runner.go:195] Run: openssl version
	I0603 12:07:09.211182   73294 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15028.pem && ln -fs /usr/share/ca-certificates/15028.pem /etc/ssl/certs/15028.pem"
	I0603 12:07:09.226290   73294 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15028.pem
	I0603 12:07:09.232034   73294 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jun  3 10:51 /usr/share/ca-certificates/15028.pem
	I0603 12:07:09.232103   73294 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15028.pem
	I0603 12:07:09.240592   73294 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/15028.pem /etc/ssl/certs/51391683.0"
	I0603 12:07:09.255018   73294 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/150282.pem && ln -fs /usr/share/ca-certificates/150282.pem /etc/ssl/certs/150282.pem"
	I0603 12:07:09.267194   73294 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/150282.pem
	I0603 12:07:09.272575   73294 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jun  3 10:51 /usr/share/ca-certificates/150282.pem
	I0603 12:07:09.272658   73294 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/150282.pem
	I0603 12:07:09.280687   73294 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/150282.pem /etc/ssl/certs/3ec20f2e.0"
	I0603 12:07:09.296232   73294 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0603 12:07:09.309706   73294 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0603 12:07:09.315596   73294 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jun  3 10:39 /usr/share/ca-certificates/minikubeCA.pem
	I0603 12:07:09.315661   73294 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0603 12:07:09.323283   73294 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0603 12:07:09.337780   73294 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0603 12:07:09.343627   73294 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0603 12:07:09.351742   73294 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0603 12:07:09.360465   73294 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0603 12:07:09.366733   73294 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0603 12:07:09.373061   73294 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0603 12:07:09.379649   73294 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0603 12:07:09.385610   73294 kubeadm.go:391] StartCluster: {Name:default-k8s-diff-port-196710 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVe
rsion:v1.30.1 ClusterName:default-k8s-diff-port-196710 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.60 Port:8444 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:
26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0603 12:07:09.385694   73294 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0603 12:07:09.385732   73294 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0603 12:07:09.434544   73294 cri.go:89] found id: ""
	I0603 12:07:09.434636   73294 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0603 12:07:09.446209   73294 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0603 12:07:09.446231   73294 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0603 12:07:09.446236   73294 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0603 12:07:09.446283   73294 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0603 12:07:09.456225   73294 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0603 12:07:09.457266   73294 kubeconfig.go:125] found "default-k8s-diff-port-196710" server: "https://192.168.61.60:8444"
	I0603 12:07:09.459519   73294 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0603 12:07:09.468977   73294 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.61.60
	I0603 12:07:09.469007   73294 kubeadm.go:1154] stopping kube-system containers ...
	I0603 12:07:09.469020   73294 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0603 12:07:09.469070   73294 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0603 12:07:09.508306   73294 cri.go:89] found id: ""
	I0603 12:07:09.508408   73294 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0603 12:07:09.526082   73294 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0603 12:07:09.536331   73294 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0603 12:07:09.536361   73294 kubeadm.go:156] found existing configuration files:
	
	I0603 12:07:09.536430   73294 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0603 12:07:09.549053   73294 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0603 12:07:09.549121   73294 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0603 12:07:09.562617   73294 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0603 12:07:09.574968   73294 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0603 12:07:09.575023   73294 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0603 12:07:05.287726   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | domain old-k8s-version-905554 has defined MAC address 52:54:00:3d:ed:07 in network mk-old-k8s-version-905554
	I0603 12:07:05.288228   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | unable to find current IP address of domain old-k8s-version-905554 in network mk-old-k8s-version-905554
	I0603 12:07:05.288264   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | I0603 12:07:05.288180   74674 retry.go:31] will retry after 401.575259ms: waiting for machine to come up
	I0603 12:07:05.691523   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | domain old-k8s-version-905554 has defined MAC address 52:54:00:3d:ed:07 in network mk-old-k8s-version-905554
	I0603 12:07:05.691945   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | unable to find current IP address of domain old-k8s-version-905554 in network mk-old-k8s-version-905554
	I0603 12:07:05.691977   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | I0603 12:07:05.691899   74674 retry.go:31] will retry after 473.67071ms: waiting for machine to come up
	I0603 12:07:06.167720   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | domain old-k8s-version-905554 has defined MAC address 52:54:00:3d:ed:07 in network mk-old-k8s-version-905554
	I0603 12:07:06.168286   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | unable to find current IP address of domain old-k8s-version-905554 in network mk-old-k8s-version-905554
	I0603 12:07:06.168317   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | I0603 12:07:06.168229   74674 retry.go:31] will retry after 610.631851ms: waiting for machine to come up
	I0603 12:07:06.780253   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | domain old-k8s-version-905554 has defined MAC address 52:54:00:3d:ed:07 in network mk-old-k8s-version-905554
	I0603 12:07:06.780747   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | unable to find current IP address of domain old-k8s-version-905554 in network mk-old-k8s-version-905554
	I0603 12:07:06.780771   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | I0603 12:07:06.780699   74674 retry.go:31] will retry after 1.150068976s: waiting for machine to come up
	I0603 12:07:07.932831   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | domain old-k8s-version-905554 has defined MAC address 52:54:00:3d:ed:07 in network mk-old-k8s-version-905554
	I0603 12:07:07.933375   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | unable to find current IP address of domain old-k8s-version-905554 in network mk-old-k8s-version-905554
	I0603 12:07:07.933409   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | I0603 12:07:07.933282   74674 retry.go:31] will retry after 900.546424ms: waiting for machine to come up
	I0603 12:07:08.835303   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | domain old-k8s-version-905554 has defined MAC address 52:54:00:3d:ed:07 in network mk-old-k8s-version-905554
	I0603 12:07:08.835794   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | unable to find current IP address of domain old-k8s-version-905554 in network mk-old-k8s-version-905554
	I0603 12:07:08.835827   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | I0603 12:07:08.835739   74674 retry.go:31] will retry after 1.64990511s: waiting for machine to come up
	I0603 12:07:08.960402   73179 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0603 12:07:08.971814   73179 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0603 12:07:08.989522   73179 system_pods.go:43] waiting for kube-system pods to appear ...
	I0603 12:07:09.001926   73179 system_pods.go:59] 8 kube-system pods found
	I0603 12:07:09.001960   73179 system_pods.go:61] "coredns-7db6d8ff4d-pv665" [58d7a423-2ac7-4a57-a76f-e8dfaeac9732] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0603 12:07:09.001975   73179 system_pods.go:61] "etcd-no-preload-602118" [3a6a1eb1-0234-47d8-8eaa-e6f2de5fc7b8] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0603 12:07:09.001987   73179 system_pods.go:61] "kube-apiserver-no-preload-602118" [d6b168b3-1605-4e04-8c6a-c5c22a080a10] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0603 12:07:09.001998   73179 system_pods.go:61] "kube-controller-manager-no-preload-602118" [b045e945-f022-443d-b0f6-17f0b283f8fa] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0603 12:07:09.002010   73179 system_pods.go:61] "kube-proxy-r9fkt" [10eef751-51d7-4794-9805-26587a395a5b] Running
	I0603 12:07:09.002019   73179 system_pods.go:61] "kube-scheduler-no-preload-602118" [2032b4c9-ff95-4435-bbb2-ad6f87598555] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0603 12:07:09.002030   73179 system_pods.go:61] "metrics-server-569cc877fc-jgjzt" [ac1aac82-0d34-47e1-b9c5-4f1f501c8bd0] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0603 12:07:09.002035   73179 system_pods.go:61] "storage-provisioner" [6d38abd9-e1e6-4e71-b96f-4653971b511f] Running
	I0603 12:07:09.002044   73179 system_pods.go:74] duration metric: took 12.497722ms to wait for pod list to return data ...
	I0603 12:07:09.002059   73179 node_conditions.go:102] verifying NodePressure condition ...
	I0603 12:07:09.005347   73179 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0603 12:07:09.005374   73179 node_conditions.go:123] node cpu capacity is 2
	I0603 12:07:09.005394   73179 node_conditions.go:105] duration metric: took 3.3294ms to run NodePressure ...
	I0603 12:07:09.005414   73179 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0603 12:07:09.274344   73179 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0603 12:07:09.280021   73179 kubeadm.go:733] kubelet initialised
	I0603 12:07:09.280042   73179 kubeadm.go:734] duration metric: took 5.676641ms waiting for restarted kubelet to initialise ...
	I0603 12:07:09.280056   73179 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0603 12:07:09.285090   73179 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-pv665" in "kube-system" namespace to be "Ready" ...
	I0603 12:07:09.290457   73179 pod_ready.go:97] node "no-preload-602118" hosting pod "coredns-7db6d8ff4d-pv665" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-602118" has status "Ready":"False"
	I0603 12:07:09.290478   73179 pod_ready.go:81] duration metric: took 5.366255ms for pod "coredns-7db6d8ff4d-pv665" in "kube-system" namespace to be "Ready" ...
	E0603 12:07:09.290487   73179 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-602118" hosting pod "coredns-7db6d8ff4d-pv665" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-602118" has status "Ready":"False"
	I0603 12:07:09.290495   73179 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-602118" in "kube-system" namespace to be "Ready" ...
	I0603 12:07:09.296847   73179 pod_ready.go:97] node "no-preload-602118" hosting pod "etcd-no-preload-602118" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-602118" has status "Ready":"False"
	I0603 12:07:09.296872   73179 pod_ready.go:81] duration metric: took 6.368777ms for pod "etcd-no-preload-602118" in "kube-system" namespace to be "Ready" ...
	E0603 12:07:09.296883   73179 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-602118" hosting pod "etcd-no-preload-602118" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-602118" has status "Ready":"False"
	I0603 12:07:09.296895   73179 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-602118" in "kube-system" namespace to be "Ready" ...
	I0603 12:07:09.300895   73179 pod_ready.go:97] node "no-preload-602118" hosting pod "kube-apiserver-no-preload-602118" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-602118" has status "Ready":"False"
	I0603 12:07:09.300914   73179 pod_ready.go:81] duration metric: took 4.012614ms for pod "kube-apiserver-no-preload-602118" in "kube-system" namespace to be "Ready" ...
	E0603 12:07:09.300922   73179 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-602118" hosting pod "kube-apiserver-no-preload-602118" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-602118" has status "Ready":"False"
	I0603 12:07:09.300927   73179 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-602118" in "kube-system" namespace to be "Ready" ...
	I0603 12:07:09.394237   73179 pod_ready.go:97] node "no-preload-602118" hosting pod "kube-controller-manager-no-preload-602118" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-602118" has status "Ready":"False"
	I0603 12:07:09.394267   73179 pod_ready.go:81] duration metric: took 93.331406ms for pod "kube-controller-manager-no-preload-602118" in "kube-system" namespace to be "Ready" ...
	E0603 12:07:09.394280   73179 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-602118" hosting pod "kube-controller-manager-no-preload-602118" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-602118" has status "Ready":"False"
	I0603 12:07:09.394289   73179 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-r9fkt" in "kube-system" namespace to be "Ready" ...
	I0603 12:07:09.585502   73294 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0603 12:07:09.969462   73294 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0603 12:07:09.969522   73294 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0603 12:07:09.979025   73294 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0603 12:07:09.987866   73294 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0603 12:07:09.987920   73294 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0603 12:07:09.997090   73294 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0603 12:07:10.006350   73294 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0603 12:07:10.214287   73294 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0603 12:07:11.298009   73294 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.083680634s)
	I0603 12:07:11.298064   73294 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0603 12:07:11.562011   73294 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0603 12:07:11.680895   73294 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0603 12:07:11.790078   73294 api_server.go:52] waiting for apiserver process to appear ...
	I0603 12:07:11.790166   73294 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:07:12.291115   73294 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:07:12.790366   73294 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:07:12.840813   73294 api_server.go:72] duration metric: took 1.050741427s to wait for apiserver process to appear ...
	I0603 12:07:12.840845   73294 api_server.go:88] waiting for apiserver healthz status ...
	I0603 12:07:12.840869   73294 api_server.go:253] Checking apiserver healthz at https://192.168.61.60:8444/healthz ...
	I0603 12:07:12.841376   73294 api_server.go:269] stopped: https://192.168.61.60:8444/healthz: Get "https://192.168.61.60:8444/healthz": dial tcp 192.168.61.60:8444: connect: connection refused
	I0603 12:07:13.341000   73294 api_server.go:253] Checking apiserver healthz at https://192.168.61.60:8444/healthz ...
	I0603 12:07:10.487141   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | domain old-k8s-version-905554 has defined MAC address 52:54:00:3d:ed:07 in network mk-old-k8s-version-905554
	I0603 12:07:10.564570   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | unable to find current IP address of domain old-k8s-version-905554 in network mk-old-k8s-version-905554
	I0603 12:07:10.564611   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | I0603 12:07:10.487617   74674 retry.go:31] will retry after 1.948227414s: waiting for machine to come up
	I0603 12:07:12.438091   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | domain old-k8s-version-905554 has defined MAC address 52:54:00:3d:ed:07 in network mk-old-k8s-version-905554
	I0603 12:07:12.438596   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | unable to find current IP address of domain old-k8s-version-905554 in network mk-old-k8s-version-905554
	I0603 12:07:12.438620   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | I0603 12:07:12.438540   74674 retry.go:31] will retry after 2.378980516s: waiting for machine to come up
	I0603 12:07:14.819161   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | domain old-k8s-version-905554 has defined MAC address 52:54:00:3d:ed:07 in network mk-old-k8s-version-905554
	I0603 12:07:14.819782   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | unable to find current IP address of domain old-k8s-version-905554 in network mk-old-k8s-version-905554
	I0603 12:07:14.819806   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | I0603 12:07:14.819722   74674 retry.go:31] will retry after 2.362614226s: waiting for machine to come up
	I0603 12:07:11.067879   73179 pod_ready.go:92] pod "kube-proxy-r9fkt" in "kube-system" namespace has status "Ready":"True"
	I0603 12:07:11.067907   73179 pod_ready.go:81] duration metric: took 1.673607925s for pod "kube-proxy-r9fkt" in "kube-system" namespace to be "Ready" ...
	I0603 12:07:11.067922   73179 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-602118" in "kube-system" namespace to be "Ready" ...
	I0603 12:07:13.078490   73179 pod_ready.go:102] pod "kube-scheduler-no-preload-602118" in "kube-system" namespace has status "Ready":"False"
	I0603 12:07:15.451457   73294 api_server.go:279] https://192.168.61.60:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0603 12:07:15.451491   73294 api_server.go:103] status: https://192.168.61.60:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0603 12:07:15.451509   73294 api_server.go:253] Checking apiserver healthz at https://192.168.61.60:8444/healthz ...
	I0603 12:07:15.474239   73294 api_server.go:279] https://192.168.61.60:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0603 12:07:15.474272   73294 api_server.go:103] status: https://192.168.61.60:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0603 12:07:15.841786   73294 api_server.go:253] Checking apiserver healthz at https://192.168.61.60:8444/healthz ...
	I0603 12:07:15.846026   73294 api_server.go:279] https://192.168.61.60:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0603 12:07:15.846051   73294 api_server.go:103] status: https://192.168.61.60:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0603 12:07:16.341687   73294 api_server.go:253] Checking apiserver healthz at https://192.168.61.60:8444/healthz ...
	I0603 12:07:16.348062   73294 api_server.go:279] https://192.168.61.60:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0603 12:07:16.348097   73294 api_server.go:103] status: https://192.168.61.60:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0603 12:07:16.841677   73294 api_server.go:253] Checking apiserver healthz at https://192.168.61.60:8444/healthz ...
	I0603 12:07:16.851931   73294 api_server.go:279] https://192.168.61.60:8444/healthz returned 200:
	ok
	I0603 12:07:16.861724   73294 api_server.go:141] control plane version: v1.30.1
	I0603 12:07:16.861752   73294 api_server.go:131] duration metric: took 4.020899633s to wait for apiserver health ...
	I0603 12:07:16.861762   73294 cni.go:84] Creating CNI manager for ""
	I0603 12:07:16.861782   73294 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0603 12:07:16.863553   73294 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0603 12:07:16.864875   73294 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0603 12:07:16.875581   73294 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0603 12:07:16.895092   73294 system_pods.go:43] waiting for kube-system pods to appear ...
	I0603 12:07:16.906573   73294 system_pods.go:59] 8 kube-system pods found
	I0603 12:07:16.906609   73294 system_pods.go:61] "coredns-7db6d8ff4d-wrw9f" [0125eb3a-9a5a-4bb3-a175-0e49b4392d1e] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0603 12:07:16.906621   73294 system_pods.go:61] "etcd-default-k8s-diff-port-196710" [2189cad5-b6e7-4cc5-9ce8-22ba18abce59] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0603 12:07:16.906631   73294 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-196710" [1aee234a-8876-4594-a0d6-7c7dfb7f4d3e] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0603 12:07:16.906640   73294 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-196710" [18029d80-921c-477c-a82f-26eb1a068b97] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0603 12:07:16.906650   73294 system_pods.go:61] "kube-proxy-84l9f" [5568c7a8-5237-4240-a9dc-6436b156010c] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0603 12:07:16.906673   73294 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-196710" [9fafec03-b5fb-4ea4-98df-0798cd8a01a1] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0603 12:07:16.906681   73294 system_pods.go:61] "metrics-server-569cc877fc-tnhbj" [352fbe10-2f52-434e-91fc-84fbf439a42d] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0603 12:07:16.906690   73294 system_pods.go:61] "storage-provisioner" [24c5e290-d3d7-4523-9432-c7591fa95e18] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0603 12:07:16.906700   73294 system_pods.go:74] duration metric: took 11.592885ms to wait for pod list to return data ...
	I0603 12:07:16.906719   73294 node_conditions.go:102] verifying NodePressure condition ...
	I0603 12:07:16.910038   73294 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0603 12:07:16.910065   73294 node_conditions.go:123] node cpu capacity is 2
	I0603 12:07:16.910079   73294 node_conditions.go:105] duration metric: took 3.350705ms to run NodePressure ...
	I0603 12:07:16.910101   73294 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0603 12:07:17.203847   73294 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0603 12:07:17.208169   73294 kubeadm.go:733] kubelet initialised
	I0603 12:07:17.208196   73294 kubeadm.go:734] duration metric: took 4.31857ms waiting for restarted kubelet to initialise ...
	I0603 12:07:17.208206   73294 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0603 12:07:17.213480   73294 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-wrw9f" in "kube-system" namespace to be "Ready" ...
	I0603 12:07:17.227906   73294 pod_ready.go:97] node "default-k8s-diff-port-196710" hosting pod "coredns-7db6d8ff4d-wrw9f" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-196710" has status "Ready":"False"
	I0603 12:07:17.227931   73294 pod_ready.go:81] duration metric: took 14.426593ms for pod "coredns-7db6d8ff4d-wrw9f" in "kube-system" namespace to be "Ready" ...
	E0603 12:07:17.227941   73294 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-196710" hosting pod "coredns-7db6d8ff4d-wrw9f" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-196710" has status "Ready":"False"
	I0603 12:07:17.227949   73294 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-196710" in "kube-system" namespace to be "Ready" ...
	I0603 12:07:17.231837   73294 pod_ready.go:97] node "default-k8s-diff-port-196710" hosting pod "etcd-default-k8s-diff-port-196710" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-196710" has status "Ready":"False"
	I0603 12:07:17.231867   73294 pod_ready.go:81] duration metric: took 3.906779ms for pod "etcd-default-k8s-diff-port-196710" in "kube-system" namespace to be "Ready" ...
	E0603 12:07:17.231881   73294 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-196710" hosting pod "etcd-default-k8s-diff-port-196710" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-196710" has status "Ready":"False"
	I0603 12:07:17.231890   73294 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-196710" in "kube-system" namespace to be "Ready" ...
	I0603 12:07:17.238497   73294 pod_ready.go:97] node "default-k8s-diff-port-196710" hosting pod "kube-apiserver-default-k8s-diff-port-196710" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-196710" has status "Ready":"False"
	I0603 12:07:17.238525   73294 pod_ready.go:81] duration metric: took 6.62644ms for pod "kube-apiserver-default-k8s-diff-port-196710" in "kube-system" namespace to be "Ready" ...
	E0603 12:07:17.238537   73294 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-196710" hosting pod "kube-apiserver-default-k8s-diff-port-196710" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-196710" has status "Ready":"False"
	I0603 12:07:17.238557   73294 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-196710" in "kube-system" namespace to be "Ready" ...
	I0603 12:07:17.298265   73294 pod_ready.go:97] node "default-k8s-diff-port-196710" hosting pod "kube-controller-manager-default-k8s-diff-port-196710" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-196710" has status "Ready":"False"
	I0603 12:07:17.298293   73294 pod_ready.go:81] duration metric: took 59.722372ms for pod "kube-controller-manager-default-k8s-diff-port-196710" in "kube-system" namespace to be "Ready" ...
	E0603 12:07:17.298303   73294 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-196710" hosting pod "kube-controller-manager-default-k8s-diff-port-196710" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-196710" has status "Ready":"False"
	I0603 12:07:17.298310   73294 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-84l9f" in "kube-system" namespace to be "Ready" ...
	I0603 12:07:18.098358   73294 pod_ready.go:92] pod "kube-proxy-84l9f" in "kube-system" namespace has status "Ready":"True"
	I0603 12:07:18.098388   73294 pod_ready.go:81] duration metric: took 800.069928ms for pod "kube-proxy-84l9f" in "kube-system" namespace to be "Ready" ...
	I0603 12:07:18.098401   73294 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-196710" in "kube-system" namespace to be "Ready" ...
	I0603 12:07:17.184410   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | domain old-k8s-version-905554 has defined MAC address 52:54:00:3d:ed:07 in network mk-old-k8s-version-905554
	I0603 12:07:17.184937   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | unable to find current IP address of domain old-k8s-version-905554 in network mk-old-k8s-version-905554
	I0603 12:07:17.184967   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | I0603 12:07:17.184893   74674 retry.go:31] will retry after 3.787322948s: waiting for machine to come up
	I0603 12:07:15.574365   73179 pod_ready.go:102] pod "kube-scheduler-no-preload-602118" in "kube-system" namespace has status "Ready":"False"
	I0603 12:07:17.575261   73179 pod_ready.go:102] pod "kube-scheduler-no-preload-602118" in "kube-system" namespace has status "Ready":"False"
	I0603 12:07:20.073582   73179 pod_ready.go:102] pod "kube-scheduler-no-preload-602118" in "kube-system" namespace has status "Ready":"False"
	I0603 12:07:22.423964   72964 start.go:364] duration metric: took 54.978859199s to acquireMachinesLock for "embed-certs-725022"
	I0603 12:07:22.424033   72964 start.go:96] Skipping create...Using existing machine configuration
	I0603 12:07:22.424044   72964 fix.go:54] fixHost starting: 
	I0603 12:07:22.424484   72964 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 12:07:22.424521   72964 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 12:07:22.446913   72964 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45395
	I0603 12:07:22.447356   72964 main.go:141] libmachine: () Calling .GetVersion
	I0603 12:07:22.447895   72964 main.go:141] libmachine: Using API Version  1
	I0603 12:07:22.447926   72964 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 12:07:22.448408   72964 main.go:141] libmachine: () Calling .GetMachineName
	I0603 12:07:22.448648   72964 main.go:141] libmachine: (embed-certs-725022) Calling .DriverName
	I0603 12:07:22.448838   72964 main.go:141] libmachine: (embed-certs-725022) Calling .GetState
	I0603 12:07:22.450953   72964 fix.go:112] recreateIfNeeded on embed-certs-725022: state=Stopped err=<nil>
	I0603 12:07:22.450977   72964 main.go:141] libmachine: (embed-certs-725022) Calling .DriverName
	W0603 12:07:22.451199   72964 fix.go:138] unexpected machine state, will restart: <nil>
	I0603 12:07:22.513348   72964 out.go:177] * Restarting existing kvm2 VM for "embed-certs-725022" ...
	I0603 12:07:20.975695   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | domain old-k8s-version-905554 has defined MAC address 52:54:00:3d:ed:07 in network mk-old-k8s-version-905554
	I0603 12:07:20.976290   73662 main.go:141] libmachine: (old-k8s-version-905554) Found IP for machine: 192.168.39.155
	I0603 12:07:20.976345   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | domain old-k8s-version-905554 has current primary IP address 192.168.39.155 and MAC address 52:54:00:3d:ed:07 in network mk-old-k8s-version-905554
	I0603 12:07:20.976358   73662 main.go:141] libmachine: (old-k8s-version-905554) Reserving static IP address...
	I0603 12:07:20.976837   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | found host DHCP lease matching {name: "old-k8s-version-905554", mac: "52:54:00:3d:ed:07", ip: "192.168.39.155"} in network mk-old-k8s-version-905554: {Iface:virbr1 ExpiryTime:2024-06-03 13:07:14 +0000 UTC Type:0 Mac:52:54:00:3d:ed:07 Iaid: IPaddr:192.168.39.155 Prefix:24 Hostname:old-k8s-version-905554 Clientid:01:52:54:00:3d:ed:07}
	I0603 12:07:20.976864   73662 main.go:141] libmachine: (old-k8s-version-905554) Reserved static IP address: 192.168.39.155
	I0603 12:07:20.976883   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | skip adding static IP to network mk-old-k8s-version-905554 - found existing host DHCP lease matching {name: "old-k8s-version-905554", mac: "52:54:00:3d:ed:07", ip: "192.168.39.155"}
	I0603 12:07:20.976894   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | Getting to WaitForSSH function...
	I0603 12:07:20.976902   73662 main.go:141] libmachine: (old-k8s-version-905554) Waiting for SSH to be available...
	I0603 12:07:20.978969   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | domain old-k8s-version-905554 has defined MAC address 52:54:00:3d:ed:07 in network mk-old-k8s-version-905554
	I0603 12:07:20.979326   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:ed:07", ip: ""} in network mk-old-k8s-version-905554: {Iface:virbr1 ExpiryTime:2024-06-03 13:07:14 +0000 UTC Type:0 Mac:52:54:00:3d:ed:07 Iaid: IPaddr:192.168.39.155 Prefix:24 Hostname:old-k8s-version-905554 Clientid:01:52:54:00:3d:ed:07}
	I0603 12:07:20.979361   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | domain old-k8s-version-905554 has defined IP address 192.168.39.155 and MAC address 52:54:00:3d:ed:07 in network mk-old-k8s-version-905554
	I0603 12:07:20.979458   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | Using SSH client type: external
	I0603 12:07:20.979488   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | Using SSH private key: /home/jenkins/minikube-integration/19008-7755/.minikube/machines/old-k8s-version-905554/id_rsa (-rw-------)
	I0603 12:07:20.979525   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.155 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19008-7755/.minikube/machines/old-k8s-version-905554/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0603 12:07:20.979540   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | About to run SSH command:
	I0603 12:07:20.979564   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | exit 0
	I0603 12:07:21.103178   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | SSH cmd err, output: <nil>: 
	I0603 12:07:21.103557   73662 main.go:141] libmachine: (old-k8s-version-905554) Calling .GetConfigRaw
	I0603 12:07:21.104215   73662 main.go:141] libmachine: (old-k8s-version-905554) Calling .GetIP
	I0603 12:07:21.107017   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | domain old-k8s-version-905554 has defined MAC address 52:54:00:3d:ed:07 in network mk-old-k8s-version-905554
	I0603 12:07:21.107397   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:ed:07", ip: ""} in network mk-old-k8s-version-905554: {Iface:virbr1 ExpiryTime:2024-06-03 13:07:14 +0000 UTC Type:0 Mac:52:54:00:3d:ed:07 Iaid: IPaddr:192.168.39.155 Prefix:24 Hostname:old-k8s-version-905554 Clientid:01:52:54:00:3d:ed:07}
	I0603 12:07:21.107424   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | domain old-k8s-version-905554 has defined IP address 192.168.39.155 and MAC address 52:54:00:3d:ed:07 in network mk-old-k8s-version-905554
	I0603 12:07:21.107619   73662 profile.go:143] Saving config to /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/old-k8s-version-905554/config.json ...
	I0603 12:07:21.107782   73662 machine.go:94] provisionDockerMachine start ...
	I0603 12:07:21.107809   73662 main.go:141] libmachine: (old-k8s-version-905554) Calling .DriverName
	I0603 12:07:21.107979   73662 main.go:141] libmachine: (old-k8s-version-905554) Calling .GetSSHHostname
	I0603 12:07:21.110021   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | domain old-k8s-version-905554 has defined MAC address 52:54:00:3d:ed:07 in network mk-old-k8s-version-905554
	I0603 12:07:21.110389   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:ed:07", ip: ""} in network mk-old-k8s-version-905554: {Iface:virbr1 ExpiryTime:2024-06-03 13:07:14 +0000 UTC Type:0 Mac:52:54:00:3d:ed:07 Iaid: IPaddr:192.168.39.155 Prefix:24 Hostname:old-k8s-version-905554 Clientid:01:52:54:00:3d:ed:07}
	I0603 12:07:21.110414   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | domain old-k8s-version-905554 has defined IP address 192.168.39.155 and MAC address 52:54:00:3d:ed:07 in network mk-old-k8s-version-905554
	I0603 12:07:21.110540   73662 main.go:141] libmachine: (old-k8s-version-905554) Calling .GetSSHPort
	I0603 12:07:21.110719   73662 main.go:141] libmachine: (old-k8s-version-905554) Calling .GetSSHKeyPath
	I0603 12:07:21.110880   73662 main.go:141] libmachine: (old-k8s-version-905554) Calling .GetSSHKeyPath
	I0603 12:07:21.111026   73662 main.go:141] libmachine: (old-k8s-version-905554) Calling .GetSSHUsername
	I0603 12:07:21.111239   73662 main.go:141] libmachine: Using SSH client type: native
	I0603 12:07:21.111467   73662 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.155 22 <nil> <nil>}
	I0603 12:07:21.111484   73662 main.go:141] libmachine: About to run SSH command:
	hostname
	I0603 12:07:21.219123   73662 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0603 12:07:21.219148   73662 main.go:141] libmachine: (old-k8s-version-905554) Calling .GetMachineName
	I0603 12:07:21.219379   73662 buildroot.go:166] provisioning hostname "old-k8s-version-905554"
	I0603 12:07:21.219403   73662 main.go:141] libmachine: (old-k8s-version-905554) Calling .GetMachineName
	I0603 12:07:21.219571   73662 main.go:141] libmachine: (old-k8s-version-905554) Calling .GetSSHHostname
	I0603 12:07:21.222603   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | domain old-k8s-version-905554 has defined MAC address 52:54:00:3d:ed:07 in network mk-old-k8s-version-905554
	I0603 12:07:21.223000   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:ed:07", ip: ""} in network mk-old-k8s-version-905554: {Iface:virbr1 ExpiryTime:2024-06-03 13:07:14 +0000 UTC Type:0 Mac:52:54:00:3d:ed:07 Iaid: IPaddr:192.168.39.155 Prefix:24 Hostname:old-k8s-version-905554 Clientid:01:52:54:00:3d:ed:07}
	I0603 12:07:21.223058   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | domain old-k8s-version-905554 has defined IP address 192.168.39.155 and MAC address 52:54:00:3d:ed:07 in network mk-old-k8s-version-905554
	I0603 12:07:21.223210   73662 main.go:141] libmachine: (old-k8s-version-905554) Calling .GetSSHPort
	I0603 12:07:21.223406   73662 main.go:141] libmachine: (old-k8s-version-905554) Calling .GetSSHKeyPath
	I0603 12:07:21.223573   73662 main.go:141] libmachine: (old-k8s-version-905554) Calling .GetSSHKeyPath
	I0603 12:07:21.223741   73662 main.go:141] libmachine: (old-k8s-version-905554) Calling .GetSSHUsername
	I0603 12:07:21.223926   73662 main.go:141] libmachine: Using SSH client type: native
	I0603 12:07:21.224087   73662 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.155 22 <nil> <nil>}
	I0603 12:07:21.224099   73662 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-905554 && echo "old-k8s-version-905554" | sudo tee /etc/hostname
	I0603 12:07:21.346108   73662 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-905554
	
	I0603 12:07:21.346135   73662 main.go:141] libmachine: (old-k8s-version-905554) Calling .GetSSHHostname
	I0603 12:07:21.348801   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | domain old-k8s-version-905554 has defined MAC address 52:54:00:3d:ed:07 in network mk-old-k8s-version-905554
	I0603 12:07:21.349099   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:ed:07", ip: ""} in network mk-old-k8s-version-905554: {Iface:virbr1 ExpiryTime:2024-06-03 13:07:14 +0000 UTC Type:0 Mac:52:54:00:3d:ed:07 Iaid: IPaddr:192.168.39.155 Prefix:24 Hostname:old-k8s-version-905554 Clientid:01:52:54:00:3d:ed:07}
	I0603 12:07:21.349129   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | domain old-k8s-version-905554 has defined IP address 192.168.39.155 and MAC address 52:54:00:3d:ed:07 in network mk-old-k8s-version-905554
	I0603 12:07:21.349295   73662 main.go:141] libmachine: (old-k8s-version-905554) Calling .GetSSHPort
	I0603 12:07:21.349498   73662 main.go:141] libmachine: (old-k8s-version-905554) Calling .GetSSHKeyPath
	I0603 12:07:21.349680   73662 main.go:141] libmachine: (old-k8s-version-905554) Calling .GetSSHKeyPath
	I0603 12:07:21.349849   73662 main.go:141] libmachine: (old-k8s-version-905554) Calling .GetSSHUsername
	I0603 12:07:21.350036   73662 main.go:141] libmachine: Using SSH client type: native
	I0603 12:07:21.350187   73662 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.155 22 <nil> <nil>}
	I0603 12:07:21.350204   73662 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-905554' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-905554/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-905554' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0603 12:07:21.467941   73662 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0603 12:07:21.467970   73662 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19008-7755/.minikube CaCertPath:/home/jenkins/minikube-integration/19008-7755/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19008-7755/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19008-7755/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19008-7755/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19008-7755/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19008-7755/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19008-7755/.minikube}
	I0603 12:07:21.467999   73662 buildroot.go:174] setting up certificates
	I0603 12:07:21.468008   73662 provision.go:84] configureAuth start
	I0603 12:07:21.468021   73662 main.go:141] libmachine: (old-k8s-version-905554) Calling .GetMachineName
	I0603 12:07:21.468308   73662 main.go:141] libmachine: (old-k8s-version-905554) Calling .GetIP
	I0603 12:07:21.470801   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | domain old-k8s-version-905554 has defined MAC address 52:54:00:3d:ed:07 in network mk-old-k8s-version-905554
	I0603 12:07:21.471158   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:ed:07", ip: ""} in network mk-old-k8s-version-905554: {Iface:virbr1 ExpiryTime:2024-06-03 13:07:14 +0000 UTC Type:0 Mac:52:54:00:3d:ed:07 Iaid: IPaddr:192.168.39.155 Prefix:24 Hostname:old-k8s-version-905554 Clientid:01:52:54:00:3d:ed:07}
	I0603 12:07:21.471185   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | domain old-k8s-version-905554 has defined IP address 192.168.39.155 and MAC address 52:54:00:3d:ed:07 in network mk-old-k8s-version-905554
	I0603 12:07:21.471336   73662 main.go:141] libmachine: (old-k8s-version-905554) Calling .GetSSHHostname
	I0603 12:07:21.473733   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | domain old-k8s-version-905554 has defined MAC address 52:54:00:3d:ed:07 in network mk-old-k8s-version-905554
	I0603 12:07:21.474058   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:ed:07", ip: ""} in network mk-old-k8s-version-905554: {Iface:virbr1 ExpiryTime:2024-06-03 13:07:14 +0000 UTC Type:0 Mac:52:54:00:3d:ed:07 Iaid: IPaddr:192.168.39.155 Prefix:24 Hostname:old-k8s-version-905554 Clientid:01:52:54:00:3d:ed:07}
	I0603 12:07:21.474092   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | domain old-k8s-version-905554 has defined IP address 192.168.39.155 and MAC address 52:54:00:3d:ed:07 in network mk-old-k8s-version-905554
	I0603 12:07:21.474276   73662 provision.go:143] copyHostCerts
	I0603 12:07:21.474355   73662 exec_runner.go:144] found /home/jenkins/minikube-integration/19008-7755/.minikube/ca.pem, removing ...
	I0603 12:07:21.474370   73662 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19008-7755/.minikube/ca.pem
	I0603 12:07:21.474429   73662 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19008-7755/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19008-7755/.minikube/ca.pem (1082 bytes)
	I0603 12:07:21.474534   73662 exec_runner.go:144] found /home/jenkins/minikube-integration/19008-7755/.minikube/cert.pem, removing ...
	I0603 12:07:21.474546   73662 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19008-7755/.minikube/cert.pem
	I0603 12:07:21.474577   73662 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19008-7755/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19008-7755/.minikube/cert.pem (1123 bytes)
	I0603 12:07:21.474645   73662 exec_runner.go:144] found /home/jenkins/minikube-integration/19008-7755/.minikube/key.pem, removing ...
	I0603 12:07:21.474654   73662 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19008-7755/.minikube/key.pem
	I0603 12:07:21.474680   73662 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19008-7755/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19008-7755/.minikube/key.pem (1679 bytes)
	I0603 12:07:21.474738   73662 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19008-7755/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19008-7755/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19008-7755/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-905554 san=[127.0.0.1 192.168.39.155 localhost minikube old-k8s-version-905554]
	I0603 12:07:21.720184   73662 provision.go:177] copyRemoteCerts
	I0603 12:07:21.720251   73662 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0603 12:07:21.720284   73662 main.go:141] libmachine: (old-k8s-version-905554) Calling .GetSSHHostname
	I0603 12:07:21.723338   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | domain old-k8s-version-905554 has defined MAC address 52:54:00:3d:ed:07 in network mk-old-k8s-version-905554
	I0603 12:07:21.723752   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:ed:07", ip: ""} in network mk-old-k8s-version-905554: {Iface:virbr1 ExpiryTime:2024-06-03 13:07:14 +0000 UTC Type:0 Mac:52:54:00:3d:ed:07 Iaid: IPaddr:192.168.39.155 Prefix:24 Hostname:old-k8s-version-905554 Clientid:01:52:54:00:3d:ed:07}
	I0603 12:07:21.723786   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | domain old-k8s-version-905554 has defined IP address 192.168.39.155 and MAC address 52:54:00:3d:ed:07 in network mk-old-k8s-version-905554
	I0603 12:07:21.723993   73662 main.go:141] libmachine: (old-k8s-version-905554) Calling .GetSSHPort
	I0603 12:07:21.724208   73662 main.go:141] libmachine: (old-k8s-version-905554) Calling .GetSSHKeyPath
	I0603 12:07:21.724394   73662 main.go:141] libmachine: (old-k8s-version-905554) Calling .GetSSHUsername
	I0603 12:07:21.724615   73662 sshutil.go:53] new ssh client: &{IP:192.168.39.155 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19008-7755/.minikube/machines/old-k8s-version-905554/id_rsa Username:docker}
	I0603 12:07:21.809640   73662 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0603 12:07:21.834750   73662 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0603 12:07:21.858691   73662 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0603 12:07:21.887839   73662 provision.go:87] duration metric: took 419.817381ms to configureAuth
	I0603 12:07:21.887871   73662 buildroot.go:189] setting minikube options for container-runtime
	I0603 12:07:21.888061   73662 config.go:182] Loaded profile config "old-k8s-version-905554": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0603 12:07:21.888145   73662 main.go:141] libmachine: (old-k8s-version-905554) Calling .GetSSHHostname
	I0603 12:07:21.891350   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | domain old-k8s-version-905554 has defined MAC address 52:54:00:3d:ed:07 in network mk-old-k8s-version-905554
	I0603 12:07:21.891737   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:ed:07", ip: ""} in network mk-old-k8s-version-905554: {Iface:virbr1 ExpiryTime:2024-06-03 13:07:14 +0000 UTC Type:0 Mac:52:54:00:3d:ed:07 Iaid: IPaddr:192.168.39.155 Prefix:24 Hostname:old-k8s-version-905554 Clientid:01:52:54:00:3d:ed:07}
	I0603 12:07:21.891773   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | domain old-k8s-version-905554 has defined IP address 192.168.39.155 and MAC address 52:54:00:3d:ed:07 in network mk-old-k8s-version-905554
	I0603 12:07:21.891933   73662 main.go:141] libmachine: (old-k8s-version-905554) Calling .GetSSHPort
	I0603 12:07:21.892084   73662 main.go:141] libmachine: (old-k8s-version-905554) Calling .GetSSHKeyPath
	I0603 12:07:21.892278   73662 main.go:141] libmachine: (old-k8s-version-905554) Calling .GetSSHKeyPath
	I0603 12:07:21.892447   73662 main.go:141] libmachine: (old-k8s-version-905554) Calling .GetSSHUsername
	I0603 12:07:21.892608   73662 main.go:141] libmachine: Using SSH client type: native
	I0603 12:07:21.892822   73662 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.155 22 <nil> <nil>}
	I0603 12:07:21.892845   73662 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0603 12:07:22.173662   73662 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0603 12:07:22.173691   73662 machine.go:97] duration metric: took 1.065894044s to provisionDockerMachine
	I0603 12:07:22.173705   73662 start.go:293] postStartSetup for "old-k8s-version-905554" (driver="kvm2")
	I0603 12:07:22.173718   73662 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0603 12:07:22.173738   73662 main.go:141] libmachine: (old-k8s-version-905554) Calling .DriverName
	I0603 12:07:22.174119   73662 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0603 12:07:22.174154   73662 main.go:141] libmachine: (old-k8s-version-905554) Calling .GetSSHHostname
	I0603 12:07:22.176861   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | domain old-k8s-version-905554 has defined MAC address 52:54:00:3d:ed:07 in network mk-old-k8s-version-905554
	I0603 12:07:22.177152   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:ed:07", ip: ""} in network mk-old-k8s-version-905554: {Iface:virbr1 ExpiryTime:2024-06-03 13:07:14 +0000 UTC Type:0 Mac:52:54:00:3d:ed:07 Iaid: IPaddr:192.168.39.155 Prefix:24 Hostname:old-k8s-version-905554 Clientid:01:52:54:00:3d:ed:07}
	I0603 12:07:22.177184   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | domain old-k8s-version-905554 has defined IP address 192.168.39.155 and MAC address 52:54:00:3d:ed:07 in network mk-old-k8s-version-905554
	I0603 12:07:22.177325   73662 main.go:141] libmachine: (old-k8s-version-905554) Calling .GetSSHPort
	I0603 12:07:22.177505   73662 main.go:141] libmachine: (old-k8s-version-905554) Calling .GetSSHKeyPath
	I0603 12:07:22.177632   73662 main.go:141] libmachine: (old-k8s-version-905554) Calling .GetSSHUsername
	I0603 12:07:22.177764   73662 sshutil.go:53] new ssh client: &{IP:192.168.39.155 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19008-7755/.minikube/machines/old-k8s-version-905554/id_rsa Username:docker}
	I0603 12:07:22.263119   73662 ssh_runner.go:195] Run: cat /etc/os-release
	I0603 12:07:22.269815   73662 info.go:137] Remote host: Buildroot 2023.02.9
	I0603 12:07:22.269844   73662 filesync.go:126] Scanning /home/jenkins/minikube-integration/19008-7755/.minikube/addons for local assets ...
	I0603 12:07:22.269937   73662 filesync.go:126] Scanning /home/jenkins/minikube-integration/19008-7755/.minikube/files for local assets ...
	I0603 12:07:22.270041   73662 filesync.go:149] local asset: /home/jenkins/minikube-integration/19008-7755/.minikube/files/etc/ssl/certs/150282.pem -> 150282.pem in /etc/ssl/certs
	I0603 12:07:22.270320   73662 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0603 12:07:22.284032   73662 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/files/etc/ssl/certs/150282.pem --> /etc/ssl/certs/150282.pem (1708 bytes)
	I0603 12:07:22.309226   73662 start.go:296] duration metric: took 135.507592ms for postStartSetup
	I0603 12:07:22.309267   73662 fix.go:56] duration metric: took 19.425215079s for fixHost
	I0603 12:07:22.309291   73662 main.go:141] libmachine: (old-k8s-version-905554) Calling .GetSSHHostname
	I0603 12:07:22.311759   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | domain old-k8s-version-905554 has defined MAC address 52:54:00:3d:ed:07 in network mk-old-k8s-version-905554
	I0603 12:07:22.312031   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:ed:07", ip: ""} in network mk-old-k8s-version-905554: {Iface:virbr1 ExpiryTime:2024-06-03 13:07:14 +0000 UTC Type:0 Mac:52:54:00:3d:ed:07 Iaid: IPaddr:192.168.39.155 Prefix:24 Hostname:old-k8s-version-905554 Clientid:01:52:54:00:3d:ed:07}
	I0603 12:07:22.312062   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | domain old-k8s-version-905554 has defined IP address 192.168.39.155 and MAC address 52:54:00:3d:ed:07 in network mk-old-k8s-version-905554
	I0603 12:07:22.312244   73662 main.go:141] libmachine: (old-k8s-version-905554) Calling .GetSSHPort
	I0603 12:07:22.312436   73662 main.go:141] libmachine: (old-k8s-version-905554) Calling .GetSSHKeyPath
	I0603 12:07:22.312602   73662 main.go:141] libmachine: (old-k8s-version-905554) Calling .GetSSHKeyPath
	I0603 12:07:22.312740   73662 main.go:141] libmachine: (old-k8s-version-905554) Calling .GetSSHUsername
	I0603 12:07:22.312877   73662 main.go:141] libmachine: Using SSH client type: native
	I0603 12:07:22.313072   73662 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.155 22 <nil> <nil>}
	I0603 12:07:22.313088   73662 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0603 12:07:22.423838   73662 main.go:141] libmachine: SSH cmd err, output: <nil>: 1717416442.379680785
	
	I0603 12:07:22.423857   73662 fix.go:216] guest clock: 1717416442.379680785
	I0603 12:07:22.423864   73662 fix.go:229] Guest: 2024-06-03 12:07:22.379680785 +0000 UTC Remote: 2024-06-03 12:07:22.30927263 +0000 UTC m=+262.252197630 (delta=70.408155ms)
	I0603 12:07:22.423886   73662 fix.go:200] guest clock delta is within tolerance: 70.408155ms
	I0603 12:07:22.423892   73662 start.go:83] releasing machines lock for "old-k8s-version-905554", held for 19.539865965s
	I0603 12:07:22.423927   73662 main.go:141] libmachine: (old-k8s-version-905554) Calling .DriverName
	I0603 12:07:22.424202   73662 main.go:141] libmachine: (old-k8s-version-905554) Calling .GetIP
	I0603 12:07:22.427358   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | domain old-k8s-version-905554 has defined MAC address 52:54:00:3d:ed:07 in network mk-old-k8s-version-905554
	I0603 12:07:22.427799   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:ed:07", ip: ""} in network mk-old-k8s-version-905554: {Iface:virbr1 ExpiryTime:2024-06-03 13:07:14 +0000 UTC Type:0 Mac:52:54:00:3d:ed:07 Iaid: IPaddr:192.168.39.155 Prefix:24 Hostname:old-k8s-version-905554 Clientid:01:52:54:00:3d:ed:07}
	I0603 12:07:22.427833   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | domain old-k8s-version-905554 has defined IP address 192.168.39.155 and MAC address 52:54:00:3d:ed:07 in network mk-old-k8s-version-905554
	I0603 12:07:22.428006   73662 main.go:141] libmachine: (old-k8s-version-905554) Calling .DriverName
	I0603 12:07:22.428619   73662 main.go:141] libmachine: (old-k8s-version-905554) Calling .DriverName
	I0603 12:07:22.428817   73662 main.go:141] libmachine: (old-k8s-version-905554) Calling .DriverName
	I0603 12:07:22.428898   73662 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0603 12:07:22.428951   73662 main.go:141] libmachine: (old-k8s-version-905554) Calling .GetSSHHostname
	I0603 12:07:22.429242   73662 ssh_runner.go:195] Run: cat /version.json
	I0603 12:07:22.429269   73662 main.go:141] libmachine: (old-k8s-version-905554) Calling .GetSSHHostname
	I0603 12:07:22.431998   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | domain old-k8s-version-905554 has defined MAC address 52:54:00:3d:ed:07 in network mk-old-k8s-version-905554
	I0603 12:07:22.432244   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | domain old-k8s-version-905554 has defined MAC address 52:54:00:3d:ed:07 in network mk-old-k8s-version-905554
	I0603 12:07:22.432333   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:ed:07", ip: ""} in network mk-old-k8s-version-905554: {Iface:virbr1 ExpiryTime:2024-06-03 13:07:14 +0000 UTC Type:0 Mac:52:54:00:3d:ed:07 Iaid: IPaddr:192.168.39.155 Prefix:24 Hostname:old-k8s-version-905554 Clientid:01:52:54:00:3d:ed:07}
	I0603 12:07:22.432365   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | domain old-k8s-version-905554 has defined IP address 192.168.39.155 and MAC address 52:54:00:3d:ed:07 in network mk-old-k8s-version-905554
	I0603 12:07:22.432608   73662 main.go:141] libmachine: (old-k8s-version-905554) Calling .GetSSHPort
	I0603 12:07:22.432779   73662 main.go:141] libmachine: (old-k8s-version-905554) Calling .GetSSHKeyPath
	I0603 12:07:22.432797   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:ed:07", ip: ""} in network mk-old-k8s-version-905554: {Iface:virbr1 ExpiryTime:2024-06-03 13:07:14 +0000 UTC Type:0 Mac:52:54:00:3d:ed:07 Iaid: IPaddr:192.168.39.155 Prefix:24 Hostname:old-k8s-version-905554 Clientid:01:52:54:00:3d:ed:07}
	I0603 12:07:22.432818   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | domain old-k8s-version-905554 has defined IP address 192.168.39.155 and MAC address 52:54:00:3d:ed:07 in network mk-old-k8s-version-905554
	I0603 12:07:22.433032   73662 main.go:141] libmachine: (old-k8s-version-905554) Calling .GetSSHPort
	I0603 12:07:22.433044   73662 main.go:141] libmachine: (old-k8s-version-905554) Calling .GetSSHUsername
	I0603 12:07:22.433244   73662 sshutil.go:53] new ssh client: &{IP:192.168.39.155 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19008-7755/.minikube/machines/old-k8s-version-905554/id_rsa Username:docker}
	I0603 12:07:22.433260   73662 main.go:141] libmachine: (old-k8s-version-905554) Calling .GetSSHKeyPath
	I0603 12:07:22.433489   73662 main.go:141] libmachine: (old-k8s-version-905554) Calling .GetSSHUsername
	I0603 12:07:22.433629   73662 sshutil.go:53] new ssh client: &{IP:192.168.39.155 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19008-7755/.minikube/machines/old-k8s-version-905554/id_rsa Username:docker}
	I0603 12:07:22.512743   73662 ssh_runner.go:195] Run: systemctl --version
	I0603 12:07:22.538343   73662 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0603 12:07:22.691125   73662 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0603 12:07:22.697547   73662 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0603 12:07:22.697594   73662 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0603 12:07:22.714213   73662 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0603 12:07:22.714237   73662 start.go:494] detecting cgroup driver to use...
	I0603 12:07:22.714302   73662 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0603 12:07:22.735173   73662 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0603 12:07:22.749345   73662 docker.go:217] disabling cri-docker service (if available) ...
	I0603 12:07:22.749403   73662 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0603 12:07:22.763133   73662 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0603 12:07:22.776844   73662 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0603 12:07:22.906859   73662 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0603 12:07:23.071700   73662 docker.go:233] disabling docker service ...
	I0603 12:07:23.071767   73662 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0603 12:07:23.088439   73662 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0603 12:07:23.102097   73662 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0603 12:07:23.238693   73662 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0603 12:07:23.390561   73662 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0603 12:07:23.410039   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0603 12:07:23.434983   73662 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0603 12:07:23.435125   73662 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 12:07:23.448358   73662 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0603 12:07:23.448430   73662 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 12:07:23.460973   73662 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 12:07:23.473384   73662 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 12:07:23.486096   73662 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0603 12:07:23.498744   73662 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0603 12:07:23.510913   73662 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0603 12:07:23.510968   73662 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0603 12:07:23.527740   73662 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0603 12:07:23.542547   73662 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0603 12:07:23.719963   73662 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0603 12:07:23.875772   73662 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0603 12:07:23.875843   73662 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0603 12:07:23.882164   73662 start.go:562] Will wait 60s for crictl version
	I0603 12:07:23.882250   73662 ssh_runner.go:195] Run: which crictl
	I0603 12:07:23.886841   73662 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0603 12:07:23.933867   73662 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0603 12:07:23.933952   73662 ssh_runner.go:195] Run: crio --version
	I0603 12:07:23.965258   73662 ssh_runner.go:195] Run: crio --version
	I0603 12:07:23.995457   73662 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0603 12:07:20.104355   73294 pod_ready.go:102] pod "kube-scheduler-default-k8s-diff-port-196710" in "kube-system" namespace has status "Ready":"False"
	I0603 12:07:22.104808   73294 pod_ready.go:102] pod "kube-scheduler-default-k8s-diff-port-196710" in "kube-system" namespace has status "Ready":"False"
	I0603 12:07:23.106090   73294 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-196710" in "kube-system" namespace has status "Ready":"True"
	I0603 12:07:23.106109   73294 pod_ready.go:81] duration metric: took 5.007700483s for pod "kube-scheduler-default-k8s-diff-port-196710" in "kube-system" namespace to be "Ready" ...
	I0603 12:07:23.106118   73294 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace to be "Ready" ...
	I0603 12:07:22.514715   72964 main.go:141] libmachine: (embed-certs-725022) Calling .Start
	I0603 12:07:22.514937   72964 main.go:141] libmachine: (embed-certs-725022) Ensuring networks are active...
	I0603 12:07:22.515826   72964 main.go:141] libmachine: (embed-certs-725022) Ensuring network default is active
	I0603 12:07:22.516261   72964 main.go:141] libmachine: (embed-certs-725022) Ensuring network mk-embed-certs-725022 is active
	I0603 12:07:22.516748   72964 main.go:141] libmachine: (embed-certs-725022) Getting domain xml...
	I0603 12:07:22.517639   72964 main.go:141] libmachine: (embed-certs-725022) Creating domain...
	I0603 12:07:23.858964   72964 main.go:141] libmachine: (embed-certs-725022) Waiting to get IP...
	I0603 12:07:23.859920   72964 main.go:141] libmachine: (embed-certs-725022) DBG | domain embed-certs-725022 has defined MAC address 52:54:00:ba:41:8c in network mk-embed-certs-725022
	I0603 12:07:23.860386   72964 main.go:141] libmachine: (embed-certs-725022) DBG | unable to find current IP address of domain embed-certs-725022 in network mk-embed-certs-725022
	I0603 12:07:23.860418   72964 main.go:141] libmachine: (embed-certs-725022) DBG | I0603 12:07:23.860352   74834 retry.go:31] will retry after 246.280691ms: waiting for machine to come up
	I0603 12:07:24.108680   72964 main.go:141] libmachine: (embed-certs-725022) DBG | domain embed-certs-725022 has defined MAC address 52:54:00:ba:41:8c in network mk-embed-certs-725022
	I0603 12:07:24.109222   72964 main.go:141] libmachine: (embed-certs-725022) DBG | unable to find current IP address of domain embed-certs-725022 in network mk-embed-certs-725022
	I0603 12:07:24.109349   72964 main.go:141] libmachine: (embed-certs-725022) DBG | I0603 12:07:24.109272   74834 retry.go:31] will retry after 291.625816ms: waiting for machine to come up
	I0603 12:07:24.402895   72964 main.go:141] libmachine: (embed-certs-725022) DBG | domain embed-certs-725022 has defined MAC address 52:54:00:ba:41:8c in network mk-embed-certs-725022
	I0603 12:07:24.403357   72964 main.go:141] libmachine: (embed-certs-725022) DBG | unable to find current IP address of domain embed-certs-725022 in network mk-embed-certs-725022
	I0603 12:07:24.403383   72964 main.go:141] libmachine: (embed-certs-725022) DBG | I0603 12:07:24.403319   74834 retry.go:31] will retry after 466.605521ms: waiting for machine to come up
	I0603 12:07:24.872278   72964 main.go:141] libmachine: (embed-certs-725022) DBG | domain embed-certs-725022 has defined MAC address 52:54:00:ba:41:8c in network mk-embed-certs-725022
	I0603 12:07:24.872823   72964 main.go:141] libmachine: (embed-certs-725022) DBG | unable to find current IP address of domain embed-certs-725022 in network mk-embed-certs-725022
	I0603 12:07:24.872847   72964 main.go:141] libmachine: (embed-certs-725022) DBG | I0603 12:07:24.872783   74834 retry.go:31] will retry after 382.19855ms: waiting for machine to come up
	I0603 12:07:23.996608   73662 main.go:141] libmachine: (old-k8s-version-905554) Calling .GetIP
	I0603 12:07:23.999648   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | domain old-k8s-version-905554 has defined MAC address 52:54:00:3d:ed:07 in network mk-old-k8s-version-905554
	I0603 12:07:23.999982   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:ed:07", ip: ""} in network mk-old-k8s-version-905554: {Iface:virbr1 ExpiryTime:2024-06-03 13:07:14 +0000 UTC Type:0 Mac:52:54:00:3d:ed:07 Iaid: IPaddr:192.168.39.155 Prefix:24 Hostname:old-k8s-version-905554 Clientid:01:52:54:00:3d:ed:07}
	I0603 12:07:24.000010   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | domain old-k8s-version-905554 has defined IP address 192.168.39.155 and MAC address 52:54:00:3d:ed:07 in network mk-old-k8s-version-905554
	I0603 12:07:24.000257   73662 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0603 12:07:24.004569   73662 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0603 12:07:24.019027   73662 kubeadm.go:877] updating cluster {Name:old-k8s-version-905554 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersio
n:v1.20.0 ClusterName:old-k8s-version-905554 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.155 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fa
lse MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0603 12:07:24.019206   73662 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0603 12:07:24.019257   73662 ssh_runner.go:195] Run: sudo crictl images --output json
	I0603 12:07:24.068916   73662 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0603 12:07:24.069007   73662 ssh_runner.go:195] Run: which lz4
	I0603 12:07:24.074831   73662 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0603 12:07:24.081154   73662 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0603 12:07:24.081186   73662 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0603 12:07:22.074657   73179 pod_ready.go:92] pod "kube-scheduler-no-preload-602118" in "kube-system" namespace has status "Ready":"True"
	I0603 12:07:22.074691   73179 pod_ready.go:81] duration metric: took 11.006759361s for pod "kube-scheduler-no-preload-602118" in "kube-system" namespace to be "Ready" ...
	I0603 12:07:22.074706   73179 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace to be "Ready" ...
	I0603 12:07:24.081308   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:07:25.114101   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:07:27.115528   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:07:25.256326   72964 main.go:141] libmachine: (embed-certs-725022) DBG | domain embed-certs-725022 has defined MAC address 52:54:00:ba:41:8c in network mk-embed-certs-725022
	I0603 12:07:25.256830   72964 main.go:141] libmachine: (embed-certs-725022) DBG | unable to find current IP address of domain embed-certs-725022 in network mk-embed-certs-725022
	I0603 12:07:25.256856   72964 main.go:141] libmachine: (embed-certs-725022) DBG | I0603 12:07:25.256779   74834 retry.go:31] will retry after 541.296238ms: waiting for machine to come up
	I0603 12:07:25.799738   72964 main.go:141] libmachine: (embed-certs-725022) DBG | domain embed-certs-725022 has defined MAC address 52:54:00:ba:41:8c in network mk-embed-certs-725022
	I0603 12:07:25.800308   72964 main.go:141] libmachine: (embed-certs-725022) DBG | unable to find current IP address of domain embed-certs-725022 in network mk-embed-certs-725022
	I0603 12:07:25.800340   72964 main.go:141] libmachine: (embed-certs-725022) DBG | I0603 12:07:25.800260   74834 retry.go:31] will retry after 605.157326ms: waiting for machine to come up
	I0603 12:07:26.406748   72964 main.go:141] libmachine: (embed-certs-725022) DBG | domain embed-certs-725022 has defined MAC address 52:54:00:ba:41:8c in network mk-embed-certs-725022
	I0603 12:07:26.407332   72964 main.go:141] libmachine: (embed-certs-725022) DBG | unable to find current IP address of domain embed-certs-725022 in network mk-embed-certs-725022
	I0603 12:07:26.407357   72964 main.go:141] libmachine: (embed-certs-725022) DBG | I0603 12:07:26.407281   74834 retry.go:31] will retry after 830.816526ms: waiting for machine to come up
	I0603 12:07:27.239300   72964 main.go:141] libmachine: (embed-certs-725022) DBG | domain embed-certs-725022 has defined MAC address 52:54:00:ba:41:8c in network mk-embed-certs-725022
	I0603 12:07:27.239746   72964 main.go:141] libmachine: (embed-certs-725022) DBG | unable to find current IP address of domain embed-certs-725022 in network mk-embed-certs-725022
	I0603 12:07:27.239777   72964 main.go:141] libmachine: (embed-certs-725022) DBG | I0603 12:07:27.239708   74834 retry.go:31] will retry after 994.729433ms: waiting for machine to come up
	I0603 12:07:28.236261   72964 main.go:141] libmachine: (embed-certs-725022) DBG | domain embed-certs-725022 has defined MAC address 52:54:00:ba:41:8c in network mk-embed-certs-725022
	I0603 12:07:28.236839   72964 main.go:141] libmachine: (embed-certs-725022) DBG | unable to find current IP address of domain embed-certs-725022 in network mk-embed-certs-725022
	I0603 12:07:28.236865   72964 main.go:141] libmachine: (embed-certs-725022) DBG | I0603 12:07:28.236783   74834 retry.go:31] will retry after 1.756001067s: waiting for machine to come up
	I0603 12:07:25.794532   73662 crio.go:462] duration metric: took 1.71973848s to copy over tarball
	I0603 12:07:25.794618   73662 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0603 12:07:28.897711   73662 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.103055845s)
	I0603 12:07:28.897742   73662 crio.go:469] duration metric: took 3.103177549s to extract the tarball
	I0603 12:07:28.897752   73662 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0603 12:07:28.945269   73662 ssh_runner.go:195] Run: sudo crictl images --output json
	I0603 12:07:28.982973   73662 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0603 12:07:28.982998   73662 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0603 12:07:28.983068   73662 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0603 12:07:28.983099   73662 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0603 12:07:28.983134   73662 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0603 12:07:28.983191   73662 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0603 12:07:28.983104   73662 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0603 12:07:28.983282   73662 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0603 12:07:28.983280   73662 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0603 12:07:28.983525   73662 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0603 12:07:28.984988   73662 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0603 12:07:28.985005   73662 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0603 12:07:28.984997   73662 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0603 12:07:28.985007   73662 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0603 12:07:28.985026   73662 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0603 12:07:28.985190   73662 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0603 12:07:28.985244   73662 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0603 12:07:28.985288   73662 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0603 12:07:29.136387   73662 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0603 12:07:29.155867   73662 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0603 12:07:29.173686   73662 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0603 12:07:29.181970   73662 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0603 12:07:29.185877   73662 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0603 12:07:29.188684   73662 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0603 12:07:29.201080   73662 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0603 12:07:29.201134   73662 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0603 12:07:29.201174   73662 ssh_runner.go:195] Run: which crictl
	I0603 12:07:29.252186   73662 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0603 12:07:29.252232   73662 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0603 12:07:29.252308   73662 ssh_runner.go:195] Run: which crictl
	I0603 12:07:29.272578   73662 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0603 12:07:29.306804   73662 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0603 12:07:29.306856   73662 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0603 12:07:29.306880   73662 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0603 12:07:29.306901   73662 ssh_runner.go:195] Run: which crictl
	I0603 12:07:29.306915   73662 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0603 12:07:29.306928   73662 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0603 12:07:29.306952   73662 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0603 12:07:29.306961   73662 ssh_runner.go:195] Run: which crictl
	I0603 12:07:29.306988   73662 ssh_runner.go:195] Run: which crictl
	I0603 12:07:29.322141   73662 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0603 12:07:29.322220   73662 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0603 12:07:29.322238   73662 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0603 12:07:29.322264   73662 ssh_runner.go:195] Run: which crictl
	I0603 12:07:29.322210   73662 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0603 12:07:29.378678   73662 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0603 12:07:29.378717   73662 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0603 12:07:29.378726   73662 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0603 12:07:29.378775   73662 ssh_runner.go:195] Run: which crictl
	I0603 12:07:29.378831   73662 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0603 12:07:29.378898   73662 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0603 12:07:29.401173   73662 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/19008-7755/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0603 12:07:29.401229   73662 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0603 12:07:29.401396   73662 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/19008-7755/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0603 12:07:29.450497   73662 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/19008-7755/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0603 12:07:29.450531   73662 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0603 12:07:29.488109   73662 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/19008-7755/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0603 12:07:29.488191   73662 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/19008-7755/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0603 12:07:29.488191   73662 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/19008-7755/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0603 12:07:29.504909   73662 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/19008-7755/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0603 12:07:29.931311   73662 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0603 12:07:30.078311   73662 cache_images.go:92] duration metric: took 1.095295059s to LoadCachedImages
	W0603 12:07:30.078412   73662 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/19008-7755/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0: no such file or directory
	I0603 12:07:30.078431   73662 kubeadm.go:928] updating node { 192.168.39.155 8443 v1.20.0 crio true true} ...
	I0603 12:07:30.078568   73662 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-905554 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.39.155
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-905554 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0603 12:07:30.078660   73662 ssh_runner.go:195] Run: crio config
	I0603 12:07:26.083566   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:07:28.084560   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:07:29.721426   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:07:32.114026   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:07:29.994115   72964 main.go:141] libmachine: (embed-certs-725022) DBG | domain embed-certs-725022 has defined MAC address 52:54:00:ba:41:8c in network mk-embed-certs-725022
	I0603 12:07:29.994576   72964 main.go:141] libmachine: (embed-certs-725022) DBG | unable to find current IP address of domain embed-certs-725022 in network mk-embed-certs-725022
	I0603 12:07:29.994654   72964 main.go:141] libmachine: (embed-certs-725022) DBG | I0603 12:07:29.994561   74834 retry.go:31] will retry after 1.667170312s: waiting for machine to come up
	I0603 12:07:31.664242   72964 main.go:141] libmachine: (embed-certs-725022) DBG | domain embed-certs-725022 has defined MAC address 52:54:00:ba:41:8c in network mk-embed-certs-725022
	I0603 12:07:31.664797   72964 main.go:141] libmachine: (embed-certs-725022) DBG | unable to find current IP address of domain embed-certs-725022 in network mk-embed-certs-725022
	I0603 12:07:31.664826   72964 main.go:141] libmachine: (embed-certs-725022) DBG | I0603 12:07:31.664752   74834 retry.go:31] will retry after 2.156675381s: waiting for machine to come up
	I0603 12:07:33.823700   72964 main.go:141] libmachine: (embed-certs-725022) DBG | domain embed-certs-725022 has defined MAC address 52:54:00:ba:41:8c in network mk-embed-certs-725022
	I0603 12:07:33.824202   72964 main.go:141] libmachine: (embed-certs-725022) DBG | unable to find current IP address of domain embed-certs-725022 in network mk-embed-certs-725022
	I0603 12:07:33.824241   72964 main.go:141] libmachine: (embed-certs-725022) DBG | I0603 12:07:33.824145   74834 retry.go:31] will retry after 3.067424613s: waiting for machine to come up
	I0603 12:07:30.129601   73662 cni.go:84] Creating CNI manager for ""
	I0603 12:07:30.180858   73662 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0603 12:07:30.180884   73662 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0603 12:07:30.180918   73662 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.155 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-905554 NodeName:old-k8s-version-905554 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.155"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.155 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt St
aticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0603 12:07:30.181104   73662 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.155
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-905554"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.155
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.155"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0603 12:07:30.181180   73662 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0603 12:07:30.192139   73662 binaries.go:44] Found k8s binaries, skipping transfer
	I0603 12:07:30.192202   73662 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0603 12:07:30.202078   73662 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0603 12:07:30.222968   73662 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0603 12:07:30.242794   73662 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I0603 12:07:30.263578   73662 ssh_runner.go:195] Run: grep 192.168.39.155	control-plane.minikube.internal$ /etc/hosts
	I0603 12:07:30.267535   73662 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.155	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0603 12:07:30.280543   73662 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0603 12:07:30.421251   73662 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0603 12:07:30.441243   73662 certs.go:68] Setting up /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/old-k8s-version-905554 for IP: 192.168.39.155
	I0603 12:07:30.441269   73662 certs.go:194] generating shared ca certs ...
	I0603 12:07:30.441299   73662 certs.go:226] acquiring lock for ca certs: {Name:mk50092df3bdecbf959926adc377ee06b75aacd9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 12:07:30.441485   73662 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19008-7755/.minikube/ca.key
	I0603 12:07:30.441546   73662 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19008-7755/.minikube/proxy-client-ca.key
	I0603 12:07:30.441559   73662 certs.go:256] generating profile certs ...
	I0603 12:07:30.441675   73662 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/old-k8s-version-905554/client.key
	I0603 12:07:30.465464   73662 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/old-k8s-version-905554/apiserver.key.0d34b22c
	I0603 12:07:30.465562   73662 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/old-k8s-version-905554/proxy-client.key
	I0603 12:07:30.465730   73662 certs.go:484] found cert: /home/jenkins/minikube-integration/19008-7755/.minikube/certs/15028.pem (1338 bytes)
	W0603 12:07:30.465775   73662 certs.go:480] ignoring /home/jenkins/minikube-integration/19008-7755/.minikube/certs/15028_empty.pem, impossibly tiny 0 bytes
	I0603 12:07:30.465787   73662 certs.go:484] found cert: /home/jenkins/minikube-integration/19008-7755/.minikube/certs/ca-key.pem (1679 bytes)
	I0603 12:07:30.465818   73662 certs.go:484] found cert: /home/jenkins/minikube-integration/19008-7755/.minikube/certs/ca.pem (1082 bytes)
	I0603 12:07:30.465855   73662 certs.go:484] found cert: /home/jenkins/minikube-integration/19008-7755/.minikube/certs/cert.pem (1123 bytes)
	I0603 12:07:30.465884   73662 certs.go:484] found cert: /home/jenkins/minikube-integration/19008-7755/.minikube/certs/key.pem (1679 bytes)
	I0603 12:07:30.465941   73662 certs.go:484] found cert: /home/jenkins/minikube-integration/19008-7755/.minikube/files/etc/ssl/certs/150282.pem (1708 bytes)
	I0603 12:07:30.466831   73662 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0603 12:07:30.517957   73662 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0603 12:07:30.554072   73662 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0603 12:07:30.610727   73662 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0603 12:07:30.663149   73662 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/old-k8s-version-905554/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0603 12:07:30.702313   73662 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/old-k8s-version-905554/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0603 12:07:30.735841   73662 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/old-k8s-version-905554/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0603 12:07:30.761517   73662 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/old-k8s-version-905554/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0603 12:07:30.793872   73662 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/files/etc/ssl/certs/150282.pem --> /usr/share/ca-certificates/150282.pem (1708 bytes)
	I0603 12:07:30.821613   73662 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0603 12:07:30.848030   73662 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/certs/15028.pem --> /usr/share/ca-certificates/15028.pem (1338 bytes)
	I0603 12:07:30.875016   73662 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0603 12:07:30.901749   73662 ssh_runner.go:195] Run: openssl version
	I0603 12:07:30.911485   73662 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15028.pem && ln -fs /usr/share/ca-certificates/15028.pem /etc/ssl/certs/15028.pem"
	I0603 12:07:30.923791   73662 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15028.pem
	I0603 12:07:30.928808   73662 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jun  3 10:51 /usr/share/ca-certificates/15028.pem
	I0603 12:07:30.928858   73662 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15028.pem
	I0603 12:07:30.934925   73662 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/15028.pem /etc/ssl/certs/51391683.0"
	I0603 12:07:30.946930   73662 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/150282.pem && ln -fs /usr/share/ca-certificates/150282.pem /etc/ssl/certs/150282.pem"
	I0603 12:07:30.958809   73662 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/150282.pem
	I0603 12:07:30.963687   73662 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jun  3 10:51 /usr/share/ca-certificates/150282.pem
	I0603 12:07:30.963748   73662 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/150282.pem
	I0603 12:07:30.969671   73662 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/150282.pem /etc/ssl/certs/3ec20f2e.0"
	I0603 12:07:30.981918   73662 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0603 12:07:30.994005   73662 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0603 12:07:30.999126   73662 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jun  3 10:39 /usr/share/ca-certificates/minikubeCA.pem
	I0603 12:07:30.999190   73662 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0603 12:07:31.005828   73662 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0603 12:07:31.017320   73662 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0603 12:07:31.021993   73662 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0603 12:07:31.028420   73662 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0603 12:07:31.034719   73662 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0603 12:07:31.041565   73662 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0603 12:07:31.048142   73662 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0603 12:07:31.053992   73662 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0603 12:07:31.060197   73662 kubeadm.go:391] StartCluster: {Name:old-k8s-version-905554 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v
1.20.0 ClusterName:old-k8s-version-905554 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.155 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false
MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0603 12:07:31.060324   73662 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0603 12:07:31.060361   73662 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0603 12:07:31.102996   73662 cri.go:89] found id: ""
	I0603 12:07:31.103083   73662 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0603 12:07:31.114546   73662 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0603 12:07:31.114566   73662 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0603 12:07:31.114573   73662 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0603 12:07:31.114619   73662 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0603 12:07:31.126042   73662 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0603 12:07:31.127358   73662 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-905554" does not appear in /home/jenkins/minikube-integration/19008-7755/kubeconfig
	I0603 12:07:31.128029   73662 kubeconfig.go:62] /home/jenkins/minikube-integration/19008-7755/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-905554" cluster setting kubeconfig missing "old-k8s-version-905554" context setting]
	I0603 12:07:31.128862   73662 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19008-7755/kubeconfig: {Name:mk2f45bf5da44c5570e14124ae482cac309d97b6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 12:07:31.247021   73662 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0603 12:07:31.258013   73662 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.39.155
	I0603 12:07:31.258054   73662 kubeadm.go:1154] stopping kube-system containers ...
	I0603 12:07:31.258065   73662 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0603 12:07:31.258119   73662 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0603 12:07:31.301991   73662 cri.go:89] found id: ""
	I0603 12:07:31.302065   73662 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0603 12:07:31.326132   73662 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0603 12:07:31.337333   73662 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0603 12:07:31.337355   73662 kubeadm.go:156] found existing configuration files:
	
	I0603 12:07:31.337396   73662 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0603 12:07:31.347256   73662 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0603 12:07:31.347300   73662 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0603 12:07:31.357463   73662 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0603 12:07:31.367810   73662 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0603 12:07:31.367867   73662 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0603 12:07:31.378092   73662 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0603 12:07:31.388911   73662 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0603 12:07:31.388959   73662 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0603 12:07:31.400327   73662 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0603 12:07:31.411937   73662 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0603 12:07:31.411984   73662 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0603 12:07:31.423929   73662 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0603 12:07:31.435914   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0603 12:07:31.563621   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0603 12:07:32.980144   73662 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.416481613s)
	I0603 12:07:32.980178   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0603 12:07:33.219383   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0603 12:07:33.320755   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0603 12:07:33.437964   73662 api_server.go:52] waiting for apiserver process to appear ...
	I0603 12:07:33.438070   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:07:33.938124   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:07:34.439012   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:07:34.938293   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:07:30.584019   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:07:33.081286   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:07:35.081436   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:07:34.613763   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:07:37.112059   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:07:39.113186   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
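The interleaved pod_ready.go lines come from other test profiles polling until the metrics-server pod reports Ready. A hedged client-go sketch of such a readiness check is below; the kubeconfig path is a placeholder and this is not the test helper's actual code.

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isPodReady reports whether the named pod has the Ready condition set to True.
func isPodReady(cs *kubernetes.Clientset, namespace, name string) (bool, error) {
	pod, err := cs.CoreV1().Pods(namespace).Get(context.Background(), name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, cond := range pod.Status.Conditions {
		if cond.Type == corev1.PodReady {
			return cond.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}

func main() {
	// Placeholder kubeconfig path; the pod name is taken from the log above.
	config, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		fmt.Println(err)
		return
	}
	cs, err := kubernetes.NewForConfig(config)
	if err != nil {
		fmt.Println(err)
		return
	}
	ready, err := isPodReady(cs, "kube-system", "metrics-server-569cc877fc-jgjzt")
	fmt.Println("ready:", ready, "err:", err)
}
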
	I0603 12:07:36.892928   72964 main.go:141] libmachine: (embed-certs-725022) DBG | domain embed-certs-725022 has defined MAC address 52:54:00:ba:41:8c in network mk-embed-certs-725022
	I0603 12:07:36.893405   72964 main.go:141] libmachine: (embed-certs-725022) DBG | unable to find current IP address of domain embed-certs-725022 in network mk-embed-certs-725022
	I0603 12:07:36.893432   72964 main.go:141] libmachine: (embed-certs-725022) DBG | I0603 12:07:36.893358   74834 retry.go:31] will retry after 3.786690644s: waiting for machine to come up
	I0603 12:07:35.438655   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:07:35.938894   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:07:36.438790   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:07:36.938720   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:07:37.438183   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:07:37.938442   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:07:38.438341   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:07:38.938738   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:07:39.438262   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:07:39.938743   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
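The repeated pgrep calls above are api_server.go waiting for the kube-apiserver process to appear, retrying roughly every 500ms. A minimal sketch of that wait loop, assuming pgrep is on PATH and running locally rather than through minikube's ssh_runner:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForProcess polls `pgrep -f pattern` until it succeeds or the timeout expires.
func waitForProcess(pattern string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if err := exec.Command("pgrep", "-f", pattern).Run(); err == nil {
			return nil // pgrep exits 0 once a matching process exists
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("process %q did not appear within %s", pattern, timeout)
}

func main() {
	if err := waitForProcess("kube-apiserver.*minikube.*", 2*time.Minute); err != nil {
		fmt.Println(err)
	}
}
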
	I0603 12:07:37.082484   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:07:39.580732   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:07:40.682151   72964 main.go:141] libmachine: (embed-certs-725022) DBG | domain embed-certs-725022 has defined MAC address 52:54:00:ba:41:8c in network mk-embed-certs-725022
	I0603 12:07:40.682828   72964 main.go:141] libmachine: (embed-certs-725022) Found IP for machine: 192.168.72.245
	I0603 12:07:40.682854   72964 main.go:141] libmachine: (embed-certs-725022) Reserving static IP address...
	I0603 12:07:40.682870   72964 main.go:141] libmachine: (embed-certs-725022) DBG | domain embed-certs-725022 has current primary IP address 192.168.72.245 and MAC address 52:54:00:ba:41:8c in network mk-embed-certs-725022
	I0603 12:07:40.683307   72964 main.go:141] libmachine: (embed-certs-725022) DBG | found host DHCP lease matching {name: "embed-certs-725022", mac: "52:54:00:ba:41:8c", ip: "192.168.72.245"} in network mk-embed-certs-725022: {Iface:virbr3 ExpiryTime:2024-06-03 12:58:10 +0000 UTC Type:0 Mac:52:54:00:ba:41:8c Iaid: IPaddr:192.168.72.245 Prefix:24 Hostname:embed-certs-725022 Clientid:01:52:54:00:ba:41:8c}
	I0603 12:07:40.683347   72964 main.go:141] libmachine: (embed-certs-725022) DBG | skip adding static IP to network mk-embed-certs-725022 - found existing host DHCP lease matching {name: "embed-certs-725022", mac: "52:54:00:ba:41:8c", ip: "192.168.72.245"}
	I0603 12:07:40.683361   72964 main.go:141] libmachine: (embed-certs-725022) Reserved static IP address: 192.168.72.245
	I0603 12:07:40.683376   72964 main.go:141] libmachine: (embed-certs-725022) Waiting for SSH to be available...
	I0603 12:07:40.683392   72964 main.go:141] libmachine: (embed-certs-725022) DBG | Getting to WaitForSSH function...
	I0603 12:07:40.685575   72964 main.go:141] libmachine: (embed-certs-725022) DBG | domain embed-certs-725022 has defined MAC address 52:54:00:ba:41:8c in network mk-embed-certs-725022
	I0603 12:07:40.685946   72964 main.go:141] libmachine: (embed-certs-725022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:41:8c", ip: ""} in network mk-embed-certs-725022: {Iface:virbr3 ExpiryTime:2024-06-03 12:58:10 +0000 UTC Type:0 Mac:52:54:00:ba:41:8c Iaid: IPaddr:192.168.72.245 Prefix:24 Hostname:embed-certs-725022 Clientid:01:52:54:00:ba:41:8c}
	I0603 12:07:40.685977   72964 main.go:141] libmachine: (embed-certs-725022) DBG | domain embed-certs-725022 has defined IP address 192.168.72.245 and MAC address 52:54:00:ba:41:8c in network mk-embed-certs-725022
	I0603 12:07:40.686080   72964 main.go:141] libmachine: (embed-certs-725022) DBG | Using SSH client type: external
	I0603 12:07:40.686100   72964 main.go:141] libmachine: (embed-certs-725022) DBG | Using SSH private key: /home/jenkins/minikube-integration/19008-7755/.minikube/machines/embed-certs-725022/id_rsa (-rw-------)
	I0603 12:07:40.686134   72964 main.go:141] libmachine: (embed-certs-725022) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.245 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19008-7755/.minikube/machines/embed-certs-725022/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0603 12:07:40.686148   72964 main.go:141] libmachine: (embed-certs-725022) DBG | About to run SSH command:
	I0603 12:07:40.686161   72964 main.go:141] libmachine: (embed-certs-725022) DBG | exit 0
	I0603 12:07:40.811149   72964 main.go:141] libmachine: (embed-certs-725022) DBG | SSH cmd err, output: <nil>: 
	I0603 12:07:40.811536   72964 main.go:141] libmachine: (embed-certs-725022) Calling .GetConfigRaw
	I0603 12:07:40.812126   72964 main.go:141] libmachine: (embed-certs-725022) Calling .GetIP
	I0603 12:07:40.814686   72964 main.go:141] libmachine: (embed-certs-725022) DBG | domain embed-certs-725022 has defined MAC address 52:54:00:ba:41:8c in network mk-embed-certs-725022
	I0603 12:07:40.815141   72964 main.go:141] libmachine: (embed-certs-725022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:41:8c", ip: ""} in network mk-embed-certs-725022: {Iface:virbr3 ExpiryTime:2024-06-03 12:58:10 +0000 UTC Type:0 Mac:52:54:00:ba:41:8c Iaid: IPaddr:192.168.72.245 Prefix:24 Hostname:embed-certs-725022 Clientid:01:52:54:00:ba:41:8c}
	I0603 12:07:40.815179   72964 main.go:141] libmachine: (embed-certs-725022) DBG | domain embed-certs-725022 has defined IP address 192.168.72.245 and MAC address 52:54:00:ba:41:8c in network mk-embed-certs-725022
	I0603 12:07:40.815390   72964 profile.go:143] Saving config to /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/embed-certs-725022/config.json ...
	I0603 12:07:40.815589   72964 machine.go:94] provisionDockerMachine start ...
	I0603 12:07:40.815607   72964 main.go:141] libmachine: (embed-certs-725022) Calling .DriverName
	I0603 12:07:40.815830   72964 main.go:141] libmachine: (embed-certs-725022) Calling .GetSSHHostname
	I0603 12:07:40.818127   72964 main.go:141] libmachine: (embed-certs-725022) DBG | domain embed-certs-725022 has defined MAC address 52:54:00:ba:41:8c in network mk-embed-certs-725022
	I0603 12:07:40.818454   72964 main.go:141] libmachine: (embed-certs-725022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:41:8c", ip: ""} in network mk-embed-certs-725022: {Iface:virbr3 ExpiryTime:2024-06-03 12:58:10 +0000 UTC Type:0 Mac:52:54:00:ba:41:8c Iaid: IPaddr:192.168.72.245 Prefix:24 Hostname:embed-certs-725022 Clientid:01:52:54:00:ba:41:8c}
	I0603 12:07:40.818484   72964 main.go:141] libmachine: (embed-certs-725022) DBG | domain embed-certs-725022 has defined IP address 192.168.72.245 and MAC address 52:54:00:ba:41:8c in network mk-embed-certs-725022
	I0603 12:07:40.818622   72964 main.go:141] libmachine: (embed-certs-725022) Calling .GetSSHPort
	I0603 12:07:40.818812   72964 main.go:141] libmachine: (embed-certs-725022) Calling .GetSSHKeyPath
	I0603 12:07:40.818964   72964 main.go:141] libmachine: (embed-certs-725022) Calling .GetSSHKeyPath
	I0603 12:07:40.819111   72964 main.go:141] libmachine: (embed-certs-725022) Calling .GetSSHUsername
	I0603 12:07:40.819244   72964 main.go:141] libmachine: Using SSH client type: native
	I0603 12:07:40.819393   72964 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.72.245 22 <nil> <nil>}
	I0603 12:07:40.819402   72964 main.go:141] libmachine: About to run SSH command:
	hostname
	I0603 12:07:40.923243   72964 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0603 12:07:40.923272   72964 main.go:141] libmachine: (embed-certs-725022) Calling .GetMachineName
	I0603 12:07:40.923539   72964 buildroot.go:166] provisioning hostname "embed-certs-725022"
	I0603 12:07:40.923568   72964 main.go:141] libmachine: (embed-certs-725022) Calling .GetMachineName
	I0603 12:07:40.923739   72964 main.go:141] libmachine: (embed-certs-725022) Calling .GetSSHHostname
	I0603 12:07:40.926340   72964 main.go:141] libmachine: (embed-certs-725022) DBG | domain embed-certs-725022 has defined MAC address 52:54:00:ba:41:8c in network mk-embed-certs-725022
	I0603 12:07:40.926743   72964 main.go:141] libmachine: (embed-certs-725022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:41:8c", ip: ""} in network mk-embed-certs-725022: {Iface:virbr3 ExpiryTime:2024-06-03 12:58:10 +0000 UTC Type:0 Mac:52:54:00:ba:41:8c Iaid: IPaddr:192.168.72.245 Prefix:24 Hostname:embed-certs-725022 Clientid:01:52:54:00:ba:41:8c}
	I0603 12:07:40.926776   72964 main.go:141] libmachine: (embed-certs-725022) DBG | domain embed-certs-725022 has defined IP address 192.168.72.245 and MAC address 52:54:00:ba:41:8c in network mk-embed-certs-725022
	I0603 12:07:40.926892   72964 main.go:141] libmachine: (embed-certs-725022) Calling .GetSSHPort
	I0603 12:07:40.927096   72964 main.go:141] libmachine: (embed-certs-725022) Calling .GetSSHKeyPath
	I0603 12:07:40.927259   72964 main.go:141] libmachine: (embed-certs-725022) Calling .GetSSHKeyPath
	I0603 12:07:40.927412   72964 main.go:141] libmachine: (embed-certs-725022) Calling .GetSSHUsername
	I0603 12:07:40.927570   72964 main.go:141] libmachine: Using SSH client type: native
	I0603 12:07:40.927720   72964 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.72.245 22 <nil> <nil>}
	I0603 12:07:40.927737   72964 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-725022 && echo "embed-certs-725022" | sudo tee /etc/hostname
	I0603 12:07:41.045367   72964 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-725022
	
	I0603 12:07:41.045392   72964 main.go:141] libmachine: (embed-certs-725022) Calling .GetSSHHostname
	I0603 12:07:41.048214   72964 main.go:141] libmachine: (embed-certs-725022) DBG | domain embed-certs-725022 has defined MAC address 52:54:00:ba:41:8c in network mk-embed-certs-725022
	I0603 12:07:41.048621   72964 main.go:141] libmachine: (embed-certs-725022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:41:8c", ip: ""} in network mk-embed-certs-725022: {Iface:virbr3 ExpiryTime:2024-06-03 12:58:10 +0000 UTC Type:0 Mac:52:54:00:ba:41:8c Iaid: IPaddr:192.168.72.245 Prefix:24 Hostname:embed-certs-725022 Clientid:01:52:54:00:ba:41:8c}
	I0603 12:07:41.048653   72964 main.go:141] libmachine: (embed-certs-725022) DBG | domain embed-certs-725022 has defined IP address 192.168.72.245 and MAC address 52:54:00:ba:41:8c in network mk-embed-certs-725022
	I0603 12:07:41.048776   72964 main.go:141] libmachine: (embed-certs-725022) Calling .GetSSHPort
	I0603 12:07:41.048959   72964 main.go:141] libmachine: (embed-certs-725022) Calling .GetSSHKeyPath
	I0603 12:07:41.049140   72964 main.go:141] libmachine: (embed-certs-725022) Calling .GetSSHKeyPath
	I0603 12:07:41.049270   72964 main.go:141] libmachine: (embed-certs-725022) Calling .GetSSHUsername
	I0603 12:07:41.049434   72964 main.go:141] libmachine: Using SSH client type: native
	I0603 12:07:41.049729   72964 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.72.245 22 <nil> <nil>}
	I0603 12:07:41.049757   72964 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-725022' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-725022/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-725022' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0603 12:07:41.160646   72964 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0603 12:07:41.160671   72964 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19008-7755/.minikube CaCertPath:/home/jenkins/minikube-integration/19008-7755/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19008-7755/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19008-7755/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19008-7755/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19008-7755/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19008-7755/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19008-7755/.minikube}
	I0603 12:07:41.160703   72964 buildroot.go:174] setting up certificates
	I0603 12:07:41.160715   72964 provision.go:84] configureAuth start
	I0603 12:07:41.160728   72964 main.go:141] libmachine: (embed-certs-725022) Calling .GetMachineName
	I0603 12:07:41.160998   72964 main.go:141] libmachine: (embed-certs-725022) Calling .GetIP
	I0603 12:07:41.163693   72964 main.go:141] libmachine: (embed-certs-725022) DBG | domain embed-certs-725022 has defined MAC address 52:54:00:ba:41:8c in network mk-embed-certs-725022
	I0603 12:07:41.164248   72964 main.go:141] libmachine: (embed-certs-725022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:41:8c", ip: ""} in network mk-embed-certs-725022: {Iface:virbr3 ExpiryTime:2024-06-03 12:58:10 +0000 UTC Type:0 Mac:52:54:00:ba:41:8c Iaid: IPaddr:192.168.72.245 Prefix:24 Hostname:embed-certs-725022 Clientid:01:52:54:00:ba:41:8c}
	I0603 12:07:41.164280   72964 main.go:141] libmachine: (embed-certs-725022) DBG | domain embed-certs-725022 has defined IP address 192.168.72.245 and MAC address 52:54:00:ba:41:8c in network mk-embed-certs-725022
	I0603 12:07:41.164462   72964 main.go:141] libmachine: (embed-certs-725022) Calling .GetSSHHostname
	I0603 12:07:41.166598   72964 main.go:141] libmachine: (embed-certs-725022) DBG | domain embed-certs-725022 has defined MAC address 52:54:00:ba:41:8c in network mk-embed-certs-725022
	I0603 12:07:41.166975   72964 main.go:141] libmachine: (embed-certs-725022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:41:8c", ip: ""} in network mk-embed-certs-725022: {Iface:virbr3 ExpiryTime:2024-06-03 12:58:10 +0000 UTC Type:0 Mac:52:54:00:ba:41:8c Iaid: IPaddr:192.168.72.245 Prefix:24 Hostname:embed-certs-725022 Clientid:01:52:54:00:ba:41:8c}
	I0603 12:07:41.166999   72964 main.go:141] libmachine: (embed-certs-725022) DBG | domain embed-certs-725022 has defined IP address 192.168.72.245 and MAC address 52:54:00:ba:41:8c in network mk-embed-certs-725022
	I0603 12:07:41.167156   72964 provision.go:143] copyHostCerts
	I0603 12:07:41.167231   72964 exec_runner.go:144] found /home/jenkins/minikube-integration/19008-7755/.minikube/ca.pem, removing ...
	I0603 12:07:41.167246   72964 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19008-7755/.minikube/ca.pem
	I0603 12:07:41.167311   72964 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19008-7755/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19008-7755/.minikube/ca.pem (1082 bytes)
	I0603 12:07:41.167503   72964 exec_runner.go:144] found /home/jenkins/minikube-integration/19008-7755/.minikube/cert.pem, removing ...
	I0603 12:07:41.167516   72964 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19008-7755/.minikube/cert.pem
	I0603 12:07:41.167548   72964 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19008-7755/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19008-7755/.minikube/cert.pem (1123 bytes)
	I0603 12:07:41.167649   72964 exec_runner.go:144] found /home/jenkins/minikube-integration/19008-7755/.minikube/key.pem, removing ...
	I0603 12:07:41.167660   72964 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19008-7755/.minikube/key.pem
	I0603 12:07:41.167688   72964 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19008-7755/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19008-7755/.minikube/key.pem (1679 bytes)
	I0603 12:07:41.167767   72964 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19008-7755/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19008-7755/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19008-7755/.minikube/certs/ca-key.pem org=jenkins.embed-certs-725022 san=[127.0.0.1 192.168.72.245 embed-certs-725022 localhost minikube]
	I0603 12:07:41.404074   72964 provision.go:177] copyRemoteCerts
	I0603 12:07:41.404201   72964 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0603 12:07:41.404234   72964 main.go:141] libmachine: (embed-certs-725022) Calling .GetSSHHostname
	I0603 12:07:41.407206   72964 main.go:141] libmachine: (embed-certs-725022) DBG | domain embed-certs-725022 has defined MAC address 52:54:00:ba:41:8c in network mk-embed-certs-725022
	I0603 12:07:41.407582   72964 main.go:141] libmachine: (embed-certs-725022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:41:8c", ip: ""} in network mk-embed-certs-725022: {Iface:virbr3 ExpiryTime:2024-06-03 12:58:10 +0000 UTC Type:0 Mac:52:54:00:ba:41:8c Iaid: IPaddr:192.168.72.245 Prefix:24 Hostname:embed-certs-725022 Clientid:01:52:54:00:ba:41:8c}
	I0603 12:07:41.407607   72964 main.go:141] libmachine: (embed-certs-725022) DBG | domain embed-certs-725022 has defined IP address 192.168.72.245 and MAC address 52:54:00:ba:41:8c in network mk-embed-certs-725022
	I0603 12:07:41.407790   72964 main.go:141] libmachine: (embed-certs-725022) Calling .GetSSHPort
	I0603 12:07:41.408001   72964 main.go:141] libmachine: (embed-certs-725022) Calling .GetSSHKeyPath
	I0603 12:07:41.408187   72964 main.go:141] libmachine: (embed-certs-725022) Calling .GetSSHUsername
	I0603 12:07:41.408359   72964 sshutil.go:53] new ssh client: &{IP:192.168.72.245 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19008-7755/.minikube/machines/embed-certs-725022/id_rsa Username:docker}
	I0603 12:07:41.488870   72964 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0603 12:07:41.513102   72964 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0603 12:07:41.537653   72964 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0603 12:07:41.561756   72964 provision.go:87] duration metric: took 401.027097ms to configureAuth
	I0603 12:07:41.561789   72964 buildroot.go:189] setting minikube options for container-runtime
	I0603 12:07:41.561954   72964 config.go:182] Loaded profile config "embed-certs-725022": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0603 12:07:41.562020   72964 main.go:141] libmachine: (embed-certs-725022) Calling .GetSSHHostname
	I0603 12:07:41.564899   72964 main.go:141] libmachine: (embed-certs-725022) DBG | domain embed-certs-725022 has defined MAC address 52:54:00:ba:41:8c in network mk-embed-certs-725022
	I0603 12:07:41.565376   72964 main.go:141] libmachine: (embed-certs-725022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:41:8c", ip: ""} in network mk-embed-certs-725022: {Iface:virbr3 ExpiryTime:2024-06-03 12:58:10 +0000 UTC Type:0 Mac:52:54:00:ba:41:8c Iaid: IPaddr:192.168.72.245 Prefix:24 Hostname:embed-certs-725022 Clientid:01:52:54:00:ba:41:8c}
	I0603 12:07:41.565416   72964 main.go:141] libmachine: (embed-certs-725022) DBG | domain embed-certs-725022 has defined IP address 192.168.72.245 and MAC address 52:54:00:ba:41:8c in network mk-embed-certs-725022
	I0603 12:07:41.565571   72964 main.go:141] libmachine: (embed-certs-725022) Calling .GetSSHPort
	I0603 12:07:41.565754   72964 main.go:141] libmachine: (embed-certs-725022) Calling .GetSSHKeyPath
	I0603 12:07:41.565952   72964 main.go:141] libmachine: (embed-certs-725022) Calling .GetSSHKeyPath
	I0603 12:07:41.566096   72964 main.go:141] libmachine: (embed-certs-725022) Calling .GetSSHUsername
	I0603 12:07:41.566223   72964 main.go:141] libmachine: Using SSH client type: native
	I0603 12:07:41.566408   72964 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.72.245 22 <nil> <nil>}
	I0603 12:07:41.566431   72964 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0603 12:07:41.834677   72964 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0603 12:07:41.834699   72964 machine.go:97] duration metric: took 1.019099045s to provisionDockerMachine
	I0603 12:07:41.834713   72964 start.go:293] postStartSetup for "embed-certs-725022" (driver="kvm2")
	I0603 12:07:41.834727   72964 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0603 12:07:41.834746   72964 main.go:141] libmachine: (embed-certs-725022) Calling .DriverName
	I0603 12:07:41.835098   72964 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0603 12:07:41.835139   72964 main.go:141] libmachine: (embed-certs-725022) Calling .GetSSHHostname
	I0603 12:07:41.838003   72964 main.go:141] libmachine: (embed-certs-725022) DBG | domain embed-certs-725022 has defined MAC address 52:54:00:ba:41:8c in network mk-embed-certs-725022
	I0603 12:07:41.838369   72964 main.go:141] libmachine: (embed-certs-725022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:41:8c", ip: ""} in network mk-embed-certs-725022: {Iface:virbr3 ExpiryTime:2024-06-03 12:58:10 +0000 UTC Type:0 Mac:52:54:00:ba:41:8c Iaid: IPaddr:192.168.72.245 Prefix:24 Hostname:embed-certs-725022 Clientid:01:52:54:00:ba:41:8c}
	I0603 12:07:41.838398   72964 main.go:141] libmachine: (embed-certs-725022) DBG | domain embed-certs-725022 has defined IP address 192.168.72.245 and MAC address 52:54:00:ba:41:8c in network mk-embed-certs-725022
	I0603 12:07:41.838464   72964 main.go:141] libmachine: (embed-certs-725022) Calling .GetSSHPort
	I0603 12:07:41.838655   72964 main.go:141] libmachine: (embed-certs-725022) Calling .GetSSHKeyPath
	I0603 12:07:41.838793   72964 main.go:141] libmachine: (embed-certs-725022) Calling .GetSSHUsername
	I0603 12:07:41.838932   72964 sshutil.go:53] new ssh client: &{IP:192.168.72.245 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19008-7755/.minikube/machines/embed-certs-725022/id_rsa Username:docker}
	I0603 12:07:41.922364   72964 ssh_runner.go:195] Run: cat /etc/os-release
	I0603 12:07:41.926548   72964 info.go:137] Remote host: Buildroot 2023.02.9
	I0603 12:07:41.926573   72964 filesync.go:126] Scanning /home/jenkins/minikube-integration/19008-7755/.minikube/addons for local assets ...
	I0603 12:07:41.926649   72964 filesync.go:126] Scanning /home/jenkins/minikube-integration/19008-7755/.minikube/files for local assets ...
	I0603 12:07:41.926757   72964 filesync.go:149] local asset: /home/jenkins/minikube-integration/19008-7755/.minikube/files/etc/ssl/certs/150282.pem -> 150282.pem in /etc/ssl/certs
	I0603 12:07:41.926853   72964 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0603 12:07:41.937060   72964 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/files/etc/ssl/certs/150282.pem --> /etc/ssl/certs/150282.pem (1708 bytes)
	I0603 12:07:41.962618   72964 start.go:296] duration metric: took 127.891542ms for postStartSetup
	I0603 12:07:41.962650   72964 fix.go:56] duration metric: took 19.538606992s for fixHost
	I0603 12:07:41.962679   72964 main.go:141] libmachine: (embed-certs-725022) Calling .GetSSHHostname
	I0603 12:07:41.965879   72964 main.go:141] libmachine: (embed-certs-725022) DBG | domain embed-certs-725022 has defined MAC address 52:54:00:ba:41:8c in network mk-embed-certs-725022
	I0603 12:07:41.966201   72964 main.go:141] libmachine: (embed-certs-725022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:41:8c", ip: ""} in network mk-embed-certs-725022: {Iface:virbr3 ExpiryTime:2024-06-03 12:58:10 +0000 UTC Type:0 Mac:52:54:00:ba:41:8c Iaid: IPaddr:192.168.72.245 Prefix:24 Hostname:embed-certs-725022 Clientid:01:52:54:00:ba:41:8c}
	I0603 12:07:41.966228   72964 main.go:141] libmachine: (embed-certs-725022) DBG | domain embed-certs-725022 has defined IP address 192.168.72.245 and MAC address 52:54:00:ba:41:8c in network mk-embed-certs-725022
	I0603 12:07:41.966409   72964 main.go:141] libmachine: (embed-certs-725022) Calling .GetSSHPort
	I0603 12:07:41.966608   72964 main.go:141] libmachine: (embed-certs-725022) Calling .GetSSHKeyPath
	I0603 12:07:41.966776   72964 main.go:141] libmachine: (embed-certs-725022) Calling .GetSSHKeyPath
	I0603 12:07:41.966939   72964 main.go:141] libmachine: (embed-certs-725022) Calling .GetSSHUsername
	I0603 12:07:41.967174   72964 main.go:141] libmachine: Using SSH client type: native
	I0603 12:07:41.967334   72964 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.72.245 22 <nil> <nil>}
	I0603 12:07:41.967345   72964 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0603 12:07:42.067942   72964 main.go:141] libmachine: SSH cmd err, output: <nil>: 1717416462.037866239
	
	I0603 12:07:42.067964   72964 fix.go:216] guest clock: 1717416462.037866239
	I0603 12:07:42.067973   72964 fix.go:229] Guest: 2024-06-03 12:07:42.037866239 +0000 UTC Remote: 2024-06-03 12:07:41.962653397 +0000 UTC m=+357.104782857 (delta=75.212842ms)
	I0603 12:07:42.067997   72964 fix.go:200] guest clock delta is within tolerance: 75.212842ms
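The fix.go lines above compare the guest's `date +%s.%N` output against the host clock and accept the result if the delta is small. A sketch of that comparison using the exact timestamps from the log; the tolerance value here is an assumption, not minikube's configured threshold.

package main

import (
	"fmt"
	"time"
)

func main() {
	// Guest epoch time 1717416462.037866239 and the host timestamp from the log.
	guest := time.Unix(1717416462, 37866239)
	host := time.Date(2024, 6, 3, 12, 7, 41, 962653397, time.UTC)

	delta := guest.Sub(host)
	if delta < 0 {
		delta = -delta
	}
	const tolerance = 1 * time.Second // assumed threshold for illustration
	fmt.Printf("guest/host clock delta: %v (within tolerance: %v)\n", delta, delta <= tolerance)
}
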
	I0603 12:07:42.068004   72964 start.go:83] releasing machines lock for "embed-certs-725022", held for 19.643998665s
	I0603 12:07:42.068026   72964 main.go:141] libmachine: (embed-certs-725022) Calling .DriverName
	I0603 12:07:42.068359   72964 main.go:141] libmachine: (embed-certs-725022) Calling .GetIP
	I0603 12:07:42.071337   72964 main.go:141] libmachine: (embed-certs-725022) DBG | domain embed-certs-725022 has defined MAC address 52:54:00:ba:41:8c in network mk-embed-certs-725022
	I0603 12:07:42.071783   72964 main.go:141] libmachine: (embed-certs-725022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:41:8c", ip: ""} in network mk-embed-certs-725022: {Iface:virbr3 ExpiryTime:2024-06-03 12:58:10 +0000 UTC Type:0 Mac:52:54:00:ba:41:8c Iaid: IPaddr:192.168.72.245 Prefix:24 Hostname:embed-certs-725022 Clientid:01:52:54:00:ba:41:8c}
	I0603 12:07:42.071813   72964 main.go:141] libmachine: (embed-certs-725022) DBG | domain embed-certs-725022 has defined IP address 192.168.72.245 and MAC address 52:54:00:ba:41:8c in network mk-embed-certs-725022
	I0603 12:07:42.071980   72964 main.go:141] libmachine: (embed-certs-725022) Calling .DriverName
	I0603 12:07:42.072618   72964 main.go:141] libmachine: (embed-certs-725022) Calling .DriverName
	I0603 12:07:42.072806   72964 main.go:141] libmachine: (embed-certs-725022) Calling .DriverName
	I0603 12:07:42.072890   72964 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0603 12:07:42.072943   72964 main.go:141] libmachine: (embed-certs-725022) Calling .GetSSHHostname
	I0603 12:07:42.073038   72964 ssh_runner.go:195] Run: cat /version.json
	I0603 12:07:42.073079   72964 main.go:141] libmachine: (embed-certs-725022) Calling .GetSSHHostname
	I0603 12:07:42.075688   72964 main.go:141] libmachine: (embed-certs-725022) DBG | domain embed-certs-725022 has defined MAC address 52:54:00:ba:41:8c in network mk-embed-certs-725022
	I0603 12:07:42.075970   72964 main.go:141] libmachine: (embed-certs-725022) DBG | domain embed-certs-725022 has defined MAC address 52:54:00:ba:41:8c in network mk-embed-certs-725022
	I0603 12:07:42.076186   72964 main.go:141] libmachine: (embed-certs-725022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:41:8c", ip: ""} in network mk-embed-certs-725022: {Iface:virbr3 ExpiryTime:2024-06-03 12:58:10 +0000 UTC Type:0 Mac:52:54:00:ba:41:8c Iaid: IPaddr:192.168.72.245 Prefix:24 Hostname:embed-certs-725022 Clientid:01:52:54:00:ba:41:8c}
	I0603 12:07:42.076212   72964 main.go:141] libmachine: (embed-certs-725022) DBG | domain embed-certs-725022 has defined IP address 192.168.72.245 and MAC address 52:54:00:ba:41:8c in network mk-embed-certs-725022
	I0603 12:07:42.076458   72964 main.go:141] libmachine: (embed-certs-725022) Calling .GetSSHPort
	I0603 12:07:42.076465   72964 main.go:141] libmachine: (embed-certs-725022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:41:8c", ip: ""} in network mk-embed-certs-725022: {Iface:virbr3 ExpiryTime:2024-06-03 12:58:10 +0000 UTC Type:0 Mac:52:54:00:ba:41:8c Iaid: IPaddr:192.168.72.245 Prefix:24 Hostname:embed-certs-725022 Clientid:01:52:54:00:ba:41:8c}
	I0603 12:07:42.076501   72964 main.go:141] libmachine: (embed-certs-725022) DBG | domain embed-certs-725022 has defined IP address 192.168.72.245 and MAC address 52:54:00:ba:41:8c in network mk-embed-certs-725022
	I0603 12:07:42.076625   72964 main.go:141] libmachine: (embed-certs-725022) Calling .GetSSHKeyPath
	I0603 12:07:42.076694   72964 main.go:141] libmachine: (embed-certs-725022) Calling .GetSSHPort
	I0603 12:07:42.076815   72964 main.go:141] libmachine: (embed-certs-725022) Calling .GetSSHUsername
	I0603 12:07:42.076900   72964 main.go:141] libmachine: (embed-certs-725022) Calling .GetSSHKeyPath
	I0603 12:07:42.076993   72964 sshutil.go:53] new ssh client: &{IP:192.168.72.245 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19008-7755/.minikube/machines/embed-certs-725022/id_rsa Username:docker}
	I0603 12:07:42.077071   72964 main.go:141] libmachine: (embed-certs-725022) Calling .GetSSHUsername
	I0603 12:07:42.077227   72964 sshutil.go:53] new ssh client: &{IP:192.168.72.245 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19008-7755/.minikube/machines/embed-certs-725022/id_rsa Username:docker}
	I0603 12:07:42.178869   72964 ssh_runner.go:195] Run: systemctl --version
	I0603 12:07:42.184948   72964 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0603 12:07:42.333045   72964 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0603 12:07:42.339178   72964 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0603 12:07:42.339249   72964 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0603 12:07:42.356377   72964 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0603 12:07:42.356399   72964 start.go:494] detecting cgroup driver to use...
	I0603 12:07:42.356453   72964 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0603 12:07:42.374098   72964 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0603 12:07:42.387377   72964 docker.go:217] disabling cri-docker service (if available) ...
	I0603 12:07:42.387429   72964 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0603 12:07:42.400193   72964 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0603 12:07:42.413009   72964 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0603 12:07:42.524443   72964 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0603 12:07:42.670114   72964 docker.go:233] disabling docker service ...
	I0603 12:07:42.670194   72964 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0603 12:07:42.686085   72964 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0603 12:07:42.699222   72964 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0603 12:07:42.849018   72964 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0603 12:07:42.987143   72964 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0603 12:07:43.001493   72964 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0603 12:07:43.020011   72964 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0603 12:07:43.020077   72964 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 12:07:43.030835   72964 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0603 12:07:43.030903   72964 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 12:07:43.041325   72964 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 12:07:43.051229   72964 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 12:07:43.061184   72964 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0603 12:07:43.071245   72964 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 12:07:43.082466   72964 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 12:07:43.100381   72964 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 12:07:43.112802   72964 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0603 12:07:43.123404   72964 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0603 12:07:43.123452   72964 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0603 12:07:43.136935   72964 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0603 12:07:43.145996   72964 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0603 12:07:43.269844   72964 ssh_runner.go:195] Run: sudo systemctl restart crio
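The block above rewrites /etc/crio/crio.conf.d/02-crio.conf with a series of `sed -i` calls (pause image, cgroup manager, conmon cgroup, default sysctls) before restarting CRI-O. A hedged Go equivalent of just the cgroup_manager edit, done with a regexp instead of sed; paths and error handling are simplified and this is not minikube's own code.

package main

import (
	"fmt"
	"os"
	"regexp"
)

// setCgroupManager rewrites the cgroup_manager line in a CRI-O drop-in,
// matching the effect of the `sed -i 's|^.*cgroup_manager = .*$|...|'` call above.
func setCgroupManager(path, manager string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	re := regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`)
	out := re.ReplaceAll(data, []byte(fmt.Sprintf("cgroup_manager = %q", manager)))
	return os.WriteFile(path, out, 0o644)
}

func main() {
	if err := setCgroupManager("/etc/crio/crio.conf.d/02-crio.conf", "cgroupfs"); err != nil {
		fmt.Println("edit failed:", err)
	}
}
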
	I0603 12:07:43.404166   72964 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0603 12:07:43.404238   72964 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0603 12:07:43.411376   72964 start.go:562] Will wait 60s for crictl version
	I0603 12:07:43.411419   72964 ssh_runner.go:195] Run: which crictl
	I0603 12:07:43.415081   72964 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0603 12:07:43.455429   72964 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0603 12:07:43.455514   72964 ssh_runner.go:195] Run: crio --version
	I0603 12:07:43.483743   72964 ssh_runner.go:195] Run: crio --version
	I0603 12:07:43.516513   72964 out.go:177] * Preparing Kubernetes v1.30.1 on CRI-O 1.29.1 ...
	I0603 12:07:41.613036   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:07:43.613398   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:07:43.517710   72964 main.go:141] libmachine: (embed-certs-725022) Calling .GetIP
	I0603 12:07:43.520057   72964 main.go:141] libmachine: (embed-certs-725022) DBG | domain embed-certs-725022 has defined MAC address 52:54:00:ba:41:8c in network mk-embed-certs-725022
	I0603 12:07:43.520336   72964 main.go:141] libmachine: (embed-certs-725022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:41:8c", ip: ""} in network mk-embed-certs-725022: {Iface:virbr3 ExpiryTime:2024-06-03 12:58:10 +0000 UTC Type:0 Mac:52:54:00:ba:41:8c Iaid: IPaddr:192.168.72.245 Prefix:24 Hostname:embed-certs-725022 Clientid:01:52:54:00:ba:41:8c}
	I0603 12:07:43.520365   72964 main.go:141] libmachine: (embed-certs-725022) DBG | domain embed-certs-725022 has defined IP address 192.168.72.245 and MAC address 52:54:00:ba:41:8c in network mk-embed-certs-725022
	I0603 12:07:43.520579   72964 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0603 12:07:43.524653   72964 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0603 12:07:43.537864   72964 kubeadm.go:877] updating cluster {Name:embed-certs-725022 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.30.1 ClusterName:embed-certs-725022 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.245 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:
false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0603 12:07:43.537984   72964 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime crio
	I0603 12:07:43.538045   72964 ssh_runner.go:195] Run: sudo crictl images --output json
	I0603 12:07:43.574677   72964 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.1". assuming images are not preloaded.
	I0603 12:07:43.574738   72964 ssh_runner.go:195] Run: which lz4
	I0603 12:07:43.579297   72964 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0603 12:07:43.583831   72964 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0603 12:07:43.583865   72964 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (394537501 bytes)
	I0603 12:07:40.438270   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:07:40.938253   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:07:41.438610   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:07:41.938408   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:07:42.438825   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:07:42.938492   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:07:43.439013   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:07:43.938232   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:07:44.438816   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:07:44.938476   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:07:41.581827   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:07:44.084271   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:07:46.113319   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:07:48.117970   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:07:45.006860   72964 crio.go:462] duration metric: took 1.427589912s to copy over tarball
	I0603 12:07:45.006945   72964 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0603 12:07:47.289942   72964 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.282964729s)
	I0603 12:07:47.289966   72964 crio.go:469] duration metric: took 2.283075477s to extract the tarball
	I0603 12:07:47.289973   72964 ssh_runner.go:146] rm: /preloaded.tar.lz4
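Above, the preloaded image tarball is copied to the guest, extracted into /var with xattrs preserved, and then deleted. A small sketch that shells out to the same tar invocation; it assumes tar, lz4, and sudo are available and that the tarball sits at the path shown in the log.

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	// Extract the preload tarball, preserving security.capability xattrs and
	// decompressing with lz4, as in the log above.
	cmd := exec.Command("sudo", "tar",
		"--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", "/var", "-xf", "/preloaded.tar.lz4")
	cmd.Stdout = os.Stdout
	cmd.Stderr = os.Stderr
	if err := cmd.Run(); err != nil {
		fmt.Println("extract failed:", err)
		return
	}
	_ = os.Remove("/preloaded.tar.lz4") // mirrors the cleanup step in the log
}
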
	I0603 12:07:47.330106   72964 ssh_runner.go:195] Run: sudo crictl images --output json
	I0603 12:07:47.377154   72964 crio.go:514] all images are preloaded for cri-o runtime.
	I0603 12:07:47.377180   72964 cache_images.go:84] Images are preloaded, skipping loading
	I0603 12:07:47.377189   72964 kubeadm.go:928] updating node { 192.168.72.245 8443 v1.30.1 crio true true} ...
	I0603 12:07:47.377334   72964 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-725022 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.245
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.1 ClusterName:embed-certs-725022 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0603 12:07:47.377416   72964 ssh_runner.go:195] Run: crio config
	I0603 12:07:47.436104   72964 cni.go:84] Creating CNI manager for ""
	I0603 12:07:47.436125   72964 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0603 12:07:47.436137   72964 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0603 12:07:47.436165   72964 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.245 APIServerPort:8443 KubernetesVersion:v1.30.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-725022 NodeName:embed-certs-725022 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.245"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.245 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodP
ath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0603 12:07:47.436330   72964 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.245
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-725022"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.245
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.245"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0603 12:07:47.436402   72964 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.1
	I0603 12:07:47.447427   72964 binaries.go:44] Found k8s binaries, skipping transfer
	I0603 12:07:47.447498   72964 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0603 12:07:47.459332   72964 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I0603 12:07:47.477962   72964 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0603 12:07:47.495897   72964 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2162 bytes)
	I0603 12:07:47.513033   72964 ssh_runner.go:195] Run: grep 192.168.72.245	control-plane.minikube.internal$ /etc/hosts
	I0603 12:07:47.517042   72964 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.245	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0603 12:07:47.529663   72964 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0603 12:07:47.649313   72964 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0603 12:07:47.666234   72964 certs.go:68] Setting up /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/embed-certs-725022 for IP: 192.168.72.245
	I0603 12:07:47.666258   72964 certs.go:194] generating shared ca certs ...
	I0603 12:07:47.666279   72964 certs.go:226] acquiring lock for ca certs: {Name:mk50092df3bdecbf959926adc377ee06b75aacd9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 12:07:47.666440   72964 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19008-7755/.minikube/ca.key
	I0603 12:07:47.666477   72964 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19008-7755/.minikube/proxy-client-ca.key
	I0603 12:07:47.666487   72964 certs.go:256] generating profile certs ...
	I0603 12:07:47.666567   72964 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/embed-certs-725022/client.key
	I0603 12:07:47.666623   72964 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/embed-certs-725022/apiserver.key.8c3ea0d5
	I0603 12:07:47.666712   72964 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/embed-certs-725022/proxy-client.key
	I0603 12:07:47.666874   72964 certs.go:484] found cert: /home/jenkins/minikube-integration/19008-7755/.minikube/certs/15028.pem (1338 bytes)
	W0603 12:07:47.666916   72964 certs.go:480] ignoring /home/jenkins/minikube-integration/19008-7755/.minikube/certs/15028_empty.pem, impossibly tiny 0 bytes
	I0603 12:07:47.666926   72964 certs.go:484] found cert: /home/jenkins/minikube-integration/19008-7755/.minikube/certs/ca-key.pem (1679 bytes)
	I0603 12:07:47.666947   72964 certs.go:484] found cert: /home/jenkins/minikube-integration/19008-7755/.minikube/certs/ca.pem (1082 bytes)
	I0603 12:07:47.666968   72964 certs.go:484] found cert: /home/jenkins/minikube-integration/19008-7755/.minikube/certs/cert.pem (1123 bytes)
	I0603 12:07:47.666988   72964 certs.go:484] found cert: /home/jenkins/minikube-integration/19008-7755/.minikube/certs/key.pem (1679 bytes)
	I0603 12:07:47.667026   72964 certs.go:484] found cert: /home/jenkins/minikube-integration/19008-7755/.minikube/files/etc/ssl/certs/150282.pem (1708 bytes)
	I0603 12:07:47.667721   72964 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0603 12:07:47.705180   72964 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0603 12:07:47.748552   72964 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0603 12:07:47.780173   72964 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0603 12:07:47.812902   72964 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/embed-certs-725022/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0603 12:07:47.844793   72964 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/embed-certs-725022/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0603 12:07:47.875181   72964 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/embed-certs-725022/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0603 12:07:47.899905   72964 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/embed-certs-725022/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0603 12:07:47.925039   72964 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0603 12:07:47.950701   72964 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/certs/15028.pem --> /usr/share/ca-certificates/15028.pem (1338 bytes)
	I0603 12:07:47.975798   72964 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/files/etc/ssl/certs/150282.pem --> /usr/share/ca-certificates/150282.pem (1708 bytes)
	I0603 12:07:48.002827   72964 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0603 12:07:48.021050   72964 ssh_runner.go:195] Run: openssl version
	I0603 12:07:48.027977   72964 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0603 12:07:48.043764   72964 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0603 12:07:48.050265   72964 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jun  3 10:39 /usr/share/ca-certificates/minikubeCA.pem
	I0603 12:07:48.050315   72964 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0603 12:07:48.056387   72964 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0603 12:07:48.067816   72964 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15028.pem && ln -fs /usr/share/ca-certificates/15028.pem /etc/ssl/certs/15028.pem"
	I0603 12:07:48.083715   72964 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15028.pem
	I0603 12:07:48.088813   72964 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jun  3 10:51 /usr/share/ca-certificates/15028.pem
	I0603 12:07:48.088870   72964 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15028.pem
	I0603 12:07:48.094833   72964 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/15028.pem /etc/ssl/certs/51391683.0"
	I0603 12:07:48.108005   72964 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/150282.pem && ln -fs /usr/share/ca-certificates/150282.pem /etc/ssl/certs/150282.pem"
	I0603 12:07:48.120434   72964 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/150282.pem
	I0603 12:07:48.125542   72964 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jun  3 10:51 /usr/share/ca-certificates/150282.pem
	I0603 12:07:48.125603   72964 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/150282.pem
	I0603 12:07:48.132060   72964 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/150282.pem /etc/ssl/certs/3ec20f2e.0"
	I0603 12:07:48.143594   72964 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0603 12:07:48.148392   72964 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0603 12:07:48.154571   72964 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0603 12:07:48.160573   72964 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0603 12:07:48.167146   72964 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0603 12:07:48.175232   72964 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0603 12:07:48.182197   72964 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
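The -checkend 86400 runs above verify that each control-plane certificate stays valid for at least another 24 hours before the restart proceeds. A minimal Go sketch of the same check (the certificate path is just one of those listed above, purely illustrative):

// expirycheck.go - illustrative only; mirrors `openssl x509 -checkend 86400`.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func main() {
	// Placeholder path; the log above checks several certs under /var/lib/minikube/certs.
	data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		fmt.Fprintln(os.Stderr, "no PEM block found")
		os.Exit(1)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	// Fail if the certificate expires within the next 24 hours (86400 seconds).
	if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
		fmt.Println("certificate expires within 24h:", cert.NotAfter)
		os.Exit(1)
	}
	fmt.Println("certificate valid until", cert.NotAfter)
}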
	I0603 12:07:48.188588   72964 kubeadm.go:391] StartCluster: {Name:embed-certs-725022 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:embed-certs-725022 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.245 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0603 12:07:48.188680   72964 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0603 12:07:48.188733   72964 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0603 12:07:48.229134   72964 cri.go:89] found id: ""
	I0603 12:07:48.229215   72964 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0603 12:07:48.241663   72964 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0603 12:07:48.241687   72964 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0603 12:07:48.241692   72964 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0603 12:07:48.241756   72964 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0603 12:07:48.252641   72964 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0603 12:07:48.253644   72964 kubeconfig.go:125] found "embed-certs-725022" server: "https://192.168.72.245:8443"
	I0603 12:07:48.255726   72964 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0603 12:07:48.265816   72964 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.72.245
	I0603 12:07:48.265849   72964 kubeadm.go:1154] stopping kube-system containers ...
	I0603 12:07:48.265862   72964 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0603 12:07:48.265956   72964 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0603 12:07:48.306408   72964 cri.go:89] found id: ""
	I0603 12:07:48.306471   72964 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0603 12:07:48.324859   72964 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0603 12:07:48.336076   72964 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0603 12:07:48.336098   72964 kubeadm.go:156] found existing configuration files:
	
	I0603 12:07:48.336159   72964 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0603 12:07:48.347274   72964 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0603 12:07:48.347328   72964 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0603 12:07:48.358447   72964 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0603 12:07:48.369460   72964 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0603 12:07:48.369509   72964 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0603 12:07:48.379714   72964 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0603 12:07:48.390460   72964 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0603 12:07:48.390506   72964 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0603 12:07:48.401178   72964 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0603 12:07:48.411383   72964 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0603 12:07:48.411423   72964 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0603 12:07:48.421813   72964 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0603 12:07:48.434585   72964 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0603 12:07:48.561075   72964 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0603 12:07:49.278187   72964 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0603 12:07:49.504897   72964 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0603 12:07:49.559494   72964 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0603 12:07:49.634949   72964 api_server.go:52] waiting for apiserver process to appear ...
	I0603 12:07:49.635051   72964 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:07:45.438738   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:07:45.939144   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:07:46.438431   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:07:46.938360   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:07:47.438811   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:07:47.938857   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:07:48.438849   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:07:48.938531   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:07:49.438876   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:07:49.938908   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:07:46.581939   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:07:48.584466   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:07:50.635461   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:07:53.112719   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:07:50.135411   72964 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:07:50.635951   72964 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:07:51.136119   72964 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:07:51.158722   72964 api_server.go:72] duration metric: took 1.52377732s to wait for apiserver process to appear ...
	I0603 12:07:51.158747   72964 api_server.go:88] waiting for apiserver healthz status ...
	I0603 12:07:51.158767   72964 api_server.go:253] Checking apiserver healthz at https://192.168.72.245:8443/healthz ...
	I0603 12:07:54.082978   72964 api_server.go:279] https://192.168.72.245:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0603 12:07:54.083005   72964 api_server.go:103] status: https://192.168.72.245:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0603 12:07:54.083017   72964 api_server.go:253] Checking apiserver healthz at https://192.168.72.245:8443/healthz ...
	I0603 12:07:54.092290   72964 api_server.go:279] https://192.168.72.245:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0603 12:07:54.092311   72964 api_server.go:103] status: https://192.168.72.245:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0603 12:07:54.159522   72964 api_server.go:253] Checking apiserver healthz at https://192.168.72.245:8443/healthz ...
	I0603 12:07:54.173284   72964 api_server.go:279] https://192.168.72.245:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0603 12:07:54.173308   72964 api_server.go:103] status: https://192.168.72.245:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0603 12:07:54.658949   72964 api_server.go:253] Checking apiserver healthz at https://192.168.72.245:8443/healthz ...
	I0603 12:07:54.663966   72964 api_server.go:279] https://192.168.72.245:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0603 12:07:54.663991   72964 api_server.go:103] status: https://192.168.72.245:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0603 12:07:50.438966   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:07:50.938952   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:07:51.439179   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:07:51.938804   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:07:52.438327   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:07:52.938677   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:07:53.438995   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:07:53.938976   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:07:54.438174   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:07:54.938412   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:07:50.641189   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:07:53.081531   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:07:55.081845   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:07:55.159125   72964 api_server.go:253] Checking apiserver healthz at https://192.168.72.245:8443/healthz ...
	I0603 12:07:55.168267   72964 api_server.go:279] https://192.168.72.245:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0603 12:07:55.168307   72964 api_server.go:103] status: https://192.168.72.245:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0603 12:07:55.658824   72964 api_server.go:253] Checking apiserver healthz at https://192.168.72.245:8443/healthz ...
	I0603 12:07:55.663523   72964 api_server.go:279] https://192.168.72.245:8443/healthz returned 200:
	ok
	I0603 12:07:55.670352   72964 api_server.go:141] control plane version: v1.30.1
	I0603 12:07:55.670383   72964 api_server.go:131] duration metric: took 4.511629799s to wait for apiserver health ...
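The healthz probing above treats 403 (anonymous access while RBAC bootstraps) and 500 (post-start hooks still failing) as "not ready yet" and keeps polling until a plain 200/ok comes back. A rough Go sketch of that loop, assuming the endpoint shown in the log and skipping TLS verification only because this is a throwaway probe:

// healthwait.go - illustrative sketch of the healthz polling seen above.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	url := "https://192.168.72.245:8443/healthz" // endpoint from the log above
	for {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			fmt.Printf("%d: %s\n", resp.StatusCode, body)
			// 403 and 500 mean "not ready yet" during a restart; 200 means healthy.
			if resp.StatusCode == http.StatusOK {
				return
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
}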
	I0603 12:07:55.670391   72964 cni.go:84] Creating CNI manager for ""
	I0603 12:07:55.670397   72964 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0603 12:07:55.672360   72964 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0603 12:07:55.113539   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:07:57.613236   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:07:55.673720   72964 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0603 12:07:55.686773   72964 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0603 12:07:55.716937   72964 system_pods.go:43] waiting for kube-system pods to appear ...
	I0603 12:07:55.729237   72964 system_pods.go:59] 8 kube-system pods found
	I0603 12:07:55.729267   72964 system_pods.go:61] "coredns-7db6d8ff4d-thrfl" [efc31931-5040-4bb9-92e0-cdda477b38b2] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0603 12:07:55.729274   72964 system_pods.go:61] "etcd-embed-certs-725022" [47be7787-e8ae-4a63-9209-943edeec91b6] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0603 12:07:55.729281   72964 system_pods.go:61] "kube-apiserver-embed-certs-725022" [2812f362-ddb8-4f45-bdfe-ba5d90f3b33f] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0603 12:07:55.729287   72964 system_pods.go:61] "kube-controller-manager-embed-certs-725022" [97666e49-31ac-41c0-a49c-0db51d6c07b6] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0603 12:07:55.729294   72964 system_pods.go:61] "kube-proxy-d5ztj" [854c88f3-f0ab-4885-95a0-8134db48fc84] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0603 12:07:55.729300   72964 system_pods.go:61] "kube-scheduler-embed-certs-725022" [df602caf-2ca4-4963-b724-5a6e8de65c78] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0603 12:07:55.729306   72964 system_pods.go:61] "metrics-server-569cc877fc-8jrnd" [3087c05b-9a8e-4bf7-bbe7-79f3c5540bf7] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0603 12:07:55.729313   72964 system_pods.go:61] "storage-provisioner" [68eeb37a-7098-4e87-8384-3399c2bbc583] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0603 12:07:55.729319   72964 system_pods.go:74] duration metric: took 12.368001ms to wait for pod list to return data ...
	I0603 12:07:55.729329   72964 node_conditions.go:102] verifying NodePressure condition ...
	I0603 12:07:55.733006   72964 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0603 12:07:55.733024   72964 node_conditions.go:123] node cpu capacity is 2
	I0603 12:07:55.733033   72964 node_conditions.go:105] duration metric: took 3.699303ms to run NodePressure ...
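After the bridge CNI config is written, the restart waits for kube-system pods to appear and samples node capacity (CPU, ephemeral storage) for the NodePressure check. A client-go sketch of an equivalent inspection; the kubeconfig path is an assumption:

// clustercheck.go - illustrative client-go sketch of the kube-system pod listing
// and node-capacity sampling shown above.
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumed kubeconfig location; any admin kubeconfig for the cluster works.
	config, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	pods, err := cs.CoreV1().Pods("kube-system").List(context.Background(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("kube-system pods found:", len(pods.Items))
	nodes, err := cs.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		cpu := n.Status.Capacity[corev1.ResourceCPU]
		storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
		fmt.Printf("%s: cpu=%s ephemeral-storage=%s\n", n.Name, cpu.String(), storage.String())
	}
}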
	I0603 12:07:55.733047   72964 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0603 12:07:56.040149   72964 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0603 12:07:56.050355   72964 kubeadm.go:733] kubelet initialised
	I0603 12:07:56.050376   72964 kubeadm.go:734] duration metric: took 10.199837ms waiting for restarted kubelet to initialise ...
	I0603 12:07:56.050383   72964 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0603 12:07:56.055536   72964 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-thrfl" in "kube-system" namespace to be "Ready" ...
	I0603 12:07:58.062682   72964 pod_ready.go:102] pod "coredns-7db6d8ff4d-thrfl" in "kube-system" namespace has status "Ready":"False"
	I0603 12:07:55.438798   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:07:55.938263   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:07:56.438870   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:07:56.938915   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:07:57.438799   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:07:57.938972   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:07:58.438367   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:07:58.939045   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:07:59.439020   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:07:59.938716   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:07:57.581813   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:08:00.080226   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:08:00.113886   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:08:02.613795   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:08:00.062724   72964 pod_ready.go:102] pod "coredns-7db6d8ff4d-thrfl" in "kube-system" namespace has status "Ready":"False"
	I0603 12:08:02.062937   72964 pod_ready.go:102] pod "coredns-7db6d8ff4d-thrfl" in "kube-system" namespace has status "Ready":"False"
	I0603 12:08:04.565302   72964 pod_ready.go:102] pod "coredns-7db6d8ff4d-thrfl" in "kube-system" namespace has status "Ready":"False"
	I0603 12:08:00.438789   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:00.938973   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:01.439098   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:01.938892   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:02.438978   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:02.938317   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:03.438969   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:03.938274   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:04.438255   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:04.938545   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:02.081713   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:08:04.082219   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:08:05.112940   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:08:07.113191   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:08:07.075333   72964 pod_ready.go:92] pod "coredns-7db6d8ff4d-thrfl" in "kube-system" namespace has status "Ready":"True"
	I0603 12:08:07.075361   72964 pod_ready.go:81] duration metric: took 11.019801293s for pod "coredns-7db6d8ff4d-thrfl" in "kube-system" namespace to be "Ready" ...
	I0603 12:08:07.075375   72964 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-725022" in "kube-system" namespace to be "Ready" ...
	I0603 12:08:08.583435   72964 pod_ready.go:92] pod "etcd-embed-certs-725022" in "kube-system" namespace has status "Ready":"True"
	I0603 12:08:08.583459   72964 pod_ready.go:81] duration metric: took 1.508076213s for pod "etcd-embed-certs-725022" in "kube-system" namespace to be "Ready" ...
	I0603 12:08:08.583468   72964 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-725022" in "kube-system" namespace to be "Ready" ...
	I0603 12:08:08.588791   72964 pod_ready.go:92] pod "kube-apiserver-embed-certs-725022" in "kube-system" namespace has status "Ready":"True"
	I0603 12:08:08.588817   72964 pod_ready.go:81] duration metric: took 5.342068ms for pod "kube-apiserver-embed-certs-725022" in "kube-system" namespace to be "Ready" ...
	I0603 12:08:08.588836   72964 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-725022" in "kube-system" namespace to be "Ready" ...
	I0603 12:08:08.593258   72964 pod_ready.go:92] pod "kube-controller-manager-embed-certs-725022" in "kube-system" namespace has status "Ready":"True"
	I0603 12:08:08.593279   72964 pod_ready.go:81] duration metric: took 4.43483ms for pod "kube-controller-manager-embed-certs-725022" in "kube-system" namespace to be "Ready" ...
	I0603 12:08:08.593292   72964 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-d5ztj" in "kube-system" namespace to be "Ready" ...
	I0603 12:08:08.601106   72964 pod_ready.go:92] pod "kube-proxy-d5ztj" in "kube-system" namespace has status "Ready":"True"
	I0603 12:08:08.601125   72964 pod_ready.go:81] duration metric: took 7.826962ms for pod "kube-proxy-d5ztj" in "kube-system" namespace to be "Ready" ...
	I0603 12:08:08.601133   72964 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-725022" in "kube-system" namespace to be "Ready" ...
	I0603 12:08:08.660242   72964 pod_ready.go:92] pod "kube-scheduler-embed-certs-725022" in "kube-system" namespace has status "Ready":"True"
	I0603 12:08:08.660275   72964 pod_ready.go:81] duration metric: took 59.134528ms for pod "kube-scheduler-embed-certs-725022" in "kube-system" namespace to be "Ready" ...
	I0603 12:08:08.660297   72964 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace to be "Ready" ...
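Each pod_ready wait above resolves once the pod reports its Ready condition as True. A client-go sketch of that condition check, using a pod name taken from the log and an assumed kubeconfig path:

// podready.go - illustrative sketch of the "Ready" condition check performed above.
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isPodReady reports whether the pod's Ready condition is True.
func isPodReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig") // assumed path
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	pod, err := cs.CoreV1().Pods("kube-system").Get(context.Background(), "coredns-7db6d8ff4d-thrfl", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("Ready:", isPodReady(pod))
}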
	I0603 12:08:05.438368   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:05.938174   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:06.438995   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:06.939167   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:07.438451   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:07.938651   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:08.438892   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:08.938182   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:09.438548   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:09.938352   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:06.580980   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:08:08.583476   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:08:09.612231   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:08:11.613131   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:08:14.115179   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:08:10.667171   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:08:13.166284   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:08:10.438932   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:10.938156   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:11.438911   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:11.939064   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:12.438578   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:12.938389   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:13.438469   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:13.939000   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:14.438219   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:14.938949   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:11.081492   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:08:13.581052   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:08:16.612649   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:08:19.112795   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:08:15.166468   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:08:17.166591   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:08:19.666737   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:08:15.438709   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:15.938471   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:16.438909   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:16.939131   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:17.438995   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:17.938810   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:18.438615   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:18.938920   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:19.438966   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:19.938696   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:15.581276   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:08:17.581764   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:08:19.582048   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:08:21.116274   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:08:23.613288   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:08:21.667736   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:08:23.667798   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:08:20.438818   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:20.938625   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:21.439129   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:21.938488   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:22.438452   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:22.938328   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:23.438557   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:23.938427   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:24.438391   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:24.939088   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:22.080444   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:08:24.081387   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:08:26.113843   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:08:28.612076   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:08:26.165833   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:08:28.169171   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:08:25.439153   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:25.939073   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:26.438157   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:26.938755   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:27.438244   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:27.938149   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:28.439131   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:28.938855   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:29.439027   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:29.938159   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:26.081716   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:08:28.582162   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:08:30.613632   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:08:33.111746   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:08:30.667602   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:08:33.168233   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:08:30.438727   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:30.938281   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:31.438203   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:31.938903   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:32.438731   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:32.938479   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:33.438133   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 12:08:33.438202   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 12:08:33.480006   73662 cri.go:89] found id: ""
	I0603 12:08:33.480044   73662 logs.go:276] 0 containers: []
	W0603 12:08:33.480056   73662 logs.go:278] No container was found matching "kube-apiserver"
	I0603 12:08:33.480066   73662 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 12:08:33.480126   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 12:08:33.519446   73662 cri.go:89] found id: ""
	I0603 12:08:33.519469   73662 logs.go:276] 0 containers: []
	W0603 12:08:33.519476   73662 logs.go:278] No container was found matching "etcd"
	I0603 12:08:33.519480   73662 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 12:08:33.519536   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 12:08:33.553602   73662 cri.go:89] found id: ""
	I0603 12:08:33.553624   73662 logs.go:276] 0 containers: []
	W0603 12:08:33.553631   73662 logs.go:278] No container was found matching "coredns"
	I0603 12:08:33.553637   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 12:08:33.553692   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 12:08:33.588061   73662 cri.go:89] found id: ""
	I0603 12:08:33.588085   73662 logs.go:276] 0 containers: []
	W0603 12:08:33.588094   73662 logs.go:278] No container was found matching "kube-scheduler"
	I0603 12:08:33.588103   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 12:08:33.588155   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 12:08:33.623960   73662 cri.go:89] found id: ""
	I0603 12:08:33.623983   73662 logs.go:276] 0 containers: []
	W0603 12:08:33.623993   73662 logs.go:278] No container was found matching "kube-proxy"
	I0603 12:08:33.624000   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 12:08:33.624071   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 12:08:33.658829   73662 cri.go:89] found id: ""
	I0603 12:08:33.658873   73662 logs.go:276] 0 containers: []
	W0603 12:08:33.658885   73662 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 12:08:33.658893   73662 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 12:08:33.658956   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 12:08:33.699501   73662 cri.go:89] found id: ""
	I0603 12:08:33.699526   73662 logs.go:276] 0 containers: []
	W0603 12:08:33.699536   73662 logs.go:278] No container was found matching "kindnet"
	I0603 12:08:33.699544   73662 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 12:08:33.699601   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 12:08:33.732293   73662 cri.go:89] found id: ""
	I0603 12:08:33.732327   73662 logs.go:276] 0 containers: []
	W0603 12:08:33.732338   73662 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 12:08:33.732348   73662 logs.go:123] Gathering logs for kubelet ...
	I0603 12:08:33.732361   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 12:08:33.783990   73662 logs.go:123] Gathering logs for dmesg ...
	I0603 12:08:33.784027   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 12:08:33.800684   73662 logs.go:123] Gathering logs for describe nodes ...
	I0603 12:08:33.800711   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 12:08:33.939661   73662 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 12:08:33.939685   73662 logs.go:123] Gathering logs for CRI-O ...
	I0603 12:08:33.939699   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 12:08:34.006442   73662 logs.go:123] Gathering logs for container status ...
	I0603 12:08:34.006473   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 12:08:31.081400   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:08:33.582139   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:08:35.112488   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:08:37.113080   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:08:35.666988   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:08:38.166862   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:08:36.549129   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:36.562476   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 12:08:36.562536   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 12:08:36.600035   73662 cri.go:89] found id: ""
	I0603 12:08:36.600074   73662 logs.go:276] 0 containers: []
	W0603 12:08:36.600084   73662 logs.go:278] No container was found matching "kube-apiserver"
	I0603 12:08:36.600091   73662 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 12:08:36.600147   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 12:08:36.661954   73662 cri.go:89] found id: ""
	I0603 12:08:36.661981   73662 logs.go:276] 0 containers: []
	W0603 12:08:36.661989   73662 logs.go:278] No container was found matching "etcd"
	I0603 12:08:36.661996   73662 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 12:08:36.662082   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 12:08:36.699538   73662 cri.go:89] found id: ""
	I0603 12:08:36.699561   73662 logs.go:276] 0 containers: []
	W0603 12:08:36.699569   73662 logs.go:278] No container was found matching "coredns"
	I0603 12:08:36.699574   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 12:08:36.699619   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 12:08:36.735256   73662 cri.go:89] found id: ""
	I0603 12:08:36.735283   73662 logs.go:276] 0 containers: []
	W0603 12:08:36.735291   73662 logs.go:278] No container was found matching "kube-scheduler"
	I0603 12:08:36.735296   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 12:08:36.735356   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 12:08:36.779862   73662 cri.go:89] found id: ""
	I0603 12:08:36.779888   73662 logs.go:276] 0 containers: []
	W0603 12:08:36.779895   73662 logs.go:278] No container was found matching "kube-proxy"
	I0603 12:08:36.779900   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 12:08:36.779946   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 12:08:36.818146   73662 cri.go:89] found id: ""
	I0603 12:08:36.818180   73662 logs.go:276] 0 containers: []
	W0603 12:08:36.818190   73662 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 12:08:36.818198   73662 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 12:08:36.818256   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 12:08:36.855408   73662 cri.go:89] found id: ""
	I0603 12:08:36.855436   73662 logs.go:276] 0 containers: []
	W0603 12:08:36.855447   73662 logs.go:278] No container was found matching "kindnet"
	I0603 12:08:36.855455   73662 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 12:08:36.855521   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 12:08:36.891656   73662 cri.go:89] found id: ""
	I0603 12:08:36.891686   73662 logs.go:276] 0 containers: []
	W0603 12:08:36.891697   73662 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 12:08:36.891709   73662 logs.go:123] Gathering logs for container status ...
	I0603 12:08:36.891725   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 12:08:36.937992   73662 logs.go:123] Gathering logs for kubelet ...
	I0603 12:08:36.938025   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 12:08:36.992422   73662 logs.go:123] Gathering logs for dmesg ...
	I0603 12:08:36.992456   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 12:08:37.007064   73662 logs.go:123] Gathering logs for describe nodes ...
	I0603 12:08:37.007093   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 12:08:37.088103   73662 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 12:08:37.088124   73662 logs.go:123] Gathering logs for CRI-O ...
	I0603 12:08:37.088136   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 12:08:39.660794   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:39.674617   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 12:08:39.674694   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 12:08:39.711446   73662 cri.go:89] found id: ""
	I0603 12:08:39.711482   73662 logs.go:276] 0 containers: []
	W0603 12:08:39.711493   73662 logs.go:278] No container was found matching "kube-apiserver"
	I0603 12:08:39.711501   73662 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 12:08:39.711565   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 12:08:39.745918   73662 cri.go:89] found id: ""
	I0603 12:08:39.745947   73662 logs.go:276] 0 containers: []
	W0603 12:08:39.745957   73662 logs.go:278] No container was found matching "etcd"
	I0603 12:08:39.745964   73662 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 12:08:39.746013   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 12:08:39.780713   73662 cri.go:89] found id: ""
	I0603 12:08:39.780739   73662 logs.go:276] 0 containers: []
	W0603 12:08:39.780760   73662 logs.go:278] No container was found matching "coredns"
	I0603 12:08:39.780777   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 12:08:39.780839   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 12:08:39.815657   73662 cri.go:89] found id: ""
	I0603 12:08:39.815685   73662 logs.go:276] 0 containers: []
	W0603 12:08:39.815696   73662 logs.go:278] No container was found matching "kube-scheduler"
	I0603 12:08:39.815703   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 12:08:39.815769   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 12:08:39.849403   73662 cri.go:89] found id: ""
	I0603 12:08:39.849439   73662 logs.go:276] 0 containers: []
	W0603 12:08:39.849449   73662 logs.go:278] No container was found matching "kube-proxy"
	I0603 12:08:39.849456   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 12:08:39.849524   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 12:08:39.884830   73662 cri.go:89] found id: ""
	I0603 12:08:39.884876   73662 logs.go:276] 0 containers: []
	W0603 12:08:39.884887   73662 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 12:08:39.884894   73662 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 12:08:39.884954   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 12:08:39.917820   73662 cri.go:89] found id: ""
	I0603 12:08:39.917853   73662 logs.go:276] 0 containers: []
	W0603 12:08:39.917863   73662 logs.go:278] No container was found matching "kindnet"
	I0603 12:08:39.917871   73662 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 12:08:39.917928   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 12:08:39.955294   73662 cri.go:89] found id: ""
	I0603 12:08:39.955330   73662 logs.go:276] 0 containers: []
	W0603 12:08:39.955340   73662 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 12:08:39.955350   73662 logs.go:123] Gathering logs for container status ...
	I0603 12:08:39.955364   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 12:08:39.997553   73662 logs.go:123] Gathering logs for kubelet ...
	I0603 12:08:39.997577   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 12:08:40.052216   73662 logs.go:123] Gathering logs for dmesg ...
	I0603 12:08:40.052251   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 12:08:40.066377   73662 logs.go:123] Gathering logs for describe nodes ...
	I0603 12:08:40.066405   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0603 12:08:36.080739   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:08:38.580681   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:08:39.611998   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:08:41.613058   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:08:44.112634   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:08:40.168134   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:08:42.666329   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:08:44.666738   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	W0603 12:08:40.145631   73662 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 12:08:40.145653   73662 logs.go:123] Gathering logs for CRI-O ...
	I0603 12:08:40.145668   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 12:08:42.718782   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:42.732121   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 12:08:42.732197   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 12:08:42.766418   73662 cri.go:89] found id: ""
	I0603 12:08:42.766443   73662 logs.go:276] 0 containers: []
	W0603 12:08:42.766451   73662 logs.go:278] No container was found matching "kube-apiserver"
	I0603 12:08:42.766456   73662 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 12:08:42.766503   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 12:08:42.809790   73662 cri.go:89] found id: ""
	I0603 12:08:42.809821   73662 logs.go:276] 0 containers: []
	W0603 12:08:42.809830   73662 logs.go:278] No container was found matching "etcd"
	I0603 12:08:42.809836   73662 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 12:08:42.809893   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 12:08:42.843410   73662 cri.go:89] found id: ""
	I0603 12:08:42.843439   73662 logs.go:276] 0 containers: []
	W0603 12:08:42.843446   73662 logs.go:278] No container was found matching "coredns"
	I0603 12:08:42.843456   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 12:08:42.843510   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 12:08:42.879150   73662 cri.go:89] found id: ""
	I0603 12:08:42.879177   73662 logs.go:276] 0 containers: []
	W0603 12:08:42.879186   73662 logs.go:278] No container was found matching "kube-scheduler"
	I0603 12:08:42.879193   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 12:08:42.879256   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 12:08:42.914565   73662 cri.go:89] found id: ""
	I0603 12:08:42.914598   73662 logs.go:276] 0 containers: []
	W0603 12:08:42.914609   73662 logs.go:278] No container was found matching "kube-proxy"
	I0603 12:08:42.914616   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 12:08:42.914680   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 12:08:42.949467   73662 cri.go:89] found id: ""
	I0603 12:08:42.949496   73662 logs.go:276] 0 containers: []
	W0603 12:08:42.949506   73662 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 12:08:42.949513   73662 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 12:08:42.949563   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 12:08:42.984235   73662 cri.go:89] found id: ""
	I0603 12:08:42.984257   73662 logs.go:276] 0 containers: []
	W0603 12:08:42.984264   73662 logs.go:278] No container was found matching "kindnet"
	I0603 12:08:42.984269   73662 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 12:08:42.984314   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 12:08:43.027786   73662 cri.go:89] found id: ""
	I0603 12:08:43.027816   73662 logs.go:276] 0 containers: []
	W0603 12:08:43.027827   73662 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 12:08:43.027838   73662 logs.go:123] Gathering logs for kubelet ...
	I0603 12:08:43.027852   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 12:08:43.099184   73662 logs.go:123] Gathering logs for dmesg ...
	I0603 12:08:43.099212   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 12:08:43.124733   73662 logs.go:123] Gathering logs for describe nodes ...
	I0603 12:08:43.124755   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 12:08:43.194716   73662 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 12:08:43.194741   73662 logs.go:123] Gathering logs for CRI-O ...
	I0603 12:08:43.194759   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 12:08:43.275948   73662 logs.go:123] Gathering logs for container status ...
	I0603 12:08:43.275982   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 12:08:41.080968   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:08:43.081892   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:08:45.082261   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:08:46.113795   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:08:48.612577   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:08:47.166497   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:08:49.167122   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:08:45.819178   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:45.832301   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 12:08:45.832391   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 12:08:45.867947   73662 cri.go:89] found id: ""
	I0603 12:08:45.867979   73662 logs.go:276] 0 containers: []
	W0603 12:08:45.867990   73662 logs.go:278] No container was found matching "kube-apiserver"
	I0603 12:08:45.867998   73662 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 12:08:45.868050   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 12:08:45.909498   73662 cri.go:89] found id: ""
	I0603 12:08:45.909529   73662 logs.go:276] 0 containers: []
	W0603 12:08:45.909541   73662 logs.go:278] No container was found matching "etcd"
	I0603 12:08:45.909552   73662 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 12:08:45.909614   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 12:08:45.942313   73662 cri.go:89] found id: ""
	I0603 12:08:45.942343   73662 logs.go:276] 0 containers: []
	W0603 12:08:45.942353   73662 logs.go:278] No container was found matching "coredns"
	I0603 12:08:45.942361   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 12:08:45.942425   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 12:08:45.976217   73662 cri.go:89] found id: ""
	I0603 12:08:45.976246   73662 logs.go:276] 0 containers: []
	W0603 12:08:45.976254   73662 logs.go:278] No container was found matching "kube-scheduler"
	I0603 12:08:45.976260   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 12:08:45.976306   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 12:08:46.010553   73662 cri.go:89] found id: ""
	I0603 12:08:46.010583   73662 logs.go:276] 0 containers: []
	W0603 12:08:46.010593   73662 logs.go:278] No container was found matching "kube-proxy"
	I0603 12:08:46.010599   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 12:08:46.010675   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 12:08:46.048459   73662 cri.go:89] found id: ""
	I0603 12:08:46.048481   73662 logs.go:276] 0 containers: []
	W0603 12:08:46.048489   73662 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 12:08:46.048495   73662 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 12:08:46.048540   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 12:08:46.084823   73662 cri.go:89] found id: ""
	I0603 12:08:46.084852   73662 logs.go:276] 0 containers: []
	W0603 12:08:46.084862   73662 logs.go:278] No container was found matching "kindnet"
	I0603 12:08:46.084869   73662 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 12:08:46.084920   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 12:08:46.129011   73662 cri.go:89] found id: ""
	I0603 12:08:46.129036   73662 logs.go:276] 0 containers: []
	W0603 12:08:46.129046   73662 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 12:08:46.129055   73662 logs.go:123] Gathering logs for dmesg ...
	I0603 12:08:46.129069   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 12:08:46.144145   73662 logs.go:123] Gathering logs for describe nodes ...
	I0603 12:08:46.144179   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 12:08:46.213800   73662 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 12:08:46.213826   73662 logs.go:123] Gathering logs for CRI-O ...
	I0603 12:08:46.213841   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 12:08:46.294423   73662 logs.go:123] Gathering logs for container status ...
	I0603 12:08:46.294453   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 12:08:46.334408   73662 logs.go:123] Gathering logs for kubelet ...
	I0603 12:08:46.334436   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 12:08:48.888798   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:48.901815   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 12:08:48.901876   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 12:08:48.935266   73662 cri.go:89] found id: ""
	I0603 12:08:48.935290   73662 logs.go:276] 0 containers: []
	W0603 12:08:48.935301   73662 logs.go:278] No container was found matching "kube-apiserver"
	I0603 12:08:48.935308   73662 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 12:08:48.935375   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 12:08:48.969640   73662 cri.go:89] found id: ""
	I0603 12:08:48.969666   73662 logs.go:276] 0 containers: []
	W0603 12:08:48.969673   73662 logs.go:278] No container was found matching "etcd"
	I0603 12:08:48.969678   73662 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 12:08:48.969739   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 12:08:49.003697   73662 cri.go:89] found id: ""
	I0603 12:08:49.003725   73662 logs.go:276] 0 containers: []
	W0603 12:08:49.003736   73662 logs.go:278] No container was found matching "coredns"
	I0603 12:08:49.003743   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 12:08:49.003800   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 12:08:49.037808   73662 cri.go:89] found id: ""
	I0603 12:08:49.037837   73662 logs.go:276] 0 containers: []
	W0603 12:08:49.037847   73662 logs.go:278] No container was found matching "kube-scheduler"
	I0603 12:08:49.037879   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 12:08:49.037947   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 12:08:49.071844   73662 cri.go:89] found id: ""
	I0603 12:08:49.071875   73662 logs.go:276] 0 containers: []
	W0603 12:08:49.071885   73662 logs.go:278] No container was found matching "kube-proxy"
	I0603 12:08:49.071892   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 12:08:49.071952   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 12:08:49.107907   73662 cri.go:89] found id: ""
	I0603 12:08:49.107934   73662 logs.go:276] 0 containers: []
	W0603 12:08:49.107945   73662 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 12:08:49.107952   73662 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 12:08:49.108012   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 12:08:49.144847   73662 cri.go:89] found id: ""
	I0603 12:08:49.144869   73662 logs.go:276] 0 containers: []
	W0603 12:08:49.144876   73662 logs.go:278] No container was found matching "kindnet"
	I0603 12:08:49.144882   73662 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 12:08:49.144944   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 12:08:49.183910   73662 cri.go:89] found id: ""
	I0603 12:08:49.183931   73662 logs.go:276] 0 containers: []
	W0603 12:08:49.183940   73662 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 12:08:49.183951   73662 logs.go:123] Gathering logs for kubelet ...
	I0603 12:08:49.183964   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 12:08:49.237344   73662 logs.go:123] Gathering logs for dmesg ...
	I0603 12:08:49.237376   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 12:08:49.251612   73662 logs.go:123] Gathering logs for describe nodes ...
	I0603 12:08:49.251636   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 12:08:49.317211   73662 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 12:08:49.317236   73662 logs.go:123] Gathering logs for CRI-O ...
	I0603 12:08:49.317251   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 12:08:49.394414   73662 logs.go:123] Gathering logs for container status ...
	I0603 12:08:49.394455   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 12:08:47.581577   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:08:50.080726   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:08:51.112151   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:08:53.112224   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:08:51.666596   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:08:54.166060   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:08:51.937686   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:51.950390   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 12:08:51.950466   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 12:08:51.984341   73662 cri.go:89] found id: ""
	I0603 12:08:51.984365   73662 logs.go:276] 0 containers: []
	W0603 12:08:51.984372   73662 logs.go:278] No container was found matching "kube-apiserver"
	I0603 12:08:51.984378   73662 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 12:08:51.984426   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 12:08:52.017828   73662 cri.go:89] found id: ""
	I0603 12:08:52.017857   73662 logs.go:276] 0 containers: []
	W0603 12:08:52.017866   73662 logs.go:278] No container was found matching "etcd"
	I0603 12:08:52.017872   73662 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 12:08:52.017918   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 12:08:52.057283   73662 cri.go:89] found id: ""
	I0603 12:08:52.057314   73662 logs.go:276] 0 containers: []
	W0603 12:08:52.057324   73662 logs.go:278] No container was found matching "coredns"
	I0603 12:08:52.057331   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 12:08:52.057391   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 12:08:52.102270   73662 cri.go:89] found id: ""
	I0603 12:08:52.102303   73662 logs.go:276] 0 containers: []
	W0603 12:08:52.102313   73662 logs.go:278] No container was found matching "kube-scheduler"
	I0603 12:08:52.102321   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 12:08:52.102383   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 12:08:52.137361   73662 cri.go:89] found id: ""
	I0603 12:08:52.137386   73662 logs.go:276] 0 containers: []
	W0603 12:08:52.137393   73662 logs.go:278] No container was found matching "kube-proxy"
	I0603 12:08:52.137399   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 12:08:52.137463   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 12:08:52.171765   73662 cri.go:89] found id: ""
	I0603 12:08:52.171791   73662 logs.go:276] 0 containers: []
	W0603 12:08:52.171800   73662 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 12:08:52.171807   73662 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 12:08:52.171854   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 12:08:52.204688   73662 cri.go:89] found id: ""
	I0603 12:08:52.204715   73662 logs.go:276] 0 containers: []
	W0603 12:08:52.204722   73662 logs.go:278] No container was found matching "kindnet"
	I0603 12:08:52.204728   73662 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 12:08:52.204780   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 12:08:52.242547   73662 cri.go:89] found id: ""
	I0603 12:08:52.242571   73662 logs.go:276] 0 containers: []
	W0603 12:08:52.242579   73662 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 12:08:52.242586   73662 logs.go:123] Gathering logs for CRI-O ...
	I0603 12:08:52.242599   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 12:08:52.319089   73662 logs.go:123] Gathering logs for container status ...
	I0603 12:08:52.319122   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 12:08:52.360879   73662 logs.go:123] Gathering logs for kubelet ...
	I0603 12:08:52.360910   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 12:08:52.413601   73662 logs.go:123] Gathering logs for dmesg ...
	I0603 12:08:52.413641   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 12:08:52.428336   73662 logs.go:123] Gathering logs for describe nodes ...
	I0603 12:08:52.428370   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 12:08:52.500089   73662 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 12:08:55.001244   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:55.015217   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 12:08:55.015286   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 12:08:55.055825   73662 cri.go:89] found id: ""
	I0603 12:08:55.055906   73662 logs.go:276] 0 containers: []
	W0603 12:08:55.055922   73662 logs.go:278] No container was found matching "kube-apiserver"
	I0603 12:08:55.055930   73662 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 12:08:55.055993   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 12:08:52.080957   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:08:54.081055   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:08:55.113083   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:08:57.612727   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:08:56.166588   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:08:58.167503   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:08:55.092456   73662 cri.go:89] found id: ""
	I0603 12:08:55.093688   73662 logs.go:276] 0 containers: []
	W0603 12:08:55.093711   73662 logs.go:278] No container was found matching "etcd"
	I0603 12:08:55.093723   73662 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 12:08:55.093787   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 12:08:55.131165   73662 cri.go:89] found id: ""
	I0603 12:08:55.131193   73662 logs.go:276] 0 containers: []
	W0603 12:08:55.131203   73662 logs.go:278] No container was found matching "coredns"
	I0603 12:08:55.131210   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 12:08:55.131260   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 12:08:55.168170   73662 cri.go:89] found id: ""
	I0603 12:08:55.168188   73662 logs.go:276] 0 containers: []
	W0603 12:08:55.168194   73662 logs.go:278] No container was found matching "kube-scheduler"
	I0603 12:08:55.168200   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 12:08:55.168247   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 12:08:55.203409   73662 cri.go:89] found id: ""
	I0603 12:08:55.203434   73662 logs.go:276] 0 containers: []
	W0603 12:08:55.203441   73662 logs.go:278] No container was found matching "kube-proxy"
	I0603 12:08:55.203446   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 12:08:55.203491   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 12:08:55.239971   73662 cri.go:89] found id: ""
	I0603 12:08:55.239997   73662 logs.go:276] 0 containers: []
	W0603 12:08:55.240009   73662 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 12:08:55.240016   73662 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 12:08:55.240077   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 12:08:55.275115   73662 cri.go:89] found id: ""
	I0603 12:08:55.275144   73662 logs.go:276] 0 containers: []
	W0603 12:08:55.275154   73662 logs.go:278] No container was found matching "kindnet"
	I0603 12:08:55.275162   73662 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 12:08:55.275221   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 12:08:55.309384   73662 cri.go:89] found id: ""
	I0603 12:08:55.309414   73662 logs.go:276] 0 containers: []
	W0603 12:08:55.309425   73662 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 12:08:55.309435   73662 logs.go:123] Gathering logs for dmesg ...
	I0603 12:08:55.309451   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 12:08:55.323455   73662 logs.go:123] Gathering logs for describe nodes ...
	I0603 12:08:55.323485   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 12:08:55.397581   73662 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 12:08:55.397606   73662 logs.go:123] Gathering logs for CRI-O ...
	I0603 12:08:55.397617   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 12:08:55.473046   73662 logs.go:123] Gathering logs for container status ...
	I0603 12:08:55.473079   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 12:08:55.515248   73662 logs.go:123] Gathering logs for kubelet ...
	I0603 12:08:55.515282   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 12:08:58.067416   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:58.081175   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 12:08:58.081241   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 12:08:58.121654   73662 cri.go:89] found id: ""
	I0603 12:08:58.121680   73662 logs.go:276] 0 containers: []
	W0603 12:08:58.121691   73662 logs.go:278] No container was found matching "kube-apiserver"
	I0603 12:08:58.121698   73662 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 12:08:58.121774   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 12:08:58.159599   73662 cri.go:89] found id: ""
	I0603 12:08:58.159623   73662 logs.go:276] 0 containers: []
	W0603 12:08:58.159631   73662 logs.go:278] No container was found matching "etcd"
	I0603 12:08:58.159636   73662 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 12:08:58.159689   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 12:08:58.197518   73662 cri.go:89] found id: ""
	I0603 12:08:58.197545   73662 logs.go:276] 0 containers: []
	W0603 12:08:58.197553   73662 logs.go:278] No container was found matching "coredns"
	I0603 12:08:58.197558   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 12:08:58.197603   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 12:08:58.232433   73662 cri.go:89] found id: ""
	I0603 12:08:58.232463   73662 logs.go:276] 0 containers: []
	W0603 12:08:58.232474   73662 logs.go:278] No container was found matching "kube-scheduler"
	I0603 12:08:58.232479   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 12:08:58.232529   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 12:08:58.268209   73662 cri.go:89] found id: ""
	I0603 12:08:58.268234   73662 logs.go:276] 0 containers: []
	W0603 12:08:58.268242   73662 logs.go:278] No container was found matching "kube-proxy"
	I0603 12:08:58.268248   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 12:08:58.268307   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 12:08:58.302091   73662 cri.go:89] found id: ""
	I0603 12:08:58.302118   73662 logs.go:276] 0 containers: []
	W0603 12:08:58.302129   73662 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 12:08:58.302136   73662 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 12:08:58.302195   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 12:08:58.336539   73662 cri.go:89] found id: ""
	I0603 12:08:58.336567   73662 logs.go:276] 0 containers: []
	W0603 12:08:58.336574   73662 logs.go:278] No container was found matching "kindnet"
	I0603 12:08:58.336579   73662 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 12:08:58.336627   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 12:08:58.369263   73662 cri.go:89] found id: ""
	I0603 12:08:58.369294   73662 logs.go:276] 0 containers: []
	W0603 12:08:58.369305   73662 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 12:08:58.369316   73662 logs.go:123] Gathering logs for container status ...
	I0603 12:08:58.369329   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 12:08:58.408651   73662 logs.go:123] Gathering logs for kubelet ...
	I0603 12:08:58.408683   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 12:08:58.463551   73662 logs.go:123] Gathering logs for dmesg ...
	I0603 12:08:58.463578   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 12:08:58.478781   73662 logs.go:123] Gathering logs for describe nodes ...
	I0603 12:08:58.478808   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 12:08:58.556604   73662 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 12:08:58.556631   73662 logs.go:123] Gathering logs for CRI-O ...
	I0603 12:08:58.556646   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 12:08:56.580284   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:08:58.582526   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:09:00.112533   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:09:02.113462   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:09:00.666282   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:09:02.666684   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:09:04.666822   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:09:01.135368   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:09:01.148448   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 12:09:01.148517   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 12:09:01.184913   73662 cri.go:89] found id: ""
	I0603 12:09:01.184936   73662 logs.go:276] 0 containers: []
	W0603 12:09:01.184947   73662 logs.go:278] No container was found matching "kube-apiserver"
	I0603 12:09:01.184955   73662 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 12:09:01.185017   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 12:09:01.221508   73662 cri.go:89] found id: ""
	I0603 12:09:01.221538   73662 logs.go:276] 0 containers: []
	W0603 12:09:01.221547   73662 logs.go:278] No container was found matching "etcd"
	I0603 12:09:01.221552   73662 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 12:09:01.221613   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 12:09:01.256588   73662 cri.go:89] found id: ""
	I0603 12:09:01.256617   73662 logs.go:276] 0 containers: []
	W0603 12:09:01.256627   73662 logs.go:278] No container was found matching "coredns"
	I0603 12:09:01.256634   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 12:09:01.256696   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 12:09:01.292874   73662 cri.go:89] found id: ""
	I0603 12:09:01.292898   73662 logs.go:276] 0 containers: []
	W0603 12:09:01.292906   73662 logs.go:278] No container was found matching "kube-scheduler"
	I0603 12:09:01.292913   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 12:09:01.292957   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 12:09:01.330607   73662 cri.go:89] found id: ""
	I0603 12:09:01.330636   73662 logs.go:276] 0 containers: []
	W0603 12:09:01.330646   73662 logs.go:278] No container was found matching "kube-proxy"
	I0603 12:09:01.330652   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 12:09:01.330698   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 12:09:01.366053   73662 cri.go:89] found id: ""
	I0603 12:09:01.366090   73662 logs.go:276] 0 containers: []
	W0603 12:09:01.366102   73662 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 12:09:01.366110   73662 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 12:09:01.366168   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 12:09:01.403446   73662 cri.go:89] found id: ""
	I0603 12:09:01.403476   73662 logs.go:276] 0 containers: []
	W0603 12:09:01.403489   73662 logs.go:278] No container was found matching "kindnet"
	I0603 12:09:01.403495   73662 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 12:09:01.403558   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 12:09:01.445413   73662 cri.go:89] found id: ""
	I0603 12:09:01.445444   73662 logs.go:276] 0 containers: []
	W0603 12:09:01.445456   73662 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 12:09:01.445467   73662 logs.go:123] Gathering logs for describe nodes ...
	I0603 12:09:01.445485   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 12:09:01.521804   73662 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 12:09:01.521831   73662 logs.go:123] Gathering logs for CRI-O ...
	I0603 12:09:01.521846   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 12:09:01.601841   73662 logs.go:123] Gathering logs for container status ...
	I0603 12:09:01.601869   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 12:09:01.642642   73662 logs.go:123] Gathering logs for kubelet ...
	I0603 12:09:01.642685   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 12:09:01.700512   73662 logs.go:123] Gathering logs for dmesg ...
	I0603 12:09:01.700547   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 12:09:04.216853   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:09:04.229827   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 12:09:04.229910   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 12:09:04.265194   73662 cri.go:89] found id: ""
	I0603 12:09:04.265223   73662 logs.go:276] 0 containers: []
	W0603 12:09:04.265230   73662 logs.go:278] No container was found matching "kube-apiserver"
	I0603 12:09:04.265235   73662 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 12:09:04.265294   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 12:09:04.301157   73662 cri.go:89] found id: ""
	I0603 12:09:04.301186   73662 logs.go:276] 0 containers: []
	W0603 12:09:04.301193   73662 logs.go:278] No container was found matching "etcd"
	I0603 12:09:04.301199   73662 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 12:09:04.301249   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 12:09:04.335992   73662 cri.go:89] found id: ""
	I0603 12:09:04.336014   73662 logs.go:276] 0 containers: []
	W0603 12:09:04.336024   73662 logs.go:278] No container was found matching "coredns"
	I0603 12:09:04.336031   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 12:09:04.336090   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 12:09:04.371342   73662 cri.go:89] found id: ""
	I0603 12:09:04.371375   73662 logs.go:276] 0 containers: []
	W0603 12:09:04.371386   73662 logs.go:278] No container was found matching "kube-scheduler"
	I0603 12:09:04.371393   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 12:09:04.371452   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 12:09:04.406439   73662 cri.go:89] found id: ""
	I0603 12:09:04.406466   73662 logs.go:276] 0 containers: []
	W0603 12:09:04.406476   73662 logs.go:278] No container was found matching "kube-proxy"
	I0603 12:09:04.406483   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 12:09:04.406540   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 12:09:04.438426   73662 cri.go:89] found id: ""
	I0603 12:09:04.438448   73662 logs.go:276] 0 containers: []
	W0603 12:09:04.438458   73662 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 12:09:04.438467   73662 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 12:09:04.438525   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 12:09:04.471465   73662 cri.go:89] found id: ""
	I0603 12:09:04.471494   73662 logs.go:276] 0 containers: []
	W0603 12:09:04.471504   73662 logs.go:278] No container was found matching "kindnet"
	I0603 12:09:04.471512   73662 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 12:09:04.471576   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 12:09:04.507994   73662 cri.go:89] found id: ""
	I0603 12:09:04.508016   73662 logs.go:276] 0 containers: []
	W0603 12:09:04.508023   73662 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 12:09:04.508031   73662 logs.go:123] Gathering logs for kubelet ...
	I0603 12:09:04.508042   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 12:09:04.558973   73662 logs.go:123] Gathering logs for dmesg ...
	I0603 12:09:04.559007   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 12:09:04.576157   73662 logs.go:123] Gathering logs for describe nodes ...
	I0603 12:09:04.576190   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 12:09:04.653262   73662 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 12:09:04.653282   73662 logs.go:123] Gathering logs for CRI-O ...
	I0603 12:09:04.653293   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 12:09:04.732195   73662 logs.go:123] Gathering logs for container status ...
	I0603 12:09:04.732228   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 12:09:01.081232   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:09:03.083123   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:09:05.083243   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:09:04.612842   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:09:07.113160   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:09:06.667720   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:09:09.167160   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:09:07.282253   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:09:07.296478   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 12:09:07.296549   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 12:09:07.331591   73662 cri.go:89] found id: ""
	I0603 12:09:07.331614   73662 logs.go:276] 0 containers: []
	W0603 12:09:07.331621   73662 logs.go:278] No container was found matching "kube-apiserver"
	I0603 12:09:07.331626   73662 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 12:09:07.331676   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 12:09:07.367333   73662 cri.go:89] found id: ""
	I0603 12:09:07.367356   73662 logs.go:276] 0 containers: []
	W0603 12:09:07.367363   73662 logs.go:278] No container was found matching "etcd"
	I0603 12:09:07.367369   73662 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 12:09:07.367426   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 12:09:07.406446   73662 cri.go:89] found id: ""
	I0603 12:09:07.406471   73662 logs.go:276] 0 containers: []
	W0603 12:09:07.406479   73662 logs.go:278] No container was found matching "coredns"
	I0603 12:09:07.406485   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 12:09:07.406544   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 12:09:07.441610   73662 cri.go:89] found id: ""
	I0603 12:09:07.441632   73662 logs.go:276] 0 containers: []
	W0603 12:09:07.441640   73662 logs.go:278] No container was found matching "kube-scheduler"
	I0603 12:09:07.441646   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 12:09:07.441699   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 12:09:07.476479   73662 cri.go:89] found id: ""
	I0603 12:09:07.476501   73662 logs.go:276] 0 containers: []
	W0603 12:09:07.476508   73662 logs.go:278] No container was found matching "kube-proxy"
	I0603 12:09:07.476513   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 12:09:07.476586   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 12:09:07.513712   73662 cri.go:89] found id: ""
	I0603 12:09:07.513740   73662 logs.go:276] 0 containers: []
	W0603 12:09:07.513750   73662 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 12:09:07.513758   73662 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 12:09:07.513816   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 12:09:07.552169   73662 cri.go:89] found id: ""
	I0603 12:09:07.552195   73662 logs.go:276] 0 containers: []
	W0603 12:09:07.552206   73662 logs.go:278] No container was found matching "kindnet"
	I0603 12:09:07.552213   73662 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 12:09:07.552274   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 12:09:07.591926   73662 cri.go:89] found id: ""
	I0603 12:09:07.591950   73662 logs.go:276] 0 containers: []
	W0603 12:09:07.591956   73662 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 12:09:07.591963   73662 logs.go:123] Gathering logs for describe nodes ...
	I0603 12:09:07.591974   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 12:09:07.672408   73662 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 12:09:07.672429   73662 logs.go:123] Gathering logs for CRI-O ...
	I0603 12:09:07.672440   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 12:09:07.752948   73662 logs.go:123] Gathering logs for container status ...
	I0603 12:09:07.752980   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 12:09:07.791942   73662 logs.go:123] Gathering logs for kubelet ...
	I0603 12:09:07.791975   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 12:09:07.849187   73662 logs.go:123] Gathering logs for dmesg ...
	I0603 12:09:07.849222   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 12:09:07.586314   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:09:10.082310   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:09:09.612757   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:09:11.612893   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:09:13.613395   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:09:11.669965   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:09:14.165493   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:09:10.364466   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:09:10.377895   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 12:09:10.377967   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 12:09:10.412039   73662 cri.go:89] found id: ""
	I0603 12:09:10.412062   73662 logs.go:276] 0 containers: []
	W0603 12:09:10.412070   73662 logs.go:278] No container was found matching "kube-apiserver"
	I0603 12:09:10.412082   73662 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 12:09:10.412137   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 12:09:10.444562   73662 cri.go:89] found id: ""
	I0603 12:09:10.444585   73662 logs.go:276] 0 containers: []
	W0603 12:09:10.444594   73662 logs.go:278] No container was found matching "etcd"
	I0603 12:09:10.444602   73662 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 12:09:10.444657   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 12:09:10.479651   73662 cri.go:89] found id: ""
	I0603 12:09:10.479674   73662 logs.go:276] 0 containers: []
	W0603 12:09:10.479681   73662 logs.go:278] No container was found matching "coredns"
	I0603 12:09:10.479687   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 12:09:10.479742   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 12:09:10.518978   73662 cri.go:89] found id: ""
	I0603 12:09:10.519000   73662 logs.go:276] 0 containers: []
	W0603 12:09:10.519011   73662 logs.go:278] No container was found matching "kube-scheduler"
	I0603 12:09:10.519019   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 12:09:10.519100   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 12:09:10.553848   73662 cri.go:89] found id: ""
	I0603 12:09:10.553873   73662 logs.go:276] 0 containers: []
	W0603 12:09:10.553880   73662 logs.go:278] No container was found matching "kube-proxy"
	I0603 12:09:10.553885   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 12:09:10.553933   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 12:09:10.592081   73662 cri.go:89] found id: ""
	I0603 12:09:10.592107   73662 logs.go:276] 0 containers: []
	W0603 12:09:10.592116   73662 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 12:09:10.592124   73662 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 12:09:10.592176   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 12:09:10.629138   73662 cri.go:89] found id: ""
	I0603 12:09:10.629164   73662 logs.go:276] 0 containers: []
	W0603 12:09:10.629175   73662 logs.go:278] No container was found matching "kindnet"
	I0603 12:09:10.629181   73662 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 12:09:10.629233   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 12:09:10.666660   73662 cri.go:89] found id: ""
	I0603 12:09:10.666686   73662 logs.go:276] 0 containers: []
	W0603 12:09:10.666695   73662 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 12:09:10.666705   73662 logs.go:123] Gathering logs for CRI-O ...
	I0603 12:09:10.666723   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 12:09:10.747856   73662 logs.go:123] Gathering logs for container status ...
	I0603 12:09:10.747892   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 12:09:10.792403   73662 logs.go:123] Gathering logs for kubelet ...
	I0603 12:09:10.792442   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 12:09:10.844484   73662 logs.go:123] Gathering logs for dmesg ...
	I0603 12:09:10.844520   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 12:09:10.857822   73662 logs.go:123] Gathering logs for describe nodes ...
	I0603 12:09:10.857848   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 12:09:10.927434   73662 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 12:09:13.428260   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:09:13.442354   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 12:09:13.442418   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 12:09:13.480908   73662 cri.go:89] found id: ""
	I0603 12:09:13.480938   73662 logs.go:276] 0 containers: []
	W0603 12:09:13.480948   73662 logs.go:278] No container was found matching "kube-apiserver"
	I0603 12:09:13.480953   73662 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 12:09:13.481002   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 12:09:13.513942   73662 cri.go:89] found id: ""
	I0603 12:09:13.513966   73662 logs.go:276] 0 containers: []
	W0603 12:09:13.513979   73662 logs.go:278] No container was found matching "etcd"
	I0603 12:09:13.513985   73662 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 12:09:13.514042   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 12:09:13.548849   73662 cri.go:89] found id: ""
	I0603 12:09:13.548881   73662 logs.go:276] 0 containers: []
	W0603 12:09:13.548892   73662 logs.go:278] No container was found matching "coredns"
	I0603 12:09:13.548900   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 12:09:13.548961   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 12:09:13.587857   73662 cri.go:89] found id: ""
	I0603 12:09:13.587880   73662 logs.go:276] 0 containers: []
	W0603 12:09:13.587887   73662 logs.go:278] No container was found matching "kube-scheduler"
	I0603 12:09:13.587893   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 12:09:13.587941   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 12:09:13.623386   73662 cri.go:89] found id: ""
	I0603 12:09:13.623408   73662 logs.go:276] 0 containers: []
	W0603 12:09:13.623415   73662 logs.go:278] No container was found matching "kube-proxy"
	I0603 12:09:13.623421   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 12:09:13.623473   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 12:09:13.662721   73662 cri.go:89] found id: ""
	I0603 12:09:13.662755   73662 logs.go:276] 0 containers: []
	W0603 12:09:13.662774   73662 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 12:09:13.662782   73662 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 12:09:13.662847   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 12:09:13.697244   73662 cri.go:89] found id: ""
	I0603 12:09:13.697272   73662 logs.go:276] 0 containers: []
	W0603 12:09:13.697279   73662 logs.go:278] No container was found matching "kindnet"
	I0603 12:09:13.697284   73662 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 12:09:13.697342   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 12:09:13.734987   73662 cri.go:89] found id: ""
	I0603 12:09:13.735014   73662 logs.go:276] 0 containers: []
	W0603 12:09:13.735020   73662 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 12:09:13.735030   73662 logs.go:123] Gathering logs for kubelet ...
	I0603 12:09:13.735055   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 12:09:13.792422   73662 logs.go:123] Gathering logs for dmesg ...
	I0603 12:09:13.792463   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 12:09:13.807174   73662 logs.go:123] Gathering logs for describe nodes ...
	I0603 12:09:13.807220   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 12:09:13.880940   73662 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 12:09:13.880962   73662 logs.go:123] Gathering logs for CRI-O ...
	I0603 12:09:13.880976   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 12:09:13.970760   73662 logs.go:123] Gathering logs for container status ...
	I0603 12:09:13.970800   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 12:09:12.581261   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:09:14.581335   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:09:16.113403   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:09:18.113699   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:09:16.166578   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:09:18.167436   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:09:16.519306   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:09:16.534161   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 12:09:16.534213   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 12:09:16.571503   73662 cri.go:89] found id: ""
	I0603 12:09:16.571533   73662 logs.go:276] 0 containers: []
	W0603 12:09:16.571544   73662 logs.go:278] No container was found matching "kube-apiserver"
	I0603 12:09:16.571553   73662 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 12:09:16.571603   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 12:09:16.610388   73662 cri.go:89] found id: ""
	I0603 12:09:16.610425   73662 logs.go:276] 0 containers: []
	W0603 12:09:16.610434   73662 logs.go:278] No container was found matching "etcd"
	I0603 12:09:16.610442   73662 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 12:09:16.610501   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 12:09:16.654132   73662 cri.go:89] found id: ""
	I0603 12:09:16.654173   73662 logs.go:276] 0 containers: []
	W0603 12:09:16.654184   73662 logs.go:278] No container was found matching "coredns"
	I0603 12:09:16.654196   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 12:09:16.654288   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 12:09:16.695091   73662 cri.go:89] found id: ""
	I0603 12:09:16.695120   73662 logs.go:276] 0 containers: []
	W0603 12:09:16.695130   73662 logs.go:278] No container was found matching "kube-scheduler"
	I0603 12:09:16.695137   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 12:09:16.695198   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 12:09:16.729916   73662 cri.go:89] found id: ""
	I0603 12:09:16.729941   73662 logs.go:276] 0 containers: []
	W0603 12:09:16.729950   73662 logs.go:278] No container was found matching "kube-proxy"
	I0603 12:09:16.729958   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 12:09:16.730019   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 12:09:16.763653   73662 cri.go:89] found id: ""
	I0603 12:09:16.763675   73662 logs.go:276] 0 containers: []
	W0603 12:09:16.763683   73662 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 12:09:16.763688   73662 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 12:09:16.763734   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 12:09:16.801834   73662 cri.go:89] found id: ""
	I0603 12:09:16.801867   73662 logs.go:276] 0 containers: []
	W0603 12:09:16.801877   73662 logs.go:278] No container was found matching "kindnet"
	I0603 12:09:16.801885   73662 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 12:09:16.801946   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 12:09:16.836959   73662 cri.go:89] found id: ""
	I0603 12:09:16.836983   73662 logs.go:276] 0 containers: []
	W0603 12:09:16.836995   73662 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 12:09:16.837006   73662 logs.go:123] Gathering logs for dmesg ...
	I0603 12:09:16.837023   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 12:09:16.850264   73662 logs.go:123] Gathering logs for describe nodes ...
	I0603 12:09:16.850294   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 12:09:16.943870   73662 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 12:09:16.943897   73662 logs.go:123] Gathering logs for CRI-O ...
	I0603 12:09:16.943914   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 12:09:17.028230   73662 logs.go:123] Gathering logs for container status ...
	I0603 12:09:17.028269   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 12:09:17.071944   73662 logs.go:123] Gathering logs for kubelet ...
	I0603 12:09:17.071975   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 12:09:19.627246   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:09:19.641441   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 12:09:19.641513   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 12:09:19.680111   73662 cri.go:89] found id: ""
	I0603 12:09:19.680135   73662 logs.go:276] 0 containers: []
	W0603 12:09:19.680144   73662 logs.go:278] No container was found matching "kube-apiserver"
	I0603 12:09:19.680152   73662 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 12:09:19.680210   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 12:09:19.717357   73662 cri.go:89] found id: ""
	I0603 12:09:19.717386   73662 logs.go:276] 0 containers: []
	W0603 12:09:19.717396   73662 logs.go:278] No container was found matching "etcd"
	I0603 12:09:19.717403   73662 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 12:09:19.717467   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 12:09:19.753540   73662 cri.go:89] found id: ""
	I0603 12:09:19.753567   73662 logs.go:276] 0 containers: []
	W0603 12:09:19.753575   73662 logs.go:278] No container was found matching "coredns"
	I0603 12:09:19.753581   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 12:09:19.753627   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 12:09:19.790421   73662 cri.go:89] found id: ""
	I0603 12:09:19.790454   73662 logs.go:276] 0 containers: []
	W0603 12:09:19.790466   73662 logs.go:278] No container was found matching "kube-scheduler"
	I0603 12:09:19.790474   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 12:09:19.790532   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 12:09:19.828908   73662 cri.go:89] found id: ""
	I0603 12:09:19.828932   73662 logs.go:276] 0 containers: []
	W0603 12:09:19.828940   73662 logs.go:278] No container was found matching "kube-proxy"
	I0603 12:09:19.828946   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 12:09:19.829007   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 12:09:19.864576   73662 cri.go:89] found id: ""
	I0603 12:09:19.864609   73662 logs.go:276] 0 containers: []
	W0603 12:09:19.864618   73662 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 12:09:19.864624   73662 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 12:09:19.864679   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 12:09:19.899294   73662 cri.go:89] found id: ""
	I0603 12:09:19.899317   73662 logs.go:276] 0 containers: []
	W0603 12:09:19.899324   73662 logs.go:278] No container was found matching "kindnet"
	I0603 12:09:19.899330   73662 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 12:09:19.899397   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 12:09:19.933855   73662 cri.go:89] found id: ""
	I0603 12:09:19.933883   73662 logs.go:276] 0 containers: []
	W0603 12:09:19.933894   73662 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 12:09:19.933905   73662 logs.go:123] Gathering logs for container status ...
	I0603 12:09:19.933920   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 12:09:19.972676   73662 logs.go:123] Gathering logs for kubelet ...
	I0603 12:09:19.972703   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 12:09:20.025882   73662 logs.go:123] Gathering logs for dmesg ...
	I0603 12:09:20.025913   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 12:09:20.040706   73662 logs.go:123] Gathering logs for describe nodes ...
	I0603 12:09:20.040733   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0603 12:09:17.080807   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:09:19.581996   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:09:20.612561   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:09:23.112691   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:09:20.667356   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:09:23.167076   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	W0603 12:09:20.115483   73662 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 12:09:20.115506   73662 logs.go:123] Gathering logs for CRI-O ...
	I0603 12:09:20.115521   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 12:09:22.692138   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:09:22.706079   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 12:09:22.706155   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 12:09:22.742755   73662 cri.go:89] found id: ""
	I0603 12:09:22.742776   73662 logs.go:276] 0 containers: []
	W0603 12:09:22.742784   73662 logs.go:278] No container was found matching "kube-apiserver"
	I0603 12:09:22.742789   73662 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 12:09:22.742845   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 12:09:22.779522   73662 cri.go:89] found id: ""
	I0603 12:09:22.779549   73662 logs.go:276] 0 containers: []
	W0603 12:09:22.779557   73662 logs.go:278] No container was found matching "etcd"
	I0603 12:09:22.779563   73662 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 12:09:22.779615   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 12:09:22.813864   73662 cri.go:89] found id: ""
	I0603 12:09:22.813892   73662 logs.go:276] 0 containers: []
	W0603 12:09:22.813902   73662 logs.go:278] No container was found matching "coredns"
	I0603 12:09:22.813909   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 12:09:22.813967   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 12:09:22.848111   73662 cri.go:89] found id: ""
	I0603 12:09:22.848138   73662 logs.go:276] 0 containers: []
	W0603 12:09:22.848149   73662 logs.go:278] No container was found matching "kube-scheduler"
	I0603 12:09:22.848157   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 12:09:22.848213   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 12:09:22.899733   73662 cri.go:89] found id: ""
	I0603 12:09:22.899765   73662 logs.go:276] 0 containers: []
	W0603 12:09:22.899775   73662 logs.go:278] No container was found matching "kube-proxy"
	I0603 12:09:22.899781   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 12:09:22.899846   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 12:09:22.941237   73662 cri.go:89] found id: ""
	I0603 12:09:22.941266   73662 logs.go:276] 0 containers: []
	W0603 12:09:22.941276   73662 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 12:09:22.941282   73662 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 12:09:22.941330   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 12:09:22.981500   73662 cri.go:89] found id: ""
	I0603 12:09:22.981523   73662 logs.go:276] 0 containers: []
	W0603 12:09:22.981531   73662 logs.go:278] No container was found matching "kindnet"
	I0603 12:09:22.981536   73662 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 12:09:22.981580   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 12:09:23.016893   73662 cri.go:89] found id: ""
	I0603 12:09:23.016921   73662 logs.go:276] 0 containers: []
	W0603 12:09:23.016933   73662 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 12:09:23.016943   73662 logs.go:123] Gathering logs for container status ...
	I0603 12:09:23.016958   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 12:09:23.056019   73662 logs.go:123] Gathering logs for kubelet ...
	I0603 12:09:23.056052   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 12:09:23.112565   73662 logs.go:123] Gathering logs for dmesg ...
	I0603 12:09:23.112594   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 12:09:23.127475   73662 logs.go:123] Gathering logs for describe nodes ...
	I0603 12:09:23.127504   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 12:09:23.204939   73662 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 12:09:23.204959   73662 logs.go:123] Gathering logs for CRI-O ...
	I0603 12:09:23.204971   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 12:09:21.584829   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:09:24.081361   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:09:25.112860   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:09:27.113465   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:09:29.114788   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:09:25.167597   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:09:27.666395   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:09:29.668658   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:09:25.781506   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:09:25.794896   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 12:09:25.794971   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 12:09:25.831669   73662 cri.go:89] found id: ""
	I0603 12:09:25.831699   73662 logs.go:276] 0 containers: []
	W0603 12:09:25.831710   73662 logs.go:278] No container was found matching "kube-apiserver"
	I0603 12:09:25.831718   73662 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 12:09:25.831775   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 12:09:25.865198   73662 cri.go:89] found id: ""
	I0603 12:09:25.865224   73662 logs.go:276] 0 containers: []
	W0603 12:09:25.865233   73662 logs.go:278] No container was found matching "etcd"
	I0603 12:09:25.865241   73662 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 12:09:25.865296   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 12:09:25.900280   73662 cri.go:89] found id: ""
	I0603 12:09:25.900316   73662 logs.go:276] 0 containers: []
	W0603 12:09:25.900339   73662 logs.go:278] No container was found matching "coredns"
	I0603 12:09:25.900347   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 12:09:25.900409   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 12:09:25.934727   73662 cri.go:89] found id: ""
	I0603 12:09:25.934759   73662 logs.go:276] 0 containers: []
	W0603 12:09:25.934770   73662 logs.go:278] No container was found matching "kube-scheduler"
	I0603 12:09:25.934778   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 12:09:25.934837   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 12:09:25.970760   73662 cri.go:89] found id: ""
	I0603 12:09:25.970785   73662 logs.go:276] 0 containers: []
	W0603 12:09:25.970795   73662 logs.go:278] No container was found matching "kube-proxy"
	I0603 12:09:25.970800   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 12:09:25.970846   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 12:09:26.005580   73662 cri.go:89] found id: ""
	I0603 12:09:26.005608   73662 logs.go:276] 0 containers: []
	W0603 12:09:26.005617   73662 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 12:09:26.005622   73662 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 12:09:26.005670   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 12:09:26.042168   73662 cri.go:89] found id: ""
	I0603 12:09:26.042192   73662 logs.go:276] 0 containers: []
	W0603 12:09:26.042200   73662 logs.go:278] No container was found matching "kindnet"
	I0603 12:09:26.042206   73662 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 12:09:26.042256   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 12:09:26.081180   73662 cri.go:89] found id: ""
	I0603 12:09:26.081211   73662 logs.go:276] 0 containers: []
	W0603 12:09:26.081226   73662 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 12:09:26.081237   73662 logs.go:123] Gathering logs for describe nodes ...
	I0603 12:09:26.081252   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 12:09:26.156298   73662 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 12:09:26.156320   73662 logs.go:123] Gathering logs for CRI-O ...
	I0603 12:09:26.156333   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 12:09:26.241945   73662 logs.go:123] Gathering logs for container status ...
	I0603 12:09:26.241976   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 12:09:26.282363   73662 logs.go:123] Gathering logs for kubelet ...
	I0603 12:09:26.282391   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 12:09:26.336717   73662 logs.go:123] Gathering logs for dmesg ...
	I0603 12:09:26.336747   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 12:09:28.851601   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:09:28.865866   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 12:09:28.865930   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 12:09:28.901850   73662 cri.go:89] found id: ""
	I0603 12:09:28.901877   73662 logs.go:276] 0 containers: []
	W0603 12:09:28.901884   73662 logs.go:278] No container was found matching "kube-apiserver"
	I0603 12:09:28.901890   73662 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 12:09:28.901953   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 12:09:28.939384   73662 cri.go:89] found id: ""
	I0603 12:09:28.939414   73662 logs.go:276] 0 containers: []
	W0603 12:09:28.939431   73662 logs.go:278] No container was found matching "etcd"
	I0603 12:09:28.939438   73662 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 12:09:28.939501   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 12:09:28.974836   73662 cri.go:89] found id: ""
	I0603 12:09:28.974859   73662 logs.go:276] 0 containers: []
	W0603 12:09:28.974866   73662 logs.go:278] No container was found matching "coredns"
	I0603 12:09:28.974872   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 12:09:28.974929   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 12:09:29.020057   73662 cri.go:89] found id: ""
	I0603 12:09:29.020082   73662 logs.go:276] 0 containers: []
	W0603 12:09:29.020090   73662 logs.go:278] No container was found matching "kube-scheduler"
	I0603 12:09:29.020095   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 12:09:29.020154   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 12:09:29.065836   73662 cri.go:89] found id: ""
	I0603 12:09:29.065868   73662 logs.go:276] 0 containers: []
	W0603 12:09:29.065880   73662 logs.go:278] No container was found matching "kube-proxy"
	I0603 12:09:29.065887   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 12:09:29.065948   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 12:09:29.103326   73662 cri.go:89] found id: ""
	I0603 12:09:29.103352   73662 logs.go:276] 0 containers: []
	W0603 12:09:29.103362   73662 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 12:09:29.103369   73662 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 12:09:29.103432   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 12:09:29.141516   73662 cri.go:89] found id: ""
	I0603 12:09:29.141543   73662 logs.go:276] 0 containers: []
	W0603 12:09:29.141554   73662 logs.go:278] No container was found matching "kindnet"
	I0603 12:09:29.141561   73662 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 12:09:29.141615   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 12:09:29.177881   73662 cri.go:89] found id: ""
	I0603 12:09:29.177906   73662 logs.go:276] 0 containers: []
	W0603 12:09:29.177916   73662 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 12:09:29.177923   73662 logs.go:123] Gathering logs for kubelet ...
	I0603 12:09:29.177934   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 12:09:29.231307   73662 logs.go:123] Gathering logs for dmesg ...
	I0603 12:09:29.231338   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 12:09:29.248629   73662 logs.go:123] Gathering logs for describe nodes ...
	I0603 12:09:29.248676   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 12:09:29.348230   73662 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 12:09:29.348255   73662 logs.go:123] Gathering logs for CRI-O ...
	I0603 12:09:29.348272   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 12:09:29.433016   73662 logs.go:123] Gathering logs for container status ...
	I0603 12:09:29.433049   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 12:09:26.082319   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:09:28.581095   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:09:31.615220   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:09:34.112437   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:09:32.166628   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:09:34.167092   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:09:31.973677   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:09:31.988457   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 12:09:31.988518   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 12:09:32.028424   73662 cri.go:89] found id: ""
	I0603 12:09:32.028450   73662 logs.go:276] 0 containers: []
	W0603 12:09:32.028458   73662 logs.go:278] No container was found matching "kube-apiserver"
	I0603 12:09:32.028464   73662 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 12:09:32.028518   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 12:09:32.069388   73662 cri.go:89] found id: ""
	I0603 12:09:32.069413   73662 logs.go:276] 0 containers: []
	W0603 12:09:32.069421   73662 logs.go:278] No container was found matching "etcd"
	I0603 12:09:32.069427   73662 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 12:09:32.069480   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 12:09:32.106557   73662 cri.go:89] found id: ""
	I0603 12:09:32.106590   73662 logs.go:276] 0 containers: []
	W0603 12:09:32.106601   73662 logs.go:278] No container was found matching "coredns"
	I0603 12:09:32.106608   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 12:09:32.106677   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 12:09:32.142460   73662 cri.go:89] found id: ""
	I0603 12:09:32.142488   73662 logs.go:276] 0 containers: []
	W0603 12:09:32.142499   73662 logs.go:278] No container was found matching "kube-scheduler"
	I0603 12:09:32.142507   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 12:09:32.142560   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 12:09:32.177513   73662 cri.go:89] found id: ""
	I0603 12:09:32.177540   73662 logs.go:276] 0 containers: []
	W0603 12:09:32.177553   73662 logs.go:278] No container was found matching "kube-proxy"
	I0603 12:09:32.177559   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 12:09:32.177620   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 12:09:32.212011   73662 cri.go:89] found id: ""
	I0603 12:09:32.212038   73662 logs.go:276] 0 containers: []
	W0603 12:09:32.212048   73662 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 12:09:32.212055   73662 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 12:09:32.212121   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 12:09:32.247928   73662 cri.go:89] found id: ""
	I0603 12:09:32.247953   73662 logs.go:276] 0 containers: []
	W0603 12:09:32.247960   73662 logs.go:278] No container was found matching "kindnet"
	I0603 12:09:32.247965   73662 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 12:09:32.248020   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 12:09:32.287818   73662 cri.go:89] found id: ""
	I0603 12:09:32.287845   73662 logs.go:276] 0 containers: []
	W0603 12:09:32.287852   73662 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 12:09:32.287859   73662 logs.go:123] Gathering logs for kubelet ...
	I0603 12:09:32.287874   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 12:09:32.340406   73662 logs.go:123] Gathering logs for dmesg ...
	I0603 12:09:32.340439   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 12:09:32.355148   73662 logs.go:123] Gathering logs for describe nodes ...
	I0603 12:09:32.355178   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 12:09:32.429270   73662 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 12:09:32.429299   73662 logs.go:123] Gathering logs for CRI-O ...
	I0603 12:09:32.429314   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 12:09:32.505607   73662 logs.go:123] Gathering logs for container status ...
	I0603 12:09:32.505635   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 12:09:35.044751   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:09:35.067197   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 12:09:35.067273   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 12:09:30.581123   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:09:32.581201   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:09:34.581895   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:09:36.612660   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:09:38.614151   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:09:36.666568   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:09:38.666678   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:09:35.130828   73662 cri.go:89] found id: ""
	I0603 12:09:35.130853   73662 logs.go:276] 0 containers: []
	W0603 12:09:35.130911   73662 logs.go:278] No container was found matching "kube-apiserver"
	I0603 12:09:35.130929   73662 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 12:09:35.130987   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 12:09:35.168321   73662 cri.go:89] found id: ""
	I0603 12:09:35.168348   73662 logs.go:276] 0 containers: []
	W0603 12:09:35.168355   73662 logs.go:278] No container was found matching "etcd"
	I0603 12:09:35.168360   73662 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 12:09:35.168403   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 12:09:35.200918   73662 cri.go:89] found id: ""
	I0603 12:09:35.200942   73662 logs.go:276] 0 containers: []
	W0603 12:09:35.200952   73662 logs.go:278] No container was found matching "coredns"
	I0603 12:09:35.200960   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 12:09:35.201020   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 12:09:35.235667   73662 cri.go:89] found id: ""
	I0603 12:09:35.235694   73662 logs.go:276] 0 containers: []
	W0603 12:09:35.235705   73662 logs.go:278] No container was found matching "kube-scheduler"
	I0603 12:09:35.235713   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 12:09:35.235773   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 12:09:35.269565   73662 cri.go:89] found id: ""
	I0603 12:09:35.269600   73662 logs.go:276] 0 containers: []
	W0603 12:09:35.269608   73662 logs.go:278] No container was found matching "kube-proxy"
	I0603 12:09:35.269613   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 12:09:35.269670   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 12:09:35.304452   73662 cri.go:89] found id: ""
	I0603 12:09:35.304480   73662 logs.go:276] 0 containers: []
	W0603 12:09:35.304488   73662 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 12:09:35.304495   73662 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 12:09:35.304560   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 12:09:35.337756   73662 cri.go:89] found id: ""
	I0603 12:09:35.337782   73662 logs.go:276] 0 containers: []
	W0603 12:09:35.337789   73662 logs.go:278] No container was found matching "kindnet"
	I0603 12:09:35.337794   73662 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 12:09:35.337844   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 12:09:35.374738   73662 cri.go:89] found id: ""
	I0603 12:09:35.374762   73662 logs.go:276] 0 containers: []
	W0603 12:09:35.374773   73662 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 12:09:35.374804   73662 logs.go:123] Gathering logs for dmesg ...
	I0603 12:09:35.374831   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 12:09:35.389588   73662 logs.go:123] Gathering logs for describe nodes ...
	I0603 12:09:35.389618   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 12:09:35.470162   73662 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 12:09:35.470184   73662 logs.go:123] Gathering logs for CRI-O ...
	I0603 12:09:35.470200   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 12:09:35.554518   73662 logs.go:123] Gathering logs for container status ...
	I0603 12:09:35.554560   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 12:09:35.594727   73662 logs.go:123] Gathering logs for kubelet ...
	I0603 12:09:35.594763   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 12:09:38.154151   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:09:38.169099   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 12:09:38.169165   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 12:09:38.205410   73662 cri.go:89] found id: ""
	I0603 12:09:38.205437   73662 logs.go:276] 0 containers: []
	W0603 12:09:38.205444   73662 logs.go:278] No container was found matching "kube-apiserver"
	I0603 12:09:38.205450   73662 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 12:09:38.205502   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 12:09:38.238950   73662 cri.go:89] found id: ""
	I0603 12:09:38.238979   73662 logs.go:276] 0 containers: []
	W0603 12:09:38.238990   73662 logs.go:278] No container was found matching "etcd"
	I0603 12:09:38.238997   73662 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 12:09:38.239072   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 12:09:38.272117   73662 cri.go:89] found id: ""
	I0603 12:09:38.272146   73662 logs.go:276] 0 containers: []
	W0603 12:09:38.272157   73662 logs.go:278] No container was found matching "coredns"
	I0603 12:09:38.272164   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 12:09:38.272232   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 12:09:38.306778   73662 cri.go:89] found id: ""
	I0603 12:09:38.306815   73662 logs.go:276] 0 containers: []
	W0603 12:09:38.306826   73662 logs.go:278] No container was found matching "kube-scheduler"
	I0603 12:09:38.306834   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 12:09:38.306894   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 12:09:38.344438   73662 cri.go:89] found id: ""
	I0603 12:09:38.344464   73662 logs.go:276] 0 containers: []
	W0603 12:09:38.344471   73662 logs.go:278] No container was found matching "kube-proxy"
	I0603 12:09:38.344476   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 12:09:38.344528   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 12:09:38.384347   73662 cri.go:89] found id: ""
	I0603 12:09:38.384373   73662 logs.go:276] 0 containers: []
	W0603 12:09:38.384384   73662 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 12:09:38.384392   73662 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 12:09:38.384440   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 12:09:38.424500   73662 cri.go:89] found id: ""
	I0603 12:09:38.424526   73662 logs.go:276] 0 containers: []
	W0603 12:09:38.424536   73662 logs.go:278] No container was found matching "kindnet"
	I0603 12:09:38.424543   73662 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 12:09:38.424601   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 12:09:38.459649   73662 cri.go:89] found id: ""
	I0603 12:09:38.459678   73662 logs.go:276] 0 containers: []
	W0603 12:09:38.459685   73662 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 12:09:38.459693   73662 logs.go:123] Gathering logs for kubelet ...
	I0603 12:09:38.459705   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 12:09:38.511193   73662 logs.go:123] Gathering logs for dmesg ...
	I0603 12:09:38.511226   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 12:09:38.525367   73662 logs.go:123] Gathering logs for describe nodes ...
	I0603 12:09:38.525394   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 12:09:38.596534   73662 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 12:09:38.596555   73662 logs.go:123] Gathering logs for CRI-O ...
	I0603 12:09:38.596568   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 12:09:38.675204   73662 logs.go:123] Gathering logs for container status ...
	I0603 12:09:38.675233   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 12:09:37.082229   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:09:39.083400   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:09:41.113187   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:09:43.612824   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:09:41.165676   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:09:43.166246   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:09:41.217825   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:09:41.232019   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 12:09:41.232077   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 12:09:41.267920   73662 cri.go:89] found id: ""
	I0603 12:09:41.267944   73662 logs.go:276] 0 containers: []
	W0603 12:09:41.267951   73662 logs.go:278] No container was found matching "kube-apiserver"
	I0603 12:09:41.267956   73662 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 12:09:41.268002   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 12:09:41.306326   73662 cri.go:89] found id: ""
	I0603 12:09:41.306353   73662 logs.go:276] 0 containers: []
	W0603 12:09:41.306364   73662 logs.go:278] No container was found matching "etcd"
	I0603 12:09:41.306371   73662 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 12:09:41.306439   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 12:09:41.339922   73662 cri.go:89] found id: ""
	I0603 12:09:41.339950   73662 logs.go:276] 0 containers: []
	W0603 12:09:41.339960   73662 logs.go:278] No container was found matching "coredns"
	I0603 12:09:41.339968   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 12:09:41.340030   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 12:09:41.374394   73662 cri.go:89] found id: ""
	I0603 12:09:41.374424   73662 logs.go:276] 0 containers: []
	W0603 12:09:41.374432   73662 logs.go:278] No container was found matching "kube-scheduler"
	I0603 12:09:41.374438   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 12:09:41.374490   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 12:09:41.412699   73662 cri.go:89] found id: ""
	I0603 12:09:41.412725   73662 logs.go:276] 0 containers: []
	W0603 12:09:41.412733   73662 logs.go:278] No container was found matching "kube-proxy"
	I0603 12:09:41.412738   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 12:09:41.412792   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 12:09:41.455158   73662 cri.go:89] found id: ""
	I0603 12:09:41.455186   73662 logs.go:276] 0 containers: []
	W0603 12:09:41.455195   73662 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 12:09:41.455201   73662 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 12:09:41.455250   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 12:09:41.493873   73662 cri.go:89] found id: ""
	I0603 12:09:41.493899   73662 logs.go:276] 0 containers: []
	W0603 12:09:41.493907   73662 logs.go:278] No container was found matching "kindnet"
	I0603 12:09:41.493912   73662 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 12:09:41.493961   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 12:09:41.533128   73662 cri.go:89] found id: ""
	I0603 12:09:41.533157   73662 logs.go:276] 0 containers: []
	W0603 12:09:41.533168   73662 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 12:09:41.533179   73662 logs.go:123] Gathering logs for container status ...
	I0603 12:09:41.533192   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 12:09:41.569504   73662 logs.go:123] Gathering logs for kubelet ...
	I0603 12:09:41.569532   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 12:09:41.623155   73662 logs.go:123] Gathering logs for dmesg ...
	I0603 12:09:41.623182   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 12:09:41.637320   73662 logs.go:123] Gathering logs for describe nodes ...
	I0603 12:09:41.637344   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 12:09:41.717063   73662 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 12:09:41.717080   73662 logs.go:123] Gathering logs for CRI-O ...
	I0603 12:09:41.717091   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 12:09:44.301694   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:09:44.317073   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 12:09:44.317128   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 12:09:44.359170   73662 cri.go:89] found id: ""
	I0603 12:09:44.359220   73662 logs.go:276] 0 containers: []
	W0603 12:09:44.359230   73662 logs.go:278] No container was found matching "kube-apiserver"
	I0603 12:09:44.359239   73662 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 12:09:44.359294   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 12:09:44.399820   73662 cri.go:89] found id: ""
	I0603 12:09:44.399844   73662 logs.go:276] 0 containers: []
	W0603 12:09:44.399854   73662 logs.go:278] No container was found matching "etcd"
	I0603 12:09:44.399862   73662 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 12:09:44.399928   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 12:09:44.439447   73662 cri.go:89] found id: ""
	I0603 12:09:44.439474   73662 logs.go:276] 0 containers: []
	W0603 12:09:44.439481   73662 logs.go:278] No container was found matching "coredns"
	I0603 12:09:44.439487   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 12:09:44.439540   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 12:09:44.475880   73662 cri.go:89] found id: ""
	I0603 12:09:44.475906   73662 logs.go:276] 0 containers: []
	W0603 12:09:44.475917   73662 logs.go:278] No container was found matching "kube-scheduler"
	I0603 12:09:44.475922   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 12:09:44.475980   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 12:09:44.511294   73662 cri.go:89] found id: ""
	I0603 12:09:44.511330   73662 logs.go:276] 0 containers: []
	W0603 12:09:44.511341   73662 logs.go:278] No container was found matching "kube-proxy"
	I0603 12:09:44.511348   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 12:09:44.511401   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 12:09:44.547348   73662 cri.go:89] found id: ""
	I0603 12:09:44.547373   73662 logs.go:276] 0 containers: []
	W0603 12:09:44.547380   73662 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 12:09:44.547385   73662 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 12:09:44.547430   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 12:09:44.586452   73662 cri.go:89] found id: ""
	I0603 12:09:44.586476   73662 logs.go:276] 0 containers: []
	W0603 12:09:44.586483   73662 logs.go:278] No container was found matching "kindnet"
	I0603 12:09:44.586488   73662 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 12:09:44.586543   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 12:09:44.625804   73662 cri.go:89] found id: ""
	I0603 12:09:44.625824   73662 logs.go:276] 0 containers: []
	W0603 12:09:44.625831   73662 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 12:09:44.625839   73662 logs.go:123] Gathering logs for kubelet ...
	I0603 12:09:44.625848   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 12:09:44.680963   73662 logs.go:123] Gathering logs for dmesg ...
	I0603 12:09:44.680996   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 12:09:44.695920   73662 logs.go:123] Gathering logs for describe nodes ...
	I0603 12:09:44.695945   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 12:09:44.766704   73662 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 12:09:44.766735   73662 logs.go:123] Gathering logs for CRI-O ...
	I0603 12:09:44.766750   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 12:09:44.849452   73662 logs.go:123] Gathering logs for container status ...
	I0603 12:09:44.849484   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 12:09:41.581194   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:09:44.081266   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:09:45.613719   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:09:47.613834   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:09:45.166682   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:09:47.667916   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:09:47.391851   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:09:47.406886   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 12:09:47.406941   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 12:09:47.441654   73662 cri.go:89] found id: ""
	I0603 12:09:47.441676   73662 logs.go:276] 0 containers: []
	W0603 12:09:47.441686   73662 logs.go:278] No container was found matching "kube-apiserver"
	I0603 12:09:47.441692   73662 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 12:09:47.441739   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 12:09:47.475605   73662 cri.go:89] found id: ""
	I0603 12:09:47.475634   73662 logs.go:276] 0 containers: []
	W0603 12:09:47.475644   73662 logs.go:278] No container was found matching "etcd"
	I0603 12:09:47.475651   73662 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 12:09:47.475707   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 12:09:47.511558   73662 cri.go:89] found id: ""
	I0603 12:09:47.511582   73662 logs.go:276] 0 containers: []
	W0603 12:09:47.511590   73662 logs.go:278] No container was found matching "coredns"
	I0603 12:09:47.511595   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 12:09:47.511653   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 12:09:47.545327   73662 cri.go:89] found id: ""
	I0603 12:09:47.545359   73662 logs.go:276] 0 containers: []
	W0603 12:09:47.545370   73662 logs.go:278] No container was found matching "kube-scheduler"
	I0603 12:09:47.545378   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 12:09:47.545442   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 12:09:47.581846   73662 cri.go:89] found id: ""
	I0603 12:09:47.581875   73662 logs.go:276] 0 containers: []
	W0603 12:09:47.581884   73662 logs.go:278] No container was found matching "kube-proxy"
	I0603 12:09:47.581892   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 12:09:47.581953   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 12:09:47.618872   73662 cri.go:89] found id: ""
	I0603 12:09:47.618893   73662 logs.go:276] 0 containers: []
	W0603 12:09:47.618901   73662 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 12:09:47.618908   73662 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 12:09:47.618964   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 12:09:47.663659   73662 cri.go:89] found id: ""
	I0603 12:09:47.663689   73662 logs.go:276] 0 containers: []
	W0603 12:09:47.663700   73662 logs.go:278] No container was found matching "kindnet"
	I0603 12:09:47.663708   73662 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 12:09:47.663766   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 12:09:47.697189   73662 cri.go:89] found id: ""
	I0603 12:09:47.697217   73662 logs.go:276] 0 containers: []
	W0603 12:09:47.697228   73662 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 12:09:47.697238   73662 logs.go:123] Gathering logs for dmesg ...
	I0603 12:09:47.697254   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 12:09:47.711787   73662 logs.go:123] Gathering logs for describe nodes ...
	I0603 12:09:47.711812   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 12:09:47.784073   73662 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 12:09:47.784095   73662 logs.go:123] Gathering logs for CRI-O ...
	I0603 12:09:47.784106   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 12:09:47.866792   73662 logs.go:123] Gathering logs for container status ...
	I0603 12:09:47.866824   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 12:09:47.907650   73662 logs.go:123] Gathering logs for kubelet ...
	I0603 12:09:47.907701   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 12:09:46.081705   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:09:48.581286   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:09:50.115365   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:09:52.612108   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:09:50.166286   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:09:52.166751   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:09:54.171218   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:09:50.458815   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:09:50.473498   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 12:09:50.473561   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 12:09:50.514762   73662 cri.go:89] found id: ""
	I0603 12:09:50.514788   73662 logs.go:276] 0 containers: []
	W0603 12:09:50.514796   73662 logs.go:278] No container was found matching "kube-apiserver"
	I0603 12:09:50.514801   73662 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 12:09:50.514877   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 12:09:50.548449   73662 cri.go:89] found id: ""
	I0603 12:09:50.548481   73662 logs.go:276] 0 containers: []
	W0603 12:09:50.548492   73662 logs.go:278] No container was found matching "etcd"
	I0603 12:09:50.548498   73662 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 12:09:50.548560   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 12:09:50.584636   73662 cri.go:89] found id: ""
	I0603 12:09:50.584658   73662 logs.go:276] 0 containers: []
	W0603 12:09:50.584665   73662 logs.go:278] No container was found matching "coredns"
	I0603 12:09:50.584671   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 12:09:50.584718   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 12:09:50.619934   73662 cri.go:89] found id: ""
	I0603 12:09:50.619964   73662 logs.go:276] 0 containers: []
	W0603 12:09:50.619974   73662 logs.go:278] No container was found matching "kube-scheduler"
	I0603 12:09:50.619983   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 12:09:50.620041   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 12:09:50.656062   73662 cri.go:89] found id: ""
	I0603 12:09:50.656093   73662 logs.go:276] 0 containers: []
	W0603 12:09:50.656105   73662 logs.go:278] No container was found matching "kube-proxy"
	I0603 12:09:50.656117   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 12:09:50.656166   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 12:09:50.693539   73662 cri.go:89] found id: ""
	I0603 12:09:50.693566   73662 logs.go:276] 0 containers: []
	W0603 12:09:50.693573   73662 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 12:09:50.693582   73662 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 12:09:50.693637   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 12:09:50.727999   73662 cri.go:89] found id: ""
	I0603 12:09:50.728029   73662 logs.go:276] 0 containers: []
	W0603 12:09:50.728049   73662 logs.go:278] No container was found matching "kindnet"
	I0603 12:09:50.728057   73662 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 12:09:50.728118   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 12:09:50.767370   73662 cri.go:89] found id: ""
	I0603 12:09:50.767417   73662 logs.go:276] 0 containers: []
	W0603 12:09:50.767434   73662 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 12:09:50.767444   73662 logs.go:123] Gathering logs for describe nodes ...
	I0603 12:09:50.767460   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 12:09:50.844078   73662 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 12:09:50.844098   73662 logs.go:123] Gathering logs for CRI-O ...
	I0603 12:09:50.844111   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 12:09:50.922082   73662 logs.go:123] Gathering logs for container status ...
	I0603 12:09:50.922119   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 12:09:50.964841   73662 logs.go:123] Gathering logs for kubelet ...
	I0603 12:09:50.964878   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 12:09:51.016783   73662 logs.go:123] Gathering logs for dmesg ...
	I0603 12:09:51.016823   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 12:09:53.533274   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:09:53.547218   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 12:09:53.547272   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 12:09:53.584537   73662 cri.go:89] found id: ""
	I0603 12:09:53.584561   73662 logs.go:276] 0 containers: []
	W0603 12:09:53.584571   73662 logs.go:278] No container was found matching "kube-apiserver"
	I0603 12:09:53.584578   73662 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 12:09:53.584634   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 12:09:53.618652   73662 cri.go:89] found id: ""
	I0603 12:09:53.618678   73662 logs.go:276] 0 containers: []
	W0603 12:09:53.618688   73662 logs.go:278] No container was found matching "etcd"
	I0603 12:09:53.618695   73662 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 12:09:53.618749   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 12:09:53.654094   73662 cri.go:89] found id: ""
	I0603 12:09:53.654120   73662 logs.go:276] 0 containers: []
	W0603 12:09:53.654127   73662 logs.go:278] No container was found matching "coredns"
	I0603 12:09:53.654140   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 12:09:53.654196   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 12:09:53.691381   73662 cri.go:89] found id: ""
	I0603 12:09:53.691409   73662 logs.go:276] 0 containers: []
	W0603 12:09:53.691420   73662 logs.go:278] No container was found matching "kube-scheduler"
	I0603 12:09:53.691428   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 12:09:53.691493   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 12:09:53.728294   73662 cri.go:89] found id: ""
	I0603 12:09:53.728331   73662 logs.go:276] 0 containers: []
	W0603 12:09:53.728341   73662 logs.go:278] No container was found matching "kube-proxy"
	I0603 12:09:53.728349   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 12:09:53.728426   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 12:09:53.764973   73662 cri.go:89] found id: ""
	I0603 12:09:53.765005   73662 logs.go:276] 0 containers: []
	W0603 12:09:53.765016   73662 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 12:09:53.765023   73662 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 12:09:53.765087   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 12:09:53.803694   73662 cri.go:89] found id: ""
	I0603 12:09:53.803717   73662 logs.go:276] 0 containers: []
	W0603 12:09:53.803724   73662 logs.go:278] No container was found matching "kindnet"
	I0603 12:09:53.803729   73662 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 12:09:53.803776   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 12:09:53.841924   73662 cri.go:89] found id: ""
	I0603 12:09:53.841949   73662 logs.go:276] 0 containers: []
	W0603 12:09:53.841957   73662 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 12:09:53.841964   73662 logs.go:123] Gathering logs for kubelet ...
	I0603 12:09:53.841982   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 12:09:53.895701   73662 logs.go:123] Gathering logs for dmesg ...
	I0603 12:09:53.895738   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 12:09:53.909498   73662 logs.go:123] Gathering logs for describe nodes ...
	I0603 12:09:53.909524   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 12:09:53.985195   73662 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 12:09:53.985218   73662 logs.go:123] Gathering logs for CRI-O ...
	I0603 12:09:53.985234   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 12:09:54.065799   73662 logs.go:123] Gathering logs for container status ...
	I0603 12:09:54.065831   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 12:09:50.581958   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:09:53.081289   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:09:55.081589   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:09:54.612358   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:09:56.616081   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:09:59.112698   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:09:56.667243   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:09:59.167672   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:09:56.606887   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:09:56.621376   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 12:09:56.621437   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 12:09:56.660334   73662 cri.go:89] found id: ""
	I0603 12:09:56.660358   73662 logs.go:276] 0 containers: []
	W0603 12:09:56.660368   73662 logs.go:278] No container was found matching "kube-apiserver"
	I0603 12:09:56.660375   73662 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 12:09:56.660434   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 12:09:56.695706   73662 cri.go:89] found id: ""
	I0603 12:09:56.695734   73662 logs.go:276] 0 containers: []
	W0603 12:09:56.695742   73662 logs.go:278] No container was found matching "etcd"
	I0603 12:09:56.695747   73662 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 12:09:56.695791   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 12:09:56.730634   73662 cri.go:89] found id: ""
	I0603 12:09:56.730656   73662 logs.go:276] 0 containers: []
	W0603 12:09:56.730664   73662 logs.go:278] No container was found matching "coredns"
	I0603 12:09:56.730670   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 12:09:56.730715   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 12:09:56.765374   73662 cri.go:89] found id: ""
	I0603 12:09:56.765407   73662 logs.go:276] 0 containers: []
	W0603 12:09:56.765414   73662 logs.go:278] No container was found matching "kube-scheduler"
	I0603 12:09:56.765420   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 12:09:56.765467   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 12:09:56.801230   73662 cri.go:89] found id: ""
	I0603 12:09:56.801254   73662 logs.go:276] 0 containers: []
	W0603 12:09:56.801262   73662 logs.go:278] No container was found matching "kube-proxy"
	I0603 12:09:56.801267   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 12:09:56.801335   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 12:09:56.835988   73662 cri.go:89] found id: ""
	I0603 12:09:56.836015   73662 logs.go:276] 0 containers: []
	W0603 12:09:56.836026   73662 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 12:09:56.836034   73662 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 12:09:56.836093   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 12:09:56.870099   73662 cri.go:89] found id: ""
	I0603 12:09:56.870124   73662 logs.go:276] 0 containers: []
	W0603 12:09:56.870131   73662 logs.go:278] No container was found matching "kindnet"
	I0603 12:09:56.870136   73662 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 12:09:56.870183   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 12:09:56.904755   73662 cri.go:89] found id: ""
	I0603 12:09:56.904780   73662 logs.go:276] 0 containers: []
	W0603 12:09:56.904790   73662 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 12:09:56.904801   73662 logs.go:123] Gathering logs for kubelet ...
	I0603 12:09:56.904812   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 12:09:56.956824   73662 logs.go:123] Gathering logs for dmesg ...
	I0603 12:09:56.956854   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 12:09:56.971675   73662 logs.go:123] Gathering logs for describe nodes ...
	I0603 12:09:56.971702   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 12:09:57.042337   73662 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 12:09:57.042359   73662 logs.go:123] Gathering logs for CRI-O ...
	I0603 12:09:57.042375   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 12:09:57.129450   73662 logs.go:123] Gathering logs for container status ...
	I0603 12:09:57.129480   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 12:09:59.669256   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:09:59.683392   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 12:09:59.683452   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 12:09:59.718035   73662 cri.go:89] found id: ""
	I0603 12:09:59.718062   73662 logs.go:276] 0 containers: []
	W0603 12:09:59.718073   73662 logs.go:278] No container was found matching "kube-apiserver"
	I0603 12:09:59.718081   73662 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 12:09:59.718141   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 12:09:59.756638   73662 cri.go:89] found id: ""
	I0603 12:09:59.756666   73662 logs.go:276] 0 containers: []
	W0603 12:09:59.756678   73662 logs.go:278] No container was found matching "etcd"
	I0603 12:09:59.756686   73662 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 12:09:59.756751   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 12:09:59.794710   73662 cri.go:89] found id: ""
	I0603 12:09:59.794753   73662 logs.go:276] 0 containers: []
	W0603 12:09:59.794764   73662 logs.go:278] No container was found matching "coredns"
	I0603 12:09:59.794771   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 12:09:59.794835   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 12:09:59.829717   73662 cri.go:89] found id: ""
	I0603 12:09:59.829745   73662 logs.go:276] 0 containers: []
	W0603 12:09:59.829755   73662 logs.go:278] No container was found matching "kube-scheduler"
	I0603 12:09:59.829763   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 12:09:59.829819   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 12:09:59.863959   73662 cri.go:89] found id: ""
	I0603 12:09:59.863996   73662 logs.go:276] 0 containers: []
	W0603 12:09:59.864005   73662 logs.go:278] No container was found matching "kube-proxy"
	I0603 12:09:59.864010   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 12:09:59.864070   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 12:09:59.900553   73662 cri.go:89] found id: ""
	I0603 12:09:59.900577   73662 logs.go:276] 0 containers: []
	W0603 12:09:59.900585   73662 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 12:09:59.900590   73662 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 12:09:59.900664   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 12:09:59.935702   73662 cri.go:89] found id: ""
	I0603 12:09:59.935727   73662 logs.go:276] 0 containers: []
	W0603 12:09:59.935735   73662 logs.go:278] No container was found matching "kindnet"
	I0603 12:09:59.935741   73662 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 12:09:59.935800   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 12:09:59.971017   73662 cri.go:89] found id: ""
	I0603 12:09:59.971064   73662 logs.go:276] 0 containers: []
	W0603 12:09:59.971076   73662 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 12:09:59.971086   73662 logs.go:123] Gathering logs for dmesg ...
	I0603 12:09:59.971102   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 12:09:59.985406   73662 logs.go:123] Gathering logs for describe nodes ...
	I0603 12:09:59.985431   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 12:10:00.064341   73662 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 12:10:00.064372   73662 logs.go:123] Gathering logs for CRI-O ...
	I0603 12:10:00.064388   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 12:09:57.081724   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:09:59.581454   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:10:01.113236   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:10:03.116142   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:10:01.667557   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:10:04.166825   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:10:00.152803   73662 logs.go:123] Gathering logs for container status ...
	I0603 12:10:00.152850   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 12:10:00.198301   73662 logs.go:123] Gathering logs for kubelet ...
	I0603 12:10:00.198341   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 12:10:02.749662   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:10:02.762938   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 12:10:02.762999   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 12:10:02.800269   73662 cri.go:89] found id: ""
	I0603 12:10:02.800296   73662 logs.go:276] 0 containers: []
	W0603 12:10:02.800305   73662 logs.go:278] No container was found matching "kube-apiserver"
	I0603 12:10:02.800311   73662 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 12:10:02.800373   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 12:10:02.841326   73662 cri.go:89] found id: ""
	I0603 12:10:02.841350   73662 logs.go:276] 0 containers: []
	W0603 12:10:02.841357   73662 logs.go:278] No container was found matching "etcd"
	I0603 12:10:02.841363   73662 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 12:10:02.841409   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 12:10:02.879309   73662 cri.go:89] found id: ""
	I0603 12:10:02.879343   73662 logs.go:276] 0 containers: []
	W0603 12:10:02.879353   73662 logs.go:278] No container was found matching "coredns"
	I0603 12:10:02.879361   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 12:10:02.879423   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 12:10:02.919666   73662 cri.go:89] found id: ""
	I0603 12:10:02.919695   73662 logs.go:276] 0 containers: []
	W0603 12:10:02.919707   73662 logs.go:278] No container was found matching "kube-scheduler"
	I0603 12:10:02.919714   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 12:10:02.919761   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 12:10:02.954790   73662 cri.go:89] found id: ""
	I0603 12:10:02.954814   73662 logs.go:276] 0 containers: []
	W0603 12:10:02.954822   73662 logs.go:278] No container was found matching "kube-proxy"
	I0603 12:10:02.954827   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 12:10:02.954884   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 12:10:02.994472   73662 cri.go:89] found id: ""
	I0603 12:10:02.994515   73662 logs.go:276] 0 containers: []
	W0603 12:10:02.994527   73662 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 12:10:02.994535   73662 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 12:10:02.994598   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 12:10:03.034482   73662 cri.go:89] found id: ""
	I0603 12:10:03.034509   73662 logs.go:276] 0 containers: []
	W0603 12:10:03.034520   73662 logs.go:278] No container was found matching "kindnet"
	I0603 12:10:03.034526   73662 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 12:10:03.034591   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 12:10:03.072971   73662 cri.go:89] found id: ""
	I0603 12:10:03.073002   73662 logs.go:276] 0 containers: []
	W0603 12:10:03.073011   73662 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 12:10:03.073025   73662 logs.go:123] Gathering logs for dmesg ...
	I0603 12:10:03.073043   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 12:10:03.088043   73662 logs.go:123] Gathering logs for describe nodes ...
	I0603 12:10:03.088074   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 12:10:03.186799   73662 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 12:10:03.186829   73662 logs.go:123] Gathering logs for CRI-O ...
	I0603 12:10:03.186842   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 12:10:03.266685   73662 logs.go:123] Gathering logs for container status ...
	I0603 12:10:03.266724   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 12:10:03.317400   73662 logs.go:123] Gathering logs for kubelet ...
	I0603 12:10:03.317433   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 12:10:01.582398   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:10:04.082658   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:10:05.613678   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:10:08.112518   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:10:06.167099   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:10:08.167502   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:10:05.870335   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:10:05.884377   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 12:10:05.884469   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 12:10:05.924617   73662 cri.go:89] found id: ""
	I0603 12:10:05.924647   73662 logs.go:276] 0 containers: []
	W0603 12:10:05.924659   73662 logs.go:278] No container was found matching "kube-apiserver"
	I0603 12:10:05.924667   73662 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 12:10:05.924724   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 12:10:05.971569   73662 cri.go:89] found id: ""
	I0603 12:10:05.971605   73662 logs.go:276] 0 containers: []
	W0603 12:10:05.971615   73662 logs.go:278] No container was found matching "etcd"
	I0603 12:10:05.971623   73662 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 12:10:05.971683   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 12:10:06.010190   73662 cri.go:89] found id: ""
	I0603 12:10:06.010211   73662 logs.go:276] 0 containers: []
	W0603 12:10:06.010218   73662 logs.go:278] No container was found matching "coredns"
	I0603 12:10:06.010223   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 12:10:06.010270   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 12:10:06.056228   73662 cri.go:89] found id: ""
	I0603 12:10:06.056258   73662 logs.go:276] 0 containers: []
	W0603 12:10:06.056269   73662 logs.go:278] No container was found matching "kube-scheduler"
	I0603 12:10:06.056276   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 12:10:06.056338   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 12:10:06.096139   73662 cri.go:89] found id: ""
	I0603 12:10:06.096171   73662 logs.go:276] 0 containers: []
	W0603 12:10:06.096182   73662 logs.go:278] No container was found matching "kube-proxy"
	I0603 12:10:06.096192   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 12:10:06.096261   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 12:10:06.135290   73662 cri.go:89] found id: ""
	I0603 12:10:06.135327   73662 logs.go:276] 0 containers: []
	W0603 12:10:06.135338   73662 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 12:10:06.135346   73662 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 12:10:06.135412   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 12:10:06.177281   73662 cri.go:89] found id: ""
	I0603 12:10:06.177311   73662 logs.go:276] 0 containers: []
	W0603 12:10:06.177328   73662 logs.go:278] No container was found matching "kindnet"
	I0603 12:10:06.177335   73662 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 12:10:06.177395   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 12:10:06.216791   73662 cri.go:89] found id: ""
	I0603 12:10:06.216823   73662 logs.go:276] 0 containers: []
	W0603 12:10:06.216835   73662 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 12:10:06.216845   73662 logs.go:123] Gathering logs for kubelet ...
	I0603 12:10:06.216874   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 12:10:06.272731   73662 logs.go:123] Gathering logs for dmesg ...
	I0603 12:10:06.272772   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 12:10:06.289080   73662 logs.go:123] Gathering logs for describe nodes ...
	I0603 12:10:06.289118   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 12:10:06.358105   73662 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 12:10:06.358134   73662 logs.go:123] Gathering logs for CRI-O ...
	I0603 12:10:06.358148   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 12:10:06.433071   73662 logs.go:123] Gathering logs for container status ...
	I0603 12:10:06.433107   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 12:10:08.974934   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:10:08.988808   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 12:10:08.988883   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 12:10:09.023595   73662 cri.go:89] found id: ""
	I0603 12:10:09.023620   73662 logs.go:276] 0 containers: []
	W0603 12:10:09.023627   73662 logs.go:278] No container was found matching "kube-apiserver"
	I0603 12:10:09.023633   73662 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 12:10:09.023683   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 12:10:09.060962   73662 cri.go:89] found id: ""
	I0603 12:10:09.060990   73662 logs.go:276] 0 containers: []
	W0603 12:10:09.061000   73662 logs.go:278] No container was found matching "etcd"
	I0603 12:10:09.061006   73662 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 12:10:09.061080   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 12:10:09.099923   73662 cri.go:89] found id: ""
	I0603 12:10:09.099952   73662 logs.go:276] 0 containers: []
	W0603 12:10:09.099961   73662 logs.go:278] No container was found matching "coredns"
	I0603 12:10:09.099970   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 12:10:09.100030   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 12:10:09.138521   73662 cri.go:89] found id: ""
	I0603 12:10:09.138547   73662 logs.go:276] 0 containers: []
	W0603 12:10:09.138555   73662 logs.go:278] No container was found matching "kube-scheduler"
	I0603 12:10:09.138561   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 12:10:09.138614   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 12:10:09.178492   73662 cri.go:89] found id: ""
	I0603 12:10:09.178519   73662 logs.go:276] 0 containers: []
	W0603 12:10:09.178529   73662 logs.go:278] No container was found matching "kube-proxy"
	I0603 12:10:09.178537   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 12:10:09.178603   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 12:10:09.215779   73662 cri.go:89] found id: ""
	I0603 12:10:09.215812   73662 logs.go:276] 0 containers: []
	W0603 12:10:09.215819   73662 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 12:10:09.215832   73662 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 12:10:09.215894   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 12:10:09.250800   73662 cri.go:89] found id: ""
	I0603 12:10:09.250835   73662 logs.go:276] 0 containers: []
	W0603 12:10:09.250845   73662 logs.go:278] No container was found matching "kindnet"
	I0603 12:10:09.250852   73662 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 12:10:09.250913   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 12:10:09.286742   73662 cri.go:89] found id: ""
	I0603 12:10:09.286773   73662 logs.go:276] 0 containers: []
	W0603 12:10:09.286784   73662 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 12:10:09.286794   73662 logs.go:123] Gathering logs for kubelet ...
	I0603 12:10:09.286808   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 12:10:09.341156   73662 logs.go:123] Gathering logs for dmesg ...
	I0603 12:10:09.341189   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 12:10:09.356237   73662 logs.go:123] Gathering logs for describe nodes ...
	I0603 12:10:09.356273   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 12:10:09.436633   73662 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 12:10:09.436654   73662 logs.go:123] Gathering logs for CRI-O ...
	I0603 12:10:09.436666   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 12:10:09.519296   73662 logs.go:123] Gathering logs for container status ...
	I0603 12:10:09.519336   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 12:10:06.581573   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:10:09.081354   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:10:10.113408   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:10:12.113838   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:10:10.168197   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:10:12.667631   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:10:14.667886   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:10:12.090458   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:10:12.105250   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 12:10:12.105324   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 12:10:12.143229   73662 cri.go:89] found id: ""
	I0603 12:10:12.143257   73662 logs.go:276] 0 containers: []
	W0603 12:10:12.143268   73662 logs.go:278] No container was found matching "kube-apiserver"
	I0603 12:10:12.143276   73662 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 12:10:12.143345   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 12:10:12.183319   73662 cri.go:89] found id: ""
	I0603 12:10:12.183343   73662 logs.go:276] 0 containers: []
	W0603 12:10:12.183353   73662 logs.go:278] No container was found matching "etcd"
	I0603 12:10:12.183361   73662 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 12:10:12.183421   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 12:10:12.221154   73662 cri.go:89] found id: ""
	I0603 12:10:12.221178   73662 logs.go:276] 0 containers: []
	W0603 12:10:12.221186   73662 logs.go:278] No container was found matching "coredns"
	I0603 12:10:12.221191   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 12:10:12.221252   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 12:10:12.256387   73662 cri.go:89] found id: ""
	I0603 12:10:12.256417   73662 logs.go:276] 0 containers: []
	W0603 12:10:12.256428   73662 logs.go:278] No container was found matching "kube-scheduler"
	I0603 12:10:12.256436   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 12:10:12.256492   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 12:10:12.298777   73662 cri.go:89] found id: ""
	I0603 12:10:12.298807   73662 logs.go:276] 0 containers: []
	W0603 12:10:12.298817   73662 logs.go:278] No container was found matching "kube-proxy"
	I0603 12:10:12.298825   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 12:10:12.298883   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 12:10:12.337031   73662 cri.go:89] found id: ""
	I0603 12:10:12.337060   73662 logs.go:276] 0 containers: []
	W0603 12:10:12.337070   73662 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 12:10:12.337077   73662 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 12:10:12.337136   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 12:10:12.373729   73662 cri.go:89] found id: ""
	I0603 12:10:12.373759   73662 logs.go:276] 0 containers: []
	W0603 12:10:12.373766   73662 logs.go:278] No container was found matching "kindnet"
	I0603 12:10:12.373772   73662 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 12:10:12.373823   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 12:10:12.408295   73662 cri.go:89] found id: ""
	I0603 12:10:12.408337   73662 logs.go:276] 0 containers: []
	W0603 12:10:12.408346   73662 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 12:10:12.408357   73662 logs.go:123] Gathering logs for kubelet ...
	I0603 12:10:12.408371   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 12:10:12.458814   73662 logs.go:123] Gathering logs for dmesg ...
	I0603 12:10:12.458844   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 12:10:12.471995   73662 logs.go:123] Gathering logs for describe nodes ...
	I0603 12:10:12.472020   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 12:10:12.542342   73662 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 12:10:12.542364   73662 logs.go:123] Gathering logs for CRI-O ...
	I0603 12:10:12.542379   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 12:10:12.620295   73662 logs.go:123] Gathering logs for container status ...
	I0603 12:10:12.620328   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 12:10:11.081820   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:10:13.580873   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:10:14.613837   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:10:16.613987   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:10:18.614774   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:10:17.166332   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:10:19.167726   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:10:15.162145   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:10:15.178057   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 12:10:15.178110   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 12:10:15.217189   73662 cri.go:89] found id: ""
	I0603 12:10:15.217218   73662 logs.go:276] 0 containers: []
	W0603 12:10:15.217228   73662 logs.go:278] No container was found matching "kube-apiserver"
	I0603 12:10:15.217235   73662 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 12:10:15.217291   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 12:10:15.265380   73662 cri.go:89] found id: ""
	I0603 12:10:15.265419   73662 logs.go:276] 0 containers: []
	W0603 12:10:15.265430   73662 logs.go:278] No container was found matching "etcd"
	I0603 12:10:15.265438   73662 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 12:10:15.265500   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 12:10:15.310671   73662 cri.go:89] found id: ""
	I0603 12:10:15.310736   73662 logs.go:276] 0 containers: []
	W0603 12:10:15.310772   73662 logs.go:278] No container was found matching "coredns"
	I0603 12:10:15.310787   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 12:10:15.310884   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 12:10:15.377888   73662 cri.go:89] found id: ""
	I0603 12:10:15.377914   73662 logs.go:276] 0 containers: []
	W0603 12:10:15.377921   73662 logs.go:278] No container was found matching "kube-scheduler"
	I0603 12:10:15.377928   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 12:10:15.377972   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 12:10:15.415472   73662 cri.go:89] found id: ""
	I0603 12:10:15.415502   73662 logs.go:276] 0 containers: []
	W0603 12:10:15.415510   73662 logs.go:278] No container was found matching "kube-proxy"
	I0603 12:10:15.415516   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 12:10:15.415563   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 12:10:15.450721   73662 cri.go:89] found id: ""
	I0603 12:10:15.450748   73662 logs.go:276] 0 containers: []
	W0603 12:10:15.450755   73662 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 12:10:15.450760   73662 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 12:10:15.450814   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 12:10:15.484329   73662 cri.go:89] found id: ""
	I0603 12:10:15.484356   73662 logs.go:276] 0 containers: []
	W0603 12:10:15.484363   73662 logs.go:278] No container was found matching "kindnet"
	I0603 12:10:15.484368   73662 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 12:10:15.484426   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 12:10:15.516976   73662 cri.go:89] found id: ""
	I0603 12:10:15.517005   73662 logs.go:276] 0 containers: []
	W0603 12:10:15.517015   73662 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 12:10:15.517025   73662 logs.go:123] Gathering logs for kubelet ...
	I0603 12:10:15.517038   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 12:10:15.569023   73662 logs.go:123] Gathering logs for dmesg ...
	I0603 12:10:15.569053   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 12:10:15.583710   73662 logs.go:123] Gathering logs for describe nodes ...
	I0603 12:10:15.583737   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 12:10:15.656403   73662 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 12:10:15.656426   73662 logs.go:123] Gathering logs for CRI-O ...
	I0603 12:10:15.656438   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 12:10:15.745585   73662 logs.go:123] Gathering logs for container status ...
	I0603 12:10:15.745619   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 12:10:18.290608   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:10:18.305165   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 12:10:18.305238   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 12:10:18.341905   73662 cri.go:89] found id: ""
	I0603 12:10:18.341929   73662 logs.go:276] 0 containers: []
	W0603 12:10:18.341939   73662 logs.go:278] No container was found matching "kube-apiserver"
	I0603 12:10:18.341945   73662 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 12:10:18.342001   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 12:10:18.378313   73662 cri.go:89] found id: ""
	I0603 12:10:18.378341   73662 logs.go:276] 0 containers: []
	W0603 12:10:18.378348   73662 logs.go:278] No container was found matching "etcd"
	I0603 12:10:18.378354   73662 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 12:10:18.378401   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 12:10:18.413366   73662 cri.go:89] found id: ""
	I0603 12:10:18.413414   73662 logs.go:276] 0 containers: []
	W0603 12:10:18.413424   73662 logs.go:278] No container was found matching "coredns"
	I0603 12:10:18.413432   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 12:10:18.413492   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 12:10:18.448694   73662 cri.go:89] found id: ""
	I0603 12:10:18.448727   73662 logs.go:276] 0 containers: []
	W0603 12:10:18.448738   73662 logs.go:278] No container was found matching "kube-scheduler"
	I0603 12:10:18.448745   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 12:10:18.448802   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 12:10:18.482640   73662 cri.go:89] found id: ""
	I0603 12:10:18.482678   73662 logs.go:276] 0 containers: []
	W0603 12:10:18.482689   73662 logs.go:278] No container was found matching "kube-proxy"
	I0603 12:10:18.482696   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 12:10:18.482757   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 12:10:18.520929   73662 cri.go:89] found id: ""
	I0603 12:10:18.520962   73662 logs.go:276] 0 containers: []
	W0603 12:10:18.520975   73662 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 12:10:18.520983   73662 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 12:10:18.521045   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 12:10:18.558678   73662 cri.go:89] found id: ""
	I0603 12:10:18.558712   73662 logs.go:276] 0 containers: []
	W0603 12:10:18.558723   73662 logs.go:278] No container was found matching "kindnet"
	I0603 12:10:18.558730   73662 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 12:10:18.558788   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 12:10:18.597574   73662 cri.go:89] found id: ""
	I0603 12:10:18.597599   73662 logs.go:276] 0 containers: []
	W0603 12:10:18.597609   73662 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 12:10:18.597619   73662 logs.go:123] Gathering logs for kubelet ...
	I0603 12:10:18.597633   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 12:10:18.652569   73662 logs.go:123] Gathering logs for dmesg ...
	I0603 12:10:18.652596   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 12:10:18.667829   73662 logs.go:123] Gathering logs for describe nodes ...
	I0603 12:10:18.667861   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 12:10:18.740869   73662 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 12:10:18.740888   73662 logs.go:123] Gathering logs for CRI-O ...
	I0603 12:10:18.740899   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 12:10:18.822108   73662 logs.go:123] Gathering logs for container status ...
	I0603 12:10:18.822143   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 12:10:15.581618   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:10:18.081181   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:10:21.113841   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:10:23.612530   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:10:21.667682   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:10:24.167351   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:10:21.363741   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:10:21.377941   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 12:10:21.378011   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 12:10:21.414406   73662 cri.go:89] found id: ""
	I0603 12:10:21.414434   73662 logs.go:276] 0 containers: []
	W0603 12:10:21.414446   73662 logs.go:278] No container was found matching "kube-apiserver"
	I0603 12:10:21.414454   73662 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 12:10:21.414513   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 12:10:21.449028   73662 cri.go:89] found id: ""
	I0603 12:10:21.449065   73662 logs.go:276] 0 containers: []
	W0603 12:10:21.449074   73662 logs.go:278] No container was found matching "etcd"
	I0603 12:10:21.449080   73662 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 12:10:21.449126   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 12:10:21.483017   73662 cri.go:89] found id: ""
	I0603 12:10:21.483052   73662 logs.go:276] 0 containers: []
	W0603 12:10:21.483064   73662 logs.go:278] No container was found matching "coredns"
	I0603 12:10:21.483071   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 12:10:21.483120   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 12:10:21.519195   73662 cri.go:89] found id: ""
	I0603 12:10:21.519227   73662 logs.go:276] 0 containers: []
	W0603 12:10:21.519237   73662 logs.go:278] No container was found matching "kube-scheduler"
	I0603 12:10:21.519245   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 12:10:21.519304   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 12:10:21.556228   73662 cri.go:89] found id: ""
	I0603 12:10:21.556257   73662 logs.go:276] 0 containers: []
	W0603 12:10:21.556270   73662 logs.go:278] No container was found matching "kube-proxy"
	I0603 12:10:21.556276   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 12:10:21.556337   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 12:10:21.594772   73662 cri.go:89] found id: ""
	I0603 12:10:21.594798   73662 logs.go:276] 0 containers: []
	W0603 12:10:21.594808   73662 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 12:10:21.594817   73662 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 12:10:21.594875   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 12:10:21.629808   73662 cri.go:89] found id: ""
	I0603 12:10:21.629830   73662 logs.go:276] 0 containers: []
	W0603 12:10:21.629837   73662 logs.go:278] No container was found matching "kindnet"
	I0603 12:10:21.629843   73662 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 12:10:21.629891   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 12:10:21.675237   73662 cri.go:89] found id: ""
	I0603 12:10:21.675263   73662 logs.go:276] 0 containers: []
	W0603 12:10:21.675272   73662 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 12:10:21.675282   73662 logs.go:123] Gathering logs for kubelet ...
	I0603 12:10:21.675295   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 12:10:21.730416   73662 logs.go:123] Gathering logs for dmesg ...
	I0603 12:10:21.730445   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 12:10:21.744442   73662 logs.go:123] Gathering logs for describe nodes ...
	I0603 12:10:21.744467   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 12:10:21.826282   73662 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 12:10:21.826308   73662 logs.go:123] Gathering logs for CRI-O ...
	I0603 12:10:21.826324   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 12:10:21.911387   73662 logs.go:123] Gathering logs for container status ...
	I0603 12:10:21.911422   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 12:10:24.454912   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:10:24.469992   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 12:10:24.470069   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 12:10:24.509462   73662 cri.go:89] found id: ""
	I0603 12:10:24.509501   73662 logs.go:276] 0 containers: []
	W0603 12:10:24.509516   73662 logs.go:278] No container was found matching "kube-apiserver"
	I0603 12:10:24.509523   73662 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 12:10:24.509588   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 12:10:24.543878   73662 cri.go:89] found id: ""
	I0603 12:10:24.543902   73662 logs.go:276] 0 containers: []
	W0603 12:10:24.543910   73662 logs.go:278] No container was found matching "etcd"
	I0603 12:10:24.543916   73662 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 12:10:24.543969   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 12:10:24.582712   73662 cri.go:89] found id: ""
	I0603 12:10:24.582741   73662 logs.go:276] 0 containers: []
	W0603 12:10:24.582752   73662 logs.go:278] No container was found matching "coredns"
	I0603 12:10:24.582759   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 12:10:24.582824   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 12:10:24.620533   73662 cri.go:89] found id: ""
	I0603 12:10:24.620560   73662 logs.go:276] 0 containers: []
	W0603 12:10:24.620571   73662 logs.go:278] No container was found matching "kube-scheduler"
	I0603 12:10:24.620577   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 12:10:24.620629   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 12:10:24.658750   73662 cri.go:89] found id: ""
	I0603 12:10:24.658774   73662 logs.go:276] 0 containers: []
	W0603 12:10:24.658781   73662 logs.go:278] No container was found matching "kube-proxy"
	I0603 12:10:24.658787   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 12:10:24.658830   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 12:10:24.697870   73662 cri.go:89] found id: ""
	I0603 12:10:24.697898   73662 logs.go:276] 0 containers: []
	W0603 12:10:24.697914   73662 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 12:10:24.697922   73662 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 12:10:24.697982   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 12:10:24.733557   73662 cri.go:89] found id: ""
	I0603 12:10:24.733583   73662 logs.go:276] 0 containers: []
	W0603 12:10:24.733593   73662 logs.go:278] No container was found matching "kindnet"
	I0603 12:10:24.733601   73662 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 12:10:24.733658   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 12:10:24.767874   73662 cri.go:89] found id: ""
	I0603 12:10:24.767901   73662 logs.go:276] 0 containers: []
	W0603 12:10:24.767910   73662 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 12:10:24.767920   73662 logs.go:123] Gathering logs for kubelet ...
	I0603 12:10:24.767934   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 12:10:24.821155   73662 logs.go:123] Gathering logs for dmesg ...
	I0603 12:10:24.821188   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 12:10:24.835506   73662 logs.go:123] Gathering logs for describe nodes ...
	I0603 12:10:24.835533   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 12:10:24.911295   73662 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 12:10:24.911317   73662 logs.go:123] Gathering logs for CRI-O ...
	I0603 12:10:24.911331   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 12:10:24.998831   73662 logs.go:123] Gathering logs for container status ...
	I0603 12:10:24.998870   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 12:10:20.581174   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:10:22.582071   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:10:25.081112   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:10:26.113580   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:10:28.113842   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:10:26.167517   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:10:28.666601   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:10:27.547553   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:10:27.562219   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 12:10:27.562283   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 12:10:27.604320   73662 cri.go:89] found id: ""
	I0603 12:10:27.604354   73662 logs.go:276] 0 containers: []
	W0603 12:10:27.604362   73662 logs.go:278] No container was found matching "kube-apiserver"
	I0603 12:10:27.604368   73662 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 12:10:27.604431   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 12:10:27.645069   73662 cri.go:89] found id: ""
	I0603 12:10:27.645093   73662 logs.go:276] 0 containers: []
	W0603 12:10:27.645100   73662 logs.go:278] No container was found matching "etcd"
	I0603 12:10:27.645105   73662 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 12:10:27.645208   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 12:10:27.682961   73662 cri.go:89] found id: ""
	I0603 12:10:27.682984   73662 logs.go:276] 0 containers: []
	W0603 12:10:27.682992   73662 logs.go:278] No container was found matching "coredns"
	I0603 12:10:27.682997   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 12:10:27.683065   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 12:10:27.716279   73662 cri.go:89] found id: ""
	I0603 12:10:27.716310   73662 logs.go:276] 0 containers: []
	W0603 12:10:27.716321   73662 logs.go:278] No container was found matching "kube-scheduler"
	I0603 12:10:27.716330   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 12:10:27.716405   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 12:10:27.758347   73662 cri.go:89] found id: ""
	I0603 12:10:27.758380   73662 logs.go:276] 0 containers: []
	W0603 12:10:27.758390   73662 logs.go:278] No container was found matching "kube-proxy"
	I0603 12:10:27.758397   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 12:10:27.758446   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 12:10:27.798212   73662 cri.go:89] found id: ""
	I0603 12:10:27.798240   73662 logs.go:276] 0 containers: []
	W0603 12:10:27.798249   73662 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 12:10:27.798258   73662 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 12:10:27.798318   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 12:10:27.831688   73662 cri.go:89] found id: ""
	I0603 12:10:27.831709   73662 logs.go:276] 0 containers: []
	W0603 12:10:27.831716   73662 logs.go:278] No container was found matching "kindnet"
	I0603 12:10:27.831722   73662 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 12:10:27.831776   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 12:10:27.864395   73662 cri.go:89] found id: ""
	I0603 12:10:27.864423   73662 logs.go:276] 0 containers: []
	W0603 12:10:27.864433   73662 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 12:10:27.864444   73662 logs.go:123] Gathering logs for kubelet ...
	I0603 12:10:27.864463   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 12:10:27.915528   73662 logs.go:123] Gathering logs for dmesg ...
	I0603 12:10:27.915556   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 12:10:27.929783   73662 logs.go:123] Gathering logs for describe nodes ...
	I0603 12:10:27.929806   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 12:10:28.005168   73662 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 12:10:28.005245   73662 logs.go:123] Gathering logs for CRI-O ...
	I0603 12:10:28.005267   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 12:10:28.090748   73662 logs.go:123] Gathering logs for container status ...
	I0603 12:10:28.090779   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 12:10:27.582855   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:10:30.081021   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:10:30.615472   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:10:33.112833   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:10:30.668051   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:10:33.167211   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:10:30.631148   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:10:30.645518   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 12:10:30.645590   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 12:10:30.684016   73662 cri.go:89] found id: ""
	I0603 12:10:30.684044   73662 logs.go:276] 0 containers: []
	W0603 12:10:30.684054   73662 logs.go:278] No container was found matching "kube-apiserver"
	I0603 12:10:30.684062   73662 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 12:10:30.684129   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 12:10:30.720344   73662 cri.go:89] found id: ""
	I0603 12:10:30.720371   73662 logs.go:276] 0 containers: []
	W0603 12:10:30.720379   73662 logs.go:278] No container was found matching "etcd"
	I0603 12:10:30.720384   73662 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 12:10:30.720437   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 12:10:30.754123   73662 cri.go:89] found id: ""
	I0603 12:10:30.754156   73662 logs.go:276] 0 containers: []
	W0603 12:10:30.754167   73662 logs.go:278] No container was found matching "coredns"
	I0603 12:10:30.754175   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 12:10:30.754228   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 12:10:30.788398   73662 cri.go:89] found id: ""
	I0603 12:10:30.788425   73662 logs.go:276] 0 containers: []
	W0603 12:10:30.788436   73662 logs.go:278] No container was found matching "kube-scheduler"
	I0603 12:10:30.788455   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 12:10:30.788523   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 12:10:30.826122   73662 cri.go:89] found id: ""
	I0603 12:10:30.826149   73662 logs.go:276] 0 containers: []
	W0603 12:10:30.826157   73662 logs.go:278] No container was found matching "kube-proxy"
	I0603 12:10:30.826163   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 12:10:30.826221   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 12:10:30.862886   73662 cri.go:89] found id: ""
	I0603 12:10:30.862917   73662 logs.go:276] 0 containers: []
	W0603 12:10:30.862930   73662 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 12:10:30.862938   73662 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 12:10:30.862995   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 12:10:30.897587   73662 cri.go:89] found id: ""
	I0603 12:10:30.897616   73662 logs.go:276] 0 containers: []
	W0603 12:10:30.897628   73662 logs.go:278] No container was found matching "kindnet"
	I0603 12:10:30.897635   73662 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 12:10:30.897692   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 12:10:30.936463   73662 cri.go:89] found id: ""
	I0603 12:10:30.936493   73662 logs.go:276] 0 containers: []
	W0603 12:10:30.936510   73662 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 12:10:30.936521   73662 logs.go:123] Gathering logs for kubelet ...
	I0603 12:10:30.936535   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 12:10:30.987304   73662 logs.go:123] Gathering logs for dmesg ...
	I0603 12:10:30.987341   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 12:10:31.001608   73662 logs.go:123] Gathering logs for describe nodes ...
	I0603 12:10:31.001636   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 12:10:31.079366   73662 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 12:10:31.079385   73662 logs.go:123] Gathering logs for CRI-O ...
	I0603 12:10:31.079398   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 12:10:31.158814   73662 logs.go:123] Gathering logs for container status ...
	I0603 12:10:31.158851   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 12:10:33.699524   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:10:33.713194   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 12:10:33.713256   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 12:10:33.747030   73662 cri.go:89] found id: ""
	I0603 12:10:33.747073   73662 logs.go:276] 0 containers: []
	W0603 12:10:33.747084   73662 logs.go:278] No container was found matching "kube-apiserver"
	I0603 12:10:33.747092   73662 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 12:10:33.747151   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 12:10:33.781873   73662 cri.go:89] found id: ""
	I0603 12:10:33.781909   73662 logs.go:276] 0 containers: []
	W0603 12:10:33.781920   73662 logs.go:278] No container was found matching "etcd"
	I0603 12:10:33.781927   73662 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 12:10:33.781992   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 12:10:33.828337   73662 cri.go:89] found id: ""
	I0603 12:10:33.828366   73662 logs.go:276] 0 containers: []
	W0603 12:10:33.828374   73662 logs.go:278] No container was found matching "coredns"
	I0603 12:10:33.828380   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 12:10:33.828433   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 12:10:33.868051   73662 cri.go:89] found id: ""
	I0603 12:10:33.868089   73662 logs.go:276] 0 containers: []
	W0603 12:10:33.868101   73662 logs.go:278] No container was found matching "kube-scheduler"
	I0603 12:10:33.868109   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 12:10:33.868168   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 12:10:33.913693   73662 cri.go:89] found id: ""
	I0603 12:10:33.913725   73662 logs.go:276] 0 containers: []
	W0603 12:10:33.913736   73662 logs.go:278] No container was found matching "kube-proxy"
	I0603 12:10:33.913743   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 12:10:33.913824   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 12:10:33.952082   73662 cri.go:89] found id: ""
	I0603 12:10:33.952111   73662 logs.go:276] 0 containers: []
	W0603 12:10:33.952122   73662 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 12:10:33.952129   73662 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 12:10:33.952183   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 12:10:33.994921   73662 cri.go:89] found id: ""
	I0603 12:10:33.994944   73662 logs.go:276] 0 containers: []
	W0603 12:10:33.994952   73662 logs.go:278] No container was found matching "kindnet"
	I0603 12:10:33.994959   73662 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 12:10:33.995008   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 12:10:34.033315   73662 cri.go:89] found id: ""
	I0603 12:10:34.033346   73662 logs.go:276] 0 containers: []
	W0603 12:10:34.033357   73662 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 12:10:34.033368   73662 logs.go:123] Gathering logs for kubelet ...
	I0603 12:10:34.033381   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 12:10:34.087719   73662 logs.go:123] Gathering logs for dmesg ...
	I0603 12:10:34.087746   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 12:10:34.101109   73662 logs.go:123] Gathering logs for describe nodes ...
	I0603 12:10:34.101134   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 12:10:34.180100   73662 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 12:10:34.180121   73662 logs.go:123] Gathering logs for CRI-O ...
	I0603 12:10:34.180135   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 12:10:34.255838   73662 logs.go:123] Gathering logs for container status ...
	I0603 12:10:34.255870   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 12:10:32.583080   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:10:35.081454   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:10:35.113238   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:10:37.611978   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:10:35.668549   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:10:38.166687   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:10:36.800845   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:10:36.815775   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 12:10:36.815834   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 12:10:36.849970   73662 cri.go:89] found id: ""
	I0603 12:10:36.849999   73662 logs.go:276] 0 containers: []
	W0603 12:10:36.850009   73662 logs.go:278] No container was found matching "kube-apiserver"
	I0603 12:10:36.850015   73662 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 12:10:36.850063   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 12:10:36.886418   73662 cri.go:89] found id: ""
	I0603 12:10:36.886448   73662 logs.go:276] 0 containers: []
	W0603 12:10:36.886456   73662 logs.go:278] No container was found matching "etcd"
	I0603 12:10:36.886461   73662 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 12:10:36.886506   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 12:10:36.919671   73662 cri.go:89] found id: ""
	I0603 12:10:36.919696   73662 logs.go:276] 0 containers: []
	W0603 12:10:36.919703   73662 logs.go:278] No container was found matching "coredns"
	I0603 12:10:36.919710   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 12:10:36.919766   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 12:10:36.954412   73662 cri.go:89] found id: ""
	I0603 12:10:36.954436   73662 logs.go:276] 0 containers: []
	W0603 12:10:36.954446   73662 logs.go:278] No container was found matching "kube-scheduler"
	I0603 12:10:36.954453   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 12:10:36.954513   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 12:10:36.989805   73662 cri.go:89] found id: ""
	I0603 12:10:36.989836   73662 logs.go:276] 0 containers: []
	W0603 12:10:36.989848   73662 logs.go:278] No container was found matching "kube-proxy"
	I0603 12:10:36.989856   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 12:10:36.989930   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 12:10:37.023883   73662 cri.go:89] found id: ""
	I0603 12:10:37.023913   73662 logs.go:276] 0 containers: []
	W0603 12:10:37.023922   73662 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 12:10:37.023930   73662 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 12:10:37.023995   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 12:10:37.058617   73662 cri.go:89] found id: ""
	I0603 12:10:37.058646   73662 logs.go:276] 0 containers: []
	W0603 12:10:37.058654   73662 logs.go:278] No container was found matching "kindnet"
	I0603 12:10:37.058661   73662 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 12:10:37.058719   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 12:10:37.093143   73662 cri.go:89] found id: ""
	I0603 12:10:37.093167   73662 logs.go:276] 0 containers: []
	W0603 12:10:37.093177   73662 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 12:10:37.093192   73662 logs.go:123] Gathering logs for container status ...
	I0603 12:10:37.093208   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 12:10:37.133117   73662 logs.go:123] Gathering logs for kubelet ...
	I0603 12:10:37.133147   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 12:10:37.188143   73662 logs.go:123] Gathering logs for dmesg ...
	I0603 12:10:37.188174   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 12:10:37.202654   73662 logs.go:123] Gathering logs for describe nodes ...
	I0603 12:10:37.202687   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 12:10:37.276401   73662 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 12:10:37.276429   73662 logs.go:123] Gathering logs for CRI-O ...
	I0603 12:10:37.276443   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 12:10:39.855590   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:10:39.870119   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 12:10:39.870189   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 12:10:39.907496   73662 cri.go:89] found id: ""
	I0603 12:10:39.907527   73662 logs.go:276] 0 containers: []
	W0603 12:10:39.907537   73662 logs.go:278] No container was found matching "kube-apiserver"
	I0603 12:10:39.907545   73662 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 12:10:39.907607   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 12:10:39.942745   73662 cri.go:89] found id: ""
	I0603 12:10:39.942774   73662 logs.go:276] 0 containers: []
	W0603 12:10:39.942784   73662 logs.go:278] No container was found matching "etcd"
	I0603 12:10:39.942791   73662 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 12:10:39.942853   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 12:10:39.981620   73662 cri.go:89] found id: ""
	I0603 12:10:39.981649   73662 logs.go:276] 0 containers: []
	W0603 12:10:39.981660   73662 logs.go:278] No container was found matching "coredns"
	I0603 12:10:39.981667   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 12:10:39.981718   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 12:10:40.020121   73662 cri.go:89] found id: ""
	I0603 12:10:40.020155   73662 logs.go:276] 0 containers: []
	W0603 12:10:40.020167   73662 logs.go:278] No container was found matching "kube-scheduler"
	I0603 12:10:40.020175   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 12:10:40.020240   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 12:10:40.059547   73662 cri.go:89] found id: ""
	I0603 12:10:40.059580   73662 logs.go:276] 0 containers: []
	W0603 12:10:40.059591   73662 logs.go:278] No container was found matching "kube-proxy"
	I0603 12:10:40.059598   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 12:10:40.059659   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 12:10:37.082294   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:10:39.581774   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:10:39.614702   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:10:42.112933   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:10:44.113960   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:10:40.167350   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:10:42.667457   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:10:40.097365   73662 cri.go:89] found id: ""
	I0603 12:10:40.097386   73662 logs.go:276] 0 containers: []
	W0603 12:10:40.097393   73662 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 12:10:40.097400   73662 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 12:10:40.097441   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 12:10:40.132635   73662 cri.go:89] found id: ""
	I0603 12:10:40.132657   73662 logs.go:276] 0 containers: []
	W0603 12:10:40.132664   73662 logs.go:278] No container was found matching "kindnet"
	I0603 12:10:40.132670   73662 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 12:10:40.132725   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 12:10:40.165849   73662 cri.go:89] found id: ""
	I0603 12:10:40.165875   73662 logs.go:276] 0 containers: []
	W0603 12:10:40.165885   73662 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 12:10:40.165895   73662 logs.go:123] Gathering logs for kubelet ...
	I0603 12:10:40.165910   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 12:10:40.218842   73662 logs.go:123] Gathering logs for dmesg ...
	I0603 12:10:40.218871   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 12:10:40.232800   73662 logs.go:123] Gathering logs for describe nodes ...
	I0603 12:10:40.232825   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 12:10:40.300026   73662 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 12:10:40.300050   73662 logs.go:123] Gathering logs for CRI-O ...
	I0603 12:10:40.300065   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 12:10:40.376985   73662 logs.go:123] Gathering logs for container status ...
	I0603 12:10:40.377017   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 12:10:42.916093   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:10:42.930099   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 12:10:42.930157   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 12:10:42.965541   73662 cri.go:89] found id: ""
	I0603 12:10:42.965565   73662 logs.go:276] 0 containers: []
	W0603 12:10:42.965575   73662 logs.go:278] No container was found matching "kube-apiserver"
	I0603 12:10:42.965582   73662 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 12:10:42.965639   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 12:10:43.000837   73662 cri.go:89] found id: ""
	I0603 12:10:43.000863   73662 logs.go:276] 0 containers: []
	W0603 12:10:43.000871   73662 logs.go:278] No container was found matching "etcd"
	I0603 12:10:43.000877   73662 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 12:10:43.000930   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 12:10:43.036557   73662 cri.go:89] found id: ""
	I0603 12:10:43.036593   73662 logs.go:276] 0 containers: []
	W0603 12:10:43.036605   73662 logs.go:278] No container was found matching "coredns"
	I0603 12:10:43.036626   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 12:10:43.036695   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 12:10:43.076479   73662 cri.go:89] found id: ""
	I0603 12:10:43.076507   73662 logs.go:276] 0 containers: []
	W0603 12:10:43.076515   73662 logs.go:278] No container was found matching "kube-scheduler"
	I0603 12:10:43.076521   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 12:10:43.076571   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 12:10:43.116301   73662 cri.go:89] found id: ""
	I0603 12:10:43.116328   73662 logs.go:276] 0 containers: []
	W0603 12:10:43.116338   73662 logs.go:278] No container was found matching "kube-proxy"
	I0603 12:10:43.116345   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 12:10:43.116393   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 12:10:43.150538   73662 cri.go:89] found id: ""
	I0603 12:10:43.150576   73662 logs.go:276] 0 containers: []
	W0603 12:10:43.150587   73662 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 12:10:43.150594   73662 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 12:10:43.150662   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 12:10:43.183948   73662 cri.go:89] found id: ""
	I0603 12:10:43.183976   73662 logs.go:276] 0 containers: []
	W0603 12:10:43.183987   73662 logs.go:278] No container was found matching "kindnet"
	I0603 12:10:43.183996   73662 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 12:10:43.184048   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 12:10:43.217610   73662 cri.go:89] found id: ""
	I0603 12:10:43.217636   73662 logs.go:276] 0 containers: []
	W0603 12:10:43.217643   73662 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 12:10:43.217651   73662 logs.go:123] Gathering logs for dmesg ...
	I0603 12:10:43.217669   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 12:10:43.231630   73662 logs.go:123] Gathering logs for describe nodes ...
	I0603 12:10:43.231655   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 12:10:43.298061   73662 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 12:10:43.298079   73662 logs.go:123] Gathering logs for CRI-O ...
	I0603 12:10:43.298092   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 12:10:43.388176   73662 logs.go:123] Gathering logs for container status ...
	I0603 12:10:43.388212   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 12:10:43.426277   73662 logs.go:123] Gathering logs for kubelet ...
	I0603 12:10:43.426303   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 12:10:42.081320   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:10:44.083275   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:10:46.612864   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:10:48.613666   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:10:45.166933   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:10:47.666784   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:10:45.977882   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:10:45.991655   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 12:10:45.991716   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 12:10:46.030455   73662 cri.go:89] found id: ""
	I0603 12:10:46.030483   73662 logs.go:276] 0 containers: []
	W0603 12:10:46.030492   73662 logs.go:278] No container was found matching "kube-apiserver"
	I0603 12:10:46.030497   73662 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 12:10:46.030542   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 12:10:46.065983   73662 cri.go:89] found id: ""
	I0603 12:10:46.066019   73662 logs.go:276] 0 containers: []
	W0603 12:10:46.066028   73662 logs.go:278] No container was found matching "etcd"
	I0603 12:10:46.066037   73662 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 12:10:46.066089   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 12:10:46.102788   73662 cri.go:89] found id: ""
	I0603 12:10:46.102816   73662 logs.go:276] 0 containers: []
	W0603 12:10:46.102824   73662 logs.go:278] No container was found matching "coredns"
	I0603 12:10:46.102830   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 12:10:46.102878   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 12:10:46.141588   73662 cri.go:89] found id: ""
	I0603 12:10:46.141615   73662 logs.go:276] 0 containers: []
	W0603 12:10:46.141625   73662 logs.go:278] No container was found matching "kube-scheduler"
	I0603 12:10:46.141634   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 12:10:46.141686   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 12:10:46.176109   73662 cri.go:89] found id: ""
	I0603 12:10:46.176133   73662 logs.go:276] 0 containers: []
	W0603 12:10:46.176140   73662 logs.go:278] No container was found matching "kube-proxy"
	I0603 12:10:46.176146   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 12:10:46.176199   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 12:10:46.211660   73662 cri.go:89] found id: ""
	I0603 12:10:46.211687   73662 logs.go:276] 0 containers: []
	W0603 12:10:46.211699   73662 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 12:10:46.211706   73662 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 12:10:46.211766   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 12:10:46.247703   73662 cri.go:89] found id: ""
	I0603 12:10:46.247724   73662 logs.go:276] 0 containers: []
	W0603 12:10:46.247731   73662 logs.go:278] No container was found matching "kindnet"
	I0603 12:10:46.247737   73662 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 12:10:46.247780   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 12:10:46.280647   73662 cri.go:89] found id: ""
	I0603 12:10:46.280666   73662 logs.go:276] 0 containers: []
	W0603 12:10:46.280673   73662 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 12:10:46.280681   73662 logs.go:123] Gathering logs for CRI-O ...
	I0603 12:10:46.280692   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 12:10:46.358965   73662 logs.go:123] Gathering logs for container status ...
	I0603 12:10:46.359007   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 12:10:46.402361   73662 logs.go:123] Gathering logs for kubelet ...
	I0603 12:10:46.402393   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 12:10:46.455346   73662 logs.go:123] Gathering logs for dmesg ...
	I0603 12:10:46.455378   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 12:10:46.468953   73662 logs.go:123] Gathering logs for describe nodes ...
	I0603 12:10:46.468979   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 12:10:46.543642   73662 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 12:10:49.044028   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:10:49.059160   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 12:10:49.059237   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 12:10:49.094538   73662 cri.go:89] found id: ""
	I0603 12:10:49.094562   73662 logs.go:276] 0 containers: []
	W0603 12:10:49.094572   73662 logs.go:278] No container was found matching "kube-apiserver"
	I0603 12:10:49.094579   73662 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 12:10:49.094639   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 12:10:49.152691   73662 cri.go:89] found id: ""
	I0603 12:10:49.152718   73662 logs.go:276] 0 containers: []
	W0603 12:10:49.152729   73662 logs.go:278] No container was found matching "etcd"
	I0603 12:10:49.152736   73662 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 12:10:49.152794   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 12:10:49.190598   73662 cri.go:89] found id: ""
	I0603 12:10:49.190624   73662 logs.go:276] 0 containers: []
	W0603 12:10:49.190632   73662 logs.go:278] No container was found matching "coredns"
	I0603 12:10:49.190637   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 12:10:49.190696   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 12:10:49.224713   73662 cri.go:89] found id: ""
	I0603 12:10:49.224735   73662 logs.go:276] 0 containers: []
	W0603 12:10:49.224746   73662 logs.go:278] No container was found matching "kube-scheduler"
	I0603 12:10:49.224752   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 12:10:49.224814   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 12:10:49.261124   73662 cri.go:89] found id: ""
	I0603 12:10:49.261151   73662 logs.go:276] 0 containers: []
	W0603 12:10:49.261159   73662 logs.go:278] No container was found matching "kube-proxy"
	I0603 12:10:49.261164   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 12:10:49.261218   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 12:10:49.297702   73662 cri.go:89] found id: ""
	I0603 12:10:49.297727   73662 logs.go:276] 0 containers: []
	W0603 12:10:49.297734   73662 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 12:10:49.297739   73662 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 12:10:49.297788   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 12:10:49.337168   73662 cri.go:89] found id: ""
	I0603 12:10:49.337194   73662 logs.go:276] 0 containers: []
	W0603 12:10:49.337202   73662 logs.go:278] No container was found matching "kindnet"
	I0603 12:10:49.337208   73662 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 12:10:49.337273   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 12:10:49.378570   73662 cri.go:89] found id: ""
	I0603 12:10:49.378594   73662 logs.go:276] 0 containers: []
	W0603 12:10:49.378602   73662 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 12:10:49.378611   73662 logs.go:123] Gathering logs for kubelet ...
	I0603 12:10:49.378623   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 12:10:49.431727   73662 logs.go:123] Gathering logs for dmesg ...
	I0603 12:10:49.431761   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 12:10:49.446359   73662 logs.go:123] Gathering logs for describe nodes ...
	I0603 12:10:49.446383   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 12:10:49.515520   73662 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 12:10:49.515539   73662 logs.go:123] Gathering logs for CRI-O ...
	I0603 12:10:49.515551   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 12:10:49.600658   73662 logs.go:123] Gathering logs for container status ...
	I0603 12:10:49.600697   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 12:10:46.580695   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:10:48.581909   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:10:51.111776   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:10:53.613132   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:10:50.171016   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:10:52.667473   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:10:52.146131   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:10:52.159370   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 12:10:52.159441   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 12:10:52.200541   73662 cri.go:89] found id: ""
	I0603 12:10:52.200571   73662 logs.go:276] 0 containers: []
	W0603 12:10:52.200578   73662 logs.go:278] No container was found matching "kube-apiserver"
	I0603 12:10:52.200583   73662 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 12:10:52.200643   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 12:10:52.243779   73662 cri.go:89] found id: ""
	I0603 12:10:52.243808   73662 logs.go:276] 0 containers: []
	W0603 12:10:52.243819   73662 logs.go:278] No container was found matching "etcd"
	I0603 12:10:52.243827   73662 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 12:10:52.243885   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 12:10:52.278098   73662 cri.go:89] found id: ""
	I0603 12:10:52.278133   73662 logs.go:276] 0 containers: []
	W0603 12:10:52.278142   73662 logs.go:278] No container was found matching "coredns"
	I0603 12:10:52.278148   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 12:10:52.278201   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 12:10:52.310844   73662 cri.go:89] found id: ""
	I0603 12:10:52.310873   73662 logs.go:276] 0 containers: []
	W0603 12:10:52.310884   73662 logs.go:278] No container was found matching "kube-scheduler"
	I0603 12:10:52.310892   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 12:10:52.310947   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 12:10:52.346131   73662 cri.go:89] found id: ""
	I0603 12:10:52.346160   73662 logs.go:276] 0 containers: []
	W0603 12:10:52.346170   73662 logs.go:278] No container was found matching "kube-proxy"
	I0603 12:10:52.346186   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 12:10:52.346252   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 12:10:52.383384   73662 cri.go:89] found id: ""
	I0603 12:10:52.383412   73662 logs.go:276] 0 containers: []
	W0603 12:10:52.383420   73662 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 12:10:52.383426   73662 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 12:10:52.383472   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 12:10:52.415110   73662 cri.go:89] found id: ""
	I0603 12:10:52.415141   73662 logs.go:276] 0 containers: []
	W0603 12:10:52.415152   73662 logs.go:278] No container was found matching "kindnet"
	I0603 12:10:52.415159   73662 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 12:10:52.415228   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 12:10:52.449473   73662 cri.go:89] found id: ""
	I0603 12:10:52.449503   73662 logs.go:276] 0 containers: []
	W0603 12:10:52.449511   73662 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 12:10:52.449520   73662 logs.go:123] Gathering logs for kubelet ...
	I0603 12:10:52.449535   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 12:10:52.501303   73662 logs.go:123] Gathering logs for dmesg ...
	I0603 12:10:52.501331   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 12:10:52.515125   73662 logs.go:123] Gathering logs for describe nodes ...
	I0603 12:10:52.515155   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 12:10:52.587250   73662 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 12:10:52.587273   73662 logs.go:123] Gathering logs for CRI-O ...
	I0603 12:10:52.587289   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 12:10:52.677387   73662 logs.go:123] Gathering logs for container status ...
	I0603 12:10:52.677417   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 12:10:51.081196   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:10:53.081389   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:10:55.082150   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:10:55.618759   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:10:58.112642   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:10:55.166477   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:10:57.666759   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:10:59.667117   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:10:55.216868   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:10:55.231081   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 12:10:55.231148   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 12:10:55.269023   73662 cri.go:89] found id: ""
	I0603 12:10:55.269060   73662 logs.go:276] 0 containers: []
	W0603 12:10:55.269071   73662 logs.go:278] No container was found matching "kube-apiserver"
	I0603 12:10:55.269078   73662 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 12:10:55.269140   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 12:10:55.304553   73662 cri.go:89] found id: ""
	I0603 12:10:55.304584   73662 logs.go:276] 0 containers: []
	W0603 12:10:55.304594   73662 logs.go:278] No container was found matching "etcd"
	I0603 12:10:55.304602   73662 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 12:10:55.304653   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 12:10:55.337397   73662 cri.go:89] found id: ""
	I0603 12:10:55.337417   73662 logs.go:276] 0 containers: []
	W0603 12:10:55.337426   73662 logs.go:278] No container was found matching "coredns"
	I0603 12:10:55.337431   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 12:10:55.337477   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 12:10:55.378338   73662 cri.go:89] found id: ""
	I0603 12:10:55.378360   73662 logs.go:276] 0 containers: []
	W0603 12:10:55.378369   73662 logs.go:278] No container was found matching "kube-scheduler"
	I0603 12:10:55.378376   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 12:10:55.378434   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 12:10:55.419463   73662 cri.go:89] found id: ""
	I0603 12:10:55.419488   73662 logs.go:276] 0 containers: []
	W0603 12:10:55.419506   73662 logs.go:278] No container was found matching "kube-proxy"
	I0603 12:10:55.419513   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 12:10:55.419570   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 12:10:55.459581   73662 cri.go:89] found id: ""
	I0603 12:10:55.459609   73662 logs.go:276] 0 containers: []
	W0603 12:10:55.459616   73662 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 12:10:55.459622   73662 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 12:10:55.459686   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 12:10:55.496314   73662 cri.go:89] found id: ""
	I0603 12:10:55.496345   73662 logs.go:276] 0 containers: []
	W0603 12:10:55.496355   73662 logs.go:278] No container was found matching "kindnet"
	I0603 12:10:55.496362   73662 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 12:10:55.496412   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 12:10:55.539728   73662 cri.go:89] found id: ""
	I0603 12:10:55.539756   73662 logs.go:276] 0 containers: []
	W0603 12:10:55.539768   73662 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 12:10:55.539779   73662 logs.go:123] Gathering logs for container status ...
	I0603 12:10:55.539794   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 12:10:55.603474   73662 logs.go:123] Gathering logs for kubelet ...
	I0603 12:10:55.603502   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 12:10:55.668368   73662 logs.go:123] Gathering logs for dmesg ...
	I0603 12:10:55.668405   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 12:10:55.683121   73662 logs.go:123] Gathering logs for describe nodes ...
	I0603 12:10:55.683151   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 12:10:55.751059   73662 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 12:10:55.751096   73662 logs.go:123] Gathering logs for CRI-O ...
	I0603 12:10:55.751113   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 12:10:58.325699   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:10:58.340070   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 12:10:58.340142   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 12:10:58.376205   73662 cri.go:89] found id: ""
	I0603 12:10:58.376240   73662 logs.go:276] 0 containers: []
	W0603 12:10:58.376251   73662 logs.go:278] No container was found matching "kube-apiserver"
	I0603 12:10:58.376258   73662 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 12:10:58.376325   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 12:10:58.409491   73662 cri.go:89] found id: ""
	I0603 12:10:58.409521   73662 logs.go:276] 0 containers: []
	W0603 12:10:58.409533   73662 logs.go:278] No container was found matching "etcd"
	I0603 12:10:58.409540   73662 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 12:10:58.409601   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 12:10:58.442738   73662 cri.go:89] found id: ""
	I0603 12:10:58.442768   73662 logs.go:276] 0 containers: []
	W0603 12:10:58.442779   73662 logs.go:278] No container was found matching "coredns"
	I0603 12:10:58.442787   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 12:10:58.442849   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 12:10:58.478390   73662 cri.go:89] found id: ""
	I0603 12:10:58.478417   73662 logs.go:276] 0 containers: []
	W0603 12:10:58.478425   73662 logs.go:278] No container was found matching "kube-scheduler"
	I0603 12:10:58.478430   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 12:10:58.478477   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 12:10:58.513652   73662 cri.go:89] found id: ""
	I0603 12:10:58.513683   73662 logs.go:276] 0 containers: []
	W0603 12:10:58.513694   73662 logs.go:278] No container was found matching "kube-proxy"
	I0603 12:10:58.513702   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 12:10:58.513762   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 12:10:58.546490   73662 cri.go:89] found id: ""
	I0603 12:10:58.546513   73662 logs.go:276] 0 containers: []
	W0603 12:10:58.546526   73662 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 12:10:58.546532   73662 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 12:10:58.546578   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 12:10:58.585772   73662 cri.go:89] found id: ""
	I0603 12:10:58.585796   73662 logs.go:276] 0 containers: []
	W0603 12:10:58.585803   73662 logs.go:278] No container was found matching "kindnet"
	I0603 12:10:58.585809   73662 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 12:10:58.585852   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 12:10:58.623108   73662 cri.go:89] found id: ""
	I0603 12:10:58.623126   73662 logs.go:276] 0 containers: []
	W0603 12:10:58.623133   73662 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 12:10:58.623140   73662 logs.go:123] Gathering logs for dmesg ...
	I0603 12:10:58.623150   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 12:10:58.636866   73662 logs.go:123] Gathering logs for describe nodes ...
	I0603 12:10:58.636892   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 12:10:58.709496   73662 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 12:10:58.709537   73662 logs.go:123] Gathering logs for CRI-O ...
	I0603 12:10:58.709549   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 12:10:58.785370   73662 logs.go:123] Gathering logs for container status ...
	I0603 12:10:58.785401   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 12:10:58.826456   73662 logs.go:123] Gathering logs for kubelet ...
	I0603 12:10:58.826482   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 12:10:57.581002   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:10:59.582082   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:11:00.114280   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:11:02.114479   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:11:01.668216   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:11:04.165821   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:11:01.379144   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:11:01.396357   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 12:11:01.396423   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 12:11:01.459762   73662 cri.go:89] found id: ""
	I0603 12:11:01.459798   73662 logs.go:276] 0 containers: []
	W0603 12:11:01.459809   73662 logs.go:278] No container was found matching "kube-apiserver"
	I0603 12:11:01.459817   73662 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 12:11:01.459877   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 12:11:01.517986   73662 cri.go:89] found id: ""
	I0603 12:11:01.518019   73662 logs.go:276] 0 containers: []
	W0603 12:11:01.518034   73662 logs.go:278] No container was found matching "etcd"
	I0603 12:11:01.518048   73662 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 12:11:01.518111   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 12:11:01.550571   73662 cri.go:89] found id: ""
	I0603 12:11:01.550599   73662 logs.go:276] 0 containers: []
	W0603 12:11:01.550611   73662 logs.go:278] No container was found matching "coredns"
	I0603 12:11:01.550618   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 12:11:01.550670   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 12:11:01.585185   73662 cri.go:89] found id: ""
	I0603 12:11:01.585210   73662 logs.go:276] 0 containers: []
	W0603 12:11:01.585221   73662 logs.go:278] No container was found matching "kube-scheduler"
	I0603 12:11:01.585230   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 12:11:01.585288   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 12:11:01.629706   73662 cri.go:89] found id: ""
	I0603 12:11:01.629734   73662 logs.go:276] 0 containers: []
	W0603 12:11:01.629744   73662 logs.go:278] No container was found matching "kube-proxy"
	I0603 12:11:01.629751   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 12:11:01.629815   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 12:11:01.667272   73662 cri.go:89] found id: ""
	I0603 12:11:01.667310   73662 logs.go:276] 0 containers: []
	W0603 12:11:01.667321   73662 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 12:11:01.667332   73662 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 12:11:01.667390   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 12:11:01.703379   73662 cri.go:89] found id: ""
	I0603 12:11:01.703409   73662 logs.go:276] 0 containers: []
	W0603 12:11:01.703419   73662 logs.go:278] No container was found matching "kindnet"
	I0603 12:11:01.703426   73662 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 12:11:01.703480   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 12:11:01.737944   73662 cri.go:89] found id: ""
	I0603 12:11:01.737972   73662 logs.go:276] 0 containers: []
	W0603 12:11:01.737979   73662 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 12:11:01.737987   73662 logs.go:123] Gathering logs for kubelet ...
	I0603 12:11:01.737997   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 12:11:01.786485   73662 logs.go:123] Gathering logs for dmesg ...
	I0603 12:11:01.786513   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 12:11:01.799760   73662 logs.go:123] Gathering logs for describe nodes ...
	I0603 12:11:01.799783   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 12:11:01.875617   73662 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 12:11:01.875639   73662 logs.go:123] Gathering logs for CRI-O ...
	I0603 12:11:01.875651   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 12:11:01.963485   73662 logs.go:123] Gathering logs for container status ...
	I0603 12:11:01.963529   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 12:11:04.507299   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:11:04.522138   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 12:11:04.522190   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 12:11:04.558117   73662 cri.go:89] found id: ""
	I0603 12:11:04.558145   73662 logs.go:276] 0 containers: []
	W0603 12:11:04.558155   73662 logs.go:278] No container was found matching "kube-apiserver"
	I0603 12:11:04.558162   73662 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 12:11:04.558222   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 12:11:04.595700   73662 cri.go:89] found id: ""
	I0603 12:11:04.595726   73662 logs.go:276] 0 containers: []
	W0603 12:11:04.595737   73662 logs.go:278] No container was found matching "etcd"
	I0603 12:11:04.595748   73662 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 12:11:04.595806   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 12:11:04.631793   73662 cri.go:89] found id: ""
	I0603 12:11:04.631823   73662 logs.go:276] 0 containers: []
	W0603 12:11:04.631832   73662 logs.go:278] No container was found matching "coredns"
	I0603 12:11:04.631839   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 12:11:04.631897   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 12:11:04.666362   73662 cri.go:89] found id: ""
	I0603 12:11:04.666392   73662 logs.go:276] 0 containers: []
	W0603 12:11:04.666401   73662 logs.go:278] No container was found matching "kube-scheduler"
	I0603 12:11:04.666408   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 12:11:04.666471   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 12:11:04.701446   73662 cri.go:89] found id: ""
	I0603 12:11:04.701476   73662 logs.go:276] 0 containers: []
	W0603 12:11:04.701487   73662 logs.go:278] No container was found matching "kube-proxy"
	I0603 12:11:04.701495   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 12:11:04.701555   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 12:11:04.736290   73662 cri.go:89] found id: ""
	I0603 12:11:04.736311   73662 logs.go:276] 0 containers: []
	W0603 12:11:04.736322   73662 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 12:11:04.736330   73662 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 12:11:04.736389   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 12:11:04.769705   73662 cri.go:89] found id: ""
	I0603 12:11:04.769725   73662 logs.go:276] 0 containers: []
	W0603 12:11:04.769732   73662 logs.go:278] No container was found matching "kindnet"
	I0603 12:11:04.769737   73662 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 12:11:04.769779   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 12:11:04.804875   73662 cri.go:89] found id: ""
	I0603 12:11:04.804898   73662 logs.go:276] 0 containers: []
	W0603 12:11:04.804909   73662 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 12:11:04.804927   73662 logs.go:123] Gathering logs for dmesg ...
	I0603 12:11:04.804941   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 12:11:04.818083   73662 logs.go:123] Gathering logs for describe nodes ...
	I0603 12:11:04.818112   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 12:11:04.890971   73662 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 12:11:04.891002   73662 logs.go:123] Gathering logs for CRI-O ...
	I0603 12:11:04.891017   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 12:11:04.970710   73662 logs.go:123] Gathering logs for container status ...
	I0603 12:11:04.970755   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 12:11:05.012247   73662 logs.go:123] Gathering logs for kubelet ...
	I0603 12:11:05.012282   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 12:11:01.582124   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:11:03.586504   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:11:04.612589   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:11:07.114578   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:11:06.166693   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:11:08.166916   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:11:07.567462   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:11:07.583533   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 12:11:07.583628   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 12:11:07.621078   73662 cri.go:89] found id: ""
	I0603 12:11:07.621102   73662 logs.go:276] 0 containers: []
	W0603 12:11:07.621110   73662 logs.go:278] No container was found matching "kube-apiserver"
	I0603 12:11:07.621119   73662 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 12:11:07.621178   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 12:11:07.656011   73662 cri.go:89] found id: ""
	I0603 12:11:07.656040   73662 logs.go:276] 0 containers: []
	W0603 12:11:07.656049   73662 logs.go:278] No container was found matching "etcd"
	I0603 12:11:07.656056   73662 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 12:11:07.656117   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 12:11:07.694711   73662 cri.go:89] found id: ""
	I0603 12:11:07.694741   73662 logs.go:276] 0 containers: []
	W0603 12:11:07.694751   73662 logs.go:278] No container was found matching "coredns"
	I0603 12:11:07.694759   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 12:11:07.694816   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 12:11:07.731139   73662 cri.go:89] found id: ""
	I0603 12:11:07.731168   73662 logs.go:276] 0 containers: []
	W0603 12:11:07.731178   73662 logs.go:278] No container was found matching "kube-scheduler"
	I0603 12:11:07.731185   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 12:11:07.731242   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 12:11:07.769734   73662 cri.go:89] found id: ""
	I0603 12:11:07.769763   73662 logs.go:276] 0 containers: []
	W0603 12:11:07.769772   73662 logs.go:278] No container was found matching "kube-proxy"
	I0603 12:11:07.769780   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 12:11:07.769838   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 12:11:07.804874   73662 cri.go:89] found id: ""
	I0603 12:11:07.804905   73662 logs.go:276] 0 containers: []
	W0603 12:11:07.804917   73662 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 12:11:07.804925   73662 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 12:11:07.804984   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 12:11:07.843901   73662 cri.go:89] found id: ""
	I0603 12:11:07.843931   73662 logs.go:276] 0 containers: []
	W0603 12:11:07.843941   73662 logs.go:278] No container was found matching "kindnet"
	I0603 12:11:07.843949   73662 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 12:11:07.844001   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 12:11:07.878763   73662 cri.go:89] found id: ""
	I0603 12:11:07.878792   73662 logs.go:276] 0 containers: []
	W0603 12:11:07.878803   73662 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 12:11:07.878814   73662 logs.go:123] Gathering logs for CRI-O ...
	I0603 12:11:07.878829   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 12:11:07.958064   73662 logs.go:123] Gathering logs for container status ...
	I0603 12:11:07.958095   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 12:11:08.000115   73662 logs.go:123] Gathering logs for kubelet ...
	I0603 12:11:08.000144   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 12:11:08.057652   73662 logs.go:123] Gathering logs for dmesg ...
	I0603 12:11:08.057685   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 12:11:08.071731   73662 logs.go:123] Gathering logs for describe nodes ...
	I0603 12:11:08.071759   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 12:11:08.148184   73662 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
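The stanza ending here repeats throughout this log: each control-plane component is listed via crictl, no container is found, and the fallback "describe nodes" step fails because nothing is listening on localhost:8443. The same check can be reproduced by hand with the two commands taken verbatim from the log lines above (a sketch for the reader, not part of the captured log):

    sudo crictl ps -a --quiet --name=kube-apiserver        # empty output: no apiserver container is running
    sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes \
        --kubeconfig=/var/lib/minikube/kubeconfig          # exits 1: connection to localhost:8443 refused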
	I0603 12:11:06.080555   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:11:08.080661   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:11:10.081918   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:11:09.613756   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:11:12.112723   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:11:14.114236   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:11:10.167662   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:11:12.666872   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:11:10.649338   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:11:10.662870   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 12:11:10.662945   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 12:11:10.698461   73662 cri.go:89] found id: ""
	I0603 12:11:10.698492   73662 logs.go:276] 0 containers: []
	W0603 12:11:10.698500   73662 logs.go:278] No container was found matching "kube-apiserver"
	I0603 12:11:10.698507   73662 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 12:11:10.698560   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 12:11:10.733955   73662 cri.go:89] found id: ""
	I0603 12:11:10.733987   73662 logs.go:276] 0 containers: []
	W0603 12:11:10.733999   73662 logs.go:278] No container was found matching "etcd"
	I0603 12:11:10.734006   73662 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 12:11:10.734064   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 12:11:10.769578   73662 cri.go:89] found id: ""
	I0603 12:11:10.769605   73662 logs.go:276] 0 containers: []
	W0603 12:11:10.769615   73662 logs.go:278] No container was found matching "coredns"
	I0603 12:11:10.769622   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 12:11:10.769682   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 12:11:10.803353   73662 cri.go:89] found id: ""
	I0603 12:11:10.803382   73662 logs.go:276] 0 containers: []
	W0603 12:11:10.803393   73662 logs.go:278] No container was found matching "kube-scheduler"
	I0603 12:11:10.803401   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 12:11:10.803459   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 12:11:10.839791   73662 cri.go:89] found id: ""
	I0603 12:11:10.839819   73662 logs.go:276] 0 containers: []
	W0603 12:11:10.839828   73662 logs.go:278] No container was found matching "kube-proxy"
	I0603 12:11:10.839835   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 12:11:10.839894   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 12:11:10.878216   73662 cri.go:89] found id: ""
	I0603 12:11:10.878249   73662 logs.go:276] 0 containers: []
	W0603 12:11:10.878259   73662 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 12:11:10.878265   73662 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 12:11:10.878333   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 12:11:10.912606   73662 cri.go:89] found id: ""
	I0603 12:11:10.912637   73662 logs.go:276] 0 containers: []
	W0603 12:11:10.912645   73662 logs.go:278] No container was found matching "kindnet"
	I0603 12:11:10.912650   73662 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 12:11:10.912709   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 12:11:10.946669   73662 cri.go:89] found id: ""
	I0603 12:11:10.946699   73662 logs.go:276] 0 containers: []
	W0603 12:11:10.946708   73662 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 12:11:10.946718   73662 logs.go:123] Gathering logs for kubelet ...
	I0603 12:11:10.946733   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 12:11:10.996044   73662 logs.go:123] Gathering logs for dmesg ...
	I0603 12:11:10.996077   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 12:11:11.009522   73662 logs.go:123] Gathering logs for describe nodes ...
	I0603 12:11:11.009573   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 12:11:11.081623   73662 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 12:11:11.081642   73662 logs.go:123] Gathering logs for CRI-O ...
	I0603 12:11:11.081652   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 12:11:11.162795   73662 logs.go:123] Gathering logs for container status ...
	I0603 12:11:11.162826   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 12:11:13.704492   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:11:13.718870   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 12:11:13.718939   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 12:11:13.757818   73662 cri.go:89] found id: ""
	I0603 12:11:13.757842   73662 logs.go:276] 0 containers: []
	W0603 12:11:13.757850   73662 logs.go:278] No container was found matching "kube-apiserver"
	I0603 12:11:13.757859   73662 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 12:11:13.757904   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 12:11:13.791959   73662 cri.go:89] found id: ""
	I0603 12:11:13.791989   73662 logs.go:276] 0 containers: []
	W0603 12:11:13.792003   73662 logs.go:278] No container was found matching "etcd"
	I0603 12:11:13.792010   73662 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 12:11:13.792072   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 12:11:13.827443   73662 cri.go:89] found id: ""
	I0603 12:11:13.827471   73662 logs.go:276] 0 containers: []
	W0603 12:11:13.827478   73662 logs.go:278] No container was found matching "coredns"
	I0603 12:11:13.827484   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 12:11:13.827538   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 12:11:13.862237   73662 cri.go:89] found id: ""
	I0603 12:11:13.862267   73662 logs.go:276] 0 containers: []
	W0603 12:11:13.862277   73662 logs.go:278] No container was found matching "kube-scheduler"
	I0603 12:11:13.862284   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 12:11:13.862375   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 12:11:13.898873   73662 cri.go:89] found id: ""
	I0603 12:11:13.898906   73662 logs.go:276] 0 containers: []
	W0603 12:11:13.898917   73662 logs.go:278] No container was found matching "kube-proxy"
	I0603 12:11:13.898924   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 12:11:13.898981   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 12:11:13.932870   73662 cri.go:89] found id: ""
	I0603 12:11:13.932899   73662 logs.go:276] 0 containers: []
	W0603 12:11:13.932908   73662 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 12:11:13.932913   73662 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 12:11:13.932960   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 12:11:13.968575   73662 cri.go:89] found id: ""
	I0603 12:11:13.968597   73662 logs.go:276] 0 containers: []
	W0603 12:11:13.968605   73662 logs.go:278] No container was found matching "kindnet"
	I0603 12:11:13.968610   73662 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 12:11:13.968663   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 12:11:14.007252   73662 cri.go:89] found id: ""
	I0603 12:11:14.007281   73662 logs.go:276] 0 containers: []
	W0603 12:11:14.007291   73662 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 12:11:14.007302   73662 logs.go:123] Gathering logs for describe nodes ...
	I0603 12:11:14.007317   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 12:11:14.080572   73662 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 12:11:14.080595   73662 logs.go:123] Gathering logs for CRI-O ...
	I0603 12:11:14.080607   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 12:11:14.171851   73662 logs.go:123] Gathering logs for container status ...
	I0603 12:11:14.171886   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 12:11:14.212697   73662 logs.go:123] Gathering logs for kubelet ...
	I0603 12:11:14.212726   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 12:11:14.264925   73662 logs.go:123] Gathering logs for dmesg ...
	I0603 12:11:14.264958   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 12:11:12.580430   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:11:14.581407   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:11:16.615592   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:11:19.111956   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:11:15.166724   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:11:17.667851   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:11:16.780783   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:11:16.795029   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 12:11:16.795127   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 12:11:16.833178   73662 cri.go:89] found id: ""
	I0603 12:11:16.833208   73662 logs.go:276] 0 containers: []
	W0603 12:11:16.833218   73662 logs.go:278] No container was found matching "kube-apiserver"
	I0603 12:11:16.833226   73662 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 12:11:16.833287   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 12:11:16.869318   73662 cri.go:89] found id: ""
	I0603 12:11:16.869349   73662 logs.go:276] 0 containers: []
	W0603 12:11:16.869359   73662 logs.go:278] No container was found matching "etcd"
	I0603 12:11:16.869366   73662 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 12:11:16.869429   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 12:11:16.902810   73662 cri.go:89] found id: ""
	I0603 12:11:16.902836   73662 logs.go:276] 0 containers: []
	W0603 12:11:16.902843   73662 logs.go:278] No container was found matching "coredns"
	I0603 12:11:16.902849   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 12:11:16.902893   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 12:11:16.936404   73662 cri.go:89] found id: ""
	I0603 12:11:16.936432   73662 logs.go:276] 0 containers: []
	W0603 12:11:16.936442   73662 logs.go:278] No container was found matching "kube-scheduler"
	I0603 12:11:16.936449   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 12:11:16.936505   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 12:11:16.971056   73662 cri.go:89] found id: ""
	I0603 12:11:16.971083   73662 logs.go:276] 0 containers: []
	W0603 12:11:16.971092   73662 logs.go:278] No container was found matching "kube-proxy"
	I0603 12:11:16.971097   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 12:11:16.971147   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 12:11:17.005389   73662 cri.go:89] found id: ""
	I0603 12:11:17.005416   73662 logs.go:276] 0 containers: []
	W0603 12:11:17.005427   73662 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 12:11:17.005435   73662 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 12:11:17.005491   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 12:11:17.047093   73662 cri.go:89] found id: ""
	I0603 12:11:17.047118   73662 logs.go:276] 0 containers: []
	W0603 12:11:17.047126   73662 logs.go:278] No container was found matching "kindnet"
	I0603 12:11:17.047131   73662 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 12:11:17.047187   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 12:11:17.093020   73662 cri.go:89] found id: ""
	I0603 12:11:17.093049   73662 logs.go:276] 0 containers: []
	W0603 12:11:17.093057   73662 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 12:11:17.093068   73662 logs.go:123] Gathering logs for CRI-O ...
	I0603 12:11:17.093081   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 12:11:17.177970   73662 logs.go:123] Gathering logs for container status ...
	I0603 12:11:17.178001   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 12:11:17.219530   73662 logs.go:123] Gathering logs for kubelet ...
	I0603 12:11:17.219563   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 12:11:17.272776   73662 logs.go:123] Gathering logs for dmesg ...
	I0603 12:11:17.272808   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 12:11:17.287573   73662 logs.go:123] Gathering logs for describe nodes ...
	I0603 12:11:17.287610   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 12:11:17.361020   73662 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 12:11:19.861599   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:11:19.874988   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 12:11:19.875075   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 12:11:19.910641   73662 cri.go:89] found id: ""
	I0603 12:11:19.910664   73662 logs.go:276] 0 containers: []
	W0603 12:11:19.910672   73662 logs.go:278] No container was found matching "kube-apiserver"
	I0603 12:11:19.910678   73662 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 12:11:19.910738   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 12:11:19.947432   73662 cri.go:89] found id: ""
	I0603 12:11:19.947457   73662 logs.go:276] 0 containers: []
	W0603 12:11:19.947465   73662 logs.go:278] No container was found matching "etcd"
	I0603 12:11:19.947475   73662 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 12:11:19.947528   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 12:11:19.986254   73662 cri.go:89] found id: ""
	I0603 12:11:19.986284   73662 logs.go:276] 0 containers: []
	W0603 12:11:19.986296   73662 logs.go:278] No container was found matching "coredns"
	I0603 12:11:19.986303   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 12:11:19.986370   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 12:11:20.022447   73662 cri.go:89] found id: ""
	I0603 12:11:20.022477   73662 logs.go:276] 0 containers: []
	W0603 12:11:20.022488   73662 logs.go:278] No container was found matching "kube-scheduler"
	I0603 12:11:20.022496   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 12:11:20.022555   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 12:11:20.056731   73662 cri.go:89] found id: ""
	I0603 12:11:20.056755   73662 logs.go:276] 0 containers: []
	W0603 12:11:20.056763   73662 logs.go:278] No container was found matching "kube-proxy"
	I0603 12:11:20.056769   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 12:11:20.056819   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 12:11:17.081290   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:11:19.581301   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:11:21.113769   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:11:23.106545   73294 pod_ready.go:81] duration metric: took 4m0.000411778s for pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace to be "Ready" ...
	E0603 12:11:23.106575   73294 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace to be "Ready" (will not retry!)
	I0603 12:11:23.106597   73294 pod_ready.go:38] duration metric: took 4m5.898372288s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0603 12:11:23.106627   73294 kubeadm.go:591] duration metric: took 4m13.660386139s to restartPrimaryControlPlane
	W0603 12:11:23.106692   73294 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0603 12:11:23.106750   73294 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0603 12:11:20.168291   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:11:22.667983   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:11:24.668130   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
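The pod_ready lines above poll the Ready condition of the metrics-server pod until a 4m0s timeout is reached. An equivalent one-off check is sketched below; the pod name is copied from the log, and the jsonpath filter is an illustration rather than the command minikube itself runs:

    kubectl --namespace kube-system get pod metrics-server-569cc877fc-8jrnd \
        -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'   # prints False while the pod is unready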
	I0603 12:11:20.095511   73662 cri.go:89] found id: ""
	I0603 12:11:20.095537   73662 logs.go:276] 0 containers: []
	W0603 12:11:20.095547   73662 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 12:11:20.095552   73662 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 12:11:20.095595   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 12:11:20.130562   73662 cri.go:89] found id: ""
	I0603 12:11:20.130581   73662 logs.go:276] 0 containers: []
	W0603 12:11:20.130589   73662 logs.go:278] No container was found matching "kindnet"
	I0603 12:11:20.130594   73662 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 12:11:20.130648   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 12:11:20.165231   73662 cri.go:89] found id: ""
	I0603 12:11:20.165257   73662 logs.go:276] 0 containers: []
	W0603 12:11:20.165267   73662 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 12:11:20.165276   73662 logs.go:123] Gathering logs for kubelet ...
	I0603 12:11:20.165290   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 12:11:20.221790   73662 logs.go:123] Gathering logs for dmesg ...
	I0603 12:11:20.221826   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 12:11:20.237415   73662 logs.go:123] Gathering logs for describe nodes ...
	I0603 12:11:20.237440   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 12:11:20.310615   73662 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 12:11:20.310641   73662 logs.go:123] Gathering logs for CRI-O ...
	I0603 12:11:20.310657   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 12:11:20.385667   73662 logs.go:123] Gathering logs for container status ...
	I0603 12:11:20.385701   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 12:11:22.925911   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:11:22.938958   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 12:11:22.939047   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 12:11:22.981898   73662 cri.go:89] found id: ""
	I0603 12:11:22.981928   73662 logs.go:276] 0 containers: []
	W0603 12:11:22.981939   73662 logs.go:278] No container was found matching "kube-apiserver"
	I0603 12:11:22.981954   73662 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 12:11:22.982026   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 12:11:23.025590   73662 cri.go:89] found id: ""
	I0603 12:11:23.025624   73662 logs.go:276] 0 containers: []
	W0603 12:11:23.025632   73662 logs.go:278] No container was found matching "etcd"
	I0603 12:11:23.025638   73662 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 12:11:23.025691   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 12:11:23.072938   73662 cri.go:89] found id: ""
	I0603 12:11:23.072968   73662 logs.go:276] 0 containers: []
	W0603 12:11:23.072980   73662 logs.go:278] No container was found matching "coredns"
	I0603 12:11:23.072988   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 12:11:23.073057   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 12:11:23.114546   73662 cri.go:89] found id: ""
	I0603 12:11:23.114573   73662 logs.go:276] 0 containers: []
	W0603 12:11:23.114582   73662 logs.go:278] No container was found matching "kube-scheduler"
	I0603 12:11:23.114589   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 12:11:23.114654   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 12:11:23.152203   73662 cri.go:89] found id: ""
	I0603 12:11:23.152229   73662 logs.go:276] 0 containers: []
	W0603 12:11:23.152236   73662 logs.go:278] No container was found matching "kube-proxy"
	I0603 12:11:23.152242   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 12:11:23.152289   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 12:11:23.204179   73662 cri.go:89] found id: ""
	I0603 12:11:23.204228   73662 logs.go:276] 0 containers: []
	W0603 12:11:23.204240   73662 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 12:11:23.204247   73662 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 12:11:23.204308   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 12:11:23.244217   73662 cri.go:89] found id: ""
	I0603 12:11:23.244246   73662 logs.go:276] 0 containers: []
	W0603 12:11:23.244256   73662 logs.go:278] No container was found matching "kindnet"
	I0603 12:11:23.244264   73662 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 12:11:23.244326   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 12:11:23.286094   73662 cri.go:89] found id: ""
	I0603 12:11:23.286173   73662 logs.go:276] 0 containers: []
	W0603 12:11:23.286190   73662 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 12:11:23.286201   73662 logs.go:123] Gathering logs for kubelet ...
	I0603 12:11:23.286215   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 12:11:23.357802   73662 logs.go:123] Gathering logs for dmesg ...
	I0603 12:11:23.357850   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 12:11:23.376808   73662 logs.go:123] Gathering logs for describe nodes ...
	I0603 12:11:23.376839   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 12:11:23.470658   73662 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 12:11:23.470691   73662 logs.go:123] Gathering logs for CRI-O ...
	I0603 12:11:23.470705   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 12:11:23.584192   73662 logs.go:123] Gathering logs for container status ...
	I0603 12:11:23.584241   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 12:11:22.075519   73179 pod_ready.go:81] duration metric: took 4m0.000796038s for pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace to be "Ready" ...
	E0603 12:11:22.075561   73179 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace to be "Ready" (will not retry!)
	I0603 12:11:22.075598   73179 pod_ready.go:38] duration metric: took 4m12.795532428s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0603 12:11:22.075626   73179 kubeadm.go:591] duration metric: took 4m22.69078868s to restartPrimaryControlPlane
	W0603 12:11:22.075677   73179 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0603 12:11:22.075720   73179 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0603 12:11:27.170198   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:11:29.667670   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:11:26.132511   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:11:26.150549   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 12:11:26.150619   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 12:11:26.196791   73662 cri.go:89] found id: ""
	I0603 12:11:26.196817   73662 logs.go:276] 0 containers: []
	W0603 12:11:26.196827   73662 logs.go:278] No container was found matching "kube-apiserver"
	I0603 12:11:26.196834   73662 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 12:11:26.196912   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 12:11:26.233584   73662 cri.go:89] found id: ""
	I0603 12:11:26.233614   73662 logs.go:276] 0 containers: []
	W0603 12:11:26.233624   73662 logs.go:278] No container was found matching "etcd"
	I0603 12:11:26.233631   73662 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 12:11:26.233692   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 12:11:26.272648   73662 cri.go:89] found id: ""
	I0603 12:11:26.272677   73662 logs.go:276] 0 containers: []
	W0603 12:11:26.272688   73662 logs.go:278] No container was found matching "coredns"
	I0603 12:11:26.272696   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 12:11:26.272758   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 12:11:26.313775   73662 cri.go:89] found id: ""
	I0603 12:11:26.313806   73662 logs.go:276] 0 containers: []
	W0603 12:11:26.313817   73662 logs.go:278] No container was found matching "kube-scheduler"
	I0603 12:11:26.313824   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 12:11:26.313883   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 12:11:26.355591   73662 cri.go:89] found id: ""
	I0603 12:11:26.355626   73662 logs.go:276] 0 containers: []
	W0603 12:11:26.355638   73662 logs.go:278] No container was found matching "kube-proxy"
	I0603 12:11:26.355646   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 12:11:26.355711   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 12:11:26.406265   73662 cri.go:89] found id: ""
	I0603 12:11:26.406299   73662 logs.go:276] 0 containers: []
	W0603 12:11:26.406306   73662 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 12:11:26.406318   73662 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 12:11:26.406378   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 12:11:26.443279   73662 cri.go:89] found id: ""
	I0603 12:11:26.443321   73662 logs.go:276] 0 containers: []
	W0603 12:11:26.443333   73662 logs.go:278] No container was found matching "kindnet"
	I0603 12:11:26.443340   73662 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 12:11:26.443403   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 12:11:26.479300   73662 cri.go:89] found id: ""
	I0603 12:11:26.479334   73662 logs.go:276] 0 containers: []
	W0603 12:11:26.479346   73662 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 12:11:26.479358   73662 logs.go:123] Gathering logs for kubelet ...
	I0603 12:11:26.479371   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 12:11:26.531360   73662 logs.go:123] Gathering logs for dmesg ...
	I0603 12:11:26.531394   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 12:11:26.547939   73662 logs.go:123] Gathering logs for describe nodes ...
	I0603 12:11:26.547973   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 12:11:26.625987   73662 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 12:11:26.626016   73662 logs.go:123] Gathering logs for CRI-O ...
	I0603 12:11:26.626032   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 12:11:26.714014   73662 logs.go:123] Gathering logs for container status ...
	I0603 12:11:26.714054   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 12:11:29.267203   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:11:29.281448   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 12:11:29.281522   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 12:11:29.315484   73662 cri.go:89] found id: ""
	I0603 12:11:29.315512   73662 logs.go:276] 0 containers: []
	W0603 12:11:29.315519   73662 logs.go:278] No container was found matching "kube-apiserver"
	I0603 12:11:29.315530   73662 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 12:11:29.315586   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 12:11:29.357054   73662 cri.go:89] found id: ""
	I0603 12:11:29.357084   73662 logs.go:276] 0 containers: []
	W0603 12:11:29.357095   73662 logs.go:278] No container was found matching "etcd"
	I0603 12:11:29.357103   73662 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 12:11:29.357163   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 12:11:29.402434   73662 cri.go:89] found id: ""
	I0603 12:11:29.402461   73662 logs.go:276] 0 containers: []
	W0603 12:11:29.402471   73662 logs.go:278] No container was found matching "coredns"
	I0603 12:11:29.402478   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 12:11:29.402520   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 12:11:29.437822   73662 cri.go:89] found id: ""
	I0603 12:11:29.437854   73662 logs.go:276] 0 containers: []
	W0603 12:11:29.437865   73662 logs.go:278] No container was found matching "kube-scheduler"
	I0603 12:11:29.437871   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 12:11:29.437917   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 12:11:29.474637   73662 cri.go:89] found id: ""
	I0603 12:11:29.474658   73662 logs.go:276] 0 containers: []
	W0603 12:11:29.474665   73662 logs.go:278] No container was found matching "kube-proxy"
	I0603 12:11:29.474671   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 12:11:29.474725   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 12:11:29.508547   73662 cri.go:89] found id: ""
	I0603 12:11:29.508573   73662 logs.go:276] 0 containers: []
	W0603 12:11:29.508580   73662 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 12:11:29.508586   73662 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 12:11:29.508630   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 12:11:29.544524   73662 cri.go:89] found id: ""
	I0603 12:11:29.544553   73662 logs.go:276] 0 containers: []
	W0603 12:11:29.544561   73662 logs.go:278] No container was found matching "kindnet"
	I0603 12:11:29.544567   73662 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 12:11:29.544621   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 12:11:29.582549   73662 cri.go:89] found id: ""
	I0603 12:11:29.582582   73662 logs.go:276] 0 containers: []
	W0603 12:11:29.582593   73662 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 12:11:29.582604   73662 logs.go:123] Gathering logs for kubelet ...
	I0603 12:11:29.582618   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 12:11:29.641931   73662 logs.go:123] Gathering logs for dmesg ...
	I0603 12:11:29.641977   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 12:11:29.664918   73662 logs.go:123] Gathering logs for describe nodes ...
	I0603 12:11:29.664948   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 12:11:29.740591   73662 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 12:11:29.740615   73662 logs.go:123] Gathering logs for CRI-O ...
	I0603 12:11:29.740629   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 12:11:29.814456   73662 logs.go:123] Gathering logs for container status ...
	I0603 12:11:29.814489   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 12:11:32.166042   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:11:34.166283   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:11:32.359122   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:11:32.373552   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 12:11:32.373623   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 12:11:32.408431   73662 cri.go:89] found id: ""
	I0603 12:11:32.408461   73662 logs.go:276] 0 containers: []
	W0603 12:11:32.408471   73662 logs.go:278] No container was found matching "kube-apiserver"
	I0603 12:11:32.408479   73662 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 12:11:32.408533   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 12:11:32.444242   73662 cri.go:89] found id: ""
	I0603 12:11:32.444266   73662 logs.go:276] 0 containers: []
	W0603 12:11:32.444273   73662 logs.go:278] No container was found matching "etcd"
	I0603 12:11:32.444279   73662 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 12:11:32.444323   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 12:11:32.477205   73662 cri.go:89] found id: ""
	I0603 12:11:32.477230   73662 logs.go:276] 0 containers: []
	W0603 12:11:32.477237   73662 logs.go:278] No container was found matching "coredns"
	I0603 12:11:32.477243   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 12:11:32.477298   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 12:11:32.512434   73662 cri.go:89] found id: ""
	I0603 12:11:32.512482   73662 logs.go:276] 0 containers: []
	W0603 12:11:32.512494   73662 logs.go:278] No container was found matching "kube-scheduler"
	I0603 12:11:32.512501   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 12:11:32.512559   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 12:11:32.545619   73662 cri.go:89] found id: ""
	I0603 12:11:32.545645   73662 logs.go:276] 0 containers: []
	W0603 12:11:32.545655   73662 logs.go:278] No container was found matching "kube-proxy"
	I0603 12:11:32.545662   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 12:11:32.545715   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 12:11:32.579093   73662 cri.go:89] found id: ""
	I0603 12:11:32.579121   73662 logs.go:276] 0 containers: []
	W0603 12:11:32.579131   73662 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 12:11:32.579138   73662 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 12:11:32.579196   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 12:11:32.616826   73662 cri.go:89] found id: ""
	I0603 12:11:32.616851   73662 logs.go:276] 0 containers: []
	W0603 12:11:32.616858   73662 logs.go:278] No container was found matching "kindnet"
	I0603 12:11:32.616864   73662 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 12:11:32.616917   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 12:11:32.660083   73662 cri.go:89] found id: ""
	I0603 12:11:32.660113   73662 logs.go:276] 0 containers: []
	W0603 12:11:32.660124   73662 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 12:11:32.660132   73662 logs.go:123] Gathering logs for container status ...
	I0603 12:11:32.660143   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 12:11:32.697974   73662 logs.go:123] Gathering logs for kubelet ...
	I0603 12:11:32.698002   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 12:11:32.748797   73662 logs.go:123] Gathering logs for dmesg ...
	I0603 12:11:32.748835   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 12:11:32.762517   73662 logs.go:123] Gathering logs for describe nodes ...
	I0603 12:11:32.762580   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 12:11:32.838358   73662 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 12:11:32.838383   73662 logs.go:123] Gathering logs for CRI-O ...
	I0603 12:11:32.838397   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 12:11:35.419197   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:11:35.432481   73662 kubeadm.go:591] duration metric: took 4m4.317900598s to restartPrimaryControlPlane
	W0603 12:11:35.432560   73662 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0603 12:11:35.432591   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0603 12:11:35.895615   73662 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0603 12:11:35.910673   73662 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0603 12:11:35.921333   73662 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0603 12:11:35.931736   73662 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0603 12:11:35.931750   73662 kubeadm.go:156] found existing configuration files:
	
	I0603 12:11:35.931783   73662 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0603 12:11:35.940883   73662 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0603 12:11:35.940924   73662 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0603 12:11:35.950780   73662 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0603 12:11:35.959947   73662 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0603 12:11:35.959999   73662 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0603 12:11:35.969824   73662 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0603 12:11:35.979347   73662 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0603 12:11:35.979393   73662 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0603 12:11:35.988704   73662 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0603 12:11:35.997726   73662 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0603 12:11:35.997785   73662 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0603 12:11:36.007165   73662 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0603 12:11:36.080667   73662 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0603 12:11:36.080794   73662 kubeadm.go:309] [preflight] Running pre-flight checks
	I0603 12:11:36.220642   73662 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0603 12:11:36.220814   73662 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0603 12:11:36.220967   73662 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0603 12:11:36.421569   73662 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0603 12:11:36.423141   73662 out.go:204]   - Generating certificates and keys ...
	I0603 12:11:36.423237   73662 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0603 12:11:36.423328   73662 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0603 12:11:36.423428   73662 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0603 12:11:36.423535   73662 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0603 12:11:36.423630   73662 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0603 12:11:36.423713   73662 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0603 12:11:36.423795   73662 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0603 12:11:36.423880   73662 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0603 12:11:36.423985   73662 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0603 12:11:36.424079   73662 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0603 12:11:36.424140   73662 kubeadm.go:309] [certs] Using the existing "sa" key
	I0603 12:11:36.424218   73662 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0603 12:11:36.576702   73662 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0603 12:11:36.704239   73662 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0603 12:11:36.981759   73662 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0603 12:11:37.031992   73662 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0603 12:11:37.052994   73662 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0603 12:11:37.054403   73662 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0603 12:11:37.054471   73662 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0603 12:11:37.196201   73662 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0603 12:11:36.168314   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:11:38.667358   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:11:37.198112   73662 out.go:204]   - Booting up control plane ...
	I0603 12:11:37.198252   73662 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0603 12:11:37.202872   73662 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0603 12:11:37.203965   73662 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0603 12:11:37.204734   73662 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0603 12:11:37.207204   73662 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0603 12:11:41.166509   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:11:43.168695   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:11:45.667381   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:11:48.167362   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:11:50.167570   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:11:52.668348   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:11:54.671004   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:11:54.178477   73179 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (32.102731378s)
	I0603 12:11:54.178554   73179 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0603 12:11:54.194599   73179 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0603 12:11:54.204770   73179 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0603 12:11:54.215290   73179 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0603 12:11:54.215315   73179 kubeadm.go:156] found existing configuration files:
	
	I0603 12:11:54.215355   73179 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0603 12:11:54.224420   73179 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0603 12:11:54.224478   73179 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0603 12:11:54.233706   73179 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0603 12:11:54.242358   73179 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0603 12:11:54.242399   73179 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0603 12:11:54.251531   73179 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0603 12:11:54.260911   73179 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0603 12:11:54.260950   73179 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0603 12:11:54.270219   73179 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0603 12:11:54.279141   73179 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0603 12:11:54.279194   73179 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0603 12:11:54.288343   73179 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0603 12:11:54.477591   73179 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0603 12:11:55.081260   73294 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (31.974475191s)
	I0603 12:11:55.081350   73294 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0603 12:11:55.098545   73294 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0603 12:11:55.109266   73294 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0603 12:11:55.118891   73294 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0603 12:11:55.118917   73294 kubeadm.go:156] found existing configuration files:
	
	I0603 12:11:55.118964   73294 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0603 12:11:55.128412   73294 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0603 12:11:55.128466   73294 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0603 12:11:55.137942   73294 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0603 12:11:55.146937   73294 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0603 12:11:55.146986   73294 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0603 12:11:55.156388   73294 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0603 12:11:55.167156   73294 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0603 12:11:55.167206   73294 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0603 12:11:55.176591   73294 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0603 12:11:55.185483   73294 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0603 12:11:55.185530   73294 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0603 12:11:55.195271   73294 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0603 12:11:55.251253   73294 kubeadm.go:309] [init] Using Kubernetes version: v1.30.1
	I0603 12:11:55.251344   73294 kubeadm.go:309] [preflight] Running pre-flight checks
	I0603 12:11:55.396358   73294 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0603 12:11:55.396519   73294 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0603 12:11:55.396681   73294 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0603 12:11:55.603493   73294 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0603 12:11:55.605797   73294 out.go:204]   - Generating certificates and keys ...
	I0603 12:11:55.605901   73294 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0603 12:11:55.605995   73294 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0603 12:11:55.606143   73294 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0603 12:11:55.606253   73294 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0603 12:11:55.606357   73294 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0603 12:11:55.606440   73294 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0603 12:11:55.606539   73294 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0603 12:11:55.606623   73294 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0603 12:11:55.606738   73294 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0603 12:11:55.606844   73294 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0603 12:11:55.606907   73294 kubeadm.go:309] [certs] Using the existing "sa" key
	I0603 12:11:55.606990   73294 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0603 12:11:55.749342   73294 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0603 12:11:55.918787   73294 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0603 12:11:56.058383   73294 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0603 12:11:56.306167   73294 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0603 12:11:56.365029   73294 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0603 12:11:56.365722   73294 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0603 12:11:56.368197   73294 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0603 12:11:56.369833   73294 out.go:204]   - Booting up control plane ...
	I0603 12:11:56.369950   73294 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0603 12:11:56.370081   73294 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0603 12:11:56.370175   73294 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0603 12:11:56.388879   73294 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0603 12:11:56.391420   73294 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0603 12:11:56.391490   73294 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0603 12:11:56.528206   73294 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0603 12:11:56.528341   73294 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0603 12:11:57.029861   73294 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 501.458956ms
	I0603 12:11:57.029944   73294 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0603 12:11:57.165921   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:11:59.168287   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:12:02.031156   73294 kubeadm.go:309] [api-check] The API server is healthy after 5.001477077s
	I0603 12:12:02.053326   73294 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0603 12:12:02.086541   73294 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0603 12:12:02.127446   73294 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0603 12:12:02.127715   73294 kubeadm.go:309] [mark-control-plane] Marking the node default-k8s-diff-port-196710 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0603 12:12:02.138683   73294 kubeadm.go:309] [bootstrap-token] Using token: 20dsgk.zbmo4be5tg5i1a9b
	I0603 12:12:02.140047   73294 out.go:204]   - Configuring RBAC rules ...
	I0603 12:12:02.140170   73294 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0603 12:12:02.149933   73294 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0603 12:12:02.160136   73294 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0603 12:12:02.168638   73294 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0603 12:12:02.173242   73294 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0603 12:12:02.177001   73294 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0603 12:12:02.438936   73294 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0603 12:12:02.892616   73294 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0603 12:12:03.438400   73294 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0603 12:12:03.440008   73294 kubeadm.go:309] 
	I0603 12:12:03.440093   73294 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0603 12:12:03.440101   73294 kubeadm.go:309] 
	I0603 12:12:03.440183   73294 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0603 12:12:03.440191   73294 kubeadm.go:309] 
	I0603 12:12:03.440217   73294 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0603 12:12:03.440308   73294 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0603 12:12:03.440416   73294 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0603 12:12:03.440438   73294 kubeadm.go:309] 
	I0603 12:12:03.440537   73294 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0603 12:12:03.440559   73294 kubeadm.go:309] 
	I0603 12:12:03.440649   73294 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0603 12:12:03.440659   73294 kubeadm.go:309] 
	I0603 12:12:03.440739   73294 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0603 12:12:03.440813   73294 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0603 12:12:03.440884   73294 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0603 12:12:03.440891   73294 kubeadm.go:309] 
	I0603 12:12:03.440959   73294 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0603 12:12:03.441059   73294 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0603 12:12:03.441077   73294 kubeadm.go:309] 
	I0603 12:12:03.441195   73294 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8444 --token 20dsgk.zbmo4be5tg5i1a9b \
	I0603 12:12:03.441383   73294 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:efe6cbdd58a590fb6c3b56a05a1648145bfb13b8a8cd2383ea34b710fa987e0b \
	I0603 12:12:03.441413   73294 kubeadm.go:309] 	--control-plane 
	I0603 12:12:03.441422   73294 kubeadm.go:309] 
	I0603 12:12:03.441561   73294 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0603 12:12:03.441580   73294 kubeadm.go:309] 
	I0603 12:12:03.441699   73294 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8444 --token 20dsgk.zbmo4be5tg5i1a9b \
	I0603 12:12:03.441848   73294 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:efe6cbdd58a590fb6c3b56a05a1648145bfb13b8a8cd2383ea34b710fa987e0b 
	I0603 12:12:03.442240   73294 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0603 12:12:03.442374   73294 cni.go:84] Creating CNI manager for ""
	I0603 12:12:03.442392   73294 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0603 12:12:03.444302   73294 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0603 12:12:03.644388   73179 kubeadm.go:309] [init] Using Kubernetes version: v1.30.1
	I0603 12:12:03.644489   73179 kubeadm.go:309] [preflight] Running pre-flight checks
	I0603 12:12:03.644596   73179 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0603 12:12:03.644742   73179 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0603 12:12:03.644874   73179 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0603 12:12:03.644953   73179 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0603 12:12:03.646392   73179 out.go:204]   - Generating certificates and keys ...
	I0603 12:12:03.646520   73179 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0603 12:12:03.646605   73179 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0603 12:12:03.646715   73179 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0603 12:12:03.646801   73179 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0603 12:12:03.646896   73179 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0603 12:12:03.646980   73179 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0603 12:12:03.647082   73179 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0603 12:12:03.647168   73179 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0603 12:12:03.647266   73179 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0603 12:12:03.647383   73179 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0603 12:12:03.647448   73179 kubeadm.go:309] [certs] Using the existing "sa" key
	I0603 12:12:03.647527   73179 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0603 12:12:03.647596   73179 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0603 12:12:03.647678   73179 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0603 12:12:03.647753   73179 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0603 12:12:03.647850   73179 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0603 12:12:03.647939   73179 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0603 12:12:03.648064   73179 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0603 12:12:03.648163   73179 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0603 12:12:03.649552   73179 out.go:204]   - Booting up control plane ...
	I0603 12:12:03.649660   73179 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0603 12:12:03.649772   73179 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0603 12:12:03.649884   73179 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0603 12:12:03.650017   73179 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0603 12:12:03.650139   73179 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0603 12:12:03.650211   73179 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0603 12:12:03.650408   73179 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0603 12:12:03.650515   73179 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0603 12:12:03.650591   73179 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 1.002065022s
	I0603 12:12:03.650698   73179 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0603 12:12:03.650789   73179 kubeadm.go:309] [api-check] The API server is healthy after 5.002076943s
	I0603 12:12:03.650915   73179 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0603 12:12:03.651093   73179 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0603 12:12:03.651168   73179 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0603 12:12:03.651414   73179 kubeadm.go:309] [mark-control-plane] Marking the node no-preload-602118 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0603 12:12:03.651488   73179 kubeadm.go:309] [bootstrap-token] Using token: shx5vv.etzadsstlalifeo7
	I0603 12:12:03.652942   73179 out.go:204]   - Configuring RBAC rules ...
	I0603 12:12:03.653061   73179 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0603 12:12:03.653174   73179 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0603 12:12:03.653347   73179 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0603 12:12:03.653531   73179 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0603 12:12:03.653674   73179 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0603 12:12:03.653781   73179 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0603 12:12:03.653925   73179 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0603 12:12:03.653965   73179 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0603 12:12:03.654004   73179 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0603 12:12:03.654010   73179 kubeadm.go:309] 
	I0603 12:12:03.654057   73179 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0603 12:12:03.654063   73179 kubeadm.go:309] 
	I0603 12:12:03.654125   73179 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0603 12:12:03.654131   73179 kubeadm.go:309] 
	I0603 12:12:03.654151   73179 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0603 12:12:03.654199   73179 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0603 12:12:03.654242   73179 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0603 12:12:03.654250   73179 kubeadm.go:309] 
	I0603 12:12:03.654300   73179 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0603 12:12:03.654306   73179 kubeadm.go:309] 
	I0603 12:12:03.654350   73179 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0603 12:12:03.654356   73179 kubeadm.go:309] 
	I0603 12:12:03.654397   73179 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0603 12:12:03.654467   73179 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0603 12:12:03.654524   73179 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0603 12:12:03.654530   73179 kubeadm.go:309] 
	I0603 12:12:03.654595   73179 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0603 12:12:03.654658   73179 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0603 12:12:03.654664   73179 kubeadm.go:309] 
	I0603 12:12:03.654729   73179 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token shx5vv.etzadsstlalifeo7 \
	I0603 12:12:03.654845   73179 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:efe6cbdd58a590fb6c3b56a05a1648145bfb13b8a8cd2383ea34b710fa987e0b \
	I0603 12:12:03.654880   73179 kubeadm.go:309] 	--control-plane 
	I0603 12:12:03.654886   73179 kubeadm.go:309] 
	I0603 12:12:03.655004   73179 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0603 12:12:03.655019   73179 kubeadm.go:309] 
	I0603 12:12:03.655117   73179 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token shx5vv.etzadsstlalifeo7 \
	I0603 12:12:03.655267   73179 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:efe6cbdd58a590fb6c3b56a05a1648145bfb13b8a8cd2383ea34b710fa987e0b 
	I0603 12:12:03.655306   73179 cni.go:84] Creating CNI manager for ""
	I0603 12:12:03.655316   73179 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0603 12:12:03.656746   73179 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0603 12:12:03.445612   73294 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0603 12:12:03.459114   73294 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0603 12:12:03.479003   73294 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0603 12:12:03.479128   73294 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:12:03.479139   73294 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-196710 minikube.k8s.io/updated_at=2024_06_03T12_12_03_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=599070631c2216ebc936292d491e4fe10e15b9d8 minikube.k8s.io/name=default-k8s-diff-port-196710 minikube.k8s.io/primary=true
	I0603 12:12:03.506970   73294 ops.go:34] apiserver oom_adj: -16
	I0603 12:12:03.684097   73294 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:12:04.185124   73294 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:12:01.667542   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:12:03.669066   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:12:03.657886   73179 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0603 12:12:03.672430   73179 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0603 12:12:03.693536   73179 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0603 12:12:03.693627   73179 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:12:03.693658   73179 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-602118 minikube.k8s.io/updated_at=2024_06_03T12_12_03_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=599070631c2216ebc936292d491e4fe10e15b9d8 minikube.k8s.io/name=no-preload-602118 minikube.k8s.io/primary=true
	I0603 12:12:03.730215   73179 ops.go:34] apiserver oom_adj: -16
	I0603 12:12:03.897726   73179 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:12:04.398585   73179 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:12:04.898543   73179 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:12:04.684589   73294 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:12:05.184999   73294 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:12:05.685081   73294 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:12:06.185212   73294 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:12:06.684565   73294 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:12:07.184862   73294 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:12:07.684542   73294 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:12:08.184516   73294 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:12:08.684333   73294 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:12:09.184426   73294 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:12:06.166490   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:12:08.167169   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:12:08.661107   72964 pod_ready.go:81] duration metric: took 4m0.000791246s for pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace to be "Ready" ...
	E0603 12:12:08.661143   72964 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace to be "Ready" (will not retry!)
	I0603 12:12:08.661161   72964 pod_ready.go:38] duration metric: took 4m12.610770004s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0603 12:12:08.661187   72964 kubeadm.go:591] duration metric: took 4m20.419490743s to restartPrimaryControlPlane
	W0603 12:12:08.661235   72964 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0603 12:12:08.661255   72964 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0603 12:12:05.398640   73179 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:12:05.898522   73179 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:12:06.397948   73179 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:12:06.897958   73179 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:12:07.397912   73179 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:12:07.898059   73179 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:12:08.398372   73179 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:12:08.897877   73179 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:12:09.397861   73179 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:12:09.898541   73179 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:12:09.684787   73294 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:12:10.184277   73294 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:12:10.684146   73294 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:12:11.184402   73294 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:12:11.684199   73294 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:12:12.184770   73294 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:12:12.684964   73294 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:12:13.184228   73294 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:12:13.684160   73294 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:12:14.184443   73294 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:12:10.398126   73179 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:12:10.898790   73179 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:12:11.398275   73179 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:12:11.897874   73179 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:12:12.398040   73179 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:12:12.898813   73179 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:12:13.398175   73179 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:12:13.897789   73179 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:12:14.398202   73179 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:12:14.898444   73179 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:12:15.398430   73179 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:12:15.897913   73179 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:12:15.999563   73179 kubeadm.go:1107] duration metric: took 12.305979901s to wait for elevateKubeSystemPrivileges
	W0603 12:12:15.999608   73179 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0603 12:12:15.999618   73179 kubeadm.go:393] duration metric: took 5m16.666049314s to StartCluster
	I0603 12:12:15.999646   73179 settings.go:142] acquiring lock: {Name:mkda1bdbbfe91266270f1d999e6d56fc2830d6f8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 12:12:15.999745   73179 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19008-7755/kubeconfig
	I0603 12:12:16.002178   73179 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19008-7755/kubeconfig: {Name:mk2f45bf5da44c5570e14124ae482cac309d97b6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 12:12:16.002496   73179 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.50.245 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0603 12:12:16.003826   73179 out.go:177] * Verifying Kubernetes components...
	I0603 12:12:16.002629   73179 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0603 12:12:16.002754   73179 config.go:182] Loaded profile config "no-preload-602118": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0603 12:12:16.005034   73179 addons.go:69] Setting storage-provisioner=true in profile "no-preload-602118"
	I0603 12:12:16.005049   73179 addons.go:69] Setting metrics-server=true in profile "no-preload-602118"
	I0603 12:12:16.005048   73179 addons.go:69] Setting default-storageclass=true in profile "no-preload-602118"
	I0603 12:12:16.005080   73179 addons.go:234] Setting addon metrics-server=true in "no-preload-602118"
	W0603 12:12:16.005095   73179 addons.go:243] addon metrics-server should already be in state true
	I0603 12:12:16.005095   73179 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-602118"
	I0603 12:12:16.005121   73179 host.go:66] Checking if "no-preload-602118" exists ...
	I0603 12:12:16.005082   73179 addons.go:234] Setting addon storage-provisioner=true in "no-preload-602118"
	W0603 12:12:16.005147   73179 addons.go:243] addon storage-provisioner should already be in state true
	I0603 12:12:16.005184   73179 host.go:66] Checking if "no-preload-602118" exists ...
	I0603 12:12:16.005039   73179 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0603 12:12:16.005558   73179 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 12:12:16.005568   73179 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 12:12:16.005562   73179 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 12:12:16.005594   73179 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 12:12:16.005613   73179 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 12:12:16.005592   73179 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 12:12:16.025576   73179 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37907
	I0603 12:12:16.025614   73179 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33735
	I0603 12:12:16.025580   73179 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34403
	I0603 12:12:16.026031   73179 main.go:141] libmachine: () Calling .GetVersion
	I0603 12:12:16.026071   73179 main.go:141] libmachine: () Calling .GetVersion
	I0603 12:12:16.026136   73179 main.go:141] libmachine: () Calling .GetVersion
	I0603 12:12:16.026534   73179 main.go:141] libmachine: Using API Version  1
	I0603 12:12:16.026549   73179 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 12:12:16.026534   73179 main.go:141] libmachine: Using API Version  1
	I0603 12:12:16.026662   73179 main.go:141] libmachine: Using API Version  1
	I0603 12:12:16.026762   73179 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 12:12:16.026781   73179 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 12:12:16.026868   73179 main.go:141] libmachine: () Calling .GetMachineName
	I0603 12:12:16.027104   73179 main.go:141] libmachine: () Calling .GetMachineName
	I0603 12:12:16.027174   73179 main.go:141] libmachine: () Calling .GetMachineName
	I0603 12:12:16.027270   73179 main.go:141] libmachine: (no-preload-602118) Calling .GetState
	I0603 12:12:16.027448   73179 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 12:12:16.027481   73179 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 12:12:16.027667   73179 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 12:12:16.027693   73179 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 12:12:16.031436   73179 addons.go:234] Setting addon default-storageclass=true in "no-preload-602118"
	W0603 12:12:16.031458   73179 addons.go:243] addon default-storageclass should already be in state true
	I0603 12:12:16.031487   73179 host.go:66] Checking if "no-preload-602118" exists ...
	I0603 12:12:16.031838   73179 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 12:12:16.031870   73179 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 12:12:16.043477   73179 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43369
	I0603 12:12:16.043659   73179 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38809
	I0603 12:12:16.044102   73179 main.go:141] libmachine: () Calling .GetVersion
	I0603 12:12:16.044124   73179 main.go:141] libmachine: () Calling .GetVersion
	I0603 12:12:16.044746   73179 main.go:141] libmachine: Using API Version  1
	I0603 12:12:16.044763   73179 main.go:141] libmachine: Using API Version  1
	I0603 12:12:16.044767   73179 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 12:12:16.044779   73179 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 12:12:16.045175   73179 main.go:141] libmachine: () Calling .GetMachineName
	I0603 12:12:16.045364   73179 main.go:141] libmachine: (no-preload-602118) Calling .GetState
	I0603 12:12:16.045406   73179 main.go:141] libmachine: () Calling .GetMachineName
	I0603 12:12:16.045571   73179 main.go:141] libmachine: (no-preload-602118) Calling .GetState
	I0603 12:12:16.047312   73179 main.go:141] libmachine: (no-preload-602118) Calling .DriverName
	I0603 12:12:16.047741   73179 main.go:141] libmachine: (no-preload-602118) Calling .DriverName
	I0603 12:12:16.049538   73179 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0603 12:12:16.048146   73179 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35375
	I0603 12:12:16.050862   73179 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0603 12:12:16.050892   73179 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0603 12:12:16.050897   73179 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0603 12:12:16.050908   73179 main.go:141] libmachine: (no-preload-602118) Calling .GetSSHHostname
	I0603 12:12:14.684713   73294 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:12:15.184206   73294 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:12:15.684798   73294 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:12:16.184405   73294 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:12:16.684720   73294 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:12:16.818407   73294 kubeadm.go:1107] duration metric: took 13.339334124s to wait for elevateKubeSystemPrivileges
	W0603 12:12:16.818450   73294 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0603 12:12:16.818460   73294 kubeadm.go:393] duration metric: took 5m7.432855804s to StartCluster
	I0603 12:12:16.818480   73294 settings.go:142] acquiring lock: {Name:mkda1bdbbfe91266270f1d999e6d56fc2830d6f8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 12:12:16.818573   73294 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19008-7755/kubeconfig
	I0603 12:12:16.821192   73294 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19008-7755/kubeconfig: {Name:mk2f45bf5da44c5570e14124ae482cac309d97b6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 12:12:16.821483   73294 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.61.60 Port:8444 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0603 12:12:16.823082   73294 out.go:177] * Verifying Kubernetes components...
	I0603 12:12:16.821572   73294 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0603 12:12:16.821670   73294 config.go:182] Loaded profile config "default-k8s-diff-port-196710": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0603 12:12:16.824703   73294 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0603 12:12:16.824719   73294 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-196710"
	I0603 12:12:16.824760   73294 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-196710"
	I0603 12:12:16.824710   73294 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-196710"
	W0603 12:12:16.824772   73294 addons.go:243] addon metrics-server should already be in state true
	I0603 12:12:16.824795   73294 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-196710"
	I0603 12:12:16.824802   73294 host.go:66] Checking if "default-k8s-diff-port-196710" exists ...
	W0603 12:12:16.824808   73294 addons.go:243] addon storage-provisioner should already be in state true
	I0603 12:12:16.824723   73294 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-196710"
	I0603 12:12:16.824843   73294 host.go:66] Checking if "default-k8s-diff-port-196710" exists ...
	I0603 12:12:16.824851   73294 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-196710"
	I0603 12:12:16.825222   73294 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 12:12:16.825241   73294 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 12:12:16.825250   73294 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 12:12:16.825264   73294 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 12:12:16.825228   73294 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 12:12:16.825354   73294 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 12:12:16.843187   73294 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41289
	I0603 12:12:16.843659   73294 main.go:141] libmachine: () Calling .GetVersion
	I0603 12:12:16.844379   73294 main.go:141] libmachine: Using API Version  1
	I0603 12:12:16.844407   73294 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 12:12:16.844784   73294 main.go:141] libmachine: () Calling .GetMachineName
	I0603 12:12:16.845314   73294 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 12:12:16.845353   73294 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 12:12:16.845975   73294 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46095
	I0603 12:12:16.846379   73294 main.go:141] libmachine: () Calling .GetVersion
	I0603 12:12:16.846856   73294 main.go:141] libmachine: Using API Version  1
	I0603 12:12:16.846875   73294 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 12:12:16.847307   73294 main.go:141] libmachine: () Calling .GetMachineName
	I0603 12:12:16.847921   73294 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 12:12:16.847944   73294 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 12:12:16.848622   73294 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45613
	I0603 12:12:16.849007   73294 main.go:141] libmachine: () Calling .GetVersion
	I0603 12:12:16.849505   73294 main.go:141] libmachine: Using API Version  1
	I0603 12:12:16.849527   73294 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 12:12:16.849888   73294 main.go:141] libmachine: () Calling .GetMachineName
	I0603 12:12:16.850120   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .GetState
	I0603 12:12:16.853711   73294 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-196710"
	W0603 12:12:16.853732   73294 addons.go:243] addon default-storageclass should already be in state true
	I0603 12:12:16.853758   73294 host.go:66] Checking if "default-k8s-diff-port-196710" exists ...
	I0603 12:12:16.854106   73294 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 12:12:16.854143   73294 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 12:12:16.874485   73294 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41485
	I0603 12:12:16.874543   73294 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40823
	I0603 12:12:16.875013   73294 main.go:141] libmachine: () Calling .GetVersion
	I0603 12:12:16.875431   73294 main.go:141] libmachine: () Calling .GetVersion
	I0603 12:12:16.875601   73294 main.go:141] libmachine: Using API Version  1
	I0603 12:12:16.875619   73294 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 12:12:16.875983   73294 main.go:141] libmachine: () Calling .GetMachineName
	I0603 12:12:16.875970   73294 main.go:141] libmachine: Using API Version  1
	I0603 12:12:16.876141   73294 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 12:12:16.876153   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .GetState
	I0603 12:12:16.876623   73294 main.go:141] libmachine: () Calling .GetMachineName
	I0603 12:12:16.877005   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .GetState
	I0603 12:12:16.878149   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .DriverName
	I0603 12:12:16.879857   73294 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0603 12:12:16.881339   73294 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0603 12:12:16.881357   73294 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0603 12:12:16.881384   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .GetSSHHostname
	I0603 12:12:16.883128   73294 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42307
	I0603 12:12:16.883690   73294 main.go:141] libmachine: () Calling .GetVersion
	I0603 12:12:16.883973   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .DriverName
	I0603 12:12:16.884247   73294 main.go:141] libmachine: Using API Version  1
	I0603 12:12:16.884263   73294 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 12:12:16.885697   73294 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0603 12:12:16.052190   73179 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0603 12:12:16.052208   73179 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0603 12:12:16.052226   73179 main.go:141] libmachine: (no-preload-602118) Calling .GetSSHHostname
	I0603 12:12:16.051450   73179 main.go:141] libmachine: () Calling .GetVersion
	I0603 12:12:16.053253   73179 main.go:141] libmachine: Using API Version  1
	I0603 12:12:16.053274   73179 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 12:12:16.053684   73179 main.go:141] libmachine: () Calling .GetMachineName
	I0603 12:12:16.054284   73179 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 12:12:16.054309   73179 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 12:12:16.054504   73179 main.go:141] libmachine: (no-preload-602118) DBG | domain no-preload-602118 has defined MAC address 52:54:00:ac:6c:91 in network mk-no-preload-602118
	I0603 12:12:16.054885   73179 main.go:141] libmachine: (no-preload-602118) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:6c:91", ip: ""} in network mk-no-preload-602118: {Iface:virbr2 ExpiryTime:2024-06-03 13:06:33 +0000 UTC Type:0 Mac:52:54:00:ac:6c:91 Iaid: IPaddr:192.168.50.245 Prefix:24 Hostname:no-preload-602118 Clientid:01:52:54:00:ac:6c:91}
	I0603 12:12:16.054916   73179 main.go:141] libmachine: (no-preload-602118) DBG | domain no-preload-602118 has defined IP address 192.168.50.245 and MAC address 52:54:00:ac:6c:91 in network mk-no-preload-602118
	I0603 12:12:16.055640   73179 main.go:141] libmachine: (no-preload-602118) Calling .GetSSHPort
	I0603 12:12:16.055804   73179 main.go:141] libmachine: (no-preload-602118) Calling .GetSSHKeyPath
	I0603 12:12:16.055873   73179 main.go:141] libmachine: (no-preload-602118) DBG | domain no-preload-602118 has defined MAC address 52:54:00:ac:6c:91 in network mk-no-preload-602118
	I0603 12:12:16.055952   73179 main.go:141] libmachine: (no-preload-602118) Calling .GetSSHUsername
	I0603 12:12:16.056079   73179 sshutil.go:53] new ssh client: &{IP:192.168.50.245 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19008-7755/.minikube/machines/no-preload-602118/id_rsa Username:docker}
	I0603 12:12:16.056405   73179 main.go:141] libmachine: (no-preload-602118) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:6c:91", ip: ""} in network mk-no-preload-602118: {Iface:virbr2 ExpiryTime:2024-06-03 13:06:33 +0000 UTC Type:0 Mac:52:54:00:ac:6c:91 Iaid: IPaddr:192.168.50.245 Prefix:24 Hostname:no-preload-602118 Clientid:01:52:54:00:ac:6c:91}
	I0603 12:12:16.056431   73179 main.go:141] libmachine: (no-preload-602118) DBG | domain no-preload-602118 has defined IP address 192.168.50.245 and MAC address 52:54:00:ac:6c:91 in network mk-no-preload-602118
	I0603 12:12:16.056465   73179 main.go:141] libmachine: (no-preload-602118) Calling .GetSSHPort
	I0603 12:12:16.056633   73179 main.go:141] libmachine: (no-preload-602118) Calling .GetSSHKeyPath
	I0603 12:12:16.056879   73179 main.go:141] libmachine: (no-preload-602118) Calling .GetSSHUsername
	I0603 12:12:16.057006   73179 sshutil.go:53] new ssh client: &{IP:192.168.50.245 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19008-7755/.minikube/machines/no-preload-602118/id_rsa Username:docker}
	I0603 12:12:16.072215   73179 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39537
	I0603 12:12:16.072581   73179 main.go:141] libmachine: () Calling .GetVersion
	I0603 12:12:16.072913   73179 main.go:141] libmachine: Using API Version  1
	I0603 12:12:16.072924   73179 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 12:12:16.073189   73179 main.go:141] libmachine: () Calling .GetMachineName
	I0603 12:12:16.073304   73179 main.go:141] libmachine: (no-preload-602118) Calling .GetState
	I0603 12:12:16.074771   73179 main.go:141] libmachine: (no-preload-602118) Calling .DriverName
	I0603 12:12:16.074941   73179 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0603 12:12:16.074953   73179 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0603 12:12:16.074964   73179 main.go:141] libmachine: (no-preload-602118) Calling .GetSSHHostname
	I0603 12:12:16.077122   73179 main.go:141] libmachine: (no-preload-602118) DBG | domain no-preload-602118 has defined MAC address 52:54:00:ac:6c:91 in network mk-no-preload-602118
	I0603 12:12:16.077439   73179 main.go:141] libmachine: (no-preload-602118) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:6c:91", ip: ""} in network mk-no-preload-602118: {Iface:virbr2 ExpiryTime:2024-06-03 13:06:33 +0000 UTC Type:0 Mac:52:54:00:ac:6c:91 Iaid: IPaddr:192.168.50.245 Prefix:24 Hostname:no-preload-602118 Clientid:01:52:54:00:ac:6c:91}
	I0603 12:12:16.077456   73179 main.go:141] libmachine: (no-preload-602118) DBG | domain no-preload-602118 has defined IP address 192.168.50.245 and MAC address 52:54:00:ac:6c:91 in network mk-no-preload-602118
	I0603 12:12:16.077666   73179 main.go:141] libmachine: (no-preload-602118) Calling .GetSSHPort
	I0603 12:12:16.077790   73179 main.go:141] libmachine: (no-preload-602118) Calling .GetSSHKeyPath
	I0603 12:12:16.077893   73179 main.go:141] libmachine: (no-preload-602118) Calling .GetSSHUsername
	I0603 12:12:16.078025   73179 sshutil.go:53] new ssh client: &{IP:192.168.50.245 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19008-7755/.minikube/machines/no-preload-602118/id_rsa Username:docker}
	I0603 12:12:16.204391   73179 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0603 12:12:16.224077   73179 node_ready.go:35] waiting up to 6m0s for node "no-preload-602118" to be "Ready" ...
	I0603 12:12:16.234147   73179 node_ready.go:49] node "no-preload-602118" has status "Ready":"True"
	I0603 12:12:16.234165   73179 node_ready.go:38] duration metric: took 10.052016ms for node "no-preload-602118" to be "Ready" ...
	I0603 12:12:16.234174   73179 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0603 12:12:16.239106   73179 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-602118" in "kube-system" namespace to be "Ready" ...
	I0603 12:12:16.245931   73179 pod_ready.go:92] pod "etcd-no-preload-602118" in "kube-system" namespace has status "Ready":"True"
	I0603 12:12:16.245951   73179 pod_ready.go:81] duration metric: took 6.818123ms for pod "etcd-no-preload-602118" in "kube-system" namespace to be "Ready" ...
	I0603 12:12:16.245959   73179 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-602118" in "kube-system" namespace to be "Ready" ...
	I0603 12:12:16.251349   73179 pod_ready.go:92] pod "kube-apiserver-no-preload-602118" in "kube-system" namespace has status "Ready":"True"
	I0603 12:12:16.251368   73179 pod_ready.go:81] duration metric: took 5.403445ms for pod "kube-apiserver-no-preload-602118" in "kube-system" namespace to be "Ready" ...
	I0603 12:12:16.251379   73179 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-602118" in "kube-system" namespace to be "Ready" ...
	I0603 12:12:16.259769   73179 pod_ready.go:92] pod "kube-controller-manager-no-preload-602118" in "kube-system" namespace has status "Ready":"True"
	I0603 12:12:16.259787   73179 pod_ready.go:81] duration metric: took 8.400968ms for pod "kube-controller-manager-no-preload-602118" in "kube-system" namespace to be "Ready" ...
	I0603 12:12:16.259797   73179 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-602118" in "kube-system" namespace to be "Ready" ...
	I0603 12:12:16.271311   73179 pod_ready.go:92] pod "kube-scheduler-no-preload-602118" in "kube-system" namespace has status "Ready":"True"
	I0603 12:12:16.271335   73179 pod_ready.go:81] duration metric: took 11.529418ms for pod "kube-scheduler-no-preload-602118" in "kube-system" namespace to be "Ready" ...
	I0603 12:12:16.271344   73179 pod_ready.go:38] duration metric: took 37.160711ms for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0603 12:12:16.271361   73179 api_server.go:52] waiting for apiserver process to appear ...
	I0603 12:12:16.271414   73179 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:12:16.299864   73179 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0603 12:12:16.312742   73179 api_server.go:72] duration metric: took 310.202333ms to wait for apiserver process to appear ...
	I0603 12:12:16.312769   73179 api_server.go:88] waiting for apiserver healthz status ...
	I0603 12:12:16.312789   73179 api_server.go:253] Checking apiserver healthz at https://192.168.50.245:8443/healthz ...
	I0603 12:12:16.332856   73179 api_server.go:279] https://192.168.50.245:8443/healthz returned 200:
	ok
	I0603 12:12:16.334897   73179 api_server.go:141] control plane version: v1.30.1
	I0603 12:12:16.334922   73179 api_server.go:131] duration metric: took 22.144726ms to wait for apiserver health ...
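	[editor's note] The api_server.go lines above show the health gate: an HTTPS GET against the apiserver's /healthz, where a 200 response with body "ok" is treated as healthy, followed by a version read. A minimal standalone sketch of that probe pattern follows; it is illustrative only, not minikube's implementation, and the endpoint URL plus the decision to skip TLS verification are assumptions made for the sketch.

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	// checkHealthz polls an apiserver /healthz endpoint until it returns 200 "ok"
	// or the timeout elapses. TLS verification is skipped purely for this sketch.
	func checkHealthz(url string, timeout time.Duration) error {
		client := &http.Client{
			Timeout:   5 * time.Second,
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					fmt.Printf("%s returned %d: %s\n", url, resp.StatusCode, body)
					return nil
				}
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("apiserver at %s not healthy within %s", url, timeout)
	}

	func main() {
		// Address taken from the log above; any reachable apiserver healthz URL works.
		if err := checkHealthz("https://192.168.50.245:8443/healthz", time.Minute); err != nil {
			fmt.Println(err)
		}
	}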
	I0603 12:12:16.334932   73179 system_pods.go:43] waiting for kube-system pods to appear ...
	I0603 12:12:16.354509   73179 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0603 12:12:16.377512   73179 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0603 12:12:16.377540   73179 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0603 12:12:16.428770   73179 system_pods.go:59] 4 kube-system pods found
	I0603 12:12:16.428807   73179 system_pods.go:61] "etcd-no-preload-602118" [d66d136a-b6c8-411c-b820-3e80f773accf] Running
	I0603 12:12:16.428815   73179 system_pods.go:61] "kube-apiserver-no-preload-602118" [299b92ab-ea99-45f5-b55d-b66a06daeaeb] Running
	I0603 12:12:16.428820   73179 system_pods.go:61] "kube-controller-manager-no-preload-602118" [16fd06ad-ff2d-4392-b453-9a8ed782b581] Running
	I0603 12:12:16.428825   73179 system_pods.go:61] "kube-scheduler-no-preload-602118" [71ebde21-9840-43f5-b0a3-424e1fd2000e] Running
	I0603 12:12:16.428833   73179 system_pods.go:74] duration metric: took 93.893548ms to wait for pod list to return data ...
	I0603 12:12:16.428841   73179 default_sa.go:34] waiting for default service account to be created ...
	I0603 12:12:16.438619   73179 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0603 12:12:16.438645   73179 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0603 12:12:16.495189   73179 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0603 12:12:16.495218   73179 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0603 12:12:16.543072   73179 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0603 12:12:16.666123   73179 default_sa.go:45] found service account: "default"
	I0603 12:12:16.666154   73179 default_sa.go:55] duration metric: took 237.305488ms for default service account to be created ...
	I0603 12:12:16.666163   73179 system_pods.go:116] waiting for k8s-apps to be running ...
	I0603 12:12:16.860342   73179 system_pods.go:86] 7 kube-system pods found
	I0603 12:12:16.860387   73179 system_pods.go:89] "coredns-7db6d8ff4d-5gmj5" [474da426-9414-4a30-8b19-14e555e192de] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0603 12:12:16.860401   73179 system_pods.go:89] "coredns-7db6d8ff4d-dwptw" [7a0437fe-8e83-4acc-a92a-af29bf06db93] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0603 12:12:16.860410   73179 system_pods.go:89] "etcd-no-preload-602118" [d66d136a-b6c8-411c-b820-3e80f773accf] Running
	I0603 12:12:16.860419   73179 system_pods.go:89] "kube-apiserver-no-preload-602118" [299b92ab-ea99-45f5-b55d-b66a06daeaeb] Running
	I0603 12:12:16.860427   73179 system_pods.go:89] "kube-controller-manager-no-preload-602118" [16fd06ad-ff2d-4392-b453-9a8ed782b581] Running
	I0603 12:12:16.860436   73179 system_pods.go:89] "kube-proxy-tfxkl" [d6502635-478f-443c-8186-ab0616fcf4ac] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0603 12:12:16.860443   73179 system_pods.go:89] "kube-scheduler-no-preload-602118" [71ebde21-9840-43f5-b0a3-424e1fd2000e] Running
	I0603 12:12:16.860466   73179 retry.go:31] will retry after 306.693518ms: missing components: kube-dns, kube-proxy
	I0603 12:12:17.184783   73179 system_pods.go:86] 7 kube-system pods found
	I0603 12:12:17.184828   73179 system_pods.go:89] "coredns-7db6d8ff4d-5gmj5" [474da426-9414-4a30-8b19-14e555e192de] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0603 12:12:17.184840   73179 system_pods.go:89] "coredns-7db6d8ff4d-dwptw" [7a0437fe-8e83-4acc-a92a-af29bf06db93] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0603 12:12:17.184852   73179 system_pods.go:89] "etcd-no-preload-602118" [d66d136a-b6c8-411c-b820-3e80f773accf] Running
	I0603 12:12:17.184860   73179 system_pods.go:89] "kube-apiserver-no-preload-602118" [299b92ab-ea99-45f5-b55d-b66a06daeaeb] Running
	I0603 12:12:17.184868   73179 system_pods.go:89] "kube-controller-manager-no-preload-602118" [16fd06ad-ff2d-4392-b453-9a8ed782b581] Running
	I0603 12:12:17.184880   73179 system_pods.go:89] "kube-proxy-tfxkl" [d6502635-478f-443c-8186-ab0616fcf4ac] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0603 12:12:17.184891   73179 system_pods.go:89] "kube-scheduler-no-preload-602118" [71ebde21-9840-43f5-b0a3-424e1fd2000e] Running
	I0603 12:12:17.184916   73179 retry.go:31] will retry after 329.094905ms: missing components: kube-dns, kube-proxy
	I0603 12:12:17.415182   73179 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.060631588s)
	I0603 12:12:17.415242   73179 main.go:141] libmachine: Making call to close driver server
	I0603 12:12:17.415255   73179 main.go:141] libmachine: (no-preload-602118) Calling .Close
	I0603 12:12:17.415284   73179 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.115379891s)
	I0603 12:12:17.415326   73179 main.go:141] libmachine: Making call to close driver server
	I0603 12:12:17.415336   73179 main.go:141] libmachine: (no-preload-602118) Calling .Close
	I0603 12:12:17.415714   73179 main.go:141] libmachine: (no-preload-602118) DBG | Closing plugin on server side
	I0603 12:12:17.415719   73179 main.go:141] libmachine: (no-preload-602118) DBG | Closing plugin on server side
	I0603 12:12:17.415725   73179 main.go:141] libmachine: Successfully made call to close driver server
	I0603 12:12:17.415745   73179 main.go:141] libmachine: Making call to close connection to plugin binary
	I0603 12:12:17.415751   73179 main.go:141] libmachine: Successfully made call to close driver server
	I0603 12:12:17.415779   73179 main.go:141] libmachine: Making call to close connection to plugin binary
	I0603 12:12:17.415793   73179 main.go:141] libmachine: Making call to close driver server
	I0603 12:12:17.415804   73179 main.go:141] libmachine: (no-preload-602118) Calling .Close
	I0603 12:12:17.415753   73179 main.go:141] libmachine: Making call to close driver server
	I0603 12:12:17.415859   73179 main.go:141] libmachine: (no-preload-602118) Calling .Close
	I0603 12:12:17.416049   73179 main.go:141] libmachine: Successfully made call to close driver server
	I0603 12:12:17.416063   73179 main.go:141] libmachine: Making call to close connection to plugin binary
	I0603 12:12:17.417320   73179 main.go:141] libmachine: (no-preload-602118) DBG | Closing plugin on server side
	I0603 12:12:17.417366   73179 main.go:141] libmachine: Successfully made call to close driver server
	I0603 12:12:17.417391   73179 main.go:141] libmachine: Making call to close connection to plugin binary
	I0603 12:12:17.434040   73179 main.go:141] libmachine: Making call to close driver server
	I0603 12:12:17.434072   73179 main.go:141] libmachine: (no-preload-602118) Calling .Close
	I0603 12:12:17.434410   73179 main.go:141] libmachine: (no-preload-602118) DBG | Closing plugin on server side
	I0603 12:12:17.434434   73179 main.go:141] libmachine: Successfully made call to close driver server
	I0603 12:12:17.434445   73179 main.go:141] libmachine: Making call to close connection to plugin binary
	I0603 12:12:17.527445   73179 system_pods.go:86] 8 kube-system pods found
	I0603 12:12:17.527486   73179 system_pods.go:89] "coredns-7db6d8ff4d-5gmj5" [474da426-9414-4a30-8b19-14e555e192de] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0603 12:12:17.527499   73179 system_pods.go:89] "coredns-7db6d8ff4d-dwptw" [7a0437fe-8e83-4acc-a92a-af29bf06db93] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0603 12:12:17.527508   73179 system_pods.go:89] "etcd-no-preload-602118" [d66d136a-b6c8-411c-b820-3e80f773accf] Running
	I0603 12:12:17.527516   73179 system_pods.go:89] "kube-apiserver-no-preload-602118" [299b92ab-ea99-45f5-b55d-b66a06daeaeb] Running
	I0603 12:12:17.527524   73179 system_pods.go:89] "kube-controller-manager-no-preload-602118" [16fd06ad-ff2d-4392-b453-9a8ed782b581] Running
	I0603 12:12:17.527533   73179 system_pods.go:89] "kube-proxy-tfxkl" [d6502635-478f-443c-8186-ab0616fcf4ac] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0603 12:12:17.527540   73179 system_pods.go:89] "kube-scheduler-no-preload-602118" [71ebde21-9840-43f5-b0a3-424e1fd2000e] Running
	I0603 12:12:17.527551   73179 system_pods.go:89] "storage-provisioner" [9d9e7c2b-91a9-4394-8a08-a2c076d4b42d] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0603 12:12:17.527591   73179 retry.go:31] will retry after 346.068859ms: missing components: kube-dns, kube-proxy
	I0603 12:12:17.908653   73179 system_pods.go:86] 9 kube-system pods found
	I0603 12:12:17.908695   73179 system_pods.go:89] "coredns-7db6d8ff4d-5gmj5" [474da426-9414-4a30-8b19-14e555e192de] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0603 12:12:17.908706   73179 system_pods.go:89] "coredns-7db6d8ff4d-dwptw" [7a0437fe-8e83-4acc-a92a-af29bf06db93] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0603 12:12:17.908713   73179 system_pods.go:89] "etcd-no-preload-602118" [d66d136a-b6c8-411c-b820-3e80f773accf] Running
	I0603 12:12:17.908721   73179 system_pods.go:89] "kube-apiserver-no-preload-602118" [299b92ab-ea99-45f5-b55d-b66a06daeaeb] Running
	I0603 12:12:17.908728   73179 system_pods.go:89] "kube-controller-manager-no-preload-602118" [16fd06ad-ff2d-4392-b453-9a8ed782b581] Running
	I0603 12:12:17.908736   73179 system_pods.go:89] "kube-proxy-tfxkl" [d6502635-478f-443c-8186-ab0616fcf4ac] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0603 12:12:17.908743   73179 system_pods.go:89] "kube-scheduler-no-preload-602118" [71ebde21-9840-43f5-b0a3-424e1fd2000e] Running
	I0603 12:12:17.908753   73179 system_pods.go:89] "metrics-server-569cc877fc-zpzbw" [b28cb265-532b-41ea-a242-001a85174a35] Pending
	I0603 12:12:17.908761   73179 system_pods.go:89] "storage-provisioner" [9d9e7c2b-91a9-4394-8a08-a2c076d4b42d] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0603 12:12:17.908779   73179 retry.go:31] will retry after 517.651766ms: missing components: kube-dns, kube-proxy
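	[editor's note] The system_pods/retry.go lines above poll the kube-system pod list and, while kube-dns or kube-proxy are still missing, schedule another attempt after a short, slightly randomized delay. A generic sketch of that retry-after-delay loop follows; the component check is a stand-in closure, not the real system_pods logic.

	package main

	import (
		"fmt"
		"math/rand"
		"strings"
		"time"
	)

	// retryUntil re-runs check until it reports no missing components or attempts run out,
	// sleeping a jittered delay between attempts, mirroring the "will retry after ..." lines above.
	func retryUntil(attempts int, base time.Duration, check func() []string) error {
		for i := 0; i < attempts; i++ {
			missing := check()
			if len(missing) == 0 {
				return nil
			}
			delay := base + time.Duration(rand.Int63n(int64(base)))
			fmt.Printf("will retry after %s: missing components: %s\n", delay, strings.Join(missing, ", "))
			time.Sleep(delay)
		}
		return fmt.Errorf("components still missing after %d attempts", attempts)
	}

	func main() {
		ready := false
		_ = retryUntil(5, 300*time.Millisecond, func() []string {
			if !ready {
				ready = true // pretend the pods become Ready on the second poll
				return []string{"kube-dns", "kube-proxy"}
			}
			return nil
		})
	}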
	I0603 12:12:18.135778   73179 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.592660253s)
	I0603 12:12:18.135904   73179 main.go:141] libmachine: Making call to close driver server
	I0603 12:12:18.135945   73179 main.go:141] libmachine: (no-preload-602118) Calling .Close
	I0603 12:12:18.137972   73179 main.go:141] libmachine: (no-preload-602118) DBG | Closing plugin on server side
	I0603 12:12:18.138016   73179 main.go:141] libmachine: Successfully made call to close driver server
	I0603 12:12:18.138040   73179 main.go:141] libmachine: Making call to close connection to plugin binary
	I0603 12:12:18.138060   73179 main.go:141] libmachine: Making call to close driver server
	I0603 12:12:18.138071   73179 main.go:141] libmachine: (no-preload-602118) Calling .Close
	I0603 12:12:18.138394   73179 main.go:141] libmachine: (no-preload-602118) DBG | Closing plugin on server side
	I0603 12:12:18.138435   73179 main.go:141] libmachine: Successfully made call to close driver server
	I0603 12:12:18.138452   73179 main.go:141] libmachine: Making call to close connection to plugin binary
	I0603 12:12:18.138467   73179 addons.go:475] Verifying addon metrics-server=true in "no-preload-602118"
	I0603 12:12:18.139950   73179 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0603 12:12:16.887014   73294 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0603 12:12:16.887031   73294 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0603 12:12:16.887059   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .GetSSHHostname
	I0603 12:12:16.884952   73294 main.go:141] libmachine: () Calling .GetMachineName
	I0603 12:12:16.885388   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | domain default-k8s-diff-port-196710 has defined MAC address 52:54:00:9c:61:49 in network mk-default-k8s-diff-port-196710
	I0603 12:12:16.887151   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:61:49", ip: ""} in network mk-default-k8s-diff-port-196710: {Iface:virbr4 ExpiryTime:2024-06-03 12:58:47 +0000 UTC Type:0 Mac:52:54:00:9c:61:49 Iaid: IPaddr:192.168.61.60 Prefix:24 Hostname:default-k8s-diff-port-196710 Clientid:01:52:54:00:9c:61:49}
	I0603 12:12:16.887173   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | domain default-k8s-diff-port-196710 has defined IP address 192.168.61.60 and MAC address 52:54:00:9c:61:49 in network mk-default-k8s-diff-port-196710
	I0603 12:12:16.887719   73294 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 12:12:16.887741   73294 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 12:12:16.887932   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .GetSSHPort
	I0603 12:12:16.888207   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .GetSSHKeyPath
	I0603 12:12:16.888429   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .GetSSHUsername
	I0603 12:12:16.889197   73294 sshutil.go:53] new ssh client: &{IP:192.168.61.60 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19008-7755/.minikube/machines/default-k8s-diff-port-196710/id_rsa Username:docker}
	I0603 12:12:16.891158   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | domain default-k8s-diff-port-196710 has defined MAC address 52:54:00:9c:61:49 in network mk-default-k8s-diff-port-196710
	I0603 12:12:16.891613   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:61:49", ip: ""} in network mk-default-k8s-diff-port-196710: {Iface:virbr4 ExpiryTime:2024-06-03 12:58:47 +0000 UTC Type:0 Mac:52:54:00:9c:61:49 Iaid: IPaddr:192.168.61.60 Prefix:24 Hostname:default-k8s-diff-port-196710 Clientid:01:52:54:00:9c:61:49}
	I0603 12:12:16.891639   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | domain default-k8s-diff-port-196710 has defined IP address 192.168.61.60 and MAC address 52:54:00:9c:61:49 in network mk-default-k8s-diff-port-196710
	I0603 12:12:16.891801   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .GetSSHPort
	I0603 12:12:16.891979   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .GetSSHKeyPath
	I0603 12:12:16.892107   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .GetSSHUsername
	I0603 12:12:16.892220   73294 sshutil.go:53] new ssh client: &{IP:192.168.61.60 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19008-7755/.minikube/machines/default-k8s-diff-port-196710/id_rsa Username:docker}
	I0603 12:12:16.909637   73294 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35155
	I0603 12:12:16.910191   73294 main.go:141] libmachine: () Calling .GetVersion
	I0603 12:12:16.910809   73294 main.go:141] libmachine: Using API Version  1
	I0603 12:12:16.910836   73294 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 12:12:16.911344   73294 main.go:141] libmachine: () Calling .GetMachineName
	I0603 12:12:16.911542   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .GetState
	I0603 12:12:16.913489   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .DriverName
	I0603 12:12:16.913704   73294 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0603 12:12:16.913718   73294 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0603 12:12:16.913735   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .GetSSHHostname
	I0603 12:12:16.917538   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | domain default-k8s-diff-port-196710 has defined MAC address 52:54:00:9c:61:49 in network mk-default-k8s-diff-port-196710
	I0603 12:12:16.917994   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:61:49", ip: ""} in network mk-default-k8s-diff-port-196710: {Iface:virbr4 ExpiryTime:2024-06-03 12:58:47 +0000 UTC Type:0 Mac:52:54:00:9c:61:49 Iaid: IPaddr:192.168.61.60 Prefix:24 Hostname:default-k8s-diff-port-196710 Clientid:01:52:54:00:9c:61:49}
	I0603 12:12:16.918020   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | domain default-k8s-diff-port-196710 has defined IP address 192.168.61.60 and MAC address 52:54:00:9c:61:49 in network mk-default-k8s-diff-port-196710
	I0603 12:12:16.918116   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .GetSSHPort
	I0603 12:12:16.918243   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .GetSSHKeyPath
	I0603 12:12:16.918349   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .GetSSHUsername
	I0603 12:12:16.918445   73294 sshutil.go:53] new ssh client: &{IP:192.168.61.60 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19008-7755/.minikube/machines/default-k8s-diff-port-196710/id_rsa Username:docker}
	I0603 12:12:17.046824   73294 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0603 12:12:17.064066   73294 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-196710" to be "Ready" ...
	I0603 12:12:17.084082   73294 node_ready.go:49] node "default-k8s-diff-port-196710" has status "Ready":"True"
	I0603 12:12:17.084108   73294 node_ready.go:38] duration metric: took 19.978467ms for node "default-k8s-diff-port-196710" to be "Ready" ...
	I0603 12:12:17.084116   73294 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0603 12:12:17.095774   73294 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-fvgqr" in "kube-system" namespace to be "Ready" ...
	I0603 12:12:17.168174   73294 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0603 12:12:17.168200   73294 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0603 12:12:17.200793   73294 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0603 12:12:17.203132   73294 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0603 12:12:17.245827   73294 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0603 12:12:17.245855   73294 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0603 12:12:17.310865   73294 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0603 12:12:17.310894   73294 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0603 12:12:17.449447   73294 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0603 12:12:18.385411   73294 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.184578024s)
	I0603 12:12:18.385465   73294 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.182295951s)
	I0603 12:12:18.385505   73294 main.go:141] libmachine: Making call to close driver server
	I0603 12:12:18.385520   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .Close
	I0603 12:12:18.385470   73294 main.go:141] libmachine: Making call to close driver server
	I0603 12:12:18.385562   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .Close
	I0603 12:12:18.385878   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | Closing plugin on server side
	I0603 12:12:18.385905   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | Closing plugin on server side
	I0603 12:12:18.385954   73294 main.go:141] libmachine: Successfully made call to close driver server
	I0603 12:12:18.385971   73294 main.go:141] libmachine: Making call to close connection to plugin binary
	I0603 12:12:18.385980   73294 main.go:141] libmachine: Making call to close driver server
	I0603 12:12:18.386009   73294 main.go:141] libmachine: Successfully made call to close driver server
	I0603 12:12:18.386026   73294 main.go:141] libmachine: Making call to close connection to plugin binary
	I0603 12:12:18.386035   73294 main.go:141] libmachine: Making call to close driver server
	I0603 12:12:18.386043   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .Close
	I0603 12:12:18.386094   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .Close
	I0603 12:12:18.386336   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | Closing plugin on server side
	I0603 12:12:18.386374   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | Closing plugin on server side
	I0603 12:12:18.386425   73294 main.go:141] libmachine: Successfully made call to close driver server
	I0603 12:12:18.386460   73294 main.go:141] libmachine: Making call to close connection to plugin binary
	I0603 12:12:18.387994   73294 main.go:141] libmachine: Successfully made call to close driver server
	I0603 12:12:18.388012   73294 main.go:141] libmachine: Making call to close connection to plugin binary
	I0603 12:12:18.423011   73294 main.go:141] libmachine: Making call to close driver server
	I0603 12:12:18.423058   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .Close
	I0603 12:12:18.423412   73294 main.go:141] libmachine: Successfully made call to close driver server
	I0603 12:12:18.423433   73294 main.go:141] libmachine: Making call to close connection to plugin binary
	I0603 12:12:18.423473   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | Closing plugin on server side
	I0603 12:12:18.697521   73294 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.24802602s)
	I0603 12:12:18.697564   73294 main.go:141] libmachine: Making call to close driver server
	I0603 12:12:18.697575   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .Close
	I0603 12:12:18.697960   73294 main.go:141] libmachine: Successfully made call to close driver server
	I0603 12:12:18.697982   73294 main.go:141] libmachine: Making call to close connection to plugin binary
	I0603 12:12:18.698043   73294 main.go:141] libmachine: Making call to close driver server
	I0603 12:12:18.698061   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .Close
	I0603 12:12:18.698312   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | Closing plugin on server side
	I0603 12:12:18.698391   73294 main.go:141] libmachine: Successfully made call to close driver server
	I0603 12:12:18.698408   73294 main.go:141] libmachine: Making call to close connection to plugin binary
	I0603 12:12:18.698425   73294 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-196710"
	I0603 12:12:18.700421   73294 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0603 12:12:18.698680   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | Closing plugin on server side
	I0603 12:12:18.701834   73294 addons.go:510] duration metric: took 1.880261237s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0603 12:12:19.125961   73294 pod_ready.go:92] pod "coredns-7db6d8ff4d-fvgqr" in "kube-system" namespace has status "Ready":"True"
	I0603 12:12:19.125993   73294 pod_ready.go:81] duration metric: took 2.03019096s for pod "coredns-7db6d8ff4d-fvgqr" in "kube-system" namespace to be "Ready" ...
	I0603 12:12:19.126008   73294 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-196710" in "kube-system" namespace to be "Ready" ...
	I0603 12:12:19.142691   73294 pod_ready.go:92] pod "etcd-default-k8s-diff-port-196710" in "kube-system" namespace has status "Ready":"True"
	I0603 12:12:19.142711   73294 pod_ready.go:81] duration metric: took 16.694827ms for pod "etcd-default-k8s-diff-port-196710" in "kube-system" namespace to be "Ready" ...
	I0603 12:12:19.142721   73294 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-196710" in "kube-system" namespace to be "Ready" ...
	I0603 12:12:19.166768   73294 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-196710" in "kube-system" namespace has status "Ready":"True"
	I0603 12:12:19.166793   73294 pod_ready.go:81] duration metric: took 24.064572ms for pod "kube-apiserver-default-k8s-diff-port-196710" in "kube-system" namespace to be "Ready" ...
	I0603 12:12:19.166806   73294 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-196710" in "kube-system" namespace to be "Ready" ...
	I0603 12:12:19.177902   73294 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-196710" in "kube-system" namespace has status "Ready":"True"
	I0603 12:12:19.177917   73294 pod_ready.go:81] duration metric: took 11.103943ms for pod "kube-controller-manager-default-k8s-diff-port-196710" in "kube-system" namespace to be "Ready" ...
	I0603 12:12:19.177926   73294 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-j4gzg" in "kube-system" namespace to be "Ready" ...
	I0603 12:12:19.191217   73294 pod_ready.go:92] pod "kube-proxy-j4gzg" in "kube-system" namespace has status "Ready":"True"
	I0603 12:12:19.191242   73294 pod_ready.go:81] duration metric: took 13.306857ms for pod "kube-proxy-j4gzg" in "kube-system" namespace to be "Ready" ...
	I0603 12:12:19.191255   73294 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-196710" in "kube-system" namespace to be "Ready" ...
	I0603 12:12:19.499792   73294 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-196710" in "kube-system" namespace has status "Ready":"True"
	I0603 12:12:19.499815   73294 pod_ready.go:81] duration metric: took 308.552918ms for pod "kube-scheduler-default-k8s-diff-port-196710" in "kube-system" namespace to be "Ready" ...
	I0603 12:12:19.499823   73294 pod_ready.go:38] duration metric: took 2.415698619s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0603 12:12:19.499837   73294 api_server.go:52] waiting for apiserver process to appear ...
	I0603 12:12:19.499881   73294 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:12:19.516655   73294 api_server.go:72] duration metric: took 2.695130179s to wait for apiserver process to appear ...
	I0603 12:12:19.516686   73294 api_server.go:88] waiting for apiserver healthz status ...
	I0603 12:12:19.516707   73294 api_server.go:253] Checking apiserver healthz at https://192.168.61.60:8444/healthz ...
	I0603 12:12:19.521037   73294 api_server.go:279] https://192.168.61.60:8444/healthz returned 200:
	ok
	I0603 12:12:19.521988   73294 api_server.go:141] control plane version: v1.30.1
	I0603 12:12:19.522006   73294 api_server.go:131] duration metric: took 5.313149ms to wait for apiserver health ...
	I0603 12:12:19.522015   73294 system_pods.go:43] waiting for kube-system pods to appear ...
	I0603 12:12:18.141333   73179 addons.go:510] duration metric: took 2.138708426s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0603 12:12:18.445201   73179 system_pods.go:86] 9 kube-system pods found
	I0603 12:12:18.445243   73179 system_pods.go:89] "coredns-7db6d8ff4d-5gmj5" [474da426-9414-4a30-8b19-14e555e192de] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0603 12:12:18.445255   73179 system_pods.go:89] "coredns-7db6d8ff4d-dwptw" [7a0437fe-8e83-4acc-a92a-af29bf06db93] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0603 12:12:18.445266   73179 system_pods.go:89] "etcd-no-preload-602118" [d66d136a-b6c8-411c-b820-3e80f773accf] Running
	I0603 12:12:18.445275   73179 system_pods.go:89] "kube-apiserver-no-preload-602118" [299b92ab-ea99-45f5-b55d-b66a06daeaeb] Running
	I0603 12:12:18.445282   73179 system_pods.go:89] "kube-controller-manager-no-preload-602118" [16fd06ad-ff2d-4392-b453-9a8ed782b581] Running
	I0603 12:12:18.445289   73179 system_pods.go:89] "kube-proxy-tfxkl" [d6502635-478f-443c-8186-ab0616fcf4ac] Running
	I0603 12:12:18.445296   73179 system_pods.go:89] "kube-scheduler-no-preload-602118" [71ebde21-9840-43f5-b0a3-424e1fd2000e] Running
	I0603 12:12:18.445309   73179 system_pods.go:89] "metrics-server-569cc877fc-zpzbw" [b28cb265-532b-41ea-a242-001a85174a35] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0603 12:12:18.445318   73179 system_pods.go:89] "storage-provisioner" [9d9e7c2b-91a9-4394-8a08-a2c076d4b42d] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0603 12:12:18.445347   73179 retry.go:31] will retry after 493.36636ms: missing components: kube-dns
	I0603 12:12:18.950981   73179 system_pods.go:86] 9 kube-system pods found
	I0603 12:12:18.951013   73179 system_pods.go:89] "coredns-7db6d8ff4d-5gmj5" [474da426-9414-4a30-8b19-14e555e192de] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0603 12:12:18.951022   73179 system_pods.go:89] "coredns-7db6d8ff4d-dwptw" [7a0437fe-8e83-4acc-a92a-af29bf06db93] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0603 12:12:18.951028   73179 system_pods.go:89] "etcd-no-preload-602118" [d66d136a-b6c8-411c-b820-3e80f773accf] Running
	I0603 12:12:18.951033   73179 system_pods.go:89] "kube-apiserver-no-preload-602118" [299b92ab-ea99-45f5-b55d-b66a06daeaeb] Running
	I0603 12:12:18.951071   73179 system_pods.go:89] "kube-controller-manager-no-preload-602118" [16fd06ad-ff2d-4392-b453-9a8ed782b581] Running
	I0603 12:12:18.951079   73179 system_pods.go:89] "kube-proxy-tfxkl" [d6502635-478f-443c-8186-ab0616fcf4ac] Running
	I0603 12:12:18.951085   73179 system_pods.go:89] "kube-scheduler-no-preload-602118" [71ebde21-9840-43f5-b0a3-424e1fd2000e] Running
	I0603 12:12:18.951093   73179 system_pods.go:89] "metrics-server-569cc877fc-zpzbw" [b28cb265-532b-41ea-a242-001a85174a35] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0603 12:12:18.951106   73179 system_pods.go:89] "storage-provisioner" [9d9e7c2b-91a9-4394-8a08-a2c076d4b42d] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0603 12:12:18.951123   73179 retry.go:31] will retry after 784.878622ms: missing components: kube-dns
	I0603 12:12:19.743268   73179 system_pods.go:86] 9 kube-system pods found
	I0603 12:12:19.743302   73179 system_pods.go:89] "coredns-7db6d8ff4d-5gmj5" [474da426-9414-4a30-8b19-14e555e192de] Running
	I0603 12:12:19.743310   73179 system_pods.go:89] "coredns-7db6d8ff4d-dwptw" [7a0437fe-8e83-4acc-a92a-af29bf06db93] Running
	I0603 12:12:19.743323   73179 system_pods.go:89] "etcd-no-preload-602118" [d66d136a-b6c8-411c-b820-3e80f773accf] Running
	I0603 12:12:19.743330   73179 system_pods.go:89] "kube-apiserver-no-preload-602118" [299b92ab-ea99-45f5-b55d-b66a06daeaeb] Running
	I0603 12:12:19.743337   73179 system_pods.go:89] "kube-controller-manager-no-preload-602118" [16fd06ad-ff2d-4392-b453-9a8ed782b581] Running
	I0603 12:12:19.743343   73179 system_pods.go:89] "kube-proxy-tfxkl" [d6502635-478f-443c-8186-ab0616fcf4ac] Running
	I0603 12:12:19.743349   73179 system_pods.go:89] "kube-scheduler-no-preload-602118" [71ebde21-9840-43f5-b0a3-424e1fd2000e] Running
	I0603 12:12:19.743365   73179 system_pods.go:89] "metrics-server-569cc877fc-zpzbw" [b28cb265-532b-41ea-a242-001a85174a35] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0603 12:12:19.743376   73179 system_pods.go:89] "storage-provisioner" [9d9e7c2b-91a9-4394-8a08-a2c076d4b42d] Running
	I0603 12:12:19.743388   73179 system_pods.go:126] duration metric: took 3.077217613s to wait for k8s-apps to be running ...
	I0603 12:12:19.743399   73179 system_svc.go:44] waiting for kubelet service to be running ....
	I0603 12:12:19.743440   73179 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0603 12:12:19.759127   73179 system_svc.go:56] duration metric: took 15.720008ms WaitForService to wait for kubelet
	I0603 12:12:19.759152   73179 kubeadm.go:576] duration metric: took 3.756617312s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0603 12:12:19.759177   73179 node_conditions.go:102] verifying NodePressure condition ...
	I0603 12:12:19.761858   73179 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0603 12:12:19.761876   73179 node_conditions.go:123] node cpu capacity is 2
	I0603 12:12:19.761885   73179 node_conditions.go:105] duration metric: took 2.703518ms to run NodePressure ...
	I0603 12:12:19.761894   73179 start.go:240] waiting for startup goroutines ...
	I0603 12:12:19.761901   73179 start.go:245] waiting for cluster config update ...
	I0603 12:12:19.761910   73179 start.go:254] writing updated cluster config ...
	I0603 12:12:19.762150   73179 ssh_runner.go:195] Run: rm -f paused
	I0603 12:12:19.808158   73179 start.go:600] kubectl: 1.30.1, cluster: 1.30.1 (minor skew: 0)
	I0603 12:12:19.810271   73179 out.go:177] * Done! kubectl is now configured to use "no-preload-602118" cluster and "default" namespace by default
	I0603 12:12:17.205144   73662 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0603 12:12:17.215420   73662 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0603 12:12:17.215687   73662 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0603 12:12:19.703391   73294 system_pods.go:59] 9 kube-system pods found
	I0603 12:12:19.703422   73294 system_pods.go:61] "coredns-7db6d8ff4d-fvgqr" [c908a302-8c40-46aa-9e98-92baa297a7ed] Running
	I0603 12:12:19.703428   73294 system_pods.go:61] "coredns-7db6d8ff4d-pbndv" [91d83622-9883-407e-b0f4-eb2d18cd2483] Running
	I0603 12:12:19.703434   73294 system_pods.go:61] "etcd-default-k8s-diff-port-196710" [29eaf8a6-0759-4f27-9b6e-55beeba8f955] Running
	I0603 12:12:19.703439   73294 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-196710" [7bfa3724-0917-40be-89fe-fe5c67f4fd45] Running
	I0603 12:12:19.703444   73294 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-196710" [50e0af3b-d47c-4113-be78-9cf18060b505] Running
	I0603 12:12:19.703448   73294 system_pods.go:61] "kube-proxy-j4gzg" [2e603f37-93e0-429d-97b8-e9b997c26101] Running
	I0603 12:12:19.703453   73294 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-196710" [e50842a0-71ed-4c9e-811e-9b6bda31dfd0] Running
	I0603 12:12:19.703461   73294 system_pods.go:61] "metrics-server-569cc877fc-lxvbp" [36c7a3c5-6b64-42d2-93ed-2f6cf8234a7f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0603 12:12:19.703469   73294 system_pods.go:61] "storage-provisioner" [8bc80b69-d8f9-4d6a-9bf4-4a41d875a735] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0603 12:12:19.703483   73294 system_pods.go:74] duration metric: took 181.460766ms to wait for pod list to return data ...
	I0603 12:12:19.703494   73294 default_sa.go:34] waiting for default service account to be created ...
	I0603 12:12:19.899579   73294 default_sa.go:45] found service account: "default"
	I0603 12:12:19.899607   73294 default_sa.go:55] duration metric: took 196.097132ms for default service account to be created ...
	I0603 12:12:19.899617   73294 system_pods.go:116] waiting for k8s-apps to be running ...
	I0603 12:12:20.104618   73294 system_pods.go:86] 9 kube-system pods found
	I0603 12:12:20.104648   73294 system_pods.go:89] "coredns-7db6d8ff4d-fvgqr" [c908a302-8c40-46aa-9e98-92baa297a7ed] Running
	I0603 12:12:20.104656   73294 system_pods.go:89] "coredns-7db6d8ff4d-pbndv" [91d83622-9883-407e-b0f4-eb2d18cd2483] Running
	I0603 12:12:20.104662   73294 system_pods.go:89] "etcd-default-k8s-diff-port-196710" [29eaf8a6-0759-4f27-9b6e-55beeba8f955] Running
	I0603 12:12:20.104669   73294 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-196710" [7bfa3724-0917-40be-89fe-fe5c67f4fd45] Running
	I0603 12:12:20.104676   73294 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-196710" [50e0af3b-d47c-4113-be78-9cf18060b505] Running
	I0603 12:12:20.104682   73294 system_pods.go:89] "kube-proxy-j4gzg" [2e603f37-93e0-429d-97b8-e9b997c26101] Running
	I0603 12:12:20.104690   73294 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-196710" [e50842a0-71ed-4c9e-811e-9b6bda31dfd0] Running
	I0603 12:12:20.104704   73294 system_pods.go:89] "metrics-server-569cc877fc-lxvbp" [36c7a3c5-6b64-42d2-93ed-2f6cf8234a7f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0603 12:12:20.104716   73294 system_pods.go:89] "storage-provisioner" [8bc80b69-d8f9-4d6a-9bf4-4a41d875a735] Running
	I0603 12:12:20.104733   73294 system_pods.go:126] duration metric: took 205.107424ms to wait for k8s-apps to be running ...
	I0603 12:12:20.104746   73294 system_svc.go:44] waiting for kubelet service to be running ....
	I0603 12:12:20.104794   73294 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0603 12:12:20.120345   73294 system_svc.go:56] duration metric: took 15.592236ms WaitForService to wait for kubelet
	I0603 12:12:20.120374   73294 kubeadm.go:576] duration metric: took 3.298854629s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0603 12:12:20.120398   73294 node_conditions.go:102] verifying NodePressure condition ...
	I0603 12:12:20.299539   73294 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0603 12:12:20.299565   73294 node_conditions.go:123] node cpu capacity is 2
	I0603 12:12:20.299579   73294 node_conditions.go:105] duration metric: took 179.17433ms to run NodePressure ...
	I0603 12:12:20.299593   73294 start.go:240] waiting for startup goroutines ...
	I0603 12:12:20.299602   73294 start.go:245] waiting for cluster config update ...
	I0603 12:12:20.299613   73294 start.go:254] writing updated cluster config ...
	I0603 12:12:20.299896   73294 ssh_runner.go:195] Run: rm -f paused
	I0603 12:12:20.351961   73294 start.go:600] kubectl: 1.30.1, cluster: 1.30.1 (minor skew: 0)
	I0603 12:12:20.354040   73294 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-196710" cluster and "default" namespace by default
	I0603 12:12:22.215864   73662 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0603 12:12:22.216210   73662 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0603 12:12:32.215921   73662 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0603 12:12:32.216130   73662 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0603 12:12:40.270116   72964 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (31.60882832s)
	I0603 12:12:40.270214   72964 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0603 12:12:40.288350   72964 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0603 12:12:40.298477   72964 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0603 12:12:40.308047   72964 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0603 12:12:40.308063   72964 kubeadm.go:156] found existing configuration files:
	
	I0603 12:12:40.308095   72964 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0603 12:12:40.317173   72964 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0603 12:12:40.317221   72964 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0603 12:12:40.326431   72964 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0603 12:12:40.335372   72964 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0603 12:12:40.335421   72964 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0603 12:12:40.345520   72964 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0603 12:12:40.354836   72964 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0603 12:12:40.354881   72964 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0603 12:12:40.364667   72964 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0603 12:12:40.375714   72964 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0603 12:12:40.375768   72964 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
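	[editor's note] The grep/rm sequence above checks each leftover kubeconfig for the expected control-plane endpoint and deletes any file that does not reference it before re-running kubeadm init. A minimal local sketch of that stale-config cleanup follows, assuming the same file list and endpoint; it is illustrative, not the kubeadm.go implementation, and it runs against the local filesystem rather than over SSH.

	package main

	import (
		"fmt"
		"os"
		"strings"
	)

	func main() {
		endpoint := "https://control-plane.minikube.internal:8443"
		confs := []string{
			"/etc/kubernetes/admin.conf",
			"/etc/kubernetes/kubelet.conf",
			"/etc/kubernetes/controller-manager.conf",
			"/etc/kubernetes/scheduler.conf",
		}
		for _, path := range confs {
			data, err := os.ReadFile(path)
			if err != nil || !strings.Contains(string(data), endpoint) {
				// Missing file or wrong endpoint: treat as stale and remove it,
				// matching the "may not be in ... - will remove" lines above.
				fmt.Printf("removing stale config %s\n", path)
				_ = os.Remove(path)
				continue
			}
			fmt.Printf("keeping %s\n", path)
		}
	}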
	I0603 12:12:40.387249   72964 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0603 12:12:40.587569   72964 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0603 12:12:49.228482   72964 kubeadm.go:309] [init] Using Kubernetes version: v1.30.1
	I0603 12:12:49.228556   72964 kubeadm.go:309] [preflight] Running pre-flight checks
	I0603 12:12:49.228654   72964 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0603 12:12:49.228817   72964 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0603 12:12:49.228965   72964 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0603 12:12:49.229056   72964 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0603 12:12:49.230616   72964 out.go:204]   - Generating certificates and keys ...
	I0603 12:12:49.230705   72964 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0603 12:12:49.230778   72964 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0603 12:12:49.230884   72964 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0603 12:12:49.230943   72964 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0603 12:12:49.231001   72964 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0603 12:12:49.231071   72964 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0603 12:12:49.231302   72964 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0603 12:12:49.231400   72964 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0603 12:12:49.231487   72964 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0603 12:12:49.231595   72964 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0603 12:12:49.231645   72964 kubeadm.go:309] [certs] Using the existing "sa" key
	I0603 12:12:49.231731   72964 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0603 12:12:49.231842   72964 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0603 12:12:49.231930   72964 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0603 12:12:49.232009   72964 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0603 12:12:49.232105   72964 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0603 12:12:49.232188   72964 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0603 12:12:49.232305   72964 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0603 12:12:49.232392   72964 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0603 12:12:49.234435   72964 out.go:204]   - Booting up control plane ...
	I0603 12:12:49.234513   72964 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0603 12:12:49.234592   72964 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0603 12:12:49.234680   72964 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0603 12:12:49.234803   72964 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0603 12:12:49.234936   72964 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0603 12:12:49.235006   72964 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0603 12:12:49.235182   72964 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0603 12:12:49.235283   72964 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0603 12:12:49.235361   72964 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 502.484209ms
	I0603 12:12:49.235428   72964 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0603 12:12:49.235507   72964 kubeadm.go:309] [api-check] The API server is healthy after 5.001411221s
	I0603 12:12:49.235621   72964 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0603 12:12:49.235730   72964 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0603 12:12:49.235778   72964 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0603 12:12:49.235941   72964 kubeadm.go:309] [mark-control-plane] Marking the node embed-certs-725022 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0603 12:12:49.236026   72964 kubeadm.go:309] [bootstrap-token] Using token: 0tfgxu.iied44jkidnxw3ef
	I0603 12:12:49.237200   72964 out.go:204]   - Configuring RBAC rules ...
	I0603 12:12:49.237290   72964 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0603 12:12:49.237369   72964 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0603 12:12:49.237497   72964 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0603 12:12:49.237671   72964 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0603 12:12:49.237782   72964 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0603 12:12:49.237879   72964 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0603 12:12:49.238007   72964 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0603 12:12:49.238092   72964 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0603 12:12:49.238156   72964 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0603 12:12:49.238166   72964 kubeadm.go:309] 
	I0603 12:12:49.238242   72964 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0603 12:12:49.238250   72964 kubeadm.go:309] 
	I0603 12:12:49.238351   72964 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0603 12:12:49.238359   72964 kubeadm.go:309] 
	I0603 12:12:49.238392   72964 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0603 12:12:49.238472   72964 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0603 12:12:49.238549   72964 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0603 12:12:49.238558   72964 kubeadm.go:309] 
	I0603 12:12:49.238641   72964 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0603 12:12:49.238649   72964 kubeadm.go:309] 
	I0603 12:12:49.238722   72964 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0603 12:12:49.238737   72964 kubeadm.go:309] 
	I0603 12:12:49.238810   72964 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0603 12:12:49.238874   72964 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0603 12:12:49.238931   72964 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0603 12:12:49.238937   72964 kubeadm.go:309] 
	I0603 12:12:49.239007   72964 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0603 12:12:49.239103   72964 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0603 12:12:49.239112   72964 kubeadm.go:309] 
	I0603 12:12:49.239179   72964 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token 0tfgxu.iied44jkidnxw3ef \
	I0603 12:12:49.239305   72964 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:efe6cbdd58a590fb6c3b56a05a1648145bfb13b8a8cd2383ea34b710fa987e0b \
	I0603 12:12:49.239341   72964 kubeadm.go:309] 	--control-plane 
	I0603 12:12:49.239355   72964 kubeadm.go:309] 
	I0603 12:12:49.239457   72964 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0603 12:12:49.239466   72964 kubeadm.go:309] 
	I0603 12:12:49.239574   72964 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token 0tfgxu.iied44jkidnxw3ef \
	I0603 12:12:49.239677   72964 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:efe6cbdd58a590fb6c3b56a05a1648145bfb13b8a8cd2383ea34b710fa987e0b 
	I0603 12:12:49.239688   72964 cni.go:84] Creating CNI manager for ""
	I0603 12:12:49.239694   72964 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0603 12:12:49.241096   72964 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0603 12:12:49.242158   72964 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0603 12:12:49.253535   72964 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
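The 496-byte payload copied to /etc/cni/net.d/1-k8s.conflist is not reproduced in the log. For illustration only, a typical bridge-plus-portmap conflist of the kind a bridge CNI setup writes looks roughly like the following; the subnet and plugin options here are assumptions, not the literal file from this run:

    sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
    {
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "hairpinMode": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }
    EOF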
	I0603 12:12:49.272592   72964 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0603 12:12:49.272655   72964 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:12:49.272699   72964 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-725022 minikube.k8s.io/updated_at=2024_06_03T12_12_49_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=599070631c2216ebc936292d491e4fe10e15b9d8 minikube.k8s.io/name=embed-certs-725022 minikube.k8s.io/primary=true
	I0603 12:12:49.301181   72964 ops.go:34] apiserver oom_adj: -16
	I0603 12:12:49.473931   72964 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:12:49.974552   72964 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:12:50.474107   72964 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:12:50.974508   72964 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:12:51.474202   72964 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:12:51.974903   72964 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:12:52.474722   72964 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:12:52.973981   72964 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:12:53.473979   72964 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:12:53.974372   72964 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:12:54.474057   72964 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:12:52.215684   73662 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0603 12:12:52.215951   73662 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0603 12:12:54.974299   72964 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:12:55.474704   72964 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:12:55.973998   72964 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:12:56.474351   72964 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:12:56.974942   72964 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:12:57.474651   72964 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:12:57.974575   72964 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:12:58.474054   72964 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:12:58.974928   72964 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:12:59.474724   72964 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:12:59.974538   72964 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:13:00.474341   72964 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:13:00.974134   72964 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:13:01.474970   72964 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:13:01.974549   72964 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:13:02.071778   72964 kubeadm.go:1107] duration metric: took 12.799179684s to wait for elevateKubeSystemPrivileges
	W0603 12:13:02.071819   72964 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0603 12:13:02.071826   72964 kubeadm.go:393] duration metric: took 5m13.883244188s to StartCluster
	I0603 12:13:02.071847   72964 settings.go:142] acquiring lock: {Name:mkda1bdbbfe91266270f1d999e6d56fc2830d6f8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 12:13:02.071926   72964 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19008-7755/kubeconfig
	I0603 12:13:02.073849   72964 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19008-7755/kubeconfig: {Name:mk2f45bf5da44c5570e14124ae482cac309d97b6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 12:13:02.074094   72964 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.72.245 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0603 12:13:02.075473   72964 out.go:177] * Verifying Kubernetes components...
	I0603 12:13:02.074201   72964 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0603 12:13:02.074273   72964 config.go:182] Loaded profile config "embed-certs-725022": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0603 12:13:02.076687   72964 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0603 12:13:02.076702   72964 addons.go:69] Setting default-storageclass=true in profile "embed-certs-725022"
	I0603 12:13:02.076709   72964 addons.go:69] Setting metrics-server=true in profile "embed-certs-725022"
	I0603 12:13:02.076735   72964 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-725022"
	I0603 12:13:02.076739   72964 addons.go:234] Setting addon metrics-server=true in "embed-certs-725022"
	W0603 12:13:02.076747   72964 addons.go:243] addon metrics-server should already be in state true
	I0603 12:13:02.076779   72964 host.go:66] Checking if "embed-certs-725022" exists ...
	I0603 12:13:02.077065   72964 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 12:13:02.077105   72964 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 12:13:02.077123   72964 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 12:13:02.077144   72964 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 12:13:02.076690   72964 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-725022"
	I0603 12:13:02.077321   72964 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-725022"
	W0603 12:13:02.077330   72964 addons.go:243] addon storage-provisioner should already be in state true
	I0603 12:13:02.077353   72964 host.go:66] Checking if "embed-certs-725022" exists ...
	I0603 12:13:02.077701   72964 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 12:13:02.077727   72964 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 12:13:02.093285   72964 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38087
	I0603 12:13:02.093594   72964 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41067
	I0603 12:13:02.093714   72964 main.go:141] libmachine: () Calling .GetVersion
	I0603 12:13:02.094085   72964 main.go:141] libmachine: () Calling .GetVersion
	I0603 12:13:02.094294   72964 main.go:141] libmachine: Using API Version  1
	I0603 12:13:02.094315   72964 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 12:13:02.094587   72964 main.go:141] libmachine: Using API Version  1
	I0603 12:13:02.094609   72964 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 12:13:02.094689   72964 main.go:141] libmachine: () Calling .GetMachineName
	I0603 12:13:02.094950   72964 main.go:141] libmachine: () Calling .GetMachineName
	I0603 12:13:02.095244   72964 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 12:13:02.095268   72964 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 12:13:02.095454   72964 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 12:13:02.095491   72964 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 12:13:02.096441   72964 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39221
	I0603 12:13:02.097030   72964 main.go:141] libmachine: () Calling .GetVersion
	I0603 12:13:02.097568   72964 main.go:141] libmachine: Using API Version  1
	I0603 12:13:02.097590   72964 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 12:13:02.097931   72964 main.go:141] libmachine: () Calling .GetMachineName
	I0603 12:13:02.098114   72964 main.go:141] libmachine: (embed-certs-725022) Calling .GetState
	I0603 12:13:02.101980   72964 addons.go:234] Setting addon default-storageclass=true in "embed-certs-725022"
	W0603 12:13:02.102004   72964 addons.go:243] addon default-storageclass should already be in state true
	I0603 12:13:02.102030   72964 host.go:66] Checking if "embed-certs-725022" exists ...
	I0603 12:13:02.102405   72964 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 12:13:02.102443   72964 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 12:13:02.110825   72964 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44273
	I0603 12:13:02.111295   72964 main.go:141] libmachine: () Calling .GetVersion
	I0603 12:13:02.111721   72964 main.go:141] libmachine: Using API Version  1
	I0603 12:13:02.111743   72964 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 12:13:02.112109   72964 main.go:141] libmachine: () Calling .GetMachineName
	I0603 12:13:02.112287   72964 main.go:141] libmachine: (embed-certs-725022) Calling .GetState
	I0603 12:13:02.112969   72964 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46567
	I0603 12:13:02.113391   72964 main.go:141] libmachine: () Calling .GetVersion
	I0603 12:13:02.113883   72964 main.go:141] libmachine: Using API Version  1
	I0603 12:13:02.113898   72964 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 12:13:02.113960   72964 main.go:141] libmachine: (embed-certs-725022) Calling .DriverName
	I0603 12:13:02.115733   72964 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0603 12:13:02.114328   72964 main.go:141] libmachine: () Calling .GetMachineName
	I0603 12:13:02.116913   72964 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0603 12:13:02.116925   72964 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0603 12:13:02.116937   72964 main.go:141] libmachine: (embed-certs-725022) Calling .GetSSHHostname
	I0603 12:13:02.117042   72964 main.go:141] libmachine: (embed-certs-725022) Calling .GetState
	I0603 12:13:02.119310   72964 main.go:141] libmachine: (embed-certs-725022) Calling .DriverName
	I0603 12:13:02.119549   72964 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45585
	I0603 12:13:02.120720   72964 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0603 12:13:02.119998   72964 main.go:141] libmachine: () Calling .GetVersion
	I0603 12:13:02.120276   72964 main.go:141] libmachine: (embed-certs-725022) DBG | domain embed-certs-725022 has defined MAC address 52:54:00:ba:41:8c in network mk-embed-certs-725022
	I0603 12:13:02.122038   72964 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0603 12:13:02.122054   72964 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0603 12:13:02.122072   72964 main.go:141] libmachine: (embed-certs-725022) Calling .GetSSHHostname
	I0603 12:13:02.120815   72964 main.go:141] libmachine: (embed-certs-725022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:41:8c", ip: ""} in network mk-embed-certs-725022: {Iface:virbr3 ExpiryTime:2024-06-03 12:58:10 +0000 UTC Type:0 Mac:52:54:00:ba:41:8c Iaid: IPaddr:192.168.72.245 Prefix:24 Hostname:embed-certs-725022 Clientid:01:52:54:00:ba:41:8c}
	I0603 12:13:02.122134   72964 main.go:141] libmachine: (embed-certs-725022) DBG | domain embed-certs-725022 has defined IP address 192.168.72.245 and MAC address 52:54:00:ba:41:8c in network mk-embed-certs-725022
	I0603 12:13:02.120873   72964 main.go:141] libmachine: (embed-certs-725022) Calling .GetSSHPort
	I0603 12:13:02.121231   72964 main.go:141] libmachine: Using API Version  1
	I0603 12:13:02.122186   72964 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 12:13:02.122623   72964 main.go:141] libmachine: () Calling .GetMachineName
	I0603 12:13:02.122637   72964 main.go:141] libmachine: (embed-certs-725022) Calling .GetSSHKeyPath
	I0603 12:13:02.122823   72964 main.go:141] libmachine: (embed-certs-725022) Calling .GetSSHUsername
	I0603 12:13:02.123306   72964 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 12:13:02.123365   72964 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 12:13:02.123751   72964 sshutil.go:53] new ssh client: &{IP:192.168.72.245 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19008-7755/.minikube/machines/embed-certs-725022/id_rsa Username:docker}
	I0603 12:13:02.125086   72964 main.go:141] libmachine: (embed-certs-725022) DBG | domain embed-certs-725022 has defined MAC address 52:54:00:ba:41:8c in network mk-embed-certs-725022
	I0603 12:13:02.125450   72964 main.go:141] libmachine: (embed-certs-725022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:41:8c", ip: ""} in network mk-embed-certs-725022: {Iface:virbr3 ExpiryTime:2024-06-03 12:58:10 +0000 UTC Type:0 Mac:52:54:00:ba:41:8c Iaid: IPaddr:192.168.72.245 Prefix:24 Hostname:embed-certs-725022 Clientid:01:52:54:00:ba:41:8c}
	I0603 12:13:02.125474   72964 main.go:141] libmachine: (embed-certs-725022) DBG | domain embed-certs-725022 has defined IP address 192.168.72.245 and MAC address 52:54:00:ba:41:8c in network mk-embed-certs-725022
	I0603 12:13:02.125627   72964 main.go:141] libmachine: (embed-certs-725022) Calling .GetSSHPort
	I0603 12:13:02.125863   72964 main.go:141] libmachine: (embed-certs-725022) Calling .GetSSHKeyPath
	I0603 12:13:02.126050   72964 main.go:141] libmachine: (embed-certs-725022) Calling .GetSSHUsername
	I0603 12:13:02.126199   72964 sshutil.go:53] new ssh client: &{IP:192.168.72.245 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19008-7755/.minikube/machines/embed-certs-725022/id_rsa Username:docker}
	I0603 12:13:02.140680   72964 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38775
	I0603 12:13:02.141121   72964 main.go:141] libmachine: () Calling .GetVersion
	I0603 12:13:02.141624   72964 main.go:141] libmachine: Using API Version  1
	I0603 12:13:02.141649   72964 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 12:13:02.142002   72964 main.go:141] libmachine: () Calling .GetMachineName
	I0603 12:13:02.142377   72964 main.go:141] libmachine: (embed-certs-725022) Calling .GetState
	I0603 12:13:02.144249   72964 main.go:141] libmachine: (embed-certs-725022) Calling .DriverName
	I0603 12:13:02.144453   72964 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0603 12:13:02.144469   72964 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0603 12:13:02.144486   72964 main.go:141] libmachine: (embed-certs-725022) Calling .GetSSHHostname
	I0603 12:13:02.147627   72964 main.go:141] libmachine: (embed-certs-725022) DBG | domain embed-certs-725022 has defined MAC address 52:54:00:ba:41:8c in network mk-embed-certs-725022
	I0603 12:13:02.148109   72964 main.go:141] libmachine: (embed-certs-725022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:41:8c", ip: ""} in network mk-embed-certs-725022: {Iface:virbr3 ExpiryTime:2024-06-03 12:58:10 +0000 UTC Type:0 Mac:52:54:00:ba:41:8c Iaid: IPaddr:192.168.72.245 Prefix:24 Hostname:embed-certs-725022 Clientid:01:52:54:00:ba:41:8c}
	I0603 12:13:02.148129   72964 main.go:141] libmachine: (embed-certs-725022) DBG | domain embed-certs-725022 has defined IP address 192.168.72.245 and MAC address 52:54:00:ba:41:8c in network mk-embed-certs-725022
	I0603 12:13:02.148304   72964 main.go:141] libmachine: (embed-certs-725022) Calling .GetSSHPort
	I0603 12:13:02.148486   72964 main.go:141] libmachine: (embed-certs-725022) Calling .GetSSHKeyPath
	I0603 12:13:02.148604   72964 main.go:141] libmachine: (embed-certs-725022) Calling .GetSSHUsername
	I0603 12:13:02.148741   72964 sshutil.go:53] new ssh client: &{IP:192.168.72.245 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19008-7755/.minikube/machines/embed-certs-725022/id_rsa Username:docker}
	I0603 12:13:02.304095   72964 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0603 12:13:02.338638   72964 node_ready.go:35] waiting up to 6m0s for node "embed-certs-725022" to be "Ready" ...
	I0603 12:13:02.347843   72964 node_ready.go:49] node "embed-certs-725022" has status "Ready":"True"
	I0603 12:13:02.347872   72964 node_ready.go:38] duration metric: took 9.197667ms for node "embed-certs-725022" to be "Ready" ...
	I0603 12:13:02.347885   72964 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0603 12:13:02.353074   72964 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-4gbj2" in "kube-system" namespace to be "Ready" ...
	I0603 12:13:02.437841   72964 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0603 12:13:02.477856   72964 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0603 12:13:02.477876   72964 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0603 12:13:02.487138   72964 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0603 12:13:02.530568   72964 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0603 12:13:02.530591   72964 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0603 12:13:02.606906   72964 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0603 12:13:02.606933   72964 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0603 12:13:02.708268   72964 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0603 12:13:03.372809   72964 main.go:141] libmachine: Making call to close driver server
	I0603 12:13:03.372886   72964 main.go:141] libmachine: (embed-certs-725022) Calling .Close
	I0603 12:13:03.372924   72964 main.go:141] libmachine: Making call to close driver server
	I0603 12:13:03.372982   72964 main.go:141] libmachine: (embed-certs-725022) Calling .Close
	I0603 12:13:03.373369   72964 main.go:141] libmachine: Successfully made call to close driver server
	I0603 12:13:03.373457   72964 main.go:141] libmachine: Making call to close connection to plugin binary
	I0603 12:13:03.373472   72964 main.go:141] libmachine: Making call to close driver server
	I0603 12:13:03.373480   72964 main.go:141] libmachine: (embed-certs-725022) Calling .Close
	I0603 12:13:03.373412   72964 main.go:141] libmachine: Successfully made call to close driver server
	I0603 12:13:03.373510   72964 main.go:141] libmachine: Making call to close connection to plugin binary
	I0603 12:13:03.373522   72964 main.go:141] libmachine: Making call to close driver server
	I0603 12:13:03.373533   72964 main.go:141] libmachine: (embed-certs-725022) Calling .Close
	I0603 12:13:03.373417   72964 main.go:141] libmachine: (embed-certs-725022) DBG | Closing plugin on server side
	I0603 12:13:03.373431   72964 main.go:141] libmachine: (embed-certs-725022) DBG | Closing plugin on server side
	I0603 12:13:03.373858   72964 main.go:141] libmachine: Successfully made call to close driver server
	I0603 12:13:03.373873   72964 main.go:141] libmachine: Making call to close connection to plugin binary
	I0603 12:13:03.374065   72964 main.go:141] libmachine: (embed-certs-725022) DBG | Closing plugin on server side
	I0603 12:13:03.374087   72964 main.go:141] libmachine: Successfully made call to close driver server
	I0603 12:13:03.374093   72964 main.go:141] libmachine: Making call to close connection to plugin binary
	I0603 12:13:03.374168   72964 main.go:141] libmachine: (embed-certs-725022) DBG | Closing plugin on server side
	I0603 12:13:03.404799   72964 main.go:141] libmachine: Making call to close driver server
	I0603 12:13:03.404825   72964 main.go:141] libmachine: (embed-certs-725022) Calling .Close
	I0603 12:13:03.405101   72964 main.go:141] libmachine: (embed-certs-725022) DBG | Closing plugin on server side
	I0603 12:13:03.405101   72964 main.go:141] libmachine: Successfully made call to close driver server
	I0603 12:13:03.405125   72964 main.go:141] libmachine: Making call to close connection to plugin binary
	I0603 12:13:03.855630   72964 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.147319188s)
	I0603 12:13:03.855683   72964 main.go:141] libmachine: Making call to close driver server
	I0603 12:13:03.855700   72964 main.go:141] libmachine: (embed-certs-725022) Calling .Close
	I0603 12:13:03.856046   72964 main.go:141] libmachine: (embed-certs-725022) DBG | Closing plugin on server side
	I0603 12:13:03.856085   72964 main.go:141] libmachine: Successfully made call to close driver server
	I0603 12:13:03.856099   72964 main.go:141] libmachine: Making call to close connection to plugin binary
	I0603 12:13:03.856108   72964 main.go:141] libmachine: Making call to close driver server
	I0603 12:13:03.856119   72964 main.go:141] libmachine: (embed-certs-725022) Calling .Close
	I0603 12:13:03.856408   72964 main.go:141] libmachine: Successfully made call to close driver server
	I0603 12:13:03.856426   72964 main.go:141] libmachine: Making call to close connection to plugin binary
	I0603 12:13:03.856436   72964 addons.go:475] Verifying addon metrics-server=true in "embed-certs-725022"
	I0603 12:13:03.858229   72964 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0603 12:13:03.859384   72964 addons.go:510] duration metric: took 1.785186744s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
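At this point the metrics-server manifests have been applied but the pod is still Pending (see the system_pods listing further down). A few manual spot checks from the host, assuming the kubeconfig and context written above for this profile:

    kubectl --context embed-certs-725022 -n kube-system get deploy metrics-server
    kubectl --context embed-certs-725022 -n kube-system get pods | grep metrics-server
    # works only once the metrics-server pod is Ready and serving metrics:
    kubectl --context embed-certs-725022 top nodes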
	I0603 12:13:04.360708   72964 pod_ready.go:102] pod "coredns-7db6d8ff4d-4gbj2" in "kube-system" namespace has status "Ready":"False"
	I0603 12:13:04.860041   72964 pod_ready.go:92] pod "coredns-7db6d8ff4d-4gbj2" in "kube-system" namespace has status "Ready":"True"
	I0603 12:13:04.860064   72964 pod_ready.go:81] duration metric: took 2.506957346s for pod "coredns-7db6d8ff4d-4gbj2" in "kube-system" namespace to be "Ready" ...
	I0603 12:13:04.860077   72964 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-x9fw5" in "kube-system" namespace to be "Ready" ...
	I0603 12:13:04.864947   72964 pod_ready.go:92] pod "coredns-7db6d8ff4d-x9fw5" in "kube-system" namespace has status "Ready":"True"
	I0603 12:13:04.864967   72964 pod_ready.go:81] duration metric: took 4.883476ms for pod "coredns-7db6d8ff4d-x9fw5" in "kube-system" namespace to be "Ready" ...
	I0603 12:13:04.864975   72964 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-725022" in "kube-system" namespace to be "Ready" ...
	I0603 12:13:04.869979   72964 pod_ready.go:92] pod "etcd-embed-certs-725022" in "kube-system" namespace has status "Ready":"True"
	I0603 12:13:04.870000   72964 pod_ready.go:81] duration metric: took 5.018776ms for pod "etcd-embed-certs-725022" in "kube-system" namespace to be "Ready" ...
	I0603 12:13:04.870012   72964 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-725022" in "kube-system" namespace to be "Ready" ...
	I0603 12:13:04.875292   72964 pod_ready.go:92] pod "kube-apiserver-embed-certs-725022" in "kube-system" namespace has status "Ready":"True"
	I0603 12:13:04.875309   72964 pod_ready.go:81] duration metric: took 5.289101ms for pod "kube-apiserver-embed-certs-725022" in "kube-system" namespace to be "Ready" ...
	I0603 12:13:04.875317   72964 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-725022" in "kube-system" namespace to be "Ready" ...
	I0603 12:13:04.883604   72964 pod_ready.go:92] pod "kube-controller-manager-embed-certs-725022" in "kube-system" namespace has status "Ready":"True"
	I0603 12:13:04.883619   72964 pod_ready.go:81] duration metric: took 8.297056ms for pod "kube-controller-manager-embed-certs-725022" in "kube-system" namespace to be "Ready" ...
	I0603 12:13:04.883627   72964 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-7qp6h" in "kube-system" namespace to be "Ready" ...
	I0603 12:13:05.257971   72964 pod_ready.go:92] pod "kube-proxy-7qp6h" in "kube-system" namespace has status "Ready":"True"
	I0603 12:13:05.257994   72964 pod_ready.go:81] duration metric: took 374.360354ms for pod "kube-proxy-7qp6h" in "kube-system" namespace to be "Ready" ...
	I0603 12:13:05.258003   72964 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-725022" in "kube-system" namespace to be "Ready" ...
	I0603 12:13:05.657811   72964 pod_ready.go:92] pod "kube-scheduler-embed-certs-725022" in "kube-system" namespace has status "Ready":"True"
	I0603 12:13:05.657838   72964 pod_ready.go:81] duration metric: took 399.828323ms for pod "kube-scheduler-embed-certs-725022" in "kube-system" namespace to be "Ready" ...
	I0603 12:13:05.657849   72964 pod_ready.go:38] duration metric: took 3.309954137s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0603 12:13:05.657866   72964 api_server.go:52] waiting for apiserver process to appear ...
	I0603 12:13:05.657920   72964 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:13:05.673837   72964 api_server.go:72] duration metric: took 3.599705436s to wait for apiserver process to appear ...
	I0603 12:13:05.673858   72964 api_server.go:88] waiting for apiserver healthz status ...
	I0603 12:13:05.673876   72964 api_server.go:253] Checking apiserver healthz at https://192.168.72.245:8443/healthz ...
	I0603 12:13:05.679549   72964 api_server.go:279] https://192.168.72.245:8443/healthz returned 200:
	ok
	I0603 12:13:05.680688   72964 api_server.go:141] control plane version: v1.30.1
	I0603 12:13:05.680709   72964 api_server.go:131] duration metric: took 6.844232ms to wait for apiserver health ...
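The healthz probe above can be reproduced by hand; -k skips TLS verification since the API server certificate is issued by the cluster's own CA:

    curl -k https://192.168.72.245:8443/healthz
    # expected response body: ok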
	I0603 12:13:05.680717   72964 system_pods.go:43] waiting for kube-system pods to appear ...
	I0603 12:13:05.861416   72964 system_pods.go:59] 9 kube-system pods found
	I0603 12:13:05.861452   72964 system_pods.go:61] "coredns-7db6d8ff4d-4gbj2" [0e46c731-84e4-4cb2-8125-2b61c10916a3] Running
	I0603 12:13:05.861459   72964 system_pods.go:61] "coredns-7db6d8ff4d-x9fw5" [1ed6c0e0-2d13-410f-bdf1-6620fb2503ed] Running
	I0603 12:13:05.861469   72964 system_pods.go:61] "etcd-embed-certs-725022" [7c8767c0-ca82-495c-92fa-759b698ebd0f] Running
	I0603 12:13:05.861475   72964 system_pods.go:61] "kube-apiserver-embed-certs-725022" [fe019ffc-5b0c-4271-a9dd-830262d1edd9] Running
	I0603 12:13:05.861479   72964 system_pods.go:61] "kube-controller-manager-embed-certs-725022" [8bde2240-7021-4ab7-9e51-2a7b921c4bf1] Running
	I0603 12:13:05.861483   72964 system_pods.go:61] "kube-proxy-7qp6h" [7869cd1d-785d-401d-aceb-854cffd63d73] Running
	I0603 12:13:05.861489   72964 system_pods.go:61] "kube-scheduler-embed-certs-725022" [ff93e1d0-8bb2-4026-b9d2-1710dd9f18b7] Running
	I0603 12:13:05.861497   72964 system_pods.go:61] "metrics-server-569cc877fc-jgmbs" [148d8ece-e094-4df9-989a-1bc59a33b7ca] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0603 12:13:05.861504   72964 system_pods.go:61] "storage-provisioner" [cde9aa2d-6a26-4f83-b5df-ae24b22df27a] Running
	I0603 12:13:05.861515   72964 system_pods.go:74] duration metric: took 180.791789ms to wait for pod list to return data ...
	I0603 12:13:05.861526   72964 default_sa.go:34] waiting for default service account to be created ...
	I0603 12:13:06.058059   72964 default_sa.go:45] found service account: "default"
	I0603 12:13:06.058088   72964 default_sa.go:55] duration metric: took 196.551592ms for default service account to be created ...
	I0603 12:13:06.058100   72964 system_pods.go:116] waiting for k8s-apps to be running ...
	I0603 12:13:06.261793   72964 system_pods.go:86] 9 kube-system pods found
	I0603 12:13:06.261828   72964 system_pods.go:89] "coredns-7db6d8ff4d-4gbj2" [0e46c731-84e4-4cb2-8125-2b61c10916a3] Running
	I0603 12:13:06.261835   72964 system_pods.go:89] "coredns-7db6d8ff4d-x9fw5" [1ed6c0e0-2d13-410f-bdf1-6620fb2503ed] Running
	I0603 12:13:06.261840   72964 system_pods.go:89] "etcd-embed-certs-725022" [7c8767c0-ca82-495c-92fa-759b698ebd0f] Running
	I0603 12:13:06.261846   72964 system_pods.go:89] "kube-apiserver-embed-certs-725022" [fe019ffc-5b0c-4271-a9dd-830262d1edd9] Running
	I0603 12:13:06.261853   72964 system_pods.go:89] "kube-controller-manager-embed-certs-725022" [8bde2240-7021-4ab7-9e51-2a7b921c4bf1] Running
	I0603 12:13:06.261860   72964 system_pods.go:89] "kube-proxy-7qp6h" [7869cd1d-785d-401d-aceb-854cffd63d73] Running
	I0603 12:13:06.261866   72964 system_pods.go:89] "kube-scheduler-embed-certs-725022" [ff93e1d0-8bb2-4026-b9d2-1710dd9f18b7] Running
	I0603 12:13:06.261877   72964 system_pods.go:89] "metrics-server-569cc877fc-jgmbs" [148d8ece-e094-4df9-989a-1bc59a33b7ca] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0603 12:13:06.261888   72964 system_pods.go:89] "storage-provisioner" [cde9aa2d-6a26-4f83-b5df-ae24b22df27a] Running
	I0603 12:13:06.261898   72964 system_pods.go:126] duration metric: took 203.791167ms to wait for k8s-apps to be running ...
	I0603 12:13:06.261910   72964 system_svc.go:44] waiting for kubelet service to be running ....
	I0603 12:13:06.261965   72964 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0603 12:13:06.277270   72964 system_svc.go:56] duration metric: took 15.351048ms WaitForService to wait for kubelet
	I0603 12:13:06.277313   72964 kubeadm.go:576] duration metric: took 4.203172406s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0603 12:13:06.277333   72964 node_conditions.go:102] verifying NodePressure condition ...
	I0603 12:13:06.458480   72964 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0603 12:13:06.458508   72964 node_conditions.go:123] node cpu capacity is 2
	I0603 12:13:06.458519   72964 node_conditions.go:105] duration metric: took 181.181522ms to run NodePressure ...
	I0603 12:13:06.458530   72964 start.go:240] waiting for startup goroutines ...
	I0603 12:13:06.458536   72964 start.go:245] waiting for cluster config update ...
	I0603 12:13:06.458546   72964 start.go:254] writing updated cluster config ...
	I0603 12:13:06.458796   72964 ssh_runner.go:195] Run: rm -f paused
	I0603 12:13:06.511692   72964 start.go:600] kubectl: 1.30.1, cluster: 1.30.1 (minor skew: 0)
	I0603 12:13:06.513617   72964 out.go:177] * Done! kubectl is now configured to use "embed-certs-725022" cluster and "default" namespace by default
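With the embed-certs-725022 start complete, a quick sanity check from the host (the kubectl context name matches the profile, as noted in the line above):

    kubectl --context embed-certs-725022 get nodes -o wide
    kubectl --context embed-certs-725022 get pods -A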
	I0603 12:13:32.215819   73662 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0603 12:13:32.216031   73662 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0603 12:13:32.216075   73662 kubeadm.go:309] 
	I0603 12:13:32.216149   73662 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0603 12:13:32.216254   73662 kubeadm.go:309] 		timed out waiting for the condition
	I0603 12:13:32.216284   73662 kubeadm.go:309] 
	I0603 12:13:32.216349   73662 kubeadm.go:309] 	This error is likely caused by:
	I0603 12:13:32.216394   73662 kubeadm.go:309] 		- The kubelet is not running
	I0603 12:13:32.216554   73662 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0603 12:13:32.216577   73662 kubeadm.go:309] 
	I0603 12:13:32.216688   73662 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0603 12:13:32.216722   73662 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0603 12:13:32.216764   73662 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0603 12:13:32.216773   73662 kubeadm.go:309] 
	I0603 12:13:32.216888   73662 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0603 12:13:32.217006   73662 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0603 12:13:32.217031   73662 kubeadm.go:309] 
	I0603 12:13:32.217165   73662 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0603 12:13:32.217278   73662 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0603 12:13:32.217412   73662 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0603 12:13:32.217594   73662 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0603 12:13:32.217618   73662 kubeadm.go:309] 
	I0603 12:13:32.218376   73662 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0603 12:13:32.218449   73662 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0603 12:13:32.218578   73662 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	W0603 12:13:32.218719   73662 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
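Unlike the v1.30.1 start above, this second process (pid 73662, Kubernetes v1.20.0) fails in wait-control-plane because the kubelet never answers on 127.0.0.1:10248; minikube resets and retries the init below. The troubleshooting commands kubeadm suggests, collected into one sketch to run inside the affected VM (reachable with 'minikube ssh -p <profile>'; the profile name for this pid does not appear in this part of the log):

    sudo systemctl status kubelet
    sudo journalctl -xeu kubelet --no-pager | tail -n 100
    # list control-plane containers under CRI-O and inspect the failing one:
    sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause
    sudo crictl --runtime-endpoint /var/run/crio/crio.sock logs <CONTAINERID>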
	
	I0603 12:13:32.218776   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0603 12:13:32.678357   73662 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0603 12:13:32.693276   73662 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0603 12:13:32.702964   73662 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0603 12:13:32.702986   73662 kubeadm.go:156] found existing configuration files:
	
	I0603 12:13:32.703025   73662 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0603 12:13:32.712508   73662 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0603 12:13:32.712555   73662 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0603 12:13:32.722219   73662 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0603 12:13:32.731648   73662 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0603 12:13:32.731702   73662 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0603 12:13:32.741195   73662 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0603 12:13:32.750711   73662 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0603 12:13:32.750764   73662 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0603 12:13:32.760654   73662 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0603 12:13:32.769838   73662 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0603 12:13:32.769881   73662 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0603 12:13:32.780973   73662 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0603 12:13:32.850830   73662 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0603 12:13:32.850883   73662 kubeadm.go:309] [preflight] Running pre-flight checks
	I0603 12:13:32.999201   73662 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0603 12:13:32.999328   73662 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0603 12:13:32.999428   73662 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0603 12:13:33.184771   73662 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0603 12:13:33.187327   73662 out.go:204]   - Generating certificates and keys ...
	I0603 12:13:33.187398   73662 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0603 12:13:33.187487   73662 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0603 12:13:33.187586   73662 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0603 12:13:33.187682   73662 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0603 12:13:33.187788   73662 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0603 12:13:33.187887   73662 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0603 12:13:33.187981   73662 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0603 12:13:33.188107   73662 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0603 12:13:33.188522   73662 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0603 12:13:33.188801   73662 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0603 12:13:33.188880   73662 kubeadm.go:309] [certs] Using the existing "sa" key
	I0603 12:13:33.188991   73662 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0603 12:13:33.334289   73662 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0603 12:13:33.523806   73662 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0603 12:13:33.699531   73662 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0603 12:13:33.750555   73662 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0603 12:13:33.769976   73662 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0603 12:13:33.770924   73662 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0603 12:13:33.770986   73662 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0603 12:13:33.921095   73662 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0603 12:13:33.923915   73662 out.go:204]   - Booting up control plane ...
	I0603 12:13:33.924071   73662 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0603 12:13:33.930998   73662 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0603 12:13:33.934088   73662 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0603 12:13:33.935783   73662 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0603 12:13:33.939727   73662 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0603 12:14:13.940542   73662 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0603 12:14:13.940993   73662 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0603 12:14:13.941324   73662 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0603 12:14:18.941485   73662 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0603 12:14:18.941730   73662 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0603 12:14:28.942021   73662 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0603 12:14:28.942229   73662 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0603 12:14:48.942823   73662 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0603 12:14:48.943115   73662 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0603 12:15:28.944455   73662 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0603 12:15:28.944758   73662 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0603 12:15:28.944781   73662 kubeadm.go:309] 
	I0603 12:15:28.944835   73662 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0603 12:15:28.944914   73662 kubeadm.go:309] 		timed out waiting for the condition
	I0603 12:15:28.944925   73662 kubeadm.go:309] 
	I0603 12:15:28.944965   73662 kubeadm.go:309] 	This error is likely caused by:
	I0603 12:15:28.945008   73662 kubeadm.go:309] 		- The kubelet is not running
	I0603 12:15:28.945152   73662 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0603 12:15:28.945168   73662 kubeadm.go:309] 
	I0603 12:15:28.945322   73662 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0603 12:15:28.945378   73662 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0603 12:15:28.945423   73662 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0603 12:15:28.945433   73662 kubeadm.go:309] 
	I0603 12:15:28.945568   73662 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0603 12:15:28.945695   73662 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0603 12:15:28.945717   73662 kubeadm.go:309] 
	I0603 12:15:28.945883   73662 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0603 12:15:28.946014   73662 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0603 12:15:28.946123   73662 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0603 12:15:28.946234   73662 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0603 12:15:28.946263   73662 kubeadm.go:309] 
	I0603 12:15:28.947236   73662 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0603 12:15:28.947323   73662 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0603 12:15:28.947455   73662 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	I0603 12:15:28.947531   73662 kubeadm.go:393] duration metric: took 7m57.88734097s to StartCluster
	I0603 12:15:28.947585   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 12:15:28.947638   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 12:15:28.993664   73662 cri.go:89] found id: ""
	I0603 12:15:28.993694   73662 logs.go:276] 0 containers: []
	W0603 12:15:28.993705   73662 logs.go:278] No container was found matching "kube-apiserver"
	I0603 12:15:28.993712   73662 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 12:15:28.993774   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 12:15:29.030686   73662 cri.go:89] found id: ""
	I0603 12:15:29.030720   73662 logs.go:276] 0 containers: []
	W0603 12:15:29.030730   73662 logs.go:278] No container was found matching "etcd"
	I0603 12:15:29.030738   73662 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 12:15:29.030803   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 12:15:29.067047   73662 cri.go:89] found id: ""
	I0603 12:15:29.067076   73662 logs.go:276] 0 containers: []
	W0603 12:15:29.067086   73662 logs.go:278] No container was found matching "coredns"
	I0603 12:15:29.067092   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 12:15:29.067154   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 12:15:29.107392   73662 cri.go:89] found id: ""
	I0603 12:15:29.107416   73662 logs.go:276] 0 containers: []
	W0603 12:15:29.107424   73662 logs.go:278] No container was found matching "kube-scheduler"
	I0603 12:15:29.107430   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 12:15:29.107483   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 12:15:29.159886   73662 cri.go:89] found id: ""
	I0603 12:15:29.159916   73662 logs.go:276] 0 containers: []
	W0603 12:15:29.159925   73662 logs.go:278] No container was found matching "kube-proxy"
	I0603 12:15:29.159934   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 12:15:29.159994   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 12:15:29.195187   73662 cri.go:89] found id: ""
	I0603 12:15:29.195218   73662 logs.go:276] 0 containers: []
	W0603 12:15:29.195229   73662 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 12:15:29.195236   73662 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 12:15:29.195295   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 12:15:29.233622   73662 cri.go:89] found id: ""
	I0603 12:15:29.233648   73662 logs.go:276] 0 containers: []
	W0603 12:15:29.233656   73662 logs.go:278] No container was found matching "kindnet"
	I0603 12:15:29.233662   73662 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 12:15:29.233717   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 12:15:29.272849   73662 cri.go:89] found id: ""
	I0603 12:15:29.272874   73662 logs.go:276] 0 containers: []
	W0603 12:15:29.272882   73662 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 12:15:29.272891   73662 logs.go:123] Gathering logs for CRI-O ...
	I0603 12:15:29.272901   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 12:15:29.383220   73662 logs.go:123] Gathering logs for container status ...
	I0603 12:15:29.383256   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 12:15:29.424045   73662 logs.go:123] Gathering logs for kubelet ...
	I0603 12:15:29.424076   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 12:15:29.475712   73662 logs.go:123] Gathering logs for dmesg ...
	I0603 12:15:29.475743   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 12:15:29.489841   73662 logs.go:123] Gathering logs for describe nodes ...
	I0603 12:15:29.489868   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 12:15:29.572988   73662 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	W0603 12:15:29.573030   73662 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0603 12:15:29.573068   73662 out.go:239] * 
	W0603 12:15:29.573117   73662 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0603 12:15:29.573138   73662 out.go:239] * 
	W0603 12:15:29.573869   73662 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0603 12:15:29.577458   73662 out.go:177] 
	W0603 12:15:29.578659   73662 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0603 12:15:29.578700   73662 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0603 12:15:29.578716   73662 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0603 12:15:29.580176   73662 out.go:177] 
	
	
	==> CRI-O <==
	Jun 03 12:15:31 old-k8s-version-905554 crio[644]: time="2024-06-03 12:15:31.396047239Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1717416931396026382,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=7384828a-47a9-4944-bb25-b79c4ce50918 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 03 12:15:31 old-k8s-version-905554 crio[644]: time="2024-06-03 12:15:31.396706597Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=079f398c-439c-40f8-835e-6bf4b489c96a name=/runtime.v1.RuntimeService/ListContainers
	Jun 03 12:15:31 old-k8s-version-905554 crio[644]: time="2024-06-03 12:15:31.396773907Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=079f398c-439c-40f8-835e-6bf4b489c96a name=/runtime.v1.RuntimeService/ListContainers
	Jun 03 12:15:31 old-k8s-version-905554 crio[644]: time="2024-06-03 12:15:31.396808977Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=079f398c-439c-40f8-835e-6bf4b489c96a name=/runtime.v1.RuntimeService/ListContainers
	Jun 03 12:15:31 old-k8s-version-905554 crio[644]: time="2024-06-03 12:15:31.430110074Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=1b98b49d-d71d-4e48-9c97-6f3f6fbdc13d name=/runtime.v1.RuntimeService/Version
	Jun 03 12:15:31 old-k8s-version-905554 crio[644]: time="2024-06-03 12:15:31.430241294Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=1b98b49d-d71d-4e48-9c97-6f3f6fbdc13d name=/runtime.v1.RuntimeService/Version
	Jun 03 12:15:31 old-k8s-version-905554 crio[644]: time="2024-06-03 12:15:31.438999868Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=21d96298-8cd0-439e-85c3-2ba52ad929c7 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 03 12:15:31 old-k8s-version-905554 crio[644]: time="2024-06-03 12:15:31.439477975Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1717416931439449902,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=21d96298-8cd0-439e-85c3-2ba52ad929c7 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 03 12:15:31 old-k8s-version-905554 crio[644]: time="2024-06-03 12:15:31.440082410Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=57b2ae12-7f54-4e46-a072-3b3c96bf5a33 name=/runtime.v1.RuntimeService/ListContainers
	Jun 03 12:15:31 old-k8s-version-905554 crio[644]: time="2024-06-03 12:15:31.440130952Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=57b2ae12-7f54-4e46-a072-3b3c96bf5a33 name=/runtime.v1.RuntimeService/ListContainers
	Jun 03 12:15:31 old-k8s-version-905554 crio[644]: time="2024-06-03 12:15:31.440167908Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=57b2ae12-7f54-4e46-a072-3b3c96bf5a33 name=/runtime.v1.RuntimeService/ListContainers
	Jun 03 12:15:31 old-k8s-version-905554 crio[644]: time="2024-06-03 12:15:31.475237773Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=dc019e58-4217-4207-9936-319b5eb9f320 name=/runtime.v1.RuntimeService/Version
	Jun 03 12:15:31 old-k8s-version-905554 crio[644]: time="2024-06-03 12:15:31.475344705Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=dc019e58-4217-4207-9936-319b5eb9f320 name=/runtime.v1.RuntimeService/Version
	Jun 03 12:15:31 old-k8s-version-905554 crio[644]: time="2024-06-03 12:15:31.476800152Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=1ad3163e-2a24-4d0f-bc3b-bbf8264a5fed name=/runtime.v1.ImageService/ImageFsInfo
	Jun 03 12:15:31 old-k8s-version-905554 crio[644]: time="2024-06-03 12:15:31.477261538Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1717416931477176829,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=1ad3163e-2a24-4d0f-bc3b-bbf8264a5fed name=/runtime.v1.ImageService/ImageFsInfo
	Jun 03 12:15:31 old-k8s-version-905554 crio[644]: time="2024-06-03 12:15:31.477837927Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=6980ea6a-fe30-4f2e-8cc9-d0ec75d539e6 name=/runtime.v1.RuntimeService/ListContainers
	Jun 03 12:15:31 old-k8s-version-905554 crio[644]: time="2024-06-03 12:15:31.477905684Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=6980ea6a-fe30-4f2e-8cc9-d0ec75d539e6 name=/runtime.v1.RuntimeService/ListContainers
	Jun 03 12:15:31 old-k8s-version-905554 crio[644]: time="2024-06-03 12:15:31.477953075Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=6980ea6a-fe30-4f2e-8cc9-d0ec75d539e6 name=/runtime.v1.RuntimeService/ListContainers
	Jun 03 12:15:31 old-k8s-version-905554 crio[644]: time="2024-06-03 12:15:31.513751190Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=4a1d6dbf-352c-40ac-9ca6-fcd5f9eac35f name=/runtime.v1.RuntimeService/Version
	Jun 03 12:15:31 old-k8s-version-905554 crio[644]: time="2024-06-03 12:15:31.513883061Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=4a1d6dbf-352c-40ac-9ca6-fcd5f9eac35f name=/runtime.v1.RuntimeService/Version
	Jun 03 12:15:31 old-k8s-version-905554 crio[644]: time="2024-06-03 12:15:31.515045347Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=1954a769-f85f-4384-bc42-f0465a13a844 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 03 12:15:31 old-k8s-version-905554 crio[644]: time="2024-06-03 12:15:31.515567087Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1717416931515543929,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=1954a769-f85f-4384-bc42-f0465a13a844 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 03 12:15:31 old-k8s-version-905554 crio[644]: time="2024-06-03 12:15:31.517668185Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=3aec4337-a4d0-43c0-9b90-4a907b692edf name=/runtime.v1.RuntimeService/ListContainers
	Jun 03 12:15:31 old-k8s-version-905554 crio[644]: time="2024-06-03 12:15:31.517767356Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=3aec4337-a4d0-43c0-9b90-4a907b692edf name=/runtime.v1.RuntimeService/ListContainers
	Jun 03 12:15:31 old-k8s-version-905554 crio[644]: time="2024-06-03 12:15:31.517807113Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=3aec4337-a4d0-43c0-9b90-4a907b692edf name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Jun 3 12:07] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.067618] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.055262] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.836862] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.470521] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.722215] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000008] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.941743] systemd-fstab-generator[567]: Ignoring "noauto" option for root device
	[  +0.062404] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.063439] systemd-fstab-generator[579]: Ignoring "noauto" option for root device
	[  +0.196803] systemd-fstab-generator[593]: Ignoring "noauto" option for root device
	[  +0.150293] systemd-fstab-generator[605]: Ignoring "noauto" option for root device
	[  +0.306355] systemd-fstab-generator[630]: Ignoring "noauto" option for root device
	[  +6.730916] systemd-fstab-generator[834]: Ignoring "noauto" option for root device
	[  +0.064798] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.734692] systemd-fstab-generator[960]: Ignoring "noauto" option for root device
	[ +12.126331] kauditd_printk_skb: 46 callbacks suppressed
	[Jun 3 12:11] systemd-fstab-generator[5043]: Ignoring "noauto" option for root device
	[Jun 3 12:13] systemd-fstab-generator[5319]: Ignoring "noauto" option for root device
	[  +0.071630] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> kernel <==
	 12:15:31 up 8 min,  0 users,  load average: 0.07, 0.13, 0.08
	Linux old-k8s-version-905554 5.10.207 #1 SMP Wed May 22 22:17:16 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Jun 03 12:15:29 old-k8s-version-905554 kubelet[5499]:         /usr/local/go/src/net/lookup.go:299 +0x685
	Jun 03 12:15:29 old-k8s-version-905554 kubelet[5499]: net.(*Resolver).internetAddrList(0x70c5740, 0x4f7fe40, 0xc000c1d0e0, 0x48ab5d6, 0x3, 0xc000bd14a0, 0x24, 0x0, 0x0, 0x0, ...)
	Jun 03 12:15:29 old-k8s-version-905554 kubelet[5499]:         /usr/local/go/src/net/ipsock.go:280 +0x4d4
	Jun 03 12:15:29 old-k8s-version-905554 kubelet[5499]: net.(*Resolver).resolveAddrList(0x70c5740, 0x4f7fe40, 0xc000c1d0e0, 0x48abf6d, 0x4, 0x48ab5d6, 0x3, 0xc000bd14a0, 0x24, 0x0, ...)
	Jun 03 12:15:29 old-k8s-version-905554 kubelet[5499]:         /usr/local/go/src/net/dial.go:221 +0x47d
	Jun 03 12:15:29 old-k8s-version-905554 kubelet[5499]: net.(*Dialer).DialContext(0xc000ac8300, 0x4f7fe00, 0xc000120018, 0x48ab5d6, 0x3, 0xc000bd14a0, 0x24, 0x0, 0x0, 0x0, ...)
	Jun 03 12:15:29 old-k8s-version-905554 kubelet[5499]:         /usr/local/go/src/net/dial.go:403 +0x22b
	Jun 03 12:15:29 old-k8s-version-905554 kubelet[5499]: k8s.io/kubernetes/vendor/k8s.io/client-go/util/connrotation.(*Dialer).DialContext(0xc000acb260, 0x4f7fe00, 0xc000120018, 0x48ab5d6, 0x3, 0xc000bd14a0, 0x24, 0x60, 0x7f6912cee820, 0x118, ...)
	Jun 03 12:15:29 old-k8s-version-905554 kubelet[5499]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/util/connrotation/connrotation.go:73 +0x7e
	Jun 03 12:15:29 old-k8s-version-905554 kubelet[5499]: net/http.(*Transport).dial(0xc000359900, 0x4f7fe00, 0xc000120018, 0x48ab5d6, 0x3, 0xc000bd14a0, 0x24, 0x0, 0x0, 0x0, ...)
	Jun 03 12:15:29 old-k8s-version-905554 kubelet[5499]:         /usr/local/go/src/net/http/transport.go:1141 +0x1fd
	Jun 03 12:15:29 old-k8s-version-905554 kubelet[5499]: net/http.(*Transport).dialConn(0xc000359900, 0x4f7fe00, 0xc000120018, 0x0, 0xc000bf1320, 0x5, 0xc000bd14a0, 0x24, 0x0, 0xc000bf2a20, ...)
	Jun 03 12:15:29 old-k8s-version-905554 kubelet[5499]:         /usr/local/go/src/net/http/transport.go:1575 +0x1abb
	Jun 03 12:15:29 old-k8s-version-905554 kubelet[5499]: net/http.(*Transport).dialConnFor(0xc000359900, 0xc000b89d90)
	Jun 03 12:15:29 old-k8s-version-905554 kubelet[5499]:         /usr/local/go/src/net/http/transport.go:1421 +0xc6
	Jun 03 12:15:29 old-k8s-version-905554 kubelet[5499]: created by net/http.(*Transport).queueForDial
	Jun 03 12:15:29 old-k8s-version-905554 kubelet[5499]:         /usr/local/go/src/net/http/transport.go:1390 +0x40f
	Jun 03 12:15:29 old-k8s-version-905554 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 20.
	Jun 03 12:15:29 old-k8s-version-905554 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Jun 03 12:15:29 old-k8s-version-905554 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Jun 03 12:15:29 old-k8s-version-905554 kubelet[5564]: I0603 12:15:29.872859    5564 server.go:416] Version: v1.20.0
	Jun 03 12:15:29 old-k8s-version-905554 kubelet[5564]: I0603 12:15:29.873268    5564 server.go:837] Client rotation is on, will bootstrap in background
	Jun 03 12:15:29 old-k8s-version-905554 kubelet[5564]: I0603 12:15:29.875324    5564 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Jun 03 12:15:29 old-k8s-version-905554 kubelet[5564]: W0603 12:15:29.876934    5564 manager.go:159] Cannot detect current cgroup on cgroup v2
	Jun 03 12:15:29 old-k8s-version-905554 kubelet[5564]: I0603 12:15:29.877007    5564 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-905554 -n old-k8s-version-905554
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-905554 -n old-k8s-version-905554: exit status 2 (236.83801ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-905554" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (753.07s)
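The wait-control-plane timeout above reduces to the kubelet never answering on http://localhost:10248/healthz, and the kubelet journal ends with "Cannot detect current cgroup on cgroup v2" and a restart counter of 20. As a manual-reproduction sketch only (the commands follow the suggestions printed in the log itself; it is assumed the old-k8s-version-905554 profile and its VM are still running, and the start flags shown are illustrative rather than the exact ones the test used):

	# Inspect the crash-looping kubelet on the node, as kubeadm suggests
	minikube -p old-k8s-version-905554 ssh -- sudo systemctl status kubelet
	minikube -p old-k8s-version-905554 ssh -- sudo journalctl -xeu kubelet | tail -n 100
	# Check whether CRI-O ever started any control-plane containers
	minikube -p old-k8s-version-905554 ssh -- sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a
	# Retry with the cgroup-driver override named in minikube's own suggestion
	minikube start -p old-k8s-version-905554 --driver=kvm2 --container-runtime=crio --kubernetes-version=v1.20.0 --extra-config=kubelet.cgroup-driver=systemd

If the retry succeeds, that would point at the v1.20.0 kubelet defaulting to the cgroupfs driver on a cgroup v2 guest rather than at a flake in the test itself.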

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (544.94s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:329: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:274: ***** TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-602118 -n no-preload-602118
start_stop_delete_test.go:274: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2024-06-03 12:21:20.347446838 +0000 UTC m=+6180.251866714
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
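For a manual check of the same condition this test polls for, a sketch (assuming the kubectl context name no-preload-602118 matches the minikube profile, as minikube normally arranges) would be:

	# Same namespace and label selector the test waits on
	kubectl --context no-preload-602118 -n kubernetes-dashboard get pods -l k8s-app=kubernetes-dashboard -o wide
	# Or mirror the test's wait directly
	kubectl --context no-preload-602118 -n kubernetes-dashboard wait --for=condition=ready pod -l k8s-app=kubernetes-dashboard --timeout=9m

An empty result would suggest the dashboard addon was never re-enabled after the stop/start, rather than the pods merely being slow to become Ready.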
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-602118 -n no-preload-602118
helpers_test.go:244: <<< TestStartStop/group/no-preload/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/no-preload/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-602118 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p no-preload-602118 logs -n 25: (2.479666607s)
helpers_test.go:252: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p calico-034991 sudo cat                              | calico-034991                | jenkins | v1.33.1 | 03 Jun 24 11:58 UTC | 03 Jun 24 11:58 UTC |
	|         | /etc/containerd/config.toml                            |                              |         |         |                     |                     |
	| ssh     | -p calico-034991 sudo                                  | calico-034991                | jenkins | v1.33.1 | 03 Jun 24 11:58 UTC | 03 Jun 24 11:58 UTC |
	|         | containerd config dump                                 |                              |         |         |                     |                     |
	| ssh     | -p calico-034991 sudo                                  | calico-034991                | jenkins | v1.33.1 | 03 Jun 24 11:58 UTC | 03 Jun 24 11:58 UTC |
	|         | systemctl status crio --all                            |                              |         |         |                     |                     |
	|         | --full --no-pager                                      |                              |         |         |                     |                     |
	| ssh     | -p calico-034991 sudo                                  | calico-034991                | jenkins | v1.33.1 | 03 Jun 24 11:58 UTC | 03 Jun 24 11:58 UTC |
	|         | systemctl cat crio --no-pager                          |                              |         |         |                     |                     |
	| ssh     | -p calico-034991 sudo find                             | calico-034991                | jenkins | v1.33.1 | 03 Jun 24 11:58 UTC | 03 Jun 24 11:58 UTC |
	|         | /etc/crio -type f -exec sh -c                          |                              |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                   |                              |         |         |                     |                     |
	| ssh     | -p calico-034991 sudo crio                             | calico-034991                | jenkins | v1.33.1 | 03 Jun 24 11:58 UTC | 03 Jun 24 11:58 UTC |
	|         | config                                                 |                              |         |         |                     |                     |
	| delete  | -p calico-034991                                       | calico-034991                | jenkins | v1.33.1 | 03 Jun 24 11:58 UTC | 03 Jun 24 11:58 UTC |
	| delete  | -p                                                     | disable-driver-mounts-231568 | jenkins | v1.33.1 | 03 Jun 24 11:58 UTC | 03 Jun 24 11:58 UTC |
	|         | disable-driver-mounts-231568                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-196710 | jenkins | v1.33.1 | 03 Jun 24 11:58 UTC | 03 Jun 24 11:59 UTC |
	|         | default-k8s-diff-port-196710                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-725022            | embed-certs-725022           | jenkins | v1.33.1 | 03 Jun 24 11:59 UTC | 03 Jun 24 11:59 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-725022                                  | embed-certs-725022           | jenkins | v1.33.1 | 03 Jun 24 11:59 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-602118             | no-preload-602118            | jenkins | v1.33.1 | 03 Jun 24 11:59 UTC | 03 Jun 24 11:59 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-602118                                   | no-preload-602118            | jenkins | v1.33.1 | 03 Jun 24 11:59 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-196710  | default-k8s-diff-port-196710 | jenkins | v1.33.1 | 03 Jun 24 11:59 UTC | 03 Jun 24 11:59 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-196710 | jenkins | v1.33.1 | 03 Jun 24 11:59 UTC |                     |
	|         | default-k8s-diff-port-196710                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-905554        | old-k8s-version-905554       | jenkins | v1.33.1 | 03 Jun 24 12:01 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-725022                 | embed-certs-725022           | jenkins | v1.33.1 | 03 Jun 24 12:01 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-725022                                  | embed-certs-725022           | jenkins | v1.33.1 | 03 Jun 24 12:01 UTC | 03 Jun 24 12:13 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.1                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-602118                  | no-preload-602118            | jenkins | v1.33.1 | 03 Jun 24 12:01 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-602118                                   | no-preload-602118            | jenkins | v1.33.1 | 03 Jun 24 12:02 UTC | 03 Jun 24 12:12 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.1                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-196710       | default-k8s-diff-port-196710 | jenkins | v1.33.1 | 03 Jun 24 12:02 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-196710 | jenkins | v1.33.1 | 03 Jun 24 12:02 UTC | 03 Jun 24 12:12 UTC |
	|         | default-k8s-diff-port-196710                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.1                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-905554                              | old-k8s-version-905554       | jenkins | v1.33.1 | 03 Jun 24 12:02 UTC | 03 Jun 24 12:02 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-905554             | old-k8s-version-905554       | jenkins | v1.33.1 | 03 Jun 24 12:02 UTC | 03 Jun 24 12:03 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-905554                              | old-k8s-version-905554       | jenkins | v1.33.1 | 03 Jun 24 12:03 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/06/03 12:03:00
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0603 12:03:00.091233   73662 out.go:291] Setting OutFile to fd 1 ...
	I0603 12:03:00.091511   73662 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0603 12:03:00.091522   73662 out.go:304] Setting ErrFile to fd 2...
	I0603 12:03:00.091533   73662 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0603 12:03:00.091747   73662 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19008-7755/.minikube/bin
	I0603 12:03:00.092302   73662 out.go:298] Setting JSON to false
	I0603 12:03:00.093203   73662 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":6325,"bootTime":1717409855,"procs":200,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1060-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0603 12:03:00.093258   73662 start.go:139] virtualization: kvm guest
	I0603 12:03:00.095496   73662 out.go:177] * [old-k8s-version-905554] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0603 12:03:00.097136   73662 out.go:177]   - MINIKUBE_LOCATION=19008
	I0603 12:03:00.097143   73662 notify.go:220] Checking for updates...
	I0603 12:03:00.098729   73662 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0603 12:03:00.100123   73662 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19008-7755/kubeconfig
	I0603 12:03:00.101401   73662 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19008-7755/.minikube
	I0603 12:03:00.102776   73662 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0603 12:03:00.104123   73662 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0603 12:03:00.105823   73662 config.go:182] Loaded profile config "old-k8s-version-905554": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0603 12:03:00.106265   73662 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 12:03:00.106313   73662 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 12:03:00.120941   73662 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43635
	I0603 12:03:00.121275   73662 main.go:141] libmachine: () Calling .GetVersion
	I0603 12:03:00.121783   73662 main.go:141] libmachine: Using API Version  1
	I0603 12:03:00.121807   73662 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 12:03:00.122090   73662 main.go:141] libmachine: () Calling .GetMachineName
	I0603 12:03:00.122253   73662 main.go:141] libmachine: (old-k8s-version-905554) Calling .DriverName
	I0603 12:03:00.124037   73662 out.go:177] * Kubernetes 1.30.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.1
	I0603 12:03:00.125329   73662 driver.go:392] Setting default libvirt URI to qemu:///system
	I0603 12:03:00.125608   73662 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 12:03:00.125644   73662 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 12:03:00.139840   73662 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46571
	I0603 12:03:00.140215   73662 main.go:141] libmachine: () Calling .GetVersion
	I0603 12:03:00.140599   73662 main.go:141] libmachine: Using API Version  1
	I0603 12:03:00.140623   73662 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 12:03:00.140906   73662 main.go:141] libmachine: () Calling .GetMachineName
	I0603 12:03:00.141069   73662 main.go:141] libmachine: (old-k8s-version-905554) Calling .DriverName
	I0603 12:03:00.174375   73662 out.go:177] * Using the kvm2 driver based on existing profile
	I0603 12:03:00.175650   73662 start.go:297] selected driver: kvm2
	I0603 12:03:00.175667   73662 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-905554 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-905554 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.155 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0603 12:03:00.175770   73662 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0603 12:03:00.176396   73662 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0603 12:03:00.176476   73662 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19008-7755/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0603 12:03:00.191380   73662 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0603 12:03:00.191738   73662 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0603 12:03:00.191796   73662 cni.go:84] Creating CNI manager for ""
	I0603 12:03:00.191809   73662 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0603 12:03:00.191847   73662 start.go:340] cluster config:
	{Name:old-k8s-version-905554 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-905554 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.155 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0603 12:03:00.191975   73662 iso.go:125] acquiring lock: {Name:mkdc8e745fc6a0fd8e502f6ad2510510ae9abf27 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0603 12:03:00.193899   73662 out.go:177] * Starting "old-k8s-version-905554" primary control-plane node in "old-k8s-version-905554" cluster
	I0603 12:03:04.175308   72964 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.245:22: connect: no route to host
	I0603 12:03:00.195191   73662 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0603 12:03:00.195231   73662 preload.go:147] Found local preload: /home/jenkins/minikube-integration/19008-7755/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0603 12:03:00.195240   73662 cache.go:56] Caching tarball of preloaded images
	I0603 12:03:00.195331   73662 preload.go:173] Found /home/jenkins/minikube-integration/19008-7755/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0603 12:03:00.195345   73662 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0603 12:03:00.195441   73662 profile.go:143] Saving config to /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/old-k8s-version-905554/config.json ...
	I0603 12:03:00.195620   73662 start.go:360] acquireMachinesLock for old-k8s-version-905554: {Name:mk7389fd163f37e28f7d4d842bab151f7e27bc7c Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0603 12:03:07.247321   72964 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.245:22: connect: no route to host
	I0603 12:03:13.327307   72964 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.245:22: connect: no route to host
	I0603 12:03:16.399349   72964 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.245:22: connect: no route to host
	I0603 12:03:22.479291   72964 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.245:22: connect: no route to host
	I0603 12:03:25.551304   72964 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.245:22: connect: no route to host
	I0603 12:03:31.631290   72964 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.245:22: connect: no route to host
	I0603 12:03:34.703297   72964 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.245:22: connect: no route to host
	I0603 12:03:40.783313   72964 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.245:22: connect: no route to host
	I0603 12:03:43.855312   72964 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.245:22: connect: no route to host
	I0603 12:03:49.935253   72964 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.245:22: connect: no route to host
	I0603 12:03:53.007321   72964 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.245:22: connect: no route to host
	I0603 12:03:59.087310   72964 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.245:22: connect: no route to host
	I0603 12:04:02.159408   72964 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.245:22: connect: no route to host
	I0603 12:04:08.239374   72964 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.245:22: connect: no route to host
	I0603 12:04:11.311346   72964 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.245:22: connect: no route to host
	I0603 12:04:17.391313   72964 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.245:22: connect: no route to host
	I0603 12:04:20.463280   72964 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.245:22: connect: no route to host
	I0603 12:04:26.543359   72964 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.245:22: connect: no route to host
	I0603 12:04:29.615273   72964 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.245:22: connect: no route to host
	I0603 12:04:35.695325   72964 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.245:22: connect: no route to host
	I0603 12:04:38.767328   72964 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.245:22: connect: no route to host
	I0603 12:04:44.847321   72964 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.245:22: connect: no route to host
	I0603 12:04:47.919323   72964 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.245:22: connect: no route to host
	I0603 12:04:53.999275   72964 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.245:22: connect: no route to host
	I0603 12:04:57.071278   72964 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.245:22: connect: no route to host
	I0603 12:05:03.151359   72964 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.245:22: connect: no route to host
	I0603 12:05:06.223409   72964 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.245:22: connect: no route to host
	I0603 12:05:12.303278   72964 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.245:22: connect: no route to host
	I0603 12:05:15.375349   72964 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.245:22: connect: no route to host
	I0603 12:05:21.455288   72964 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.245:22: connect: no route to host
	I0603 12:05:24.527374   72964 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.245:22: connect: no route to host
	I0603 12:05:30.607297   72964 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.245:22: connect: no route to host
	I0603 12:05:33.679325   72964 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.245:22: connect: no route to host
	I0603 12:05:39.759247   72964 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.245:22: connect: no route to host
	I0603 12:05:42.831304   72964 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.245:22: connect: no route to host
	I0603 12:05:48.911327   72964 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.245:22: connect: no route to host
	I0603 12:05:51.983403   72964 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.245:22: connect: no route to host
	I0603 12:05:58.063364   72964 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.245:22: connect: no route to host
	I0603 12:06:01.135268   72964 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.245:22: connect: no route to host
	I0603 12:06:07.215311   72964 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.245:22: connect: no route to host
	I0603 12:06:10.287358   72964 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.245:22: connect: no route to host
	I0603 12:06:16.367324   72964 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.245:22: connect: no route to host
	I0603 12:06:19.439350   72964 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.245:22: connect: no route to host
	I0603 12:06:22.443361   73179 start.go:364] duration metric: took 4m16.965076383s to acquireMachinesLock for "no-preload-602118"
	I0603 12:06:22.443417   73179 start.go:96] Skipping create...Using existing machine configuration
	I0603 12:06:22.443423   73179 fix.go:54] fixHost starting: 
	I0603 12:06:22.443783   73179 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 12:06:22.443812   73179 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 12:06:22.458838   73179 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35011
	I0603 12:06:22.459247   73179 main.go:141] libmachine: () Calling .GetVersion
	I0603 12:06:22.459645   73179 main.go:141] libmachine: Using API Version  1
	I0603 12:06:22.459662   73179 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 12:06:22.459991   73179 main.go:141] libmachine: () Calling .GetMachineName
	I0603 12:06:22.460181   73179 main.go:141] libmachine: (no-preload-602118) Calling .DriverName
	I0603 12:06:22.460333   73179 main.go:141] libmachine: (no-preload-602118) Calling .GetState
	I0603 12:06:22.461743   73179 fix.go:112] recreateIfNeeded on no-preload-602118: state=Stopped err=<nil>
	I0603 12:06:22.461765   73179 main.go:141] libmachine: (no-preload-602118) Calling .DriverName
	W0603 12:06:22.461946   73179 fix.go:138] unexpected machine state, will restart: <nil>
	I0603 12:06:22.463492   73179 out.go:177] * Restarting existing kvm2 VM for "no-preload-602118" ...
	I0603 12:06:22.440994   72964 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0603 12:06:22.441029   72964 main.go:141] libmachine: (embed-certs-725022) Calling .GetMachineName
	I0603 12:06:22.441366   72964 buildroot.go:166] provisioning hostname "embed-certs-725022"
	I0603 12:06:22.441382   72964 main.go:141] libmachine: (embed-certs-725022) Calling .GetMachineName
	I0603 12:06:22.441594   72964 main.go:141] libmachine: (embed-certs-725022) Calling .GetSSHHostname
	I0603 12:06:22.443211   72964 machine.go:97] duration metric: took 4m37.428820472s to provisionDockerMachine
	I0603 12:06:22.443252   72964 fix.go:56] duration metric: took 4m37.449227063s for fixHost
	I0603 12:06:22.443258   72964 start.go:83] releasing machines lock for "embed-certs-725022", held for 4m37.449246886s
	W0603 12:06:22.443279   72964 start.go:713] error starting host: provision: host is not running
	W0603 12:06:22.443377   72964 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0603 12:06:22.443391   72964 start.go:728] Will try again in 5 seconds ...
	I0603 12:06:22.464734   73179 main.go:141] libmachine: (no-preload-602118) Calling .Start
	I0603 12:06:22.464901   73179 main.go:141] libmachine: (no-preload-602118) Ensuring networks are active...
	I0603 12:06:22.465632   73179 main.go:141] libmachine: (no-preload-602118) Ensuring network default is active
	I0603 12:06:22.465908   73179 main.go:141] libmachine: (no-preload-602118) Ensuring network mk-no-preload-602118 is active
	I0603 12:06:22.466273   73179 main.go:141] libmachine: (no-preload-602118) Getting domain xml...
	I0603 12:06:22.466923   73179 main.go:141] libmachine: (no-preload-602118) Creating domain...
	I0603 12:06:23.644255   73179 main.go:141] libmachine: (no-preload-602118) Waiting to get IP...
	I0603 12:06:23.645290   73179 main.go:141] libmachine: (no-preload-602118) DBG | domain no-preload-602118 has defined MAC address 52:54:00:ac:6c:91 in network mk-no-preload-602118
	I0603 12:06:23.645661   73179 main.go:141] libmachine: (no-preload-602118) DBG | unable to find current IP address of domain no-preload-602118 in network mk-no-preload-602118
	I0603 12:06:23.645846   73179 main.go:141] libmachine: (no-preload-602118) DBG | I0603 12:06:23.645673   74346 retry.go:31] will retry after 270.126449ms: waiting for machine to come up
	I0603 12:06:23.917313   73179 main.go:141] libmachine: (no-preload-602118) DBG | domain no-preload-602118 has defined MAC address 52:54:00:ac:6c:91 in network mk-no-preload-602118
	I0603 12:06:23.917691   73179 main.go:141] libmachine: (no-preload-602118) DBG | unable to find current IP address of domain no-preload-602118 in network mk-no-preload-602118
	I0603 12:06:23.917724   73179 main.go:141] libmachine: (no-preload-602118) DBG | I0603 12:06:23.917635   74346 retry.go:31] will retry after 385.827167ms: waiting for machine to come up
	I0603 12:06:24.305342   73179 main.go:141] libmachine: (no-preload-602118) DBG | domain no-preload-602118 has defined MAC address 52:54:00:ac:6c:91 in network mk-no-preload-602118
	I0603 12:06:24.305787   73179 main.go:141] libmachine: (no-preload-602118) DBG | unable to find current IP address of domain no-preload-602118 in network mk-no-preload-602118
	I0603 12:06:24.305809   73179 main.go:141] libmachine: (no-preload-602118) DBG | I0603 12:06:24.305756   74346 retry.go:31] will retry after 361.435978ms: waiting for machine to come up
	I0603 12:06:24.669132   73179 main.go:141] libmachine: (no-preload-602118) DBG | domain no-preload-602118 has defined MAC address 52:54:00:ac:6c:91 in network mk-no-preload-602118
	I0603 12:06:24.669489   73179 main.go:141] libmachine: (no-preload-602118) DBG | unable to find current IP address of domain no-preload-602118 in network mk-no-preload-602118
	I0603 12:06:24.669510   73179 main.go:141] libmachine: (no-preload-602118) DBG | I0603 12:06:24.669460   74346 retry.go:31] will retry after 420.041485ms: waiting for machine to come up
	I0603 12:06:25.090925   73179 main.go:141] libmachine: (no-preload-602118) DBG | domain no-preload-602118 has defined MAC address 52:54:00:ac:6c:91 in network mk-no-preload-602118
	I0603 12:06:25.091348   73179 main.go:141] libmachine: (no-preload-602118) DBG | unable to find current IP address of domain no-preload-602118 in network mk-no-preload-602118
	I0603 12:06:25.091378   73179 main.go:141] libmachine: (no-preload-602118) DBG | I0603 12:06:25.091293   74346 retry.go:31] will retry after 624.215107ms: waiting for machine to come up
	I0603 12:06:27.445060   72964 start.go:360] acquireMachinesLock for embed-certs-725022: {Name:mk7389fd163f37e28f7d4d842bab151f7e27bc7c Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0603 12:06:25.717004   73179 main.go:141] libmachine: (no-preload-602118) DBG | domain no-preload-602118 has defined MAC address 52:54:00:ac:6c:91 in network mk-no-preload-602118
	I0603 12:06:25.717428   73179 main.go:141] libmachine: (no-preload-602118) DBG | unable to find current IP address of domain no-preload-602118 in network mk-no-preload-602118
	I0603 12:06:25.717459   73179 main.go:141] libmachine: (no-preload-602118) DBG | I0603 12:06:25.717376   74346 retry.go:31] will retry after 589.80788ms: waiting for machine to come up
	I0603 12:06:26.309117   73179 main.go:141] libmachine: (no-preload-602118) DBG | domain no-preload-602118 has defined MAC address 52:54:00:ac:6c:91 in network mk-no-preload-602118
	I0603 12:06:26.309553   73179 main.go:141] libmachine: (no-preload-602118) DBG | unable to find current IP address of domain no-preload-602118 in network mk-no-preload-602118
	I0603 12:06:26.309573   73179 main.go:141] libmachine: (no-preload-602118) DBG | I0603 12:06:26.309525   74346 retry.go:31] will retry after 1.045937243s: waiting for machine to come up
	I0603 12:06:27.356628   73179 main.go:141] libmachine: (no-preload-602118) DBG | domain no-preload-602118 has defined MAC address 52:54:00:ac:6c:91 in network mk-no-preload-602118
	I0603 12:06:27.357021   73179 main.go:141] libmachine: (no-preload-602118) DBG | unable to find current IP address of domain no-preload-602118 in network mk-no-preload-602118
	I0603 12:06:27.357091   73179 main.go:141] libmachine: (no-preload-602118) DBG | I0603 12:06:27.357005   74346 retry.go:31] will retry after 1.111448638s: waiting for machine to come up
	I0603 12:06:28.469530   73179 main.go:141] libmachine: (no-preload-602118) DBG | domain no-preload-602118 has defined MAC address 52:54:00:ac:6c:91 in network mk-no-preload-602118
	I0603 12:06:28.469988   73179 main.go:141] libmachine: (no-preload-602118) DBG | unable to find current IP address of domain no-preload-602118 in network mk-no-preload-602118
	I0603 12:06:28.470019   73179 main.go:141] libmachine: (no-preload-602118) DBG | I0603 12:06:28.469937   74346 retry.go:31] will retry after 1.80245369s: waiting for machine to come up
	I0603 12:06:30.274889   73179 main.go:141] libmachine: (no-preload-602118) DBG | domain no-preload-602118 has defined MAC address 52:54:00:ac:6c:91 in network mk-no-preload-602118
	I0603 12:06:30.275389   73179 main.go:141] libmachine: (no-preload-602118) DBG | unable to find current IP address of domain no-preload-602118 in network mk-no-preload-602118
	I0603 12:06:30.275422   73179 main.go:141] libmachine: (no-preload-602118) DBG | I0603 12:06:30.275339   74346 retry.go:31] will retry after 1.896022361s: waiting for machine to come up
	I0603 12:06:32.173697   73179 main.go:141] libmachine: (no-preload-602118) DBG | domain no-preload-602118 has defined MAC address 52:54:00:ac:6c:91 in network mk-no-preload-602118
	I0603 12:06:32.174116   73179 main.go:141] libmachine: (no-preload-602118) DBG | unable to find current IP address of domain no-preload-602118 in network mk-no-preload-602118
	I0603 12:06:32.174147   73179 main.go:141] libmachine: (no-preload-602118) DBG | I0603 12:06:32.174065   74346 retry.go:31] will retry after 2.13920116s: waiting for machine to come up
	I0603 12:06:34.315196   73179 main.go:141] libmachine: (no-preload-602118) DBG | domain no-preload-602118 has defined MAC address 52:54:00:ac:6c:91 in network mk-no-preload-602118
	I0603 12:06:34.315598   73179 main.go:141] libmachine: (no-preload-602118) DBG | unable to find current IP address of domain no-preload-602118 in network mk-no-preload-602118
	I0603 12:06:34.315629   73179 main.go:141] libmachine: (no-preload-602118) DBG | I0603 12:06:34.315556   74346 retry.go:31] will retry after 3.168755933s: waiting for machine to come up
	I0603 12:06:37.485424   73179 main.go:141] libmachine: (no-preload-602118) DBG | domain no-preload-602118 has defined MAC address 52:54:00:ac:6c:91 in network mk-no-preload-602118
	I0603 12:06:37.485804   73179 main.go:141] libmachine: (no-preload-602118) DBG | unable to find current IP address of domain no-preload-602118 in network mk-no-preload-602118
	I0603 12:06:37.485840   73179 main.go:141] libmachine: (no-preload-602118) DBG | I0603 12:06:37.485767   74346 retry.go:31] will retry after 3.278336467s: waiting for machine to come up
	I0603 12:06:42.080144   73294 start.go:364] duration metric: took 4m27.397961658s to acquireMachinesLock for "default-k8s-diff-port-196710"
	I0603 12:06:42.080213   73294 start.go:96] Skipping create...Using existing machine configuration
	I0603 12:06:42.080220   73294 fix.go:54] fixHost starting: 
	I0603 12:06:42.080611   73294 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 12:06:42.080640   73294 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 12:06:42.096874   73294 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35397
	I0603 12:06:42.097286   73294 main.go:141] libmachine: () Calling .GetVersion
	I0603 12:06:42.097763   73294 main.go:141] libmachine: Using API Version  1
	I0603 12:06:42.097789   73294 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 12:06:42.098191   73294 main.go:141] libmachine: () Calling .GetMachineName
	I0603 12:06:42.098383   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .DriverName
	I0603 12:06:42.098513   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .GetState
	I0603 12:06:42.099866   73294 fix.go:112] recreateIfNeeded on default-k8s-diff-port-196710: state=Stopped err=<nil>
	I0603 12:06:42.099890   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .DriverName
	W0603 12:06:42.100034   73294 fix.go:138] unexpected machine state, will restart: <nil>
	I0603 12:06:42.102388   73294 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-196710" ...
	I0603 12:06:40.768113   73179 main.go:141] libmachine: (no-preload-602118) DBG | domain no-preload-602118 has defined MAC address 52:54:00:ac:6c:91 in network mk-no-preload-602118
	I0603 12:06:40.768689   73179 main.go:141] libmachine: (no-preload-602118) Found IP for machine: 192.168.50.245
	I0603 12:06:40.768705   73179 main.go:141] libmachine: (no-preload-602118) Reserving static IP address...
	I0603 12:06:40.768717   73179 main.go:141] libmachine: (no-preload-602118) DBG | domain no-preload-602118 has current primary IP address 192.168.50.245 and MAC address 52:54:00:ac:6c:91 in network mk-no-preload-602118
	I0603 12:06:40.769262   73179 main.go:141] libmachine: (no-preload-602118) Reserved static IP address: 192.168.50.245
	I0603 12:06:40.769291   73179 main.go:141] libmachine: (no-preload-602118) Waiting for SSH to be available...
	I0603 12:06:40.769306   73179 main.go:141] libmachine: (no-preload-602118) DBG | found host DHCP lease matching {name: "no-preload-602118", mac: "52:54:00:ac:6c:91", ip: "192.168.50.245"} in network mk-no-preload-602118: {Iface:virbr2 ExpiryTime:2024-06-03 13:06:33 +0000 UTC Type:0 Mac:52:54:00:ac:6c:91 Iaid: IPaddr:192.168.50.245 Prefix:24 Hostname:no-preload-602118 Clientid:01:52:54:00:ac:6c:91}
	I0603 12:06:40.769324   73179 main.go:141] libmachine: (no-preload-602118) DBG | skip adding static IP to network mk-no-preload-602118 - found existing host DHCP lease matching {name: "no-preload-602118", mac: "52:54:00:ac:6c:91", ip: "192.168.50.245"}
	I0603 12:06:40.769336   73179 main.go:141] libmachine: (no-preload-602118) DBG | Getting to WaitForSSH function...
	I0603 12:06:40.771708   73179 main.go:141] libmachine: (no-preload-602118) DBG | domain no-preload-602118 has defined MAC address 52:54:00:ac:6c:91 in network mk-no-preload-602118
	I0603 12:06:40.772029   73179 main.go:141] libmachine: (no-preload-602118) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:6c:91", ip: ""} in network mk-no-preload-602118: {Iface:virbr2 ExpiryTime:2024-06-03 13:06:33 +0000 UTC Type:0 Mac:52:54:00:ac:6c:91 Iaid: IPaddr:192.168.50.245 Prefix:24 Hostname:no-preload-602118 Clientid:01:52:54:00:ac:6c:91}
	I0603 12:06:40.772056   73179 main.go:141] libmachine: (no-preload-602118) DBG | domain no-preload-602118 has defined IP address 192.168.50.245 and MAC address 52:54:00:ac:6c:91 in network mk-no-preload-602118
	I0603 12:06:40.772179   73179 main.go:141] libmachine: (no-preload-602118) DBG | Using SSH client type: external
	I0603 12:06:40.772203   73179 main.go:141] libmachine: (no-preload-602118) DBG | Using SSH private key: /home/jenkins/minikube-integration/19008-7755/.minikube/machines/no-preload-602118/id_rsa (-rw-------)
	I0603 12:06:40.772247   73179 main.go:141] libmachine: (no-preload-602118) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.245 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19008-7755/.minikube/machines/no-preload-602118/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0603 12:06:40.772276   73179 main.go:141] libmachine: (no-preload-602118) DBG | About to run SSH command:
	I0603 12:06:40.772292   73179 main.go:141] libmachine: (no-preload-602118) DBG | exit 0
	I0603 12:06:40.898941   73179 main.go:141] libmachine: (no-preload-602118) DBG | SSH cmd err, output: <nil>: 
	I0603 12:06:40.899308   73179 main.go:141] libmachine: (no-preload-602118) Calling .GetConfigRaw
	I0603 12:06:40.899900   73179 main.go:141] libmachine: (no-preload-602118) Calling .GetIP
	I0603 12:06:40.902486   73179 main.go:141] libmachine: (no-preload-602118) DBG | domain no-preload-602118 has defined MAC address 52:54:00:ac:6c:91 in network mk-no-preload-602118
	I0603 12:06:40.902835   73179 main.go:141] libmachine: (no-preload-602118) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:6c:91", ip: ""} in network mk-no-preload-602118: {Iface:virbr2 ExpiryTime:2024-06-03 13:06:33 +0000 UTC Type:0 Mac:52:54:00:ac:6c:91 Iaid: IPaddr:192.168.50.245 Prefix:24 Hostname:no-preload-602118 Clientid:01:52:54:00:ac:6c:91}
	I0603 12:06:40.902863   73179 main.go:141] libmachine: (no-preload-602118) DBG | domain no-preload-602118 has defined IP address 192.168.50.245 and MAC address 52:54:00:ac:6c:91 in network mk-no-preload-602118
	I0603 12:06:40.903133   73179 profile.go:143] Saving config to /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/no-preload-602118/config.json ...
	I0603 12:06:40.903331   73179 machine.go:94] provisionDockerMachine start ...
	I0603 12:06:40.903348   73179 main.go:141] libmachine: (no-preload-602118) Calling .DriverName
	I0603 12:06:40.903530   73179 main.go:141] libmachine: (no-preload-602118) Calling .GetSSHHostname
	I0603 12:06:40.905503   73179 main.go:141] libmachine: (no-preload-602118) DBG | domain no-preload-602118 has defined MAC address 52:54:00:ac:6c:91 in network mk-no-preload-602118
	I0603 12:06:40.905783   73179 main.go:141] libmachine: (no-preload-602118) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:6c:91", ip: ""} in network mk-no-preload-602118: {Iface:virbr2 ExpiryTime:2024-06-03 13:06:33 +0000 UTC Type:0 Mac:52:54:00:ac:6c:91 Iaid: IPaddr:192.168.50.245 Prefix:24 Hostname:no-preload-602118 Clientid:01:52:54:00:ac:6c:91}
	I0603 12:06:40.905816   73179 main.go:141] libmachine: (no-preload-602118) DBG | domain no-preload-602118 has defined IP address 192.168.50.245 and MAC address 52:54:00:ac:6c:91 in network mk-no-preload-602118
	I0603 12:06:40.905911   73179 main.go:141] libmachine: (no-preload-602118) Calling .GetSSHPort
	I0603 12:06:40.906094   73179 main.go:141] libmachine: (no-preload-602118) Calling .GetSSHKeyPath
	I0603 12:06:40.906253   73179 main.go:141] libmachine: (no-preload-602118) Calling .GetSSHKeyPath
	I0603 12:06:40.906416   73179 main.go:141] libmachine: (no-preload-602118) Calling .GetSSHUsername
	I0603 12:06:40.906579   73179 main.go:141] libmachine: Using SSH client type: native
	I0603 12:06:40.906760   73179 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.50.245 22 <nil> <nil>}
	I0603 12:06:40.906771   73179 main.go:141] libmachine: About to run SSH command:
	hostname
	I0603 12:06:41.015416   73179 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0603 12:06:41.015443   73179 main.go:141] libmachine: (no-preload-602118) Calling .GetMachineName
	I0603 12:06:41.015832   73179 buildroot.go:166] provisioning hostname "no-preload-602118"
	I0603 12:06:41.015861   73179 main.go:141] libmachine: (no-preload-602118) Calling .GetMachineName
	I0603 12:06:41.016078   73179 main.go:141] libmachine: (no-preload-602118) Calling .GetSSHHostname
	I0603 12:06:41.018606   73179 main.go:141] libmachine: (no-preload-602118) DBG | domain no-preload-602118 has defined MAC address 52:54:00:ac:6c:91 in network mk-no-preload-602118
	I0603 12:06:41.018898   73179 main.go:141] libmachine: (no-preload-602118) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:6c:91", ip: ""} in network mk-no-preload-602118: {Iface:virbr2 ExpiryTime:2024-06-03 13:06:33 +0000 UTC Type:0 Mac:52:54:00:ac:6c:91 Iaid: IPaddr:192.168.50.245 Prefix:24 Hostname:no-preload-602118 Clientid:01:52:54:00:ac:6c:91}
	I0603 12:06:41.018928   73179 main.go:141] libmachine: (no-preload-602118) DBG | domain no-preload-602118 has defined IP address 192.168.50.245 and MAC address 52:54:00:ac:6c:91 in network mk-no-preload-602118
	I0603 12:06:41.019125   73179 main.go:141] libmachine: (no-preload-602118) Calling .GetSSHPort
	I0603 12:06:41.019310   73179 main.go:141] libmachine: (no-preload-602118) Calling .GetSSHKeyPath
	I0603 12:06:41.019476   73179 main.go:141] libmachine: (no-preload-602118) Calling .GetSSHKeyPath
	I0603 12:06:41.019597   73179 main.go:141] libmachine: (no-preload-602118) Calling .GetSSHUsername
	I0603 12:06:41.019753   73179 main.go:141] libmachine: Using SSH client type: native
	I0603 12:06:41.019948   73179 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.50.245 22 <nil> <nil>}
	I0603 12:06:41.019961   73179 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-602118 && echo "no-preload-602118" | sudo tee /etc/hostname
	I0603 12:06:41.145267   73179 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-602118
	
	I0603 12:06:41.145298   73179 main.go:141] libmachine: (no-preload-602118) Calling .GetSSHHostname
	I0603 12:06:41.148117   73179 main.go:141] libmachine: (no-preload-602118) DBG | domain no-preload-602118 has defined MAC address 52:54:00:ac:6c:91 in network mk-no-preload-602118
	I0603 12:06:41.148416   73179 main.go:141] libmachine: (no-preload-602118) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:6c:91", ip: ""} in network mk-no-preload-602118: {Iface:virbr2 ExpiryTime:2024-06-03 13:06:33 +0000 UTC Type:0 Mac:52:54:00:ac:6c:91 Iaid: IPaddr:192.168.50.245 Prefix:24 Hostname:no-preload-602118 Clientid:01:52:54:00:ac:6c:91}
	I0603 12:06:41.148444   73179 main.go:141] libmachine: (no-preload-602118) DBG | domain no-preload-602118 has defined IP address 192.168.50.245 and MAC address 52:54:00:ac:6c:91 in network mk-no-preload-602118
	I0603 12:06:41.148692   73179 main.go:141] libmachine: (no-preload-602118) Calling .GetSSHPort
	I0603 12:06:41.148914   73179 main.go:141] libmachine: (no-preload-602118) Calling .GetSSHKeyPath
	I0603 12:06:41.149068   73179 main.go:141] libmachine: (no-preload-602118) Calling .GetSSHKeyPath
	I0603 12:06:41.149199   73179 main.go:141] libmachine: (no-preload-602118) Calling .GetSSHUsername
	I0603 12:06:41.149316   73179 main.go:141] libmachine: Using SSH client type: native
	I0603 12:06:41.149475   73179 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.50.245 22 <nil> <nil>}
	I0603 12:06:41.149490   73179 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-602118' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-602118/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-602118' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0603 12:06:41.267803   73179 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0603 12:06:41.267841   73179 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19008-7755/.minikube CaCertPath:/home/jenkins/minikube-integration/19008-7755/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19008-7755/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19008-7755/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19008-7755/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19008-7755/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19008-7755/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19008-7755/.minikube}
	I0603 12:06:41.267859   73179 buildroot.go:174] setting up certificates
	I0603 12:06:41.267869   73179 provision.go:84] configureAuth start
	I0603 12:06:41.267877   73179 main.go:141] libmachine: (no-preload-602118) Calling .GetMachineName
	I0603 12:06:41.268155   73179 main.go:141] libmachine: (no-preload-602118) Calling .GetIP
	I0603 12:06:41.270862   73179 main.go:141] libmachine: (no-preload-602118) DBG | domain no-preload-602118 has defined MAC address 52:54:00:ac:6c:91 in network mk-no-preload-602118
	I0603 12:06:41.271249   73179 main.go:141] libmachine: (no-preload-602118) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:6c:91", ip: ""} in network mk-no-preload-602118: {Iface:virbr2 ExpiryTime:2024-06-03 13:06:33 +0000 UTC Type:0 Mac:52:54:00:ac:6c:91 Iaid: IPaddr:192.168.50.245 Prefix:24 Hostname:no-preload-602118 Clientid:01:52:54:00:ac:6c:91}
	I0603 12:06:41.271271   73179 main.go:141] libmachine: (no-preload-602118) DBG | domain no-preload-602118 has defined IP address 192.168.50.245 and MAC address 52:54:00:ac:6c:91 in network mk-no-preload-602118
	I0603 12:06:41.271414   73179 main.go:141] libmachine: (no-preload-602118) Calling .GetSSHHostname
	I0603 12:06:41.273376   73179 main.go:141] libmachine: (no-preload-602118) DBG | domain no-preload-602118 has defined MAC address 52:54:00:ac:6c:91 in network mk-no-preload-602118
	I0603 12:06:41.273689   73179 main.go:141] libmachine: (no-preload-602118) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:6c:91", ip: ""} in network mk-no-preload-602118: {Iface:virbr2 ExpiryTime:2024-06-03 13:06:33 +0000 UTC Type:0 Mac:52:54:00:ac:6c:91 Iaid: IPaddr:192.168.50.245 Prefix:24 Hostname:no-preload-602118 Clientid:01:52:54:00:ac:6c:91}
	I0603 12:06:41.273715   73179 main.go:141] libmachine: (no-preload-602118) DBG | domain no-preload-602118 has defined IP address 192.168.50.245 and MAC address 52:54:00:ac:6c:91 in network mk-no-preload-602118
	I0603 12:06:41.273831   73179 provision.go:143] copyHostCerts
	I0603 12:06:41.273907   73179 exec_runner.go:144] found /home/jenkins/minikube-integration/19008-7755/.minikube/ca.pem, removing ...
	I0603 12:06:41.273926   73179 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19008-7755/.minikube/ca.pem
	I0603 12:06:41.274002   73179 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19008-7755/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19008-7755/.minikube/ca.pem (1082 bytes)
	I0603 12:06:41.274128   73179 exec_runner.go:144] found /home/jenkins/minikube-integration/19008-7755/.minikube/cert.pem, removing ...
	I0603 12:06:41.274138   73179 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19008-7755/.minikube/cert.pem
	I0603 12:06:41.274173   73179 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19008-7755/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19008-7755/.minikube/cert.pem (1123 bytes)
	I0603 12:06:41.274248   73179 exec_runner.go:144] found /home/jenkins/minikube-integration/19008-7755/.minikube/key.pem, removing ...
	I0603 12:06:41.274259   73179 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19008-7755/.minikube/key.pem
	I0603 12:06:41.274296   73179 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19008-7755/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19008-7755/.minikube/key.pem (1679 bytes)
	I0603 12:06:41.274369   73179 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19008-7755/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19008-7755/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19008-7755/.minikube/certs/ca-key.pem org=jenkins.no-preload-602118 san=[127.0.0.1 192.168.50.245 localhost minikube no-preload-602118]
	I0603 12:06:41.377976   73179 provision.go:177] copyRemoteCerts
	I0603 12:06:41.378029   73179 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0603 12:06:41.378053   73179 main.go:141] libmachine: (no-preload-602118) Calling .GetSSHHostname
	I0603 12:06:41.380502   73179 main.go:141] libmachine: (no-preload-602118) DBG | domain no-preload-602118 has defined MAC address 52:54:00:ac:6c:91 in network mk-no-preload-602118
	I0603 12:06:41.380818   73179 main.go:141] libmachine: (no-preload-602118) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:6c:91", ip: ""} in network mk-no-preload-602118: {Iface:virbr2 ExpiryTime:2024-06-03 13:06:33 +0000 UTC Type:0 Mac:52:54:00:ac:6c:91 Iaid: IPaddr:192.168.50.245 Prefix:24 Hostname:no-preload-602118 Clientid:01:52:54:00:ac:6c:91}
	I0603 12:06:41.380839   73179 main.go:141] libmachine: (no-preload-602118) DBG | domain no-preload-602118 has defined IP address 192.168.50.245 and MAC address 52:54:00:ac:6c:91 in network mk-no-preload-602118
	I0603 12:06:41.381002   73179 main.go:141] libmachine: (no-preload-602118) Calling .GetSSHPort
	I0603 12:06:41.381171   73179 main.go:141] libmachine: (no-preload-602118) Calling .GetSSHKeyPath
	I0603 12:06:41.381345   73179 main.go:141] libmachine: (no-preload-602118) Calling .GetSSHUsername
	I0603 12:06:41.381462   73179 sshutil.go:53] new ssh client: &{IP:192.168.50.245 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19008-7755/.minikube/machines/no-preload-602118/id_rsa Username:docker}
	I0603 12:06:41.465434   73179 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0603 12:06:41.492636   73179 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0603 12:06:41.516229   73179 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0603 12:06:41.538729   73179 provision.go:87] duration metric: took 270.850705ms to configureAuth
	I0603 12:06:41.538751   73179 buildroot.go:189] setting minikube options for container-runtime
	I0603 12:06:41.538913   73179 config.go:182] Loaded profile config "no-preload-602118": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0603 12:06:41.538998   73179 main.go:141] libmachine: (no-preload-602118) Calling .GetSSHHostname
	I0603 12:06:41.541514   73179 main.go:141] libmachine: (no-preload-602118) DBG | domain no-preload-602118 has defined MAC address 52:54:00:ac:6c:91 in network mk-no-preload-602118
	I0603 12:06:41.541818   73179 main.go:141] libmachine: (no-preload-602118) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:6c:91", ip: ""} in network mk-no-preload-602118: {Iface:virbr2 ExpiryTime:2024-06-03 13:06:33 +0000 UTC Type:0 Mac:52:54:00:ac:6c:91 Iaid: IPaddr:192.168.50.245 Prefix:24 Hostname:no-preload-602118 Clientid:01:52:54:00:ac:6c:91}
	I0603 12:06:41.541843   73179 main.go:141] libmachine: (no-preload-602118) DBG | domain no-preload-602118 has defined IP address 192.168.50.245 and MAC address 52:54:00:ac:6c:91 in network mk-no-preload-602118
	I0603 12:06:41.541966   73179 main.go:141] libmachine: (no-preload-602118) Calling .GetSSHPort
	I0603 12:06:41.542166   73179 main.go:141] libmachine: (no-preload-602118) Calling .GetSSHKeyPath
	I0603 12:06:41.542350   73179 main.go:141] libmachine: (no-preload-602118) Calling .GetSSHKeyPath
	I0603 12:06:41.542483   73179 main.go:141] libmachine: (no-preload-602118) Calling .GetSSHUsername
	I0603 12:06:41.542666   73179 main.go:141] libmachine: Using SSH client type: native
	I0603 12:06:41.542809   73179 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.50.245 22 <nil> <nil>}
	I0603 12:06:41.542823   73179 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0603 12:06:41.837735   73179 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0603 12:06:41.837766   73179 machine.go:97] duration metric: took 934.421104ms to provisionDockerMachine
	I0603 12:06:41.837780   73179 start.go:293] postStartSetup for "no-preload-602118" (driver="kvm2")
	I0603 12:06:41.837791   73179 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0603 12:06:41.837808   73179 main.go:141] libmachine: (no-preload-602118) Calling .DriverName
	I0603 12:06:41.838173   73179 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0603 12:06:41.838200   73179 main.go:141] libmachine: (no-preload-602118) Calling .GetSSHHostname
	I0603 12:06:41.840498   73179 main.go:141] libmachine: (no-preload-602118) DBG | domain no-preload-602118 has defined MAC address 52:54:00:ac:6c:91 in network mk-no-preload-602118
	I0603 12:06:41.840832   73179 main.go:141] libmachine: (no-preload-602118) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:6c:91", ip: ""} in network mk-no-preload-602118: {Iface:virbr2 ExpiryTime:2024-06-03 13:06:33 +0000 UTC Type:0 Mac:52:54:00:ac:6c:91 Iaid: IPaddr:192.168.50.245 Prefix:24 Hostname:no-preload-602118 Clientid:01:52:54:00:ac:6c:91}
	I0603 12:06:41.840861   73179 main.go:141] libmachine: (no-preload-602118) DBG | domain no-preload-602118 has defined IP address 192.168.50.245 and MAC address 52:54:00:ac:6c:91 in network mk-no-preload-602118
	I0603 12:06:41.840990   73179 main.go:141] libmachine: (no-preload-602118) Calling .GetSSHPort
	I0603 12:06:41.841179   73179 main.go:141] libmachine: (no-preload-602118) Calling .GetSSHKeyPath
	I0603 12:06:41.841351   73179 main.go:141] libmachine: (no-preload-602118) Calling .GetSSHUsername
	I0603 12:06:41.841473   73179 sshutil.go:53] new ssh client: &{IP:192.168.50.245 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19008-7755/.minikube/machines/no-preload-602118/id_rsa Username:docker}
	I0603 12:06:41.926168   73179 ssh_runner.go:195] Run: cat /etc/os-release
	I0603 12:06:41.930420   73179 info.go:137] Remote host: Buildroot 2023.02.9
	I0603 12:06:41.930450   73179 filesync.go:126] Scanning /home/jenkins/minikube-integration/19008-7755/.minikube/addons for local assets ...
	I0603 12:06:41.930509   73179 filesync.go:126] Scanning /home/jenkins/minikube-integration/19008-7755/.minikube/files for local assets ...
	I0603 12:06:41.930583   73179 filesync.go:149] local asset: /home/jenkins/minikube-integration/19008-7755/.minikube/files/etc/ssl/certs/150282.pem -> 150282.pem in /etc/ssl/certs
	I0603 12:06:41.930661   73179 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0603 12:06:41.940412   73179 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/files/etc/ssl/certs/150282.pem --> /etc/ssl/certs/150282.pem (1708 bytes)
	I0603 12:06:41.963912   73179 start.go:296] duration metric: took 126.115944ms for postStartSetup
	I0603 12:06:41.963949   73179 fix.go:56] duration metric: took 19.520525784s for fixHost
	I0603 12:06:41.963991   73179 main.go:141] libmachine: (no-preload-602118) Calling .GetSSHHostname
	I0603 12:06:41.966591   73179 main.go:141] libmachine: (no-preload-602118) DBG | domain no-preload-602118 has defined MAC address 52:54:00:ac:6c:91 in network mk-no-preload-602118
	I0603 12:06:41.966928   73179 main.go:141] libmachine: (no-preload-602118) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:6c:91", ip: ""} in network mk-no-preload-602118: {Iface:virbr2 ExpiryTime:2024-06-03 13:06:33 +0000 UTC Type:0 Mac:52:54:00:ac:6c:91 Iaid: IPaddr:192.168.50.245 Prefix:24 Hostname:no-preload-602118 Clientid:01:52:54:00:ac:6c:91}
	I0603 12:06:41.966946   73179 main.go:141] libmachine: (no-preload-602118) DBG | domain no-preload-602118 has defined IP address 192.168.50.245 and MAC address 52:54:00:ac:6c:91 in network mk-no-preload-602118
	I0603 12:06:41.967081   73179 main.go:141] libmachine: (no-preload-602118) Calling .GetSSHPort
	I0603 12:06:41.967272   73179 main.go:141] libmachine: (no-preload-602118) Calling .GetSSHKeyPath
	I0603 12:06:41.967423   73179 main.go:141] libmachine: (no-preload-602118) Calling .GetSSHKeyPath
	I0603 12:06:41.967608   73179 main.go:141] libmachine: (no-preload-602118) Calling .GetSSHUsername
	I0603 12:06:41.967762   73179 main.go:141] libmachine: Using SSH client type: native
	I0603 12:06:41.967918   73179 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.50.245 22 <nil> <nil>}
	I0603 12:06:41.967927   73179 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0603 12:06:42.079982   73179 main.go:141] libmachine: SSH cmd err, output: <nil>: 1717416402.057236225
	
	I0603 12:06:42.080009   73179 fix.go:216] guest clock: 1717416402.057236225
	I0603 12:06:42.080015   73179 fix.go:229] Guest: 2024-06-03 12:06:42.057236225 +0000 UTC Remote: 2024-06-03 12:06:41.963952729 +0000 UTC m=+276.629989589 (delta=93.283496ms)
	I0603 12:06:42.080041   73179 fix.go:200] guest clock delta is within tolerance: 93.283496ms
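	(The guest clock is read with what is effectively date +%s.%N; the %!s(MISSING)/%!N(MISSING) rendering is again a log-formatting artifact. The tolerance check is then simple subtraction: Guest 2024-06-03 12:06:42.057236225 UTC minus the Remote reference 12:06:41.963952729 UTC gives 0.093283496 s ≈ 93.28 ms, which is within the allowed drift, so no clock resync is pushed to the VM.)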
	I0603 12:06:42.080045   73179 start.go:83] releasing machines lock for "no-preload-602118", held for 19.636648914s
	I0603 12:06:42.080070   73179 main.go:141] libmachine: (no-preload-602118) Calling .DriverName
	I0603 12:06:42.080311   73179 main.go:141] libmachine: (no-preload-602118) Calling .GetIP
	I0603 12:06:42.083162   73179 main.go:141] libmachine: (no-preload-602118) DBG | domain no-preload-602118 has defined MAC address 52:54:00:ac:6c:91 in network mk-no-preload-602118
	I0603 12:06:42.083519   73179 main.go:141] libmachine: (no-preload-602118) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:6c:91", ip: ""} in network mk-no-preload-602118: {Iface:virbr2 ExpiryTime:2024-06-03 13:06:33 +0000 UTC Type:0 Mac:52:54:00:ac:6c:91 Iaid: IPaddr:192.168.50.245 Prefix:24 Hostname:no-preload-602118 Clientid:01:52:54:00:ac:6c:91}
	I0603 12:06:42.083544   73179 main.go:141] libmachine: (no-preload-602118) DBG | domain no-preload-602118 has defined IP address 192.168.50.245 and MAC address 52:54:00:ac:6c:91 in network mk-no-preload-602118
	I0603 12:06:42.083733   73179 main.go:141] libmachine: (no-preload-602118) Calling .DriverName
	I0603 12:06:42.084238   73179 main.go:141] libmachine: (no-preload-602118) Calling .DriverName
	I0603 12:06:42.084405   73179 main.go:141] libmachine: (no-preload-602118) Calling .DriverName
	I0603 12:06:42.084458   73179 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0603 12:06:42.084528   73179 main.go:141] libmachine: (no-preload-602118) Calling .GetSSHHostname
	I0603 12:06:42.084607   73179 ssh_runner.go:195] Run: cat /version.json
	I0603 12:06:42.084632   73179 main.go:141] libmachine: (no-preload-602118) Calling .GetSSHHostname
	I0603 12:06:42.087630   73179 main.go:141] libmachine: (no-preload-602118) DBG | domain no-preload-602118 has defined MAC address 52:54:00:ac:6c:91 in network mk-no-preload-602118
	I0603 12:06:42.087927   73179 main.go:141] libmachine: (no-preload-602118) DBG | domain no-preload-602118 has defined MAC address 52:54:00:ac:6c:91 in network mk-no-preload-602118
	I0603 12:06:42.087958   73179 main.go:141] libmachine: (no-preload-602118) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:6c:91", ip: ""} in network mk-no-preload-602118: {Iface:virbr2 ExpiryTime:2024-06-03 13:06:33 +0000 UTC Type:0 Mac:52:54:00:ac:6c:91 Iaid: IPaddr:192.168.50.245 Prefix:24 Hostname:no-preload-602118 Clientid:01:52:54:00:ac:6c:91}
	I0603 12:06:42.087981   73179 main.go:141] libmachine: (no-preload-602118) DBG | domain no-preload-602118 has defined IP address 192.168.50.245 and MAC address 52:54:00:ac:6c:91 in network mk-no-preload-602118
	I0603 12:06:42.088083   73179 main.go:141] libmachine: (no-preload-602118) Calling .GetSSHPort
	I0603 12:06:42.088261   73179 main.go:141] libmachine: (no-preload-602118) Calling .GetSSHKeyPath
	I0603 12:06:42.088441   73179 main.go:141] libmachine: (no-preload-602118) Calling .GetSSHUsername
	I0603 12:06:42.088463   73179 main.go:141] libmachine: (no-preload-602118) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:6c:91", ip: ""} in network mk-no-preload-602118: {Iface:virbr2 ExpiryTime:2024-06-03 13:06:33 +0000 UTC Type:0 Mac:52:54:00:ac:6c:91 Iaid: IPaddr:192.168.50.245 Prefix:24 Hostname:no-preload-602118 Clientid:01:52:54:00:ac:6c:91}
	I0603 12:06:42.088507   73179 main.go:141] libmachine: (no-preload-602118) DBG | domain no-preload-602118 has defined IP address 192.168.50.245 and MAC address 52:54:00:ac:6c:91 in network mk-no-preload-602118
	I0603 12:06:42.088592   73179 sshutil.go:53] new ssh client: &{IP:192.168.50.245 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19008-7755/.minikube/machines/no-preload-602118/id_rsa Username:docker}
	I0603 12:06:42.088666   73179 main.go:141] libmachine: (no-preload-602118) Calling .GetSSHPort
	I0603 12:06:42.088800   73179 main.go:141] libmachine: (no-preload-602118) Calling .GetSSHKeyPath
	I0603 12:06:42.088961   73179 main.go:141] libmachine: (no-preload-602118) Calling .GetSSHUsername
	I0603 12:06:42.089101   73179 sshutil.go:53] new ssh client: &{IP:192.168.50.245 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19008-7755/.minikube/machines/no-preload-602118/id_rsa Username:docker}
	I0603 12:06:42.192400   73179 ssh_runner.go:195] Run: systemctl --version
	I0603 12:06:42.198773   73179 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0603 12:06:42.345931   73179 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0603 12:06:42.351818   73179 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0603 12:06:42.351877   73179 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0603 12:06:42.368582   73179 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0603 12:06:42.368609   73179 start.go:494] detecting cgroup driver to use...
	I0603 12:06:42.368680   73179 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0603 12:06:42.384411   73179 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0603 12:06:42.398006   73179 docker.go:217] disabling cri-docker service (if available) ...
	I0603 12:06:42.398052   73179 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0603 12:06:42.412680   73179 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0603 12:06:42.427157   73179 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0603 12:06:42.537162   73179 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0603 12:06:42.683438   73179 docker.go:233] disabling docker service ...
	I0603 12:06:42.683505   73179 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0603 12:06:42.697969   73179 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0603 12:06:42.711164   73179 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0603 12:06:42.835194   73179 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0603 12:06:42.947116   73179 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0603 12:06:42.961398   73179 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0603 12:06:42.980179   73179 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0603 12:06:42.980227   73179 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 12:06:42.990583   73179 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0603 12:06:42.990642   73179 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 12:06:43.001031   73179 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 12:06:43.012124   73179 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 12:06:43.023143   73179 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0603 12:06:43.034535   73179 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 12:06:43.045854   73179 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 12:06:43.063071   73179 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 12:06:43.074257   73179 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0603 12:06:43.083914   73179 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0603 12:06:43.083965   73179 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0603 12:06:43.098285   73179 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0603 12:06:43.108034   73179 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0603 12:06:43.219068   73179 ssh_runner.go:195] Run: sudo systemctl restart crio
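	(For reference, the sed edits above leave /etc/crio/crio.conf.d/02-crio.conf with, in effect, the following settings; this is reconstructed from the commands shown, not a capture of the file:

	    pause_image = "registry.k8s.io/pause:3.9"
	    cgroup_manager = "cgroupfs"
	    conmon_cgroup = "pod"
	    default_sysctls = [
	      "net.ipv4.ip_unprivileged_port_start=0",
	    ]

	The br_netfilter modprobe and the ip_forward write above are the usual pod-networking prerequisites applied before the crio restart.)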
	I0603 12:06:43.376591   73179 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0603 12:06:43.376655   73179 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0603 12:06:43.381868   73179 start.go:562] Will wait 60s for crictl version
	I0603 12:06:43.381939   73179 ssh_runner.go:195] Run: which crictl
	I0603 12:06:43.385730   73179 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0603 12:06:43.423331   73179 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0603 12:06:43.423428   73179 ssh_runner.go:195] Run: crio --version
	I0603 12:06:43.450760   73179 ssh_runner.go:195] Run: crio --version
	I0603 12:06:43.479690   73179 out.go:177] * Preparing Kubernetes v1.30.1 on CRI-O 1.29.1 ...
	I0603 12:06:42.103653   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .Start
	I0603 12:06:42.103818   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Ensuring networks are active...
	I0603 12:06:42.104660   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Ensuring network default is active
	I0603 12:06:42.104985   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Ensuring network mk-default-k8s-diff-port-196710 is active
	I0603 12:06:42.105332   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Getting domain xml...
	I0603 12:06:42.106264   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Creating domain...
	I0603 12:06:43.347118   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Waiting to get IP...
	I0603 12:06:43.347855   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | domain default-k8s-diff-port-196710 has defined MAC address 52:54:00:9c:61:49 in network mk-default-k8s-diff-port-196710
	I0603 12:06:43.348279   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | unable to find current IP address of domain default-k8s-diff-port-196710 in network mk-default-k8s-diff-port-196710
	I0603 12:06:43.348337   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | I0603 12:06:43.348249   74483 retry.go:31] will retry after 307.61274ms: waiting for machine to come up
	I0603 12:06:43.657720   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | domain default-k8s-diff-port-196710 has defined MAC address 52:54:00:9c:61:49 in network mk-default-k8s-diff-port-196710
	I0603 12:06:43.658162   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | unable to find current IP address of domain default-k8s-diff-port-196710 in network mk-default-k8s-diff-port-196710
	I0603 12:06:43.658188   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | I0603 12:06:43.658129   74483 retry.go:31] will retry after 387.079794ms: waiting for machine to come up
	I0603 12:06:44.046798   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | domain default-k8s-diff-port-196710 has defined MAC address 52:54:00:9c:61:49 in network mk-default-k8s-diff-port-196710
	I0603 12:06:44.047345   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | unable to find current IP address of domain default-k8s-diff-port-196710 in network mk-default-k8s-diff-port-196710
	I0603 12:06:44.047376   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | I0603 12:06:44.047279   74483 retry.go:31] will retry after 482.224139ms: waiting for machine to come up
	I0603 12:06:44.531107   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | domain default-k8s-diff-port-196710 has defined MAC address 52:54:00:9c:61:49 in network mk-default-k8s-diff-port-196710
	I0603 12:06:44.531588   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | unable to find current IP address of domain default-k8s-diff-port-196710 in network mk-default-k8s-diff-port-196710
	I0603 12:06:44.531615   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | I0603 12:06:44.531542   74483 retry.go:31] will retry after 438.288195ms: waiting for machine to come up
	I0603 12:06:43.481020   73179 main.go:141] libmachine: (no-preload-602118) Calling .GetIP
	I0603 12:06:43.483887   73179 main.go:141] libmachine: (no-preload-602118) DBG | domain no-preload-602118 has defined MAC address 52:54:00:ac:6c:91 in network mk-no-preload-602118
	I0603 12:06:43.484297   73179 main.go:141] libmachine: (no-preload-602118) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:6c:91", ip: ""} in network mk-no-preload-602118: {Iface:virbr2 ExpiryTime:2024-06-03 13:06:33 +0000 UTC Type:0 Mac:52:54:00:ac:6c:91 Iaid: IPaddr:192.168.50.245 Prefix:24 Hostname:no-preload-602118 Clientid:01:52:54:00:ac:6c:91}
	I0603 12:06:43.484324   73179 main.go:141] libmachine: (no-preload-602118) DBG | domain no-preload-602118 has defined IP address 192.168.50.245 and MAC address 52:54:00:ac:6c:91 in network mk-no-preload-602118
	I0603 12:06:43.484533   73179 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0603 12:06:43.488769   73179 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0603 12:06:43.501433   73179 kubeadm.go:877] updating cluster {Name:no-preload-602118 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:no-preload-602118 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.245 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0603 12:06:43.501583   73179 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime crio
	I0603 12:06:43.501644   73179 ssh_runner.go:195] Run: sudo crictl images --output json
	I0603 12:06:43.537382   73179 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.1". assuming images are not preloaded.
	I0603 12:06:43.537407   73179 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.30.1 registry.k8s.io/kube-controller-manager:v1.30.1 registry.k8s.io/kube-scheduler:v1.30.1 registry.k8s.io/kube-proxy:v1.30.1 registry.k8s.io/pause:3.9 registry.k8s.io/etcd:3.5.12-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0603 12:06:43.537504   73179 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.30.1
	I0603 12:06:43.537483   73179 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.30.1
	I0603 12:06:43.537484   73179 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.12-0
	I0603 12:06:43.537597   73179 image.go:134] retrieving image: registry.k8s.io/pause:3.9
	I0603 12:06:43.537483   73179 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0603 12:06:43.537618   73179 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.30.1
	I0603 12:06:43.537612   73179 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.30.1
	I0603 12:06:43.537771   73179 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0603 12:06:43.539200   73179 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.30.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.30.1
	I0603 12:06:43.539472   73179 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.30.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.30.1
	I0603 12:06:43.539491   73179 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.30.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.30.1
	I0603 12:06:43.539504   73179 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.30.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.30.1
	I0603 12:06:43.539530   73179 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.12-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.12-0
	I0603 12:06:43.539565   73179 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0603 12:06:43.539472   73179 image.go:177] daemon lookup for registry.k8s.io/pause:3.9: Error response from daemon: No such image: registry.k8s.io/pause:3.9
	I0603 12:06:43.539934   73179 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0603 12:06:43.694144   73179 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.12-0
	I0603 12:06:43.714990   73179 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.30.1
	I0603 12:06:43.720270   73179 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.30.1
	I0603 12:06:43.734481   73179 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.30.1
	I0603 12:06:43.751928   73179 cache_images.go:116] "registry.k8s.io/etcd:3.5.12-0" needs transfer: "registry.k8s.io/etcd:3.5.12-0" does not exist at hash "3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899" in container runtime
	I0603 12:06:43.751970   73179 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.12-0
	I0603 12:06:43.752018   73179 ssh_runner.go:195] Run: which crictl
	I0603 12:06:43.780362   73179 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.30.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.30.1" does not exist at hash "91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a" in container runtime
	I0603 12:06:43.780408   73179 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.30.1
	I0603 12:06:43.780455   73179 ssh_runner.go:195] Run: which crictl
	I0603 12:06:43.798376   73179 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.30.1" needs transfer: "registry.k8s.io/kube-proxy:v1.30.1" does not exist at hash "747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd" in container runtime
	I0603 12:06:43.798415   73179 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.30.1
	I0603 12:06:43.798465   73179 ssh_runner.go:195] Run: which crictl
	I0603 12:06:43.801422   73179 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.9
	I0603 12:06:43.811338   73179 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0603 12:06:43.823969   73179 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.30.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.30.1" does not exist at hash "25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c" in container runtime
	I0603 12:06:43.824052   73179 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.30.1
	I0603 12:06:43.823979   73179 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.12-0
	I0603 12:06:43.824096   73179 ssh_runner.go:195] Run: which crictl
	I0603 12:06:43.824106   73179 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.30.1
	I0603 12:06:43.824088   73179 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.30.1
	I0603 12:06:43.861957   73179 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.30.1
	I0603 12:06:44.001291   73179 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0603 12:06:44.001312   73179 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/19008-7755/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.30.1
	I0603 12:06:44.001344   73179 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0603 12:06:44.001390   73179 ssh_runner.go:195] Run: which crictl
	I0603 12:06:44.001454   73179 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.30.1
	I0603 12:06:44.001472   73179 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/19008-7755/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.12-0
	I0603 12:06:44.001405   73179 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.30.1
	I0603 12:06:44.001544   73179 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/etcd_3.5.12-0
	I0603 12:06:44.001405   73179 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/19008-7755/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.30.1
	I0603 12:06:44.001520   73179 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.30.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.30.1" does not exist at hash "a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035" in container runtime
	I0603 12:06:44.001622   73179 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-apiserver_v1.30.1
	I0603 12:06:44.001627   73179 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.30.1
	I0603 12:06:44.001685   73179 ssh_runner.go:195] Run: which crictl
	I0603 12:06:44.014801   73179 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.30.1 (exists)
	I0603 12:06:44.014820   73179 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.30.1
	I0603 12:06:44.014858   73179 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.30.1
	I0603 12:06:44.049018   73179 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.12-0 (exists)
	I0603 12:06:44.049106   73179 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/19008-7755/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.30.1
	I0603 12:06:44.049138   73179 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0603 12:06:44.049149   73179 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.30.1
	I0603 12:06:44.049193   73179 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-controller-manager_v1.30.1
	I0603 12:06:44.049202   73179 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.30.1 (exists)
	I0603 12:06:44.414960   73179 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0603 12:06:44.971603   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | domain default-k8s-diff-port-196710 has defined MAC address 52:54:00:9c:61:49 in network mk-default-k8s-diff-port-196710
	I0603 12:06:44.971986   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | unable to find current IP address of domain default-k8s-diff-port-196710 in network mk-default-k8s-diff-port-196710
	I0603 12:06:44.972027   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | I0603 12:06:44.971941   74483 retry.go:31] will retry after 696.415219ms: waiting for machine to come up
	I0603 12:06:45.669711   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | domain default-k8s-diff-port-196710 has defined MAC address 52:54:00:9c:61:49 in network mk-default-k8s-diff-port-196710
	I0603 12:06:45.670032   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | unable to find current IP address of domain default-k8s-diff-port-196710 in network mk-default-k8s-diff-port-196710
	I0603 12:06:45.670064   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | I0603 12:06:45.670011   74483 retry.go:31] will retry after 706.751938ms: waiting for machine to come up
	I0603 12:06:46.378097   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | domain default-k8s-diff-port-196710 has defined MAC address 52:54:00:9c:61:49 in network mk-default-k8s-diff-port-196710
	I0603 12:06:46.378510   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | unable to find current IP address of domain default-k8s-diff-port-196710 in network mk-default-k8s-diff-port-196710
	I0603 12:06:46.378552   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | I0603 12:06:46.378484   74483 retry.go:31] will retry after 1.039219665s: waiting for machine to come up
	I0603 12:06:47.419138   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | domain default-k8s-diff-port-196710 has defined MAC address 52:54:00:9c:61:49 in network mk-default-k8s-diff-port-196710
	I0603 12:06:47.419573   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | unable to find current IP address of domain default-k8s-diff-port-196710 in network mk-default-k8s-diff-port-196710
	I0603 12:06:47.419601   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | I0603 12:06:47.419520   74483 retry.go:31] will retry after 1.138110516s: waiting for machine to come up
	I0603 12:06:48.559728   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | domain default-k8s-diff-port-196710 has defined MAC address 52:54:00:9c:61:49 in network mk-default-k8s-diff-port-196710
	I0603 12:06:48.560297   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | unable to find current IP address of domain default-k8s-diff-port-196710 in network mk-default-k8s-diff-port-196710
	I0603 12:06:48.560320   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | I0603 12:06:48.560259   74483 retry.go:31] will retry after 1.175521014s: waiting for machine to come up
	I0603 12:06:46.011238   73179 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.30.1: (1.996357708s)
	I0603 12:06:46.011274   73179 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/19008-7755/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.30.1 from cache
	I0603 12:06:46.011313   73179 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.12-0
	I0603 12:06:46.011322   73179 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-controller-manager_v1.30.1: (1.96210268s)
	I0603 12:06:46.011332   73179 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.30.1: (1.962169544s)
	I0603 12:06:46.011353   73179 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.30.1 (exists)
	I0603 12:06:46.011367   73179 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/19008-7755/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.30.1
	I0603 12:06:46.011386   73179 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.12-0
	I0603 12:06:46.011397   73179 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1: (1.962226902s)
	I0603 12:06:46.011424   73179 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/19008-7755/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0603 12:06:46.011426   73179 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (1.596439345s)
	I0603 12:06:46.011451   73179 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-scheduler_v1.30.1
	I0603 12:06:46.011474   73179 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0603 12:06:46.011483   73179 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/coredns_v1.11.1
	I0603 12:06:46.011508   73179 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0603 12:06:46.011545   73179 ssh_runner.go:195] Run: which crictl
	I0603 12:06:46.020596   73179 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.30.1 (exists)
	I0603 12:06:46.020599   73179 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0603 12:06:46.020749   73179 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I0603 12:06:49.747952   73179 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (3.727320079s)
	I0603 12:06:49.748008   73179 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/19008-7755/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0603 12:06:49.748024   73179 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.12-0: (3.736616522s)
	I0603 12:06:49.748048   73179 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/19008-7755/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.12-0 from cache
	I0603 12:06:49.748074   73179 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.30.1
	I0603 12:06:49.748108   73179 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5
	I0603 12:06:49.748120   73179 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.30.1
	I0603 12:06:49.753125   73179 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0603 12:06:49.737515   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | domain default-k8s-diff-port-196710 has defined MAC address 52:54:00:9c:61:49 in network mk-default-k8s-diff-port-196710
	I0603 12:06:49.738009   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | unable to find current IP address of domain default-k8s-diff-port-196710 in network mk-default-k8s-diff-port-196710
	I0603 12:06:49.738036   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | I0603 12:06:49.737954   74483 retry.go:31] will retry after 2.132134762s: waiting for machine to come up
	I0603 12:06:51.872423   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | domain default-k8s-diff-port-196710 has defined MAC address 52:54:00:9c:61:49 in network mk-default-k8s-diff-port-196710
	I0603 12:06:51.872917   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | unable to find current IP address of domain default-k8s-diff-port-196710 in network mk-default-k8s-diff-port-196710
	I0603 12:06:51.872945   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | I0603 12:06:51.872857   74483 retry.go:31] will retry after 2.778528878s: waiting for machine to come up
	I0603 12:06:52.416845   73179 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.30.1: (2.668695263s)
	I0603 12:06:52.416881   73179 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/19008-7755/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.30.1 from cache
	I0603 12:06:52.416909   73179 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.30.1
	I0603 12:06:52.417012   73179 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.30.1
	I0603 12:06:54.588430   73179 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.30.1: (2.171386022s)
	I0603 12:06:54.588455   73179 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/19008-7755/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.30.1 from cache
	I0603 12:06:54.588480   73179 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.30.1
	I0603 12:06:54.588528   73179 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.30.1
	I0603 12:06:54.653098   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | domain default-k8s-diff-port-196710 has defined MAC address 52:54:00:9c:61:49 in network mk-default-k8s-diff-port-196710
	I0603 12:06:54.653566   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | unable to find current IP address of domain default-k8s-diff-port-196710 in network mk-default-k8s-diff-port-196710
	I0603 12:06:54.653596   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | I0603 12:06:54.653504   74483 retry.go:31] will retry after 2.88020763s: waiting for machine to come up
	I0603 12:06:57.535688   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | domain default-k8s-diff-port-196710 has defined MAC address 52:54:00:9c:61:49 in network mk-default-k8s-diff-port-196710
	I0603 12:06:57.536303   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | unable to find current IP address of domain default-k8s-diff-port-196710 in network mk-default-k8s-diff-port-196710
	I0603 12:06:57.536331   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | I0603 12:06:57.536246   74483 retry.go:31] will retry after 4.007108619s: waiting for machine to come up
	I0603 12:06:55.946565   73179 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.30.1: (1.358013442s)
	I0603 12:06:55.946595   73179 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/19008-7755/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.30.1 from cache
	I0603 12:06:55.946618   73179 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0603 12:06:55.946654   73179 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I0603 12:06:57.739662   73179 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (1.792982594s)
	I0603 12:06:57.739693   73179 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/19008-7755/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0603 12:06:57.739720   73179 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0603 12:06:57.739766   73179 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0603 12:06:58.592007   73179 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/19008-7755/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0603 12:06:58.592049   73179 cache_images.go:123] Successfully loaded all cached images
	I0603 12:06:58.592075   73179 cache_images.go:92] duration metric: took 15.054652125s to LoadCachedImages
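	(The cache-load phase above repeats one pattern per image: inspect the runtime's store for the expected image ID, remove the tag if the hash does not match, then stream the cached tarball in with podman. Illustrated with the etcd image, using only commands that appear in the log:

	    sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.12-0
	    sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.12-0
	    sudo podman load -i /var/lib/minikube/images/etcd_3.5.12-0

	The ~15s total is dominated by the podman load calls, which run one image at a time.)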
	I0603 12:06:58.592096   73179 kubeadm.go:928] updating node { 192.168.50.245 8443 v1.30.1 crio true true} ...
	I0603 12:06:58.592210   73179 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-602118 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.245
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.1 ClusterName:no-preload-602118 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0603 12:06:58.592278   73179 ssh_runner.go:195] Run: crio config
	I0603 12:06:58.637533   73179 cni.go:84] Creating CNI manager for ""
	I0603 12:06:58.637561   73179 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0603 12:06:58.637583   73179 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0603 12:06:58.637620   73179 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.245 APIServerPort:8443 KubernetesVersion:v1.30.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-602118 NodeName:no-preload-602118 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.245"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.245 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0603 12:06:58.637822   73179 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.245
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-602118"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.245
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.245"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
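	(In the rendered kubeadm/kubelet config above, the %!"(MISSING) sequences are log-formatting artifacts; the values minikube writes are plain percentages, so the evictionHard block is presumably:

	    evictionHard:
	      nodefs.available: "0%"
	      nodefs.inodesFree: "0%"
	      imagefs.available: "0%"

	which, together with imageGCHighThresholdPercent: 100, disables disk-based eviction as the template comment says.)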
	
	I0603 12:06:58.637918   73179 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.1
	I0603 12:06:58.649096   73179 binaries.go:44] Found k8s binaries, skipping transfer
	I0603 12:06:58.649150   73179 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0603 12:06:58.658815   73179 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0603 12:06:58.675538   73179 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0603 12:06:58.692443   73179 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2161 bytes)
	I0603 12:06:58.709416   73179 ssh_runner.go:195] Run: grep 192.168.50.245	control-plane.minikube.internal$ /etc/hosts
	I0603 12:06:58.713241   73179 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.245	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0603 12:06:58.725522   73179 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0603 12:06:58.846624   73179 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0603 12:06:58.864101   73179 certs.go:68] Setting up /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/no-preload-602118 for IP: 192.168.50.245
	I0603 12:06:58.864129   73179 certs.go:194] generating shared ca certs ...
	I0603 12:06:58.864149   73179 certs.go:226] acquiring lock for ca certs: {Name:mk50092df3bdecbf959926adc377ee06b75aacd9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 12:06:58.864311   73179 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19008-7755/.minikube/ca.key
	I0603 12:06:58.864362   73179 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19008-7755/.minikube/proxy-client-ca.key
	I0603 12:06:58.864376   73179 certs.go:256] generating profile certs ...
	I0603 12:06:58.864473   73179 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/no-preload-602118/client.key
	I0603 12:06:58.864551   73179 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/no-preload-602118/apiserver.key.eef28f92
	I0603 12:06:58.864602   73179 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/no-preload-602118/proxy-client.key
	I0603 12:06:58.864744   73179 certs.go:484] found cert: /home/jenkins/minikube-integration/19008-7755/.minikube/certs/15028.pem (1338 bytes)
	W0603 12:06:58.864786   73179 certs.go:480] ignoring /home/jenkins/minikube-integration/19008-7755/.minikube/certs/15028_empty.pem, impossibly tiny 0 bytes
	I0603 12:06:58.864800   73179 certs.go:484] found cert: /home/jenkins/minikube-integration/19008-7755/.minikube/certs/ca-key.pem (1679 bytes)
	I0603 12:06:58.864836   73179 certs.go:484] found cert: /home/jenkins/minikube-integration/19008-7755/.minikube/certs/ca.pem (1082 bytes)
	I0603 12:06:58.864869   73179 certs.go:484] found cert: /home/jenkins/minikube-integration/19008-7755/.minikube/certs/cert.pem (1123 bytes)
	I0603 12:06:58.864900   73179 certs.go:484] found cert: /home/jenkins/minikube-integration/19008-7755/.minikube/certs/key.pem (1679 bytes)
	I0603 12:06:58.865039   73179 certs.go:484] found cert: /home/jenkins/minikube-integration/19008-7755/.minikube/files/etc/ssl/certs/150282.pem (1708 bytes)
	I0603 12:06:58.865705   73179 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0603 12:06:58.898291   73179 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0603 12:06:58.923481   73179 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0603 12:06:58.955249   73179 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0603 12:06:58.986524   73179 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/no-preload-602118/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0603 12:06:59.037456   73179 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/no-preload-602118/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0603 12:06:59.061989   73179 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/no-preload-602118/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0603 12:06:59.085738   73179 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/no-preload-602118/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0603 12:06:59.109202   73179 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0603 12:06:59.132149   73179 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/certs/15028.pem --> /usr/share/ca-certificates/15028.pem (1338 bytes)
	I0603 12:06:59.154957   73179 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/files/etc/ssl/certs/150282.pem --> /usr/share/ca-certificates/150282.pem (1708 bytes)
	I0603 12:06:59.177797   73179 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0603 12:06:59.194816   73179 ssh_runner.go:195] Run: openssl version
	I0603 12:06:59.200714   73179 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0603 12:06:59.211392   73179 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0603 12:06:59.215900   73179 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jun  3 10:39 /usr/share/ca-certificates/minikubeCA.pem
	I0603 12:06:59.215950   73179 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0603 12:06:59.221796   73179 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0603 12:06:59.232655   73179 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15028.pem && ln -fs /usr/share/ca-certificates/15028.pem /etc/ssl/certs/15028.pem"
	I0603 12:06:59.243679   73179 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15028.pem
	I0603 12:06:59.248120   73179 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jun  3 10:51 /usr/share/ca-certificates/15028.pem
	I0603 12:06:59.248168   73179 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15028.pem
	I0603 12:06:59.253816   73179 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/15028.pem /etc/ssl/certs/51391683.0"
	I0603 12:06:59.264416   73179 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/150282.pem && ln -fs /usr/share/ca-certificates/150282.pem /etc/ssl/certs/150282.pem"
	I0603 12:06:59.275143   73179 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/150282.pem
	I0603 12:06:59.279393   73179 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jun  3 10:51 /usr/share/ca-certificates/150282.pem
	I0603 12:06:59.279431   73179 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/150282.pem
	I0603 12:06:59.285269   73179 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/150282.pem /etc/ssl/certs/3ec20f2e.0"
	I0603 12:06:59.295789   73179 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0603 12:06:59.300138   73179 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0603 12:06:59.305722   73179 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0603 12:06:59.311381   73179 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0603 12:06:59.317037   73179 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0603 12:06:59.322539   73179 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0603 12:06:59.328067   73179 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
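	(Each openssl x509 ... -checkend 86400 call above asks whether the certificate will still be valid 86400 seconds, i.e. 24 hours, from now; exit status 0 means it will not expire in that window, so minikube can reuse the existing certs instead of regenerating them. A standalone check looks like:

	    openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400 \
	      && echo "valid for at least 24h" || echo "expires within 24h"

	)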
	I0603 12:06:59.333575   73179 kubeadm.go:391] StartCluster: {Name:no-preload-602118 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:no-preload-602118 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.245 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0603 12:06:59.333659   73179 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0603 12:06:59.333712   73179 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0603 12:06:59.374413   73179 cri.go:89] found id: ""
	I0603 12:06:59.374471   73179 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0603 12:06:59.384802   73179 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0603 12:06:59.384819   73179 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0603 12:06:59.384832   73179 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0603 12:06:59.384878   73179 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0603 12:06:59.394669   73179 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0603 12:06:59.395564   73179 kubeconfig.go:125] found "no-preload-602118" server: "https://192.168.50.245:8443"
	I0603 12:06:59.397420   73179 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0603 12:06:59.407251   73179 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.50.245
	I0603 12:06:59.407281   73179 kubeadm.go:1154] stopping kube-system containers ...
	I0603 12:06:59.407295   73179 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0603 12:06:59.407347   73179 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0603 12:06:59.452986   73179 cri.go:89] found id: ""
	I0603 12:06:59.453067   73179 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0603 12:06:59.470164   73179 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0603 12:06:59.480228   73179 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0603 12:06:59.480249   73179 kubeadm.go:156] found existing configuration files:
	
	I0603 12:06:59.480291   73179 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0603 12:06:59.489923   73179 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0603 12:06:59.489968   73179 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0603 12:06:59.499530   73179 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0603 12:06:59.508336   73179 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0603 12:06:59.508376   73179 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0603 12:06:59.517665   73179 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0603 12:06:59.526660   73179 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0603 12:06:59.526697   73179 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0603 12:06:59.535973   73179 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0603 12:06:59.544846   73179 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0603 12:06:59.544885   73179 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
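The grep-then-rm sequence above keeps only the kubeconfig files under /etc/kubernetes that already point at https://control-plane.minikube.internal:8443; anything missing or pointing elsewhere is removed so the following `kubeadm init phase kubeconfig all` regenerates it. A rough sketch of that per-file decision in Go (an illustration of the pattern, not the tool's actual code):

    package main

    import (
    	"bytes"
    	"fmt"
    	"os"
    	"path/filepath"
    )

    // removeIfMissingEndpoint deletes conf files that do not reference the
    // expected control-plane endpoint, mirroring the grep-then-rm sequence
    // in the log above.
    func removeIfMissingEndpoint(dir, endpoint string, names []string) error {
    	for _, name := range names {
    		path := filepath.Join(dir, name)
    		data, err := os.ReadFile(path)
    		if err != nil || !bytes.Contains(data, []byte(endpoint)) {
    			// Missing file or wrong endpoint: remove so it gets regenerated.
    			if rmErr := os.Remove(path); rmErr != nil && !os.IsNotExist(rmErr) {
    				return rmErr
    			}
    			fmt.Println("removed stale config:", path)
    		}
    	}
    	return nil
    }

    func main() {
    	names := []string{"admin.conf", "kubelet.conf", "controller-manager.conf", "scheduler.conf"}
    	if err := removeIfMissingEndpoint("/etc/kubernetes", "https://control-plane.minikube.internal:8443", names); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    }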
	I0603 12:06:59.554342   73179 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0603 12:06:59.563632   73179 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0603 12:06:59.673234   73179 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0603 12:07:02.883984   73662 start.go:364] duration metric: took 4m2.688332749s to acquireMachinesLock for "old-k8s-version-905554"
	I0603 12:07:02.884045   73662 start.go:96] Skipping create...Using existing machine configuration
	I0603 12:07:02.884052   73662 fix.go:54] fixHost starting: 
	I0603 12:07:02.884482   73662 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 12:07:02.884520   73662 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 12:07:02.905120   73662 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45229
	I0603 12:07:02.905571   73662 main.go:141] libmachine: () Calling .GetVersion
	I0603 12:07:02.906128   73662 main.go:141] libmachine: Using API Version  1
	I0603 12:07:02.906157   73662 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 12:07:02.906519   73662 main.go:141] libmachine: () Calling .GetMachineName
	I0603 12:07:02.906709   73662 main.go:141] libmachine: (old-k8s-version-905554) Calling .DriverName
	I0603 12:07:02.906852   73662 main.go:141] libmachine: (old-k8s-version-905554) Calling .GetState
	I0603 12:07:02.908371   73662 fix.go:112] recreateIfNeeded on old-k8s-version-905554: state=Stopped err=<nil>
	I0603 12:07:02.908412   73662 main.go:141] libmachine: (old-k8s-version-905554) Calling .DriverName
	W0603 12:07:02.908577   73662 fix.go:138] unexpected machine state, will restart: <nil>
	I0603 12:07:02.910440   73662 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-905554" ...
	I0603 12:07:01.548241   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | domain default-k8s-diff-port-196710 has defined MAC address 52:54:00:9c:61:49 in network mk-default-k8s-diff-port-196710
	I0603 12:07:01.548698   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Found IP for machine: 192.168.61.60
	I0603 12:07:01.548720   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Reserving static IP address...
	I0603 12:07:01.548734   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | domain default-k8s-diff-port-196710 has current primary IP address 192.168.61.60 and MAC address 52:54:00:9c:61:49 in network mk-default-k8s-diff-port-196710
	I0603 12:07:01.549093   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-196710", mac: "52:54:00:9c:61:49", ip: "192.168.61.60"} in network mk-default-k8s-diff-port-196710: {Iface:virbr4 ExpiryTime:2024-06-03 12:58:47 +0000 UTC Type:0 Mac:52:54:00:9c:61:49 Iaid: IPaddr:192.168.61.60 Prefix:24 Hostname:default-k8s-diff-port-196710 Clientid:01:52:54:00:9c:61:49}
	I0603 12:07:01.549127   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | skip adding static IP to network mk-default-k8s-diff-port-196710 - found existing host DHCP lease matching {name: "default-k8s-diff-port-196710", mac: "52:54:00:9c:61:49", ip: "192.168.61.60"}
	I0603 12:07:01.549141   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Reserved static IP address: 192.168.61.60
	I0603 12:07:01.549161   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | Getting to WaitForSSH function...
	I0603 12:07:01.549171   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Waiting for SSH to be available...
	I0603 12:07:01.551680   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | domain default-k8s-diff-port-196710 has defined MAC address 52:54:00:9c:61:49 in network mk-default-k8s-diff-port-196710
	I0603 12:07:01.551959   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:61:49", ip: ""} in network mk-default-k8s-diff-port-196710: {Iface:virbr4 ExpiryTime:2024-06-03 12:58:47 +0000 UTC Type:0 Mac:52:54:00:9c:61:49 Iaid: IPaddr:192.168.61.60 Prefix:24 Hostname:default-k8s-diff-port-196710 Clientid:01:52:54:00:9c:61:49}
	I0603 12:07:01.551996   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | domain default-k8s-diff-port-196710 has defined IP address 192.168.61.60 and MAC address 52:54:00:9c:61:49 in network mk-default-k8s-diff-port-196710
	I0603 12:07:01.552051   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | Using SSH client type: external
	I0603 12:07:01.552124   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | Using SSH private key: /home/jenkins/minikube-integration/19008-7755/.minikube/machines/default-k8s-diff-port-196710/id_rsa (-rw-------)
	I0603 12:07:01.552160   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.60 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19008-7755/.minikube/machines/default-k8s-diff-port-196710/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0603 12:07:01.552181   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | About to run SSH command:
	I0603 12:07:01.552194   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | exit 0
	I0603 12:07:01.674944   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | SSH cmd err, output: <nil>: 
	I0603 12:07:01.675373   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .GetConfigRaw
	I0603 12:07:01.676105   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .GetIP
	I0603 12:07:01.678480   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | domain default-k8s-diff-port-196710 has defined MAC address 52:54:00:9c:61:49 in network mk-default-k8s-diff-port-196710
	I0603 12:07:01.678823   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:61:49", ip: ""} in network mk-default-k8s-diff-port-196710: {Iface:virbr4 ExpiryTime:2024-06-03 12:58:47 +0000 UTC Type:0 Mac:52:54:00:9c:61:49 Iaid: IPaddr:192.168.61.60 Prefix:24 Hostname:default-k8s-diff-port-196710 Clientid:01:52:54:00:9c:61:49}
	I0603 12:07:01.678854   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | domain default-k8s-diff-port-196710 has defined IP address 192.168.61.60 and MAC address 52:54:00:9c:61:49 in network mk-default-k8s-diff-port-196710
	I0603 12:07:01.679088   73294 profile.go:143] Saving config to /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/default-k8s-diff-port-196710/config.json ...
	I0603 12:07:01.679311   73294 machine.go:94] provisionDockerMachine start ...
	I0603 12:07:01.679332   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .DriverName
	I0603 12:07:01.679525   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .GetSSHHostname
	I0603 12:07:01.681641   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | domain default-k8s-diff-port-196710 has defined MAC address 52:54:00:9c:61:49 in network mk-default-k8s-diff-port-196710
	I0603 12:07:01.681931   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:61:49", ip: ""} in network mk-default-k8s-diff-port-196710: {Iface:virbr4 ExpiryTime:2024-06-03 12:58:47 +0000 UTC Type:0 Mac:52:54:00:9c:61:49 Iaid: IPaddr:192.168.61.60 Prefix:24 Hostname:default-k8s-diff-port-196710 Clientid:01:52:54:00:9c:61:49}
	I0603 12:07:01.681964   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | domain default-k8s-diff-port-196710 has defined IP address 192.168.61.60 and MAC address 52:54:00:9c:61:49 in network mk-default-k8s-diff-port-196710
	I0603 12:07:01.682121   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .GetSSHPort
	I0603 12:07:01.682291   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .GetSSHKeyPath
	I0603 12:07:01.682466   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .GetSSHKeyPath
	I0603 12:07:01.682611   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .GetSSHUsername
	I0603 12:07:01.682753   73294 main.go:141] libmachine: Using SSH client type: native
	I0603 12:07:01.682949   73294 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.61.60 22 <nil> <nil>}
	I0603 12:07:01.682962   73294 main.go:141] libmachine: About to run SSH command:
	hostname
	I0603 12:07:01.787146   73294 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0603 12:07:01.787176   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .GetMachineName
	I0603 12:07:01.787425   73294 buildroot.go:166] provisioning hostname "default-k8s-diff-port-196710"
	I0603 12:07:01.787448   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .GetMachineName
	I0603 12:07:01.787638   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .GetSSHHostname
	I0603 12:07:01.790151   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | domain default-k8s-diff-port-196710 has defined MAC address 52:54:00:9c:61:49 in network mk-default-k8s-diff-port-196710
	I0603 12:07:01.790487   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:61:49", ip: ""} in network mk-default-k8s-diff-port-196710: {Iface:virbr4 ExpiryTime:2024-06-03 12:58:47 +0000 UTC Type:0 Mac:52:54:00:9c:61:49 Iaid: IPaddr:192.168.61.60 Prefix:24 Hostname:default-k8s-diff-port-196710 Clientid:01:52:54:00:9c:61:49}
	I0603 12:07:01.790512   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | domain default-k8s-diff-port-196710 has defined IP address 192.168.61.60 and MAC address 52:54:00:9c:61:49 in network mk-default-k8s-diff-port-196710
	I0603 12:07:01.790646   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .GetSSHPort
	I0603 12:07:01.790812   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .GetSSHKeyPath
	I0603 12:07:01.790964   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .GetSSHKeyPath
	I0603 12:07:01.791133   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .GetSSHUsername
	I0603 12:07:01.791272   73294 main.go:141] libmachine: Using SSH client type: native
	I0603 12:07:01.791477   73294 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.61.60 22 <nil> <nil>}
	I0603 12:07:01.791496   73294 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-196710 && echo "default-k8s-diff-port-196710" | sudo tee /etc/hostname
	I0603 12:07:01.916785   73294 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-196710
	
	I0603 12:07:01.916820   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .GetSSHHostname
	I0603 12:07:01.919809   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | domain default-k8s-diff-port-196710 has defined MAC address 52:54:00:9c:61:49 in network mk-default-k8s-diff-port-196710
	I0603 12:07:01.920225   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:61:49", ip: ""} in network mk-default-k8s-diff-port-196710: {Iface:virbr4 ExpiryTime:2024-06-03 12:58:47 +0000 UTC Type:0 Mac:52:54:00:9c:61:49 Iaid: IPaddr:192.168.61.60 Prefix:24 Hostname:default-k8s-diff-port-196710 Clientid:01:52:54:00:9c:61:49}
	I0603 12:07:01.920264   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | domain default-k8s-diff-port-196710 has defined IP address 192.168.61.60 and MAC address 52:54:00:9c:61:49 in network mk-default-k8s-diff-port-196710
	I0603 12:07:01.920552   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .GetSSHPort
	I0603 12:07:01.920756   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .GetSSHKeyPath
	I0603 12:07:01.920947   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .GetSSHKeyPath
	I0603 12:07:01.921145   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .GetSSHUsername
	I0603 12:07:01.921363   73294 main.go:141] libmachine: Using SSH client type: native
	I0603 12:07:01.921645   73294 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.61.60 22 <nil> <nil>}
	I0603 12:07:01.921671   73294 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-196710' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-196710/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-196710' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0603 12:07:02.048767   73294 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0603 12:07:02.048797   73294 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19008-7755/.minikube CaCertPath:/home/jenkins/minikube-integration/19008-7755/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19008-7755/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19008-7755/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19008-7755/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19008-7755/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19008-7755/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19008-7755/.minikube}
	I0603 12:07:02.048851   73294 buildroot.go:174] setting up certificates
	I0603 12:07:02.048866   73294 provision.go:84] configureAuth start
	I0603 12:07:02.048883   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .GetMachineName
	I0603 12:07:02.049168   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .GetIP
	I0603 12:07:02.051709   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | domain default-k8s-diff-port-196710 has defined MAC address 52:54:00:9c:61:49 in network mk-default-k8s-diff-port-196710
	I0603 12:07:02.052111   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:61:49", ip: ""} in network mk-default-k8s-diff-port-196710: {Iface:virbr4 ExpiryTime:2024-06-03 12:58:47 +0000 UTC Type:0 Mac:52:54:00:9c:61:49 Iaid: IPaddr:192.168.61.60 Prefix:24 Hostname:default-k8s-diff-port-196710 Clientid:01:52:54:00:9c:61:49}
	I0603 12:07:02.052151   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | domain default-k8s-diff-port-196710 has defined IP address 192.168.61.60 and MAC address 52:54:00:9c:61:49 in network mk-default-k8s-diff-port-196710
	I0603 12:07:02.052295   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .GetSSHHostname
	I0603 12:07:02.054716   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | domain default-k8s-diff-port-196710 has defined MAC address 52:54:00:9c:61:49 in network mk-default-k8s-diff-port-196710
	I0603 12:07:02.055073   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:61:49", ip: ""} in network mk-default-k8s-diff-port-196710: {Iface:virbr4 ExpiryTime:2024-06-03 12:58:47 +0000 UTC Type:0 Mac:52:54:00:9c:61:49 Iaid: IPaddr:192.168.61.60 Prefix:24 Hostname:default-k8s-diff-port-196710 Clientid:01:52:54:00:9c:61:49}
	I0603 12:07:02.055106   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | domain default-k8s-diff-port-196710 has defined IP address 192.168.61.60 and MAC address 52:54:00:9c:61:49 in network mk-default-k8s-diff-port-196710
	I0603 12:07:02.055262   73294 provision.go:143] copyHostCerts
	I0603 12:07:02.055334   73294 exec_runner.go:144] found /home/jenkins/minikube-integration/19008-7755/.minikube/ca.pem, removing ...
	I0603 12:07:02.055349   73294 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19008-7755/.minikube/ca.pem
	I0603 12:07:02.055408   73294 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19008-7755/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19008-7755/.minikube/ca.pem (1082 bytes)
	I0603 12:07:02.055527   73294 exec_runner.go:144] found /home/jenkins/minikube-integration/19008-7755/.minikube/cert.pem, removing ...
	I0603 12:07:02.055539   73294 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19008-7755/.minikube/cert.pem
	I0603 12:07:02.055568   73294 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19008-7755/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19008-7755/.minikube/cert.pem (1123 bytes)
	I0603 12:07:02.055648   73294 exec_runner.go:144] found /home/jenkins/minikube-integration/19008-7755/.minikube/key.pem, removing ...
	I0603 12:07:02.055659   73294 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19008-7755/.minikube/key.pem
	I0603 12:07:02.055684   73294 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19008-7755/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19008-7755/.minikube/key.pem (1679 bytes)
	I0603 12:07:02.055753   73294 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19008-7755/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19008-7755/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19008-7755/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-196710 san=[127.0.0.1 192.168.61.60 default-k8s-diff-port-196710 localhost minikube]
	I0603 12:07:02.172134   73294 provision.go:177] copyRemoteCerts
	I0603 12:07:02.172192   73294 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0603 12:07:02.172217   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .GetSSHHostname
	I0603 12:07:02.175333   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | domain default-k8s-diff-port-196710 has defined MAC address 52:54:00:9c:61:49 in network mk-default-k8s-diff-port-196710
	I0603 12:07:02.175724   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:61:49", ip: ""} in network mk-default-k8s-diff-port-196710: {Iface:virbr4 ExpiryTime:2024-06-03 12:58:47 +0000 UTC Type:0 Mac:52:54:00:9c:61:49 Iaid: IPaddr:192.168.61.60 Prefix:24 Hostname:default-k8s-diff-port-196710 Clientid:01:52:54:00:9c:61:49}
	I0603 12:07:02.175749   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | domain default-k8s-diff-port-196710 has defined IP address 192.168.61.60 and MAC address 52:54:00:9c:61:49 in network mk-default-k8s-diff-port-196710
	I0603 12:07:02.175996   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .GetSSHPort
	I0603 12:07:02.176203   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .GetSSHKeyPath
	I0603 12:07:02.176405   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .GetSSHUsername
	I0603 12:07:02.176599   73294 sshutil.go:53] new ssh client: &{IP:192.168.61.60 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19008-7755/.minikube/machines/default-k8s-diff-port-196710/id_rsa Username:docker}
	I0603 12:07:02.273410   73294 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0603 12:07:02.302337   73294 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0603 12:07:02.326471   73294 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0603 12:07:02.350709   73294 provision.go:87] duration metric: took 301.827273ms to configureAuth
	I0603 12:07:02.350742   73294 buildroot.go:189] setting minikube options for container-runtime
	I0603 12:07:02.350977   73294 config.go:182] Loaded profile config "default-k8s-diff-port-196710": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0603 12:07:02.351086   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .GetSSHHostname
	I0603 12:07:02.354023   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | domain default-k8s-diff-port-196710 has defined MAC address 52:54:00:9c:61:49 in network mk-default-k8s-diff-port-196710
	I0603 12:07:02.354434   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:61:49", ip: ""} in network mk-default-k8s-diff-port-196710: {Iface:virbr4 ExpiryTime:2024-06-03 12:58:47 +0000 UTC Type:0 Mac:52:54:00:9c:61:49 Iaid: IPaddr:192.168.61.60 Prefix:24 Hostname:default-k8s-diff-port-196710 Clientid:01:52:54:00:9c:61:49}
	I0603 12:07:02.354465   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | domain default-k8s-diff-port-196710 has defined IP address 192.168.61.60 and MAC address 52:54:00:9c:61:49 in network mk-default-k8s-diff-port-196710
	I0603 12:07:02.354613   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .GetSSHPort
	I0603 12:07:02.354813   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .GetSSHKeyPath
	I0603 12:07:02.354996   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .GetSSHKeyPath
	I0603 12:07:02.355176   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .GetSSHUsername
	I0603 12:07:02.355385   73294 main.go:141] libmachine: Using SSH client type: native
	I0603 12:07:02.355603   73294 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.61.60 22 <nil> <nil>}
	I0603 12:07:02.355633   73294 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0603 12:07:02.636420   73294 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0603 12:07:02.636453   73294 machine.go:97] duration metric: took 957.127741ms to provisionDockerMachine
	I0603 12:07:02.636467   73294 start.go:293] postStartSetup for "default-k8s-diff-port-196710" (driver="kvm2")
	I0603 12:07:02.636480   73294 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0603 12:07:02.636507   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .DriverName
	I0603 12:07:02.636828   73294 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0603 12:07:02.636860   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .GetSSHHostname
	I0603 12:07:02.639699   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | domain default-k8s-diff-port-196710 has defined MAC address 52:54:00:9c:61:49 in network mk-default-k8s-diff-port-196710
	I0603 12:07:02.640122   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:61:49", ip: ""} in network mk-default-k8s-diff-port-196710: {Iface:virbr4 ExpiryTime:2024-06-03 12:58:47 +0000 UTC Type:0 Mac:52:54:00:9c:61:49 Iaid: IPaddr:192.168.61.60 Prefix:24 Hostname:default-k8s-diff-port-196710 Clientid:01:52:54:00:9c:61:49}
	I0603 12:07:02.640155   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | domain default-k8s-diff-port-196710 has defined IP address 192.168.61.60 and MAC address 52:54:00:9c:61:49 in network mk-default-k8s-diff-port-196710
	I0603 12:07:02.640282   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .GetSSHPort
	I0603 12:07:02.640462   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .GetSSHKeyPath
	I0603 12:07:02.640647   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .GetSSHUsername
	I0603 12:07:02.640907   73294 sshutil.go:53] new ssh client: &{IP:192.168.61.60 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19008-7755/.minikube/machines/default-k8s-diff-port-196710/id_rsa Username:docker}
	I0603 12:07:02.729745   73294 ssh_runner.go:195] Run: cat /etc/os-release
	I0603 12:07:02.734393   73294 info.go:137] Remote host: Buildroot 2023.02.9
	I0603 12:07:02.734414   73294 filesync.go:126] Scanning /home/jenkins/minikube-integration/19008-7755/.minikube/addons for local assets ...
	I0603 12:07:02.734476   73294 filesync.go:126] Scanning /home/jenkins/minikube-integration/19008-7755/.minikube/files for local assets ...
	I0603 12:07:02.734545   73294 filesync.go:149] local asset: /home/jenkins/minikube-integration/19008-7755/.minikube/files/etc/ssl/certs/150282.pem -> 150282.pem in /etc/ssl/certs
	I0603 12:07:02.734623   73294 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0603 12:07:02.744239   73294 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/files/etc/ssl/certs/150282.pem --> /etc/ssl/certs/150282.pem (1708 bytes)
	I0603 12:07:02.770883   73294 start.go:296] duration metric: took 134.402064ms for postStartSetup
	I0603 12:07:02.770918   73294 fix.go:56] duration metric: took 20.69069756s for fixHost
	I0603 12:07:02.770940   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .GetSSHHostname
	I0603 12:07:02.773675   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | domain default-k8s-diff-port-196710 has defined MAC address 52:54:00:9c:61:49 in network mk-default-k8s-diff-port-196710
	I0603 12:07:02.773977   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:61:49", ip: ""} in network mk-default-k8s-diff-port-196710: {Iface:virbr4 ExpiryTime:2024-06-03 12:58:47 +0000 UTC Type:0 Mac:52:54:00:9c:61:49 Iaid: IPaddr:192.168.61.60 Prefix:24 Hostname:default-k8s-diff-port-196710 Clientid:01:52:54:00:9c:61:49}
	I0603 12:07:02.774010   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | domain default-k8s-diff-port-196710 has defined IP address 192.168.61.60 and MAC address 52:54:00:9c:61:49 in network mk-default-k8s-diff-port-196710
	I0603 12:07:02.774111   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .GetSSHPort
	I0603 12:07:02.774329   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .GetSSHKeyPath
	I0603 12:07:02.774482   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .GetSSHKeyPath
	I0603 12:07:02.774635   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .GetSSHUsername
	I0603 12:07:02.774814   73294 main.go:141] libmachine: Using SSH client type: native
	I0603 12:07:02.774984   73294 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.61.60 22 <nil> <nil>}
	I0603 12:07:02.774998   73294 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0603 12:07:02.883831   73294 main.go:141] libmachine: SSH cmd err, output: <nil>: 1717416422.860813739
	
	I0603 12:07:02.883859   73294 fix.go:216] guest clock: 1717416422.860813739
	I0603 12:07:02.883870   73294 fix.go:229] Guest: 2024-06-03 12:07:02.860813739 +0000 UTC Remote: 2024-06-03 12:07:02.770922212 +0000 UTC m=+288.221479764 (delta=89.891527ms)
	I0603 12:07:02.883896   73294 fix.go:200] guest clock delta is within tolerance: 89.891527ms
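The clock check above reads the guest's `date` over SSH, compares it against the host clock, and proceeds only because the ~90ms delta is inside tolerance. A small illustrative sketch of such a comparison; the 2-second tolerance below is an assumed value, not one taken from the log:

    package main

    import (
    	"fmt"
    	"time"
    )

    // clockDeltaOK reports whether the guest clock is within tolerance of the
    // host clock; the values mirror the Guest/Remote timestamps logged above.
    func clockDeltaOK(guest, host time.Time, tolerance time.Duration) (time.Duration, bool) {
    	delta := guest.Sub(host)
    	if delta < 0 {
    		delta = -delta
    	}
    	return delta, delta <= tolerance
    }

    func main() {
    	host := time.Now()
    	guest := host.Add(90 * time.Millisecond) // roughly the 89.89ms delta seen above
    	delta, ok := clockDeltaOK(guest, host, 2*time.Second) // assumed tolerance
    	fmt.Printf("delta=%v within tolerance: %v\n", delta, ok)
    }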
	I0603 12:07:02.883902   73294 start.go:83] releasing machines lock for "default-k8s-diff-port-196710", held for 20.803713434s
	I0603 12:07:02.883935   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .DriverName
	I0603 12:07:02.884217   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .GetIP
	I0603 12:07:02.887393   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | domain default-k8s-diff-port-196710 has defined MAC address 52:54:00:9c:61:49 in network mk-default-k8s-diff-port-196710
	I0603 12:07:02.887758   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:61:49", ip: ""} in network mk-default-k8s-diff-port-196710: {Iface:virbr4 ExpiryTime:2024-06-03 12:58:47 +0000 UTC Type:0 Mac:52:54:00:9c:61:49 Iaid: IPaddr:192.168.61.60 Prefix:24 Hostname:default-k8s-diff-port-196710 Clientid:01:52:54:00:9c:61:49}
	I0603 12:07:02.887789   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | domain default-k8s-diff-port-196710 has defined IP address 192.168.61.60 and MAC address 52:54:00:9c:61:49 in network mk-default-k8s-diff-port-196710
	I0603 12:07:02.887954   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .DriverName
	I0603 12:07:02.888465   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .DriverName
	I0603 12:07:02.888616   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .DriverName
	I0603 12:07:02.888698   73294 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0603 12:07:02.888770   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .GetSSHHostname
	I0603 12:07:02.888871   73294 ssh_runner.go:195] Run: cat /version.json
	I0603 12:07:02.888913   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .GetSSHHostname
	I0603 12:07:02.891596   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | domain default-k8s-diff-port-196710 has defined MAC address 52:54:00:9c:61:49 in network mk-default-k8s-diff-port-196710
	I0603 12:07:02.891957   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | domain default-k8s-diff-port-196710 has defined MAC address 52:54:00:9c:61:49 in network mk-default-k8s-diff-port-196710
	I0603 12:07:02.892009   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:61:49", ip: ""} in network mk-default-k8s-diff-port-196710: {Iface:virbr4 ExpiryTime:2024-06-03 12:58:47 +0000 UTC Type:0 Mac:52:54:00:9c:61:49 Iaid: IPaddr:192.168.61.60 Prefix:24 Hostname:default-k8s-diff-port-196710 Clientid:01:52:54:00:9c:61:49}
	I0603 12:07:02.892051   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | domain default-k8s-diff-port-196710 has defined IP address 192.168.61.60 and MAC address 52:54:00:9c:61:49 in network mk-default-k8s-diff-port-196710
	I0603 12:07:02.892250   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .GetSSHPort
	I0603 12:07:02.892422   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .GetSSHKeyPath
	I0603 12:07:02.892436   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:61:49", ip: ""} in network mk-default-k8s-diff-port-196710: {Iface:virbr4 ExpiryTime:2024-06-03 12:58:47 +0000 UTC Type:0 Mac:52:54:00:9c:61:49 Iaid: IPaddr:192.168.61.60 Prefix:24 Hostname:default-k8s-diff-port-196710 Clientid:01:52:54:00:9c:61:49}
	I0603 12:07:02.892453   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | domain default-k8s-diff-port-196710 has defined IP address 192.168.61.60 and MAC address 52:54:00:9c:61:49 in network mk-default-k8s-diff-port-196710
	I0603 12:07:02.892601   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .GetSSHPort
	I0603 12:07:02.892636   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .GetSSHUsername
	I0603 12:07:02.892758   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .GetSSHKeyPath
	I0603 12:07:02.892777   73294 sshutil.go:53] new ssh client: &{IP:192.168.61.60 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19008-7755/.minikube/machines/default-k8s-diff-port-196710/id_rsa Username:docker}
	I0603 12:07:02.892941   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .GetSSHUsername
	I0603 12:07:02.893092   73294 sshutil.go:53] new ssh client: &{IP:192.168.61.60 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19008-7755/.minikube/machines/default-k8s-diff-port-196710/id_rsa Username:docker}
	I0603 12:07:02.998124   73294 ssh_runner.go:195] Run: systemctl --version
	I0603 12:07:03.005653   73294 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0603 12:07:03.152446   73294 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0603 12:07:03.160607   73294 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0603 12:07:03.160674   73294 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0603 12:07:03.176490   73294 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0603 12:07:03.176513   73294 start.go:494] detecting cgroup driver to use...
	I0603 12:07:03.176581   73294 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0603 12:07:03.195427   73294 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0603 12:07:03.211343   73294 docker.go:217] disabling cri-docker service (if available) ...
	I0603 12:07:03.211398   73294 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0603 12:07:03.227943   73294 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0603 12:07:03.245409   73294 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0603 12:07:03.384124   73294 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0603 12:07:03.529899   73294 docker.go:233] disabling docker service ...
	I0603 12:07:03.529984   73294 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0603 12:07:03.545971   73294 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0603 12:07:03.559981   73294 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0603 12:07:03.726303   73294 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0603 12:07:03.850915   73294 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0603 12:07:03.865591   73294 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0603 12:07:03.884498   73294 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0603 12:07:03.884558   73294 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 12:07:03.897708   73294 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0603 12:07:03.897772   73294 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 12:07:03.912146   73294 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 12:07:03.926435   73294 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 12:07:03.940520   73294 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0603 12:07:03.955122   73294 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 12:07:03.972518   73294 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 12:07:03.997707   73294 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 12:07:04.009020   73294 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0603 12:07:04.024118   73294 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0603 12:07:04.024185   73294 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0603 12:07:04.043959   73294 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0603 12:07:04.057417   73294 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0603 12:07:04.195354   73294 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0603 12:07:04.365103   73294 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0603 12:07:04.365195   73294 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0603 12:07:04.370764   73294 start.go:562] Will wait 60s for crictl version
	I0603 12:07:04.370822   73294 ssh_runner.go:195] Run: which crictl
	I0603 12:07:04.375203   73294 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0603 12:07:04.430761   73294 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0603 12:07:04.430843   73294 ssh_runner.go:195] Run: crio --version
	I0603 12:07:04.471171   73294 ssh_runner.go:195] Run: crio --version
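The `sed -i` commands above rewrite /etc/crio/crio.conf.d/02-crio.conf in place, pinning pause_image to registry.k8s.io/pause:3.9 and cgroup_manager to cgroupfs, before crio is restarted and its version probed. A hedged Go sketch of that style of key rewrite (illustrative; a real tool might prefer a TOML parser over regex replacement, and the paths simply mirror the log):

    package main

    import (
    	"fmt"
    	"os"
    	"regexp"
    )

    // setConfValue replaces a `key = ...` line in a crio drop-in file, roughly
    // what the `sed -i 's|^.*key = .*$|...|'` commands in the log do.
    func setConfValue(path, key, value string) error {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return err
    	}
    	re := regexp.MustCompile(`(?m)^.*` + regexp.QuoteMeta(key) + ` = .*$`)
    	out := re.ReplaceAll(data, []byte(fmt.Sprintf("%s = %q", key, value)))
    	return os.WriteFile(path, out, 0o644)
    }

    func main() {
    	// Run against a copy of the file when experimenting.
    	conf := "/etc/crio/crio.conf.d/02-crio.conf"
    	if err := setConfValue(conf, "pause_image", "registry.k8s.io/pause:3.9"); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    	if err := setConfValue(conf, "cgroup_manager", "cgroupfs"); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    }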
	I0603 12:07:04.506684   73294 out.go:177] * Preparing Kubernetes v1.30.1 on CRI-O 1.29.1 ...
	I0603 12:07:04.508144   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .GetIP
	I0603 12:07:04.510945   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | domain default-k8s-diff-port-196710 has defined MAC address 52:54:00:9c:61:49 in network mk-default-k8s-diff-port-196710
	I0603 12:07:04.511375   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:61:49", ip: ""} in network mk-default-k8s-diff-port-196710: {Iface:virbr4 ExpiryTime:2024-06-03 12:58:47 +0000 UTC Type:0 Mac:52:54:00:9c:61:49 Iaid: IPaddr:192.168.61.60 Prefix:24 Hostname:default-k8s-diff-port-196710 Clientid:01:52:54:00:9c:61:49}
	I0603 12:07:04.511406   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | domain default-k8s-diff-port-196710 has defined IP address 192.168.61.60 and MAC address 52:54:00:9c:61:49 in network mk-default-k8s-diff-port-196710
	I0603 12:07:04.511607   73294 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0603 12:07:04.516367   73294 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0603 12:07:04.532203   73294 kubeadm.go:877] updating cluster {Name:default-k8s-diff-port-196710 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:default-k8s-diff-port-196710 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.60 Port:8444 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0603 12:07:04.532326   73294 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime crio
	I0603 12:07:04.532409   73294 ssh_runner.go:195] Run: sudo crictl images --output json
	I0603 12:07:04.576446   73294 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.1". assuming images are not preloaded.
	I0603 12:07:04.576523   73294 ssh_runner.go:195] Run: which lz4
	I0603 12:07:04.580901   73294 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0603 12:07:02.911700   73662 main.go:141] libmachine: (old-k8s-version-905554) Calling .Start
	I0603 12:07:02.911842   73662 main.go:141] libmachine: (old-k8s-version-905554) Ensuring networks are active...
	I0603 12:07:02.912570   73662 main.go:141] libmachine: (old-k8s-version-905554) Ensuring network default is active
	I0603 12:07:02.912896   73662 main.go:141] libmachine: (old-k8s-version-905554) Ensuring network mk-old-k8s-version-905554 is active
	I0603 12:07:02.913324   73662 main.go:141] libmachine: (old-k8s-version-905554) Getting domain xml...
	I0603 12:07:02.914147   73662 main.go:141] libmachine: (old-k8s-version-905554) Creating domain...
	I0603 12:07:04.233691   73662 main.go:141] libmachine: (old-k8s-version-905554) Waiting to get IP...
	I0603 12:07:04.234800   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | domain old-k8s-version-905554 has defined MAC address 52:54:00:3d:ed:07 in network mk-old-k8s-version-905554
	I0603 12:07:04.235276   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | unable to find current IP address of domain old-k8s-version-905554 in network mk-old-k8s-version-905554
	I0603 12:07:04.235378   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | I0603 12:07:04.235243   74674 retry.go:31] will retry after 297.546447ms: waiting for machine to come up
	I0603 12:07:04.534942   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | domain old-k8s-version-905554 has defined MAC address 52:54:00:3d:ed:07 in network mk-old-k8s-version-905554
	I0603 12:07:04.535492   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | unable to find current IP address of domain old-k8s-version-905554 in network mk-old-k8s-version-905554
	I0603 12:07:04.535522   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | I0603 12:07:04.535456   74674 retry.go:31] will retry after 385.160833ms: waiting for machine to come up
	I0603 12:07:04.922824   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | domain old-k8s-version-905554 has defined MAC address 52:54:00:3d:ed:07 in network mk-old-k8s-version-905554
	I0603 12:07:04.923312   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | unable to find current IP address of domain old-k8s-version-905554 in network mk-old-k8s-version-905554
	I0603 12:07:04.923336   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | I0603 12:07:04.923267   74674 retry.go:31] will retry after 363.309555ms: waiting for machine to come up
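While the old-k8s-version VM boots, the driver repeatedly looks for its IP in the libvirt DHCP leases, sleeping a few hundred milliseconds between attempts (the `retry.go:31] will retry after ...` lines above). A minimal retry-with-backoff sketch in the same spirit; lookupIP here is a hypothetical stand-in for the real lease query, and the delays are assumed values:

    package main

    import (
    	"errors"
    	"fmt"
    	"math/rand"
    	"time"
    )

    // waitForIP polls lookupIP until it succeeds or attempts run out, sleeping
    // a jittered, growing interval between tries, like the retry lines above.
    func waitForIP(lookupIP func() (string, error), attempts int) (string, error) {
    	delay := 300 * time.Millisecond
    	for i := 0; i < attempts; i++ {
    		if ip, err := lookupIP(); err == nil {
    			return ip, nil
    		}
    		jitter := time.Duration(rand.Int63n(int64(delay) / 2))
    		time.Sleep(delay + jitter)
    		delay *= 2
    	}
    	return "", errors.New("timed out waiting for machine IP")
    }

    func main() {
    	calls := 0
    	ip, err := waitForIP(func() (string, error) {
    		calls++
    		if calls < 3 {
    			return "", errors.New("no lease yet") // simulates the "unable to find current IP" lines
    		}
    		return "192.168.0.100", nil // placeholder address
    	}, 10)
    	fmt.Println(ip, err)
    }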
	I0603 12:07:01.017968   73179 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.344700881s)
	I0603 12:07:01.017993   73179 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0603 12:07:01.214414   73179 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0603 12:07:01.291063   73179 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0603 12:07:01.420874   73179 api_server.go:52] waiting for apiserver process to appear ...
	I0603 12:07:01.420977   73179 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:07:01.921439   73179 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:07:02.421904   73179 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:07:02.445051   73179 api_server.go:72] duration metric: took 1.024176056s to wait for apiserver process to appear ...
	I0603 12:07:02.445083   73179 api_server.go:88] waiting for apiserver healthz status ...
	I0603 12:07:02.445112   73179 api_server.go:253] Checking apiserver healthz at https://192.168.50.245:8443/healthz ...
	I0603 12:07:02.445614   73179 api_server.go:269] stopped: https://192.168.50.245:8443/healthz: Get "https://192.168.50.245:8443/healthz": dial tcp 192.168.50.245:8443: connect: connection refused
	I0603 12:07:02.945547   73179 api_server.go:253] Checking apiserver healthz at https://192.168.50.245:8443/healthz ...
	I0603 12:07:05.426682   73179 api_server.go:279] https://192.168.50.245:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0603 12:07:05.426713   73179 api_server.go:103] status: https://192.168.50.245:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0603 12:07:05.426726   73179 api_server.go:253] Checking apiserver healthz at https://192.168.50.245:8443/healthz ...
	I0603 12:07:05.474343   73179 api_server.go:279] https://192.168.50.245:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0603 12:07:05.474380   73179 api_server.go:103] status: https://192.168.50.245:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0603 12:07:05.474399   73179 api_server.go:253] Checking apiserver healthz at https://192.168.50.245:8443/healthz ...
	I0603 12:07:05.578473   73179 api_server.go:279] https://192.168.50.245:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0603 12:07:05.578520   73179 api_server.go:103] status: https://192.168.50.245:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0603 12:07:05.945708   73179 api_server.go:253] Checking apiserver healthz at https://192.168.50.245:8443/healthz ...
	I0603 12:07:05.952298   73179 api_server.go:279] https://192.168.50.245:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0603 12:07:05.952338   73179 api_server.go:103] status: https://192.168.50.245:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0603 12:07:06.445920   73179 api_server.go:253] Checking apiserver healthz at https://192.168.50.245:8443/healthz ...
	I0603 12:07:06.454769   73179 api_server.go:279] https://192.168.50.245:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0603 12:07:06.454805   73179 api_server.go:103] status: https://192.168.50.245:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0603 12:07:06.945370   73179 api_server.go:253] Checking apiserver healthz at https://192.168.50.245:8443/healthz ...
	I0603 12:07:06.952157   73179 api_server.go:279] https://192.168.50.245:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0603 12:07:06.952193   73179 api_server.go:103] status: https://192.168.50.245:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0603 12:07:07.445973   73179 api_server.go:253] Checking apiserver healthz at https://192.168.50.245:8443/healthz ...
	I0603 12:07:07.457436   73179 api_server.go:279] https://192.168.50.245:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0603 12:07:07.457471   73179 api_server.go:103] status: https://192.168.50.245:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0603 12:07:07.945237   73179 api_server.go:253] Checking apiserver healthz at https://192.168.50.245:8443/healthz ...
	I0603 12:07:07.952135   73179 api_server.go:279] https://192.168.50.245:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0603 12:07:07.952168   73179 api_server.go:103] status: https://192.168.50.245:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0603 12:07:08.445763   73179 api_server.go:253] Checking apiserver healthz at https://192.168.50.245:8443/healthz ...
	I0603 12:07:08.450319   73179 api_server.go:279] https://192.168.50.245:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0603 12:07:08.450346   73179 api_server.go:103] status: https://192.168.50.245:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0603 12:07:08.945476   73179 api_server.go:253] Checking apiserver healthz at https://192.168.50.245:8443/healthz ...
	I0603 12:07:08.950139   73179 api_server.go:279] https://192.168.50.245:8443/healthz returned 200:
	ok
	I0603 12:07:08.956975   73179 api_server.go:141] control plane version: v1.30.1
	I0603 12:07:08.957002   73179 api_server.go:131] duration metric: took 6.511911305s to wait for apiserver health ...
	I0603 12:07:08.957012   73179 cni.go:84] Creating CNI manager for ""
	I0603 12:07:08.957020   73179 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0603 12:07:08.958965   73179 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
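The healthz probes above show the usual apiserver restart progression: 403 while anonymous access to /healthz is still forbidden (RBAC bootstrap roles not yet applied), then 500 while individual poststarthooks still report failures, then 200 once every check passes. A minimal Go sketch of that polling pattern follows; it is illustrative only, and the URL, timings, and helper names are assumptions rather than minikube's actual api_server.go:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitForHealthz polls url until it returns HTTP 200 ("ok"), treating 403 and
// 500 responses as "not ready yet", roughly as the log above does.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		// During bootstrap the apiserver presents a cert the anonymous probe
		// cannot verify, so verification is skipped for this health check only.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		Timeout:   5 * time.Second,
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
			fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond) // roughly the cadence seen in the log
	}
	return fmt.Errorf("apiserver not healthy within %s", timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.50.245:8443/healthz", 4*time.Minute); err != nil {
		fmt.Println(err)
	}
}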
	I0603 12:07:04.585614   73294 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0603 12:07:04.585642   73294 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (394537501 bytes)
	I0603 12:07:06.088296   73294 crio.go:462] duration metric: took 1.507429412s to copy over tarball
	I0603 12:07:06.088376   73294 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0603 12:07:08.432866   73294 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.344418631s)
	I0603 12:07:08.432898   73294 crio.go:469] duration metric: took 2.344572918s to extract the tarball
	I0603 12:07:08.432921   73294 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0603 12:07:08.472509   73294 ssh_runner.go:195] Run: sudo crictl images --output json
	I0603 12:07:08.529017   73294 crio.go:514] all images are preloaded for cri-o runtime.
	I0603 12:07:08.529040   73294 cache_images.go:84] Images are preloaded, skipping loading
	I0603 12:07:08.529052   73294 kubeadm.go:928] updating node { 192.168.61.60 8444 v1.30.1 crio true true} ...
	I0603 12:07:08.529180   73294 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-196710 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.60
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.1 ClusterName:default-k8s-diff-port-196710 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0603 12:07:08.529244   73294 ssh_runner.go:195] Run: crio config
	I0603 12:07:08.581601   73294 cni.go:84] Creating CNI manager for ""
	I0603 12:07:08.581625   73294 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0603 12:07:08.581641   73294 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0603 12:07:08.581667   73294 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.60 APIServerPort:8444 KubernetesVersion:v1.30.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-196710 NodeName:default-k8s-diff-port-196710 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.60"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.60 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0603 12:07:08.581854   73294 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.60
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-196710"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.60
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.60"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0603 12:07:08.581931   73294 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.1
	I0603 12:07:08.595708   73294 binaries.go:44] Found k8s binaries, skipping transfer
	I0603 12:07:08.595778   73294 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0603 12:07:08.608914   73294 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (327 bytes)
	I0603 12:07:08.627009   73294 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0603 12:07:08.643755   73294 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2169 bytes)
	I0603 12:07:08.661803   73294 ssh_runner.go:195] Run: grep 192.168.61.60	control-plane.minikube.internal$ /etc/hosts
	I0603 12:07:08.665764   73294 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.60	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0603 12:07:08.678440   73294 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0603 12:07:08.797052   73294 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0603 12:07:08.814618   73294 certs.go:68] Setting up /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/default-k8s-diff-port-196710 for IP: 192.168.61.60
	I0603 12:07:08.814645   73294 certs.go:194] generating shared ca certs ...
	I0603 12:07:08.814665   73294 certs.go:226] acquiring lock for ca certs: {Name:mk50092df3bdecbf959926adc377ee06b75aacd9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 12:07:08.814863   73294 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19008-7755/.minikube/ca.key
	I0603 12:07:08.814931   73294 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19008-7755/.minikube/proxy-client-ca.key
	I0603 12:07:08.814945   73294 certs.go:256] generating profile certs ...
	I0603 12:07:08.815072   73294 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/default-k8s-diff-port-196710/client.key
	I0603 12:07:08.815150   73294 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/default-k8s-diff-port-196710/apiserver.key.fd40708e
	I0603 12:07:08.815210   73294 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/default-k8s-diff-port-196710/proxy-client.key
	I0603 12:07:08.815370   73294 certs.go:484] found cert: /home/jenkins/minikube-integration/19008-7755/.minikube/certs/15028.pem (1338 bytes)
	W0603 12:07:08.815408   73294 certs.go:480] ignoring /home/jenkins/minikube-integration/19008-7755/.minikube/certs/15028_empty.pem, impossibly tiny 0 bytes
	I0603 12:07:08.815421   73294 certs.go:484] found cert: /home/jenkins/minikube-integration/19008-7755/.minikube/certs/ca-key.pem (1679 bytes)
	I0603 12:07:08.815467   73294 certs.go:484] found cert: /home/jenkins/minikube-integration/19008-7755/.minikube/certs/ca.pem (1082 bytes)
	I0603 12:07:08.815501   73294 certs.go:484] found cert: /home/jenkins/minikube-integration/19008-7755/.minikube/certs/cert.pem (1123 bytes)
	I0603 12:07:08.815529   73294 certs.go:484] found cert: /home/jenkins/minikube-integration/19008-7755/.minikube/certs/key.pem (1679 bytes)
	I0603 12:07:08.815581   73294 certs.go:484] found cert: /home/jenkins/minikube-integration/19008-7755/.minikube/files/etc/ssl/certs/150282.pem (1708 bytes)
	I0603 12:07:08.816420   73294 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0603 12:07:08.852241   73294 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0603 12:07:08.892369   73294 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0603 12:07:08.924242   73294 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0603 12:07:08.952908   73294 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/default-k8s-diff-port-196710/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0603 12:07:09.002060   73294 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/default-k8s-diff-port-196710/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0603 12:07:09.035617   73294 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/default-k8s-diff-port-196710/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0603 12:07:09.063304   73294 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/default-k8s-diff-port-196710/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0603 12:07:09.090994   73294 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/certs/15028.pem --> /usr/share/ca-certificates/15028.pem (1338 bytes)
	I0603 12:07:09.122568   73294 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/files/etc/ssl/certs/150282.pem --> /usr/share/ca-certificates/150282.pem (1708 bytes)
	I0603 12:07:09.150432   73294 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0603 12:07:09.178940   73294 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0603 12:07:09.202491   73294 ssh_runner.go:195] Run: openssl version
	I0603 12:07:09.211182   73294 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15028.pem && ln -fs /usr/share/ca-certificates/15028.pem /etc/ssl/certs/15028.pem"
	I0603 12:07:09.226290   73294 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15028.pem
	I0603 12:07:09.232034   73294 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jun  3 10:51 /usr/share/ca-certificates/15028.pem
	I0603 12:07:09.232103   73294 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15028.pem
	I0603 12:07:09.240592   73294 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/15028.pem /etc/ssl/certs/51391683.0"
	I0603 12:07:09.255018   73294 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/150282.pem && ln -fs /usr/share/ca-certificates/150282.pem /etc/ssl/certs/150282.pem"
	I0603 12:07:09.267194   73294 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/150282.pem
	I0603 12:07:09.272575   73294 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jun  3 10:51 /usr/share/ca-certificates/150282.pem
	I0603 12:07:09.272658   73294 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/150282.pem
	I0603 12:07:09.280687   73294 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/150282.pem /etc/ssl/certs/3ec20f2e.0"
	I0603 12:07:09.296232   73294 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0603 12:07:09.309706   73294 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0603 12:07:09.315596   73294 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jun  3 10:39 /usr/share/ca-certificates/minikubeCA.pem
	I0603 12:07:09.315661   73294 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0603 12:07:09.323283   73294 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0603 12:07:09.337780   73294 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0603 12:07:09.343627   73294 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0603 12:07:09.351742   73294 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0603 12:07:09.360465   73294 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0603 12:07:09.366733   73294 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0603 12:07:09.373061   73294 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0603 12:07:09.379649   73294 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
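The openssl x509 -checkend 86400 runs above ask whether each control-plane certificate expires within the next 86400 seconds (24 hours); a non-zero exit would trigger regeneration. A rough Go equivalent of that check is sketched below as an assumption-labeled illustration (the file path is copied from the log, the helper name is invented):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the first certificate in the PEM file at path
// expires within the next d (the moral equivalent of `openssl x509 -checkend`).
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM data in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	// 24h matches the -checkend 86400 window used in the log above.
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Println("check failed:", err)
		return
	}
	fmt.Println("expires within 24h:", soon)
}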
	I0603 12:07:09.385610   73294 kubeadm.go:391] StartCluster: {Name:default-k8s-diff-port-196710 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:default-k8s-diff-port-196710 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.60 Port:8444 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0603 12:07:09.385694   73294 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0603 12:07:09.385732   73294 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0603 12:07:09.434544   73294 cri.go:89] found id: ""
	I0603 12:07:09.434636   73294 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0603 12:07:09.446209   73294 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0603 12:07:09.446231   73294 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0603 12:07:09.446236   73294 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0603 12:07:09.446283   73294 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0603 12:07:09.456225   73294 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0603 12:07:09.457266   73294 kubeconfig.go:125] found "default-k8s-diff-port-196710" server: "https://192.168.61.60:8444"
	I0603 12:07:09.459519   73294 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0603 12:07:09.468977   73294 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.61.60
	I0603 12:07:09.469007   73294 kubeadm.go:1154] stopping kube-system containers ...
	I0603 12:07:09.469020   73294 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0603 12:07:09.469070   73294 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0603 12:07:09.508306   73294 cri.go:89] found id: ""
	I0603 12:07:09.508408   73294 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0603 12:07:09.526082   73294 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0603 12:07:09.536331   73294 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0603 12:07:09.536361   73294 kubeadm.go:156] found existing configuration files:
	
	I0603 12:07:09.536430   73294 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0603 12:07:09.549053   73294 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0603 12:07:09.549121   73294 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0603 12:07:09.562617   73294 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0603 12:07:09.574968   73294 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0603 12:07:09.575023   73294 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
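The grep/rm pairs above (and continued further below for controller-manager.conf and scheduler.conf) implement minikube's stale-config cleanup: each /etc/kubernetes/*.conf is kept only if it already points at the expected control-plane endpoint, otherwise it is removed so kubeadm can regenerate it. A hedged Go sketch of that decision follows; the paths and endpoint are taken from the log, while the function name is assumed:

package main

import (
	"fmt"
	"os"
	"strings"
)

// cleanStaleKubeconfig removes path unless it already references the expected
// control-plane endpoint, mirroring the grep-then-rm sequence in the log.
func cleanStaleKubeconfig(path, endpoint string) error {
	data, err := os.ReadFile(path)
	if err == nil && strings.Contains(string(data), endpoint) {
		return nil // config already points at the right endpoint; keep it
	}
	// Missing file or wrong endpoint: remove so kubeadm regenerates it.
	if err := os.Remove(path); err != nil && !os.IsNotExist(err) {
		return err
	}
	fmt.Println("removed stale config:", path)
	return nil
}

func main() {
	endpoint := "https://control-plane.minikube.internal:8444"
	for _, p := range []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	} {
		if err := cleanStaleKubeconfig(p, endpoint); err != nil {
			fmt.Println("error:", err)
		}
	}
}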
	I0603 12:07:05.287726   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | domain old-k8s-version-905554 has defined MAC address 52:54:00:3d:ed:07 in network mk-old-k8s-version-905554
	I0603 12:07:05.288228   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | unable to find current IP address of domain old-k8s-version-905554 in network mk-old-k8s-version-905554
	I0603 12:07:05.288264   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | I0603 12:07:05.288180   74674 retry.go:31] will retry after 401.575259ms: waiting for machine to come up
	I0603 12:07:05.691523   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | domain old-k8s-version-905554 has defined MAC address 52:54:00:3d:ed:07 in network mk-old-k8s-version-905554
	I0603 12:07:05.691945   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | unable to find current IP address of domain old-k8s-version-905554 in network mk-old-k8s-version-905554
	I0603 12:07:05.691977   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | I0603 12:07:05.691899   74674 retry.go:31] will retry after 473.67071ms: waiting for machine to come up
	I0603 12:07:06.167720   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | domain old-k8s-version-905554 has defined MAC address 52:54:00:3d:ed:07 in network mk-old-k8s-version-905554
	I0603 12:07:06.168286   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | unable to find current IP address of domain old-k8s-version-905554 in network mk-old-k8s-version-905554
	I0603 12:07:06.168317   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | I0603 12:07:06.168229   74674 retry.go:31] will retry after 610.631851ms: waiting for machine to come up
	I0603 12:07:06.780253   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | domain old-k8s-version-905554 has defined MAC address 52:54:00:3d:ed:07 in network mk-old-k8s-version-905554
	I0603 12:07:06.780747   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | unable to find current IP address of domain old-k8s-version-905554 in network mk-old-k8s-version-905554
	I0603 12:07:06.780771   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | I0603 12:07:06.780699   74674 retry.go:31] will retry after 1.150068976s: waiting for machine to come up
	I0603 12:07:07.932831   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | domain old-k8s-version-905554 has defined MAC address 52:54:00:3d:ed:07 in network mk-old-k8s-version-905554
	I0603 12:07:07.933375   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | unable to find current IP address of domain old-k8s-version-905554 in network mk-old-k8s-version-905554
	I0603 12:07:07.933409   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | I0603 12:07:07.933282   74674 retry.go:31] will retry after 900.546424ms: waiting for machine to come up
	I0603 12:07:08.835303   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | domain old-k8s-version-905554 has defined MAC address 52:54:00:3d:ed:07 in network mk-old-k8s-version-905554
	I0603 12:07:08.835794   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | unable to find current IP address of domain old-k8s-version-905554 in network mk-old-k8s-version-905554
	I0603 12:07:08.835827   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | I0603 12:07:08.835739   74674 retry.go:31] will retry after 1.64990511s: waiting for machine to come up
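The libmachine lines above poll libvirt for the new domain's IP address and back off with a growing, jittered delay between attempts ("will retry after ..."). A small Go sketch of that retry-with-backoff shape follows; the function names, delays, and the stand-in lookup are assumptions, not minikube's retry.go:

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

var errNoIP = errors.New("unable to find current IP address of domain")

// lookupIP is a stand-in for the libvirt DHCP-lease query; here it simply
// "succeeds" after a few attempts so the example terminates.
func lookupIP(attempt int) (string, error) {
	if attempt < 5 {
		return "", errNoIP
	}
	return "192.0.2.10", nil // placeholder address, not from the log
}

// retryUntilIP retries with a growing, jittered delay until an IP appears or
// the overall budget is spent, mirroring the "will retry after Xms" lines.
func retryUntilIP(maxWait time.Duration) (string, error) {
	deadline := time.Now().Add(maxWait)
	delay := 250 * time.Millisecond
	for attempt := 0; time.Now().Before(deadline); attempt++ {
		ip, err := lookupIP(attempt)
		if err == nil {
			return ip, nil
		}
		wait := delay + time.Duration(rand.Int63n(int64(delay)))
		fmt.Printf("will retry after %v: waiting for machine to come up\n", wait)
		time.Sleep(wait)
		delay = delay * 3 / 2
	}
	return "", fmt.Errorf("machine did not report an IP within %v", maxWait)
}

func main() {
	if ip, err := retryUntilIP(2 * time.Minute); err == nil {
		fmt.Println("machine IP:", ip)
	}
}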
	I0603 12:07:08.960402   73179 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0603 12:07:08.971814   73179 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0603 12:07:08.989522   73179 system_pods.go:43] waiting for kube-system pods to appear ...
	I0603 12:07:09.001926   73179 system_pods.go:59] 8 kube-system pods found
	I0603 12:07:09.001960   73179 system_pods.go:61] "coredns-7db6d8ff4d-pv665" [58d7a423-2ac7-4a57-a76f-e8dfaeac9732] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0603 12:07:09.001975   73179 system_pods.go:61] "etcd-no-preload-602118" [3a6a1eb1-0234-47d8-8eaa-e6f2de5fc7b8] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0603 12:07:09.001987   73179 system_pods.go:61] "kube-apiserver-no-preload-602118" [d6b168b3-1605-4e04-8c6a-c5c22a080a10] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0603 12:07:09.001998   73179 system_pods.go:61] "kube-controller-manager-no-preload-602118" [b045e945-f022-443d-b0f6-17f0b283f8fa] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0603 12:07:09.002010   73179 system_pods.go:61] "kube-proxy-r9fkt" [10eef751-51d7-4794-9805-26587a395a5b] Running
	I0603 12:07:09.002019   73179 system_pods.go:61] "kube-scheduler-no-preload-602118" [2032b4c9-ff95-4435-bbb2-ad6f87598555] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0603 12:07:09.002030   73179 system_pods.go:61] "metrics-server-569cc877fc-jgjzt" [ac1aac82-0d34-47e1-b9c5-4f1f501c8bd0] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0603 12:07:09.002035   73179 system_pods.go:61] "storage-provisioner" [6d38abd9-e1e6-4e71-b96f-4653971b511f] Running
	I0603 12:07:09.002044   73179 system_pods.go:74] duration metric: took 12.497722ms to wait for pod list to return data ...
	I0603 12:07:09.002059   73179 node_conditions.go:102] verifying NodePressure condition ...
	I0603 12:07:09.005347   73179 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0603 12:07:09.005374   73179 node_conditions.go:123] node cpu capacity is 2
	I0603 12:07:09.005394   73179 node_conditions.go:105] duration metric: took 3.3294ms to run NodePressure ...
	I0603 12:07:09.005414   73179 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0603 12:07:09.274344   73179 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0603 12:07:09.280021   73179 kubeadm.go:733] kubelet initialised
	I0603 12:07:09.280042   73179 kubeadm.go:734] duration metric: took 5.676641ms waiting for restarted kubelet to initialise ...
	I0603 12:07:09.280056   73179 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0603 12:07:09.285090   73179 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-pv665" in "kube-system" namespace to be "Ready" ...
	I0603 12:07:09.290457   73179 pod_ready.go:97] node "no-preload-602118" hosting pod "coredns-7db6d8ff4d-pv665" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-602118" has status "Ready":"False"
	I0603 12:07:09.290478   73179 pod_ready.go:81] duration metric: took 5.366255ms for pod "coredns-7db6d8ff4d-pv665" in "kube-system" namespace to be "Ready" ...
	E0603 12:07:09.290487   73179 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-602118" hosting pod "coredns-7db6d8ff4d-pv665" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-602118" has status "Ready":"False"
	I0603 12:07:09.290495   73179 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-602118" in "kube-system" namespace to be "Ready" ...
	I0603 12:07:09.296847   73179 pod_ready.go:97] node "no-preload-602118" hosting pod "etcd-no-preload-602118" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-602118" has status "Ready":"False"
	I0603 12:07:09.296872   73179 pod_ready.go:81] duration metric: took 6.368777ms for pod "etcd-no-preload-602118" in "kube-system" namespace to be "Ready" ...
	E0603 12:07:09.296883   73179 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-602118" hosting pod "etcd-no-preload-602118" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-602118" has status "Ready":"False"
	I0603 12:07:09.296895   73179 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-602118" in "kube-system" namespace to be "Ready" ...
	I0603 12:07:09.300895   73179 pod_ready.go:97] node "no-preload-602118" hosting pod "kube-apiserver-no-preload-602118" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-602118" has status "Ready":"False"
	I0603 12:07:09.300914   73179 pod_ready.go:81] duration metric: took 4.012614ms for pod "kube-apiserver-no-preload-602118" in "kube-system" namespace to be "Ready" ...
	E0603 12:07:09.300922   73179 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-602118" hosting pod "kube-apiserver-no-preload-602118" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-602118" has status "Ready":"False"
	I0603 12:07:09.300927   73179 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-602118" in "kube-system" namespace to be "Ready" ...
	I0603 12:07:09.394237   73179 pod_ready.go:97] node "no-preload-602118" hosting pod "kube-controller-manager-no-preload-602118" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-602118" has status "Ready":"False"
	I0603 12:07:09.394267   73179 pod_ready.go:81] duration metric: took 93.331406ms for pod "kube-controller-manager-no-preload-602118" in "kube-system" namespace to be "Ready" ...
	E0603 12:07:09.394280   73179 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-602118" hosting pod "kube-controller-manager-no-preload-602118" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-602118" has status "Ready":"False"
	I0603 12:07:09.394289   73179 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-r9fkt" in "kube-system" namespace to be "Ready" ...
	I0603 12:07:09.585502   73294 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0603 12:07:09.969462   73294 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0603 12:07:09.969522   73294 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0603 12:07:09.979025   73294 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0603 12:07:09.987866   73294 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0603 12:07:09.987920   73294 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0603 12:07:09.997090   73294 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0603 12:07:10.006350   73294 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0603 12:07:10.214287   73294 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0603 12:07:11.298009   73294 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.083680634s)
	I0603 12:07:11.298064   73294 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0603 12:07:11.562011   73294 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0603 12:07:11.680895   73294 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0603 12:07:11.790078   73294 api_server.go:52] waiting for apiserver process to appear ...
	I0603 12:07:11.790166   73294 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:07:12.291115   73294 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:07:12.790366   73294 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:07:12.840813   73294 api_server.go:72] duration metric: took 1.050741427s to wait for apiserver process to appear ...
	I0603 12:07:12.840845   73294 api_server.go:88] waiting for apiserver healthz status ...
	I0603 12:07:12.840869   73294 api_server.go:253] Checking apiserver healthz at https://192.168.61.60:8444/healthz ...
	I0603 12:07:12.841376   73294 api_server.go:269] stopped: https://192.168.61.60:8444/healthz: Get "https://192.168.61.60:8444/healthz": dial tcp 192.168.61.60:8444: connect: connection refused
	I0603 12:07:13.341000   73294 api_server.go:253] Checking apiserver healthz at https://192.168.61.60:8444/healthz ...
	I0603 12:07:10.487141   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | domain old-k8s-version-905554 has defined MAC address 52:54:00:3d:ed:07 in network mk-old-k8s-version-905554
	I0603 12:07:10.564570   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | unable to find current IP address of domain old-k8s-version-905554 in network mk-old-k8s-version-905554
	I0603 12:07:10.564611   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | I0603 12:07:10.487617   74674 retry.go:31] will retry after 1.948227414s: waiting for machine to come up
	I0603 12:07:12.438091   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | domain old-k8s-version-905554 has defined MAC address 52:54:00:3d:ed:07 in network mk-old-k8s-version-905554
	I0603 12:07:12.438596   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | unable to find current IP address of domain old-k8s-version-905554 in network mk-old-k8s-version-905554
	I0603 12:07:12.438620   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | I0603 12:07:12.438540   74674 retry.go:31] will retry after 2.378980516s: waiting for machine to come up
	I0603 12:07:14.819161   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | domain old-k8s-version-905554 has defined MAC address 52:54:00:3d:ed:07 in network mk-old-k8s-version-905554
	I0603 12:07:14.819782   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | unable to find current IP address of domain old-k8s-version-905554 in network mk-old-k8s-version-905554
	I0603 12:07:14.819806   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | I0603 12:07:14.819722   74674 retry.go:31] will retry after 2.362614226s: waiting for machine to come up
	I0603 12:07:11.067879   73179 pod_ready.go:92] pod "kube-proxy-r9fkt" in "kube-system" namespace has status "Ready":"True"
	I0603 12:07:11.067907   73179 pod_ready.go:81] duration metric: took 1.673607925s for pod "kube-proxy-r9fkt" in "kube-system" namespace to be "Ready" ...
	I0603 12:07:11.067922   73179 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-602118" in "kube-system" namespace to be "Ready" ...
	I0603 12:07:13.078490   73179 pod_ready.go:102] pod "kube-scheduler-no-preload-602118" in "kube-system" namespace has status "Ready":"False"
	I0603 12:07:15.451457   73294 api_server.go:279] https://192.168.61.60:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0603 12:07:15.451491   73294 api_server.go:103] status: https://192.168.61.60:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0603 12:07:15.451509   73294 api_server.go:253] Checking apiserver healthz at https://192.168.61.60:8444/healthz ...
	I0603 12:07:15.474239   73294 api_server.go:279] https://192.168.61.60:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0603 12:07:15.474272   73294 api_server.go:103] status: https://192.168.61.60:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0603 12:07:15.841786   73294 api_server.go:253] Checking apiserver healthz at https://192.168.61.60:8444/healthz ...
	I0603 12:07:15.846026   73294 api_server.go:279] https://192.168.61.60:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0603 12:07:15.846051   73294 api_server.go:103] status: https://192.168.61.60:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0603 12:07:16.341687   73294 api_server.go:253] Checking apiserver healthz at https://192.168.61.60:8444/healthz ...
	I0603 12:07:16.348062   73294 api_server.go:279] https://192.168.61.60:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0603 12:07:16.348097   73294 api_server.go:103] status: https://192.168.61.60:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0603 12:07:16.841677   73294 api_server.go:253] Checking apiserver healthz at https://192.168.61.60:8444/healthz ...
	I0603 12:07:16.851931   73294 api_server.go:279] https://192.168.61.60:8444/healthz returned 200:
	ok
	I0603 12:07:16.861724   73294 api_server.go:141] control plane version: v1.30.1
	I0603 12:07:16.861752   73294 api_server.go:131] duration metric: took 4.020899633s to wait for apiserver health ...
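(Editor's illustrative sketch, not part of the captured log and not minikube's actual implementation: the api_server.go lines above show the restart logic polling https://192.168.61.60:8444/healthz roughly every 500ms and treating 403 and 500 responses as "not ready yet" until a 200 arrives. A minimal Go loop with that shape, with the URL, interval, and timeout as assumed placeholders, could look like this.)

// Sketch only: poll an apiserver /healthz endpoint until it returns HTTP 200,
// retrying on 403 (anonymous user) and 500 (post-start hooks still pending),
// as seen in the log above.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func waitForHealthz(url string, interval, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 2 * time.Second,
		// The apiserver certificate is not trusted by this throwaway client,
		// so verification is skipped in this sketch.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // healthz returned 200: ok
			}
			// Any other status (403, 500, ...) means the control plane is still coming up.
		}
		time.Sleep(interval)
	}
	return fmt.Errorf("apiserver at %s did not become healthy within %s", url, timeout)
}

func main() {
	// Endpoint taken from the log above; interval and timeout are assumptions.
	if err := waitForHealthz("https://192.168.61.60:8444/healthz", 500*time.Millisecond, 4*time.Minute); err != nil {
		fmt.Println(err)
	}
}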
	I0603 12:07:16.861762   73294 cni.go:84] Creating CNI manager for ""
	I0603 12:07:16.861782   73294 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0603 12:07:16.863553   73294 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0603 12:07:16.864875   73294 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0603 12:07:16.875581   73294 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0603 12:07:16.895092   73294 system_pods.go:43] waiting for kube-system pods to appear ...
	I0603 12:07:16.906573   73294 system_pods.go:59] 8 kube-system pods found
	I0603 12:07:16.906609   73294 system_pods.go:61] "coredns-7db6d8ff4d-wrw9f" [0125eb3a-9a5a-4bb3-a175-0e49b4392d1e] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0603 12:07:16.906621   73294 system_pods.go:61] "etcd-default-k8s-diff-port-196710" [2189cad5-b6e7-4cc5-9ce8-22ba18abce59] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0603 12:07:16.906631   73294 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-196710" [1aee234a-8876-4594-a0d6-7c7dfb7f4d3e] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0603 12:07:16.906640   73294 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-196710" [18029d80-921c-477c-a82f-26eb1a068b97] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0603 12:07:16.906650   73294 system_pods.go:61] "kube-proxy-84l9f" [5568c7a8-5237-4240-a9dc-6436b156010c] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0603 12:07:16.906673   73294 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-196710" [9fafec03-b5fb-4ea4-98df-0798cd8a01a1] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0603 12:07:16.906681   73294 system_pods.go:61] "metrics-server-569cc877fc-tnhbj" [352fbe10-2f52-434e-91fc-84fbf439a42d] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0603 12:07:16.906690   73294 system_pods.go:61] "storage-provisioner" [24c5e290-d3d7-4523-9432-c7591fa95e18] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0603 12:07:16.906700   73294 system_pods.go:74] duration metric: took 11.592885ms to wait for pod list to return data ...
	I0603 12:07:16.906719   73294 node_conditions.go:102] verifying NodePressure condition ...
	I0603 12:07:16.910038   73294 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0603 12:07:16.910065   73294 node_conditions.go:123] node cpu capacity is 2
	I0603 12:07:16.910079   73294 node_conditions.go:105] duration metric: took 3.350705ms to run NodePressure ...
	I0603 12:07:16.910101   73294 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0603 12:07:17.203847   73294 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0603 12:07:17.208169   73294 kubeadm.go:733] kubelet initialised
	I0603 12:07:17.208196   73294 kubeadm.go:734] duration metric: took 4.31857ms waiting for restarted kubelet to initialise ...
	I0603 12:07:17.208206   73294 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0603 12:07:17.213480   73294 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-wrw9f" in "kube-system" namespace to be "Ready" ...
	I0603 12:07:17.227906   73294 pod_ready.go:97] node "default-k8s-diff-port-196710" hosting pod "coredns-7db6d8ff4d-wrw9f" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-196710" has status "Ready":"False"
	I0603 12:07:17.227931   73294 pod_ready.go:81] duration metric: took 14.426593ms for pod "coredns-7db6d8ff4d-wrw9f" in "kube-system" namespace to be "Ready" ...
	E0603 12:07:17.227941   73294 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-196710" hosting pod "coredns-7db6d8ff4d-wrw9f" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-196710" has status "Ready":"False"
	I0603 12:07:17.227949   73294 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-196710" in "kube-system" namespace to be "Ready" ...
	I0603 12:07:17.231837   73294 pod_ready.go:97] node "default-k8s-diff-port-196710" hosting pod "etcd-default-k8s-diff-port-196710" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-196710" has status "Ready":"False"
	I0603 12:07:17.231867   73294 pod_ready.go:81] duration metric: took 3.906779ms for pod "etcd-default-k8s-diff-port-196710" in "kube-system" namespace to be "Ready" ...
	E0603 12:07:17.231881   73294 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-196710" hosting pod "etcd-default-k8s-diff-port-196710" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-196710" has status "Ready":"False"
	I0603 12:07:17.231890   73294 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-196710" in "kube-system" namespace to be "Ready" ...
	I0603 12:07:17.238497   73294 pod_ready.go:97] node "default-k8s-diff-port-196710" hosting pod "kube-apiserver-default-k8s-diff-port-196710" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-196710" has status "Ready":"False"
	I0603 12:07:17.238525   73294 pod_ready.go:81] duration metric: took 6.62644ms for pod "kube-apiserver-default-k8s-diff-port-196710" in "kube-system" namespace to be "Ready" ...
	E0603 12:07:17.238537   73294 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-196710" hosting pod "kube-apiserver-default-k8s-diff-port-196710" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-196710" has status "Ready":"False"
	I0603 12:07:17.238557   73294 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-196710" in "kube-system" namespace to be "Ready" ...
	I0603 12:07:17.298265   73294 pod_ready.go:97] node "default-k8s-diff-port-196710" hosting pod "kube-controller-manager-default-k8s-diff-port-196710" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-196710" has status "Ready":"False"
	I0603 12:07:17.298293   73294 pod_ready.go:81] duration metric: took 59.722372ms for pod "kube-controller-manager-default-k8s-diff-port-196710" in "kube-system" namespace to be "Ready" ...
	E0603 12:07:17.298303   73294 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-196710" hosting pod "kube-controller-manager-default-k8s-diff-port-196710" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-196710" has status "Ready":"False"
	I0603 12:07:17.298310   73294 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-84l9f" in "kube-system" namespace to be "Ready" ...
	I0603 12:07:18.098358   73294 pod_ready.go:92] pod "kube-proxy-84l9f" in "kube-system" namespace has status "Ready":"True"
	I0603 12:07:18.098388   73294 pod_ready.go:81] duration metric: took 800.069928ms for pod "kube-proxy-84l9f" in "kube-system" namespace to be "Ready" ...
	I0603 12:07:18.098401   73294 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-196710" in "kube-system" namespace to be "Ready" ...
	I0603 12:07:17.184410   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | domain old-k8s-version-905554 has defined MAC address 52:54:00:3d:ed:07 in network mk-old-k8s-version-905554
	I0603 12:07:17.184937   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | unable to find current IP address of domain old-k8s-version-905554 in network mk-old-k8s-version-905554
	I0603 12:07:17.184967   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | I0603 12:07:17.184893   74674 retry.go:31] will retry after 3.787322948s: waiting for machine to come up
	I0603 12:07:15.574365   73179 pod_ready.go:102] pod "kube-scheduler-no-preload-602118" in "kube-system" namespace has status "Ready":"False"
	I0603 12:07:17.575261   73179 pod_ready.go:102] pod "kube-scheduler-no-preload-602118" in "kube-system" namespace has status "Ready":"False"
	I0603 12:07:20.073582   73179 pod_ready.go:102] pod "kube-scheduler-no-preload-602118" in "kube-system" namespace has status "Ready":"False"
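(Editor's illustrative sketch, not part of the captured log and not minikube's pod_ready.go code: the pod_ready lines above show the test harness repeatedly fetching control-plane pods and checking whether their Ready condition is True. A minimal client-go version of that check is sketched below; the kubeconfig path is a placeholder, and the namespace and pod name are copied from the log purely as example arguments.)

// Sketch only: poll a pod with client-go until its PodReady condition is True
// or the timeout expires.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podIsReady reports whether the pod's PodReady condition is True.
func podIsReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func waitForPodReady(ctx context.Context, cs kubernetes.Interface, ns, name string, interval, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
		if err == nil && podIsReady(pod) {
			return nil
		}
		time.Sleep(interval)
	}
	return fmt.Errorf("pod %s/%s not Ready within %s", ns, name, timeout)
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder path
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	err = waitForPodReady(context.Background(), cs, "kube-system",
		"kube-scheduler-no-preload-602118", 2*time.Second, 4*time.Minute)
	if err != nil {
		fmt.Println(err)
	}
}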
	I0603 12:07:22.423964   72964 start.go:364] duration metric: took 54.978859199s to acquireMachinesLock for "embed-certs-725022"
	I0603 12:07:22.424033   72964 start.go:96] Skipping create...Using existing machine configuration
	I0603 12:07:22.424044   72964 fix.go:54] fixHost starting: 
	I0603 12:07:22.424484   72964 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 12:07:22.424521   72964 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 12:07:22.446913   72964 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45395
	I0603 12:07:22.447356   72964 main.go:141] libmachine: () Calling .GetVersion
	I0603 12:07:22.447895   72964 main.go:141] libmachine: Using API Version  1
	I0603 12:07:22.447926   72964 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 12:07:22.448408   72964 main.go:141] libmachine: () Calling .GetMachineName
	I0603 12:07:22.448648   72964 main.go:141] libmachine: (embed-certs-725022) Calling .DriverName
	I0603 12:07:22.448838   72964 main.go:141] libmachine: (embed-certs-725022) Calling .GetState
	I0603 12:07:22.450953   72964 fix.go:112] recreateIfNeeded on embed-certs-725022: state=Stopped err=<nil>
	I0603 12:07:22.450977   72964 main.go:141] libmachine: (embed-certs-725022) Calling .DriverName
	W0603 12:07:22.451199   72964 fix.go:138] unexpected machine state, will restart: <nil>
	I0603 12:07:22.513348   72964 out.go:177] * Restarting existing kvm2 VM for "embed-certs-725022" ...
	I0603 12:07:20.975695   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | domain old-k8s-version-905554 has defined MAC address 52:54:00:3d:ed:07 in network mk-old-k8s-version-905554
	I0603 12:07:20.976290   73662 main.go:141] libmachine: (old-k8s-version-905554) Found IP for machine: 192.168.39.155
	I0603 12:07:20.976345   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | domain old-k8s-version-905554 has current primary IP address 192.168.39.155 and MAC address 52:54:00:3d:ed:07 in network mk-old-k8s-version-905554
	I0603 12:07:20.976358   73662 main.go:141] libmachine: (old-k8s-version-905554) Reserving static IP address...
	I0603 12:07:20.976837   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | found host DHCP lease matching {name: "old-k8s-version-905554", mac: "52:54:00:3d:ed:07", ip: "192.168.39.155"} in network mk-old-k8s-version-905554: {Iface:virbr1 ExpiryTime:2024-06-03 13:07:14 +0000 UTC Type:0 Mac:52:54:00:3d:ed:07 Iaid: IPaddr:192.168.39.155 Prefix:24 Hostname:old-k8s-version-905554 Clientid:01:52:54:00:3d:ed:07}
	I0603 12:07:20.976864   73662 main.go:141] libmachine: (old-k8s-version-905554) Reserved static IP address: 192.168.39.155
	I0603 12:07:20.976883   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | skip adding static IP to network mk-old-k8s-version-905554 - found existing host DHCP lease matching {name: "old-k8s-version-905554", mac: "52:54:00:3d:ed:07", ip: "192.168.39.155"}
	I0603 12:07:20.976894   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | Getting to WaitForSSH function...
	I0603 12:07:20.976902   73662 main.go:141] libmachine: (old-k8s-version-905554) Waiting for SSH to be available...
	I0603 12:07:20.978969   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | domain old-k8s-version-905554 has defined MAC address 52:54:00:3d:ed:07 in network mk-old-k8s-version-905554
	I0603 12:07:20.979326   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:ed:07", ip: ""} in network mk-old-k8s-version-905554: {Iface:virbr1 ExpiryTime:2024-06-03 13:07:14 +0000 UTC Type:0 Mac:52:54:00:3d:ed:07 Iaid: IPaddr:192.168.39.155 Prefix:24 Hostname:old-k8s-version-905554 Clientid:01:52:54:00:3d:ed:07}
	I0603 12:07:20.979361   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | domain old-k8s-version-905554 has defined IP address 192.168.39.155 and MAC address 52:54:00:3d:ed:07 in network mk-old-k8s-version-905554
	I0603 12:07:20.979458   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | Using SSH client type: external
	I0603 12:07:20.979488   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | Using SSH private key: /home/jenkins/minikube-integration/19008-7755/.minikube/machines/old-k8s-version-905554/id_rsa (-rw-------)
	I0603 12:07:20.979525   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.155 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19008-7755/.minikube/machines/old-k8s-version-905554/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0603 12:07:20.979540   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | About to run SSH command:
	I0603 12:07:20.979564   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | exit 0
	I0603 12:07:21.103178   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | SSH cmd err, output: <nil>: 
	I0603 12:07:21.103557   73662 main.go:141] libmachine: (old-k8s-version-905554) Calling .GetConfigRaw
	I0603 12:07:21.104215   73662 main.go:141] libmachine: (old-k8s-version-905554) Calling .GetIP
	I0603 12:07:21.107017   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | domain old-k8s-version-905554 has defined MAC address 52:54:00:3d:ed:07 in network mk-old-k8s-version-905554
	I0603 12:07:21.107397   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:ed:07", ip: ""} in network mk-old-k8s-version-905554: {Iface:virbr1 ExpiryTime:2024-06-03 13:07:14 +0000 UTC Type:0 Mac:52:54:00:3d:ed:07 Iaid: IPaddr:192.168.39.155 Prefix:24 Hostname:old-k8s-version-905554 Clientid:01:52:54:00:3d:ed:07}
	I0603 12:07:21.107424   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | domain old-k8s-version-905554 has defined IP address 192.168.39.155 and MAC address 52:54:00:3d:ed:07 in network mk-old-k8s-version-905554
	I0603 12:07:21.107619   73662 profile.go:143] Saving config to /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/old-k8s-version-905554/config.json ...
	I0603 12:07:21.107782   73662 machine.go:94] provisionDockerMachine start ...
	I0603 12:07:21.107809   73662 main.go:141] libmachine: (old-k8s-version-905554) Calling .DriverName
	I0603 12:07:21.107979   73662 main.go:141] libmachine: (old-k8s-version-905554) Calling .GetSSHHostname
	I0603 12:07:21.110021   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | domain old-k8s-version-905554 has defined MAC address 52:54:00:3d:ed:07 in network mk-old-k8s-version-905554
	I0603 12:07:21.110389   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:ed:07", ip: ""} in network mk-old-k8s-version-905554: {Iface:virbr1 ExpiryTime:2024-06-03 13:07:14 +0000 UTC Type:0 Mac:52:54:00:3d:ed:07 Iaid: IPaddr:192.168.39.155 Prefix:24 Hostname:old-k8s-version-905554 Clientid:01:52:54:00:3d:ed:07}
	I0603 12:07:21.110414   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | domain old-k8s-version-905554 has defined IP address 192.168.39.155 and MAC address 52:54:00:3d:ed:07 in network mk-old-k8s-version-905554
	I0603 12:07:21.110540   73662 main.go:141] libmachine: (old-k8s-version-905554) Calling .GetSSHPort
	I0603 12:07:21.110719   73662 main.go:141] libmachine: (old-k8s-version-905554) Calling .GetSSHKeyPath
	I0603 12:07:21.110880   73662 main.go:141] libmachine: (old-k8s-version-905554) Calling .GetSSHKeyPath
	I0603 12:07:21.111026   73662 main.go:141] libmachine: (old-k8s-version-905554) Calling .GetSSHUsername
	I0603 12:07:21.111239   73662 main.go:141] libmachine: Using SSH client type: native
	I0603 12:07:21.111467   73662 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.155 22 <nil> <nil>}
	I0603 12:07:21.111484   73662 main.go:141] libmachine: About to run SSH command:
	hostname
	I0603 12:07:21.219123   73662 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0603 12:07:21.219148   73662 main.go:141] libmachine: (old-k8s-version-905554) Calling .GetMachineName
	I0603 12:07:21.219379   73662 buildroot.go:166] provisioning hostname "old-k8s-version-905554"
	I0603 12:07:21.219403   73662 main.go:141] libmachine: (old-k8s-version-905554) Calling .GetMachineName
	I0603 12:07:21.219571   73662 main.go:141] libmachine: (old-k8s-version-905554) Calling .GetSSHHostname
	I0603 12:07:21.222603   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | domain old-k8s-version-905554 has defined MAC address 52:54:00:3d:ed:07 in network mk-old-k8s-version-905554
	I0603 12:07:21.223000   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:ed:07", ip: ""} in network mk-old-k8s-version-905554: {Iface:virbr1 ExpiryTime:2024-06-03 13:07:14 +0000 UTC Type:0 Mac:52:54:00:3d:ed:07 Iaid: IPaddr:192.168.39.155 Prefix:24 Hostname:old-k8s-version-905554 Clientid:01:52:54:00:3d:ed:07}
	I0603 12:07:21.223058   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | domain old-k8s-version-905554 has defined IP address 192.168.39.155 and MAC address 52:54:00:3d:ed:07 in network mk-old-k8s-version-905554
	I0603 12:07:21.223210   73662 main.go:141] libmachine: (old-k8s-version-905554) Calling .GetSSHPort
	I0603 12:07:21.223406   73662 main.go:141] libmachine: (old-k8s-version-905554) Calling .GetSSHKeyPath
	I0603 12:07:21.223573   73662 main.go:141] libmachine: (old-k8s-version-905554) Calling .GetSSHKeyPath
	I0603 12:07:21.223741   73662 main.go:141] libmachine: (old-k8s-version-905554) Calling .GetSSHUsername
	I0603 12:07:21.223926   73662 main.go:141] libmachine: Using SSH client type: native
	I0603 12:07:21.224087   73662 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.155 22 <nil> <nil>}
	I0603 12:07:21.224099   73662 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-905554 && echo "old-k8s-version-905554" | sudo tee /etc/hostname
	I0603 12:07:21.346108   73662 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-905554
	
	I0603 12:07:21.346135   73662 main.go:141] libmachine: (old-k8s-version-905554) Calling .GetSSHHostname
	I0603 12:07:21.348801   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | domain old-k8s-version-905554 has defined MAC address 52:54:00:3d:ed:07 in network mk-old-k8s-version-905554
	I0603 12:07:21.349099   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:ed:07", ip: ""} in network mk-old-k8s-version-905554: {Iface:virbr1 ExpiryTime:2024-06-03 13:07:14 +0000 UTC Type:0 Mac:52:54:00:3d:ed:07 Iaid: IPaddr:192.168.39.155 Prefix:24 Hostname:old-k8s-version-905554 Clientid:01:52:54:00:3d:ed:07}
	I0603 12:07:21.349129   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | domain old-k8s-version-905554 has defined IP address 192.168.39.155 and MAC address 52:54:00:3d:ed:07 in network mk-old-k8s-version-905554
	I0603 12:07:21.349295   73662 main.go:141] libmachine: (old-k8s-version-905554) Calling .GetSSHPort
	I0603 12:07:21.349498   73662 main.go:141] libmachine: (old-k8s-version-905554) Calling .GetSSHKeyPath
	I0603 12:07:21.349680   73662 main.go:141] libmachine: (old-k8s-version-905554) Calling .GetSSHKeyPath
	I0603 12:07:21.349849   73662 main.go:141] libmachine: (old-k8s-version-905554) Calling .GetSSHUsername
	I0603 12:07:21.350036   73662 main.go:141] libmachine: Using SSH client type: native
	I0603 12:07:21.350187   73662 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.155 22 <nil> <nil>}
	I0603 12:07:21.350204   73662 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-905554' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-905554/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-905554' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0603 12:07:21.467941   73662 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0603 12:07:21.467970   73662 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19008-7755/.minikube CaCertPath:/home/jenkins/minikube-integration/19008-7755/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19008-7755/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19008-7755/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19008-7755/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19008-7755/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19008-7755/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19008-7755/.minikube}
	I0603 12:07:21.467999   73662 buildroot.go:174] setting up certificates
	I0603 12:07:21.468008   73662 provision.go:84] configureAuth start
	I0603 12:07:21.468021   73662 main.go:141] libmachine: (old-k8s-version-905554) Calling .GetMachineName
	I0603 12:07:21.468308   73662 main.go:141] libmachine: (old-k8s-version-905554) Calling .GetIP
	I0603 12:07:21.470801   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | domain old-k8s-version-905554 has defined MAC address 52:54:00:3d:ed:07 in network mk-old-k8s-version-905554
	I0603 12:07:21.471158   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:ed:07", ip: ""} in network mk-old-k8s-version-905554: {Iface:virbr1 ExpiryTime:2024-06-03 13:07:14 +0000 UTC Type:0 Mac:52:54:00:3d:ed:07 Iaid: IPaddr:192.168.39.155 Prefix:24 Hostname:old-k8s-version-905554 Clientid:01:52:54:00:3d:ed:07}
	I0603 12:07:21.471185   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | domain old-k8s-version-905554 has defined IP address 192.168.39.155 and MAC address 52:54:00:3d:ed:07 in network mk-old-k8s-version-905554
	I0603 12:07:21.471336   73662 main.go:141] libmachine: (old-k8s-version-905554) Calling .GetSSHHostname
	I0603 12:07:21.473733   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | domain old-k8s-version-905554 has defined MAC address 52:54:00:3d:ed:07 in network mk-old-k8s-version-905554
	I0603 12:07:21.474058   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:ed:07", ip: ""} in network mk-old-k8s-version-905554: {Iface:virbr1 ExpiryTime:2024-06-03 13:07:14 +0000 UTC Type:0 Mac:52:54:00:3d:ed:07 Iaid: IPaddr:192.168.39.155 Prefix:24 Hostname:old-k8s-version-905554 Clientid:01:52:54:00:3d:ed:07}
	I0603 12:07:21.474092   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | domain old-k8s-version-905554 has defined IP address 192.168.39.155 and MAC address 52:54:00:3d:ed:07 in network mk-old-k8s-version-905554
	I0603 12:07:21.474276   73662 provision.go:143] copyHostCerts
	I0603 12:07:21.474355   73662 exec_runner.go:144] found /home/jenkins/minikube-integration/19008-7755/.minikube/ca.pem, removing ...
	I0603 12:07:21.474370   73662 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19008-7755/.minikube/ca.pem
	I0603 12:07:21.474429   73662 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19008-7755/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19008-7755/.minikube/ca.pem (1082 bytes)
	I0603 12:07:21.474534   73662 exec_runner.go:144] found /home/jenkins/minikube-integration/19008-7755/.minikube/cert.pem, removing ...
	I0603 12:07:21.474546   73662 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19008-7755/.minikube/cert.pem
	I0603 12:07:21.474577   73662 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19008-7755/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19008-7755/.minikube/cert.pem (1123 bytes)
	I0603 12:07:21.474645   73662 exec_runner.go:144] found /home/jenkins/minikube-integration/19008-7755/.minikube/key.pem, removing ...
	I0603 12:07:21.474654   73662 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19008-7755/.minikube/key.pem
	I0603 12:07:21.474680   73662 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19008-7755/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19008-7755/.minikube/key.pem (1679 bytes)
	I0603 12:07:21.474738   73662 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19008-7755/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19008-7755/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19008-7755/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-905554 san=[127.0.0.1 192.168.39.155 localhost minikube old-k8s-version-905554]
	I0603 12:07:21.720184   73662 provision.go:177] copyRemoteCerts
	I0603 12:07:21.720251   73662 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0603 12:07:21.720284   73662 main.go:141] libmachine: (old-k8s-version-905554) Calling .GetSSHHostname
	I0603 12:07:21.723338   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | domain old-k8s-version-905554 has defined MAC address 52:54:00:3d:ed:07 in network mk-old-k8s-version-905554
	I0603 12:07:21.723752   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:ed:07", ip: ""} in network mk-old-k8s-version-905554: {Iface:virbr1 ExpiryTime:2024-06-03 13:07:14 +0000 UTC Type:0 Mac:52:54:00:3d:ed:07 Iaid: IPaddr:192.168.39.155 Prefix:24 Hostname:old-k8s-version-905554 Clientid:01:52:54:00:3d:ed:07}
	I0603 12:07:21.723786   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | domain old-k8s-version-905554 has defined IP address 192.168.39.155 and MAC address 52:54:00:3d:ed:07 in network mk-old-k8s-version-905554
	I0603 12:07:21.723993   73662 main.go:141] libmachine: (old-k8s-version-905554) Calling .GetSSHPort
	I0603 12:07:21.724208   73662 main.go:141] libmachine: (old-k8s-version-905554) Calling .GetSSHKeyPath
	I0603 12:07:21.724394   73662 main.go:141] libmachine: (old-k8s-version-905554) Calling .GetSSHUsername
	I0603 12:07:21.724615   73662 sshutil.go:53] new ssh client: &{IP:192.168.39.155 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19008-7755/.minikube/machines/old-k8s-version-905554/id_rsa Username:docker}
	I0603 12:07:21.809640   73662 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0603 12:07:21.834750   73662 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0603 12:07:21.858691   73662 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0603 12:07:21.887839   73662 provision.go:87] duration metric: took 419.817381ms to configureAuth
	I0603 12:07:21.887871   73662 buildroot.go:189] setting minikube options for container-runtime
	I0603 12:07:21.888061   73662 config.go:182] Loaded profile config "old-k8s-version-905554": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0603 12:07:21.888145   73662 main.go:141] libmachine: (old-k8s-version-905554) Calling .GetSSHHostname
	I0603 12:07:21.891350   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | domain old-k8s-version-905554 has defined MAC address 52:54:00:3d:ed:07 in network mk-old-k8s-version-905554
	I0603 12:07:21.891737   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:ed:07", ip: ""} in network mk-old-k8s-version-905554: {Iface:virbr1 ExpiryTime:2024-06-03 13:07:14 +0000 UTC Type:0 Mac:52:54:00:3d:ed:07 Iaid: IPaddr:192.168.39.155 Prefix:24 Hostname:old-k8s-version-905554 Clientid:01:52:54:00:3d:ed:07}
	I0603 12:07:21.891773   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | domain old-k8s-version-905554 has defined IP address 192.168.39.155 and MAC address 52:54:00:3d:ed:07 in network mk-old-k8s-version-905554
	I0603 12:07:21.891933   73662 main.go:141] libmachine: (old-k8s-version-905554) Calling .GetSSHPort
	I0603 12:07:21.892084   73662 main.go:141] libmachine: (old-k8s-version-905554) Calling .GetSSHKeyPath
	I0603 12:07:21.892278   73662 main.go:141] libmachine: (old-k8s-version-905554) Calling .GetSSHKeyPath
	I0603 12:07:21.892447   73662 main.go:141] libmachine: (old-k8s-version-905554) Calling .GetSSHUsername
	I0603 12:07:21.892608   73662 main.go:141] libmachine: Using SSH client type: native
	I0603 12:07:21.892822   73662 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.155 22 <nil> <nil>}
	I0603 12:07:21.892845   73662 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0603 12:07:22.173662   73662 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0603 12:07:22.173691   73662 machine.go:97] duration metric: took 1.065894044s to provisionDockerMachine
	I0603 12:07:22.173705   73662 start.go:293] postStartSetup for "old-k8s-version-905554" (driver="kvm2")
	I0603 12:07:22.173718   73662 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0603 12:07:22.173738   73662 main.go:141] libmachine: (old-k8s-version-905554) Calling .DriverName
	I0603 12:07:22.174119   73662 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0603 12:07:22.174154   73662 main.go:141] libmachine: (old-k8s-version-905554) Calling .GetSSHHostname
	I0603 12:07:22.176861   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | domain old-k8s-version-905554 has defined MAC address 52:54:00:3d:ed:07 in network mk-old-k8s-version-905554
	I0603 12:07:22.177152   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:ed:07", ip: ""} in network mk-old-k8s-version-905554: {Iface:virbr1 ExpiryTime:2024-06-03 13:07:14 +0000 UTC Type:0 Mac:52:54:00:3d:ed:07 Iaid: IPaddr:192.168.39.155 Prefix:24 Hostname:old-k8s-version-905554 Clientid:01:52:54:00:3d:ed:07}
	I0603 12:07:22.177184   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | domain old-k8s-version-905554 has defined IP address 192.168.39.155 and MAC address 52:54:00:3d:ed:07 in network mk-old-k8s-version-905554
	I0603 12:07:22.177325   73662 main.go:141] libmachine: (old-k8s-version-905554) Calling .GetSSHPort
	I0603 12:07:22.177505   73662 main.go:141] libmachine: (old-k8s-version-905554) Calling .GetSSHKeyPath
	I0603 12:07:22.177632   73662 main.go:141] libmachine: (old-k8s-version-905554) Calling .GetSSHUsername
	I0603 12:07:22.177764   73662 sshutil.go:53] new ssh client: &{IP:192.168.39.155 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19008-7755/.minikube/machines/old-k8s-version-905554/id_rsa Username:docker}
	I0603 12:07:22.263119   73662 ssh_runner.go:195] Run: cat /etc/os-release
	I0603 12:07:22.269815   73662 info.go:137] Remote host: Buildroot 2023.02.9
	I0603 12:07:22.269844   73662 filesync.go:126] Scanning /home/jenkins/minikube-integration/19008-7755/.minikube/addons for local assets ...
	I0603 12:07:22.269937   73662 filesync.go:126] Scanning /home/jenkins/minikube-integration/19008-7755/.minikube/files for local assets ...
	I0603 12:07:22.270041   73662 filesync.go:149] local asset: /home/jenkins/minikube-integration/19008-7755/.minikube/files/etc/ssl/certs/150282.pem -> 150282.pem in /etc/ssl/certs
	I0603 12:07:22.270320   73662 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0603 12:07:22.284032   73662 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/files/etc/ssl/certs/150282.pem --> /etc/ssl/certs/150282.pem (1708 bytes)
	I0603 12:07:22.309226   73662 start.go:296] duration metric: took 135.507592ms for postStartSetup
	I0603 12:07:22.309267   73662 fix.go:56] duration metric: took 19.425215079s for fixHost
	I0603 12:07:22.309291   73662 main.go:141] libmachine: (old-k8s-version-905554) Calling .GetSSHHostname
	I0603 12:07:22.311759   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | domain old-k8s-version-905554 has defined MAC address 52:54:00:3d:ed:07 in network mk-old-k8s-version-905554
	I0603 12:07:22.312031   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:ed:07", ip: ""} in network mk-old-k8s-version-905554: {Iface:virbr1 ExpiryTime:2024-06-03 13:07:14 +0000 UTC Type:0 Mac:52:54:00:3d:ed:07 Iaid: IPaddr:192.168.39.155 Prefix:24 Hostname:old-k8s-version-905554 Clientid:01:52:54:00:3d:ed:07}
	I0603 12:07:22.312062   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | domain old-k8s-version-905554 has defined IP address 192.168.39.155 and MAC address 52:54:00:3d:ed:07 in network mk-old-k8s-version-905554
	I0603 12:07:22.312244   73662 main.go:141] libmachine: (old-k8s-version-905554) Calling .GetSSHPort
	I0603 12:07:22.312436   73662 main.go:141] libmachine: (old-k8s-version-905554) Calling .GetSSHKeyPath
	I0603 12:07:22.312602   73662 main.go:141] libmachine: (old-k8s-version-905554) Calling .GetSSHKeyPath
	I0603 12:07:22.312740   73662 main.go:141] libmachine: (old-k8s-version-905554) Calling .GetSSHUsername
	I0603 12:07:22.312877   73662 main.go:141] libmachine: Using SSH client type: native
	I0603 12:07:22.313072   73662 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.155 22 <nil> <nil>}
	I0603 12:07:22.313088   73662 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0603 12:07:22.423838   73662 main.go:141] libmachine: SSH cmd err, output: <nil>: 1717416442.379680785
	
	I0603 12:07:22.423857   73662 fix.go:216] guest clock: 1717416442.379680785
	I0603 12:07:22.423864   73662 fix.go:229] Guest: 2024-06-03 12:07:22.379680785 +0000 UTC Remote: 2024-06-03 12:07:22.30927263 +0000 UTC m=+262.252197630 (delta=70.408155ms)
	I0603 12:07:22.423886   73662 fix.go:200] guest clock delta is within tolerance: 70.408155ms
	I0603 12:07:22.423892   73662 start.go:83] releasing machines lock for "old-k8s-version-905554", held for 19.539865965s
	I0603 12:07:22.423927   73662 main.go:141] libmachine: (old-k8s-version-905554) Calling .DriverName
	I0603 12:07:22.424202   73662 main.go:141] libmachine: (old-k8s-version-905554) Calling .GetIP
	I0603 12:07:22.427358   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | domain old-k8s-version-905554 has defined MAC address 52:54:00:3d:ed:07 in network mk-old-k8s-version-905554
	I0603 12:07:22.427799   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:ed:07", ip: ""} in network mk-old-k8s-version-905554: {Iface:virbr1 ExpiryTime:2024-06-03 13:07:14 +0000 UTC Type:0 Mac:52:54:00:3d:ed:07 Iaid: IPaddr:192.168.39.155 Prefix:24 Hostname:old-k8s-version-905554 Clientid:01:52:54:00:3d:ed:07}
	I0603 12:07:22.427833   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | domain old-k8s-version-905554 has defined IP address 192.168.39.155 and MAC address 52:54:00:3d:ed:07 in network mk-old-k8s-version-905554
	I0603 12:07:22.428006   73662 main.go:141] libmachine: (old-k8s-version-905554) Calling .DriverName
	I0603 12:07:22.428619   73662 main.go:141] libmachine: (old-k8s-version-905554) Calling .DriverName
	I0603 12:07:22.428817   73662 main.go:141] libmachine: (old-k8s-version-905554) Calling .DriverName
	I0603 12:07:22.428898   73662 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0603 12:07:22.428951   73662 main.go:141] libmachine: (old-k8s-version-905554) Calling .GetSSHHostname
	I0603 12:07:22.429242   73662 ssh_runner.go:195] Run: cat /version.json
	I0603 12:07:22.429269   73662 main.go:141] libmachine: (old-k8s-version-905554) Calling .GetSSHHostname
	I0603 12:07:22.431998   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | domain old-k8s-version-905554 has defined MAC address 52:54:00:3d:ed:07 in network mk-old-k8s-version-905554
	I0603 12:07:22.432244   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | domain old-k8s-version-905554 has defined MAC address 52:54:00:3d:ed:07 in network mk-old-k8s-version-905554
	I0603 12:07:22.432333   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:ed:07", ip: ""} in network mk-old-k8s-version-905554: {Iface:virbr1 ExpiryTime:2024-06-03 13:07:14 +0000 UTC Type:0 Mac:52:54:00:3d:ed:07 Iaid: IPaddr:192.168.39.155 Prefix:24 Hostname:old-k8s-version-905554 Clientid:01:52:54:00:3d:ed:07}
	I0603 12:07:22.432365   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | domain old-k8s-version-905554 has defined IP address 192.168.39.155 and MAC address 52:54:00:3d:ed:07 in network mk-old-k8s-version-905554
	I0603 12:07:22.432608   73662 main.go:141] libmachine: (old-k8s-version-905554) Calling .GetSSHPort
	I0603 12:07:22.432779   73662 main.go:141] libmachine: (old-k8s-version-905554) Calling .GetSSHKeyPath
	I0603 12:07:22.432797   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:ed:07", ip: ""} in network mk-old-k8s-version-905554: {Iface:virbr1 ExpiryTime:2024-06-03 13:07:14 +0000 UTC Type:0 Mac:52:54:00:3d:ed:07 Iaid: IPaddr:192.168.39.155 Prefix:24 Hostname:old-k8s-version-905554 Clientid:01:52:54:00:3d:ed:07}
	I0603 12:07:22.432818   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | domain old-k8s-version-905554 has defined IP address 192.168.39.155 and MAC address 52:54:00:3d:ed:07 in network mk-old-k8s-version-905554
	I0603 12:07:22.433032   73662 main.go:141] libmachine: (old-k8s-version-905554) Calling .GetSSHPort
	I0603 12:07:22.433044   73662 main.go:141] libmachine: (old-k8s-version-905554) Calling .GetSSHUsername
	I0603 12:07:22.433244   73662 sshutil.go:53] new ssh client: &{IP:192.168.39.155 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19008-7755/.minikube/machines/old-k8s-version-905554/id_rsa Username:docker}
	I0603 12:07:22.433260   73662 main.go:141] libmachine: (old-k8s-version-905554) Calling .GetSSHKeyPath
	I0603 12:07:22.433489   73662 main.go:141] libmachine: (old-k8s-version-905554) Calling .GetSSHUsername
	I0603 12:07:22.433629   73662 sshutil.go:53] new ssh client: &{IP:192.168.39.155 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19008-7755/.minikube/machines/old-k8s-version-905554/id_rsa Username:docker}
	I0603 12:07:22.512743   73662 ssh_runner.go:195] Run: systemctl --version
	I0603 12:07:22.538343   73662 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0603 12:07:22.691125   73662 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0603 12:07:22.697547   73662 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0603 12:07:22.697594   73662 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0603 12:07:22.714213   73662 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0603 12:07:22.714237   73662 start.go:494] detecting cgroup driver to use...
	I0603 12:07:22.714302   73662 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0603 12:07:22.735173   73662 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0603 12:07:22.749345   73662 docker.go:217] disabling cri-docker service (if available) ...
	I0603 12:07:22.749403   73662 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0603 12:07:22.763133   73662 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0603 12:07:22.776844   73662 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0603 12:07:22.906859   73662 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0603 12:07:23.071700   73662 docker.go:233] disabling docker service ...
	I0603 12:07:23.071767   73662 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0603 12:07:23.088439   73662 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0603 12:07:23.102097   73662 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0603 12:07:23.238693   73662 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0603 12:07:23.390561   73662 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0603 12:07:23.410039   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0603 12:07:23.434983   73662 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0603 12:07:23.435125   73662 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 12:07:23.448358   73662 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0603 12:07:23.448430   73662 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 12:07:23.460973   73662 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 12:07:23.473384   73662 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 12:07:23.486096   73662 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0603 12:07:23.498744   73662 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0603 12:07:23.510913   73662 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0603 12:07:23.510968   73662 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0603 12:07:23.527740   73662 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
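Since the bridge sysctl could not be read until br_netfilter was loaded, a quick manual check of the two prerequisites established above (a hedged example, not something this run executes) would be:

  lsmod | grep br_netfilter                    # module loaded by the modprobe above
  sysctl net.bridge.bridge-nf-call-iptables    # readable (typically 1) once the module is present
  cat /proc/sys/net/ipv4/ip_forward            # 1 after the echo above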
	I0603 12:07:23.542547   73662 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0603 12:07:23.719963   73662 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0603 12:07:23.875772   73662 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0603 12:07:23.875843   73662 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0603 12:07:23.882164   73662 start.go:562] Will wait 60s for crictl version
	I0603 12:07:23.882250   73662 ssh_runner.go:195] Run: which crictl
	I0603 12:07:23.886841   73662 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0603 12:07:23.933867   73662 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
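The version block above comes from crictl, which reads the /etc/crictl.yaml written earlier in this run. As a sketch, that file and an equivalent manual check look like this (the single config line is taken from the tee command above; crictl falls back to /etc/crictl.yaml by default):

  # /etc/crictl.yaml
  runtime-endpoint: unix:///var/run/crio/crio.sock

  sudo crictl version        # should report RuntimeName cri-o, RuntimeVersion 1.29.1 as above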
	I0603 12:07:23.933952   73662 ssh_runner.go:195] Run: crio --version
	I0603 12:07:23.965258   73662 ssh_runner.go:195] Run: crio --version
	I0603 12:07:23.995457   73662 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0603 12:07:20.104355   73294 pod_ready.go:102] pod "kube-scheduler-default-k8s-diff-port-196710" in "kube-system" namespace has status "Ready":"False"
	I0603 12:07:22.104808   73294 pod_ready.go:102] pod "kube-scheduler-default-k8s-diff-port-196710" in "kube-system" namespace has status "Ready":"False"
	I0603 12:07:23.106090   73294 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-196710" in "kube-system" namespace has status "Ready":"True"
	I0603 12:07:23.106109   73294 pod_ready.go:81] duration metric: took 5.007700483s for pod "kube-scheduler-default-k8s-diff-port-196710" in "kube-system" namespace to be "Ready" ...
	I0603 12:07:23.106118   73294 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace to be "Ready" ...
	I0603 12:07:22.514715   72964 main.go:141] libmachine: (embed-certs-725022) Calling .Start
	I0603 12:07:22.514937   72964 main.go:141] libmachine: (embed-certs-725022) Ensuring networks are active...
	I0603 12:07:22.515826   72964 main.go:141] libmachine: (embed-certs-725022) Ensuring network default is active
	I0603 12:07:22.516261   72964 main.go:141] libmachine: (embed-certs-725022) Ensuring network mk-embed-certs-725022 is active
	I0603 12:07:22.516748   72964 main.go:141] libmachine: (embed-certs-725022) Getting domain xml...
	I0603 12:07:22.517639   72964 main.go:141] libmachine: (embed-certs-725022) Creating domain...
	I0603 12:07:23.858964   72964 main.go:141] libmachine: (embed-certs-725022) Waiting to get IP...
	I0603 12:07:23.859920   72964 main.go:141] libmachine: (embed-certs-725022) DBG | domain embed-certs-725022 has defined MAC address 52:54:00:ba:41:8c in network mk-embed-certs-725022
	I0603 12:07:23.860386   72964 main.go:141] libmachine: (embed-certs-725022) DBG | unable to find current IP address of domain embed-certs-725022 in network mk-embed-certs-725022
	I0603 12:07:23.860418   72964 main.go:141] libmachine: (embed-certs-725022) DBG | I0603 12:07:23.860352   74834 retry.go:31] will retry after 246.280691ms: waiting for machine to come up
	I0603 12:07:24.108680   72964 main.go:141] libmachine: (embed-certs-725022) DBG | domain embed-certs-725022 has defined MAC address 52:54:00:ba:41:8c in network mk-embed-certs-725022
	I0603 12:07:24.109222   72964 main.go:141] libmachine: (embed-certs-725022) DBG | unable to find current IP address of domain embed-certs-725022 in network mk-embed-certs-725022
	I0603 12:07:24.109349   72964 main.go:141] libmachine: (embed-certs-725022) DBG | I0603 12:07:24.109272   74834 retry.go:31] will retry after 291.625816ms: waiting for machine to come up
	I0603 12:07:24.402895   72964 main.go:141] libmachine: (embed-certs-725022) DBG | domain embed-certs-725022 has defined MAC address 52:54:00:ba:41:8c in network mk-embed-certs-725022
	I0603 12:07:24.403357   72964 main.go:141] libmachine: (embed-certs-725022) DBG | unable to find current IP address of domain embed-certs-725022 in network mk-embed-certs-725022
	I0603 12:07:24.403383   72964 main.go:141] libmachine: (embed-certs-725022) DBG | I0603 12:07:24.403319   74834 retry.go:31] will retry after 466.605521ms: waiting for machine to come up
	I0603 12:07:24.872278   72964 main.go:141] libmachine: (embed-certs-725022) DBG | domain embed-certs-725022 has defined MAC address 52:54:00:ba:41:8c in network mk-embed-certs-725022
	I0603 12:07:24.872823   72964 main.go:141] libmachine: (embed-certs-725022) DBG | unable to find current IP address of domain embed-certs-725022 in network mk-embed-certs-725022
	I0603 12:07:24.872847   72964 main.go:141] libmachine: (embed-certs-725022) DBG | I0603 12:07:24.872783   74834 retry.go:31] will retry after 382.19855ms: waiting for machine to come up
	I0603 12:07:23.996608   73662 main.go:141] libmachine: (old-k8s-version-905554) Calling .GetIP
	I0603 12:07:23.999648   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | domain old-k8s-version-905554 has defined MAC address 52:54:00:3d:ed:07 in network mk-old-k8s-version-905554
	I0603 12:07:23.999982   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:ed:07", ip: ""} in network mk-old-k8s-version-905554: {Iface:virbr1 ExpiryTime:2024-06-03 13:07:14 +0000 UTC Type:0 Mac:52:54:00:3d:ed:07 Iaid: IPaddr:192.168.39.155 Prefix:24 Hostname:old-k8s-version-905554 Clientid:01:52:54:00:3d:ed:07}
	I0603 12:07:24.000010   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | domain old-k8s-version-905554 has defined IP address 192.168.39.155 and MAC address 52:54:00:3d:ed:07 in network mk-old-k8s-version-905554
	I0603 12:07:24.000257   73662 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0603 12:07:24.004569   73662 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
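The bash one-liner above strips any stale host.minikube.internal entry and re-appends it with the current gateway address, so after it runs /etc/hosts should contain a line like the following (value taken from the command itself):

  192.168.39.1	host.minikube.internal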
	I0603 12:07:24.019027   73662 kubeadm.go:877] updating cluster {Name:old-k8s-version-905554 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersio
n:v1.20.0 ClusterName:old-k8s-version-905554 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.155 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fa
lse MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0603 12:07:24.019206   73662 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0603 12:07:24.019257   73662 ssh_runner.go:195] Run: sudo crictl images --output json
	I0603 12:07:24.068916   73662 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0603 12:07:24.069007   73662 ssh_runner.go:195] Run: which lz4
	I0603 12:07:24.074831   73662 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0603 12:07:24.081154   73662 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0603 12:07:24.081186   73662 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0603 12:07:22.074657   73179 pod_ready.go:92] pod "kube-scheduler-no-preload-602118" in "kube-system" namespace has status "Ready":"True"
	I0603 12:07:22.074691   73179 pod_ready.go:81] duration metric: took 11.006759361s for pod "kube-scheduler-no-preload-602118" in "kube-system" namespace to be "Ready" ...
	I0603 12:07:22.074706   73179 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace to be "Ready" ...
	I0603 12:07:24.081308   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:07:25.114101   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:07:27.115528   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:07:25.256326   72964 main.go:141] libmachine: (embed-certs-725022) DBG | domain embed-certs-725022 has defined MAC address 52:54:00:ba:41:8c in network mk-embed-certs-725022
	I0603 12:07:25.256830   72964 main.go:141] libmachine: (embed-certs-725022) DBG | unable to find current IP address of domain embed-certs-725022 in network mk-embed-certs-725022
	I0603 12:07:25.256856   72964 main.go:141] libmachine: (embed-certs-725022) DBG | I0603 12:07:25.256779   74834 retry.go:31] will retry after 541.296238ms: waiting for machine to come up
	I0603 12:07:25.799738   72964 main.go:141] libmachine: (embed-certs-725022) DBG | domain embed-certs-725022 has defined MAC address 52:54:00:ba:41:8c in network mk-embed-certs-725022
	I0603 12:07:25.800308   72964 main.go:141] libmachine: (embed-certs-725022) DBG | unable to find current IP address of domain embed-certs-725022 in network mk-embed-certs-725022
	I0603 12:07:25.800340   72964 main.go:141] libmachine: (embed-certs-725022) DBG | I0603 12:07:25.800260   74834 retry.go:31] will retry after 605.157326ms: waiting for machine to come up
	I0603 12:07:26.406748   72964 main.go:141] libmachine: (embed-certs-725022) DBG | domain embed-certs-725022 has defined MAC address 52:54:00:ba:41:8c in network mk-embed-certs-725022
	I0603 12:07:26.407332   72964 main.go:141] libmachine: (embed-certs-725022) DBG | unable to find current IP address of domain embed-certs-725022 in network mk-embed-certs-725022
	I0603 12:07:26.407357   72964 main.go:141] libmachine: (embed-certs-725022) DBG | I0603 12:07:26.407281   74834 retry.go:31] will retry after 830.816526ms: waiting for machine to come up
	I0603 12:07:27.239300   72964 main.go:141] libmachine: (embed-certs-725022) DBG | domain embed-certs-725022 has defined MAC address 52:54:00:ba:41:8c in network mk-embed-certs-725022
	I0603 12:07:27.239746   72964 main.go:141] libmachine: (embed-certs-725022) DBG | unable to find current IP address of domain embed-certs-725022 in network mk-embed-certs-725022
	I0603 12:07:27.239777   72964 main.go:141] libmachine: (embed-certs-725022) DBG | I0603 12:07:27.239708   74834 retry.go:31] will retry after 994.729433ms: waiting for machine to come up
	I0603 12:07:28.236261   72964 main.go:141] libmachine: (embed-certs-725022) DBG | domain embed-certs-725022 has defined MAC address 52:54:00:ba:41:8c in network mk-embed-certs-725022
	I0603 12:07:28.236839   72964 main.go:141] libmachine: (embed-certs-725022) DBG | unable to find current IP address of domain embed-certs-725022 in network mk-embed-certs-725022
	I0603 12:07:28.236865   72964 main.go:141] libmachine: (embed-certs-725022) DBG | I0603 12:07:28.236783   74834 retry.go:31] will retry after 1.756001067s: waiting for machine to come up
	I0603 12:07:25.794532   73662 crio.go:462] duration metric: took 1.71973848s to copy over tarball
	I0603 12:07:25.794618   73662 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0603 12:07:28.897711   73662 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.103055845s)
	I0603 12:07:28.897742   73662 crio.go:469] duration metric: took 3.103177549s to extract the tarball
	I0603 12:07:28.897752   73662 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0603 12:07:28.945269   73662 ssh_runner.go:195] Run: sudo crictl images --output json
	I0603 12:07:28.982973   73662 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0603 12:07:28.982998   73662 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0603 12:07:28.983068   73662 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0603 12:07:28.983099   73662 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0603 12:07:28.983134   73662 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0603 12:07:28.983191   73662 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0603 12:07:28.983104   73662 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0603 12:07:28.983282   73662 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0603 12:07:28.983280   73662 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0603 12:07:28.983525   73662 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0603 12:07:28.984988   73662 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0603 12:07:28.985005   73662 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0603 12:07:28.984997   73662 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0603 12:07:28.985007   73662 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0603 12:07:28.985026   73662 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0603 12:07:28.985190   73662 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0603 12:07:28.985244   73662 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0603 12:07:28.985288   73662 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0603 12:07:29.136387   73662 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0603 12:07:29.155867   73662 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0603 12:07:29.173686   73662 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0603 12:07:29.181970   73662 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0603 12:07:29.185877   73662 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0603 12:07:29.188684   73662 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0603 12:07:29.201080   73662 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0603 12:07:29.201134   73662 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0603 12:07:29.201174   73662 ssh_runner.go:195] Run: which crictl
	I0603 12:07:29.252186   73662 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0603 12:07:29.252232   73662 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0603 12:07:29.252308   73662 ssh_runner.go:195] Run: which crictl
	I0603 12:07:29.272578   73662 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0603 12:07:29.306804   73662 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0603 12:07:29.306856   73662 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0603 12:07:29.306880   73662 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0603 12:07:29.306901   73662 ssh_runner.go:195] Run: which crictl
	I0603 12:07:29.306915   73662 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0603 12:07:29.306928   73662 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0603 12:07:29.306952   73662 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0603 12:07:29.306961   73662 ssh_runner.go:195] Run: which crictl
	I0603 12:07:29.306988   73662 ssh_runner.go:195] Run: which crictl
	I0603 12:07:29.322141   73662 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0603 12:07:29.322220   73662 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0603 12:07:29.322238   73662 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0603 12:07:29.322264   73662 ssh_runner.go:195] Run: which crictl
	I0603 12:07:29.322210   73662 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0603 12:07:29.378678   73662 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0603 12:07:29.378717   73662 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0603 12:07:29.378726   73662 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0603 12:07:29.378775   73662 ssh_runner.go:195] Run: which crictl
	I0603 12:07:29.378831   73662 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0603 12:07:29.378898   73662 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0603 12:07:29.401173   73662 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/19008-7755/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0603 12:07:29.401229   73662 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0603 12:07:29.401396   73662 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/19008-7755/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0603 12:07:29.450497   73662 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/19008-7755/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0603 12:07:29.450531   73662 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0603 12:07:29.488109   73662 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/19008-7755/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0603 12:07:29.488191   73662 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/19008-7755/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0603 12:07:29.488191   73662 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/19008-7755/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0603 12:07:29.504909   73662 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/19008-7755/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0603 12:07:29.931311   73662 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0603 12:07:30.078311   73662 cache_images.go:92] duration metric: took 1.095295059s to LoadCachedImages
	W0603 12:07:30.078412   73662 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/19008-7755/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0: no such file or directory
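The warning above only means the per-profile image cache on the Jenkins host has no tarball for etcd 3.4.13-0; the run continues and the listed images are pulled by CRI-O when the control-plane pods are created. One hedged way to pre-populate that cache for a later run (a real minikube subcommand; the image name is taken from the list above):

  minikube cache add registry.k8s.io/etcd:3.4.13-0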
	I0603 12:07:30.078431   73662 kubeadm.go:928] updating node { 192.168.39.155 8443 v1.20.0 crio true true} ...
	I0603 12:07:30.078568   73662 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-905554 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.39.155
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-905554 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
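The [Unit]/[Service] fragment above is the kubelet systemd drop-in that minikube generates; it is copied onto the node as /etc/systemd/system/kubelet.service.d/10-kubeadm.conf further down in this run (the 430-byte scp). A hedged way to inspect the merged unit on the node afterwards:

  systemctl cat kubelet                                        # kubelet.service plus the 10-kubeadm.conf drop-in
  cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf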
	I0603 12:07:30.078660   73662 ssh_runner.go:195] Run: crio config
	I0603 12:07:26.083566   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:07:28.084560   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:07:29.721426   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:07:32.114026   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:07:29.994115   72964 main.go:141] libmachine: (embed-certs-725022) DBG | domain embed-certs-725022 has defined MAC address 52:54:00:ba:41:8c in network mk-embed-certs-725022
	I0603 12:07:29.994576   72964 main.go:141] libmachine: (embed-certs-725022) DBG | unable to find current IP address of domain embed-certs-725022 in network mk-embed-certs-725022
	I0603 12:07:29.994654   72964 main.go:141] libmachine: (embed-certs-725022) DBG | I0603 12:07:29.994561   74834 retry.go:31] will retry after 1.667170312s: waiting for machine to come up
	I0603 12:07:31.664242   72964 main.go:141] libmachine: (embed-certs-725022) DBG | domain embed-certs-725022 has defined MAC address 52:54:00:ba:41:8c in network mk-embed-certs-725022
	I0603 12:07:31.664797   72964 main.go:141] libmachine: (embed-certs-725022) DBG | unable to find current IP address of domain embed-certs-725022 in network mk-embed-certs-725022
	I0603 12:07:31.664826   72964 main.go:141] libmachine: (embed-certs-725022) DBG | I0603 12:07:31.664752   74834 retry.go:31] will retry after 2.156675381s: waiting for machine to come up
	I0603 12:07:33.823700   72964 main.go:141] libmachine: (embed-certs-725022) DBG | domain embed-certs-725022 has defined MAC address 52:54:00:ba:41:8c in network mk-embed-certs-725022
	I0603 12:07:33.824202   72964 main.go:141] libmachine: (embed-certs-725022) DBG | unable to find current IP address of domain embed-certs-725022 in network mk-embed-certs-725022
	I0603 12:07:33.824241   72964 main.go:141] libmachine: (embed-certs-725022) DBG | I0603 12:07:33.824145   74834 retry.go:31] will retry after 3.067424613s: waiting for machine to come up
	I0603 12:07:30.129601   73662 cni.go:84] Creating CNI manager for ""
	I0603 12:07:30.180858   73662 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0603 12:07:30.180884   73662 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0603 12:07:30.180918   73662 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.155 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-905554 NodeName:old-k8s-version-905554 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.155"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.155 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt St
aticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0603 12:07:30.181104   73662 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.155
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-905554"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.155
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.155"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0603 12:07:30.181180   73662 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0603 12:07:30.192139   73662 binaries.go:44] Found k8s binaries, skipping transfer
	I0603 12:07:30.192202   73662 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0603 12:07:30.202078   73662 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0603 12:07:30.222968   73662 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0603 12:07:30.242794   73662 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
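The generated kubeadm config shown above is staged as /var/tmp/minikube/kubeadm.yaml.new by this scp and only promoted over the live kubeadm.yaml after a diff check; both commands appear verbatim further down in this run:

  sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
  sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml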
	I0603 12:07:30.263578   73662 ssh_runner.go:195] Run: grep 192.168.39.155	control-plane.minikube.internal$ /etc/hosts
	I0603 12:07:30.267535   73662 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.155	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0603 12:07:30.280543   73662 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0603 12:07:30.421251   73662 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0603 12:07:30.441243   73662 certs.go:68] Setting up /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/old-k8s-version-905554 for IP: 192.168.39.155
	I0603 12:07:30.441269   73662 certs.go:194] generating shared ca certs ...
	I0603 12:07:30.441299   73662 certs.go:226] acquiring lock for ca certs: {Name:mk50092df3bdecbf959926adc377ee06b75aacd9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 12:07:30.441485   73662 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19008-7755/.minikube/ca.key
	I0603 12:07:30.441546   73662 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19008-7755/.minikube/proxy-client-ca.key
	I0603 12:07:30.441559   73662 certs.go:256] generating profile certs ...
	I0603 12:07:30.441675   73662 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/old-k8s-version-905554/client.key
	I0603 12:07:30.465464   73662 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/old-k8s-version-905554/apiserver.key.0d34b22c
	I0603 12:07:30.465562   73662 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/old-k8s-version-905554/proxy-client.key
	I0603 12:07:30.465730   73662 certs.go:484] found cert: /home/jenkins/minikube-integration/19008-7755/.minikube/certs/15028.pem (1338 bytes)
	W0603 12:07:30.465775   73662 certs.go:480] ignoring /home/jenkins/minikube-integration/19008-7755/.minikube/certs/15028_empty.pem, impossibly tiny 0 bytes
	I0603 12:07:30.465787   73662 certs.go:484] found cert: /home/jenkins/minikube-integration/19008-7755/.minikube/certs/ca-key.pem (1679 bytes)
	I0603 12:07:30.465818   73662 certs.go:484] found cert: /home/jenkins/minikube-integration/19008-7755/.minikube/certs/ca.pem (1082 bytes)
	I0603 12:07:30.465855   73662 certs.go:484] found cert: /home/jenkins/minikube-integration/19008-7755/.minikube/certs/cert.pem (1123 bytes)
	I0603 12:07:30.465884   73662 certs.go:484] found cert: /home/jenkins/minikube-integration/19008-7755/.minikube/certs/key.pem (1679 bytes)
	I0603 12:07:30.465941   73662 certs.go:484] found cert: /home/jenkins/minikube-integration/19008-7755/.minikube/files/etc/ssl/certs/150282.pem (1708 bytes)
	I0603 12:07:30.466831   73662 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0603 12:07:30.517957   73662 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0603 12:07:30.554072   73662 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0603 12:07:30.610727   73662 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0603 12:07:30.663149   73662 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/old-k8s-version-905554/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0603 12:07:30.702313   73662 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/old-k8s-version-905554/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0603 12:07:30.735841   73662 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/old-k8s-version-905554/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0603 12:07:30.761517   73662 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/old-k8s-version-905554/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0603 12:07:30.793872   73662 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/files/etc/ssl/certs/150282.pem --> /usr/share/ca-certificates/150282.pem (1708 bytes)
	I0603 12:07:30.821613   73662 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0603 12:07:30.848030   73662 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/certs/15028.pem --> /usr/share/ca-certificates/15028.pem (1338 bytes)
	I0603 12:07:30.875016   73662 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0603 12:07:30.901749   73662 ssh_runner.go:195] Run: openssl version
	I0603 12:07:30.911485   73662 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15028.pem && ln -fs /usr/share/ca-certificates/15028.pem /etc/ssl/certs/15028.pem"
	I0603 12:07:30.923791   73662 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15028.pem
	I0603 12:07:30.928808   73662 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jun  3 10:51 /usr/share/ca-certificates/15028.pem
	I0603 12:07:30.928858   73662 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15028.pem
	I0603 12:07:30.934925   73662 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/15028.pem /etc/ssl/certs/51391683.0"
	I0603 12:07:30.946930   73662 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/150282.pem && ln -fs /usr/share/ca-certificates/150282.pem /etc/ssl/certs/150282.pem"
	I0603 12:07:30.958809   73662 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/150282.pem
	I0603 12:07:30.963687   73662 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jun  3 10:51 /usr/share/ca-certificates/150282.pem
	I0603 12:07:30.963748   73662 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/150282.pem
	I0603 12:07:30.969671   73662 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/150282.pem /etc/ssl/certs/3ec20f2e.0"
	I0603 12:07:30.981918   73662 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0603 12:07:30.994005   73662 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0603 12:07:30.999126   73662 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jun  3 10:39 /usr/share/ca-certificates/minikubeCA.pem
	I0603 12:07:30.999190   73662 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0603 12:07:31.005828   73662 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
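Each ln -fs above names the symlink after the certificate's subject hash, which is exactly what the preceding openssl x509 -hash -noout call prints; for the minikube CA in this run that hash is b5213941, hence /etc/ssl/certs/b5213941.0. A sketch of the manual equivalent:

  openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # prints b5213941 in this run
  sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0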
	I0603 12:07:31.017320   73662 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0603 12:07:31.021993   73662 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0603 12:07:31.028420   73662 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0603 12:07:31.034719   73662 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0603 12:07:31.041565   73662 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0603 12:07:31.048142   73662 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0603 12:07:31.053992   73662 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
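Each of the openssl calls above uses -checkend 86400, which exits non-zero if the certificate will expire within the next 24 hours; that exit status is what lets the restart path decide whether a certificate needs regenerating. A hedged equivalent that also prints the expiry date (standard openssl x509 flags, not executed by this run):

  openssl x509 -noout -enddate -checkend 86400 -in /var/lib/minikube/certs/apiserver-kubelet-client.crt \
    && echo "valid for at least another 24h" || echo "expires within 24h"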
	I0603 12:07:31.060197   73662 kubeadm.go:391] StartCluster: {Name:old-k8s-version-905554 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v
1.20.0 ClusterName:old-k8s-version-905554 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.155 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false
MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0603 12:07:31.060324   73662 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0603 12:07:31.060361   73662 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0603 12:07:31.102996   73662 cri.go:89] found id: ""
	I0603 12:07:31.103083   73662 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0603 12:07:31.114546   73662 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0603 12:07:31.114566   73662 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0603 12:07:31.114573   73662 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0603 12:07:31.114619   73662 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0603 12:07:31.126042   73662 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0603 12:07:31.127358   73662 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-905554" does not appear in /home/jenkins/minikube-integration/19008-7755/kubeconfig
	I0603 12:07:31.128029   73662 kubeconfig.go:62] /home/jenkins/minikube-integration/19008-7755/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-905554" cluster setting kubeconfig missing "old-k8s-version-905554" context setting]
	I0603 12:07:31.128862   73662 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19008-7755/kubeconfig: {Name:mk2f45bf5da44c5570e14124ae482cac309d97b6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 12:07:31.247021   73662 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0603 12:07:31.258013   73662 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.39.155
	I0603 12:07:31.258054   73662 kubeadm.go:1154] stopping kube-system containers ...
	I0603 12:07:31.258065   73662 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0603 12:07:31.258119   73662 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0603 12:07:31.301991   73662 cri.go:89] found id: ""
	I0603 12:07:31.302065   73662 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0603 12:07:31.326132   73662 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0603 12:07:31.337333   73662 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0603 12:07:31.337355   73662 kubeadm.go:156] found existing configuration files:
	
	I0603 12:07:31.337396   73662 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0603 12:07:31.347256   73662 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0603 12:07:31.347300   73662 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0603 12:07:31.357463   73662 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0603 12:07:31.367810   73662 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0603 12:07:31.367867   73662 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0603 12:07:31.378092   73662 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0603 12:07:31.388911   73662 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0603 12:07:31.388959   73662 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0603 12:07:31.400327   73662 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0603 12:07:31.411937   73662 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0603 12:07:31.411984   73662 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0603 12:07:31.423929   73662 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0603 12:07:31.435914   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0603 12:07:31.563621   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0603 12:07:32.980144   73662 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.416481613s)
	I0603 12:07:32.980178   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0603 12:07:33.219383   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0603 12:07:33.320755   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0603 12:07:33.437964   73662 api_server.go:52] waiting for apiserver process to appear ...
	I0603 12:07:33.438070   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:07:33.938124   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:07:34.439012   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:07:34.938293   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:07:30.584019   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:07:33.081286   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:07:35.081436   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:07:34.613763   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:07:37.112059   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:07:39.113186   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:07:36.892928   72964 main.go:141] libmachine: (embed-certs-725022) DBG | domain embed-certs-725022 has defined MAC address 52:54:00:ba:41:8c in network mk-embed-certs-725022
	I0603 12:07:36.893405   72964 main.go:141] libmachine: (embed-certs-725022) DBG | unable to find current IP address of domain embed-certs-725022 in network mk-embed-certs-725022
	I0603 12:07:36.893432   72964 main.go:141] libmachine: (embed-certs-725022) DBG | I0603 12:07:36.893358   74834 retry.go:31] will retry after 3.786690644s: waiting for machine to come up
	I0603 12:07:35.438655   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:07:35.938894   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:07:36.438790   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:07:36.938720   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:07:37.438183   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:07:37.938442   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:07:38.438341   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:07:38.938738   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:07:39.438262   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:07:39.938743   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:07:37.082484   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:07:39.580732   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:07:40.682151   72964 main.go:141] libmachine: (embed-certs-725022) DBG | domain embed-certs-725022 has defined MAC address 52:54:00:ba:41:8c in network mk-embed-certs-725022
	I0603 12:07:40.682828   72964 main.go:141] libmachine: (embed-certs-725022) Found IP for machine: 192.168.72.245
	I0603 12:07:40.682854   72964 main.go:141] libmachine: (embed-certs-725022) Reserving static IP address...
	I0603 12:07:40.682870   72964 main.go:141] libmachine: (embed-certs-725022) DBG | domain embed-certs-725022 has current primary IP address 192.168.72.245 and MAC address 52:54:00:ba:41:8c in network mk-embed-certs-725022
	I0603 12:07:40.683307   72964 main.go:141] libmachine: (embed-certs-725022) DBG | found host DHCP lease matching {name: "embed-certs-725022", mac: "52:54:00:ba:41:8c", ip: "192.168.72.245"} in network mk-embed-certs-725022: {Iface:virbr3 ExpiryTime:2024-06-03 12:58:10 +0000 UTC Type:0 Mac:52:54:00:ba:41:8c Iaid: IPaddr:192.168.72.245 Prefix:24 Hostname:embed-certs-725022 Clientid:01:52:54:00:ba:41:8c}
	I0603 12:07:40.683347   72964 main.go:141] libmachine: (embed-certs-725022) DBG | skip adding static IP to network mk-embed-certs-725022 - found existing host DHCP lease matching {name: "embed-certs-725022", mac: "52:54:00:ba:41:8c", ip: "192.168.72.245"}
	I0603 12:07:40.683361   72964 main.go:141] libmachine: (embed-certs-725022) Reserved static IP address: 192.168.72.245
	I0603 12:07:40.683376   72964 main.go:141] libmachine: (embed-certs-725022) Waiting for SSH to be available...
	I0603 12:07:40.683392   72964 main.go:141] libmachine: (embed-certs-725022) DBG | Getting to WaitForSSH function...
	I0603 12:07:40.685575   72964 main.go:141] libmachine: (embed-certs-725022) DBG | domain embed-certs-725022 has defined MAC address 52:54:00:ba:41:8c in network mk-embed-certs-725022
	I0603 12:07:40.685946   72964 main.go:141] libmachine: (embed-certs-725022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:41:8c", ip: ""} in network mk-embed-certs-725022: {Iface:virbr3 ExpiryTime:2024-06-03 12:58:10 +0000 UTC Type:0 Mac:52:54:00:ba:41:8c Iaid: IPaddr:192.168.72.245 Prefix:24 Hostname:embed-certs-725022 Clientid:01:52:54:00:ba:41:8c}
	I0603 12:07:40.685977   72964 main.go:141] libmachine: (embed-certs-725022) DBG | domain embed-certs-725022 has defined IP address 192.168.72.245 and MAC address 52:54:00:ba:41:8c in network mk-embed-certs-725022
	I0603 12:07:40.686080   72964 main.go:141] libmachine: (embed-certs-725022) DBG | Using SSH client type: external
	I0603 12:07:40.686100   72964 main.go:141] libmachine: (embed-certs-725022) DBG | Using SSH private key: /home/jenkins/minikube-integration/19008-7755/.minikube/machines/embed-certs-725022/id_rsa (-rw-------)
	I0603 12:07:40.686134   72964 main.go:141] libmachine: (embed-certs-725022) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.245 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19008-7755/.minikube/machines/embed-certs-725022/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0603 12:07:40.686148   72964 main.go:141] libmachine: (embed-certs-725022) DBG | About to run SSH command:
	I0603 12:07:40.686161   72964 main.go:141] libmachine: (embed-certs-725022) DBG | exit 0
	I0603 12:07:40.811149   72964 main.go:141] libmachine: (embed-certs-725022) DBG | SSH cmd err, output: <nil>: 
	I0603 12:07:40.811536   72964 main.go:141] libmachine: (embed-certs-725022) Calling .GetConfigRaw
	I0603 12:07:40.812126   72964 main.go:141] libmachine: (embed-certs-725022) Calling .GetIP
	I0603 12:07:40.814686   72964 main.go:141] libmachine: (embed-certs-725022) DBG | domain embed-certs-725022 has defined MAC address 52:54:00:ba:41:8c in network mk-embed-certs-725022
	I0603 12:07:40.815141   72964 main.go:141] libmachine: (embed-certs-725022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:41:8c", ip: ""} in network mk-embed-certs-725022: {Iface:virbr3 ExpiryTime:2024-06-03 12:58:10 +0000 UTC Type:0 Mac:52:54:00:ba:41:8c Iaid: IPaddr:192.168.72.245 Prefix:24 Hostname:embed-certs-725022 Clientid:01:52:54:00:ba:41:8c}
	I0603 12:07:40.815179   72964 main.go:141] libmachine: (embed-certs-725022) DBG | domain embed-certs-725022 has defined IP address 192.168.72.245 and MAC address 52:54:00:ba:41:8c in network mk-embed-certs-725022
	I0603 12:07:40.815390   72964 profile.go:143] Saving config to /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/embed-certs-725022/config.json ...
	I0603 12:07:40.815589   72964 machine.go:94] provisionDockerMachine start ...
	I0603 12:07:40.815607   72964 main.go:141] libmachine: (embed-certs-725022) Calling .DriverName
	I0603 12:07:40.815830   72964 main.go:141] libmachine: (embed-certs-725022) Calling .GetSSHHostname
	I0603 12:07:40.818127   72964 main.go:141] libmachine: (embed-certs-725022) DBG | domain embed-certs-725022 has defined MAC address 52:54:00:ba:41:8c in network mk-embed-certs-725022
	I0603 12:07:40.818454   72964 main.go:141] libmachine: (embed-certs-725022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:41:8c", ip: ""} in network mk-embed-certs-725022: {Iface:virbr3 ExpiryTime:2024-06-03 12:58:10 +0000 UTC Type:0 Mac:52:54:00:ba:41:8c Iaid: IPaddr:192.168.72.245 Prefix:24 Hostname:embed-certs-725022 Clientid:01:52:54:00:ba:41:8c}
	I0603 12:07:40.818484   72964 main.go:141] libmachine: (embed-certs-725022) DBG | domain embed-certs-725022 has defined IP address 192.168.72.245 and MAC address 52:54:00:ba:41:8c in network mk-embed-certs-725022
	I0603 12:07:40.818622   72964 main.go:141] libmachine: (embed-certs-725022) Calling .GetSSHPort
	I0603 12:07:40.818812   72964 main.go:141] libmachine: (embed-certs-725022) Calling .GetSSHKeyPath
	I0603 12:07:40.818964   72964 main.go:141] libmachine: (embed-certs-725022) Calling .GetSSHKeyPath
	I0603 12:07:40.819111   72964 main.go:141] libmachine: (embed-certs-725022) Calling .GetSSHUsername
	I0603 12:07:40.819244   72964 main.go:141] libmachine: Using SSH client type: native
	I0603 12:07:40.819393   72964 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.72.245 22 <nil> <nil>}
	I0603 12:07:40.819402   72964 main.go:141] libmachine: About to run SSH command:
	hostname
	I0603 12:07:40.923243   72964 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0603 12:07:40.923272   72964 main.go:141] libmachine: (embed-certs-725022) Calling .GetMachineName
	I0603 12:07:40.923539   72964 buildroot.go:166] provisioning hostname "embed-certs-725022"
	I0603 12:07:40.923568   72964 main.go:141] libmachine: (embed-certs-725022) Calling .GetMachineName
	I0603 12:07:40.923739   72964 main.go:141] libmachine: (embed-certs-725022) Calling .GetSSHHostname
	I0603 12:07:40.926340   72964 main.go:141] libmachine: (embed-certs-725022) DBG | domain embed-certs-725022 has defined MAC address 52:54:00:ba:41:8c in network mk-embed-certs-725022
	I0603 12:07:40.926743   72964 main.go:141] libmachine: (embed-certs-725022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:41:8c", ip: ""} in network mk-embed-certs-725022: {Iface:virbr3 ExpiryTime:2024-06-03 12:58:10 +0000 UTC Type:0 Mac:52:54:00:ba:41:8c Iaid: IPaddr:192.168.72.245 Prefix:24 Hostname:embed-certs-725022 Clientid:01:52:54:00:ba:41:8c}
	I0603 12:07:40.926776   72964 main.go:141] libmachine: (embed-certs-725022) DBG | domain embed-certs-725022 has defined IP address 192.168.72.245 and MAC address 52:54:00:ba:41:8c in network mk-embed-certs-725022
	I0603 12:07:40.926892   72964 main.go:141] libmachine: (embed-certs-725022) Calling .GetSSHPort
	I0603 12:07:40.927096   72964 main.go:141] libmachine: (embed-certs-725022) Calling .GetSSHKeyPath
	I0603 12:07:40.927259   72964 main.go:141] libmachine: (embed-certs-725022) Calling .GetSSHKeyPath
	I0603 12:07:40.927412   72964 main.go:141] libmachine: (embed-certs-725022) Calling .GetSSHUsername
	I0603 12:07:40.927570   72964 main.go:141] libmachine: Using SSH client type: native
	I0603 12:07:40.927720   72964 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.72.245 22 <nil> <nil>}
	I0603 12:07:40.927737   72964 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-725022 && echo "embed-certs-725022" | sudo tee /etc/hostname
	I0603 12:07:41.045367   72964 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-725022
	
	I0603 12:07:41.045392   72964 main.go:141] libmachine: (embed-certs-725022) Calling .GetSSHHostname
	I0603 12:07:41.048214   72964 main.go:141] libmachine: (embed-certs-725022) DBG | domain embed-certs-725022 has defined MAC address 52:54:00:ba:41:8c in network mk-embed-certs-725022
	I0603 12:07:41.048621   72964 main.go:141] libmachine: (embed-certs-725022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:41:8c", ip: ""} in network mk-embed-certs-725022: {Iface:virbr3 ExpiryTime:2024-06-03 12:58:10 +0000 UTC Type:0 Mac:52:54:00:ba:41:8c Iaid: IPaddr:192.168.72.245 Prefix:24 Hostname:embed-certs-725022 Clientid:01:52:54:00:ba:41:8c}
	I0603 12:07:41.048653   72964 main.go:141] libmachine: (embed-certs-725022) DBG | domain embed-certs-725022 has defined IP address 192.168.72.245 and MAC address 52:54:00:ba:41:8c in network mk-embed-certs-725022
	I0603 12:07:41.048776   72964 main.go:141] libmachine: (embed-certs-725022) Calling .GetSSHPort
	I0603 12:07:41.048959   72964 main.go:141] libmachine: (embed-certs-725022) Calling .GetSSHKeyPath
	I0603 12:07:41.049140   72964 main.go:141] libmachine: (embed-certs-725022) Calling .GetSSHKeyPath
	I0603 12:07:41.049270   72964 main.go:141] libmachine: (embed-certs-725022) Calling .GetSSHUsername
	I0603 12:07:41.049434   72964 main.go:141] libmachine: Using SSH client type: native
	I0603 12:07:41.049729   72964 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.72.245 22 <nil> <nil>}
	I0603 12:07:41.049757   72964 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-725022' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-725022/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-725022' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0603 12:07:41.160646   72964 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0603 12:07:41.160671   72964 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19008-7755/.minikube CaCertPath:/home/jenkins/minikube-integration/19008-7755/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19008-7755/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19008-7755/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19008-7755/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19008-7755/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19008-7755/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19008-7755/.minikube}
	I0603 12:07:41.160703   72964 buildroot.go:174] setting up certificates
	I0603 12:07:41.160715   72964 provision.go:84] configureAuth start
	I0603 12:07:41.160728   72964 main.go:141] libmachine: (embed-certs-725022) Calling .GetMachineName
	I0603 12:07:41.160998   72964 main.go:141] libmachine: (embed-certs-725022) Calling .GetIP
	I0603 12:07:41.163693   72964 main.go:141] libmachine: (embed-certs-725022) DBG | domain embed-certs-725022 has defined MAC address 52:54:00:ba:41:8c in network mk-embed-certs-725022
	I0603 12:07:41.164248   72964 main.go:141] libmachine: (embed-certs-725022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:41:8c", ip: ""} in network mk-embed-certs-725022: {Iface:virbr3 ExpiryTime:2024-06-03 12:58:10 +0000 UTC Type:0 Mac:52:54:00:ba:41:8c Iaid: IPaddr:192.168.72.245 Prefix:24 Hostname:embed-certs-725022 Clientid:01:52:54:00:ba:41:8c}
	I0603 12:07:41.164280   72964 main.go:141] libmachine: (embed-certs-725022) DBG | domain embed-certs-725022 has defined IP address 192.168.72.245 and MAC address 52:54:00:ba:41:8c in network mk-embed-certs-725022
	I0603 12:07:41.164462   72964 main.go:141] libmachine: (embed-certs-725022) Calling .GetSSHHostname
	I0603 12:07:41.166598   72964 main.go:141] libmachine: (embed-certs-725022) DBG | domain embed-certs-725022 has defined MAC address 52:54:00:ba:41:8c in network mk-embed-certs-725022
	I0603 12:07:41.166975   72964 main.go:141] libmachine: (embed-certs-725022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:41:8c", ip: ""} in network mk-embed-certs-725022: {Iface:virbr3 ExpiryTime:2024-06-03 12:58:10 +0000 UTC Type:0 Mac:52:54:00:ba:41:8c Iaid: IPaddr:192.168.72.245 Prefix:24 Hostname:embed-certs-725022 Clientid:01:52:54:00:ba:41:8c}
	I0603 12:07:41.166999   72964 main.go:141] libmachine: (embed-certs-725022) DBG | domain embed-certs-725022 has defined IP address 192.168.72.245 and MAC address 52:54:00:ba:41:8c in network mk-embed-certs-725022
	I0603 12:07:41.167156   72964 provision.go:143] copyHostCerts
	I0603 12:07:41.167231   72964 exec_runner.go:144] found /home/jenkins/minikube-integration/19008-7755/.minikube/ca.pem, removing ...
	I0603 12:07:41.167246   72964 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19008-7755/.minikube/ca.pem
	I0603 12:07:41.167311   72964 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19008-7755/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19008-7755/.minikube/ca.pem (1082 bytes)
	I0603 12:07:41.167503   72964 exec_runner.go:144] found /home/jenkins/minikube-integration/19008-7755/.minikube/cert.pem, removing ...
	I0603 12:07:41.167516   72964 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19008-7755/.minikube/cert.pem
	I0603 12:07:41.167548   72964 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19008-7755/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19008-7755/.minikube/cert.pem (1123 bytes)
	I0603 12:07:41.167649   72964 exec_runner.go:144] found /home/jenkins/minikube-integration/19008-7755/.minikube/key.pem, removing ...
	I0603 12:07:41.167660   72964 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19008-7755/.minikube/key.pem
	I0603 12:07:41.167688   72964 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19008-7755/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19008-7755/.minikube/key.pem (1679 bytes)
	I0603 12:07:41.167767   72964 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19008-7755/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19008-7755/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19008-7755/.minikube/certs/ca-key.pem org=jenkins.embed-certs-725022 san=[127.0.0.1 192.168.72.245 embed-certs-725022 localhost minikube]
	I0603 12:07:41.404074   72964 provision.go:177] copyRemoteCerts
	I0603 12:07:41.404201   72964 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0603 12:07:41.404234   72964 main.go:141] libmachine: (embed-certs-725022) Calling .GetSSHHostname
	I0603 12:07:41.407206   72964 main.go:141] libmachine: (embed-certs-725022) DBG | domain embed-certs-725022 has defined MAC address 52:54:00:ba:41:8c in network mk-embed-certs-725022
	I0603 12:07:41.407582   72964 main.go:141] libmachine: (embed-certs-725022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:41:8c", ip: ""} in network mk-embed-certs-725022: {Iface:virbr3 ExpiryTime:2024-06-03 12:58:10 +0000 UTC Type:0 Mac:52:54:00:ba:41:8c Iaid: IPaddr:192.168.72.245 Prefix:24 Hostname:embed-certs-725022 Clientid:01:52:54:00:ba:41:8c}
	I0603 12:07:41.407607   72964 main.go:141] libmachine: (embed-certs-725022) DBG | domain embed-certs-725022 has defined IP address 192.168.72.245 and MAC address 52:54:00:ba:41:8c in network mk-embed-certs-725022
	I0603 12:07:41.407790   72964 main.go:141] libmachine: (embed-certs-725022) Calling .GetSSHPort
	I0603 12:07:41.408001   72964 main.go:141] libmachine: (embed-certs-725022) Calling .GetSSHKeyPath
	I0603 12:07:41.408187   72964 main.go:141] libmachine: (embed-certs-725022) Calling .GetSSHUsername
	I0603 12:07:41.408359   72964 sshutil.go:53] new ssh client: &{IP:192.168.72.245 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19008-7755/.minikube/machines/embed-certs-725022/id_rsa Username:docker}
	I0603 12:07:41.488870   72964 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0603 12:07:41.513102   72964 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0603 12:07:41.537653   72964 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0603 12:07:41.561756   72964 provision.go:87] duration metric: took 401.027097ms to configureAuth
	I0603 12:07:41.561789   72964 buildroot.go:189] setting minikube options for container-runtime
	I0603 12:07:41.561954   72964 config.go:182] Loaded profile config "embed-certs-725022": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0603 12:07:41.562020   72964 main.go:141] libmachine: (embed-certs-725022) Calling .GetSSHHostname
	I0603 12:07:41.564899   72964 main.go:141] libmachine: (embed-certs-725022) DBG | domain embed-certs-725022 has defined MAC address 52:54:00:ba:41:8c in network mk-embed-certs-725022
	I0603 12:07:41.565376   72964 main.go:141] libmachine: (embed-certs-725022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:41:8c", ip: ""} in network mk-embed-certs-725022: {Iface:virbr3 ExpiryTime:2024-06-03 12:58:10 +0000 UTC Type:0 Mac:52:54:00:ba:41:8c Iaid: IPaddr:192.168.72.245 Prefix:24 Hostname:embed-certs-725022 Clientid:01:52:54:00:ba:41:8c}
	I0603 12:07:41.565416   72964 main.go:141] libmachine: (embed-certs-725022) DBG | domain embed-certs-725022 has defined IP address 192.168.72.245 and MAC address 52:54:00:ba:41:8c in network mk-embed-certs-725022
	I0603 12:07:41.565571   72964 main.go:141] libmachine: (embed-certs-725022) Calling .GetSSHPort
	I0603 12:07:41.565754   72964 main.go:141] libmachine: (embed-certs-725022) Calling .GetSSHKeyPath
	I0603 12:07:41.565952   72964 main.go:141] libmachine: (embed-certs-725022) Calling .GetSSHKeyPath
	I0603 12:07:41.566096   72964 main.go:141] libmachine: (embed-certs-725022) Calling .GetSSHUsername
	I0603 12:07:41.566223   72964 main.go:141] libmachine: Using SSH client type: native
	I0603 12:07:41.566408   72964 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.72.245 22 <nil> <nil>}
	I0603 12:07:41.566431   72964 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0603 12:07:41.834677   72964 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0603 12:07:41.834699   72964 machine.go:97] duration metric: took 1.019099045s to provisionDockerMachine
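A note on the "%!s(MISSING)" fragments in the provisioning command above (and in the later "date +%!s(MISSING).%!N(MISSING)" and evictionHard "0%!"(MISSING) lines): these are Go fmt artifacts in the logged text, not corruption of the command that actually ran. When a printf-style format string contains more verbs than arguments, Go renders the unmatched verb as %!<verb>(MISSING); the literal %s and %N intended for the remote shell's printf and date, and the trailing % in "0%", trigger this when the string is echoed through a formatting call. A minimal sketch reproducing the artifact (the exact call site inside minikube is an assumption; only the fmt behaviour is being shown):

    package main

    import "fmt"

    func main() {
        // The shell command legitimately contains a literal %s for the
        // remote printf; passing the whole string through a printf-style
        // formatter with no matching argument makes Go report the
        // unmatched verb instead of failing.
        cmd := fmt.Sprintf(`sudo mkdir -p /etc/sysconfig && printf %s "CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '" | sudo tee /etc/sysconfig/crio.minikube`)
        fmt.Println(cmd)
        // Output contains: printf %!s(MISSING) "CRIO_MINIKUBE_OPTIONS=..."
    }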
	I0603 12:07:41.834713   72964 start.go:293] postStartSetup for "embed-certs-725022" (driver="kvm2")
	I0603 12:07:41.834727   72964 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0603 12:07:41.834746   72964 main.go:141] libmachine: (embed-certs-725022) Calling .DriverName
	I0603 12:07:41.835098   72964 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0603 12:07:41.835139   72964 main.go:141] libmachine: (embed-certs-725022) Calling .GetSSHHostname
	I0603 12:07:41.838003   72964 main.go:141] libmachine: (embed-certs-725022) DBG | domain embed-certs-725022 has defined MAC address 52:54:00:ba:41:8c in network mk-embed-certs-725022
	I0603 12:07:41.838369   72964 main.go:141] libmachine: (embed-certs-725022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:41:8c", ip: ""} in network mk-embed-certs-725022: {Iface:virbr3 ExpiryTime:2024-06-03 12:58:10 +0000 UTC Type:0 Mac:52:54:00:ba:41:8c Iaid: IPaddr:192.168.72.245 Prefix:24 Hostname:embed-certs-725022 Clientid:01:52:54:00:ba:41:8c}
	I0603 12:07:41.838398   72964 main.go:141] libmachine: (embed-certs-725022) DBG | domain embed-certs-725022 has defined IP address 192.168.72.245 and MAC address 52:54:00:ba:41:8c in network mk-embed-certs-725022
	I0603 12:07:41.838464   72964 main.go:141] libmachine: (embed-certs-725022) Calling .GetSSHPort
	I0603 12:07:41.838655   72964 main.go:141] libmachine: (embed-certs-725022) Calling .GetSSHKeyPath
	I0603 12:07:41.838793   72964 main.go:141] libmachine: (embed-certs-725022) Calling .GetSSHUsername
	I0603 12:07:41.838932   72964 sshutil.go:53] new ssh client: &{IP:192.168.72.245 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19008-7755/.minikube/machines/embed-certs-725022/id_rsa Username:docker}
	I0603 12:07:41.922364   72964 ssh_runner.go:195] Run: cat /etc/os-release
	I0603 12:07:41.926548   72964 info.go:137] Remote host: Buildroot 2023.02.9
	I0603 12:07:41.926573   72964 filesync.go:126] Scanning /home/jenkins/minikube-integration/19008-7755/.minikube/addons for local assets ...
	I0603 12:07:41.926649   72964 filesync.go:126] Scanning /home/jenkins/minikube-integration/19008-7755/.minikube/files for local assets ...
	I0603 12:07:41.926757   72964 filesync.go:149] local asset: /home/jenkins/minikube-integration/19008-7755/.minikube/files/etc/ssl/certs/150282.pem -> 150282.pem in /etc/ssl/certs
	I0603 12:07:41.926853   72964 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0603 12:07:41.937060   72964 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/files/etc/ssl/certs/150282.pem --> /etc/ssl/certs/150282.pem (1708 bytes)
	I0603 12:07:41.962618   72964 start.go:296] duration metric: took 127.891542ms for postStartSetup
	I0603 12:07:41.962650   72964 fix.go:56] duration metric: took 19.538606992s for fixHost
	I0603 12:07:41.962679   72964 main.go:141] libmachine: (embed-certs-725022) Calling .GetSSHHostname
	I0603 12:07:41.965879   72964 main.go:141] libmachine: (embed-certs-725022) DBG | domain embed-certs-725022 has defined MAC address 52:54:00:ba:41:8c in network mk-embed-certs-725022
	I0603 12:07:41.966201   72964 main.go:141] libmachine: (embed-certs-725022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:41:8c", ip: ""} in network mk-embed-certs-725022: {Iface:virbr3 ExpiryTime:2024-06-03 12:58:10 +0000 UTC Type:0 Mac:52:54:00:ba:41:8c Iaid: IPaddr:192.168.72.245 Prefix:24 Hostname:embed-certs-725022 Clientid:01:52:54:00:ba:41:8c}
	I0603 12:07:41.966228   72964 main.go:141] libmachine: (embed-certs-725022) DBG | domain embed-certs-725022 has defined IP address 192.168.72.245 and MAC address 52:54:00:ba:41:8c in network mk-embed-certs-725022
	I0603 12:07:41.966409   72964 main.go:141] libmachine: (embed-certs-725022) Calling .GetSSHPort
	I0603 12:07:41.966608   72964 main.go:141] libmachine: (embed-certs-725022) Calling .GetSSHKeyPath
	I0603 12:07:41.966776   72964 main.go:141] libmachine: (embed-certs-725022) Calling .GetSSHKeyPath
	I0603 12:07:41.966939   72964 main.go:141] libmachine: (embed-certs-725022) Calling .GetSSHUsername
	I0603 12:07:41.967174   72964 main.go:141] libmachine: Using SSH client type: native
	I0603 12:07:41.967334   72964 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.72.245 22 <nil> <nil>}
	I0603 12:07:41.967345   72964 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0603 12:07:42.067942   72964 main.go:141] libmachine: SSH cmd err, output: <nil>: 1717416462.037866239
	
	I0603 12:07:42.067964   72964 fix.go:216] guest clock: 1717416462.037866239
	I0603 12:07:42.067973   72964 fix.go:229] Guest: 2024-06-03 12:07:42.037866239 +0000 UTC Remote: 2024-06-03 12:07:41.962653397 +0000 UTC m=+357.104782857 (delta=75.212842ms)
	I0603 12:07:42.067997   72964 fix.go:200] guest clock delta is within tolerance: 75.212842ms
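The guest clock check above (fix.go) parses the guest's "date +%s.%N" output, compares it against the host-side timestamp taken when the command returned, and only forces a clock resync when the delta exceeds a tolerance; here the ~75ms drift is accepted. A small Go sketch of that comparison using the values from this log (the 2s tolerance is an illustrative assumption, not necessarily minikube's threshold):

    package main

    import (
        "fmt"
        "time"
    )

    // withinTolerance reports whether the absolute difference between the
    // guest clock and the host-observed time is at most the tolerance.
    func withinTolerance(guest, remote time.Time, tolerance time.Duration) bool {
        delta := guest.Sub(remote)
        if delta < 0 {
            delta = -delta
        }
        return delta <= tolerance
    }

    func main() {
        guest := time.Unix(0, 1717416462037866239) // parsed from the guest's "date +%s.%N"
        remote := time.Date(2024, 6, 3, 12, 7, 41, 962653397, time.UTC)
        fmt.Println(withinTolerance(guest, remote, 2*time.Second)) // true, delta is roughly 75ms
    }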
	I0603 12:07:42.068004   72964 start.go:83] releasing machines lock for "embed-certs-725022", held for 19.643998665s
	I0603 12:07:42.068026   72964 main.go:141] libmachine: (embed-certs-725022) Calling .DriverName
	I0603 12:07:42.068359   72964 main.go:141] libmachine: (embed-certs-725022) Calling .GetIP
	I0603 12:07:42.071337   72964 main.go:141] libmachine: (embed-certs-725022) DBG | domain embed-certs-725022 has defined MAC address 52:54:00:ba:41:8c in network mk-embed-certs-725022
	I0603 12:07:42.071783   72964 main.go:141] libmachine: (embed-certs-725022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:41:8c", ip: ""} in network mk-embed-certs-725022: {Iface:virbr3 ExpiryTime:2024-06-03 12:58:10 +0000 UTC Type:0 Mac:52:54:00:ba:41:8c Iaid: IPaddr:192.168.72.245 Prefix:24 Hostname:embed-certs-725022 Clientid:01:52:54:00:ba:41:8c}
	I0603 12:07:42.071813   72964 main.go:141] libmachine: (embed-certs-725022) DBG | domain embed-certs-725022 has defined IP address 192.168.72.245 and MAC address 52:54:00:ba:41:8c in network mk-embed-certs-725022
	I0603 12:07:42.071980   72964 main.go:141] libmachine: (embed-certs-725022) Calling .DriverName
	I0603 12:07:42.072618   72964 main.go:141] libmachine: (embed-certs-725022) Calling .DriverName
	I0603 12:07:42.072806   72964 main.go:141] libmachine: (embed-certs-725022) Calling .DriverName
	I0603 12:07:42.072890   72964 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0603 12:07:42.072943   72964 main.go:141] libmachine: (embed-certs-725022) Calling .GetSSHHostname
	I0603 12:07:42.073038   72964 ssh_runner.go:195] Run: cat /version.json
	I0603 12:07:42.073079   72964 main.go:141] libmachine: (embed-certs-725022) Calling .GetSSHHostname
	I0603 12:07:42.075688   72964 main.go:141] libmachine: (embed-certs-725022) DBG | domain embed-certs-725022 has defined MAC address 52:54:00:ba:41:8c in network mk-embed-certs-725022
	I0603 12:07:42.075970   72964 main.go:141] libmachine: (embed-certs-725022) DBG | domain embed-certs-725022 has defined MAC address 52:54:00:ba:41:8c in network mk-embed-certs-725022
	I0603 12:07:42.076186   72964 main.go:141] libmachine: (embed-certs-725022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:41:8c", ip: ""} in network mk-embed-certs-725022: {Iface:virbr3 ExpiryTime:2024-06-03 12:58:10 +0000 UTC Type:0 Mac:52:54:00:ba:41:8c Iaid: IPaddr:192.168.72.245 Prefix:24 Hostname:embed-certs-725022 Clientid:01:52:54:00:ba:41:8c}
	I0603 12:07:42.076212   72964 main.go:141] libmachine: (embed-certs-725022) DBG | domain embed-certs-725022 has defined IP address 192.168.72.245 and MAC address 52:54:00:ba:41:8c in network mk-embed-certs-725022
	I0603 12:07:42.076458   72964 main.go:141] libmachine: (embed-certs-725022) Calling .GetSSHPort
	I0603 12:07:42.076465   72964 main.go:141] libmachine: (embed-certs-725022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:41:8c", ip: ""} in network mk-embed-certs-725022: {Iface:virbr3 ExpiryTime:2024-06-03 12:58:10 +0000 UTC Type:0 Mac:52:54:00:ba:41:8c Iaid: IPaddr:192.168.72.245 Prefix:24 Hostname:embed-certs-725022 Clientid:01:52:54:00:ba:41:8c}
	I0603 12:07:42.076501   72964 main.go:141] libmachine: (embed-certs-725022) DBG | domain embed-certs-725022 has defined IP address 192.168.72.245 and MAC address 52:54:00:ba:41:8c in network mk-embed-certs-725022
	I0603 12:07:42.076625   72964 main.go:141] libmachine: (embed-certs-725022) Calling .GetSSHKeyPath
	I0603 12:07:42.076694   72964 main.go:141] libmachine: (embed-certs-725022) Calling .GetSSHPort
	I0603 12:07:42.076815   72964 main.go:141] libmachine: (embed-certs-725022) Calling .GetSSHUsername
	I0603 12:07:42.076900   72964 main.go:141] libmachine: (embed-certs-725022) Calling .GetSSHKeyPath
	I0603 12:07:42.076993   72964 sshutil.go:53] new ssh client: &{IP:192.168.72.245 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19008-7755/.minikube/machines/embed-certs-725022/id_rsa Username:docker}
	I0603 12:07:42.077071   72964 main.go:141] libmachine: (embed-certs-725022) Calling .GetSSHUsername
	I0603 12:07:42.077227   72964 sshutil.go:53] new ssh client: &{IP:192.168.72.245 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19008-7755/.minikube/machines/embed-certs-725022/id_rsa Username:docker}
	I0603 12:07:42.178869   72964 ssh_runner.go:195] Run: systemctl --version
	I0603 12:07:42.184948   72964 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0603 12:07:42.333045   72964 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0603 12:07:42.339178   72964 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0603 12:07:42.339249   72964 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0603 12:07:42.356377   72964 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0603 12:07:42.356399   72964 start.go:494] detecting cgroup driver to use...
	I0603 12:07:42.356453   72964 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0603 12:07:42.374098   72964 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0603 12:07:42.387377   72964 docker.go:217] disabling cri-docker service (if available) ...
	I0603 12:07:42.387429   72964 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0603 12:07:42.400193   72964 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0603 12:07:42.413009   72964 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0603 12:07:42.524443   72964 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0603 12:07:42.670114   72964 docker.go:233] disabling docker service ...
	I0603 12:07:42.670194   72964 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0603 12:07:42.686085   72964 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0603 12:07:42.699222   72964 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0603 12:07:42.849018   72964 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0603 12:07:42.987143   72964 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0603 12:07:43.001493   72964 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0603 12:07:43.020011   72964 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0603 12:07:43.020077   72964 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 12:07:43.030835   72964 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0603 12:07:43.030903   72964 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 12:07:43.041325   72964 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 12:07:43.051229   72964 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 12:07:43.061184   72964 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0603 12:07:43.071245   72964 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 12:07:43.082466   72964 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 12:07:43.100381   72964 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
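The sequence of sed edits above rewrites /etc/crio/crio.conf.d/02-crio.conf in place: pin the pause image, switch the cgroup manager to cgroupfs, run conmon in the pod cgroup, and open unprivileged low ports via default_sysctls. After these edits the affected keys in the drop-in should read roughly as follows (only the key/value lines are taken from the commands above; surrounding layout and section headers in the stock file are an assumption):

    pause_image = "registry.k8s.io/pause:3.9"
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]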
	I0603 12:07:43.112802   72964 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0603 12:07:43.123404   72964 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0603 12:07:43.123452   72964 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0603 12:07:43.136935   72964 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0603 12:07:43.145996   72964 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0603 12:07:43.269844   72964 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0603 12:07:43.404166   72964 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0603 12:07:43.404238   72964 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0603 12:07:43.411376   72964 start.go:562] Will wait 60s for crictl version
	I0603 12:07:43.411419   72964 ssh_runner.go:195] Run: which crictl
	I0603 12:07:43.415081   72964 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0603 12:07:43.455429   72964 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0603 12:07:43.455514   72964 ssh_runner.go:195] Run: crio --version
	I0603 12:07:43.483743   72964 ssh_runner.go:195] Run: crio --version
	I0603 12:07:43.516513   72964 out.go:177] * Preparing Kubernetes v1.30.1 on CRI-O 1.29.1 ...
	I0603 12:07:41.613036   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:07:43.613398   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:07:43.517710   72964 main.go:141] libmachine: (embed-certs-725022) Calling .GetIP
	I0603 12:07:43.520057   72964 main.go:141] libmachine: (embed-certs-725022) DBG | domain embed-certs-725022 has defined MAC address 52:54:00:ba:41:8c in network mk-embed-certs-725022
	I0603 12:07:43.520336   72964 main.go:141] libmachine: (embed-certs-725022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:41:8c", ip: ""} in network mk-embed-certs-725022: {Iface:virbr3 ExpiryTime:2024-06-03 12:58:10 +0000 UTC Type:0 Mac:52:54:00:ba:41:8c Iaid: IPaddr:192.168.72.245 Prefix:24 Hostname:embed-certs-725022 Clientid:01:52:54:00:ba:41:8c}
	I0603 12:07:43.520365   72964 main.go:141] libmachine: (embed-certs-725022) DBG | domain embed-certs-725022 has defined IP address 192.168.72.245 and MAC address 52:54:00:ba:41:8c in network mk-embed-certs-725022
	I0603 12:07:43.520579   72964 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0603 12:07:43.524653   72964 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0603 12:07:43.537864   72964 kubeadm.go:877] updating cluster {Name:embed-certs-725022 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.30.1 ClusterName:embed-certs-725022 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.245 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:
false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0603 12:07:43.537984   72964 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime crio
	I0603 12:07:43.538045   72964 ssh_runner.go:195] Run: sudo crictl images --output json
	I0603 12:07:43.574677   72964 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.1". assuming images are not preloaded.
	I0603 12:07:43.574738   72964 ssh_runner.go:195] Run: which lz4
	I0603 12:07:43.579297   72964 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0603 12:07:43.583831   72964 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0603 12:07:43.583865   72964 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (394537501 bytes)
	I0603 12:07:40.438270   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:07:40.938253   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:07:41.438610   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:07:41.938408   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:07:42.438825   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:07:42.938492   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:07:43.439013   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:07:43.938232   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:07:44.438816   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:07:44.938476   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:07:41.581827   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:07:44.084271   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:07:46.113319   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:07:48.117970   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:07:45.006860   72964 crio.go:462] duration metric: took 1.427589912s to copy over tarball
	I0603 12:07:45.006945   72964 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0603 12:07:47.289942   72964 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.282964729s)
	I0603 12:07:47.289966   72964 crio.go:469] duration metric: took 2.283075477s to extract the tarball
	I0603 12:07:47.289973   72964 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0603 12:07:47.330106   72964 ssh_runner.go:195] Run: sudo crictl images --output json
	I0603 12:07:47.377154   72964 crio.go:514] all images are preloaded for cri-o runtime.
	I0603 12:07:47.377180   72964 cache_images.go:84] Images are preloaded, skipping loading
	I0603 12:07:47.377189   72964 kubeadm.go:928] updating node { 192.168.72.245 8443 v1.30.1 crio true true} ...
	I0603 12:07:47.377334   72964 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-725022 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.245
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.1 ClusterName:embed-certs-725022 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0603 12:07:47.377416   72964 ssh_runner.go:195] Run: crio config
	I0603 12:07:47.436104   72964 cni.go:84] Creating CNI manager for ""
	I0603 12:07:47.436125   72964 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0603 12:07:47.436137   72964 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0603 12:07:47.436165   72964 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.245 APIServerPort:8443 KubernetesVersion:v1.30.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-725022 NodeName:embed-certs-725022 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.245"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.245 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodP
ath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0603 12:07:47.436330   72964 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.245
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-725022"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.245
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.245"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0603 12:07:47.436402   72964 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.1
	I0603 12:07:47.447427   72964 binaries.go:44] Found k8s binaries, skipping transfer
	I0603 12:07:47.447498   72964 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0603 12:07:47.459332   72964 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I0603 12:07:47.477962   72964 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0603 12:07:47.495897   72964 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2162 bytes)
	I0603 12:07:47.513033   72964 ssh_runner.go:195] Run: grep 192.168.72.245	control-plane.minikube.internal$ /etc/hosts
	I0603 12:07:47.517042   72964 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.245	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0603 12:07:47.529663   72964 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0603 12:07:47.649313   72964 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0603 12:07:47.666234   72964 certs.go:68] Setting up /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/embed-certs-725022 for IP: 192.168.72.245
	I0603 12:07:47.666258   72964 certs.go:194] generating shared ca certs ...
	I0603 12:07:47.666279   72964 certs.go:226] acquiring lock for ca certs: {Name:mk50092df3bdecbf959926adc377ee06b75aacd9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 12:07:47.666440   72964 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19008-7755/.minikube/ca.key
	I0603 12:07:47.666477   72964 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19008-7755/.minikube/proxy-client-ca.key
	I0603 12:07:47.666487   72964 certs.go:256] generating profile certs ...
	I0603 12:07:47.666567   72964 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/embed-certs-725022/client.key
	I0603 12:07:47.666623   72964 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/embed-certs-725022/apiserver.key.8c3ea0d5
	I0603 12:07:47.666712   72964 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/embed-certs-725022/proxy-client.key
	I0603 12:07:47.666874   72964 certs.go:484] found cert: /home/jenkins/minikube-integration/19008-7755/.minikube/certs/15028.pem (1338 bytes)
	W0603 12:07:47.666916   72964 certs.go:480] ignoring /home/jenkins/minikube-integration/19008-7755/.minikube/certs/15028_empty.pem, impossibly tiny 0 bytes
	I0603 12:07:47.666926   72964 certs.go:484] found cert: /home/jenkins/minikube-integration/19008-7755/.minikube/certs/ca-key.pem (1679 bytes)
	I0603 12:07:47.666947   72964 certs.go:484] found cert: /home/jenkins/minikube-integration/19008-7755/.minikube/certs/ca.pem (1082 bytes)
	I0603 12:07:47.666968   72964 certs.go:484] found cert: /home/jenkins/minikube-integration/19008-7755/.minikube/certs/cert.pem (1123 bytes)
	I0603 12:07:47.666988   72964 certs.go:484] found cert: /home/jenkins/minikube-integration/19008-7755/.minikube/certs/key.pem (1679 bytes)
	I0603 12:07:47.667026   72964 certs.go:484] found cert: /home/jenkins/minikube-integration/19008-7755/.minikube/files/etc/ssl/certs/150282.pem (1708 bytes)
	I0603 12:07:47.667721   72964 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0603 12:07:47.705180   72964 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0603 12:07:47.748552   72964 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0603 12:07:47.780173   72964 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0603 12:07:47.812902   72964 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/embed-certs-725022/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0603 12:07:47.844793   72964 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/embed-certs-725022/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0603 12:07:47.875181   72964 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/embed-certs-725022/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0603 12:07:47.899905   72964 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/embed-certs-725022/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0603 12:07:47.925039   72964 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0603 12:07:47.950701   72964 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/certs/15028.pem --> /usr/share/ca-certificates/15028.pem (1338 bytes)
	I0603 12:07:47.975798   72964 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/files/etc/ssl/certs/150282.pem --> /usr/share/ca-certificates/150282.pem (1708 bytes)
	I0603 12:07:48.002827   72964 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0603 12:07:48.021050   72964 ssh_runner.go:195] Run: openssl version
	I0603 12:07:48.027977   72964 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0603 12:07:48.043764   72964 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0603 12:07:48.050265   72964 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jun  3 10:39 /usr/share/ca-certificates/minikubeCA.pem
	I0603 12:07:48.050315   72964 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0603 12:07:48.056387   72964 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0603 12:07:48.067816   72964 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15028.pem && ln -fs /usr/share/ca-certificates/15028.pem /etc/ssl/certs/15028.pem"
	I0603 12:07:48.083715   72964 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15028.pem
	I0603 12:07:48.088813   72964 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jun  3 10:51 /usr/share/ca-certificates/15028.pem
	I0603 12:07:48.088870   72964 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15028.pem
	I0603 12:07:48.094833   72964 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/15028.pem /etc/ssl/certs/51391683.0"
	I0603 12:07:48.108005   72964 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/150282.pem && ln -fs /usr/share/ca-certificates/150282.pem /etc/ssl/certs/150282.pem"
	I0603 12:07:48.120434   72964 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/150282.pem
	I0603 12:07:48.125542   72964 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jun  3 10:51 /usr/share/ca-certificates/150282.pem
	I0603 12:07:48.125603   72964 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/150282.pem
	I0603 12:07:48.132060   72964 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/150282.pem /etc/ssl/certs/3ec20f2e.0"
	I0603 12:07:48.143594   72964 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0603 12:07:48.148392   72964 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0603 12:07:48.154571   72964 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0603 12:07:48.160573   72964 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0603 12:07:48.167146   72964 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0603 12:07:48.175232   72964 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0603 12:07:48.182197   72964 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
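The run of "openssl x509 -noout -in <cert> -checkend 86400" calls above verifies that each existing control-plane certificate remains valid for at least another 24 hours (86400 seconds) before it is reused on restart. A minimal Go sketch of the same check for one of those files (the path is reused from the log; the helper name and simplified error handling are illustrative):

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    // validFor reports whether the PEM certificate at path is still valid
    // at now+d, which is what "openssl x509 -checkend" tests.
    func validFor(path string, d time.Duration) (bool, error) {
        data, err := os.ReadFile(path)
        if err != nil {
            return false, err
        }
        block, _ := pem.Decode(data)
        if block == nil {
            return false, fmt.Errorf("no PEM block in %s", path)
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            return false, err
        }
        return time.Now().Add(d).Before(cert.NotAfter), nil
    }

    func main() {
        ok, err := validFor("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
        fmt.Println(ok, err)
    }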
	I0603 12:07:48.188588   72964 kubeadm.go:391] StartCluster: {Name:embed-certs-725022 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30
.1 ClusterName:embed-certs-725022 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.245 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fal
se MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0603 12:07:48.188680   72964 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0603 12:07:48.188733   72964 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0603 12:07:48.229134   72964 cri.go:89] found id: ""
	I0603 12:07:48.229215   72964 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0603 12:07:48.241663   72964 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0603 12:07:48.241687   72964 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0603 12:07:48.241692   72964 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0603 12:07:48.241756   72964 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0603 12:07:48.252641   72964 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0603 12:07:48.253644   72964 kubeconfig.go:125] found "embed-certs-725022" server: "https://192.168.72.245:8443"
	I0603 12:07:48.255726   72964 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0603 12:07:48.265816   72964 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.72.245
	I0603 12:07:48.265849   72964 kubeadm.go:1154] stopping kube-system containers ...
	I0603 12:07:48.265862   72964 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0603 12:07:48.265956   72964 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0603 12:07:48.306408   72964 cri.go:89] found id: ""
	I0603 12:07:48.306471   72964 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0603 12:07:48.324859   72964 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0603 12:07:48.336076   72964 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0603 12:07:48.336098   72964 kubeadm.go:156] found existing configuration files:
	
	I0603 12:07:48.336159   72964 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0603 12:07:48.347274   72964 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0603 12:07:48.347328   72964 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0603 12:07:48.358447   72964 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0603 12:07:48.369460   72964 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0603 12:07:48.369509   72964 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0603 12:07:48.379714   72964 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0603 12:07:48.390460   72964 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0603 12:07:48.390506   72964 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0603 12:07:48.401178   72964 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0603 12:07:48.411383   72964 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0603 12:07:48.411423   72964 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0603 12:07:48.421813   72964 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0603 12:07:48.434585   72964 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0603 12:07:48.561075   72964 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0603 12:07:49.278187   72964 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0603 12:07:49.504897   72964 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0603 12:07:49.559494   72964 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0603 12:07:49.634949   72964 api_server.go:52] waiting for apiserver process to appear ...
	I0603 12:07:49.635051   72964 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:07:45.438738   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:07:45.939144   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:07:46.438431   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:07:46.938360   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:07:47.438811   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:07:47.938857   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:07:48.438849   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:07:48.938531   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:07:49.438876   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:07:49.938908   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:07:46.581939   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:07:48.584466   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:07:50.635461   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:07:53.112719   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:07:50.135411   72964 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:07:50.635951   72964 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:07:51.136119   72964 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:07:51.158722   72964 api_server.go:72] duration metric: took 1.52377732s to wait for apiserver process to appear ...
	I0603 12:07:51.158747   72964 api_server.go:88] waiting for apiserver healthz status ...
	I0603 12:07:51.158767   72964 api_server.go:253] Checking apiserver healthz at https://192.168.72.245:8443/healthz ...
	I0603 12:07:54.082978   72964 api_server.go:279] https://192.168.72.245:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0603 12:07:54.083005   72964 api_server.go:103] status: https://192.168.72.245:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0603 12:07:54.083017   72964 api_server.go:253] Checking apiserver healthz at https://192.168.72.245:8443/healthz ...
	I0603 12:07:54.092290   72964 api_server.go:279] https://192.168.72.245:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0603 12:07:54.092311   72964 api_server.go:103] status: https://192.168.72.245:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0603 12:07:54.159522   72964 api_server.go:253] Checking apiserver healthz at https://192.168.72.245:8443/healthz ...
	I0603 12:07:54.173284   72964 api_server.go:279] https://192.168.72.245:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0603 12:07:54.173308   72964 api_server.go:103] status: https://192.168.72.245:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0603 12:07:54.658949   72964 api_server.go:253] Checking apiserver healthz at https://192.168.72.245:8443/healthz ...
	I0603 12:07:54.663966   72964 api_server.go:279] https://192.168.72.245:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0603 12:07:54.663991   72964 api_server.go:103] status: https://192.168.72.245:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0603 12:07:50.438966   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:07:50.938952   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:07:51.439179   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:07:51.938804   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:07:52.438327   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:07:52.938677   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:07:53.438995   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:07:53.938976   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:07:54.438174   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:07:54.938412   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:07:50.641189   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:07:53.081531   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:07:55.081845   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:07:55.159125   72964 api_server.go:253] Checking apiserver healthz at https://192.168.72.245:8443/healthz ...
	I0603 12:07:55.168267   72964 api_server.go:279] https://192.168.72.245:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0603 12:07:55.168307   72964 api_server.go:103] status: https://192.168.72.245:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0603 12:07:55.658824   72964 api_server.go:253] Checking apiserver healthz at https://192.168.72.245:8443/healthz ...
	I0603 12:07:55.663523   72964 api_server.go:279] https://192.168.72.245:8443/healthz returned 200:
	ok
	I0603 12:07:55.670352   72964 api_server.go:141] control plane version: v1.30.1
	I0603 12:07:55.670383   72964 api_server.go:131] duration metric: took 4.511629799s to wait for apiserver health ...
	I0603 12:07:55.670391   72964 cni.go:84] Creating CNI manager for ""
	I0603 12:07:55.670397   72964 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0603 12:07:55.672360   72964 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0603 12:07:55.113539   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:07:57.613236   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:07:55.673720   72964 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0603 12:07:55.686773   72964 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0603 12:07:55.716937   72964 system_pods.go:43] waiting for kube-system pods to appear ...
	I0603 12:07:55.729237   72964 system_pods.go:59] 8 kube-system pods found
	I0603 12:07:55.729267   72964 system_pods.go:61] "coredns-7db6d8ff4d-thrfl" [efc31931-5040-4bb9-92e0-cdda477b38b2] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0603 12:07:55.729274   72964 system_pods.go:61] "etcd-embed-certs-725022" [47be7787-e8ae-4a63-9209-943edeec91b6] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0603 12:07:55.729281   72964 system_pods.go:61] "kube-apiserver-embed-certs-725022" [2812f362-ddb8-4f45-bdfe-ba5d90f3b33f] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0603 12:07:55.729287   72964 system_pods.go:61] "kube-controller-manager-embed-certs-725022" [97666e49-31ac-41c0-a49c-0db51d6c07b6] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0603 12:07:55.729294   72964 system_pods.go:61] "kube-proxy-d5ztj" [854c88f3-f0ab-4885-95a0-8134db48fc84] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0603 12:07:55.729300   72964 system_pods.go:61] "kube-scheduler-embed-certs-725022" [df602caf-2ca4-4963-b724-5a6e8de65c78] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0603 12:07:55.729306   72964 system_pods.go:61] "metrics-server-569cc877fc-8jrnd" [3087c05b-9a8e-4bf7-bbe7-79f3c5540bf7] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0603 12:07:55.729313   72964 system_pods.go:61] "storage-provisioner" [68eeb37a-7098-4e87-8384-3399c2bbc583] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0603 12:07:55.729319   72964 system_pods.go:74] duration metric: took 12.368001ms to wait for pod list to return data ...
	I0603 12:07:55.729329   72964 node_conditions.go:102] verifying NodePressure condition ...
	I0603 12:07:55.733006   72964 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0603 12:07:55.733024   72964 node_conditions.go:123] node cpu capacity is 2
	I0603 12:07:55.733033   72964 node_conditions.go:105] duration metric: took 3.699303ms to run NodePressure ...
	I0603 12:07:55.733047   72964 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0603 12:07:56.040149   72964 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0603 12:07:56.050355   72964 kubeadm.go:733] kubelet initialised
	I0603 12:07:56.050376   72964 kubeadm.go:734] duration metric: took 10.199837ms waiting for restarted kubelet to initialise ...
	I0603 12:07:56.050383   72964 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0603 12:07:56.055536   72964 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-thrfl" in "kube-system" namespace to be "Ready" ...
	I0603 12:07:58.062682   72964 pod_ready.go:102] pod "coredns-7db6d8ff4d-thrfl" in "kube-system" namespace has status "Ready":"False"
	I0603 12:07:55.438798   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:07:55.938263   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:07:56.438870   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:07:56.938915   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:07:57.438799   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:07:57.938972   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:07:58.438367   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:07:58.939045   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:07:59.439020   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:07:59.938716   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:07:57.581813   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:08:00.080226   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:08:00.113886   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:08:02.613795   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:08:00.062724   72964 pod_ready.go:102] pod "coredns-7db6d8ff4d-thrfl" in "kube-system" namespace has status "Ready":"False"
	I0603 12:08:02.062937   72964 pod_ready.go:102] pod "coredns-7db6d8ff4d-thrfl" in "kube-system" namespace has status "Ready":"False"
	I0603 12:08:04.565302   72964 pod_ready.go:102] pod "coredns-7db6d8ff4d-thrfl" in "kube-system" namespace has status "Ready":"False"
	I0603 12:08:00.438789   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:00.938973   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:01.439098   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:01.938892   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:02.438978   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:02.938317   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:03.438969   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:03.938274   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:04.438255   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:04.938545   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:02.081713   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:08:04.082219   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:08:05.112940   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:08:07.113191   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:08:07.075333   72964 pod_ready.go:92] pod "coredns-7db6d8ff4d-thrfl" in "kube-system" namespace has status "Ready":"True"
	I0603 12:08:07.075361   72964 pod_ready.go:81] duration metric: took 11.019801293s for pod "coredns-7db6d8ff4d-thrfl" in "kube-system" namespace to be "Ready" ...
	I0603 12:08:07.075375   72964 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-725022" in "kube-system" namespace to be "Ready" ...
	I0603 12:08:08.583435   72964 pod_ready.go:92] pod "etcd-embed-certs-725022" in "kube-system" namespace has status "Ready":"True"
	I0603 12:08:08.583459   72964 pod_ready.go:81] duration metric: took 1.508076213s for pod "etcd-embed-certs-725022" in "kube-system" namespace to be "Ready" ...
	I0603 12:08:08.583468   72964 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-725022" in "kube-system" namespace to be "Ready" ...
	I0603 12:08:08.588791   72964 pod_ready.go:92] pod "kube-apiserver-embed-certs-725022" in "kube-system" namespace has status "Ready":"True"
	I0603 12:08:08.588817   72964 pod_ready.go:81] duration metric: took 5.342068ms for pod "kube-apiserver-embed-certs-725022" in "kube-system" namespace to be "Ready" ...
	I0603 12:08:08.588836   72964 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-725022" in "kube-system" namespace to be "Ready" ...
	I0603 12:08:08.593258   72964 pod_ready.go:92] pod "kube-controller-manager-embed-certs-725022" in "kube-system" namespace has status "Ready":"True"
	I0603 12:08:08.593279   72964 pod_ready.go:81] duration metric: took 4.43483ms for pod "kube-controller-manager-embed-certs-725022" in "kube-system" namespace to be "Ready" ...
	I0603 12:08:08.593292   72964 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-d5ztj" in "kube-system" namespace to be "Ready" ...
	I0603 12:08:08.601106   72964 pod_ready.go:92] pod "kube-proxy-d5ztj" in "kube-system" namespace has status "Ready":"True"
	I0603 12:08:08.601125   72964 pod_ready.go:81] duration metric: took 7.826962ms for pod "kube-proxy-d5ztj" in "kube-system" namespace to be "Ready" ...
	I0603 12:08:08.601133   72964 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-725022" in "kube-system" namespace to be "Ready" ...
	I0603 12:08:08.660242   72964 pod_ready.go:92] pod "kube-scheduler-embed-certs-725022" in "kube-system" namespace has status "Ready":"True"
	I0603 12:08:08.660275   72964 pod_ready.go:81] duration metric: took 59.134528ms for pod "kube-scheduler-embed-certs-725022" in "kube-system" namespace to be "Ready" ...
	I0603 12:08:08.660297   72964 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace to be "Ready" ...
	I0603 12:08:05.438368   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:05.938174   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:06.438995   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:06.939167   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:07.438451   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:07.938651   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:08.438892   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:08.938182   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:09.438548   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:09.938352   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:06.580980   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:08:08.583476   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:08:09.612231   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:08:11.613131   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:08:14.115179   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:08:10.667171   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:08:13.166284   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:08:10.438932   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:10.938156   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:11.438911   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:11.939064   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:12.438578   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:12.938389   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:13.438469   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:13.939000   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:14.438219   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:14.938949   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:11.081492   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:08:13.581052   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:08:16.612649   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:08:19.112795   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:08:15.166468   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:08:17.166591   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:08:19.666737   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:08:15.438709   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:15.938471   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:16.438909   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:16.939131   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:17.438995   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:17.938810   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:18.438615   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:18.938920   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:19.438966   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:19.938696   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:15.581276   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:08:17.581764   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:08:19.582048   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:08:21.116274   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:08:23.613288   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:08:21.667736   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:08:23.667798   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:08:20.438818   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:20.938625   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:21.439129   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:21.938488   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:22.438452   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:22.938328   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:23.438557   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:23.938427   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:24.438391   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:24.939088   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:22.080444   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:08:24.081387   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:08:26.113843   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:08:28.612076   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:08:26.165833   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:08:28.169171   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:08:25.439153   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:25.939073   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:26.438157   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:26.938755   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:27.438244   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:27.938149   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:28.439131   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:28.938855   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:29.439027   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:29.938159   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:26.081716   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:08:28.582162   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:08:30.613632   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:08:33.111746   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:08:30.667602   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:08:33.168233   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:08:30.438727   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:30.938281   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:31.438203   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:31.938903   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:32.438731   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:32.938479   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:33.438133   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 12:08:33.438202   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 12:08:33.480006   73662 cri.go:89] found id: ""
	I0603 12:08:33.480044   73662 logs.go:276] 0 containers: []
	W0603 12:08:33.480056   73662 logs.go:278] No container was found matching "kube-apiserver"
	I0603 12:08:33.480066   73662 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 12:08:33.480126   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 12:08:33.519446   73662 cri.go:89] found id: ""
	I0603 12:08:33.519469   73662 logs.go:276] 0 containers: []
	W0603 12:08:33.519476   73662 logs.go:278] No container was found matching "etcd"
	I0603 12:08:33.519480   73662 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 12:08:33.519536   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 12:08:33.553602   73662 cri.go:89] found id: ""
	I0603 12:08:33.553624   73662 logs.go:276] 0 containers: []
	W0603 12:08:33.553631   73662 logs.go:278] No container was found matching "coredns"
	I0603 12:08:33.553637   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 12:08:33.553692   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 12:08:33.588061   73662 cri.go:89] found id: ""
	I0603 12:08:33.588085   73662 logs.go:276] 0 containers: []
	W0603 12:08:33.588094   73662 logs.go:278] No container was found matching "kube-scheduler"
	I0603 12:08:33.588103   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 12:08:33.588155   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 12:08:33.623960   73662 cri.go:89] found id: ""
	I0603 12:08:33.623983   73662 logs.go:276] 0 containers: []
	W0603 12:08:33.623993   73662 logs.go:278] No container was found matching "kube-proxy"
	I0603 12:08:33.624000   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 12:08:33.624071   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 12:08:33.658829   73662 cri.go:89] found id: ""
	I0603 12:08:33.658873   73662 logs.go:276] 0 containers: []
	W0603 12:08:33.658885   73662 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 12:08:33.658893   73662 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 12:08:33.658956   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 12:08:33.699501   73662 cri.go:89] found id: ""
	I0603 12:08:33.699526   73662 logs.go:276] 0 containers: []
	W0603 12:08:33.699536   73662 logs.go:278] No container was found matching "kindnet"
	I0603 12:08:33.699544   73662 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 12:08:33.699601   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 12:08:33.732293   73662 cri.go:89] found id: ""
	I0603 12:08:33.732327   73662 logs.go:276] 0 containers: []
	W0603 12:08:33.732338   73662 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 12:08:33.732348   73662 logs.go:123] Gathering logs for kubelet ...
	I0603 12:08:33.732361   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 12:08:33.783990   73662 logs.go:123] Gathering logs for dmesg ...
	I0603 12:08:33.784027   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 12:08:33.800684   73662 logs.go:123] Gathering logs for describe nodes ...
	I0603 12:08:33.800711   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 12:08:33.939661   73662 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 12:08:33.939685   73662 logs.go:123] Gathering logs for CRI-O ...
	I0603 12:08:33.939699   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 12:08:34.006442   73662 logs.go:123] Gathering logs for container status ...
	I0603 12:08:34.006473   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 12:08:31.081400   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:08:33.582139   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:08:35.112488   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:08:37.113080   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:08:35.666988   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:08:38.166862   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:08:36.549129   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:36.562476   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 12:08:36.562536   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 12:08:36.600035   73662 cri.go:89] found id: ""
	I0603 12:08:36.600074   73662 logs.go:276] 0 containers: []
	W0603 12:08:36.600084   73662 logs.go:278] No container was found matching "kube-apiserver"
	I0603 12:08:36.600091   73662 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 12:08:36.600147   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 12:08:36.661954   73662 cri.go:89] found id: ""
	I0603 12:08:36.661981   73662 logs.go:276] 0 containers: []
	W0603 12:08:36.661989   73662 logs.go:278] No container was found matching "etcd"
	I0603 12:08:36.661996   73662 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 12:08:36.662082   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 12:08:36.699538   73662 cri.go:89] found id: ""
	I0603 12:08:36.699561   73662 logs.go:276] 0 containers: []
	W0603 12:08:36.699569   73662 logs.go:278] No container was found matching "coredns"
	I0603 12:08:36.699574   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 12:08:36.699619   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 12:08:36.735256   73662 cri.go:89] found id: ""
	I0603 12:08:36.735283   73662 logs.go:276] 0 containers: []
	W0603 12:08:36.735291   73662 logs.go:278] No container was found matching "kube-scheduler"
	I0603 12:08:36.735296   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 12:08:36.735356   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 12:08:36.779862   73662 cri.go:89] found id: ""
	I0603 12:08:36.779888   73662 logs.go:276] 0 containers: []
	W0603 12:08:36.779895   73662 logs.go:278] No container was found matching "kube-proxy"
	I0603 12:08:36.779900   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 12:08:36.779946   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 12:08:36.818146   73662 cri.go:89] found id: ""
	I0603 12:08:36.818180   73662 logs.go:276] 0 containers: []
	W0603 12:08:36.818190   73662 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 12:08:36.818198   73662 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 12:08:36.818256   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 12:08:36.855408   73662 cri.go:89] found id: ""
	I0603 12:08:36.855436   73662 logs.go:276] 0 containers: []
	W0603 12:08:36.855447   73662 logs.go:278] No container was found matching "kindnet"
	I0603 12:08:36.855455   73662 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 12:08:36.855521   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 12:08:36.891656   73662 cri.go:89] found id: ""
	I0603 12:08:36.891686   73662 logs.go:276] 0 containers: []
	W0603 12:08:36.891697   73662 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 12:08:36.891709   73662 logs.go:123] Gathering logs for container status ...
	I0603 12:08:36.891725   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 12:08:36.937992   73662 logs.go:123] Gathering logs for kubelet ...
	I0603 12:08:36.938025   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 12:08:36.992422   73662 logs.go:123] Gathering logs for dmesg ...
	I0603 12:08:36.992456   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 12:08:37.007064   73662 logs.go:123] Gathering logs for describe nodes ...
	I0603 12:08:37.007093   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 12:08:37.088103   73662 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 12:08:37.088124   73662 logs.go:123] Gathering logs for CRI-O ...
	I0603 12:08:37.088136   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 12:08:39.660794   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:39.674617   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 12:08:39.674694   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 12:08:39.711446   73662 cri.go:89] found id: ""
	I0603 12:08:39.711482   73662 logs.go:276] 0 containers: []
	W0603 12:08:39.711493   73662 logs.go:278] No container was found matching "kube-apiserver"
	I0603 12:08:39.711501   73662 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 12:08:39.711565   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 12:08:39.745918   73662 cri.go:89] found id: ""
	I0603 12:08:39.745947   73662 logs.go:276] 0 containers: []
	W0603 12:08:39.745957   73662 logs.go:278] No container was found matching "etcd"
	I0603 12:08:39.745964   73662 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 12:08:39.746013   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 12:08:39.780713   73662 cri.go:89] found id: ""
	I0603 12:08:39.780739   73662 logs.go:276] 0 containers: []
	W0603 12:08:39.780760   73662 logs.go:278] No container was found matching "coredns"
	I0603 12:08:39.780777   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 12:08:39.780839   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 12:08:39.815657   73662 cri.go:89] found id: ""
	I0603 12:08:39.815685   73662 logs.go:276] 0 containers: []
	W0603 12:08:39.815696   73662 logs.go:278] No container was found matching "kube-scheduler"
	I0603 12:08:39.815703   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 12:08:39.815769   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 12:08:39.849403   73662 cri.go:89] found id: ""
	I0603 12:08:39.849439   73662 logs.go:276] 0 containers: []
	W0603 12:08:39.849449   73662 logs.go:278] No container was found matching "kube-proxy"
	I0603 12:08:39.849456   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 12:08:39.849524   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 12:08:39.884830   73662 cri.go:89] found id: ""
	I0603 12:08:39.884876   73662 logs.go:276] 0 containers: []
	W0603 12:08:39.884887   73662 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 12:08:39.884894   73662 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 12:08:39.884954   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 12:08:39.917820   73662 cri.go:89] found id: ""
	I0603 12:08:39.917853   73662 logs.go:276] 0 containers: []
	W0603 12:08:39.917863   73662 logs.go:278] No container was found matching "kindnet"
	I0603 12:08:39.917871   73662 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 12:08:39.917928   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 12:08:39.955294   73662 cri.go:89] found id: ""
	I0603 12:08:39.955330   73662 logs.go:276] 0 containers: []
	W0603 12:08:39.955340   73662 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 12:08:39.955350   73662 logs.go:123] Gathering logs for container status ...
	I0603 12:08:39.955364   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 12:08:39.997553   73662 logs.go:123] Gathering logs for kubelet ...
	I0603 12:08:39.997577   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 12:08:40.052216   73662 logs.go:123] Gathering logs for dmesg ...
	I0603 12:08:40.052251   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 12:08:40.066377   73662 logs.go:123] Gathering logs for describe nodes ...
	I0603 12:08:40.066405   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0603 12:08:36.080739   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:08:38.580681   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:08:39.611998   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:08:41.613058   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:08:44.112634   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:08:40.168134   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:08:42.666329   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:08:44.666738   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	W0603 12:08:40.145631   73662 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 12:08:40.145653   73662 logs.go:123] Gathering logs for CRI-O ...
	I0603 12:08:40.145668   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 12:08:42.718782   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:42.732121   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 12:08:42.732197   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 12:08:42.766418   73662 cri.go:89] found id: ""
	I0603 12:08:42.766443   73662 logs.go:276] 0 containers: []
	W0603 12:08:42.766451   73662 logs.go:278] No container was found matching "kube-apiserver"
	I0603 12:08:42.766456   73662 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 12:08:42.766503   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 12:08:42.809790   73662 cri.go:89] found id: ""
	I0603 12:08:42.809821   73662 logs.go:276] 0 containers: []
	W0603 12:08:42.809830   73662 logs.go:278] No container was found matching "etcd"
	I0603 12:08:42.809836   73662 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 12:08:42.809893   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 12:08:42.843410   73662 cri.go:89] found id: ""
	I0603 12:08:42.843439   73662 logs.go:276] 0 containers: []
	W0603 12:08:42.843446   73662 logs.go:278] No container was found matching "coredns"
	I0603 12:08:42.843456   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 12:08:42.843510   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 12:08:42.879150   73662 cri.go:89] found id: ""
	I0603 12:08:42.879177   73662 logs.go:276] 0 containers: []
	W0603 12:08:42.879186   73662 logs.go:278] No container was found matching "kube-scheduler"
	I0603 12:08:42.879193   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 12:08:42.879256   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 12:08:42.914565   73662 cri.go:89] found id: ""
	I0603 12:08:42.914598   73662 logs.go:276] 0 containers: []
	W0603 12:08:42.914609   73662 logs.go:278] No container was found matching "kube-proxy"
	I0603 12:08:42.914616   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 12:08:42.914680   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 12:08:42.949467   73662 cri.go:89] found id: ""
	I0603 12:08:42.949496   73662 logs.go:276] 0 containers: []
	W0603 12:08:42.949506   73662 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 12:08:42.949513   73662 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 12:08:42.949563   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 12:08:42.984235   73662 cri.go:89] found id: ""
	I0603 12:08:42.984257   73662 logs.go:276] 0 containers: []
	W0603 12:08:42.984264   73662 logs.go:278] No container was found matching "kindnet"
	I0603 12:08:42.984269   73662 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 12:08:42.984314   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 12:08:43.027786   73662 cri.go:89] found id: ""
	I0603 12:08:43.027816   73662 logs.go:276] 0 containers: []
	W0603 12:08:43.027827   73662 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 12:08:43.027838   73662 logs.go:123] Gathering logs for kubelet ...
	I0603 12:08:43.027852   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 12:08:43.099184   73662 logs.go:123] Gathering logs for dmesg ...
	I0603 12:08:43.099212   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 12:08:43.124733   73662 logs.go:123] Gathering logs for describe nodes ...
	I0603 12:08:43.124755   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 12:08:43.194716   73662 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 12:08:43.194741   73662 logs.go:123] Gathering logs for CRI-O ...
	I0603 12:08:43.194759   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 12:08:43.275948   73662 logs.go:123] Gathering logs for container status ...
	I0603 12:08:43.275982   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 12:08:41.080968   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:08:43.081892   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:08:45.082261   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:08:46.113795   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:08:48.612577   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:08:47.166497   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:08:49.167122   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:08:45.819178   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:45.832301   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 12:08:45.832391   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 12:08:45.867947   73662 cri.go:89] found id: ""
	I0603 12:08:45.867979   73662 logs.go:276] 0 containers: []
	W0603 12:08:45.867990   73662 logs.go:278] No container was found matching "kube-apiserver"
	I0603 12:08:45.867998   73662 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 12:08:45.868050   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 12:08:45.909498   73662 cri.go:89] found id: ""
	I0603 12:08:45.909529   73662 logs.go:276] 0 containers: []
	W0603 12:08:45.909541   73662 logs.go:278] No container was found matching "etcd"
	I0603 12:08:45.909552   73662 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 12:08:45.909614   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 12:08:45.942313   73662 cri.go:89] found id: ""
	I0603 12:08:45.942343   73662 logs.go:276] 0 containers: []
	W0603 12:08:45.942353   73662 logs.go:278] No container was found matching "coredns"
	I0603 12:08:45.942361   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 12:08:45.942425   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 12:08:45.976217   73662 cri.go:89] found id: ""
	I0603 12:08:45.976246   73662 logs.go:276] 0 containers: []
	W0603 12:08:45.976254   73662 logs.go:278] No container was found matching "kube-scheduler"
	I0603 12:08:45.976260   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 12:08:45.976306   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 12:08:46.010553   73662 cri.go:89] found id: ""
	I0603 12:08:46.010583   73662 logs.go:276] 0 containers: []
	W0603 12:08:46.010593   73662 logs.go:278] No container was found matching "kube-proxy"
	I0603 12:08:46.010599   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 12:08:46.010675   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 12:08:46.048459   73662 cri.go:89] found id: ""
	I0603 12:08:46.048481   73662 logs.go:276] 0 containers: []
	W0603 12:08:46.048489   73662 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 12:08:46.048495   73662 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 12:08:46.048540   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 12:08:46.084823   73662 cri.go:89] found id: ""
	I0603 12:08:46.084852   73662 logs.go:276] 0 containers: []
	W0603 12:08:46.084862   73662 logs.go:278] No container was found matching "kindnet"
	I0603 12:08:46.084869   73662 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 12:08:46.084920   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 12:08:46.129011   73662 cri.go:89] found id: ""
	I0603 12:08:46.129036   73662 logs.go:276] 0 containers: []
	W0603 12:08:46.129046   73662 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 12:08:46.129055   73662 logs.go:123] Gathering logs for dmesg ...
	I0603 12:08:46.129069   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 12:08:46.144145   73662 logs.go:123] Gathering logs for describe nodes ...
	I0603 12:08:46.144179   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 12:08:46.213800   73662 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 12:08:46.213826   73662 logs.go:123] Gathering logs for CRI-O ...
	I0603 12:08:46.213841   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 12:08:46.294423   73662 logs.go:123] Gathering logs for container status ...
	I0603 12:08:46.294453   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 12:08:46.334408   73662 logs.go:123] Gathering logs for kubelet ...
	I0603 12:08:46.334436   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 12:08:48.888798   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:48.901815   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 12:08:48.901876   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 12:08:48.935266   73662 cri.go:89] found id: ""
	I0603 12:08:48.935290   73662 logs.go:276] 0 containers: []
	W0603 12:08:48.935301   73662 logs.go:278] No container was found matching "kube-apiserver"
	I0603 12:08:48.935308   73662 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 12:08:48.935375   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 12:08:48.969640   73662 cri.go:89] found id: ""
	I0603 12:08:48.969666   73662 logs.go:276] 0 containers: []
	W0603 12:08:48.969673   73662 logs.go:278] No container was found matching "etcd"
	I0603 12:08:48.969678   73662 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 12:08:48.969739   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 12:08:49.003697   73662 cri.go:89] found id: ""
	I0603 12:08:49.003725   73662 logs.go:276] 0 containers: []
	W0603 12:08:49.003736   73662 logs.go:278] No container was found matching "coredns"
	I0603 12:08:49.003743   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 12:08:49.003800   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 12:08:49.037808   73662 cri.go:89] found id: ""
	I0603 12:08:49.037837   73662 logs.go:276] 0 containers: []
	W0603 12:08:49.037847   73662 logs.go:278] No container was found matching "kube-scheduler"
	I0603 12:08:49.037879   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 12:08:49.037947   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 12:08:49.071844   73662 cri.go:89] found id: ""
	I0603 12:08:49.071875   73662 logs.go:276] 0 containers: []
	W0603 12:08:49.071885   73662 logs.go:278] No container was found matching "kube-proxy"
	I0603 12:08:49.071892   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 12:08:49.071952   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 12:08:49.107907   73662 cri.go:89] found id: ""
	I0603 12:08:49.107934   73662 logs.go:276] 0 containers: []
	W0603 12:08:49.107945   73662 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 12:08:49.107952   73662 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 12:08:49.108012   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 12:08:49.144847   73662 cri.go:89] found id: ""
	I0603 12:08:49.144869   73662 logs.go:276] 0 containers: []
	W0603 12:08:49.144876   73662 logs.go:278] No container was found matching "kindnet"
	I0603 12:08:49.144882   73662 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 12:08:49.144944   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 12:08:49.183910   73662 cri.go:89] found id: ""
	I0603 12:08:49.183931   73662 logs.go:276] 0 containers: []
	W0603 12:08:49.183940   73662 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 12:08:49.183951   73662 logs.go:123] Gathering logs for kubelet ...
	I0603 12:08:49.183964   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 12:08:49.237344   73662 logs.go:123] Gathering logs for dmesg ...
	I0603 12:08:49.237376   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 12:08:49.251612   73662 logs.go:123] Gathering logs for describe nodes ...
	I0603 12:08:49.251636   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 12:08:49.317211   73662 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 12:08:49.317236   73662 logs.go:123] Gathering logs for CRI-O ...
	I0603 12:08:49.317251   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 12:08:49.394414   73662 logs.go:123] Gathering logs for container status ...
	I0603 12:08:49.394455   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 12:08:47.581577   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:08:50.080726   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:08:51.112151   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:08:53.112224   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:08:51.666596   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:08:54.166060   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:08:51.937686   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:51.950390   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 12:08:51.950466   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 12:08:51.984341   73662 cri.go:89] found id: ""
	I0603 12:08:51.984365   73662 logs.go:276] 0 containers: []
	W0603 12:08:51.984372   73662 logs.go:278] No container was found matching "kube-apiserver"
	I0603 12:08:51.984378   73662 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 12:08:51.984426   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 12:08:52.017828   73662 cri.go:89] found id: ""
	I0603 12:08:52.017857   73662 logs.go:276] 0 containers: []
	W0603 12:08:52.017866   73662 logs.go:278] No container was found matching "etcd"
	I0603 12:08:52.017872   73662 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 12:08:52.017918   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 12:08:52.057283   73662 cri.go:89] found id: ""
	I0603 12:08:52.057314   73662 logs.go:276] 0 containers: []
	W0603 12:08:52.057324   73662 logs.go:278] No container was found matching "coredns"
	I0603 12:08:52.057331   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 12:08:52.057391   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 12:08:52.102270   73662 cri.go:89] found id: ""
	I0603 12:08:52.102303   73662 logs.go:276] 0 containers: []
	W0603 12:08:52.102313   73662 logs.go:278] No container was found matching "kube-scheduler"
	I0603 12:08:52.102321   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 12:08:52.102383   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 12:08:52.137361   73662 cri.go:89] found id: ""
	I0603 12:08:52.137386   73662 logs.go:276] 0 containers: []
	W0603 12:08:52.137393   73662 logs.go:278] No container was found matching "kube-proxy"
	I0603 12:08:52.137399   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 12:08:52.137463   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 12:08:52.171765   73662 cri.go:89] found id: ""
	I0603 12:08:52.171791   73662 logs.go:276] 0 containers: []
	W0603 12:08:52.171800   73662 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 12:08:52.171807   73662 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 12:08:52.171854   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 12:08:52.204688   73662 cri.go:89] found id: ""
	I0603 12:08:52.204715   73662 logs.go:276] 0 containers: []
	W0603 12:08:52.204722   73662 logs.go:278] No container was found matching "kindnet"
	I0603 12:08:52.204728   73662 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 12:08:52.204780   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 12:08:52.242547   73662 cri.go:89] found id: ""
	I0603 12:08:52.242571   73662 logs.go:276] 0 containers: []
	W0603 12:08:52.242579   73662 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 12:08:52.242586   73662 logs.go:123] Gathering logs for CRI-O ...
	I0603 12:08:52.242599   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 12:08:52.319089   73662 logs.go:123] Gathering logs for container status ...
	I0603 12:08:52.319122   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 12:08:52.360879   73662 logs.go:123] Gathering logs for kubelet ...
	I0603 12:08:52.360910   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 12:08:52.413601   73662 logs.go:123] Gathering logs for dmesg ...
	I0603 12:08:52.413641   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 12:08:52.428336   73662 logs.go:123] Gathering logs for describe nodes ...
	I0603 12:08:52.428370   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 12:08:52.500089   73662 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 12:08:55.001244   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:55.015217   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 12:08:55.015286   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 12:08:55.055825   73662 cri.go:89] found id: ""
	I0603 12:08:55.055906   73662 logs.go:276] 0 containers: []
	W0603 12:08:55.055922   73662 logs.go:278] No container was found matching "kube-apiserver"
	I0603 12:08:55.055930   73662 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 12:08:55.055993   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 12:08:52.080957   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:08:54.081055   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:08:55.113083   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:08:57.612727   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:08:56.166588   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:08:58.167503   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:08:55.092456   73662 cri.go:89] found id: ""
	I0603 12:08:55.093688   73662 logs.go:276] 0 containers: []
	W0603 12:08:55.093711   73662 logs.go:278] No container was found matching "etcd"
	I0603 12:08:55.093723   73662 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 12:08:55.093787   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 12:08:55.131165   73662 cri.go:89] found id: ""
	I0603 12:08:55.131193   73662 logs.go:276] 0 containers: []
	W0603 12:08:55.131203   73662 logs.go:278] No container was found matching "coredns"
	I0603 12:08:55.131210   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 12:08:55.131260   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 12:08:55.168170   73662 cri.go:89] found id: ""
	I0603 12:08:55.168188   73662 logs.go:276] 0 containers: []
	W0603 12:08:55.168194   73662 logs.go:278] No container was found matching "kube-scheduler"
	I0603 12:08:55.168200   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 12:08:55.168247   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 12:08:55.203409   73662 cri.go:89] found id: ""
	I0603 12:08:55.203434   73662 logs.go:276] 0 containers: []
	W0603 12:08:55.203441   73662 logs.go:278] No container was found matching "kube-proxy"
	I0603 12:08:55.203446   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 12:08:55.203491   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 12:08:55.239971   73662 cri.go:89] found id: ""
	I0603 12:08:55.239997   73662 logs.go:276] 0 containers: []
	W0603 12:08:55.240009   73662 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 12:08:55.240016   73662 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 12:08:55.240077   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 12:08:55.275115   73662 cri.go:89] found id: ""
	I0603 12:08:55.275144   73662 logs.go:276] 0 containers: []
	W0603 12:08:55.275154   73662 logs.go:278] No container was found matching "kindnet"
	I0603 12:08:55.275162   73662 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 12:08:55.275221   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 12:08:55.309384   73662 cri.go:89] found id: ""
	I0603 12:08:55.309414   73662 logs.go:276] 0 containers: []
	W0603 12:08:55.309425   73662 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 12:08:55.309435   73662 logs.go:123] Gathering logs for dmesg ...
	I0603 12:08:55.309451   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 12:08:55.323455   73662 logs.go:123] Gathering logs for describe nodes ...
	I0603 12:08:55.323485   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 12:08:55.397581   73662 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 12:08:55.397606   73662 logs.go:123] Gathering logs for CRI-O ...
	I0603 12:08:55.397617   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 12:08:55.473046   73662 logs.go:123] Gathering logs for container status ...
	I0603 12:08:55.473079   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 12:08:55.515248   73662 logs.go:123] Gathering logs for kubelet ...
	I0603 12:08:55.515282   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 12:08:58.067416   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:58.081175   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 12:08:58.081241   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 12:08:58.121654   73662 cri.go:89] found id: ""
	I0603 12:08:58.121680   73662 logs.go:276] 0 containers: []
	W0603 12:08:58.121691   73662 logs.go:278] No container was found matching "kube-apiserver"
	I0603 12:08:58.121698   73662 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 12:08:58.121774   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 12:08:58.159599   73662 cri.go:89] found id: ""
	I0603 12:08:58.159623   73662 logs.go:276] 0 containers: []
	W0603 12:08:58.159631   73662 logs.go:278] No container was found matching "etcd"
	I0603 12:08:58.159636   73662 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 12:08:58.159689   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 12:08:58.197518   73662 cri.go:89] found id: ""
	I0603 12:08:58.197545   73662 logs.go:276] 0 containers: []
	W0603 12:08:58.197553   73662 logs.go:278] No container was found matching "coredns"
	I0603 12:08:58.197558   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 12:08:58.197603   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 12:08:58.232433   73662 cri.go:89] found id: ""
	I0603 12:08:58.232463   73662 logs.go:276] 0 containers: []
	W0603 12:08:58.232474   73662 logs.go:278] No container was found matching "kube-scheduler"
	I0603 12:08:58.232479   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 12:08:58.232529   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 12:08:58.268209   73662 cri.go:89] found id: ""
	I0603 12:08:58.268234   73662 logs.go:276] 0 containers: []
	W0603 12:08:58.268242   73662 logs.go:278] No container was found matching "kube-proxy"
	I0603 12:08:58.268248   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 12:08:58.268307   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 12:08:58.302091   73662 cri.go:89] found id: ""
	I0603 12:08:58.302118   73662 logs.go:276] 0 containers: []
	W0603 12:08:58.302129   73662 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 12:08:58.302136   73662 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 12:08:58.302195   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 12:08:58.336539   73662 cri.go:89] found id: ""
	I0603 12:08:58.336567   73662 logs.go:276] 0 containers: []
	W0603 12:08:58.336574   73662 logs.go:278] No container was found matching "kindnet"
	I0603 12:08:58.336579   73662 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 12:08:58.336627   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 12:08:58.369263   73662 cri.go:89] found id: ""
	I0603 12:08:58.369294   73662 logs.go:276] 0 containers: []
	W0603 12:08:58.369305   73662 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 12:08:58.369316   73662 logs.go:123] Gathering logs for container status ...
	I0603 12:08:58.369329   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 12:08:58.408651   73662 logs.go:123] Gathering logs for kubelet ...
	I0603 12:08:58.408683   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 12:08:58.463551   73662 logs.go:123] Gathering logs for dmesg ...
	I0603 12:08:58.463578   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 12:08:58.478781   73662 logs.go:123] Gathering logs for describe nodes ...
	I0603 12:08:58.478808   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 12:08:58.556604   73662 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 12:08:58.556631   73662 logs.go:123] Gathering logs for CRI-O ...
	I0603 12:08:58.556646   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 12:08:56.580284   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:08:58.582526   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:09:00.112533   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:09:02.113462   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:09:00.666282   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:09:02.666684   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:09:04.666822   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:09:01.135368   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:09:01.148448   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 12:09:01.148517   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 12:09:01.184913   73662 cri.go:89] found id: ""
	I0603 12:09:01.184936   73662 logs.go:276] 0 containers: []
	W0603 12:09:01.184947   73662 logs.go:278] No container was found matching "kube-apiserver"
	I0603 12:09:01.184955   73662 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 12:09:01.185017   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 12:09:01.221508   73662 cri.go:89] found id: ""
	I0603 12:09:01.221538   73662 logs.go:276] 0 containers: []
	W0603 12:09:01.221547   73662 logs.go:278] No container was found matching "etcd"
	I0603 12:09:01.221552   73662 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 12:09:01.221613   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 12:09:01.256588   73662 cri.go:89] found id: ""
	I0603 12:09:01.256617   73662 logs.go:276] 0 containers: []
	W0603 12:09:01.256627   73662 logs.go:278] No container was found matching "coredns"
	I0603 12:09:01.256634   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 12:09:01.256696   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 12:09:01.292874   73662 cri.go:89] found id: ""
	I0603 12:09:01.292898   73662 logs.go:276] 0 containers: []
	W0603 12:09:01.292906   73662 logs.go:278] No container was found matching "kube-scheduler"
	I0603 12:09:01.292913   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 12:09:01.292957   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 12:09:01.330607   73662 cri.go:89] found id: ""
	I0603 12:09:01.330636   73662 logs.go:276] 0 containers: []
	W0603 12:09:01.330646   73662 logs.go:278] No container was found matching "kube-proxy"
	I0603 12:09:01.330652   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 12:09:01.330698   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 12:09:01.366053   73662 cri.go:89] found id: ""
	I0603 12:09:01.366090   73662 logs.go:276] 0 containers: []
	W0603 12:09:01.366102   73662 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 12:09:01.366110   73662 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 12:09:01.366168   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 12:09:01.403446   73662 cri.go:89] found id: ""
	I0603 12:09:01.403476   73662 logs.go:276] 0 containers: []
	W0603 12:09:01.403489   73662 logs.go:278] No container was found matching "kindnet"
	I0603 12:09:01.403495   73662 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 12:09:01.403558   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 12:09:01.445413   73662 cri.go:89] found id: ""
	I0603 12:09:01.445444   73662 logs.go:276] 0 containers: []
	W0603 12:09:01.445456   73662 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 12:09:01.445467   73662 logs.go:123] Gathering logs for describe nodes ...
	I0603 12:09:01.445485   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 12:09:01.521804   73662 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 12:09:01.521831   73662 logs.go:123] Gathering logs for CRI-O ...
	I0603 12:09:01.521846   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 12:09:01.601841   73662 logs.go:123] Gathering logs for container status ...
	I0603 12:09:01.601869   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 12:09:01.642642   73662 logs.go:123] Gathering logs for kubelet ...
	I0603 12:09:01.642685   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 12:09:01.700512   73662 logs.go:123] Gathering logs for dmesg ...
	I0603 12:09:01.700547   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 12:09:04.216853   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:09:04.229827   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 12:09:04.229910   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 12:09:04.265194   73662 cri.go:89] found id: ""
	I0603 12:09:04.265223   73662 logs.go:276] 0 containers: []
	W0603 12:09:04.265230   73662 logs.go:278] No container was found matching "kube-apiserver"
	I0603 12:09:04.265235   73662 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 12:09:04.265294   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 12:09:04.301157   73662 cri.go:89] found id: ""
	I0603 12:09:04.301186   73662 logs.go:276] 0 containers: []
	W0603 12:09:04.301193   73662 logs.go:278] No container was found matching "etcd"
	I0603 12:09:04.301199   73662 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 12:09:04.301249   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 12:09:04.335992   73662 cri.go:89] found id: ""
	I0603 12:09:04.336014   73662 logs.go:276] 0 containers: []
	W0603 12:09:04.336024   73662 logs.go:278] No container was found matching "coredns"
	I0603 12:09:04.336031   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 12:09:04.336090   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 12:09:04.371342   73662 cri.go:89] found id: ""
	I0603 12:09:04.371375   73662 logs.go:276] 0 containers: []
	W0603 12:09:04.371386   73662 logs.go:278] No container was found matching "kube-scheduler"
	I0603 12:09:04.371393   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 12:09:04.371452   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 12:09:04.406439   73662 cri.go:89] found id: ""
	I0603 12:09:04.406466   73662 logs.go:276] 0 containers: []
	W0603 12:09:04.406476   73662 logs.go:278] No container was found matching "kube-proxy"
	I0603 12:09:04.406483   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 12:09:04.406540   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 12:09:04.438426   73662 cri.go:89] found id: ""
	I0603 12:09:04.438448   73662 logs.go:276] 0 containers: []
	W0603 12:09:04.438458   73662 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 12:09:04.438467   73662 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 12:09:04.438525   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 12:09:04.471465   73662 cri.go:89] found id: ""
	I0603 12:09:04.471494   73662 logs.go:276] 0 containers: []
	W0603 12:09:04.471504   73662 logs.go:278] No container was found matching "kindnet"
	I0603 12:09:04.471512   73662 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 12:09:04.471576   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 12:09:04.507994   73662 cri.go:89] found id: ""
	I0603 12:09:04.508016   73662 logs.go:276] 0 containers: []
	W0603 12:09:04.508023   73662 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 12:09:04.508031   73662 logs.go:123] Gathering logs for kubelet ...
	I0603 12:09:04.508042   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 12:09:04.558973   73662 logs.go:123] Gathering logs for dmesg ...
	I0603 12:09:04.559007   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 12:09:04.576157   73662 logs.go:123] Gathering logs for describe nodes ...
	I0603 12:09:04.576190   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 12:09:04.653262   73662 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 12:09:04.653282   73662 logs.go:123] Gathering logs for CRI-O ...
	I0603 12:09:04.653293   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 12:09:04.732195   73662 logs.go:123] Gathering logs for container status ...
	I0603 12:09:04.732228   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 12:09:01.081232   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:09:03.083123   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:09:05.083243   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:09:04.612842   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:09:07.113160   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:09:06.667720   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:09:09.167160   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:09:07.282253   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:09:07.296478   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 12:09:07.296549   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 12:09:07.331591   73662 cri.go:89] found id: ""
	I0603 12:09:07.331614   73662 logs.go:276] 0 containers: []
	W0603 12:09:07.331621   73662 logs.go:278] No container was found matching "kube-apiserver"
	I0603 12:09:07.331626   73662 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 12:09:07.331676   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 12:09:07.367333   73662 cri.go:89] found id: ""
	I0603 12:09:07.367356   73662 logs.go:276] 0 containers: []
	W0603 12:09:07.367363   73662 logs.go:278] No container was found matching "etcd"
	I0603 12:09:07.367369   73662 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 12:09:07.367426   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 12:09:07.406446   73662 cri.go:89] found id: ""
	I0603 12:09:07.406471   73662 logs.go:276] 0 containers: []
	W0603 12:09:07.406479   73662 logs.go:278] No container was found matching "coredns"
	I0603 12:09:07.406485   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 12:09:07.406544   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 12:09:07.441610   73662 cri.go:89] found id: ""
	I0603 12:09:07.441632   73662 logs.go:276] 0 containers: []
	W0603 12:09:07.441640   73662 logs.go:278] No container was found matching "kube-scheduler"
	I0603 12:09:07.441646   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 12:09:07.441699   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 12:09:07.476479   73662 cri.go:89] found id: ""
	I0603 12:09:07.476501   73662 logs.go:276] 0 containers: []
	W0603 12:09:07.476508   73662 logs.go:278] No container was found matching "kube-proxy"
	I0603 12:09:07.476513   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 12:09:07.476586   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 12:09:07.513712   73662 cri.go:89] found id: ""
	I0603 12:09:07.513740   73662 logs.go:276] 0 containers: []
	W0603 12:09:07.513750   73662 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 12:09:07.513758   73662 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 12:09:07.513816   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 12:09:07.552169   73662 cri.go:89] found id: ""
	I0603 12:09:07.552195   73662 logs.go:276] 0 containers: []
	W0603 12:09:07.552206   73662 logs.go:278] No container was found matching "kindnet"
	I0603 12:09:07.552213   73662 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 12:09:07.552274   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 12:09:07.591926   73662 cri.go:89] found id: ""
	I0603 12:09:07.591950   73662 logs.go:276] 0 containers: []
	W0603 12:09:07.591956   73662 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 12:09:07.591963   73662 logs.go:123] Gathering logs for describe nodes ...
	I0603 12:09:07.591974   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 12:09:07.672408   73662 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 12:09:07.672429   73662 logs.go:123] Gathering logs for CRI-O ...
	I0603 12:09:07.672440   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 12:09:07.752948   73662 logs.go:123] Gathering logs for container status ...
	I0603 12:09:07.752980   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 12:09:07.791942   73662 logs.go:123] Gathering logs for kubelet ...
	I0603 12:09:07.791975   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 12:09:07.849187   73662 logs.go:123] Gathering logs for dmesg ...
	I0603 12:09:07.849222   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 12:09:07.586314   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:09:10.082310   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:09:09.612757   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:09:11.612893   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:09:13.613395   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:09:11.669965   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:09:14.165493   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:09:10.364466   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:09:10.377895   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 12:09:10.377967   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 12:09:10.412039   73662 cri.go:89] found id: ""
	I0603 12:09:10.412062   73662 logs.go:276] 0 containers: []
	W0603 12:09:10.412070   73662 logs.go:278] No container was found matching "kube-apiserver"
	I0603 12:09:10.412082   73662 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 12:09:10.412137   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 12:09:10.444562   73662 cri.go:89] found id: ""
	I0603 12:09:10.444585   73662 logs.go:276] 0 containers: []
	W0603 12:09:10.444594   73662 logs.go:278] No container was found matching "etcd"
	I0603 12:09:10.444602   73662 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 12:09:10.444657   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 12:09:10.479651   73662 cri.go:89] found id: ""
	I0603 12:09:10.479674   73662 logs.go:276] 0 containers: []
	W0603 12:09:10.479681   73662 logs.go:278] No container was found matching "coredns"
	I0603 12:09:10.479687   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 12:09:10.479742   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 12:09:10.518978   73662 cri.go:89] found id: ""
	I0603 12:09:10.519000   73662 logs.go:276] 0 containers: []
	W0603 12:09:10.519011   73662 logs.go:278] No container was found matching "kube-scheduler"
	I0603 12:09:10.519019   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 12:09:10.519100   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 12:09:10.553848   73662 cri.go:89] found id: ""
	I0603 12:09:10.553873   73662 logs.go:276] 0 containers: []
	W0603 12:09:10.553880   73662 logs.go:278] No container was found matching "kube-proxy"
	I0603 12:09:10.553885   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 12:09:10.553933   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 12:09:10.592081   73662 cri.go:89] found id: ""
	I0603 12:09:10.592107   73662 logs.go:276] 0 containers: []
	W0603 12:09:10.592116   73662 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 12:09:10.592124   73662 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 12:09:10.592176   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 12:09:10.629138   73662 cri.go:89] found id: ""
	I0603 12:09:10.629164   73662 logs.go:276] 0 containers: []
	W0603 12:09:10.629175   73662 logs.go:278] No container was found matching "kindnet"
	I0603 12:09:10.629181   73662 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 12:09:10.629233   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 12:09:10.666660   73662 cri.go:89] found id: ""
	I0603 12:09:10.666686   73662 logs.go:276] 0 containers: []
	W0603 12:09:10.666695   73662 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 12:09:10.666705   73662 logs.go:123] Gathering logs for CRI-O ...
	I0603 12:09:10.666723   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 12:09:10.747856   73662 logs.go:123] Gathering logs for container status ...
	I0603 12:09:10.747892   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 12:09:10.792403   73662 logs.go:123] Gathering logs for kubelet ...
	I0603 12:09:10.792442   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 12:09:10.844484   73662 logs.go:123] Gathering logs for dmesg ...
	I0603 12:09:10.844520   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 12:09:10.857822   73662 logs.go:123] Gathering logs for describe nodes ...
	I0603 12:09:10.857848   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 12:09:10.927434   73662 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 12:09:13.428260   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:09:13.442354   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 12:09:13.442418   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 12:09:13.480908   73662 cri.go:89] found id: ""
	I0603 12:09:13.480938   73662 logs.go:276] 0 containers: []
	W0603 12:09:13.480948   73662 logs.go:278] No container was found matching "kube-apiserver"
	I0603 12:09:13.480953   73662 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 12:09:13.481002   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 12:09:13.513942   73662 cri.go:89] found id: ""
	I0603 12:09:13.513966   73662 logs.go:276] 0 containers: []
	W0603 12:09:13.513979   73662 logs.go:278] No container was found matching "etcd"
	I0603 12:09:13.513985   73662 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 12:09:13.514042   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 12:09:13.548849   73662 cri.go:89] found id: ""
	I0603 12:09:13.548881   73662 logs.go:276] 0 containers: []
	W0603 12:09:13.548892   73662 logs.go:278] No container was found matching "coredns"
	I0603 12:09:13.548900   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 12:09:13.548961   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 12:09:13.587857   73662 cri.go:89] found id: ""
	I0603 12:09:13.587880   73662 logs.go:276] 0 containers: []
	W0603 12:09:13.587887   73662 logs.go:278] No container was found matching "kube-scheduler"
	I0603 12:09:13.587893   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 12:09:13.587941   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 12:09:13.623386   73662 cri.go:89] found id: ""
	I0603 12:09:13.623408   73662 logs.go:276] 0 containers: []
	W0603 12:09:13.623415   73662 logs.go:278] No container was found matching "kube-proxy"
	I0603 12:09:13.623421   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 12:09:13.623473   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 12:09:13.662721   73662 cri.go:89] found id: ""
	I0603 12:09:13.662755   73662 logs.go:276] 0 containers: []
	W0603 12:09:13.662774   73662 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 12:09:13.662782   73662 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 12:09:13.662847   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 12:09:13.697244   73662 cri.go:89] found id: ""
	I0603 12:09:13.697272   73662 logs.go:276] 0 containers: []
	W0603 12:09:13.697279   73662 logs.go:278] No container was found matching "kindnet"
	I0603 12:09:13.697284   73662 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 12:09:13.697342   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 12:09:13.734987   73662 cri.go:89] found id: ""
	I0603 12:09:13.735014   73662 logs.go:276] 0 containers: []
	W0603 12:09:13.735020   73662 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 12:09:13.735030   73662 logs.go:123] Gathering logs for kubelet ...
	I0603 12:09:13.735055   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 12:09:13.792422   73662 logs.go:123] Gathering logs for dmesg ...
	I0603 12:09:13.792463   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 12:09:13.807174   73662 logs.go:123] Gathering logs for describe nodes ...
	I0603 12:09:13.807220   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 12:09:13.880940   73662 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 12:09:13.880962   73662 logs.go:123] Gathering logs for CRI-O ...
	I0603 12:09:13.880976   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 12:09:13.970760   73662 logs.go:123] Gathering logs for container status ...
	I0603 12:09:13.970800   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 12:09:12.581261   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:09:14.581335   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:09:16.113403   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:09:18.113699   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:09:16.166578   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:09:18.167436   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:09:16.519306   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:09:16.534161   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 12:09:16.534213   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 12:09:16.571503   73662 cri.go:89] found id: ""
	I0603 12:09:16.571533   73662 logs.go:276] 0 containers: []
	W0603 12:09:16.571544   73662 logs.go:278] No container was found matching "kube-apiserver"
	I0603 12:09:16.571553   73662 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 12:09:16.571603   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 12:09:16.610388   73662 cri.go:89] found id: ""
	I0603 12:09:16.610425   73662 logs.go:276] 0 containers: []
	W0603 12:09:16.610434   73662 logs.go:278] No container was found matching "etcd"
	I0603 12:09:16.610442   73662 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 12:09:16.610501   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 12:09:16.654132   73662 cri.go:89] found id: ""
	I0603 12:09:16.654173   73662 logs.go:276] 0 containers: []
	W0603 12:09:16.654184   73662 logs.go:278] No container was found matching "coredns"
	I0603 12:09:16.654196   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 12:09:16.654288   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 12:09:16.695091   73662 cri.go:89] found id: ""
	I0603 12:09:16.695120   73662 logs.go:276] 0 containers: []
	W0603 12:09:16.695130   73662 logs.go:278] No container was found matching "kube-scheduler"
	I0603 12:09:16.695137   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 12:09:16.695198   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 12:09:16.729916   73662 cri.go:89] found id: ""
	I0603 12:09:16.729941   73662 logs.go:276] 0 containers: []
	W0603 12:09:16.729950   73662 logs.go:278] No container was found matching "kube-proxy"
	I0603 12:09:16.729958   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 12:09:16.730019   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 12:09:16.763653   73662 cri.go:89] found id: ""
	I0603 12:09:16.763675   73662 logs.go:276] 0 containers: []
	W0603 12:09:16.763683   73662 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 12:09:16.763688   73662 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 12:09:16.763734   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 12:09:16.801834   73662 cri.go:89] found id: ""
	I0603 12:09:16.801867   73662 logs.go:276] 0 containers: []
	W0603 12:09:16.801877   73662 logs.go:278] No container was found matching "kindnet"
	I0603 12:09:16.801885   73662 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 12:09:16.801946   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 12:09:16.836959   73662 cri.go:89] found id: ""
	I0603 12:09:16.836983   73662 logs.go:276] 0 containers: []
	W0603 12:09:16.836995   73662 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 12:09:16.837006   73662 logs.go:123] Gathering logs for dmesg ...
	I0603 12:09:16.837023   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 12:09:16.850264   73662 logs.go:123] Gathering logs for describe nodes ...
	I0603 12:09:16.850294   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 12:09:16.943870   73662 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 12:09:16.943897   73662 logs.go:123] Gathering logs for CRI-O ...
	I0603 12:09:16.943914   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 12:09:17.028230   73662 logs.go:123] Gathering logs for container status ...
	I0603 12:09:17.028269   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 12:09:17.071944   73662 logs.go:123] Gathering logs for kubelet ...
	I0603 12:09:17.071975   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 12:09:19.627246   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:09:19.641441   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 12:09:19.641513   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 12:09:19.680111   73662 cri.go:89] found id: ""
	I0603 12:09:19.680135   73662 logs.go:276] 0 containers: []
	W0603 12:09:19.680144   73662 logs.go:278] No container was found matching "kube-apiserver"
	I0603 12:09:19.680152   73662 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 12:09:19.680210   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 12:09:19.717357   73662 cri.go:89] found id: ""
	I0603 12:09:19.717386   73662 logs.go:276] 0 containers: []
	W0603 12:09:19.717396   73662 logs.go:278] No container was found matching "etcd"
	I0603 12:09:19.717403   73662 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 12:09:19.717467   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 12:09:19.753540   73662 cri.go:89] found id: ""
	I0603 12:09:19.753567   73662 logs.go:276] 0 containers: []
	W0603 12:09:19.753575   73662 logs.go:278] No container was found matching "coredns"
	I0603 12:09:19.753581   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 12:09:19.753627   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 12:09:19.790421   73662 cri.go:89] found id: ""
	I0603 12:09:19.790454   73662 logs.go:276] 0 containers: []
	W0603 12:09:19.790466   73662 logs.go:278] No container was found matching "kube-scheduler"
	I0603 12:09:19.790474   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 12:09:19.790532   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 12:09:19.828908   73662 cri.go:89] found id: ""
	I0603 12:09:19.828932   73662 logs.go:276] 0 containers: []
	W0603 12:09:19.828940   73662 logs.go:278] No container was found matching "kube-proxy"
	I0603 12:09:19.828946   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 12:09:19.829007   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 12:09:19.864576   73662 cri.go:89] found id: ""
	I0603 12:09:19.864609   73662 logs.go:276] 0 containers: []
	W0603 12:09:19.864618   73662 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 12:09:19.864624   73662 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 12:09:19.864679   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 12:09:19.899294   73662 cri.go:89] found id: ""
	I0603 12:09:19.899317   73662 logs.go:276] 0 containers: []
	W0603 12:09:19.899324   73662 logs.go:278] No container was found matching "kindnet"
	I0603 12:09:19.899330   73662 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 12:09:19.899397   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 12:09:19.933855   73662 cri.go:89] found id: ""
	I0603 12:09:19.933883   73662 logs.go:276] 0 containers: []
	W0603 12:09:19.933894   73662 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 12:09:19.933905   73662 logs.go:123] Gathering logs for container status ...
	I0603 12:09:19.933920   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 12:09:19.972676   73662 logs.go:123] Gathering logs for kubelet ...
	I0603 12:09:19.972703   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 12:09:20.025882   73662 logs.go:123] Gathering logs for dmesg ...
	I0603 12:09:20.025913   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 12:09:20.040706   73662 logs.go:123] Gathering logs for describe nodes ...
	I0603 12:09:20.040733   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0603 12:09:17.080807   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:09:19.581996   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:09:20.612561   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:09:23.112691   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:09:20.667356   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:09:23.167076   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	W0603 12:09:20.115483   73662 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 12:09:20.115506   73662 logs.go:123] Gathering logs for CRI-O ...
	I0603 12:09:20.115521   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 12:09:22.692138   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:09:22.706079   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 12:09:22.706155   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 12:09:22.742755   73662 cri.go:89] found id: ""
	I0603 12:09:22.742776   73662 logs.go:276] 0 containers: []
	W0603 12:09:22.742784   73662 logs.go:278] No container was found matching "kube-apiserver"
	I0603 12:09:22.742789   73662 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 12:09:22.742845   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 12:09:22.779522   73662 cri.go:89] found id: ""
	I0603 12:09:22.779549   73662 logs.go:276] 0 containers: []
	W0603 12:09:22.779557   73662 logs.go:278] No container was found matching "etcd"
	I0603 12:09:22.779563   73662 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 12:09:22.779615   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 12:09:22.813864   73662 cri.go:89] found id: ""
	I0603 12:09:22.813892   73662 logs.go:276] 0 containers: []
	W0603 12:09:22.813902   73662 logs.go:278] No container was found matching "coredns"
	I0603 12:09:22.813909   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 12:09:22.813967   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 12:09:22.848111   73662 cri.go:89] found id: ""
	I0603 12:09:22.848138   73662 logs.go:276] 0 containers: []
	W0603 12:09:22.848149   73662 logs.go:278] No container was found matching "kube-scheduler"
	I0603 12:09:22.848157   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 12:09:22.848213   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 12:09:22.899733   73662 cri.go:89] found id: ""
	I0603 12:09:22.899765   73662 logs.go:276] 0 containers: []
	W0603 12:09:22.899775   73662 logs.go:278] No container was found matching "kube-proxy"
	I0603 12:09:22.899781   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 12:09:22.899846   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 12:09:22.941237   73662 cri.go:89] found id: ""
	I0603 12:09:22.941266   73662 logs.go:276] 0 containers: []
	W0603 12:09:22.941276   73662 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 12:09:22.941282   73662 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 12:09:22.941330   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 12:09:22.981500   73662 cri.go:89] found id: ""
	I0603 12:09:22.981523   73662 logs.go:276] 0 containers: []
	W0603 12:09:22.981531   73662 logs.go:278] No container was found matching "kindnet"
	I0603 12:09:22.981536   73662 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 12:09:22.981580   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 12:09:23.016893   73662 cri.go:89] found id: ""
	I0603 12:09:23.016921   73662 logs.go:276] 0 containers: []
	W0603 12:09:23.016933   73662 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 12:09:23.016943   73662 logs.go:123] Gathering logs for container status ...
	I0603 12:09:23.016958   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 12:09:23.056019   73662 logs.go:123] Gathering logs for kubelet ...
	I0603 12:09:23.056052   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 12:09:23.112565   73662 logs.go:123] Gathering logs for dmesg ...
	I0603 12:09:23.112594   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 12:09:23.127475   73662 logs.go:123] Gathering logs for describe nodes ...
	I0603 12:09:23.127504   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 12:09:23.204939   73662 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 12:09:23.204959   73662 logs.go:123] Gathering logs for CRI-O ...
	I0603 12:09:23.204971   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 12:09:21.584829   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:09:24.081361   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:09:25.112860   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:09:27.113465   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:09:29.114788   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:09:25.167597   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:09:27.666395   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:09:29.668658   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:09:25.781506   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:09:25.794896   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 12:09:25.794971   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 12:09:25.831669   73662 cri.go:89] found id: ""
	I0603 12:09:25.831699   73662 logs.go:276] 0 containers: []
	W0603 12:09:25.831710   73662 logs.go:278] No container was found matching "kube-apiserver"
	I0603 12:09:25.831718   73662 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 12:09:25.831775   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 12:09:25.865198   73662 cri.go:89] found id: ""
	I0603 12:09:25.865224   73662 logs.go:276] 0 containers: []
	W0603 12:09:25.865233   73662 logs.go:278] No container was found matching "etcd"
	I0603 12:09:25.865241   73662 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 12:09:25.865296   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 12:09:25.900280   73662 cri.go:89] found id: ""
	I0603 12:09:25.900316   73662 logs.go:276] 0 containers: []
	W0603 12:09:25.900339   73662 logs.go:278] No container was found matching "coredns"
	I0603 12:09:25.900347   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 12:09:25.900409   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 12:09:25.934727   73662 cri.go:89] found id: ""
	I0603 12:09:25.934759   73662 logs.go:276] 0 containers: []
	W0603 12:09:25.934770   73662 logs.go:278] No container was found matching "kube-scheduler"
	I0603 12:09:25.934778   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 12:09:25.934837   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 12:09:25.970760   73662 cri.go:89] found id: ""
	I0603 12:09:25.970785   73662 logs.go:276] 0 containers: []
	W0603 12:09:25.970795   73662 logs.go:278] No container was found matching "kube-proxy"
	I0603 12:09:25.970800   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 12:09:25.970846   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 12:09:26.005580   73662 cri.go:89] found id: ""
	I0603 12:09:26.005608   73662 logs.go:276] 0 containers: []
	W0603 12:09:26.005617   73662 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 12:09:26.005622   73662 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 12:09:26.005670   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 12:09:26.042168   73662 cri.go:89] found id: ""
	I0603 12:09:26.042192   73662 logs.go:276] 0 containers: []
	W0603 12:09:26.042200   73662 logs.go:278] No container was found matching "kindnet"
	I0603 12:09:26.042206   73662 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 12:09:26.042256   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 12:09:26.081180   73662 cri.go:89] found id: ""
	I0603 12:09:26.081211   73662 logs.go:276] 0 containers: []
	W0603 12:09:26.081226   73662 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 12:09:26.081237   73662 logs.go:123] Gathering logs for describe nodes ...
	I0603 12:09:26.081252   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 12:09:26.156298   73662 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 12:09:26.156320   73662 logs.go:123] Gathering logs for CRI-O ...
	I0603 12:09:26.156333   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 12:09:26.241945   73662 logs.go:123] Gathering logs for container status ...
	I0603 12:09:26.241976   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 12:09:26.282363   73662 logs.go:123] Gathering logs for kubelet ...
	I0603 12:09:26.282391   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 12:09:26.336717   73662 logs.go:123] Gathering logs for dmesg ...
	I0603 12:09:26.336747   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 12:09:28.851601   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:09:28.865866   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 12:09:28.865930   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 12:09:28.901850   73662 cri.go:89] found id: ""
	I0603 12:09:28.901877   73662 logs.go:276] 0 containers: []
	W0603 12:09:28.901884   73662 logs.go:278] No container was found matching "kube-apiserver"
	I0603 12:09:28.901890   73662 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 12:09:28.901953   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 12:09:28.939384   73662 cri.go:89] found id: ""
	I0603 12:09:28.939414   73662 logs.go:276] 0 containers: []
	W0603 12:09:28.939431   73662 logs.go:278] No container was found matching "etcd"
	I0603 12:09:28.939438   73662 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 12:09:28.939501   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 12:09:28.974836   73662 cri.go:89] found id: ""
	I0603 12:09:28.974859   73662 logs.go:276] 0 containers: []
	W0603 12:09:28.974866   73662 logs.go:278] No container was found matching "coredns"
	I0603 12:09:28.974872   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 12:09:28.974929   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 12:09:29.020057   73662 cri.go:89] found id: ""
	I0603 12:09:29.020082   73662 logs.go:276] 0 containers: []
	W0603 12:09:29.020090   73662 logs.go:278] No container was found matching "kube-scheduler"
	I0603 12:09:29.020095   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 12:09:29.020154   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 12:09:29.065836   73662 cri.go:89] found id: ""
	I0603 12:09:29.065868   73662 logs.go:276] 0 containers: []
	W0603 12:09:29.065880   73662 logs.go:278] No container was found matching "kube-proxy"
	I0603 12:09:29.065887   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 12:09:29.065948   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 12:09:29.103326   73662 cri.go:89] found id: ""
	I0603 12:09:29.103352   73662 logs.go:276] 0 containers: []
	W0603 12:09:29.103362   73662 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 12:09:29.103369   73662 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 12:09:29.103432   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 12:09:29.141516   73662 cri.go:89] found id: ""
	I0603 12:09:29.141543   73662 logs.go:276] 0 containers: []
	W0603 12:09:29.141554   73662 logs.go:278] No container was found matching "kindnet"
	I0603 12:09:29.141561   73662 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 12:09:29.141615   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 12:09:29.177881   73662 cri.go:89] found id: ""
	I0603 12:09:29.177906   73662 logs.go:276] 0 containers: []
	W0603 12:09:29.177916   73662 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 12:09:29.177923   73662 logs.go:123] Gathering logs for kubelet ...
	I0603 12:09:29.177934   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 12:09:29.231307   73662 logs.go:123] Gathering logs for dmesg ...
	I0603 12:09:29.231338   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 12:09:29.248629   73662 logs.go:123] Gathering logs for describe nodes ...
	I0603 12:09:29.248676   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 12:09:29.348230   73662 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 12:09:29.348255   73662 logs.go:123] Gathering logs for CRI-O ...
	I0603 12:09:29.348272   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 12:09:29.433016   73662 logs.go:123] Gathering logs for container status ...
	I0603 12:09:29.433049   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 12:09:26.082319   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:09:28.581095   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:09:31.615220   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:09:34.112437   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:09:32.166628   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:09:34.167092   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:09:31.973677   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:09:31.988457   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 12:09:31.988518   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 12:09:32.028424   73662 cri.go:89] found id: ""
	I0603 12:09:32.028450   73662 logs.go:276] 0 containers: []
	W0603 12:09:32.028458   73662 logs.go:278] No container was found matching "kube-apiserver"
	I0603 12:09:32.028464   73662 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 12:09:32.028518   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 12:09:32.069388   73662 cri.go:89] found id: ""
	I0603 12:09:32.069413   73662 logs.go:276] 0 containers: []
	W0603 12:09:32.069421   73662 logs.go:278] No container was found matching "etcd"
	I0603 12:09:32.069427   73662 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 12:09:32.069480   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 12:09:32.106557   73662 cri.go:89] found id: ""
	I0603 12:09:32.106590   73662 logs.go:276] 0 containers: []
	W0603 12:09:32.106601   73662 logs.go:278] No container was found matching "coredns"
	I0603 12:09:32.106608   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 12:09:32.106677   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 12:09:32.142460   73662 cri.go:89] found id: ""
	I0603 12:09:32.142488   73662 logs.go:276] 0 containers: []
	W0603 12:09:32.142499   73662 logs.go:278] No container was found matching "kube-scheduler"
	I0603 12:09:32.142507   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 12:09:32.142560   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 12:09:32.177513   73662 cri.go:89] found id: ""
	I0603 12:09:32.177540   73662 logs.go:276] 0 containers: []
	W0603 12:09:32.177553   73662 logs.go:278] No container was found matching "kube-proxy"
	I0603 12:09:32.177559   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 12:09:32.177620   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 12:09:32.212011   73662 cri.go:89] found id: ""
	I0603 12:09:32.212038   73662 logs.go:276] 0 containers: []
	W0603 12:09:32.212048   73662 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 12:09:32.212055   73662 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 12:09:32.212121   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 12:09:32.247928   73662 cri.go:89] found id: ""
	I0603 12:09:32.247953   73662 logs.go:276] 0 containers: []
	W0603 12:09:32.247960   73662 logs.go:278] No container was found matching "kindnet"
	I0603 12:09:32.247965   73662 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 12:09:32.248020   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 12:09:32.287818   73662 cri.go:89] found id: ""
	I0603 12:09:32.287845   73662 logs.go:276] 0 containers: []
	W0603 12:09:32.287852   73662 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 12:09:32.287859   73662 logs.go:123] Gathering logs for kubelet ...
	I0603 12:09:32.287874   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 12:09:32.340406   73662 logs.go:123] Gathering logs for dmesg ...
	I0603 12:09:32.340439   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 12:09:32.355148   73662 logs.go:123] Gathering logs for describe nodes ...
	I0603 12:09:32.355178   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 12:09:32.429270   73662 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 12:09:32.429299   73662 logs.go:123] Gathering logs for CRI-O ...
	I0603 12:09:32.429314   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 12:09:32.505607   73662 logs.go:123] Gathering logs for container status ...
	I0603 12:09:32.505635   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 12:09:35.044751   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:09:35.067197   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 12:09:35.067273   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 12:09:30.581123   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:09:32.581201   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:09:34.581895   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:09:36.612660   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:09:38.614151   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:09:36.666568   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:09:38.666678   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:09:35.130828   73662 cri.go:89] found id: ""
	I0603 12:09:35.130853   73662 logs.go:276] 0 containers: []
	W0603 12:09:35.130911   73662 logs.go:278] No container was found matching "kube-apiserver"
	I0603 12:09:35.130929   73662 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 12:09:35.130987   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 12:09:35.168321   73662 cri.go:89] found id: ""
	I0603 12:09:35.168348   73662 logs.go:276] 0 containers: []
	W0603 12:09:35.168355   73662 logs.go:278] No container was found matching "etcd"
	I0603 12:09:35.168360   73662 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 12:09:35.168403   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 12:09:35.200918   73662 cri.go:89] found id: ""
	I0603 12:09:35.200942   73662 logs.go:276] 0 containers: []
	W0603 12:09:35.200952   73662 logs.go:278] No container was found matching "coredns"
	I0603 12:09:35.200960   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 12:09:35.201020   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 12:09:35.235667   73662 cri.go:89] found id: ""
	I0603 12:09:35.235694   73662 logs.go:276] 0 containers: []
	W0603 12:09:35.235705   73662 logs.go:278] No container was found matching "kube-scheduler"
	I0603 12:09:35.235713   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 12:09:35.235773   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 12:09:35.269565   73662 cri.go:89] found id: ""
	I0603 12:09:35.269600   73662 logs.go:276] 0 containers: []
	W0603 12:09:35.269608   73662 logs.go:278] No container was found matching "kube-proxy"
	I0603 12:09:35.269613   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 12:09:35.269670   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 12:09:35.304452   73662 cri.go:89] found id: ""
	I0603 12:09:35.304480   73662 logs.go:276] 0 containers: []
	W0603 12:09:35.304488   73662 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 12:09:35.304495   73662 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 12:09:35.304560   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 12:09:35.337756   73662 cri.go:89] found id: ""
	I0603 12:09:35.337782   73662 logs.go:276] 0 containers: []
	W0603 12:09:35.337789   73662 logs.go:278] No container was found matching "kindnet"
	I0603 12:09:35.337794   73662 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 12:09:35.337844   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 12:09:35.374738   73662 cri.go:89] found id: ""
	I0603 12:09:35.374762   73662 logs.go:276] 0 containers: []
	W0603 12:09:35.374773   73662 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 12:09:35.374804   73662 logs.go:123] Gathering logs for dmesg ...
	I0603 12:09:35.374831   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 12:09:35.389588   73662 logs.go:123] Gathering logs for describe nodes ...
	I0603 12:09:35.389618   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 12:09:35.470162   73662 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 12:09:35.470184   73662 logs.go:123] Gathering logs for CRI-O ...
	I0603 12:09:35.470200   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 12:09:35.554518   73662 logs.go:123] Gathering logs for container status ...
	I0603 12:09:35.554560   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 12:09:35.594727   73662 logs.go:123] Gathering logs for kubelet ...
	I0603 12:09:35.594763   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 12:09:38.154151   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:09:38.169099   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 12:09:38.169165   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 12:09:38.205410   73662 cri.go:89] found id: ""
	I0603 12:09:38.205437   73662 logs.go:276] 0 containers: []
	W0603 12:09:38.205444   73662 logs.go:278] No container was found matching "kube-apiserver"
	I0603 12:09:38.205450   73662 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 12:09:38.205502   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 12:09:38.238950   73662 cri.go:89] found id: ""
	I0603 12:09:38.238979   73662 logs.go:276] 0 containers: []
	W0603 12:09:38.238990   73662 logs.go:278] No container was found matching "etcd"
	I0603 12:09:38.238997   73662 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 12:09:38.239072   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 12:09:38.272117   73662 cri.go:89] found id: ""
	I0603 12:09:38.272146   73662 logs.go:276] 0 containers: []
	W0603 12:09:38.272157   73662 logs.go:278] No container was found matching "coredns"
	I0603 12:09:38.272164   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 12:09:38.272232   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 12:09:38.306778   73662 cri.go:89] found id: ""
	I0603 12:09:38.306815   73662 logs.go:276] 0 containers: []
	W0603 12:09:38.306826   73662 logs.go:278] No container was found matching "kube-scheduler"
	I0603 12:09:38.306834   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 12:09:38.306894   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 12:09:38.344438   73662 cri.go:89] found id: ""
	I0603 12:09:38.344464   73662 logs.go:276] 0 containers: []
	W0603 12:09:38.344471   73662 logs.go:278] No container was found matching "kube-proxy"
	I0603 12:09:38.344476   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 12:09:38.344528   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 12:09:38.384347   73662 cri.go:89] found id: ""
	I0603 12:09:38.384373   73662 logs.go:276] 0 containers: []
	W0603 12:09:38.384384   73662 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 12:09:38.384392   73662 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 12:09:38.384440   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 12:09:38.424500   73662 cri.go:89] found id: ""
	I0603 12:09:38.424526   73662 logs.go:276] 0 containers: []
	W0603 12:09:38.424536   73662 logs.go:278] No container was found matching "kindnet"
	I0603 12:09:38.424543   73662 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 12:09:38.424601   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 12:09:38.459649   73662 cri.go:89] found id: ""
	I0603 12:09:38.459678   73662 logs.go:276] 0 containers: []
	W0603 12:09:38.459685   73662 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 12:09:38.459693   73662 logs.go:123] Gathering logs for kubelet ...
	I0603 12:09:38.459705   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 12:09:38.511193   73662 logs.go:123] Gathering logs for dmesg ...
	I0603 12:09:38.511226   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 12:09:38.525367   73662 logs.go:123] Gathering logs for describe nodes ...
	I0603 12:09:38.525394   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 12:09:38.596534   73662 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 12:09:38.596555   73662 logs.go:123] Gathering logs for CRI-O ...
	I0603 12:09:38.596568   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 12:09:38.675204   73662 logs.go:123] Gathering logs for container status ...
	I0603 12:09:38.675233   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 12:09:37.082229   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:09:39.083400   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:09:41.113187   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:09:43.612824   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:09:41.165676   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:09:43.166246   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:09:41.217825   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:09:41.232019   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 12:09:41.232077   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 12:09:41.267920   73662 cri.go:89] found id: ""
	I0603 12:09:41.267944   73662 logs.go:276] 0 containers: []
	W0603 12:09:41.267951   73662 logs.go:278] No container was found matching "kube-apiserver"
	I0603 12:09:41.267956   73662 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 12:09:41.268002   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 12:09:41.306326   73662 cri.go:89] found id: ""
	I0603 12:09:41.306353   73662 logs.go:276] 0 containers: []
	W0603 12:09:41.306364   73662 logs.go:278] No container was found matching "etcd"
	I0603 12:09:41.306371   73662 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 12:09:41.306439   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 12:09:41.339922   73662 cri.go:89] found id: ""
	I0603 12:09:41.339950   73662 logs.go:276] 0 containers: []
	W0603 12:09:41.339960   73662 logs.go:278] No container was found matching "coredns"
	I0603 12:09:41.339968   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 12:09:41.340030   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 12:09:41.374394   73662 cri.go:89] found id: ""
	I0603 12:09:41.374424   73662 logs.go:276] 0 containers: []
	W0603 12:09:41.374432   73662 logs.go:278] No container was found matching "kube-scheduler"
	I0603 12:09:41.374438   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 12:09:41.374490   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 12:09:41.412699   73662 cri.go:89] found id: ""
	I0603 12:09:41.412725   73662 logs.go:276] 0 containers: []
	W0603 12:09:41.412733   73662 logs.go:278] No container was found matching "kube-proxy"
	I0603 12:09:41.412738   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 12:09:41.412792   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 12:09:41.455158   73662 cri.go:89] found id: ""
	I0603 12:09:41.455186   73662 logs.go:276] 0 containers: []
	W0603 12:09:41.455195   73662 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 12:09:41.455201   73662 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 12:09:41.455250   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 12:09:41.493873   73662 cri.go:89] found id: ""
	I0603 12:09:41.493899   73662 logs.go:276] 0 containers: []
	W0603 12:09:41.493907   73662 logs.go:278] No container was found matching "kindnet"
	I0603 12:09:41.493912   73662 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 12:09:41.493961   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 12:09:41.533128   73662 cri.go:89] found id: ""
	I0603 12:09:41.533157   73662 logs.go:276] 0 containers: []
	W0603 12:09:41.533168   73662 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 12:09:41.533179   73662 logs.go:123] Gathering logs for container status ...
	I0603 12:09:41.533192   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 12:09:41.569504   73662 logs.go:123] Gathering logs for kubelet ...
	I0603 12:09:41.569532   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 12:09:41.623155   73662 logs.go:123] Gathering logs for dmesg ...
	I0603 12:09:41.623182   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 12:09:41.637320   73662 logs.go:123] Gathering logs for describe nodes ...
	I0603 12:09:41.637344   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 12:09:41.717063   73662 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 12:09:41.717080   73662 logs.go:123] Gathering logs for CRI-O ...
	I0603 12:09:41.717091   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 12:09:44.301694   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:09:44.317073   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 12:09:44.317128   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 12:09:44.359170   73662 cri.go:89] found id: ""
	I0603 12:09:44.359220   73662 logs.go:276] 0 containers: []
	W0603 12:09:44.359230   73662 logs.go:278] No container was found matching "kube-apiserver"
	I0603 12:09:44.359239   73662 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 12:09:44.359294   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 12:09:44.399820   73662 cri.go:89] found id: ""
	I0603 12:09:44.399844   73662 logs.go:276] 0 containers: []
	W0603 12:09:44.399854   73662 logs.go:278] No container was found matching "etcd"
	I0603 12:09:44.399862   73662 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 12:09:44.399928   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 12:09:44.439447   73662 cri.go:89] found id: ""
	I0603 12:09:44.439474   73662 logs.go:276] 0 containers: []
	W0603 12:09:44.439481   73662 logs.go:278] No container was found matching "coredns"
	I0603 12:09:44.439487   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 12:09:44.439540   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 12:09:44.475880   73662 cri.go:89] found id: ""
	I0603 12:09:44.475906   73662 logs.go:276] 0 containers: []
	W0603 12:09:44.475917   73662 logs.go:278] No container was found matching "kube-scheduler"
	I0603 12:09:44.475922   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 12:09:44.475980   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 12:09:44.511294   73662 cri.go:89] found id: ""
	I0603 12:09:44.511330   73662 logs.go:276] 0 containers: []
	W0603 12:09:44.511341   73662 logs.go:278] No container was found matching "kube-proxy"
	I0603 12:09:44.511348   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 12:09:44.511401   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 12:09:44.547348   73662 cri.go:89] found id: ""
	I0603 12:09:44.547373   73662 logs.go:276] 0 containers: []
	W0603 12:09:44.547380   73662 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 12:09:44.547385   73662 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 12:09:44.547430   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 12:09:44.586452   73662 cri.go:89] found id: ""
	I0603 12:09:44.586476   73662 logs.go:276] 0 containers: []
	W0603 12:09:44.586483   73662 logs.go:278] No container was found matching "kindnet"
	I0603 12:09:44.586488   73662 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 12:09:44.586543   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 12:09:44.625804   73662 cri.go:89] found id: ""
	I0603 12:09:44.625824   73662 logs.go:276] 0 containers: []
	W0603 12:09:44.625831   73662 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 12:09:44.625839   73662 logs.go:123] Gathering logs for kubelet ...
	I0603 12:09:44.625848   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 12:09:44.680963   73662 logs.go:123] Gathering logs for dmesg ...
	I0603 12:09:44.680996   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 12:09:44.695920   73662 logs.go:123] Gathering logs for describe nodes ...
	I0603 12:09:44.695945   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 12:09:44.766704   73662 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 12:09:44.766735   73662 logs.go:123] Gathering logs for CRI-O ...
	I0603 12:09:44.766750   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 12:09:44.849452   73662 logs.go:123] Gathering logs for container status ...
	I0603 12:09:44.849484   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 12:09:41.581194   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:09:44.081266   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:09:45.613719   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:09:47.613834   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:09:45.166682   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:09:47.667916   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:09:47.391851   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:09:47.406886   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 12:09:47.406941   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 12:09:47.441654   73662 cri.go:89] found id: ""
	I0603 12:09:47.441676   73662 logs.go:276] 0 containers: []
	W0603 12:09:47.441686   73662 logs.go:278] No container was found matching "kube-apiserver"
	I0603 12:09:47.441692   73662 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 12:09:47.441739   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 12:09:47.475605   73662 cri.go:89] found id: ""
	I0603 12:09:47.475634   73662 logs.go:276] 0 containers: []
	W0603 12:09:47.475644   73662 logs.go:278] No container was found matching "etcd"
	I0603 12:09:47.475651   73662 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 12:09:47.475707   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 12:09:47.511558   73662 cri.go:89] found id: ""
	I0603 12:09:47.511582   73662 logs.go:276] 0 containers: []
	W0603 12:09:47.511590   73662 logs.go:278] No container was found matching "coredns"
	I0603 12:09:47.511595   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 12:09:47.511653   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 12:09:47.545327   73662 cri.go:89] found id: ""
	I0603 12:09:47.545359   73662 logs.go:276] 0 containers: []
	W0603 12:09:47.545370   73662 logs.go:278] No container was found matching "kube-scheduler"
	I0603 12:09:47.545378   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 12:09:47.545442   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 12:09:47.581846   73662 cri.go:89] found id: ""
	I0603 12:09:47.581875   73662 logs.go:276] 0 containers: []
	W0603 12:09:47.581884   73662 logs.go:278] No container was found matching "kube-proxy"
	I0603 12:09:47.581892   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 12:09:47.581953   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 12:09:47.618872   73662 cri.go:89] found id: ""
	I0603 12:09:47.618893   73662 logs.go:276] 0 containers: []
	W0603 12:09:47.618901   73662 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 12:09:47.618908   73662 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 12:09:47.618964   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 12:09:47.663659   73662 cri.go:89] found id: ""
	I0603 12:09:47.663689   73662 logs.go:276] 0 containers: []
	W0603 12:09:47.663700   73662 logs.go:278] No container was found matching "kindnet"
	I0603 12:09:47.663708   73662 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 12:09:47.663766   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 12:09:47.697189   73662 cri.go:89] found id: ""
	I0603 12:09:47.697217   73662 logs.go:276] 0 containers: []
	W0603 12:09:47.697228   73662 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 12:09:47.697238   73662 logs.go:123] Gathering logs for dmesg ...
	I0603 12:09:47.697254   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 12:09:47.711787   73662 logs.go:123] Gathering logs for describe nodes ...
	I0603 12:09:47.711812   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 12:09:47.784073   73662 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 12:09:47.784095   73662 logs.go:123] Gathering logs for CRI-O ...
	I0603 12:09:47.784106   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 12:09:47.866792   73662 logs.go:123] Gathering logs for container status ...
	I0603 12:09:47.866824   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 12:09:47.907650   73662 logs.go:123] Gathering logs for kubelet ...
	I0603 12:09:47.907701   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 12:09:46.081705   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:09:48.581286   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:09:50.115365   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:09:52.612108   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:09:50.166286   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:09:52.166751   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:09:54.171218   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:09:50.458815   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:09:50.473498   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 12:09:50.473561   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 12:09:50.514762   73662 cri.go:89] found id: ""
	I0603 12:09:50.514788   73662 logs.go:276] 0 containers: []
	W0603 12:09:50.514796   73662 logs.go:278] No container was found matching "kube-apiserver"
	I0603 12:09:50.514801   73662 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 12:09:50.514877   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 12:09:50.548449   73662 cri.go:89] found id: ""
	I0603 12:09:50.548481   73662 logs.go:276] 0 containers: []
	W0603 12:09:50.548492   73662 logs.go:278] No container was found matching "etcd"
	I0603 12:09:50.548498   73662 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 12:09:50.548560   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 12:09:50.584636   73662 cri.go:89] found id: ""
	I0603 12:09:50.584658   73662 logs.go:276] 0 containers: []
	W0603 12:09:50.584665   73662 logs.go:278] No container was found matching "coredns"
	I0603 12:09:50.584671   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 12:09:50.584718   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 12:09:50.619934   73662 cri.go:89] found id: ""
	I0603 12:09:50.619964   73662 logs.go:276] 0 containers: []
	W0603 12:09:50.619974   73662 logs.go:278] No container was found matching "kube-scheduler"
	I0603 12:09:50.619983   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 12:09:50.620041   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 12:09:50.656062   73662 cri.go:89] found id: ""
	I0603 12:09:50.656093   73662 logs.go:276] 0 containers: []
	W0603 12:09:50.656105   73662 logs.go:278] No container was found matching "kube-proxy"
	I0603 12:09:50.656117   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 12:09:50.656166   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 12:09:50.693539   73662 cri.go:89] found id: ""
	I0603 12:09:50.693566   73662 logs.go:276] 0 containers: []
	W0603 12:09:50.693573   73662 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 12:09:50.693582   73662 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 12:09:50.693637   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 12:09:50.727999   73662 cri.go:89] found id: ""
	I0603 12:09:50.728029   73662 logs.go:276] 0 containers: []
	W0603 12:09:50.728049   73662 logs.go:278] No container was found matching "kindnet"
	I0603 12:09:50.728057   73662 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 12:09:50.728118   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 12:09:50.767370   73662 cri.go:89] found id: ""
	I0603 12:09:50.767417   73662 logs.go:276] 0 containers: []
	W0603 12:09:50.767434   73662 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 12:09:50.767444   73662 logs.go:123] Gathering logs for describe nodes ...
	I0603 12:09:50.767460   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 12:09:50.844078   73662 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 12:09:50.844098   73662 logs.go:123] Gathering logs for CRI-O ...
	I0603 12:09:50.844111   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 12:09:50.922082   73662 logs.go:123] Gathering logs for container status ...
	I0603 12:09:50.922119   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 12:09:50.964841   73662 logs.go:123] Gathering logs for kubelet ...
	I0603 12:09:50.964878   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 12:09:51.016783   73662 logs.go:123] Gathering logs for dmesg ...
	I0603 12:09:51.016823   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 12:09:53.533274   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:09:53.547218   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 12:09:53.547272   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 12:09:53.584537   73662 cri.go:89] found id: ""
	I0603 12:09:53.584561   73662 logs.go:276] 0 containers: []
	W0603 12:09:53.584571   73662 logs.go:278] No container was found matching "kube-apiserver"
	I0603 12:09:53.584578   73662 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 12:09:53.584634   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 12:09:53.618652   73662 cri.go:89] found id: ""
	I0603 12:09:53.618678   73662 logs.go:276] 0 containers: []
	W0603 12:09:53.618688   73662 logs.go:278] No container was found matching "etcd"
	I0603 12:09:53.618695   73662 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 12:09:53.618749   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 12:09:53.654094   73662 cri.go:89] found id: ""
	I0603 12:09:53.654120   73662 logs.go:276] 0 containers: []
	W0603 12:09:53.654127   73662 logs.go:278] No container was found matching "coredns"
	I0603 12:09:53.654140   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 12:09:53.654196   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 12:09:53.691381   73662 cri.go:89] found id: ""
	I0603 12:09:53.691409   73662 logs.go:276] 0 containers: []
	W0603 12:09:53.691420   73662 logs.go:278] No container was found matching "kube-scheduler"
	I0603 12:09:53.691428   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 12:09:53.691493   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 12:09:53.728294   73662 cri.go:89] found id: ""
	I0603 12:09:53.728331   73662 logs.go:276] 0 containers: []
	W0603 12:09:53.728341   73662 logs.go:278] No container was found matching "kube-proxy"
	I0603 12:09:53.728349   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 12:09:53.728426   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 12:09:53.764973   73662 cri.go:89] found id: ""
	I0603 12:09:53.765005   73662 logs.go:276] 0 containers: []
	W0603 12:09:53.765016   73662 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 12:09:53.765023   73662 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 12:09:53.765087   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 12:09:53.803694   73662 cri.go:89] found id: ""
	I0603 12:09:53.803717   73662 logs.go:276] 0 containers: []
	W0603 12:09:53.803724   73662 logs.go:278] No container was found matching "kindnet"
	I0603 12:09:53.803729   73662 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 12:09:53.803776   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 12:09:53.841924   73662 cri.go:89] found id: ""
	I0603 12:09:53.841949   73662 logs.go:276] 0 containers: []
	W0603 12:09:53.841957   73662 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 12:09:53.841964   73662 logs.go:123] Gathering logs for kubelet ...
	I0603 12:09:53.841982   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 12:09:53.895701   73662 logs.go:123] Gathering logs for dmesg ...
	I0603 12:09:53.895738   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 12:09:53.909498   73662 logs.go:123] Gathering logs for describe nodes ...
	I0603 12:09:53.909524   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 12:09:53.985195   73662 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 12:09:53.985218   73662 logs.go:123] Gathering logs for CRI-O ...
	I0603 12:09:53.985234   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 12:09:54.065799   73662 logs.go:123] Gathering logs for container status ...
	I0603 12:09:54.065831   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 12:09:50.581958   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:09:53.081289   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:09:55.081589   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:09:54.612358   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:09:56.616081   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:09:59.112698   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:09:56.667243   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:09:59.167672   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:09:56.606887   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:09:56.621376   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 12:09:56.621437   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 12:09:56.660334   73662 cri.go:89] found id: ""
	I0603 12:09:56.660358   73662 logs.go:276] 0 containers: []
	W0603 12:09:56.660368   73662 logs.go:278] No container was found matching "kube-apiserver"
	I0603 12:09:56.660375   73662 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 12:09:56.660434   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 12:09:56.695706   73662 cri.go:89] found id: ""
	I0603 12:09:56.695734   73662 logs.go:276] 0 containers: []
	W0603 12:09:56.695742   73662 logs.go:278] No container was found matching "etcd"
	I0603 12:09:56.695747   73662 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 12:09:56.695791   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 12:09:56.730634   73662 cri.go:89] found id: ""
	I0603 12:09:56.730656   73662 logs.go:276] 0 containers: []
	W0603 12:09:56.730664   73662 logs.go:278] No container was found matching "coredns"
	I0603 12:09:56.730670   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 12:09:56.730715   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 12:09:56.765374   73662 cri.go:89] found id: ""
	I0603 12:09:56.765407   73662 logs.go:276] 0 containers: []
	W0603 12:09:56.765414   73662 logs.go:278] No container was found matching "kube-scheduler"
	I0603 12:09:56.765420   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 12:09:56.765467   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 12:09:56.801230   73662 cri.go:89] found id: ""
	I0603 12:09:56.801254   73662 logs.go:276] 0 containers: []
	W0603 12:09:56.801262   73662 logs.go:278] No container was found matching "kube-proxy"
	I0603 12:09:56.801267   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 12:09:56.801335   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 12:09:56.835988   73662 cri.go:89] found id: ""
	I0603 12:09:56.836015   73662 logs.go:276] 0 containers: []
	W0603 12:09:56.836026   73662 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 12:09:56.836034   73662 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 12:09:56.836093   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 12:09:56.870099   73662 cri.go:89] found id: ""
	I0603 12:09:56.870124   73662 logs.go:276] 0 containers: []
	W0603 12:09:56.870131   73662 logs.go:278] No container was found matching "kindnet"
	I0603 12:09:56.870136   73662 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 12:09:56.870183   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 12:09:56.904755   73662 cri.go:89] found id: ""
	I0603 12:09:56.904780   73662 logs.go:276] 0 containers: []
	W0603 12:09:56.904790   73662 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 12:09:56.904801   73662 logs.go:123] Gathering logs for kubelet ...
	I0603 12:09:56.904812   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 12:09:56.956824   73662 logs.go:123] Gathering logs for dmesg ...
	I0603 12:09:56.956854   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 12:09:56.971675   73662 logs.go:123] Gathering logs for describe nodes ...
	I0603 12:09:56.971702   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 12:09:57.042337   73662 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 12:09:57.042359   73662 logs.go:123] Gathering logs for CRI-O ...
	I0603 12:09:57.042375   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 12:09:57.129450   73662 logs.go:123] Gathering logs for container status ...
	I0603 12:09:57.129480   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 12:09:59.669256   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:09:59.683392   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 12:09:59.683452   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 12:09:59.718035   73662 cri.go:89] found id: ""
	I0603 12:09:59.718062   73662 logs.go:276] 0 containers: []
	W0603 12:09:59.718073   73662 logs.go:278] No container was found matching "kube-apiserver"
	I0603 12:09:59.718081   73662 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 12:09:59.718141   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 12:09:59.756638   73662 cri.go:89] found id: ""
	I0603 12:09:59.756666   73662 logs.go:276] 0 containers: []
	W0603 12:09:59.756678   73662 logs.go:278] No container was found matching "etcd"
	I0603 12:09:59.756686   73662 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 12:09:59.756751   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 12:09:59.794710   73662 cri.go:89] found id: ""
	I0603 12:09:59.794753   73662 logs.go:276] 0 containers: []
	W0603 12:09:59.794764   73662 logs.go:278] No container was found matching "coredns"
	I0603 12:09:59.794771   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 12:09:59.794835   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 12:09:59.829717   73662 cri.go:89] found id: ""
	I0603 12:09:59.829745   73662 logs.go:276] 0 containers: []
	W0603 12:09:59.829755   73662 logs.go:278] No container was found matching "kube-scheduler"
	I0603 12:09:59.829763   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 12:09:59.829819   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 12:09:59.863959   73662 cri.go:89] found id: ""
	I0603 12:09:59.863996   73662 logs.go:276] 0 containers: []
	W0603 12:09:59.864005   73662 logs.go:278] No container was found matching "kube-proxy"
	I0603 12:09:59.864010   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 12:09:59.864070   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 12:09:59.900553   73662 cri.go:89] found id: ""
	I0603 12:09:59.900577   73662 logs.go:276] 0 containers: []
	W0603 12:09:59.900585   73662 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 12:09:59.900590   73662 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 12:09:59.900664   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 12:09:59.935702   73662 cri.go:89] found id: ""
	I0603 12:09:59.935727   73662 logs.go:276] 0 containers: []
	W0603 12:09:59.935735   73662 logs.go:278] No container was found matching "kindnet"
	I0603 12:09:59.935741   73662 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 12:09:59.935800   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 12:09:59.971017   73662 cri.go:89] found id: ""
	I0603 12:09:59.971064   73662 logs.go:276] 0 containers: []
	W0603 12:09:59.971076   73662 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 12:09:59.971086   73662 logs.go:123] Gathering logs for dmesg ...
	I0603 12:09:59.971102   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 12:09:59.985406   73662 logs.go:123] Gathering logs for describe nodes ...
	I0603 12:09:59.985431   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 12:10:00.064341   73662 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 12:10:00.064372   73662 logs.go:123] Gathering logs for CRI-O ...
	I0603 12:10:00.064388   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 12:09:57.081724   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:09:59.581454   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:10:01.113236   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:10:03.116142   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:10:01.667557   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:10:04.166825   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:10:00.152803   73662 logs.go:123] Gathering logs for container status ...
	I0603 12:10:00.152850   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 12:10:00.198301   73662 logs.go:123] Gathering logs for kubelet ...
	I0603 12:10:00.198341   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 12:10:02.749662   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:10:02.762938   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 12:10:02.762999   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 12:10:02.800269   73662 cri.go:89] found id: ""
	I0603 12:10:02.800296   73662 logs.go:276] 0 containers: []
	W0603 12:10:02.800305   73662 logs.go:278] No container was found matching "kube-apiserver"
	I0603 12:10:02.800311   73662 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 12:10:02.800373   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 12:10:02.841326   73662 cri.go:89] found id: ""
	I0603 12:10:02.841350   73662 logs.go:276] 0 containers: []
	W0603 12:10:02.841357   73662 logs.go:278] No container was found matching "etcd"
	I0603 12:10:02.841363   73662 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 12:10:02.841409   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 12:10:02.879309   73662 cri.go:89] found id: ""
	I0603 12:10:02.879343   73662 logs.go:276] 0 containers: []
	W0603 12:10:02.879353   73662 logs.go:278] No container was found matching "coredns"
	I0603 12:10:02.879361   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 12:10:02.879423   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 12:10:02.919666   73662 cri.go:89] found id: ""
	I0603 12:10:02.919695   73662 logs.go:276] 0 containers: []
	W0603 12:10:02.919707   73662 logs.go:278] No container was found matching "kube-scheduler"
	I0603 12:10:02.919714   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 12:10:02.919761   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 12:10:02.954790   73662 cri.go:89] found id: ""
	I0603 12:10:02.954814   73662 logs.go:276] 0 containers: []
	W0603 12:10:02.954822   73662 logs.go:278] No container was found matching "kube-proxy"
	I0603 12:10:02.954827   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 12:10:02.954884   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 12:10:02.994472   73662 cri.go:89] found id: ""
	I0603 12:10:02.994515   73662 logs.go:276] 0 containers: []
	W0603 12:10:02.994527   73662 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 12:10:02.994535   73662 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 12:10:02.994598   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 12:10:03.034482   73662 cri.go:89] found id: ""
	I0603 12:10:03.034509   73662 logs.go:276] 0 containers: []
	W0603 12:10:03.034520   73662 logs.go:278] No container was found matching "kindnet"
	I0603 12:10:03.034526   73662 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 12:10:03.034591   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 12:10:03.072971   73662 cri.go:89] found id: ""
	I0603 12:10:03.073002   73662 logs.go:276] 0 containers: []
	W0603 12:10:03.073011   73662 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 12:10:03.073025   73662 logs.go:123] Gathering logs for dmesg ...
	I0603 12:10:03.073043   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 12:10:03.088043   73662 logs.go:123] Gathering logs for describe nodes ...
	I0603 12:10:03.088074   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 12:10:03.186799   73662 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 12:10:03.186829   73662 logs.go:123] Gathering logs for CRI-O ...
	I0603 12:10:03.186842   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 12:10:03.266685   73662 logs.go:123] Gathering logs for container status ...
	I0603 12:10:03.266724   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 12:10:03.317400   73662 logs.go:123] Gathering logs for kubelet ...
	I0603 12:10:03.317433   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 12:10:01.582398   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:10:04.082658   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:10:05.613678   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:10:08.112518   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:10:06.167099   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:10:08.167502   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:10:05.870335   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:10:05.884377   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 12:10:05.884469   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 12:10:05.924617   73662 cri.go:89] found id: ""
	I0603 12:10:05.924647   73662 logs.go:276] 0 containers: []
	W0603 12:10:05.924659   73662 logs.go:278] No container was found matching "kube-apiserver"
	I0603 12:10:05.924667   73662 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 12:10:05.924724   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 12:10:05.971569   73662 cri.go:89] found id: ""
	I0603 12:10:05.971605   73662 logs.go:276] 0 containers: []
	W0603 12:10:05.971615   73662 logs.go:278] No container was found matching "etcd"
	I0603 12:10:05.971623   73662 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 12:10:05.971683   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 12:10:06.010190   73662 cri.go:89] found id: ""
	I0603 12:10:06.010211   73662 logs.go:276] 0 containers: []
	W0603 12:10:06.010218   73662 logs.go:278] No container was found matching "coredns"
	I0603 12:10:06.010223   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 12:10:06.010270   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 12:10:06.056228   73662 cri.go:89] found id: ""
	I0603 12:10:06.056258   73662 logs.go:276] 0 containers: []
	W0603 12:10:06.056269   73662 logs.go:278] No container was found matching "kube-scheduler"
	I0603 12:10:06.056276   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 12:10:06.056338   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 12:10:06.096139   73662 cri.go:89] found id: ""
	I0603 12:10:06.096171   73662 logs.go:276] 0 containers: []
	W0603 12:10:06.096182   73662 logs.go:278] No container was found matching "kube-proxy"
	I0603 12:10:06.096192   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 12:10:06.096261   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 12:10:06.135290   73662 cri.go:89] found id: ""
	I0603 12:10:06.135327   73662 logs.go:276] 0 containers: []
	W0603 12:10:06.135338   73662 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 12:10:06.135346   73662 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 12:10:06.135412   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 12:10:06.177281   73662 cri.go:89] found id: ""
	I0603 12:10:06.177311   73662 logs.go:276] 0 containers: []
	W0603 12:10:06.177328   73662 logs.go:278] No container was found matching "kindnet"
	I0603 12:10:06.177335   73662 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 12:10:06.177395   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 12:10:06.216791   73662 cri.go:89] found id: ""
	I0603 12:10:06.216823   73662 logs.go:276] 0 containers: []
	W0603 12:10:06.216835   73662 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 12:10:06.216845   73662 logs.go:123] Gathering logs for kubelet ...
	I0603 12:10:06.216874   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 12:10:06.272731   73662 logs.go:123] Gathering logs for dmesg ...
	I0603 12:10:06.272772   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 12:10:06.289080   73662 logs.go:123] Gathering logs for describe nodes ...
	I0603 12:10:06.289118   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 12:10:06.358105   73662 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 12:10:06.358134   73662 logs.go:123] Gathering logs for CRI-O ...
	I0603 12:10:06.358148   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 12:10:06.433071   73662 logs.go:123] Gathering logs for container status ...
	I0603 12:10:06.433107   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 12:10:08.974934   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:10:08.988808   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 12:10:08.988883   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 12:10:09.023595   73662 cri.go:89] found id: ""
	I0603 12:10:09.023620   73662 logs.go:276] 0 containers: []
	W0603 12:10:09.023627   73662 logs.go:278] No container was found matching "kube-apiserver"
	I0603 12:10:09.023633   73662 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 12:10:09.023683   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 12:10:09.060962   73662 cri.go:89] found id: ""
	I0603 12:10:09.060990   73662 logs.go:276] 0 containers: []
	W0603 12:10:09.061000   73662 logs.go:278] No container was found matching "etcd"
	I0603 12:10:09.061006   73662 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 12:10:09.061080   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 12:10:09.099923   73662 cri.go:89] found id: ""
	I0603 12:10:09.099952   73662 logs.go:276] 0 containers: []
	W0603 12:10:09.099961   73662 logs.go:278] No container was found matching "coredns"
	I0603 12:10:09.099970   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 12:10:09.100030   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 12:10:09.138521   73662 cri.go:89] found id: ""
	I0603 12:10:09.138547   73662 logs.go:276] 0 containers: []
	W0603 12:10:09.138555   73662 logs.go:278] No container was found matching "kube-scheduler"
	I0603 12:10:09.138561   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 12:10:09.138614   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 12:10:09.178492   73662 cri.go:89] found id: ""
	I0603 12:10:09.178519   73662 logs.go:276] 0 containers: []
	W0603 12:10:09.178529   73662 logs.go:278] No container was found matching "kube-proxy"
	I0603 12:10:09.178537   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 12:10:09.178603   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 12:10:09.215779   73662 cri.go:89] found id: ""
	I0603 12:10:09.215812   73662 logs.go:276] 0 containers: []
	W0603 12:10:09.215819   73662 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 12:10:09.215832   73662 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 12:10:09.215894   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 12:10:09.250800   73662 cri.go:89] found id: ""
	I0603 12:10:09.250835   73662 logs.go:276] 0 containers: []
	W0603 12:10:09.250845   73662 logs.go:278] No container was found matching "kindnet"
	I0603 12:10:09.250852   73662 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 12:10:09.250913   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 12:10:09.286742   73662 cri.go:89] found id: ""
	I0603 12:10:09.286773   73662 logs.go:276] 0 containers: []
	W0603 12:10:09.286784   73662 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 12:10:09.286794   73662 logs.go:123] Gathering logs for kubelet ...
	I0603 12:10:09.286808   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 12:10:09.341156   73662 logs.go:123] Gathering logs for dmesg ...
	I0603 12:10:09.341189   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 12:10:09.356237   73662 logs.go:123] Gathering logs for describe nodes ...
	I0603 12:10:09.356273   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 12:10:09.436633   73662 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 12:10:09.436654   73662 logs.go:123] Gathering logs for CRI-O ...
	I0603 12:10:09.436666   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 12:10:09.519296   73662 logs.go:123] Gathering logs for container status ...
	I0603 12:10:09.519336   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 12:10:06.581573   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:10:09.081354   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:10:10.113408   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:10:12.113838   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:10:10.168197   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:10:12.667631   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:10:14.667886   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:10:12.090458   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:10:12.105250   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 12:10:12.105324   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 12:10:12.143229   73662 cri.go:89] found id: ""
	I0603 12:10:12.143257   73662 logs.go:276] 0 containers: []
	W0603 12:10:12.143268   73662 logs.go:278] No container was found matching "kube-apiserver"
	I0603 12:10:12.143276   73662 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 12:10:12.143345   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 12:10:12.183319   73662 cri.go:89] found id: ""
	I0603 12:10:12.183343   73662 logs.go:276] 0 containers: []
	W0603 12:10:12.183353   73662 logs.go:278] No container was found matching "etcd"
	I0603 12:10:12.183361   73662 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 12:10:12.183421   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 12:10:12.221154   73662 cri.go:89] found id: ""
	I0603 12:10:12.221178   73662 logs.go:276] 0 containers: []
	W0603 12:10:12.221186   73662 logs.go:278] No container was found matching "coredns"
	I0603 12:10:12.221191   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 12:10:12.221252   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 12:10:12.256387   73662 cri.go:89] found id: ""
	I0603 12:10:12.256417   73662 logs.go:276] 0 containers: []
	W0603 12:10:12.256428   73662 logs.go:278] No container was found matching "kube-scheduler"
	I0603 12:10:12.256436   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 12:10:12.256492   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 12:10:12.298777   73662 cri.go:89] found id: ""
	I0603 12:10:12.298807   73662 logs.go:276] 0 containers: []
	W0603 12:10:12.298817   73662 logs.go:278] No container was found matching "kube-proxy"
	I0603 12:10:12.298825   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 12:10:12.298883   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 12:10:12.337031   73662 cri.go:89] found id: ""
	I0603 12:10:12.337060   73662 logs.go:276] 0 containers: []
	W0603 12:10:12.337070   73662 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 12:10:12.337077   73662 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 12:10:12.337136   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 12:10:12.373729   73662 cri.go:89] found id: ""
	I0603 12:10:12.373759   73662 logs.go:276] 0 containers: []
	W0603 12:10:12.373766   73662 logs.go:278] No container was found matching "kindnet"
	I0603 12:10:12.373772   73662 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 12:10:12.373823   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 12:10:12.408295   73662 cri.go:89] found id: ""
	I0603 12:10:12.408337   73662 logs.go:276] 0 containers: []
	W0603 12:10:12.408346   73662 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 12:10:12.408357   73662 logs.go:123] Gathering logs for kubelet ...
	I0603 12:10:12.408371   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 12:10:12.458814   73662 logs.go:123] Gathering logs for dmesg ...
	I0603 12:10:12.458844   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 12:10:12.471995   73662 logs.go:123] Gathering logs for describe nodes ...
	I0603 12:10:12.472020   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 12:10:12.542342   73662 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 12:10:12.542364   73662 logs.go:123] Gathering logs for CRI-O ...
	I0603 12:10:12.542379   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 12:10:12.620295   73662 logs.go:123] Gathering logs for container status ...
	I0603 12:10:12.620328   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
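The cycle above is minikube's diagnostic sweep when no control-plane containers are running: it probes CRI-O for each expected component, then falls back to the kubelet journal, dmesg, "kubectl describe nodes" and the CRI-O journal. A minimal bash sketch of the same sweep, using only the commands visible in the log (the loop structure is illustrative, not minikube's actual implementation):

	# Sketch of the diagnostic sweep seen above; commands taken from the log,
	# the loop itself is illustrative.
	sudo pgrep -xnf 'kube-apiserver.*minikube.*'
	for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
	            kube-controller-manager kindnet kubernetes-dashboard; do
	  ids=$(sudo crictl ps -a --quiet --name="$name")
	  [ -z "$ids" ] && echo "No container was found matching \"$name\""
	done
	sudo journalctl -u kubelet -n 400
	sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
	sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes \
	  --kubeconfig=/var/lib/minikube/kubeconfig
	sudo journalctl -u crio -n 400
	sudo "$(which crictl || echo crictl)" ps -a || sudo docker ps -a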
	I0603 12:10:11.081820   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:10:13.580873   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:10:14.613837   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:10:16.613987   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:10:18.614774   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:10:17.166332   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:10:19.167726   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
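The interleaved pod_ready lines come from the other test clusters (processes 73179, 73294 and 72964) polling their metrics-server pod until its Ready condition turns True. An equivalent manual check, assuming kubectl access to the same cluster (the pod name is taken from the log; the jsonpath query and wait invocation are illustrative):

	# Read the Ready condition that pod_ready.go keeps polling.
	kubectl -n kube-system get pod metrics-server-569cc877fc-8jrnd \
	  -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'
	# Or block until the pod becomes Ready (fails after the timeout).
	kubectl -n kube-system wait pod/metrics-server-569cc877fc-8jrnd \
	  --for=condition=Ready --timeout=120s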
	I0603 12:10:15.162145   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:10:15.178057   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 12:10:15.178110   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 12:10:15.217189   73662 cri.go:89] found id: ""
	I0603 12:10:15.217218   73662 logs.go:276] 0 containers: []
	W0603 12:10:15.217228   73662 logs.go:278] No container was found matching "kube-apiserver"
	I0603 12:10:15.217235   73662 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 12:10:15.217291   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 12:10:15.265380   73662 cri.go:89] found id: ""
	I0603 12:10:15.265419   73662 logs.go:276] 0 containers: []
	W0603 12:10:15.265430   73662 logs.go:278] No container was found matching "etcd"
	I0603 12:10:15.265438   73662 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 12:10:15.265500   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 12:10:15.310671   73662 cri.go:89] found id: ""
	I0603 12:10:15.310736   73662 logs.go:276] 0 containers: []
	W0603 12:10:15.310772   73662 logs.go:278] No container was found matching "coredns"
	I0603 12:10:15.310787   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 12:10:15.310884   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 12:10:15.377888   73662 cri.go:89] found id: ""
	I0603 12:10:15.377914   73662 logs.go:276] 0 containers: []
	W0603 12:10:15.377921   73662 logs.go:278] No container was found matching "kube-scheduler"
	I0603 12:10:15.377928   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 12:10:15.377972   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 12:10:15.415472   73662 cri.go:89] found id: ""
	I0603 12:10:15.415502   73662 logs.go:276] 0 containers: []
	W0603 12:10:15.415510   73662 logs.go:278] No container was found matching "kube-proxy"
	I0603 12:10:15.415516   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 12:10:15.415563   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 12:10:15.450721   73662 cri.go:89] found id: ""
	I0603 12:10:15.450748   73662 logs.go:276] 0 containers: []
	W0603 12:10:15.450755   73662 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 12:10:15.450760   73662 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 12:10:15.450814   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 12:10:15.484329   73662 cri.go:89] found id: ""
	I0603 12:10:15.484356   73662 logs.go:276] 0 containers: []
	W0603 12:10:15.484363   73662 logs.go:278] No container was found matching "kindnet"
	I0603 12:10:15.484368   73662 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 12:10:15.484426   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 12:10:15.516976   73662 cri.go:89] found id: ""
	I0603 12:10:15.517005   73662 logs.go:276] 0 containers: []
	W0603 12:10:15.517015   73662 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 12:10:15.517025   73662 logs.go:123] Gathering logs for kubelet ...
	I0603 12:10:15.517038   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 12:10:15.569023   73662 logs.go:123] Gathering logs for dmesg ...
	I0603 12:10:15.569053   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 12:10:15.583710   73662 logs.go:123] Gathering logs for describe nodes ...
	I0603 12:10:15.583737   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 12:10:15.656403   73662 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 12:10:15.656426   73662 logs.go:123] Gathering logs for CRI-O ...
	I0603 12:10:15.656438   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 12:10:15.745585   73662 logs.go:123] Gathering logs for container status ...
	I0603 12:10:15.745619   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 12:10:18.290608   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:10:18.305165   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 12:10:18.305238   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 12:10:18.341905   73662 cri.go:89] found id: ""
	I0603 12:10:18.341929   73662 logs.go:276] 0 containers: []
	W0603 12:10:18.341939   73662 logs.go:278] No container was found matching "kube-apiserver"
	I0603 12:10:18.341945   73662 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 12:10:18.342001   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 12:10:18.378313   73662 cri.go:89] found id: ""
	I0603 12:10:18.378341   73662 logs.go:276] 0 containers: []
	W0603 12:10:18.378348   73662 logs.go:278] No container was found matching "etcd"
	I0603 12:10:18.378354   73662 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 12:10:18.378401   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 12:10:18.413366   73662 cri.go:89] found id: ""
	I0603 12:10:18.413414   73662 logs.go:276] 0 containers: []
	W0603 12:10:18.413424   73662 logs.go:278] No container was found matching "coredns"
	I0603 12:10:18.413432   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 12:10:18.413492   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 12:10:18.448694   73662 cri.go:89] found id: ""
	I0603 12:10:18.448727   73662 logs.go:276] 0 containers: []
	W0603 12:10:18.448738   73662 logs.go:278] No container was found matching "kube-scheduler"
	I0603 12:10:18.448745   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 12:10:18.448802   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 12:10:18.482640   73662 cri.go:89] found id: ""
	I0603 12:10:18.482678   73662 logs.go:276] 0 containers: []
	W0603 12:10:18.482689   73662 logs.go:278] No container was found matching "kube-proxy"
	I0603 12:10:18.482696   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 12:10:18.482757   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 12:10:18.520929   73662 cri.go:89] found id: ""
	I0603 12:10:18.520962   73662 logs.go:276] 0 containers: []
	W0603 12:10:18.520975   73662 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 12:10:18.520983   73662 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 12:10:18.521045   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 12:10:18.558678   73662 cri.go:89] found id: ""
	I0603 12:10:18.558712   73662 logs.go:276] 0 containers: []
	W0603 12:10:18.558723   73662 logs.go:278] No container was found matching "kindnet"
	I0603 12:10:18.558730   73662 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 12:10:18.558788   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 12:10:18.597574   73662 cri.go:89] found id: ""
	I0603 12:10:18.597599   73662 logs.go:276] 0 containers: []
	W0603 12:10:18.597609   73662 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 12:10:18.597619   73662 logs.go:123] Gathering logs for kubelet ...
	I0603 12:10:18.597633   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 12:10:18.652569   73662 logs.go:123] Gathering logs for dmesg ...
	I0603 12:10:18.652596   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 12:10:18.667829   73662 logs.go:123] Gathering logs for describe nodes ...
	I0603 12:10:18.667861   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 12:10:18.740869   73662 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 12:10:18.740888   73662 logs.go:123] Gathering logs for CRI-O ...
	I0603 12:10:18.740899   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 12:10:18.822108   73662 logs.go:123] Gathering logs for container status ...
	I0603 12:10:18.822143   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
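Every describe-nodes attempt in these cycles fails with "The connection to the server localhost:8443 was refused", i.e. nothing is serving on the apiserver port. Two quick checks one could run on the node to confirm that, assuming standard tooling on the guest (these are not part of the test itself):

	# Is an apiserver container present at all? (the log shows none)
	sudo crictl ps -a --name kube-apiserver
	# Is anything listening on the apiserver port 8443?
	sudo ss -ltnp | grep 8443 || echo "nothing listening on 8443"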
	I0603 12:10:15.581618   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:10:18.081181   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:10:21.113841   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:10:23.612530   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:10:21.667682   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:10:24.167351   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:10:21.363741   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:10:21.377941   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 12:10:21.378011   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 12:10:21.414406   73662 cri.go:89] found id: ""
	I0603 12:10:21.414434   73662 logs.go:276] 0 containers: []
	W0603 12:10:21.414446   73662 logs.go:278] No container was found matching "kube-apiserver"
	I0603 12:10:21.414454   73662 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 12:10:21.414513   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 12:10:21.449028   73662 cri.go:89] found id: ""
	I0603 12:10:21.449065   73662 logs.go:276] 0 containers: []
	W0603 12:10:21.449074   73662 logs.go:278] No container was found matching "etcd"
	I0603 12:10:21.449080   73662 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 12:10:21.449126   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 12:10:21.483017   73662 cri.go:89] found id: ""
	I0603 12:10:21.483052   73662 logs.go:276] 0 containers: []
	W0603 12:10:21.483064   73662 logs.go:278] No container was found matching "coredns"
	I0603 12:10:21.483071   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 12:10:21.483120   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 12:10:21.519195   73662 cri.go:89] found id: ""
	I0603 12:10:21.519227   73662 logs.go:276] 0 containers: []
	W0603 12:10:21.519237   73662 logs.go:278] No container was found matching "kube-scheduler"
	I0603 12:10:21.519245   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 12:10:21.519304   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 12:10:21.556228   73662 cri.go:89] found id: ""
	I0603 12:10:21.556257   73662 logs.go:276] 0 containers: []
	W0603 12:10:21.556270   73662 logs.go:278] No container was found matching "kube-proxy"
	I0603 12:10:21.556276   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 12:10:21.556337   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 12:10:21.594772   73662 cri.go:89] found id: ""
	I0603 12:10:21.594798   73662 logs.go:276] 0 containers: []
	W0603 12:10:21.594808   73662 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 12:10:21.594817   73662 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 12:10:21.594875   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 12:10:21.629808   73662 cri.go:89] found id: ""
	I0603 12:10:21.629830   73662 logs.go:276] 0 containers: []
	W0603 12:10:21.629837   73662 logs.go:278] No container was found matching "kindnet"
	I0603 12:10:21.629843   73662 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 12:10:21.629891   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 12:10:21.675237   73662 cri.go:89] found id: ""
	I0603 12:10:21.675263   73662 logs.go:276] 0 containers: []
	W0603 12:10:21.675272   73662 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 12:10:21.675282   73662 logs.go:123] Gathering logs for kubelet ...
	I0603 12:10:21.675295   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 12:10:21.730416   73662 logs.go:123] Gathering logs for dmesg ...
	I0603 12:10:21.730445   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 12:10:21.744442   73662 logs.go:123] Gathering logs for describe nodes ...
	I0603 12:10:21.744467   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 12:10:21.826282   73662 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 12:10:21.826308   73662 logs.go:123] Gathering logs for CRI-O ...
	I0603 12:10:21.826324   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 12:10:21.911387   73662 logs.go:123] Gathering logs for container status ...
	I0603 12:10:21.911422   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 12:10:24.454912   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:10:24.469992   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 12:10:24.470069   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 12:10:24.509462   73662 cri.go:89] found id: ""
	I0603 12:10:24.509501   73662 logs.go:276] 0 containers: []
	W0603 12:10:24.509516   73662 logs.go:278] No container was found matching "kube-apiserver"
	I0603 12:10:24.509523   73662 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 12:10:24.509588   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 12:10:24.543878   73662 cri.go:89] found id: ""
	I0603 12:10:24.543902   73662 logs.go:276] 0 containers: []
	W0603 12:10:24.543910   73662 logs.go:278] No container was found matching "etcd"
	I0603 12:10:24.543916   73662 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 12:10:24.543969   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 12:10:24.582712   73662 cri.go:89] found id: ""
	I0603 12:10:24.582741   73662 logs.go:276] 0 containers: []
	W0603 12:10:24.582752   73662 logs.go:278] No container was found matching "coredns"
	I0603 12:10:24.582759   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 12:10:24.582824   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 12:10:24.620533   73662 cri.go:89] found id: ""
	I0603 12:10:24.620560   73662 logs.go:276] 0 containers: []
	W0603 12:10:24.620571   73662 logs.go:278] No container was found matching "kube-scheduler"
	I0603 12:10:24.620577   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 12:10:24.620629   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 12:10:24.658750   73662 cri.go:89] found id: ""
	I0603 12:10:24.658774   73662 logs.go:276] 0 containers: []
	W0603 12:10:24.658781   73662 logs.go:278] No container was found matching "kube-proxy"
	I0603 12:10:24.658787   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 12:10:24.658830   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 12:10:24.697870   73662 cri.go:89] found id: ""
	I0603 12:10:24.697898   73662 logs.go:276] 0 containers: []
	W0603 12:10:24.697914   73662 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 12:10:24.697922   73662 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 12:10:24.697982   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 12:10:24.733557   73662 cri.go:89] found id: ""
	I0603 12:10:24.733583   73662 logs.go:276] 0 containers: []
	W0603 12:10:24.733593   73662 logs.go:278] No container was found matching "kindnet"
	I0603 12:10:24.733601   73662 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 12:10:24.733658   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 12:10:24.767874   73662 cri.go:89] found id: ""
	I0603 12:10:24.767901   73662 logs.go:276] 0 containers: []
	W0603 12:10:24.767910   73662 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 12:10:24.767920   73662 logs.go:123] Gathering logs for kubelet ...
	I0603 12:10:24.767934   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 12:10:24.821155   73662 logs.go:123] Gathering logs for dmesg ...
	I0603 12:10:24.821188   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 12:10:24.835506   73662 logs.go:123] Gathering logs for describe nodes ...
	I0603 12:10:24.835533   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 12:10:24.911295   73662 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 12:10:24.911317   73662 logs.go:123] Gathering logs for CRI-O ...
	I0603 12:10:24.911331   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 12:10:24.998831   73662 logs.go:123] Gathering logs for container status ...
	I0603 12:10:24.998870   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 12:10:20.581174   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:10:22.582071   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:10:25.081112   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:10:26.113580   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:10:28.113842   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:10:26.167517   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:10:28.666601   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:10:27.547553   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:10:27.562219   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 12:10:27.562283   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 12:10:27.604320   73662 cri.go:89] found id: ""
	I0603 12:10:27.604354   73662 logs.go:276] 0 containers: []
	W0603 12:10:27.604362   73662 logs.go:278] No container was found matching "kube-apiserver"
	I0603 12:10:27.604368   73662 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 12:10:27.604431   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 12:10:27.645069   73662 cri.go:89] found id: ""
	I0603 12:10:27.645093   73662 logs.go:276] 0 containers: []
	W0603 12:10:27.645100   73662 logs.go:278] No container was found matching "etcd"
	I0603 12:10:27.645105   73662 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 12:10:27.645208   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 12:10:27.682961   73662 cri.go:89] found id: ""
	I0603 12:10:27.682984   73662 logs.go:276] 0 containers: []
	W0603 12:10:27.682992   73662 logs.go:278] No container was found matching "coredns"
	I0603 12:10:27.682997   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 12:10:27.683065   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 12:10:27.716279   73662 cri.go:89] found id: ""
	I0603 12:10:27.716310   73662 logs.go:276] 0 containers: []
	W0603 12:10:27.716321   73662 logs.go:278] No container was found matching "kube-scheduler"
	I0603 12:10:27.716330   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 12:10:27.716405   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 12:10:27.758347   73662 cri.go:89] found id: ""
	I0603 12:10:27.758380   73662 logs.go:276] 0 containers: []
	W0603 12:10:27.758390   73662 logs.go:278] No container was found matching "kube-proxy"
	I0603 12:10:27.758397   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 12:10:27.758446   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 12:10:27.798212   73662 cri.go:89] found id: ""
	I0603 12:10:27.798240   73662 logs.go:276] 0 containers: []
	W0603 12:10:27.798249   73662 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 12:10:27.798258   73662 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 12:10:27.798318   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 12:10:27.831688   73662 cri.go:89] found id: ""
	I0603 12:10:27.831709   73662 logs.go:276] 0 containers: []
	W0603 12:10:27.831716   73662 logs.go:278] No container was found matching "kindnet"
	I0603 12:10:27.831722   73662 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 12:10:27.831776   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 12:10:27.864395   73662 cri.go:89] found id: ""
	I0603 12:10:27.864423   73662 logs.go:276] 0 containers: []
	W0603 12:10:27.864433   73662 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 12:10:27.864444   73662 logs.go:123] Gathering logs for kubelet ...
	I0603 12:10:27.864463   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 12:10:27.915528   73662 logs.go:123] Gathering logs for dmesg ...
	I0603 12:10:27.915556   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 12:10:27.929783   73662 logs.go:123] Gathering logs for describe nodes ...
	I0603 12:10:27.929806   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 12:10:28.005168   73662 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 12:10:28.005245   73662 logs.go:123] Gathering logs for CRI-O ...
	I0603 12:10:28.005267   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 12:10:28.090748   73662 logs.go:123] Gathering logs for container status ...
	I0603 12:10:28.090779   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 12:10:27.582855   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:10:30.081021   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:10:30.615472   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:10:33.112833   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:10:30.668051   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:10:33.167211   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:10:30.631148   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:10:30.645518   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 12:10:30.645590   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 12:10:30.684016   73662 cri.go:89] found id: ""
	I0603 12:10:30.684044   73662 logs.go:276] 0 containers: []
	W0603 12:10:30.684054   73662 logs.go:278] No container was found matching "kube-apiserver"
	I0603 12:10:30.684062   73662 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 12:10:30.684129   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 12:10:30.720344   73662 cri.go:89] found id: ""
	I0603 12:10:30.720371   73662 logs.go:276] 0 containers: []
	W0603 12:10:30.720379   73662 logs.go:278] No container was found matching "etcd"
	I0603 12:10:30.720384   73662 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 12:10:30.720437   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 12:10:30.754123   73662 cri.go:89] found id: ""
	I0603 12:10:30.754156   73662 logs.go:276] 0 containers: []
	W0603 12:10:30.754167   73662 logs.go:278] No container was found matching "coredns"
	I0603 12:10:30.754175   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 12:10:30.754228   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 12:10:30.788398   73662 cri.go:89] found id: ""
	I0603 12:10:30.788425   73662 logs.go:276] 0 containers: []
	W0603 12:10:30.788436   73662 logs.go:278] No container was found matching "kube-scheduler"
	I0603 12:10:30.788455   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 12:10:30.788523   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 12:10:30.826122   73662 cri.go:89] found id: ""
	I0603 12:10:30.826149   73662 logs.go:276] 0 containers: []
	W0603 12:10:30.826157   73662 logs.go:278] No container was found matching "kube-proxy"
	I0603 12:10:30.826163   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 12:10:30.826221   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 12:10:30.862886   73662 cri.go:89] found id: ""
	I0603 12:10:30.862917   73662 logs.go:276] 0 containers: []
	W0603 12:10:30.862930   73662 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 12:10:30.862938   73662 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 12:10:30.862995   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 12:10:30.897587   73662 cri.go:89] found id: ""
	I0603 12:10:30.897616   73662 logs.go:276] 0 containers: []
	W0603 12:10:30.897628   73662 logs.go:278] No container was found matching "kindnet"
	I0603 12:10:30.897635   73662 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 12:10:30.897692   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 12:10:30.936463   73662 cri.go:89] found id: ""
	I0603 12:10:30.936493   73662 logs.go:276] 0 containers: []
	W0603 12:10:30.936510   73662 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 12:10:30.936521   73662 logs.go:123] Gathering logs for kubelet ...
	I0603 12:10:30.936535   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 12:10:30.987304   73662 logs.go:123] Gathering logs for dmesg ...
	I0603 12:10:30.987341   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 12:10:31.001608   73662 logs.go:123] Gathering logs for describe nodes ...
	I0603 12:10:31.001636   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 12:10:31.079366   73662 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 12:10:31.079385   73662 logs.go:123] Gathering logs for CRI-O ...
	I0603 12:10:31.079398   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 12:10:31.158814   73662 logs.go:123] Gathering logs for container status ...
	I0603 12:10:31.158851   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 12:10:33.699524   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:10:33.713194   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 12:10:33.713256   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 12:10:33.747030   73662 cri.go:89] found id: ""
	I0603 12:10:33.747073   73662 logs.go:276] 0 containers: []
	W0603 12:10:33.747084   73662 logs.go:278] No container was found matching "kube-apiserver"
	I0603 12:10:33.747092   73662 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 12:10:33.747151   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 12:10:33.781873   73662 cri.go:89] found id: ""
	I0603 12:10:33.781909   73662 logs.go:276] 0 containers: []
	W0603 12:10:33.781920   73662 logs.go:278] No container was found matching "etcd"
	I0603 12:10:33.781927   73662 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 12:10:33.781992   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 12:10:33.828337   73662 cri.go:89] found id: ""
	I0603 12:10:33.828366   73662 logs.go:276] 0 containers: []
	W0603 12:10:33.828374   73662 logs.go:278] No container was found matching "coredns"
	I0603 12:10:33.828380   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 12:10:33.828433   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 12:10:33.868051   73662 cri.go:89] found id: ""
	I0603 12:10:33.868089   73662 logs.go:276] 0 containers: []
	W0603 12:10:33.868101   73662 logs.go:278] No container was found matching "kube-scheduler"
	I0603 12:10:33.868109   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 12:10:33.868168   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 12:10:33.913693   73662 cri.go:89] found id: ""
	I0603 12:10:33.913725   73662 logs.go:276] 0 containers: []
	W0603 12:10:33.913736   73662 logs.go:278] No container was found matching "kube-proxy"
	I0603 12:10:33.913743   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 12:10:33.913824   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 12:10:33.952082   73662 cri.go:89] found id: ""
	I0603 12:10:33.952111   73662 logs.go:276] 0 containers: []
	W0603 12:10:33.952122   73662 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 12:10:33.952129   73662 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 12:10:33.952183   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 12:10:33.994921   73662 cri.go:89] found id: ""
	I0603 12:10:33.994944   73662 logs.go:276] 0 containers: []
	W0603 12:10:33.994952   73662 logs.go:278] No container was found matching "kindnet"
	I0603 12:10:33.994959   73662 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 12:10:33.995008   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 12:10:34.033315   73662 cri.go:89] found id: ""
	I0603 12:10:34.033346   73662 logs.go:276] 0 containers: []
	W0603 12:10:34.033357   73662 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 12:10:34.033368   73662 logs.go:123] Gathering logs for kubelet ...
	I0603 12:10:34.033381   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 12:10:34.087719   73662 logs.go:123] Gathering logs for dmesg ...
	I0603 12:10:34.087746   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 12:10:34.101109   73662 logs.go:123] Gathering logs for describe nodes ...
	I0603 12:10:34.101134   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 12:10:34.180100   73662 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 12:10:34.180121   73662 logs.go:123] Gathering logs for CRI-O ...
	I0603 12:10:34.180135   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 12:10:34.255838   73662 logs.go:123] Gathering logs for container status ...
	I0603 12:10:34.255870   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 12:10:32.583080   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:10:35.081454   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:10:35.113238   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:10:37.611978   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:10:35.668549   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:10:38.166687   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:10:36.800845   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:10:36.815775   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 12:10:36.815834   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 12:10:36.849970   73662 cri.go:89] found id: ""
	I0603 12:10:36.849999   73662 logs.go:276] 0 containers: []
	W0603 12:10:36.850009   73662 logs.go:278] No container was found matching "kube-apiserver"
	I0603 12:10:36.850015   73662 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 12:10:36.850063   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 12:10:36.886418   73662 cri.go:89] found id: ""
	I0603 12:10:36.886448   73662 logs.go:276] 0 containers: []
	W0603 12:10:36.886456   73662 logs.go:278] No container was found matching "etcd"
	I0603 12:10:36.886461   73662 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 12:10:36.886506   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 12:10:36.919671   73662 cri.go:89] found id: ""
	I0603 12:10:36.919696   73662 logs.go:276] 0 containers: []
	W0603 12:10:36.919703   73662 logs.go:278] No container was found matching "coredns"
	I0603 12:10:36.919710   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 12:10:36.919766   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 12:10:36.954412   73662 cri.go:89] found id: ""
	I0603 12:10:36.954436   73662 logs.go:276] 0 containers: []
	W0603 12:10:36.954446   73662 logs.go:278] No container was found matching "kube-scheduler"
	I0603 12:10:36.954453   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 12:10:36.954513   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 12:10:36.989805   73662 cri.go:89] found id: ""
	I0603 12:10:36.989836   73662 logs.go:276] 0 containers: []
	W0603 12:10:36.989848   73662 logs.go:278] No container was found matching "kube-proxy"
	I0603 12:10:36.989856   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 12:10:36.989930   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 12:10:37.023883   73662 cri.go:89] found id: ""
	I0603 12:10:37.023913   73662 logs.go:276] 0 containers: []
	W0603 12:10:37.023922   73662 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 12:10:37.023930   73662 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 12:10:37.023995   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 12:10:37.058617   73662 cri.go:89] found id: ""
	I0603 12:10:37.058646   73662 logs.go:276] 0 containers: []
	W0603 12:10:37.058654   73662 logs.go:278] No container was found matching "kindnet"
	I0603 12:10:37.058661   73662 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 12:10:37.058719   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 12:10:37.093143   73662 cri.go:89] found id: ""
	I0603 12:10:37.093167   73662 logs.go:276] 0 containers: []
	W0603 12:10:37.093177   73662 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 12:10:37.093192   73662 logs.go:123] Gathering logs for container status ...
	I0603 12:10:37.093208   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 12:10:37.133117   73662 logs.go:123] Gathering logs for kubelet ...
	I0603 12:10:37.133147   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 12:10:37.188143   73662 logs.go:123] Gathering logs for dmesg ...
	I0603 12:10:37.188174   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 12:10:37.202654   73662 logs.go:123] Gathering logs for describe nodes ...
	I0603 12:10:37.202687   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 12:10:37.276401   73662 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 12:10:37.276429   73662 logs.go:123] Gathering logs for CRI-O ...
	I0603 12:10:37.276443   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 12:10:39.855590   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:10:39.870119   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 12:10:39.870189   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 12:10:39.907496   73662 cri.go:89] found id: ""
	I0603 12:10:39.907527   73662 logs.go:276] 0 containers: []
	W0603 12:10:39.907537   73662 logs.go:278] No container was found matching "kube-apiserver"
	I0603 12:10:39.907545   73662 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 12:10:39.907607   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 12:10:39.942745   73662 cri.go:89] found id: ""
	I0603 12:10:39.942774   73662 logs.go:276] 0 containers: []
	W0603 12:10:39.942784   73662 logs.go:278] No container was found matching "etcd"
	I0603 12:10:39.942791   73662 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 12:10:39.942853   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 12:10:39.981620   73662 cri.go:89] found id: ""
	I0603 12:10:39.981649   73662 logs.go:276] 0 containers: []
	W0603 12:10:39.981660   73662 logs.go:278] No container was found matching "coredns"
	I0603 12:10:39.981667   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 12:10:39.981718   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 12:10:40.020121   73662 cri.go:89] found id: ""
	I0603 12:10:40.020155   73662 logs.go:276] 0 containers: []
	W0603 12:10:40.020167   73662 logs.go:278] No container was found matching "kube-scheduler"
	I0603 12:10:40.020175   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 12:10:40.020240   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 12:10:40.059547   73662 cri.go:89] found id: ""
	I0603 12:10:40.059580   73662 logs.go:276] 0 containers: []
	W0603 12:10:40.059591   73662 logs.go:278] No container was found matching "kube-proxy"
	I0603 12:10:40.059598   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 12:10:40.059659   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 12:10:37.082294   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:10:39.581774   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:10:39.614702   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:10:42.112933   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:10:44.113960   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:10:40.167350   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:10:42.667457   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:10:40.097365   73662 cri.go:89] found id: ""
	I0603 12:10:40.097386   73662 logs.go:276] 0 containers: []
	W0603 12:10:40.097393   73662 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 12:10:40.097400   73662 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 12:10:40.097441   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 12:10:40.132635   73662 cri.go:89] found id: ""
	I0603 12:10:40.132657   73662 logs.go:276] 0 containers: []
	W0603 12:10:40.132664   73662 logs.go:278] No container was found matching "kindnet"
	I0603 12:10:40.132670   73662 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 12:10:40.132725   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 12:10:40.165849   73662 cri.go:89] found id: ""
	I0603 12:10:40.165875   73662 logs.go:276] 0 containers: []
	W0603 12:10:40.165885   73662 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 12:10:40.165895   73662 logs.go:123] Gathering logs for kubelet ...
	I0603 12:10:40.165910   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 12:10:40.218842   73662 logs.go:123] Gathering logs for dmesg ...
	I0603 12:10:40.218871   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 12:10:40.232800   73662 logs.go:123] Gathering logs for describe nodes ...
	I0603 12:10:40.232825   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 12:10:40.300026   73662 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 12:10:40.300050   73662 logs.go:123] Gathering logs for CRI-O ...
	I0603 12:10:40.300065   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 12:10:40.376985   73662 logs.go:123] Gathering logs for container status ...
	I0603 12:10:40.377017   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 12:10:42.916093   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:10:42.930099   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 12:10:42.930157   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 12:10:42.965541   73662 cri.go:89] found id: ""
	I0603 12:10:42.965565   73662 logs.go:276] 0 containers: []
	W0603 12:10:42.965575   73662 logs.go:278] No container was found matching "kube-apiserver"
	I0603 12:10:42.965582   73662 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 12:10:42.965639   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 12:10:43.000837   73662 cri.go:89] found id: ""
	I0603 12:10:43.000863   73662 logs.go:276] 0 containers: []
	W0603 12:10:43.000871   73662 logs.go:278] No container was found matching "etcd"
	I0603 12:10:43.000877   73662 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 12:10:43.000930   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 12:10:43.036557   73662 cri.go:89] found id: ""
	I0603 12:10:43.036593   73662 logs.go:276] 0 containers: []
	W0603 12:10:43.036605   73662 logs.go:278] No container was found matching "coredns"
	I0603 12:10:43.036626   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 12:10:43.036695   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 12:10:43.076479   73662 cri.go:89] found id: ""
	I0603 12:10:43.076507   73662 logs.go:276] 0 containers: []
	W0603 12:10:43.076515   73662 logs.go:278] No container was found matching "kube-scheduler"
	I0603 12:10:43.076521   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 12:10:43.076571   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 12:10:43.116301   73662 cri.go:89] found id: ""
	I0603 12:10:43.116328   73662 logs.go:276] 0 containers: []
	W0603 12:10:43.116338   73662 logs.go:278] No container was found matching "kube-proxy"
	I0603 12:10:43.116345   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 12:10:43.116393   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 12:10:43.150538   73662 cri.go:89] found id: ""
	I0603 12:10:43.150576   73662 logs.go:276] 0 containers: []
	W0603 12:10:43.150587   73662 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 12:10:43.150594   73662 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 12:10:43.150662   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 12:10:43.183948   73662 cri.go:89] found id: ""
	I0603 12:10:43.183976   73662 logs.go:276] 0 containers: []
	W0603 12:10:43.183987   73662 logs.go:278] No container was found matching "kindnet"
	I0603 12:10:43.183996   73662 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 12:10:43.184048   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 12:10:43.217610   73662 cri.go:89] found id: ""
	I0603 12:10:43.217636   73662 logs.go:276] 0 containers: []
	W0603 12:10:43.217643   73662 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 12:10:43.217651   73662 logs.go:123] Gathering logs for dmesg ...
	I0603 12:10:43.217669   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 12:10:43.231630   73662 logs.go:123] Gathering logs for describe nodes ...
	I0603 12:10:43.231655   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 12:10:43.298061   73662 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 12:10:43.298079   73662 logs.go:123] Gathering logs for CRI-O ...
	I0603 12:10:43.298092   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 12:10:43.388176   73662 logs.go:123] Gathering logs for container status ...
	I0603 12:10:43.388212   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 12:10:43.426277   73662 logs.go:123] Gathering logs for kubelet ...
	I0603 12:10:43.426303   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 12:10:42.081320   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:10:44.083275   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:10:46.612864   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:10:48.613666   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:10:45.166933   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:10:47.666784   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:10:45.977882   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:10:45.991655   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 12:10:45.991716   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 12:10:46.030455   73662 cri.go:89] found id: ""
	I0603 12:10:46.030483   73662 logs.go:276] 0 containers: []
	W0603 12:10:46.030492   73662 logs.go:278] No container was found matching "kube-apiserver"
	I0603 12:10:46.030497   73662 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 12:10:46.030542   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 12:10:46.065983   73662 cri.go:89] found id: ""
	I0603 12:10:46.066019   73662 logs.go:276] 0 containers: []
	W0603 12:10:46.066028   73662 logs.go:278] No container was found matching "etcd"
	I0603 12:10:46.066037   73662 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 12:10:46.066089   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 12:10:46.102788   73662 cri.go:89] found id: ""
	I0603 12:10:46.102816   73662 logs.go:276] 0 containers: []
	W0603 12:10:46.102824   73662 logs.go:278] No container was found matching "coredns"
	I0603 12:10:46.102830   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 12:10:46.102878   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 12:10:46.141588   73662 cri.go:89] found id: ""
	I0603 12:10:46.141615   73662 logs.go:276] 0 containers: []
	W0603 12:10:46.141625   73662 logs.go:278] No container was found matching "kube-scheduler"
	I0603 12:10:46.141634   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 12:10:46.141686   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 12:10:46.176109   73662 cri.go:89] found id: ""
	I0603 12:10:46.176133   73662 logs.go:276] 0 containers: []
	W0603 12:10:46.176140   73662 logs.go:278] No container was found matching "kube-proxy"
	I0603 12:10:46.176146   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 12:10:46.176199   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 12:10:46.211660   73662 cri.go:89] found id: ""
	I0603 12:10:46.211687   73662 logs.go:276] 0 containers: []
	W0603 12:10:46.211699   73662 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 12:10:46.211706   73662 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 12:10:46.211766   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 12:10:46.247703   73662 cri.go:89] found id: ""
	I0603 12:10:46.247724   73662 logs.go:276] 0 containers: []
	W0603 12:10:46.247731   73662 logs.go:278] No container was found matching "kindnet"
	I0603 12:10:46.247737   73662 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 12:10:46.247780   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 12:10:46.280647   73662 cri.go:89] found id: ""
	I0603 12:10:46.280666   73662 logs.go:276] 0 containers: []
	W0603 12:10:46.280673   73662 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 12:10:46.280681   73662 logs.go:123] Gathering logs for CRI-O ...
	I0603 12:10:46.280692   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 12:10:46.358965   73662 logs.go:123] Gathering logs for container status ...
	I0603 12:10:46.359007   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 12:10:46.402361   73662 logs.go:123] Gathering logs for kubelet ...
	I0603 12:10:46.402393   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 12:10:46.455346   73662 logs.go:123] Gathering logs for dmesg ...
	I0603 12:10:46.455378   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 12:10:46.468953   73662 logs.go:123] Gathering logs for describe nodes ...
	I0603 12:10:46.468979   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 12:10:46.543642   73662 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 12:10:49.044028   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:10:49.059160   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 12:10:49.059237   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 12:10:49.094538   73662 cri.go:89] found id: ""
	I0603 12:10:49.094562   73662 logs.go:276] 0 containers: []
	W0603 12:10:49.094572   73662 logs.go:278] No container was found matching "kube-apiserver"
	I0603 12:10:49.094579   73662 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 12:10:49.094639   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 12:10:49.152691   73662 cri.go:89] found id: ""
	I0603 12:10:49.152718   73662 logs.go:276] 0 containers: []
	W0603 12:10:49.152729   73662 logs.go:278] No container was found matching "etcd"
	I0603 12:10:49.152736   73662 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 12:10:49.152794   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 12:10:49.190598   73662 cri.go:89] found id: ""
	I0603 12:10:49.190624   73662 logs.go:276] 0 containers: []
	W0603 12:10:49.190632   73662 logs.go:278] No container was found matching "coredns"
	I0603 12:10:49.190637   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 12:10:49.190696   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 12:10:49.224713   73662 cri.go:89] found id: ""
	I0603 12:10:49.224735   73662 logs.go:276] 0 containers: []
	W0603 12:10:49.224746   73662 logs.go:278] No container was found matching "kube-scheduler"
	I0603 12:10:49.224752   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 12:10:49.224814   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 12:10:49.261124   73662 cri.go:89] found id: ""
	I0603 12:10:49.261151   73662 logs.go:276] 0 containers: []
	W0603 12:10:49.261159   73662 logs.go:278] No container was found matching "kube-proxy"
	I0603 12:10:49.261164   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 12:10:49.261218   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 12:10:49.297702   73662 cri.go:89] found id: ""
	I0603 12:10:49.297727   73662 logs.go:276] 0 containers: []
	W0603 12:10:49.297734   73662 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 12:10:49.297739   73662 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 12:10:49.297788   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 12:10:49.337168   73662 cri.go:89] found id: ""
	I0603 12:10:49.337194   73662 logs.go:276] 0 containers: []
	W0603 12:10:49.337202   73662 logs.go:278] No container was found matching "kindnet"
	I0603 12:10:49.337208   73662 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 12:10:49.337273   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 12:10:49.378570   73662 cri.go:89] found id: ""
	I0603 12:10:49.378594   73662 logs.go:276] 0 containers: []
	W0603 12:10:49.378602   73662 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 12:10:49.378611   73662 logs.go:123] Gathering logs for kubelet ...
	I0603 12:10:49.378623   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 12:10:49.431727   73662 logs.go:123] Gathering logs for dmesg ...
	I0603 12:10:49.431761   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 12:10:49.446359   73662 logs.go:123] Gathering logs for describe nodes ...
	I0603 12:10:49.446383   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 12:10:49.515520   73662 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 12:10:49.515539   73662 logs.go:123] Gathering logs for CRI-O ...
	I0603 12:10:49.515551   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 12:10:49.600658   73662 logs.go:123] Gathering logs for container status ...
	I0603 12:10:49.600697   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 12:10:46.580695   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:10:48.581909   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:10:51.111776   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:10:53.613132   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:10:50.171016   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:10:52.667473   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:10:52.146131   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:10:52.159370   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 12:10:52.159441   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 12:10:52.200541   73662 cri.go:89] found id: ""
	I0603 12:10:52.200571   73662 logs.go:276] 0 containers: []
	W0603 12:10:52.200578   73662 logs.go:278] No container was found matching "kube-apiserver"
	I0603 12:10:52.200583   73662 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 12:10:52.200643   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 12:10:52.243779   73662 cri.go:89] found id: ""
	I0603 12:10:52.243808   73662 logs.go:276] 0 containers: []
	W0603 12:10:52.243819   73662 logs.go:278] No container was found matching "etcd"
	I0603 12:10:52.243827   73662 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 12:10:52.243885   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 12:10:52.278098   73662 cri.go:89] found id: ""
	I0603 12:10:52.278133   73662 logs.go:276] 0 containers: []
	W0603 12:10:52.278142   73662 logs.go:278] No container was found matching "coredns"
	I0603 12:10:52.278148   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 12:10:52.278201   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 12:10:52.310844   73662 cri.go:89] found id: ""
	I0603 12:10:52.310873   73662 logs.go:276] 0 containers: []
	W0603 12:10:52.310884   73662 logs.go:278] No container was found matching "kube-scheduler"
	I0603 12:10:52.310892   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 12:10:52.310947   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 12:10:52.346131   73662 cri.go:89] found id: ""
	I0603 12:10:52.346160   73662 logs.go:276] 0 containers: []
	W0603 12:10:52.346170   73662 logs.go:278] No container was found matching "kube-proxy"
	I0603 12:10:52.346186   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 12:10:52.346252   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 12:10:52.383384   73662 cri.go:89] found id: ""
	I0603 12:10:52.383412   73662 logs.go:276] 0 containers: []
	W0603 12:10:52.383420   73662 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 12:10:52.383426   73662 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 12:10:52.383472   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 12:10:52.415110   73662 cri.go:89] found id: ""
	I0603 12:10:52.415141   73662 logs.go:276] 0 containers: []
	W0603 12:10:52.415152   73662 logs.go:278] No container was found matching "kindnet"
	I0603 12:10:52.415159   73662 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 12:10:52.415228   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 12:10:52.449473   73662 cri.go:89] found id: ""
	I0603 12:10:52.449503   73662 logs.go:276] 0 containers: []
	W0603 12:10:52.449511   73662 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 12:10:52.449520   73662 logs.go:123] Gathering logs for kubelet ...
	I0603 12:10:52.449535   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 12:10:52.501303   73662 logs.go:123] Gathering logs for dmesg ...
	I0603 12:10:52.501331   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 12:10:52.515125   73662 logs.go:123] Gathering logs for describe nodes ...
	I0603 12:10:52.515155   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 12:10:52.587250   73662 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 12:10:52.587273   73662 logs.go:123] Gathering logs for CRI-O ...
	I0603 12:10:52.587289   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 12:10:52.677387   73662 logs.go:123] Gathering logs for container status ...
	I0603 12:10:52.677417   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 12:10:51.081196   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:10:53.081389   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:10:55.082150   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:10:55.618759   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:10:58.112642   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:10:55.166477   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:10:57.666759   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:10:59.667117   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:10:55.216868   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:10:55.231081   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 12:10:55.231148   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 12:10:55.269023   73662 cri.go:89] found id: ""
	I0603 12:10:55.269060   73662 logs.go:276] 0 containers: []
	W0603 12:10:55.269071   73662 logs.go:278] No container was found matching "kube-apiserver"
	I0603 12:10:55.269078   73662 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 12:10:55.269140   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 12:10:55.304553   73662 cri.go:89] found id: ""
	I0603 12:10:55.304584   73662 logs.go:276] 0 containers: []
	W0603 12:10:55.304594   73662 logs.go:278] No container was found matching "etcd"
	I0603 12:10:55.304602   73662 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 12:10:55.304653   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 12:10:55.337397   73662 cri.go:89] found id: ""
	I0603 12:10:55.337417   73662 logs.go:276] 0 containers: []
	W0603 12:10:55.337426   73662 logs.go:278] No container was found matching "coredns"
	I0603 12:10:55.337431   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 12:10:55.337477   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 12:10:55.378338   73662 cri.go:89] found id: ""
	I0603 12:10:55.378360   73662 logs.go:276] 0 containers: []
	W0603 12:10:55.378369   73662 logs.go:278] No container was found matching "kube-scheduler"
	I0603 12:10:55.378376   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 12:10:55.378434   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 12:10:55.419463   73662 cri.go:89] found id: ""
	I0603 12:10:55.419488   73662 logs.go:276] 0 containers: []
	W0603 12:10:55.419506   73662 logs.go:278] No container was found matching "kube-proxy"
	I0603 12:10:55.419513   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 12:10:55.419570   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 12:10:55.459581   73662 cri.go:89] found id: ""
	I0603 12:10:55.459609   73662 logs.go:276] 0 containers: []
	W0603 12:10:55.459616   73662 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 12:10:55.459622   73662 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 12:10:55.459686   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 12:10:55.496314   73662 cri.go:89] found id: ""
	I0603 12:10:55.496345   73662 logs.go:276] 0 containers: []
	W0603 12:10:55.496355   73662 logs.go:278] No container was found matching "kindnet"
	I0603 12:10:55.496362   73662 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 12:10:55.496412   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 12:10:55.539728   73662 cri.go:89] found id: ""
	I0603 12:10:55.539756   73662 logs.go:276] 0 containers: []
	W0603 12:10:55.539768   73662 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 12:10:55.539779   73662 logs.go:123] Gathering logs for container status ...
	I0603 12:10:55.539794   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 12:10:55.603474   73662 logs.go:123] Gathering logs for kubelet ...
	I0603 12:10:55.603502   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 12:10:55.668368   73662 logs.go:123] Gathering logs for dmesg ...
	I0603 12:10:55.668405   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 12:10:55.683121   73662 logs.go:123] Gathering logs for describe nodes ...
	I0603 12:10:55.683151   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 12:10:55.751059   73662 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 12:10:55.751096   73662 logs.go:123] Gathering logs for CRI-O ...
	I0603 12:10:55.751113   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 12:10:58.325699   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:10:58.340070   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 12:10:58.340142   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 12:10:58.376205   73662 cri.go:89] found id: ""
	I0603 12:10:58.376240   73662 logs.go:276] 0 containers: []
	W0603 12:10:58.376251   73662 logs.go:278] No container was found matching "kube-apiserver"
	I0603 12:10:58.376258   73662 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 12:10:58.376325   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 12:10:58.409491   73662 cri.go:89] found id: ""
	I0603 12:10:58.409521   73662 logs.go:276] 0 containers: []
	W0603 12:10:58.409533   73662 logs.go:278] No container was found matching "etcd"
	I0603 12:10:58.409540   73662 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 12:10:58.409601   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 12:10:58.442738   73662 cri.go:89] found id: ""
	I0603 12:10:58.442768   73662 logs.go:276] 0 containers: []
	W0603 12:10:58.442779   73662 logs.go:278] No container was found matching "coredns"
	I0603 12:10:58.442787   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 12:10:58.442849   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 12:10:58.478390   73662 cri.go:89] found id: ""
	I0603 12:10:58.478417   73662 logs.go:276] 0 containers: []
	W0603 12:10:58.478425   73662 logs.go:278] No container was found matching "kube-scheduler"
	I0603 12:10:58.478430   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 12:10:58.478477   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 12:10:58.513652   73662 cri.go:89] found id: ""
	I0603 12:10:58.513683   73662 logs.go:276] 0 containers: []
	W0603 12:10:58.513694   73662 logs.go:278] No container was found matching "kube-proxy"
	I0603 12:10:58.513702   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 12:10:58.513762   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 12:10:58.546490   73662 cri.go:89] found id: ""
	I0603 12:10:58.546513   73662 logs.go:276] 0 containers: []
	W0603 12:10:58.546526   73662 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 12:10:58.546532   73662 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 12:10:58.546578   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 12:10:58.585772   73662 cri.go:89] found id: ""
	I0603 12:10:58.585796   73662 logs.go:276] 0 containers: []
	W0603 12:10:58.585803   73662 logs.go:278] No container was found matching "kindnet"
	I0603 12:10:58.585809   73662 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 12:10:58.585852   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 12:10:58.623108   73662 cri.go:89] found id: ""
	I0603 12:10:58.623126   73662 logs.go:276] 0 containers: []
	W0603 12:10:58.623133   73662 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 12:10:58.623140   73662 logs.go:123] Gathering logs for dmesg ...
	I0603 12:10:58.623150   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 12:10:58.636866   73662 logs.go:123] Gathering logs for describe nodes ...
	I0603 12:10:58.636892   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 12:10:58.709496   73662 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 12:10:58.709537   73662 logs.go:123] Gathering logs for CRI-O ...
	I0603 12:10:58.709549   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 12:10:58.785370   73662 logs.go:123] Gathering logs for container status ...
	I0603 12:10:58.785401   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 12:10:58.826456   73662 logs.go:123] Gathering logs for kubelet ...
	I0603 12:10:58.826482   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 12:10:57.581002   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:10:59.582082   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:11:00.114280   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:11:02.114479   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:11:01.668216   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:11:04.165821   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:11:01.379144   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:11:01.396357   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 12:11:01.396423   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 12:11:01.459762   73662 cri.go:89] found id: ""
	I0603 12:11:01.459798   73662 logs.go:276] 0 containers: []
	W0603 12:11:01.459809   73662 logs.go:278] No container was found matching "kube-apiserver"
	I0603 12:11:01.459817   73662 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 12:11:01.459877   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 12:11:01.517986   73662 cri.go:89] found id: ""
	I0603 12:11:01.518019   73662 logs.go:276] 0 containers: []
	W0603 12:11:01.518034   73662 logs.go:278] No container was found matching "etcd"
	I0603 12:11:01.518048   73662 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 12:11:01.518111   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 12:11:01.550571   73662 cri.go:89] found id: ""
	I0603 12:11:01.550599   73662 logs.go:276] 0 containers: []
	W0603 12:11:01.550611   73662 logs.go:278] No container was found matching "coredns"
	I0603 12:11:01.550618   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 12:11:01.550670   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 12:11:01.585185   73662 cri.go:89] found id: ""
	I0603 12:11:01.585210   73662 logs.go:276] 0 containers: []
	W0603 12:11:01.585221   73662 logs.go:278] No container was found matching "kube-scheduler"
	I0603 12:11:01.585230   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 12:11:01.585288   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 12:11:01.629706   73662 cri.go:89] found id: ""
	I0603 12:11:01.629734   73662 logs.go:276] 0 containers: []
	W0603 12:11:01.629744   73662 logs.go:278] No container was found matching "kube-proxy"
	I0603 12:11:01.629751   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 12:11:01.629815   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 12:11:01.667272   73662 cri.go:89] found id: ""
	I0603 12:11:01.667310   73662 logs.go:276] 0 containers: []
	W0603 12:11:01.667321   73662 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 12:11:01.667332   73662 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 12:11:01.667390   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 12:11:01.703379   73662 cri.go:89] found id: ""
	I0603 12:11:01.703409   73662 logs.go:276] 0 containers: []
	W0603 12:11:01.703419   73662 logs.go:278] No container was found matching "kindnet"
	I0603 12:11:01.703426   73662 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 12:11:01.703480   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 12:11:01.737944   73662 cri.go:89] found id: ""
	I0603 12:11:01.737972   73662 logs.go:276] 0 containers: []
	W0603 12:11:01.737979   73662 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 12:11:01.737987   73662 logs.go:123] Gathering logs for kubelet ...
	I0603 12:11:01.737997   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 12:11:01.786485   73662 logs.go:123] Gathering logs for dmesg ...
	I0603 12:11:01.786513   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 12:11:01.799760   73662 logs.go:123] Gathering logs for describe nodes ...
	I0603 12:11:01.799783   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 12:11:01.875617   73662 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 12:11:01.875639   73662 logs.go:123] Gathering logs for CRI-O ...
	I0603 12:11:01.875651   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 12:11:01.963485   73662 logs.go:123] Gathering logs for container status ...
	I0603 12:11:01.963529   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 12:11:04.507299   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:11:04.522138   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 12:11:04.522190   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 12:11:04.558117   73662 cri.go:89] found id: ""
	I0603 12:11:04.558145   73662 logs.go:276] 0 containers: []
	W0603 12:11:04.558155   73662 logs.go:278] No container was found matching "kube-apiserver"
	I0603 12:11:04.558162   73662 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 12:11:04.558222   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 12:11:04.595700   73662 cri.go:89] found id: ""
	I0603 12:11:04.595726   73662 logs.go:276] 0 containers: []
	W0603 12:11:04.595737   73662 logs.go:278] No container was found matching "etcd"
	I0603 12:11:04.595748   73662 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 12:11:04.595806   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 12:11:04.631793   73662 cri.go:89] found id: ""
	I0603 12:11:04.631823   73662 logs.go:276] 0 containers: []
	W0603 12:11:04.631832   73662 logs.go:278] No container was found matching "coredns"
	I0603 12:11:04.631839   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 12:11:04.631897   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 12:11:04.666362   73662 cri.go:89] found id: ""
	I0603 12:11:04.666392   73662 logs.go:276] 0 containers: []
	W0603 12:11:04.666401   73662 logs.go:278] No container was found matching "kube-scheduler"
	I0603 12:11:04.666408   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 12:11:04.666471   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 12:11:04.701446   73662 cri.go:89] found id: ""
	I0603 12:11:04.701476   73662 logs.go:276] 0 containers: []
	W0603 12:11:04.701487   73662 logs.go:278] No container was found matching "kube-proxy"
	I0603 12:11:04.701495   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 12:11:04.701555   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 12:11:04.736290   73662 cri.go:89] found id: ""
	I0603 12:11:04.736311   73662 logs.go:276] 0 containers: []
	W0603 12:11:04.736322   73662 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 12:11:04.736330   73662 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 12:11:04.736389   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 12:11:04.769705   73662 cri.go:89] found id: ""
	I0603 12:11:04.769725   73662 logs.go:276] 0 containers: []
	W0603 12:11:04.769732   73662 logs.go:278] No container was found matching "kindnet"
	I0603 12:11:04.769737   73662 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 12:11:04.769779   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 12:11:04.804875   73662 cri.go:89] found id: ""
	I0603 12:11:04.804898   73662 logs.go:276] 0 containers: []
	W0603 12:11:04.804909   73662 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 12:11:04.804927   73662 logs.go:123] Gathering logs for dmesg ...
	I0603 12:11:04.804941   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 12:11:04.818083   73662 logs.go:123] Gathering logs for describe nodes ...
	I0603 12:11:04.818112   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 12:11:04.890971   73662 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 12:11:04.891002   73662 logs.go:123] Gathering logs for CRI-O ...
	I0603 12:11:04.891017   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 12:11:04.970710   73662 logs.go:123] Gathering logs for container status ...
	I0603 12:11:04.970755   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 12:11:05.012247   73662 logs.go:123] Gathering logs for kubelet ...
	I0603 12:11:05.012282   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 12:11:01.582124   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:11:03.586504   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:11:04.612589   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:11:07.114578   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:11:06.166693   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:11:08.166916   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:11:07.567462   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:11:07.583533   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 12:11:07.583628   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 12:11:07.621078   73662 cri.go:89] found id: ""
	I0603 12:11:07.621102   73662 logs.go:276] 0 containers: []
	W0603 12:11:07.621110   73662 logs.go:278] No container was found matching "kube-apiserver"
	I0603 12:11:07.621119   73662 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 12:11:07.621178   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 12:11:07.656011   73662 cri.go:89] found id: ""
	I0603 12:11:07.656040   73662 logs.go:276] 0 containers: []
	W0603 12:11:07.656049   73662 logs.go:278] No container was found matching "etcd"
	I0603 12:11:07.656056   73662 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 12:11:07.656117   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 12:11:07.694711   73662 cri.go:89] found id: ""
	I0603 12:11:07.694741   73662 logs.go:276] 0 containers: []
	W0603 12:11:07.694751   73662 logs.go:278] No container was found matching "coredns"
	I0603 12:11:07.694759   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 12:11:07.694816   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 12:11:07.731139   73662 cri.go:89] found id: ""
	I0603 12:11:07.731168   73662 logs.go:276] 0 containers: []
	W0603 12:11:07.731178   73662 logs.go:278] No container was found matching "kube-scheduler"
	I0603 12:11:07.731185   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 12:11:07.731242   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 12:11:07.769734   73662 cri.go:89] found id: ""
	I0603 12:11:07.769763   73662 logs.go:276] 0 containers: []
	W0603 12:11:07.769772   73662 logs.go:278] No container was found matching "kube-proxy"
	I0603 12:11:07.769780   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 12:11:07.769838   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 12:11:07.804874   73662 cri.go:89] found id: ""
	I0603 12:11:07.804905   73662 logs.go:276] 0 containers: []
	W0603 12:11:07.804917   73662 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 12:11:07.804925   73662 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 12:11:07.804984   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 12:11:07.843901   73662 cri.go:89] found id: ""
	I0603 12:11:07.843931   73662 logs.go:276] 0 containers: []
	W0603 12:11:07.843941   73662 logs.go:278] No container was found matching "kindnet"
	I0603 12:11:07.843949   73662 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 12:11:07.844001   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 12:11:07.878763   73662 cri.go:89] found id: ""
	I0603 12:11:07.878792   73662 logs.go:276] 0 containers: []
	W0603 12:11:07.878803   73662 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 12:11:07.878814   73662 logs.go:123] Gathering logs for CRI-O ...
	I0603 12:11:07.878829   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 12:11:07.958064   73662 logs.go:123] Gathering logs for container status ...
	I0603 12:11:07.958095   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 12:11:08.000115   73662 logs.go:123] Gathering logs for kubelet ...
	I0603 12:11:08.000144   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 12:11:08.057652   73662 logs.go:123] Gathering logs for dmesg ...
	I0603 12:11:08.057685   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 12:11:08.071731   73662 logs.go:123] Gathering logs for describe nodes ...
	I0603 12:11:08.071759   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 12:11:08.148184   73662 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 12:11:06.080555   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:11:08.080661   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:11:10.081918   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:11:09.613756   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:11:12.112723   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:11:14.114236   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:11:10.167662   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:11:12.666872   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:11:10.649338   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:11:10.662870   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 12:11:10.662945   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 12:11:10.698461   73662 cri.go:89] found id: ""
	I0603 12:11:10.698492   73662 logs.go:276] 0 containers: []
	W0603 12:11:10.698500   73662 logs.go:278] No container was found matching "kube-apiserver"
	I0603 12:11:10.698507   73662 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 12:11:10.698560   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 12:11:10.733955   73662 cri.go:89] found id: ""
	I0603 12:11:10.733987   73662 logs.go:276] 0 containers: []
	W0603 12:11:10.733999   73662 logs.go:278] No container was found matching "etcd"
	I0603 12:11:10.734006   73662 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 12:11:10.734064   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 12:11:10.769578   73662 cri.go:89] found id: ""
	I0603 12:11:10.769605   73662 logs.go:276] 0 containers: []
	W0603 12:11:10.769615   73662 logs.go:278] No container was found matching "coredns"
	I0603 12:11:10.769622   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 12:11:10.769682   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 12:11:10.803353   73662 cri.go:89] found id: ""
	I0603 12:11:10.803382   73662 logs.go:276] 0 containers: []
	W0603 12:11:10.803393   73662 logs.go:278] No container was found matching "kube-scheduler"
	I0603 12:11:10.803401   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 12:11:10.803459   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 12:11:10.839791   73662 cri.go:89] found id: ""
	I0603 12:11:10.839819   73662 logs.go:276] 0 containers: []
	W0603 12:11:10.839828   73662 logs.go:278] No container was found matching "kube-proxy"
	I0603 12:11:10.839835   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 12:11:10.839894   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 12:11:10.878216   73662 cri.go:89] found id: ""
	I0603 12:11:10.878249   73662 logs.go:276] 0 containers: []
	W0603 12:11:10.878259   73662 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 12:11:10.878265   73662 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 12:11:10.878333   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 12:11:10.912606   73662 cri.go:89] found id: ""
	I0603 12:11:10.912637   73662 logs.go:276] 0 containers: []
	W0603 12:11:10.912645   73662 logs.go:278] No container was found matching "kindnet"
	I0603 12:11:10.912650   73662 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 12:11:10.912709   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 12:11:10.946669   73662 cri.go:89] found id: ""
	I0603 12:11:10.946699   73662 logs.go:276] 0 containers: []
	W0603 12:11:10.946708   73662 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 12:11:10.946718   73662 logs.go:123] Gathering logs for kubelet ...
	I0603 12:11:10.946733   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 12:11:10.996044   73662 logs.go:123] Gathering logs for dmesg ...
	I0603 12:11:10.996077   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 12:11:11.009522   73662 logs.go:123] Gathering logs for describe nodes ...
	I0603 12:11:11.009573   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 12:11:11.081623   73662 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 12:11:11.081642   73662 logs.go:123] Gathering logs for CRI-O ...
	I0603 12:11:11.081652   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 12:11:11.162795   73662 logs.go:123] Gathering logs for container status ...
	I0603 12:11:11.162826   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 12:11:13.704492   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:11:13.718870   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 12:11:13.718939   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 12:11:13.757818   73662 cri.go:89] found id: ""
	I0603 12:11:13.757842   73662 logs.go:276] 0 containers: []
	W0603 12:11:13.757850   73662 logs.go:278] No container was found matching "kube-apiserver"
	I0603 12:11:13.757859   73662 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 12:11:13.757904   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 12:11:13.791959   73662 cri.go:89] found id: ""
	I0603 12:11:13.791989   73662 logs.go:276] 0 containers: []
	W0603 12:11:13.792003   73662 logs.go:278] No container was found matching "etcd"
	I0603 12:11:13.792010   73662 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 12:11:13.792072   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 12:11:13.827443   73662 cri.go:89] found id: ""
	I0603 12:11:13.827471   73662 logs.go:276] 0 containers: []
	W0603 12:11:13.827478   73662 logs.go:278] No container was found matching "coredns"
	I0603 12:11:13.827484   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 12:11:13.827538   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 12:11:13.862237   73662 cri.go:89] found id: ""
	I0603 12:11:13.862267   73662 logs.go:276] 0 containers: []
	W0603 12:11:13.862277   73662 logs.go:278] No container was found matching "kube-scheduler"
	I0603 12:11:13.862284   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 12:11:13.862375   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 12:11:13.898873   73662 cri.go:89] found id: ""
	I0603 12:11:13.898906   73662 logs.go:276] 0 containers: []
	W0603 12:11:13.898917   73662 logs.go:278] No container was found matching "kube-proxy"
	I0603 12:11:13.898924   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 12:11:13.898981   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 12:11:13.932870   73662 cri.go:89] found id: ""
	I0603 12:11:13.932899   73662 logs.go:276] 0 containers: []
	W0603 12:11:13.932908   73662 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 12:11:13.932913   73662 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 12:11:13.932960   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 12:11:13.968575   73662 cri.go:89] found id: ""
	I0603 12:11:13.968597   73662 logs.go:276] 0 containers: []
	W0603 12:11:13.968605   73662 logs.go:278] No container was found matching "kindnet"
	I0603 12:11:13.968610   73662 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 12:11:13.968663   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 12:11:14.007252   73662 cri.go:89] found id: ""
	I0603 12:11:14.007281   73662 logs.go:276] 0 containers: []
	W0603 12:11:14.007291   73662 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 12:11:14.007302   73662 logs.go:123] Gathering logs for describe nodes ...
	I0603 12:11:14.007317   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 12:11:14.080572   73662 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 12:11:14.080595   73662 logs.go:123] Gathering logs for CRI-O ...
	I0603 12:11:14.080607   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 12:11:14.171851   73662 logs.go:123] Gathering logs for container status ...
	I0603 12:11:14.171886   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 12:11:14.212697   73662 logs.go:123] Gathering logs for kubelet ...
	I0603 12:11:14.212726   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 12:11:14.264925   73662 logs.go:123] Gathering logs for dmesg ...
	I0603 12:11:14.264958   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
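	(Each of the repeated log-gathering cycles above runs the same fixed set of commands over SSH. Condensed into a shell sketch that mirrors the ssh_runner lines visible in the log, not minikube's actual Go code:)

	    # one gathering cycle, as executed on the guest
	    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy \
	             kube-controller-manager kindnet kubernetes-dashboard; do
	      sudo crictl ps -a --quiet --name="$c"        # every query here returns no container IDs
	    done
	    sudo journalctl -u kubelet -n 400              # kubelet logs
	    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
	    sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes \
	         --kubeconfig=/var/lib/minikube/kubeconfig  # fails while the apiserver is down
	    sudo journalctl -u crio -n 400                 # CRI-O logs
	    sudo crictl ps -a || sudo docker ps -a         # container status
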
	I0603 12:11:12.580430   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:11:14.581407   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:11:16.615592   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:11:19.111956   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:11:15.166724   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:11:17.667851   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:11:16.780783   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:11:16.795029   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 12:11:16.795127   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 12:11:16.833178   73662 cri.go:89] found id: ""
	I0603 12:11:16.833208   73662 logs.go:276] 0 containers: []
	W0603 12:11:16.833218   73662 logs.go:278] No container was found matching "kube-apiserver"
	I0603 12:11:16.833226   73662 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 12:11:16.833287   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 12:11:16.869318   73662 cri.go:89] found id: ""
	I0603 12:11:16.869349   73662 logs.go:276] 0 containers: []
	W0603 12:11:16.869359   73662 logs.go:278] No container was found matching "etcd"
	I0603 12:11:16.869366   73662 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 12:11:16.869429   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 12:11:16.902810   73662 cri.go:89] found id: ""
	I0603 12:11:16.902836   73662 logs.go:276] 0 containers: []
	W0603 12:11:16.902843   73662 logs.go:278] No container was found matching "coredns"
	I0603 12:11:16.902849   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 12:11:16.902893   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 12:11:16.936404   73662 cri.go:89] found id: ""
	I0603 12:11:16.936432   73662 logs.go:276] 0 containers: []
	W0603 12:11:16.936442   73662 logs.go:278] No container was found matching "kube-scheduler"
	I0603 12:11:16.936449   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 12:11:16.936505   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 12:11:16.971056   73662 cri.go:89] found id: ""
	I0603 12:11:16.971083   73662 logs.go:276] 0 containers: []
	W0603 12:11:16.971092   73662 logs.go:278] No container was found matching "kube-proxy"
	I0603 12:11:16.971097   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 12:11:16.971147   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 12:11:17.005389   73662 cri.go:89] found id: ""
	I0603 12:11:17.005416   73662 logs.go:276] 0 containers: []
	W0603 12:11:17.005427   73662 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 12:11:17.005435   73662 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 12:11:17.005491   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 12:11:17.047093   73662 cri.go:89] found id: ""
	I0603 12:11:17.047118   73662 logs.go:276] 0 containers: []
	W0603 12:11:17.047126   73662 logs.go:278] No container was found matching "kindnet"
	I0603 12:11:17.047131   73662 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 12:11:17.047187   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 12:11:17.093020   73662 cri.go:89] found id: ""
	I0603 12:11:17.093049   73662 logs.go:276] 0 containers: []
	W0603 12:11:17.093057   73662 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 12:11:17.093068   73662 logs.go:123] Gathering logs for CRI-O ...
	I0603 12:11:17.093081   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 12:11:17.177970   73662 logs.go:123] Gathering logs for container status ...
	I0603 12:11:17.178001   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 12:11:17.219530   73662 logs.go:123] Gathering logs for kubelet ...
	I0603 12:11:17.219563   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 12:11:17.272776   73662 logs.go:123] Gathering logs for dmesg ...
	I0603 12:11:17.272808   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 12:11:17.287573   73662 logs.go:123] Gathering logs for describe nodes ...
	I0603 12:11:17.287610   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 12:11:17.361020   73662 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 12:11:19.861599   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:11:19.874988   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 12:11:19.875075   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 12:11:19.910641   73662 cri.go:89] found id: ""
	I0603 12:11:19.910664   73662 logs.go:276] 0 containers: []
	W0603 12:11:19.910672   73662 logs.go:278] No container was found matching "kube-apiserver"
	I0603 12:11:19.910678   73662 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 12:11:19.910738   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 12:11:19.947432   73662 cri.go:89] found id: ""
	I0603 12:11:19.947457   73662 logs.go:276] 0 containers: []
	W0603 12:11:19.947465   73662 logs.go:278] No container was found matching "etcd"
	I0603 12:11:19.947475   73662 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 12:11:19.947528   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 12:11:19.986254   73662 cri.go:89] found id: ""
	I0603 12:11:19.986284   73662 logs.go:276] 0 containers: []
	W0603 12:11:19.986296   73662 logs.go:278] No container was found matching "coredns"
	I0603 12:11:19.986303   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 12:11:19.986370   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 12:11:20.022447   73662 cri.go:89] found id: ""
	I0603 12:11:20.022477   73662 logs.go:276] 0 containers: []
	W0603 12:11:20.022488   73662 logs.go:278] No container was found matching "kube-scheduler"
	I0603 12:11:20.022496   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 12:11:20.022555   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 12:11:20.056731   73662 cri.go:89] found id: ""
	I0603 12:11:20.056755   73662 logs.go:276] 0 containers: []
	W0603 12:11:20.056763   73662 logs.go:278] No container was found matching "kube-proxy"
	I0603 12:11:20.056769   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 12:11:20.056819   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 12:11:17.081290   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:11:19.581301   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:11:21.113769   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:11:23.106545   73294 pod_ready.go:81] duration metric: took 4m0.000411778s for pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace to be "Ready" ...
	E0603 12:11:23.106575   73294 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace to be "Ready" (will not retry!)
	I0603 12:11:23.106597   73294 pod_ready.go:38] duration metric: took 4m5.898372288s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0603 12:11:23.106627   73294 kubeadm.go:591] duration metric: took 4m13.660386139s to restartPrimaryControlPlane
	W0603 12:11:23.106692   73294 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0603 12:11:23.106750   73294 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
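	(At this point the 4m0s WaitExtra timeout for metrics-server has expired, so minikube gives up on restarting the existing control plane and falls back to a full reset followed by a fresh kubeadm init, both visible later in the log. The reset command it issues is the one shown above, reproduced here for readability; the Completed line at 12:11:55 shows it takes about 32 seconds on this run:)

	    sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" \
	      kubeadm reset --cri-socket /var/run/crio/crio.sock --force
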
	I0603 12:11:20.168291   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:11:22.667983   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:11:24.668130   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:11:20.095511   73662 cri.go:89] found id: ""
	I0603 12:11:20.095537   73662 logs.go:276] 0 containers: []
	W0603 12:11:20.095547   73662 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 12:11:20.095552   73662 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 12:11:20.095595   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 12:11:20.130562   73662 cri.go:89] found id: ""
	I0603 12:11:20.130581   73662 logs.go:276] 0 containers: []
	W0603 12:11:20.130589   73662 logs.go:278] No container was found matching "kindnet"
	I0603 12:11:20.130594   73662 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 12:11:20.130648   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 12:11:20.165231   73662 cri.go:89] found id: ""
	I0603 12:11:20.165257   73662 logs.go:276] 0 containers: []
	W0603 12:11:20.165267   73662 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 12:11:20.165276   73662 logs.go:123] Gathering logs for kubelet ...
	I0603 12:11:20.165290   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 12:11:20.221790   73662 logs.go:123] Gathering logs for dmesg ...
	I0603 12:11:20.221826   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 12:11:20.237415   73662 logs.go:123] Gathering logs for describe nodes ...
	I0603 12:11:20.237440   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 12:11:20.310615   73662 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 12:11:20.310641   73662 logs.go:123] Gathering logs for CRI-O ...
	I0603 12:11:20.310657   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 12:11:20.385667   73662 logs.go:123] Gathering logs for container status ...
	I0603 12:11:20.385701   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 12:11:22.925911   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:11:22.938958   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 12:11:22.939047   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 12:11:22.981898   73662 cri.go:89] found id: ""
	I0603 12:11:22.981928   73662 logs.go:276] 0 containers: []
	W0603 12:11:22.981939   73662 logs.go:278] No container was found matching "kube-apiserver"
	I0603 12:11:22.981954   73662 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 12:11:22.982026   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 12:11:23.025590   73662 cri.go:89] found id: ""
	I0603 12:11:23.025624   73662 logs.go:276] 0 containers: []
	W0603 12:11:23.025632   73662 logs.go:278] No container was found matching "etcd"
	I0603 12:11:23.025638   73662 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 12:11:23.025691   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 12:11:23.072938   73662 cri.go:89] found id: ""
	I0603 12:11:23.072968   73662 logs.go:276] 0 containers: []
	W0603 12:11:23.072980   73662 logs.go:278] No container was found matching "coredns"
	I0603 12:11:23.072988   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 12:11:23.073057   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 12:11:23.114546   73662 cri.go:89] found id: ""
	I0603 12:11:23.114573   73662 logs.go:276] 0 containers: []
	W0603 12:11:23.114582   73662 logs.go:278] No container was found matching "kube-scheduler"
	I0603 12:11:23.114589   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 12:11:23.114654   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 12:11:23.152203   73662 cri.go:89] found id: ""
	I0603 12:11:23.152229   73662 logs.go:276] 0 containers: []
	W0603 12:11:23.152236   73662 logs.go:278] No container was found matching "kube-proxy"
	I0603 12:11:23.152242   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 12:11:23.152289   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 12:11:23.204179   73662 cri.go:89] found id: ""
	I0603 12:11:23.204228   73662 logs.go:276] 0 containers: []
	W0603 12:11:23.204240   73662 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 12:11:23.204247   73662 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 12:11:23.204308   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 12:11:23.244217   73662 cri.go:89] found id: ""
	I0603 12:11:23.244246   73662 logs.go:276] 0 containers: []
	W0603 12:11:23.244256   73662 logs.go:278] No container was found matching "kindnet"
	I0603 12:11:23.244264   73662 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 12:11:23.244326   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 12:11:23.286094   73662 cri.go:89] found id: ""
	I0603 12:11:23.286173   73662 logs.go:276] 0 containers: []
	W0603 12:11:23.286190   73662 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 12:11:23.286201   73662 logs.go:123] Gathering logs for kubelet ...
	I0603 12:11:23.286215   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 12:11:23.357802   73662 logs.go:123] Gathering logs for dmesg ...
	I0603 12:11:23.357850   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 12:11:23.376808   73662 logs.go:123] Gathering logs for describe nodes ...
	I0603 12:11:23.376839   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 12:11:23.470658   73662 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 12:11:23.470691   73662 logs.go:123] Gathering logs for CRI-O ...
	I0603 12:11:23.470705   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 12:11:23.584192   73662 logs.go:123] Gathering logs for container status ...
	I0603 12:11:23.584241   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 12:11:22.075519   73179 pod_ready.go:81] duration metric: took 4m0.000796038s for pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace to be "Ready" ...
	E0603 12:11:22.075561   73179 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace to be "Ready" (will not retry!)
	I0603 12:11:22.075598   73179 pod_ready.go:38] duration metric: took 4m12.795532428s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0603 12:11:22.075626   73179 kubeadm.go:591] duration metric: took 4m22.69078868s to restartPrimaryControlPlane
	W0603 12:11:22.075677   73179 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0603 12:11:22.075720   73179 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0603 12:11:27.170198   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:11:29.667670   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:11:26.132511   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:11:26.150549   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 12:11:26.150619   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 12:11:26.196791   73662 cri.go:89] found id: ""
	I0603 12:11:26.196817   73662 logs.go:276] 0 containers: []
	W0603 12:11:26.196827   73662 logs.go:278] No container was found matching "kube-apiserver"
	I0603 12:11:26.196834   73662 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 12:11:26.196912   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 12:11:26.233584   73662 cri.go:89] found id: ""
	I0603 12:11:26.233614   73662 logs.go:276] 0 containers: []
	W0603 12:11:26.233624   73662 logs.go:278] No container was found matching "etcd"
	I0603 12:11:26.233631   73662 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 12:11:26.233692   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 12:11:26.272648   73662 cri.go:89] found id: ""
	I0603 12:11:26.272677   73662 logs.go:276] 0 containers: []
	W0603 12:11:26.272688   73662 logs.go:278] No container was found matching "coredns"
	I0603 12:11:26.272696   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 12:11:26.272758   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 12:11:26.313775   73662 cri.go:89] found id: ""
	I0603 12:11:26.313806   73662 logs.go:276] 0 containers: []
	W0603 12:11:26.313817   73662 logs.go:278] No container was found matching "kube-scheduler"
	I0603 12:11:26.313824   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 12:11:26.313883   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 12:11:26.355591   73662 cri.go:89] found id: ""
	I0603 12:11:26.355626   73662 logs.go:276] 0 containers: []
	W0603 12:11:26.355638   73662 logs.go:278] No container was found matching "kube-proxy"
	I0603 12:11:26.355646   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 12:11:26.355711   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 12:11:26.406265   73662 cri.go:89] found id: ""
	I0603 12:11:26.406299   73662 logs.go:276] 0 containers: []
	W0603 12:11:26.406306   73662 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 12:11:26.406318   73662 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 12:11:26.406378   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 12:11:26.443279   73662 cri.go:89] found id: ""
	I0603 12:11:26.443321   73662 logs.go:276] 0 containers: []
	W0603 12:11:26.443333   73662 logs.go:278] No container was found matching "kindnet"
	I0603 12:11:26.443340   73662 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 12:11:26.443403   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 12:11:26.479300   73662 cri.go:89] found id: ""
	I0603 12:11:26.479334   73662 logs.go:276] 0 containers: []
	W0603 12:11:26.479346   73662 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 12:11:26.479358   73662 logs.go:123] Gathering logs for kubelet ...
	I0603 12:11:26.479371   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 12:11:26.531360   73662 logs.go:123] Gathering logs for dmesg ...
	I0603 12:11:26.531394   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 12:11:26.547939   73662 logs.go:123] Gathering logs for describe nodes ...
	I0603 12:11:26.547973   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 12:11:26.625987   73662 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 12:11:26.626016   73662 logs.go:123] Gathering logs for CRI-O ...
	I0603 12:11:26.626032   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 12:11:26.714014   73662 logs.go:123] Gathering logs for container status ...
	I0603 12:11:26.714054   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 12:11:29.267203   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:11:29.281448   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 12:11:29.281522   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 12:11:29.315484   73662 cri.go:89] found id: ""
	I0603 12:11:29.315512   73662 logs.go:276] 0 containers: []
	W0603 12:11:29.315519   73662 logs.go:278] No container was found matching "kube-apiserver"
	I0603 12:11:29.315530   73662 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 12:11:29.315586   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 12:11:29.357054   73662 cri.go:89] found id: ""
	I0603 12:11:29.357084   73662 logs.go:276] 0 containers: []
	W0603 12:11:29.357095   73662 logs.go:278] No container was found matching "etcd"
	I0603 12:11:29.357103   73662 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 12:11:29.357163   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 12:11:29.402434   73662 cri.go:89] found id: ""
	I0603 12:11:29.402461   73662 logs.go:276] 0 containers: []
	W0603 12:11:29.402471   73662 logs.go:278] No container was found matching "coredns"
	I0603 12:11:29.402478   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 12:11:29.402520   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 12:11:29.437822   73662 cri.go:89] found id: ""
	I0603 12:11:29.437854   73662 logs.go:276] 0 containers: []
	W0603 12:11:29.437865   73662 logs.go:278] No container was found matching "kube-scheduler"
	I0603 12:11:29.437871   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 12:11:29.437917   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 12:11:29.474637   73662 cri.go:89] found id: ""
	I0603 12:11:29.474658   73662 logs.go:276] 0 containers: []
	W0603 12:11:29.474665   73662 logs.go:278] No container was found matching "kube-proxy"
	I0603 12:11:29.474671   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 12:11:29.474725   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 12:11:29.508547   73662 cri.go:89] found id: ""
	I0603 12:11:29.508573   73662 logs.go:276] 0 containers: []
	W0603 12:11:29.508580   73662 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 12:11:29.508586   73662 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 12:11:29.508630   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 12:11:29.544524   73662 cri.go:89] found id: ""
	I0603 12:11:29.544553   73662 logs.go:276] 0 containers: []
	W0603 12:11:29.544561   73662 logs.go:278] No container was found matching "kindnet"
	I0603 12:11:29.544567   73662 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 12:11:29.544621   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 12:11:29.582549   73662 cri.go:89] found id: ""
	I0603 12:11:29.582582   73662 logs.go:276] 0 containers: []
	W0603 12:11:29.582593   73662 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 12:11:29.582604   73662 logs.go:123] Gathering logs for kubelet ...
	I0603 12:11:29.582618   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 12:11:29.641931   73662 logs.go:123] Gathering logs for dmesg ...
	I0603 12:11:29.641977   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 12:11:29.664918   73662 logs.go:123] Gathering logs for describe nodes ...
	I0603 12:11:29.664948   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 12:11:29.740591   73662 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 12:11:29.740615   73662 logs.go:123] Gathering logs for CRI-O ...
	I0603 12:11:29.740629   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 12:11:29.814456   73662 logs.go:123] Gathering logs for container status ...
	I0603 12:11:29.814489   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 12:11:32.166042   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:11:34.166283   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:11:32.359122   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:11:32.373552   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 12:11:32.373623   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 12:11:32.408431   73662 cri.go:89] found id: ""
	I0603 12:11:32.408461   73662 logs.go:276] 0 containers: []
	W0603 12:11:32.408471   73662 logs.go:278] No container was found matching "kube-apiserver"
	I0603 12:11:32.408479   73662 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 12:11:32.408533   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 12:11:32.444242   73662 cri.go:89] found id: ""
	I0603 12:11:32.444266   73662 logs.go:276] 0 containers: []
	W0603 12:11:32.444273   73662 logs.go:278] No container was found matching "etcd"
	I0603 12:11:32.444279   73662 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 12:11:32.444323   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 12:11:32.477205   73662 cri.go:89] found id: ""
	I0603 12:11:32.477230   73662 logs.go:276] 0 containers: []
	W0603 12:11:32.477237   73662 logs.go:278] No container was found matching "coredns"
	I0603 12:11:32.477243   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 12:11:32.477298   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 12:11:32.512434   73662 cri.go:89] found id: ""
	I0603 12:11:32.512482   73662 logs.go:276] 0 containers: []
	W0603 12:11:32.512494   73662 logs.go:278] No container was found matching "kube-scheduler"
	I0603 12:11:32.512501   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 12:11:32.512559   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 12:11:32.545619   73662 cri.go:89] found id: ""
	I0603 12:11:32.545645   73662 logs.go:276] 0 containers: []
	W0603 12:11:32.545655   73662 logs.go:278] No container was found matching "kube-proxy"
	I0603 12:11:32.545662   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 12:11:32.545715   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 12:11:32.579093   73662 cri.go:89] found id: ""
	I0603 12:11:32.579121   73662 logs.go:276] 0 containers: []
	W0603 12:11:32.579131   73662 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 12:11:32.579138   73662 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 12:11:32.579196   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 12:11:32.616826   73662 cri.go:89] found id: ""
	I0603 12:11:32.616851   73662 logs.go:276] 0 containers: []
	W0603 12:11:32.616858   73662 logs.go:278] No container was found matching "kindnet"
	I0603 12:11:32.616864   73662 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 12:11:32.616917   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 12:11:32.660083   73662 cri.go:89] found id: ""
	I0603 12:11:32.660113   73662 logs.go:276] 0 containers: []
	W0603 12:11:32.660124   73662 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 12:11:32.660132   73662 logs.go:123] Gathering logs for container status ...
	I0603 12:11:32.660143   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 12:11:32.697974   73662 logs.go:123] Gathering logs for kubelet ...
	I0603 12:11:32.698002   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 12:11:32.748797   73662 logs.go:123] Gathering logs for dmesg ...
	I0603 12:11:32.748835   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 12:11:32.762517   73662 logs.go:123] Gathering logs for describe nodes ...
	I0603 12:11:32.762580   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 12:11:32.838358   73662 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 12:11:32.838383   73662 logs.go:123] Gathering logs for CRI-O ...
	I0603 12:11:32.838397   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 12:11:35.419197   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:11:35.432481   73662 kubeadm.go:591] duration metric: took 4m4.317900598s to restartPrimaryControlPlane
	W0603 12:11:35.432560   73662 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0603 12:11:35.432591   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0603 12:11:35.895615   73662 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0603 12:11:35.910673   73662 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0603 12:11:35.921333   73662 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0603 12:11:35.931736   73662 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0603 12:11:35.931750   73662 kubeadm.go:156] found existing configuration files:
	
	I0603 12:11:35.931783   73662 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0603 12:11:35.940883   73662 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0603 12:11:35.940924   73662 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0603 12:11:35.950780   73662 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0603 12:11:35.959947   73662 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0603 12:11:35.959999   73662 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0603 12:11:35.969824   73662 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0603 12:11:35.979347   73662 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0603 12:11:35.979393   73662 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0603 12:11:35.988704   73662 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0603 12:11:35.997726   73662 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0603 12:11:35.997785   73662 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
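	(Before re-running kubeadm init, minikube checks whether each kubeconfig under /etc/kubernetes still points at the expected control-plane endpoint and removes any file that does not; here every file is already missing, so each grep exits 2 and the rm is a no-op. The sequence above is roughly equivalent to the sketch below; note the endpoint port differs per cluster, 8443 for this one and 8444 for the cluster logged under PID 73294:)

	    endpoint="https://control-plane.minikube.internal:8443"
	    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
	      sudo grep -q "$endpoint" "/etc/kubernetes/$f" || sudo rm -f "/etc/kubernetes/$f"
	    done
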
	I0603 12:11:36.007165   73662 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0603 12:11:36.080667   73662 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0603 12:11:36.080794   73662 kubeadm.go:309] [preflight] Running pre-flight checks
	I0603 12:11:36.220642   73662 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0603 12:11:36.220814   73662 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0603 12:11:36.220967   73662 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0603 12:11:36.421569   73662 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0603 12:11:36.423141   73662 out.go:204]   - Generating certificates and keys ...
	I0603 12:11:36.423237   73662 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0603 12:11:36.423328   73662 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0603 12:11:36.423428   73662 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0603 12:11:36.423535   73662 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0603 12:11:36.423630   73662 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0603 12:11:36.423713   73662 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0603 12:11:36.423795   73662 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0603 12:11:36.423880   73662 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0603 12:11:36.423985   73662 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0603 12:11:36.424079   73662 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0603 12:11:36.424140   73662 kubeadm.go:309] [certs] Using the existing "sa" key
	I0603 12:11:36.424218   73662 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0603 12:11:36.576702   73662 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0603 12:11:36.704239   73662 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0603 12:11:36.981759   73662 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0603 12:11:37.031992   73662 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0603 12:11:37.052994   73662 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0603 12:11:37.054403   73662 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0603 12:11:37.054471   73662 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0603 12:11:37.196201   73662 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0603 12:11:36.168314   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:11:38.667358   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:11:37.198112   73662 out.go:204]   - Booting up control plane ...
	I0603 12:11:37.198252   73662 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0603 12:11:37.202872   73662 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0603 12:11:37.203965   73662 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0603 12:11:37.204734   73662 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0603 12:11:37.207204   73662 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
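	(The [wait-control-plane] phase polls until the kubelet has started the static pods written to /etc/kubernetes/manifests, with the 4m0s ceiling printed above for this v1.20.0 cluster. Progress can be followed on the node with commands like these; a manual sketch, not something minikube runs here:)

	    ls /etc/kubernetes/manifests                   # static pod manifests kubeadm just wrote
	    sudo crictl ps -a --name=kube-apiserver        # should go from empty to a running container
	    sudo journalctl -u kubelet -f                  # kubelet output while it starts the pods
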
	I0603 12:11:41.166509   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:11:43.168695   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:11:45.667381   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:11:48.167362   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:11:50.167570   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:11:52.668348   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:11:54.671004   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:11:54.178477   73179 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (32.102731378s)
	I0603 12:11:54.178554   73179 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0603 12:11:54.194599   73179 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0603 12:11:54.204770   73179 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0603 12:11:54.215290   73179 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0603 12:11:54.215315   73179 kubeadm.go:156] found existing configuration files:
	
	I0603 12:11:54.215355   73179 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0603 12:11:54.224420   73179 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0603 12:11:54.224478   73179 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0603 12:11:54.233706   73179 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0603 12:11:54.242358   73179 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0603 12:11:54.242399   73179 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0603 12:11:54.251531   73179 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0603 12:11:54.260911   73179 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0603 12:11:54.260950   73179 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0603 12:11:54.270219   73179 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0603 12:11:54.279141   73179 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0603 12:11:54.279194   73179 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0603 12:11:54.288343   73179 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0603 12:11:54.477591   73179 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
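	(The only preflight complaint for this cluster is that the kubelet systemd unit is not enabled. kubeadm treats this as a warning rather than an error, so the init proceeds; the fix the warning itself suggests would be:)

	    sudo systemctl enable kubelet.service
	    systemctl is-enabled kubelet                   # verify
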
	I0603 12:11:55.081260   73294 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (31.974475191s)
	I0603 12:11:55.081350   73294 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0603 12:11:55.098545   73294 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0603 12:11:55.109266   73294 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0603 12:11:55.118891   73294 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0603 12:11:55.118917   73294 kubeadm.go:156] found existing configuration files:
	
	I0603 12:11:55.118964   73294 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0603 12:11:55.128412   73294 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0603 12:11:55.128466   73294 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0603 12:11:55.137942   73294 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0603 12:11:55.146937   73294 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0603 12:11:55.146986   73294 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0603 12:11:55.156388   73294 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0603 12:11:55.167156   73294 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0603 12:11:55.167206   73294 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0603 12:11:55.176591   73294 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0603 12:11:55.185483   73294 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0603 12:11:55.185530   73294 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0603 12:11:55.195271   73294 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0603 12:11:55.251253   73294 kubeadm.go:309] [init] Using Kubernetes version: v1.30.1
	I0603 12:11:55.251344   73294 kubeadm.go:309] [preflight] Running pre-flight checks
	I0603 12:11:55.396358   73294 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0603 12:11:55.396519   73294 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0603 12:11:55.396681   73294 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0603 12:11:55.603493   73294 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0603 12:11:55.605797   73294 out.go:204]   - Generating certificates and keys ...
	I0603 12:11:55.605901   73294 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0603 12:11:55.605995   73294 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0603 12:11:55.606143   73294 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0603 12:11:55.606253   73294 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0603 12:11:55.606357   73294 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0603 12:11:55.606440   73294 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0603 12:11:55.606539   73294 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0603 12:11:55.606623   73294 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0603 12:11:55.606738   73294 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0603 12:11:55.606844   73294 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0603 12:11:55.606907   73294 kubeadm.go:309] [certs] Using the existing "sa" key
	I0603 12:11:55.606990   73294 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0603 12:11:55.749342   73294 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0603 12:11:55.918787   73294 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0603 12:11:56.058383   73294 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0603 12:11:56.306167   73294 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0603 12:11:56.365029   73294 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0603 12:11:56.365722   73294 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0603 12:11:56.368197   73294 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0603 12:11:56.369833   73294 out.go:204]   - Booting up control plane ...
	I0603 12:11:56.369950   73294 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0603 12:11:56.370081   73294 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0603 12:11:56.370175   73294 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0603 12:11:56.388879   73294 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0603 12:11:56.391420   73294 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0603 12:11:56.391490   73294 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0603 12:11:56.528206   73294 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0603 12:11:56.528341   73294 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0603 12:11:57.029861   73294 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 501.458956ms
	I0603 12:11:57.029944   73294 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0603 12:11:57.165921   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:11:59.168287   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:12:02.031156   73294 kubeadm.go:309] [api-check] The API server is healthy after 5.001477077s
	I0603 12:12:02.053326   73294 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0603 12:12:02.086541   73294 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0603 12:12:02.127446   73294 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0603 12:12:02.127715   73294 kubeadm.go:309] [mark-control-plane] Marking the node default-k8s-diff-port-196710 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0603 12:12:02.138683   73294 kubeadm.go:309] [bootstrap-token] Using token: 20dsgk.zbmo4be5tg5i1a9b
	I0603 12:12:02.140047   73294 out.go:204]   - Configuring RBAC rules ...
	I0603 12:12:02.140170   73294 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0603 12:12:02.149933   73294 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0603 12:12:02.160136   73294 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0603 12:12:02.168638   73294 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0603 12:12:02.173242   73294 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0603 12:12:02.177001   73294 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0603 12:12:02.438936   73294 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0603 12:12:02.892616   73294 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0603 12:12:03.438400   73294 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0603 12:12:03.440008   73294 kubeadm.go:309] 
	I0603 12:12:03.440093   73294 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0603 12:12:03.440101   73294 kubeadm.go:309] 
	I0603 12:12:03.440183   73294 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0603 12:12:03.440191   73294 kubeadm.go:309] 
	I0603 12:12:03.440217   73294 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0603 12:12:03.440308   73294 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0603 12:12:03.440416   73294 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0603 12:12:03.440438   73294 kubeadm.go:309] 
	I0603 12:12:03.440537   73294 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0603 12:12:03.440559   73294 kubeadm.go:309] 
	I0603 12:12:03.440649   73294 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0603 12:12:03.440659   73294 kubeadm.go:309] 
	I0603 12:12:03.440739   73294 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0603 12:12:03.440813   73294 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0603 12:12:03.440884   73294 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0603 12:12:03.440891   73294 kubeadm.go:309] 
	I0603 12:12:03.440959   73294 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0603 12:12:03.441059   73294 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0603 12:12:03.441077   73294 kubeadm.go:309] 
	I0603 12:12:03.441195   73294 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8444 --token 20dsgk.zbmo4be5tg5i1a9b \
	I0603 12:12:03.441383   73294 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:efe6cbdd58a590fb6c3b56a05a1648145bfb13b8a8cd2383ea34b710fa987e0b \
	I0603 12:12:03.441413   73294 kubeadm.go:309] 	--control-plane 
	I0603 12:12:03.441422   73294 kubeadm.go:309] 
	I0603 12:12:03.441561   73294 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0603 12:12:03.441580   73294 kubeadm.go:309] 
	I0603 12:12:03.441699   73294 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8444 --token 20dsgk.zbmo4be5tg5i1a9b \
	I0603 12:12:03.441848   73294 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:efe6cbdd58a590fb6c3b56a05a1648145bfb13b8a8cd2383ea34b710fa987e0b 
	I0603 12:12:03.442240   73294 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
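	Two follow-ups suggested by the kubeadm output above, written out as the commands an operator would run on the control-plane node (standard kubeadm/systemd invocations, not something this harness executes):

sudo systemctl enable kubelet.service            # addresses the [WARNING Service-Kubelet] message above
sudo kubeadm token create --print-join-command   # regenerates the join command if the bootstrap token above expires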
	I0603 12:12:03.442374   73294 cni.go:84] Creating CNI manager for ""
	I0603 12:12:03.442392   73294 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0603 12:12:03.444302   73294 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0603 12:12:03.644388   73179 kubeadm.go:309] [init] Using Kubernetes version: v1.30.1
	I0603 12:12:03.644489   73179 kubeadm.go:309] [preflight] Running pre-flight checks
	I0603 12:12:03.644596   73179 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0603 12:12:03.644742   73179 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0603 12:12:03.644874   73179 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0603 12:12:03.644953   73179 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0603 12:12:03.646392   73179 out.go:204]   - Generating certificates and keys ...
	I0603 12:12:03.646520   73179 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0603 12:12:03.646605   73179 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0603 12:12:03.646715   73179 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0603 12:12:03.646801   73179 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0603 12:12:03.646896   73179 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0603 12:12:03.646980   73179 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0603 12:12:03.647082   73179 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0603 12:12:03.647168   73179 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0603 12:12:03.647266   73179 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0603 12:12:03.647383   73179 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0603 12:12:03.647448   73179 kubeadm.go:309] [certs] Using the existing "sa" key
	I0603 12:12:03.647527   73179 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0603 12:12:03.647596   73179 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0603 12:12:03.647678   73179 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0603 12:12:03.647753   73179 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0603 12:12:03.647850   73179 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0603 12:12:03.647939   73179 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0603 12:12:03.648064   73179 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0603 12:12:03.648163   73179 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0603 12:12:03.649552   73179 out.go:204]   - Booting up control plane ...
	I0603 12:12:03.649660   73179 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0603 12:12:03.649772   73179 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0603 12:12:03.649884   73179 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0603 12:12:03.650017   73179 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0603 12:12:03.650139   73179 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0603 12:12:03.650211   73179 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0603 12:12:03.650408   73179 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0603 12:12:03.650515   73179 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0603 12:12:03.650591   73179 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 1.002065022s
	I0603 12:12:03.650698   73179 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0603 12:12:03.650789   73179 kubeadm.go:309] [api-check] The API server is healthy after 5.002076943s
	I0603 12:12:03.650915   73179 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0603 12:12:03.651093   73179 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0603 12:12:03.651168   73179 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0603 12:12:03.651414   73179 kubeadm.go:309] [mark-control-plane] Marking the node no-preload-602118 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0603 12:12:03.651488   73179 kubeadm.go:309] [bootstrap-token] Using token: shx5vv.etzadsstlalifeo7
	I0603 12:12:03.652942   73179 out.go:204]   - Configuring RBAC rules ...
	I0603 12:12:03.653061   73179 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0603 12:12:03.653174   73179 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0603 12:12:03.653347   73179 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0603 12:12:03.653531   73179 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0603 12:12:03.653674   73179 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0603 12:12:03.653781   73179 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0603 12:12:03.653925   73179 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0603 12:12:03.653965   73179 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0603 12:12:03.654004   73179 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0603 12:12:03.654010   73179 kubeadm.go:309] 
	I0603 12:12:03.654057   73179 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0603 12:12:03.654063   73179 kubeadm.go:309] 
	I0603 12:12:03.654125   73179 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0603 12:12:03.654131   73179 kubeadm.go:309] 
	I0603 12:12:03.654151   73179 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0603 12:12:03.654199   73179 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0603 12:12:03.654242   73179 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0603 12:12:03.654250   73179 kubeadm.go:309] 
	I0603 12:12:03.654300   73179 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0603 12:12:03.654306   73179 kubeadm.go:309] 
	I0603 12:12:03.654350   73179 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0603 12:12:03.654356   73179 kubeadm.go:309] 
	I0603 12:12:03.654397   73179 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0603 12:12:03.654467   73179 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0603 12:12:03.654524   73179 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0603 12:12:03.654530   73179 kubeadm.go:309] 
	I0603 12:12:03.654595   73179 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0603 12:12:03.654658   73179 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0603 12:12:03.654664   73179 kubeadm.go:309] 
	I0603 12:12:03.654729   73179 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token shx5vv.etzadsstlalifeo7 \
	I0603 12:12:03.654845   73179 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:efe6cbdd58a590fb6c3b56a05a1648145bfb13b8a8cd2383ea34b710fa987e0b \
	I0603 12:12:03.654880   73179 kubeadm.go:309] 	--control-plane 
	I0603 12:12:03.654886   73179 kubeadm.go:309] 
	I0603 12:12:03.655004   73179 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0603 12:12:03.655019   73179 kubeadm.go:309] 
	I0603 12:12:03.655117   73179 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token shx5vv.etzadsstlalifeo7 \
	I0603 12:12:03.655267   73179 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:efe6cbdd58a590fb6c3b56a05a1648145bfb13b8a8cd2383ea34b710fa987e0b 
	I0603 12:12:03.655306   73179 cni.go:84] Creating CNI manager for ""
	I0603 12:12:03.655316   73179 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0603 12:12:03.656746   73179 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0603 12:12:03.445612   73294 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0603 12:12:03.459114   73294 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
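	The 496-byte file pushed above is the bridge CNI configuration minikube generates when the kvm2 driver is paired with the crio runtime. Its exact contents are not echoed into the log; the snippet below is only an illustrative bridge/host-local conflist of the same shape (plugin settings and the pod subnet are assumptions, not the actual file):

sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isDefaultGateway": true,
      "ipMasq": true,
      "hairpinMode": true,
      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
    },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}
EOF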
	I0603 12:12:03.479003   73294 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0603 12:12:03.479128   73294 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:12:03.479139   73294 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-196710 minikube.k8s.io/updated_at=2024_06_03T12_12_03_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=599070631c2216ebc936292d491e4fe10e15b9d8 minikube.k8s.io/name=default-k8s-diff-port-196710 minikube.k8s.io/primary=true
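	The two kubectl invocations above grant cluster-admin to kube-system's default service account (the minikube-rbac binding) and stamp the minikube metadata labels onto the node. Both can be verified afterwards with ordinary kubectl queries, for example:

kubectl get clusterrolebinding minikube-rbac -o wide
kubectl get node default-k8s-diff-port-196710 --show-labels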
	I0603 12:12:03.506970   73294 ops.go:34] apiserver oom_adj: -16
	I0603 12:12:03.684097   73294 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:12:04.185124   73294 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:12:01.667542   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:12:03.669066   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:12:03.657886   73179 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0603 12:12:03.672430   73179 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0603 12:12:03.693536   73179 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0603 12:12:03.693627   73179 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:12:03.693658   73179 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-602118 minikube.k8s.io/updated_at=2024_06_03T12_12_03_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=599070631c2216ebc936292d491e4fe10e15b9d8 minikube.k8s.io/name=no-preload-602118 minikube.k8s.io/primary=true
	I0603 12:12:03.730215   73179 ops.go:34] apiserver oom_adj: -16
	I0603 12:12:03.897726   73179 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:12:04.398585   73179 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:12:04.898543   73179 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:12:04.684589   73294 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:12:05.184999   73294 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:12:05.685081   73294 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:12:06.185212   73294 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:12:06.684565   73294 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:12:07.184862   73294 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:12:07.684542   73294 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:12:08.184516   73294 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:12:08.684333   73294 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:12:09.184426   73294 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:12:06.166490   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:12:08.167169   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:12:08.661107   72964 pod_ready.go:81] duration metric: took 4m0.000791246s for pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace to be "Ready" ...
	E0603 12:12:08.661143   72964 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace to be "Ready" (will not retry!)
	I0603 12:12:08.661161   72964 pod_ready.go:38] duration metric: took 4m12.610770004s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0603 12:12:08.661187   72964 kubeadm.go:591] duration metric: took 4m20.419490743s to restartPrimaryControlPlane
	W0603 12:12:08.661235   72964 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0603 12:12:08.661255   72964 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
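	The metrics-server pod above never reported Ready inside the 4m window, so minikube abandons the control-plane restart and falls back to a full kubeadm reset. When investigating this kind of timeout by hand, the usual starting points are the pod's events and logs (illustrative commands, not part of the test run; the pod name is the one from this log):

kubectl -n kube-system describe pod metrics-server-569cc877fc-8jrnd
kubectl -n kube-system logs metrics-server-569cc877fc-8jrnd
kubectl -n kube-system get events --sort-by=.lastTimestamp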
	I0603 12:12:05.398640   73179 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:12:05.898522   73179 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:12:06.397948   73179 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:12:06.897958   73179 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:12:07.397912   73179 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:12:07.898059   73179 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:12:08.398372   73179 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:12:08.897877   73179 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:12:09.397861   73179 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:12:09.898541   73179 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:12:09.684787   73294 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:12:10.184277   73294 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:12:10.684146   73294 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:12:11.184402   73294 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:12:11.684199   73294 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:12:12.184770   73294 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:12:12.684964   73294 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:12:13.184228   73294 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:12:13.684160   73294 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:12:14.184443   73294 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:12:10.398126   73179 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:12:10.898790   73179 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:12:11.398275   73179 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:12:11.897874   73179 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:12:12.398040   73179 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:12:12.898813   73179 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:12:13.398175   73179 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:12:13.897789   73179 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:12:14.398202   73179 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:12:14.898444   73179 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:12:15.398430   73179 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:12:15.897913   73179 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:12:15.999563   73179 kubeadm.go:1107] duration metric: took 12.305979901s to wait for elevateKubeSystemPrivileges
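	The long run of "kubectl get sa default" calls above is minikube polling until the default ServiceAccount exists, which is what elevateKubeSystemPrivileges waits for before the cluster-admin binding is considered usable. An equivalent stand-alone wait, sketched in shell (the retry interval is inferred from the roughly 500ms spacing of the timestamps above):

until sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default \
    --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
  sleep 0.5
done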
	W0603 12:12:15.999608   73179 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0603 12:12:15.999618   73179 kubeadm.go:393] duration metric: took 5m16.666049314s to StartCluster
	I0603 12:12:15.999646   73179 settings.go:142] acquiring lock: {Name:mkda1bdbbfe91266270f1d999e6d56fc2830d6f8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 12:12:15.999745   73179 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19008-7755/kubeconfig
	I0603 12:12:16.002178   73179 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19008-7755/kubeconfig: {Name:mk2f45bf5da44c5570e14124ae482cac309d97b6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 12:12:16.002496   73179 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.50.245 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0603 12:12:16.003826   73179 out.go:177] * Verifying Kubernetes components...
	I0603 12:12:16.002629   73179 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0603 12:12:16.002754   73179 config.go:182] Loaded profile config "no-preload-602118": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
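	The toEnable map above shows which addons are switched on for this profile: metrics-server, storage-provisioner and default-storageclass, with everything else left disabled. The same state can be inspected or changed from the minikube CLI, for example:

minikube addons list -p no-preload-602118
minikube addons enable metrics-server -p no-preload-602118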
	I0603 12:12:16.005034   73179 addons.go:69] Setting storage-provisioner=true in profile "no-preload-602118"
	I0603 12:12:16.005049   73179 addons.go:69] Setting metrics-server=true in profile "no-preload-602118"
	I0603 12:12:16.005048   73179 addons.go:69] Setting default-storageclass=true in profile "no-preload-602118"
	I0603 12:12:16.005080   73179 addons.go:234] Setting addon metrics-server=true in "no-preload-602118"
	W0603 12:12:16.005095   73179 addons.go:243] addon metrics-server should already be in state true
	I0603 12:12:16.005095   73179 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-602118"
	I0603 12:12:16.005121   73179 host.go:66] Checking if "no-preload-602118" exists ...
	I0603 12:12:16.005082   73179 addons.go:234] Setting addon storage-provisioner=true in "no-preload-602118"
	W0603 12:12:16.005147   73179 addons.go:243] addon storage-provisioner should already be in state true
	I0603 12:12:16.005184   73179 host.go:66] Checking if "no-preload-602118" exists ...
	I0603 12:12:16.005039   73179 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0603 12:12:16.005558   73179 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 12:12:16.005568   73179 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 12:12:16.005562   73179 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 12:12:16.005594   73179 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 12:12:16.005613   73179 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 12:12:16.005592   73179 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 12:12:16.025576   73179 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37907
	I0603 12:12:16.025614   73179 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33735
	I0603 12:12:16.025580   73179 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34403
	I0603 12:12:16.026031   73179 main.go:141] libmachine: () Calling .GetVersion
	I0603 12:12:16.026071   73179 main.go:141] libmachine: () Calling .GetVersion
	I0603 12:12:16.026136   73179 main.go:141] libmachine: () Calling .GetVersion
	I0603 12:12:16.026534   73179 main.go:141] libmachine: Using API Version  1
	I0603 12:12:16.026549   73179 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 12:12:16.026534   73179 main.go:141] libmachine: Using API Version  1
	I0603 12:12:16.026662   73179 main.go:141] libmachine: Using API Version  1
	I0603 12:12:16.026762   73179 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 12:12:16.026781   73179 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 12:12:16.026868   73179 main.go:141] libmachine: () Calling .GetMachineName
	I0603 12:12:16.027104   73179 main.go:141] libmachine: () Calling .GetMachineName
	I0603 12:12:16.027174   73179 main.go:141] libmachine: () Calling .GetMachineName
	I0603 12:12:16.027270   73179 main.go:141] libmachine: (no-preload-602118) Calling .GetState
	I0603 12:12:16.027448   73179 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 12:12:16.027481   73179 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 12:12:16.027667   73179 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 12:12:16.027693   73179 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 12:12:16.031436   73179 addons.go:234] Setting addon default-storageclass=true in "no-preload-602118"
	W0603 12:12:16.031458   73179 addons.go:243] addon default-storageclass should already be in state true
	I0603 12:12:16.031487   73179 host.go:66] Checking if "no-preload-602118" exists ...
	I0603 12:12:16.031838   73179 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 12:12:16.031870   73179 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 12:12:16.043477   73179 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43369
	I0603 12:12:16.043659   73179 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38809
	I0603 12:12:16.044102   73179 main.go:141] libmachine: () Calling .GetVersion
	I0603 12:12:16.044124   73179 main.go:141] libmachine: () Calling .GetVersion
	I0603 12:12:16.044746   73179 main.go:141] libmachine: Using API Version  1
	I0603 12:12:16.044763   73179 main.go:141] libmachine: Using API Version  1
	I0603 12:12:16.044767   73179 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 12:12:16.044779   73179 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 12:12:16.045175   73179 main.go:141] libmachine: () Calling .GetMachineName
	I0603 12:12:16.045364   73179 main.go:141] libmachine: (no-preload-602118) Calling .GetState
	I0603 12:12:16.045406   73179 main.go:141] libmachine: () Calling .GetMachineName
	I0603 12:12:16.045571   73179 main.go:141] libmachine: (no-preload-602118) Calling .GetState
	I0603 12:12:16.047312   73179 main.go:141] libmachine: (no-preload-602118) Calling .DriverName
	I0603 12:12:16.047741   73179 main.go:141] libmachine: (no-preload-602118) Calling .DriverName
	I0603 12:12:16.049538   73179 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0603 12:12:16.048146   73179 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35375
	I0603 12:12:16.050862   73179 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0603 12:12:16.050892   73179 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0603 12:12:16.050897   73179 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0603 12:12:16.050908   73179 main.go:141] libmachine: (no-preload-602118) Calling .GetSSHHostname
	I0603 12:12:14.684713   73294 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:12:15.184206   73294 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:12:15.684798   73294 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:12:16.184405   73294 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:12:16.684720   73294 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:12:16.818407   73294 kubeadm.go:1107] duration metric: took 13.339334124s to wait for elevateKubeSystemPrivileges
	W0603 12:12:16.818450   73294 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0603 12:12:16.818460   73294 kubeadm.go:393] duration metric: took 5m7.432855804s to StartCluster
	I0603 12:12:16.818480   73294 settings.go:142] acquiring lock: {Name:mkda1bdbbfe91266270f1d999e6d56fc2830d6f8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 12:12:16.818573   73294 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19008-7755/kubeconfig
	I0603 12:12:16.821192   73294 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19008-7755/kubeconfig: {Name:mk2f45bf5da44c5570e14124ae482cac309d97b6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 12:12:16.821483   73294 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.61.60 Port:8444 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0603 12:12:16.823082   73294 out.go:177] * Verifying Kubernetes components...
	I0603 12:12:16.821572   73294 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0603 12:12:16.821670   73294 config.go:182] Loaded profile config "default-k8s-diff-port-196710": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0603 12:12:16.824703   73294 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0603 12:12:16.824719   73294 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-196710"
	I0603 12:12:16.824760   73294 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-196710"
	I0603 12:12:16.824710   73294 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-196710"
	W0603 12:12:16.824772   73294 addons.go:243] addon metrics-server should already be in state true
	I0603 12:12:16.824795   73294 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-196710"
	I0603 12:12:16.824802   73294 host.go:66] Checking if "default-k8s-diff-port-196710" exists ...
	W0603 12:12:16.824808   73294 addons.go:243] addon storage-provisioner should already be in state true
	I0603 12:12:16.824723   73294 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-196710"
	I0603 12:12:16.824843   73294 host.go:66] Checking if "default-k8s-diff-port-196710" exists ...
	I0603 12:12:16.824851   73294 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-196710"
	I0603 12:12:16.825222   73294 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 12:12:16.825241   73294 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 12:12:16.825250   73294 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 12:12:16.825264   73294 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 12:12:16.825228   73294 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 12:12:16.825354   73294 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 12:12:16.843187   73294 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41289
	I0603 12:12:16.843659   73294 main.go:141] libmachine: () Calling .GetVersion
	I0603 12:12:16.844379   73294 main.go:141] libmachine: Using API Version  1
	I0603 12:12:16.844407   73294 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 12:12:16.844784   73294 main.go:141] libmachine: () Calling .GetMachineName
	I0603 12:12:16.845314   73294 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 12:12:16.845353   73294 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 12:12:16.845975   73294 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46095
	I0603 12:12:16.846379   73294 main.go:141] libmachine: () Calling .GetVersion
	I0603 12:12:16.846856   73294 main.go:141] libmachine: Using API Version  1
	I0603 12:12:16.846875   73294 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 12:12:16.847307   73294 main.go:141] libmachine: () Calling .GetMachineName
	I0603 12:12:16.847921   73294 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 12:12:16.847944   73294 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 12:12:16.848622   73294 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45613
	I0603 12:12:16.849007   73294 main.go:141] libmachine: () Calling .GetVersion
	I0603 12:12:16.849505   73294 main.go:141] libmachine: Using API Version  1
	I0603 12:12:16.849527   73294 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 12:12:16.849888   73294 main.go:141] libmachine: () Calling .GetMachineName
	I0603 12:12:16.850120   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .GetState
	I0603 12:12:16.853711   73294 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-196710"
	W0603 12:12:16.853732   73294 addons.go:243] addon default-storageclass should already be in state true
	I0603 12:12:16.853758   73294 host.go:66] Checking if "default-k8s-diff-port-196710" exists ...
	I0603 12:12:16.854106   73294 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 12:12:16.854143   73294 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 12:12:16.874485   73294 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41485
	I0603 12:12:16.874543   73294 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40823
	I0603 12:12:16.875013   73294 main.go:141] libmachine: () Calling .GetVersion
	I0603 12:12:16.875431   73294 main.go:141] libmachine: () Calling .GetVersion
	I0603 12:12:16.875601   73294 main.go:141] libmachine: Using API Version  1
	I0603 12:12:16.875619   73294 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 12:12:16.875983   73294 main.go:141] libmachine: () Calling .GetMachineName
	I0603 12:12:16.875970   73294 main.go:141] libmachine: Using API Version  1
	I0603 12:12:16.876141   73294 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 12:12:16.876153   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .GetState
	I0603 12:12:16.876623   73294 main.go:141] libmachine: () Calling .GetMachineName
	I0603 12:12:16.877005   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .GetState
	I0603 12:12:16.878149   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .DriverName
	I0603 12:12:16.879857   73294 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0603 12:12:16.881339   73294 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0603 12:12:16.881357   73294 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0603 12:12:16.881384   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .GetSSHHostname
	I0603 12:12:16.883128   73294 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42307
	I0603 12:12:16.883690   73294 main.go:141] libmachine: () Calling .GetVersion
	I0603 12:12:16.883973   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .DriverName
	I0603 12:12:16.884247   73294 main.go:141] libmachine: Using API Version  1
	I0603 12:12:16.884263   73294 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 12:12:16.885697   73294 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0603 12:12:16.052190   73179 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0603 12:12:16.052208   73179 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0603 12:12:16.052226   73179 main.go:141] libmachine: (no-preload-602118) Calling .GetSSHHostname
	I0603 12:12:16.051450   73179 main.go:141] libmachine: () Calling .GetVersion
	I0603 12:12:16.053253   73179 main.go:141] libmachine: Using API Version  1
	I0603 12:12:16.053274   73179 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 12:12:16.053684   73179 main.go:141] libmachine: () Calling .GetMachineName
	I0603 12:12:16.054284   73179 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 12:12:16.054309   73179 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 12:12:16.054504   73179 main.go:141] libmachine: (no-preload-602118) DBG | domain no-preload-602118 has defined MAC address 52:54:00:ac:6c:91 in network mk-no-preload-602118
	I0603 12:12:16.054885   73179 main.go:141] libmachine: (no-preload-602118) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:6c:91", ip: ""} in network mk-no-preload-602118: {Iface:virbr2 ExpiryTime:2024-06-03 13:06:33 +0000 UTC Type:0 Mac:52:54:00:ac:6c:91 Iaid: IPaddr:192.168.50.245 Prefix:24 Hostname:no-preload-602118 Clientid:01:52:54:00:ac:6c:91}
	I0603 12:12:16.054916   73179 main.go:141] libmachine: (no-preload-602118) DBG | domain no-preload-602118 has defined IP address 192.168.50.245 and MAC address 52:54:00:ac:6c:91 in network mk-no-preload-602118
	I0603 12:12:16.055640   73179 main.go:141] libmachine: (no-preload-602118) Calling .GetSSHPort
	I0603 12:12:16.055804   73179 main.go:141] libmachine: (no-preload-602118) Calling .GetSSHKeyPath
	I0603 12:12:16.055873   73179 main.go:141] libmachine: (no-preload-602118) DBG | domain no-preload-602118 has defined MAC address 52:54:00:ac:6c:91 in network mk-no-preload-602118
	I0603 12:12:16.055952   73179 main.go:141] libmachine: (no-preload-602118) Calling .GetSSHUsername
	I0603 12:12:16.056079   73179 sshutil.go:53] new ssh client: &{IP:192.168.50.245 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19008-7755/.minikube/machines/no-preload-602118/id_rsa Username:docker}
	I0603 12:12:16.056405   73179 main.go:141] libmachine: (no-preload-602118) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:6c:91", ip: ""} in network mk-no-preload-602118: {Iface:virbr2 ExpiryTime:2024-06-03 13:06:33 +0000 UTC Type:0 Mac:52:54:00:ac:6c:91 Iaid: IPaddr:192.168.50.245 Prefix:24 Hostname:no-preload-602118 Clientid:01:52:54:00:ac:6c:91}
	I0603 12:12:16.056431   73179 main.go:141] libmachine: (no-preload-602118) DBG | domain no-preload-602118 has defined IP address 192.168.50.245 and MAC address 52:54:00:ac:6c:91 in network mk-no-preload-602118
	I0603 12:12:16.056465   73179 main.go:141] libmachine: (no-preload-602118) Calling .GetSSHPort
	I0603 12:12:16.056633   73179 main.go:141] libmachine: (no-preload-602118) Calling .GetSSHKeyPath
	I0603 12:12:16.056879   73179 main.go:141] libmachine: (no-preload-602118) Calling .GetSSHUsername
	I0603 12:12:16.057006   73179 sshutil.go:53] new ssh client: &{IP:192.168.50.245 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19008-7755/.minikube/machines/no-preload-602118/id_rsa Username:docker}
	I0603 12:12:16.072215   73179 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39537
	I0603 12:12:16.072581   73179 main.go:141] libmachine: () Calling .GetVersion
	I0603 12:12:16.072913   73179 main.go:141] libmachine: Using API Version  1
	I0603 12:12:16.072924   73179 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 12:12:16.073189   73179 main.go:141] libmachine: () Calling .GetMachineName
	I0603 12:12:16.073304   73179 main.go:141] libmachine: (no-preload-602118) Calling .GetState
	I0603 12:12:16.074771   73179 main.go:141] libmachine: (no-preload-602118) Calling .DriverName
	I0603 12:12:16.074941   73179 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0603 12:12:16.074953   73179 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0603 12:12:16.074964   73179 main.go:141] libmachine: (no-preload-602118) Calling .GetSSHHostname
	I0603 12:12:16.077122   73179 main.go:141] libmachine: (no-preload-602118) DBG | domain no-preload-602118 has defined MAC address 52:54:00:ac:6c:91 in network mk-no-preload-602118
	I0603 12:12:16.077439   73179 main.go:141] libmachine: (no-preload-602118) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:6c:91", ip: ""} in network mk-no-preload-602118: {Iface:virbr2 ExpiryTime:2024-06-03 13:06:33 +0000 UTC Type:0 Mac:52:54:00:ac:6c:91 Iaid: IPaddr:192.168.50.245 Prefix:24 Hostname:no-preload-602118 Clientid:01:52:54:00:ac:6c:91}
	I0603 12:12:16.077456   73179 main.go:141] libmachine: (no-preload-602118) DBG | domain no-preload-602118 has defined IP address 192.168.50.245 and MAC address 52:54:00:ac:6c:91 in network mk-no-preload-602118
	I0603 12:12:16.077666   73179 main.go:141] libmachine: (no-preload-602118) Calling .GetSSHPort
	I0603 12:12:16.077790   73179 main.go:141] libmachine: (no-preload-602118) Calling .GetSSHKeyPath
	I0603 12:12:16.077893   73179 main.go:141] libmachine: (no-preload-602118) Calling .GetSSHUsername
	I0603 12:12:16.078025   73179 sshutil.go:53] new ssh client: &{IP:192.168.50.245 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19008-7755/.minikube/machines/no-preload-602118/id_rsa Username:docker}
	I0603 12:12:16.204391   73179 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0603 12:12:16.224077   73179 node_ready.go:35] waiting up to 6m0s for node "no-preload-602118" to be "Ready" ...
	I0603 12:12:16.234147   73179 node_ready.go:49] node "no-preload-602118" has status "Ready":"True"
	I0603 12:12:16.234165   73179 node_ready.go:38] duration metric: took 10.052016ms for node "no-preload-602118" to be "Ready" ...
	I0603 12:12:16.234174   73179 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0603 12:12:16.239106   73179 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-602118" in "kube-system" namespace to be "Ready" ...
	I0603 12:12:16.245931   73179 pod_ready.go:92] pod "etcd-no-preload-602118" in "kube-system" namespace has status "Ready":"True"
	I0603 12:12:16.245951   73179 pod_ready.go:81] duration metric: took 6.818123ms for pod "etcd-no-preload-602118" in "kube-system" namespace to be "Ready" ...
	I0603 12:12:16.245959   73179 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-602118" in "kube-system" namespace to be "Ready" ...
	I0603 12:12:16.251349   73179 pod_ready.go:92] pod "kube-apiserver-no-preload-602118" in "kube-system" namespace has status "Ready":"True"
	I0603 12:12:16.251368   73179 pod_ready.go:81] duration metric: took 5.403445ms for pod "kube-apiserver-no-preload-602118" in "kube-system" namespace to be "Ready" ...
	I0603 12:12:16.251379   73179 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-602118" in "kube-system" namespace to be "Ready" ...
	I0603 12:12:16.259769   73179 pod_ready.go:92] pod "kube-controller-manager-no-preload-602118" in "kube-system" namespace has status "Ready":"True"
	I0603 12:12:16.259787   73179 pod_ready.go:81] duration metric: took 8.400968ms for pod "kube-controller-manager-no-preload-602118" in "kube-system" namespace to be "Ready" ...
	I0603 12:12:16.259797   73179 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-602118" in "kube-system" namespace to be "Ready" ...
	I0603 12:12:16.271311   73179 pod_ready.go:92] pod "kube-scheduler-no-preload-602118" in "kube-system" namespace has status "Ready":"True"
	I0603 12:12:16.271335   73179 pod_ready.go:81] duration metric: took 11.529418ms for pod "kube-scheduler-no-preload-602118" in "kube-system" namespace to be "Ready" ...
	I0603 12:12:16.271344   73179 pod_ready.go:38] duration metric: took 37.160711ms for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
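	The node_ready/pod_ready helpers above poll the API until the node and the system-critical pods report Ready. Roughly the same check can be expressed with kubectl wait (an illustrative equivalent, not what minikube runs; the labels are the ones listed in the log):

kubectl wait --for=condition=Ready node/no-preload-602118 --timeout=6m
kubectl -n kube-system wait --for=condition=Ready pod \
  -l 'component in (etcd, kube-apiserver, kube-controller-manager, kube-scheduler)' --timeout=6m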
	I0603 12:12:16.271361   73179 api_server.go:52] waiting for apiserver process to appear ...
	I0603 12:12:16.271414   73179 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:12:16.299864   73179 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0603 12:12:16.312742   73179 api_server.go:72] duration metric: took 310.202333ms to wait for apiserver process to appear ...
	I0603 12:12:16.312769   73179 api_server.go:88] waiting for apiserver healthz status ...
	I0603 12:12:16.312789   73179 api_server.go:253] Checking apiserver healthz at https://192.168.50.245:8443/healthz ...
	I0603 12:12:16.332856   73179 api_server.go:279] https://192.168.50.245:8443/healthz returned 200:
	ok
	I0603 12:12:16.334897   73179 api_server.go:141] control plane version: v1.30.1
	I0603 12:12:16.334922   73179 api_server.go:131] duration metric: took 22.144726ms to wait for apiserver health ...
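	The healthz probe above can be reproduced by hand against the same endpoint; two illustrative ways to do it (the CA path and the context name follow minikube's usual conventions and are assumptions here):

curl --cacert ~/.minikube/ca.crt https://192.168.50.245:8443/healthz   # expects the literal response "ok"
kubectl --context no-preload-602118 get --raw /healthz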
	I0603 12:12:16.334932   73179 system_pods.go:43] waiting for kube-system pods to appear ...
	I0603 12:12:16.354509   73179 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0603 12:12:16.377512   73179 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0603 12:12:16.377540   73179 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0603 12:12:16.428770   73179 system_pods.go:59] 4 kube-system pods found
	I0603 12:12:16.428807   73179 system_pods.go:61] "etcd-no-preload-602118" [d66d136a-b6c8-411c-b820-3e80f773accf] Running
	I0603 12:12:16.428815   73179 system_pods.go:61] "kube-apiserver-no-preload-602118" [299b92ab-ea99-45f5-b55d-b66a06daeaeb] Running
	I0603 12:12:16.428820   73179 system_pods.go:61] "kube-controller-manager-no-preload-602118" [16fd06ad-ff2d-4392-b453-9a8ed782b581] Running
	I0603 12:12:16.428825   73179 system_pods.go:61] "kube-scheduler-no-preload-602118" [71ebde21-9840-43f5-b0a3-424e1fd2000e] Running
	I0603 12:12:16.428833   73179 system_pods.go:74] duration metric: took 93.893548ms to wait for pod list to return data ...
	I0603 12:12:16.428841   73179 default_sa.go:34] waiting for default service account to be created ...
	I0603 12:12:16.438619   73179 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0603 12:12:16.438645   73179 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0603 12:12:16.495189   73179 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0603 12:12:16.495218   73179 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0603 12:12:16.543072   73179 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
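Each addon apply in the log is the bundled kubectl binary run on the guest with KUBECONFIG pointed at the cluster's admin config. A sketch of that invocation pattern from Go is shown below; the paths are copied from the log and the program only works where those files actually exist.

```go
package main

import (
	"log"
	"os"
	"os/exec"
)

func main() {
	cmd := exec.Command(
		"/var/lib/minikube/binaries/v1.30.1/kubectl",
		"apply",
		"-f", "/etc/kubernetes/addons/metrics-apiservice.yaml",
		"-f", "/etc/kubernetes/addons/metrics-server-deployment.yaml",
		"-f", "/etc/kubernetes/addons/metrics-server-rbac.yaml",
		"-f", "/etc/kubernetes/addons/metrics-server-service.yaml",
	)
	// The log sets KUBECONFIG explicitly rather than relying on $HOME/.kube/config.
	cmd.Env = append(os.Environ(), "KUBECONFIG=/var/lib/minikube/kubeconfig")
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		log.Fatal(err)
	}
}
```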
	I0603 12:12:16.666123   73179 default_sa.go:45] found service account: "default"
	I0603 12:12:16.666154   73179 default_sa.go:55] duration metric: took 237.305488ms for default service account to be created ...
	I0603 12:12:16.666163   73179 system_pods.go:116] waiting for k8s-apps to be running ...
	I0603 12:12:16.860342   73179 system_pods.go:86] 7 kube-system pods found
	I0603 12:12:16.860387   73179 system_pods.go:89] "coredns-7db6d8ff4d-5gmj5" [474da426-9414-4a30-8b19-14e555e192de] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0603 12:12:16.860401   73179 system_pods.go:89] "coredns-7db6d8ff4d-dwptw" [7a0437fe-8e83-4acc-a92a-af29bf06db93] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0603 12:12:16.860410   73179 system_pods.go:89] "etcd-no-preload-602118" [d66d136a-b6c8-411c-b820-3e80f773accf] Running
	I0603 12:12:16.860419   73179 system_pods.go:89] "kube-apiserver-no-preload-602118" [299b92ab-ea99-45f5-b55d-b66a06daeaeb] Running
	I0603 12:12:16.860427   73179 system_pods.go:89] "kube-controller-manager-no-preload-602118" [16fd06ad-ff2d-4392-b453-9a8ed782b581] Running
	I0603 12:12:16.860436   73179 system_pods.go:89] "kube-proxy-tfxkl" [d6502635-478f-443c-8186-ab0616fcf4ac] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0603 12:12:16.860443   73179 system_pods.go:89] "kube-scheduler-no-preload-602118" [71ebde21-9840-43f5-b0a3-424e1fd2000e] Running
	I0603 12:12:16.860466   73179 retry.go:31] will retry after 306.693518ms: missing components: kube-dns, kube-proxy
	I0603 12:12:17.184783   73179 system_pods.go:86] 7 kube-system pods found
	I0603 12:12:17.184828   73179 system_pods.go:89] "coredns-7db6d8ff4d-5gmj5" [474da426-9414-4a30-8b19-14e555e192de] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0603 12:12:17.184840   73179 system_pods.go:89] "coredns-7db6d8ff4d-dwptw" [7a0437fe-8e83-4acc-a92a-af29bf06db93] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0603 12:12:17.184852   73179 system_pods.go:89] "etcd-no-preload-602118" [d66d136a-b6c8-411c-b820-3e80f773accf] Running
	I0603 12:12:17.184860   73179 system_pods.go:89] "kube-apiserver-no-preload-602118" [299b92ab-ea99-45f5-b55d-b66a06daeaeb] Running
	I0603 12:12:17.184868   73179 system_pods.go:89] "kube-controller-manager-no-preload-602118" [16fd06ad-ff2d-4392-b453-9a8ed782b581] Running
	I0603 12:12:17.184880   73179 system_pods.go:89] "kube-proxy-tfxkl" [d6502635-478f-443c-8186-ab0616fcf4ac] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0603 12:12:17.184891   73179 system_pods.go:89] "kube-scheduler-no-preload-602118" [71ebde21-9840-43f5-b0a3-424e1fd2000e] Running
	I0603 12:12:17.184916   73179 retry.go:31] will retry after 329.094905ms: missing components: kube-dns, kube-proxy
	I0603 12:12:17.415182   73179 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.060631588s)
	I0603 12:12:17.415242   73179 main.go:141] libmachine: Making call to close driver server
	I0603 12:12:17.415255   73179 main.go:141] libmachine: (no-preload-602118) Calling .Close
	I0603 12:12:17.415284   73179 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.115379891s)
	I0603 12:12:17.415326   73179 main.go:141] libmachine: Making call to close driver server
	I0603 12:12:17.415336   73179 main.go:141] libmachine: (no-preload-602118) Calling .Close
	I0603 12:12:17.415714   73179 main.go:141] libmachine: (no-preload-602118) DBG | Closing plugin on server side
	I0603 12:12:17.415719   73179 main.go:141] libmachine: (no-preload-602118) DBG | Closing plugin on server side
	I0603 12:12:17.415725   73179 main.go:141] libmachine: Successfully made call to close driver server
	I0603 12:12:17.415745   73179 main.go:141] libmachine: Making call to close connection to plugin binary
	I0603 12:12:17.415751   73179 main.go:141] libmachine: Successfully made call to close driver server
	I0603 12:12:17.415779   73179 main.go:141] libmachine: Making call to close connection to plugin binary
	I0603 12:12:17.415793   73179 main.go:141] libmachine: Making call to close driver server
	I0603 12:12:17.415804   73179 main.go:141] libmachine: (no-preload-602118) Calling .Close
	I0603 12:12:17.415753   73179 main.go:141] libmachine: Making call to close driver server
	I0603 12:12:17.415859   73179 main.go:141] libmachine: (no-preload-602118) Calling .Close
	I0603 12:12:17.416049   73179 main.go:141] libmachine: Successfully made call to close driver server
	I0603 12:12:17.416063   73179 main.go:141] libmachine: Making call to close connection to plugin binary
	I0603 12:12:17.417320   73179 main.go:141] libmachine: (no-preload-602118) DBG | Closing plugin on server side
	I0603 12:12:17.417366   73179 main.go:141] libmachine: Successfully made call to close driver server
	I0603 12:12:17.417391   73179 main.go:141] libmachine: Making call to close connection to plugin binary
	I0603 12:12:17.434040   73179 main.go:141] libmachine: Making call to close driver server
	I0603 12:12:17.434072   73179 main.go:141] libmachine: (no-preload-602118) Calling .Close
	I0603 12:12:17.434410   73179 main.go:141] libmachine: (no-preload-602118) DBG | Closing plugin on server side
	I0603 12:12:17.434434   73179 main.go:141] libmachine: Successfully made call to close driver server
	I0603 12:12:17.434445   73179 main.go:141] libmachine: Making call to close connection to plugin binary
	I0603 12:12:17.527445   73179 system_pods.go:86] 8 kube-system pods found
	I0603 12:12:17.527486   73179 system_pods.go:89] "coredns-7db6d8ff4d-5gmj5" [474da426-9414-4a30-8b19-14e555e192de] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0603 12:12:17.527499   73179 system_pods.go:89] "coredns-7db6d8ff4d-dwptw" [7a0437fe-8e83-4acc-a92a-af29bf06db93] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0603 12:12:17.527508   73179 system_pods.go:89] "etcd-no-preload-602118" [d66d136a-b6c8-411c-b820-3e80f773accf] Running
	I0603 12:12:17.527516   73179 system_pods.go:89] "kube-apiserver-no-preload-602118" [299b92ab-ea99-45f5-b55d-b66a06daeaeb] Running
	I0603 12:12:17.527524   73179 system_pods.go:89] "kube-controller-manager-no-preload-602118" [16fd06ad-ff2d-4392-b453-9a8ed782b581] Running
	I0603 12:12:17.527533   73179 system_pods.go:89] "kube-proxy-tfxkl" [d6502635-478f-443c-8186-ab0616fcf4ac] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0603 12:12:17.527540   73179 system_pods.go:89] "kube-scheduler-no-preload-602118" [71ebde21-9840-43f5-b0a3-424e1fd2000e] Running
	I0603 12:12:17.527551   73179 system_pods.go:89] "storage-provisioner" [9d9e7c2b-91a9-4394-8a08-a2c076d4b42d] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0603 12:12:17.527591   73179 retry.go:31] will retry after 346.068859ms: missing components: kube-dns, kube-proxy
	I0603 12:12:17.908653   73179 system_pods.go:86] 9 kube-system pods found
	I0603 12:12:17.908695   73179 system_pods.go:89] "coredns-7db6d8ff4d-5gmj5" [474da426-9414-4a30-8b19-14e555e192de] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0603 12:12:17.908706   73179 system_pods.go:89] "coredns-7db6d8ff4d-dwptw" [7a0437fe-8e83-4acc-a92a-af29bf06db93] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0603 12:12:17.908713   73179 system_pods.go:89] "etcd-no-preload-602118" [d66d136a-b6c8-411c-b820-3e80f773accf] Running
	I0603 12:12:17.908721   73179 system_pods.go:89] "kube-apiserver-no-preload-602118" [299b92ab-ea99-45f5-b55d-b66a06daeaeb] Running
	I0603 12:12:17.908728   73179 system_pods.go:89] "kube-controller-manager-no-preload-602118" [16fd06ad-ff2d-4392-b453-9a8ed782b581] Running
	I0603 12:12:17.908736   73179 system_pods.go:89] "kube-proxy-tfxkl" [d6502635-478f-443c-8186-ab0616fcf4ac] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0603 12:12:17.908743   73179 system_pods.go:89] "kube-scheduler-no-preload-602118" [71ebde21-9840-43f5-b0a3-424e1fd2000e] Running
	I0603 12:12:17.908753   73179 system_pods.go:89] "metrics-server-569cc877fc-zpzbw" [b28cb265-532b-41ea-a242-001a85174a35] Pending
	I0603 12:12:17.908761   73179 system_pods.go:89] "storage-provisioner" [9d9e7c2b-91a9-4394-8a08-a2c076d4b42d] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0603 12:12:17.908779   73179 retry.go:31] will retry after 517.651766ms: missing components: kube-dns, kube-proxy
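The retry.go lines re-list the kube-system pods after a short delay until kube-dns and kube-proxy show up as Running. A generic retry-until-ready loop in that spirit is sketched below; the backoff values and growth factor are arbitrary stand-ins, not minikube's actual retry policy.

```go
package main

import (
	"errors"
	"fmt"
	"time"
)

// retryUntil calls check until it returns nil or the deadline passes,
// sleeping a growing interval between attempts.
func retryUntil(timeout time.Duration, check func() error) error {
	deadline := time.Now().Add(timeout)
	delay := 300 * time.Millisecond
	for {
		err := check()
		if err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("timed out: %w", err)
		}
		fmt.Printf("will retry after %v: %v\n", delay, err)
		time.Sleep(delay)
		delay = delay * 3 / 2 // grow the wait between attempts, loosely like the log
	}
}

func main() {
	attempts := 0
	err := retryUntil(5*time.Second, func() error {
		attempts++
		if attempts < 4 {
			return errors.New("missing components: kube-dns, kube-proxy")
		}
		return nil
	})
	fmt.Println("result:", err)
}
```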
	I0603 12:12:18.135778   73179 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.592660253s)
	I0603 12:12:18.135904   73179 main.go:141] libmachine: Making call to close driver server
	I0603 12:12:18.135945   73179 main.go:141] libmachine: (no-preload-602118) Calling .Close
	I0603 12:12:18.137972   73179 main.go:141] libmachine: (no-preload-602118) DBG | Closing plugin on server side
	I0603 12:12:18.138016   73179 main.go:141] libmachine: Successfully made call to close driver server
	I0603 12:12:18.138040   73179 main.go:141] libmachine: Making call to close connection to plugin binary
	I0603 12:12:18.138060   73179 main.go:141] libmachine: Making call to close driver server
	I0603 12:12:18.138071   73179 main.go:141] libmachine: (no-preload-602118) Calling .Close
	I0603 12:12:18.138394   73179 main.go:141] libmachine: (no-preload-602118) DBG | Closing plugin on server side
	I0603 12:12:18.138435   73179 main.go:141] libmachine: Successfully made call to close driver server
	I0603 12:12:18.138452   73179 main.go:141] libmachine: Making call to close connection to plugin binary
	I0603 12:12:18.138467   73179 addons.go:475] Verifying addon metrics-server=true in "no-preload-602118"
	I0603 12:12:18.139950   73179 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0603 12:12:16.887014   73294 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0603 12:12:16.887031   73294 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0603 12:12:16.887059   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .GetSSHHostname
	I0603 12:12:16.884952   73294 main.go:141] libmachine: () Calling .GetMachineName
	I0603 12:12:16.885388   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | domain default-k8s-diff-port-196710 has defined MAC address 52:54:00:9c:61:49 in network mk-default-k8s-diff-port-196710
	I0603 12:12:16.887151   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:61:49", ip: ""} in network mk-default-k8s-diff-port-196710: {Iface:virbr4 ExpiryTime:2024-06-03 12:58:47 +0000 UTC Type:0 Mac:52:54:00:9c:61:49 Iaid: IPaddr:192.168.61.60 Prefix:24 Hostname:default-k8s-diff-port-196710 Clientid:01:52:54:00:9c:61:49}
	I0603 12:12:16.887173   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | domain default-k8s-diff-port-196710 has defined IP address 192.168.61.60 and MAC address 52:54:00:9c:61:49 in network mk-default-k8s-diff-port-196710
	I0603 12:12:16.887719   73294 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 12:12:16.887741   73294 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 12:12:16.887932   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .GetSSHPort
	I0603 12:12:16.888207   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .GetSSHKeyPath
	I0603 12:12:16.888429   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .GetSSHUsername
	I0603 12:12:16.889197   73294 sshutil.go:53] new ssh client: &{IP:192.168.61.60 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19008-7755/.minikube/machines/default-k8s-diff-port-196710/id_rsa Username:docker}
	I0603 12:12:16.891158   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | domain default-k8s-diff-port-196710 has defined MAC address 52:54:00:9c:61:49 in network mk-default-k8s-diff-port-196710
	I0603 12:12:16.891613   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:61:49", ip: ""} in network mk-default-k8s-diff-port-196710: {Iface:virbr4 ExpiryTime:2024-06-03 12:58:47 +0000 UTC Type:0 Mac:52:54:00:9c:61:49 Iaid: IPaddr:192.168.61.60 Prefix:24 Hostname:default-k8s-diff-port-196710 Clientid:01:52:54:00:9c:61:49}
	I0603 12:12:16.891639   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | domain default-k8s-diff-port-196710 has defined IP address 192.168.61.60 and MAC address 52:54:00:9c:61:49 in network mk-default-k8s-diff-port-196710
	I0603 12:12:16.891801   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .GetSSHPort
	I0603 12:12:16.891979   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .GetSSHKeyPath
	I0603 12:12:16.892107   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .GetSSHUsername
	I0603 12:12:16.892220   73294 sshutil.go:53] new ssh client: &{IP:192.168.61.60 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19008-7755/.minikube/machines/default-k8s-diff-port-196710/id_rsa Username:docker}
	I0603 12:12:16.909637   73294 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35155
	I0603 12:12:16.910191   73294 main.go:141] libmachine: () Calling .GetVersion
	I0603 12:12:16.910809   73294 main.go:141] libmachine: Using API Version  1
	I0603 12:12:16.910836   73294 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 12:12:16.911344   73294 main.go:141] libmachine: () Calling .GetMachineName
	I0603 12:12:16.911542   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .GetState
	I0603 12:12:16.913489   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .DriverName
	I0603 12:12:16.913704   73294 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0603 12:12:16.913718   73294 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0603 12:12:16.913735   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .GetSSHHostname
	I0603 12:12:16.917538   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | domain default-k8s-diff-port-196710 has defined MAC address 52:54:00:9c:61:49 in network mk-default-k8s-diff-port-196710
	I0603 12:12:16.917994   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:61:49", ip: ""} in network mk-default-k8s-diff-port-196710: {Iface:virbr4 ExpiryTime:2024-06-03 12:58:47 +0000 UTC Type:0 Mac:52:54:00:9c:61:49 Iaid: IPaddr:192.168.61.60 Prefix:24 Hostname:default-k8s-diff-port-196710 Clientid:01:52:54:00:9c:61:49}
	I0603 12:12:16.918020   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | domain default-k8s-diff-port-196710 has defined IP address 192.168.61.60 and MAC address 52:54:00:9c:61:49 in network mk-default-k8s-diff-port-196710
	I0603 12:12:16.918116   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .GetSSHPort
	I0603 12:12:16.918243   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .GetSSHKeyPath
	I0603 12:12:16.918349   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .GetSSHUsername
	I0603 12:12:16.918445   73294 sshutil.go:53] new ssh client: &{IP:192.168.61.60 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19008-7755/.minikube/machines/default-k8s-diff-port-196710/id_rsa Username:docker}
	I0603 12:12:17.046824   73294 ssh_runner.go:195] Run: sudo systemctl start kubelet
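The ssh_runner commands above execute on the guest over an SSH session opened with the machine's private key (the sshutil lines show the client being assembled). A pared-down sketch of that pattern with golang.org/x/crypto/ssh follows; the host, port, user and key path are the values printed in the log, and skipping the host-key check is a simplification for a throwaway test VM.

```go
package main

import (
	"fmt"
	"log"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	key, err := os.ReadFile("/home/jenkins/minikube-integration/19008-7755/.minikube/machines/default-k8s-diff-port-196710/id_rsa")
	if err != nil {
		log.Fatal(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		log.Fatal(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // test VM, no known_hosts check
	}
	client, err := ssh.Dial("tcp", "192.168.61.60:22", cfg)
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()
	sess, err := client.NewSession()
	if err != nil {
		log.Fatal(err)
	}
	defer sess.Close()
	out, err := sess.CombinedOutput("sudo systemctl start kubelet")
	fmt.Printf("output: %s err: %v\n", out, err)
}
```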
	I0603 12:12:17.064066   73294 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-196710" to be "Ready" ...
	I0603 12:12:17.084082   73294 node_ready.go:49] node "default-k8s-diff-port-196710" has status "Ready":"True"
	I0603 12:12:17.084108   73294 node_ready.go:38] duration metric: took 19.978467ms for node "default-k8s-diff-port-196710" to be "Ready" ...
	I0603 12:12:17.084116   73294 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0603 12:12:17.095774   73294 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-fvgqr" in "kube-system" namespace to be "Ready" ...
	I0603 12:12:17.168174   73294 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0603 12:12:17.168200   73294 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0603 12:12:17.200793   73294 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0603 12:12:17.203132   73294 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0603 12:12:17.245827   73294 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0603 12:12:17.245855   73294 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0603 12:12:17.310865   73294 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0603 12:12:17.310894   73294 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0603 12:12:17.449447   73294 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0603 12:12:18.385411   73294 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.184578024s)
	I0603 12:12:18.385465   73294 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.182295951s)
	I0603 12:12:18.385505   73294 main.go:141] libmachine: Making call to close driver server
	I0603 12:12:18.385520   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .Close
	I0603 12:12:18.385470   73294 main.go:141] libmachine: Making call to close driver server
	I0603 12:12:18.385562   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .Close
	I0603 12:12:18.385878   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | Closing plugin on server side
	I0603 12:12:18.385905   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | Closing plugin on server side
	I0603 12:12:18.385954   73294 main.go:141] libmachine: Successfully made call to close driver server
	I0603 12:12:18.385971   73294 main.go:141] libmachine: Making call to close connection to plugin binary
	I0603 12:12:18.385980   73294 main.go:141] libmachine: Making call to close driver server
	I0603 12:12:18.386009   73294 main.go:141] libmachine: Successfully made call to close driver server
	I0603 12:12:18.386026   73294 main.go:141] libmachine: Making call to close connection to plugin binary
	I0603 12:12:18.386035   73294 main.go:141] libmachine: Making call to close driver server
	I0603 12:12:18.386043   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .Close
	I0603 12:12:18.386094   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .Close
	I0603 12:12:18.386336   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | Closing plugin on server side
	I0603 12:12:18.386374   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | Closing plugin on server side
	I0603 12:12:18.386425   73294 main.go:141] libmachine: Successfully made call to close driver server
	I0603 12:12:18.386460   73294 main.go:141] libmachine: Making call to close connection to plugin binary
	I0603 12:12:18.387994   73294 main.go:141] libmachine: Successfully made call to close driver server
	I0603 12:12:18.388012   73294 main.go:141] libmachine: Making call to close connection to plugin binary
	I0603 12:12:18.423011   73294 main.go:141] libmachine: Making call to close driver server
	I0603 12:12:18.423058   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .Close
	I0603 12:12:18.423412   73294 main.go:141] libmachine: Successfully made call to close driver server
	I0603 12:12:18.423433   73294 main.go:141] libmachine: Making call to close connection to plugin binary
	I0603 12:12:18.423473   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | Closing plugin on server side
	I0603 12:12:18.697521   73294 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.24802602s)
	I0603 12:12:18.697564   73294 main.go:141] libmachine: Making call to close driver server
	I0603 12:12:18.697575   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .Close
	I0603 12:12:18.697960   73294 main.go:141] libmachine: Successfully made call to close driver server
	I0603 12:12:18.697982   73294 main.go:141] libmachine: Making call to close connection to plugin binary
	I0603 12:12:18.698043   73294 main.go:141] libmachine: Making call to close driver server
	I0603 12:12:18.698061   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .Close
	I0603 12:12:18.698312   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | Closing plugin on server side
	I0603 12:12:18.698391   73294 main.go:141] libmachine: Successfully made call to close driver server
	I0603 12:12:18.698408   73294 main.go:141] libmachine: Making call to close connection to plugin binary
	I0603 12:12:18.698425   73294 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-196710"
	I0603 12:12:18.700421   73294 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0603 12:12:18.698680   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | Closing plugin on server side
	I0603 12:12:18.701834   73294 addons.go:510] duration metric: took 1.880261237s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0603 12:12:19.125961   73294 pod_ready.go:92] pod "coredns-7db6d8ff4d-fvgqr" in "kube-system" namespace has status "Ready":"True"
	I0603 12:12:19.125993   73294 pod_ready.go:81] duration metric: took 2.03019096s for pod "coredns-7db6d8ff4d-fvgqr" in "kube-system" namespace to be "Ready" ...
	I0603 12:12:19.126008   73294 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-196710" in "kube-system" namespace to be "Ready" ...
	I0603 12:12:19.142691   73294 pod_ready.go:92] pod "etcd-default-k8s-diff-port-196710" in "kube-system" namespace has status "Ready":"True"
	I0603 12:12:19.142711   73294 pod_ready.go:81] duration metric: took 16.694827ms for pod "etcd-default-k8s-diff-port-196710" in "kube-system" namespace to be "Ready" ...
	I0603 12:12:19.142721   73294 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-196710" in "kube-system" namespace to be "Ready" ...
	I0603 12:12:19.166768   73294 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-196710" in "kube-system" namespace has status "Ready":"True"
	I0603 12:12:19.166793   73294 pod_ready.go:81] duration metric: took 24.064572ms for pod "kube-apiserver-default-k8s-diff-port-196710" in "kube-system" namespace to be "Ready" ...
	I0603 12:12:19.166806   73294 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-196710" in "kube-system" namespace to be "Ready" ...
	I0603 12:12:19.177902   73294 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-196710" in "kube-system" namespace has status "Ready":"True"
	I0603 12:12:19.177917   73294 pod_ready.go:81] duration metric: took 11.103943ms for pod "kube-controller-manager-default-k8s-diff-port-196710" in "kube-system" namespace to be "Ready" ...
	I0603 12:12:19.177926   73294 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-j4gzg" in "kube-system" namespace to be "Ready" ...
	I0603 12:12:19.191217   73294 pod_ready.go:92] pod "kube-proxy-j4gzg" in "kube-system" namespace has status "Ready":"True"
	I0603 12:12:19.191242   73294 pod_ready.go:81] duration metric: took 13.306857ms for pod "kube-proxy-j4gzg" in "kube-system" namespace to be "Ready" ...
	I0603 12:12:19.191255   73294 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-196710" in "kube-system" namespace to be "Ready" ...
	I0603 12:12:19.499792   73294 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-196710" in "kube-system" namespace has status "Ready":"True"
	I0603 12:12:19.499815   73294 pod_ready.go:81] duration metric: took 308.552918ms for pod "kube-scheduler-default-k8s-diff-port-196710" in "kube-system" namespace to be "Ready" ...
	I0603 12:12:19.499823   73294 pod_ready.go:38] duration metric: took 2.415698619s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0603 12:12:19.499837   73294 api_server.go:52] waiting for apiserver process to appear ...
	I0603 12:12:19.499881   73294 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:12:19.516655   73294 api_server.go:72] duration metric: took 2.695130179s to wait for apiserver process to appear ...
	I0603 12:12:19.516686   73294 api_server.go:88] waiting for apiserver healthz status ...
	I0603 12:12:19.516707   73294 api_server.go:253] Checking apiserver healthz at https://192.168.61.60:8444/healthz ...
	I0603 12:12:19.521037   73294 api_server.go:279] https://192.168.61.60:8444/healthz returned 200:
	ok
	I0603 12:12:19.521988   73294 api_server.go:141] control plane version: v1.30.1
	I0603 12:12:19.522006   73294 api_server.go:131] duration metric: took 5.313149ms to wait for apiserver health ...
	I0603 12:12:19.522015   73294 system_pods.go:43] waiting for kube-system pods to appear ...
	I0603 12:12:18.141333   73179 addons.go:510] duration metric: took 2.138708426s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0603 12:12:18.445201   73179 system_pods.go:86] 9 kube-system pods found
	I0603 12:12:18.445243   73179 system_pods.go:89] "coredns-7db6d8ff4d-5gmj5" [474da426-9414-4a30-8b19-14e555e192de] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0603 12:12:18.445255   73179 system_pods.go:89] "coredns-7db6d8ff4d-dwptw" [7a0437fe-8e83-4acc-a92a-af29bf06db93] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0603 12:12:18.445266   73179 system_pods.go:89] "etcd-no-preload-602118" [d66d136a-b6c8-411c-b820-3e80f773accf] Running
	I0603 12:12:18.445275   73179 system_pods.go:89] "kube-apiserver-no-preload-602118" [299b92ab-ea99-45f5-b55d-b66a06daeaeb] Running
	I0603 12:12:18.445282   73179 system_pods.go:89] "kube-controller-manager-no-preload-602118" [16fd06ad-ff2d-4392-b453-9a8ed782b581] Running
	I0603 12:12:18.445289   73179 system_pods.go:89] "kube-proxy-tfxkl" [d6502635-478f-443c-8186-ab0616fcf4ac] Running
	I0603 12:12:18.445296   73179 system_pods.go:89] "kube-scheduler-no-preload-602118" [71ebde21-9840-43f5-b0a3-424e1fd2000e] Running
	I0603 12:12:18.445309   73179 system_pods.go:89] "metrics-server-569cc877fc-zpzbw" [b28cb265-532b-41ea-a242-001a85174a35] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0603 12:12:18.445318   73179 system_pods.go:89] "storage-provisioner" [9d9e7c2b-91a9-4394-8a08-a2c076d4b42d] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0603 12:12:18.445347   73179 retry.go:31] will retry after 493.36636ms: missing components: kube-dns
	I0603 12:12:18.950981   73179 system_pods.go:86] 9 kube-system pods found
	I0603 12:12:18.951013   73179 system_pods.go:89] "coredns-7db6d8ff4d-5gmj5" [474da426-9414-4a30-8b19-14e555e192de] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0603 12:12:18.951022   73179 system_pods.go:89] "coredns-7db6d8ff4d-dwptw" [7a0437fe-8e83-4acc-a92a-af29bf06db93] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0603 12:12:18.951028   73179 system_pods.go:89] "etcd-no-preload-602118" [d66d136a-b6c8-411c-b820-3e80f773accf] Running
	I0603 12:12:18.951033   73179 system_pods.go:89] "kube-apiserver-no-preload-602118" [299b92ab-ea99-45f5-b55d-b66a06daeaeb] Running
	I0603 12:12:18.951071   73179 system_pods.go:89] "kube-controller-manager-no-preload-602118" [16fd06ad-ff2d-4392-b453-9a8ed782b581] Running
	I0603 12:12:18.951079   73179 system_pods.go:89] "kube-proxy-tfxkl" [d6502635-478f-443c-8186-ab0616fcf4ac] Running
	I0603 12:12:18.951085   73179 system_pods.go:89] "kube-scheduler-no-preload-602118" [71ebde21-9840-43f5-b0a3-424e1fd2000e] Running
	I0603 12:12:18.951093   73179 system_pods.go:89] "metrics-server-569cc877fc-zpzbw" [b28cb265-532b-41ea-a242-001a85174a35] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0603 12:12:18.951106   73179 system_pods.go:89] "storage-provisioner" [9d9e7c2b-91a9-4394-8a08-a2c076d4b42d] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0603 12:12:18.951123   73179 retry.go:31] will retry after 784.878622ms: missing components: kube-dns
	I0603 12:12:19.743268   73179 system_pods.go:86] 9 kube-system pods found
	I0603 12:12:19.743302   73179 system_pods.go:89] "coredns-7db6d8ff4d-5gmj5" [474da426-9414-4a30-8b19-14e555e192de] Running
	I0603 12:12:19.743310   73179 system_pods.go:89] "coredns-7db6d8ff4d-dwptw" [7a0437fe-8e83-4acc-a92a-af29bf06db93] Running
	I0603 12:12:19.743323   73179 system_pods.go:89] "etcd-no-preload-602118" [d66d136a-b6c8-411c-b820-3e80f773accf] Running
	I0603 12:12:19.743330   73179 system_pods.go:89] "kube-apiserver-no-preload-602118" [299b92ab-ea99-45f5-b55d-b66a06daeaeb] Running
	I0603 12:12:19.743337   73179 system_pods.go:89] "kube-controller-manager-no-preload-602118" [16fd06ad-ff2d-4392-b453-9a8ed782b581] Running
	I0603 12:12:19.743343   73179 system_pods.go:89] "kube-proxy-tfxkl" [d6502635-478f-443c-8186-ab0616fcf4ac] Running
	I0603 12:12:19.743349   73179 system_pods.go:89] "kube-scheduler-no-preload-602118" [71ebde21-9840-43f5-b0a3-424e1fd2000e] Running
	I0603 12:12:19.743365   73179 system_pods.go:89] "metrics-server-569cc877fc-zpzbw" [b28cb265-532b-41ea-a242-001a85174a35] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0603 12:12:19.743376   73179 system_pods.go:89] "storage-provisioner" [9d9e7c2b-91a9-4394-8a08-a2c076d4b42d] Running
	I0603 12:12:19.743388   73179 system_pods.go:126] duration metric: took 3.077217613s to wait for k8s-apps to be running ...
	I0603 12:12:19.743399   73179 system_svc.go:44] waiting for kubelet service to be running ....
	I0603 12:12:19.743440   73179 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0603 12:12:19.759127   73179 system_svc.go:56] duration metric: took 15.720008ms WaitForService to wait for kubelet
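The kubelet service check above boils down to `systemctl is-active --quiet`, which reports state purely through its exit code. Reading that exit status from Go might look like the sketch below; it assumes a local systemd, whereas the log runs the command on the guest over SSH with sudo.

```go
package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	// --quiet suppresses output; exit code 0 means the unit is active.
	err := exec.Command("systemctl", "is-active", "--quiet", "kubelet").Run()
	var exitErr *exec.ExitError
	switch {
	case err == nil:
		fmt.Println("kubelet is active")
	case errors.As(err, &exitErr):
		fmt.Printf("kubelet is not active (exit code %d)\n", exitErr.ExitCode())
	default:
		fmt.Println("could not run systemctl:", err)
	}
}
```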
	I0603 12:12:19.759152   73179 kubeadm.go:576] duration metric: took 3.756617312s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0603 12:12:19.759177   73179 node_conditions.go:102] verifying NodePressure condition ...
	I0603 12:12:19.761858   73179 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0603 12:12:19.761876   73179 node_conditions.go:123] node cpu capacity is 2
	I0603 12:12:19.761885   73179 node_conditions.go:105] duration metric: took 2.703518ms to run NodePressure ...
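The NodePressure verification above reads the node's capacity (ephemeral storage, CPU) and its pressure conditions. A client-go sketch of pulling those same fields is shown here; the node name and kubeconfig path are assumptions taken from the log, not a reproduction of minikube's code.

```go
package main

import (
	"context"
	"fmt"
	"log"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	node, err := cs.CoreV1().Nodes().Get(context.TODO(), "no-preload-602118", metav1.GetOptions{})
	if err != nil {
		log.Fatal(err)
	}
	storage := node.Status.Capacity[corev1.ResourceEphemeralStorage]
	cpu := node.Status.Capacity[corev1.ResourceCPU]
	fmt.Printf("ephemeral storage: %s, cpu: %s\n", storage.String(), cpu.String())
	for _, c := range node.Status.Conditions {
		if c.Type == corev1.NodeMemoryPressure || c.Type == corev1.NodeDiskPressure || c.Type == corev1.NodePIDPressure {
			fmt.Printf("%s=%s\n", c.Type, c.Status)
		}
	}
}
```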
	I0603 12:12:19.761894   73179 start.go:240] waiting for startup goroutines ...
	I0603 12:12:19.761901   73179 start.go:245] waiting for cluster config update ...
	I0603 12:12:19.761910   73179 start.go:254] writing updated cluster config ...
	I0603 12:12:19.762150   73179 ssh_runner.go:195] Run: rm -f paused
	I0603 12:12:19.808158   73179 start.go:600] kubectl: 1.30.1, cluster: 1.30.1 (minor skew: 0)
	I0603 12:12:19.810271   73179 out.go:177] * Done! kubectl is now configured to use "no-preload-602118" cluster and "default" namespace by default
	I0603 12:12:17.205144   73662 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0603 12:12:17.215420   73662 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0603 12:12:17.215687   73662 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0603 12:12:19.703391   73294 system_pods.go:59] 9 kube-system pods found
	I0603 12:12:19.703422   73294 system_pods.go:61] "coredns-7db6d8ff4d-fvgqr" [c908a302-8c40-46aa-9e98-92baa297a7ed] Running
	I0603 12:12:19.703428   73294 system_pods.go:61] "coredns-7db6d8ff4d-pbndv" [91d83622-9883-407e-b0f4-eb2d18cd2483] Running
	I0603 12:12:19.703434   73294 system_pods.go:61] "etcd-default-k8s-diff-port-196710" [29eaf8a6-0759-4f27-9b6e-55beeba8f955] Running
	I0603 12:12:19.703439   73294 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-196710" [7bfa3724-0917-40be-89fe-fe5c67f4fd45] Running
	I0603 12:12:19.703444   73294 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-196710" [50e0af3b-d47c-4113-be78-9cf18060b505] Running
	I0603 12:12:19.703448   73294 system_pods.go:61] "kube-proxy-j4gzg" [2e603f37-93e0-429d-97b8-e9b997c26101] Running
	I0603 12:12:19.703453   73294 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-196710" [e50842a0-71ed-4c9e-811e-9b6bda31dfd0] Running
	I0603 12:12:19.703461   73294 system_pods.go:61] "metrics-server-569cc877fc-lxvbp" [36c7a3c5-6b64-42d2-93ed-2f6cf8234a7f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0603 12:12:19.703469   73294 system_pods.go:61] "storage-provisioner" [8bc80b69-d8f9-4d6a-9bf4-4a41d875a735] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0603 12:12:19.703483   73294 system_pods.go:74] duration metric: took 181.460766ms to wait for pod list to return data ...
	I0603 12:12:19.703494   73294 default_sa.go:34] waiting for default service account to be created ...
	I0603 12:12:19.899579   73294 default_sa.go:45] found service account: "default"
	I0603 12:12:19.899607   73294 default_sa.go:55] duration metric: took 196.097132ms for default service account to be created ...
	I0603 12:12:19.899617   73294 system_pods.go:116] waiting for k8s-apps to be running ...
	I0603 12:12:20.104618   73294 system_pods.go:86] 9 kube-system pods found
	I0603 12:12:20.104648   73294 system_pods.go:89] "coredns-7db6d8ff4d-fvgqr" [c908a302-8c40-46aa-9e98-92baa297a7ed] Running
	I0603 12:12:20.104656   73294 system_pods.go:89] "coredns-7db6d8ff4d-pbndv" [91d83622-9883-407e-b0f4-eb2d18cd2483] Running
	I0603 12:12:20.104662   73294 system_pods.go:89] "etcd-default-k8s-diff-port-196710" [29eaf8a6-0759-4f27-9b6e-55beeba8f955] Running
	I0603 12:12:20.104669   73294 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-196710" [7bfa3724-0917-40be-89fe-fe5c67f4fd45] Running
	I0603 12:12:20.104676   73294 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-196710" [50e0af3b-d47c-4113-be78-9cf18060b505] Running
	I0603 12:12:20.104682   73294 system_pods.go:89] "kube-proxy-j4gzg" [2e603f37-93e0-429d-97b8-e9b997c26101] Running
	I0603 12:12:20.104690   73294 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-196710" [e50842a0-71ed-4c9e-811e-9b6bda31dfd0] Running
	I0603 12:12:20.104704   73294 system_pods.go:89] "metrics-server-569cc877fc-lxvbp" [36c7a3c5-6b64-42d2-93ed-2f6cf8234a7f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0603 12:12:20.104716   73294 system_pods.go:89] "storage-provisioner" [8bc80b69-d8f9-4d6a-9bf4-4a41d875a735] Running
	I0603 12:12:20.104733   73294 system_pods.go:126] duration metric: took 205.107424ms to wait for k8s-apps to be running ...
	I0603 12:12:20.104746   73294 system_svc.go:44] waiting for kubelet service to be running ....
	I0603 12:12:20.104794   73294 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0603 12:12:20.120345   73294 system_svc.go:56] duration metric: took 15.592236ms WaitForService to wait for kubelet
	I0603 12:12:20.120374   73294 kubeadm.go:576] duration metric: took 3.298854629s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0603 12:12:20.120398   73294 node_conditions.go:102] verifying NodePressure condition ...
	I0603 12:12:20.299539   73294 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0603 12:12:20.299565   73294 node_conditions.go:123] node cpu capacity is 2
	I0603 12:12:20.299579   73294 node_conditions.go:105] duration metric: took 179.17433ms to run NodePressure ...
	I0603 12:12:20.299593   73294 start.go:240] waiting for startup goroutines ...
	I0603 12:12:20.299602   73294 start.go:245] waiting for cluster config update ...
	I0603 12:12:20.299613   73294 start.go:254] writing updated cluster config ...
	I0603 12:12:20.299896   73294 ssh_runner.go:195] Run: rm -f paused
	I0603 12:12:20.351961   73294 start.go:600] kubectl: 1.30.1, cluster: 1.30.1 (minor skew: 0)
	I0603 12:12:20.354040   73294 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-196710" cluster and "default" namespace by default
	I0603 12:12:22.215864   73662 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0603 12:12:22.216210   73662 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0603 12:12:32.215921   73662 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0603 12:12:32.216130   73662 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
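The repeating kubelet-check failures for PID 73662 come from kubeadm curling the kubelet's local healthz endpoint on port 10248 and getting connection refused because the kubelet is not up yet. The probe amounts to the following, written here in Go rather than curl and meant to run on the node itself.

```go
package main

import (
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{Timeout: 2 * time.Second}
	resp, err := client.Get("http://localhost:10248/healthz")
	if err != nil {
		// This is the "connection refused" case in the log: kubelet not listening yet.
		fmt.Println("kubelet healthz not reachable:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("status=%d body=%q\n", resp.StatusCode, string(body))
}
```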
	I0603 12:12:40.270116   72964 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (31.60882832s)
	I0603 12:12:40.270214   72964 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0603 12:12:40.288350   72964 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0603 12:12:40.298477   72964 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0603 12:12:40.308047   72964 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0603 12:12:40.308063   72964 kubeadm.go:156] found existing configuration files:
	
	I0603 12:12:40.308095   72964 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0603 12:12:40.317173   72964 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0603 12:12:40.317221   72964 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0603 12:12:40.326431   72964 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0603 12:12:40.335372   72964 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0603 12:12:40.335421   72964 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0603 12:12:40.345520   72964 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0603 12:12:40.354836   72964 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0603 12:12:40.354881   72964 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0603 12:12:40.364667   72964 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0603 12:12:40.375714   72964 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0603 12:12:40.375768   72964 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
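The stale-config cleanup above greps each /etc/kubernetes/*.conf for the expected control-plane endpoint and removes the file when the grep fails; in this run the files simply do not exist, so every grep exits with status 2 and every rm is a no-op. The same check-then-remove logic, sketched locally in Go with the endpoint string copied from the log, is below.

```go
package main

import (
	"errors"
	"fmt"
	"os"
	"strings"
)

func cleanStaleConfig(path, endpoint string) error {
	data, err := os.ReadFile(path)
	if errors.Is(err, os.ErrNotExist) {
		return nil // nothing to clean, mirrors "No such file or directory" in the log
	}
	if err != nil {
		return err
	}
	if strings.Contains(string(data), endpoint) {
		return nil // config already points at the expected endpoint, keep it
	}
	fmt.Println("removing stale config:", path)
	return os.Remove(path)
}

func main() {
	endpoint := "https://control-plane.minikube.internal:8443"
	for _, f := range []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	} {
		if err := cleanStaleConfig(f, endpoint); err != nil {
			fmt.Println("cleanup failed:", err)
		}
	}
}
```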
	I0603 12:12:40.387249   72964 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0603 12:12:40.587569   72964 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0603 12:12:49.228482   72964 kubeadm.go:309] [init] Using Kubernetes version: v1.30.1
	I0603 12:12:49.228556   72964 kubeadm.go:309] [preflight] Running pre-flight checks
	I0603 12:12:49.228654   72964 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0603 12:12:49.228817   72964 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0603 12:12:49.228965   72964 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0603 12:12:49.229056   72964 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0603 12:12:49.230616   72964 out.go:204]   - Generating certificates and keys ...
	I0603 12:12:49.230705   72964 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0603 12:12:49.230778   72964 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0603 12:12:49.230884   72964 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0603 12:12:49.230943   72964 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0603 12:12:49.231001   72964 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0603 12:12:49.231071   72964 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0603 12:12:49.231302   72964 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0603 12:12:49.231400   72964 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0603 12:12:49.231487   72964 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0603 12:12:49.231595   72964 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0603 12:12:49.231645   72964 kubeadm.go:309] [certs] Using the existing "sa" key
	I0603 12:12:49.231731   72964 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0603 12:12:49.231842   72964 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0603 12:12:49.231930   72964 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0603 12:12:49.232009   72964 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0603 12:12:49.232105   72964 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0603 12:12:49.232188   72964 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0603 12:12:49.232305   72964 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0603 12:12:49.232392   72964 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0603 12:12:49.234435   72964 out.go:204]   - Booting up control plane ...
	I0603 12:12:49.234513   72964 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0603 12:12:49.234592   72964 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0603 12:12:49.234680   72964 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0603 12:12:49.234803   72964 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0603 12:12:49.234936   72964 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0603 12:12:49.235006   72964 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0603 12:12:49.235182   72964 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0603 12:12:49.235283   72964 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0603 12:12:49.235361   72964 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 502.484209ms
	I0603 12:12:49.235428   72964 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0603 12:12:49.235507   72964 kubeadm.go:309] [api-check] The API server is healthy after 5.001411221s
	I0603 12:12:49.235621   72964 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0603 12:12:49.235730   72964 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0603 12:12:49.235778   72964 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0603 12:12:49.235941   72964 kubeadm.go:309] [mark-control-plane] Marking the node embed-certs-725022 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0603 12:12:49.236026   72964 kubeadm.go:309] [bootstrap-token] Using token: 0tfgxu.iied44jkidnxw3ef
	I0603 12:12:49.237200   72964 out.go:204]   - Configuring RBAC rules ...
	I0603 12:12:49.237290   72964 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0603 12:12:49.237369   72964 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0603 12:12:49.237497   72964 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0603 12:12:49.237671   72964 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0603 12:12:49.237782   72964 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0603 12:12:49.237879   72964 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0603 12:12:49.238007   72964 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0603 12:12:49.238092   72964 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0603 12:12:49.238156   72964 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0603 12:12:49.238166   72964 kubeadm.go:309] 
	I0603 12:12:49.238242   72964 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0603 12:12:49.238250   72964 kubeadm.go:309] 
	I0603 12:12:49.238351   72964 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0603 12:12:49.238359   72964 kubeadm.go:309] 
	I0603 12:12:49.238392   72964 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0603 12:12:49.238472   72964 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0603 12:12:49.238549   72964 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0603 12:12:49.238558   72964 kubeadm.go:309] 
	I0603 12:12:49.238641   72964 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0603 12:12:49.238649   72964 kubeadm.go:309] 
	I0603 12:12:49.238722   72964 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0603 12:12:49.238737   72964 kubeadm.go:309] 
	I0603 12:12:49.238810   72964 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0603 12:12:49.238874   72964 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0603 12:12:49.238931   72964 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0603 12:12:49.238937   72964 kubeadm.go:309] 
	I0603 12:12:49.239007   72964 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0603 12:12:49.239103   72964 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0603 12:12:49.239112   72964 kubeadm.go:309] 
	I0603 12:12:49.239179   72964 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token 0tfgxu.iied44jkidnxw3ef \
	I0603 12:12:49.239305   72964 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:efe6cbdd58a590fb6c3b56a05a1648145bfb13b8a8cd2383ea34b710fa987e0b \
	I0603 12:12:49.239341   72964 kubeadm.go:309] 	--control-plane 
	I0603 12:12:49.239355   72964 kubeadm.go:309] 
	I0603 12:12:49.239457   72964 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0603 12:12:49.239466   72964 kubeadm.go:309] 
	I0603 12:12:49.239574   72964 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token 0tfgxu.iied44jkidnxw3ef \
	I0603 12:12:49.239677   72964 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:efe6cbdd58a590fb6c3b56a05a1648145bfb13b8a8cd2383ea34b710fa987e0b 
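Note: the bootstrap token in the join commands above is time-limited (kubeadm tokens expire after 24h by default), so the printed command cannot be reused indefinitely. A fresh worker join command can be regenerated on the control-plane node; this is a general kubeadm sketch, with the binary path assumed to follow the /var/lib/minikube/binaries/<version>/ layout the log uses elsewhere:

	# Regenerate a worker join command with a new bootstrap token (run on the control-plane node):
	sudo /var/lib/minikube/binaries/v1.30.1/kubeadm token create --print-join-command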
	I0603 12:12:49.239688   72964 cni.go:84] Creating CNI manager for ""
	I0603 12:12:49.239694   72964 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0603 12:12:49.241096   72964 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0603 12:12:49.242158   72964 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0603 12:12:49.253535   72964 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
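Note: the 496-byte conflist written above is the bridge network definition CRI-O will use for pod networking. To see what was actually written, the file can be read back from inside the VM (a hedged sketch; the profile name comes from this log, and `minikube ssh -- <cmd>` simply runs the command on the node):

	# Print the generated bridge CNI config on the node:
	minikube ssh -p embed-certs-725022 -- sudo cat /etc/cni/net.d/1-k8s.conflist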
	I0603 12:12:49.272592   72964 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0603 12:12:49.272655   72964 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:12:49.272699   72964 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-725022 minikube.k8s.io/updated_at=2024_06_03T12_12_49_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=599070631c2216ebc936292d491e4fe10e15b9d8 minikube.k8s.io/name=embed-certs-725022 minikube.k8s.io/primary=true
	I0603 12:12:49.301181   72964 ops.go:34] apiserver oom_adj: -16
	I0603 12:12:49.473931   72964 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:12:49.974552   72964 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:12:50.474107   72964 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:12:50.974508   72964 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:12:51.474202   72964 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:12:51.974903   72964 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:12:52.474722   72964 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:12:52.973981   72964 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:12:53.473979   72964 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:12:53.974372   72964 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:12:54.474057   72964 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:12:52.215684   73662 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0603 12:12:52.215951   73662 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0603 12:12:54.974299   72964 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:12:55.474704   72964 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:12:55.973998   72964 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:12:56.474351   72964 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:12:56.974942   72964 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:12:57.474651   72964 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:12:57.974575   72964 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:12:58.474054   72964 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:12:58.974928   72964 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:12:59.474724   72964 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:12:59.974538   72964 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:13:00.474341   72964 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:13:00.974134   72964 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:13:01.474970   72964 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:13:01.974549   72964 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:13:02.071778   72964 kubeadm.go:1107] duration metric: took 12.799179684s to wait for elevateKubeSystemPrivileges
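Note: the burst of `kubectl get sa default` calls above is minikube polling until kube-controller-manager has created the default ServiceAccount, after which the `minikube-rbac` cluster-admin binding applied at 12:12:49 can take effect. A rough shell equivalent of that wait, using only the command already shown in the log:

	# Poll until the default ServiceAccount exists:
	until sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default \
	    --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
	  sleep 0.5
	done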
	W0603 12:13:02.071819   72964 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0603 12:13:02.071826   72964 kubeadm.go:393] duration metric: took 5m13.883244188s to StartCluster
	I0603 12:13:02.071847   72964 settings.go:142] acquiring lock: {Name:mkda1bdbbfe91266270f1d999e6d56fc2830d6f8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 12:13:02.071926   72964 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19008-7755/kubeconfig
	I0603 12:13:02.073849   72964 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19008-7755/kubeconfig: {Name:mk2f45bf5da44c5570e14124ae482cac309d97b6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 12:13:02.074094   72964 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.72.245 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0603 12:13:02.075473   72964 out.go:177] * Verifying Kubernetes components...
	I0603 12:13:02.074201   72964 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0603 12:13:02.074273   72964 config.go:182] Loaded profile config "embed-certs-725022": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0603 12:13:02.076687   72964 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0603 12:13:02.076702   72964 addons.go:69] Setting default-storageclass=true in profile "embed-certs-725022"
	I0603 12:13:02.076709   72964 addons.go:69] Setting metrics-server=true in profile "embed-certs-725022"
	I0603 12:13:02.076735   72964 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-725022"
	I0603 12:13:02.076739   72964 addons.go:234] Setting addon metrics-server=true in "embed-certs-725022"
	W0603 12:13:02.076747   72964 addons.go:243] addon metrics-server should already be in state true
	I0603 12:13:02.076779   72964 host.go:66] Checking if "embed-certs-725022" exists ...
	I0603 12:13:02.077065   72964 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 12:13:02.077105   72964 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 12:13:02.077123   72964 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 12:13:02.077144   72964 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 12:13:02.076690   72964 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-725022"
	I0603 12:13:02.077321   72964 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-725022"
	W0603 12:13:02.077330   72964 addons.go:243] addon storage-provisioner should already be in state true
	I0603 12:13:02.077353   72964 host.go:66] Checking if "embed-certs-725022" exists ...
	I0603 12:13:02.077701   72964 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 12:13:02.077727   72964 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 12:13:02.093285   72964 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38087
	I0603 12:13:02.093594   72964 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41067
	I0603 12:13:02.093714   72964 main.go:141] libmachine: () Calling .GetVersion
	I0603 12:13:02.094085   72964 main.go:141] libmachine: () Calling .GetVersion
	I0603 12:13:02.094294   72964 main.go:141] libmachine: Using API Version  1
	I0603 12:13:02.094315   72964 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 12:13:02.094587   72964 main.go:141] libmachine: Using API Version  1
	I0603 12:13:02.094609   72964 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 12:13:02.094689   72964 main.go:141] libmachine: () Calling .GetMachineName
	I0603 12:13:02.094950   72964 main.go:141] libmachine: () Calling .GetMachineName
	I0603 12:13:02.095244   72964 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 12:13:02.095268   72964 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 12:13:02.095454   72964 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 12:13:02.095491   72964 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 12:13:02.096441   72964 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39221
	I0603 12:13:02.097030   72964 main.go:141] libmachine: () Calling .GetVersion
	I0603 12:13:02.097568   72964 main.go:141] libmachine: Using API Version  1
	I0603 12:13:02.097590   72964 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 12:13:02.097931   72964 main.go:141] libmachine: () Calling .GetMachineName
	I0603 12:13:02.098114   72964 main.go:141] libmachine: (embed-certs-725022) Calling .GetState
	I0603 12:13:02.101980   72964 addons.go:234] Setting addon default-storageclass=true in "embed-certs-725022"
	W0603 12:13:02.102004   72964 addons.go:243] addon default-storageclass should already be in state true
	I0603 12:13:02.102030   72964 host.go:66] Checking if "embed-certs-725022" exists ...
	I0603 12:13:02.102405   72964 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 12:13:02.102443   72964 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 12:13:02.110825   72964 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44273
	I0603 12:13:02.111295   72964 main.go:141] libmachine: () Calling .GetVersion
	I0603 12:13:02.111721   72964 main.go:141] libmachine: Using API Version  1
	I0603 12:13:02.111743   72964 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 12:13:02.112109   72964 main.go:141] libmachine: () Calling .GetMachineName
	I0603 12:13:02.112287   72964 main.go:141] libmachine: (embed-certs-725022) Calling .GetState
	I0603 12:13:02.112969   72964 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46567
	I0603 12:13:02.113391   72964 main.go:141] libmachine: () Calling .GetVersion
	I0603 12:13:02.113883   72964 main.go:141] libmachine: Using API Version  1
	I0603 12:13:02.113898   72964 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 12:13:02.113960   72964 main.go:141] libmachine: (embed-certs-725022) Calling .DriverName
	I0603 12:13:02.115733   72964 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0603 12:13:02.114328   72964 main.go:141] libmachine: () Calling .GetMachineName
	I0603 12:13:02.116913   72964 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0603 12:13:02.116925   72964 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0603 12:13:02.116937   72964 main.go:141] libmachine: (embed-certs-725022) Calling .GetSSHHostname
	I0603 12:13:02.117042   72964 main.go:141] libmachine: (embed-certs-725022) Calling .GetState
	I0603 12:13:02.119310   72964 main.go:141] libmachine: (embed-certs-725022) Calling .DriverName
	I0603 12:13:02.119549   72964 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45585
	I0603 12:13:02.120720   72964 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0603 12:13:02.119998   72964 main.go:141] libmachine: () Calling .GetVersion
	I0603 12:13:02.120276   72964 main.go:141] libmachine: (embed-certs-725022) DBG | domain embed-certs-725022 has defined MAC address 52:54:00:ba:41:8c in network mk-embed-certs-725022
	I0603 12:13:02.122038   72964 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0603 12:13:02.122054   72964 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0603 12:13:02.122072   72964 main.go:141] libmachine: (embed-certs-725022) Calling .GetSSHHostname
	I0603 12:13:02.120815   72964 main.go:141] libmachine: (embed-certs-725022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:41:8c", ip: ""} in network mk-embed-certs-725022: {Iface:virbr3 ExpiryTime:2024-06-03 12:58:10 +0000 UTC Type:0 Mac:52:54:00:ba:41:8c Iaid: IPaddr:192.168.72.245 Prefix:24 Hostname:embed-certs-725022 Clientid:01:52:54:00:ba:41:8c}
	I0603 12:13:02.122134   72964 main.go:141] libmachine: (embed-certs-725022) DBG | domain embed-certs-725022 has defined IP address 192.168.72.245 and MAC address 52:54:00:ba:41:8c in network mk-embed-certs-725022
	I0603 12:13:02.120873   72964 main.go:141] libmachine: (embed-certs-725022) Calling .GetSSHPort
	I0603 12:13:02.121231   72964 main.go:141] libmachine: Using API Version  1
	I0603 12:13:02.122186   72964 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 12:13:02.122623   72964 main.go:141] libmachine: () Calling .GetMachineName
	I0603 12:13:02.122637   72964 main.go:141] libmachine: (embed-certs-725022) Calling .GetSSHKeyPath
	I0603 12:13:02.122823   72964 main.go:141] libmachine: (embed-certs-725022) Calling .GetSSHUsername
	I0603 12:13:02.123306   72964 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 12:13:02.123365   72964 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 12:13:02.123751   72964 sshutil.go:53] new ssh client: &{IP:192.168.72.245 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19008-7755/.minikube/machines/embed-certs-725022/id_rsa Username:docker}
	I0603 12:13:02.125086   72964 main.go:141] libmachine: (embed-certs-725022) DBG | domain embed-certs-725022 has defined MAC address 52:54:00:ba:41:8c in network mk-embed-certs-725022
	I0603 12:13:02.125450   72964 main.go:141] libmachine: (embed-certs-725022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:41:8c", ip: ""} in network mk-embed-certs-725022: {Iface:virbr3 ExpiryTime:2024-06-03 12:58:10 +0000 UTC Type:0 Mac:52:54:00:ba:41:8c Iaid: IPaddr:192.168.72.245 Prefix:24 Hostname:embed-certs-725022 Clientid:01:52:54:00:ba:41:8c}
	I0603 12:13:02.125474   72964 main.go:141] libmachine: (embed-certs-725022) DBG | domain embed-certs-725022 has defined IP address 192.168.72.245 and MAC address 52:54:00:ba:41:8c in network mk-embed-certs-725022
	I0603 12:13:02.125627   72964 main.go:141] libmachine: (embed-certs-725022) Calling .GetSSHPort
	I0603 12:13:02.125863   72964 main.go:141] libmachine: (embed-certs-725022) Calling .GetSSHKeyPath
	I0603 12:13:02.126050   72964 main.go:141] libmachine: (embed-certs-725022) Calling .GetSSHUsername
	I0603 12:13:02.126199   72964 sshutil.go:53] new ssh client: &{IP:192.168.72.245 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19008-7755/.minikube/machines/embed-certs-725022/id_rsa Username:docker}
	I0603 12:13:02.140680   72964 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38775
	I0603 12:13:02.141121   72964 main.go:141] libmachine: () Calling .GetVersion
	I0603 12:13:02.141624   72964 main.go:141] libmachine: Using API Version  1
	I0603 12:13:02.141649   72964 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 12:13:02.142002   72964 main.go:141] libmachine: () Calling .GetMachineName
	I0603 12:13:02.142377   72964 main.go:141] libmachine: (embed-certs-725022) Calling .GetState
	I0603 12:13:02.144249   72964 main.go:141] libmachine: (embed-certs-725022) Calling .DriverName
	I0603 12:13:02.144453   72964 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0603 12:13:02.144469   72964 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0603 12:13:02.144486   72964 main.go:141] libmachine: (embed-certs-725022) Calling .GetSSHHostname
	I0603 12:13:02.147627   72964 main.go:141] libmachine: (embed-certs-725022) DBG | domain embed-certs-725022 has defined MAC address 52:54:00:ba:41:8c in network mk-embed-certs-725022
	I0603 12:13:02.148109   72964 main.go:141] libmachine: (embed-certs-725022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:41:8c", ip: ""} in network mk-embed-certs-725022: {Iface:virbr3 ExpiryTime:2024-06-03 12:58:10 +0000 UTC Type:0 Mac:52:54:00:ba:41:8c Iaid: IPaddr:192.168.72.245 Prefix:24 Hostname:embed-certs-725022 Clientid:01:52:54:00:ba:41:8c}
	I0603 12:13:02.148129   72964 main.go:141] libmachine: (embed-certs-725022) DBG | domain embed-certs-725022 has defined IP address 192.168.72.245 and MAC address 52:54:00:ba:41:8c in network mk-embed-certs-725022
	I0603 12:13:02.148304   72964 main.go:141] libmachine: (embed-certs-725022) Calling .GetSSHPort
	I0603 12:13:02.148486   72964 main.go:141] libmachine: (embed-certs-725022) Calling .GetSSHKeyPath
	I0603 12:13:02.148604   72964 main.go:141] libmachine: (embed-certs-725022) Calling .GetSSHUsername
	I0603 12:13:02.148741   72964 sshutil.go:53] new ssh client: &{IP:192.168.72.245 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19008-7755/.minikube/machines/embed-certs-725022/id_rsa Username:docker}
	I0603 12:13:02.304095   72964 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0603 12:13:02.338638   72964 node_ready.go:35] waiting up to 6m0s for node "embed-certs-725022" to be "Ready" ...
	I0603 12:13:02.347843   72964 node_ready.go:49] node "embed-certs-725022" has status "Ready":"True"
	I0603 12:13:02.347872   72964 node_ready.go:38] duration metric: took 9.197667ms for node "embed-certs-725022" to be "Ready" ...
	I0603 12:13:02.347885   72964 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0603 12:13:02.353074   72964 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-4gbj2" in "kube-system" namespace to be "Ready" ...
	I0603 12:13:02.437841   72964 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0603 12:13:02.477856   72964 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0603 12:13:02.477876   72964 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0603 12:13:02.487138   72964 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0603 12:13:02.530568   72964 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0603 12:13:02.530591   72964 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0603 12:13:02.606906   72964 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0603 12:13:02.606933   72964 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0603 12:13:02.708268   72964 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0603 12:13:03.372809   72964 main.go:141] libmachine: Making call to close driver server
	I0603 12:13:03.372886   72964 main.go:141] libmachine: (embed-certs-725022) Calling .Close
	I0603 12:13:03.372924   72964 main.go:141] libmachine: Making call to close driver server
	I0603 12:13:03.372982   72964 main.go:141] libmachine: (embed-certs-725022) Calling .Close
	I0603 12:13:03.373369   72964 main.go:141] libmachine: Successfully made call to close driver server
	I0603 12:13:03.373457   72964 main.go:141] libmachine: Making call to close connection to plugin binary
	I0603 12:13:03.373472   72964 main.go:141] libmachine: Making call to close driver server
	I0603 12:13:03.373480   72964 main.go:141] libmachine: (embed-certs-725022) Calling .Close
	I0603 12:13:03.373412   72964 main.go:141] libmachine: Successfully made call to close driver server
	I0603 12:13:03.373510   72964 main.go:141] libmachine: Making call to close connection to plugin binary
	I0603 12:13:03.373522   72964 main.go:141] libmachine: Making call to close driver server
	I0603 12:13:03.373533   72964 main.go:141] libmachine: (embed-certs-725022) Calling .Close
	I0603 12:13:03.373417   72964 main.go:141] libmachine: (embed-certs-725022) DBG | Closing plugin on server side
	I0603 12:13:03.373431   72964 main.go:141] libmachine: (embed-certs-725022) DBG | Closing plugin on server side
	I0603 12:13:03.373858   72964 main.go:141] libmachine: Successfully made call to close driver server
	I0603 12:13:03.373873   72964 main.go:141] libmachine: Making call to close connection to plugin binary
	I0603 12:13:03.374065   72964 main.go:141] libmachine: (embed-certs-725022) DBG | Closing plugin on server side
	I0603 12:13:03.374087   72964 main.go:141] libmachine: Successfully made call to close driver server
	I0603 12:13:03.374093   72964 main.go:141] libmachine: Making call to close connection to plugin binary
	I0603 12:13:03.374168   72964 main.go:141] libmachine: (embed-certs-725022) DBG | Closing plugin on server side
	I0603 12:13:03.404799   72964 main.go:141] libmachine: Making call to close driver server
	I0603 12:13:03.404825   72964 main.go:141] libmachine: (embed-certs-725022) Calling .Close
	I0603 12:13:03.405101   72964 main.go:141] libmachine: (embed-certs-725022) DBG | Closing plugin on server side
	I0603 12:13:03.405101   72964 main.go:141] libmachine: Successfully made call to close driver server
	I0603 12:13:03.405125   72964 main.go:141] libmachine: Making call to close connection to plugin binary
	I0603 12:13:03.855630   72964 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.147319188s)
	I0603 12:13:03.855683   72964 main.go:141] libmachine: Making call to close driver server
	I0603 12:13:03.855700   72964 main.go:141] libmachine: (embed-certs-725022) Calling .Close
	I0603 12:13:03.856046   72964 main.go:141] libmachine: (embed-certs-725022) DBG | Closing plugin on server side
	I0603 12:13:03.856085   72964 main.go:141] libmachine: Successfully made call to close driver server
	I0603 12:13:03.856099   72964 main.go:141] libmachine: Making call to close connection to plugin binary
	I0603 12:13:03.856108   72964 main.go:141] libmachine: Making call to close driver server
	I0603 12:13:03.856119   72964 main.go:141] libmachine: (embed-certs-725022) Calling .Close
	I0603 12:13:03.856408   72964 main.go:141] libmachine: Successfully made call to close driver server
	I0603 12:13:03.856426   72964 main.go:141] libmachine: Making call to close connection to plugin binary
	I0603 12:13:03.856436   72964 addons.go:475] Verifying addon metrics-server=true in "embed-certs-725022"
	I0603 12:13:03.858229   72964 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0603 12:13:03.859384   72964 addons.go:510] duration metric: took 1.785186744s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
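Note: at this point the metrics-server pod is still Pending (see the pod listings below). Once it becomes Ready, the addon can be checked with standard kubectl commands; a minimal sketch, assuming the kubeconfig context that minikube configures for this profile:

	# Confirm the metrics-server deployment is available, then query node metrics:
	kubectl --context embed-certs-725022 -n kube-system get deployment metrics-server
	kubectl --context embed-certs-725022 top nodes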
	I0603 12:13:04.360708   72964 pod_ready.go:102] pod "coredns-7db6d8ff4d-4gbj2" in "kube-system" namespace has status "Ready":"False"
	I0603 12:13:04.860041   72964 pod_ready.go:92] pod "coredns-7db6d8ff4d-4gbj2" in "kube-system" namespace has status "Ready":"True"
	I0603 12:13:04.860064   72964 pod_ready.go:81] duration metric: took 2.506957346s for pod "coredns-7db6d8ff4d-4gbj2" in "kube-system" namespace to be "Ready" ...
	I0603 12:13:04.860077   72964 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-x9fw5" in "kube-system" namespace to be "Ready" ...
	I0603 12:13:04.864947   72964 pod_ready.go:92] pod "coredns-7db6d8ff4d-x9fw5" in "kube-system" namespace has status "Ready":"True"
	I0603 12:13:04.864967   72964 pod_ready.go:81] duration metric: took 4.883476ms for pod "coredns-7db6d8ff4d-x9fw5" in "kube-system" namespace to be "Ready" ...
	I0603 12:13:04.864975   72964 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-725022" in "kube-system" namespace to be "Ready" ...
	I0603 12:13:04.869979   72964 pod_ready.go:92] pod "etcd-embed-certs-725022" in "kube-system" namespace has status "Ready":"True"
	I0603 12:13:04.870000   72964 pod_ready.go:81] duration metric: took 5.018776ms for pod "etcd-embed-certs-725022" in "kube-system" namespace to be "Ready" ...
	I0603 12:13:04.870012   72964 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-725022" in "kube-system" namespace to be "Ready" ...
	I0603 12:13:04.875292   72964 pod_ready.go:92] pod "kube-apiserver-embed-certs-725022" in "kube-system" namespace has status "Ready":"True"
	I0603 12:13:04.875309   72964 pod_ready.go:81] duration metric: took 5.289101ms for pod "kube-apiserver-embed-certs-725022" in "kube-system" namespace to be "Ready" ...
	I0603 12:13:04.875317   72964 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-725022" in "kube-system" namespace to be "Ready" ...
	I0603 12:13:04.883604   72964 pod_ready.go:92] pod "kube-controller-manager-embed-certs-725022" in "kube-system" namespace has status "Ready":"True"
	I0603 12:13:04.883619   72964 pod_ready.go:81] duration metric: took 8.297056ms for pod "kube-controller-manager-embed-certs-725022" in "kube-system" namespace to be "Ready" ...
	I0603 12:13:04.883627   72964 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-7qp6h" in "kube-system" namespace to be "Ready" ...
	I0603 12:13:05.257971   72964 pod_ready.go:92] pod "kube-proxy-7qp6h" in "kube-system" namespace has status "Ready":"True"
	I0603 12:13:05.257994   72964 pod_ready.go:81] duration metric: took 374.360354ms for pod "kube-proxy-7qp6h" in "kube-system" namespace to be "Ready" ...
	I0603 12:13:05.258003   72964 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-725022" in "kube-system" namespace to be "Ready" ...
	I0603 12:13:05.657811   72964 pod_ready.go:92] pod "kube-scheduler-embed-certs-725022" in "kube-system" namespace has status "Ready":"True"
	I0603 12:13:05.657838   72964 pod_ready.go:81] duration metric: took 399.828323ms for pod "kube-scheduler-embed-certs-725022" in "kube-system" namespace to be "Ready" ...
	I0603 12:13:05.657849   72964 pod_ready.go:38] duration metric: took 3.309954137s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
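Note: the "extra waiting" above checks each system-critical pod individually. An illustrative one-liner for the DNS pods uses `kubectl wait`, which blocks until the Ready condition is met or the timeout expires:

	kubectl --context embed-certs-725022 -n kube-system wait pod \
	  -l k8s-app=kube-dns --for=condition=Ready --timeout=6m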
	I0603 12:13:05.657866   72964 api_server.go:52] waiting for apiserver process to appear ...
	I0603 12:13:05.657920   72964 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:13:05.673837   72964 api_server.go:72] duration metric: took 3.599705436s to wait for apiserver process to appear ...
	I0603 12:13:05.673858   72964 api_server.go:88] waiting for apiserver healthz status ...
	I0603 12:13:05.673876   72964 api_server.go:253] Checking apiserver healthz at https://192.168.72.245:8443/healthz ...
	I0603 12:13:05.679549   72964 api_server.go:279] https://192.168.72.245:8443/healthz returned 200:
	ok
	I0603 12:13:05.680688   72964 api_server.go:141] control plane version: v1.30.1
	I0603 12:13:05.680709   72964 api_server.go:131] duration metric: took 6.844232ms to wait for apiserver health ...
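Note: the healthz probe above hits the API server endpoint directly; the same check can be reproduced by hand. The `-k` flag skips certificate verification (the curl call bypasses the kubeconfig credentials), while `kubectl get --raw` goes through them:

	curl -k https://192.168.72.245:8443/healthz          # expected response body: ok
	kubectl --context embed-certs-725022 get --raw /healthz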
	I0603 12:13:05.680717   72964 system_pods.go:43] waiting for kube-system pods to appear ...
	I0603 12:13:05.861416   72964 system_pods.go:59] 9 kube-system pods found
	I0603 12:13:05.861452   72964 system_pods.go:61] "coredns-7db6d8ff4d-4gbj2" [0e46c731-84e4-4cb2-8125-2b61c10916a3] Running
	I0603 12:13:05.861459   72964 system_pods.go:61] "coredns-7db6d8ff4d-x9fw5" [1ed6c0e0-2d13-410f-bdf1-6620fb2503ed] Running
	I0603 12:13:05.861469   72964 system_pods.go:61] "etcd-embed-certs-725022" [7c8767c0-ca82-495c-92fa-759b698ebd0f] Running
	I0603 12:13:05.861475   72964 system_pods.go:61] "kube-apiserver-embed-certs-725022" [fe019ffc-5b0c-4271-a9dd-830262d1edd9] Running
	I0603 12:13:05.861479   72964 system_pods.go:61] "kube-controller-manager-embed-certs-725022" [8bde2240-7021-4ab7-9e51-2a7b921c4bf1] Running
	I0603 12:13:05.861483   72964 system_pods.go:61] "kube-proxy-7qp6h" [7869cd1d-785d-401d-aceb-854cffd63d73] Running
	I0603 12:13:05.861489   72964 system_pods.go:61] "kube-scheduler-embed-certs-725022" [ff93e1d0-8bb2-4026-b9d2-1710dd9f18b7] Running
	I0603 12:13:05.861497   72964 system_pods.go:61] "metrics-server-569cc877fc-jgmbs" [148d8ece-e094-4df9-989a-1bc59a33b7ca] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0603 12:13:05.861504   72964 system_pods.go:61] "storage-provisioner" [cde9aa2d-6a26-4f83-b5df-ae24b22df27a] Running
	I0603 12:13:05.861515   72964 system_pods.go:74] duration metric: took 180.791789ms to wait for pod list to return data ...
	I0603 12:13:05.861526   72964 default_sa.go:34] waiting for default service account to be created ...
	I0603 12:13:06.058059   72964 default_sa.go:45] found service account: "default"
	I0603 12:13:06.058088   72964 default_sa.go:55] duration metric: took 196.551592ms for default service account to be created ...
	I0603 12:13:06.058100   72964 system_pods.go:116] waiting for k8s-apps to be running ...
	I0603 12:13:06.261793   72964 system_pods.go:86] 9 kube-system pods found
	I0603 12:13:06.261828   72964 system_pods.go:89] "coredns-7db6d8ff4d-4gbj2" [0e46c731-84e4-4cb2-8125-2b61c10916a3] Running
	I0603 12:13:06.261835   72964 system_pods.go:89] "coredns-7db6d8ff4d-x9fw5" [1ed6c0e0-2d13-410f-bdf1-6620fb2503ed] Running
	I0603 12:13:06.261840   72964 system_pods.go:89] "etcd-embed-certs-725022" [7c8767c0-ca82-495c-92fa-759b698ebd0f] Running
	I0603 12:13:06.261846   72964 system_pods.go:89] "kube-apiserver-embed-certs-725022" [fe019ffc-5b0c-4271-a9dd-830262d1edd9] Running
	I0603 12:13:06.261853   72964 system_pods.go:89] "kube-controller-manager-embed-certs-725022" [8bde2240-7021-4ab7-9e51-2a7b921c4bf1] Running
	I0603 12:13:06.261860   72964 system_pods.go:89] "kube-proxy-7qp6h" [7869cd1d-785d-401d-aceb-854cffd63d73] Running
	I0603 12:13:06.261866   72964 system_pods.go:89] "kube-scheduler-embed-certs-725022" [ff93e1d0-8bb2-4026-b9d2-1710dd9f18b7] Running
	I0603 12:13:06.261877   72964 system_pods.go:89] "metrics-server-569cc877fc-jgmbs" [148d8ece-e094-4df9-989a-1bc59a33b7ca] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0603 12:13:06.261888   72964 system_pods.go:89] "storage-provisioner" [cde9aa2d-6a26-4f83-b5df-ae24b22df27a] Running
	I0603 12:13:06.261898   72964 system_pods.go:126] duration metric: took 203.791167ms to wait for k8s-apps to be running ...
	I0603 12:13:06.261910   72964 system_svc.go:44] waiting for kubelet service to be running ....
	I0603 12:13:06.261965   72964 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0603 12:13:06.277270   72964 system_svc.go:56] duration metric: took 15.351048ms WaitForService to wait for kubelet
	I0603 12:13:06.277313   72964 kubeadm.go:576] duration metric: took 4.203172406s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0603 12:13:06.277333   72964 node_conditions.go:102] verifying NodePressure condition ...
	I0603 12:13:06.458480   72964 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0603 12:13:06.458508   72964 node_conditions.go:123] node cpu capacity is 2
	I0603 12:13:06.458519   72964 node_conditions.go:105] duration metric: took 181.181522ms to run NodePressure ...
	I0603 12:13:06.458530   72964 start.go:240] waiting for startup goroutines ...
	I0603 12:13:06.458536   72964 start.go:245] waiting for cluster config update ...
	I0603 12:13:06.458546   72964 start.go:254] writing updated cluster config ...
	I0603 12:13:06.458796   72964 ssh_runner.go:195] Run: rm -f paused
	I0603 12:13:06.511692   72964 start.go:600] kubectl: 1.30.1, cluster: 1.30.1 (minor skew: 0)
	I0603 12:13:06.513617   72964 out.go:177] * Done! kubectl is now configured to use "embed-certs-725022" cluster and "default" namespace by default
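Note: the embed-certs-725022 profile is fully up at this point, with kubectl pointed at it by default. Typical follow-up sanity checks (nothing profile-specific beyond the names already in the log):

	kubectl get nodes -o wide
	kubectl -n kube-system get pods
	minikube profile list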
	I0603 12:13:32.215819   73662 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0603 12:13:32.216031   73662 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0603 12:13:32.216075   73662 kubeadm.go:309] 
	I0603 12:13:32.216149   73662 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0603 12:13:32.216254   73662 kubeadm.go:309] 		timed out waiting for the condition
	I0603 12:13:32.216284   73662 kubeadm.go:309] 
	I0603 12:13:32.216349   73662 kubeadm.go:309] 	This error is likely caused by:
	I0603 12:13:32.216394   73662 kubeadm.go:309] 		- The kubelet is not running
	I0603 12:13:32.216554   73662 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0603 12:13:32.216577   73662 kubeadm.go:309] 
	I0603 12:13:32.216688   73662 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0603 12:13:32.216722   73662 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0603 12:13:32.216764   73662 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0603 12:13:32.216773   73662 kubeadm.go:309] 
	I0603 12:13:32.216888   73662 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0603 12:13:32.217006   73662 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0603 12:13:32.217031   73662 kubeadm.go:309] 
	I0603 12:13:32.217165   73662 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0603 12:13:32.217278   73662 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0603 12:13:32.217412   73662 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0603 12:13:32.217594   73662 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0603 12:13:32.217618   73662 kubeadm.go:309] 
	I0603 12:13:32.218376   73662 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0603 12:13:32.218449   73662 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0603 12:13:32.218578   73662 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	W0603 12:13:32.218719   73662 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
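Note: for the v1.20.0 cluster above (process 73662) the kubelet never answers on 127.0.0.1:10248, so kubeadm times out in wait-control-plane. Beyond the `systemctl status kubelet` / `journalctl -xeu kubelet` steps kubeadm already suggests, one common culprit with CRI-O is a cgroup-driver mismatch between the kubelet and the runtime. A hedged check, using the kubelet config path this log writes and assuming `crio config` prints the effective CRI-O configuration:

	# kubelet side (config written to /var/lib/kubelet/config.yaml above):
	sudo grep -i cgroupDriver /var/lib/kubelet/config.yaml
	# CRI-O side:
	sudo crio config 2>/dev/null | grep -i cgroup_manager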
	
	I0603 12:13:32.218776   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0603 12:13:32.678357   73662 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0603 12:13:32.693276   73662 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0603 12:13:32.702964   73662 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0603 12:13:32.702986   73662 kubeadm.go:156] found existing configuration files:
	
	I0603 12:13:32.703025   73662 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0603 12:13:32.712508   73662 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0603 12:13:32.712555   73662 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0603 12:13:32.722219   73662 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0603 12:13:32.731648   73662 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0603 12:13:32.731702   73662 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0603 12:13:32.741195   73662 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0603 12:13:32.750711   73662 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0603 12:13:32.750764   73662 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0603 12:13:32.760654   73662 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0603 12:13:32.769838   73662 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0603 12:13:32.769881   73662 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
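Note: the grep/rm sequence above (kubeadm.go:162) checks whether each pre-existing kubeconfig still points at https://control-plane.minikube.internal:8443 and removes it otherwise; since `kubeadm reset` already deleted the files, every grep exits with status 2 and each file is simply removed again. The logic is roughly:

	for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
	  sudo grep -q "https://control-plane.minikube.internal:8443" "/etc/kubernetes/$f" \
	    || sudo rm -f "/etc/kubernetes/$f"
	done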
	I0603 12:13:32.780973   73662 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0603 12:13:32.850830   73662 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0603 12:13:32.850883   73662 kubeadm.go:309] [preflight] Running pre-flight checks
	I0603 12:13:32.999201   73662 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0603 12:13:32.999328   73662 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0603 12:13:32.999428   73662 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0603 12:13:33.184771   73662 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0603 12:13:33.187327   73662 out.go:204]   - Generating certificates and keys ...
	I0603 12:13:33.187398   73662 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0603 12:13:33.187487   73662 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0603 12:13:33.187586   73662 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0603 12:13:33.187682   73662 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0603 12:13:33.187788   73662 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0603 12:13:33.187887   73662 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0603 12:13:33.187981   73662 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0603 12:13:33.188107   73662 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0603 12:13:33.188522   73662 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0603 12:13:33.188801   73662 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0603 12:13:33.188880   73662 kubeadm.go:309] [certs] Using the existing "sa" key
	I0603 12:13:33.188991   73662 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0603 12:13:33.334289   73662 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0603 12:13:33.523806   73662 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0603 12:13:33.699531   73662 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0603 12:13:33.750555   73662 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0603 12:13:33.769976   73662 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0603 12:13:33.770924   73662 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0603 12:13:33.770986   73662 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0603 12:13:33.921095   73662 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0603 12:13:33.923915   73662 out.go:204]   - Booting up control plane ...
	I0603 12:13:33.924071   73662 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0603 12:13:33.930998   73662 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0603 12:13:33.934088   73662 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0603 12:13:33.935783   73662 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0603 12:13:33.939727   73662 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0603 12:14:13.940542   73662 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0603 12:14:13.940993   73662 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0603 12:14:13.941324   73662 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0603 12:14:18.941485   73662 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0603 12:14:18.941730   73662 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0603 12:14:28.942021   73662 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0603 12:14:28.942229   73662 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0603 12:14:48.942823   73662 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0603 12:14:48.943115   73662 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0603 12:15:28.944455   73662 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0603 12:15:28.944758   73662 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0603 12:15:28.944781   73662 kubeadm.go:309] 
	I0603 12:15:28.944835   73662 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0603 12:15:28.944914   73662 kubeadm.go:309] 		timed out waiting for the condition
	I0603 12:15:28.944925   73662 kubeadm.go:309] 
	I0603 12:15:28.944965   73662 kubeadm.go:309] 	This error is likely caused by:
	I0603 12:15:28.945008   73662 kubeadm.go:309] 		- The kubelet is not running
	I0603 12:15:28.945152   73662 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0603 12:15:28.945168   73662 kubeadm.go:309] 
	I0603 12:15:28.945322   73662 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0603 12:15:28.945378   73662 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0603 12:15:28.945423   73662 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0603 12:15:28.945433   73662 kubeadm.go:309] 
	I0603 12:15:28.945568   73662 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0603 12:15:28.945695   73662 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0603 12:15:28.945717   73662 kubeadm.go:309] 
	I0603 12:15:28.945883   73662 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0603 12:15:28.946014   73662 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0603 12:15:28.946123   73662 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0603 12:15:28.946234   73662 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0603 12:15:28.946263   73662 kubeadm.go:309] 
	I0603 12:15:28.947236   73662 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0603 12:15:28.947323   73662 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0603 12:15:28.947455   73662 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	I0603 12:15:28.947531   73662 kubeadm.go:393] duration metric: took 7m57.88734097s to StartCluster
	I0603 12:15:28.947585   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 12:15:28.947638   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 12:15:28.993664   73662 cri.go:89] found id: ""
	I0603 12:15:28.993694   73662 logs.go:276] 0 containers: []
	W0603 12:15:28.993705   73662 logs.go:278] No container was found matching "kube-apiserver"
	I0603 12:15:28.993712   73662 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 12:15:28.993774   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 12:15:29.030686   73662 cri.go:89] found id: ""
	I0603 12:15:29.030720   73662 logs.go:276] 0 containers: []
	W0603 12:15:29.030730   73662 logs.go:278] No container was found matching "etcd"
	I0603 12:15:29.030738   73662 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 12:15:29.030803   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 12:15:29.067047   73662 cri.go:89] found id: ""
	I0603 12:15:29.067076   73662 logs.go:276] 0 containers: []
	W0603 12:15:29.067086   73662 logs.go:278] No container was found matching "coredns"
	I0603 12:15:29.067092   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 12:15:29.067154   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 12:15:29.107392   73662 cri.go:89] found id: ""
	I0603 12:15:29.107416   73662 logs.go:276] 0 containers: []
	W0603 12:15:29.107424   73662 logs.go:278] No container was found matching "kube-scheduler"
	I0603 12:15:29.107430   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 12:15:29.107483   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 12:15:29.159886   73662 cri.go:89] found id: ""
	I0603 12:15:29.159916   73662 logs.go:276] 0 containers: []
	W0603 12:15:29.159925   73662 logs.go:278] No container was found matching "kube-proxy"
	I0603 12:15:29.159934   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 12:15:29.159994   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 12:15:29.195187   73662 cri.go:89] found id: ""
	I0603 12:15:29.195218   73662 logs.go:276] 0 containers: []
	W0603 12:15:29.195229   73662 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 12:15:29.195236   73662 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 12:15:29.195295   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 12:15:29.233622   73662 cri.go:89] found id: ""
	I0603 12:15:29.233648   73662 logs.go:276] 0 containers: []
	W0603 12:15:29.233656   73662 logs.go:278] No container was found matching "kindnet"
	I0603 12:15:29.233662   73662 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 12:15:29.233717   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 12:15:29.272849   73662 cri.go:89] found id: ""
	I0603 12:15:29.272874   73662 logs.go:276] 0 containers: []
	W0603 12:15:29.272882   73662 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 12:15:29.272891   73662 logs.go:123] Gathering logs for CRI-O ...
	I0603 12:15:29.272901   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 12:15:29.383220   73662 logs.go:123] Gathering logs for container status ...
	I0603 12:15:29.383256   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 12:15:29.424045   73662 logs.go:123] Gathering logs for kubelet ...
	I0603 12:15:29.424076   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 12:15:29.475712   73662 logs.go:123] Gathering logs for dmesg ...
	I0603 12:15:29.475743   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 12:15:29.489841   73662 logs.go:123] Gathering logs for describe nodes ...
	I0603 12:15:29.489868   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 12:15:29.572988   73662 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	W0603 12:15:29.573030   73662 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0603 12:15:29.573068   73662 out.go:239] * 
	W0603 12:15:29.573117   73662 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0603 12:15:29.573138   73662 out.go:239] * 
	W0603 12:15:29.573869   73662 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0603 12:15:29.577458   73662 out.go:177] 
	W0603 12:15:29.578659   73662 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0603 12:15:29.578700   73662 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0603 12:15:29.578716   73662 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0603 12:15:29.580176   73662 out.go:177] 
	
	
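	The failure captured above is the kubelet never answering on http://localhost:10248/healthz, so kubeadm's wait-control-plane phase times out and minikube exits with K8S_KUBELET_NOT_RUNNING. A minimal follow-up sketch, assembled only from the commands the log itself suggests, is shown below; the profile name <profile> is a placeholder (not taken from this run), and the cgroup-driver flag is the suggestion printed above, so treat it as a starting point rather than a confirmed fix.

	# Check whether the kubelet is running on the node and read its recent journal entries
	systemctl status kubelet
	journalctl -xeu kubelet

	# List control-plane containers known to CRI-O (this run found none), then inspect a failing one
	crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause
	crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID

	# Retry the start with the kubelet cgroup driver the suggestion above points to
	minikube start -p <profile> --extra-config=kubelet.cgroup-driver=systemd
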
	==> CRI-O <==
	Jun 03 12:21:22 no-preload-602118 crio[725]: time="2024-06-03 12:21:22.211448633Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=0f085857-667f-4a33-8d8a-2339c71371ed name=/runtime.v1.RuntimeService/Version
	Jun 03 12:21:22 no-preload-602118 crio[725]: time="2024-06-03 12:21:22.212534201Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:nil,}" file="otel-collector/interceptors.go:62" id=2204ccc9-b4f8-4e40-99dd-2e4cacaf2ecb name=/runtime.v1.RuntimeService/ListPodSandbox
	Jun 03 12:21:22 no-preload-602118 crio[725]: time="2024-06-03 12:21:22.212783873Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:ba8b8507812c9b83159e4e442a4862fd3c73a0fc2cef5faa48b918d601b91794,Metadata:&PodSandboxMetadata{Name:metrics-server-569cc877fc-zpzbw,Uid:b28cb265-532b-41ea-a242-001a85174a35,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1717416738162377849,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: metrics-server-569cc877fc-zpzbw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b28cb265-532b-41ea-a242-001a85174a35,k8s-app: metrics-server,pod-template-hash: 569cc877fc,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-06-03T12:12:17.832780895Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:2f9f6e560130ad503a3fb16cd826de68b079d3d261c3ffd9adc7f38a9347fae3,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:9d9e7c2b-91a9-4394-8a08-a2c076d4b42d,Na
mespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1717416737718284917,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9d9e7c2b-91a9-4394-8a08-a2c076d4b42d,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volu
mes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-06-03T12:12:17.406403747Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:0ca5bf52da27342b0de4a904a42d2aa48c23283ba6c2596613b1dafa6930796d,Metadata:&PodSandboxMetadata{Name:coredns-7db6d8ff4d-dwptw,Uid:7a0437fe-8e83-4acc-a92a-af29bf06db93,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1717416737080982399,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7db6d8ff4d-dwptw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7a0437fe-8e83-4acc-a92a-af29bf06db93,k8s-app: kube-dns,pod-template-hash: 7db6d8ff4d,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-06-03T12:12:16.758908331Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:220fbd721c9026875219d04619cd68d29f31d0a7201cb29af349244390275c37,Metadata:&PodSandboxMetadata{Name:coredns-7db6d8ff4d-5gmj5,Uid:474da426-9414-4a30-
8b19-14e555e192de,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1717416737027959604,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7db6d8ff4d-5gmj5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 474da426-9414-4a30-8b19-14e555e192de,k8s-app: kube-dns,pod-template-hash: 7db6d8ff4d,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-06-03T12:12:16.703713600Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:036d89d7ad7f4e90bb88f12b72cf2c85bda55787a8ea5c62e674afc2975e95a5,Metadata:&PodSandboxMetadata{Name:kube-proxy-tfxkl,Uid:d6502635-478f-443c-8186-ab0616fcf4ac,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1717416736906317332,Labels:map[string]string{controller-revision-hash: 5dbf89796d,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-tfxkl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d6502635-478f-443c-8186-ab0616fcf4ac,k8s-app: kube-proxy,pod-temp
late-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-06-03T12:12:16.578134622Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:a448b605ab5ec3bbd85200834bdb578a6d5e0e13e90c44098ef27993c0ee4975,Metadata:&PodSandboxMetadata{Name:kube-scheduler-no-preload-602118,Uid:17345709021d24cb267b0ce4add83645,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1717416717223913298,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-no-preload-602118,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 17345709021d24cb267b0ce4add83645,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 17345709021d24cb267b0ce4add83645,kubernetes.io/config.seen: 2024-06-03T12:11:56.773544311Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:769d1926d74f4c8afaa808a0c440b0bd180ec0aea00d6a5e5e6713612b2fd60b,Metadata:&PodSandboxMetadata{Name:kube-apiserver-no
-preload-602118,Uid:11c3fa6ec0cc81f29fe8e779d24c5099,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1717416717220519179,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-no-preload-602118,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 11c3fa6ec0cc81f29fe8e779d24c5099,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.50.245:8443,kubernetes.io/config.hash: 11c3fa6ec0cc81f29fe8e779d24c5099,kubernetes.io/config.seen: 2024-06-03T12:11:56.773542139Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:2c6440b78a8dd4e2e77af45787f6078df707872b27812b40bbac493b2053c406,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-no-preload-602118,Uid:a568811ec88d614b45e242281e5693a1,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1717416717218054716,Labels:map[string]string{component: kube-controller-manager,io
.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-no-preload-602118,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a568811ec88d614b45e242281e5693a1,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: a568811ec88d614b45e242281e5693a1,kubernetes.io/config.seen: 2024-06-03T12:11:56.773543213Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:86c744cb98f883a17a7004ff42bc11b8b8552a59f6a891044c0212e97dcddc61,Metadata:&PodSandboxMetadata{Name:etcd-no-preload-602118,Uid:ae3562eee63d85017986173f61212ec0,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1717416717217458727,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-no-preload-602118,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ae3562eee63d85017986173f61212ec0,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.50.245:237
9,kubernetes.io/config.hash: ae3562eee63d85017986173f61212ec0,kubernetes.io/config.seen: 2024-06-03T12:11:56.773538398Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=2204ccc9-b4f8-4e40-99dd-2e4cacaf2ecb name=/runtime.v1.RuntimeService/ListPodSandbox
	Jun 03 12:21:22 no-preload-602118 crio[725]: time="2024-06-03 12:21:22.213674696Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=762a83be-3a18-4e1c-83c6-0bef28cc3d21 name=/runtime.v1.RuntimeService/ListContainers
	Jun 03 12:21:22 no-preload-602118 crio[725]: time="2024-06-03 12:21:22.213751668Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=762a83be-3a18-4e1c-83c6-0bef28cc3d21 name=/runtime.v1.RuntimeService/ListContainers
	Jun 03 12:21:22 no-preload-602118 crio[725]: time="2024-06-03 12:21:22.214469289Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:fb248b003c8613b37b12ff79e1f222cab5c038f18c53dd238b97760ebdd1686a,PodSandboxId:2f9f6e560130ad503a3fb16cd826de68b079d3d261c3ffd9adc7f38a9347fae3,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1717416738073641787,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9d9e7c2b-91a9-4394-8a08-a2c076d4b42d,},Annotations:map[string]string{io.kubernetes.container.hash: cf055258,io.kubernetes.container.restartCount: 0,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e9816663d632930c457f52b65f3b813075b3e6e49e03572471737d14171a2bef,PodSandboxId:0ca5bf52da27342b0de4a904a42d2aa48c23283ba6c2596613b1dafa6930796d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1717416738154688772,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-dwptw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7a0437fe-8e83-4acc-a92a-af29bf06db93,},Annotations:map[string]string{io.kubernetes.container.hash: 2dc52ed7,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP
\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:584c23eaff7fc97fc20866acace2641a918972ddde4bc15dd68a27fbc2575e93,PodSandboxId:220fbd721c9026875219d04619cd68d29f31d0a7201cb29af349244390275c37,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1717416737985994074,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-5gmj5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 47
4da426-9414-4a30-8b19-14e555e192de,},Annotations:map[string]string{io.kubernetes.container.hash: 4251bef0,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0f95b604096bb9c35ddcde873a44214fcf5bb4a1918d3767b43aeba25088ceaf,PodSandboxId:036d89d7ad7f4e90bb88f12b72cf2c85bda55787a8ea5c62e674afc2975e95a5,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:
1717416737288481480,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-tfxkl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d6502635-478f-443c-8186-ab0616fcf4ac,},Annotations:map[string]string{io.kubernetes.container.hash: c6c54951,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:998c79f6f292c8080164980650e8a76e11e68daf494b4c6c492f744b50266070,PodSandboxId:86c744cb98f883a17a7004ff42bc11b8b8552a59f6a891044c0212e97dcddc61,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1717416717516790929,Labels:map[string]s
tring{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-602118,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ae3562eee63d85017986173f61212ec0,},Annotations:map[string]string{io.kubernetes.container.hash: 60aa7df7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f7a1aa13e70aab48903fd4acfe8e726e044c09fd249ad876985082b7d2ce28dd,PodSandboxId:a448b605ab5ec3bbd85200834bdb578a6d5e0e13e90c44098ef27993c0ee4975,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1717416717487116516,Labels:map[string]string{io.kubernetes.container.nam
e: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-602118,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 17345709021d24cb267b0ce4add83645,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1d6486b810f4fea2b78f7e1b4375f6351128af8f4f98ae77b3171090ee6ba3e9,PodSandboxId:2c6440b78a8dd4e2e77af45787f6078df707872b27812b40bbac493b2053c406,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1717416717452361331,Labels:map[string]string{io.kubernetes.container.name: k
ube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-602118,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a568811ec88d614b45e242281e5693a1,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6e010bfa69d81ba01cf7bcf124df98ca87e190ccc661236d4a419343715a3ae0,PodSandboxId:769d1926d74f4c8afaa808a0c440b0bd180ec0aea00d6a5e5e6713612b2fd60b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1717416717376494113,Labels:map[string]string{io.kubernetes.container.na
me: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-602118,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 11c3fa6ec0cc81f29fe8e779d24c5099,},Annotations:map[string]string{io.kubernetes.container.hash: ad82f0a8,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=762a83be-3a18-4e1c-83c6-0bef28cc3d21 name=/runtime.v1.RuntimeService/ListContainers
	Jun 03 12:21:22 no-preload-602118 crio[725]: time="2024-06-03 12:21:22.215629310Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=4046247d-4be8-48eb-961c-ca60d6a41c43 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 03 12:21:22 no-preload-602118 crio[725]: time="2024-06-03 12:21:22.216062972Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1717417282216043468,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:99934,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=4046247d-4be8-48eb-961c-ca60d6a41c43 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 03 12:21:22 no-preload-602118 crio[725]: time="2024-06-03 12:21:22.216573376Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=2beeaf78-1010-40f2-b977-20cf9c917301 name=/runtime.v1.RuntimeService/ListContainers
	Jun 03 12:21:22 no-preload-602118 crio[725]: time="2024-06-03 12:21:22.216623055Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=2beeaf78-1010-40f2-b977-20cf9c917301 name=/runtime.v1.RuntimeService/ListContainers
	Jun 03 12:21:22 no-preload-602118 crio[725]: time="2024-06-03 12:21:22.216800035Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:fb248b003c8613b37b12ff79e1f222cab5c038f18c53dd238b97760ebdd1686a,PodSandboxId:2f9f6e560130ad503a3fb16cd826de68b079d3d261c3ffd9adc7f38a9347fae3,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1717416738073641787,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9d9e7c2b-91a9-4394-8a08-a2c076d4b42d,},Annotations:map[string]string{io.kubernetes.container.hash: cf055258,io.kubernetes.container.restartCount: 0,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e9816663d632930c457f52b65f3b813075b3e6e49e03572471737d14171a2bef,PodSandboxId:0ca5bf52da27342b0de4a904a42d2aa48c23283ba6c2596613b1dafa6930796d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1717416738154688772,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-dwptw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7a0437fe-8e83-4acc-a92a-af29bf06db93,},Annotations:map[string]string{io.kubernetes.container.hash: 2dc52ed7,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP
\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:584c23eaff7fc97fc20866acace2641a918972ddde4bc15dd68a27fbc2575e93,PodSandboxId:220fbd721c9026875219d04619cd68d29f31d0a7201cb29af349244390275c37,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1717416737985994074,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-5gmj5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 47
4da426-9414-4a30-8b19-14e555e192de,},Annotations:map[string]string{io.kubernetes.container.hash: 4251bef0,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0f95b604096bb9c35ddcde873a44214fcf5bb4a1918d3767b43aeba25088ceaf,PodSandboxId:036d89d7ad7f4e90bb88f12b72cf2c85bda55787a8ea5c62e674afc2975e95a5,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:
1717416737288481480,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-tfxkl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d6502635-478f-443c-8186-ab0616fcf4ac,},Annotations:map[string]string{io.kubernetes.container.hash: c6c54951,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:998c79f6f292c8080164980650e8a76e11e68daf494b4c6c492f744b50266070,PodSandboxId:86c744cb98f883a17a7004ff42bc11b8b8552a59f6a891044c0212e97dcddc61,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1717416717516790929,Labels:map[string]s
tring{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-602118,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ae3562eee63d85017986173f61212ec0,},Annotations:map[string]string{io.kubernetes.container.hash: 60aa7df7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f7a1aa13e70aab48903fd4acfe8e726e044c09fd249ad876985082b7d2ce28dd,PodSandboxId:a448b605ab5ec3bbd85200834bdb578a6d5e0e13e90c44098ef27993c0ee4975,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1717416717487116516,Labels:map[string]string{io.kubernetes.container.nam
e: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-602118,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 17345709021d24cb267b0ce4add83645,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1d6486b810f4fea2b78f7e1b4375f6351128af8f4f98ae77b3171090ee6ba3e9,PodSandboxId:2c6440b78a8dd4e2e77af45787f6078df707872b27812b40bbac493b2053c406,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1717416717452361331,Labels:map[string]string{io.kubernetes.container.name: k
ube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-602118,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a568811ec88d614b45e242281e5693a1,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6e010bfa69d81ba01cf7bcf124df98ca87e190ccc661236d4a419343715a3ae0,PodSandboxId:769d1926d74f4c8afaa808a0c440b0bd180ec0aea00d6a5e5e6713612b2fd60b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1717416717376494113,Labels:map[string]string{io.kubernetes.container.na
me: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-602118,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 11c3fa6ec0cc81f29fe8e779d24c5099,},Annotations:map[string]string{io.kubernetes.container.hash: ad82f0a8,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=2beeaf78-1010-40f2-b977-20cf9c917301 name=/runtime.v1.RuntimeService/ListContainers
	Jun 03 12:21:22 no-preload-602118 crio[725]: time="2024-06-03 12:21:22.270145857Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=c3f49ac1-4c05-4672-bd2a-9e934f5bf158 name=/runtime.v1.RuntimeService/Version
	Jun 03 12:21:22 no-preload-602118 crio[725]: time="2024-06-03 12:21:22.270258848Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=c3f49ac1-4c05-4672-bd2a-9e934f5bf158 name=/runtime.v1.RuntimeService/Version
	Jun 03 12:21:22 no-preload-602118 crio[725]: time="2024-06-03 12:21:22.271608700Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=0d2cfd3f-7d6b-49dd-ae39-8339588112bc name=/runtime.v1.ImageService/ImageFsInfo
	Jun 03 12:21:22 no-preload-602118 crio[725]: time="2024-06-03 12:21:22.272059645Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1717417282272036390,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:99934,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=0d2cfd3f-7d6b-49dd-ae39-8339588112bc name=/runtime.v1.ImageService/ImageFsInfo
	Jun 03 12:21:22 no-preload-602118 crio[725]: time="2024-06-03 12:21:22.273201709Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d10dcff7-480a-43ca-812e-8f2d158281ee name=/runtime.v1.RuntimeService/ListContainers
	Jun 03 12:21:22 no-preload-602118 crio[725]: time="2024-06-03 12:21:22.273306187Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d10dcff7-480a-43ca-812e-8f2d158281ee name=/runtime.v1.RuntimeService/ListContainers
	Jun 03 12:21:22 no-preload-602118 crio[725]: time="2024-06-03 12:21:22.273741670Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:fb248b003c8613b37b12ff79e1f222cab5c038f18c53dd238b97760ebdd1686a,PodSandboxId:2f9f6e560130ad503a3fb16cd826de68b079d3d261c3ffd9adc7f38a9347fae3,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1717416738073641787,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9d9e7c2b-91a9-4394-8a08-a2c076d4b42d,},Annotations:map[string]string{io.kubernetes.container.hash: cf055258,io.kubernetes.container.restartCount: 0,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e9816663d632930c457f52b65f3b813075b3e6e49e03572471737d14171a2bef,PodSandboxId:0ca5bf52da27342b0de4a904a42d2aa48c23283ba6c2596613b1dafa6930796d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1717416738154688772,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-dwptw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7a0437fe-8e83-4acc-a92a-af29bf06db93,},Annotations:map[string]string{io.kubernetes.container.hash: 2dc52ed7,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP
\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:584c23eaff7fc97fc20866acace2641a918972ddde4bc15dd68a27fbc2575e93,PodSandboxId:220fbd721c9026875219d04619cd68d29f31d0a7201cb29af349244390275c37,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1717416737985994074,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-5gmj5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 47
4da426-9414-4a30-8b19-14e555e192de,},Annotations:map[string]string{io.kubernetes.container.hash: 4251bef0,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0f95b604096bb9c35ddcde873a44214fcf5bb4a1918d3767b43aeba25088ceaf,PodSandboxId:036d89d7ad7f4e90bb88f12b72cf2c85bda55787a8ea5c62e674afc2975e95a5,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:
1717416737288481480,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-tfxkl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d6502635-478f-443c-8186-ab0616fcf4ac,},Annotations:map[string]string{io.kubernetes.container.hash: c6c54951,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:998c79f6f292c8080164980650e8a76e11e68daf494b4c6c492f744b50266070,PodSandboxId:86c744cb98f883a17a7004ff42bc11b8b8552a59f6a891044c0212e97dcddc61,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1717416717516790929,Labels:map[string]s
tring{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-602118,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ae3562eee63d85017986173f61212ec0,},Annotations:map[string]string{io.kubernetes.container.hash: 60aa7df7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f7a1aa13e70aab48903fd4acfe8e726e044c09fd249ad876985082b7d2ce28dd,PodSandboxId:a448b605ab5ec3bbd85200834bdb578a6d5e0e13e90c44098ef27993c0ee4975,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1717416717487116516,Labels:map[string]string{io.kubernetes.container.nam
e: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-602118,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 17345709021d24cb267b0ce4add83645,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1d6486b810f4fea2b78f7e1b4375f6351128af8f4f98ae77b3171090ee6ba3e9,PodSandboxId:2c6440b78a8dd4e2e77af45787f6078df707872b27812b40bbac493b2053c406,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1717416717452361331,Labels:map[string]string{io.kubernetes.container.name: k
ube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-602118,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a568811ec88d614b45e242281e5693a1,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6e010bfa69d81ba01cf7bcf124df98ca87e190ccc661236d4a419343715a3ae0,PodSandboxId:769d1926d74f4c8afaa808a0c440b0bd180ec0aea00d6a5e5e6713612b2fd60b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1717416717376494113,Labels:map[string]string{io.kubernetes.container.na
me: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-602118,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 11c3fa6ec0cc81f29fe8e779d24c5099,},Annotations:map[string]string{io.kubernetes.container.hash: ad82f0a8,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=d10dcff7-480a-43ca-812e-8f2d158281ee name=/runtime.v1.RuntimeService/ListContainers
	Jun 03 12:21:22 no-preload-602118 crio[725]: time="2024-06-03 12:21:22.326658168Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=fe28484c-8a73-4d6e-b4b6-57739d5b1a6b name=/runtime.v1.RuntimeService/Version
	Jun 03 12:21:22 no-preload-602118 crio[725]: time="2024-06-03 12:21:22.326737076Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=fe28484c-8a73-4d6e-b4b6-57739d5b1a6b name=/runtime.v1.RuntimeService/Version
	Jun 03 12:21:22 no-preload-602118 crio[725]: time="2024-06-03 12:21:22.328679432Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=52272d39-6540-4da8-bbe9-71dbd977b543 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 03 12:21:22 no-preload-602118 crio[725]: time="2024-06-03 12:21:22.329433282Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1717417282329376801,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:99934,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=52272d39-6540-4da8-bbe9-71dbd977b543 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 03 12:21:22 no-preload-602118 crio[725]: time="2024-06-03 12:21:22.330412819Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=9f42e24b-7f2d-4395-8d33-63f8d38949a3 name=/runtime.v1.RuntimeService/ListContainers
	Jun 03 12:21:22 no-preload-602118 crio[725]: time="2024-06-03 12:21:22.330495927Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=9f42e24b-7f2d-4395-8d33-63f8d38949a3 name=/runtime.v1.RuntimeService/ListContainers
	Jun 03 12:21:22 no-preload-602118 crio[725]: time="2024-06-03 12:21:22.330793449Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:fb248b003c8613b37b12ff79e1f222cab5c038f18c53dd238b97760ebdd1686a,PodSandboxId:2f9f6e560130ad503a3fb16cd826de68b079d3d261c3ffd9adc7f38a9347fae3,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1717416738073641787,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9d9e7c2b-91a9-4394-8a08-a2c076d4b42d,},Annotations:map[string]string{io.kubernetes.container.hash: cf055258,io.kubernetes.container.restartCount: 0,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e9816663d632930c457f52b65f3b813075b3e6e49e03572471737d14171a2bef,PodSandboxId:0ca5bf52da27342b0de4a904a42d2aa48c23283ba6c2596613b1dafa6930796d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1717416738154688772,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-dwptw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7a0437fe-8e83-4acc-a92a-af29bf06db93,},Annotations:map[string]string{io.kubernetes.container.hash: 2dc52ed7,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP
\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:584c23eaff7fc97fc20866acace2641a918972ddde4bc15dd68a27fbc2575e93,PodSandboxId:220fbd721c9026875219d04619cd68d29f31d0a7201cb29af349244390275c37,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1717416737985994074,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-5gmj5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 47
4da426-9414-4a30-8b19-14e555e192de,},Annotations:map[string]string{io.kubernetes.container.hash: 4251bef0,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0f95b604096bb9c35ddcde873a44214fcf5bb4a1918d3767b43aeba25088ceaf,PodSandboxId:036d89d7ad7f4e90bb88f12b72cf2c85bda55787a8ea5c62e674afc2975e95a5,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:
1717416737288481480,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-tfxkl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d6502635-478f-443c-8186-ab0616fcf4ac,},Annotations:map[string]string{io.kubernetes.container.hash: c6c54951,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:998c79f6f292c8080164980650e8a76e11e68daf494b4c6c492f744b50266070,PodSandboxId:86c744cb98f883a17a7004ff42bc11b8b8552a59f6a891044c0212e97dcddc61,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1717416717516790929,Labels:map[string]s
tring{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-602118,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ae3562eee63d85017986173f61212ec0,},Annotations:map[string]string{io.kubernetes.container.hash: 60aa7df7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f7a1aa13e70aab48903fd4acfe8e726e044c09fd249ad876985082b7d2ce28dd,PodSandboxId:a448b605ab5ec3bbd85200834bdb578a6d5e0e13e90c44098ef27993c0ee4975,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1717416717487116516,Labels:map[string]string{io.kubernetes.container.nam
e: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-602118,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 17345709021d24cb267b0ce4add83645,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1d6486b810f4fea2b78f7e1b4375f6351128af8f4f98ae77b3171090ee6ba3e9,PodSandboxId:2c6440b78a8dd4e2e77af45787f6078df707872b27812b40bbac493b2053c406,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1717416717452361331,Labels:map[string]string{io.kubernetes.container.name: k
ube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-602118,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a568811ec88d614b45e242281e5693a1,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6e010bfa69d81ba01cf7bcf124df98ca87e190ccc661236d4a419343715a3ae0,PodSandboxId:769d1926d74f4c8afaa808a0c440b0bd180ec0aea00d6a5e5e6713612b2fd60b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1717416717376494113,Labels:map[string]string{io.kubernetes.container.na
me: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-602118,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 11c3fa6ec0cc81f29fe8e779d24c5099,},Annotations:map[string]string{io.kubernetes.container.hash: ad82f0a8,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=9f42e24b-7f2d-4395-8d33-63f8d38949a3 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	e9816663d6329       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   9 minutes ago       Running             coredns                   0                   0ca5bf52da273       coredns-7db6d8ff4d-dwptw
	fb248b003c861       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   9 minutes ago       Running             storage-provisioner       0                   2f9f6e560130a       storage-provisioner
	584c23eaff7fc       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   9 minutes ago       Running             coredns                   0                   220fbd721c902       coredns-7db6d8ff4d-5gmj5
	0f95b604096bb       747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd   9 minutes ago       Running             kube-proxy                0                   036d89d7ad7f4       kube-proxy-tfxkl
	998c79f6f292c       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899   9 minutes ago       Running             etcd                      2                   86c744cb98f88       etcd-no-preload-602118
	f7a1aa13e70aa       a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035   9 minutes ago       Running             kube-scheduler            2                   a448b605ab5ec       kube-scheduler-no-preload-602118
	1d6486b810f4f       25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c   9 minutes ago       Running             kube-controller-manager   2                   2c6440b78a8dd       kube-controller-manager-no-preload-602118
	6e010bfa69d81       91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a   9 minutes ago       Running             kube-apiserver            2                   769d1926d74f4       kube-apiserver-no-preload-602118
	
	
	==> coredns [584c23eaff7fc97fc20866acace2641a918972ddde4bc15dd68a27fbc2575e93] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> coredns [e9816663d632930c457f52b65f3b813075b3e6e49e03572471737d14171a2bef] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> describe nodes <==
	Name:               no-preload-602118
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-602118
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=599070631c2216ebc936292d491e4fe10e15b9d8
	                    minikube.k8s.io/name=no-preload-602118
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_06_03T12_12_03_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 03 Jun 2024 12:12:00 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-602118
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 03 Jun 2024 12:21:13 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 03 Jun 2024 12:17:28 +0000   Mon, 03 Jun 2024 12:11:58 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 03 Jun 2024 12:17:28 +0000   Mon, 03 Jun 2024 12:11:58 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 03 Jun 2024 12:17:28 +0000   Mon, 03 Jun 2024 12:11:58 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 03 Jun 2024 12:17:28 +0000   Mon, 03 Jun 2024 12:12:00 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.50.245
	  Hostname:    no-preload-602118
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 e98a529f012d4a0988904e7d0cb7a70c
	  System UUID:                e98a529f-012d-4a09-8890-4e7d0cb7a70c
	  Boot ID:                    8ea9d02b-256a-4d4f-a148-b6b987af69da
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.1
	  Kube-Proxy Version:         v1.30.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7db6d8ff4d-5gmj5                     100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m6s
	  kube-system                 coredns-7db6d8ff4d-dwptw                     100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m6s
	  kube-system                 etcd-no-preload-602118                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         9m19s
	  kube-system                 kube-apiserver-no-preload-602118             250m (12%)    0 (0%)      0 (0%)           0 (0%)         9m19s
	  kube-system                 kube-controller-manager-no-preload-602118    200m (10%)    0 (0%)      0 (0%)           0 (0%)         9m19s
	  kube-system                 kube-proxy-tfxkl                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m6s
	  kube-system                 kube-scheduler-no-preload-602118             100m (5%)     0 (0%)      0 (0%)           0 (0%)         9m19s
	  kube-system                 metrics-server-569cc877fc-zpzbw              100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         9m5s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m5s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             440Mi (20%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 9m4s   kube-proxy       
	  Normal  NodeHasSufficientMemory  9m26s  kubelet          Node no-preload-602118 status is now: NodeHasSufficientMemory
	  Normal  Starting                 9m20s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  9m19s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  9m19s  kubelet          Node no-preload-602118 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m19s  kubelet          Node no-preload-602118 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m19s  kubelet          Node no-preload-602118 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           9m7s   node-controller  Node no-preload-602118 event: Registered Node no-preload-602118 in Controller
	
	
	==> dmesg <==
	[  +0.040299] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.485068] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.368632] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.590241] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +8.274292] systemd-fstab-generator[641]: Ignoring "noauto" option for root device
	[  +0.054295] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.059499] systemd-fstab-generator[653]: Ignoring "noauto" option for root device
	[  +0.178092] systemd-fstab-generator[667]: Ignoring "noauto" option for root device
	[  +0.112663] systemd-fstab-generator[679]: Ignoring "noauto" option for root device
	[  +0.269422] systemd-fstab-generator[709]: Ignoring "noauto" option for root device
	[ +15.624695] systemd-fstab-generator[1234]: Ignoring "noauto" option for root device
	[  +0.064440] kauditd_printk_skb: 130 callbacks suppressed
	[Jun 3 12:07] systemd-fstab-generator[1358]: Ignoring "noauto" option for root device
	[  +5.645798] kauditd_printk_skb: 100 callbacks suppressed
	[  +7.569499] kauditd_printk_skb: 50 callbacks suppressed
	[  +7.463426] kauditd_printk_skb: 24 callbacks suppressed
	[Jun 3 12:11] kauditd_printk_skb: 9 callbacks suppressed
	[  +1.963399] systemd-fstab-generator[4013]: Ignoring "noauto" option for root device
	[Jun 3 12:12] kauditd_printk_skb: 53 callbacks suppressed
	[  +1.851515] systemd-fstab-generator[4341]: Ignoring "noauto" option for root device
	[ +13.399552] systemd-fstab-generator[4547]: Ignoring "noauto" option for root device
	[  +0.100564] kauditd_printk_skb: 14 callbacks suppressed
	[Jun 3 12:13] kauditd_printk_skb: 84 callbacks suppressed
	
	
	==> etcd [998c79f6f292c8080164980650e8a76e11e68daf494b4c6c492f744b50266070] <==
	{"level":"info","ts":"2024-06-03T12:11:57.940615Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8287693677e84cf6 switched to configuration voters=(9405602029447433462)"}
	{"level":"info","ts":"2024-06-03T12:11:57.94592Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"6e727aea1cd049c6","local-member-id":"8287693677e84cf6","added-peer-id":"8287693677e84cf6","added-peer-peer-urls":["https://192.168.50.245:2380"]}
	{"level":"info","ts":"2024-06-03T12:11:57.969902Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-06-03T12:11:57.970204Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.50.245:2380"}
	{"level":"info","ts":"2024-06-03T12:11:57.970242Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.50.245:2380"}
	{"level":"info","ts":"2024-06-03T12:11:57.970377Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"8287693677e84cf6","initial-advertise-peer-urls":["https://192.168.50.245:2380"],"listen-peer-urls":["https://192.168.50.245:2380"],"advertise-client-urls":["https://192.168.50.245:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.50.245:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-06-03T12:11:57.970414Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-06-03T12:11:58.894643Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8287693677e84cf6 is starting a new election at term 1"}
	{"level":"info","ts":"2024-06-03T12:11:58.894684Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8287693677e84cf6 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-06-03T12:11:58.894703Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8287693677e84cf6 received MsgPreVoteResp from 8287693677e84cf6 at term 1"}
	{"level":"info","ts":"2024-06-03T12:11:58.894715Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8287693677e84cf6 became candidate at term 2"}
	{"level":"info","ts":"2024-06-03T12:11:58.894722Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8287693677e84cf6 received MsgVoteResp from 8287693677e84cf6 at term 2"}
	{"level":"info","ts":"2024-06-03T12:11:58.89473Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8287693677e84cf6 became leader at term 2"}
	{"level":"info","ts":"2024-06-03T12:11:58.894737Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 8287693677e84cf6 elected leader 8287693677e84cf6 at term 2"}
	{"level":"info","ts":"2024-06-03T12:11:58.899076Z","caller":"etcdserver/server.go:2578","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-06-03T12:11:58.901009Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"8287693677e84cf6","local-member-attributes":"{Name:no-preload-602118 ClientURLs:[https://192.168.50.245:2379]}","request-path":"/0/members/8287693677e84cf6/attributes","cluster-id":"6e727aea1cd049c6","publish-timeout":"7s"}
	{"level":"info","ts":"2024-06-03T12:11:58.90138Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-06-03T12:11:58.901812Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-06-03T12:11:58.902012Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"6e727aea1cd049c6","local-member-id":"8287693677e84cf6","cluster-version":"3.5"}
	{"level":"info","ts":"2024-06-03T12:11:58.902078Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-06-03T12:11:58.902124Z","caller":"etcdserver/server.go:2602","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-06-03T12:11:58.902183Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-06-03T12:11:58.902209Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-06-03T12:11:58.903723Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-06-03T12:11:58.9089Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.50.245:2379"}
	
	
	==> kernel <==
	 12:21:22 up 14 min,  0 users,  load average: 0.21, 0.28, 0.16
	Linux no-preload-602118 5.10.207 #1 SMP Wed May 22 22:17:16 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [6e010bfa69d81ba01cf7bcf124df98ca87e190ccc661236d4a419343715a3ae0] <==
	I0603 12:15:18.746645       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0603 12:17:00.297605       1 handler_proxy.go:93] no RequestInfo found in the context
	E0603 12:17:00.297964       1 controller.go:146] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	W0603 12:17:01.299032       1 handler_proxy.go:93] no RequestInfo found in the context
	E0603 12:17:01.299137       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0603 12:17:01.299186       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0603 12:17:01.299069       1 handler_proxy.go:93] no RequestInfo found in the context
	E0603 12:17:01.299295       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0603 12:17:01.300231       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0603 12:18:01.299390       1 handler_proxy.go:93] no RequestInfo found in the context
	E0603 12:18:01.299621       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0603 12:18:01.299651       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0603 12:18:01.300737       1 handler_proxy.go:93] no RequestInfo found in the context
	E0603 12:18:01.300799       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0603 12:18:01.300807       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0603 12:20:01.300458       1 handler_proxy.go:93] no RequestInfo found in the context
	E0603 12:20:01.300551       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0603 12:20:01.300562       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0603 12:20:01.301760       1 handler_proxy.go:93] no RequestInfo found in the context
	E0603 12:20:01.301905       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0603 12:20:01.301913       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [1d6486b810f4fea2b78f7e1b4375f6351128af8f4f98ae77b3171090ee6ba3e9] <==
	I0603 12:15:46.468315       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0603 12:16:16.028258       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0603 12:16:16.476638       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0603 12:16:46.033813       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0603 12:16:46.485068       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0603 12:17:16.040422       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0603 12:17:16.493616       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0603 12:17:46.045429       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0603 12:17:46.501105       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0603 12:17:54.109213       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-569cc877fc" duration="493.459µs"
	I0603 12:18:09.112380       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-569cc877fc" duration="156.558µs"
	E0603 12:18:16.052318       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0603 12:18:16.509186       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0603 12:18:46.058184       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0603 12:18:46.517541       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0603 12:19:16.062946       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0603 12:19:16.526029       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0603 12:19:46.069554       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0603 12:19:46.534945       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0603 12:20:16.074896       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0603 12:20:16.543191       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0603 12:20:46.080291       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0603 12:20:46.551773       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0603 12:21:16.085595       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0603 12:21:16.562289       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [0f95b604096bb9c35ddcde873a44214fcf5bb4a1918d3767b43aeba25088ceaf] <==
	I0603 12:12:17.965944       1 server_linux.go:69] "Using iptables proxy"
	I0603 12:12:18.092647       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.50.245"]
	I0603 12:12:18.460283       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0603 12:12:18.460331       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0603 12:12:18.460355       1 server_linux.go:165] "Using iptables Proxier"
	I0603 12:12:18.469374       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0603 12:12:18.469630       1 server.go:872] "Version info" version="v1.30.1"
	I0603 12:12:18.469664       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0603 12:12:18.476306       1 config.go:192] "Starting service config controller"
	I0603 12:12:18.476345       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0603 12:12:18.476366       1 config.go:101] "Starting endpoint slice config controller"
	I0603 12:12:18.476369       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0603 12:12:18.476672       1 config.go:319] "Starting node config controller"
	I0603 12:12:18.476704       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0603 12:12:18.580162       1 shared_informer.go:320] Caches are synced for node config
	I0603 12:12:18.580247       1 shared_informer.go:320] Caches are synced for service config
	I0603 12:12:18.580298       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [f7a1aa13e70aab48903fd4acfe8e726e044c09fd249ad876985082b7d2ce28dd] <==
	E0603 12:12:00.331865       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0603 12:12:00.331876       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0603 12:12:00.331883       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0603 12:12:00.331890       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0603 12:12:00.331956       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0603 12:12:00.331962       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0603 12:12:00.332007       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0603 12:12:00.332015       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0603 12:12:00.332021       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0603 12:12:00.332084       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0603 12:12:01.156895       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0603 12:12:01.156944       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0603 12:12:01.211455       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0603 12:12:01.211506       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0603 12:12:01.285806       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0603 12:12:01.286304       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0603 12:12:01.313084       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0603 12:12:01.313111       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0603 12:12:01.398965       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0603 12:12:01.399016       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0603 12:12:01.424856       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0603 12:12:01.424903       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0603 12:12:01.742109       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0603 12:12:01.742146       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0603 12:12:04.620012       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jun 03 12:19:03 no-preload-602118 kubelet[4348]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jun 03 12:19:03 no-preload-602118 kubelet[4348]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jun 03 12:19:03 no-preload-602118 kubelet[4348]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 03 12:19:03 no-preload-602118 kubelet[4348]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jun 03 12:19:04 no-preload-602118 kubelet[4348]: E0603 12:19:04.092384    4348 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-zpzbw" podUID="b28cb265-532b-41ea-a242-001a85174a35"
	Jun 03 12:19:19 no-preload-602118 kubelet[4348]: E0603 12:19:19.092414    4348 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-zpzbw" podUID="b28cb265-532b-41ea-a242-001a85174a35"
	Jun 03 12:19:34 no-preload-602118 kubelet[4348]: E0603 12:19:34.092743    4348 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-zpzbw" podUID="b28cb265-532b-41ea-a242-001a85174a35"
	Jun 03 12:19:49 no-preload-602118 kubelet[4348]: E0603 12:19:49.092898    4348 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-zpzbw" podUID="b28cb265-532b-41ea-a242-001a85174a35"
	Jun 03 12:20:01 no-preload-602118 kubelet[4348]: E0603 12:20:01.094090    4348 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-zpzbw" podUID="b28cb265-532b-41ea-a242-001a85174a35"
	Jun 03 12:20:03 no-preload-602118 kubelet[4348]: E0603 12:20:03.148616    4348 iptables.go:577] "Could not set up iptables canary" err=<
	Jun 03 12:20:03 no-preload-602118 kubelet[4348]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jun 03 12:20:03 no-preload-602118 kubelet[4348]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jun 03 12:20:03 no-preload-602118 kubelet[4348]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 03 12:20:03 no-preload-602118 kubelet[4348]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jun 03 12:20:13 no-preload-602118 kubelet[4348]: E0603 12:20:13.093648    4348 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-zpzbw" podUID="b28cb265-532b-41ea-a242-001a85174a35"
	Jun 03 12:20:25 no-preload-602118 kubelet[4348]: E0603 12:20:25.094075    4348 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-zpzbw" podUID="b28cb265-532b-41ea-a242-001a85174a35"
	Jun 03 12:20:37 no-preload-602118 kubelet[4348]: E0603 12:20:37.092116    4348 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-zpzbw" podUID="b28cb265-532b-41ea-a242-001a85174a35"
	Jun 03 12:20:50 no-preload-602118 kubelet[4348]: E0603 12:20:50.092486    4348 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-zpzbw" podUID="b28cb265-532b-41ea-a242-001a85174a35"
	Jun 03 12:21:01 no-preload-602118 kubelet[4348]: E0603 12:21:01.092294    4348 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-zpzbw" podUID="b28cb265-532b-41ea-a242-001a85174a35"
	Jun 03 12:21:03 no-preload-602118 kubelet[4348]: E0603 12:21:03.148724    4348 iptables.go:577] "Could not set up iptables canary" err=<
	Jun 03 12:21:03 no-preload-602118 kubelet[4348]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jun 03 12:21:03 no-preload-602118 kubelet[4348]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jun 03 12:21:03 no-preload-602118 kubelet[4348]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 03 12:21:03 no-preload-602118 kubelet[4348]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jun 03 12:21:12 no-preload-602118 kubelet[4348]: E0603 12:21:12.091973    4348 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-zpzbw" podUID="b28cb265-532b-41ea-a242-001a85174a35"
	
	
	==> storage-provisioner [fb248b003c8613b37b12ff79e1f222cab5c038f18c53dd238b97760ebdd1686a] <==
	I0603 12:12:18.548114       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0603 12:12:18.568714       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0603 12:12:18.568811       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0603 12:12:18.592471       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0603 12:12:18.592747       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-602118_305b539c-d750-4a4a-a70e-9e6e96fc159d!
	I0603 12:12:18.593291       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"9ea1aae2-6280-4f10-a8c2-e37d926441ba", APIVersion:"v1", ResourceVersion:"404", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-602118_305b539c-d750-4a4a-a70e-9e6e96fc159d became leader
	I0603 12:12:18.693799       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-602118_305b539c-d750-4a4a-a70e-9e6e96fc159d!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-602118 -n no-preload-602118
helpers_test.go:261: (dbg) Run:  kubectl --context no-preload-602118 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-569cc877fc-zpzbw
helpers_test.go:274: ======> post-mortem[TestStartStop/group/no-preload/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context no-preload-602118 describe pod metrics-server-569cc877fc-zpzbw
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context no-preload-602118 describe pod metrics-server-569cc877fc-zpzbw: exit status 1 (84.028948ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-569cc877fc-zpzbw" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context no-preload-602118 describe pod metrics-server-569cc877fc-zpzbw: exit status 1
--- FAIL: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (544.94s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (546.09s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E0603 12:12:54.814520   15028 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/calico-034991/client.crt: no such file or directory
start_stop_delete_test.go:274: ***** TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-196710 -n default-k8s-diff-port-196710
start_stop_delete_test.go:274: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2024-06-03 12:21:20.93438284 +0000 UTC m=+6180.838802931
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
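Note: since no k8s-app=kubernetes-dashboard pod showed up for default-k8s-diff-port-196710 within the 9m0s window, the usual next step is to look at the Deployment/ReplicaSet and recent events in the kubernetes-dashboard namespace rather than at pods alone. A minimal sketch, assuming the addon's Deployment is named kubernetes-dashboard (the usual name for the minikube dashboard addon; not confirmed by this log):
	# inspect the dashboard workload and recent namespace events for scheduling or image-pull errors
	kubectl --context default-k8s-diff-port-196710 -n kubernetes-dashboard get deploy,rs,pods
	kubectl --context default-k8s-diff-port-196710 -n kubernetes-dashboard get events --sort-by=.lastTimestamp
The cert_rotation error above points at the client certificate of the calico-034991 profile, which the audit log below shows was deleted at 11:58, so it is most likely leftover watcher noise rather than part of this failure.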
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-196710 -n default-k8s-diff-port-196710
helpers_test.go:244: <<< TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-196710 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-196710 logs -n 25: (2.427335727s)
helpers_test.go:252: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p calico-034991 sudo cat                              | calico-034991                | jenkins | v1.33.1 | 03 Jun 24 11:58 UTC | 03 Jun 24 11:58 UTC |
	|         | /etc/containerd/config.toml                            |                              |         |         |                     |                     |
	| ssh     | -p calico-034991 sudo                                  | calico-034991                | jenkins | v1.33.1 | 03 Jun 24 11:58 UTC | 03 Jun 24 11:58 UTC |
	|         | containerd config dump                                 |                              |         |         |                     |                     |
	| ssh     | -p calico-034991 sudo                                  | calico-034991                | jenkins | v1.33.1 | 03 Jun 24 11:58 UTC | 03 Jun 24 11:58 UTC |
	|         | systemctl status crio --all                            |                              |         |         |                     |                     |
	|         | --full --no-pager                                      |                              |         |         |                     |                     |
	| ssh     | -p calico-034991 sudo                                  | calico-034991                | jenkins | v1.33.1 | 03 Jun 24 11:58 UTC | 03 Jun 24 11:58 UTC |
	|         | systemctl cat crio --no-pager                          |                              |         |         |                     |                     |
	| ssh     | -p calico-034991 sudo find                             | calico-034991                | jenkins | v1.33.1 | 03 Jun 24 11:58 UTC | 03 Jun 24 11:58 UTC |
	|         | /etc/crio -type f -exec sh -c                          |                              |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                   |                              |         |         |                     |                     |
	| ssh     | -p calico-034991 sudo crio                             | calico-034991                | jenkins | v1.33.1 | 03 Jun 24 11:58 UTC | 03 Jun 24 11:58 UTC |
	|         | config                                                 |                              |         |         |                     |                     |
	| delete  | -p calico-034991                                       | calico-034991                | jenkins | v1.33.1 | 03 Jun 24 11:58 UTC | 03 Jun 24 11:58 UTC |
	| delete  | -p                                                     | disable-driver-mounts-231568 | jenkins | v1.33.1 | 03 Jun 24 11:58 UTC | 03 Jun 24 11:58 UTC |
	|         | disable-driver-mounts-231568                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-196710 | jenkins | v1.33.1 | 03 Jun 24 11:58 UTC | 03 Jun 24 11:59 UTC |
	|         | default-k8s-diff-port-196710                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-725022            | embed-certs-725022           | jenkins | v1.33.1 | 03 Jun 24 11:59 UTC | 03 Jun 24 11:59 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-725022                                  | embed-certs-725022           | jenkins | v1.33.1 | 03 Jun 24 11:59 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-602118             | no-preload-602118            | jenkins | v1.33.1 | 03 Jun 24 11:59 UTC | 03 Jun 24 11:59 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-602118                                   | no-preload-602118            | jenkins | v1.33.1 | 03 Jun 24 11:59 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-196710  | default-k8s-diff-port-196710 | jenkins | v1.33.1 | 03 Jun 24 11:59 UTC | 03 Jun 24 11:59 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-196710 | jenkins | v1.33.1 | 03 Jun 24 11:59 UTC |                     |
	|         | default-k8s-diff-port-196710                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-905554        | old-k8s-version-905554       | jenkins | v1.33.1 | 03 Jun 24 12:01 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-725022                 | embed-certs-725022           | jenkins | v1.33.1 | 03 Jun 24 12:01 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-725022                                  | embed-certs-725022           | jenkins | v1.33.1 | 03 Jun 24 12:01 UTC | 03 Jun 24 12:13 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.1                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-602118                  | no-preload-602118            | jenkins | v1.33.1 | 03 Jun 24 12:01 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-602118                                   | no-preload-602118            | jenkins | v1.33.1 | 03 Jun 24 12:02 UTC | 03 Jun 24 12:12 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.1                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-196710       | default-k8s-diff-port-196710 | jenkins | v1.33.1 | 03 Jun 24 12:02 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-196710 | jenkins | v1.33.1 | 03 Jun 24 12:02 UTC | 03 Jun 24 12:12 UTC |
	|         | default-k8s-diff-port-196710                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.1                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-905554                              | old-k8s-version-905554       | jenkins | v1.33.1 | 03 Jun 24 12:02 UTC | 03 Jun 24 12:02 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-905554             | old-k8s-version-905554       | jenkins | v1.33.1 | 03 Jun 24 12:02 UTC | 03 Jun 24 12:03 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-905554                              | old-k8s-version-905554       | jenkins | v1.33.1 | 03 Jun 24 12:03 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/06/03 12:03:00
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0603 12:03:00.091233   73662 out.go:291] Setting OutFile to fd 1 ...
	I0603 12:03:00.091511   73662 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0603 12:03:00.091522   73662 out.go:304] Setting ErrFile to fd 2...
	I0603 12:03:00.091533   73662 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0603 12:03:00.091747   73662 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19008-7755/.minikube/bin
	I0603 12:03:00.092302   73662 out.go:298] Setting JSON to false
	I0603 12:03:00.093203   73662 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":6325,"bootTime":1717409855,"procs":200,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1060-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0603 12:03:00.093258   73662 start.go:139] virtualization: kvm guest
	I0603 12:03:00.095496   73662 out.go:177] * [old-k8s-version-905554] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0603 12:03:00.097136   73662 out.go:177]   - MINIKUBE_LOCATION=19008
	I0603 12:03:00.097143   73662 notify.go:220] Checking for updates...
	I0603 12:03:00.098729   73662 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0603 12:03:00.100123   73662 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19008-7755/kubeconfig
	I0603 12:03:00.101401   73662 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19008-7755/.minikube
	I0603 12:03:00.102776   73662 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0603 12:03:00.104123   73662 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0603 12:03:00.105823   73662 config.go:182] Loaded profile config "old-k8s-version-905554": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0603 12:03:00.106265   73662 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 12:03:00.106313   73662 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 12:03:00.120941   73662 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43635
	I0603 12:03:00.121275   73662 main.go:141] libmachine: () Calling .GetVersion
	I0603 12:03:00.121783   73662 main.go:141] libmachine: Using API Version  1
	I0603 12:03:00.121807   73662 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 12:03:00.122090   73662 main.go:141] libmachine: () Calling .GetMachineName
	I0603 12:03:00.122253   73662 main.go:141] libmachine: (old-k8s-version-905554) Calling .DriverName
	I0603 12:03:00.124037   73662 out.go:177] * Kubernetes 1.30.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.1
	I0603 12:03:00.125329   73662 driver.go:392] Setting default libvirt URI to qemu:///system
	I0603 12:03:00.125608   73662 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 12:03:00.125644   73662 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 12:03:00.139840   73662 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46571
	I0603 12:03:00.140215   73662 main.go:141] libmachine: () Calling .GetVersion
	I0603 12:03:00.140599   73662 main.go:141] libmachine: Using API Version  1
	I0603 12:03:00.140623   73662 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 12:03:00.140906   73662 main.go:141] libmachine: () Calling .GetMachineName
	I0603 12:03:00.141069   73662 main.go:141] libmachine: (old-k8s-version-905554) Calling .DriverName
	I0603 12:03:00.174375   73662 out.go:177] * Using the kvm2 driver based on existing profile
	I0603 12:03:00.175650   73662 start.go:297] selected driver: kvm2
	I0603 12:03:00.175667   73662 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-905554 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{K
ubernetesVersion:v1.20.0 ClusterName:old-k8s-version-905554 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.155 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:2628
0h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0603 12:03:00.175770   73662 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0603 12:03:00.176396   73662 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0603 12:03:00.176476   73662 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19008-7755/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0603 12:03:00.191380   73662 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0603 12:03:00.191738   73662 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0603 12:03:00.191796   73662 cni.go:84] Creating CNI manager for ""
	I0603 12:03:00.191809   73662 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0603 12:03:00.191847   73662 start.go:340] cluster config:
	{Name:old-k8s-version-905554 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-905554 Namespace:default
APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.155 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2
000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0603 12:03:00.191975   73662 iso.go:125] acquiring lock: {Name:mkdc8e745fc6a0fd8e502f6ad2510510ae9abf27 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0603 12:03:00.193899   73662 out.go:177] * Starting "old-k8s-version-905554" primary control-plane node in "old-k8s-version-905554" cluster
	I0603 12:03:04.175308   72964 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.245:22: connect: no route to host
	I0603 12:03:00.195191   73662 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0603 12:03:00.195231   73662 preload.go:147] Found local preload: /home/jenkins/minikube-integration/19008-7755/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0603 12:03:00.195240   73662 cache.go:56] Caching tarball of preloaded images
	I0603 12:03:00.195331   73662 preload.go:173] Found /home/jenkins/minikube-integration/19008-7755/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0603 12:03:00.195345   73662 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0603 12:03:00.195441   73662 profile.go:143] Saving config to /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/old-k8s-version-905554/config.json ...
	I0603 12:03:00.195620   73662 start.go:360] acquireMachinesLock for old-k8s-version-905554: {Name:mk7389fd163f37e28f7d4d842bab151f7e27bc7c Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0603 12:03:07.247321   72964 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.245:22: connect: no route to host
	I0603 12:03:13.327307   72964 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.245:22: connect: no route to host
	I0603 12:03:16.399349   72964 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.245:22: connect: no route to host
	I0603 12:03:22.479291   72964 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.245:22: connect: no route to host
	I0603 12:03:25.551304   72964 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.245:22: connect: no route to host
	I0603 12:03:31.631290   72964 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.245:22: connect: no route to host
	I0603 12:03:34.703297   72964 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.245:22: connect: no route to host
	I0603 12:03:40.783313   72964 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.245:22: connect: no route to host
	I0603 12:03:43.855312   72964 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.245:22: connect: no route to host
	I0603 12:03:49.935253   72964 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.245:22: connect: no route to host
	I0603 12:03:53.007321   72964 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.245:22: connect: no route to host
	I0603 12:03:59.087310   72964 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.245:22: connect: no route to host
	I0603 12:04:02.159408   72964 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.245:22: connect: no route to host
	I0603 12:04:08.239374   72964 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.245:22: connect: no route to host
	I0603 12:04:11.311346   72964 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.245:22: connect: no route to host
	I0603 12:04:17.391313   72964 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.245:22: connect: no route to host
	I0603 12:04:20.463280   72964 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.245:22: connect: no route to host
	I0603 12:04:26.543359   72964 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.245:22: connect: no route to host
	I0603 12:04:29.615273   72964 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.245:22: connect: no route to host
	I0603 12:04:35.695325   72964 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.245:22: connect: no route to host
	I0603 12:04:38.767328   72964 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.245:22: connect: no route to host
	I0603 12:04:44.847321   72964 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.245:22: connect: no route to host
	I0603 12:04:47.919323   72964 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.245:22: connect: no route to host
	I0603 12:04:53.999275   72964 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.245:22: connect: no route to host
	I0603 12:04:57.071278   72964 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.245:22: connect: no route to host
	I0603 12:05:03.151359   72964 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.245:22: connect: no route to host
	I0603 12:05:06.223409   72964 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.245:22: connect: no route to host
	I0603 12:05:12.303278   72964 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.245:22: connect: no route to host
	I0603 12:05:15.375349   72964 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.245:22: connect: no route to host
	I0603 12:05:21.455288   72964 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.245:22: connect: no route to host
	I0603 12:05:24.527374   72964 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.245:22: connect: no route to host
	I0603 12:05:30.607297   72964 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.245:22: connect: no route to host
	I0603 12:05:33.679325   72964 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.245:22: connect: no route to host
	I0603 12:05:39.759247   72964 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.245:22: connect: no route to host
	I0603 12:05:42.831304   72964 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.245:22: connect: no route to host
	I0603 12:05:48.911327   72964 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.245:22: connect: no route to host
	I0603 12:05:51.983403   72964 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.245:22: connect: no route to host
	I0603 12:05:58.063364   72964 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.245:22: connect: no route to host
	I0603 12:06:01.135268   72964 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.245:22: connect: no route to host
	I0603 12:06:07.215311   72964 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.245:22: connect: no route to host
	I0603 12:06:10.287358   72964 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.245:22: connect: no route to host
	I0603 12:06:16.367324   72964 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.245:22: connect: no route to host
	I0603 12:06:19.439350   72964 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.245:22: connect: no route to host
	I0603 12:06:22.443361   73179 start.go:364] duration metric: took 4m16.965076383s to acquireMachinesLock for "no-preload-602118"
	I0603 12:06:22.443417   73179 start.go:96] Skipping create...Using existing machine configuration
	I0603 12:06:22.443423   73179 fix.go:54] fixHost starting: 
	I0603 12:06:22.443783   73179 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 12:06:22.443812   73179 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 12:06:22.458838   73179 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35011
	I0603 12:06:22.459247   73179 main.go:141] libmachine: () Calling .GetVersion
	I0603 12:06:22.459645   73179 main.go:141] libmachine: Using API Version  1
	I0603 12:06:22.459662   73179 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 12:06:22.459991   73179 main.go:141] libmachine: () Calling .GetMachineName
	I0603 12:06:22.460181   73179 main.go:141] libmachine: (no-preload-602118) Calling .DriverName
	I0603 12:06:22.460333   73179 main.go:141] libmachine: (no-preload-602118) Calling .GetState
	I0603 12:06:22.461743   73179 fix.go:112] recreateIfNeeded on no-preload-602118: state=Stopped err=<nil>
	I0603 12:06:22.461765   73179 main.go:141] libmachine: (no-preload-602118) Calling .DriverName
	W0603 12:06:22.461946   73179 fix.go:138] unexpected machine state, will restart: <nil>
	I0603 12:06:22.463492   73179 out.go:177] * Restarting existing kvm2 VM for "no-preload-602118" ...
	I0603 12:06:22.440994   72964 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0603 12:06:22.441029   72964 main.go:141] libmachine: (embed-certs-725022) Calling .GetMachineName
	I0603 12:06:22.441366   72964 buildroot.go:166] provisioning hostname "embed-certs-725022"
	I0603 12:06:22.441382   72964 main.go:141] libmachine: (embed-certs-725022) Calling .GetMachineName
	I0603 12:06:22.441594   72964 main.go:141] libmachine: (embed-certs-725022) Calling .GetSSHHostname
	I0603 12:06:22.443211   72964 machine.go:97] duration metric: took 4m37.428820472s to provisionDockerMachine
	I0603 12:06:22.443252   72964 fix.go:56] duration metric: took 4m37.449227063s for fixHost
	I0603 12:06:22.443258   72964 start.go:83] releasing machines lock for "embed-certs-725022", held for 4m37.449246886s
	W0603 12:06:22.443279   72964 start.go:713] error starting host: provision: host is not running
	W0603 12:06:22.443377   72964 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0603 12:06:22.443391   72964 start.go:728] Will try again in 5 seconds ...
	I0603 12:06:22.464734   73179 main.go:141] libmachine: (no-preload-602118) Calling .Start
	I0603 12:06:22.464901   73179 main.go:141] libmachine: (no-preload-602118) Ensuring networks are active...
	I0603 12:06:22.465632   73179 main.go:141] libmachine: (no-preload-602118) Ensuring network default is active
	I0603 12:06:22.465908   73179 main.go:141] libmachine: (no-preload-602118) Ensuring network mk-no-preload-602118 is active
	I0603 12:06:22.466273   73179 main.go:141] libmachine: (no-preload-602118) Getting domain xml...
	I0603 12:06:22.466923   73179 main.go:141] libmachine: (no-preload-602118) Creating domain...
	I0603 12:06:23.644255   73179 main.go:141] libmachine: (no-preload-602118) Waiting to get IP...
	I0603 12:06:23.645290   73179 main.go:141] libmachine: (no-preload-602118) DBG | domain no-preload-602118 has defined MAC address 52:54:00:ac:6c:91 in network mk-no-preload-602118
	I0603 12:06:23.645661   73179 main.go:141] libmachine: (no-preload-602118) DBG | unable to find current IP address of domain no-preload-602118 in network mk-no-preload-602118
	I0603 12:06:23.645846   73179 main.go:141] libmachine: (no-preload-602118) DBG | I0603 12:06:23.645673   74346 retry.go:31] will retry after 270.126449ms: waiting for machine to come up
	I0603 12:06:23.917313   73179 main.go:141] libmachine: (no-preload-602118) DBG | domain no-preload-602118 has defined MAC address 52:54:00:ac:6c:91 in network mk-no-preload-602118
	I0603 12:06:23.917691   73179 main.go:141] libmachine: (no-preload-602118) DBG | unable to find current IP address of domain no-preload-602118 in network mk-no-preload-602118
	I0603 12:06:23.917724   73179 main.go:141] libmachine: (no-preload-602118) DBG | I0603 12:06:23.917635   74346 retry.go:31] will retry after 385.827167ms: waiting for machine to come up
	I0603 12:06:24.305342   73179 main.go:141] libmachine: (no-preload-602118) DBG | domain no-preload-602118 has defined MAC address 52:54:00:ac:6c:91 in network mk-no-preload-602118
	I0603 12:06:24.305787   73179 main.go:141] libmachine: (no-preload-602118) DBG | unable to find current IP address of domain no-preload-602118 in network mk-no-preload-602118
	I0603 12:06:24.305809   73179 main.go:141] libmachine: (no-preload-602118) DBG | I0603 12:06:24.305756   74346 retry.go:31] will retry after 361.435978ms: waiting for machine to come up
	I0603 12:06:24.669132   73179 main.go:141] libmachine: (no-preload-602118) DBG | domain no-preload-602118 has defined MAC address 52:54:00:ac:6c:91 in network mk-no-preload-602118
	I0603 12:06:24.669489   73179 main.go:141] libmachine: (no-preload-602118) DBG | unable to find current IP address of domain no-preload-602118 in network mk-no-preload-602118
	I0603 12:06:24.669510   73179 main.go:141] libmachine: (no-preload-602118) DBG | I0603 12:06:24.669460   74346 retry.go:31] will retry after 420.041485ms: waiting for machine to come up
	I0603 12:06:25.090925   73179 main.go:141] libmachine: (no-preload-602118) DBG | domain no-preload-602118 has defined MAC address 52:54:00:ac:6c:91 in network mk-no-preload-602118
	I0603 12:06:25.091348   73179 main.go:141] libmachine: (no-preload-602118) DBG | unable to find current IP address of domain no-preload-602118 in network mk-no-preload-602118
	I0603 12:06:25.091378   73179 main.go:141] libmachine: (no-preload-602118) DBG | I0603 12:06:25.091293   74346 retry.go:31] will retry after 624.215107ms: waiting for machine to come up
	I0603 12:06:27.445060   72964 start.go:360] acquireMachinesLock for embed-certs-725022: {Name:mk7389fd163f37e28f7d4d842bab151f7e27bc7c Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0603 12:06:25.717004   73179 main.go:141] libmachine: (no-preload-602118) DBG | domain no-preload-602118 has defined MAC address 52:54:00:ac:6c:91 in network mk-no-preload-602118
	I0603 12:06:25.717428   73179 main.go:141] libmachine: (no-preload-602118) DBG | unable to find current IP address of domain no-preload-602118 in network mk-no-preload-602118
	I0603 12:06:25.717459   73179 main.go:141] libmachine: (no-preload-602118) DBG | I0603 12:06:25.717376   74346 retry.go:31] will retry after 589.80788ms: waiting for machine to come up
	I0603 12:06:26.309117   73179 main.go:141] libmachine: (no-preload-602118) DBG | domain no-preload-602118 has defined MAC address 52:54:00:ac:6c:91 in network mk-no-preload-602118
	I0603 12:06:26.309553   73179 main.go:141] libmachine: (no-preload-602118) DBG | unable to find current IP address of domain no-preload-602118 in network mk-no-preload-602118
	I0603 12:06:26.309573   73179 main.go:141] libmachine: (no-preload-602118) DBG | I0603 12:06:26.309525   74346 retry.go:31] will retry after 1.045937243s: waiting for machine to come up
	I0603 12:06:27.356628   73179 main.go:141] libmachine: (no-preload-602118) DBG | domain no-preload-602118 has defined MAC address 52:54:00:ac:6c:91 in network mk-no-preload-602118
	I0603 12:06:27.357021   73179 main.go:141] libmachine: (no-preload-602118) DBG | unable to find current IP address of domain no-preload-602118 in network mk-no-preload-602118
	I0603 12:06:27.357091   73179 main.go:141] libmachine: (no-preload-602118) DBG | I0603 12:06:27.357005   74346 retry.go:31] will retry after 1.111448638s: waiting for machine to come up
	I0603 12:06:28.469530   73179 main.go:141] libmachine: (no-preload-602118) DBG | domain no-preload-602118 has defined MAC address 52:54:00:ac:6c:91 in network mk-no-preload-602118
	I0603 12:06:28.469988   73179 main.go:141] libmachine: (no-preload-602118) DBG | unable to find current IP address of domain no-preload-602118 in network mk-no-preload-602118
	I0603 12:06:28.470019   73179 main.go:141] libmachine: (no-preload-602118) DBG | I0603 12:06:28.469937   74346 retry.go:31] will retry after 1.80245369s: waiting for machine to come up
	I0603 12:06:30.274889   73179 main.go:141] libmachine: (no-preload-602118) DBG | domain no-preload-602118 has defined MAC address 52:54:00:ac:6c:91 in network mk-no-preload-602118
	I0603 12:06:30.275389   73179 main.go:141] libmachine: (no-preload-602118) DBG | unable to find current IP address of domain no-preload-602118 in network mk-no-preload-602118
	I0603 12:06:30.275422   73179 main.go:141] libmachine: (no-preload-602118) DBG | I0603 12:06:30.275339   74346 retry.go:31] will retry after 1.896022361s: waiting for machine to come up
	I0603 12:06:32.173697   73179 main.go:141] libmachine: (no-preload-602118) DBG | domain no-preload-602118 has defined MAC address 52:54:00:ac:6c:91 in network mk-no-preload-602118
	I0603 12:06:32.174116   73179 main.go:141] libmachine: (no-preload-602118) DBG | unable to find current IP address of domain no-preload-602118 in network mk-no-preload-602118
	I0603 12:06:32.174147   73179 main.go:141] libmachine: (no-preload-602118) DBG | I0603 12:06:32.174065   74346 retry.go:31] will retry after 2.13920116s: waiting for machine to come up
	I0603 12:06:34.315196   73179 main.go:141] libmachine: (no-preload-602118) DBG | domain no-preload-602118 has defined MAC address 52:54:00:ac:6c:91 in network mk-no-preload-602118
	I0603 12:06:34.315598   73179 main.go:141] libmachine: (no-preload-602118) DBG | unable to find current IP address of domain no-preload-602118 in network mk-no-preload-602118
	I0603 12:06:34.315629   73179 main.go:141] libmachine: (no-preload-602118) DBG | I0603 12:06:34.315556   74346 retry.go:31] will retry after 3.168755933s: waiting for machine to come up
	I0603 12:06:37.485424   73179 main.go:141] libmachine: (no-preload-602118) DBG | domain no-preload-602118 has defined MAC address 52:54:00:ac:6c:91 in network mk-no-preload-602118
	I0603 12:06:37.485804   73179 main.go:141] libmachine: (no-preload-602118) DBG | unable to find current IP address of domain no-preload-602118 in network mk-no-preload-602118
	I0603 12:06:37.485840   73179 main.go:141] libmachine: (no-preload-602118) DBG | I0603 12:06:37.485767   74346 retry.go:31] will retry after 3.278336467s: waiting for machine to come up
	I0603 12:06:42.080144   73294 start.go:364] duration metric: took 4m27.397961658s to acquireMachinesLock for "default-k8s-diff-port-196710"
	I0603 12:06:42.080213   73294 start.go:96] Skipping create...Using existing machine configuration
	I0603 12:06:42.080220   73294 fix.go:54] fixHost starting: 
	I0603 12:06:42.080611   73294 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 12:06:42.080640   73294 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 12:06:42.096874   73294 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35397
	I0603 12:06:42.097286   73294 main.go:141] libmachine: () Calling .GetVersion
	I0603 12:06:42.097763   73294 main.go:141] libmachine: Using API Version  1
	I0603 12:06:42.097789   73294 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 12:06:42.098191   73294 main.go:141] libmachine: () Calling .GetMachineName
	I0603 12:06:42.098383   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .DriverName
	I0603 12:06:42.098513   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .GetState
	I0603 12:06:42.099866   73294 fix.go:112] recreateIfNeeded on default-k8s-diff-port-196710: state=Stopped err=<nil>
	I0603 12:06:42.099890   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .DriverName
	W0603 12:06:42.100034   73294 fix.go:138] unexpected machine state, will restart: <nil>
	I0603 12:06:42.102388   73294 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-196710" ...
	I0603 12:06:40.768113   73179 main.go:141] libmachine: (no-preload-602118) DBG | domain no-preload-602118 has defined MAC address 52:54:00:ac:6c:91 in network mk-no-preload-602118
	I0603 12:06:40.768689   73179 main.go:141] libmachine: (no-preload-602118) Found IP for machine: 192.168.50.245
	I0603 12:06:40.768705   73179 main.go:141] libmachine: (no-preload-602118) Reserving static IP address...
	I0603 12:06:40.768717   73179 main.go:141] libmachine: (no-preload-602118) DBG | domain no-preload-602118 has current primary IP address 192.168.50.245 and MAC address 52:54:00:ac:6c:91 in network mk-no-preload-602118
	I0603 12:06:40.769262   73179 main.go:141] libmachine: (no-preload-602118) Reserved static IP address: 192.168.50.245
	I0603 12:06:40.769291   73179 main.go:141] libmachine: (no-preload-602118) Waiting for SSH to be available...
	I0603 12:06:40.769306   73179 main.go:141] libmachine: (no-preload-602118) DBG | found host DHCP lease matching {name: "no-preload-602118", mac: "52:54:00:ac:6c:91", ip: "192.168.50.245"} in network mk-no-preload-602118: {Iface:virbr2 ExpiryTime:2024-06-03 13:06:33 +0000 UTC Type:0 Mac:52:54:00:ac:6c:91 Iaid: IPaddr:192.168.50.245 Prefix:24 Hostname:no-preload-602118 Clientid:01:52:54:00:ac:6c:91}
	I0603 12:06:40.769324   73179 main.go:141] libmachine: (no-preload-602118) DBG | skip adding static IP to network mk-no-preload-602118 - found existing host DHCP lease matching {name: "no-preload-602118", mac: "52:54:00:ac:6c:91", ip: "192.168.50.245"}
	I0603 12:06:40.769336   73179 main.go:141] libmachine: (no-preload-602118) DBG | Getting to WaitForSSH function...
	I0603 12:06:40.771708   73179 main.go:141] libmachine: (no-preload-602118) DBG | domain no-preload-602118 has defined MAC address 52:54:00:ac:6c:91 in network mk-no-preload-602118
	I0603 12:06:40.772029   73179 main.go:141] libmachine: (no-preload-602118) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:6c:91", ip: ""} in network mk-no-preload-602118: {Iface:virbr2 ExpiryTime:2024-06-03 13:06:33 +0000 UTC Type:0 Mac:52:54:00:ac:6c:91 Iaid: IPaddr:192.168.50.245 Prefix:24 Hostname:no-preload-602118 Clientid:01:52:54:00:ac:6c:91}
	I0603 12:06:40.772056   73179 main.go:141] libmachine: (no-preload-602118) DBG | domain no-preload-602118 has defined IP address 192.168.50.245 and MAC address 52:54:00:ac:6c:91 in network mk-no-preload-602118
	I0603 12:06:40.772179   73179 main.go:141] libmachine: (no-preload-602118) DBG | Using SSH client type: external
	I0603 12:06:40.772203   73179 main.go:141] libmachine: (no-preload-602118) DBG | Using SSH private key: /home/jenkins/minikube-integration/19008-7755/.minikube/machines/no-preload-602118/id_rsa (-rw-------)
	I0603 12:06:40.772247   73179 main.go:141] libmachine: (no-preload-602118) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.245 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19008-7755/.minikube/machines/no-preload-602118/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0603 12:06:40.772276   73179 main.go:141] libmachine: (no-preload-602118) DBG | About to run SSH command:
	I0603 12:06:40.772292   73179 main.go:141] libmachine: (no-preload-602118) DBG | exit 0
	I0603 12:06:40.898941   73179 main.go:141] libmachine: (no-preload-602118) DBG | SSH cmd err, output: <nil>: 
	I0603 12:06:40.899308   73179 main.go:141] libmachine: (no-preload-602118) Calling .GetConfigRaw
	I0603 12:06:40.899900   73179 main.go:141] libmachine: (no-preload-602118) Calling .GetIP
	I0603 12:06:40.902486   73179 main.go:141] libmachine: (no-preload-602118) DBG | domain no-preload-602118 has defined MAC address 52:54:00:ac:6c:91 in network mk-no-preload-602118
	I0603 12:06:40.902835   73179 main.go:141] libmachine: (no-preload-602118) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:6c:91", ip: ""} in network mk-no-preload-602118: {Iface:virbr2 ExpiryTime:2024-06-03 13:06:33 +0000 UTC Type:0 Mac:52:54:00:ac:6c:91 Iaid: IPaddr:192.168.50.245 Prefix:24 Hostname:no-preload-602118 Clientid:01:52:54:00:ac:6c:91}
	I0603 12:06:40.902863   73179 main.go:141] libmachine: (no-preload-602118) DBG | domain no-preload-602118 has defined IP address 192.168.50.245 and MAC address 52:54:00:ac:6c:91 in network mk-no-preload-602118
	I0603 12:06:40.903133   73179 profile.go:143] Saving config to /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/no-preload-602118/config.json ...
	I0603 12:06:40.903331   73179 machine.go:94] provisionDockerMachine start ...
	I0603 12:06:40.903348   73179 main.go:141] libmachine: (no-preload-602118) Calling .DriverName
	I0603 12:06:40.903530   73179 main.go:141] libmachine: (no-preload-602118) Calling .GetSSHHostname
	I0603 12:06:40.905503   73179 main.go:141] libmachine: (no-preload-602118) DBG | domain no-preload-602118 has defined MAC address 52:54:00:ac:6c:91 in network mk-no-preload-602118
	I0603 12:06:40.905783   73179 main.go:141] libmachine: (no-preload-602118) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:6c:91", ip: ""} in network mk-no-preload-602118: {Iface:virbr2 ExpiryTime:2024-06-03 13:06:33 +0000 UTC Type:0 Mac:52:54:00:ac:6c:91 Iaid: IPaddr:192.168.50.245 Prefix:24 Hostname:no-preload-602118 Clientid:01:52:54:00:ac:6c:91}
	I0603 12:06:40.905816   73179 main.go:141] libmachine: (no-preload-602118) DBG | domain no-preload-602118 has defined IP address 192.168.50.245 and MAC address 52:54:00:ac:6c:91 in network mk-no-preload-602118
	I0603 12:06:40.905911   73179 main.go:141] libmachine: (no-preload-602118) Calling .GetSSHPort
	I0603 12:06:40.906094   73179 main.go:141] libmachine: (no-preload-602118) Calling .GetSSHKeyPath
	I0603 12:06:40.906253   73179 main.go:141] libmachine: (no-preload-602118) Calling .GetSSHKeyPath
	I0603 12:06:40.906416   73179 main.go:141] libmachine: (no-preload-602118) Calling .GetSSHUsername
	I0603 12:06:40.906579   73179 main.go:141] libmachine: Using SSH client type: native
	I0603 12:06:40.906760   73179 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.50.245 22 <nil> <nil>}
	I0603 12:06:40.906771   73179 main.go:141] libmachine: About to run SSH command:
	hostname
	I0603 12:06:41.015416   73179 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0603 12:06:41.015443   73179 main.go:141] libmachine: (no-preload-602118) Calling .GetMachineName
	I0603 12:06:41.015832   73179 buildroot.go:166] provisioning hostname "no-preload-602118"
	I0603 12:06:41.015861   73179 main.go:141] libmachine: (no-preload-602118) Calling .GetMachineName
	I0603 12:06:41.016078   73179 main.go:141] libmachine: (no-preload-602118) Calling .GetSSHHostname
	I0603 12:06:41.018606   73179 main.go:141] libmachine: (no-preload-602118) DBG | domain no-preload-602118 has defined MAC address 52:54:00:ac:6c:91 in network mk-no-preload-602118
	I0603 12:06:41.018898   73179 main.go:141] libmachine: (no-preload-602118) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:6c:91", ip: ""} in network mk-no-preload-602118: {Iface:virbr2 ExpiryTime:2024-06-03 13:06:33 +0000 UTC Type:0 Mac:52:54:00:ac:6c:91 Iaid: IPaddr:192.168.50.245 Prefix:24 Hostname:no-preload-602118 Clientid:01:52:54:00:ac:6c:91}
	I0603 12:06:41.018928   73179 main.go:141] libmachine: (no-preload-602118) DBG | domain no-preload-602118 has defined IP address 192.168.50.245 and MAC address 52:54:00:ac:6c:91 in network mk-no-preload-602118
	I0603 12:06:41.019125   73179 main.go:141] libmachine: (no-preload-602118) Calling .GetSSHPort
	I0603 12:06:41.019310   73179 main.go:141] libmachine: (no-preload-602118) Calling .GetSSHKeyPath
	I0603 12:06:41.019476   73179 main.go:141] libmachine: (no-preload-602118) Calling .GetSSHKeyPath
	I0603 12:06:41.019597   73179 main.go:141] libmachine: (no-preload-602118) Calling .GetSSHUsername
	I0603 12:06:41.019753   73179 main.go:141] libmachine: Using SSH client type: native
	I0603 12:06:41.019948   73179 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.50.245 22 <nil> <nil>}
	I0603 12:06:41.019961   73179 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-602118 && echo "no-preload-602118" | sudo tee /etc/hostname
	I0603 12:06:41.145267   73179 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-602118
	
	I0603 12:06:41.145298   73179 main.go:141] libmachine: (no-preload-602118) Calling .GetSSHHostname
	I0603 12:06:41.148117   73179 main.go:141] libmachine: (no-preload-602118) DBG | domain no-preload-602118 has defined MAC address 52:54:00:ac:6c:91 in network mk-no-preload-602118
	I0603 12:06:41.148416   73179 main.go:141] libmachine: (no-preload-602118) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:6c:91", ip: ""} in network mk-no-preload-602118: {Iface:virbr2 ExpiryTime:2024-06-03 13:06:33 +0000 UTC Type:0 Mac:52:54:00:ac:6c:91 Iaid: IPaddr:192.168.50.245 Prefix:24 Hostname:no-preload-602118 Clientid:01:52:54:00:ac:6c:91}
	I0603 12:06:41.148444   73179 main.go:141] libmachine: (no-preload-602118) DBG | domain no-preload-602118 has defined IP address 192.168.50.245 and MAC address 52:54:00:ac:6c:91 in network mk-no-preload-602118
	I0603 12:06:41.148692   73179 main.go:141] libmachine: (no-preload-602118) Calling .GetSSHPort
	I0603 12:06:41.148914   73179 main.go:141] libmachine: (no-preload-602118) Calling .GetSSHKeyPath
	I0603 12:06:41.149068   73179 main.go:141] libmachine: (no-preload-602118) Calling .GetSSHKeyPath
	I0603 12:06:41.149199   73179 main.go:141] libmachine: (no-preload-602118) Calling .GetSSHUsername
	I0603 12:06:41.149316   73179 main.go:141] libmachine: Using SSH client type: native
	I0603 12:06:41.149475   73179 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.50.245 22 <nil> <nil>}
	I0603 12:06:41.149490   73179 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-602118' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-602118/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-602118' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0603 12:06:41.267803   73179 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0603 12:06:41.267841   73179 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19008-7755/.minikube CaCertPath:/home/jenkins/minikube-integration/19008-7755/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19008-7755/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19008-7755/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19008-7755/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19008-7755/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19008-7755/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19008-7755/.minikube}
	I0603 12:06:41.267859   73179 buildroot.go:174] setting up certificates
	I0603 12:06:41.267869   73179 provision.go:84] configureAuth start
	I0603 12:06:41.267877   73179 main.go:141] libmachine: (no-preload-602118) Calling .GetMachineName
	I0603 12:06:41.268155   73179 main.go:141] libmachine: (no-preload-602118) Calling .GetIP
	I0603 12:06:41.270862   73179 main.go:141] libmachine: (no-preload-602118) DBG | domain no-preload-602118 has defined MAC address 52:54:00:ac:6c:91 in network mk-no-preload-602118
	I0603 12:06:41.271249   73179 main.go:141] libmachine: (no-preload-602118) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:6c:91", ip: ""} in network mk-no-preload-602118: {Iface:virbr2 ExpiryTime:2024-06-03 13:06:33 +0000 UTC Type:0 Mac:52:54:00:ac:6c:91 Iaid: IPaddr:192.168.50.245 Prefix:24 Hostname:no-preload-602118 Clientid:01:52:54:00:ac:6c:91}
	I0603 12:06:41.271271   73179 main.go:141] libmachine: (no-preload-602118) DBG | domain no-preload-602118 has defined IP address 192.168.50.245 and MAC address 52:54:00:ac:6c:91 in network mk-no-preload-602118
	I0603 12:06:41.271414   73179 main.go:141] libmachine: (no-preload-602118) Calling .GetSSHHostname
	I0603 12:06:41.273376   73179 main.go:141] libmachine: (no-preload-602118) DBG | domain no-preload-602118 has defined MAC address 52:54:00:ac:6c:91 in network mk-no-preload-602118
	I0603 12:06:41.273689   73179 main.go:141] libmachine: (no-preload-602118) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:6c:91", ip: ""} in network mk-no-preload-602118: {Iface:virbr2 ExpiryTime:2024-06-03 13:06:33 +0000 UTC Type:0 Mac:52:54:00:ac:6c:91 Iaid: IPaddr:192.168.50.245 Prefix:24 Hostname:no-preload-602118 Clientid:01:52:54:00:ac:6c:91}
	I0603 12:06:41.273715   73179 main.go:141] libmachine: (no-preload-602118) DBG | domain no-preload-602118 has defined IP address 192.168.50.245 and MAC address 52:54:00:ac:6c:91 in network mk-no-preload-602118
	I0603 12:06:41.273831   73179 provision.go:143] copyHostCerts
	I0603 12:06:41.273907   73179 exec_runner.go:144] found /home/jenkins/minikube-integration/19008-7755/.minikube/ca.pem, removing ...
	I0603 12:06:41.273926   73179 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19008-7755/.minikube/ca.pem
	I0603 12:06:41.274002   73179 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19008-7755/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19008-7755/.minikube/ca.pem (1082 bytes)
	I0603 12:06:41.274128   73179 exec_runner.go:144] found /home/jenkins/minikube-integration/19008-7755/.minikube/cert.pem, removing ...
	I0603 12:06:41.274138   73179 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19008-7755/.minikube/cert.pem
	I0603 12:06:41.274173   73179 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19008-7755/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19008-7755/.minikube/cert.pem (1123 bytes)
	I0603 12:06:41.274248   73179 exec_runner.go:144] found /home/jenkins/minikube-integration/19008-7755/.minikube/key.pem, removing ...
	I0603 12:06:41.274259   73179 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19008-7755/.minikube/key.pem
	I0603 12:06:41.274296   73179 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19008-7755/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19008-7755/.minikube/key.pem (1679 bytes)
	I0603 12:06:41.274369   73179 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19008-7755/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19008-7755/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19008-7755/.minikube/certs/ca-key.pem org=jenkins.no-preload-602118 san=[127.0.0.1 192.168.50.245 localhost minikube no-preload-602118]
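	(The server certificate above is generated in-process by minikube's Go code. Purely for illustration, an equivalent pair of openssl commands that would produce a server cert with the same SAN list; the file names are hypothetical, the SANs are the ones from the log line above.)
	    # Illustrative sketch only - minikube does this internally, not via openssl.
	    openssl req -new -newkey rsa:2048 -nodes -subj "/O=jenkins.no-preload-602118" \
	      -keyout server-key.pem -out server.csr
	    openssl x509 -req -in server.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial \
	      -out server.pem -days 365 \
	      -extfile <(printf 'subjectAltName=IP:127.0.0.1,IP:192.168.50.245,DNS:localhost,DNS:minikube,DNS:no-preload-602118')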
	I0603 12:06:41.377976   73179 provision.go:177] copyRemoteCerts
	I0603 12:06:41.378029   73179 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0603 12:06:41.378053   73179 main.go:141] libmachine: (no-preload-602118) Calling .GetSSHHostname
	I0603 12:06:41.380502   73179 main.go:141] libmachine: (no-preload-602118) DBG | domain no-preload-602118 has defined MAC address 52:54:00:ac:6c:91 in network mk-no-preload-602118
	I0603 12:06:41.380818   73179 main.go:141] libmachine: (no-preload-602118) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:6c:91", ip: ""} in network mk-no-preload-602118: {Iface:virbr2 ExpiryTime:2024-06-03 13:06:33 +0000 UTC Type:0 Mac:52:54:00:ac:6c:91 Iaid: IPaddr:192.168.50.245 Prefix:24 Hostname:no-preload-602118 Clientid:01:52:54:00:ac:6c:91}
	I0603 12:06:41.380839   73179 main.go:141] libmachine: (no-preload-602118) DBG | domain no-preload-602118 has defined IP address 192.168.50.245 and MAC address 52:54:00:ac:6c:91 in network mk-no-preload-602118
	I0603 12:06:41.381002   73179 main.go:141] libmachine: (no-preload-602118) Calling .GetSSHPort
	I0603 12:06:41.381171   73179 main.go:141] libmachine: (no-preload-602118) Calling .GetSSHKeyPath
	I0603 12:06:41.381345   73179 main.go:141] libmachine: (no-preload-602118) Calling .GetSSHUsername
	I0603 12:06:41.381462   73179 sshutil.go:53] new ssh client: &{IP:192.168.50.245 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19008-7755/.minikube/machines/no-preload-602118/id_rsa Username:docker}
	I0603 12:06:41.465434   73179 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0603 12:06:41.492636   73179 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0603 12:06:41.516229   73179 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0603 12:06:41.538729   73179 provision.go:87] duration metric: took 270.850705ms to configureAuth
	I0603 12:06:41.538751   73179 buildroot.go:189] setting minikube options for container-runtime
	I0603 12:06:41.538913   73179 config.go:182] Loaded profile config "no-preload-602118": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0603 12:06:41.538998   73179 main.go:141] libmachine: (no-preload-602118) Calling .GetSSHHostname
	I0603 12:06:41.541514   73179 main.go:141] libmachine: (no-preload-602118) DBG | domain no-preload-602118 has defined MAC address 52:54:00:ac:6c:91 in network mk-no-preload-602118
	I0603 12:06:41.541818   73179 main.go:141] libmachine: (no-preload-602118) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:6c:91", ip: ""} in network mk-no-preload-602118: {Iface:virbr2 ExpiryTime:2024-06-03 13:06:33 +0000 UTC Type:0 Mac:52:54:00:ac:6c:91 Iaid: IPaddr:192.168.50.245 Prefix:24 Hostname:no-preload-602118 Clientid:01:52:54:00:ac:6c:91}
	I0603 12:06:41.541843   73179 main.go:141] libmachine: (no-preload-602118) DBG | domain no-preload-602118 has defined IP address 192.168.50.245 and MAC address 52:54:00:ac:6c:91 in network mk-no-preload-602118
	I0603 12:06:41.541966   73179 main.go:141] libmachine: (no-preload-602118) Calling .GetSSHPort
	I0603 12:06:41.542166   73179 main.go:141] libmachine: (no-preload-602118) Calling .GetSSHKeyPath
	I0603 12:06:41.542350   73179 main.go:141] libmachine: (no-preload-602118) Calling .GetSSHKeyPath
	I0603 12:06:41.542483   73179 main.go:141] libmachine: (no-preload-602118) Calling .GetSSHUsername
	I0603 12:06:41.542666   73179 main.go:141] libmachine: Using SSH client type: native
	I0603 12:06:41.542809   73179 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.50.245 22 <nil> <nil>}
	I0603 12:06:41.542823   73179 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0603 12:06:41.837735   73179 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0603 12:06:41.837766   73179 machine.go:97] duration metric: took 934.421104ms to provisionDockerMachine
	I0603 12:06:41.837780   73179 start.go:293] postStartSetup for "no-preload-602118" (driver="kvm2")
	I0603 12:06:41.837791   73179 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0603 12:06:41.837808   73179 main.go:141] libmachine: (no-preload-602118) Calling .DriverName
	I0603 12:06:41.838173   73179 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0603 12:06:41.838200   73179 main.go:141] libmachine: (no-preload-602118) Calling .GetSSHHostname
	I0603 12:06:41.840498   73179 main.go:141] libmachine: (no-preload-602118) DBG | domain no-preload-602118 has defined MAC address 52:54:00:ac:6c:91 in network mk-no-preload-602118
	I0603 12:06:41.840832   73179 main.go:141] libmachine: (no-preload-602118) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:6c:91", ip: ""} in network mk-no-preload-602118: {Iface:virbr2 ExpiryTime:2024-06-03 13:06:33 +0000 UTC Type:0 Mac:52:54:00:ac:6c:91 Iaid: IPaddr:192.168.50.245 Prefix:24 Hostname:no-preload-602118 Clientid:01:52:54:00:ac:6c:91}
	I0603 12:06:41.840861   73179 main.go:141] libmachine: (no-preload-602118) DBG | domain no-preload-602118 has defined IP address 192.168.50.245 and MAC address 52:54:00:ac:6c:91 in network mk-no-preload-602118
	I0603 12:06:41.840990   73179 main.go:141] libmachine: (no-preload-602118) Calling .GetSSHPort
	I0603 12:06:41.841179   73179 main.go:141] libmachine: (no-preload-602118) Calling .GetSSHKeyPath
	I0603 12:06:41.841351   73179 main.go:141] libmachine: (no-preload-602118) Calling .GetSSHUsername
	I0603 12:06:41.841473   73179 sshutil.go:53] new ssh client: &{IP:192.168.50.245 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19008-7755/.minikube/machines/no-preload-602118/id_rsa Username:docker}
	I0603 12:06:41.926168   73179 ssh_runner.go:195] Run: cat /etc/os-release
	I0603 12:06:41.930420   73179 info.go:137] Remote host: Buildroot 2023.02.9
	I0603 12:06:41.930450   73179 filesync.go:126] Scanning /home/jenkins/minikube-integration/19008-7755/.minikube/addons for local assets ...
	I0603 12:06:41.930509   73179 filesync.go:126] Scanning /home/jenkins/minikube-integration/19008-7755/.minikube/files for local assets ...
	I0603 12:06:41.930583   73179 filesync.go:149] local asset: /home/jenkins/minikube-integration/19008-7755/.minikube/files/etc/ssl/certs/150282.pem -> 150282.pem in /etc/ssl/certs
	I0603 12:06:41.930661   73179 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0603 12:06:41.940412   73179 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/files/etc/ssl/certs/150282.pem --> /etc/ssl/certs/150282.pem (1708 bytes)
	I0603 12:06:41.963912   73179 start.go:296] duration metric: took 126.115944ms for postStartSetup
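	(The postStartSetup scan above mirrors anything placed under the host's .minikube/files tree onto the node; here 150282.pem ends up in /etc/ssl/certs. A minimal host-side sketch of driving that sync, with a hypothetical certificate name:)
	    # Files under $MINIKUBE_HOME/.minikube/files/<path> are copied to <path> on the
	    # node at start time; my-extra-ca.pem is a hypothetical example file name.
	    mkdir -p ~/.minikube/files/etc/ssl/certs
	    cp my-extra-ca.pem ~/.minikube/files/etc/ssl/certs/
	    minikube start -p no-preload-602118    # the next start syncs it onto the node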
	I0603 12:06:41.963949   73179 fix.go:56] duration metric: took 19.520525784s for fixHost
	I0603 12:06:41.963991   73179 main.go:141] libmachine: (no-preload-602118) Calling .GetSSHHostname
	I0603 12:06:41.966591   73179 main.go:141] libmachine: (no-preload-602118) DBG | domain no-preload-602118 has defined MAC address 52:54:00:ac:6c:91 in network mk-no-preload-602118
	I0603 12:06:41.966928   73179 main.go:141] libmachine: (no-preload-602118) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:6c:91", ip: ""} in network mk-no-preload-602118: {Iface:virbr2 ExpiryTime:2024-06-03 13:06:33 +0000 UTC Type:0 Mac:52:54:00:ac:6c:91 Iaid: IPaddr:192.168.50.245 Prefix:24 Hostname:no-preload-602118 Clientid:01:52:54:00:ac:6c:91}
	I0603 12:06:41.966946   73179 main.go:141] libmachine: (no-preload-602118) DBG | domain no-preload-602118 has defined IP address 192.168.50.245 and MAC address 52:54:00:ac:6c:91 in network mk-no-preload-602118
	I0603 12:06:41.967081   73179 main.go:141] libmachine: (no-preload-602118) Calling .GetSSHPort
	I0603 12:06:41.967272   73179 main.go:141] libmachine: (no-preload-602118) Calling .GetSSHKeyPath
	I0603 12:06:41.967423   73179 main.go:141] libmachine: (no-preload-602118) Calling .GetSSHKeyPath
	I0603 12:06:41.967608   73179 main.go:141] libmachine: (no-preload-602118) Calling .GetSSHUsername
	I0603 12:06:41.967762   73179 main.go:141] libmachine: Using SSH client type: native
	I0603 12:06:41.967918   73179 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.50.245 22 <nil> <nil>}
	I0603 12:06:41.967927   73179 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0603 12:06:42.079982   73179 main.go:141] libmachine: SSH cmd err, output: <nil>: 1717416402.057236225
	
	I0603 12:06:42.080009   73179 fix.go:216] guest clock: 1717416402.057236225
	I0603 12:06:42.080015   73179 fix.go:229] Guest: 2024-06-03 12:06:42.057236225 +0000 UTC Remote: 2024-06-03 12:06:41.963952729 +0000 UTC m=+276.629989589 (delta=93.283496ms)
	I0603 12:06:42.080041   73179 fix.go:200] guest clock delta is within tolerance: 93.283496ms
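	(The fix.go lines above compare the guest's `date +%s.%N` output against the host's wall clock and accept the ~93ms delta. A rough manual re-check of the same comparison, assuming the SSH user, key location and IP shown in this log:)
	    # Sketch of the guest/host clock-delta check performed above.
	    GUEST=$(ssh -i ~/.minikube/machines/no-preload-602118/id_rsa docker@192.168.50.245 'date +%s.%N')
	    HOST=$(date +%s.%N)
	    echo "delta: $(echo "$HOST - $GUEST" | bc) seconds"   # small values are treated as within tolerance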
	I0603 12:06:42.080045   73179 start.go:83] releasing machines lock for "no-preload-602118", held for 19.636648914s
	I0603 12:06:42.080070   73179 main.go:141] libmachine: (no-preload-602118) Calling .DriverName
	I0603 12:06:42.080311   73179 main.go:141] libmachine: (no-preload-602118) Calling .GetIP
	I0603 12:06:42.083162   73179 main.go:141] libmachine: (no-preload-602118) DBG | domain no-preload-602118 has defined MAC address 52:54:00:ac:6c:91 in network mk-no-preload-602118
	I0603 12:06:42.083519   73179 main.go:141] libmachine: (no-preload-602118) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:6c:91", ip: ""} in network mk-no-preload-602118: {Iface:virbr2 ExpiryTime:2024-06-03 13:06:33 +0000 UTC Type:0 Mac:52:54:00:ac:6c:91 Iaid: IPaddr:192.168.50.245 Prefix:24 Hostname:no-preload-602118 Clientid:01:52:54:00:ac:6c:91}
	I0603 12:06:42.083544   73179 main.go:141] libmachine: (no-preload-602118) DBG | domain no-preload-602118 has defined IP address 192.168.50.245 and MAC address 52:54:00:ac:6c:91 in network mk-no-preload-602118
	I0603 12:06:42.083733   73179 main.go:141] libmachine: (no-preload-602118) Calling .DriverName
	I0603 12:06:42.084238   73179 main.go:141] libmachine: (no-preload-602118) Calling .DriverName
	I0603 12:06:42.084405   73179 main.go:141] libmachine: (no-preload-602118) Calling .DriverName
	I0603 12:06:42.084458   73179 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0603 12:06:42.084528   73179 main.go:141] libmachine: (no-preload-602118) Calling .GetSSHHostname
	I0603 12:06:42.084607   73179 ssh_runner.go:195] Run: cat /version.json
	I0603 12:06:42.084632   73179 main.go:141] libmachine: (no-preload-602118) Calling .GetSSHHostname
	I0603 12:06:42.087630   73179 main.go:141] libmachine: (no-preload-602118) DBG | domain no-preload-602118 has defined MAC address 52:54:00:ac:6c:91 in network mk-no-preload-602118
	I0603 12:06:42.087927   73179 main.go:141] libmachine: (no-preload-602118) DBG | domain no-preload-602118 has defined MAC address 52:54:00:ac:6c:91 in network mk-no-preload-602118
	I0603 12:06:42.087958   73179 main.go:141] libmachine: (no-preload-602118) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:6c:91", ip: ""} in network mk-no-preload-602118: {Iface:virbr2 ExpiryTime:2024-06-03 13:06:33 +0000 UTC Type:0 Mac:52:54:00:ac:6c:91 Iaid: IPaddr:192.168.50.245 Prefix:24 Hostname:no-preload-602118 Clientid:01:52:54:00:ac:6c:91}
	I0603 12:06:42.087981   73179 main.go:141] libmachine: (no-preload-602118) DBG | domain no-preload-602118 has defined IP address 192.168.50.245 and MAC address 52:54:00:ac:6c:91 in network mk-no-preload-602118
	I0603 12:06:42.088083   73179 main.go:141] libmachine: (no-preload-602118) Calling .GetSSHPort
	I0603 12:06:42.088261   73179 main.go:141] libmachine: (no-preload-602118) Calling .GetSSHKeyPath
	I0603 12:06:42.088441   73179 main.go:141] libmachine: (no-preload-602118) Calling .GetSSHUsername
	I0603 12:06:42.088463   73179 main.go:141] libmachine: (no-preload-602118) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:6c:91", ip: ""} in network mk-no-preload-602118: {Iface:virbr2 ExpiryTime:2024-06-03 13:06:33 +0000 UTC Type:0 Mac:52:54:00:ac:6c:91 Iaid: IPaddr:192.168.50.245 Prefix:24 Hostname:no-preload-602118 Clientid:01:52:54:00:ac:6c:91}
	I0603 12:06:42.088507   73179 main.go:141] libmachine: (no-preload-602118) DBG | domain no-preload-602118 has defined IP address 192.168.50.245 and MAC address 52:54:00:ac:6c:91 in network mk-no-preload-602118
	I0603 12:06:42.088592   73179 sshutil.go:53] new ssh client: &{IP:192.168.50.245 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19008-7755/.minikube/machines/no-preload-602118/id_rsa Username:docker}
	I0603 12:06:42.088666   73179 main.go:141] libmachine: (no-preload-602118) Calling .GetSSHPort
	I0603 12:06:42.088800   73179 main.go:141] libmachine: (no-preload-602118) Calling .GetSSHKeyPath
	I0603 12:06:42.088961   73179 main.go:141] libmachine: (no-preload-602118) Calling .GetSSHUsername
	I0603 12:06:42.089101   73179 sshutil.go:53] new ssh client: &{IP:192.168.50.245 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19008-7755/.minikube/machines/no-preload-602118/id_rsa Username:docker}
	I0603 12:06:42.192400   73179 ssh_runner.go:195] Run: systemctl --version
	I0603 12:06:42.198773   73179 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0603 12:06:42.345931   73179 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0603 12:06:42.351818   73179 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0603 12:06:42.351877   73179 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0603 12:06:42.368582   73179 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0603 12:06:42.368609   73179 start.go:494] detecting cgroup driver to use...
	I0603 12:06:42.368680   73179 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0603 12:06:42.384411   73179 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0603 12:06:42.398006   73179 docker.go:217] disabling cri-docker service (if available) ...
	I0603 12:06:42.398052   73179 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0603 12:06:42.412680   73179 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0603 12:06:42.427157   73179 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0603 12:06:42.537162   73179 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0603 12:06:42.683438   73179 docker.go:233] disabling docker service ...
	I0603 12:06:42.683505   73179 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0603 12:06:42.697969   73179 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0603 12:06:42.711164   73179 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0603 12:06:42.835194   73179 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0603 12:06:42.947116   73179 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0603 12:06:42.961398   73179 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0603 12:06:42.980179   73179 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0603 12:06:42.980227   73179 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 12:06:42.990583   73179 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0603 12:06:42.990642   73179 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 12:06:43.001031   73179 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 12:06:43.012124   73179 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 12:06:43.023143   73179 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0603 12:06:43.034535   73179 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 12:06:43.045854   73179 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 12:06:43.063071   73179 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 12:06:43.074257   73179 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0603 12:06:43.083914   73179 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0603 12:06:43.083965   73179 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0603 12:06:43.098285   73179 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0603 12:06:43.108034   73179 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0603 12:06:43.219068   73179 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0603 12:06:43.376591   73179 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0603 12:06:43.376655   73179 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0603 12:06:43.381868   73179 start.go:562] Will wait 60s for crictl version
	I0603 12:06:43.381939   73179 ssh_runner.go:195] Run: which crictl
	I0603 12:06:43.385730   73179 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0603 12:06:43.423331   73179 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0603 12:06:43.423428   73179 ssh_runner.go:195] Run: crio --version
	I0603 12:06:43.450760   73179 ssh_runner.go:195] Run: crio --version
	I0603 12:06:43.479690   73179 out.go:177] * Preparing Kubernetes v1.30.1 on CRI-O 1.29.1 ...
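	(The block from 12:06:42.98 to 12:06:43.37 rewrites /etc/crio/crio.conf.d/02-crio.conf and restarts CRI-O. Collected into one place, the main edits performed above amount to the following consolidated sketch; the commands are the ones from the log, with a couple of housekeeping steps omitted.)
	    CONF=/etc/crio/crio.conf.d/02-crio.conf
	    sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' "$CONF"
	    sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' "$CONF"
	    sudo sed -i '/conmon_cgroup = .*/d' "$CONF"
	    sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' "$CONF"
	    sudo modprobe br_netfilter                          # the sysctl probe above failed, so load the module
	    sudo sh -c 'echo 1 > /proc/sys/net/ipv4/ip_forward'
	    sudo systemctl daemon-reload && sudo systemctl restart crio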
	I0603 12:06:42.103653   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .Start
	I0603 12:06:42.103818   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Ensuring networks are active...
	I0603 12:06:42.104660   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Ensuring network default is active
	I0603 12:06:42.104985   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Ensuring network mk-default-k8s-diff-port-196710 is active
	I0603 12:06:42.105332   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Getting domain xml...
	I0603 12:06:42.106264   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Creating domain...
	I0603 12:06:43.347118   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Waiting to get IP...
	I0603 12:06:43.347855   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | domain default-k8s-diff-port-196710 has defined MAC address 52:54:00:9c:61:49 in network mk-default-k8s-diff-port-196710
	I0603 12:06:43.348279   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | unable to find current IP address of domain default-k8s-diff-port-196710 in network mk-default-k8s-diff-port-196710
	I0603 12:06:43.348337   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | I0603 12:06:43.348249   74483 retry.go:31] will retry after 307.61274ms: waiting for machine to come up
	I0603 12:06:43.657720   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | domain default-k8s-diff-port-196710 has defined MAC address 52:54:00:9c:61:49 in network mk-default-k8s-diff-port-196710
	I0603 12:06:43.658162   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | unable to find current IP address of domain default-k8s-diff-port-196710 in network mk-default-k8s-diff-port-196710
	I0603 12:06:43.658188   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | I0603 12:06:43.658129   74483 retry.go:31] will retry after 387.079794ms: waiting for machine to come up
	I0603 12:06:44.046798   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | domain default-k8s-diff-port-196710 has defined MAC address 52:54:00:9c:61:49 in network mk-default-k8s-diff-port-196710
	I0603 12:06:44.047345   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | unable to find current IP address of domain default-k8s-diff-port-196710 in network mk-default-k8s-diff-port-196710
	I0603 12:06:44.047376   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | I0603 12:06:44.047279   74483 retry.go:31] will retry after 482.224139ms: waiting for machine to come up
	I0603 12:06:44.531107   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | domain default-k8s-diff-port-196710 has defined MAC address 52:54:00:9c:61:49 in network mk-default-k8s-diff-port-196710
	I0603 12:06:44.531588   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | unable to find current IP address of domain default-k8s-diff-port-196710 in network mk-default-k8s-diff-port-196710
	I0603 12:06:44.531615   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | I0603 12:06:44.531542   74483 retry.go:31] will retry after 438.288195ms: waiting for machine to come up
	I0603 12:06:43.481020   73179 main.go:141] libmachine: (no-preload-602118) Calling .GetIP
	I0603 12:06:43.483887   73179 main.go:141] libmachine: (no-preload-602118) DBG | domain no-preload-602118 has defined MAC address 52:54:00:ac:6c:91 in network mk-no-preload-602118
	I0603 12:06:43.484297   73179 main.go:141] libmachine: (no-preload-602118) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:6c:91", ip: ""} in network mk-no-preload-602118: {Iface:virbr2 ExpiryTime:2024-06-03 13:06:33 +0000 UTC Type:0 Mac:52:54:00:ac:6c:91 Iaid: IPaddr:192.168.50.245 Prefix:24 Hostname:no-preload-602118 Clientid:01:52:54:00:ac:6c:91}
	I0603 12:06:43.484324   73179 main.go:141] libmachine: (no-preload-602118) DBG | domain no-preload-602118 has defined IP address 192.168.50.245 and MAC address 52:54:00:ac:6c:91 in network mk-no-preload-602118
	I0603 12:06:43.484533   73179 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0603 12:06:43.488769   73179 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0603 12:06:43.501433   73179 kubeadm.go:877] updating cluster {Name:no-preload-602118 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:no-preload-602118 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.245 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0603 12:06:43.501583   73179 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime crio
	I0603 12:06:43.501644   73179 ssh_runner.go:195] Run: sudo crictl images --output json
	I0603 12:06:43.537382   73179 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.1". assuming images are not preloaded.
	I0603 12:06:43.537407   73179 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.30.1 registry.k8s.io/kube-controller-manager:v1.30.1 registry.k8s.io/kube-scheduler:v1.30.1 registry.k8s.io/kube-proxy:v1.30.1 registry.k8s.io/pause:3.9 registry.k8s.io/etcd:3.5.12-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0603 12:06:43.537504   73179 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.30.1
	I0603 12:06:43.537483   73179 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.30.1
	I0603 12:06:43.537484   73179 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.12-0
	I0603 12:06:43.537597   73179 image.go:134] retrieving image: registry.k8s.io/pause:3.9
	I0603 12:06:43.537483   73179 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0603 12:06:43.537618   73179 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.30.1
	I0603 12:06:43.537612   73179 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.30.1
	I0603 12:06:43.537771   73179 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0603 12:06:43.539200   73179 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.30.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.30.1
	I0603 12:06:43.539472   73179 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.30.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.30.1
	I0603 12:06:43.539491   73179 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.30.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.30.1
	I0603 12:06:43.539504   73179 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.30.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.30.1
	I0603 12:06:43.539530   73179 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.12-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.12-0
	I0603 12:06:43.539565   73179 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0603 12:06:43.539472   73179 image.go:177] daemon lookup for registry.k8s.io/pause:3.9: Error response from daemon: No such image: registry.k8s.io/pause:3.9
	I0603 12:06:43.539934   73179 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0603 12:06:43.694144   73179 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.12-0
	I0603 12:06:43.714990   73179 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.30.1
	I0603 12:06:43.720270   73179 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.30.1
	I0603 12:06:43.734481   73179 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.30.1
	I0603 12:06:43.751928   73179 cache_images.go:116] "registry.k8s.io/etcd:3.5.12-0" needs transfer: "registry.k8s.io/etcd:3.5.12-0" does not exist at hash "3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899" in container runtime
	I0603 12:06:43.751970   73179 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.12-0
	I0603 12:06:43.752018   73179 ssh_runner.go:195] Run: which crictl
	I0603 12:06:43.780362   73179 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.30.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.30.1" does not exist at hash "91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a" in container runtime
	I0603 12:06:43.780408   73179 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.30.1
	I0603 12:06:43.780455   73179 ssh_runner.go:195] Run: which crictl
	I0603 12:06:43.798376   73179 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.30.1" needs transfer: "registry.k8s.io/kube-proxy:v1.30.1" does not exist at hash "747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd" in container runtime
	I0603 12:06:43.798415   73179 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.30.1
	I0603 12:06:43.798465   73179 ssh_runner.go:195] Run: which crictl
	I0603 12:06:43.801422   73179 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.9
	I0603 12:06:43.811338   73179 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0603 12:06:43.823969   73179 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.30.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.30.1" does not exist at hash "25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c" in container runtime
	I0603 12:06:43.824052   73179 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.30.1
	I0603 12:06:43.823979   73179 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.12-0
	I0603 12:06:43.824096   73179 ssh_runner.go:195] Run: which crictl
	I0603 12:06:43.824106   73179 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.30.1
	I0603 12:06:43.824088   73179 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.30.1
	I0603 12:06:43.861957   73179 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.30.1
	I0603 12:06:44.001291   73179 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0603 12:06:44.001312   73179 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/19008-7755/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.30.1
	I0603 12:06:44.001344   73179 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0603 12:06:44.001390   73179 ssh_runner.go:195] Run: which crictl
	I0603 12:06:44.001454   73179 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.30.1
	I0603 12:06:44.001472   73179 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/19008-7755/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.12-0
	I0603 12:06:44.001405   73179 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.30.1
	I0603 12:06:44.001544   73179 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.12-0
	I0603 12:06:44.001405   73179 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/19008-7755/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.30.1
	I0603 12:06:44.001520   73179 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.30.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.30.1" does not exist at hash "a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035" in container runtime
	I0603 12:06:44.001622   73179 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.30.1
	I0603 12:06:44.001627   73179 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.30.1
	I0603 12:06:44.001685   73179 ssh_runner.go:195] Run: which crictl
	I0603 12:06:44.014801   73179 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.30.1 (exists)
	I0603 12:06:44.014820   73179 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.30.1
	I0603 12:06:44.014858   73179 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.30.1
	I0603 12:06:44.049018   73179 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.12-0 (exists)
	I0603 12:06:44.049106   73179 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/19008-7755/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.30.1
	I0603 12:06:44.049138   73179 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0603 12:06:44.049149   73179 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.30.1
	I0603 12:06:44.049193   73179 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.30.1
	I0603 12:06:44.049202   73179 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.30.1 (exists)
	I0603 12:06:44.414960   73179 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0603 12:06:44.971603   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | domain default-k8s-diff-port-196710 has defined MAC address 52:54:00:9c:61:49 in network mk-default-k8s-diff-port-196710
	I0603 12:06:44.971986   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | unable to find current IP address of domain default-k8s-diff-port-196710 in network mk-default-k8s-diff-port-196710
	I0603 12:06:44.972027   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | I0603 12:06:44.971941   74483 retry.go:31] will retry after 696.415219ms: waiting for machine to come up
	I0603 12:06:45.669711   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | domain default-k8s-diff-port-196710 has defined MAC address 52:54:00:9c:61:49 in network mk-default-k8s-diff-port-196710
	I0603 12:06:45.670032   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | unable to find current IP address of domain default-k8s-diff-port-196710 in network mk-default-k8s-diff-port-196710
	I0603 12:06:45.670064   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | I0603 12:06:45.670011   74483 retry.go:31] will retry after 706.751938ms: waiting for machine to come up
	I0603 12:06:46.378097   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | domain default-k8s-diff-port-196710 has defined MAC address 52:54:00:9c:61:49 in network mk-default-k8s-diff-port-196710
	I0603 12:06:46.378510   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | unable to find current IP address of domain default-k8s-diff-port-196710 in network mk-default-k8s-diff-port-196710
	I0603 12:06:46.378552   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | I0603 12:06:46.378484   74483 retry.go:31] will retry after 1.039219665s: waiting for machine to come up
	I0603 12:06:47.419138   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | domain default-k8s-diff-port-196710 has defined MAC address 52:54:00:9c:61:49 in network mk-default-k8s-diff-port-196710
	I0603 12:06:47.419573   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | unable to find current IP address of domain default-k8s-diff-port-196710 in network mk-default-k8s-diff-port-196710
	I0603 12:06:47.419601   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | I0603 12:06:47.419520   74483 retry.go:31] will retry after 1.138110516s: waiting for machine to come up
	I0603 12:06:48.559728   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | domain default-k8s-diff-port-196710 has defined MAC address 52:54:00:9c:61:49 in network mk-default-k8s-diff-port-196710
	I0603 12:06:48.560297   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | unable to find current IP address of domain default-k8s-diff-port-196710 in network mk-default-k8s-diff-port-196710
	I0603 12:06:48.560320   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | I0603 12:06:48.560259   74483 retry.go:31] will retry after 1.175521014s: waiting for machine to come up
	I0603 12:06:46.011238   73179 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.30.1: (1.996357708s)
	I0603 12:06:46.011274   73179 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/19008-7755/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.30.1 from cache
	I0603 12:06:46.011313   73179 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.12-0
	I0603 12:06:46.011322   73179 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.30.1: (1.96210268s)
	I0603 12:06:46.011332   73179 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.30.1: (1.962169544s)
	I0603 12:06:46.011353   73179 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.30.1 (exists)
	I0603 12:06:46.011367   73179 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/19008-7755/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.30.1
	I0603 12:06:46.011386   73179 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.12-0
	I0603 12:06:46.011397   73179 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1: (1.962226902s)
	I0603 12:06:46.011424   73179 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/19008-7755/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0603 12:06:46.011426   73179 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (1.596439345s)
	I0603 12:06:46.011451   73179 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.30.1
	I0603 12:06:46.011474   73179 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0603 12:06:46.011483   73179 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.1
	I0603 12:06:46.011508   73179 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0603 12:06:46.011545   73179 ssh_runner.go:195] Run: which crictl
	I0603 12:06:46.020596   73179 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.30.1 (exists)
	I0603 12:06:46.020599   73179 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0603 12:06:46.020749   73179 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I0603 12:06:49.747952   73179 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (3.727320079s)
	I0603 12:06:49.748008   73179 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/19008-7755/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0603 12:06:49.748024   73179 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.12-0: (3.736616522s)
	I0603 12:06:49.748048   73179 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/19008-7755/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.12-0 from cache
	I0603 12:06:49.748074   73179 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.30.1
	I0603 12:06:49.748108   73179 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0603 12:06:49.748120   73179 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.30.1
	I0603 12:06:49.753125   73179 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0603 12:06:49.737515   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | domain default-k8s-diff-port-196710 has defined MAC address 52:54:00:9c:61:49 in network mk-default-k8s-diff-port-196710
	I0603 12:06:49.738009   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | unable to find current IP address of domain default-k8s-diff-port-196710 in network mk-default-k8s-diff-port-196710
	I0603 12:06:49.738036   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | I0603 12:06:49.737954   74483 retry.go:31] will retry after 2.132134762s: waiting for machine to come up
	I0603 12:06:51.872423   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | domain default-k8s-diff-port-196710 has defined MAC address 52:54:00:9c:61:49 in network mk-default-k8s-diff-port-196710
	I0603 12:06:51.872917   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | unable to find current IP address of domain default-k8s-diff-port-196710 in network mk-default-k8s-diff-port-196710
	I0603 12:06:51.872945   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | I0603 12:06:51.872857   74483 retry.go:31] will retry after 2.778528878s: waiting for machine to come up
	I0603 12:06:52.416845   73179 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.30.1: (2.668695263s)
	I0603 12:06:52.416881   73179 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/19008-7755/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.30.1 from cache
	I0603 12:06:52.416909   73179 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.30.1
	I0603 12:06:52.417012   73179 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.30.1
	I0603 12:06:54.588430   73179 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.30.1: (2.171386022s)
	I0603 12:06:54.588455   73179 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/19008-7755/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.30.1 from cache
	I0603 12:06:54.588480   73179 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.30.1
	I0603 12:06:54.588528   73179 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.30.1
	I0603 12:06:54.653098   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | domain default-k8s-diff-port-196710 has defined MAC address 52:54:00:9c:61:49 in network mk-default-k8s-diff-port-196710
	I0603 12:06:54.653566   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | unable to find current IP address of domain default-k8s-diff-port-196710 in network mk-default-k8s-diff-port-196710
	I0603 12:06:54.653596   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | I0603 12:06:54.653504   74483 retry.go:31] will retry after 2.88020763s: waiting for machine to come up
	I0603 12:06:57.535688   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | domain default-k8s-diff-port-196710 has defined MAC address 52:54:00:9c:61:49 in network mk-default-k8s-diff-port-196710
	I0603 12:06:57.536303   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | unable to find current IP address of domain default-k8s-diff-port-196710 in network mk-default-k8s-diff-port-196710
	I0603 12:06:57.536331   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | I0603 12:06:57.536246   74483 retry.go:31] will retry after 4.007108619s: waiting for machine to come up
	I0603 12:06:55.946565   73179 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.30.1: (1.358013442s)
	I0603 12:06:55.946595   73179 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/19008-7755/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.30.1 from cache
	I0603 12:06:55.946618   73179 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0603 12:06:55.946654   73179 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I0603 12:06:57.739662   73179 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (1.792982594s)
	I0603 12:06:57.739693   73179 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/19008-7755/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0603 12:06:57.739720   73179 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0603 12:06:57.739766   73179 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0603 12:06:58.592007   73179 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/19008-7755/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0603 12:06:58.592049   73179 cache_images.go:123] Successfully loaded all cached images
	I0603 12:06:58.592075   73179 cache_images.go:92] duration metric: took 15.054652125s to LoadCachedImages
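	(Each cached image above goes through the same motions: crictl reports the expected digest as missing, any stale tag is removed, and the tarball already present under /var/lib/minikube/images is imported with podman load, which is visible to CRI-O because podman and CRI-O share image storage on the minikube VM, as the "Transferred and loaded ... from cache" lines show. One iteration, using coredns from the log as the example:)
	    # One load-from-cache iteration as logged above (coredns shown; the others are identical).
	    IMG=registry.k8s.io/coredns/coredns:v1.11.1
	    TARBALL=/var/lib/minikube/images/coredns_v1.11.1
	    sudo /usr/bin/crictl rmi "$IMG"            # drop the stale/absent tag, as the log does
	    sudo podman load -i "$TARBALL"             # import the cached tarball
	    sudo crictl images | grep coredns          # the image should now be present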
	I0603 12:06:58.592096   73179 kubeadm.go:928] updating node { 192.168.50.245 8443 v1.30.1 crio true true} ...
	I0603 12:06:58.592210   73179 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-602118 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.245
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.1 ClusterName:no-preload-602118 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0603 12:06:58.592278   73179 ssh_runner.go:195] Run: crio config
	I0603 12:06:58.637533   73179 cni.go:84] Creating CNI manager for ""
	I0603 12:06:58.637561   73179 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0603 12:06:58.637583   73179 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0603 12:06:58.637620   73179 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.245 APIServerPort:8443 KubernetesVersion:v1.30.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-602118 NodeName:no-preload-602118 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.245"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.245 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0603 12:06:58.637822   73179 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.245
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-602118"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.245
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.245"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0603 12:06:58.637918   73179 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.1
	I0603 12:06:58.649096   73179 binaries.go:44] Found k8s binaries, skipping transfer
	I0603 12:06:58.649150   73179 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0603 12:06:58.658815   73179 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0603 12:06:58.675538   73179 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0603 12:06:58.692443   73179 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2161 bytes)
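	(The kubeadm config rendered above has just been written to /var/tmp/minikube/kubeadm.yaml.new. As a sanity check such a file can be fed back through kubeadm before use; a sketch, assuming the v1.30.1 binaries already present on the node and a kubeadm recent enough to ship the `config validate` subcommand:)
	    KUBEADM=/var/lib/minikube/binaries/v1.30.1/kubeadm
	    sudo "$KUBEADM" config validate --config /var/tmp/minikube/kubeadm.yaml.new
	    sudo "$KUBEADM" init --config /var/tmp/minikube/kubeadm.yaml.new --dry-run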
	I0603 12:06:58.709416   73179 ssh_runner.go:195] Run: grep 192.168.50.245	control-plane.minikube.internal$ /etc/hosts
	I0603 12:06:58.713241   73179 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.245	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0603 12:06:58.725522   73179 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0603 12:06:58.846624   73179 ssh_runner.go:195] Run: sudo systemctl start kubelet
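
The two Run lines just above implement an idempotent /etc/hosts update: strip any existing line for control-plane.minikube.internal, append the fresh IP mapping, and copy the result back into place. A minimal Go sketch of the same pattern, assuming an arbitrary hosts-file path rather than /etc/hosts (which needs root):

package main

import (
	"fmt"
	"os"
	"strings"
)

// ensureHostsEntry rewrites hostsPath so it contains exactly one line
// mapping host to ip, dropping any stale entries for host first.
func ensureHostsEntry(hostsPath, ip, host string) error {
	data, err := os.ReadFile(hostsPath)
	if err != nil && !os.IsNotExist(err) {
		return err
	}
	var kept []string
	for _, line := range strings.Split(string(data), "\n") {
		// Drop lines whose last field is the host we are about to re-add.
		fields := strings.Fields(line)
		if len(fields) > 0 && fields[len(fields)-1] == host {
			continue
		}
		if line != "" {
			kept = append(kept, line)
		}
	}
	kept = append(kept, fmt.Sprintf("%s\t%s", ip, host))
	return os.WriteFile(hostsPath, []byte(strings.Join(kept, "\n")+"\n"), 0644)
}

func main() {
	// Example values from the log; point at a scratch file, not /etc/hosts.
	if err := ensureHostsEntry("hosts.test", "192.168.50.245", "control-plane.minikube.internal"); err != nil {
		panic(err)
	}
}
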
	I0603 12:06:58.864101   73179 certs.go:68] Setting up /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/no-preload-602118 for IP: 192.168.50.245
	I0603 12:06:58.864129   73179 certs.go:194] generating shared ca certs ...
	I0603 12:06:58.864149   73179 certs.go:226] acquiring lock for ca certs: {Name:mk50092df3bdecbf959926adc377ee06b75aacd9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 12:06:58.864311   73179 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19008-7755/.minikube/ca.key
	I0603 12:06:58.864362   73179 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19008-7755/.minikube/proxy-client-ca.key
	I0603 12:06:58.864376   73179 certs.go:256] generating profile certs ...
	I0603 12:06:58.864473   73179 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/no-preload-602118/client.key
	I0603 12:06:58.864551   73179 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/no-preload-602118/apiserver.key.eef28f92
	I0603 12:06:58.864602   73179 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/no-preload-602118/proxy-client.key
	I0603 12:06:58.864744   73179 certs.go:484] found cert: /home/jenkins/minikube-integration/19008-7755/.minikube/certs/15028.pem (1338 bytes)
	W0603 12:06:58.864786   73179 certs.go:480] ignoring /home/jenkins/minikube-integration/19008-7755/.minikube/certs/15028_empty.pem, impossibly tiny 0 bytes
	I0603 12:06:58.864800   73179 certs.go:484] found cert: /home/jenkins/minikube-integration/19008-7755/.minikube/certs/ca-key.pem (1679 bytes)
	I0603 12:06:58.864836   73179 certs.go:484] found cert: /home/jenkins/minikube-integration/19008-7755/.minikube/certs/ca.pem (1082 bytes)
	I0603 12:06:58.864869   73179 certs.go:484] found cert: /home/jenkins/minikube-integration/19008-7755/.minikube/certs/cert.pem (1123 bytes)
	I0603 12:06:58.864900   73179 certs.go:484] found cert: /home/jenkins/minikube-integration/19008-7755/.minikube/certs/key.pem (1679 bytes)
	I0603 12:06:58.865039   73179 certs.go:484] found cert: /home/jenkins/minikube-integration/19008-7755/.minikube/files/etc/ssl/certs/150282.pem (1708 bytes)
	I0603 12:06:58.865705   73179 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0603 12:06:58.898291   73179 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0603 12:06:58.923481   73179 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0603 12:06:58.955249   73179 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0603 12:06:58.986524   73179 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/no-preload-602118/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0603 12:06:59.037456   73179 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/no-preload-602118/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0603 12:06:59.061989   73179 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/no-preload-602118/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0603 12:06:59.085738   73179 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/no-preload-602118/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0603 12:06:59.109202   73179 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0603 12:06:59.132149   73179 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/certs/15028.pem --> /usr/share/ca-certificates/15028.pem (1338 bytes)
	I0603 12:06:59.154957   73179 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/files/etc/ssl/certs/150282.pem --> /usr/share/ca-certificates/150282.pem (1708 bytes)
	I0603 12:06:59.177797   73179 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0603 12:06:59.194816   73179 ssh_runner.go:195] Run: openssl version
	I0603 12:06:59.200714   73179 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0603 12:06:59.211392   73179 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0603 12:06:59.215900   73179 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jun  3 10:39 /usr/share/ca-certificates/minikubeCA.pem
	I0603 12:06:59.215950   73179 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0603 12:06:59.221796   73179 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0603 12:06:59.232655   73179 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15028.pem && ln -fs /usr/share/ca-certificates/15028.pem /etc/ssl/certs/15028.pem"
	I0603 12:06:59.243679   73179 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15028.pem
	I0603 12:06:59.248120   73179 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jun  3 10:51 /usr/share/ca-certificates/15028.pem
	I0603 12:06:59.248168   73179 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15028.pem
	I0603 12:06:59.253816   73179 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/15028.pem /etc/ssl/certs/51391683.0"
	I0603 12:06:59.264416   73179 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/150282.pem && ln -fs /usr/share/ca-certificates/150282.pem /etc/ssl/certs/150282.pem"
	I0603 12:06:59.275143   73179 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/150282.pem
	I0603 12:06:59.279393   73179 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jun  3 10:51 /usr/share/ca-certificates/150282.pem
	I0603 12:06:59.279431   73179 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/150282.pem
	I0603 12:06:59.285269   73179 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/150282.pem /etc/ssl/certs/3ec20f2e.0"
	I0603 12:06:59.295789   73179 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0603 12:06:59.300138   73179 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0603 12:06:59.305722   73179 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0603 12:06:59.311381   73179 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0603 12:06:59.317037   73179 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0603 12:06:59.322539   73179 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0603 12:06:59.328067   73179 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
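
Each `openssl x509 -noout -in ... -checkend 86400` call above asks whether the certificate expires within the next 24 hours. The same check can be done with Go's standard library alone; this is a sketch with an illustrative file name, not the exact paths under /var/lib/minikube/certs:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"errors"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the first certificate in the PEM file at
// path will have expired d from now (the moral equivalent of -checkend).
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, errors.New("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("apiserver.crt", 24*time.Hour)
	if err != nil {
		panic(err)
	}
	fmt.Println("expires within 24h:", soon)
}
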
	I0603 12:06:59.333575   73179 kubeadm.go:391] StartCluster: {Name:no-preload-602118 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30
.1 ClusterName:no-preload-602118 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.245 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fals
e MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0603 12:06:59.333659   73179 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0603 12:06:59.333712   73179 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0603 12:06:59.374413   73179 cri.go:89] found id: ""
	I0603 12:06:59.374471   73179 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0603 12:06:59.384802   73179 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0603 12:06:59.384819   73179 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0603 12:06:59.384832   73179 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0603 12:06:59.384878   73179 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0603 12:06:59.394669   73179 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0603 12:06:59.395564   73179 kubeconfig.go:125] found "no-preload-602118" server: "https://192.168.50.245:8443"
	I0603 12:06:59.397420   73179 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0603 12:06:59.407251   73179 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.50.245
	I0603 12:06:59.407281   73179 kubeadm.go:1154] stopping kube-system containers ...
	I0603 12:06:59.407295   73179 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0603 12:06:59.407347   73179 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0603 12:06:59.452986   73179 cri.go:89] found id: ""
	I0603 12:06:59.453067   73179 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0603 12:06:59.470164   73179 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0603 12:06:59.480228   73179 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0603 12:06:59.480249   73179 kubeadm.go:156] found existing configuration files:
	
	I0603 12:06:59.480291   73179 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0603 12:06:59.489923   73179 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0603 12:06:59.489968   73179 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0603 12:06:59.499530   73179 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0603 12:06:59.508336   73179 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0603 12:06:59.508376   73179 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0603 12:06:59.517665   73179 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0603 12:06:59.526660   73179 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0603 12:06:59.526697   73179 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0603 12:06:59.535973   73179 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0603 12:06:59.544846   73179 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0603 12:06:59.544885   73179 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0603 12:06:59.554342   73179 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0603 12:06:59.563632   73179 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0603 12:06:59.673234   73179 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
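
The two kubeadm init phase runs above pick up the kubeadm binary for the pinned Kubernetes version by prepending /var/lib/minikube/binaries/v1.30.1 to PATH before executing. A hedged Go sketch of that technique with os/exec (paths and arguments copied from the log; this is not minikube's actual runner code):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
)

// runPinnedKubeadm invokes the kubeadm binary inside binDir directly, and
// also prepends binDir to PATH so anything kubeadm itself execs resolves
// against the same pinned toolchain.
func runPinnedKubeadm(binDir string, args ...string) error {
	cmd := exec.Command(filepath.Join(binDir, "kubeadm"), args...)
	cmd.Env = append(os.Environ(), "PATH="+binDir+":"+os.Getenv("PATH"))
	cmd.Stdout = os.Stdout
	cmd.Stderr = os.Stderr
	return cmd.Run()
}

func main() {
	err := runPinnedKubeadm("/var/lib/minikube/binaries/v1.30.1",
		"init", "phase", "certs", "all",
		"--config", "/var/tmp/minikube/kubeadm.yaml")
	if err != nil {
		fmt.Fprintln(os.Stderr, "kubeadm failed:", err)
	}
}
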
	I0603 12:07:02.883984   73662 start.go:364] duration metric: took 4m2.688332749s to acquireMachinesLock for "old-k8s-version-905554"
	I0603 12:07:02.884045   73662 start.go:96] Skipping create...Using existing machine configuration
	I0603 12:07:02.884052   73662 fix.go:54] fixHost starting: 
	I0603 12:07:02.884482   73662 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 12:07:02.884520   73662 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 12:07:02.905120   73662 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45229
	I0603 12:07:02.905571   73662 main.go:141] libmachine: () Calling .GetVersion
	I0603 12:07:02.906128   73662 main.go:141] libmachine: Using API Version  1
	I0603 12:07:02.906157   73662 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 12:07:02.906519   73662 main.go:141] libmachine: () Calling .GetMachineName
	I0603 12:07:02.906709   73662 main.go:141] libmachine: (old-k8s-version-905554) Calling .DriverName
	I0603 12:07:02.906852   73662 main.go:141] libmachine: (old-k8s-version-905554) Calling .GetState
	I0603 12:07:02.908371   73662 fix.go:112] recreateIfNeeded on old-k8s-version-905554: state=Stopped err=<nil>
	I0603 12:07:02.908412   73662 main.go:141] libmachine: (old-k8s-version-905554) Calling .DriverName
	W0603 12:07:02.908577   73662 fix.go:138] unexpected machine state, will restart: <nil>
	I0603 12:07:02.910440   73662 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-905554" ...
	I0603 12:07:01.548241   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | domain default-k8s-diff-port-196710 has defined MAC address 52:54:00:9c:61:49 in network mk-default-k8s-diff-port-196710
	I0603 12:07:01.548698   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Found IP for machine: 192.168.61.60
	I0603 12:07:01.548720   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Reserving static IP address...
	I0603 12:07:01.548734   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | domain default-k8s-diff-port-196710 has current primary IP address 192.168.61.60 and MAC address 52:54:00:9c:61:49 in network mk-default-k8s-diff-port-196710
	I0603 12:07:01.549093   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-196710", mac: "52:54:00:9c:61:49", ip: "192.168.61.60"} in network mk-default-k8s-diff-port-196710: {Iface:virbr4 ExpiryTime:2024-06-03 12:58:47 +0000 UTC Type:0 Mac:52:54:00:9c:61:49 Iaid: IPaddr:192.168.61.60 Prefix:24 Hostname:default-k8s-diff-port-196710 Clientid:01:52:54:00:9c:61:49}
	I0603 12:07:01.549127   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | skip adding static IP to network mk-default-k8s-diff-port-196710 - found existing host DHCP lease matching {name: "default-k8s-diff-port-196710", mac: "52:54:00:9c:61:49", ip: "192.168.61.60"}
	I0603 12:07:01.549141   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Reserved static IP address: 192.168.61.60
	I0603 12:07:01.549161   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | Getting to WaitForSSH function...
	I0603 12:07:01.549171   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Waiting for SSH to be available...
	I0603 12:07:01.551680   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | domain default-k8s-diff-port-196710 has defined MAC address 52:54:00:9c:61:49 in network mk-default-k8s-diff-port-196710
	I0603 12:07:01.551959   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:61:49", ip: ""} in network mk-default-k8s-diff-port-196710: {Iface:virbr4 ExpiryTime:2024-06-03 12:58:47 +0000 UTC Type:0 Mac:52:54:00:9c:61:49 Iaid: IPaddr:192.168.61.60 Prefix:24 Hostname:default-k8s-diff-port-196710 Clientid:01:52:54:00:9c:61:49}
	I0603 12:07:01.551996   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | domain default-k8s-diff-port-196710 has defined IP address 192.168.61.60 and MAC address 52:54:00:9c:61:49 in network mk-default-k8s-diff-port-196710
	I0603 12:07:01.552051   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | Using SSH client type: external
	I0603 12:07:01.552124   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | Using SSH private key: /home/jenkins/minikube-integration/19008-7755/.minikube/machines/default-k8s-diff-port-196710/id_rsa (-rw-------)
	I0603 12:07:01.552160   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.60 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19008-7755/.minikube/machines/default-k8s-diff-port-196710/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0603 12:07:01.552181   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | About to run SSH command:
	I0603 12:07:01.552194   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | exit 0
	I0603 12:07:01.674944   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | SSH cmd err, output: <nil>: 
	I0603 12:07:01.675373   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .GetConfigRaw
	I0603 12:07:01.676105   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .GetIP
	I0603 12:07:01.678480   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | domain default-k8s-diff-port-196710 has defined MAC address 52:54:00:9c:61:49 in network mk-default-k8s-diff-port-196710
	I0603 12:07:01.678823   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:61:49", ip: ""} in network mk-default-k8s-diff-port-196710: {Iface:virbr4 ExpiryTime:2024-06-03 12:58:47 +0000 UTC Type:0 Mac:52:54:00:9c:61:49 Iaid: IPaddr:192.168.61.60 Prefix:24 Hostname:default-k8s-diff-port-196710 Clientid:01:52:54:00:9c:61:49}
	I0603 12:07:01.678854   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | domain default-k8s-diff-port-196710 has defined IP address 192.168.61.60 and MAC address 52:54:00:9c:61:49 in network mk-default-k8s-diff-port-196710
	I0603 12:07:01.679088   73294 profile.go:143] Saving config to /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/default-k8s-diff-port-196710/config.json ...
	I0603 12:07:01.679311   73294 machine.go:94] provisionDockerMachine start ...
	I0603 12:07:01.679332   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .DriverName
	I0603 12:07:01.679525   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .GetSSHHostname
	I0603 12:07:01.681641   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | domain default-k8s-diff-port-196710 has defined MAC address 52:54:00:9c:61:49 in network mk-default-k8s-diff-port-196710
	I0603 12:07:01.681931   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:61:49", ip: ""} in network mk-default-k8s-diff-port-196710: {Iface:virbr4 ExpiryTime:2024-06-03 12:58:47 +0000 UTC Type:0 Mac:52:54:00:9c:61:49 Iaid: IPaddr:192.168.61.60 Prefix:24 Hostname:default-k8s-diff-port-196710 Clientid:01:52:54:00:9c:61:49}
	I0603 12:07:01.681964   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | domain default-k8s-diff-port-196710 has defined IP address 192.168.61.60 and MAC address 52:54:00:9c:61:49 in network mk-default-k8s-diff-port-196710
	I0603 12:07:01.682121   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .GetSSHPort
	I0603 12:07:01.682291   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .GetSSHKeyPath
	I0603 12:07:01.682466   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .GetSSHKeyPath
	I0603 12:07:01.682611   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .GetSSHUsername
	I0603 12:07:01.682753   73294 main.go:141] libmachine: Using SSH client type: native
	I0603 12:07:01.682949   73294 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.61.60 22 <nil> <nil>}
	I0603 12:07:01.682962   73294 main.go:141] libmachine: About to run SSH command:
	hostname
	I0603 12:07:01.787146   73294 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0603 12:07:01.787176   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .GetMachineName
	I0603 12:07:01.787425   73294 buildroot.go:166] provisioning hostname "default-k8s-diff-port-196710"
	I0603 12:07:01.787448   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .GetMachineName
	I0603 12:07:01.787638   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .GetSSHHostname
	I0603 12:07:01.790151   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | domain default-k8s-diff-port-196710 has defined MAC address 52:54:00:9c:61:49 in network mk-default-k8s-diff-port-196710
	I0603 12:07:01.790487   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:61:49", ip: ""} in network mk-default-k8s-diff-port-196710: {Iface:virbr4 ExpiryTime:2024-06-03 12:58:47 +0000 UTC Type:0 Mac:52:54:00:9c:61:49 Iaid: IPaddr:192.168.61.60 Prefix:24 Hostname:default-k8s-diff-port-196710 Clientid:01:52:54:00:9c:61:49}
	I0603 12:07:01.790512   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | domain default-k8s-diff-port-196710 has defined IP address 192.168.61.60 and MAC address 52:54:00:9c:61:49 in network mk-default-k8s-diff-port-196710
	I0603 12:07:01.790646   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .GetSSHPort
	I0603 12:07:01.790812   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .GetSSHKeyPath
	I0603 12:07:01.790964   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .GetSSHKeyPath
	I0603 12:07:01.791133   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .GetSSHUsername
	I0603 12:07:01.791272   73294 main.go:141] libmachine: Using SSH client type: native
	I0603 12:07:01.791477   73294 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.61.60 22 <nil> <nil>}
	I0603 12:07:01.791496   73294 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-196710 && echo "default-k8s-diff-port-196710" | sudo tee /etc/hostname
	I0603 12:07:01.916785   73294 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-196710
	
	I0603 12:07:01.916820   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .GetSSHHostname
	I0603 12:07:01.919809   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | domain default-k8s-diff-port-196710 has defined MAC address 52:54:00:9c:61:49 in network mk-default-k8s-diff-port-196710
	I0603 12:07:01.920225   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:61:49", ip: ""} in network mk-default-k8s-diff-port-196710: {Iface:virbr4 ExpiryTime:2024-06-03 12:58:47 +0000 UTC Type:0 Mac:52:54:00:9c:61:49 Iaid: IPaddr:192.168.61.60 Prefix:24 Hostname:default-k8s-diff-port-196710 Clientid:01:52:54:00:9c:61:49}
	I0603 12:07:01.920264   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | domain default-k8s-diff-port-196710 has defined IP address 192.168.61.60 and MAC address 52:54:00:9c:61:49 in network mk-default-k8s-diff-port-196710
	I0603 12:07:01.920552   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .GetSSHPort
	I0603 12:07:01.920756   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .GetSSHKeyPath
	I0603 12:07:01.920947   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .GetSSHKeyPath
	I0603 12:07:01.921145   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .GetSSHUsername
	I0603 12:07:01.921363   73294 main.go:141] libmachine: Using SSH client type: native
	I0603 12:07:01.921645   73294 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.61.60 22 <nil> <nil>}
	I0603 12:07:01.921671   73294 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-196710' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-196710/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-196710' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0603 12:07:02.048767   73294 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0603 12:07:02.048797   73294 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19008-7755/.minikube CaCertPath:/home/jenkins/minikube-integration/19008-7755/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19008-7755/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19008-7755/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19008-7755/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19008-7755/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19008-7755/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19008-7755/.minikube}
	I0603 12:07:02.048851   73294 buildroot.go:174] setting up certificates
	I0603 12:07:02.048866   73294 provision.go:84] configureAuth start
	I0603 12:07:02.048883   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .GetMachineName
	I0603 12:07:02.049168   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .GetIP
	I0603 12:07:02.051709   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | domain default-k8s-diff-port-196710 has defined MAC address 52:54:00:9c:61:49 in network mk-default-k8s-diff-port-196710
	I0603 12:07:02.052111   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:61:49", ip: ""} in network mk-default-k8s-diff-port-196710: {Iface:virbr4 ExpiryTime:2024-06-03 12:58:47 +0000 UTC Type:0 Mac:52:54:00:9c:61:49 Iaid: IPaddr:192.168.61.60 Prefix:24 Hostname:default-k8s-diff-port-196710 Clientid:01:52:54:00:9c:61:49}
	I0603 12:07:02.052151   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | domain default-k8s-diff-port-196710 has defined IP address 192.168.61.60 and MAC address 52:54:00:9c:61:49 in network mk-default-k8s-diff-port-196710
	I0603 12:07:02.052295   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .GetSSHHostname
	I0603 12:07:02.054716   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | domain default-k8s-diff-port-196710 has defined MAC address 52:54:00:9c:61:49 in network mk-default-k8s-diff-port-196710
	I0603 12:07:02.055073   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:61:49", ip: ""} in network mk-default-k8s-diff-port-196710: {Iface:virbr4 ExpiryTime:2024-06-03 12:58:47 +0000 UTC Type:0 Mac:52:54:00:9c:61:49 Iaid: IPaddr:192.168.61.60 Prefix:24 Hostname:default-k8s-diff-port-196710 Clientid:01:52:54:00:9c:61:49}
	I0603 12:07:02.055106   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | domain default-k8s-diff-port-196710 has defined IP address 192.168.61.60 and MAC address 52:54:00:9c:61:49 in network mk-default-k8s-diff-port-196710
	I0603 12:07:02.055262   73294 provision.go:143] copyHostCerts
	I0603 12:07:02.055334   73294 exec_runner.go:144] found /home/jenkins/minikube-integration/19008-7755/.minikube/ca.pem, removing ...
	I0603 12:07:02.055349   73294 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19008-7755/.minikube/ca.pem
	I0603 12:07:02.055408   73294 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19008-7755/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19008-7755/.minikube/ca.pem (1082 bytes)
	I0603 12:07:02.055527   73294 exec_runner.go:144] found /home/jenkins/minikube-integration/19008-7755/.minikube/cert.pem, removing ...
	I0603 12:07:02.055539   73294 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19008-7755/.minikube/cert.pem
	I0603 12:07:02.055568   73294 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19008-7755/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19008-7755/.minikube/cert.pem (1123 bytes)
	I0603 12:07:02.055648   73294 exec_runner.go:144] found /home/jenkins/minikube-integration/19008-7755/.minikube/key.pem, removing ...
	I0603 12:07:02.055659   73294 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19008-7755/.minikube/key.pem
	I0603 12:07:02.055684   73294 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19008-7755/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19008-7755/.minikube/key.pem (1679 bytes)
	I0603 12:07:02.055753   73294 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19008-7755/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19008-7755/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19008-7755/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-196710 san=[127.0.0.1 192.168.61.60 default-k8s-diff-port-196710 localhost minikube]
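
provision.go is generating a Docker-machine style server certificate whose SANs cover the loopback address, the VM IP, the machine name, localhost and minikube. A compressed sketch of issuing such a cert with the standard library (self-signed here for brevity; the real flow signs it with the ca-key.pem listed above):

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// SANs copied from the provision.go line above.
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.default-k8s-diff-port-196710"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"default-k8s-diff-port-196710", "localhost", "minikube"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.61.60")},
	}
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	// Self-signed for brevity: the template doubles as the parent certificate.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
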
	I0603 12:07:02.172134   73294 provision.go:177] copyRemoteCerts
	I0603 12:07:02.172192   73294 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0603 12:07:02.172217   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .GetSSHHostname
	I0603 12:07:02.175333   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | domain default-k8s-diff-port-196710 has defined MAC address 52:54:00:9c:61:49 in network mk-default-k8s-diff-port-196710
	I0603 12:07:02.175724   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:61:49", ip: ""} in network mk-default-k8s-diff-port-196710: {Iface:virbr4 ExpiryTime:2024-06-03 12:58:47 +0000 UTC Type:0 Mac:52:54:00:9c:61:49 Iaid: IPaddr:192.168.61.60 Prefix:24 Hostname:default-k8s-diff-port-196710 Clientid:01:52:54:00:9c:61:49}
	I0603 12:07:02.175749   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | domain default-k8s-diff-port-196710 has defined IP address 192.168.61.60 and MAC address 52:54:00:9c:61:49 in network mk-default-k8s-diff-port-196710
	I0603 12:07:02.175996   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .GetSSHPort
	I0603 12:07:02.176203   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .GetSSHKeyPath
	I0603 12:07:02.176405   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .GetSSHUsername
	I0603 12:07:02.176599   73294 sshutil.go:53] new ssh client: &{IP:192.168.61.60 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19008-7755/.minikube/machines/default-k8s-diff-port-196710/id_rsa Username:docker}
	I0603 12:07:02.273410   73294 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0603 12:07:02.302337   73294 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0603 12:07:02.326471   73294 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0603 12:07:02.350709   73294 provision.go:87] duration metric: took 301.827273ms to configureAuth
	I0603 12:07:02.350742   73294 buildroot.go:189] setting minikube options for container-runtime
	I0603 12:07:02.350977   73294 config.go:182] Loaded profile config "default-k8s-diff-port-196710": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0603 12:07:02.351086   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .GetSSHHostname
	I0603 12:07:02.354023   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | domain default-k8s-diff-port-196710 has defined MAC address 52:54:00:9c:61:49 in network mk-default-k8s-diff-port-196710
	I0603 12:07:02.354434   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:61:49", ip: ""} in network mk-default-k8s-diff-port-196710: {Iface:virbr4 ExpiryTime:2024-06-03 12:58:47 +0000 UTC Type:0 Mac:52:54:00:9c:61:49 Iaid: IPaddr:192.168.61.60 Prefix:24 Hostname:default-k8s-diff-port-196710 Clientid:01:52:54:00:9c:61:49}
	I0603 12:07:02.354465   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | domain default-k8s-diff-port-196710 has defined IP address 192.168.61.60 and MAC address 52:54:00:9c:61:49 in network mk-default-k8s-diff-port-196710
	I0603 12:07:02.354613   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .GetSSHPort
	I0603 12:07:02.354813   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .GetSSHKeyPath
	I0603 12:07:02.354996   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .GetSSHKeyPath
	I0603 12:07:02.355176   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .GetSSHUsername
	I0603 12:07:02.355385   73294 main.go:141] libmachine: Using SSH client type: native
	I0603 12:07:02.355603   73294 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.61.60 22 <nil> <nil>}
	I0603 12:07:02.355633   73294 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0603 12:07:02.636420   73294 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0603 12:07:02.636453   73294 machine.go:97] duration metric: took 957.127741ms to provisionDockerMachine
	I0603 12:07:02.636467   73294 start.go:293] postStartSetup for "default-k8s-diff-port-196710" (driver="kvm2")
	I0603 12:07:02.636480   73294 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0603 12:07:02.636507   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .DriverName
	I0603 12:07:02.636828   73294 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0603 12:07:02.636860   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .GetSSHHostname
	I0603 12:07:02.639699   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | domain default-k8s-diff-port-196710 has defined MAC address 52:54:00:9c:61:49 in network mk-default-k8s-diff-port-196710
	I0603 12:07:02.640122   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:61:49", ip: ""} in network mk-default-k8s-diff-port-196710: {Iface:virbr4 ExpiryTime:2024-06-03 12:58:47 +0000 UTC Type:0 Mac:52:54:00:9c:61:49 Iaid: IPaddr:192.168.61.60 Prefix:24 Hostname:default-k8s-diff-port-196710 Clientid:01:52:54:00:9c:61:49}
	I0603 12:07:02.640155   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | domain default-k8s-diff-port-196710 has defined IP address 192.168.61.60 and MAC address 52:54:00:9c:61:49 in network mk-default-k8s-diff-port-196710
	I0603 12:07:02.640282   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .GetSSHPort
	I0603 12:07:02.640462   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .GetSSHKeyPath
	I0603 12:07:02.640647   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .GetSSHUsername
	I0603 12:07:02.640907   73294 sshutil.go:53] new ssh client: &{IP:192.168.61.60 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19008-7755/.minikube/machines/default-k8s-diff-port-196710/id_rsa Username:docker}
	I0603 12:07:02.729745   73294 ssh_runner.go:195] Run: cat /etc/os-release
	I0603 12:07:02.734393   73294 info.go:137] Remote host: Buildroot 2023.02.9
	I0603 12:07:02.734414   73294 filesync.go:126] Scanning /home/jenkins/minikube-integration/19008-7755/.minikube/addons for local assets ...
	I0603 12:07:02.734476   73294 filesync.go:126] Scanning /home/jenkins/minikube-integration/19008-7755/.minikube/files for local assets ...
	I0603 12:07:02.734545   73294 filesync.go:149] local asset: /home/jenkins/minikube-integration/19008-7755/.minikube/files/etc/ssl/certs/150282.pem -> 150282.pem in /etc/ssl/certs
	I0603 12:07:02.734623   73294 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0603 12:07:02.744239   73294 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/files/etc/ssl/certs/150282.pem --> /etc/ssl/certs/150282.pem (1708 bytes)
	I0603 12:07:02.770883   73294 start.go:296] duration metric: took 134.402064ms for postStartSetup
	I0603 12:07:02.770918   73294 fix.go:56] duration metric: took 20.69069756s for fixHost
	I0603 12:07:02.770940   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .GetSSHHostname
	I0603 12:07:02.773675   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | domain default-k8s-diff-port-196710 has defined MAC address 52:54:00:9c:61:49 in network mk-default-k8s-diff-port-196710
	I0603 12:07:02.773977   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:61:49", ip: ""} in network mk-default-k8s-diff-port-196710: {Iface:virbr4 ExpiryTime:2024-06-03 12:58:47 +0000 UTC Type:0 Mac:52:54:00:9c:61:49 Iaid: IPaddr:192.168.61.60 Prefix:24 Hostname:default-k8s-diff-port-196710 Clientid:01:52:54:00:9c:61:49}
	I0603 12:07:02.774010   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | domain default-k8s-diff-port-196710 has defined IP address 192.168.61.60 and MAC address 52:54:00:9c:61:49 in network mk-default-k8s-diff-port-196710
	I0603 12:07:02.774111   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .GetSSHPort
	I0603 12:07:02.774329   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .GetSSHKeyPath
	I0603 12:07:02.774482   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .GetSSHKeyPath
	I0603 12:07:02.774635   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .GetSSHUsername
	I0603 12:07:02.774814   73294 main.go:141] libmachine: Using SSH client type: native
	I0603 12:07:02.774984   73294 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.61.60 22 <nil> <nil>}
	I0603 12:07:02.774998   73294 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0603 12:07:02.883831   73294 main.go:141] libmachine: SSH cmd err, output: <nil>: 1717416422.860813739
	
	I0603 12:07:02.883859   73294 fix.go:216] guest clock: 1717416422.860813739
	I0603 12:07:02.883870   73294 fix.go:229] Guest: 2024-06-03 12:07:02.860813739 +0000 UTC Remote: 2024-06-03 12:07:02.770922212 +0000 UTC m=+288.221479764 (delta=89.891527ms)
	I0603 12:07:02.883896   73294 fix.go:200] guest clock delta is within tolerance: 89.891527ms
	I0603 12:07:02.883902   73294 start.go:83] releasing machines lock for "default-k8s-diff-port-196710", held for 20.803713434s
	I0603 12:07:02.883935   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .DriverName
	I0603 12:07:02.884217   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .GetIP
	I0603 12:07:02.887393   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | domain default-k8s-diff-port-196710 has defined MAC address 52:54:00:9c:61:49 in network mk-default-k8s-diff-port-196710
	I0603 12:07:02.887758   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:61:49", ip: ""} in network mk-default-k8s-diff-port-196710: {Iface:virbr4 ExpiryTime:2024-06-03 12:58:47 +0000 UTC Type:0 Mac:52:54:00:9c:61:49 Iaid: IPaddr:192.168.61.60 Prefix:24 Hostname:default-k8s-diff-port-196710 Clientid:01:52:54:00:9c:61:49}
	I0603 12:07:02.887789   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | domain default-k8s-diff-port-196710 has defined IP address 192.168.61.60 and MAC address 52:54:00:9c:61:49 in network mk-default-k8s-diff-port-196710
	I0603 12:07:02.887954   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .DriverName
	I0603 12:07:02.888465   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .DriverName
	I0603 12:07:02.888616   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .DriverName
	I0603 12:07:02.888698   73294 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0603 12:07:02.888770   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .GetSSHHostname
	I0603 12:07:02.888871   73294 ssh_runner.go:195] Run: cat /version.json
	I0603 12:07:02.888913   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .GetSSHHostname
	I0603 12:07:02.891596   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | domain default-k8s-diff-port-196710 has defined MAC address 52:54:00:9c:61:49 in network mk-default-k8s-diff-port-196710
	I0603 12:07:02.891957   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | domain default-k8s-diff-port-196710 has defined MAC address 52:54:00:9c:61:49 in network mk-default-k8s-diff-port-196710
	I0603 12:07:02.892009   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:61:49", ip: ""} in network mk-default-k8s-diff-port-196710: {Iface:virbr4 ExpiryTime:2024-06-03 12:58:47 +0000 UTC Type:0 Mac:52:54:00:9c:61:49 Iaid: IPaddr:192.168.61.60 Prefix:24 Hostname:default-k8s-diff-port-196710 Clientid:01:52:54:00:9c:61:49}
	I0603 12:07:02.892051   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | domain default-k8s-diff-port-196710 has defined IP address 192.168.61.60 and MAC address 52:54:00:9c:61:49 in network mk-default-k8s-diff-port-196710
	I0603 12:07:02.892250   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .GetSSHPort
	I0603 12:07:02.892422   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .GetSSHKeyPath
	I0603 12:07:02.892436   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:61:49", ip: ""} in network mk-default-k8s-diff-port-196710: {Iface:virbr4 ExpiryTime:2024-06-03 12:58:47 +0000 UTC Type:0 Mac:52:54:00:9c:61:49 Iaid: IPaddr:192.168.61.60 Prefix:24 Hostname:default-k8s-diff-port-196710 Clientid:01:52:54:00:9c:61:49}
	I0603 12:07:02.892453   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | domain default-k8s-diff-port-196710 has defined IP address 192.168.61.60 and MAC address 52:54:00:9c:61:49 in network mk-default-k8s-diff-port-196710
	I0603 12:07:02.892601   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .GetSSHPort
	I0603 12:07:02.892636   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .GetSSHUsername
	I0603 12:07:02.892758   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .GetSSHKeyPath
	I0603 12:07:02.892777   73294 sshutil.go:53] new ssh client: &{IP:192.168.61.60 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19008-7755/.minikube/machines/default-k8s-diff-port-196710/id_rsa Username:docker}
	I0603 12:07:02.892941   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .GetSSHUsername
	I0603 12:07:02.893092   73294 sshutil.go:53] new ssh client: &{IP:192.168.61.60 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19008-7755/.minikube/machines/default-k8s-diff-port-196710/id_rsa Username:docker}
	I0603 12:07:02.998124   73294 ssh_runner.go:195] Run: systemctl --version
	I0603 12:07:03.005653   73294 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0603 12:07:03.152446   73294 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0603 12:07:03.160607   73294 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0603 12:07:03.160674   73294 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0603 12:07:03.176490   73294 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
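
cni.go disables any bridge/podman CNI configs by renaming them with a .mk_disabled suffix so CRI-O will not load them. A rough Go version of the same find-and-rename (directory and name patterns taken from the log; not the actual minikube implementation):

package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

// disableBridgeConfs renames bridge/podman CNI config files in dir by
// appending ".mk_disabled", skipping files that are already disabled.
func disableBridgeConfs(dir string) ([]string, error) {
	entries, err := os.ReadDir(dir)
	if err != nil {
		return nil, err
	}
	var disabled []string
	for _, e := range entries {
		name := e.Name()
		if e.IsDir() || strings.HasSuffix(name, ".mk_disabled") {
			continue
		}
		if strings.Contains(name, "bridge") || strings.Contains(name, "podman") {
			src := filepath.Join(dir, name)
			if err := os.Rename(src, src+".mk_disabled"); err != nil {
				return disabled, err
			}
			disabled = append(disabled, src)
		}
	}
	return disabled, nil
}

func main() {
	// Requires root on a real node; /etc/cni/net.d as in the log.
	disabled, err := disableBridgeConfs("/etc/cni/net.d")
	fmt.Println(disabled, err)
}
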
	I0603 12:07:03.176513   73294 start.go:494] detecting cgroup driver to use...
	I0603 12:07:03.176581   73294 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0603 12:07:03.195427   73294 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0603 12:07:03.211343   73294 docker.go:217] disabling cri-docker service (if available) ...
	I0603 12:07:03.211398   73294 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0603 12:07:03.227943   73294 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0603 12:07:03.245409   73294 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0603 12:07:03.384124   73294 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0603 12:07:03.529899   73294 docker.go:233] disabling docker service ...
	I0603 12:07:03.529984   73294 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0603 12:07:03.545971   73294 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0603 12:07:03.559981   73294 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0603 12:07:03.726303   73294 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0603 12:07:03.850915   73294 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0603 12:07:03.865591   73294 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0603 12:07:03.884498   73294 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0603 12:07:03.884558   73294 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 12:07:03.897708   73294 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0603 12:07:03.897772   73294 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 12:07:03.912146   73294 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 12:07:03.926435   73294 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 12:07:03.940520   73294 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0603 12:07:03.955122   73294 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 12:07:03.972518   73294 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 12:07:03.997707   73294 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
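
The sed invocations above pin the pause image and cgroup manager in /etc/crio/crio.conf.d/02-crio.conf. A minimal Go equivalent of the key-rewriting step, using a regexp in place of sed (file path, key names and values copied from the log; shown only as a sketch):

package main

import (
	"fmt"
	"os"
	"regexp"
)

// setConfKey replaces any existing `key = ...` line in the config at path
// with `key = "value"`, mirroring the sed one-liners in the log.
func setConfKey(path, key, value string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	re := regexp.MustCompile(`(?m)^.*` + regexp.QuoteMeta(key) + ` = .*$`)
	out := re.ReplaceAll(data, []byte(fmt.Sprintf("%s = %q", key, value)))
	return os.WriteFile(path, out, 0644)
}

func main() {
	// Needs root for the real path; values as in the log above.
	path := "/etc/crio/crio.conf.d/02-crio.conf"
	if err := setConfKey(path, "pause_image", "registry.k8s.io/pause:3.9"); err != nil {
		panic(err)
	}
	if err := setConfKey(path, "cgroup_manager", "cgroupfs"); err != nil {
		panic(err)
	}
}
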
	I0603 12:07:04.009020   73294 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0603 12:07:04.024118   73294 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0603 12:07:04.024185   73294 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0603 12:07:04.043959   73294 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0603 12:07:04.057417   73294 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0603 12:07:04.195354   73294 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0603 12:07:04.365103   73294 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0603 12:07:04.365195   73294 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0603 12:07:04.370764   73294 start.go:562] Will wait 60s for crictl version
	I0603 12:07:04.370822   73294 ssh_runner.go:195] Run: which crictl
	I0603 12:07:04.375203   73294 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0603 12:07:04.430761   73294 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0603 12:07:04.430843   73294 ssh_runner.go:195] Run: crio --version
	I0603 12:07:04.471171   73294 ssh_runner.go:195] Run: crio --version
	I0603 12:07:04.506684   73294 out.go:177] * Preparing Kubernetes v1.30.1 on CRI-O 1.29.1 ...
	I0603 12:07:04.508144   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .GetIP
	I0603 12:07:04.510945   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | domain default-k8s-diff-port-196710 has defined MAC address 52:54:00:9c:61:49 in network mk-default-k8s-diff-port-196710
	I0603 12:07:04.511375   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:61:49", ip: ""} in network mk-default-k8s-diff-port-196710: {Iface:virbr4 ExpiryTime:2024-06-03 12:58:47 +0000 UTC Type:0 Mac:52:54:00:9c:61:49 Iaid: IPaddr:192.168.61.60 Prefix:24 Hostname:default-k8s-diff-port-196710 Clientid:01:52:54:00:9c:61:49}
	I0603 12:07:04.511406   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | domain default-k8s-diff-port-196710 has defined IP address 192.168.61.60 and MAC address 52:54:00:9c:61:49 in network mk-default-k8s-diff-port-196710
	I0603 12:07:04.511607   73294 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0603 12:07:04.516367   73294 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0603 12:07:04.532203   73294 kubeadm.go:877] updating cluster {Name:default-k8s-diff-port-196710 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:default-k8s-diff-port-196710 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.60 Port:8444 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0603 12:07:04.532326   73294 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime crio
	I0603 12:07:04.532409   73294 ssh_runner.go:195] Run: sudo crictl images --output json
	I0603 12:07:04.576446   73294 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.1". assuming images are not preloaded.
	I0603 12:07:04.576523   73294 ssh_runner.go:195] Run: which lz4
	I0603 12:07:04.580901   73294 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
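The host.minikube.internal update at 12:07:04.516367 above follows a strip-then-append pattern: filter any existing mapping out of /etc/hosts, echo the new entry, and copy the result back with sudo. Below is a minimal Go sketch of that same idea; it is illustrative only (minikube's real helper runs the bash pipeline shown in the log over SSH), and the function name and values are made up for the example.

// Sketch of the hosts-file upsert pattern from the log above; needs write
// access to the hosts file (the log does this via `sudo cp`).
package main

import (
	"os"
	"strings"
)

func upsertHostEntry(hostsPath, ip, name string) error {
	data, err := os.ReadFile(hostsPath)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		// Drop any existing mapping for the name, mirroring the `grep -v` step.
		if strings.HasSuffix(line, "\t"+name) {
			continue
		}
		kept = append(kept, line)
	}
	kept = append(kept, ip+"\t"+name)
	return os.WriteFile(hostsPath, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
}

func main() {
	// Illustrative values taken from the log above.
	if err := upsertHostEntry("/etc/hosts", "192.168.61.1", "host.minikube.internal"); err != nil {
		panic(err)
	}
}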
	I0603 12:07:02.911700   73662 main.go:141] libmachine: (old-k8s-version-905554) Calling .Start
	I0603 12:07:02.911842   73662 main.go:141] libmachine: (old-k8s-version-905554) Ensuring networks are active...
	I0603 12:07:02.912570   73662 main.go:141] libmachine: (old-k8s-version-905554) Ensuring network default is active
	I0603 12:07:02.912896   73662 main.go:141] libmachine: (old-k8s-version-905554) Ensuring network mk-old-k8s-version-905554 is active
	I0603 12:07:02.913324   73662 main.go:141] libmachine: (old-k8s-version-905554) Getting domain xml...
	I0603 12:07:02.914147   73662 main.go:141] libmachine: (old-k8s-version-905554) Creating domain...
	I0603 12:07:04.233691   73662 main.go:141] libmachine: (old-k8s-version-905554) Waiting to get IP...
	I0603 12:07:04.234800   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | domain old-k8s-version-905554 has defined MAC address 52:54:00:3d:ed:07 in network mk-old-k8s-version-905554
	I0603 12:07:04.235276   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | unable to find current IP address of domain old-k8s-version-905554 in network mk-old-k8s-version-905554
	I0603 12:07:04.235378   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | I0603 12:07:04.235243   74674 retry.go:31] will retry after 297.546447ms: waiting for machine to come up
	I0603 12:07:04.534942   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | domain old-k8s-version-905554 has defined MAC address 52:54:00:3d:ed:07 in network mk-old-k8s-version-905554
	I0603 12:07:04.535492   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | unable to find current IP address of domain old-k8s-version-905554 in network mk-old-k8s-version-905554
	I0603 12:07:04.535522   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | I0603 12:07:04.535456   74674 retry.go:31] will retry after 385.160833ms: waiting for machine to come up
	I0603 12:07:04.922824   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | domain old-k8s-version-905554 has defined MAC address 52:54:00:3d:ed:07 in network mk-old-k8s-version-905554
	I0603 12:07:04.923312   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | unable to find current IP address of domain old-k8s-version-905554 in network mk-old-k8s-version-905554
	I0603 12:07:04.923336   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | I0603 12:07:04.923267   74674 retry.go:31] will retry after 363.309555ms: waiting for machine to come up
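The retry.go:31 lines above poll libvirt for the old-k8s-version VM's DHCP lease and back off between attempts ("will retry after …: waiting for machine to come up"). A minimal sketch of such a wait loop follows; lookupIP is a stand-in for the lease query, and the backoff and jitter values are illustrative, not minikube's exact retry policy.

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// waitForIP retries lookupIP with growing, jittered delays until an address
// appears or the timeout expires, like the retry loop in the log above.
func waitForIP(lookupIP func() (string, error), timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	delay := 200 * time.Millisecond
	for time.Now().Before(deadline) {
		if ip, err := lookupIP(); err == nil && ip != "" {
			return ip, nil
		}
		jittered := delay + time.Duration(rand.Int63n(int64(delay)))
		fmt.Printf("will retry after %v: waiting for machine to come up\n", jittered)
		time.Sleep(jittered)
		if delay < 2*time.Second {
			delay *= 2
		}
	}
	return "", errors.New("timed out waiting for machine IP")
}

func main() {
	// Stub lookup that "finds" an address immediately, just to keep the example runnable.
	ip, err := waitForIP(func() (string, error) { return "192.168.72.17", nil }, 30*time.Second)
	fmt.Println(ip, err)
}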
	I0603 12:07:01.017968   73179 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.344700881s)
	I0603 12:07:01.017993   73179 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0603 12:07:01.214414   73179 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0603 12:07:01.291063   73179 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0603 12:07:01.420874   73179 api_server.go:52] waiting for apiserver process to appear ...
	I0603 12:07:01.420977   73179 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:07:01.921439   73179 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:07:02.421904   73179 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:07:02.445051   73179 api_server.go:72] duration metric: took 1.024176056s to wait for apiserver process to appear ...
	I0603 12:07:02.445083   73179 api_server.go:88] waiting for apiserver healthz status ...
	I0603 12:07:02.445112   73179 api_server.go:253] Checking apiserver healthz at https://192.168.50.245:8443/healthz ...
	I0603 12:07:02.445614   73179 api_server.go:269] stopped: https://192.168.50.245:8443/healthz: Get "https://192.168.50.245:8443/healthz": dial tcp 192.168.50.245:8443: connect: connection refused
	I0603 12:07:02.945547   73179 api_server.go:253] Checking apiserver healthz at https://192.168.50.245:8443/healthz ...
	I0603 12:07:05.426682   73179 api_server.go:279] https://192.168.50.245:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0603 12:07:05.426713   73179 api_server.go:103] status: https://192.168.50.245:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0603 12:07:05.426726   73179 api_server.go:253] Checking apiserver healthz at https://192.168.50.245:8443/healthz ...
	I0603 12:07:05.474343   73179 api_server.go:279] https://192.168.50.245:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0603 12:07:05.474380   73179 api_server.go:103] status: https://192.168.50.245:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0603 12:07:05.474399   73179 api_server.go:253] Checking apiserver healthz at https://192.168.50.245:8443/healthz ...
	I0603 12:07:05.578473   73179 api_server.go:279] https://192.168.50.245:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0603 12:07:05.578520   73179 api_server.go:103] status: https://192.168.50.245:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0603 12:07:05.945708   73179 api_server.go:253] Checking apiserver healthz at https://192.168.50.245:8443/healthz ...
	I0603 12:07:05.952298   73179 api_server.go:279] https://192.168.50.245:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0603 12:07:05.952338   73179 api_server.go:103] status: https://192.168.50.245:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0603 12:07:06.445920   73179 api_server.go:253] Checking apiserver healthz at https://192.168.50.245:8443/healthz ...
	I0603 12:07:06.454769   73179 api_server.go:279] https://192.168.50.245:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0603 12:07:06.454805   73179 api_server.go:103] status: https://192.168.50.245:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0603 12:07:06.945370   73179 api_server.go:253] Checking apiserver healthz at https://192.168.50.245:8443/healthz ...
	I0603 12:07:06.952157   73179 api_server.go:279] https://192.168.50.245:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0603 12:07:06.952193   73179 api_server.go:103] status: https://192.168.50.245:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0603 12:07:07.445973   73179 api_server.go:253] Checking apiserver healthz at https://192.168.50.245:8443/healthz ...
	I0603 12:07:07.457436   73179 api_server.go:279] https://192.168.50.245:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0603 12:07:07.457471   73179 api_server.go:103] status: https://192.168.50.245:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0603 12:07:07.945237   73179 api_server.go:253] Checking apiserver healthz at https://192.168.50.245:8443/healthz ...
	I0603 12:07:07.952135   73179 api_server.go:279] https://192.168.50.245:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0603 12:07:07.952168   73179 api_server.go:103] status: https://192.168.50.245:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0603 12:07:08.445763   73179 api_server.go:253] Checking apiserver healthz at https://192.168.50.245:8443/healthz ...
	I0603 12:07:08.450319   73179 api_server.go:279] https://192.168.50.245:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0603 12:07:08.450346   73179 api_server.go:103] status: https://192.168.50.245:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0603 12:07:08.945476   73179 api_server.go:253] Checking apiserver healthz at https://192.168.50.245:8443/healthz ...
	I0603 12:07:08.950139   73179 api_server.go:279] https://192.168.50.245:8443/healthz returned 200:
	ok
	I0603 12:07:08.956975   73179 api_server.go:141] control plane version: v1.30.1
	I0603 12:07:08.957002   73179 api_server.go:131] duration metric: took 6.511911305s to wait for apiserver health ...
	I0603 12:07:08.957012   73179 cni.go:84] Creating CNI manager for ""
	I0603 12:07:08.957020   73179 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0603 12:07:08.958965   73179 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
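The api_server.go lines above poll https://192.168.50.245:8443/healthz roughly every 500ms, logging the 403/500 bodies until the endpoint finally returns 200. The sketch below shows the general shape of such a polling loop; it is not minikube's implementation (which authenticates with the cluster's client certificates), and it skips TLS verification purely to stay short.

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitForHealthz polls the given URL until it returns 200 or the timeout expires,
// printing non-200 bodies the way the log above does.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // "healthz returned 200: ok"
			}
			fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("timed out waiting for %s", url)
}

func main() {
	_ = waitForHealthz("https://192.168.50.245:8443/healthz", 4*time.Minute)
}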
	I0603 12:07:04.585614   73294 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0603 12:07:04.585642   73294 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (394537501 bytes)
	I0603 12:07:06.088296   73294 crio.go:462] duration metric: took 1.507429412s to copy over tarball
	I0603 12:07:06.088376   73294 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0603 12:07:08.432866   73294 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.344418631s)
	I0603 12:07:08.432898   73294 crio.go:469] duration metric: took 2.344572918s to extract the tarball
	I0603 12:07:08.432921   73294 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0603 12:07:08.472509   73294 ssh_runner.go:195] Run: sudo crictl images --output json
	I0603 12:07:08.529017   73294 crio.go:514] all images are preloaded for cri-o runtime.
	I0603 12:07:08.529040   73294 cache_images.go:84] Images are preloaded, skipping loading
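The crio.go decision above ("all images are preloaded" vs. "assuming images are not preloaded") rests on comparing `sudo crictl images --output json` against the image list expected for the Kubernetes version. A rough Go sketch of that check follows; the JSON shape (an "images" array with "repoTags") reflects crictl's output but should be treated as an assumption here, and the required-image list is just an example.

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

type crictlImages struct {
	Images []struct {
		RepoTags []string `json:"repoTags"`
	} `json:"images"`
}

// imagesPreloaded returns true only if every required tag is already known to crictl.
func imagesPreloaded(required []string) (bool, error) {
	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
	if err != nil {
		return false, err
	}
	var list crictlImages
	if err := json.Unmarshal(out, &list); err != nil {
		return false, err
	}
	have := map[string]bool{}
	for _, img := range list.Images {
		for _, tag := range img.RepoTags {
			have[tag] = true
		}
	}
	for _, want := range required {
		if !have[want] {
			return false, nil
		}
	}
	return true, nil
}

func main() {
	ok, err := imagesPreloaded([]string{"registry.k8s.io/kube-apiserver:v1.30.1"})
	fmt.Println(ok, err)
}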
	I0603 12:07:08.529052   73294 kubeadm.go:928] updating node { 192.168.61.60 8444 v1.30.1 crio true true} ...
	I0603 12:07:08.529180   73294 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-196710 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.60
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.1 ClusterName:default-k8s-diff-port-196710 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0603 12:07:08.529244   73294 ssh_runner.go:195] Run: crio config
	I0603 12:07:08.581601   73294 cni.go:84] Creating CNI manager for ""
	I0603 12:07:08.581625   73294 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0603 12:07:08.581641   73294 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0603 12:07:08.581667   73294 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.60 APIServerPort:8444 KubernetesVersion:v1.30.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-196710 NodeName:default-k8s-diff-port-196710 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.60"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.60 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0603 12:07:08.581854   73294 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.60
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-196710"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.60
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.60"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0603 12:07:08.581931   73294 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.1
	I0603 12:07:08.595708   73294 binaries.go:44] Found k8s binaries, skipping transfer
	I0603 12:07:08.595778   73294 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0603 12:07:08.608914   73294 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (327 bytes)
	I0603 12:07:08.627009   73294 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0603 12:07:08.643755   73294 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2169 bytes)
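The kubeadm/kubelet/kube-proxy YAML dumped above is rendered from Go configuration data and then copied over as /var/tmp/minikube/kubeadm.yaml.new. The snippet below is a small text/template sketch of that rendering style; the template body and field names are illustrative, not minikube's actual templates.

package main

import (
	"os"
	"text/template"
)

// A cut-down ClusterConfiguration template in the spirit of the dump above.
const clusterTmpl = `apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
controlPlaneEndpoint: {{.Endpoint}}:{{.Port}}
kubernetesVersion: {{.Version}}
networking:
  podSubnet: "{{.PodSubnet}}"
  serviceSubnet: {{.ServiceSubnet}}
`

func main() {
	t := template.Must(template.New("cluster").Parse(clusterTmpl))
	// Values taken from the generated config above, for illustration only.
	_ = t.Execute(os.Stdout, map[string]string{
		"Endpoint":      "control-plane.minikube.internal",
		"Port":          "8444",
		"Version":       "v1.30.1",
		"PodSubnet":     "10.244.0.0/16",
		"ServiceSubnet": "10.96.0.0/12",
	})
}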
	I0603 12:07:08.661803   73294 ssh_runner.go:195] Run: grep 192.168.61.60	control-plane.minikube.internal$ /etc/hosts
	I0603 12:07:08.665764   73294 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.60	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0603 12:07:08.678440   73294 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0603 12:07:08.797052   73294 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0603 12:07:08.814618   73294 certs.go:68] Setting up /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/default-k8s-diff-port-196710 for IP: 192.168.61.60
	I0603 12:07:08.814645   73294 certs.go:194] generating shared ca certs ...
	I0603 12:07:08.814665   73294 certs.go:226] acquiring lock for ca certs: {Name:mk50092df3bdecbf959926adc377ee06b75aacd9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 12:07:08.814863   73294 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19008-7755/.minikube/ca.key
	I0603 12:07:08.814931   73294 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19008-7755/.minikube/proxy-client-ca.key
	I0603 12:07:08.814945   73294 certs.go:256] generating profile certs ...
	I0603 12:07:08.815072   73294 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/default-k8s-diff-port-196710/client.key
	I0603 12:07:08.815150   73294 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/default-k8s-diff-port-196710/apiserver.key.fd40708e
	I0603 12:07:08.815210   73294 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/default-k8s-diff-port-196710/proxy-client.key
	I0603 12:07:08.815370   73294 certs.go:484] found cert: /home/jenkins/minikube-integration/19008-7755/.minikube/certs/15028.pem (1338 bytes)
	W0603 12:07:08.815408   73294 certs.go:480] ignoring /home/jenkins/minikube-integration/19008-7755/.minikube/certs/15028_empty.pem, impossibly tiny 0 bytes
	I0603 12:07:08.815421   73294 certs.go:484] found cert: /home/jenkins/minikube-integration/19008-7755/.minikube/certs/ca-key.pem (1679 bytes)
	I0603 12:07:08.815467   73294 certs.go:484] found cert: /home/jenkins/minikube-integration/19008-7755/.minikube/certs/ca.pem (1082 bytes)
	I0603 12:07:08.815501   73294 certs.go:484] found cert: /home/jenkins/minikube-integration/19008-7755/.minikube/certs/cert.pem (1123 bytes)
	I0603 12:07:08.815529   73294 certs.go:484] found cert: /home/jenkins/minikube-integration/19008-7755/.minikube/certs/key.pem (1679 bytes)
	I0603 12:07:08.815581   73294 certs.go:484] found cert: /home/jenkins/minikube-integration/19008-7755/.minikube/files/etc/ssl/certs/150282.pem (1708 bytes)
	I0603 12:07:08.816420   73294 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0603 12:07:08.852241   73294 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0603 12:07:08.892369   73294 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0603 12:07:08.924242   73294 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0603 12:07:08.952908   73294 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/default-k8s-diff-port-196710/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0603 12:07:09.002060   73294 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/default-k8s-diff-port-196710/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0603 12:07:09.035617   73294 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/default-k8s-diff-port-196710/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0603 12:07:09.063304   73294 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/default-k8s-diff-port-196710/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0603 12:07:09.090994   73294 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/certs/15028.pem --> /usr/share/ca-certificates/15028.pem (1338 bytes)
	I0603 12:07:09.122568   73294 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/files/etc/ssl/certs/150282.pem --> /usr/share/ca-certificates/150282.pem (1708 bytes)
	I0603 12:07:09.150432   73294 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0603 12:07:09.178940   73294 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0603 12:07:09.202491   73294 ssh_runner.go:195] Run: openssl version
	I0603 12:07:09.211182   73294 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15028.pem && ln -fs /usr/share/ca-certificates/15028.pem /etc/ssl/certs/15028.pem"
	I0603 12:07:09.226290   73294 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15028.pem
	I0603 12:07:09.232034   73294 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jun  3 10:51 /usr/share/ca-certificates/15028.pem
	I0603 12:07:09.232103   73294 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15028.pem
	I0603 12:07:09.240592   73294 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/15028.pem /etc/ssl/certs/51391683.0"
	I0603 12:07:09.255018   73294 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/150282.pem && ln -fs /usr/share/ca-certificates/150282.pem /etc/ssl/certs/150282.pem"
	I0603 12:07:09.267194   73294 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/150282.pem
	I0603 12:07:09.272575   73294 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jun  3 10:51 /usr/share/ca-certificates/150282.pem
	I0603 12:07:09.272658   73294 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/150282.pem
	I0603 12:07:09.280687   73294 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/150282.pem /etc/ssl/certs/3ec20f2e.0"
	I0603 12:07:09.296232   73294 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0603 12:07:09.309706   73294 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0603 12:07:09.315596   73294 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jun  3 10:39 /usr/share/ca-certificates/minikubeCA.pem
	I0603 12:07:09.315661   73294 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0603 12:07:09.323283   73294 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0603 12:07:09.337780   73294 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0603 12:07:09.343627   73294 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0603 12:07:09.351742   73294 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0603 12:07:09.360465   73294 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0603 12:07:09.366733   73294 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0603 12:07:09.373061   73294 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0603 12:07:09.379649   73294 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
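Each `openssl x509 -noout -in <cert> -checkend 86400` call above asks whether a certificate will expire within the next 24 hours. A minimal Go equivalent using crypto/x509 follows; the certificate path in main is just one of the paths from the log, used for illustration.

package main

import (
	"crypto/x509"
	"encoding/pem"
	"errors"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM certificate at path expires inside the window.
func expiresWithin(path string, window time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, errors.New("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(window).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/front-proxy-client.crt", 24*time.Hour)
	fmt.Println(soon, err)
}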
	I0603 12:07:09.385610   73294 kubeadm.go:391] StartCluster: {Name:default-k8s-diff-port-196710 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:default-k8s-diff-port-196710 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.60 Port:8444 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0603 12:07:09.385694   73294 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0603 12:07:09.385732   73294 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0603 12:07:09.434544   73294 cri.go:89] found id: ""
	I0603 12:07:09.434636   73294 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0603 12:07:09.446209   73294 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0603 12:07:09.446231   73294 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0603 12:07:09.446236   73294 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0603 12:07:09.446283   73294 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0603 12:07:09.456225   73294 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0603 12:07:09.457266   73294 kubeconfig.go:125] found "default-k8s-diff-port-196710" server: "https://192.168.61.60:8444"
	I0603 12:07:09.459519   73294 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0603 12:07:09.468977   73294 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.61.60
	I0603 12:07:09.469007   73294 kubeadm.go:1154] stopping kube-system containers ...
	I0603 12:07:09.469020   73294 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0603 12:07:09.469070   73294 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0603 12:07:09.508306   73294 cri.go:89] found id: ""
	I0603 12:07:09.508408   73294 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0603 12:07:09.526082   73294 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0603 12:07:09.536331   73294 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0603 12:07:09.536361   73294 kubeadm.go:156] found existing configuration files:
	
	I0603 12:07:09.536430   73294 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0603 12:07:09.549053   73294 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0603 12:07:09.549121   73294 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0603 12:07:09.562617   73294 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0603 12:07:09.574968   73294 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0603 12:07:09.575023   73294 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0603 12:07:05.287726   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | domain old-k8s-version-905554 has defined MAC address 52:54:00:3d:ed:07 in network mk-old-k8s-version-905554
	I0603 12:07:05.288228   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | unable to find current IP address of domain old-k8s-version-905554 in network mk-old-k8s-version-905554
	I0603 12:07:05.288264   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | I0603 12:07:05.288180   74674 retry.go:31] will retry after 401.575259ms: waiting for machine to come up
	I0603 12:07:05.691523   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | domain old-k8s-version-905554 has defined MAC address 52:54:00:3d:ed:07 in network mk-old-k8s-version-905554
	I0603 12:07:05.691945   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | unable to find current IP address of domain old-k8s-version-905554 in network mk-old-k8s-version-905554
	I0603 12:07:05.691977   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | I0603 12:07:05.691899   74674 retry.go:31] will retry after 473.67071ms: waiting for machine to come up
	I0603 12:07:06.167720   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | domain old-k8s-version-905554 has defined MAC address 52:54:00:3d:ed:07 in network mk-old-k8s-version-905554
	I0603 12:07:06.168286   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | unable to find current IP address of domain old-k8s-version-905554 in network mk-old-k8s-version-905554
	I0603 12:07:06.168317   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | I0603 12:07:06.168229   74674 retry.go:31] will retry after 610.631851ms: waiting for machine to come up
	I0603 12:07:06.780253   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | domain old-k8s-version-905554 has defined MAC address 52:54:00:3d:ed:07 in network mk-old-k8s-version-905554
	I0603 12:07:06.780747   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | unable to find current IP address of domain old-k8s-version-905554 in network mk-old-k8s-version-905554
	I0603 12:07:06.780771   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | I0603 12:07:06.780699   74674 retry.go:31] will retry after 1.150068976s: waiting for machine to come up
	I0603 12:07:07.932831   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | domain old-k8s-version-905554 has defined MAC address 52:54:00:3d:ed:07 in network mk-old-k8s-version-905554
	I0603 12:07:07.933375   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | unable to find current IP address of domain old-k8s-version-905554 in network mk-old-k8s-version-905554
	I0603 12:07:07.933409   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | I0603 12:07:07.933282   74674 retry.go:31] will retry after 900.546424ms: waiting for machine to come up
	I0603 12:07:08.835303   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | domain old-k8s-version-905554 has defined MAC address 52:54:00:3d:ed:07 in network mk-old-k8s-version-905554
	I0603 12:07:08.835794   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | unable to find current IP address of domain old-k8s-version-905554 in network mk-old-k8s-version-905554
	I0603 12:07:08.835827   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | I0603 12:07:08.835739   74674 retry.go:31] will retry after 1.64990511s: waiting for machine to come up
	I0603 12:07:08.960402   73179 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0603 12:07:08.971814   73179 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0603 12:07:08.989522   73179 system_pods.go:43] waiting for kube-system pods to appear ...
	I0603 12:07:09.001926   73179 system_pods.go:59] 8 kube-system pods found
	I0603 12:07:09.001960   73179 system_pods.go:61] "coredns-7db6d8ff4d-pv665" [58d7a423-2ac7-4a57-a76f-e8dfaeac9732] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0603 12:07:09.001975   73179 system_pods.go:61] "etcd-no-preload-602118" [3a6a1eb1-0234-47d8-8eaa-e6f2de5fc7b8] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0603 12:07:09.001987   73179 system_pods.go:61] "kube-apiserver-no-preload-602118" [d6b168b3-1605-4e04-8c6a-c5c22a080a10] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0603 12:07:09.001998   73179 system_pods.go:61] "kube-controller-manager-no-preload-602118" [b045e945-f022-443d-b0f6-17f0b283f8fa] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0603 12:07:09.002010   73179 system_pods.go:61] "kube-proxy-r9fkt" [10eef751-51d7-4794-9805-26587a395a5b] Running
	I0603 12:07:09.002019   73179 system_pods.go:61] "kube-scheduler-no-preload-602118" [2032b4c9-ff95-4435-bbb2-ad6f87598555] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0603 12:07:09.002030   73179 system_pods.go:61] "metrics-server-569cc877fc-jgjzt" [ac1aac82-0d34-47e1-b9c5-4f1f501c8bd0] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0603 12:07:09.002035   73179 system_pods.go:61] "storage-provisioner" [6d38abd9-e1e6-4e71-b96f-4653971b511f] Running
	I0603 12:07:09.002044   73179 system_pods.go:74] duration metric: took 12.497722ms to wait for pod list to return data ...
	I0603 12:07:09.002059   73179 node_conditions.go:102] verifying NodePressure condition ...
	I0603 12:07:09.005347   73179 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0603 12:07:09.005374   73179 node_conditions.go:123] node cpu capacity is 2
	I0603 12:07:09.005394   73179 node_conditions.go:105] duration metric: took 3.3294ms to run NodePressure ...
	I0603 12:07:09.005414   73179 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0603 12:07:09.274344   73179 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0603 12:07:09.280021   73179 kubeadm.go:733] kubelet initialised
	I0603 12:07:09.280042   73179 kubeadm.go:734] duration metric: took 5.676641ms waiting for restarted kubelet to initialise ...
	I0603 12:07:09.280056   73179 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0603 12:07:09.285090   73179 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-pv665" in "kube-system" namespace to be "Ready" ...
	I0603 12:07:09.290457   73179 pod_ready.go:97] node "no-preload-602118" hosting pod "coredns-7db6d8ff4d-pv665" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-602118" has status "Ready":"False"
	I0603 12:07:09.290478   73179 pod_ready.go:81] duration metric: took 5.366255ms for pod "coredns-7db6d8ff4d-pv665" in "kube-system" namespace to be "Ready" ...
	E0603 12:07:09.290487   73179 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-602118" hosting pod "coredns-7db6d8ff4d-pv665" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-602118" has status "Ready":"False"
	I0603 12:07:09.290495   73179 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-602118" in "kube-system" namespace to be "Ready" ...
	I0603 12:07:09.296847   73179 pod_ready.go:97] node "no-preload-602118" hosting pod "etcd-no-preload-602118" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-602118" has status "Ready":"False"
	I0603 12:07:09.296872   73179 pod_ready.go:81] duration metric: took 6.368777ms for pod "etcd-no-preload-602118" in "kube-system" namespace to be "Ready" ...
	E0603 12:07:09.296883   73179 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-602118" hosting pod "etcd-no-preload-602118" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-602118" has status "Ready":"False"
	I0603 12:07:09.296895   73179 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-602118" in "kube-system" namespace to be "Ready" ...
	I0603 12:07:09.300895   73179 pod_ready.go:97] node "no-preload-602118" hosting pod "kube-apiserver-no-preload-602118" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-602118" has status "Ready":"False"
	I0603 12:07:09.300914   73179 pod_ready.go:81] duration metric: took 4.012614ms for pod "kube-apiserver-no-preload-602118" in "kube-system" namespace to be "Ready" ...
	E0603 12:07:09.300922   73179 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-602118" hosting pod "kube-apiserver-no-preload-602118" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-602118" has status "Ready":"False"
	I0603 12:07:09.300927   73179 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-602118" in "kube-system" namespace to be "Ready" ...
	I0603 12:07:09.394237   73179 pod_ready.go:97] node "no-preload-602118" hosting pod "kube-controller-manager-no-preload-602118" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-602118" has status "Ready":"False"
	I0603 12:07:09.394267   73179 pod_ready.go:81] duration metric: took 93.331406ms for pod "kube-controller-manager-no-preload-602118" in "kube-system" namespace to be "Ready" ...
	E0603 12:07:09.394280   73179 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-602118" hosting pod "kube-controller-manager-no-preload-602118" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-602118" has status "Ready":"False"
	I0603 12:07:09.394289   73179 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-r9fkt" in "kube-system" namespace to be "Ready" ...
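The pod_ready.go lines above wait for each system-critical pod to report the Ready condition, and record an error (and skip) for pods whose hosting node is itself not Ready yet. A rough client-go sketch of the underlying check (pod name and timeout taken from the log; this is not minikube's actual pod_ready.go code):

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitPodReady polls a kube-system pod until its Ready condition is True or the timeout expires.
func waitPodReady(ctx context.Context, cs kubernetes.Interface, name string, timeout time.Duration) bool {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		pod, err := cs.CoreV1().Pods("kube-system").Get(ctx, name, metav1.GetOptions{})
		if err == nil {
			for _, c := range pod.Status.Conditions {
				// the same condition pod_ready.go reports on: Ready == True
				if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
					return true
				}
			}
		}
		time.Sleep(2 * time.Second)
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	fmt.Println(waitPodReady(context.Background(), cs, "kube-proxy-r9fkt", 4*time.Minute))
}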
	I0603 12:07:09.585502   73294 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0603 12:07:09.969462   73294 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0603 12:07:09.969522   73294 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0603 12:07:09.979025   73294 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0603 12:07:09.987866   73294 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0603 12:07:09.987920   73294 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0603 12:07:09.997090   73294 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0603 12:07:10.006350   73294 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0603 12:07:10.214287   73294 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0603 12:07:11.298009   73294 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.083680634s)
	I0603 12:07:11.298064   73294 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0603 12:07:11.562011   73294 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0603 12:07:11.680895   73294 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0603 12:07:11.790078   73294 api_server.go:52] waiting for apiserver process to appear ...
	I0603 12:07:11.790166   73294 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:07:12.291115   73294 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:07:12.790366   73294 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:07:12.840813   73294 api_server.go:72] duration metric: took 1.050741427s to wait for apiserver process to appear ...
	I0603 12:07:12.840845   73294 api_server.go:88] waiting for apiserver healthz status ...
	I0603 12:07:12.840869   73294 api_server.go:253] Checking apiserver healthz at https://192.168.61.60:8444/healthz ...
	I0603 12:07:12.841376   73294 api_server.go:269] stopped: https://192.168.61.60:8444/healthz: Get "https://192.168.61.60:8444/healthz": dial tcp 192.168.61.60:8444: connect: connection refused
	I0603 12:07:13.341000   73294 api_server.go:253] Checking apiserver healthz at https://192.168.61.60:8444/healthz ...
	I0603 12:07:10.487141   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | domain old-k8s-version-905554 has defined MAC address 52:54:00:3d:ed:07 in network mk-old-k8s-version-905554
	I0603 12:07:10.564570   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | unable to find current IP address of domain old-k8s-version-905554 in network mk-old-k8s-version-905554
	I0603 12:07:10.564611   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | I0603 12:07:10.487617   74674 retry.go:31] will retry after 1.948227414s: waiting for machine to come up
	I0603 12:07:12.438091   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | domain old-k8s-version-905554 has defined MAC address 52:54:00:3d:ed:07 in network mk-old-k8s-version-905554
	I0603 12:07:12.438596   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | unable to find current IP address of domain old-k8s-version-905554 in network mk-old-k8s-version-905554
	I0603 12:07:12.438620   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | I0603 12:07:12.438540   74674 retry.go:31] will retry after 2.378980516s: waiting for machine to come up
	I0603 12:07:14.819161   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | domain old-k8s-version-905554 has defined MAC address 52:54:00:3d:ed:07 in network mk-old-k8s-version-905554
	I0603 12:07:14.819782   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | unable to find current IP address of domain old-k8s-version-905554 in network mk-old-k8s-version-905554
	I0603 12:07:14.819806   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | I0603 12:07:14.819722   74674 retry.go:31] will retry after 2.362614226s: waiting for machine to come up
	I0603 12:07:11.067879   73179 pod_ready.go:92] pod "kube-proxy-r9fkt" in "kube-system" namespace has status "Ready":"True"
	I0603 12:07:11.067907   73179 pod_ready.go:81] duration metric: took 1.673607925s for pod "kube-proxy-r9fkt" in "kube-system" namespace to be "Ready" ...
	I0603 12:07:11.067922   73179 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-602118" in "kube-system" namespace to be "Ready" ...
	I0603 12:07:13.078490   73179 pod_ready.go:102] pod "kube-scheduler-no-preload-602118" in "kube-system" namespace has status "Ready":"False"
	I0603 12:07:15.451457   73294 api_server.go:279] https://192.168.61.60:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0603 12:07:15.451491   73294 api_server.go:103] status: https://192.168.61.60:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0603 12:07:15.451509   73294 api_server.go:253] Checking apiserver healthz at https://192.168.61.60:8444/healthz ...
	I0603 12:07:15.474239   73294 api_server.go:279] https://192.168.61.60:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0603 12:07:15.474272   73294 api_server.go:103] status: https://192.168.61.60:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0603 12:07:15.841786   73294 api_server.go:253] Checking apiserver healthz at https://192.168.61.60:8444/healthz ...
	I0603 12:07:15.846026   73294 api_server.go:279] https://192.168.61.60:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0603 12:07:15.846051   73294 api_server.go:103] status: https://192.168.61.60:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0603 12:07:16.341687   73294 api_server.go:253] Checking apiserver healthz at https://192.168.61.60:8444/healthz ...
	I0603 12:07:16.348062   73294 api_server.go:279] https://192.168.61.60:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0603 12:07:16.348097   73294 api_server.go:103] status: https://192.168.61.60:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0603 12:07:16.841677   73294 api_server.go:253] Checking apiserver healthz at https://192.168.61.60:8444/healthz ...
	I0603 12:07:16.851931   73294 api_server.go:279] https://192.168.61.60:8444/healthz returned 200:
	ok
	I0603 12:07:16.861724   73294 api_server.go:141] control plane version: v1.30.1
	I0603 12:07:16.861752   73294 api_server.go:131] duration metric: took 4.020899633s to wait for apiserver health ...
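The api_server.go block above polls the apiserver's /healthz endpoint, treating connection refusals, 403 responses (while RBAC bootstrap is still in progress, as the failed poststarthook/rbac/bootstrap-roles check suggests), and 500 "healthz check failed" responses as not-ready, and stops once a plain 200 ok comes back. A minimal sketch of such a polling loop (the address is taken from the log; skipping TLS verification is only for illustration, since a stock client does not trust the apiserver's certificate):

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	url := "https://192.168.61.60:8444/healthz"
	for i := 0; i < 60; i++ {
		resp, err := client.Get(url)
		if err != nil {
			fmt.Println("stopped:", err) // e.g. "connection refused" while the apiserver restarts
		} else {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			fmt.Printf("returned %d: %s\n", resp.StatusCode, body)
			if resp.StatusCode == http.StatusOK {
				return // "ok" — the control plane is healthy
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
}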
	I0603 12:07:16.861762   73294 cni.go:84] Creating CNI manager for ""
	I0603 12:07:16.861782   73294 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0603 12:07:16.863553   73294 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0603 12:07:16.864875   73294 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0603 12:07:16.875581   73294 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
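Configuring the bridge CNI amounts to creating /etc/cni/net.d and copying a small conflist into it (the 496-byte 1-k8s.conflist above). The file's contents are not shown in the log; the snippet below writes an illustrative bridge + portmap conflist of the same general shape (all field values are assumptions, not the real file):

package main

import "os"

// Illustrative bridge CNI configuration; the actual 1-k8s.conflist contents are not in the log.
const conflist = `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isDefaultGateway": true,
      "ipMasq": true,
      "hairpinMode": true,
      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
    },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}`

func main() {
	if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0o644); err != nil {
		panic(err)
	}
}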
	I0603 12:07:16.895092   73294 system_pods.go:43] waiting for kube-system pods to appear ...
	I0603 12:07:16.906573   73294 system_pods.go:59] 8 kube-system pods found
	I0603 12:07:16.906609   73294 system_pods.go:61] "coredns-7db6d8ff4d-wrw9f" [0125eb3a-9a5a-4bb3-a175-0e49b4392d1e] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0603 12:07:16.906621   73294 system_pods.go:61] "etcd-default-k8s-diff-port-196710" [2189cad5-b6e7-4cc5-9ce8-22ba18abce59] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0603 12:07:16.906631   73294 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-196710" [1aee234a-8876-4594-a0d6-7c7dfb7f4d3e] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0603 12:07:16.906640   73294 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-196710" [18029d80-921c-477c-a82f-26eb1a068b97] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0603 12:07:16.906650   73294 system_pods.go:61] "kube-proxy-84l9f" [5568c7a8-5237-4240-a9dc-6436b156010c] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0603 12:07:16.906673   73294 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-196710" [9fafec03-b5fb-4ea4-98df-0798cd8a01a1] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0603 12:07:16.906681   73294 system_pods.go:61] "metrics-server-569cc877fc-tnhbj" [352fbe10-2f52-434e-91fc-84fbf439a42d] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0603 12:07:16.906690   73294 system_pods.go:61] "storage-provisioner" [24c5e290-d3d7-4523-9432-c7591fa95e18] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0603 12:07:16.906700   73294 system_pods.go:74] duration metric: took 11.592885ms to wait for pod list to return data ...
	I0603 12:07:16.906719   73294 node_conditions.go:102] verifying NodePressure condition ...
	I0603 12:07:16.910038   73294 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0603 12:07:16.910065   73294 node_conditions.go:123] node cpu capacity is 2
	I0603 12:07:16.910079   73294 node_conditions.go:105] duration metric: took 3.350705ms to run NodePressure ...
	I0603 12:07:16.910101   73294 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0603 12:07:17.203847   73294 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0603 12:07:17.208169   73294 kubeadm.go:733] kubelet initialised
	I0603 12:07:17.208196   73294 kubeadm.go:734] duration metric: took 4.31857ms waiting for restarted kubelet to initialise ...
	I0603 12:07:17.208206   73294 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0603 12:07:17.213480   73294 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-wrw9f" in "kube-system" namespace to be "Ready" ...
	I0603 12:07:17.227906   73294 pod_ready.go:97] node "default-k8s-diff-port-196710" hosting pod "coredns-7db6d8ff4d-wrw9f" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-196710" has status "Ready":"False"
	I0603 12:07:17.227931   73294 pod_ready.go:81] duration metric: took 14.426593ms for pod "coredns-7db6d8ff4d-wrw9f" in "kube-system" namespace to be "Ready" ...
	E0603 12:07:17.227941   73294 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-196710" hosting pod "coredns-7db6d8ff4d-wrw9f" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-196710" has status "Ready":"False"
	I0603 12:07:17.227949   73294 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-196710" in "kube-system" namespace to be "Ready" ...
	I0603 12:07:17.231837   73294 pod_ready.go:97] node "default-k8s-diff-port-196710" hosting pod "etcd-default-k8s-diff-port-196710" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-196710" has status "Ready":"False"
	I0603 12:07:17.231867   73294 pod_ready.go:81] duration metric: took 3.906779ms for pod "etcd-default-k8s-diff-port-196710" in "kube-system" namespace to be "Ready" ...
	E0603 12:07:17.231881   73294 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-196710" hosting pod "etcd-default-k8s-diff-port-196710" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-196710" has status "Ready":"False"
	I0603 12:07:17.231890   73294 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-196710" in "kube-system" namespace to be "Ready" ...
	I0603 12:07:17.238497   73294 pod_ready.go:97] node "default-k8s-diff-port-196710" hosting pod "kube-apiserver-default-k8s-diff-port-196710" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-196710" has status "Ready":"False"
	I0603 12:07:17.238525   73294 pod_ready.go:81] duration metric: took 6.62644ms for pod "kube-apiserver-default-k8s-diff-port-196710" in "kube-system" namespace to be "Ready" ...
	E0603 12:07:17.238537   73294 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-196710" hosting pod "kube-apiserver-default-k8s-diff-port-196710" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-196710" has status "Ready":"False"
	I0603 12:07:17.238557   73294 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-196710" in "kube-system" namespace to be "Ready" ...
	I0603 12:07:17.298265   73294 pod_ready.go:97] node "default-k8s-diff-port-196710" hosting pod "kube-controller-manager-default-k8s-diff-port-196710" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-196710" has status "Ready":"False"
	I0603 12:07:17.298293   73294 pod_ready.go:81] duration metric: took 59.722372ms for pod "kube-controller-manager-default-k8s-diff-port-196710" in "kube-system" namespace to be "Ready" ...
	E0603 12:07:17.298303   73294 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-196710" hosting pod "kube-controller-manager-default-k8s-diff-port-196710" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-196710" has status "Ready":"False"
	I0603 12:07:17.298310   73294 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-84l9f" in "kube-system" namespace to be "Ready" ...
	I0603 12:07:18.098358   73294 pod_ready.go:92] pod "kube-proxy-84l9f" in "kube-system" namespace has status "Ready":"True"
	I0603 12:07:18.098388   73294 pod_ready.go:81] duration metric: took 800.069928ms for pod "kube-proxy-84l9f" in "kube-system" namespace to be "Ready" ...
	I0603 12:07:18.098401   73294 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-196710" in "kube-system" namespace to be "Ready" ...
	I0603 12:07:17.184410   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | domain old-k8s-version-905554 has defined MAC address 52:54:00:3d:ed:07 in network mk-old-k8s-version-905554
	I0603 12:07:17.184937   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | unable to find current IP address of domain old-k8s-version-905554 in network mk-old-k8s-version-905554
	I0603 12:07:17.184967   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | I0603 12:07:17.184893   74674 retry.go:31] will retry after 3.787322948s: waiting for machine to come up
	I0603 12:07:15.574365   73179 pod_ready.go:102] pod "kube-scheduler-no-preload-602118" in "kube-system" namespace has status "Ready":"False"
	I0603 12:07:17.575261   73179 pod_ready.go:102] pod "kube-scheduler-no-preload-602118" in "kube-system" namespace has status "Ready":"False"
	I0603 12:07:20.073582   73179 pod_ready.go:102] pod "kube-scheduler-no-preload-602118" in "kube-system" namespace has status "Ready":"False"
	I0603 12:07:22.423964   72964 start.go:364] duration metric: took 54.978859199s to acquireMachinesLock for "embed-certs-725022"
	I0603 12:07:22.424033   72964 start.go:96] Skipping create...Using existing machine configuration
	I0603 12:07:22.424044   72964 fix.go:54] fixHost starting: 
	I0603 12:07:22.424484   72964 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 12:07:22.424521   72964 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 12:07:22.446913   72964 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45395
	I0603 12:07:22.447356   72964 main.go:141] libmachine: () Calling .GetVersion
	I0603 12:07:22.447895   72964 main.go:141] libmachine: Using API Version  1
	I0603 12:07:22.447926   72964 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 12:07:22.448408   72964 main.go:141] libmachine: () Calling .GetMachineName
	I0603 12:07:22.448648   72964 main.go:141] libmachine: (embed-certs-725022) Calling .DriverName
	I0603 12:07:22.448838   72964 main.go:141] libmachine: (embed-certs-725022) Calling .GetState
	I0603 12:07:22.450953   72964 fix.go:112] recreateIfNeeded on embed-certs-725022: state=Stopped err=<nil>
	I0603 12:07:22.450977   72964 main.go:141] libmachine: (embed-certs-725022) Calling .DriverName
	W0603 12:07:22.451199   72964 fix.go:138] unexpected machine state, will restart: <nil>
	I0603 12:07:22.513348   72964 out.go:177] * Restarting existing kvm2 VM for "embed-certs-725022" ...
	I0603 12:07:20.975695   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | domain old-k8s-version-905554 has defined MAC address 52:54:00:3d:ed:07 in network mk-old-k8s-version-905554
	I0603 12:07:20.976290   73662 main.go:141] libmachine: (old-k8s-version-905554) Found IP for machine: 192.168.39.155
	I0603 12:07:20.976345   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | domain old-k8s-version-905554 has current primary IP address 192.168.39.155 and MAC address 52:54:00:3d:ed:07 in network mk-old-k8s-version-905554
	I0603 12:07:20.976358   73662 main.go:141] libmachine: (old-k8s-version-905554) Reserving static IP address...
	I0603 12:07:20.976837   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | found host DHCP lease matching {name: "old-k8s-version-905554", mac: "52:54:00:3d:ed:07", ip: "192.168.39.155"} in network mk-old-k8s-version-905554: {Iface:virbr1 ExpiryTime:2024-06-03 13:07:14 +0000 UTC Type:0 Mac:52:54:00:3d:ed:07 Iaid: IPaddr:192.168.39.155 Prefix:24 Hostname:old-k8s-version-905554 Clientid:01:52:54:00:3d:ed:07}
	I0603 12:07:20.976864   73662 main.go:141] libmachine: (old-k8s-version-905554) Reserved static IP address: 192.168.39.155
	I0603 12:07:20.976883   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | skip adding static IP to network mk-old-k8s-version-905554 - found existing host DHCP lease matching {name: "old-k8s-version-905554", mac: "52:54:00:3d:ed:07", ip: "192.168.39.155"}
	I0603 12:07:20.976894   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | Getting to WaitForSSH function...
	I0603 12:07:20.976902   73662 main.go:141] libmachine: (old-k8s-version-905554) Waiting for SSH to be available...
	I0603 12:07:20.978969   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | domain old-k8s-version-905554 has defined MAC address 52:54:00:3d:ed:07 in network mk-old-k8s-version-905554
	I0603 12:07:20.979326   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:ed:07", ip: ""} in network mk-old-k8s-version-905554: {Iface:virbr1 ExpiryTime:2024-06-03 13:07:14 +0000 UTC Type:0 Mac:52:54:00:3d:ed:07 Iaid: IPaddr:192.168.39.155 Prefix:24 Hostname:old-k8s-version-905554 Clientid:01:52:54:00:3d:ed:07}
	I0603 12:07:20.979361   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | domain old-k8s-version-905554 has defined IP address 192.168.39.155 and MAC address 52:54:00:3d:ed:07 in network mk-old-k8s-version-905554
	I0603 12:07:20.979458   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | Using SSH client type: external
	I0603 12:07:20.979488   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | Using SSH private key: /home/jenkins/minikube-integration/19008-7755/.minikube/machines/old-k8s-version-905554/id_rsa (-rw-------)
	I0603 12:07:20.979525   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.155 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19008-7755/.minikube/machines/old-k8s-version-905554/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0603 12:07:20.979540   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | About to run SSH command:
	I0603 12:07:20.979564   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | exit 0
	I0603 12:07:21.103178   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | SSH cmd err, output: <nil>: 
	I0603 12:07:21.103557   73662 main.go:141] libmachine: (old-k8s-version-905554) Calling .GetConfigRaw
	I0603 12:07:21.104215   73662 main.go:141] libmachine: (old-k8s-version-905554) Calling .GetIP
	I0603 12:07:21.107017   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | domain old-k8s-version-905554 has defined MAC address 52:54:00:3d:ed:07 in network mk-old-k8s-version-905554
	I0603 12:07:21.107397   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:ed:07", ip: ""} in network mk-old-k8s-version-905554: {Iface:virbr1 ExpiryTime:2024-06-03 13:07:14 +0000 UTC Type:0 Mac:52:54:00:3d:ed:07 Iaid: IPaddr:192.168.39.155 Prefix:24 Hostname:old-k8s-version-905554 Clientid:01:52:54:00:3d:ed:07}
	I0603 12:07:21.107424   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | domain old-k8s-version-905554 has defined IP address 192.168.39.155 and MAC address 52:54:00:3d:ed:07 in network mk-old-k8s-version-905554
	I0603 12:07:21.107619   73662 profile.go:143] Saving config to /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/old-k8s-version-905554/config.json ...
	I0603 12:07:21.107782   73662 machine.go:94] provisionDockerMachine start ...
	I0603 12:07:21.107809   73662 main.go:141] libmachine: (old-k8s-version-905554) Calling .DriverName
	I0603 12:07:21.107979   73662 main.go:141] libmachine: (old-k8s-version-905554) Calling .GetSSHHostname
	I0603 12:07:21.110021   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | domain old-k8s-version-905554 has defined MAC address 52:54:00:3d:ed:07 in network mk-old-k8s-version-905554
	I0603 12:07:21.110389   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:ed:07", ip: ""} in network mk-old-k8s-version-905554: {Iface:virbr1 ExpiryTime:2024-06-03 13:07:14 +0000 UTC Type:0 Mac:52:54:00:3d:ed:07 Iaid: IPaddr:192.168.39.155 Prefix:24 Hostname:old-k8s-version-905554 Clientid:01:52:54:00:3d:ed:07}
	I0603 12:07:21.110414   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | domain old-k8s-version-905554 has defined IP address 192.168.39.155 and MAC address 52:54:00:3d:ed:07 in network mk-old-k8s-version-905554
	I0603 12:07:21.110540   73662 main.go:141] libmachine: (old-k8s-version-905554) Calling .GetSSHPort
	I0603 12:07:21.110719   73662 main.go:141] libmachine: (old-k8s-version-905554) Calling .GetSSHKeyPath
	I0603 12:07:21.110880   73662 main.go:141] libmachine: (old-k8s-version-905554) Calling .GetSSHKeyPath
	I0603 12:07:21.111026   73662 main.go:141] libmachine: (old-k8s-version-905554) Calling .GetSSHUsername
	I0603 12:07:21.111239   73662 main.go:141] libmachine: Using SSH client type: native
	I0603 12:07:21.111467   73662 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.155 22 <nil> <nil>}
	I0603 12:07:21.111484   73662 main.go:141] libmachine: About to run SSH command:
	hostname
	I0603 12:07:21.219123   73662 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0603 12:07:21.219148   73662 main.go:141] libmachine: (old-k8s-version-905554) Calling .GetMachineName
	I0603 12:07:21.219379   73662 buildroot.go:166] provisioning hostname "old-k8s-version-905554"
	I0603 12:07:21.219403   73662 main.go:141] libmachine: (old-k8s-version-905554) Calling .GetMachineName
	I0603 12:07:21.219571   73662 main.go:141] libmachine: (old-k8s-version-905554) Calling .GetSSHHostname
	I0603 12:07:21.222603   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | domain old-k8s-version-905554 has defined MAC address 52:54:00:3d:ed:07 in network mk-old-k8s-version-905554
	I0603 12:07:21.223000   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:ed:07", ip: ""} in network mk-old-k8s-version-905554: {Iface:virbr1 ExpiryTime:2024-06-03 13:07:14 +0000 UTC Type:0 Mac:52:54:00:3d:ed:07 Iaid: IPaddr:192.168.39.155 Prefix:24 Hostname:old-k8s-version-905554 Clientid:01:52:54:00:3d:ed:07}
	I0603 12:07:21.223058   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | domain old-k8s-version-905554 has defined IP address 192.168.39.155 and MAC address 52:54:00:3d:ed:07 in network mk-old-k8s-version-905554
	I0603 12:07:21.223210   73662 main.go:141] libmachine: (old-k8s-version-905554) Calling .GetSSHPort
	I0603 12:07:21.223406   73662 main.go:141] libmachine: (old-k8s-version-905554) Calling .GetSSHKeyPath
	I0603 12:07:21.223573   73662 main.go:141] libmachine: (old-k8s-version-905554) Calling .GetSSHKeyPath
	I0603 12:07:21.223741   73662 main.go:141] libmachine: (old-k8s-version-905554) Calling .GetSSHUsername
	I0603 12:07:21.223926   73662 main.go:141] libmachine: Using SSH client type: native
	I0603 12:07:21.224087   73662 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.155 22 <nil> <nil>}
	I0603 12:07:21.224099   73662 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-905554 && echo "old-k8s-version-905554" | sudo tee /etc/hostname
	I0603 12:07:21.346108   73662 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-905554
	
	I0603 12:07:21.346135   73662 main.go:141] libmachine: (old-k8s-version-905554) Calling .GetSSHHostname
	I0603 12:07:21.348801   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | domain old-k8s-version-905554 has defined MAC address 52:54:00:3d:ed:07 in network mk-old-k8s-version-905554
	I0603 12:07:21.349099   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:ed:07", ip: ""} in network mk-old-k8s-version-905554: {Iface:virbr1 ExpiryTime:2024-06-03 13:07:14 +0000 UTC Type:0 Mac:52:54:00:3d:ed:07 Iaid: IPaddr:192.168.39.155 Prefix:24 Hostname:old-k8s-version-905554 Clientid:01:52:54:00:3d:ed:07}
	I0603 12:07:21.349129   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | domain old-k8s-version-905554 has defined IP address 192.168.39.155 and MAC address 52:54:00:3d:ed:07 in network mk-old-k8s-version-905554
	I0603 12:07:21.349295   73662 main.go:141] libmachine: (old-k8s-version-905554) Calling .GetSSHPort
	I0603 12:07:21.349498   73662 main.go:141] libmachine: (old-k8s-version-905554) Calling .GetSSHKeyPath
	I0603 12:07:21.349680   73662 main.go:141] libmachine: (old-k8s-version-905554) Calling .GetSSHKeyPath
	I0603 12:07:21.349849   73662 main.go:141] libmachine: (old-k8s-version-905554) Calling .GetSSHUsername
	I0603 12:07:21.350036   73662 main.go:141] libmachine: Using SSH client type: native
	I0603 12:07:21.350187   73662 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.155 22 <nil> <nil>}
	I0603 12:07:21.350204   73662 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-905554' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-905554/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-905554' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0603 12:07:21.467941   73662 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0603 12:07:21.467970   73662 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19008-7755/.minikube CaCertPath:/home/jenkins/minikube-integration/19008-7755/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19008-7755/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19008-7755/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19008-7755/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19008-7755/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19008-7755/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19008-7755/.minikube}
	I0603 12:07:21.467999   73662 buildroot.go:174] setting up certificates
	I0603 12:07:21.468008   73662 provision.go:84] configureAuth start
	I0603 12:07:21.468021   73662 main.go:141] libmachine: (old-k8s-version-905554) Calling .GetMachineName
	I0603 12:07:21.468308   73662 main.go:141] libmachine: (old-k8s-version-905554) Calling .GetIP
	I0603 12:07:21.470801   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | domain old-k8s-version-905554 has defined MAC address 52:54:00:3d:ed:07 in network mk-old-k8s-version-905554
	I0603 12:07:21.471158   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:ed:07", ip: ""} in network mk-old-k8s-version-905554: {Iface:virbr1 ExpiryTime:2024-06-03 13:07:14 +0000 UTC Type:0 Mac:52:54:00:3d:ed:07 Iaid: IPaddr:192.168.39.155 Prefix:24 Hostname:old-k8s-version-905554 Clientid:01:52:54:00:3d:ed:07}
	I0603 12:07:21.471185   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | domain old-k8s-version-905554 has defined IP address 192.168.39.155 and MAC address 52:54:00:3d:ed:07 in network mk-old-k8s-version-905554
	I0603 12:07:21.471336   73662 main.go:141] libmachine: (old-k8s-version-905554) Calling .GetSSHHostname
	I0603 12:07:21.473733   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | domain old-k8s-version-905554 has defined MAC address 52:54:00:3d:ed:07 in network mk-old-k8s-version-905554
	I0603 12:07:21.474058   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:ed:07", ip: ""} in network mk-old-k8s-version-905554: {Iface:virbr1 ExpiryTime:2024-06-03 13:07:14 +0000 UTC Type:0 Mac:52:54:00:3d:ed:07 Iaid: IPaddr:192.168.39.155 Prefix:24 Hostname:old-k8s-version-905554 Clientid:01:52:54:00:3d:ed:07}
	I0603 12:07:21.474092   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | domain old-k8s-version-905554 has defined IP address 192.168.39.155 and MAC address 52:54:00:3d:ed:07 in network mk-old-k8s-version-905554
	I0603 12:07:21.474276   73662 provision.go:143] copyHostCerts
	I0603 12:07:21.474355   73662 exec_runner.go:144] found /home/jenkins/minikube-integration/19008-7755/.minikube/ca.pem, removing ...
	I0603 12:07:21.474370   73662 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19008-7755/.minikube/ca.pem
	I0603 12:07:21.474429   73662 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19008-7755/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19008-7755/.minikube/ca.pem (1082 bytes)
	I0603 12:07:21.474534   73662 exec_runner.go:144] found /home/jenkins/minikube-integration/19008-7755/.minikube/cert.pem, removing ...
	I0603 12:07:21.474546   73662 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19008-7755/.minikube/cert.pem
	I0603 12:07:21.474577   73662 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19008-7755/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19008-7755/.minikube/cert.pem (1123 bytes)
	I0603 12:07:21.474645   73662 exec_runner.go:144] found /home/jenkins/minikube-integration/19008-7755/.minikube/key.pem, removing ...
	I0603 12:07:21.474654   73662 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19008-7755/.minikube/key.pem
	I0603 12:07:21.474680   73662 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19008-7755/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19008-7755/.minikube/key.pem (1679 bytes)
	I0603 12:07:21.474738   73662 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19008-7755/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19008-7755/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19008-7755/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-905554 san=[127.0.0.1 192.168.39.155 localhost minikube old-k8s-version-905554]
	I0603 12:07:21.720184   73662 provision.go:177] copyRemoteCerts
	I0603 12:07:21.720251   73662 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0603 12:07:21.720284   73662 main.go:141] libmachine: (old-k8s-version-905554) Calling .GetSSHHostname
	I0603 12:07:21.723338   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | domain old-k8s-version-905554 has defined MAC address 52:54:00:3d:ed:07 in network mk-old-k8s-version-905554
	I0603 12:07:21.723752   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:ed:07", ip: ""} in network mk-old-k8s-version-905554: {Iface:virbr1 ExpiryTime:2024-06-03 13:07:14 +0000 UTC Type:0 Mac:52:54:00:3d:ed:07 Iaid: IPaddr:192.168.39.155 Prefix:24 Hostname:old-k8s-version-905554 Clientid:01:52:54:00:3d:ed:07}
	I0603 12:07:21.723786   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | domain old-k8s-version-905554 has defined IP address 192.168.39.155 and MAC address 52:54:00:3d:ed:07 in network mk-old-k8s-version-905554
	I0603 12:07:21.723993   73662 main.go:141] libmachine: (old-k8s-version-905554) Calling .GetSSHPort
	I0603 12:07:21.724208   73662 main.go:141] libmachine: (old-k8s-version-905554) Calling .GetSSHKeyPath
	I0603 12:07:21.724394   73662 main.go:141] libmachine: (old-k8s-version-905554) Calling .GetSSHUsername
	I0603 12:07:21.724615   73662 sshutil.go:53] new ssh client: &{IP:192.168.39.155 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19008-7755/.minikube/machines/old-k8s-version-905554/id_rsa Username:docker}
	I0603 12:07:21.809640   73662 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0603 12:07:21.834750   73662 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0603 12:07:21.858691   73662 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0603 12:07:21.887839   73662 provision.go:87] duration metric: took 419.817381ms to configureAuth
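configureAuth regenerates the machine's server certificate, signed by the CA under .minikube/certs and carrying the SANs listed by provision.go above (127.0.0.1, the VM IP, localhost, minikube, the machine name), and then copies server.pem, server-key.pem and ca.pem into /etc/docker on the guest. A condensed crypto/x509 sketch of issuing such a SAN-bearing certificate (a throwaway CA is generated inline so the example runs end to end; this is not minikube's provision code):

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// In minikube the CA under .minikube/certs already exists; here one is created inline.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(10, 0, 0),
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
		IsCA:                  true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Server certificate with the SANs reported by provision.go above.
	key, _ := rsa.GenerateKey(rand.Reader, 2048)
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.old-k8s-version-905554"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(1, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"localhost", "minikube", "old-k8s-version-905554"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.155")},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}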
	I0603 12:07:21.887871   73662 buildroot.go:189] setting minikube options for container-runtime
	I0603 12:07:21.888061   73662 config.go:182] Loaded profile config "old-k8s-version-905554": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0603 12:07:21.888145   73662 main.go:141] libmachine: (old-k8s-version-905554) Calling .GetSSHHostname
	I0603 12:07:21.891350   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | domain old-k8s-version-905554 has defined MAC address 52:54:00:3d:ed:07 in network mk-old-k8s-version-905554
	I0603 12:07:21.891737   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:ed:07", ip: ""} in network mk-old-k8s-version-905554: {Iface:virbr1 ExpiryTime:2024-06-03 13:07:14 +0000 UTC Type:0 Mac:52:54:00:3d:ed:07 Iaid: IPaddr:192.168.39.155 Prefix:24 Hostname:old-k8s-version-905554 Clientid:01:52:54:00:3d:ed:07}
	I0603 12:07:21.891773   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | domain old-k8s-version-905554 has defined IP address 192.168.39.155 and MAC address 52:54:00:3d:ed:07 in network mk-old-k8s-version-905554
	I0603 12:07:21.891933   73662 main.go:141] libmachine: (old-k8s-version-905554) Calling .GetSSHPort
	I0603 12:07:21.892084   73662 main.go:141] libmachine: (old-k8s-version-905554) Calling .GetSSHKeyPath
	I0603 12:07:21.892278   73662 main.go:141] libmachine: (old-k8s-version-905554) Calling .GetSSHKeyPath
	I0603 12:07:21.892447   73662 main.go:141] libmachine: (old-k8s-version-905554) Calling .GetSSHUsername
	I0603 12:07:21.892608   73662 main.go:141] libmachine: Using SSH client type: native
	I0603 12:07:21.892822   73662 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.155 22 <nil> <nil>}
	I0603 12:07:21.892845   73662 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0603 12:07:22.173662   73662 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0603 12:07:22.173691   73662 machine.go:97] duration metric: took 1.065894044s to provisionDockerMachine
	I0603 12:07:22.173705   73662 start.go:293] postStartSetup for "old-k8s-version-905554" (driver="kvm2")
	I0603 12:07:22.173718   73662 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0603 12:07:22.173738   73662 main.go:141] libmachine: (old-k8s-version-905554) Calling .DriverName
	I0603 12:07:22.174119   73662 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0603 12:07:22.174154   73662 main.go:141] libmachine: (old-k8s-version-905554) Calling .GetSSHHostname
	I0603 12:07:22.176861   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | domain old-k8s-version-905554 has defined MAC address 52:54:00:3d:ed:07 in network mk-old-k8s-version-905554
	I0603 12:07:22.177152   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:ed:07", ip: ""} in network mk-old-k8s-version-905554: {Iface:virbr1 ExpiryTime:2024-06-03 13:07:14 +0000 UTC Type:0 Mac:52:54:00:3d:ed:07 Iaid: IPaddr:192.168.39.155 Prefix:24 Hostname:old-k8s-version-905554 Clientid:01:52:54:00:3d:ed:07}
	I0603 12:07:22.177184   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | domain old-k8s-version-905554 has defined IP address 192.168.39.155 and MAC address 52:54:00:3d:ed:07 in network mk-old-k8s-version-905554
	I0603 12:07:22.177325   73662 main.go:141] libmachine: (old-k8s-version-905554) Calling .GetSSHPort
	I0603 12:07:22.177505   73662 main.go:141] libmachine: (old-k8s-version-905554) Calling .GetSSHKeyPath
	I0603 12:07:22.177632   73662 main.go:141] libmachine: (old-k8s-version-905554) Calling .GetSSHUsername
	I0603 12:07:22.177764   73662 sshutil.go:53] new ssh client: &{IP:192.168.39.155 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19008-7755/.minikube/machines/old-k8s-version-905554/id_rsa Username:docker}
	I0603 12:07:22.263119   73662 ssh_runner.go:195] Run: cat /etc/os-release
	I0603 12:07:22.269815   73662 info.go:137] Remote host: Buildroot 2023.02.9
	I0603 12:07:22.269844   73662 filesync.go:126] Scanning /home/jenkins/minikube-integration/19008-7755/.minikube/addons for local assets ...
	I0603 12:07:22.269937   73662 filesync.go:126] Scanning /home/jenkins/minikube-integration/19008-7755/.minikube/files for local assets ...
	I0603 12:07:22.270041   73662 filesync.go:149] local asset: /home/jenkins/minikube-integration/19008-7755/.minikube/files/etc/ssl/certs/150282.pem -> 150282.pem in /etc/ssl/certs
	I0603 12:07:22.270320   73662 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0603 12:07:22.284032   73662 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/files/etc/ssl/certs/150282.pem --> /etc/ssl/certs/150282.pem (1708 bytes)
	I0603 12:07:22.309226   73662 start.go:296] duration metric: took 135.507592ms for postStartSetup
	I0603 12:07:22.309267   73662 fix.go:56] duration metric: took 19.425215079s for fixHost
	I0603 12:07:22.309291   73662 main.go:141] libmachine: (old-k8s-version-905554) Calling .GetSSHHostname
	I0603 12:07:22.311759   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | domain old-k8s-version-905554 has defined MAC address 52:54:00:3d:ed:07 in network mk-old-k8s-version-905554
	I0603 12:07:22.312031   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:ed:07", ip: ""} in network mk-old-k8s-version-905554: {Iface:virbr1 ExpiryTime:2024-06-03 13:07:14 +0000 UTC Type:0 Mac:52:54:00:3d:ed:07 Iaid: IPaddr:192.168.39.155 Prefix:24 Hostname:old-k8s-version-905554 Clientid:01:52:54:00:3d:ed:07}
	I0603 12:07:22.312062   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | domain old-k8s-version-905554 has defined IP address 192.168.39.155 and MAC address 52:54:00:3d:ed:07 in network mk-old-k8s-version-905554
	I0603 12:07:22.312244   73662 main.go:141] libmachine: (old-k8s-version-905554) Calling .GetSSHPort
	I0603 12:07:22.312436   73662 main.go:141] libmachine: (old-k8s-version-905554) Calling .GetSSHKeyPath
	I0603 12:07:22.312602   73662 main.go:141] libmachine: (old-k8s-version-905554) Calling .GetSSHKeyPath
	I0603 12:07:22.312740   73662 main.go:141] libmachine: (old-k8s-version-905554) Calling .GetSSHUsername
	I0603 12:07:22.312877   73662 main.go:141] libmachine: Using SSH client type: native
	I0603 12:07:22.313072   73662 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.155 22 <nil> <nil>}
	I0603 12:07:22.313088   73662 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0603 12:07:22.423838   73662 main.go:141] libmachine: SSH cmd err, output: <nil>: 1717416442.379680785
	
	I0603 12:07:22.423857   73662 fix.go:216] guest clock: 1717416442.379680785
	I0603 12:07:22.423864   73662 fix.go:229] Guest: 2024-06-03 12:07:22.379680785 +0000 UTC Remote: 2024-06-03 12:07:22.30927263 +0000 UTC m=+262.252197630 (delta=70.408155ms)
	I0603 12:07:22.423886   73662 fix.go:200] guest clock delta is within tolerance: 70.408155ms
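	The fixHost step above compares the guest clock against the host clock and accepts the ~70ms delta without resynchronising. A minimal Go sketch of that kind of tolerance check follows; the 2s threshold and the function names are illustrative assumptions, not minikube's actual fix.go implementation.

	package main

	import (
		"fmt"
		"time"
	)

	// maxClockDelta is a hypothetical tolerance; the real threshold lives in
	// minikube's fix.go and is not reproduced here.
	const maxClockDelta = 2 * time.Second

	// withinTolerance reports whether the guest clock is close enough to the
	// host clock that no resynchronisation is needed.
	func withinTolerance(guest, host time.Time) (time.Duration, bool) {
		delta := guest.Sub(host)
		if delta < 0 {
			delta = -delta
		}
		return delta, delta <= maxClockDelta
	}

	func main() {
		host := time.Now()
		guest := host.Add(70 * time.Millisecond) // a delta similar to the log above
		if d, ok := withinTolerance(guest, host); ok {
			fmt.Printf("guest clock delta is within tolerance: %v\n", d)
		} else {
			fmt.Printf("guest clock delta too large: %v\n", d)
		}
	}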
	I0603 12:07:22.423892   73662 start.go:83] releasing machines lock for "old-k8s-version-905554", held for 19.539865965s
	I0603 12:07:22.423927   73662 main.go:141] libmachine: (old-k8s-version-905554) Calling .DriverName
	I0603 12:07:22.424202   73662 main.go:141] libmachine: (old-k8s-version-905554) Calling .GetIP
	I0603 12:07:22.427358   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | domain old-k8s-version-905554 has defined MAC address 52:54:00:3d:ed:07 in network mk-old-k8s-version-905554
	I0603 12:07:22.427799   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:ed:07", ip: ""} in network mk-old-k8s-version-905554: {Iface:virbr1 ExpiryTime:2024-06-03 13:07:14 +0000 UTC Type:0 Mac:52:54:00:3d:ed:07 Iaid: IPaddr:192.168.39.155 Prefix:24 Hostname:old-k8s-version-905554 Clientid:01:52:54:00:3d:ed:07}
	I0603 12:07:22.427833   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | domain old-k8s-version-905554 has defined IP address 192.168.39.155 and MAC address 52:54:00:3d:ed:07 in network mk-old-k8s-version-905554
	I0603 12:07:22.428006   73662 main.go:141] libmachine: (old-k8s-version-905554) Calling .DriverName
	I0603 12:07:22.428619   73662 main.go:141] libmachine: (old-k8s-version-905554) Calling .DriverName
	I0603 12:07:22.428817   73662 main.go:141] libmachine: (old-k8s-version-905554) Calling .DriverName
	I0603 12:07:22.428898   73662 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0603 12:07:22.428951   73662 main.go:141] libmachine: (old-k8s-version-905554) Calling .GetSSHHostname
	I0603 12:07:22.429242   73662 ssh_runner.go:195] Run: cat /version.json
	I0603 12:07:22.429269   73662 main.go:141] libmachine: (old-k8s-version-905554) Calling .GetSSHHostname
	I0603 12:07:22.431998   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | domain old-k8s-version-905554 has defined MAC address 52:54:00:3d:ed:07 in network mk-old-k8s-version-905554
	I0603 12:07:22.432244   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | domain old-k8s-version-905554 has defined MAC address 52:54:00:3d:ed:07 in network mk-old-k8s-version-905554
	I0603 12:07:22.432333   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:ed:07", ip: ""} in network mk-old-k8s-version-905554: {Iface:virbr1 ExpiryTime:2024-06-03 13:07:14 +0000 UTC Type:0 Mac:52:54:00:3d:ed:07 Iaid: IPaddr:192.168.39.155 Prefix:24 Hostname:old-k8s-version-905554 Clientid:01:52:54:00:3d:ed:07}
	I0603 12:07:22.432365   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | domain old-k8s-version-905554 has defined IP address 192.168.39.155 and MAC address 52:54:00:3d:ed:07 in network mk-old-k8s-version-905554
	I0603 12:07:22.432608   73662 main.go:141] libmachine: (old-k8s-version-905554) Calling .GetSSHPort
	I0603 12:07:22.432779   73662 main.go:141] libmachine: (old-k8s-version-905554) Calling .GetSSHKeyPath
	I0603 12:07:22.432797   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:ed:07", ip: ""} in network mk-old-k8s-version-905554: {Iface:virbr1 ExpiryTime:2024-06-03 13:07:14 +0000 UTC Type:0 Mac:52:54:00:3d:ed:07 Iaid: IPaddr:192.168.39.155 Prefix:24 Hostname:old-k8s-version-905554 Clientid:01:52:54:00:3d:ed:07}
	I0603 12:07:22.432818   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | domain old-k8s-version-905554 has defined IP address 192.168.39.155 and MAC address 52:54:00:3d:ed:07 in network mk-old-k8s-version-905554
	I0603 12:07:22.433032   73662 main.go:141] libmachine: (old-k8s-version-905554) Calling .GetSSHPort
	I0603 12:07:22.433044   73662 main.go:141] libmachine: (old-k8s-version-905554) Calling .GetSSHUsername
	I0603 12:07:22.433244   73662 sshutil.go:53] new ssh client: &{IP:192.168.39.155 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19008-7755/.minikube/machines/old-k8s-version-905554/id_rsa Username:docker}
	I0603 12:07:22.433260   73662 main.go:141] libmachine: (old-k8s-version-905554) Calling .GetSSHKeyPath
	I0603 12:07:22.433489   73662 main.go:141] libmachine: (old-k8s-version-905554) Calling .GetSSHUsername
	I0603 12:07:22.433629   73662 sshutil.go:53] new ssh client: &{IP:192.168.39.155 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19008-7755/.minikube/machines/old-k8s-version-905554/id_rsa Username:docker}
	I0603 12:07:22.512743   73662 ssh_runner.go:195] Run: systemctl --version
	I0603 12:07:22.538343   73662 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0603 12:07:22.691125   73662 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0603 12:07:22.697547   73662 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0603 12:07:22.697594   73662 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0603 12:07:22.714213   73662 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0603 12:07:22.714237   73662 start.go:494] detecting cgroup driver to use...
	I0603 12:07:22.714302   73662 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0603 12:07:22.735173   73662 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0603 12:07:22.749345   73662 docker.go:217] disabling cri-docker service (if available) ...
	I0603 12:07:22.749403   73662 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0603 12:07:22.763133   73662 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0603 12:07:22.776844   73662 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0603 12:07:22.906859   73662 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0603 12:07:23.071700   73662 docker.go:233] disabling docker service ...
	I0603 12:07:23.071767   73662 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0603 12:07:23.088439   73662 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0603 12:07:23.102097   73662 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0603 12:07:23.238693   73662 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0603 12:07:23.390561   73662 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0603 12:07:23.410039   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0603 12:07:23.434983   73662 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0603 12:07:23.435125   73662 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 12:07:23.448358   73662 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0603 12:07:23.448430   73662 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 12:07:23.460973   73662 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 12:07:23.473384   73662 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 12:07:23.486096   73662 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
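	The sed commands above rewrite the cri-o drop-in config to pin the pause image and switch the cgroup manager to cgroupfs. The following Go sketch shows an equivalent key = value rewrite; the config path matches the log, but the helper itself is a hypothetical illustration that treats the file as plain lines rather than parsing cri-o's TOML.

	package main

	import (
		"fmt"
		"os"
		"regexp"
	)

	// setCrioOption rewrites (or appends) a `key = value` line in a cri-o drop-in
	// config, mirroring what the sed invocations above do for pause_image and
	// cgroup_manager. Error handling is minimal.
	func setCrioOption(path, key, value string) error {
		data, err := os.ReadFile(path)
		if err != nil {
			return err
		}
		line := fmt.Sprintf("%s = %q", key, value)
		re := regexp.MustCompile(`(?m)^.*` + regexp.QuoteMeta(key) + ` = .*$`)
		if re.Match(data) {
			data = re.ReplaceAll(data, []byte(line))
		} else {
			data = append(data, []byte("\n"+line+"\n")...)
		}
		return os.WriteFile(path, data, 0o644)
	}

	func main() {
		conf := "/etc/crio/crio.conf.d/02-crio.conf" // as in the log; needs root
		if err := setCrioOption(conf, "cgroup_manager", "cgroupfs"); err != nil {
			fmt.Fprintln(os.Stderr, err)
		}
		if err := setCrioOption(conf, "pause_image", "registry.k8s.io/pause:3.2"); err != nil {
			fmt.Fprintln(os.Stderr, err)
		}
	}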
	I0603 12:07:23.498744   73662 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0603 12:07:23.510913   73662 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0603 12:07:23.510968   73662 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0603 12:07:23.527740   73662 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0603 12:07:23.542547   73662 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0603 12:07:23.719963   73662 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0603 12:07:23.875772   73662 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0603 12:07:23.875843   73662 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0603 12:07:23.882164   73662 start.go:562] Will wait 60s for crictl version
	I0603 12:07:23.882250   73662 ssh_runner.go:195] Run: which crictl
	I0603 12:07:23.886841   73662 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0603 12:07:23.933867   73662 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0603 12:07:23.933952   73662 ssh_runner.go:195] Run: crio --version
	I0603 12:07:23.965258   73662 ssh_runner.go:195] Run: crio --version
	I0603 12:07:23.995457   73662 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0603 12:07:20.104355   73294 pod_ready.go:102] pod "kube-scheduler-default-k8s-diff-port-196710" in "kube-system" namespace has status "Ready":"False"
	I0603 12:07:22.104808   73294 pod_ready.go:102] pod "kube-scheduler-default-k8s-diff-port-196710" in "kube-system" namespace has status "Ready":"False"
	I0603 12:07:23.106090   73294 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-196710" in "kube-system" namespace has status "Ready":"True"
	I0603 12:07:23.106109   73294 pod_ready.go:81] duration metric: took 5.007700483s for pod "kube-scheduler-default-k8s-diff-port-196710" in "kube-system" namespace to be "Ready" ...
	I0603 12:07:23.106118   73294 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace to be "Ready" ...
	I0603 12:07:22.514715   72964 main.go:141] libmachine: (embed-certs-725022) Calling .Start
	I0603 12:07:22.514937   72964 main.go:141] libmachine: (embed-certs-725022) Ensuring networks are active...
	I0603 12:07:22.515826   72964 main.go:141] libmachine: (embed-certs-725022) Ensuring network default is active
	I0603 12:07:22.516261   72964 main.go:141] libmachine: (embed-certs-725022) Ensuring network mk-embed-certs-725022 is active
	I0603 12:07:22.516748   72964 main.go:141] libmachine: (embed-certs-725022) Getting domain xml...
	I0603 12:07:22.517639   72964 main.go:141] libmachine: (embed-certs-725022) Creating domain...
	I0603 12:07:23.858964   72964 main.go:141] libmachine: (embed-certs-725022) Waiting to get IP...
	I0603 12:07:23.859920   72964 main.go:141] libmachine: (embed-certs-725022) DBG | domain embed-certs-725022 has defined MAC address 52:54:00:ba:41:8c in network mk-embed-certs-725022
	I0603 12:07:23.860386   72964 main.go:141] libmachine: (embed-certs-725022) DBG | unable to find current IP address of domain embed-certs-725022 in network mk-embed-certs-725022
	I0603 12:07:23.860418   72964 main.go:141] libmachine: (embed-certs-725022) DBG | I0603 12:07:23.860352   74834 retry.go:31] will retry after 246.280691ms: waiting for machine to come up
	I0603 12:07:24.108680   72964 main.go:141] libmachine: (embed-certs-725022) DBG | domain embed-certs-725022 has defined MAC address 52:54:00:ba:41:8c in network mk-embed-certs-725022
	I0603 12:07:24.109222   72964 main.go:141] libmachine: (embed-certs-725022) DBG | unable to find current IP address of domain embed-certs-725022 in network mk-embed-certs-725022
	I0603 12:07:24.109349   72964 main.go:141] libmachine: (embed-certs-725022) DBG | I0603 12:07:24.109272   74834 retry.go:31] will retry after 291.625816ms: waiting for machine to come up
	I0603 12:07:24.402895   72964 main.go:141] libmachine: (embed-certs-725022) DBG | domain embed-certs-725022 has defined MAC address 52:54:00:ba:41:8c in network mk-embed-certs-725022
	I0603 12:07:24.403357   72964 main.go:141] libmachine: (embed-certs-725022) DBG | unable to find current IP address of domain embed-certs-725022 in network mk-embed-certs-725022
	I0603 12:07:24.403383   72964 main.go:141] libmachine: (embed-certs-725022) DBG | I0603 12:07:24.403319   74834 retry.go:31] will retry after 466.605521ms: waiting for machine to come up
	I0603 12:07:24.872278   72964 main.go:141] libmachine: (embed-certs-725022) DBG | domain embed-certs-725022 has defined MAC address 52:54:00:ba:41:8c in network mk-embed-certs-725022
	I0603 12:07:24.872823   72964 main.go:141] libmachine: (embed-certs-725022) DBG | unable to find current IP address of domain embed-certs-725022 in network mk-embed-certs-725022
	I0603 12:07:24.872847   72964 main.go:141] libmachine: (embed-certs-725022) DBG | I0603 12:07:24.872783   74834 retry.go:31] will retry after 382.19855ms: waiting for machine to come up
	I0603 12:07:23.996608   73662 main.go:141] libmachine: (old-k8s-version-905554) Calling .GetIP
	I0603 12:07:23.999648   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | domain old-k8s-version-905554 has defined MAC address 52:54:00:3d:ed:07 in network mk-old-k8s-version-905554
	I0603 12:07:23.999982   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:ed:07", ip: ""} in network mk-old-k8s-version-905554: {Iface:virbr1 ExpiryTime:2024-06-03 13:07:14 +0000 UTC Type:0 Mac:52:54:00:3d:ed:07 Iaid: IPaddr:192.168.39.155 Prefix:24 Hostname:old-k8s-version-905554 Clientid:01:52:54:00:3d:ed:07}
	I0603 12:07:24.000010   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | domain old-k8s-version-905554 has defined IP address 192.168.39.155 and MAC address 52:54:00:3d:ed:07 in network mk-old-k8s-version-905554
	I0603 12:07:24.000257   73662 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0603 12:07:24.004569   73662 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0603 12:07:24.019027   73662 kubeadm.go:877] updating cluster {Name:old-k8s-version-905554 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersio
n:v1.20.0 ClusterName:old-k8s-version-905554 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.155 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fa
lse MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0603 12:07:24.019206   73662 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0603 12:07:24.019257   73662 ssh_runner.go:195] Run: sudo crictl images --output json
	I0603 12:07:24.068916   73662 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0603 12:07:24.069007   73662 ssh_runner.go:195] Run: which lz4
	I0603 12:07:24.074831   73662 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0603 12:07:24.081154   73662 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0603 12:07:24.081186   73662 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0603 12:07:22.074657   73179 pod_ready.go:92] pod "kube-scheduler-no-preload-602118" in "kube-system" namespace has status "Ready":"True"
	I0603 12:07:22.074691   73179 pod_ready.go:81] duration metric: took 11.006759361s for pod "kube-scheduler-no-preload-602118" in "kube-system" namespace to be "Ready" ...
	I0603 12:07:22.074706   73179 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace to be "Ready" ...
	I0603 12:07:24.081308   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:07:25.114101   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:07:27.115528   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:07:25.256326   72964 main.go:141] libmachine: (embed-certs-725022) DBG | domain embed-certs-725022 has defined MAC address 52:54:00:ba:41:8c in network mk-embed-certs-725022
	I0603 12:07:25.256830   72964 main.go:141] libmachine: (embed-certs-725022) DBG | unable to find current IP address of domain embed-certs-725022 in network mk-embed-certs-725022
	I0603 12:07:25.256856   72964 main.go:141] libmachine: (embed-certs-725022) DBG | I0603 12:07:25.256779   74834 retry.go:31] will retry after 541.296238ms: waiting for machine to come up
	I0603 12:07:25.799738   72964 main.go:141] libmachine: (embed-certs-725022) DBG | domain embed-certs-725022 has defined MAC address 52:54:00:ba:41:8c in network mk-embed-certs-725022
	I0603 12:07:25.800308   72964 main.go:141] libmachine: (embed-certs-725022) DBG | unable to find current IP address of domain embed-certs-725022 in network mk-embed-certs-725022
	I0603 12:07:25.800340   72964 main.go:141] libmachine: (embed-certs-725022) DBG | I0603 12:07:25.800260   74834 retry.go:31] will retry after 605.157326ms: waiting for machine to come up
	I0603 12:07:26.406748   72964 main.go:141] libmachine: (embed-certs-725022) DBG | domain embed-certs-725022 has defined MAC address 52:54:00:ba:41:8c in network mk-embed-certs-725022
	I0603 12:07:26.407332   72964 main.go:141] libmachine: (embed-certs-725022) DBG | unable to find current IP address of domain embed-certs-725022 in network mk-embed-certs-725022
	I0603 12:07:26.407357   72964 main.go:141] libmachine: (embed-certs-725022) DBG | I0603 12:07:26.407281   74834 retry.go:31] will retry after 830.816526ms: waiting for machine to come up
	I0603 12:07:27.239300   72964 main.go:141] libmachine: (embed-certs-725022) DBG | domain embed-certs-725022 has defined MAC address 52:54:00:ba:41:8c in network mk-embed-certs-725022
	I0603 12:07:27.239746   72964 main.go:141] libmachine: (embed-certs-725022) DBG | unable to find current IP address of domain embed-certs-725022 in network mk-embed-certs-725022
	I0603 12:07:27.239777   72964 main.go:141] libmachine: (embed-certs-725022) DBG | I0603 12:07:27.239708   74834 retry.go:31] will retry after 994.729433ms: waiting for machine to come up
	I0603 12:07:28.236261   72964 main.go:141] libmachine: (embed-certs-725022) DBG | domain embed-certs-725022 has defined MAC address 52:54:00:ba:41:8c in network mk-embed-certs-725022
	I0603 12:07:28.236839   72964 main.go:141] libmachine: (embed-certs-725022) DBG | unable to find current IP address of domain embed-certs-725022 in network mk-embed-certs-725022
	I0603 12:07:28.236865   72964 main.go:141] libmachine: (embed-certs-725022) DBG | I0603 12:07:28.236783   74834 retry.go:31] will retry after 1.756001067s: waiting for machine to come up
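	The retry.go lines interleaved above poll libvirt for the embed-certs-725022 VM's DHCP lease, waiting a little longer after each miss. A rough Go sketch of such a grow-and-jitter retry loop follows; the backoff factors and function names are assumptions for illustration, not minikube's actual retry package.

	package main

	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)

	// waitForIP polls lookup until it returns an address, sleeping a little
	// longer (plus jitter) after each failed attempt, similar in spirit to the
	// "will retry after ..." lines above.
	func waitForIP(lookup func() (string, error), attempts int) (string, error) {
		backoff := 250 * time.Millisecond
		for i := 0; i < attempts; i++ {
			if ip, err := lookup(); err == nil {
				return ip, nil
			}
			jitter := time.Duration(rand.Int63n(int64(backoff) / 2))
			time.Sleep(backoff + jitter)
			backoff = backoff * 3 / 2 // grow the wait between attempts
		}
		return "", errors.New("machine never reported an IP address")
	}

	func main() {
		calls := 0
		ip, err := waitForIP(func() (string, error) {
			calls++
			if calls < 4 {
				return "", errors.New("unable to find current IP address")
			}
			return "192.168.39.155", nil
		}, 10)
		fmt.Println(ip, err)
	}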
	I0603 12:07:25.794532   73662 crio.go:462] duration metric: took 1.71973848s to copy over tarball
	I0603 12:07:25.794618   73662 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0603 12:07:28.897711   73662 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.103055845s)
	I0603 12:07:28.897742   73662 crio.go:469] duration metric: took 3.103177549s to extract the tarball
	I0603 12:07:28.897752   73662 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0603 12:07:28.945269   73662 ssh_runner.go:195] Run: sudo crictl images --output json
	I0603 12:07:28.982973   73662 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0603 12:07:28.982998   73662 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0603 12:07:28.983068   73662 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0603 12:07:28.983099   73662 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0603 12:07:28.983134   73662 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0603 12:07:28.983191   73662 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0603 12:07:28.983104   73662 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0603 12:07:28.983282   73662 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0603 12:07:28.983280   73662 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0603 12:07:28.983525   73662 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0603 12:07:28.984988   73662 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0603 12:07:28.985005   73662 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0603 12:07:28.984997   73662 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0603 12:07:28.985007   73662 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0603 12:07:28.985026   73662 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0603 12:07:28.985190   73662 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0603 12:07:28.985244   73662 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0603 12:07:28.985288   73662 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0603 12:07:29.136387   73662 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0603 12:07:29.155867   73662 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0603 12:07:29.173686   73662 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0603 12:07:29.181970   73662 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0603 12:07:29.185877   73662 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0603 12:07:29.188684   73662 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0603 12:07:29.201080   73662 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0603 12:07:29.201134   73662 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0603 12:07:29.201174   73662 ssh_runner.go:195] Run: which crictl
	I0603 12:07:29.252186   73662 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0603 12:07:29.252232   73662 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0603 12:07:29.252308   73662 ssh_runner.go:195] Run: which crictl
	I0603 12:07:29.272578   73662 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0603 12:07:29.306804   73662 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0603 12:07:29.306856   73662 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0603 12:07:29.306880   73662 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0603 12:07:29.306901   73662 ssh_runner.go:195] Run: which crictl
	I0603 12:07:29.306915   73662 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0603 12:07:29.306928   73662 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0603 12:07:29.306952   73662 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0603 12:07:29.306961   73662 ssh_runner.go:195] Run: which crictl
	I0603 12:07:29.306988   73662 ssh_runner.go:195] Run: which crictl
	I0603 12:07:29.322141   73662 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0603 12:07:29.322220   73662 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0603 12:07:29.322238   73662 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0603 12:07:29.322264   73662 ssh_runner.go:195] Run: which crictl
	I0603 12:07:29.322210   73662 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0603 12:07:29.378678   73662 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0603 12:07:29.378717   73662 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0603 12:07:29.378726   73662 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0603 12:07:29.378775   73662 ssh_runner.go:195] Run: which crictl
	I0603 12:07:29.378831   73662 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0603 12:07:29.378898   73662 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0603 12:07:29.401173   73662 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/19008-7755/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0603 12:07:29.401229   73662 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0603 12:07:29.401396   73662 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/19008-7755/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0603 12:07:29.450497   73662 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/19008-7755/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0603 12:07:29.450531   73662 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0603 12:07:29.488109   73662 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/19008-7755/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0603 12:07:29.488191   73662 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/19008-7755/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0603 12:07:29.488191   73662 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/19008-7755/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0603 12:07:29.504909   73662 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/19008-7755/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0603 12:07:29.931311   73662 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0603 12:07:30.078311   73662 cache_images.go:92] duration metric: took 1.095295059s to LoadCachedImages
	W0603 12:07:30.078412   73662 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/19008-7755/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0: no such file or directory
	I0603 12:07:30.078431   73662 kubeadm.go:928] updating node { 192.168.39.155 8443 v1.20.0 crio true true} ...
	I0603 12:07:30.078568   73662 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-905554 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.39.155
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-905554 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0603 12:07:30.078660   73662 ssh_runner.go:195] Run: crio config
	I0603 12:07:26.083566   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:07:28.084560   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:07:29.721426   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:07:32.114026   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:07:29.994115   72964 main.go:141] libmachine: (embed-certs-725022) DBG | domain embed-certs-725022 has defined MAC address 52:54:00:ba:41:8c in network mk-embed-certs-725022
	I0603 12:07:29.994576   72964 main.go:141] libmachine: (embed-certs-725022) DBG | unable to find current IP address of domain embed-certs-725022 in network mk-embed-certs-725022
	I0603 12:07:29.994654   72964 main.go:141] libmachine: (embed-certs-725022) DBG | I0603 12:07:29.994561   74834 retry.go:31] will retry after 1.667170312s: waiting for machine to come up
	I0603 12:07:31.664242   72964 main.go:141] libmachine: (embed-certs-725022) DBG | domain embed-certs-725022 has defined MAC address 52:54:00:ba:41:8c in network mk-embed-certs-725022
	I0603 12:07:31.664797   72964 main.go:141] libmachine: (embed-certs-725022) DBG | unable to find current IP address of domain embed-certs-725022 in network mk-embed-certs-725022
	I0603 12:07:31.664826   72964 main.go:141] libmachine: (embed-certs-725022) DBG | I0603 12:07:31.664752   74834 retry.go:31] will retry after 2.156675381s: waiting for machine to come up
	I0603 12:07:33.823700   72964 main.go:141] libmachine: (embed-certs-725022) DBG | domain embed-certs-725022 has defined MAC address 52:54:00:ba:41:8c in network mk-embed-certs-725022
	I0603 12:07:33.824202   72964 main.go:141] libmachine: (embed-certs-725022) DBG | unable to find current IP address of domain embed-certs-725022 in network mk-embed-certs-725022
	I0603 12:07:33.824241   72964 main.go:141] libmachine: (embed-certs-725022) DBG | I0603 12:07:33.824145   74834 retry.go:31] will retry after 3.067424613s: waiting for machine to come up
	I0603 12:07:30.129601   73662 cni.go:84] Creating CNI manager for ""
	I0603 12:07:30.180858   73662 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0603 12:07:30.180884   73662 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0603 12:07:30.180918   73662 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.155 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-905554 NodeName:old-k8s-version-905554 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.155"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.155 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt St
aticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0603 12:07:30.181104   73662 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.155
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-905554"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.155
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.155"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0603 12:07:30.181180   73662 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0603 12:07:30.192139   73662 binaries.go:44] Found k8s binaries, skipping transfer
	I0603 12:07:30.192202   73662 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0603 12:07:30.202078   73662 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0603 12:07:30.222968   73662 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0603 12:07:30.242794   73662 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I0603 12:07:30.263578   73662 ssh_runner.go:195] Run: grep 192.168.39.155	control-plane.minikube.internal$ /etc/hosts
	I0603 12:07:30.267535   73662 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.155	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0603 12:07:30.280543   73662 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0603 12:07:30.421251   73662 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0603 12:07:30.441243   73662 certs.go:68] Setting up /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/old-k8s-version-905554 for IP: 192.168.39.155
	I0603 12:07:30.441269   73662 certs.go:194] generating shared ca certs ...
	I0603 12:07:30.441299   73662 certs.go:226] acquiring lock for ca certs: {Name:mk50092df3bdecbf959926adc377ee06b75aacd9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 12:07:30.441485   73662 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19008-7755/.minikube/ca.key
	I0603 12:07:30.441546   73662 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19008-7755/.minikube/proxy-client-ca.key
	I0603 12:07:30.441559   73662 certs.go:256] generating profile certs ...
	I0603 12:07:30.441675   73662 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/old-k8s-version-905554/client.key
	I0603 12:07:30.465464   73662 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/old-k8s-version-905554/apiserver.key.0d34b22c
	I0603 12:07:30.465562   73662 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/old-k8s-version-905554/proxy-client.key
	I0603 12:07:30.465730   73662 certs.go:484] found cert: /home/jenkins/minikube-integration/19008-7755/.minikube/certs/15028.pem (1338 bytes)
	W0603 12:07:30.465775   73662 certs.go:480] ignoring /home/jenkins/minikube-integration/19008-7755/.minikube/certs/15028_empty.pem, impossibly tiny 0 bytes
	I0603 12:07:30.465787   73662 certs.go:484] found cert: /home/jenkins/minikube-integration/19008-7755/.minikube/certs/ca-key.pem (1679 bytes)
	I0603 12:07:30.465818   73662 certs.go:484] found cert: /home/jenkins/minikube-integration/19008-7755/.minikube/certs/ca.pem (1082 bytes)
	I0603 12:07:30.465855   73662 certs.go:484] found cert: /home/jenkins/minikube-integration/19008-7755/.minikube/certs/cert.pem (1123 bytes)
	I0603 12:07:30.465884   73662 certs.go:484] found cert: /home/jenkins/minikube-integration/19008-7755/.minikube/certs/key.pem (1679 bytes)
	I0603 12:07:30.465941   73662 certs.go:484] found cert: /home/jenkins/minikube-integration/19008-7755/.minikube/files/etc/ssl/certs/150282.pem (1708 bytes)
	I0603 12:07:30.466831   73662 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0603 12:07:30.517957   73662 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0603 12:07:30.554072   73662 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0603 12:07:30.610727   73662 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0603 12:07:30.663149   73662 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/old-k8s-version-905554/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0603 12:07:30.702313   73662 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/old-k8s-version-905554/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0603 12:07:30.735841   73662 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/old-k8s-version-905554/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0603 12:07:30.761517   73662 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/old-k8s-version-905554/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0603 12:07:30.793872   73662 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/files/etc/ssl/certs/150282.pem --> /usr/share/ca-certificates/150282.pem (1708 bytes)
	I0603 12:07:30.821613   73662 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0603 12:07:30.848030   73662 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/certs/15028.pem --> /usr/share/ca-certificates/15028.pem (1338 bytes)
	I0603 12:07:30.875016   73662 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0603 12:07:30.901749   73662 ssh_runner.go:195] Run: openssl version
	I0603 12:07:30.911485   73662 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15028.pem && ln -fs /usr/share/ca-certificates/15028.pem /etc/ssl/certs/15028.pem"
	I0603 12:07:30.923791   73662 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15028.pem
	I0603 12:07:30.928808   73662 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jun  3 10:51 /usr/share/ca-certificates/15028.pem
	I0603 12:07:30.928858   73662 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15028.pem
	I0603 12:07:30.934925   73662 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/15028.pem /etc/ssl/certs/51391683.0"
	I0603 12:07:30.946930   73662 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/150282.pem && ln -fs /usr/share/ca-certificates/150282.pem /etc/ssl/certs/150282.pem"
	I0603 12:07:30.958809   73662 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/150282.pem
	I0603 12:07:30.963687   73662 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jun  3 10:51 /usr/share/ca-certificates/150282.pem
	I0603 12:07:30.963748   73662 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/150282.pem
	I0603 12:07:30.969671   73662 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/150282.pem /etc/ssl/certs/3ec20f2e.0"
	I0603 12:07:30.981918   73662 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0603 12:07:30.994005   73662 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0603 12:07:30.999126   73662 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jun  3 10:39 /usr/share/ca-certificates/minikubeCA.pem
	I0603 12:07:30.999190   73662 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0603 12:07:31.005828   73662 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0603 12:07:31.017320   73662 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0603 12:07:31.021993   73662 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0603 12:07:31.028420   73662 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0603 12:07:31.034719   73662 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0603 12:07:31.041565   73662 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0603 12:07:31.048142   73662 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0603 12:07:31.053992   73662 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
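	The openssl x509 -checkend 86400 runs above verify that each control-plane certificate stays valid for at least another 24 hours before the cluster restart proceeds. The Go sketch below performs the same check with crypto/x509; the file path in main is a stand-in for the /var/lib/minikube/certs paths shown in the log.

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	// expiresWithin reports whether the first certificate in the PEM file will
	// expire within the given window, which is what
	// `openssl x509 -noout -checkend 86400` verifies for a 24h window.
	func expiresWithin(path string, window time.Duration) (bool, error) {
		data, err := os.ReadFile(path)
		if err != nil {
			return false, err
		}
		block, _ := pem.Decode(data)
		if block == nil {
			return false, fmt.Errorf("no PEM data in %s", path)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		return time.Now().Add(window).After(cert.NotAfter), nil
	}

	func main() {
		// Path is illustrative; inside the guest minikube checks
		// /var/lib/minikube/certs/apiserver-kubelet-client.crt and friends.
		soon, err := expiresWithin("apiserver-kubelet-client.crt", 24*time.Hour)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			return
		}
		fmt.Println("expires within 24h:", soon)
	}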
	I0603 12:07:31.060197   73662 kubeadm.go:391] StartCluster: {Name:old-k8s-version-905554 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v
1.20.0 ClusterName:old-k8s-version-905554 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.155 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false
MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0603 12:07:31.060324   73662 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0603 12:07:31.060361   73662 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0603 12:07:31.102996   73662 cri.go:89] found id: ""
	I0603 12:07:31.103083   73662 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0603 12:07:31.114546   73662 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0603 12:07:31.114566   73662 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0603 12:07:31.114573   73662 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0603 12:07:31.114619   73662 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0603 12:07:31.126042   73662 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0603 12:07:31.127358   73662 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-905554" does not appear in /home/jenkins/minikube-integration/19008-7755/kubeconfig
	I0603 12:07:31.128029   73662 kubeconfig.go:62] /home/jenkins/minikube-integration/19008-7755/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-905554" cluster setting kubeconfig missing "old-k8s-version-905554" context setting]
	I0603 12:07:31.128862   73662 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19008-7755/kubeconfig: {Name:mk2f45bf5da44c5570e14124ae482cac309d97b6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 12:07:31.247021   73662 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0603 12:07:31.258013   73662 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.39.155
	I0603 12:07:31.258054   73662 kubeadm.go:1154] stopping kube-system containers ...
	I0603 12:07:31.258065   73662 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0603 12:07:31.258119   73662 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0603 12:07:31.301991   73662 cri.go:89] found id: ""
	I0603 12:07:31.302065   73662 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0603 12:07:31.326132   73662 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0603 12:07:31.337333   73662 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0603 12:07:31.337355   73662 kubeadm.go:156] found existing configuration files:
	
	I0603 12:07:31.337396   73662 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0603 12:07:31.347256   73662 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0603 12:07:31.347300   73662 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0603 12:07:31.357463   73662 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0603 12:07:31.367810   73662 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0603 12:07:31.367867   73662 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0603 12:07:31.378092   73662 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0603 12:07:31.388911   73662 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0603 12:07:31.388959   73662 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0603 12:07:31.400327   73662 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0603 12:07:31.411937   73662 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0603 12:07:31.411984   73662 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0603 12:07:31.423929   73662 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0603 12:07:31.435914   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0603 12:07:31.563621   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0603 12:07:32.980144   73662 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.416481613s)
	I0603 12:07:32.980178   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0603 12:07:33.219383   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0603 12:07:33.320755   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0603 12:07:33.437964   73662 api_server.go:52] waiting for apiserver process to appear ...
	I0603 12:07:33.438070   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:07:33.938124   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:07:34.439012   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:07:34.938293   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:07:30.584019   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:07:33.081286   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:07:35.081436   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:07:34.613763   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:07:37.112059   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:07:39.113186   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:07:36.892928   72964 main.go:141] libmachine: (embed-certs-725022) DBG | domain embed-certs-725022 has defined MAC address 52:54:00:ba:41:8c in network mk-embed-certs-725022
	I0603 12:07:36.893405   72964 main.go:141] libmachine: (embed-certs-725022) DBG | unable to find current IP address of domain embed-certs-725022 in network mk-embed-certs-725022
	I0603 12:07:36.893432   72964 main.go:141] libmachine: (embed-certs-725022) DBG | I0603 12:07:36.893358   74834 retry.go:31] will retry after 3.786690644s: waiting for machine to come up
	I0603 12:07:35.438655   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:07:35.938894   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:07:36.438790   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:07:36.938720   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:07:37.438183   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:07:37.938442   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:07:38.438341   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:07:38.938738   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:07:39.438262   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:07:39.938743   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:07:37.082484   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:07:39.580732   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:07:40.682151   72964 main.go:141] libmachine: (embed-certs-725022) DBG | domain embed-certs-725022 has defined MAC address 52:54:00:ba:41:8c in network mk-embed-certs-725022
	I0603 12:07:40.682828   72964 main.go:141] libmachine: (embed-certs-725022) Found IP for machine: 192.168.72.245
	I0603 12:07:40.682854   72964 main.go:141] libmachine: (embed-certs-725022) Reserving static IP address...
	I0603 12:07:40.682870   72964 main.go:141] libmachine: (embed-certs-725022) DBG | domain embed-certs-725022 has current primary IP address 192.168.72.245 and MAC address 52:54:00:ba:41:8c in network mk-embed-certs-725022
	I0603 12:07:40.683307   72964 main.go:141] libmachine: (embed-certs-725022) DBG | found host DHCP lease matching {name: "embed-certs-725022", mac: "52:54:00:ba:41:8c", ip: "192.168.72.245"} in network mk-embed-certs-725022: {Iface:virbr3 ExpiryTime:2024-06-03 12:58:10 +0000 UTC Type:0 Mac:52:54:00:ba:41:8c Iaid: IPaddr:192.168.72.245 Prefix:24 Hostname:embed-certs-725022 Clientid:01:52:54:00:ba:41:8c}
	I0603 12:07:40.683347   72964 main.go:141] libmachine: (embed-certs-725022) DBG | skip adding static IP to network mk-embed-certs-725022 - found existing host DHCP lease matching {name: "embed-certs-725022", mac: "52:54:00:ba:41:8c", ip: "192.168.72.245"}
	I0603 12:07:40.683361   72964 main.go:141] libmachine: (embed-certs-725022) Reserved static IP address: 192.168.72.245
	I0603 12:07:40.683376   72964 main.go:141] libmachine: (embed-certs-725022) Waiting for SSH to be available...
	I0603 12:07:40.683392   72964 main.go:141] libmachine: (embed-certs-725022) DBG | Getting to WaitForSSH function...
	I0603 12:07:40.685575   72964 main.go:141] libmachine: (embed-certs-725022) DBG | domain embed-certs-725022 has defined MAC address 52:54:00:ba:41:8c in network mk-embed-certs-725022
	I0603 12:07:40.685946   72964 main.go:141] libmachine: (embed-certs-725022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:41:8c", ip: ""} in network mk-embed-certs-725022: {Iface:virbr3 ExpiryTime:2024-06-03 12:58:10 +0000 UTC Type:0 Mac:52:54:00:ba:41:8c Iaid: IPaddr:192.168.72.245 Prefix:24 Hostname:embed-certs-725022 Clientid:01:52:54:00:ba:41:8c}
	I0603 12:07:40.685977   72964 main.go:141] libmachine: (embed-certs-725022) DBG | domain embed-certs-725022 has defined IP address 192.168.72.245 and MAC address 52:54:00:ba:41:8c in network mk-embed-certs-725022
	I0603 12:07:40.686080   72964 main.go:141] libmachine: (embed-certs-725022) DBG | Using SSH client type: external
	I0603 12:07:40.686100   72964 main.go:141] libmachine: (embed-certs-725022) DBG | Using SSH private key: /home/jenkins/minikube-integration/19008-7755/.minikube/machines/embed-certs-725022/id_rsa (-rw-------)
	I0603 12:07:40.686134   72964 main.go:141] libmachine: (embed-certs-725022) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.245 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19008-7755/.minikube/machines/embed-certs-725022/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0603 12:07:40.686148   72964 main.go:141] libmachine: (embed-certs-725022) DBG | About to run SSH command:
	I0603 12:07:40.686161   72964 main.go:141] libmachine: (embed-certs-725022) DBG | exit 0
	I0603 12:07:40.811149   72964 main.go:141] libmachine: (embed-certs-725022) DBG | SSH cmd err, output: <nil>: 
	I0603 12:07:40.811536   72964 main.go:141] libmachine: (embed-certs-725022) Calling .GetConfigRaw
	I0603 12:07:40.812126   72964 main.go:141] libmachine: (embed-certs-725022) Calling .GetIP
	I0603 12:07:40.814686   72964 main.go:141] libmachine: (embed-certs-725022) DBG | domain embed-certs-725022 has defined MAC address 52:54:00:ba:41:8c in network mk-embed-certs-725022
	I0603 12:07:40.815141   72964 main.go:141] libmachine: (embed-certs-725022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:41:8c", ip: ""} in network mk-embed-certs-725022: {Iface:virbr3 ExpiryTime:2024-06-03 12:58:10 +0000 UTC Type:0 Mac:52:54:00:ba:41:8c Iaid: IPaddr:192.168.72.245 Prefix:24 Hostname:embed-certs-725022 Clientid:01:52:54:00:ba:41:8c}
	I0603 12:07:40.815179   72964 main.go:141] libmachine: (embed-certs-725022) DBG | domain embed-certs-725022 has defined IP address 192.168.72.245 and MAC address 52:54:00:ba:41:8c in network mk-embed-certs-725022
	I0603 12:07:40.815390   72964 profile.go:143] Saving config to /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/embed-certs-725022/config.json ...
	I0603 12:07:40.815589   72964 machine.go:94] provisionDockerMachine start ...
	I0603 12:07:40.815607   72964 main.go:141] libmachine: (embed-certs-725022) Calling .DriverName
	I0603 12:07:40.815830   72964 main.go:141] libmachine: (embed-certs-725022) Calling .GetSSHHostname
	I0603 12:07:40.818127   72964 main.go:141] libmachine: (embed-certs-725022) DBG | domain embed-certs-725022 has defined MAC address 52:54:00:ba:41:8c in network mk-embed-certs-725022
	I0603 12:07:40.818454   72964 main.go:141] libmachine: (embed-certs-725022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:41:8c", ip: ""} in network mk-embed-certs-725022: {Iface:virbr3 ExpiryTime:2024-06-03 12:58:10 +0000 UTC Type:0 Mac:52:54:00:ba:41:8c Iaid: IPaddr:192.168.72.245 Prefix:24 Hostname:embed-certs-725022 Clientid:01:52:54:00:ba:41:8c}
	I0603 12:07:40.818484   72964 main.go:141] libmachine: (embed-certs-725022) DBG | domain embed-certs-725022 has defined IP address 192.168.72.245 and MAC address 52:54:00:ba:41:8c in network mk-embed-certs-725022
	I0603 12:07:40.818622   72964 main.go:141] libmachine: (embed-certs-725022) Calling .GetSSHPort
	I0603 12:07:40.818812   72964 main.go:141] libmachine: (embed-certs-725022) Calling .GetSSHKeyPath
	I0603 12:07:40.818964   72964 main.go:141] libmachine: (embed-certs-725022) Calling .GetSSHKeyPath
	I0603 12:07:40.819111   72964 main.go:141] libmachine: (embed-certs-725022) Calling .GetSSHUsername
	I0603 12:07:40.819244   72964 main.go:141] libmachine: Using SSH client type: native
	I0603 12:07:40.819393   72964 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.72.245 22 <nil> <nil>}
	I0603 12:07:40.819402   72964 main.go:141] libmachine: About to run SSH command:
	hostname
	I0603 12:07:40.923243   72964 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0603 12:07:40.923272   72964 main.go:141] libmachine: (embed-certs-725022) Calling .GetMachineName
	I0603 12:07:40.923539   72964 buildroot.go:166] provisioning hostname "embed-certs-725022"
	I0603 12:07:40.923568   72964 main.go:141] libmachine: (embed-certs-725022) Calling .GetMachineName
	I0603 12:07:40.923739   72964 main.go:141] libmachine: (embed-certs-725022) Calling .GetSSHHostname
	I0603 12:07:40.926340   72964 main.go:141] libmachine: (embed-certs-725022) DBG | domain embed-certs-725022 has defined MAC address 52:54:00:ba:41:8c in network mk-embed-certs-725022
	I0603 12:07:40.926743   72964 main.go:141] libmachine: (embed-certs-725022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:41:8c", ip: ""} in network mk-embed-certs-725022: {Iface:virbr3 ExpiryTime:2024-06-03 12:58:10 +0000 UTC Type:0 Mac:52:54:00:ba:41:8c Iaid: IPaddr:192.168.72.245 Prefix:24 Hostname:embed-certs-725022 Clientid:01:52:54:00:ba:41:8c}
	I0603 12:07:40.926776   72964 main.go:141] libmachine: (embed-certs-725022) DBG | domain embed-certs-725022 has defined IP address 192.168.72.245 and MAC address 52:54:00:ba:41:8c in network mk-embed-certs-725022
	I0603 12:07:40.926892   72964 main.go:141] libmachine: (embed-certs-725022) Calling .GetSSHPort
	I0603 12:07:40.927096   72964 main.go:141] libmachine: (embed-certs-725022) Calling .GetSSHKeyPath
	I0603 12:07:40.927259   72964 main.go:141] libmachine: (embed-certs-725022) Calling .GetSSHKeyPath
	I0603 12:07:40.927412   72964 main.go:141] libmachine: (embed-certs-725022) Calling .GetSSHUsername
	I0603 12:07:40.927570   72964 main.go:141] libmachine: Using SSH client type: native
	I0603 12:07:40.927720   72964 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.72.245 22 <nil> <nil>}
	I0603 12:07:40.927737   72964 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-725022 && echo "embed-certs-725022" | sudo tee /etc/hostname
	I0603 12:07:41.045367   72964 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-725022
	
	I0603 12:07:41.045392   72964 main.go:141] libmachine: (embed-certs-725022) Calling .GetSSHHostname
	I0603 12:07:41.048214   72964 main.go:141] libmachine: (embed-certs-725022) DBG | domain embed-certs-725022 has defined MAC address 52:54:00:ba:41:8c in network mk-embed-certs-725022
	I0603 12:07:41.048621   72964 main.go:141] libmachine: (embed-certs-725022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:41:8c", ip: ""} in network mk-embed-certs-725022: {Iface:virbr3 ExpiryTime:2024-06-03 12:58:10 +0000 UTC Type:0 Mac:52:54:00:ba:41:8c Iaid: IPaddr:192.168.72.245 Prefix:24 Hostname:embed-certs-725022 Clientid:01:52:54:00:ba:41:8c}
	I0603 12:07:41.048653   72964 main.go:141] libmachine: (embed-certs-725022) DBG | domain embed-certs-725022 has defined IP address 192.168.72.245 and MAC address 52:54:00:ba:41:8c in network mk-embed-certs-725022
	I0603 12:07:41.048776   72964 main.go:141] libmachine: (embed-certs-725022) Calling .GetSSHPort
	I0603 12:07:41.048959   72964 main.go:141] libmachine: (embed-certs-725022) Calling .GetSSHKeyPath
	I0603 12:07:41.049140   72964 main.go:141] libmachine: (embed-certs-725022) Calling .GetSSHKeyPath
	I0603 12:07:41.049270   72964 main.go:141] libmachine: (embed-certs-725022) Calling .GetSSHUsername
	I0603 12:07:41.049434   72964 main.go:141] libmachine: Using SSH client type: native
	I0603 12:07:41.049729   72964 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.72.245 22 <nil> <nil>}
	I0603 12:07:41.049757   72964 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-725022' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-725022/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-725022' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0603 12:07:41.160646   72964 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0603 12:07:41.160671   72964 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19008-7755/.minikube CaCertPath:/home/jenkins/minikube-integration/19008-7755/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19008-7755/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19008-7755/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19008-7755/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19008-7755/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19008-7755/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19008-7755/.minikube}
	I0603 12:07:41.160703   72964 buildroot.go:174] setting up certificates
	I0603 12:07:41.160715   72964 provision.go:84] configureAuth start
	I0603 12:07:41.160728   72964 main.go:141] libmachine: (embed-certs-725022) Calling .GetMachineName
	I0603 12:07:41.160998   72964 main.go:141] libmachine: (embed-certs-725022) Calling .GetIP
	I0603 12:07:41.163693   72964 main.go:141] libmachine: (embed-certs-725022) DBG | domain embed-certs-725022 has defined MAC address 52:54:00:ba:41:8c in network mk-embed-certs-725022
	I0603 12:07:41.164248   72964 main.go:141] libmachine: (embed-certs-725022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:41:8c", ip: ""} in network mk-embed-certs-725022: {Iface:virbr3 ExpiryTime:2024-06-03 12:58:10 +0000 UTC Type:0 Mac:52:54:00:ba:41:8c Iaid: IPaddr:192.168.72.245 Prefix:24 Hostname:embed-certs-725022 Clientid:01:52:54:00:ba:41:8c}
	I0603 12:07:41.164280   72964 main.go:141] libmachine: (embed-certs-725022) DBG | domain embed-certs-725022 has defined IP address 192.168.72.245 and MAC address 52:54:00:ba:41:8c in network mk-embed-certs-725022
	I0603 12:07:41.164462   72964 main.go:141] libmachine: (embed-certs-725022) Calling .GetSSHHostname
	I0603 12:07:41.166598   72964 main.go:141] libmachine: (embed-certs-725022) DBG | domain embed-certs-725022 has defined MAC address 52:54:00:ba:41:8c in network mk-embed-certs-725022
	I0603 12:07:41.166975   72964 main.go:141] libmachine: (embed-certs-725022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:41:8c", ip: ""} in network mk-embed-certs-725022: {Iface:virbr3 ExpiryTime:2024-06-03 12:58:10 +0000 UTC Type:0 Mac:52:54:00:ba:41:8c Iaid: IPaddr:192.168.72.245 Prefix:24 Hostname:embed-certs-725022 Clientid:01:52:54:00:ba:41:8c}
	I0603 12:07:41.166999   72964 main.go:141] libmachine: (embed-certs-725022) DBG | domain embed-certs-725022 has defined IP address 192.168.72.245 and MAC address 52:54:00:ba:41:8c in network mk-embed-certs-725022
	I0603 12:07:41.167156   72964 provision.go:143] copyHostCerts
	I0603 12:07:41.167231   72964 exec_runner.go:144] found /home/jenkins/minikube-integration/19008-7755/.minikube/ca.pem, removing ...
	I0603 12:07:41.167246   72964 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19008-7755/.minikube/ca.pem
	I0603 12:07:41.167311   72964 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19008-7755/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19008-7755/.minikube/ca.pem (1082 bytes)
	I0603 12:07:41.167503   72964 exec_runner.go:144] found /home/jenkins/minikube-integration/19008-7755/.minikube/cert.pem, removing ...
	I0603 12:07:41.167516   72964 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19008-7755/.minikube/cert.pem
	I0603 12:07:41.167548   72964 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19008-7755/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19008-7755/.minikube/cert.pem (1123 bytes)
	I0603 12:07:41.167649   72964 exec_runner.go:144] found /home/jenkins/minikube-integration/19008-7755/.minikube/key.pem, removing ...
	I0603 12:07:41.167660   72964 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19008-7755/.minikube/key.pem
	I0603 12:07:41.167688   72964 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19008-7755/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19008-7755/.minikube/key.pem (1679 bytes)
	I0603 12:07:41.167767   72964 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19008-7755/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19008-7755/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19008-7755/.minikube/certs/ca-key.pem org=jenkins.embed-certs-725022 san=[127.0.0.1 192.168.72.245 embed-certs-725022 localhost minikube]
	I0603 12:07:41.404074   72964 provision.go:177] copyRemoteCerts
	I0603 12:07:41.404201   72964 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0603 12:07:41.404234   72964 main.go:141] libmachine: (embed-certs-725022) Calling .GetSSHHostname
	I0603 12:07:41.407206   72964 main.go:141] libmachine: (embed-certs-725022) DBG | domain embed-certs-725022 has defined MAC address 52:54:00:ba:41:8c in network mk-embed-certs-725022
	I0603 12:07:41.407582   72964 main.go:141] libmachine: (embed-certs-725022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:41:8c", ip: ""} in network mk-embed-certs-725022: {Iface:virbr3 ExpiryTime:2024-06-03 12:58:10 +0000 UTC Type:0 Mac:52:54:00:ba:41:8c Iaid: IPaddr:192.168.72.245 Prefix:24 Hostname:embed-certs-725022 Clientid:01:52:54:00:ba:41:8c}
	I0603 12:07:41.407607   72964 main.go:141] libmachine: (embed-certs-725022) DBG | domain embed-certs-725022 has defined IP address 192.168.72.245 and MAC address 52:54:00:ba:41:8c in network mk-embed-certs-725022
	I0603 12:07:41.407790   72964 main.go:141] libmachine: (embed-certs-725022) Calling .GetSSHPort
	I0603 12:07:41.408001   72964 main.go:141] libmachine: (embed-certs-725022) Calling .GetSSHKeyPath
	I0603 12:07:41.408187   72964 main.go:141] libmachine: (embed-certs-725022) Calling .GetSSHUsername
	I0603 12:07:41.408359   72964 sshutil.go:53] new ssh client: &{IP:192.168.72.245 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19008-7755/.minikube/machines/embed-certs-725022/id_rsa Username:docker}
	I0603 12:07:41.488870   72964 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0603 12:07:41.513102   72964 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0603 12:07:41.537653   72964 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0603 12:07:41.561756   72964 provision.go:87] duration metric: took 401.027097ms to configureAuth
	I0603 12:07:41.561789   72964 buildroot.go:189] setting minikube options for container-runtime
	I0603 12:07:41.561954   72964 config.go:182] Loaded profile config "embed-certs-725022": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0603 12:07:41.562020   72964 main.go:141] libmachine: (embed-certs-725022) Calling .GetSSHHostname
	I0603 12:07:41.564899   72964 main.go:141] libmachine: (embed-certs-725022) DBG | domain embed-certs-725022 has defined MAC address 52:54:00:ba:41:8c in network mk-embed-certs-725022
	I0603 12:07:41.565376   72964 main.go:141] libmachine: (embed-certs-725022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:41:8c", ip: ""} in network mk-embed-certs-725022: {Iface:virbr3 ExpiryTime:2024-06-03 12:58:10 +0000 UTC Type:0 Mac:52:54:00:ba:41:8c Iaid: IPaddr:192.168.72.245 Prefix:24 Hostname:embed-certs-725022 Clientid:01:52:54:00:ba:41:8c}
	I0603 12:07:41.565416   72964 main.go:141] libmachine: (embed-certs-725022) DBG | domain embed-certs-725022 has defined IP address 192.168.72.245 and MAC address 52:54:00:ba:41:8c in network mk-embed-certs-725022
	I0603 12:07:41.565571   72964 main.go:141] libmachine: (embed-certs-725022) Calling .GetSSHPort
	I0603 12:07:41.565754   72964 main.go:141] libmachine: (embed-certs-725022) Calling .GetSSHKeyPath
	I0603 12:07:41.565952   72964 main.go:141] libmachine: (embed-certs-725022) Calling .GetSSHKeyPath
	I0603 12:07:41.566096   72964 main.go:141] libmachine: (embed-certs-725022) Calling .GetSSHUsername
	I0603 12:07:41.566223   72964 main.go:141] libmachine: Using SSH client type: native
	I0603 12:07:41.566408   72964 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.72.245 22 <nil> <nil>}
	I0603 12:07:41.566431   72964 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0603 12:07:41.834677   72964 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0603 12:07:41.834699   72964 machine.go:97] duration metric: took 1.019099045s to provisionDockerMachine
	I0603 12:07:41.834713   72964 start.go:293] postStartSetup for "embed-certs-725022" (driver="kvm2")
	I0603 12:07:41.834727   72964 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0603 12:07:41.834746   72964 main.go:141] libmachine: (embed-certs-725022) Calling .DriverName
	I0603 12:07:41.835098   72964 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0603 12:07:41.835139   72964 main.go:141] libmachine: (embed-certs-725022) Calling .GetSSHHostname
	I0603 12:07:41.838003   72964 main.go:141] libmachine: (embed-certs-725022) DBG | domain embed-certs-725022 has defined MAC address 52:54:00:ba:41:8c in network mk-embed-certs-725022
	I0603 12:07:41.838369   72964 main.go:141] libmachine: (embed-certs-725022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:41:8c", ip: ""} in network mk-embed-certs-725022: {Iface:virbr3 ExpiryTime:2024-06-03 12:58:10 +0000 UTC Type:0 Mac:52:54:00:ba:41:8c Iaid: IPaddr:192.168.72.245 Prefix:24 Hostname:embed-certs-725022 Clientid:01:52:54:00:ba:41:8c}
	I0603 12:07:41.838398   72964 main.go:141] libmachine: (embed-certs-725022) DBG | domain embed-certs-725022 has defined IP address 192.168.72.245 and MAC address 52:54:00:ba:41:8c in network mk-embed-certs-725022
	I0603 12:07:41.838464   72964 main.go:141] libmachine: (embed-certs-725022) Calling .GetSSHPort
	I0603 12:07:41.838655   72964 main.go:141] libmachine: (embed-certs-725022) Calling .GetSSHKeyPath
	I0603 12:07:41.838793   72964 main.go:141] libmachine: (embed-certs-725022) Calling .GetSSHUsername
	I0603 12:07:41.838932   72964 sshutil.go:53] new ssh client: &{IP:192.168.72.245 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19008-7755/.minikube/machines/embed-certs-725022/id_rsa Username:docker}
	I0603 12:07:41.922364   72964 ssh_runner.go:195] Run: cat /etc/os-release
	I0603 12:07:41.926548   72964 info.go:137] Remote host: Buildroot 2023.02.9
	I0603 12:07:41.926573   72964 filesync.go:126] Scanning /home/jenkins/minikube-integration/19008-7755/.minikube/addons for local assets ...
	I0603 12:07:41.926649   72964 filesync.go:126] Scanning /home/jenkins/minikube-integration/19008-7755/.minikube/files for local assets ...
	I0603 12:07:41.926757   72964 filesync.go:149] local asset: /home/jenkins/minikube-integration/19008-7755/.minikube/files/etc/ssl/certs/150282.pem -> 150282.pem in /etc/ssl/certs
	I0603 12:07:41.926853   72964 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0603 12:07:41.937060   72964 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/files/etc/ssl/certs/150282.pem --> /etc/ssl/certs/150282.pem (1708 bytes)
	I0603 12:07:41.962618   72964 start.go:296] duration metric: took 127.891542ms for postStartSetup
	I0603 12:07:41.962650   72964 fix.go:56] duration metric: took 19.538606992s for fixHost
	I0603 12:07:41.962679   72964 main.go:141] libmachine: (embed-certs-725022) Calling .GetSSHHostname
	I0603 12:07:41.965879   72964 main.go:141] libmachine: (embed-certs-725022) DBG | domain embed-certs-725022 has defined MAC address 52:54:00:ba:41:8c in network mk-embed-certs-725022
	I0603 12:07:41.966201   72964 main.go:141] libmachine: (embed-certs-725022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:41:8c", ip: ""} in network mk-embed-certs-725022: {Iface:virbr3 ExpiryTime:2024-06-03 12:58:10 +0000 UTC Type:0 Mac:52:54:00:ba:41:8c Iaid: IPaddr:192.168.72.245 Prefix:24 Hostname:embed-certs-725022 Clientid:01:52:54:00:ba:41:8c}
	I0603 12:07:41.966228   72964 main.go:141] libmachine: (embed-certs-725022) DBG | domain embed-certs-725022 has defined IP address 192.168.72.245 and MAC address 52:54:00:ba:41:8c in network mk-embed-certs-725022
	I0603 12:07:41.966409   72964 main.go:141] libmachine: (embed-certs-725022) Calling .GetSSHPort
	I0603 12:07:41.966608   72964 main.go:141] libmachine: (embed-certs-725022) Calling .GetSSHKeyPath
	I0603 12:07:41.966776   72964 main.go:141] libmachine: (embed-certs-725022) Calling .GetSSHKeyPath
	I0603 12:07:41.966939   72964 main.go:141] libmachine: (embed-certs-725022) Calling .GetSSHUsername
	I0603 12:07:41.967174   72964 main.go:141] libmachine: Using SSH client type: native
	I0603 12:07:41.967334   72964 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.72.245 22 <nil> <nil>}
	I0603 12:07:41.967345   72964 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0603 12:07:42.067942   72964 main.go:141] libmachine: SSH cmd err, output: <nil>: 1717416462.037866239
	
	I0603 12:07:42.067964   72964 fix.go:216] guest clock: 1717416462.037866239
	I0603 12:07:42.067973   72964 fix.go:229] Guest: 2024-06-03 12:07:42.037866239 +0000 UTC Remote: 2024-06-03 12:07:41.962653397 +0000 UTC m=+357.104782857 (delta=75.212842ms)
	I0603 12:07:42.067997   72964 fix.go:200] guest clock delta is within tolerance: 75.212842ms
	I0603 12:07:42.068004   72964 start.go:83] releasing machines lock for "embed-certs-725022", held for 19.643998665s
	I0603 12:07:42.068026   72964 main.go:141] libmachine: (embed-certs-725022) Calling .DriverName
	I0603 12:07:42.068359   72964 main.go:141] libmachine: (embed-certs-725022) Calling .GetIP
	I0603 12:07:42.071337   72964 main.go:141] libmachine: (embed-certs-725022) DBG | domain embed-certs-725022 has defined MAC address 52:54:00:ba:41:8c in network mk-embed-certs-725022
	I0603 12:07:42.071783   72964 main.go:141] libmachine: (embed-certs-725022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:41:8c", ip: ""} in network mk-embed-certs-725022: {Iface:virbr3 ExpiryTime:2024-06-03 12:58:10 +0000 UTC Type:0 Mac:52:54:00:ba:41:8c Iaid: IPaddr:192.168.72.245 Prefix:24 Hostname:embed-certs-725022 Clientid:01:52:54:00:ba:41:8c}
	I0603 12:07:42.071813   72964 main.go:141] libmachine: (embed-certs-725022) DBG | domain embed-certs-725022 has defined IP address 192.168.72.245 and MAC address 52:54:00:ba:41:8c in network mk-embed-certs-725022
	I0603 12:07:42.071980   72964 main.go:141] libmachine: (embed-certs-725022) Calling .DriverName
	I0603 12:07:42.072618   72964 main.go:141] libmachine: (embed-certs-725022) Calling .DriverName
	I0603 12:07:42.072806   72964 main.go:141] libmachine: (embed-certs-725022) Calling .DriverName
	I0603 12:07:42.072890   72964 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0603 12:07:42.072943   72964 main.go:141] libmachine: (embed-certs-725022) Calling .GetSSHHostname
	I0603 12:07:42.073038   72964 ssh_runner.go:195] Run: cat /version.json
	I0603 12:07:42.073079   72964 main.go:141] libmachine: (embed-certs-725022) Calling .GetSSHHostname
	I0603 12:07:42.075688   72964 main.go:141] libmachine: (embed-certs-725022) DBG | domain embed-certs-725022 has defined MAC address 52:54:00:ba:41:8c in network mk-embed-certs-725022
	I0603 12:07:42.075970   72964 main.go:141] libmachine: (embed-certs-725022) DBG | domain embed-certs-725022 has defined MAC address 52:54:00:ba:41:8c in network mk-embed-certs-725022
	I0603 12:07:42.076186   72964 main.go:141] libmachine: (embed-certs-725022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:41:8c", ip: ""} in network mk-embed-certs-725022: {Iface:virbr3 ExpiryTime:2024-06-03 12:58:10 +0000 UTC Type:0 Mac:52:54:00:ba:41:8c Iaid: IPaddr:192.168.72.245 Prefix:24 Hostname:embed-certs-725022 Clientid:01:52:54:00:ba:41:8c}
	I0603 12:07:42.076212   72964 main.go:141] libmachine: (embed-certs-725022) DBG | domain embed-certs-725022 has defined IP address 192.168.72.245 and MAC address 52:54:00:ba:41:8c in network mk-embed-certs-725022
	I0603 12:07:42.076458   72964 main.go:141] libmachine: (embed-certs-725022) Calling .GetSSHPort
	I0603 12:07:42.076465   72964 main.go:141] libmachine: (embed-certs-725022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:41:8c", ip: ""} in network mk-embed-certs-725022: {Iface:virbr3 ExpiryTime:2024-06-03 12:58:10 +0000 UTC Type:0 Mac:52:54:00:ba:41:8c Iaid: IPaddr:192.168.72.245 Prefix:24 Hostname:embed-certs-725022 Clientid:01:52:54:00:ba:41:8c}
	I0603 12:07:42.076501   72964 main.go:141] libmachine: (embed-certs-725022) DBG | domain embed-certs-725022 has defined IP address 192.168.72.245 and MAC address 52:54:00:ba:41:8c in network mk-embed-certs-725022
	I0603 12:07:42.076625   72964 main.go:141] libmachine: (embed-certs-725022) Calling .GetSSHKeyPath
	I0603 12:07:42.076694   72964 main.go:141] libmachine: (embed-certs-725022) Calling .GetSSHPort
	I0603 12:07:42.076815   72964 main.go:141] libmachine: (embed-certs-725022) Calling .GetSSHUsername
	I0603 12:07:42.076900   72964 main.go:141] libmachine: (embed-certs-725022) Calling .GetSSHKeyPath
	I0603 12:07:42.076993   72964 sshutil.go:53] new ssh client: &{IP:192.168.72.245 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19008-7755/.minikube/machines/embed-certs-725022/id_rsa Username:docker}
	I0603 12:07:42.077071   72964 main.go:141] libmachine: (embed-certs-725022) Calling .GetSSHUsername
	I0603 12:07:42.077227   72964 sshutil.go:53] new ssh client: &{IP:192.168.72.245 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19008-7755/.minikube/machines/embed-certs-725022/id_rsa Username:docker}
	I0603 12:07:42.178869   72964 ssh_runner.go:195] Run: systemctl --version
	I0603 12:07:42.184948   72964 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0603 12:07:42.333045   72964 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0603 12:07:42.339178   72964 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0603 12:07:42.339249   72964 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0603 12:07:42.356377   72964 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0603 12:07:42.356399   72964 start.go:494] detecting cgroup driver to use...
	I0603 12:07:42.356453   72964 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0603 12:07:42.374098   72964 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0603 12:07:42.387377   72964 docker.go:217] disabling cri-docker service (if available) ...
	I0603 12:07:42.387429   72964 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0603 12:07:42.400193   72964 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0603 12:07:42.413009   72964 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0603 12:07:42.524443   72964 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0603 12:07:42.670114   72964 docker.go:233] disabling docker service ...
	I0603 12:07:42.670194   72964 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0603 12:07:42.686085   72964 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0603 12:07:42.699222   72964 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0603 12:07:42.849018   72964 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0603 12:07:42.987143   72964 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0603 12:07:43.001493   72964 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0603 12:07:43.020011   72964 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0603 12:07:43.020077   72964 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 12:07:43.030835   72964 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0603 12:07:43.030903   72964 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 12:07:43.041325   72964 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 12:07:43.051229   72964 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 12:07:43.061184   72964 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0603 12:07:43.071245   72964 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 12:07:43.082466   72964 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 12:07:43.100381   72964 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 12:07:43.112802   72964 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0603 12:07:43.123404   72964 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0603 12:07:43.123452   72964 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0603 12:07:43.136935   72964 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0603 12:07:43.145996   72964 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0603 12:07:43.269844   72964 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0603 12:07:43.404166   72964 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0603 12:07:43.404238   72964 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0603 12:07:43.411376   72964 start.go:562] Will wait 60s for crictl version
	I0603 12:07:43.411419   72964 ssh_runner.go:195] Run: which crictl
	I0603 12:07:43.415081   72964 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0603 12:07:43.455429   72964 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0603 12:07:43.455514   72964 ssh_runner.go:195] Run: crio --version
	I0603 12:07:43.483743   72964 ssh_runner.go:195] Run: crio --version
	I0603 12:07:43.516513   72964 out.go:177] * Preparing Kubernetes v1.30.1 on CRI-O 1.29.1 ...
	I0603 12:07:41.613036   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:07:43.613398   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:07:43.517710   72964 main.go:141] libmachine: (embed-certs-725022) Calling .GetIP
	I0603 12:07:43.520057   72964 main.go:141] libmachine: (embed-certs-725022) DBG | domain embed-certs-725022 has defined MAC address 52:54:00:ba:41:8c in network mk-embed-certs-725022
	I0603 12:07:43.520336   72964 main.go:141] libmachine: (embed-certs-725022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:41:8c", ip: ""} in network mk-embed-certs-725022: {Iface:virbr3 ExpiryTime:2024-06-03 12:58:10 +0000 UTC Type:0 Mac:52:54:00:ba:41:8c Iaid: IPaddr:192.168.72.245 Prefix:24 Hostname:embed-certs-725022 Clientid:01:52:54:00:ba:41:8c}
	I0603 12:07:43.520365   72964 main.go:141] libmachine: (embed-certs-725022) DBG | domain embed-certs-725022 has defined IP address 192.168.72.245 and MAC address 52:54:00:ba:41:8c in network mk-embed-certs-725022
	I0603 12:07:43.520579   72964 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0603 12:07:43.524653   72964 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0603 12:07:43.537864   72964 kubeadm.go:877] updating cluster {Name:embed-certs-725022 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.30.1 ClusterName:embed-certs-725022 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.245 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:
false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0603 12:07:43.537984   72964 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime crio
	I0603 12:07:43.538045   72964 ssh_runner.go:195] Run: sudo crictl images --output json
	I0603 12:07:43.574677   72964 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.1". assuming images are not preloaded.
	I0603 12:07:43.574738   72964 ssh_runner.go:195] Run: which lz4
	I0603 12:07:43.579297   72964 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0603 12:07:43.583831   72964 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0603 12:07:43.583865   72964 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (394537501 bytes)
	I0603 12:07:40.438270   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:07:40.938253   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:07:41.438610   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:07:41.938408   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:07:42.438825   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:07:42.938492   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:07:43.439013   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:07:43.938232   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:07:44.438816   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:07:44.938476   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:07:41.581827   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:07:44.084271   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:07:46.113319   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:07:48.117970   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:07:45.006860   72964 crio.go:462] duration metric: took 1.427589912s to copy over tarball
	I0603 12:07:45.006945   72964 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0603 12:07:47.289942   72964 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.282964729s)
	I0603 12:07:47.289966   72964 crio.go:469] duration metric: took 2.283075477s to extract the tarball
	I0603 12:07:47.289973   72964 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0603 12:07:47.330106   72964 ssh_runner.go:195] Run: sudo crictl images --output json
	I0603 12:07:47.377154   72964 crio.go:514] all images are preloaded for cri-o runtime.
	I0603 12:07:47.377180   72964 cache_images.go:84] Images are preloaded, skipping loading
	I0603 12:07:47.377189   72964 kubeadm.go:928] updating node { 192.168.72.245 8443 v1.30.1 crio true true} ...
	I0603 12:07:47.377334   72964 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-725022 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.245
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.1 ClusterName:embed-certs-725022 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0603 12:07:47.377416   72964 ssh_runner.go:195] Run: crio config
	I0603 12:07:47.436104   72964 cni.go:84] Creating CNI manager for ""
	I0603 12:07:47.436125   72964 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0603 12:07:47.436137   72964 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0603 12:07:47.436165   72964 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.245 APIServerPort:8443 KubernetesVersion:v1.30.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-725022 NodeName:embed-certs-725022 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.245"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.245 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodP
ath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0603 12:07:47.436330   72964 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.245
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-725022"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.245
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.245"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0603 12:07:47.436402   72964 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.1
	I0603 12:07:47.447427   72964 binaries.go:44] Found k8s binaries, skipping transfer
	I0603 12:07:47.447498   72964 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0603 12:07:47.459332   72964 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I0603 12:07:47.477962   72964 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0603 12:07:47.495897   72964 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2162 bytes)
	I0603 12:07:47.513033   72964 ssh_runner.go:195] Run: grep 192.168.72.245	control-plane.minikube.internal$ /etc/hosts
	I0603 12:07:47.517042   72964 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.245	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0603 12:07:47.529663   72964 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0603 12:07:47.649313   72964 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0603 12:07:47.666234   72964 certs.go:68] Setting up /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/embed-certs-725022 for IP: 192.168.72.245
	I0603 12:07:47.666258   72964 certs.go:194] generating shared ca certs ...
	I0603 12:07:47.666279   72964 certs.go:226] acquiring lock for ca certs: {Name:mk50092df3bdecbf959926adc377ee06b75aacd9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 12:07:47.666440   72964 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19008-7755/.minikube/ca.key
	I0603 12:07:47.666477   72964 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19008-7755/.minikube/proxy-client-ca.key
	I0603 12:07:47.666487   72964 certs.go:256] generating profile certs ...
	I0603 12:07:47.666567   72964 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/embed-certs-725022/client.key
	I0603 12:07:47.666623   72964 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/embed-certs-725022/apiserver.key.8c3ea0d5
	I0603 12:07:47.666712   72964 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/embed-certs-725022/proxy-client.key
	I0603 12:07:47.666874   72964 certs.go:484] found cert: /home/jenkins/minikube-integration/19008-7755/.minikube/certs/15028.pem (1338 bytes)
	W0603 12:07:47.666916   72964 certs.go:480] ignoring /home/jenkins/minikube-integration/19008-7755/.minikube/certs/15028_empty.pem, impossibly tiny 0 bytes
	I0603 12:07:47.666926   72964 certs.go:484] found cert: /home/jenkins/minikube-integration/19008-7755/.minikube/certs/ca-key.pem (1679 bytes)
	I0603 12:07:47.666947   72964 certs.go:484] found cert: /home/jenkins/minikube-integration/19008-7755/.minikube/certs/ca.pem (1082 bytes)
	I0603 12:07:47.666968   72964 certs.go:484] found cert: /home/jenkins/minikube-integration/19008-7755/.minikube/certs/cert.pem (1123 bytes)
	I0603 12:07:47.666988   72964 certs.go:484] found cert: /home/jenkins/minikube-integration/19008-7755/.minikube/certs/key.pem (1679 bytes)
	I0603 12:07:47.667026   72964 certs.go:484] found cert: /home/jenkins/minikube-integration/19008-7755/.minikube/files/etc/ssl/certs/150282.pem (1708 bytes)
	I0603 12:07:47.667721   72964 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0603 12:07:47.705180   72964 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0603 12:07:47.748552   72964 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0603 12:07:47.780173   72964 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0603 12:07:47.812902   72964 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/embed-certs-725022/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0603 12:07:47.844793   72964 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/embed-certs-725022/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0603 12:07:47.875181   72964 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/embed-certs-725022/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0603 12:07:47.899905   72964 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/embed-certs-725022/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0603 12:07:47.925039   72964 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0603 12:07:47.950701   72964 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/certs/15028.pem --> /usr/share/ca-certificates/15028.pem (1338 bytes)
	I0603 12:07:47.975798   72964 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/files/etc/ssl/certs/150282.pem --> /usr/share/ca-certificates/150282.pem (1708 bytes)
	I0603 12:07:48.002827   72964 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0603 12:07:48.021050   72964 ssh_runner.go:195] Run: openssl version
	I0603 12:07:48.027977   72964 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0603 12:07:48.043764   72964 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0603 12:07:48.050265   72964 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jun  3 10:39 /usr/share/ca-certificates/minikubeCA.pem
	I0603 12:07:48.050315   72964 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0603 12:07:48.056387   72964 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0603 12:07:48.067816   72964 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15028.pem && ln -fs /usr/share/ca-certificates/15028.pem /etc/ssl/certs/15028.pem"
	I0603 12:07:48.083715   72964 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15028.pem
	I0603 12:07:48.088813   72964 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jun  3 10:51 /usr/share/ca-certificates/15028.pem
	I0603 12:07:48.088870   72964 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15028.pem
	I0603 12:07:48.094833   72964 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/15028.pem /etc/ssl/certs/51391683.0"
	I0603 12:07:48.108005   72964 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/150282.pem && ln -fs /usr/share/ca-certificates/150282.pem /etc/ssl/certs/150282.pem"
	I0603 12:07:48.120434   72964 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/150282.pem
	I0603 12:07:48.125542   72964 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jun  3 10:51 /usr/share/ca-certificates/150282.pem
	I0603 12:07:48.125603   72964 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/150282.pem
	I0603 12:07:48.132060   72964 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/150282.pem /etc/ssl/certs/3ec20f2e.0"
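The three blocks above install each CA bundle into the node's trust store: the PEM is placed under /usr/share/ca-certificates, its OpenSSL subject hash is computed, and a <hash>.0 symlink is created in /etc/ssl/certs. A minimal by-hand sketch of the same steps (paths taken from the log; for minikubeCA.pem the hash resolves to b5213941):

    sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
    hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${hash}.0"   # -> b5213941.0 here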
	I0603 12:07:48.143594   72964 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0603 12:07:48.148392   72964 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0603 12:07:48.154571   72964 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0603 12:07:48.160573   72964 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0603 12:07:48.167146   72964 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0603 12:07:48.175232   72964 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0603 12:07:48.182197   72964 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
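Each control-plane certificate is then vetted with openssl's -checkend, which exits non-zero if the certificate would expire within the given number of seconds (86400s = 24h here), so anything close to expiry can be regenerated. Run by hand the check is a plain exit-status test:

    # exit code 0: still valid 24h from now; non-zero: expiring soon
    openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400 \
      && echo "valid for at least 24h" || echo "expires within 24h"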
	I0603 12:07:48.188588   72964 kubeadm.go:391] StartCluster: {Name:embed-certs-725022 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:embed-certs-725022 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.245 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0603 12:07:48.188680   72964 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0603 12:07:48.188733   72964 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0603 12:07:48.229134   72964 cri.go:89] found id: ""
	I0603 12:07:48.229215   72964 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0603 12:07:48.241663   72964 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0603 12:07:48.241687   72964 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0603 12:07:48.241692   72964 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0603 12:07:48.241756   72964 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0603 12:07:48.252641   72964 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0603 12:07:48.253644   72964 kubeconfig.go:125] found "embed-certs-725022" server: "https://192.168.72.245:8443"
	I0603 12:07:48.255726   72964 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0603 12:07:48.265816   72964 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.72.245
	I0603 12:07:48.265849   72964 kubeadm.go:1154] stopping kube-system containers ...
	I0603 12:07:48.265862   72964 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0603 12:07:48.265956   72964 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0603 12:07:48.306408   72964 cri.go:89] found id: ""
	I0603 12:07:48.306471   72964 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0603 12:07:48.324859   72964 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0603 12:07:48.336076   72964 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0603 12:07:48.336098   72964 kubeadm.go:156] found existing configuration files:
	
	I0603 12:07:48.336159   72964 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0603 12:07:48.347274   72964 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0603 12:07:48.347328   72964 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0603 12:07:48.358447   72964 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0603 12:07:48.369460   72964 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0603 12:07:48.369509   72964 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0603 12:07:48.379714   72964 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0603 12:07:48.390460   72964 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0603 12:07:48.390506   72964 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0603 12:07:48.401178   72964 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0603 12:07:48.411383   72964 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0603 12:07:48.411423   72964 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0603 12:07:48.421813   72964 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0603 12:07:48.434585   72964 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0603 12:07:48.561075   72964 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0603 12:07:49.278187   72964 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0603 12:07:49.504897   72964 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0603 12:07:49.559494   72964 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
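With the stale /etc/kubernetes/*.conf files removed and a fresh kubeadm.yaml copied into place, the restart path replays individual kubeadm init phases rather than running a full init. An equivalent sequence on the node, with the binary directory and config file taken from the log (a final "init phase addon all" follows once the apiserver reports healthy):

    K=/var/lib/minikube/binaries/v1.30.1/kubeadm
    sudo $K init phase certs all           --config /var/tmp/minikube/kubeadm.yaml
    sudo $K init phase kubeconfig all      --config /var/tmp/minikube/kubeadm.yaml
    sudo $K init phase kubelet-start       --config /var/tmp/minikube/kubeadm.yaml
    sudo $K init phase control-plane all   --config /var/tmp/minikube/kubeadm.yaml
    sudo $K init phase etcd local          --config /var/tmp/minikube/kubeadm.yaml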
	I0603 12:07:49.634949   72964 api_server.go:52] waiting for apiserver process to appear ...
	I0603 12:07:49.635051   72964 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:07:45.438738   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:07:45.939144   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:07:46.438431   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:07:46.938360   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:07:47.438811   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:07:47.938857   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:07:48.438849   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:07:48.938531   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:07:49.438876   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:07:49.938908   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:07:46.581939   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:07:48.584466   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:07:50.635461   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:07:53.112719   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:07:50.135411   72964 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:07:50.635951   72964 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:07:51.136119   72964 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:07:51.158722   72964 api_server.go:72] duration metric: took 1.52377732s to wait for apiserver process to appear ...
	I0603 12:07:51.158747   72964 api_server.go:88] waiting for apiserver healthz status ...
	I0603 12:07:51.158767   72964 api_server.go:253] Checking apiserver healthz at https://192.168.72.245:8443/healthz ...
	I0603 12:07:54.082978   72964 api_server.go:279] https://192.168.72.245:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0603 12:07:54.083005   72964 api_server.go:103] status: https://192.168.72.245:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0603 12:07:54.083017   72964 api_server.go:253] Checking apiserver healthz at https://192.168.72.245:8443/healthz ...
	I0603 12:07:54.092290   72964 api_server.go:279] https://192.168.72.245:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0603 12:07:54.092311   72964 api_server.go:103] status: https://192.168.72.245:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0603 12:07:54.159522   72964 api_server.go:253] Checking apiserver healthz at https://192.168.72.245:8443/healthz ...
	I0603 12:07:54.173284   72964 api_server.go:279] https://192.168.72.245:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0603 12:07:54.173308   72964 api_server.go:103] status: https://192.168.72.245:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0603 12:07:54.658949   72964 api_server.go:253] Checking apiserver healthz at https://192.168.72.245:8443/healthz ...
	I0603 12:07:54.663966   72964 api_server.go:279] https://192.168.72.245:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0603 12:07:54.663991   72964 api_server.go:103] status: https://192.168.72.245:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0603 12:07:50.438966   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:07:50.938952   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:07:51.439179   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:07:51.938804   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:07:52.438327   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:07:52.938677   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:07:53.438995   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:07:53.938976   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:07:54.438174   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:07:54.938412   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:07:50.641189   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:07:53.081531   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:07:55.081845   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:07:55.159125   72964 api_server.go:253] Checking apiserver healthz at https://192.168.72.245:8443/healthz ...
	I0603 12:07:55.168267   72964 api_server.go:279] https://192.168.72.245:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0603 12:07:55.168307   72964 api_server.go:103] status: https://192.168.72.245:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0603 12:07:55.658824   72964 api_server.go:253] Checking apiserver healthz at https://192.168.72.245:8443/healthz ...
	I0603 12:07:55.663523   72964 api_server.go:279] https://192.168.72.245:8443/healthz returned 200:
	ok
	I0603 12:07:55.670352   72964 api_server.go:141] control plane version: v1.30.1
	I0603 12:07:55.670383   72964 api_server.go:131] duration metric: took 4.511629799s to wait for apiserver health ...
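The healthz probes above progress from 403 (the probe is anonymous and RBAC roles are not yet bootstrapped, as the response body states) through 500 (several post-start hooks still report failed) to 200 about 4.5 seconds after the phases ran. A hand-run equivalent of the poll loop, with the endpoint taken from the log (-k only because the cluster CA is not in the local trust store):

    until [ "$(curl -sk -o /dev/null -w '%{http_code}' https://192.168.72.245:8443/healthz)" = "200" ]; do
      sleep 0.5
    done
    echo "apiserver healthy"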
	I0603 12:07:55.670391   72964 cni.go:84] Creating CNI manager for ""
	I0603 12:07:55.670397   72964 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0603 12:07:55.672360   72964 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0603 12:07:55.113539   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:07:57.613236   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:07:55.673720   72964 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0603 12:07:55.686773   72964 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0603 12:07:55.716937   72964 system_pods.go:43] waiting for kube-system pods to appear ...
	I0603 12:07:55.729237   72964 system_pods.go:59] 8 kube-system pods found
	I0603 12:07:55.729267   72964 system_pods.go:61] "coredns-7db6d8ff4d-thrfl" [efc31931-5040-4bb9-92e0-cdda477b38b2] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0603 12:07:55.729274   72964 system_pods.go:61] "etcd-embed-certs-725022" [47be7787-e8ae-4a63-9209-943edeec91b6] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0603 12:07:55.729281   72964 system_pods.go:61] "kube-apiserver-embed-certs-725022" [2812f362-ddb8-4f45-bdfe-ba5d90f3b33f] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0603 12:07:55.729287   72964 system_pods.go:61] "kube-controller-manager-embed-certs-725022" [97666e49-31ac-41c0-a49c-0db51d6c07b6] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0603 12:07:55.729294   72964 system_pods.go:61] "kube-proxy-d5ztj" [854c88f3-f0ab-4885-95a0-8134db48fc84] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0603 12:07:55.729300   72964 system_pods.go:61] "kube-scheduler-embed-certs-725022" [df602caf-2ca4-4963-b724-5a6e8de65c78] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0603 12:07:55.729306   72964 system_pods.go:61] "metrics-server-569cc877fc-8jrnd" [3087c05b-9a8e-4bf7-bbe7-79f3c5540bf7] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0603 12:07:55.729313   72964 system_pods.go:61] "storage-provisioner" [68eeb37a-7098-4e87-8384-3399c2bbc583] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0603 12:07:55.729319   72964 system_pods.go:74] duration metric: took 12.368001ms to wait for pod list to return data ...
	I0603 12:07:55.729329   72964 node_conditions.go:102] verifying NodePressure condition ...
	I0603 12:07:55.733006   72964 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0603 12:07:55.733024   72964 node_conditions.go:123] node cpu capacity is 2
	I0603 12:07:55.733033   72964 node_conditions.go:105] duration metric: took 3.699303ms to run NodePressure ...
	I0603 12:07:55.733047   72964 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0603 12:07:56.040149   72964 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0603 12:07:56.050355   72964 kubeadm.go:733] kubelet initialised
	I0603 12:07:56.050376   72964 kubeadm.go:734] duration metric: took 10.199837ms waiting for restarted kubelet to initialise ...
	I0603 12:07:56.050383   72964 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0603 12:07:56.055536   72964 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-thrfl" in "kube-system" namespace to be "Ready" ...
	I0603 12:07:58.062682   72964 pod_ready.go:102] pod "coredns-7db6d8ff4d-thrfl" in "kube-system" namespace has status "Ready":"False"
	I0603 12:07:55.438798   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:07:55.938263   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:07:56.438870   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:07:56.938915   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:07:57.438799   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:07:57.938972   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:07:58.438367   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:07:58.939045   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:07:59.439020   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:07:59.938716   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:07:57.581813   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:08:00.080226   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:08:00.113886   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:08:02.613795   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:08:00.062724   72964 pod_ready.go:102] pod "coredns-7db6d8ff4d-thrfl" in "kube-system" namespace has status "Ready":"False"
	I0603 12:08:02.062937   72964 pod_ready.go:102] pod "coredns-7db6d8ff4d-thrfl" in "kube-system" namespace has status "Ready":"False"
	I0603 12:08:04.565302   72964 pod_ready.go:102] pod "coredns-7db6d8ff4d-thrfl" in "kube-system" namespace has status "Ready":"False"
	I0603 12:08:00.438789   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:00.938973   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:01.439098   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:01.938892   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:02.438978   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:02.938317   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:03.438969   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:03.938274   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:04.438255   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:04.938545   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:02.081713   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:08:04.082219   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:08:05.112940   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:08:07.113191   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:08:07.075333   72964 pod_ready.go:92] pod "coredns-7db6d8ff4d-thrfl" in "kube-system" namespace has status "Ready":"True"
	I0603 12:08:07.075361   72964 pod_ready.go:81] duration metric: took 11.019801293s for pod "coredns-7db6d8ff4d-thrfl" in "kube-system" namespace to be "Ready" ...
	I0603 12:08:07.075375   72964 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-725022" in "kube-system" namespace to be "Ready" ...
	I0603 12:08:08.583435   72964 pod_ready.go:92] pod "etcd-embed-certs-725022" in "kube-system" namespace has status "Ready":"True"
	I0603 12:08:08.583459   72964 pod_ready.go:81] duration metric: took 1.508076213s for pod "etcd-embed-certs-725022" in "kube-system" namespace to be "Ready" ...
	I0603 12:08:08.583468   72964 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-725022" in "kube-system" namespace to be "Ready" ...
	I0603 12:08:08.588791   72964 pod_ready.go:92] pod "kube-apiserver-embed-certs-725022" in "kube-system" namespace has status "Ready":"True"
	I0603 12:08:08.588817   72964 pod_ready.go:81] duration metric: took 5.342068ms for pod "kube-apiserver-embed-certs-725022" in "kube-system" namespace to be "Ready" ...
	I0603 12:08:08.588836   72964 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-725022" in "kube-system" namespace to be "Ready" ...
	I0603 12:08:08.593258   72964 pod_ready.go:92] pod "kube-controller-manager-embed-certs-725022" in "kube-system" namespace has status "Ready":"True"
	I0603 12:08:08.593279   72964 pod_ready.go:81] duration metric: took 4.43483ms for pod "kube-controller-manager-embed-certs-725022" in "kube-system" namespace to be "Ready" ...
	I0603 12:08:08.593292   72964 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-d5ztj" in "kube-system" namespace to be "Ready" ...
	I0603 12:08:08.601106   72964 pod_ready.go:92] pod "kube-proxy-d5ztj" in "kube-system" namespace has status "Ready":"True"
	I0603 12:08:08.601125   72964 pod_ready.go:81] duration metric: took 7.826962ms for pod "kube-proxy-d5ztj" in "kube-system" namespace to be "Ready" ...
	I0603 12:08:08.601133   72964 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-725022" in "kube-system" namespace to be "Ready" ...
	I0603 12:08:08.660242   72964 pod_ready.go:92] pod "kube-scheduler-embed-certs-725022" in "kube-system" namespace has status "Ready":"True"
	I0603 12:08:08.660275   72964 pod_ready.go:81] duration metric: took 59.134528ms for pod "kube-scheduler-embed-certs-725022" in "kube-system" namespace to be "Ready" ...
	I0603 12:08:08.660297   72964 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace to be "Ready" ...
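The pod_ready waits above poll each system pod's Ready condition until it flips to True (coredns took about 11s after the restart; the metrics-server pod is still not Ready by the end of this excerpt). A hypothetical hand-run equivalent with kubectl, not what the log itself executes, would be:

    kubectl --context embed-certs-725022 -n kube-system \
      wait --for=condition=Ready pod/coredns-7db6d8ff4d-thrfl --timeout=4m0s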
	I0603 12:08:05.438368   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:05.938174   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:06.438995   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:06.939167   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:07.438451   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:07.938651   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:08.438892   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:08.938182   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:09.438548   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:09.938352   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:06.580980   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:08:08.583476   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:08:09.612231   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:08:11.613131   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:08:14.115179   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:08:10.667171   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:08:13.166284   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:08:10.438932   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:10.938156   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:11.438911   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:11.939064   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:12.438578   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:12.938389   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:13.438469   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:13.939000   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:14.438219   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:14.938949   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:11.081492   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:08:13.581052   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:08:16.612649   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:08:19.112795   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:08:15.166468   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:08:17.166591   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:08:19.666737   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:08:15.438709   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:15.938471   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:16.438909   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:16.939131   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:17.438995   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:17.938810   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:18.438615   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:18.938920   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:19.438966   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:19.938696   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:15.581276   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:08:17.581764   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:08:19.582048   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:08:21.116274   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:08:23.613288   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:08:21.667736   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:08:23.667798   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:08:20.438818   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:20.938625   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:21.439129   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:21.938488   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:22.438452   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:22.938328   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:23.438557   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:23.938427   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:24.438391   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:24.939088   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:22.080444   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:08:24.081387   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:08:26.113843   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:08:28.612076   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:08:26.165833   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:08:28.169171   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:08:25.439153   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:25.939073   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:26.438157   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:26.938755   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:27.438244   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:27.938149   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:28.439131   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:28.938855   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:29.439027   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:29.938159   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:26.081716   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:08:28.582162   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:08:30.613632   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:08:33.111746   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:08:30.667602   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:08:33.168233   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:08:30.438727   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:30.938281   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:31.438203   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:31.938903   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:32.438731   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:32.938479   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:33.438133   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 12:08:33.438202   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 12:08:33.480006   73662 cri.go:89] found id: ""
	I0603 12:08:33.480044   73662 logs.go:276] 0 containers: []
	W0603 12:08:33.480056   73662 logs.go:278] No container was found matching "kube-apiserver"
	I0603 12:08:33.480066   73662 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 12:08:33.480126   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 12:08:33.519446   73662 cri.go:89] found id: ""
	I0603 12:08:33.519469   73662 logs.go:276] 0 containers: []
	W0603 12:08:33.519476   73662 logs.go:278] No container was found matching "etcd"
	I0603 12:08:33.519480   73662 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 12:08:33.519536   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 12:08:33.553602   73662 cri.go:89] found id: ""
	I0603 12:08:33.553624   73662 logs.go:276] 0 containers: []
	W0603 12:08:33.553631   73662 logs.go:278] No container was found matching "coredns"
	I0603 12:08:33.553637   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 12:08:33.553692   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 12:08:33.588061   73662 cri.go:89] found id: ""
	I0603 12:08:33.588085   73662 logs.go:276] 0 containers: []
	W0603 12:08:33.588094   73662 logs.go:278] No container was found matching "kube-scheduler"
	I0603 12:08:33.588103   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 12:08:33.588155   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 12:08:33.623960   73662 cri.go:89] found id: ""
	I0603 12:08:33.623983   73662 logs.go:276] 0 containers: []
	W0603 12:08:33.623993   73662 logs.go:278] No container was found matching "kube-proxy"
	I0603 12:08:33.624000   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 12:08:33.624071   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 12:08:33.658829   73662 cri.go:89] found id: ""
	I0603 12:08:33.658873   73662 logs.go:276] 0 containers: []
	W0603 12:08:33.658885   73662 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 12:08:33.658893   73662 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 12:08:33.658956   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 12:08:33.699501   73662 cri.go:89] found id: ""
	I0603 12:08:33.699526   73662 logs.go:276] 0 containers: []
	W0603 12:08:33.699536   73662 logs.go:278] No container was found matching "kindnet"
	I0603 12:08:33.699544   73662 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 12:08:33.699601   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 12:08:33.732293   73662 cri.go:89] found id: ""
	I0603 12:08:33.732327   73662 logs.go:276] 0 containers: []
	W0603 12:08:33.732338   73662 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 12:08:33.732348   73662 logs.go:123] Gathering logs for kubelet ...
	I0603 12:08:33.732361   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 12:08:33.783990   73662 logs.go:123] Gathering logs for dmesg ...
	I0603 12:08:33.784027   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 12:08:33.800684   73662 logs.go:123] Gathering logs for describe nodes ...
	I0603 12:08:33.800711   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 12:08:33.939661   73662 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 12:08:33.939685   73662 logs.go:123] Gathering logs for CRI-O ...
	I0603 12:08:33.939699   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 12:08:34.006442   73662 logs.go:123] Gathering logs for container status ...
	I0603 12:08:34.006473   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
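Because no control-plane containers are found for this profile (process 73662, a v1.20.0 cluster whose apiserver never answers on localhost:8443), the tooling falls back to collecting diagnostics directly from the node. The commands it runs, gathered here for reference:

    sudo crictl ps -a --quiet --name=kube-apiserver     # container IDs only, running or exited
    sudo journalctl -u kubelet -n 400                   # last 400 kubelet log lines
    sudo journalctl -u crio -n 400                      # last 400 CRI-O log lines
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
    sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig
    sudo `which crictl || echo crictl` ps -a || sudo docker ps -a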
	I0603 12:08:31.081400   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:08:33.582139   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:08:35.112488   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:08:37.113080   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:08:35.666988   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:08:38.166862   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:08:36.549129   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:36.562476   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 12:08:36.562536   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 12:08:36.600035   73662 cri.go:89] found id: ""
	I0603 12:08:36.600074   73662 logs.go:276] 0 containers: []
	W0603 12:08:36.600084   73662 logs.go:278] No container was found matching "kube-apiserver"
	I0603 12:08:36.600091   73662 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 12:08:36.600147   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 12:08:36.661954   73662 cri.go:89] found id: ""
	I0603 12:08:36.661981   73662 logs.go:276] 0 containers: []
	W0603 12:08:36.661989   73662 logs.go:278] No container was found matching "etcd"
	I0603 12:08:36.661996   73662 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 12:08:36.662082   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 12:08:36.699538   73662 cri.go:89] found id: ""
	I0603 12:08:36.699561   73662 logs.go:276] 0 containers: []
	W0603 12:08:36.699569   73662 logs.go:278] No container was found matching "coredns"
	I0603 12:08:36.699574   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 12:08:36.699619   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 12:08:36.735256   73662 cri.go:89] found id: ""
	I0603 12:08:36.735283   73662 logs.go:276] 0 containers: []
	W0603 12:08:36.735291   73662 logs.go:278] No container was found matching "kube-scheduler"
	I0603 12:08:36.735296   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 12:08:36.735356   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 12:08:36.779862   73662 cri.go:89] found id: ""
	I0603 12:08:36.779888   73662 logs.go:276] 0 containers: []
	W0603 12:08:36.779895   73662 logs.go:278] No container was found matching "kube-proxy"
	I0603 12:08:36.779900   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 12:08:36.779946   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 12:08:36.818146   73662 cri.go:89] found id: ""
	I0603 12:08:36.818180   73662 logs.go:276] 0 containers: []
	W0603 12:08:36.818190   73662 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 12:08:36.818198   73662 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 12:08:36.818256   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 12:08:36.855408   73662 cri.go:89] found id: ""
	I0603 12:08:36.855436   73662 logs.go:276] 0 containers: []
	W0603 12:08:36.855447   73662 logs.go:278] No container was found matching "kindnet"
	I0603 12:08:36.855455   73662 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 12:08:36.855521   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 12:08:36.891656   73662 cri.go:89] found id: ""
	I0603 12:08:36.891686   73662 logs.go:276] 0 containers: []
	W0603 12:08:36.891697   73662 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 12:08:36.891709   73662 logs.go:123] Gathering logs for container status ...
	I0603 12:08:36.891725   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 12:08:36.937992   73662 logs.go:123] Gathering logs for kubelet ...
	I0603 12:08:36.938025   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 12:08:36.992422   73662 logs.go:123] Gathering logs for dmesg ...
	I0603 12:08:36.992456   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 12:08:37.007064   73662 logs.go:123] Gathering logs for describe nodes ...
	I0603 12:08:37.007093   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 12:08:37.088103   73662 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 12:08:37.088124   73662 logs.go:123] Gathering logs for CRI-O ...
	I0603 12:08:37.088136   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 12:08:39.660794   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:39.674617   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 12:08:39.674694   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 12:08:39.711446   73662 cri.go:89] found id: ""
	I0603 12:08:39.711482   73662 logs.go:276] 0 containers: []
	W0603 12:08:39.711493   73662 logs.go:278] No container was found matching "kube-apiserver"
	I0603 12:08:39.711501   73662 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 12:08:39.711565   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 12:08:39.745918   73662 cri.go:89] found id: ""
	I0603 12:08:39.745947   73662 logs.go:276] 0 containers: []
	W0603 12:08:39.745957   73662 logs.go:278] No container was found matching "etcd"
	I0603 12:08:39.745964   73662 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 12:08:39.746013   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 12:08:39.780713   73662 cri.go:89] found id: ""
	I0603 12:08:39.780739   73662 logs.go:276] 0 containers: []
	W0603 12:08:39.780760   73662 logs.go:278] No container was found matching "coredns"
	I0603 12:08:39.780777   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 12:08:39.780839   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 12:08:39.815657   73662 cri.go:89] found id: ""
	I0603 12:08:39.815685   73662 logs.go:276] 0 containers: []
	W0603 12:08:39.815696   73662 logs.go:278] No container was found matching "kube-scheduler"
	I0603 12:08:39.815703   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 12:08:39.815769   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 12:08:39.849403   73662 cri.go:89] found id: ""
	I0603 12:08:39.849439   73662 logs.go:276] 0 containers: []
	W0603 12:08:39.849449   73662 logs.go:278] No container was found matching "kube-proxy"
	I0603 12:08:39.849456   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 12:08:39.849524   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 12:08:39.884830   73662 cri.go:89] found id: ""
	I0603 12:08:39.884876   73662 logs.go:276] 0 containers: []
	W0603 12:08:39.884887   73662 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 12:08:39.884894   73662 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 12:08:39.884954   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 12:08:39.917820   73662 cri.go:89] found id: ""
	I0603 12:08:39.917853   73662 logs.go:276] 0 containers: []
	W0603 12:08:39.917863   73662 logs.go:278] No container was found matching "kindnet"
	I0603 12:08:39.917871   73662 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 12:08:39.917928   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 12:08:39.955294   73662 cri.go:89] found id: ""
	I0603 12:08:39.955330   73662 logs.go:276] 0 containers: []
	W0603 12:08:39.955340   73662 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 12:08:39.955350   73662 logs.go:123] Gathering logs for container status ...
	I0603 12:08:39.955364   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 12:08:39.997553   73662 logs.go:123] Gathering logs for kubelet ...
	I0603 12:08:39.997577   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 12:08:40.052216   73662 logs.go:123] Gathering logs for dmesg ...
	I0603 12:08:40.052251   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 12:08:40.066377   73662 logs.go:123] Gathering logs for describe nodes ...
	I0603 12:08:40.066405   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0603 12:08:36.080739   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:08:38.580681   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:08:39.611998   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:08:41.613058   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:08:44.112634   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:08:40.168134   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:08:42.666329   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:08:44.666738   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	W0603 12:08:40.145631   73662 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 12:08:40.145653   73662 logs.go:123] Gathering logs for CRI-O ...
	I0603 12:08:40.145668   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 12:08:42.718782   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:42.732121   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 12:08:42.732197   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 12:08:42.766418   73662 cri.go:89] found id: ""
	I0603 12:08:42.766443   73662 logs.go:276] 0 containers: []
	W0603 12:08:42.766451   73662 logs.go:278] No container was found matching "kube-apiserver"
	I0603 12:08:42.766456   73662 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 12:08:42.766503   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 12:08:42.809790   73662 cri.go:89] found id: ""
	I0603 12:08:42.809821   73662 logs.go:276] 0 containers: []
	W0603 12:08:42.809830   73662 logs.go:278] No container was found matching "etcd"
	I0603 12:08:42.809836   73662 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 12:08:42.809893   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 12:08:42.843410   73662 cri.go:89] found id: ""
	I0603 12:08:42.843439   73662 logs.go:276] 0 containers: []
	W0603 12:08:42.843446   73662 logs.go:278] No container was found matching "coredns"
	I0603 12:08:42.843456   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 12:08:42.843510   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 12:08:42.879150   73662 cri.go:89] found id: ""
	I0603 12:08:42.879177   73662 logs.go:276] 0 containers: []
	W0603 12:08:42.879186   73662 logs.go:278] No container was found matching "kube-scheduler"
	I0603 12:08:42.879193   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 12:08:42.879256   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 12:08:42.914565   73662 cri.go:89] found id: ""
	I0603 12:08:42.914598   73662 logs.go:276] 0 containers: []
	W0603 12:08:42.914609   73662 logs.go:278] No container was found matching "kube-proxy"
	I0603 12:08:42.914616   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 12:08:42.914680   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 12:08:42.949467   73662 cri.go:89] found id: ""
	I0603 12:08:42.949496   73662 logs.go:276] 0 containers: []
	W0603 12:08:42.949506   73662 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 12:08:42.949513   73662 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 12:08:42.949563   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 12:08:42.984235   73662 cri.go:89] found id: ""
	I0603 12:08:42.984257   73662 logs.go:276] 0 containers: []
	W0603 12:08:42.984264   73662 logs.go:278] No container was found matching "kindnet"
	I0603 12:08:42.984269   73662 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 12:08:42.984314   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 12:08:43.027786   73662 cri.go:89] found id: ""
	I0603 12:08:43.027816   73662 logs.go:276] 0 containers: []
	W0603 12:08:43.027827   73662 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 12:08:43.027838   73662 logs.go:123] Gathering logs for kubelet ...
	I0603 12:08:43.027852   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 12:08:43.099184   73662 logs.go:123] Gathering logs for dmesg ...
	I0603 12:08:43.099212   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 12:08:43.124733   73662 logs.go:123] Gathering logs for describe nodes ...
	I0603 12:08:43.124755   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 12:08:43.194716   73662 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 12:08:43.194741   73662 logs.go:123] Gathering logs for CRI-O ...
	I0603 12:08:43.194759   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 12:08:43.275948   73662 logs.go:123] Gathering logs for container status ...
	I0603 12:08:43.275982   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 12:08:41.080968   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:08:43.081892   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:08:45.082261   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:08:46.113795   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:08:48.612577   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:08:47.166497   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:08:49.167122   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:08:45.819178   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:45.832301   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 12:08:45.832391   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 12:08:45.867947   73662 cri.go:89] found id: ""
	I0603 12:08:45.867979   73662 logs.go:276] 0 containers: []
	W0603 12:08:45.867990   73662 logs.go:278] No container was found matching "kube-apiserver"
	I0603 12:08:45.867998   73662 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 12:08:45.868050   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 12:08:45.909498   73662 cri.go:89] found id: ""
	I0603 12:08:45.909529   73662 logs.go:276] 0 containers: []
	W0603 12:08:45.909541   73662 logs.go:278] No container was found matching "etcd"
	I0603 12:08:45.909552   73662 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 12:08:45.909614   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 12:08:45.942313   73662 cri.go:89] found id: ""
	I0603 12:08:45.942343   73662 logs.go:276] 0 containers: []
	W0603 12:08:45.942353   73662 logs.go:278] No container was found matching "coredns"
	I0603 12:08:45.942361   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 12:08:45.942425   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 12:08:45.976217   73662 cri.go:89] found id: ""
	I0603 12:08:45.976246   73662 logs.go:276] 0 containers: []
	W0603 12:08:45.976254   73662 logs.go:278] No container was found matching "kube-scheduler"
	I0603 12:08:45.976260   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 12:08:45.976306   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 12:08:46.010553   73662 cri.go:89] found id: ""
	I0603 12:08:46.010583   73662 logs.go:276] 0 containers: []
	W0603 12:08:46.010593   73662 logs.go:278] No container was found matching "kube-proxy"
	I0603 12:08:46.010599   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 12:08:46.010675   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 12:08:46.048459   73662 cri.go:89] found id: ""
	I0603 12:08:46.048481   73662 logs.go:276] 0 containers: []
	W0603 12:08:46.048489   73662 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 12:08:46.048495   73662 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 12:08:46.048540   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 12:08:46.084823   73662 cri.go:89] found id: ""
	I0603 12:08:46.084852   73662 logs.go:276] 0 containers: []
	W0603 12:08:46.084862   73662 logs.go:278] No container was found matching "kindnet"
	I0603 12:08:46.084869   73662 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 12:08:46.084920   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 12:08:46.129011   73662 cri.go:89] found id: ""
	I0603 12:08:46.129036   73662 logs.go:276] 0 containers: []
	W0603 12:08:46.129046   73662 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 12:08:46.129055   73662 logs.go:123] Gathering logs for dmesg ...
	I0603 12:08:46.129069   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 12:08:46.144145   73662 logs.go:123] Gathering logs for describe nodes ...
	I0603 12:08:46.144179   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 12:08:46.213800   73662 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 12:08:46.213826   73662 logs.go:123] Gathering logs for CRI-O ...
	I0603 12:08:46.213841   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 12:08:46.294423   73662 logs.go:123] Gathering logs for container status ...
	I0603 12:08:46.294453   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 12:08:46.334408   73662 logs.go:123] Gathering logs for kubelet ...
	I0603 12:08:46.334436   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 12:08:48.888798   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:48.901815   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 12:08:48.901876   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 12:08:48.935266   73662 cri.go:89] found id: ""
	I0603 12:08:48.935290   73662 logs.go:276] 0 containers: []
	W0603 12:08:48.935301   73662 logs.go:278] No container was found matching "kube-apiserver"
	I0603 12:08:48.935308   73662 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 12:08:48.935375   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 12:08:48.969640   73662 cri.go:89] found id: ""
	I0603 12:08:48.969666   73662 logs.go:276] 0 containers: []
	W0603 12:08:48.969673   73662 logs.go:278] No container was found matching "etcd"
	I0603 12:08:48.969678   73662 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 12:08:48.969739   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 12:08:49.003697   73662 cri.go:89] found id: ""
	I0603 12:08:49.003725   73662 logs.go:276] 0 containers: []
	W0603 12:08:49.003736   73662 logs.go:278] No container was found matching "coredns"
	I0603 12:08:49.003743   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 12:08:49.003800   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 12:08:49.037808   73662 cri.go:89] found id: ""
	I0603 12:08:49.037837   73662 logs.go:276] 0 containers: []
	W0603 12:08:49.037847   73662 logs.go:278] No container was found matching "kube-scheduler"
	I0603 12:08:49.037879   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 12:08:49.037947   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 12:08:49.071844   73662 cri.go:89] found id: ""
	I0603 12:08:49.071875   73662 logs.go:276] 0 containers: []
	W0603 12:08:49.071885   73662 logs.go:278] No container was found matching "kube-proxy"
	I0603 12:08:49.071892   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 12:08:49.071952   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 12:08:49.107907   73662 cri.go:89] found id: ""
	I0603 12:08:49.107934   73662 logs.go:276] 0 containers: []
	W0603 12:08:49.107945   73662 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 12:08:49.107952   73662 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 12:08:49.108012   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 12:08:49.144847   73662 cri.go:89] found id: ""
	I0603 12:08:49.144869   73662 logs.go:276] 0 containers: []
	W0603 12:08:49.144876   73662 logs.go:278] No container was found matching "kindnet"
	I0603 12:08:49.144882   73662 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 12:08:49.144944   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 12:08:49.183910   73662 cri.go:89] found id: ""
	I0603 12:08:49.183931   73662 logs.go:276] 0 containers: []
	W0603 12:08:49.183940   73662 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 12:08:49.183951   73662 logs.go:123] Gathering logs for kubelet ...
	I0603 12:08:49.183964   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 12:08:49.237344   73662 logs.go:123] Gathering logs for dmesg ...
	I0603 12:08:49.237376   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 12:08:49.251612   73662 logs.go:123] Gathering logs for describe nodes ...
	I0603 12:08:49.251636   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 12:08:49.317211   73662 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 12:08:49.317236   73662 logs.go:123] Gathering logs for CRI-O ...
	I0603 12:08:49.317251   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 12:08:49.394414   73662 logs.go:123] Gathering logs for container status ...
	I0603 12:08:49.394455   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 12:08:47.581577   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:08:50.080726   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:08:51.112151   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:08:53.112224   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:08:51.666596   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:08:54.166060   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:08:51.937686   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:51.950390   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 12:08:51.950466   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 12:08:51.984341   73662 cri.go:89] found id: ""
	I0603 12:08:51.984365   73662 logs.go:276] 0 containers: []
	W0603 12:08:51.984372   73662 logs.go:278] No container was found matching "kube-apiserver"
	I0603 12:08:51.984378   73662 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 12:08:51.984426   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 12:08:52.017828   73662 cri.go:89] found id: ""
	I0603 12:08:52.017857   73662 logs.go:276] 0 containers: []
	W0603 12:08:52.017866   73662 logs.go:278] No container was found matching "etcd"
	I0603 12:08:52.017872   73662 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 12:08:52.017918   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 12:08:52.057283   73662 cri.go:89] found id: ""
	I0603 12:08:52.057314   73662 logs.go:276] 0 containers: []
	W0603 12:08:52.057324   73662 logs.go:278] No container was found matching "coredns"
	I0603 12:08:52.057331   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 12:08:52.057391   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 12:08:52.102270   73662 cri.go:89] found id: ""
	I0603 12:08:52.102303   73662 logs.go:276] 0 containers: []
	W0603 12:08:52.102313   73662 logs.go:278] No container was found matching "kube-scheduler"
	I0603 12:08:52.102321   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 12:08:52.102383   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 12:08:52.137361   73662 cri.go:89] found id: ""
	I0603 12:08:52.137386   73662 logs.go:276] 0 containers: []
	W0603 12:08:52.137393   73662 logs.go:278] No container was found matching "kube-proxy"
	I0603 12:08:52.137399   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 12:08:52.137463   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 12:08:52.171765   73662 cri.go:89] found id: ""
	I0603 12:08:52.171791   73662 logs.go:276] 0 containers: []
	W0603 12:08:52.171800   73662 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 12:08:52.171807   73662 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 12:08:52.171854   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 12:08:52.204688   73662 cri.go:89] found id: ""
	I0603 12:08:52.204715   73662 logs.go:276] 0 containers: []
	W0603 12:08:52.204722   73662 logs.go:278] No container was found matching "kindnet"
	I0603 12:08:52.204728   73662 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 12:08:52.204780   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 12:08:52.242547   73662 cri.go:89] found id: ""
	I0603 12:08:52.242571   73662 logs.go:276] 0 containers: []
	W0603 12:08:52.242579   73662 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 12:08:52.242586   73662 logs.go:123] Gathering logs for CRI-O ...
	I0603 12:08:52.242599   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 12:08:52.319089   73662 logs.go:123] Gathering logs for container status ...
	I0603 12:08:52.319122   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 12:08:52.360879   73662 logs.go:123] Gathering logs for kubelet ...
	I0603 12:08:52.360910   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 12:08:52.413601   73662 logs.go:123] Gathering logs for dmesg ...
	I0603 12:08:52.413641   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 12:08:52.428336   73662 logs.go:123] Gathering logs for describe nodes ...
	I0603 12:08:52.428370   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 12:08:52.500089   73662 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 12:08:55.001244   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:55.015217   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 12:08:55.015286   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 12:08:55.055825   73662 cri.go:89] found id: ""
	I0603 12:08:55.055906   73662 logs.go:276] 0 containers: []
	W0603 12:08:55.055922   73662 logs.go:278] No container was found matching "kube-apiserver"
	I0603 12:08:55.055930   73662 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 12:08:55.055993   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 12:08:52.080957   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:08:54.081055   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:08:55.113083   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:08:57.612727   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:08:56.166588   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:08:58.167503   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:08:55.092456   73662 cri.go:89] found id: ""
	I0603 12:08:55.093688   73662 logs.go:276] 0 containers: []
	W0603 12:08:55.093711   73662 logs.go:278] No container was found matching "etcd"
	I0603 12:08:55.093723   73662 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 12:08:55.093787   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 12:08:55.131165   73662 cri.go:89] found id: ""
	I0603 12:08:55.131193   73662 logs.go:276] 0 containers: []
	W0603 12:08:55.131203   73662 logs.go:278] No container was found matching "coredns"
	I0603 12:08:55.131210   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 12:08:55.131260   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 12:08:55.168170   73662 cri.go:89] found id: ""
	I0603 12:08:55.168188   73662 logs.go:276] 0 containers: []
	W0603 12:08:55.168194   73662 logs.go:278] No container was found matching "kube-scheduler"
	I0603 12:08:55.168200   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 12:08:55.168247   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 12:08:55.203409   73662 cri.go:89] found id: ""
	I0603 12:08:55.203434   73662 logs.go:276] 0 containers: []
	W0603 12:08:55.203441   73662 logs.go:278] No container was found matching "kube-proxy"
	I0603 12:08:55.203446   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 12:08:55.203491   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 12:08:55.239971   73662 cri.go:89] found id: ""
	I0603 12:08:55.239997   73662 logs.go:276] 0 containers: []
	W0603 12:08:55.240009   73662 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 12:08:55.240016   73662 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 12:08:55.240077   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 12:08:55.275115   73662 cri.go:89] found id: ""
	I0603 12:08:55.275144   73662 logs.go:276] 0 containers: []
	W0603 12:08:55.275154   73662 logs.go:278] No container was found matching "kindnet"
	I0603 12:08:55.275162   73662 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 12:08:55.275221   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 12:08:55.309384   73662 cri.go:89] found id: ""
	I0603 12:08:55.309414   73662 logs.go:276] 0 containers: []
	W0603 12:08:55.309425   73662 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 12:08:55.309435   73662 logs.go:123] Gathering logs for dmesg ...
	I0603 12:08:55.309451   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 12:08:55.323455   73662 logs.go:123] Gathering logs for describe nodes ...
	I0603 12:08:55.323485   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 12:08:55.397581   73662 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 12:08:55.397606   73662 logs.go:123] Gathering logs for CRI-O ...
	I0603 12:08:55.397617   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 12:08:55.473046   73662 logs.go:123] Gathering logs for container status ...
	I0603 12:08:55.473079   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 12:08:55.515248   73662 logs.go:123] Gathering logs for kubelet ...
	I0603 12:08:55.515282   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 12:08:58.067416   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:58.081175   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 12:08:58.081241   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 12:08:58.121654   73662 cri.go:89] found id: ""
	I0603 12:08:58.121680   73662 logs.go:276] 0 containers: []
	W0603 12:08:58.121691   73662 logs.go:278] No container was found matching "kube-apiserver"
	I0603 12:08:58.121698   73662 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 12:08:58.121774   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 12:08:58.159599   73662 cri.go:89] found id: ""
	I0603 12:08:58.159623   73662 logs.go:276] 0 containers: []
	W0603 12:08:58.159631   73662 logs.go:278] No container was found matching "etcd"
	I0603 12:08:58.159636   73662 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 12:08:58.159689   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 12:08:58.197518   73662 cri.go:89] found id: ""
	I0603 12:08:58.197545   73662 logs.go:276] 0 containers: []
	W0603 12:08:58.197553   73662 logs.go:278] No container was found matching "coredns"
	I0603 12:08:58.197558   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 12:08:58.197603   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 12:08:58.232433   73662 cri.go:89] found id: ""
	I0603 12:08:58.232463   73662 logs.go:276] 0 containers: []
	W0603 12:08:58.232474   73662 logs.go:278] No container was found matching "kube-scheduler"
	I0603 12:08:58.232479   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 12:08:58.232529   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 12:08:58.268209   73662 cri.go:89] found id: ""
	I0603 12:08:58.268234   73662 logs.go:276] 0 containers: []
	W0603 12:08:58.268242   73662 logs.go:278] No container was found matching "kube-proxy"
	I0603 12:08:58.268248   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 12:08:58.268307   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 12:08:58.302091   73662 cri.go:89] found id: ""
	I0603 12:08:58.302118   73662 logs.go:276] 0 containers: []
	W0603 12:08:58.302129   73662 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 12:08:58.302136   73662 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 12:08:58.302195   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 12:08:58.336539   73662 cri.go:89] found id: ""
	I0603 12:08:58.336567   73662 logs.go:276] 0 containers: []
	W0603 12:08:58.336574   73662 logs.go:278] No container was found matching "kindnet"
	I0603 12:08:58.336579   73662 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 12:08:58.336627   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 12:08:58.369263   73662 cri.go:89] found id: ""
	I0603 12:08:58.369294   73662 logs.go:276] 0 containers: []
	W0603 12:08:58.369305   73662 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 12:08:58.369316   73662 logs.go:123] Gathering logs for container status ...
	I0603 12:08:58.369329   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 12:08:58.408651   73662 logs.go:123] Gathering logs for kubelet ...
	I0603 12:08:58.408683   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 12:08:58.463551   73662 logs.go:123] Gathering logs for dmesg ...
	I0603 12:08:58.463578   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 12:08:58.478781   73662 logs.go:123] Gathering logs for describe nodes ...
	I0603 12:08:58.478808   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 12:08:58.556604   73662 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 12:08:58.556631   73662 logs.go:123] Gathering logs for CRI-O ...
	I0603 12:08:58.556646   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 12:08:56.580284   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:08:58.582526   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:09:00.112533   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:09:02.113462   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:09:00.666282   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:09:02.666684   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:09:04.666822   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:09:01.135368   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:09:01.148448   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 12:09:01.148517   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 12:09:01.184913   73662 cri.go:89] found id: ""
	I0603 12:09:01.184936   73662 logs.go:276] 0 containers: []
	W0603 12:09:01.184947   73662 logs.go:278] No container was found matching "kube-apiserver"
	I0603 12:09:01.184955   73662 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 12:09:01.185017   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 12:09:01.221508   73662 cri.go:89] found id: ""
	I0603 12:09:01.221538   73662 logs.go:276] 0 containers: []
	W0603 12:09:01.221547   73662 logs.go:278] No container was found matching "etcd"
	I0603 12:09:01.221552   73662 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 12:09:01.221613   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 12:09:01.256588   73662 cri.go:89] found id: ""
	I0603 12:09:01.256617   73662 logs.go:276] 0 containers: []
	W0603 12:09:01.256627   73662 logs.go:278] No container was found matching "coredns"
	I0603 12:09:01.256634   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 12:09:01.256696   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 12:09:01.292874   73662 cri.go:89] found id: ""
	I0603 12:09:01.292898   73662 logs.go:276] 0 containers: []
	W0603 12:09:01.292906   73662 logs.go:278] No container was found matching "kube-scheduler"
	I0603 12:09:01.292913   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 12:09:01.292957   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 12:09:01.330607   73662 cri.go:89] found id: ""
	I0603 12:09:01.330636   73662 logs.go:276] 0 containers: []
	W0603 12:09:01.330646   73662 logs.go:278] No container was found matching "kube-proxy"
	I0603 12:09:01.330652   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 12:09:01.330698   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 12:09:01.366053   73662 cri.go:89] found id: ""
	I0603 12:09:01.366090   73662 logs.go:276] 0 containers: []
	W0603 12:09:01.366102   73662 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 12:09:01.366110   73662 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 12:09:01.366168   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 12:09:01.403446   73662 cri.go:89] found id: ""
	I0603 12:09:01.403476   73662 logs.go:276] 0 containers: []
	W0603 12:09:01.403489   73662 logs.go:278] No container was found matching "kindnet"
	I0603 12:09:01.403495   73662 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 12:09:01.403558   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 12:09:01.445413   73662 cri.go:89] found id: ""
	I0603 12:09:01.445444   73662 logs.go:276] 0 containers: []
	W0603 12:09:01.445456   73662 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 12:09:01.445467   73662 logs.go:123] Gathering logs for describe nodes ...
	I0603 12:09:01.445485   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 12:09:01.521804   73662 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 12:09:01.521831   73662 logs.go:123] Gathering logs for CRI-O ...
	I0603 12:09:01.521846   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 12:09:01.601841   73662 logs.go:123] Gathering logs for container status ...
	I0603 12:09:01.601869   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 12:09:01.642642   73662 logs.go:123] Gathering logs for kubelet ...
	I0603 12:09:01.642685   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 12:09:01.700512   73662 logs.go:123] Gathering logs for dmesg ...
	I0603 12:09:01.700547   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 12:09:04.216853   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:09:04.229827   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 12:09:04.229910   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 12:09:04.265194   73662 cri.go:89] found id: ""
	I0603 12:09:04.265223   73662 logs.go:276] 0 containers: []
	W0603 12:09:04.265230   73662 logs.go:278] No container was found matching "kube-apiserver"
	I0603 12:09:04.265235   73662 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 12:09:04.265294   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 12:09:04.301157   73662 cri.go:89] found id: ""
	I0603 12:09:04.301186   73662 logs.go:276] 0 containers: []
	W0603 12:09:04.301193   73662 logs.go:278] No container was found matching "etcd"
	I0603 12:09:04.301199   73662 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 12:09:04.301249   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 12:09:04.335992   73662 cri.go:89] found id: ""
	I0603 12:09:04.336014   73662 logs.go:276] 0 containers: []
	W0603 12:09:04.336024   73662 logs.go:278] No container was found matching "coredns"
	I0603 12:09:04.336031   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 12:09:04.336090   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 12:09:04.371342   73662 cri.go:89] found id: ""
	I0603 12:09:04.371375   73662 logs.go:276] 0 containers: []
	W0603 12:09:04.371386   73662 logs.go:278] No container was found matching "kube-scheduler"
	I0603 12:09:04.371393   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 12:09:04.371452   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 12:09:04.406439   73662 cri.go:89] found id: ""
	I0603 12:09:04.406466   73662 logs.go:276] 0 containers: []
	W0603 12:09:04.406476   73662 logs.go:278] No container was found matching "kube-proxy"
	I0603 12:09:04.406483   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 12:09:04.406540   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 12:09:04.438426   73662 cri.go:89] found id: ""
	I0603 12:09:04.438448   73662 logs.go:276] 0 containers: []
	W0603 12:09:04.438458   73662 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 12:09:04.438467   73662 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 12:09:04.438525   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 12:09:04.471465   73662 cri.go:89] found id: ""
	I0603 12:09:04.471494   73662 logs.go:276] 0 containers: []
	W0603 12:09:04.471504   73662 logs.go:278] No container was found matching "kindnet"
	I0603 12:09:04.471512   73662 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 12:09:04.471576   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 12:09:04.507994   73662 cri.go:89] found id: ""
	I0603 12:09:04.508016   73662 logs.go:276] 0 containers: []
	W0603 12:09:04.508023   73662 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 12:09:04.508031   73662 logs.go:123] Gathering logs for kubelet ...
	I0603 12:09:04.508042   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 12:09:04.558973   73662 logs.go:123] Gathering logs for dmesg ...
	I0603 12:09:04.559007   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 12:09:04.576157   73662 logs.go:123] Gathering logs for describe nodes ...
	I0603 12:09:04.576190   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 12:09:04.653262   73662 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 12:09:04.653282   73662 logs.go:123] Gathering logs for CRI-O ...
	I0603 12:09:04.653293   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 12:09:04.732195   73662 logs.go:123] Gathering logs for container status ...
	I0603 12:09:04.732228   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 12:09:01.081232   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:09:03.083123   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:09:05.083243   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:09:04.612842   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:09:07.113160   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:09:06.667720   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:09:09.167160   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:09:07.282253   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:09:07.296478   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 12:09:07.296549   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 12:09:07.331591   73662 cri.go:89] found id: ""
	I0603 12:09:07.331614   73662 logs.go:276] 0 containers: []
	W0603 12:09:07.331621   73662 logs.go:278] No container was found matching "kube-apiserver"
	I0603 12:09:07.331626   73662 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 12:09:07.331676   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 12:09:07.367333   73662 cri.go:89] found id: ""
	I0603 12:09:07.367356   73662 logs.go:276] 0 containers: []
	W0603 12:09:07.367363   73662 logs.go:278] No container was found matching "etcd"
	I0603 12:09:07.367369   73662 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 12:09:07.367426   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 12:09:07.406446   73662 cri.go:89] found id: ""
	I0603 12:09:07.406471   73662 logs.go:276] 0 containers: []
	W0603 12:09:07.406479   73662 logs.go:278] No container was found matching "coredns"
	I0603 12:09:07.406485   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 12:09:07.406544   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 12:09:07.441610   73662 cri.go:89] found id: ""
	I0603 12:09:07.441632   73662 logs.go:276] 0 containers: []
	W0603 12:09:07.441640   73662 logs.go:278] No container was found matching "kube-scheduler"
	I0603 12:09:07.441646   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 12:09:07.441699   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 12:09:07.476479   73662 cri.go:89] found id: ""
	I0603 12:09:07.476501   73662 logs.go:276] 0 containers: []
	W0603 12:09:07.476508   73662 logs.go:278] No container was found matching "kube-proxy"
	I0603 12:09:07.476513   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 12:09:07.476586   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 12:09:07.513712   73662 cri.go:89] found id: ""
	I0603 12:09:07.513740   73662 logs.go:276] 0 containers: []
	W0603 12:09:07.513750   73662 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 12:09:07.513758   73662 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 12:09:07.513816   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 12:09:07.552169   73662 cri.go:89] found id: ""
	I0603 12:09:07.552195   73662 logs.go:276] 0 containers: []
	W0603 12:09:07.552206   73662 logs.go:278] No container was found matching "kindnet"
	I0603 12:09:07.552213   73662 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 12:09:07.552274   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 12:09:07.591926   73662 cri.go:89] found id: ""
	I0603 12:09:07.591950   73662 logs.go:276] 0 containers: []
	W0603 12:09:07.591956   73662 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 12:09:07.591963   73662 logs.go:123] Gathering logs for describe nodes ...
	I0603 12:09:07.591974   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 12:09:07.672408   73662 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 12:09:07.672429   73662 logs.go:123] Gathering logs for CRI-O ...
	I0603 12:09:07.672440   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 12:09:07.752948   73662 logs.go:123] Gathering logs for container status ...
	I0603 12:09:07.752980   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 12:09:07.791942   73662 logs.go:123] Gathering logs for kubelet ...
	I0603 12:09:07.791975   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 12:09:07.849187   73662 logs.go:123] Gathering logs for dmesg ...
	I0603 12:09:07.849222   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 12:09:07.586314   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:09:10.082310   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:09:09.612757   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:09:11.612893   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:09:13.613395   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:09:11.669965   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:09:14.165493   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:09:10.364466   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:09:10.377895   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 12:09:10.377967   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 12:09:10.412039   73662 cri.go:89] found id: ""
	I0603 12:09:10.412062   73662 logs.go:276] 0 containers: []
	W0603 12:09:10.412070   73662 logs.go:278] No container was found matching "kube-apiserver"
	I0603 12:09:10.412082   73662 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 12:09:10.412137   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 12:09:10.444562   73662 cri.go:89] found id: ""
	I0603 12:09:10.444585   73662 logs.go:276] 0 containers: []
	W0603 12:09:10.444594   73662 logs.go:278] No container was found matching "etcd"
	I0603 12:09:10.444602   73662 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 12:09:10.444657   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 12:09:10.479651   73662 cri.go:89] found id: ""
	I0603 12:09:10.479674   73662 logs.go:276] 0 containers: []
	W0603 12:09:10.479681   73662 logs.go:278] No container was found matching "coredns"
	I0603 12:09:10.479687   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 12:09:10.479742   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 12:09:10.518978   73662 cri.go:89] found id: ""
	I0603 12:09:10.519000   73662 logs.go:276] 0 containers: []
	W0603 12:09:10.519011   73662 logs.go:278] No container was found matching "kube-scheduler"
	I0603 12:09:10.519019   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 12:09:10.519100   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 12:09:10.553848   73662 cri.go:89] found id: ""
	I0603 12:09:10.553873   73662 logs.go:276] 0 containers: []
	W0603 12:09:10.553880   73662 logs.go:278] No container was found matching "kube-proxy"
	I0603 12:09:10.553885   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 12:09:10.553933   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 12:09:10.592081   73662 cri.go:89] found id: ""
	I0603 12:09:10.592107   73662 logs.go:276] 0 containers: []
	W0603 12:09:10.592116   73662 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 12:09:10.592124   73662 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 12:09:10.592176   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 12:09:10.629138   73662 cri.go:89] found id: ""
	I0603 12:09:10.629164   73662 logs.go:276] 0 containers: []
	W0603 12:09:10.629175   73662 logs.go:278] No container was found matching "kindnet"
	I0603 12:09:10.629181   73662 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 12:09:10.629233   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 12:09:10.666660   73662 cri.go:89] found id: ""
	I0603 12:09:10.666686   73662 logs.go:276] 0 containers: []
	W0603 12:09:10.666695   73662 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 12:09:10.666705   73662 logs.go:123] Gathering logs for CRI-O ...
	I0603 12:09:10.666723   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 12:09:10.747856   73662 logs.go:123] Gathering logs for container status ...
	I0603 12:09:10.747892   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 12:09:10.792403   73662 logs.go:123] Gathering logs for kubelet ...
	I0603 12:09:10.792442   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 12:09:10.844484   73662 logs.go:123] Gathering logs for dmesg ...
	I0603 12:09:10.844520   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 12:09:10.857822   73662 logs.go:123] Gathering logs for describe nodes ...
	I0603 12:09:10.857848   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 12:09:10.927434   73662 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 12:09:13.428260   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:09:13.442354   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 12:09:13.442418   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 12:09:13.480908   73662 cri.go:89] found id: ""
	I0603 12:09:13.480938   73662 logs.go:276] 0 containers: []
	W0603 12:09:13.480948   73662 logs.go:278] No container was found matching "kube-apiserver"
	I0603 12:09:13.480953   73662 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 12:09:13.481002   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 12:09:13.513942   73662 cri.go:89] found id: ""
	I0603 12:09:13.513966   73662 logs.go:276] 0 containers: []
	W0603 12:09:13.513979   73662 logs.go:278] No container was found matching "etcd"
	I0603 12:09:13.513985   73662 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 12:09:13.514042   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 12:09:13.548849   73662 cri.go:89] found id: ""
	I0603 12:09:13.548881   73662 logs.go:276] 0 containers: []
	W0603 12:09:13.548892   73662 logs.go:278] No container was found matching "coredns"
	I0603 12:09:13.548900   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 12:09:13.548961   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 12:09:13.587857   73662 cri.go:89] found id: ""
	I0603 12:09:13.587880   73662 logs.go:276] 0 containers: []
	W0603 12:09:13.587887   73662 logs.go:278] No container was found matching "kube-scheduler"
	I0603 12:09:13.587893   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 12:09:13.587941   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 12:09:13.623386   73662 cri.go:89] found id: ""
	I0603 12:09:13.623408   73662 logs.go:276] 0 containers: []
	W0603 12:09:13.623415   73662 logs.go:278] No container was found matching "kube-proxy"
	I0603 12:09:13.623421   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 12:09:13.623473   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 12:09:13.662721   73662 cri.go:89] found id: ""
	I0603 12:09:13.662755   73662 logs.go:276] 0 containers: []
	W0603 12:09:13.662774   73662 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 12:09:13.662782   73662 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 12:09:13.662847   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 12:09:13.697244   73662 cri.go:89] found id: ""
	I0603 12:09:13.697272   73662 logs.go:276] 0 containers: []
	W0603 12:09:13.697279   73662 logs.go:278] No container was found matching "kindnet"
	I0603 12:09:13.697284   73662 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 12:09:13.697342   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 12:09:13.734987   73662 cri.go:89] found id: ""
	I0603 12:09:13.735014   73662 logs.go:276] 0 containers: []
	W0603 12:09:13.735020   73662 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 12:09:13.735030   73662 logs.go:123] Gathering logs for kubelet ...
	I0603 12:09:13.735055   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 12:09:13.792422   73662 logs.go:123] Gathering logs for dmesg ...
	I0603 12:09:13.792463   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 12:09:13.807174   73662 logs.go:123] Gathering logs for describe nodes ...
	I0603 12:09:13.807220   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 12:09:13.880940   73662 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 12:09:13.880962   73662 logs.go:123] Gathering logs for CRI-O ...
	I0603 12:09:13.880976   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 12:09:13.970760   73662 logs.go:123] Gathering logs for container status ...
	I0603 12:09:13.970800   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 12:09:12.581261   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:09:14.581335   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:09:16.113403   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:09:18.113699   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:09:16.166578   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:09:18.167436   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:09:16.519306   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:09:16.534161   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 12:09:16.534213   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 12:09:16.571503   73662 cri.go:89] found id: ""
	I0603 12:09:16.571533   73662 logs.go:276] 0 containers: []
	W0603 12:09:16.571544   73662 logs.go:278] No container was found matching "kube-apiserver"
	I0603 12:09:16.571553   73662 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 12:09:16.571603   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 12:09:16.610388   73662 cri.go:89] found id: ""
	I0603 12:09:16.610425   73662 logs.go:276] 0 containers: []
	W0603 12:09:16.610434   73662 logs.go:278] No container was found matching "etcd"
	I0603 12:09:16.610442   73662 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 12:09:16.610501   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 12:09:16.654132   73662 cri.go:89] found id: ""
	I0603 12:09:16.654173   73662 logs.go:276] 0 containers: []
	W0603 12:09:16.654184   73662 logs.go:278] No container was found matching "coredns"
	I0603 12:09:16.654196   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 12:09:16.654288   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 12:09:16.695091   73662 cri.go:89] found id: ""
	I0603 12:09:16.695120   73662 logs.go:276] 0 containers: []
	W0603 12:09:16.695130   73662 logs.go:278] No container was found matching "kube-scheduler"
	I0603 12:09:16.695137   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 12:09:16.695198   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 12:09:16.729916   73662 cri.go:89] found id: ""
	I0603 12:09:16.729941   73662 logs.go:276] 0 containers: []
	W0603 12:09:16.729950   73662 logs.go:278] No container was found matching "kube-proxy"
	I0603 12:09:16.729958   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 12:09:16.730019   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 12:09:16.763653   73662 cri.go:89] found id: ""
	I0603 12:09:16.763675   73662 logs.go:276] 0 containers: []
	W0603 12:09:16.763683   73662 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 12:09:16.763688   73662 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 12:09:16.763734   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 12:09:16.801834   73662 cri.go:89] found id: ""
	I0603 12:09:16.801867   73662 logs.go:276] 0 containers: []
	W0603 12:09:16.801877   73662 logs.go:278] No container was found matching "kindnet"
	I0603 12:09:16.801885   73662 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 12:09:16.801946   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 12:09:16.836959   73662 cri.go:89] found id: ""
	I0603 12:09:16.836983   73662 logs.go:276] 0 containers: []
	W0603 12:09:16.836995   73662 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 12:09:16.837006   73662 logs.go:123] Gathering logs for dmesg ...
	I0603 12:09:16.837023   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 12:09:16.850264   73662 logs.go:123] Gathering logs for describe nodes ...
	I0603 12:09:16.850294   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 12:09:16.943870   73662 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 12:09:16.943897   73662 logs.go:123] Gathering logs for CRI-O ...
	I0603 12:09:16.943914   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 12:09:17.028230   73662 logs.go:123] Gathering logs for container status ...
	I0603 12:09:17.028269   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 12:09:17.071944   73662 logs.go:123] Gathering logs for kubelet ...
	I0603 12:09:17.071975   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 12:09:19.627246   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:09:19.641441   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 12:09:19.641513   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 12:09:19.680111   73662 cri.go:89] found id: ""
	I0603 12:09:19.680135   73662 logs.go:276] 0 containers: []
	W0603 12:09:19.680144   73662 logs.go:278] No container was found matching "kube-apiserver"
	I0603 12:09:19.680152   73662 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 12:09:19.680210   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 12:09:19.717357   73662 cri.go:89] found id: ""
	I0603 12:09:19.717386   73662 logs.go:276] 0 containers: []
	W0603 12:09:19.717396   73662 logs.go:278] No container was found matching "etcd"
	I0603 12:09:19.717403   73662 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 12:09:19.717467   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 12:09:19.753540   73662 cri.go:89] found id: ""
	I0603 12:09:19.753567   73662 logs.go:276] 0 containers: []
	W0603 12:09:19.753575   73662 logs.go:278] No container was found matching "coredns"
	I0603 12:09:19.753581   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 12:09:19.753627   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 12:09:19.790421   73662 cri.go:89] found id: ""
	I0603 12:09:19.790454   73662 logs.go:276] 0 containers: []
	W0603 12:09:19.790466   73662 logs.go:278] No container was found matching "kube-scheduler"
	I0603 12:09:19.790474   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 12:09:19.790532   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 12:09:19.828908   73662 cri.go:89] found id: ""
	I0603 12:09:19.828932   73662 logs.go:276] 0 containers: []
	W0603 12:09:19.828940   73662 logs.go:278] No container was found matching "kube-proxy"
	I0603 12:09:19.828946   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 12:09:19.829007   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 12:09:19.864576   73662 cri.go:89] found id: ""
	I0603 12:09:19.864609   73662 logs.go:276] 0 containers: []
	W0603 12:09:19.864618   73662 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 12:09:19.864624   73662 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 12:09:19.864679   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 12:09:19.899294   73662 cri.go:89] found id: ""
	I0603 12:09:19.899317   73662 logs.go:276] 0 containers: []
	W0603 12:09:19.899324   73662 logs.go:278] No container was found matching "kindnet"
	I0603 12:09:19.899330   73662 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 12:09:19.899397   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 12:09:19.933855   73662 cri.go:89] found id: ""
	I0603 12:09:19.933883   73662 logs.go:276] 0 containers: []
	W0603 12:09:19.933894   73662 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 12:09:19.933905   73662 logs.go:123] Gathering logs for container status ...
	I0603 12:09:19.933920   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 12:09:19.972676   73662 logs.go:123] Gathering logs for kubelet ...
	I0603 12:09:19.972703   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 12:09:20.025882   73662 logs.go:123] Gathering logs for dmesg ...
	I0603 12:09:20.025913   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 12:09:20.040706   73662 logs.go:123] Gathering logs for describe nodes ...
	I0603 12:09:20.040733   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0603 12:09:17.080807   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:09:19.581996   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:09:20.612561   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:09:23.112691   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:09:20.667356   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:09:23.167076   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	W0603 12:09:20.115483   73662 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 12:09:20.115506   73662 logs.go:123] Gathering logs for CRI-O ...
	I0603 12:09:20.115521   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 12:09:22.692138   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:09:22.706079   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 12:09:22.706155   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 12:09:22.742755   73662 cri.go:89] found id: ""
	I0603 12:09:22.742776   73662 logs.go:276] 0 containers: []
	W0603 12:09:22.742784   73662 logs.go:278] No container was found matching "kube-apiserver"
	I0603 12:09:22.742789   73662 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 12:09:22.742845   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 12:09:22.779522   73662 cri.go:89] found id: ""
	I0603 12:09:22.779549   73662 logs.go:276] 0 containers: []
	W0603 12:09:22.779557   73662 logs.go:278] No container was found matching "etcd"
	I0603 12:09:22.779563   73662 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 12:09:22.779615   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 12:09:22.813864   73662 cri.go:89] found id: ""
	I0603 12:09:22.813892   73662 logs.go:276] 0 containers: []
	W0603 12:09:22.813902   73662 logs.go:278] No container was found matching "coredns"
	I0603 12:09:22.813909   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 12:09:22.813967   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 12:09:22.848111   73662 cri.go:89] found id: ""
	I0603 12:09:22.848138   73662 logs.go:276] 0 containers: []
	W0603 12:09:22.848149   73662 logs.go:278] No container was found matching "kube-scheduler"
	I0603 12:09:22.848157   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 12:09:22.848213   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 12:09:22.899733   73662 cri.go:89] found id: ""
	I0603 12:09:22.899765   73662 logs.go:276] 0 containers: []
	W0603 12:09:22.899775   73662 logs.go:278] No container was found matching "kube-proxy"
	I0603 12:09:22.899781   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 12:09:22.899846   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 12:09:22.941237   73662 cri.go:89] found id: ""
	I0603 12:09:22.941266   73662 logs.go:276] 0 containers: []
	W0603 12:09:22.941276   73662 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 12:09:22.941282   73662 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 12:09:22.941330   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 12:09:22.981500   73662 cri.go:89] found id: ""
	I0603 12:09:22.981523   73662 logs.go:276] 0 containers: []
	W0603 12:09:22.981531   73662 logs.go:278] No container was found matching "kindnet"
	I0603 12:09:22.981536   73662 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 12:09:22.981580   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 12:09:23.016893   73662 cri.go:89] found id: ""
	I0603 12:09:23.016921   73662 logs.go:276] 0 containers: []
	W0603 12:09:23.016933   73662 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 12:09:23.016943   73662 logs.go:123] Gathering logs for container status ...
	I0603 12:09:23.016958   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 12:09:23.056019   73662 logs.go:123] Gathering logs for kubelet ...
	I0603 12:09:23.056052   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 12:09:23.112565   73662 logs.go:123] Gathering logs for dmesg ...
	I0603 12:09:23.112594   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 12:09:23.127475   73662 logs.go:123] Gathering logs for describe nodes ...
	I0603 12:09:23.127504   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 12:09:23.204939   73662 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 12:09:23.204959   73662 logs.go:123] Gathering logs for CRI-O ...
	I0603 12:09:23.204971   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 12:09:21.584829   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:09:24.081361   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:09:25.112860   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:09:27.113465   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:09:29.114788   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:09:25.167597   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:09:27.666395   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:09:29.668658   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:09:25.781506   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:09:25.794896   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 12:09:25.794971   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 12:09:25.831669   73662 cri.go:89] found id: ""
	I0603 12:09:25.831699   73662 logs.go:276] 0 containers: []
	W0603 12:09:25.831710   73662 logs.go:278] No container was found matching "kube-apiserver"
	I0603 12:09:25.831718   73662 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 12:09:25.831775   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 12:09:25.865198   73662 cri.go:89] found id: ""
	I0603 12:09:25.865224   73662 logs.go:276] 0 containers: []
	W0603 12:09:25.865233   73662 logs.go:278] No container was found matching "etcd"
	I0603 12:09:25.865241   73662 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 12:09:25.865296   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 12:09:25.900280   73662 cri.go:89] found id: ""
	I0603 12:09:25.900316   73662 logs.go:276] 0 containers: []
	W0603 12:09:25.900339   73662 logs.go:278] No container was found matching "coredns"
	I0603 12:09:25.900347   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 12:09:25.900409   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 12:09:25.934727   73662 cri.go:89] found id: ""
	I0603 12:09:25.934759   73662 logs.go:276] 0 containers: []
	W0603 12:09:25.934770   73662 logs.go:278] No container was found matching "kube-scheduler"
	I0603 12:09:25.934778   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 12:09:25.934837   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 12:09:25.970760   73662 cri.go:89] found id: ""
	I0603 12:09:25.970785   73662 logs.go:276] 0 containers: []
	W0603 12:09:25.970795   73662 logs.go:278] No container was found matching "kube-proxy"
	I0603 12:09:25.970800   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 12:09:25.970846   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 12:09:26.005580   73662 cri.go:89] found id: ""
	I0603 12:09:26.005608   73662 logs.go:276] 0 containers: []
	W0603 12:09:26.005617   73662 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 12:09:26.005622   73662 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 12:09:26.005670   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 12:09:26.042168   73662 cri.go:89] found id: ""
	I0603 12:09:26.042192   73662 logs.go:276] 0 containers: []
	W0603 12:09:26.042200   73662 logs.go:278] No container was found matching "kindnet"
	I0603 12:09:26.042206   73662 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 12:09:26.042256   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 12:09:26.081180   73662 cri.go:89] found id: ""
	I0603 12:09:26.081211   73662 logs.go:276] 0 containers: []
	W0603 12:09:26.081226   73662 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 12:09:26.081237   73662 logs.go:123] Gathering logs for describe nodes ...
	I0603 12:09:26.081252   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 12:09:26.156298   73662 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 12:09:26.156320   73662 logs.go:123] Gathering logs for CRI-O ...
	I0603 12:09:26.156333   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 12:09:26.241945   73662 logs.go:123] Gathering logs for container status ...
	I0603 12:09:26.241976   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 12:09:26.282363   73662 logs.go:123] Gathering logs for kubelet ...
	I0603 12:09:26.282391   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 12:09:26.336717   73662 logs.go:123] Gathering logs for dmesg ...
	I0603 12:09:26.336747   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 12:09:28.851601   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:09:28.865866   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 12:09:28.865930   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 12:09:28.901850   73662 cri.go:89] found id: ""
	I0603 12:09:28.901877   73662 logs.go:276] 0 containers: []
	W0603 12:09:28.901884   73662 logs.go:278] No container was found matching "kube-apiserver"
	I0603 12:09:28.901890   73662 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 12:09:28.901953   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 12:09:28.939384   73662 cri.go:89] found id: ""
	I0603 12:09:28.939414   73662 logs.go:276] 0 containers: []
	W0603 12:09:28.939431   73662 logs.go:278] No container was found matching "etcd"
	I0603 12:09:28.939438   73662 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 12:09:28.939501   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 12:09:28.974836   73662 cri.go:89] found id: ""
	I0603 12:09:28.974859   73662 logs.go:276] 0 containers: []
	W0603 12:09:28.974866   73662 logs.go:278] No container was found matching "coredns"
	I0603 12:09:28.974872   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 12:09:28.974929   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 12:09:29.020057   73662 cri.go:89] found id: ""
	I0603 12:09:29.020082   73662 logs.go:276] 0 containers: []
	W0603 12:09:29.020090   73662 logs.go:278] No container was found matching "kube-scheduler"
	I0603 12:09:29.020095   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 12:09:29.020154   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 12:09:29.065836   73662 cri.go:89] found id: ""
	I0603 12:09:29.065868   73662 logs.go:276] 0 containers: []
	W0603 12:09:29.065880   73662 logs.go:278] No container was found matching "kube-proxy"
	I0603 12:09:29.065887   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 12:09:29.065948   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 12:09:29.103326   73662 cri.go:89] found id: ""
	I0603 12:09:29.103352   73662 logs.go:276] 0 containers: []
	W0603 12:09:29.103362   73662 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 12:09:29.103369   73662 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 12:09:29.103432   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 12:09:29.141516   73662 cri.go:89] found id: ""
	I0603 12:09:29.141543   73662 logs.go:276] 0 containers: []
	W0603 12:09:29.141554   73662 logs.go:278] No container was found matching "kindnet"
	I0603 12:09:29.141561   73662 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 12:09:29.141615   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 12:09:29.177881   73662 cri.go:89] found id: ""
	I0603 12:09:29.177906   73662 logs.go:276] 0 containers: []
	W0603 12:09:29.177916   73662 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 12:09:29.177923   73662 logs.go:123] Gathering logs for kubelet ...
	I0603 12:09:29.177934   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 12:09:29.231307   73662 logs.go:123] Gathering logs for dmesg ...
	I0603 12:09:29.231338   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 12:09:29.248629   73662 logs.go:123] Gathering logs for describe nodes ...
	I0603 12:09:29.248676   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 12:09:29.348230   73662 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 12:09:29.348255   73662 logs.go:123] Gathering logs for CRI-O ...
	I0603 12:09:29.348272   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 12:09:29.433016   73662 logs.go:123] Gathering logs for container status ...
	I0603 12:09:29.433049   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 12:09:26.082319   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:09:28.581095   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:09:31.615220   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:09:34.112437   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:09:32.166628   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:09:34.167092   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:09:31.973677   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:09:31.988457   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 12:09:31.988518   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 12:09:32.028424   73662 cri.go:89] found id: ""
	I0603 12:09:32.028450   73662 logs.go:276] 0 containers: []
	W0603 12:09:32.028458   73662 logs.go:278] No container was found matching "kube-apiserver"
	I0603 12:09:32.028464   73662 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 12:09:32.028518   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 12:09:32.069388   73662 cri.go:89] found id: ""
	I0603 12:09:32.069413   73662 logs.go:276] 0 containers: []
	W0603 12:09:32.069421   73662 logs.go:278] No container was found matching "etcd"
	I0603 12:09:32.069427   73662 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 12:09:32.069480   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 12:09:32.106557   73662 cri.go:89] found id: ""
	I0603 12:09:32.106590   73662 logs.go:276] 0 containers: []
	W0603 12:09:32.106601   73662 logs.go:278] No container was found matching "coredns"
	I0603 12:09:32.106608   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 12:09:32.106677   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 12:09:32.142460   73662 cri.go:89] found id: ""
	I0603 12:09:32.142488   73662 logs.go:276] 0 containers: []
	W0603 12:09:32.142499   73662 logs.go:278] No container was found matching "kube-scheduler"
	I0603 12:09:32.142507   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 12:09:32.142560   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 12:09:32.177513   73662 cri.go:89] found id: ""
	I0603 12:09:32.177540   73662 logs.go:276] 0 containers: []
	W0603 12:09:32.177553   73662 logs.go:278] No container was found matching "kube-proxy"
	I0603 12:09:32.177559   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 12:09:32.177620   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 12:09:32.212011   73662 cri.go:89] found id: ""
	I0603 12:09:32.212038   73662 logs.go:276] 0 containers: []
	W0603 12:09:32.212048   73662 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 12:09:32.212055   73662 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 12:09:32.212121   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 12:09:32.247928   73662 cri.go:89] found id: ""
	I0603 12:09:32.247953   73662 logs.go:276] 0 containers: []
	W0603 12:09:32.247960   73662 logs.go:278] No container was found matching "kindnet"
	I0603 12:09:32.247965   73662 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 12:09:32.248020   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 12:09:32.287818   73662 cri.go:89] found id: ""
	I0603 12:09:32.287845   73662 logs.go:276] 0 containers: []
	W0603 12:09:32.287852   73662 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 12:09:32.287859   73662 logs.go:123] Gathering logs for kubelet ...
	I0603 12:09:32.287874   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 12:09:32.340406   73662 logs.go:123] Gathering logs for dmesg ...
	I0603 12:09:32.340439   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 12:09:32.355148   73662 logs.go:123] Gathering logs for describe nodes ...
	I0603 12:09:32.355178   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 12:09:32.429270   73662 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 12:09:32.429299   73662 logs.go:123] Gathering logs for CRI-O ...
	I0603 12:09:32.429314   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 12:09:32.505607   73662 logs.go:123] Gathering logs for container status ...
	I0603 12:09:32.505635   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 12:09:35.044751   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:09:35.067197   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 12:09:35.067273   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 12:09:30.581123   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:09:32.581201   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:09:34.581895   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:09:36.612660   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:09:38.614151   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:09:36.666568   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:09:38.666678   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:09:35.130828   73662 cri.go:89] found id: ""
	I0603 12:09:35.130853   73662 logs.go:276] 0 containers: []
	W0603 12:09:35.130911   73662 logs.go:278] No container was found matching "kube-apiserver"
	I0603 12:09:35.130929   73662 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 12:09:35.130987   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 12:09:35.168321   73662 cri.go:89] found id: ""
	I0603 12:09:35.168348   73662 logs.go:276] 0 containers: []
	W0603 12:09:35.168355   73662 logs.go:278] No container was found matching "etcd"
	I0603 12:09:35.168360   73662 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 12:09:35.168403   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 12:09:35.200918   73662 cri.go:89] found id: ""
	I0603 12:09:35.200942   73662 logs.go:276] 0 containers: []
	W0603 12:09:35.200952   73662 logs.go:278] No container was found matching "coredns"
	I0603 12:09:35.200960   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 12:09:35.201020   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 12:09:35.235667   73662 cri.go:89] found id: ""
	I0603 12:09:35.235694   73662 logs.go:276] 0 containers: []
	W0603 12:09:35.235705   73662 logs.go:278] No container was found matching "kube-scheduler"
	I0603 12:09:35.235713   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 12:09:35.235773   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 12:09:35.269565   73662 cri.go:89] found id: ""
	I0603 12:09:35.269600   73662 logs.go:276] 0 containers: []
	W0603 12:09:35.269608   73662 logs.go:278] No container was found matching "kube-proxy"
	I0603 12:09:35.269613   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 12:09:35.269670   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 12:09:35.304452   73662 cri.go:89] found id: ""
	I0603 12:09:35.304480   73662 logs.go:276] 0 containers: []
	W0603 12:09:35.304488   73662 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 12:09:35.304495   73662 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 12:09:35.304560   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 12:09:35.337756   73662 cri.go:89] found id: ""
	I0603 12:09:35.337782   73662 logs.go:276] 0 containers: []
	W0603 12:09:35.337789   73662 logs.go:278] No container was found matching "kindnet"
	I0603 12:09:35.337794   73662 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 12:09:35.337844   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 12:09:35.374738   73662 cri.go:89] found id: ""
	I0603 12:09:35.374762   73662 logs.go:276] 0 containers: []
	W0603 12:09:35.374773   73662 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 12:09:35.374804   73662 logs.go:123] Gathering logs for dmesg ...
	I0603 12:09:35.374831   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 12:09:35.389588   73662 logs.go:123] Gathering logs for describe nodes ...
	I0603 12:09:35.389618   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 12:09:35.470162   73662 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 12:09:35.470184   73662 logs.go:123] Gathering logs for CRI-O ...
	I0603 12:09:35.470200   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 12:09:35.554518   73662 logs.go:123] Gathering logs for container status ...
	I0603 12:09:35.554560   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 12:09:35.594727   73662 logs.go:123] Gathering logs for kubelet ...
	I0603 12:09:35.594763   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 12:09:38.154151   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:09:38.169099   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 12:09:38.169165   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 12:09:38.205410   73662 cri.go:89] found id: ""
	I0603 12:09:38.205437   73662 logs.go:276] 0 containers: []
	W0603 12:09:38.205444   73662 logs.go:278] No container was found matching "kube-apiserver"
	I0603 12:09:38.205450   73662 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 12:09:38.205502   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 12:09:38.238950   73662 cri.go:89] found id: ""
	I0603 12:09:38.238979   73662 logs.go:276] 0 containers: []
	W0603 12:09:38.238990   73662 logs.go:278] No container was found matching "etcd"
	I0603 12:09:38.238997   73662 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 12:09:38.239072   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 12:09:38.272117   73662 cri.go:89] found id: ""
	I0603 12:09:38.272146   73662 logs.go:276] 0 containers: []
	W0603 12:09:38.272157   73662 logs.go:278] No container was found matching "coredns"
	I0603 12:09:38.272164   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 12:09:38.272232   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 12:09:38.306778   73662 cri.go:89] found id: ""
	I0603 12:09:38.306815   73662 logs.go:276] 0 containers: []
	W0603 12:09:38.306826   73662 logs.go:278] No container was found matching "kube-scheduler"
	I0603 12:09:38.306834   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 12:09:38.306894   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 12:09:38.344438   73662 cri.go:89] found id: ""
	I0603 12:09:38.344464   73662 logs.go:276] 0 containers: []
	W0603 12:09:38.344471   73662 logs.go:278] No container was found matching "kube-proxy"
	I0603 12:09:38.344476   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 12:09:38.344528   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 12:09:38.384347   73662 cri.go:89] found id: ""
	I0603 12:09:38.384373   73662 logs.go:276] 0 containers: []
	W0603 12:09:38.384384   73662 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 12:09:38.384392   73662 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 12:09:38.384440   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 12:09:38.424500   73662 cri.go:89] found id: ""
	I0603 12:09:38.424526   73662 logs.go:276] 0 containers: []
	W0603 12:09:38.424536   73662 logs.go:278] No container was found matching "kindnet"
	I0603 12:09:38.424543   73662 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 12:09:38.424601   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 12:09:38.459649   73662 cri.go:89] found id: ""
	I0603 12:09:38.459678   73662 logs.go:276] 0 containers: []
	W0603 12:09:38.459685   73662 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 12:09:38.459693   73662 logs.go:123] Gathering logs for kubelet ...
	I0603 12:09:38.459705   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 12:09:38.511193   73662 logs.go:123] Gathering logs for dmesg ...
	I0603 12:09:38.511226   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 12:09:38.525367   73662 logs.go:123] Gathering logs for describe nodes ...
	I0603 12:09:38.525394   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 12:09:38.596534   73662 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 12:09:38.596555   73662 logs.go:123] Gathering logs for CRI-O ...
	I0603 12:09:38.596568   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 12:09:38.675204   73662 logs.go:123] Gathering logs for container status ...
	I0603 12:09:38.675233   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 12:09:37.082229   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:09:39.083400   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:09:41.113187   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:09:43.612824   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:09:41.165676   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:09:43.166246   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:09:41.217825   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:09:41.232019   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 12:09:41.232077   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 12:09:41.267920   73662 cri.go:89] found id: ""
	I0603 12:09:41.267944   73662 logs.go:276] 0 containers: []
	W0603 12:09:41.267951   73662 logs.go:278] No container was found matching "kube-apiserver"
	I0603 12:09:41.267956   73662 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 12:09:41.268002   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 12:09:41.306326   73662 cri.go:89] found id: ""
	I0603 12:09:41.306353   73662 logs.go:276] 0 containers: []
	W0603 12:09:41.306364   73662 logs.go:278] No container was found matching "etcd"
	I0603 12:09:41.306371   73662 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 12:09:41.306439   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 12:09:41.339922   73662 cri.go:89] found id: ""
	I0603 12:09:41.339950   73662 logs.go:276] 0 containers: []
	W0603 12:09:41.339960   73662 logs.go:278] No container was found matching "coredns"
	I0603 12:09:41.339968   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 12:09:41.340030   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 12:09:41.374394   73662 cri.go:89] found id: ""
	I0603 12:09:41.374424   73662 logs.go:276] 0 containers: []
	W0603 12:09:41.374432   73662 logs.go:278] No container was found matching "kube-scheduler"
	I0603 12:09:41.374438   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 12:09:41.374490   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 12:09:41.412699   73662 cri.go:89] found id: ""
	I0603 12:09:41.412725   73662 logs.go:276] 0 containers: []
	W0603 12:09:41.412733   73662 logs.go:278] No container was found matching "kube-proxy"
	I0603 12:09:41.412738   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 12:09:41.412792   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 12:09:41.455158   73662 cri.go:89] found id: ""
	I0603 12:09:41.455186   73662 logs.go:276] 0 containers: []
	W0603 12:09:41.455195   73662 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 12:09:41.455201   73662 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 12:09:41.455250   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 12:09:41.493873   73662 cri.go:89] found id: ""
	I0603 12:09:41.493899   73662 logs.go:276] 0 containers: []
	W0603 12:09:41.493907   73662 logs.go:278] No container was found matching "kindnet"
	I0603 12:09:41.493912   73662 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 12:09:41.493961   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 12:09:41.533128   73662 cri.go:89] found id: ""
	I0603 12:09:41.533157   73662 logs.go:276] 0 containers: []
	W0603 12:09:41.533168   73662 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 12:09:41.533179   73662 logs.go:123] Gathering logs for container status ...
	I0603 12:09:41.533192   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 12:09:41.569504   73662 logs.go:123] Gathering logs for kubelet ...
	I0603 12:09:41.569532   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 12:09:41.623155   73662 logs.go:123] Gathering logs for dmesg ...
	I0603 12:09:41.623182   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 12:09:41.637320   73662 logs.go:123] Gathering logs for describe nodes ...
	I0603 12:09:41.637344   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 12:09:41.717063   73662 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 12:09:41.717080   73662 logs.go:123] Gathering logs for CRI-O ...
	I0603 12:09:41.717091   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 12:09:44.301694   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:09:44.317073   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 12:09:44.317128   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 12:09:44.359170   73662 cri.go:89] found id: ""
	I0603 12:09:44.359220   73662 logs.go:276] 0 containers: []
	W0603 12:09:44.359230   73662 logs.go:278] No container was found matching "kube-apiserver"
	I0603 12:09:44.359239   73662 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 12:09:44.359294   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 12:09:44.399820   73662 cri.go:89] found id: ""
	I0603 12:09:44.399844   73662 logs.go:276] 0 containers: []
	W0603 12:09:44.399854   73662 logs.go:278] No container was found matching "etcd"
	I0603 12:09:44.399862   73662 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 12:09:44.399928   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 12:09:44.439447   73662 cri.go:89] found id: ""
	I0603 12:09:44.439474   73662 logs.go:276] 0 containers: []
	W0603 12:09:44.439481   73662 logs.go:278] No container was found matching "coredns"
	I0603 12:09:44.439487   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 12:09:44.439540   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 12:09:44.475880   73662 cri.go:89] found id: ""
	I0603 12:09:44.475906   73662 logs.go:276] 0 containers: []
	W0603 12:09:44.475917   73662 logs.go:278] No container was found matching "kube-scheduler"
	I0603 12:09:44.475922   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 12:09:44.475980   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 12:09:44.511294   73662 cri.go:89] found id: ""
	I0603 12:09:44.511330   73662 logs.go:276] 0 containers: []
	W0603 12:09:44.511341   73662 logs.go:278] No container was found matching "kube-proxy"
	I0603 12:09:44.511348   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 12:09:44.511401   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 12:09:44.547348   73662 cri.go:89] found id: ""
	I0603 12:09:44.547373   73662 logs.go:276] 0 containers: []
	W0603 12:09:44.547380   73662 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 12:09:44.547385   73662 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 12:09:44.547430   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 12:09:44.586452   73662 cri.go:89] found id: ""
	I0603 12:09:44.586476   73662 logs.go:276] 0 containers: []
	W0603 12:09:44.586483   73662 logs.go:278] No container was found matching "kindnet"
	I0603 12:09:44.586488   73662 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 12:09:44.586543   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 12:09:44.625804   73662 cri.go:89] found id: ""
	I0603 12:09:44.625824   73662 logs.go:276] 0 containers: []
	W0603 12:09:44.625831   73662 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 12:09:44.625839   73662 logs.go:123] Gathering logs for kubelet ...
	I0603 12:09:44.625848   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 12:09:44.680963   73662 logs.go:123] Gathering logs for dmesg ...
	I0603 12:09:44.680996   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 12:09:44.695920   73662 logs.go:123] Gathering logs for describe nodes ...
	I0603 12:09:44.695945   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 12:09:44.766704   73662 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 12:09:44.766735   73662 logs.go:123] Gathering logs for CRI-O ...
	I0603 12:09:44.766750   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 12:09:44.849452   73662 logs.go:123] Gathering logs for container status ...
	I0603 12:09:44.849484   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 12:09:41.581194   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:09:44.081266   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:09:45.613719   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:09:47.613834   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:09:45.166682   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:09:47.667916   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:09:47.391851   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:09:47.406886   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 12:09:47.406941   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 12:09:47.441654   73662 cri.go:89] found id: ""
	I0603 12:09:47.441676   73662 logs.go:276] 0 containers: []
	W0603 12:09:47.441686   73662 logs.go:278] No container was found matching "kube-apiserver"
	I0603 12:09:47.441692   73662 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 12:09:47.441739   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 12:09:47.475605   73662 cri.go:89] found id: ""
	I0603 12:09:47.475634   73662 logs.go:276] 0 containers: []
	W0603 12:09:47.475644   73662 logs.go:278] No container was found matching "etcd"
	I0603 12:09:47.475651   73662 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 12:09:47.475707   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 12:09:47.511558   73662 cri.go:89] found id: ""
	I0603 12:09:47.511582   73662 logs.go:276] 0 containers: []
	W0603 12:09:47.511590   73662 logs.go:278] No container was found matching "coredns"
	I0603 12:09:47.511595   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 12:09:47.511653   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 12:09:47.545327   73662 cri.go:89] found id: ""
	I0603 12:09:47.545359   73662 logs.go:276] 0 containers: []
	W0603 12:09:47.545370   73662 logs.go:278] No container was found matching "kube-scheduler"
	I0603 12:09:47.545378   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 12:09:47.545442   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 12:09:47.581846   73662 cri.go:89] found id: ""
	I0603 12:09:47.581875   73662 logs.go:276] 0 containers: []
	W0603 12:09:47.581884   73662 logs.go:278] No container was found matching "kube-proxy"
	I0603 12:09:47.581892   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 12:09:47.581953   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 12:09:47.618872   73662 cri.go:89] found id: ""
	I0603 12:09:47.618893   73662 logs.go:276] 0 containers: []
	W0603 12:09:47.618901   73662 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 12:09:47.618908   73662 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 12:09:47.618964   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 12:09:47.663659   73662 cri.go:89] found id: ""
	I0603 12:09:47.663689   73662 logs.go:276] 0 containers: []
	W0603 12:09:47.663700   73662 logs.go:278] No container was found matching "kindnet"
	I0603 12:09:47.663708   73662 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 12:09:47.663766   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 12:09:47.697189   73662 cri.go:89] found id: ""
	I0603 12:09:47.697217   73662 logs.go:276] 0 containers: []
	W0603 12:09:47.697228   73662 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 12:09:47.697238   73662 logs.go:123] Gathering logs for dmesg ...
	I0603 12:09:47.697254   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 12:09:47.711787   73662 logs.go:123] Gathering logs for describe nodes ...
	I0603 12:09:47.711812   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 12:09:47.784073   73662 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 12:09:47.784095   73662 logs.go:123] Gathering logs for CRI-O ...
	I0603 12:09:47.784106   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 12:09:47.866792   73662 logs.go:123] Gathering logs for container status ...
	I0603 12:09:47.866824   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 12:09:47.907650   73662 logs.go:123] Gathering logs for kubelet ...
	I0603 12:09:47.907701   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 12:09:46.081705   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:09:48.581286   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:09:50.115365   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:09:52.612108   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:09:50.166286   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:09:52.166751   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:09:54.171218   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:09:50.458815   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:09:50.473498   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 12:09:50.473561   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 12:09:50.514762   73662 cri.go:89] found id: ""
	I0603 12:09:50.514788   73662 logs.go:276] 0 containers: []
	W0603 12:09:50.514796   73662 logs.go:278] No container was found matching "kube-apiserver"
	I0603 12:09:50.514801   73662 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 12:09:50.514877   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 12:09:50.548449   73662 cri.go:89] found id: ""
	I0603 12:09:50.548481   73662 logs.go:276] 0 containers: []
	W0603 12:09:50.548492   73662 logs.go:278] No container was found matching "etcd"
	I0603 12:09:50.548498   73662 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 12:09:50.548560   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 12:09:50.584636   73662 cri.go:89] found id: ""
	I0603 12:09:50.584658   73662 logs.go:276] 0 containers: []
	W0603 12:09:50.584665   73662 logs.go:278] No container was found matching "coredns"
	I0603 12:09:50.584671   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 12:09:50.584718   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 12:09:50.619934   73662 cri.go:89] found id: ""
	I0603 12:09:50.619964   73662 logs.go:276] 0 containers: []
	W0603 12:09:50.619974   73662 logs.go:278] No container was found matching "kube-scheduler"
	I0603 12:09:50.619983   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 12:09:50.620041   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 12:09:50.656062   73662 cri.go:89] found id: ""
	I0603 12:09:50.656093   73662 logs.go:276] 0 containers: []
	W0603 12:09:50.656105   73662 logs.go:278] No container was found matching "kube-proxy"
	I0603 12:09:50.656117   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 12:09:50.656166   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 12:09:50.693539   73662 cri.go:89] found id: ""
	I0603 12:09:50.693566   73662 logs.go:276] 0 containers: []
	W0603 12:09:50.693573   73662 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 12:09:50.693582   73662 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 12:09:50.693637   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 12:09:50.727999   73662 cri.go:89] found id: ""
	I0603 12:09:50.728029   73662 logs.go:276] 0 containers: []
	W0603 12:09:50.728049   73662 logs.go:278] No container was found matching "kindnet"
	I0603 12:09:50.728057   73662 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 12:09:50.728118   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 12:09:50.767370   73662 cri.go:89] found id: ""
	I0603 12:09:50.767417   73662 logs.go:276] 0 containers: []
	W0603 12:09:50.767434   73662 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 12:09:50.767444   73662 logs.go:123] Gathering logs for describe nodes ...
	I0603 12:09:50.767460   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 12:09:50.844078   73662 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 12:09:50.844098   73662 logs.go:123] Gathering logs for CRI-O ...
	I0603 12:09:50.844111   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 12:09:50.922082   73662 logs.go:123] Gathering logs for container status ...
	I0603 12:09:50.922119   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 12:09:50.964841   73662 logs.go:123] Gathering logs for kubelet ...
	I0603 12:09:50.964878   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 12:09:51.016783   73662 logs.go:123] Gathering logs for dmesg ...
	I0603 12:09:51.016823   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 12:09:53.533274   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:09:53.547218   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 12:09:53.547272   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 12:09:53.584537   73662 cri.go:89] found id: ""
	I0603 12:09:53.584561   73662 logs.go:276] 0 containers: []
	W0603 12:09:53.584571   73662 logs.go:278] No container was found matching "kube-apiserver"
	I0603 12:09:53.584578   73662 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 12:09:53.584634   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 12:09:53.618652   73662 cri.go:89] found id: ""
	I0603 12:09:53.618678   73662 logs.go:276] 0 containers: []
	W0603 12:09:53.618688   73662 logs.go:278] No container was found matching "etcd"
	I0603 12:09:53.618695   73662 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 12:09:53.618749   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 12:09:53.654094   73662 cri.go:89] found id: ""
	I0603 12:09:53.654120   73662 logs.go:276] 0 containers: []
	W0603 12:09:53.654127   73662 logs.go:278] No container was found matching "coredns"
	I0603 12:09:53.654140   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 12:09:53.654196   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 12:09:53.691381   73662 cri.go:89] found id: ""
	I0603 12:09:53.691409   73662 logs.go:276] 0 containers: []
	W0603 12:09:53.691420   73662 logs.go:278] No container was found matching "kube-scheduler"
	I0603 12:09:53.691428   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 12:09:53.691493   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 12:09:53.728294   73662 cri.go:89] found id: ""
	I0603 12:09:53.728331   73662 logs.go:276] 0 containers: []
	W0603 12:09:53.728341   73662 logs.go:278] No container was found matching "kube-proxy"
	I0603 12:09:53.728349   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 12:09:53.728426   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 12:09:53.764973   73662 cri.go:89] found id: ""
	I0603 12:09:53.765005   73662 logs.go:276] 0 containers: []
	W0603 12:09:53.765016   73662 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 12:09:53.765023   73662 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 12:09:53.765087   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 12:09:53.803694   73662 cri.go:89] found id: ""
	I0603 12:09:53.803717   73662 logs.go:276] 0 containers: []
	W0603 12:09:53.803724   73662 logs.go:278] No container was found matching "kindnet"
	I0603 12:09:53.803729   73662 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 12:09:53.803776   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 12:09:53.841924   73662 cri.go:89] found id: ""
	I0603 12:09:53.841949   73662 logs.go:276] 0 containers: []
	W0603 12:09:53.841957   73662 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 12:09:53.841964   73662 logs.go:123] Gathering logs for kubelet ...
	I0603 12:09:53.841982   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 12:09:53.895701   73662 logs.go:123] Gathering logs for dmesg ...
	I0603 12:09:53.895738   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 12:09:53.909498   73662 logs.go:123] Gathering logs for describe nodes ...
	I0603 12:09:53.909524   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 12:09:53.985195   73662 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 12:09:53.985218   73662 logs.go:123] Gathering logs for CRI-O ...
	I0603 12:09:53.985234   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 12:09:54.065799   73662 logs.go:123] Gathering logs for container status ...
	I0603 12:09:54.065831   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 12:09:50.581958   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:09:53.081289   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:09:55.081589   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:09:54.612358   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:09:56.616081   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:09:59.112698   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:09:56.667243   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:09:59.167672   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:09:56.606887   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:09:56.621376   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 12:09:56.621437   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 12:09:56.660334   73662 cri.go:89] found id: ""
	I0603 12:09:56.660358   73662 logs.go:276] 0 containers: []
	W0603 12:09:56.660368   73662 logs.go:278] No container was found matching "kube-apiserver"
	I0603 12:09:56.660375   73662 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 12:09:56.660434   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 12:09:56.695706   73662 cri.go:89] found id: ""
	I0603 12:09:56.695734   73662 logs.go:276] 0 containers: []
	W0603 12:09:56.695742   73662 logs.go:278] No container was found matching "etcd"
	I0603 12:09:56.695747   73662 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 12:09:56.695791   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 12:09:56.730634   73662 cri.go:89] found id: ""
	I0603 12:09:56.730656   73662 logs.go:276] 0 containers: []
	W0603 12:09:56.730664   73662 logs.go:278] No container was found matching "coredns"
	I0603 12:09:56.730670   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 12:09:56.730715   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 12:09:56.765374   73662 cri.go:89] found id: ""
	I0603 12:09:56.765407   73662 logs.go:276] 0 containers: []
	W0603 12:09:56.765414   73662 logs.go:278] No container was found matching "kube-scheduler"
	I0603 12:09:56.765420   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 12:09:56.765467   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 12:09:56.801230   73662 cri.go:89] found id: ""
	I0603 12:09:56.801254   73662 logs.go:276] 0 containers: []
	W0603 12:09:56.801262   73662 logs.go:278] No container was found matching "kube-proxy"
	I0603 12:09:56.801267   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 12:09:56.801335   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 12:09:56.835988   73662 cri.go:89] found id: ""
	I0603 12:09:56.836015   73662 logs.go:276] 0 containers: []
	W0603 12:09:56.836026   73662 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 12:09:56.836034   73662 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 12:09:56.836093   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 12:09:56.870099   73662 cri.go:89] found id: ""
	I0603 12:09:56.870124   73662 logs.go:276] 0 containers: []
	W0603 12:09:56.870131   73662 logs.go:278] No container was found matching "kindnet"
	I0603 12:09:56.870136   73662 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 12:09:56.870183   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 12:09:56.904755   73662 cri.go:89] found id: ""
	I0603 12:09:56.904780   73662 logs.go:276] 0 containers: []
	W0603 12:09:56.904790   73662 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 12:09:56.904801   73662 logs.go:123] Gathering logs for kubelet ...
	I0603 12:09:56.904812   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 12:09:56.956824   73662 logs.go:123] Gathering logs for dmesg ...
	I0603 12:09:56.956854   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 12:09:56.971675   73662 logs.go:123] Gathering logs for describe nodes ...
	I0603 12:09:56.971702   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 12:09:57.042337   73662 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 12:09:57.042359   73662 logs.go:123] Gathering logs for CRI-O ...
	I0603 12:09:57.042375   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 12:09:57.129450   73662 logs.go:123] Gathering logs for container status ...
	I0603 12:09:57.129480   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 12:09:59.669256   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:09:59.683392   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 12:09:59.683452   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 12:09:59.718035   73662 cri.go:89] found id: ""
	I0603 12:09:59.718062   73662 logs.go:276] 0 containers: []
	W0603 12:09:59.718073   73662 logs.go:278] No container was found matching "kube-apiserver"
	I0603 12:09:59.718081   73662 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 12:09:59.718141   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 12:09:59.756638   73662 cri.go:89] found id: ""
	I0603 12:09:59.756666   73662 logs.go:276] 0 containers: []
	W0603 12:09:59.756678   73662 logs.go:278] No container was found matching "etcd"
	I0603 12:09:59.756686   73662 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 12:09:59.756751   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 12:09:59.794710   73662 cri.go:89] found id: ""
	I0603 12:09:59.794753   73662 logs.go:276] 0 containers: []
	W0603 12:09:59.794764   73662 logs.go:278] No container was found matching "coredns"
	I0603 12:09:59.794771   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 12:09:59.794835   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 12:09:59.829717   73662 cri.go:89] found id: ""
	I0603 12:09:59.829745   73662 logs.go:276] 0 containers: []
	W0603 12:09:59.829755   73662 logs.go:278] No container was found matching "kube-scheduler"
	I0603 12:09:59.829763   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 12:09:59.829819   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 12:09:59.863959   73662 cri.go:89] found id: ""
	I0603 12:09:59.863996   73662 logs.go:276] 0 containers: []
	W0603 12:09:59.864005   73662 logs.go:278] No container was found matching "kube-proxy"
	I0603 12:09:59.864010   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 12:09:59.864070   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 12:09:59.900553   73662 cri.go:89] found id: ""
	I0603 12:09:59.900577   73662 logs.go:276] 0 containers: []
	W0603 12:09:59.900585   73662 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 12:09:59.900590   73662 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 12:09:59.900664   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 12:09:59.935702   73662 cri.go:89] found id: ""
	I0603 12:09:59.935727   73662 logs.go:276] 0 containers: []
	W0603 12:09:59.935735   73662 logs.go:278] No container was found matching "kindnet"
	I0603 12:09:59.935741   73662 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 12:09:59.935800   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 12:09:59.971017   73662 cri.go:89] found id: ""
	I0603 12:09:59.971064   73662 logs.go:276] 0 containers: []
	W0603 12:09:59.971076   73662 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 12:09:59.971086   73662 logs.go:123] Gathering logs for dmesg ...
	I0603 12:09:59.971102   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 12:09:59.985406   73662 logs.go:123] Gathering logs for describe nodes ...
	I0603 12:09:59.985431   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 12:10:00.064341   73662 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 12:10:00.064372   73662 logs.go:123] Gathering logs for CRI-O ...
	I0603 12:10:00.064388   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 12:09:57.081724   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:09:59.581454   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:10:01.113236   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:10:03.116142   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:10:01.667557   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:10:04.166825   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:10:00.152803   73662 logs.go:123] Gathering logs for container status ...
	I0603 12:10:00.152850   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 12:10:00.198301   73662 logs.go:123] Gathering logs for kubelet ...
	I0603 12:10:00.198341   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 12:10:02.749662   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:10:02.762938   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 12:10:02.762999   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 12:10:02.800269   73662 cri.go:89] found id: ""
	I0603 12:10:02.800296   73662 logs.go:276] 0 containers: []
	W0603 12:10:02.800305   73662 logs.go:278] No container was found matching "kube-apiserver"
	I0603 12:10:02.800311   73662 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 12:10:02.800373   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 12:10:02.841326   73662 cri.go:89] found id: ""
	I0603 12:10:02.841350   73662 logs.go:276] 0 containers: []
	W0603 12:10:02.841357   73662 logs.go:278] No container was found matching "etcd"
	I0603 12:10:02.841363   73662 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 12:10:02.841409   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 12:10:02.879309   73662 cri.go:89] found id: ""
	I0603 12:10:02.879343   73662 logs.go:276] 0 containers: []
	W0603 12:10:02.879353   73662 logs.go:278] No container was found matching "coredns"
	I0603 12:10:02.879361   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 12:10:02.879423   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 12:10:02.919666   73662 cri.go:89] found id: ""
	I0603 12:10:02.919695   73662 logs.go:276] 0 containers: []
	W0603 12:10:02.919707   73662 logs.go:278] No container was found matching "kube-scheduler"
	I0603 12:10:02.919714   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 12:10:02.919761   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 12:10:02.954790   73662 cri.go:89] found id: ""
	I0603 12:10:02.954814   73662 logs.go:276] 0 containers: []
	W0603 12:10:02.954822   73662 logs.go:278] No container was found matching "kube-proxy"
	I0603 12:10:02.954827   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 12:10:02.954884   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 12:10:02.994472   73662 cri.go:89] found id: ""
	I0603 12:10:02.994515   73662 logs.go:276] 0 containers: []
	W0603 12:10:02.994527   73662 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 12:10:02.994535   73662 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 12:10:02.994598   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 12:10:03.034482   73662 cri.go:89] found id: ""
	I0603 12:10:03.034509   73662 logs.go:276] 0 containers: []
	W0603 12:10:03.034520   73662 logs.go:278] No container was found matching "kindnet"
	I0603 12:10:03.034526   73662 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 12:10:03.034591   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 12:10:03.072971   73662 cri.go:89] found id: ""
	I0603 12:10:03.073002   73662 logs.go:276] 0 containers: []
	W0603 12:10:03.073011   73662 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 12:10:03.073025   73662 logs.go:123] Gathering logs for dmesg ...
	I0603 12:10:03.073043   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 12:10:03.088043   73662 logs.go:123] Gathering logs for describe nodes ...
	I0603 12:10:03.088074   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 12:10:03.186799   73662 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 12:10:03.186829   73662 logs.go:123] Gathering logs for CRI-O ...
	I0603 12:10:03.186842   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 12:10:03.266685   73662 logs.go:123] Gathering logs for container status ...
	I0603 12:10:03.266724   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 12:10:03.317400   73662 logs.go:123] Gathering logs for kubelet ...
	I0603 12:10:03.317433   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 12:10:01.582398   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:10:04.082658   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:10:05.613678   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:10:08.112518   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:10:06.167099   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:10:08.167502   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:10:05.870335   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:10:05.884377   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 12:10:05.884469   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 12:10:05.924617   73662 cri.go:89] found id: ""
	I0603 12:10:05.924647   73662 logs.go:276] 0 containers: []
	W0603 12:10:05.924659   73662 logs.go:278] No container was found matching "kube-apiserver"
	I0603 12:10:05.924667   73662 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 12:10:05.924724   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 12:10:05.971569   73662 cri.go:89] found id: ""
	I0603 12:10:05.971605   73662 logs.go:276] 0 containers: []
	W0603 12:10:05.971615   73662 logs.go:278] No container was found matching "etcd"
	I0603 12:10:05.971623   73662 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 12:10:05.971683   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 12:10:06.010190   73662 cri.go:89] found id: ""
	I0603 12:10:06.010211   73662 logs.go:276] 0 containers: []
	W0603 12:10:06.010218   73662 logs.go:278] No container was found matching "coredns"
	I0603 12:10:06.010223   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 12:10:06.010270   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 12:10:06.056228   73662 cri.go:89] found id: ""
	I0603 12:10:06.056258   73662 logs.go:276] 0 containers: []
	W0603 12:10:06.056269   73662 logs.go:278] No container was found matching "kube-scheduler"
	I0603 12:10:06.056276   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 12:10:06.056338   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 12:10:06.096139   73662 cri.go:89] found id: ""
	I0603 12:10:06.096171   73662 logs.go:276] 0 containers: []
	W0603 12:10:06.096182   73662 logs.go:278] No container was found matching "kube-proxy"
	I0603 12:10:06.096192   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 12:10:06.096261   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 12:10:06.135290   73662 cri.go:89] found id: ""
	I0603 12:10:06.135327   73662 logs.go:276] 0 containers: []
	W0603 12:10:06.135338   73662 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 12:10:06.135346   73662 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 12:10:06.135412   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 12:10:06.177281   73662 cri.go:89] found id: ""
	I0603 12:10:06.177311   73662 logs.go:276] 0 containers: []
	W0603 12:10:06.177328   73662 logs.go:278] No container was found matching "kindnet"
	I0603 12:10:06.177335   73662 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 12:10:06.177395   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 12:10:06.216791   73662 cri.go:89] found id: ""
	I0603 12:10:06.216823   73662 logs.go:276] 0 containers: []
	W0603 12:10:06.216835   73662 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 12:10:06.216845   73662 logs.go:123] Gathering logs for kubelet ...
	I0603 12:10:06.216874   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 12:10:06.272731   73662 logs.go:123] Gathering logs for dmesg ...
	I0603 12:10:06.272772   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 12:10:06.289080   73662 logs.go:123] Gathering logs for describe nodes ...
	I0603 12:10:06.289118   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 12:10:06.358105   73662 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 12:10:06.358134   73662 logs.go:123] Gathering logs for CRI-O ...
	I0603 12:10:06.358148   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 12:10:06.433071   73662 logs.go:123] Gathering logs for container status ...
	I0603 12:10:06.433107   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 12:10:08.974934   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:10:08.988808   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 12:10:08.988883   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 12:10:09.023595   73662 cri.go:89] found id: ""
	I0603 12:10:09.023620   73662 logs.go:276] 0 containers: []
	W0603 12:10:09.023627   73662 logs.go:278] No container was found matching "kube-apiserver"
	I0603 12:10:09.023633   73662 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 12:10:09.023683   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 12:10:09.060962   73662 cri.go:89] found id: ""
	I0603 12:10:09.060990   73662 logs.go:276] 0 containers: []
	W0603 12:10:09.061000   73662 logs.go:278] No container was found matching "etcd"
	I0603 12:10:09.061006   73662 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 12:10:09.061080   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 12:10:09.099923   73662 cri.go:89] found id: ""
	I0603 12:10:09.099952   73662 logs.go:276] 0 containers: []
	W0603 12:10:09.099961   73662 logs.go:278] No container was found matching "coredns"
	I0603 12:10:09.099970   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 12:10:09.100030   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 12:10:09.138521   73662 cri.go:89] found id: ""
	I0603 12:10:09.138547   73662 logs.go:276] 0 containers: []
	W0603 12:10:09.138555   73662 logs.go:278] No container was found matching "kube-scheduler"
	I0603 12:10:09.138561   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 12:10:09.138614   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 12:10:09.178492   73662 cri.go:89] found id: ""
	I0603 12:10:09.178519   73662 logs.go:276] 0 containers: []
	W0603 12:10:09.178529   73662 logs.go:278] No container was found matching "kube-proxy"
	I0603 12:10:09.178537   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 12:10:09.178603   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 12:10:09.215779   73662 cri.go:89] found id: ""
	I0603 12:10:09.215812   73662 logs.go:276] 0 containers: []
	W0603 12:10:09.215819   73662 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 12:10:09.215832   73662 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 12:10:09.215894   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 12:10:09.250800   73662 cri.go:89] found id: ""
	I0603 12:10:09.250835   73662 logs.go:276] 0 containers: []
	W0603 12:10:09.250845   73662 logs.go:278] No container was found matching "kindnet"
	I0603 12:10:09.250852   73662 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 12:10:09.250913   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 12:10:09.286742   73662 cri.go:89] found id: ""
	I0603 12:10:09.286773   73662 logs.go:276] 0 containers: []
	W0603 12:10:09.286784   73662 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 12:10:09.286794   73662 logs.go:123] Gathering logs for kubelet ...
	I0603 12:10:09.286808   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 12:10:09.341156   73662 logs.go:123] Gathering logs for dmesg ...
	I0603 12:10:09.341189   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 12:10:09.356237   73662 logs.go:123] Gathering logs for describe nodes ...
	I0603 12:10:09.356273   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 12:10:09.436633   73662 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 12:10:09.436654   73662 logs.go:123] Gathering logs for CRI-O ...
	I0603 12:10:09.436666   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 12:10:09.519296   73662 logs.go:123] Gathering logs for container status ...
	I0603 12:10:09.519336   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 12:10:06.581573   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:10:09.081354   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:10:10.113408   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:10:12.113838   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:10:10.168197   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:10:12.667631   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:10:14.667886   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:10:12.090458   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:10:12.105250   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 12:10:12.105324   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 12:10:12.143229   73662 cri.go:89] found id: ""
	I0603 12:10:12.143257   73662 logs.go:276] 0 containers: []
	W0603 12:10:12.143268   73662 logs.go:278] No container was found matching "kube-apiserver"
	I0603 12:10:12.143276   73662 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 12:10:12.143345   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 12:10:12.183319   73662 cri.go:89] found id: ""
	I0603 12:10:12.183343   73662 logs.go:276] 0 containers: []
	W0603 12:10:12.183353   73662 logs.go:278] No container was found matching "etcd"
	I0603 12:10:12.183361   73662 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 12:10:12.183421   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 12:10:12.221154   73662 cri.go:89] found id: ""
	I0603 12:10:12.221178   73662 logs.go:276] 0 containers: []
	W0603 12:10:12.221186   73662 logs.go:278] No container was found matching "coredns"
	I0603 12:10:12.221191   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 12:10:12.221252   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 12:10:12.256387   73662 cri.go:89] found id: ""
	I0603 12:10:12.256417   73662 logs.go:276] 0 containers: []
	W0603 12:10:12.256428   73662 logs.go:278] No container was found matching "kube-scheduler"
	I0603 12:10:12.256436   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 12:10:12.256492   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 12:10:12.298777   73662 cri.go:89] found id: ""
	I0603 12:10:12.298807   73662 logs.go:276] 0 containers: []
	W0603 12:10:12.298817   73662 logs.go:278] No container was found matching "kube-proxy"
	I0603 12:10:12.298825   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 12:10:12.298883   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 12:10:12.337031   73662 cri.go:89] found id: ""
	I0603 12:10:12.337060   73662 logs.go:276] 0 containers: []
	W0603 12:10:12.337070   73662 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 12:10:12.337077   73662 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 12:10:12.337136   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 12:10:12.373729   73662 cri.go:89] found id: ""
	I0603 12:10:12.373759   73662 logs.go:276] 0 containers: []
	W0603 12:10:12.373766   73662 logs.go:278] No container was found matching "kindnet"
	I0603 12:10:12.373772   73662 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 12:10:12.373823   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 12:10:12.408295   73662 cri.go:89] found id: ""
	I0603 12:10:12.408337   73662 logs.go:276] 0 containers: []
	W0603 12:10:12.408346   73662 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 12:10:12.408357   73662 logs.go:123] Gathering logs for kubelet ...
	I0603 12:10:12.408371   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 12:10:12.458814   73662 logs.go:123] Gathering logs for dmesg ...
	I0603 12:10:12.458844   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 12:10:12.471995   73662 logs.go:123] Gathering logs for describe nodes ...
	I0603 12:10:12.472020   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 12:10:12.542342   73662 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 12:10:12.542364   73662 logs.go:123] Gathering logs for CRI-O ...
	I0603 12:10:12.542379   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 12:10:12.620295   73662 logs.go:123] Gathering logs for container status ...
	I0603 12:10:12.620328   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 12:10:11.081820   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:10:13.580873   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:10:14.613837   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:10:16.613987   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:10:18.614774   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:10:17.166332   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:10:19.167726   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:10:15.162145   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:10:15.178057   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 12:10:15.178110   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 12:10:15.217189   73662 cri.go:89] found id: ""
	I0603 12:10:15.217218   73662 logs.go:276] 0 containers: []
	W0603 12:10:15.217228   73662 logs.go:278] No container was found matching "kube-apiserver"
	I0603 12:10:15.217235   73662 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 12:10:15.217291   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 12:10:15.265380   73662 cri.go:89] found id: ""
	I0603 12:10:15.265419   73662 logs.go:276] 0 containers: []
	W0603 12:10:15.265430   73662 logs.go:278] No container was found matching "etcd"
	I0603 12:10:15.265438   73662 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 12:10:15.265500   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 12:10:15.310671   73662 cri.go:89] found id: ""
	I0603 12:10:15.310736   73662 logs.go:276] 0 containers: []
	W0603 12:10:15.310772   73662 logs.go:278] No container was found matching "coredns"
	I0603 12:10:15.310787   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 12:10:15.310884   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 12:10:15.377888   73662 cri.go:89] found id: ""
	I0603 12:10:15.377914   73662 logs.go:276] 0 containers: []
	W0603 12:10:15.377921   73662 logs.go:278] No container was found matching "kube-scheduler"
	I0603 12:10:15.377928   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 12:10:15.377972   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 12:10:15.415472   73662 cri.go:89] found id: ""
	I0603 12:10:15.415502   73662 logs.go:276] 0 containers: []
	W0603 12:10:15.415510   73662 logs.go:278] No container was found matching "kube-proxy"
	I0603 12:10:15.415516   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 12:10:15.415563   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 12:10:15.450721   73662 cri.go:89] found id: ""
	I0603 12:10:15.450748   73662 logs.go:276] 0 containers: []
	W0603 12:10:15.450755   73662 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 12:10:15.450760   73662 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 12:10:15.450814   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 12:10:15.484329   73662 cri.go:89] found id: ""
	I0603 12:10:15.484356   73662 logs.go:276] 0 containers: []
	W0603 12:10:15.484363   73662 logs.go:278] No container was found matching "kindnet"
	I0603 12:10:15.484368   73662 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 12:10:15.484426   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 12:10:15.516976   73662 cri.go:89] found id: ""
	I0603 12:10:15.517005   73662 logs.go:276] 0 containers: []
	W0603 12:10:15.517015   73662 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 12:10:15.517025   73662 logs.go:123] Gathering logs for kubelet ...
	I0603 12:10:15.517038   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 12:10:15.569023   73662 logs.go:123] Gathering logs for dmesg ...
	I0603 12:10:15.569053   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 12:10:15.583710   73662 logs.go:123] Gathering logs for describe nodes ...
	I0603 12:10:15.583737   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 12:10:15.656403   73662 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 12:10:15.656426   73662 logs.go:123] Gathering logs for CRI-O ...
	I0603 12:10:15.656438   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 12:10:15.745585   73662 logs.go:123] Gathering logs for container status ...
	I0603 12:10:15.745619   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 12:10:18.290608   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:10:18.305165   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 12:10:18.305238   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 12:10:18.341905   73662 cri.go:89] found id: ""
	I0603 12:10:18.341929   73662 logs.go:276] 0 containers: []
	W0603 12:10:18.341939   73662 logs.go:278] No container was found matching "kube-apiserver"
	I0603 12:10:18.341945   73662 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 12:10:18.342001   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 12:10:18.378313   73662 cri.go:89] found id: ""
	I0603 12:10:18.378341   73662 logs.go:276] 0 containers: []
	W0603 12:10:18.378348   73662 logs.go:278] No container was found matching "etcd"
	I0603 12:10:18.378354   73662 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 12:10:18.378401   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 12:10:18.413366   73662 cri.go:89] found id: ""
	I0603 12:10:18.413414   73662 logs.go:276] 0 containers: []
	W0603 12:10:18.413424   73662 logs.go:278] No container was found matching "coredns"
	I0603 12:10:18.413432   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 12:10:18.413492   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 12:10:18.448694   73662 cri.go:89] found id: ""
	I0603 12:10:18.448727   73662 logs.go:276] 0 containers: []
	W0603 12:10:18.448738   73662 logs.go:278] No container was found matching "kube-scheduler"
	I0603 12:10:18.448745   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 12:10:18.448802   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 12:10:18.482640   73662 cri.go:89] found id: ""
	I0603 12:10:18.482678   73662 logs.go:276] 0 containers: []
	W0603 12:10:18.482689   73662 logs.go:278] No container was found matching "kube-proxy"
	I0603 12:10:18.482696   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 12:10:18.482757   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 12:10:18.520929   73662 cri.go:89] found id: ""
	I0603 12:10:18.520962   73662 logs.go:276] 0 containers: []
	W0603 12:10:18.520975   73662 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 12:10:18.520983   73662 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 12:10:18.521045   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 12:10:18.558678   73662 cri.go:89] found id: ""
	I0603 12:10:18.558712   73662 logs.go:276] 0 containers: []
	W0603 12:10:18.558723   73662 logs.go:278] No container was found matching "kindnet"
	I0603 12:10:18.558730   73662 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 12:10:18.558788   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 12:10:18.597574   73662 cri.go:89] found id: ""
	I0603 12:10:18.597599   73662 logs.go:276] 0 containers: []
	W0603 12:10:18.597609   73662 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 12:10:18.597619   73662 logs.go:123] Gathering logs for kubelet ...
	I0603 12:10:18.597633   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 12:10:18.652569   73662 logs.go:123] Gathering logs for dmesg ...
	I0603 12:10:18.652596   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 12:10:18.667829   73662 logs.go:123] Gathering logs for describe nodes ...
	I0603 12:10:18.667861   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 12:10:18.740869   73662 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 12:10:18.740888   73662 logs.go:123] Gathering logs for CRI-O ...
	I0603 12:10:18.740899   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 12:10:18.822108   73662 logs.go:123] Gathering logs for container status ...
	I0603 12:10:18.822143   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 12:10:15.581618   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:10:18.081181   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:10:21.113841   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:10:23.612530   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:10:21.667682   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:10:24.167351   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:10:21.363741   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:10:21.377941   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 12:10:21.378011   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 12:10:21.414406   73662 cri.go:89] found id: ""
	I0603 12:10:21.414434   73662 logs.go:276] 0 containers: []
	W0603 12:10:21.414446   73662 logs.go:278] No container was found matching "kube-apiserver"
	I0603 12:10:21.414454   73662 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 12:10:21.414513   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 12:10:21.449028   73662 cri.go:89] found id: ""
	I0603 12:10:21.449065   73662 logs.go:276] 0 containers: []
	W0603 12:10:21.449074   73662 logs.go:278] No container was found matching "etcd"
	I0603 12:10:21.449080   73662 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 12:10:21.449126   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 12:10:21.483017   73662 cri.go:89] found id: ""
	I0603 12:10:21.483052   73662 logs.go:276] 0 containers: []
	W0603 12:10:21.483064   73662 logs.go:278] No container was found matching "coredns"
	I0603 12:10:21.483071   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 12:10:21.483120   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 12:10:21.519195   73662 cri.go:89] found id: ""
	I0603 12:10:21.519227   73662 logs.go:276] 0 containers: []
	W0603 12:10:21.519237   73662 logs.go:278] No container was found matching "kube-scheduler"
	I0603 12:10:21.519245   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 12:10:21.519304   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 12:10:21.556228   73662 cri.go:89] found id: ""
	I0603 12:10:21.556257   73662 logs.go:276] 0 containers: []
	W0603 12:10:21.556270   73662 logs.go:278] No container was found matching "kube-proxy"
	I0603 12:10:21.556276   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 12:10:21.556337   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 12:10:21.594772   73662 cri.go:89] found id: ""
	I0603 12:10:21.594798   73662 logs.go:276] 0 containers: []
	W0603 12:10:21.594808   73662 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 12:10:21.594817   73662 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 12:10:21.594875   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 12:10:21.629808   73662 cri.go:89] found id: ""
	I0603 12:10:21.629830   73662 logs.go:276] 0 containers: []
	W0603 12:10:21.629837   73662 logs.go:278] No container was found matching "kindnet"
	I0603 12:10:21.629843   73662 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 12:10:21.629891   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 12:10:21.675237   73662 cri.go:89] found id: ""
	I0603 12:10:21.675263   73662 logs.go:276] 0 containers: []
	W0603 12:10:21.675272   73662 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 12:10:21.675282   73662 logs.go:123] Gathering logs for kubelet ...
	I0603 12:10:21.675295   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 12:10:21.730416   73662 logs.go:123] Gathering logs for dmesg ...
	I0603 12:10:21.730445   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 12:10:21.744442   73662 logs.go:123] Gathering logs for describe nodes ...
	I0603 12:10:21.744467   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 12:10:21.826282   73662 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 12:10:21.826308   73662 logs.go:123] Gathering logs for CRI-O ...
	I0603 12:10:21.826324   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 12:10:21.911387   73662 logs.go:123] Gathering logs for container status ...
	I0603 12:10:21.911422   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 12:10:24.454912   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:10:24.469992   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 12:10:24.470069   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 12:10:24.509462   73662 cri.go:89] found id: ""
	I0603 12:10:24.509501   73662 logs.go:276] 0 containers: []
	W0603 12:10:24.509516   73662 logs.go:278] No container was found matching "kube-apiserver"
	I0603 12:10:24.509523   73662 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 12:10:24.509588   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 12:10:24.543878   73662 cri.go:89] found id: ""
	I0603 12:10:24.543902   73662 logs.go:276] 0 containers: []
	W0603 12:10:24.543910   73662 logs.go:278] No container was found matching "etcd"
	I0603 12:10:24.543916   73662 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 12:10:24.543969   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 12:10:24.582712   73662 cri.go:89] found id: ""
	I0603 12:10:24.582741   73662 logs.go:276] 0 containers: []
	W0603 12:10:24.582752   73662 logs.go:278] No container was found matching "coredns"
	I0603 12:10:24.582759   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 12:10:24.582824   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 12:10:24.620533   73662 cri.go:89] found id: ""
	I0603 12:10:24.620560   73662 logs.go:276] 0 containers: []
	W0603 12:10:24.620571   73662 logs.go:278] No container was found matching "kube-scheduler"
	I0603 12:10:24.620577   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 12:10:24.620629   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 12:10:24.658750   73662 cri.go:89] found id: ""
	I0603 12:10:24.658774   73662 logs.go:276] 0 containers: []
	W0603 12:10:24.658781   73662 logs.go:278] No container was found matching "kube-proxy"
	I0603 12:10:24.658787   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 12:10:24.658830   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 12:10:24.697870   73662 cri.go:89] found id: ""
	I0603 12:10:24.697898   73662 logs.go:276] 0 containers: []
	W0603 12:10:24.697914   73662 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 12:10:24.697922   73662 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 12:10:24.697982   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 12:10:24.733557   73662 cri.go:89] found id: ""
	I0603 12:10:24.733583   73662 logs.go:276] 0 containers: []
	W0603 12:10:24.733593   73662 logs.go:278] No container was found matching "kindnet"
	I0603 12:10:24.733601   73662 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 12:10:24.733658   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 12:10:24.767874   73662 cri.go:89] found id: ""
	I0603 12:10:24.767901   73662 logs.go:276] 0 containers: []
	W0603 12:10:24.767910   73662 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 12:10:24.767920   73662 logs.go:123] Gathering logs for kubelet ...
	I0603 12:10:24.767934   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 12:10:24.821155   73662 logs.go:123] Gathering logs for dmesg ...
	I0603 12:10:24.821188   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 12:10:24.835506   73662 logs.go:123] Gathering logs for describe nodes ...
	I0603 12:10:24.835533   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 12:10:24.911295   73662 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 12:10:24.911317   73662 logs.go:123] Gathering logs for CRI-O ...
	I0603 12:10:24.911331   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 12:10:24.998831   73662 logs.go:123] Gathering logs for container status ...
	I0603 12:10:24.998870   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 12:10:20.581174   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:10:22.582071   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:10:25.081112   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:10:26.113580   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:10:28.113842   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:10:26.167517   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:10:28.666601   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:10:27.547553   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:10:27.562219   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 12:10:27.562283   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 12:10:27.604320   73662 cri.go:89] found id: ""
	I0603 12:10:27.604354   73662 logs.go:276] 0 containers: []
	W0603 12:10:27.604362   73662 logs.go:278] No container was found matching "kube-apiserver"
	I0603 12:10:27.604368   73662 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 12:10:27.604431   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 12:10:27.645069   73662 cri.go:89] found id: ""
	I0603 12:10:27.645093   73662 logs.go:276] 0 containers: []
	W0603 12:10:27.645100   73662 logs.go:278] No container was found matching "etcd"
	I0603 12:10:27.645105   73662 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 12:10:27.645208   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 12:10:27.682961   73662 cri.go:89] found id: ""
	I0603 12:10:27.682984   73662 logs.go:276] 0 containers: []
	W0603 12:10:27.682992   73662 logs.go:278] No container was found matching "coredns"
	I0603 12:10:27.682997   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 12:10:27.683065   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 12:10:27.716279   73662 cri.go:89] found id: ""
	I0603 12:10:27.716310   73662 logs.go:276] 0 containers: []
	W0603 12:10:27.716321   73662 logs.go:278] No container was found matching "kube-scheduler"
	I0603 12:10:27.716330   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 12:10:27.716405   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 12:10:27.758347   73662 cri.go:89] found id: ""
	I0603 12:10:27.758380   73662 logs.go:276] 0 containers: []
	W0603 12:10:27.758390   73662 logs.go:278] No container was found matching "kube-proxy"
	I0603 12:10:27.758397   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 12:10:27.758446   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 12:10:27.798212   73662 cri.go:89] found id: ""
	I0603 12:10:27.798240   73662 logs.go:276] 0 containers: []
	W0603 12:10:27.798249   73662 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 12:10:27.798258   73662 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 12:10:27.798318   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 12:10:27.831688   73662 cri.go:89] found id: ""
	I0603 12:10:27.831709   73662 logs.go:276] 0 containers: []
	W0603 12:10:27.831716   73662 logs.go:278] No container was found matching "kindnet"
	I0603 12:10:27.831722   73662 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 12:10:27.831776   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 12:10:27.864395   73662 cri.go:89] found id: ""
	I0603 12:10:27.864423   73662 logs.go:276] 0 containers: []
	W0603 12:10:27.864433   73662 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 12:10:27.864444   73662 logs.go:123] Gathering logs for kubelet ...
	I0603 12:10:27.864463   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 12:10:27.915528   73662 logs.go:123] Gathering logs for dmesg ...
	I0603 12:10:27.915556   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 12:10:27.929783   73662 logs.go:123] Gathering logs for describe nodes ...
	I0603 12:10:27.929806   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 12:10:28.005168   73662 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 12:10:28.005245   73662 logs.go:123] Gathering logs for CRI-O ...
	I0603 12:10:28.005267   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 12:10:28.090748   73662 logs.go:123] Gathering logs for container status ...
	I0603 12:10:28.090779   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 12:10:27.582855   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:10:30.081021   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:10:30.615472   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:10:33.112833   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:10:30.668051   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:10:33.167211   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:10:30.631148   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:10:30.645518   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 12:10:30.645590   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 12:10:30.684016   73662 cri.go:89] found id: ""
	I0603 12:10:30.684044   73662 logs.go:276] 0 containers: []
	W0603 12:10:30.684054   73662 logs.go:278] No container was found matching "kube-apiserver"
	I0603 12:10:30.684062   73662 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 12:10:30.684129   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 12:10:30.720344   73662 cri.go:89] found id: ""
	I0603 12:10:30.720371   73662 logs.go:276] 0 containers: []
	W0603 12:10:30.720379   73662 logs.go:278] No container was found matching "etcd"
	I0603 12:10:30.720384   73662 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 12:10:30.720437   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 12:10:30.754123   73662 cri.go:89] found id: ""
	I0603 12:10:30.754156   73662 logs.go:276] 0 containers: []
	W0603 12:10:30.754167   73662 logs.go:278] No container was found matching "coredns"
	I0603 12:10:30.754175   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 12:10:30.754228   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 12:10:30.788398   73662 cri.go:89] found id: ""
	I0603 12:10:30.788425   73662 logs.go:276] 0 containers: []
	W0603 12:10:30.788436   73662 logs.go:278] No container was found matching "kube-scheduler"
	I0603 12:10:30.788455   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 12:10:30.788523   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 12:10:30.826122   73662 cri.go:89] found id: ""
	I0603 12:10:30.826149   73662 logs.go:276] 0 containers: []
	W0603 12:10:30.826157   73662 logs.go:278] No container was found matching "kube-proxy"
	I0603 12:10:30.826163   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 12:10:30.826221   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 12:10:30.862886   73662 cri.go:89] found id: ""
	I0603 12:10:30.862917   73662 logs.go:276] 0 containers: []
	W0603 12:10:30.862930   73662 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 12:10:30.862938   73662 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 12:10:30.862995   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 12:10:30.897587   73662 cri.go:89] found id: ""
	I0603 12:10:30.897616   73662 logs.go:276] 0 containers: []
	W0603 12:10:30.897628   73662 logs.go:278] No container was found matching "kindnet"
	I0603 12:10:30.897635   73662 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 12:10:30.897692   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 12:10:30.936463   73662 cri.go:89] found id: ""
	I0603 12:10:30.936493   73662 logs.go:276] 0 containers: []
	W0603 12:10:30.936510   73662 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 12:10:30.936521   73662 logs.go:123] Gathering logs for kubelet ...
	I0603 12:10:30.936535   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 12:10:30.987304   73662 logs.go:123] Gathering logs for dmesg ...
	I0603 12:10:30.987341   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 12:10:31.001608   73662 logs.go:123] Gathering logs for describe nodes ...
	I0603 12:10:31.001636   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 12:10:31.079366   73662 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 12:10:31.079385   73662 logs.go:123] Gathering logs for CRI-O ...
	I0603 12:10:31.079398   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 12:10:31.158814   73662 logs.go:123] Gathering logs for container status ...
	I0603 12:10:31.158851   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 12:10:33.699524   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:10:33.713194   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 12:10:33.713256   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 12:10:33.747030   73662 cri.go:89] found id: ""
	I0603 12:10:33.747073   73662 logs.go:276] 0 containers: []
	W0603 12:10:33.747084   73662 logs.go:278] No container was found matching "kube-apiserver"
	I0603 12:10:33.747092   73662 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 12:10:33.747151   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 12:10:33.781873   73662 cri.go:89] found id: ""
	I0603 12:10:33.781909   73662 logs.go:276] 0 containers: []
	W0603 12:10:33.781920   73662 logs.go:278] No container was found matching "etcd"
	I0603 12:10:33.781927   73662 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 12:10:33.781992   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 12:10:33.828337   73662 cri.go:89] found id: ""
	I0603 12:10:33.828366   73662 logs.go:276] 0 containers: []
	W0603 12:10:33.828374   73662 logs.go:278] No container was found matching "coredns"
	I0603 12:10:33.828380   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 12:10:33.828433   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 12:10:33.868051   73662 cri.go:89] found id: ""
	I0603 12:10:33.868089   73662 logs.go:276] 0 containers: []
	W0603 12:10:33.868101   73662 logs.go:278] No container was found matching "kube-scheduler"
	I0603 12:10:33.868109   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 12:10:33.868168   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 12:10:33.913693   73662 cri.go:89] found id: ""
	I0603 12:10:33.913725   73662 logs.go:276] 0 containers: []
	W0603 12:10:33.913736   73662 logs.go:278] No container was found matching "kube-proxy"
	I0603 12:10:33.913743   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 12:10:33.913824   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 12:10:33.952082   73662 cri.go:89] found id: ""
	I0603 12:10:33.952111   73662 logs.go:276] 0 containers: []
	W0603 12:10:33.952122   73662 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 12:10:33.952129   73662 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 12:10:33.952183   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 12:10:33.994921   73662 cri.go:89] found id: ""
	I0603 12:10:33.994944   73662 logs.go:276] 0 containers: []
	W0603 12:10:33.994952   73662 logs.go:278] No container was found matching "kindnet"
	I0603 12:10:33.994959   73662 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 12:10:33.995008   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 12:10:34.033315   73662 cri.go:89] found id: ""
	I0603 12:10:34.033346   73662 logs.go:276] 0 containers: []
	W0603 12:10:34.033357   73662 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 12:10:34.033368   73662 logs.go:123] Gathering logs for kubelet ...
	I0603 12:10:34.033381   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 12:10:34.087719   73662 logs.go:123] Gathering logs for dmesg ...
	I0603 12:10:34.087746   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 12:10:34.101109   73662 logs.go:123] Gathering logs for describe nodes ...
	I0603 12:10:34.101134   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 12:10:34.180100   73662 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 12:10:34.180121   73662 logs.go:123] Gathering logs for CRI-O ...
	I0603 12:10:34.180135   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 12:10:34.255838   73662 logs.go:123] Gathering logs for container status ...
	I0603 12:10:34.255870   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 12:10:32.583080   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:10:35.081454   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:10:35.113238   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:10:37.611978   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:10:35.668549   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:10:38.166687   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:10:36.800845   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:10:36.815775   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 12:10:36.815834   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 12:10:36.849970   73662 cri.go:89] found id: ""
	I0603 12:10:36.849999   73662 logs.go:276] 0 containers: []
	W0603 12:10:36.850009   73662 logs.go:278] No container was found matching "kube-apiserver"
	I0603 12:10:36.850015   73662 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 12:10:36.850063   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 12:10:36.886418   73662 cri.go:89] found id: ""
	I0603 12:10:36.886448   73662 logs.go:276] 0 containers: []
	W0603 12:10:36.886456   73662 logs.go:278] No container was found matching "etcd"
	I0603 12:10:36.886461   73662 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 12:10:36.886506   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 12:10:36.919671   73662 cri.go:89] found id: ""
	I0603 12:10:36.919696   73662 logs.go:276] 0 containers: []
	W0603 12:10:36.919703   73662 logs.go:278] No container was found matching "coredns"
	I0603 12:10:36.919710   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 12:10:36.919766   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 12:10:36.954412   73662 cri.go:89] found id: ""
	I0603 12:10:36.954436   73662 logs.go:276] 0 containers: []
	W0603 12:10:36.954446   73662 logs.go:278] No container was found matching "kube-scheduler"
	I0603 12:10:36.954453   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 12:10:36.954513   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 12:10:36.989805   73662 cri.go:89] found id: ""
	I0603 12:10:36.989836   73662 logs.go:276] 0 containers: []
	W0603 12:10:36.989848   73662 logs.go:278] No container was found matching "kube-proxy"
	I0603 12:10:36.989856   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 12:10:36.989930   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 12:10:37.023883   73662 cri.go:89] found id: ""
	I0603 12:10:37.023913   73662 logs.go:276] 0 containers: []
	W0603 12:10:37.023922   73662 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 12:10:37.023930   73662 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 12:10:37.023995   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 12:10:37.058617   73662 cri.go:89] found id: ""
	I0603 12:10:37.058646   73662 logs.go:276] 0 containers: []
	W0603 12:10:37.058654   73662 logs.go:278] No container was found matching "kindnet"
	I0603 12:10:37.058661   73662 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 12:10:37.058719   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 12:10:37.093143   73662 cri.go:89] found id: ""
	I0603 12:10:37.093167   73662 logs.go:276] 0 containers: []
	W0603 12:10:37.093177   73662 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 12:10:37.093192   73662 logs.go:123] Gathering logs for container status ...
	I0603 12:10:37.093208   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 12:10:37.133117   73662 logs.go:123] Gathering logs for kubelet ...
	I0603 12:10:37.133147   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 12:10:37.188143   73662 logs.go:123] Gathering logs for dmesg ...
	I0603 12:10:37.188174   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 12:10:37.202654   73662 logs.go:123] Gathering logs for describe nodes ...
	I0603 12:10:37.202687   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 12:10:37.276401   73662 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 12:10:37.276429   73662 logs.go:123] Gathering logs for CRI-O ...
	I0603 12:10:37.276443   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 12:10:39.855590   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:10:39.870119   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 12:10:39.870189   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 12:10:39.907496   73662 cri.go:89] found id: ""
	I0603 12:10:39.907527   73662 logs.go:276] 0 containers: []
	W0603 12:10:39.907537   73662 logs.go:278] No container was found matching "kube-apiserver"
	I0603 12:10:39.907545   73662 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 12:10:39.907607   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 12:10:39.942745   73662 cri.go:89] found id: ""
	I0603 12:10:39.942774   73662 logs.go:276] 0 containers: []
	W0603 12:10:39.942784   73662 logs.go:278] No container was found matching "etcd"
	I0603 12:10:39.942791   73662 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 12:10:39.942853   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 12:10:39.981620   73662 cri.go:89] found id: ""
	I0603 12:10:39.981649   73662 logs.go:276] 0 containers: []
	W0603 12:10:39.981660   73662 logs.go:278] No container was found matching "coredns"
	I0603 12:10:39.981667   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 12:10:39.981718   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 12:10:40.020121   73662 cri.go:89] found id: ""
	I0603 12:10:40.020155   73662 logs.go:276] 0 containers: []
	W0603 12:10:40.020167   73662 logs.go:278] No container was found matching "kube-scheduler"
	I0603 12:10:40.020175   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 12:10:40.020240   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 12:10:40.059547   73662 cri.go:89] found id: ""
	I0603 12:10:40.059580   73662 logs.go:276] 0 containers: []
	W0603 12:10:40.059591   73662 logs.go:278] No container was found matching "kube-proxy"
	I0603 12:10:40.059598   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 12:10:40.059659   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 12:10:37.082294   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:10:39.581774   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:10:39.614702   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:10:42.112933   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:10:44.113960   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:10:40.167350   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:10:42.667457   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:10:40.097365   73662 cri.go:89] found id: ""
	I0603 12:10:40.097386   73662 logs.go:276] 0 containers: []
	W0603 12:10:40.097393   73662 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 12:10:40.097400   73662 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 12:10:40.097441   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 12:10:40.132635   73662 cri.go:89] found id: ""
	I0603 12:10:40.132657   73662 logs.go:276] 0 containers: []
	W0603 12:10:40.132664   73662 logs.go:278] No container was found matching "kindnet"
	I0603 12:10:40.132670   73662 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 12:10:40.132725   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 12:10:40.165849   73662 cri.go:89] found id: ""
	I0603 12:10:40.165875   73662 logs.go:276] 0 containers: []
	W0603 12:10:40.165885   73662 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 12:10:40.165895   73662 logs.go:123] Gathering logs for kubelet ...
	I0603 12:10:40.165910   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 12:10:40.218842   73662 logs.go:123] Gathering logs for dmesg ...
	I0603 12:10:40.218871   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 12:10:40.232800   73662 logs.go:123] Gathering logs for describe nodes ...
	I0603 12:10:40.232825   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 12:10:40.300026   73662 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 12:10:40.300050   73662 logs.go:123] Gathering logs for CRI-O ...
	I0603 12:10:40.300065   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 12:10:40.376985   73662 logs.go:123] Gathering logs for container status ...
	I0603 12:10:40.377017   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 12:10:42.916093   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:10:42.930099   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 12:10:42.930157   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 12:10:42.965541   73662 cri.go:89] found id: ""
	I0603 12:10:42.965565   73662 logs.go:276] 0 containers: []
	W0603 12:10:42.965575   73662 logs.go:278] No container was found matching "kube-apiserver"
	I0603 12:10:42.965582   73662 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 12:10:42.965639   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 12:10:43.000837   73662 cri.go:89] found id: ""
	I0603 12:10:43.000863   73662 logs.go:276] 0 containers: []
	W0603 12:10:43.000871   73662 logs.go:278] No container was found matching "etcd"
	I0603 12:10:43.000877   73662 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 12:10:43.000930   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 12:10:43.036557   73662 cri.go:89] found id: ""
	I0603 12:10:43.036593   73662 logs.go:276] 0 containers: []
	W0603 12:10:43.036605   73662 logs.go:278] No container was found matching "coredns"
	I0603 12:10:43.036626   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 12:10:43.036695   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 12:10:43.076479   73662 cri.go:89] found id: ""
	I0603 12:10:43.076507   73662 logs.go:276] 0 containers: []
	W0603 12:10:43.076515   73662 logs.go:278] No container was found matching "kube-scheduler"
	I0603 12:10:43.076521   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 12:10:43.076571   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 12:10:43.116301   73662 cri.go:89] found id: ""
	I0603 12:10:43.116328   73662 logs.go:276] 0 containers: []
	W0603 12:10:43.116338   73662 logs.go:278] No container was found matching "kube-proxy"
	I0603 12:10:43.116345   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 12:10:43.116393   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 12:10:43.150538   73662 cri.go:89] found id: ""
	I0603 12:10:43.150576   73662 logs.go:276] 0 containers: []
	W0603 12:10:43.150587   73662 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 12:10:43.150594   73662 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 12:10:43.150662   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 12:10:43.183948   73662 cri.go:89] found id: ""
	I0603 12:10:43.183976   73662 logs.go:276] 0 containers: []
	W0603 12:10:43.183987   73662 logs.go:278] No container was found matching "kindnet"
	I0603 12:10:43.183996   73662 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 12:10:43.184048   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 12:10:43.217610   73662 cri.go:89] found id: ""
	I0603 12:10:43.217636   73662 logs.go:276] 0 containers: []
	W0603 12:10:43.217643   73662 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 12:10:43.217651   73662 logs.go:123] Gathering logs for dmesg ...
	I0603 12:10:43.217669   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 12:10:43.231630   73662 logs.go:123] Gathering logs for describe nodes ...
	I0603 12:10:43.231655   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 12:10:43.298061   73662 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 12:10:43.298079   73662 logs.go:123] Gathering logs for CRI-O ...
	I0603 12:10:43.298092   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 12:10:43.388176   73662 logs.go:123] Gathering logs for container status ...
	I0603 12:10:43.388212   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 12:10:43.426277   73662 logs.go:123] Gathering logs for kubelet ...
	I0603 12:10:43.426303   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 12:10:42.081320   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:10:44.083275   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:10:46.612864   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:10:48.613666   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:10:45.166933   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:10:47.666784   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:10:45.977882   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:10:45.991655   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 12:10:45.991716   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 12:10:46.030455   73662 cri.go:89] found id: ""
	I0603 12:10:46.030483   73662 logs.go:276] 0 containers: []
	W0603 12:10:46.030492   73662 logs.go:278] No container was found matching "kube-apiserver"
	I0603 12:10:46.030497   73662 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 12:10:46.030542   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 12:10:46.065983   73662 cri.go:89] found id: ""
	I0603 12:10:46.066019   73662 logs.go:276] 0 containers: []
	W0603 12:10:46.066028   73662 logs.go:278] No container was found matching "etcd"
	I0603 12:10:46.066037   73662 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 12:10:46.066089   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 12:10:46.102788   73662 cri.go:89] found id: ""
	I0603 12:10:46.102816   73662 logs.go:276] 0 containers: []
	W0603 12:10:46.102824   73662 logs.go:278] No container was found matching "coredns"
	I0603 12:10:46.102830   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 12:10:46.102878   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 12:10:46.141588   73662 cri.go:89] found id: ""
	I0603 12:10:46.141615   73662 logs.go:276] 0 containers: []
	W0603 12:10:46.141625   73662 logs.go:278] No container was found matching "kube-scheduler"
	I0603 12:10:46.141634   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 12:10:46.141686   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 12:10:46.176109   73662 cri.go:89] found id: ""
	I0603 12:10:46.176133   73662 logs.go:276] 0 containers: []
	W0603 12:10:46.176140   73662 logs.go:278] No container was found matching "kube-proxy"
	I0603 12:10:46.176146   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 12:10:46.176199   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 12:10:46.211660   73662 cri.go:89] found id: ""
	I0603 12:10:46.211687   73662 logs.go:276] 0 containers: []
	W0603 12:10:46.211699   73662 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 12:10:46.211706   73662 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 12:10:46.211766   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 12:10:46.247703   73662 cri.go:89] found id: ""
	I0603 12:10:46.247724   73662 logs.go:276] 0 containers: []
	W0603 12:10:46.247731   73662 logs.go:278] No container was found matching "kindnet"
	I0603 12:10:46.247737   73662 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 12:10:46.247780   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 12:10:46.280647   73662 cri.go:89] found id: ""
	I0603 12:10:46.280666   73662 logs.go:276] 0 containers: []
	W0603 12:10:46.280673   73662 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 12:10:46.280681   73662 logs.go:123] Gathering logs for CRI-O ...
	I0603 12:10:46.280692   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 12:10:46.358965   73662 logs.go:123] Gathering logs for container status ...
	I0603 12:10:46.359007   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 12:10:46.402361   73662 logs.go:123] Gathering logs for kubelet ...
	I0603 12:10:46.402393   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 12:10:46.455346   73662 logs.go:123] Gathering logs for dmesg ...
	I0603 12:10:46.455378   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 12:10:46.468953   73662 logs.go:123] Gathering logs for describe nodes ...
	I0603 12:10:46.468979   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 12:10:46.543642   73662 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 12:10:49.044028   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:10:49.059160   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 12:10:49.059237   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 12:10:49.094538   73662 cri.go:89] found id: ""
	I0603 12:10:49.094562   73662 logs.go:276] 0 containers: []
	W0603 12:10:49.094572   73662 logs.go:278] No container was found matching "kube-apiserver"
	I0603 12:10:49.094579   73662 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 12:10:49.094639   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 12:10:49.152691   73662 cri.go:89] found id: ""
	I0603 12:10:49.152718   73662 logs.go:276] 0 containers: []
	W0603 12:10:49.152729   73662 logs.go:278] No container was found matching "etcd"
	I0603 12:10:49.152736   73662 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 12:10:49.152794   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 12:10:49.190598   73662 cri.go:89] found id: ""
	I0603 12:10:49.190624   73662 logs.go:276] 0 containers: []
	W0603 12:10:49.190632   73662 logs.go:278] No container was found matching "coredns"
	I0603 12:10:49.190637   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 12:10:49.190696   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 12:10:49.224713   73662 cri.go:89] found id: ""
	I0603 12:10:49.224735   73662 logs.go:276] 0 containers: []
	W0603 12:10:49.224746   73662 logs.go:278] No container was found matching "kube-scheduler"
	I0603 12:10:49.224752   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 12:10:49.224814   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 12:10:49.261124   73662 cri.go:89] found id: ""
	I0603 12:10:49.261151   73662 logs.go:276] 0 containers: []
	W0603 12:10:49.261159   73662 logs.go:278] No container was found matching "kube-proxy"
	I0603 12:10:49.261164   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 12:10:49.261218   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 12:10:49.297702   73662 cri.go:89] found id: ""
	I0603 12:10:49.297727   73662 logs.go:276] 0 containers: []
	W0603 12:10:49.297734   73662 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 12:10:49.297739   73662 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 12:10:49.297788   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 12:10:49.337168   73662 cri.go:89] found id: ""
	I0603 12:10:49.337194   73662 logs.go:276] 0 containers: []
	W0603 12:10:49.337202   73662 logs.go:278] No container was found matching "kindnet"
	I0603 12:10:49.337208   73662 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 12:10:49.337273   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 12:10:49.378570   73662 cri.go:89] found id: ""
	I0603 12:10:49.378594   73662 logs.go:276] 0 containers: []
	W0603 12:10:49.378602   73662 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 12:10:49.378611   73662 logs.go:123] Gathering logs for kubelet ...
	I0603 12:10:49.378623   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 12:10:49.431727   73662 logs.go:123] Gathering logs for dmesg ...
	I0603 12:10:49.431761   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 12:10:49.446359   73662 logs.go:123] Gathering logs for describe nodes ...
	I0603 12:10:49.446383   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 12:10:49.515520   73662 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 12:10:49.515539   73662 logs.go:123] Gathering logs for CRI-O ...
	I0603 12:10:49.515551   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 12:10:49.600658   73662 logs.go:123] Gathering logs for container status ...
	I0603 12:10:49.600697   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 12:10:46.580695   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:10:48.581909   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:10:51.111776   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:10:53.613132   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:10:50.171016   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:10:52.667473   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:10:52.146131   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:10:52.159370   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 12:10:52.159441   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 12:10:52.200541   73662 cri.go:89] found id: ""
	I0603 12:10:52.200571   73662 logs.go:276] 0 containers: []
	W0603 12:10:52.200578   73662 logs.go:278] No container was found matching "kube-apiserver"
	I0603 12:10:52.200583   73662 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 12:10:52.200643   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 12:10:52.243779   73662 cri.go:89] found id: ""
	I0603 12:10:52.243808   73662 logs.go:276] 0 containers: []
	W0603 12:10:52.243819   73662 logs.go:278] No container was found matching "etcd"
	I0603 12:10:52.243827   73662 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 12:10:52.243885   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 12:10:52.278098   73662 cri.go:89] found id: ""
	I0603 12:10:52.278133   73662 logs.go:276] 0 containers: []
	W0603 12:10:52.278142   73662 logs.go:278] No container was found matching "coredns"
	I0603 12:10:52.278148   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 12:10:52.278201   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 12:10:52.310844   73662 cri.go:89] found id: ""
	I0603 12:10:52.310873   73662 logs.go:276] 0 containers: []
	W0603 12:10:52.310884   73662 logs.go:278] No container was found matching "kube-scheduler"
	I0603 12:10:52.310892   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 12:10:52.310947   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 12:10:52.346131   73662 cri.go:89] found id: ""
	I0603 12:10:52.346160   73662 logs.go:276] 0 containers: []
	W0603 12:10:52.346170   73662 logs.go:278] No container was found matching "kube-proxy"
	I0603 12:10:52.346186   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 12:10:52.346252   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 12:10:52.383384   73662 cri.go:89] found id: ""
	I0603 12:10:52.383412   73662 logs.go:276] 0 containers: []
	W0603 12:10:52.383420   73662 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 12:10:52.383426   73662 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 12:10:52.383472   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 12:10:52.415110   73662 cri.go:89] found id: ""
	I0603 12:10:52.415141   73662 logs.go:276] 0 containers: []
	W0603 12:10:52.415152   73662 logs.go:278] No container was found matching "kindnet"
	I0603 12:10:52.415159   73662 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 12:10:52.415228   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 12:10:52.449473   73662 cri.go:89] found id: ""
	I0603 12:10:52.449503   73662 logs.go:276] 0 containers: []
	W0603 12:10:52.449511   73662 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 12:10:52.449520   73662 logs.go:123] Gathering logs for kubelet ...
	I0603 12:10:52.449535   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 12:10:52.501303   73662 logs.go:123] Gathering logs for dmesg ...
	I0603 12:10:52.501331   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 12:10:52.515125   73662 logs.go:123] Gathering logs for describe nodes ...
	I0603 12:10:52.515155   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 12:10:52.587250   73662 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 12:10:52.587273   73662 logs.go:123] Gathering logs for CRI-O ...
	I0603 12:10:52.587289   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 12:10:52.677387   73662 logs.go:123] Gathering logs for container status ...
	I0603 12:10:52.677417   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 12:10:51.081196   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:10:53.081389   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:10:55.082150   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:10:55.618759   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:10:58.112642   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:10:55.166477   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:10:57.666759   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:10:59.667117   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:10:55.216868   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:10:55.231081   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 12:10:55.231148   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 12:10:55.269023   73662 cri.go:89] found id: ""
	I0603 12:10:55.269060   73662 logs.go:276] 0 containers: []
	W0603 12:10:55.269071   73662 logs.go:278] No container was found matching "kube-apiserver"
	I0603 12:10:55.269078   73662 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 12:10:55.269140   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 12:10:55.304553   73662 cri.go:89] found id: ""
	I0603 12:10:55.304584   73662 logs.go:276] 0 containers: []
	W0603 12:10:55.304594   73662 logs.go:278] No container was found matching "etcd"
	I0603 12:10:55.304602   73662 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 12:10:55.304653   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 12:10:55.337397   73662 cri.go:89] found id: ""
	I0603 12:10:55.337417   73662 logs.go:276] 0 containers: []
	W0603 12:10:55.337426   73662 logs.go:278] No container was found matching "coredns"
	I0603 12:10:55.337431   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 12:10:55.337477   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 12:10:55.378338   73662 cri.go:89] found id: ""
	I0603 12:10:55.378360   73662 logs.go:276] 0 containers: []
	W0603 12:10:55.378369   73662 logs.go:278] No container was found matching "kube-scheduler"
	I0603 12:10:55.378376   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 12:10:55.378434   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 12:10:55.419463   73662 cri.go:89] found id: ""
	I0603 12:10:55.419488   73662 logs.go:276] 0 containers: []
	W0603 12:10:55.419506   73662 logs.go:278] No container was found matching "kube-proxy"
	I0603 12:10:55.419513   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 12:10:55.419570   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 12:10:55.459581   73662 cri.go:89] found id: ""
	I0603 12:10:55.459609   73662 logs.go:276] 0 containers: []
	W0603 12:10:55.459616   73662 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 12:10:55.459622   73662 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 12:10:55.459686   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 12:10:55.496314   73662 cri.go:89] found id: ""
	I0603 12:10:55.496345   73662 logs.go:276] 0 containers: []
	W0603 12:10:55.496355   73662 logs.go:278] No container was found matching "kindnet"
	I0603 12:10:55.496362   73662 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 12:10:55.496412   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 12:10:55.539728   73662 cri.go:89] found id: ""
	I0603 12:10:55.539756   73662 logs.go:276] 0 containers: []
	W0603 12:10:55.539768   73662 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 12:10:55.539779   73662 logs.go:123] Gathering logs for container status ...
	I0603 12:10:55.539794   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 12:10:55.603474   73662 logs.go:123] Gathering logs for kubelet ...
	I0603 12:10:55.603502   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 12:10:55.668368   73662 logs.go:123] Gathering logs for dmesg ...
	I0603 12:10:55.668405   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 12:10:55.683121   73662 logs.go:123] Gathering logs for describe nodes ...
	I0603 12:10:55.683151   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 12:10:55.751059   73662 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 12:10:55.751096   73662 logs.go:123] Gathering logs for CRI-O ...
	I0603 12:10:55.751113   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 12:10:58.325699   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:10:58.340070   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 12:10:58.340142   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 12:10:58.376205   73662 cri.go:89] found id: ""
	I0603 12:10:58.376240   73662 logs.go:276] 0 containers: []
	W0603 12:10:58.376251   73662 logs.go:278] No container was found matching "kube-apiserver"
	I0603 12:10:58.376258   73662 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 12:10:58.376325   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 12:10:58.409491   73662 cri.go:89] found id: ""
	I0603 12:10:58.409521   73662 logs.go:276] 0 containers: []
	W0603 12:10:58.409533   73662 logs.go:278] No container was found matching "etcd"
	I0603 12:10:58.409540   73662 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 12:10:58.409601   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 12:10:58.442738   73662 cri.go:89] found id: ""
	I0603 12:10:58.442768   73662 logs.go:276] 0 containers: []
	W0603 12:10:58.442779   73662 logs.go:278] No container was found matching "coredns"
	I0603 12:10:58.442787   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 12:10:58.442849   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 12:10:58.478390   73662 cri.go:89] found id: ""
	I0603 12:10:58.478417   73662 logs.go:276] 0 containers: []
	W0603 12:10:58.478425   73662 logs.go:278] No container was found matching "kube-scheduler"
	I0603 12:10:58.478430   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 12:10:58.478477   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 12:10:58.513652   73662 cri.go:89] found id: ""
	I0603 12:10:58.513683   73662 logs.go:276] 0 containers: []
	W0603 12:10:58.513694   73662 logs.go:278] No container was found matching "kube-proxy"
	I0603 12:10:58.513702   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 12:10:58.513762   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 12:10:58.546490   73662 cri.go:89] found id: ""
	I0603 12:10:58.546513   73662 logs.go:276] 0 containers: []
	W0603 12:10:58.546526   73662 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 12:10:58.546532   73662 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 12:10:58.546578   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 12:10:58.585772   73662 cri.go:89] found id: ""
	I0603 12:10:58.585796   73662 logs.go:276] 0 containers: []
	W0603 12:10:58.585803   73662 logs.go:278] No container was found matching "kindnet"
	I0603 12:10:58.585809   73662 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 12:10:58.585852   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 12:10:58.623108   73662 cri.go:89] found id: ""
	I0603 12:10:58.623126   73662 logs.go:276] 0 containers: []
	W0603 12:10:58.623133   73662 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 12:10:58.623140   73662 logs.go:123] Gathering logs for dmesg ...
	I0603 12:10:58.623150   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 12:10:58.636866   73662 logs.go:123] Gathering logs for describe nodes ...
	I0603 12:10:58.636892   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 12:10:58.709496   73662 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 12:10:58.709537   73662 logs.go:123] Gathering logs for CRI-O ...
	I0603 12:10:58.709549   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 12:10:58.785370   73662 logs.go:123] Gathering logs for container status ...
	I0603 12:10:58.785401   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 12:10:58.826456   73662 logs.go:123] Gathering logs for kubelet ...
	I0603 12:10:58.826482   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 12:10:57.581002   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:10:59.582082   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:11:00.114280   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:11:02.114479   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:11:01.668216   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:11:04.165821   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:11:01.379144   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:11:01.396357   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 12:11:01.396423   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 12:11:01.459762   73662 cri.go:89] found id: ""
	I0603 12:11:01.459798   73662 logs.go:276] 0 containers: []
	W0603 12:11:01.459809   73662 logs.go:278] No container was found matching "kube-apiserver"
	I0603 12:11:01.459817   73662 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 12:11:01.459877   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 12:11:01.517986   73662 cri.go:89] found id: ""
	I0603 12:11:01.518019   73662 logs.go:276] 0 containers: []
	W0603 12:11:01.518034   73662 logs.go:278] No container was found matching "etcd"
	I0603 12:11:01.518048   73662 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 12:11:01.518111   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 12:11:01.550571   73662 cri.go:89] found id: ""
	I0603 12:11:01.550599   73662 logs.go:276] 0 containers: []
	W0603 12:11:01.550611   73662 logs.go:278] No container was found matching "coredns"
	I0603 12:11:01.550618   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 12:11:01.550670   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 12:11:01.585185   73662 cri.go:89] found id: ""
	I0603 12:11:01.585210   73662 logs.go:276] 0 containers: []
	W0603 12:11:01.585221   73662 logs.go:278] No container was found matching "kube-scheduler"
	I0603 12:11:01.585230   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 12:11:01.585288   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 12:11:01.629706   73662 cri.go:89] found id: ""
	I0603 12:11:01.629734   73662 logs.go:276] 0 containers: []
	W0603 12:11:01.629744   73662 logs.go:278] No container was found matching "kube-proxy"
	I0603 12:11:01.629751   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 12:11:01.629815   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 12:11:01.667272   73662 cri.go:89] found id: ""
	I0603 12:11:01.667310   73662 logs.go:276] 0 containers: []
	W0603 12:11:01.667321   73662 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 12:11:01.667332   73662 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 12:11:01.667390   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 12:11:01.703379   73662 cri.go:89] found id: ""
	I0603 12:11:01.703409   73662 logs.go:276] 0 containers: []
	W0603 12:11:01.703419   73662 logs.go:278] No container was found matching "kindnet"
	I0603 12:11:01.703426   73662 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 12:11:01.703480   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 12:11:01.737944   73662 cri.go:89] found id: ""
	I0603 12:11:01.737972   73662 logs.go:276] 0 containers: []
	W0603 12:11:01.737979   73662 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 12:11:01.737987   73662 logs.go:123] Gathering logs for kubelet ...
	I0603 12:11:01.737997   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 12:11:01.786485   73662 logs.go:123] Gathering logs for dmesg ...
	I0603 12:11:01.786513   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 12:11:01.799760   73662 logs.go:123] Gathering logs for describe nodes ...
	I0603 12:11:01.799783   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 12:11:01.875617   73662 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 12:11:01.875639   73662 logs.go:123] Gathering logs for CRI-O ...
	I0603 12:11:01.875651   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 12:11:01.963485   73662 logs.go:123] Gathering logs for container status ...
	I0603 12:11:01.963529   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 12:11:04.507299   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:11:04.522138   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 12:11:04.522190   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 12:11:04.558117   73662 cri.go:89] found id: ""
	I0603 12:11:04.558145   73662 logs.go:276] 0 containers: []
	W0603 12:11:04.558155   73662 logs.go:278] No container was found matching "kube-apiserver"
	I0603 12:11:04.558162   73662 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 12:11:04.558222   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 12:11:04.595700   73662 cri.go:89] found id: ""
	I0603 12:11:04.595726   73662 logs.go:276] 0 containers: []
	W0603 12:11:04.595737   73662 logs.go:278] No container was found matching "etcd"
	I0603 12:11:04.595748   73662 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 12:11:04.595806   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 12:11:04.631793   73662 cri.go:89] found id: ""
	I0603 12:11:04.631823   73662 logs.go:276] 0 containers: []
	W0603 12:11:04.631832   73662 logs.go:278] No container was found matching "coredns"
	I0603 12:11:04.631839   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 12:11:04.631897   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 12:11:04.666362   73662 cri.go:89] found id: ""
	I0603 12:11:04.666392   73662 logs.go:276] 0 containers: []
	W0603 12:11:04.666401   73662 logs.go:278] No container was found matching "kube-scheduler"
	I0603 12:11:04.666408   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 12:11:04.666471   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 12:11:04.701446   73662 cri.go:89] found id: ""
	I0603 12:11:04.701476   73662 logs.go:276] 0 containers: []
	W0603 12:11:04.701487   73662 logs.go:278] No container was found matching "kube-proxy"
	I0603 12:11:04.701495   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 12:11:04.701555   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 12:11:04.736290   73662 cri.go:89] found id: ""
	I0603 12:11:04.736311   73662 logs.go:276] 0 containers: []
	W0603 12:11:04.736322   73662 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 12:11:04.736330   73662 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 12:11:04.736389   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 12:11:04.769705   73662 cri.go:89] found id: ""
	I0603 12:11:04.769725   73662 logs.go:276] 0 containers: []
	W0603 12:11:04.769732   73662 logs.go:278] No container was found matching "kindnet"
	I0603 12:11:04.769737   73662 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 12:11:04.769779   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 12:11:04.804875   73662 cri.go:89] found id: ""
	I0603 12:11:04.804898   73662 logs.go:276] 0 containers: []
	W0603 12:11:04.804909   73662 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 12:11:04.804927   73662 logs.go:123] Gathering logs for dmesg ...
	I0603 12:11:04.804941   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 12:11:04.818083   73662 logs.go:123] Gathering logs for describe nodes ...
	I0603 12:11:04.818112   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 12:11:04.890971   73662 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 12:11:04.891002   73662 logs.go:123] Gathering logs for CRI-O ...
	I0603 12:11:04.891017   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 12:11:04.970710   73662 logs.go:123] Gathering logs for container status ...
	I0603 12:11:04.970755   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 12:11:05.012247   73662 logs.go:123] Gathering logs for kubelet ...
	I0603 12:11:05.012282   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 12:11:01.582124   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:11:03.586504   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:11:04.612589   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:11:07.114578   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:11:06.166693   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:11:08.166916   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:11:07.567462   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:11:07.583533   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 12:11:07.583628   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 12:11:07.621078   73662 cri.go:89] found id: ""
	I0603 12:11:07.621102   73662 logs.go:276] 0 containers: []
	W0603 12:11:07.621110   73662 logs.go:278] No container was found matching "kube-apiserver"
	I0603 12:11:07.621119   73662 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 12:11:07.621178   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 12:11:07.656011   73662 cri.go:89] found id: ""
	I0603 12:11:07.656040   73662 logs.go:276] 0 containers: []
	W0603 12:11:07.656049   73662 logs.go:278] No container was found matching "etcd"
	I0603 12:11:07.656056   73662 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 12:11:07.656117   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 12:11:07.694711   73662 cri.go:89] found id: ""
	I0603 12:11:07.694741   73662 logs.go:276] 0 containers: []
	W0603 12:11:07.694751   73662 logs.go:278] No container was found matching "coredns"
	I0603 12:11:07.694759   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 12:11:07.694816   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 12:11:07.731139   73662 cri.go:89] found id: ""
	I0603 12:11:07.731168   73662 logs.go:276] 0 containers: []
	W0603 12:11:07.731178   73662 logs.go:278] No container was found matching "kube-scheduler"
	I0603 12:11:07.731185   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 12:11:07.731242   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 12:11:07.769734   73662 cri.go:89] found id: ""
	I0603 12:11:07.769763   73662 logs.go:276] 0 containers: []
	W0603 12:11:07.769772   73662 logs.go:278] No container was found matching "kube-proxy"
	I0603 12:11:07.769780   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 12:11:07.769838   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 12:11:07.804874   73662 cri.go:89] found id: ""
	I0603 12:11:07.804905   73662 logs.go:276] 0 containers: []
	W0603 12:11:07.804917   73662 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 12:11:07.804925   73662 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 12:11:07.804984   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 12:11:07.843901   73662 cri.go:89] found id: ""
	I0603 12:11:07.843931   73662 logs.go:276] 0 containers: []
	W0603 12:11:07.843941   73662 logs.go:278] No container was found matching "kindnet"
	I0603 12:11:07.843949   73662 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 12:11:07.844001   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 12:11:07.878763   73662 cri.go:89] found id: ""
	I0603 12:11:07.878792   73662 logs.go:276] 0 containers: []
	W0603 12:11:07.878803   73662 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 12:11:07.878814   73662 logs.go:123] Gathering logs for CRI-O ...
	I0603 12:11:07.878829   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 12:11:07.958064   73662 logs.go:123] Gathering logs for container status ...
	I0603 12:11:07.958095   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 12:11:08.000115   73662 logs.go:123] Gathering logs for kubelet ...
	I0603 12:11:08.000144   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 12:11:08.057652   73662 logs.go:123] Gathering logs for dmesg ...
	I0603 12:11:08.057685   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 12:11:08.071731   73662 logs.go:123] Gathering logs for describe nodes ...
	I0603 12:11:08.071759   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 12:11:08.148184   73662 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
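	The cycle above repeats while minikube waits for an apiserver to come back: each pass probes every control-plane component with crictl, finds nothing, and falls back to collecting journalctl, dmesg, and "describe nodes" output. A minimal shell sketch of that per-component presence check (illustrative reconstruction only; the component names and crictl flags are taken from the log lines above, not from minikube's source):

	    # Probe for each expected component the same way the log records.
	    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
	                kube-controller-manager kindnet kubernetes-dashboard; do
	      ids=$(sudo crictl ps -a --quiet --name="${name}")
	      if [ -z "${ids}" ]; then
	        echo "No container was found matching \"${name}\""
	      else
	        echo "found: ${ids}"
	      fi
	    done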
	I0603 12:11:06.080555   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:11:08.080661   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:11:10.081918   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:11:09.613756   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:11:12.112723   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:11:14.114236   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:11:10.167662   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:11:12.666872   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:11:10.649338   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:11:10.662870   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 12:11:10.662945   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 12:11:10.698461   73662 cri.go:89] found id: ""
	I0603 12:11:10.698492   73662 logs.go:276] 0 containers: []
	W0603 12:11:10.698500   73662 logs.go:278] No container was found matching "kube-apiserver"
	I0603 12:11:10.698507   73662 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 12:11:10.698560   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 12:11:10.733955   73662 cri.go:89] found id: ""
	I0603 12:11:10.733987   73662 logs.go:276] 0 containers: []
	W0603 12:11:10.733999   73662 logs.go:278] No container was found matching "etcd"
	I0603 12:11:10.734006   73662 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 12:11:10.734064   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 12:11:10.769578   73662 cri.go:89] found id: ""
	I0603 12:11:10.769605   73662 logs.go:276] 0 containers: []
	W0603 12:11:10.769615   73662 logs.go:278] No container was found matching "coredns"
	I0603 12:11:10.769622   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 12:11:10.769682   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 12:11:10.803353   73662 cri.go:89] found id: ""
	I0603 12:11:10.803382   73662 logs.go:276] 0 containers: []
	W0603 12:11:10.803393   73662 logs.go:278] No container was found matching "kube-scheduler"
	I0603 12:11:10.803401   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 12:11:10.803459   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 12:11:10.839791   73662 cri.go:89] found id: ""
	I0603 12:11:10.839819   73662 logs.go:276] 0 containers: []
	W0603 12:11:10.839828   73662 logs.go:278] No container was found matching "kube-proxy"
	I0603 12:11:10.839835   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 12:11:10.839894   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 12:11:10.878216   73662 cri.go:89] found id: ""
	I0603 12:11:10.878249   73662 logs.go:276] 0 containers: []
	W0603 12:11:10.878259   73662 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 12:11:10.878265   73662 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 12:11:10.878333   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 12:11:10.912606   73662 cri.go:89] found id: ""
	I0603 12:11:10.912637   73662 logs.go:276] 0 containers: []
	W0603 12:11:10.912645   73662 logs.go:278] No container was found matching "kindnet"
	I0603 12:11:10.912650   73662 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 12:11:10.912709   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 12:11:10.946669   73662 cri.go:89] found id: ""
	I0603 12:11:10.946699   73662 logs.go:276] 0 containers: []
	W0603 12:11:10.946708   73662 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 12:11:10.946718   73662 logs.go:123] Gathering logs for kubelet ...
	I0603 12:11:10.946733   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 12:11:10.996044   73662 logs.go:123] Gathering logs for dmesg ...
	I0603 12:11:10.996077   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 12:11:11.009522   73662 logs.go:123] Gathering logs for describe nodes ...
	I0603 12:11:11.009573   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 12:11:11.081623   73662 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 12:11:11.081642   73662 logs.go:123] Gathering logs for CRI-O ...
	I0603 12:11:11.081652   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 12:11:11.162795   73662 logs.go:123] Gathering logs for container status ...
	I0603 12:11:11.162826   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 12:11:13.704492   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:11:13.718870   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 12:11:13.718939   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 12:11:13.757818   73662 cri.go:89] found id: ""
	I0603 12:11:13.757842   73662 logs.go:276] 0 containers: []
	W0603 12:11:13.757850   73662 logs.go:278] No container was found matching "kube-apiserver"
	I0603 12:11:13.757859   73662 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 12:11:13.757904   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 12:11:13.791959   73662 cri.go:89] found id: ""
	I0603 12:11:13.791989   73662 logs.go:276] 0 containers: []
	W0603 12:11:13.792003   73662 logs.go:278] No container was found matching "etcd"
	I0603 12:11:13.792010   73662 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 12:11:13.792072   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 12:11:13.827443   73662 cri.go:89] found id: ""
	I0603 12:11:13.827471   73662 logs.go:276] 0 containers: []
	W0603 12:11:13.827478   73662 logs.go:278] No container was found matching "coredns"
	I0603 12:11:13.827484   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 12:11:13.827538   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 12:11:13.862237   73662 cri.go:89] found id: ""
	I0603 12:11:13.862267   73662 logs.go:276] 0 containers: []
	W0603 12:11:13.862277   73662 logs.go:278] No container was found matching "kube-scheduler"
	I0603 12:11:13.862284   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 12:11:13.862375   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 12:11:13.898873   73662 cri.go:89] found id: ""
	I0603 12:11:13.898906   73662 logs.go:276] 0 containers: []
	W0603 12:11:13.898917   73662 logs.go:278] No container was found matching "kube-proxy"
	I0603 12:11:13.898924   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 12:11:13.898981   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 12:11:13.932870   73662 cri.go:89] found id: ""
	I0603 12:11:13.932899   73662 logs.go:276] 0 containers: []
	W0603 12:11:13.932908   73662 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 12:11:13.932913   73662 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 12:11:13.932960   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 12:11:13.968575   73662 cri.go:89] found id: ""
	I0603 12:11:13.968597   73662 logs.go:276] 0 containers: []
	W0603 12:11:13.968605   73662 logs.go:278] No container was found matching "kindnet"
	I0603 12:11:13.968610   73662 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 12:11:13.968663   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 12:11:14.007252   73662 cri.go:89] found id: ""
	I0603 12:11:14.007281   73662 logs.go:276] 0 containers: []
	W0603 12:11:14.007291   73662 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 12:11:14.007302   73662 logs.go:123] Gathering logs for describe nodes ...
	I0603 12:11:14.007317   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 12:11:14.080572   73662 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 12:11:14.080595   73662 logs.go:123] Gathering logs for CRI-O ...
	I0603 12:11:14.080607   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 12:11:14.171851   73662 logs.go:123] Gathering logs for container status ...
	I0603 12:11:14.171886   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 12:11:14.212697   73662 logs.go:123] Gathering logs for kubelet ...
	I0603 12:11:14.212726   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 12:11:14.264925   73662 logs.go:123] Gathering logs for dmesg ...
	I0603 12:11:14.264958   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 12:11:12.580430   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:11:14.581407   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:11:16.615592   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:11:19.111956   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:11:15.166724   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:11:17.667851   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:11:16.780783   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:11:16.795029   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 12:11:16.795127   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 12:11:16.833178   73662 cri.go:89] found id: ""
	I0603 12:11:16.833208   73662 logs.go:276] 0 containers: []
	W0603 12:11:16.833218   73662 logs.go:278] No container was found matching "kube-apiserver"
	I0603 12:11:16.833226   73662 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 12:11:16.833287   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 12:11:16.869318   73662 cri.go:89] found id: ""
	I0603 12:11:16.869349   73662 logs.go:276] 0 containers: []
	W0603 12:11:16.869359   73662 logs.go:278] No container was found matching "etcd"
	I0603 12:11:16.869366   73662 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 12:11:16.869429   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 12:11:16.902810   73662 cri.go:89] found id: ""
	I0603 12:11:16.902836   73662 logs.go:276] 0 containers: []
	W0603 12:11:16.902843   73662 logs.go:278] No container was found matching "coredns"
	I0603 12:11:16.902849   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 12:11:16.902893   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 12:11:16.936404   73662 cri.go:89] found id: ""
	I0603 12:11:16.936432   73662 logs.go:276] 0 containers: []
	W0603 12:11:16.936442   73662 logs.go:278] No container was found matching "kube-scheduler"
	I0603 12:11:16.936449   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 12:11:16.936505   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 12:11:16.971056   73662 cri.go:89] found id: ""
	I0603 12:11:16.971083   73662 logs.go:276] 0 containers: []
	W0603 12:11:16.971092   73662 logs.go:278] No container was found matching "kube-proxy"
	I0603 12:11:16.971097   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 12:11:16.971147   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 12:11:17.005389   73662 cri.go:89] found id: ""
	I0603 12:11:17.005416   73662 logs.go:276] 0 containers: []
	W0603 12:11:17.005427   73662 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 12:11:17.005435   73662 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 12:11:17.005491   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 12:11:17.047093   73662 cri.go:89] found id: ""
	I0603 12:11:17.047118   73662 logs.go:276] 0 containers: []
	W0603 12:11:17.047126   73662 logs.go:278] No container was found matching "kindnet"
	I0603 12:11:17.047131   73662 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 12:11:17.047187   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 12:11:17.093020   73662 cri.go:89] found id: ""
	I0603 12:11:17.093049   73662 logs.go:276] 0 containers: []
	W0603 12:11:17.093057   73662 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 12:11:17.093068   73662 logs.go:123] Gathering logs for CRI-O ...
	I0603 12:11:17.093081   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 12:11:17.177970   73662 logs.go:123] Gathering logs for container status ...
	I0603 12:11:17.178001   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 12:11:17.219530   73662 logs.go:123] Gathering logs for kubelet ...
	I0603 12:11:17.219563   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 12:11:17.272776   73662 logs.go:123] Gathering logs for dmesg ...
	I0603 12:11:17.272808   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 12:11:17.287573   73662 logs.go:123] Gathering logs for describe nodes ...
	I0603 12:11:17.287610   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 12:11:17.361020   73662 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 12:11:19.861599   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:11:19.874988   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 12:11:19.875075   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 12:11:19.910641   73662 cri.go:89] found id: ""
	I0603 12:11:19.910664   73662 logs.go:276] 0 containers: []
	W0603 12:11:19.910672   73662 logs.go:278] No container was found matching "kube-apiserver"
	I0603 12:11:19.910678   73662 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 12:11:19.910738   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 12:11:19.947432   73662 cri.go:89] found id: ""
	I0603 12:11:19.947457   73662 logs.go:276] 0 containers: []
	W0603 12:11:19.947465   73662 logs.go:278] No container was found matching "etcd"
	I0603 12:11:19.947475   73662 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 12:11:19.947528   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 12:11:19.986254   73662 cri.go:89] found id: ""
	I0603 12:11:19.986284   73662 logs.go:276] 0 containers: []
	W0603 12:11:19.986296   73662 logs.go:278] No container was found matching "coredns"
	I0603 12:11:19.986303   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 12:11:19.986370   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 12:11:20.022447   73662 cri.go:89] found id: ""
	I0603 12:11:20.022477   73662 logs.go:276] 0 containers: []
	W0603 12:11:20.022488   73662 logs.go:278] No container was found matching "kube-scheduler"
	I0603 12:11:20.022496   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 12:11:20.022555   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 12:11:20.056731   73662 cri.go:89] found id: ""
	I0603 12:11:20.056755   73662 logs.go:276] 0 containers: []
	W0603 12:11:20.056763   73662 logs.go:278] No container was found matching "kube-proxy"
	I0603 12:11:20.056769   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 12:11:20.056819   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 12:11:17.081290   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:11:19.581301   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:11:21.113769   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:11:23.106545   73294 pod_ready.go:81] duration metric: took 4m0.000411778s for pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace to be "Ready" ...
	E0603 12:11:23.106575   73294 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace to be "Ready" (will not retry!)
	I0603 12:11:23.106597   73294 pod_ready.go:38] duration metric: took 4m5.898372288s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0603 12:11:23.106627   73294 kubeadm.go:591] duration metric: took 4m13.660386139s to restartPrimaryControlPlane
	W0603 12:11:23.106692   73294 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0603 12:11:23.106750   73294 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
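	The timeout just above (4m0s for metrics-server-569cc877fc-tnhbj) is what pushes this run into the cluster reset that follows. Roughly the same readiness check can be reproduced by hand against the affected cluster; kubectl wait is only an illustration of the condition being polled, not the code path minikube uses:

	    kubectl -n kube-system wait --for=condition=Ready \
	      pod/metrics-server-569cc877fc-tnhbj --timeout=4m0s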
	I0603 12:11:20.168291   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:11:22.667983   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:11:24.668130   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:11:20.095511   73662 cri.go:89] found id: ""
	I0603 12:11:20.095537   73662 logs.go:276] 0 containers: []
	W0603 12:11:20.095547   73662 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 12:11:20.095552   73662 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 12:11:20.095595   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 12:11:20.130562   73662 cri.go:89] found id: ""
	I0603 12:11:20.130581   73662 logs.go:276] 0 containers: []
	W0603 12:11:20.130589   73662 logs.go:278] No container was found matching "kindnet"
	I0603 12:11:20.130594   73662 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 12:11:20.130648   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 12:11:20.165231   73662 cri.go:89] found id: ""
	I0603 12:11:20.165257   73662 logs.go:276] 0 containers: []
	W0603 12:11:20.165267   73662 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 12:11:20.165276   73662 logs.go:123] Gathering logs for kubelet ...
	I0603 12:11:20.165290   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 12:11:20.221790   73662 logs.go:123] Gathering logs for dmesg ...
	I0603 12:11:20.221826   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 12:11:20.237415   73662 logs.go:123] Gathering logs for describe nodes ...
	I0603 12:11:20.237440   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 12:11:20.310615   73662 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 12:11:20.310641   73662 logs.go:123] Gathering logs for CRI-O ...
	I0603 12:11:20.310657   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 12:11:20.385667   73662 logs.go:123] Gathering logs for container status ...
	I0603 12:11:20.385701   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 12:11:22.925911   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:11:22.938958   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 12:11:22.939047   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 12:11:22.981898   73662 cri.go:89] found id: ""
	I0603 12:11:22.981928   73662 logs.go:276] 0 containers: []
	W0603 12:11:22.981939   73662 logs.go:278] No container was found matching "kube-apiserver"
	I0603 12:11:22.981954   73662 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 12:11:22.982026   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 12:11:23.025590   73662 cri.go:89] found id: ""
	I0603 12:11:23.025624   73662 logs.go:276] 0 containers: []
	W0603 12:11:23.025632   73662 logs.go:278] No container was found matching "etcd"
	I0603 12:11:23.025638   73662 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 12:11:23.025691   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 12:11:23.072938   73662 cri.go:89] found id: ""
	I0603 12:11:23.072968   73662 logs.go:276] 0 containers: []
	W0603 12:11:23.072980   73662 logs.go:278] No container was found matching "coredns"
	I0603 12:11:23.072988   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 12:11:23.073057   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 12:11:23.114546   73662 cri.go:89] found id: ""
	I0603 12:11:23.114573   73662 logs.go:276] 0 containers: []
	W0603 12:11:23.114582   73662 logs.go:278] No container was found matching "kube-scheduler"
	I0603 12:11:23.114589   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 12:11:23.114654   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 12:11:23.152203   73662 cri.go:89] found id: ""
	I0603 12:11:23.152229   73662 logs.go:276] 0 containers: []
	W0603 12:11:23.152236   73662 logs.go:278] No container was found matching "kube-proxy"
	I0603 12:11:23.152242   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 12:11:23.152289   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 12:11:23.204179   73662 cri.go:89] found id: ""
	I0603 12:11:23.204228   73662 logs.go:276] 0 containers: []
	W0603 12:11:23.204240   73662 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 12:11:23.204247   73662 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 12:11:23.204308   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 12:11:23.244217   73662 cri.go:89] found id: ""
	I0603 12:11:23.244246   73662 logs.go:276] 0 containers: []
	W0603 12:11:23.244256   73662 logs.go:278] No container was found matching "kindnet"
	I0603 12:11:23.244264   73662 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 12:11:23.244326   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 12:11:23.286094   73662 cri.go:89] found id: ""
	I0603 12:11:23.286173   73662 logs.go:276] 0 containers: []
	W0603 12:11:23.286190   73662 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 12:11:23.286201   73662 logs.go:123] Gathering logs for kubelet ...
	I0603 12:11:23.286215   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 12:11:23.357802   73662 logs.go:123] Gathering logs for dmesg ...
	I0603 12:11:23.357850   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 12:11:23.376808   73662 logs.go:123] Gathering logs for describe nodes ...
	I0603 12:11:23.376839   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 12:11:23.470658   73662 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 12:11:23.470691   73662 logs.go:123] Gathering logs for CRI-O ...
	I0603 12:11:23.470705   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 12:11:23.584192   73662 logs.go:123] Gathering logs for container status ...
	I0603 12:11:23.584241   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 12:11:22.075519   73179 pod_ready.go:81] duration metric: took 4m0.000796038s for pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace to be "Ready" ...
	E0603 12:11:22.075561   73179 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace to be "Ready" (will not retry!)
	I0603 12:11:22.075598   73179 pod_ready.go:38] duration metric: took 4m12.795532428s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0603 12:11:22.075626   73179 kubeadm.go:591] duration metric: took 4m22.69078868s to restartPrimaryControlPlane
	W0603 12:11:22.075677   73179 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0603 12:11:22.075720   73179 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0603 12:11:27.170198   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:11:29.667670   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:11:26.132511   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:11:26.150549   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 12:11:26.150619   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 12:11:26.196791   73662 cri.go:89] found id: ""
	I0603 12:11:26.196817   73662 logs.go:276] 0 containers: []
	W0603 12:11:26.196827   73662 logs.go:278] No container was found matching "kube-apiserver"
	I0603 12:11:26.196834   73662 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 12:11:26.196912   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 12:11:26.233584   73662 cri.go:89] found id: ""
	I0603 12:11:26.233614   73662 logs.go:276] 0 containers: []
	W0603 12:11:26.233624   73662 logs.go:278] No container was found matching "etcd"
	I0603 12:11:26.233631   73662 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 12:11:26.233692   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 12:11:26.272648   73662 cri.go:89] found id: ""
	I0603 12:11:26.272677   73662 logs.go:276] 0 containers: []
	W0603 12:11:26.272688   73662 logs.go:278] No container was found matching "coredns"
	I0603 12:11:26.272696   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 12:11:26.272758   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 12:11:26.313775   73662 cri.go:89] found id: ""
	I0603 12:11:26.313806   73662 logs.go:276] 0 containers: []
	W0603 12:11:26.313817   73662 logs.go:278] No container was found matching "kube-scheduler"
	I0603 12:11:26.313824   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 12:11:26.313883   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 12:11:26.355591   73662 cri.go:89] found id: ""
	I0603 12:11:26.355626   73662 logs.go:276] 0 containers: []
	W0603 12:11:26.355638   73662 logs.go:278] No container was found matching "kube-proxy"
	I0603 12:11:26.355646   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 12:11:26.355711   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 12:11:26.406265   73662 cri.go:89] found id: ""
	I0603 12:11:26.406299   73662 logs.go:276] 0 containers: []
	W0603 12:11:26.406306   73662 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 12:11:26.406318   73662 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 12:11:26.406378   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 12:11:26.443279   73662 cri.go:89] found id: ""
	I0603 12:11:26.443321   73662 logs.go:276] 0 containers: []
	W0603 12:11:26.443333   73662 logs.go:278] No container was found matching "kindnet"
	I0603 12:11:26.443340   73662 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 12:11:26.443403   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 12:11:26.479300   73662 cri.go:89] found id: ""
	I0603 12:11:26.479334   73662 logs.go:276] 0 containers: []
	W0603 12:11:26.479346   73662 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 12:11:26.479358   73662 logs.go:123] Gathering logs for kubelet ...
	I0603 12:11:26.479371   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 12:11:26.531360   73662 logs.go:123] Gathering logs for dmesg ...
	I0603 12:11:26.531394   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 12:11:26.547939   73662 logs.go:123] Gathering logs for describe nodes ...
	I0603 12:11:26.547973   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 12:11:26.625987   73662 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 12:11:26.626016   73662 logs.go:123] Gathering logs for CRI-O ...
	I0603 12:11:26.626032   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 12:11:26.714014   73662 logs.go:123] Gathering logs for container status ...
	I0603 12:11:26.714054   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 12:11:29.267203   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:11:29.281448   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 12:11:29.281522   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 12:11:29.315484   73662 cri.go:89] found id: ""
	I0603 12:11:29.315512   73662 logs.go:276] 0 containers: []
	W0603 12:11:29.315519   73662 logs.go:278] No container was found matching "kube-apiserver"
	I0603 12:11:29.315530   73662 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 12:11:29.315586   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 12:11:29.357054   73662 cri.go:89] found id: ""
	I0603 12:11:29.357084   73662 logs.go:276] 0 containers: []
	W0603 12:11:29.357095   73662 logs.go:278] No container was found matching "etcd"
	I0603 12:11:29.357103   73662 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 12:11:29.357163   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 12:11:29.402434   73662 cri.go:89] found id: ""
	I0603 12:11:29.402461   73662 logs.go:276] 0 containers: []
	W0603 12:11:29.402471   73662 logs.go:278] No container was found matching "coredns"
	I0603 12:11:29.402478   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 12:11:29.402520   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 12:11:29.437822   73662 cri.go:89] found id: ""
	I0603 12:11:29.437854   73662 logs.go:276] 0 containers: []
	W0603 12:11:29.437865   73662 logs.go:278] No container was found matching "kube-scheduler"
	I0603 12:11:29.437871   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 12:11:29.437917   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 12:11:29.474637   73662 cri.go:89] found id: ""
	I0603 12:11:29.474658   73662 logs.go:276] 0 containers: []
	W0603 12:11:29.474665   73662 logs.go:278] No container was found matching "kube-proxy"
	I0603 12:11:29.474671   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 12:11:29.474725   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 12:11:29.508547   73662 cri.go:89] found id: ""
	I0603 12:11:29.508573   73662 logs.go:276] 0 containers: []
	W0603 12:11:29.508580   73662 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 12:11:29.508586   73662 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 12:11:29.508630   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 12:11:29.544524   73662 cri.go:89] found id: ""
	I0603 12:11:29.544553   73662 logs.go:276] 0 containers: []
	W0603 12:11:29.544561   73662 logs.go:278] No container was found matching "kindnet"
	I0603 12:11:29.544567   73662 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 12:11:29.544621   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 12:11:29.582549   73662 cri.go:89] found id: ""
	I0603 12:11:29.582582   73662 logs.go:276] 0 containers: []
	W0603 12:11:29.582593   73662 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 12:11:29.582604   73662 logs.go:123] Gathering logs for kubelet ...
	I0603 12:11:29.582618   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 12:11:29.641931   73662 logs.go:123] Gathering logs for dmesg ...
	I0603 12:11:29.641977   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 12:11:29.664918   73662 logs.go:123] Gathering logs for describe nodes ...
	I0603 12:11:29.664948   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 12:11:29.740591   73662 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 12:11:29.740615   73662 logs.go:123] Gathering logs for CRI-O ...
	I0603 12:11:29.740629   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 12:11:29.814456   73662 logs.go:123] Gathering logs for container status ...
	I0603 12:11:29.814489   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 12:11:32.166042   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:11:34.166283   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:11:32.359122   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:11:32.373552   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 12:11:32.373623   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 12:11:32.408431   73662 cri.go:89] found id: ""
	I0603 12:11:32.408461   73662 logs.go:276] 0 containers: []
	W0603 12:11:32.408471   73662 logs.go:278] No container was found matching "kube-apiserver"
	I0603 12:11:32.408479   73662 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 12:11:32.408533   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 12:11:32.444242   73662 cri.go:89] found id: ""
	I0603 12:11:32.444266   73662 logs.go:276] 0 containers: []
	W0603 12:11:32.444273   73662 logs.go:278] No container was found matching "etcd"
	I0603 12:11:32.444279   73662 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 12:11:32.444323   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 12:11:32.477205   73662 cri.go:89] found id: ""
	I0603 12:11:32.477230   73662 logs.go:276] 0 containers: []
	W0603 12:11:32.477237   73662 logs.go:278] No container was found matching "coredns"
	I0603 12:11:32.477243   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 12:11:32.477298   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 12:11:32.512434   73662 cri.go:89] found id: ""
	I0603 12:11:32.512482   73662 logs.go:276] 0 containers: []
	W0603 12:11:32.512494   73662 logs.go:278] No container was found matching "kube-scheduler"
	I0603 12:11:32.512501   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 12:11:32.512559   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 12:11:32.545619   73662 cri.go:89] found id: ""
	I0603 12:11:32.545645   73662 logs.go:276] 0 containers: []
	W0603 12:11:32.545655   73662 logs.go:278] No container was found matching "kube-proxy"
	I0603 12:11:32.545662   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 12:11:32.545715   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 12:11:32.579093   73662 cri.go:89] found id: ""
	I0603 12:11:32.579121   73662 logs.go:276] 0 containers: []
	W0603 12:11:32.579131   73662 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 12:11:32.579138   73662 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 12:11:32.579196   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 12:11:32.616826   73662 cri.go:89] found id: ""
	I0603 12:11:32.616851   73662 logs.go:276] 0 containers: []
	W0603 12:11:32.616858   73662 logs.go:278] No container was found matching "kindnet"
	I0603 12:11:32.616864   73662 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 12:11:32.616917   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 12:11:32.660083   73662 cri.go:89] found id: ""
	I0603 12:11:32.660113   73662 logs.go:276] 0 containers: []
	W0603 12:11:32.660124   73662 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 12:11:32.660132   73662 logs.go:123] Gathering logs for container status ...
	I0603 12:11:32.660143   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 12:11:32.697974   73662 logs.go:123] Gathering logs for kubelet ...
	I0603 12:11:32.698002   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 12:11:32.748797   73662 logs.go:123] Gathering logs for dmesg ...
	I0603 12:11:32.748835   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 12:11:32.762517   73662 logs.go:123] Gathering logs for describe nodes ...
	I0603 12:11:32.762580   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 12:11:32.838358   73662 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 12:11:32.838383   73662 logs.go:123] Gathering logs for CRI-O ...
	I0603 12:11:32.838397   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 12:11:35.419197   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:11:35.432481   73662 kubeadm.go:591] duration metric: took 4m4.317900598s to restartPrimaryControlPlane
	W0603 12:11:35.432560   73662 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0603 12:11:35.432591   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0603 12:11:35.895615   73662 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0603 12:11:35.910673   73662 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0603 12:11:35.921333   73662 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0603 12:11:35.931736   73662 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0603 12:11:35.931750   73662 kubeadm.go:156] found existing configuration files:
	
	I0603 12:11:35.931783   73662 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0603 12:11:35.940883   73662 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0603 12:11:35.940924   73662 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0603 12:11:35.950780   73662 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0603 12:11:35.959947   73662 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0603 12:11:35.959999   73662 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0603 12:11:35.969824   73662 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0603 12:11:35.979347   73662 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0603 12:11:35.979393   73662 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0603 12:11:35.988704   73662 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0603 12:11:35.997726   73662 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0603 12:11:35.997785   73662 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
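	The block above is the stale-config cleanup that runs before kubeadm init: each kubeconfig under /etc/kubernetes is grepped for the expected control-plane endpoint and removed when the check does not succeed (here every grep exits with status 2 because the files are already gone). A condensed shell sketch of that pattern (illustrative; the endpoint and file names are copied from the log):

	    endpoint="https://control-plane.minikube.internal:8443"
	    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
	      # Remove any kubeconfig that cannot be shown to reference the endpoint.
	      if ! sudo grep -q "${endpoint}" "/etc/kubernetes/${f}"; then
	        sudo rm -f "/etc/kubernetes/${f}"
	      fi
	    done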
	I0603 12:11:36.007165   73662 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0603 12:11:36.080667   73662 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0603 12:11:36.080794   73662 kubeadm.go:309] [preflight] Running pre-flight checks
	I0603 12:11:36.220642   73662 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0603 12:11:36.220814   73662 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0603 12:11:36.220967   73662 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0603 12:11:36.421569   73662 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0603 12:11:36.423141   73662 out.go:204]   - Generating certificates and keys ...
	I0603 12:11:36.423237   73662 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0603 12:11:36.423328   73662 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0603 12:11:36.423428   73662 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0603 12:11:36.423535   73662 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0603 12:11:36.423630   73662 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0603 12:11:36.423713   73662 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0603 12:11:36.423795   73662 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0603 12:11:36.423880   73662 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0603 12:11:36.423985   73662 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0603 12:11:36.424079   73662 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0603 12:11:36.424140   73662 kubeadm.go:309] [certs] Using the existing "sa" key
	I0603 12:11:36.424218   73662 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0603 12:11:36.576702   73662 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0603 12:11:36.704239   73662 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0603 12:11:36.981759   73662 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0603 12:11:37.031992   73662 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0603 12:11:37.052994   73662 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0603 12:11:37.054403   73662 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0603 12:11:37.054471   73662 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0603 12:11:37.196201   73662 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0603 12:11:36.168314   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:11:38.667358   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:11:37.198112   73662 out.go:204]   - Booting up control plane ...
	I0603 12:11:37.198252   73662 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0603 12:11:37.202872   73662 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0603 12:11:37.203965   73662 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0603 12:11:37.204734   73662 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0603 12:11:37.207204   73662 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0603 12:11:41.166509   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:11:43.168695   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:11:45.667381   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:11:48.167362   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:11:50.167570   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:11:52.668348   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:11:54.671004   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:11:54.178477   73179 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (32.102731378s)
	I0603 12:11:54.178554   73179 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0603 12:11:54.194599   73179 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0603 12:11:54.204770   73179 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0603 12:11:54.215290   73179 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0603 12:11:54.215315   73179 kubeadm.go:156] found existing configuration files:
	
	I0603 12:11:54.215355   73179 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0603 12:11:54.224420   73179 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0603 12:11:54.224478   73179 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0603 12:11:54.233706   73179 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0603 12:11:54.242358   73179 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0603 12:11:54.242399   73179 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0603 12:11:54.251531   73179 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0603 12:11:54.260911   73179 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0603 12:11:54.260950   73179 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0603 12:11:54.270219   73179 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0603 12:11:54.279141   73179 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0603 12:11:54.279194   73179 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0603 12:11:54.288343   73179 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0603 12:11:54.477591   73179 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0603 12:11:55.081260   73294 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (31.974475191s)
	I0603 12:11:55.081350   73294 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0603 12:11:55.098545   73294 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0603 12:11:55.109266   73294 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0603 12:11:55.118891   73294 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0603 12:11:55.118917   73294 kubeadm.go:156] found existing configuration files:
	
	I0603 12:11:55.118964   73294 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0603 12:11:55.128412   73294 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0603 12:11:55.128466   73294 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0603 12:11:55.137942   73294 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0603 12:11:55.146937   73294 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0603 12:11:55.146986   73294 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0603 12:11:55.156388   73294 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0603 12:11:55.167156   73294 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0603 12:11:55.167206   73294 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0603 12:11:55.176591   73294 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0603 12:11:55.185483   73294 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0603 12:11:55.185530   73294 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0603 12:11:55.195271   73294 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0603 12:11:55.251253   73294 kubeadm.go:309] [init] Using Kubernetes version: v1.30.1
	I0603 12:11:55.251344   73294 kubeadm.go:309] [preflight] Running pre-flight checks
	I0603 12:11:55.396358   73294 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0603 12:11:55.396519   73294 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0603 12:11:55.396681   73294 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0603 12:11:55.603493   73294 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0603 12:11:55.605797   73294 out.go:204]   - Generating certificates and keys ...
	I0603 12:11:55.605901   73294 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0603 12:11:55.605995   73294 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0603 12:11:55.606143   73294 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0603 12:11:55.606253   73294 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0603 12:11:55.606357   73294 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0603 12:11:55.606440   73294 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0603 12:11:55.606539   73294 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0603 12:11:55.606623   73294 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0603 12:11:55.606738   73294 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0603 12:11:55.606844   73294 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0603 12:11:55.606907   73294 kubeadm.go:309] [certs] Using the existing "sa" key
	I0603 12:11:55.606990   73294 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0603 12:11:55.749342   73294 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0603 12:11:55.918787   73294 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0603 12:11:56.058383   73294 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0603 12:11:56.306167   73294 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0603 12:11:56.365029   73294 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0603 12:11:56.365722   73294 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0603 12:11:56.368197   73294 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0603 12:11:56.369833   73294 out.go:204]   - Booting up control plane ...
	I0603 12:11:56.369950   73294 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0603 12:11:56.370081   73294 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0603 12:11:56.370175   73294 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0603 12:11:56.388879   73294 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0603 12:11:56.391420   73294 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0603 12:11:56.391490   73294 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0603 12:11:56.528206   73294 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0603 12:11:56.528341   73294 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0603 12:11:57.029861   73294 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 501.458956ms
	I0603 12:11:57.029944   73294 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0603 12:11:57.165921   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:11:59.168287   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:12:02.031156   73294 kubeadm.go:309] [api-check] The API server is healthy after 5.001477077s
	I0603 12:12:02.053326   73294 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0603 12:12:02.086541   73294 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0603 12:12:02.127446   73294 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0603 12:12:02.127715   73294 kubeadm.go:309] [mark-control-plane] Marking the node default-k8s-diff-port-196710 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0603 12:12:02.138683   73294 kubeadm.go:309] [bootstrap-token] Using token: 20dsgk.zbmo4be5tg5i1a9b
	I0603 12:12:02.140047   73294 out.go:204]   - Configuring RBAC rules ...
	I0603 12:12:02.140170   73294 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0603 12:12:02.149933   73294 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0603 12:12:02.160136   73294 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0603 12:12:02.168638   73294 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0603 12:12:02.173242   73294 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0603 12:12:02.177001   73294 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0603 12:12:02.438936   73294 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0603 12:12:02.892616   73294 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0603 12:12:03.438400   73294 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0603 12:12:03.440008   73294 kubeadm.go:309] 
	I0603 12:12:03.440093   73294 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0603 12:12:03.440101   73294 kubeadm.go:309] 
	I0603 12:12:03.440183   73294 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0603 12:12:03.440191   73294 kubeadm.go:309] 
	I0603 12:12:03.440217   73294 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0603 12:12:03.440308   73294 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0603 12:12:03.440416   73294 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0603 12:12:03.440438   73294 kubeadm.go:309] 
	I0603 12:12:03.440537   73294 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0603 12:12:03.440559   73294 kubeadm.go:309] 
	I0603 12:12:03.440649   73294 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0603 12:12:03.440659   73294 kubeadm.go:309] 
	I0603 12:12:03.440739   73294 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0603 12:12:03.440813   73294 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0603 12:12:03.440884   73294 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0603 12:12:03.440891   73294 kubeadm.go:309] 
	I0603 12:12:03.440959   73294 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0603 12:12:03.441059   73294 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0603 12:12:03.441077   73294 kubeadm.go:309] 
	I0603 12:12:03.441195   73294 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8444 --token 20dsgk.zbmo4be5tg5i1a9b \
	I0603 12:12:03.441383   73294 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:efe6cbdd58a590fb6c3b56a05a1648145bfb13b8a8cd2383ea34b710fa987e0b \
	I0603 12:12:03.441413   73294 kubeadm.go:309] 	--control-plane 
	I0603 12:12:03.441422   73294 kubeadm.go:309] 
	I0603 12:12:03.441561   73294 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0603 12:12:03.441580   73294 kubeadm.go:309] 
	I0603 12:12:03.441699   73294 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8444 --token 20dsgk.zbmo4be5tg5i1a9b \
	I0603 12:12:03.441848   73294 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:efe6cbdd58a590fb6c3b56a05a1648145bfb13b8a8cd2383ea34b710fa987e0b 
	I0603 12:12:03.442240   73294 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0603 12:12:03.442374   73294 cni.go:84] Creating CNI manager for ""
	I0603 12:12:03.442392   73294 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0603 12:12:03.444302   73294 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0603 12:12:03.644388   73179 kubeadm.go:309] [init] Using Kubernetes version: v1.30.1
	I0603 12:12:03.644489   73179 kubeadm.go:309] [preflight] Running pre-flight checks
	I0603 12:12:03.644596   73179 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0603 12:12:03.644742   73179 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0603 12:12:03.644874   73179 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0603 12:12:03.644953   73179 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0603 12:12:03.646392   73179 out.go:204]   - Generating certificates and keys ...
	I0603 12:12:03.646520   73179 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0603 12:12:03.646605   73179 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0603 12:12:03.646715   73179 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0603 12:12:03.646801   73179 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0603 12:12:03.646896   73179 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0603 12:12:03.646980   73179 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0603 12:12:03.647082   73179 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0603 12:12:03.647168   73179 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0603 12:12:03.647266   73179 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0603 12:12:03.647383   73179 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0603 12:12:03.647448   73179 kubeadm.go:309] [certs] Using the existing "sa" key
	I0603 12:12:03.647527   73179 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0603 12:12:03.647596   73179 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0603 12:12:03.647678   73179 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0603 12:12:03.647753   73179 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0603 12:12:03.647850   73179 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0603 12:12:03.647939   73179 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0603 12:12:03.648064   73179 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0603 12:12:03.648163   73179 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0603 12:12:03.649552   73179 out.go:204]   - Booting up control plane ...
	I0603 12:12:03.649660   73179 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0603 12:12:03.649772   73179 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0603 12:12:03.649884   73179 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0603 12:12:03.650017   73179 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0603 12:12:03.650139   73179 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0603 12:12:03.650211   73179 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0603 12:12:03.650408   73179 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0603 12:12:03.650515   73179 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0603 12:12:03.650591   73179 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 1.002065022s
	I0603 12:12:03.650698   73179 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0603 12:12:03.650789   73179 kubeadm.go:309] [api-check] The API server is healthy after 5.002076943s
	I0603 12:12:03.650915   73179 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0603 12:12:03.651093   73179 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0603 12:12:03.651168   73179 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0603 12:12:03.651414   73179 kubeadm.go:309] [mark-control-plane] Marking the node no-preload-602118 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0603 12:12:03.651488   73179 kubeadm.go:309] [bootstrap-token] Using token: shx5vv.etzadsstlalifeo7
	I0603 12:12:03.652942   73179 out.go:204]   - Configuring RBAC rules ...
	I0603 12:12:03.653061   73179 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0603 12:12:03.653174   73179 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0603 12:12:03.653347   73179 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0603 12:12:03.653531   73179 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0603 12:12:03.653674   73179 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0603 12:12:03.653781   73179 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0603 12:12:03.653925   73179 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0603 12:12:03.653965   73179 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0603 12:12:03.654004   73179 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0603 12:12:03.654010   73179 kubeadm.go:309] 
	I0603 12:12:03.654057   73179 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0603 12:12:03.654063   73179 kubeadm.go:309] 
	I0603 12:12:03.654125   73179 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0603 12:12:03.654131   73179 kubeadm.go:309] 
	I0603 12:12:03.654151   73179 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0603 12:12:03.654199   73179 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0603 12:12:03.654242   73179 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0603 12:12:03.654250   73179 kubeadm.go:309] 
	I0603 12:12:03.654300   73179 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0603 12:12:03.654306   73179 kubeadm.go:309] 
	I0603 12:12:03.654350   73179 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0603 12:12:03.654356   73179 kubeadm.go:309] 
	I0603 12:12:03.654397   73179 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0603 12:12:03.654467   73179 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0603 12:12:03.654524   73179 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0603 12:12:03.654530   73179 kubeadm.go:309] 
	I0603 12:12:03.654595   73179 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0603 12:12:03.654658   73179 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0603 12:12:03.654664   73179 kubeadm.go:309] 
	I0603 12:12:03.654729   73179 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token shx5vv.etzadsstlalifeo7 \
	I0603 12:12:03.654845   73179 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:efe6cbdd58a590fb6c3b56a05a1648145bfb13b8a8cd2383ea34b710fa987e0b \
	I0603 12:12:03.654880   73179 kubeadm.go:309] 	--control-plane 
	I0603 12:12:03.654886   73179 kubeadm.go:309] 
	I0603 12:12:03.655004   73179 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0603 12:12:03.655019   73179 kubeadm.go:309] 
	I0603 12:12:03.655117   73179 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token shx5vv.etzadsstlalifeo7 \
	I0603 12:12:03.655267   73179 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:efe6cbdd58a590fb6c3b56a05a1648145bfb13b8a8cd2383ea34b710fa987e0b 
	I0603 12:12:03.655306   73179 cni.go:84] Creating CNI manager for ""
	I0603 12:12:03.655316   73179 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0603 12:12:03.656746   73179 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0603 12:12:03.445612   73294 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0603 12:12:03.459114   73294 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0603 12:12:03.479003   73294 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0603 12:12:03.479128   73294 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:12:03.479139   73294 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-196710 minikube.k8s.io/updated_at=2024_06_03T12_12_03_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=599070631c2216ebc936292d491e4fe10e15b9d8 minikube.k8s.io/name=default-k8s-diff-port-196710 minikube.k8s.io/primary=true
	I0603 12:12:03.506970   73294 ops.go:34] apiserver oom_adj: -16
	I0603 12:12:03.684097   73294 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:12:04.185124   73294 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:12:01.667542   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:12:03.669066   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:12:03.657886   73179 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0603 12:12:03.672430   73179 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0603 12:12:03.693536   73179 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0603 12:12:03.693627   73179 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:12:03.693658   73179 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-602118 minikube.k8s.io/updated_at=2024_06_03T12_12_03_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=599070631c2216ebc936292d491e4fe10e15b9d8 minikube.k8s.io/name=no-preload-602118 minikube.k8s.io/primary=true
	I0603 12:12:03.730215   73179 ops.go:34] apiserver oom_adj: -16
	I0603 12:12:03.897726   73179 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:12:04.398585   73179 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:12:04.898543   73179 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:12:04.684589   73294 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:12:05.184999   73294 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:12:05.685081   73294 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:12:06.185212   73294 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:12:06.684565   73294 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:12:07.184862   73294 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:12:07.684542   73294 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:12:08.184516   73294 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:12:08.684333   73294 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:12:09.184426   73294 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:12:06.166490   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:12:08.167169   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:12:08.661107   72964 pod_ready.go:81] duration metric: took 4m0.000791246s for pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace to be "Ready" ...
	E0603 12:12:08.661143   72964 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace to be "Ready" (will not retry!)
	I0603 12:12:08.661161   72964 pod_ready.go:38] duration metric: took 4m12.610770004s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0603 12:12:08.661187   72964 kubeadm.go:591] duration metric: took 4m20.419490743s to restartPrimaryControlPlane
	W0603 12:12:08.661235   72964 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0603 12:12:08.661255   72964 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0603 12:12:05.398640   73179 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:12:05.898522   73179 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:12:06.397948   73179 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:12:06.897958   73179 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:12:07.397912   73179 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:12:07.898059   73179 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:12:08.398372   73179 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:12:08.897877   73179 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:12:09.397861   73179 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:12:09.898541   73179 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:12:09.684787   73294 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:12:10.184277   73294 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:12:10.684146   73294 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:12:11.184402   73294 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:12:11.684199   73294 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:12:12.184770   73294 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:12:12.684964   73294 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:12:13.184228   73294 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:12:13.684160   73294 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:12:14.184443   73294 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:12:10.398126   73179 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:12:10.898790   73179 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:12:11.398275   73179 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:12:11.897874   73179 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:12:12.398040   73179 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:12:12.898813   73179 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:12:13.398175   73179 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:12:13.897789   73179 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:12:14.398202   73179 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:12:14.898444   73179 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:12:15.398430   73179 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:12:15.897913   73179 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:12:15.999563   73179 kubeadm.go:1107] duration metric: took 12.305979901s to wait for elevateKubeSystemPrivileges
	W0603 12:12:15.999608   73179 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0603 12:12:15.999618   73179 kubeadm.go:393] duration metric: took 5m16.666049314s to StartCluster
	I0603 12:12:15.999646   73179 settings.go:142] acquiring lock: {Name:mkda1bdbbfe91266270f1d999e6d56fc2830d6f8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 12:12:15.999745   73179 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19008-7755/kubeconfig
	I0603 12:12:16.002178   73179 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19008-7755/kubeconfig: {Name:mk2f45bf5da44c5570e14124ae482cac309d97b6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 12:12:16.002496   73179 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.50.245 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0603 12:12:16.003826   73179 out.go:177] * Verifying Kubernetes components...
	I0603 12:12:16.002629   73179 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0603 12:12:16.002754   73179 config.go:182] Loaded profile config "no-preload-602118": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0603 12:12:16.005034   73179 addons.go:69] Setting storage-provisioner=true in profile "no-preload-602118"
	I0603 12:12:16.005049   73179 addons.go:69] Setting metrics-server=true in profile "no-preload-602118"
	I0603 12:12:16.005048   73179 addons.go:69] Setting default-storageclass=true in profile "no-preload-602118"
	I0603 12:12:16.005080   73179 addons.go:234] Setting addon metrics-server=true in "no-preload-602118"
	W0603 12:12:16.005095   73179 addons.go:243] addon metrics-server should already be in state true
	I0603 12:12:16.005095   73179 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-602118"
	I0603 12:12:16.005121   73179 host.go:66] Checking if "no-preload-602118" exists ...
	I0603 12:12:16.005082   73179 addons.go:234] Setting addon storage-provisioner=true in "no-preload-602118"
	W0603 12:12:16.005147   73179 addons.go:243] addon storage-provisioner should already be in state true
	I0603 12:12:16.005184   73179 host.go:66] Checking if "no-preload-602118" exists ...
	I0603 12:12:16.005039   73179 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0603 12:12:16.005558   73179 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 12:12:16.005568   73179 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 12:12:16.005562   73179 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 12:12:16.005594   73179 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 12:12:16.005613   73179 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 12:12:16.005592   73179 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 12:12:16.025576   73179 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37907
	I0603 12:12:16.025614   73179 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33735
	I0603 12:12:16.025580   73179 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34403
	I0603 12:12:16.026031   73179 main.go:141] libmachine: () Calling .GetVersion
	I0603 12:12:16.026071   73179 main.go:141] libmachine: () Calling .GetVersion
	I0603 12:12:16.026136   73179 main.go:141] libmachine: () Calling .GetVersion
	I0603 12:12:16.026534   73179 main.go:141] libmachine: Using API Version  1
	I0603 12:12:16.026549   73179 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 12:12:16.026534   73179 main.go:141] libmachine: Using API Version  1
	I0603 12:12:16.026662   73179 main.go:141] libmachine: Using API Version  1
	I0603 12:12:16.026762   73179 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 12:12:16.026781   73179 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 12:12:16.026868   73179 main.go:141] libmachine: () Calling .GetMachineName
	I0603 12:12:16.027104   73179 main.go:141] libmachine: () Calling .GetMachineName
	I0603 12:12:16.027174   73179 main.go:141] libmachine: () Calling .GetMachineName
	I0603 12:12:16.027270   73179 main.go:141] libmachine: (no-preload-602118) Calling .GetState
	I0603 12:12:16.027448   73179 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 12:12:16.027481   73179 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 12:12:16.027667   73179 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 12:12:16.027693   73179 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 12:12:16.031436   73179 addons.go:234] Setting addon default-storageclass=true in "no-preload-602118"
	W0603 12:12:16.031458   73179 addons.go:243] addon default-storageclass should already be in state true
	I0603 12:12:16.031487   73179 host.go:66] Checking if "no-preload-602118" exists ...
	I0603 12:12:16.031838   73179 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 12:12:16.031870   73179 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 12:12:16.043477   73179 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43369
	I0603 12:12:16.043659   73179 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38809
	I0603 12:12:16.044102   73179 main.go:141] libmachine: () Calling .GetVersion
	I0603 12:12:16.044124   73179 main.go:141] libmachine: () Calling .GetVersion
	I0603 12:12:16.044746   73179 main.go:141] libmachine: Using API Version  1
	I0603 12:12:16.044763   73179 main.go:141] libmachine: Using API Version  1
	I0603 12:12:16.044767   73179 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 12:12:16.044779   73179 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 12:12:16.045175   73179 main.go:141] libmachine: () Calling .GetMachineName
	I0603 12:12:16.045364   73179 main.go:141] libmachine: (no-preload-602118) Calling .GetState
	I0603 12:12:16.045406   73179 main.go:141] libmachine: () Calling .GetMachineName
	I0603 12:12:16.045571   73179 main.go:141] libmachine: (no-preload-602118) Calling .GetState
	I0603 12:12:16.047312   73179 main.go:141] libmachine: (no-preload-602118) Calling .DriverName
	I0603 12:12:16.047741   73179 main.go:141] libmachine: (no-preload-602118) Calling .DriverName
	I0603 12:12:16.049538   73179 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0603 12:12:16.048146   73179 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35375
	I0603 12:12:16.050862   73179 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0603 12:12:16.050892   73179 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0603 12:12:16.050897   73179 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0603 12:12:16.050908   73179 main.go:141] libmachine: (no-preload-602118) Calling .GetSSHHostname
	I0603 12:12:14.684713   73294 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:12:15.184206   73294 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:12:15.684798   73294 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:12:16.184405   73294 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:12:16.684720   73294 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:12:16.818407   73294 kubeadm.go:1107] duration metric: took 13.339334124s to wait for elevateKubeSystemPrivileges
	W0603 12:12:16.818450   73294 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0603 12:12:16.818460   73294 kubeadm.go:393] duration metric: took 5m7.432855804s to StartCluster
	I0603 12:12:16.818480   73294 settings.go:142] acquiring lock: {Name:mkda1bdbbfe91266270f1d999e6d56fc2830d6f8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 12:12:16.818573   73294 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19008-7755/kubeconfig
	I0603 12:12:16.821192   73294 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19008-7755/kubeconfig: {Name:mk2f45bf5da44c5570e14124ae482cac309d97b6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 12:12:16.821483   73294 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.61.60 Port:8444 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0603 12:12:16.823082   73294 out.go:177] * Verifying Kubernetes components...
	I0603 12:12:16.821572   73294 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0603 12:12:16.821670   73294 config.go:182] Loaded profile config "default-k8s-diff-port-196710": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0603 12:12:16.824703   73294 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0603 12:12:16.824719   73294 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-196710"
	I0603 12:12:16.824760   73294 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-196710"
	I0603 12:12:16.824710   73294 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-196710"
	W0603 12:12:16.824772   73294 addons.go:243] addon metrics-server should already be in state true
	I0603 12:12:16.824795   73294 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-196710"
	I0603 12:12:16.824802   73294 host.go:66] Checking if "default-k8s-diff-port-196710" exists ...
	W0603 12:12:16.824808   73294 addons.go:243] addon storage-provisioner should already be in state true
	I0603 12:12:16.824723   73294 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-196710"
	I0603 12:12:16.824843   73294 host.go:66] Checking if "default-k8s-diff-port-196710" exists ...
	I0603 12:12:16.824851   73294 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-196710"
	I0603 12:12:16.825222   73294 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 12:12:16.825241   73294 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 12:12:16.825250   73294 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 12:12:16.825264   73294 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 12:12:16.825228   73294 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 12:12:16.825354   73294 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 12:12:16.843187   73294 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41289
	I0603 12:12:16.843659   73294 main.go:141] libmachine: () Calling .GetVersion
	I0603 12:12:16.844379   73294 main.go:141] libmachine: Using API Version  1
	I0603 12:12:16.844407   73294 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 12:12:16.844784   73294 main.go:141] libmachine: () Calling .GetMachineName
	I0603 12:12:16.845314   73294 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 12:12:16.845353   73294 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 12:12:16.845975   73294 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46095
	I0603 12:12:16.846379   73294 main.go:141] libmachine: () Calling .GetVersion
	I0603 12:12:16.846856   73294 main.go:141] libmachine: Using API Version  1
	I0603 12:12:16.846875   73294 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 12:12:16.847307   73294 main.go:141] libmachine: () Calling .GetMachineName
	I0603 12:12:16.847921   73294 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 12:12:16.847944   73294 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 12:12:16.848622   73294 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45613
	I0603 12:12:16.849007   73294 main.go:141] libmachine: () Calling .GetVersion
	I0603 12:12:16.849505   73294 main.go:141] libmachine: Using API Version  1
	I0603 12:12:16.849527   73294 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 12:12:16.849888   73294 main.go:141] libmachine: () Calling .GetMachineName
	I0603 12:12:16.850120   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .GetState
	I0603 12:12:16.853711   73294 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-196710"
	W0603 12:12:16.853732   73294 addons.go:243] addon default-storageclass should already be in state true
	I0603 12:12:16.853758   73294 host.go:66] Checking if "default-k8s-diff-port-196710" exists ...
	I0603 12:12:16.854106   73294 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 12:12:16.854143   73294 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 12:12:16.874485   73294 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41485
	I0603 12:12:16.874543   73294 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40823
	I0603 12:12:16.875013   73294 main.go:141] libmachine: () Calling .GetVersion
	I0603 12:12:16.875431   73294 main.go:141] libmachine: () Calling .GetVersion
	I0603 12:12:16.875601   73294 main.go:141] libmachine: Using API Version  1
	I0603 12:12:16.875619   73294 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 12:12:16.875983   73294 main.go:141] libmachine: () Calling .GetMachineName
	I0603 12:12:16.875970   73294 main.go:141] libmachine: Using API Version  1
	I0603 12:12:16.876141   73294 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 12:12:16.876153   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .GetState
	I0603 12:12:16.876623   73294 main.go:141] libmachine: () Calling .GetMachineName
	I0603 12:12:16.877005   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .GetState
	I0603 12:12:16.878149   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .DriverName
	I0603 12:12:16.879857   73294 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0603 12:12:16.881339   73294 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0603 12:12:16.881357   73294 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0603 12:12:16.881384   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .GetSSHHostname
	I0603 12:12:16.883128   73294 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42307
	I0603 12:12:16.883690   73294 main.go:141] libmachine: () Calling .GetVersion
	I0603 12:12:16.883973   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .DriverName
	I0603 12:12:16.884247   73294 main.go:141] libmachine: Using API Version  1
	I0603 12:12:16.884263   73294 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 12:12:16.885697   73294 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0603 12:12:16.052190   73179 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0603 12:12:16.052208   73179 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0603 12:12:16.052226   73179 main.go:141] libmachine: (no-preload-602118) Calling .GetSSHHostname
	I0603 12:12:16.051450   73179 main.go:141] libmachine: () Calling .GetVersion
	I0603 12:12:16.053253   73179 main.go:141] libmachine: Using API Version  1
	I0603 12:12:16.053274   73179 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 12:12:16.053684   73179 main.go:141] libmachine: () Calling .GetMachineName
	I0603 12:12:16.054284   73179 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 12:12:16.054309   73179 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 12:12:16.054504   73179 main.go:141] libmachine: (no-preload-602118) DBG | domain no-preload-602118 has defined MAC address 52:54:00:ac:6c:91 in network mk-no-preload-602118
	I0603 12:12:16.054885   73179 main.go:141] libmachine: (no-preload-602118) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:6c:91", ip: ""} in network mk-no-preload-602118: {Iface:virbr2 ExpiryTime:2024-06-03 13:06:33 +0000 UTC Type:0 Mac:52:54:00:ac:6c:91 Iaid: IPaddr:192.168.50.245 Prefix:24 Hostname:no-preload-602118 Clientid:01:52:54:00:ac:6c:91}
	I0603 12:12:16.054916   73179 main.go:141] libmachine: (no-preload-602118) DBG | domain no-preload-602118 has defined IP address 192.168.50.245 and MAC address 52:54:00:ac:6c:91 in network mk-no-preload-602118
	I0603 12:12:16.055640   73179 main.go:141] libmachine: (no-preload-602118) Calling .GetSSHPort
	I0603 12:12:16.055804   73179 main.go:141] libmachine: (no-preload-602118) Calling .GetSSHKeyPath
	I0603 12:12:16.055873   73179 main.go:141] libmachine: (no-preload-602118) DBG | domain no-preload-602118 has defined MAC address 52:54:00:ac:6c:91 in network mk-no-preload-602118
	I0603 12:12:16.055952   73179 main.go:141] libmachine: (no-preload-602118) Calling .GetSSHUsername
	I0603 12:12:16.056079   73179 sshutil.go:53] new ssh client: &{IP:192.168.50.245 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19008-7755/.minikube/machines/no-preload-602118/id_rsa Username:docker}
	I0603 12:12:16.056405   73179 main.go:141] libmachine: (no-preload-602118) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:6c:91", ip: ""} in network mk-no-preload-602118: {Iface:virbr2 ExpiryTime:2024-06-03 13:06:33 +0000 UTC Type:0 Mac:52:54:00:ac:6c:91 Iaid: IPaddr:192.168.50.245 Prefix:24 Hostname:no-preload-602118 Clientid:01:52:54:00:ac:6c:91}
	I0603 12:12:16.056431   73179 main.go:141] libmachine: (no-preload-602118) DBG | domain no-preload-602118 has defined IP address 192.168.50.245 and MAC address 52:54:00:ac:6c:91 in network mk-no-preload-602118
	I0603 12:12:16.056465   73179 main.go:141] libmachine: (no-preload-602118) Calling .GetSSHPort
	I0603 12:12:16.056633   73179 main.go:141] libmachine: (no-preload-602118) Calling .GetSSHKeyPath
	I0603 12:12:16.056879   73179 main.go:141] libmachine: (no-preload-602118) Calling .GetSSHUsername
	I0603 12:12:16.057006   73179 sshutil.go:53] new ssh client: &{IP:192.168.50.245 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19008-7755/.minikube/machines/no-preload-602118/id_rsa Username:docker}
	I0603 12:12:16.072215   73179 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39537
	I0603 12:12:16.072581   73179 main.go:141] libmachine: () Calling .GetVersion
	I0603 12:12:16.072913   73179 main.go:141] libmachine: Using API Version  1
	I0603 12:12:16.072924   73179 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 12:12:16.073189   73179 main.go:141] libmachine: () Calling .GetMachineName
	I0603 12:12:16.073304   73179 main.go:141] libmachine: (no-preload-602118) Calling .GetState
	I0603 12:12:16.074771   73179 main.go:141] libmachine: (no-preload-602118) Calling .DriverName
	I0603 12:12:16.074941   73179 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0603 12:12:16.074953   73179 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0603 12:12:16.074964   73179 main.go:141] libmachine: (no-preload-602118) Calling .GetSSHHostname
	I0603 12:12:16.077122   73179 main.go:141] libmachine: (no-preload-602118) DBG | domain no-preload-602118 has defined MAC address 52:54:00:ac:6c:91 in network mk-no-preload-602118
	I0603 12:12:16.077439   73179 main.go:141] libmachine: (no-preload-602118) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:6c:91", ip: ""} in network mk-no-preload-602118: {Iface:virbr2 ExpiryTime:2024-06-03 13:06:33 +0000 UTC Type:0 Mac:52:54:00:ac:6c:91 Iaid: IPaddr:192.168.50.245 Prefix:24 Hostname:no-preload-602118 Clientid:01:52:54:00:ac:6c:91}
	I0603 12:12:16.077456   73179 main.go:141] libmachine: (no-preload-602118) DBG | domain no-preload-602118 has defined IP address 192.168.50.245 and MAC address 52:54:00:ac:6c:91 in network mk-no-preload-602118
	I0603 12:12:16.077666   73179 main.go:141] libmachine: (no-preload-602118) Calling .GetSSHPort
	I0603 12:12:16.077790   73179 main.go:141] libmachine: (no-preload-602118) Calling .GetSSHKeyPath
	I0603 12:12:16.077893   73179 main.go:141] libmachine: (no-preload-602118) Calling .GetSSHUsername
	I0603 12:12:16.078025   73179 sshutil.go:53] new ssh client: &{IP:192.168.50.245 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19008-7755/.minikube/machines/no-preload-602118/id_rsa Username:docker}
	I0603 12:12:16.204391   73179 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0603 12:12:16.224077   73179 node_ready.go:35] waiting up to 6m0s for node "no-preload-602118" to be "Ready" ...
	I0603 12:12:16.234147   73179 node_ready.go:49] node "no-preload-602118" has status "Ready":"True"
	I0603 12:12:16.234165   73179 node_ready.go:38] duration metric: took 10.052016ms for node "no-preload-602118" to be "Ready" ...
	I0603 12:12:16.234174   73179 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0603 12:12:16.239106   73179 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-602118" in "kube-system" namespace to be "Ready" ...
	I0603 12:12:16.245931   73179 pod_ready.go:92] pod "etcd-no-preload-602118" in "kube-system" namespace has status "Ready":"True"
	I0603 12:12:16.245951   73179 pod_ready.go:81] duration metric: took 6.818123ms for pod "etcd-no-preload-602118" in "kube-system" namespace to be "Ready" ...
	I0603 12:12:16.245959   73179 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-602118" in "kube-system" namespace to be "Ready" ...
	I0603 12:12:16.251349   73179 pod_ready.go:92] pod "kube-apiserver-no-preload-602118" in "kube-system" namespace has status "Ready":"True"
	I0603 12:12:16.251368   73179 pod_ready.go:81] duration metric: took 5.403445ms for pod "kube-apiserver-no-preload-602118" in "kube-system" namespace to be "Ready" ...
	I0603 12:12:16.251379   73179 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-602118" in "kube-system" namespace to be "Ready" ...
	I0603 12:12:16.259769   73179 pod_ready.go:92] pod "kube-controller-manager-no-preload-602118" in "kube-system" namespace has status "Ready":"True"
	I0603 12:12:16.259787   73179 pod_ready.go:81] duration metric: took 8.400968ms for pod "kube-controller-manager-no-preload-602118" in "kube-system" namespace to be "Ready" ...
	I0603 12:12:16.259797   73179 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-602118" in "kube-system" namespace to be "Ready" ...
	I0603 12:12:16.271311   73179 pod_ready.go:92] pod "kube-scheduler-no-preload-602118" in "kube-system" namespace has status "Ready":"True"
	I0603 12:12:16.271335   73179 pod_ready.go:81] duration metric: took 11.529418ms for pod "kube-scheduler-no-preload-602118" in "kube-system" namespace to be "Ready" ...
	I0603 12:12:16.271344   73179 pod_ready.go:38] duration metric: took 37.160711ms for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0603 12:12:16.271361   73179 api_server.go:52] waiting for apiserver process to appear ...
	I0603 12:12:16.271414   73179 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:12:16.299864   73179 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0603 12:12:16.312742   73179 api_server.go:72] duration metric: took 310.202333ms to wait for apiserver process to appear ...
	I0603 12:12:16.312769   73179 api_server.go:88] waiting for apiserver healthz status ...
	I0603 12:12:16.312789   73179 api_server.go:253] Checking apiserver healthz at https://192.168.50.245:8443/healthz ...
	I0603 12:12:16.332856   73179 api_server.go:279] https://192.168.50.245:8443/healthz returned 200:
	ok
	I0603 12:12:16.334897   73179 api_server.go:141] control plane version: v1.30.1
	I0603 12:12:16.334922   73179 api_server.go:131] duration metric: took 22.144726ms to wait for apiserver health ...
	I0603 12:12:16.334932   73179 system_pods.go:43] waiting for kube-system pods to appear ...
	I0603 12:12:16.354509   73179 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0603 12:12:16.377512   73179 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0603 12:12:16.377540   73179 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0603 12:12:16.428770   73179 system_pods.go:59] 4 kube-system pods found
	I0603 12:12:16.428807   73179 system_pods.go:61] "etcd-no-preload-602118" [d66d136a-b6c8-411c-b820-3e80f773accf] Running
	I0603 12:12:16.428815   73179 system_pods.go:61] "kube-apiserver-no-preload-602118" [299b92ab-ea99-45f5-b55d-b66a06daeaeb] Running
	I0603 12:12:16.428820   73179 system_pods.go:61] "kube-controller-manager-no-preload-602118" [16fd06ad-ff2d-4392-b453-9a8ed782b581] Running
	I0603 12:12:16.428825   73179 system_pods.go:61] "kube-scheduler-no-preload-602118" [71ebde21-9840-43f5-b0a3-424e1fd2000e] Running
	I0603 12:12:16.428833   73179 system_pods.go:74] duration metric: took 93.893548ms to wait for pod list to return data ...
	I0603 12:12:16.428841   73179 default_sa.go:34] waiting for default service account to be created ...
	I0603 12:12:16.438619   73179 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0603 12:12:16.438645   73179 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0603 12:12:16.495189   73179 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0603 12:12:16.495218   73179 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0603 12:12:16.543072   73179 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0603 12:12:16.666123   73179 default_sa.go:45] found service account: "default"
	I0603 12:12:16.666154   73179 default_sa.go:55] duration metric: took 237.305488ms for default service account to be created ...
	I0603 12:12:16.666163   73179 system_pods.go:116] waiting for k8s-apps to be running ...
	I0603 12:12:16.860342   73179 system_pods.go:86] 7 kube-system pods found
	I0603 12:12:16.860387   73179 system_pods.go:89] "coredns-7db6d8ff4d-5gmj5" [474da426-9414-4a30-8b19-14e555e192de] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0603 12:12:16.860401   73179 system_pods.go:89] "coredns-7db6d8ff4d-dwptw" [7a0437fe-8e83-4acc-a92a-af29bf06db93] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0603 12:12:16.860410   73179 system_pods.go:89] "etcd-no-preload-602118" [d66d136a-b6c8-411c-b820-3e80f773accf] Running
	I0603 12:12:16.860419   73179 system_pods.go:89] "kube-apiserver-no-preload-602118" [299b92ab-ea99-45f5-b55d-b66a06daeaeb] Running
	I0603 12:12:16.860427   73179 system_pods.go:89] "kube-controller-manager-no-preload-602118" [16fd06ad-ff2d-4392-b453-9a8ed782b581] Running
	I0603 12:12:16.860436   73179 system_pods.go:89] "kube-proxy-tfxkl" [d6502635-478f-443c-8186-ab0616fcf4ac] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0603 12:12:16.860443   73179 system_pods.go:89] "kube-scheduler-no-preload-602118" [71ebde21-9840-43f5-b0a3-424e1fd2000e] Running
	I0603 12:12:16.860466   73179 retry.go:31] will retry after 306.693518ms: missing components: kube-dns, kube-proxy
	I0603 12:12:17.184783   73179 system_pods.go:86] 7 kube-system pods found
	I0603 12:12:17.184828   73179 system_pods.go:89] "coredns-7db6d8ff4d-5gmj5" [474da426-9414-4a30-8b19-14e555e192de] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0603 12:12:17.184840   73179 system_pods.go:89] "coredns-7db6d8ff4d-dwptw" [7a0437fe-8e83-4acc-a92a-af29bf06db93] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0603 12:12:17.184852   73179 system_pods.go:89] "etcd-no-preload-602118" [d66d136a-b6c8-411c-b820-3e80f773accf] Running
	I0603 12:12:17.184860   73179 system_pods.go:89] "kube-apiserver-no-preload-602118" [299b92ab-ea99-45f5-b55d-b66a06daeaeb] Running
	I0603 12:12:17.184868   73179 system_pods.go:89] "kube-controller-manager-no-preload-602118" [16fd06ad-ff2d-4392-b453-9a8ed782b581] Running
	I0603 12:12:17.184880   73179 system_pods.go:89] "kube-proxy-tfxkl" [d6502635-478f-443c-8186-ab0616fcf4ac] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0603 12:12:17.184891   73179 system_pods.go:89] "kube-scheduler-no-preload-602118" [71ebde21-9840-43f5-b0a3-424e1fd2000e] Running
	I0603 12:12:17.184916   73179 retry.go:31] will retry after 329.094905ms: missing components: kube-dns, kube-proxy
	I0603 12:12:17.415182   73179 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.060631588s)
	I0603 12:12:17.415242   73179 main.go:141] libmachine: Making call to close driver server
	I0603 12:12:17.415255   73179 main.go:141] libmachine: (no-preload-602118) Calling .Close
	I0603 12:12:17.415284   73179 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.115379891s)
	I0603 12:12:17.415326   73179 main.go:141] libmachine: Making call to close driver server
	I0603 12:12:17.415336   73179 main.go:141] libmachine: (no-preload-602118) Calling .Close
	I0603 12:12:17.415714   73179 main.go:141] libmachine: (no-preload-602118) DBG | Closing plugin on server side
	I0603 12:12:17.415719   73179 main.go:141] libmachine: (no-preload-602118) DBG | Closing plugin on server side
	I0603 12:12:17.415725   73179 main.go:141] libmachine: Successfully made call to close driver server
	I0603 12:12:17.415745   73179 main.go:141] libmachine: Making call to close connection to plugin binary
	I0603 12:12:17.415751   73179 main.go:141] libmachine: Successfully made call to close driver server
	I0603 12:12:17.415779   73179 main.go:141] libmachine: Making call to close connection to plugin binary
	I0603 12:12:17.415793   73179 main.go:141] libmachine: Making call to close driver server
	I0603 12:12:17.415804   73179 main.go:141] libmachine: (no-preload-602118) Calling .Close
	I0603 12:12:17.415753   73179 main.go:141] libmachine: Making call to close driver server
	I0603 12:12:17.415859   73179 main.go:141] libmachine: (no-preload-602118) Calling .Close
	I0603 12:12:17.416049   73179 main.go:141] libmachine: Successfully made call to close driver server
	I0603 12:12:17.416063   73179 main.go:141] libmachine: Making call to close connection to plugin binary
	I0603 12:12:17.417320   73179 main.go:141] libmachine: (no-preload-602118) DBG | Closing plugin on server side
	I0603 12:12:17.417366   73179 main.go:141] libmachine: Successfully made call to close driver server
	I0603 12:12:17.417391   73179 main.go:141] libmachine: Making call to close connection to plugin binary
	I0603 12:12:17.434040   73179 main.go:141] libmachine: Making call to close driver server
	I0603 12:12:17.434072   73179 main.go:141] libmachine: (no-preload-602118) Calling .Close
	I0603 12:12:17.434410   73179 main.go:141] libmachine: (no-preload-602118) DBG | Closing plugin on server side
	I0603 12:12:17.434434   73179 main.go:141] libmachine: Successfully made call to close driver server
	I0603 12:12:17.434445   73179 main.go:141] libmachine: Making call to close connection to plugin binary
	I0603 12:12:17.527445   73179 system_pods.go:86] 8 kube-system pods found
	I0603 12:12:17.527486   73179 system_pods.go:89] "coredns-7db6d8ff4d-5gmj5" [474da426-9414-4a30-8b19-14e555e192de] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0603 12:12:17.527499   73179 system_pods.go:89] "coredns-7db6d8ff4d-dwptw" [7a0437fe-8e83-4acc-a92a-af29bf06db93] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0603 12:12:17.527508   73179 system_pods.go:89] "etcd-no-preload-602118" [d66d136a-b6c8-411c-b820-3e80f773accf] Running
	I0603 12:12:17.527516   73179 system_pods.go:89] "kube-apiserver-no-preload-602118" [299b92ab-ea99-45f5-b55d-b66a06daeaeb] Running
	I0603 12:12:17.527524   73179 system_pods.go:89] "kube-controller-manager-no-preload-602118" [16fd06ad-ff2d-4392-b453-9a8ed782b581] Running
	I0603 12:12:17.527533   73179 system_pods.go:89] "kube-proxy-tfxkl" [d6502635-478f-443c-8186-ab0616fcf4ac] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0603 12:12:17.527540   73179 system_pods.go:89] "kube-scheduler-no-preload-602118" [71ebde21-9840-43f5-b0a3-424e1fd2000e] Running
	I0603 12:12:17.527551   73179 system_pods.go:89] "storage-provisioner" [9d9e7c2b-91a9-4394-8a08-a2c076d4b42d] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0603 12:12:17.527591   73179 retry.go:31] will retry after 346.068859ms: missing components: kube-dns, kube-proxy
	I0603 12:12:17.908653   73179 system_pods.go:86] 9 kube-system pods found
	I0603 12:12:17.908695   73179 system_pods.go:89] "coredns-7db6d8ff4d-5gmj5" [474da426-9414-4a30-8b19-14e555e192de] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0603 12:12:17.908706   73179 system_pods.go:89] "coredns-7db6d8ff4d-dwptw" [7a0437fe-8e83-4acc-a92a-af29bf06db93] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0603 12:12:17.908713   73179 system_pods.go:89] "etcd-no-preload-602118" [d66d136a-b6c8-411c-b820-3e80f773accf] Running
	I0603 12:12:17.908721   73179 system_pods.go:89] "kube-apiserver-no-preload-602118" [299b92ab-ea99-45f5-b55d-b66a06daeaeb] Running
	I0603 12:12:17.908728   73179 system_pods.go:89] "kube-controller-manager-no-preload-602118" [16fd06ad-ff2d-4392-b453-9a8ed782b581] Running
	I0603 12:12:17.908736   73179 system_pods.go:89] "kube-proxy-tfxkl" [d6502635-478f-443c-8186-ab0616fcf4ac] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0603 12:12:17.908743   73179 system_pods.go:89] "kube-scheduler-no-preload-602118" [71ebde21-9840-43f5-b0a3-424e1fd2000e] Running
	I0603 12:12:17.908753   73179 system_pods.go:89] "metrics-server-569cc877fc-zpzbw" [b28cb265-532b-41ea-a242-001a85174a35] Pending
	I0603 12:12:17.908761   73179 system_pods.go:89] "storage-provisioner" [9d9e7c2b-91a9-4394-8a08-a2c076d4b42d] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0603 12:12:17.908779   73179 retry.go:31] will retry after 517.651766ms: missing components: kube-dns, kube-proxy
	I0603 12:12:18.135778   73179 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.592660253s)
	I0603 12:12:18.135904   73179 main.go:141] libmachine: Making call to close driver server
	I0603 12:12:18.135945   73179 main.go:141] libmachine: (no-preload-602118) Calling .Close
	I0603 12:12:18.137972   73179 main.go:141] libmachine: (no-preload-602118) DBG | Closing plugin on server side
	I0603 12:12:18.138016   73179 main.go:141] libmachine: Successfully made call to close driver server
	I0603 12:12:18.138040   73179 main.go:141] libmachine: Making call to close connection to plugin binary
	I0603 12:12:18.138060   73179 main.go:141] libmachine: Making call to close driver server
	I0603 12:12:18.138071   73179 main.go:141] libmachine: (no-preload-602118) Calling .Close
	I0603 12:12:18.138394   73179 main.go:141] libmachine: (no-preload-602118) DBG | Closing plugin on server side
	I0603 12:12:18.138435   73179 main.go:141] libmachine: Successfully made call to close driver server
	I0603 12:12:18.138452   73179 main.go:141] libmachine: Making call to close connection to plugin binary
	I0603 12:12:18.138467   73179 addons.go:475] Verifying addon metrics-server=true in "no-preload-602118"
	I0603 12:12:18.139950   73179 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0603 12:12:16.887014   73294 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0603 12:12:16.887031   73294 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0603 12:12:16.887059   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .GetSSHHostname
	I0603 12:12:16.884952   73294 main.go:141] libmachine: () Calling .GetMachineName
	I0603 12:12:16.885388   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | domain default-k8s-diff-port-196710 has defined MAC address 52:54:00:9c:61:49 in network mk-default-k8s-diff-port-196710
	I0603 12:12:16.887151   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:61:49", ip: ""} in network mk-default-k8s-diff-port-196710: {Iface:virbr4 ExpiryTime:2024-06-03 12:58:47 +0000 UTC Type:0 Mac:52:54:00:9c:61:49 Iaid: IPaddr:192.168.61.60 Prefix:24 Hostname:default-k8s-diff-port-196710 Clientid:01:52:54:00:9c:61:49}
	I0603 12:12:16.887173   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | domain default-k8s-diff-port-196710 has defined IP address 192.168.61.60 and MAC address 52:54:00:9c:61:49 in network mk-default-k8s-diff-port-196710
	I0603 12:12:16.887719   73294 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 12:12:16.887741   73294 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 12:12:16.887932   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .GetSSHPort
	I0603 12:12:16.888207   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .GetSSHKeyPath
	I0603 12:12:16.888429   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .GetSSHUsername
	I0603 12:12:16.889197   73294 sshutil.go:53] new ssh client: &{IP:192.168.61.60 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19008-7755/.minikube/machines/default-k8s-diff-port-196710/id_rsa Username:docker}
	I0603 12:12:16.891158   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | domain default-k8s-diff-port-196710 has defined MAC address 52:54:00:9c:61:49 in network mk-default-k8s-diff-port-196710
	I0603 12:12:16.891613   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:61:49", ip: ""} in network mk-default-k8s-diff-port-196710: {Iface:virbr4 ExpiryTime:2024-06-03 12:58:47 +0000 UTC Type:0 Mac:52:54:00:9c:61:49 Iaid: IPaddr:192.168.61.60 Prefix:24 Hostname:default-k8s-diff-port-196710 Clientid:01:52:54:00:9c:61:49}
	I0603 12:12:16.891639   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | domain default-k8s-diff-port-196710 has defined IP address 192.168.61.60 and MAC address 52:54:00:9c:61:49 in network mk-default-k8s-diff-port-196710
	I0603 12:12:16.891801   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .GetSSHPort
	I0603 12:12:16.891979   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .GetSSHKeyPath
	I0603 12:12:16.892107   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .GetSSHUsername
	I0603 12:12:16.892220   73294 sshutil.go:53] new ssh client: &{IP:192.168.61.60 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19008-7755/.minikube/machines/default-k8s-diff-port-196710/id_rsa Username:docker}
	I0603 12:12:16.909637   73294 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35155
	I0603 12:12:16.910191   73294 main.go:141] libmachine: () Calling .GetVersion
	I0603 12:12:16.910809   73294 main.go:141] libmachine: Using API Version  1
	I0603 12:12:16.910836   73294 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 12:12:16.911344   73294 main.go:141] libmachine: () Calling .GetMachineName
	I0603 12:12:16.911542   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .GetState
	I0603 12:12:16.913489   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .DriverName
	I0603 12:12:16.913704   73294 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0603 12:12:16.913718   73294 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0603 12:12:16.913735   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .GetSSHHostname
	I0603 12:12:16.917538   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | domain default-k8s-diff-port-196710 has defined MAC address 52:54:00:9c:61:49 in network mk-default-k8s-diff-port-196710
	I0603 12:12:16.917994   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:61:49", ip: ""} in network mk-default-k8s-diff-port-196710: {Iface:virbr4 ExpiryTime:2024-06-03 12:58:47 +0000 UTC Type:0 Mac:52:54:00:9c:61:49 Iaid: IPaddr:192.168.61.60 Prefix:24 Hostname:default-k8s-diff-port-196710 Clientid:01:52:54:00:9c:61:49}
	I0603 12:12:16.918020   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | domain default-k8s-diff-port-196710 has defined IP address 192.168.61.60 and MAC address 52:54:00:9c:61:49 in network mk-default-k8s-diff-port-196710
	I0603 12:12:16.918116   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .GetSSHPort
	I0603 12:12:16.918243   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .GetSSHKeyPath
	I0603 12:12:16.918349   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .GetSSHUsername
	I0603 12:12:16.918445   73294 sshutil.go:53] new ssh client: &{IP:192.168.61.60 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19008-7755/.minikube/machines/default-k8s-diff-port-196710/id_rsa Username:docker}
	I0603 12:12:17.046824   73294 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0603 12:12:17.064066   73294 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-196710" to be "Ready" ...
	I0603 12:12:17.084082   73294 node_ready.go:49] node "default-k8s-diff-port-196710" has status "Ready":"True"
	I0603 12:12:17.084108   73294 node_ready.go:38] duration metric: took 19.978467ms for node "default-k8s-diff-port-196710" to be "Ready" ...
	I0603 12:12:17.084116   73294 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0603 12:12:17.095774   73294 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-fvgqr" in "kube-system" namespace to be "Ready" ...
	I0603 12:12:17.168174   73294 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0603 12:12:17.168200   73294 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0603 12:12:17.200793   73294 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0603 12:12:17.203132   73294 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0603 12:12:17.245827   73294 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0603 12:12:17.245855   73294 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0603 12:12:17.310865   73294 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0603 12:12:17.310894   73294 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0603 12:12:17.449447   73294 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0603 12:12:18.385411   73294 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.184578024s)
	I0603 12:12:18.385465   73294 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.182295951s)
	I0603 12:12:18.385505   73294 main.go:141] libmachine: Making call to close driver server
	I0603 12:12:18.385520   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .Close
	I0603 12:12:18.385470   73294 main.go:141] libmachine: Making call to close driver server
	I0603 12:12:18.385562   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .Close
	I0603 12:12:18.385878   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | Closing plugin on server side
	I0603 12:12:18.385905   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | Closing plugin on server side
	I0603 12:12:18.385954   73294 main.go:141] libmachine: Successfully made call to close driver server
	I0603 12:12:18.385971   73294 main.go:141] libmachine: Making call to close connection to plugin binary
	I0603 12:12:18.385980   73294 main.go:141] libmachine: Making call to close driver server
	I0603 12:12:18.386009   73294 main.go:141] libmachine: Successfully made call to close driver server
	I0603 12:12:18.386026   73294 main.go:141] libmachine: Making call to close connection to plugin binary
	I0603 12:12:18.386035   73294 main.go:141] libmachine: Making call to close driver server
	I0603 12:12:18.386043   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .Close
	I0603 12:12:18.386094   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .Close
	I0603 12:12:18.386336   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | Closing plugin on server side
	I0603 12:12:18.386374   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | Closing plugin on server side
	I0603 12:12:18.386425   73294 main.go:141] libmachine: Successfully made call to close driver server
	I0603 12:12:18.386460   73294 main.go:141] libmachine: Making call to close connection to plugin binary
	I0603 12:12:18.387994   73294 main.go:141] libmachine: Successfully made call to close driver server
	I0603 12:12:18.388012   73294 main.go:141] libmachine: Making call to close connection to plugin binary
	I0603 12:12:18.423011   73294 main.go:141] libmachine: Making call to close driver server
	I0603 12:12:18.423058   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .Close
	I0603 12:12:18.423412   73294 main.go:141] libmachine: Successfully made call to close driver server
	I0603 12:12:18.423433   73294 main.go:141] libmachine: Making call to close connection to plugin binary
	I0603 12:12:18.423473   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | Closing plugin on server side
	I0603 12:12:18.697521   73294 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.24802602s)
	I0603 12:12:18.697564   73294 main.go:141] libmachine: Making call to close driver server
	I0603 12:12:18.697575   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .Close
	I0603 12:12:18.697960   73294 main.go:141] libmachine: Successfully made call to close driver server
	I0603 12:12:18.697982   73294 main.go:141] libmachine: Making call to close connection to plugin binary
	I0603 12:12:18.698043   73294 main.go:141] libmachine: Making call to close driver server
	I0603 12:12:18.698061   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .Close
	I0603 12:12:18.698312   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | Closing plugin on server side
	I0603 12:12:18.698391   73294 main.go:141] libmachine: Successfully made call to close driver server
	I0603 12:12:18.698408   73294 main.go:141] libmachine: Making call to close connection to plugin binary
	I0603 12:12:18.698425   73294 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-196710"
	I0603 12:12:18.700421   73294 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0603 12:12:18.698680   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | Closing plugin on server side
	I0603 12:12:18.701834   73294 addons.go:510] duration metric: took 1.880261237s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0603 12:12:19.125961   73294 pod_ready.go:92] pod "coredns-7db6d8ff4d-fvgqr" in "kube-system" namespace has status "Ready":"True"
	I0603 12:12:19.125993   73294 pod_ready.go:81] duration metric: took 2.03019096s for pod "coredns-7db6d8ff4d-fvgqr" in "kube-system" namespace to be "Ready" ...
	I0603 12:12:19.126008   73294 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-196710" in "kube-system" namespace to be "Ready" ...
	I0603 12:12:19.142691   73294 pod_ready.go:92] pod "etcd-default-k8s-diff-port-196710" in "kube-system" namespace has status "Ready":"True"
	I0603 12:12:19.142711   73294 pod_ready.go:81] duration metric: took 16.694827ms for pod "etcd-default-k8s-diff-port-196710" in "kube-system" namespace to be "Ready" ...
	I0603 12:12:19.142721   73294 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-196710" in "kube-system" namespace to be "Ready" ...
	I0603 12:12:19.166768   73294 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-196710" in "kube-system" namespace has status "Ready":"True"
	I0603 12:12:19.166793   73294 pod_ready.go:81] duration metric: took 24.064572ms for pod "kube-apiserver-default-k8s-diff-port-196710" in "kube-system" namespace to be "Ready" ...
	I0603 12:12:19.166806   73294 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-196710" in "kube-system" namespace to be "Ready" ...
	I0603 12:12:19.177902   73294 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-196710" in "kube-system" namespace has status "Ready":"True"
	I0603 12:12:19.177917   73294 pod_ready.go:81] duration metric: took 11.103943ms for pod "kube-controller-manager-default-k8s-diff-port-196710" in "kube-system" namespace to be "Ready" ...
	I0603 12:12:19.177926   73294 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-j4gzg" in "kube-system" namespace to be "Ready" ...
	I0603 12:12:19.191217   73294 pod_ready.go:92] pod "kube-proxy-j4gzg" in "kube-system" namespace has status "Ready":"True"
	I0603 12:12:19.191242   73294 pod_ready.go:81] duration metric: took 13.306857ms for pod "kube-proxy-j4gzg" in "kube-system" namespace to be "Ready" ...
	I0603 12:12:19.191255   73294 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-196710" in "kube-system" namespace to be "Ready" ...
	I0603 12:12:19.499792   73294 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-196710" in "kube-system" namespace has status "Ready":"True"
	I0603 12:12:19.499815   73294 pod_ready.go:81] duration metric: took 308.552918ms for pod "kube-scheduler-default-k8s-diff-port-196710" in "kube-system" namespace to be "Ready" ...
	I0603 12:12:19.499823   73294 pod_ready.go:38] duration metric: took 2.415698619s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0603 12:12:19.499837   73294 api_server.go:52] waiting for apiserver process to appear ...
	I0603 12:12:19.499881   73294 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:12:19.516655   73294 api_server.go:72] duration metric: took 2.695130179s to wait for apiserver process to appear ...
	I0603 12:12:19.516686   73294 api_server.go:88] waiting for apiserver healthz status ...
	I0603 12:12:19.516707   73294 api_server.go:253] Checking apiserver healthz at https://192.168.61.60:8444/healthz ...
	I0603 12:12:19.521037   73294 api_server.go:279] https://192.168.61.60:8444/healthz returned 200:
	ok
	I0603 12:12:19.521988   73294 api_server.go:141] control plane version: v1.30.1
	I0603 12:12:19.522006   73294 api_server.go:131] duration metric: took 5.313149ms to wait for apiserver health ...
	I0603 12:12:19.522015   73294 system_pods.go:43] waiting for kube-system pods to appear ...
	I0603 12:12:18.141333   73179 addons.go:510] duration metric: took 2.138708426s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0603 12:12:18.445201   73179 system_pods.go:86] 9 kube-system pods found
	I0603 12:12:18.445243   73179 system_pods.go:89] "coredns-7db6d8ff4d-5gmj5" [474da426-9414-4a30-8b19-14e555e192de] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0603 12:12:18.445255   73179 system_pods.go:89] "coredns-7db6d8ff4d-dwptw" [7a0437fe-8e83-4acc-a92a-af29bf06db93] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0603 12:12:18.445266   73179 system_pods.go:89] "etcd-no-preload-602118" [d66d136a-b6c8-411c-b820-3e80f773accf] Running
	I0603 12:12:18.445275   73179 system_pods.go:89] "kube-apiserver-no-preload-602118" [299b92ab-ea99-45f5-b55d-b66a06daeaeb] Running
	I0603 12:12:18.445282   73179 system_pods.go:89] "kube-controller-manager-no-preload-602118" [16fd06ad-ff2d-4392-b453-9a8ed782b581] Running
	I0603 12:12:18.445289   73179 system_pods.go:89] "kube-proxy-tfxkl" [d6502635-478f-443c-8186-ab0616fcf4ac] Running
	I0603 12:12:18.445296   73179 system_pods.go:89] "kube-scheduler-no-preload-602118" [71ebde21-9840-43f5-b0a3-424e1fd2000e] Running
	I0603 12:12:18.445309   73179 system_pods.go:89] "metrics-server-569cc877fc-zpzbw" [b28cb265-532b-41ea-a242-001a85174a35] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0603 12:12:18.445318   73179 system_pods.go:89] "storage-provisioner" [9d9e7c2b-91a9-4394-8a08-a2c076d4b42d] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0603 12:12:18.445347   73179 retry.go:31] will retry after 493.36636ms: missing components: kube-dns
	I0603 12:12:18.950981   73179 system_pods.go:86] 9 kube-system pods found
	I0603 12:12:18.951013   73179 system_pods.go:89] "coredns-7db6d8ff4d-5gmj5" [474da426-9414-4a30-8b19-14e555e192de] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0603 12:12:18.951022   73179 system_pods.go:89] "coredns-7db6d8ff4d-dwptw" [7a0437fe-8e83-4acc-a92a-af29bf06db93] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0603 12:12:18.951028   73179 system_pods.go:89] "etcd-no-preload-602118" [d66d136a-b6c8-411c-b820-3e80f773accf] Running
	I0603 12:12:18.951033   73179 system_pods.go:89] "kube-apiserver-no-preload-602118" [299b92ab-ea99-45f5-b55d-b66a06daeaeb] Running
	I0603 12:12:18.951071   73179 system_pods.go:89] "kube-controller-manager-no-preload-602118" [16fd06ad-ff2d-4392-b453-9a8ed782b581] Running
	I0603 12:12:18.951079   73179 system_pods.go:89] "kube-proxy-tfxkl" [d6502635-478f-443c-8186-ab0616fcf4ac] Running
	I0603 12:12:18.951085   73179 system_pods.go:89] "kube-scheduler-no-preload-602118" [71ebde21-9840-43f5-b0a3-424e1fd2000e] Running
	I0603 12:12:18.951093   73179 system_pods.go:89] "metrics-server-569cc877fc-zpzbw" [b28cb265-532b-41ea-a242-001a85174a35] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0603 12:12:18.951106   73179 system_pods.go:89] "storage-provisioner" [9d9e7c2b-91a9-4394-8a08-a2c076d4b42d] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0603 12:12:18.951123   73179 retry.go:31] will retry after 784.878622ms: missing components: kube-dns
	I0603 12:12:19.743268   73179 system_pods.go:86] 9 kube-system pods found
	I0603 12:12:19.743302   73179 system_pods.go:89] "coredns-7db6d8ff4d-5gmj5" [474da426-9414-4a30-8b19-14e555e192de] Running
	I0603 12:12:19.743310   73179 system_pods.go:89] "coredns-7db6d8ff4d-dwptw" [7a0437fe-8e83-4acc-a92a-af29bf06db93] Running
	I0603 12:12:19.743323   73179 system_pods.go:89] "etcd-no-preload-602118" [d66d136a-b6c8-411c-b820-3e80f773accf] Running
	I0603 12:12:19.743330   73179 system_pods.go:89] "kube-apiserver-no-preload-602118" [299b92ab-ea99-45f5-b55d-b66a06daeaeb] Running
	I0603 12:12:19.743337   73179 system_pods.go:89] "kube-controller-manager-no-preload-602118" [16fd06ad-ff2d-4392-b453-9a8ed782b581] Running
	I0603 12:12:19.743343   73179 system_pods.go:89] "kube-proxy-tfxkl" [d6502635-478f-443c-8186-ab0616fcf4ac] Running
	I0603 12:12:19.743349   73179 system_pods.go:89] "kube-scheduler-no-preload-602118" [71ebde21-9840-43f5-b0a3-424e1fd2000e] Running
	I0603 12:12:19.743365   73179 system_pods.go:89] "metrics-server-569cc877fc-zpzbw" [b28cb265-532b-41ea-a242-001a85174a35] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0603 12:12:19.743376   73179 system_pods.go:89] "storage-provisioner" [9d9e7c2b-91a9-4394-8a08-a2c076d4b42d] Running
	I0603 12:12:19.743388   73179 system_pods.go:126] duration metric: took 3.077217613s to wait for k8s-apps to be running ...
	I0603 12:12:19.743399   73179 system_svc.go:44] waiting for kubelet service to be running ....
	I0603 12:12:19.743440   73179 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0603 12:12:19.759127   73179 system_svc.go:56] duration metric: took 15.720008ms WaitForService to wait for kubelet
	I0603 12:12:19.759152   73179 kubeadm.go:576] duration metric: took 3.756617312s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0603 12:12:19.759177   73179 node_conditions.go:102] verifying NodePressure condition ...
	I0603 12:12:19.761858   73179 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0603 12:12:19.761876   73179 node_conditions.go:123] node cpu capacity is 2
	I0603 12:12:19.761885   73179 node_conditions.go:105] duration metric: took 2.703518ms to run NodePressure ...
	I0603 12:12:19.761894   73179 start.go:240] waiting for startup goroutines ...
	I0603 12:12:19.761901   73179 start.go:245] waiting for cluster config update ...
	I0603 12:12:19.761910   73179 start.go:254] writing updated cluster config ...
	I0603 12:12:19.762150   73179 ssh_runner.go:195] Run: rm -f paused
	I0603 12:12:19.808158   73179 start.go:600] kubectl: 1.30.1, cluster: 1.30.1 (minor skew: 0)
	I0603 12:12:19.810271   73179 out.go:177] * Done! kubectl is now configured to use "no-preload-602118" cluster and "default" namespace by default
	I0603 12:12:17.205144   73662 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0603 12:12:17.215420   73662 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0603 12:12:17.215687   73662 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0603 12:12:19.703391   73294 system_pods.go:59] 9 kube-system pods found
	I0603 12:12:19.703422   73294 system_pods.go:61] "coredns-7db6d8ff4d-fvgqr" [c908a302-8c40-46aa-9e98-92baa297a7ed] Running
	I0603 12:12:19.703428   73294 system_pods.go:61] "coredns-7db6d8ff4d-pbndv" [91d83622-9883-407e-b0f4-eb2d18cd2483] Running
	I0603 12:12:19.703434   73294 system_pods.go:61] "etcd-default-k8s-diff-port-196710" [29eaf8a6-0759-4f27-9b6e-55beeba8f955] Running
	I0603 12:12:19.703439   73294 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-196710" [7bfa3724-0917-40be-89fe-fe5c67f4fd45] Running
	I0603 12:12:19.703444   73294 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-196710" [50e0af3b-d47c-4113-be78-9cf18060b505] Running
	I0603 12:12:19.703448   73294 system_pods.go:61] "kube-proxy-j4gzg" [2e603f37-93e0-429d-97b8-e9b997c26101] Running
	I0603 12:12:19.703453   73294 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-196710" [e50842a0-71ed-4c9e-811e-9b6bda31dfd0] Running
	I0603 12:12:19.703461   73294 system_pods.go:61] "metrics-server-569cc877fc-lxvbp" [36c7a3c5-6b64-42d2-93ed-2f6cf8234a7f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0603 12:12:19.703469   73294 system_pods.go:61] "storage-provisioner" [8bc80b69-d8f9-4d6a-9bf4-4a41d875a735] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0603 12:12:19.703483   73294 system_pods.go:74] duration metric: took 181.460766ms to wait for pod list to return data ...
	I0603 12:12:19.703494   73294 default_sa.go:34] waiting for default service account to be created ...
	I0603 12:12:19.899579   73294 default_sa.go:45] found service account: "default"
	I0603 12:12:19.899607   73294 default_sa.go:55] duration metric: took 196.097132ms for default service account to be created ...
	I0603 12:12:19.899617   73294 system_pods.go:116] waiting for k8s-apps to be running ...
	I0603 12:12:20.104618   73294 system_pods.go:86] 9 kube-system pods found
	I0603 12:12:20.104648   73294 system_pods.go:89] "coredns-7db6d8ff4d-fvgqr" [c908a302-8c40-46aa-9e98-92baa297a7ed] Running
	I0603 12:12:20.104656   73294 system_pods.go:89] "coredns-7db6d8ff4d-pbndv" [91d83622-9883-407e-b0f4-eb2d18cd2483] Running
	I0603 12:12:20.104662   73294 system_pods.go:89] "etcd-default-k8s-diff-port-196710" [29eaf8a6-0759-4f27-9b6e-55beeba8f955] Running
	I0603 12:12:20.104669   73294 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-196710" [7bfa3724-0917-40be-89fe-fe5c67f4fd45] Running
	I0603 12:12:20.104676   73294 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-196710" [50e0af3b-d47c-4113-be78-9cf18060b505] Running
	I0603 12:12:20.104682   73294 system_pods.go:89] "kube-proxy-j4gzg" [2e603f37-93e0-429d-97b8-e9b997c26101] Running
	I0603 12:12:20.104690   73294 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-196710" [e50842a0-71ed-4c9e-811e-9b6bda31dfd0] Running
	I0603 12:12:20.104704   73294 system_pods.go:89] "metrics-server-569cc877fc-lxvbp" [36c7a3c5-6b64-42d2-93ed-2f6cf8234a7f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0603 12:12:20.104716   73294 system_pods.go:89] "storage-provisioner" [8bc80b69-d8f9-4d6a-9bf4-4a41d875a735] Running
	I0603 12:12:20.104733   73294 system_pods.go:126] duration metric: took 205.107424ms to wait for k8s-apps to be running ...
	I0603 12:12:20.104746   73294 system_svc.go:44] waiting for kubelet service to be running ....
	I0603 12:12:20.104794   73294 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0603 12:12:20.120345   73294 system_svc.go:56] duration metric: took 15.592236ms WaitForService to wait for kubelet
	I0603 12:12:20.120374   73294 kubeadm.go:576] duration metric: took 3.298854629s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0603 12:12:20.120398   73294 node_conditions.go:102] verifying NodePressure condition ...
	I0603 12:12:20.299539   73294 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0603 12:12:20.299565   73294 node_conditions.go:123] node cpu capacity is 2
	I0603 12:12:20.299579   73294 node_conditions.go:105] duration metric: took 179.17433ms to run NodePressure ...
	I0603 12:12:20.299593   73294 start.go:240] waiting for startup goroutines ...
	I0603 12:12:20.299602   73294 start.go:245] waiting for cluster config update ...
	I0603 12:12:20.299613   73294 start.go:254] writing updated cluster config ...
	I0603 12:12:20.299896   73294 ssh_runner.go:195] Run: rm -f paused
	I0603 12:12:20.351961   73294 start.go:600] kubectl: 1.30.1, cluster: 1.30.1 (minor skew: 0)
	I0603 12:12:20.354040   73294 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-196710" cluster and "default" namespace by default
	I0603 12:12:22.215864   73662 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0603 12:12:22.216210   73662 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0603 12:12:32.215921   73662 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0603 12:12:32.216130   73662 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0603 12:12:40.270116   72964 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (31.60882832s)
	I0603 12:12:40.270214   72964 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0603 12:12:40.288350   72964 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0603 12:12:40.298477   72964 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0603 12:12:40.308047   72964 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0603 12:12:40.308063   72964 kubeadm.go:156] found existing configuration files:
	
	I0603 12:12:40.308095   72964 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0603 12:12:40.317173   72964 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0603 12:12:40.317221   72964 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0603 12:12:40.326431   72964 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0603 12:12:40.335372   72964 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0603 12:12:40.335421   72964 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0603 12:12:40.345520   72964 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0603 12:12:40.354836   72964 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0603 12:12:40.354881   72964 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0603 12:12:40.364667   72964 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0603 12:12:40.375714   72964 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0603 12:12:40.375768   72964 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0603 12:12:40.387249   72964 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0603 12:12:40.587569   72964 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0603 12:12:49.228482   72964 kubeadm.go:309] [init] Using Kubernetes version: v1.30.1
	I0603 12:12:49.228556   72964 kubeadm.go:309] [preflight] Running pre-flight checks
	I0603 12:12:49.228654   72964 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0603 12:12:49.228817   72964 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0603 12:12:49.228965   72964 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0603 12:12:49.229056   72964 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0603 12:12:49.230616   72964 out.go:204]   - Generating certificates and keys ...
	I0603 12:12:49.230705   72964 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0603 12:12:49.230778   72964 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0603 12:12:49.230884   72964 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0603 12:12:49.230943   72964 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0603 12:12:49.231001   72964 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0603 12:12:49.231071   72964 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0603 12:12:49.231302   72964 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0603 12:12:49.231400   72964 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0603 12:12:49.231487   72964 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0603 12:12:49.231595   72964 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0603 12:12:49.231645   72964 kubeadm.go:309] [certs] Using the existing "sa" key
	I0603 12:12:49.231731   72964 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0603 12:12:49.231842   72964 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0603 12:12:49.231930   72964 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0603 12:12:49.232009   72964 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0603 12:12:49.232105   72964 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0603 12:12:49.232188   72964 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0603 12:12:49.232305   72964 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0603 12:12:49.232392   72964 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0603 12:12:49.234435   72964 out.go:204]   - Booting up control plane ...
	I0603 12:12:49.234513   72964 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0603 12:12:49.234592   72964 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0603 12:12:49.234680   72964 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0603 12:12:49.234803   72964 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0603 12:12:49.234936   72964 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0603 12:12:49.235006   72964 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0603 12:12:49.235182   72964 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0603 12:12:49.235283   72964 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0603 12:12:49.235361   72964 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 502.484209ms
	I0603 12:12:49.235428   72964 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0603 12:12:49.235507   72964 kubeadm.go:309] [api-check] The API server is healthy after 5.001411221s
	I0603 12:12:49.235621   72964 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0603 12:12:49.235730   72964 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0603 12:12:49.235778   72964 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0603 12:12:49.235941   72964 kubeadm.go:309] [mark-control-plane] Marking the node embed-certs-725022 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0603 12:12:49.236026   72964 kubeadm.go:309] [bootstrap-token] Using token: 0tfgxu.iied44jkidnxw3ef
	I0603 12:12:49.237200   72964 out.go:204]   - Configuring RBAC rules ...
	I0603 12:12:49.237290   72964 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0603 12:12:49.237369   72964 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0603 12:12:49.237497   72964 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0603 12:12:49.237671   72964 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0603 12:12:49.237782   72964 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0603 12:12:49.237879   72964 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0603 12:12:49.238007   72964 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0603 12:12:49.238092   72964 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0603 12:12:49.238156   72964 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0603 12:12:49.238166   72964 kubeadm.go:309] 
	I0603 12:12:49.238242   72964 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0603 12:12:49.238250   72964 kubeadm.go:309] 
	I0603 12:12:49.238351   72964 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0603 12:12:49.238359   72964 kubeadm.go:309] 
	I0603 12:12:49.238392   72964 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0603 12:12:49.238472   72964 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0603 12:12:49.238549   72964 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0603 12:12:49.238558   72964 kubeadm.go:309] 
	I0603 12:12:49.238641   72964 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0603 12:12:49.238649   72964 kubeadm.go:309] 
	I0603 12:12:49.238722   72964 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0603 12:12:49.238737   72964 kubeadm.go:309] 
	I0603 12:12:49.238810   72964 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0603 12:12:49.238874   72964 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0603 12:12:49.238931   72964 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0603 12:12:49.238937   72964 kubeadm.go:309] 
	I0603 12:12:49.239007   72964 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0603 12:12:49.239103   72964 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0603 12:12:49.239112   72964 kubeadm.go:309] 
	I0603 12:12:49.239179   72964 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token 0tfgxu.iied44jkidnxw3ef \
	I0603 12:12:49.239305   72964 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:efe6cbdd58a590fb6c3b56a05a1648145bfb13b8a8cd2383ea34b710fa987e0b \
	I0603 12:12:49.239341   72964 kubeadm.go:309] 	--control-plane 
	I0603 12:12:49.239355   72964 kubeadm.go:309] 
	I0603 12:12:49.239457   72964 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0603 12:12:49.239466   72964 kubeadm.go:309] 
	I0603 12:12:49.239574   72964 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token 0tfgxu.iied44jkidnxw3ef \
	I0603 12:12:49.239677   72964 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:efe6cbdd58a590fb6c3b56a05a1648145bfb13b8a8cd2383ea34b710fa987e0b 
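	As a hedged aside: the --discovery-token-ca-cert-hash printed above is the SHA-256 of the cluster CA public key, and it can be recomputed on the control-plane host with the standard openssl pipeline from the kubeadm documentation. The ca.crt path here is an assumption based on the certificateDir "/var/lib/minikube/certs" logged earlier, not something shown in this output:
	
	openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
	  | openssl rsa -pubin -outform der 2>/dev/null \
	  | openssl dgst -sha256 -hex | sed 's/^.* //'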
	I0603 12:12:49.239688   72964 cni.go:84] Creating CNI manager for ""
	I0603 12:12:49.239694   72964 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0603 12:12:49.241096   72964 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0603 12:12:49.242158   72964 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0603 12:12:49.253535   72964 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
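	The line above is minikube writing its 496-byte bridge CNI config to /etc/cni/net.d/1-k8s.conflist over SSH. An illustrative manual sanity check of that step (run on the node, outside the test harness) would simply be:
	
	sudo ls -la /etc/cni/net.d/
	sudo cat /etc/cni/net.d/1-k8s.conflist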
	I0603 12:12:49.272592   72964 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0603 12:12:49.272655   72964 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:12:49.272699   72964 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-725022 minikube.k8s.io/updated_at=2024_06_03T12_12_49_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=599070631c2216ebc936292d491e4fe10e15b9d8 minikube.k8s.io/name=embed-certs-725022 minikube.k8s.io/primary=true
	I0603 12:12:49.301181   72964 ops.go:34] apiserver oom_adj: -16
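	The oom_adj value of -16 reported here comes from the probe two lines earlier; repeated by hand it is just the following (assuming a single kube-apiserver process on the node), and a negative value means the apiserver is much less likely to be chosen by the OOM killer:
	
	sudo cat /proc/$(pgrep kube-apiserver)/oom_adj    # prints -16 on this node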
	I0603 12:12:49.473931   72964 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:12:49.974552   72964 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:12:50.474107   72964 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:12:50.974508   72964 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:12:51.474202   72964 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:12:51.974903   72964 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:12:52.474722   72964 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:12:52.973981   72964 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:12:53.473979   72964 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:12:53.974372   72964 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:12:54.474057   72964 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:12:52.215684   73662 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0603 12:12:52.215951   73662 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0603 12:12:54.974299   72964 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:12:55.474704   72964 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:12:55.973998   72964 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:12:56.474351   72964 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:12:56.974942   72964 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:12:57.474651   72964 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:12:57.974575   72964 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:12:58.474054   72964 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:12:58.974928   72964 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:12:59.474724   72964 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:12:59.974538   72964 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:13:00.474341   72964 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:13:00.974134   72964 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:13:01.474970   72964 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:13:01.974549   72964 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:13:02.071778   72964 kubeadm.go:1107] duration metric: took 12.799179684s to wait for elevateKubeSystemPrivileges
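	The burst of identical "kubectl get sa default" calls above is minikube polling roughly twice a second until the default service account exists, which appears to be how it decides the elevateKubeSystemPrivileges step (12.8s here) is complete. A minimal bash sketch of the same wait, using only paths that appear in this log, would be:
	
	until sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default \
	      --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
	  sleep 0.5
	done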
	W0603 12:13:02.071819   72964 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0603 12:13:02.071826   72964 kubeadm.go:393] duration metric: took 5m13.883244188s to StartCluster
	I0603 12:13:02.071847   72964 settings.go:142] acquiring lock: {Name:mkda1bdbbfe91266270f1d999e6d56fc2830d6f8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 12:13:02.071926   72964 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19008-7755/kubeconfig
	I0603 12:13:02.073849   72964 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19008-7755/kubeconfig: {Name:mk2f45bf5da44c5570e14124ae482cac309d97b6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 12:13:02.074094   72964 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.72.245 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0603 12:13:02.075473   72964 out.go:177] * Verifying Kubernetes components...
	I0603 12:13:02.074201   72964 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0603 12:13:02.074273   72964 config.go:182] Loaded profile config "embed-certs-725022": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0603 12:13:02.076687   72964 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0603 12:13:02.076702   72964 addons.go:69] Setting default-storageclass=true in profile "embed-certs-725022"
	I0603 12:13:02.076709   72964 addons.go:69] Setting metrics-server=true in profile "embed-certs-725022"
	I0603 12:13:02.076735   72964 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-725022"
	I0603 12:13:02.076739   72964 addons.go:234] Setting addon metrics-server=true in "embed-certs-725022"
	W0603 12:13:02.076747   72964 addons.go:243] addon metrics-server should already be in state true
	I0603 12:13:02.076779   72964 host.go:66] Checking if "embed-certs-725022" exists ...
	I0603 12:13:02.077065   72964 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 12:13:02.077105   72964 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 12:13:02.077123   72964 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 12:13:02.077144   72964 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 12:13:02.076690   72964 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-725022"
	I0603 12:13:02.077321   72964 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-725022"
	W0603 12:13:02.077330   72964 addons.go:243] addon storage-provisioner should already be in state true
	I0603 12:13:02.077353   72964 host.go:66] Checking if "embed-certs-725022" exists ...
	I0603 12:13:02.077701   72964 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 12:13:02.077727   72964 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 12:13:02.093285   72964 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38087
	I0603 12:13:02.093594   72964 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41067
	I0603 12:13:02.093714   72964 main.go:141] libmachine: () Calling .GetVersion
	I0603 12:13:02.094085   72964 main.go:141] libmachine: () Calling .GetVersion
	I0603 12:13:02.094294   72964 main.go:141] libmachine: Using API Version  1
	I0603 12:13:02.094315   72964 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 12:13:02.094587   72964 main.go:141] libmachine: Using API Version  1
	I0603 12:13:02.094609   72964 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 12:13:02.094689   72964 main.go:141] libmachine: () Calling .GetMachineName
	I0603 12:13:02.094950   72964 main.go:141] libmachine: () Calling .GetMachineName
	I0603 12:13:02.095244   72964 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 12:13:02.095268   72964 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 12:13:02.095454   72964 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 12:13:02.095491   72964 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 12:13:02.096441   72964 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39221
	I0603 12:13:02.097030   72964 main.go:141] libmachine: () Calling .GetVersion
	I0603 12:13:02.097568   72964 main.go:141] libmachine: Using API Version  1
	I0603 12:13:02.097590   72964 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 12:13:02.097931   72964 main.go:141] libmachine: () Calling .GetMachineName
	I0603 12:13:02.098114   72964 main.go:141] libmachine: (embed-certs-725022) Calling .GetState
	I0603 12:13:02.101980   72964 addons.go:234] Setting addon default-storageclass=true in "embed-certs-725022"
	W0603 12:13:02.102004   72964 addons.go:243] addon default-storageclass should already be in state true
	I0603 12:13:02.102030   72964 host.go:66] Checking if "embed-certs-725022" exists ...
	I0603 12:13:02.102405   72964 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 12:13:02.102443   72964 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 12:13:02.110825   72964 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44273
	I0603 12:13:02.111295   72964 main.go:141] libmachine: () Calling .GetVersion
	I0603 12:13:02.111721   72964 main.go:141] libmachine: Using API Version  1
	I0603 12:13:02.111743   72964 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 12:13:02.112109   72964 main.go:141] libmachine: () Calling .GetMachineName
	I0603 12:13:02.112287   72964 main.go:141] libmachine: (embed-certs-725022) Calling .GetState
	I0603 12:13:02.112969   72964 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46567
	I0603 12:13:02.113391   72964 main.go:141] libmachine: () Calling .GetVersion
	I0603 12:13:02.113883   72964 main.go:141] libmachine: Using API Version  1
	I0603 12:13:02.113898   72964 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 12:13:02.113960   72964 main.go:141] libmachine: (embed-certs-725022) Calling .DriverName
	I0603 12:13:02.115733   72964 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0603 12:13:02.114328   72964 main.go:141] libmachine: () Calling .GetMachineName
	I0603 12:13:02.116913   72964 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0603 12:13:02.116925   72964 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0603 12:13:02.116937   72964 main.go:141] libmachine: (embed-certs-725022) Calling .GetSSHHostname
	I0603 12:13:02.117042   72964 main.go:141] libmachine: (embed-certs-725022) Calling .GetState
	I0603 12:13:02.119310   72964 main.go:141] libmachine: (embed-certs-725022) Calling .DriverName
	I0603 12:13:02.119549   72964 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45585
	I0603 12:13:02.120720   72964 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0603 12:13:02.119998   72964 main.go:141] libmachine: () Calling .GetVersion
	I0603 12:13:02.120276   72964 main.go:141] libmachine: (embed-certs-725022) DBG | domain embed-certs-725022 has defined MAC address 52:54:00:ba:41:8c in network mk-embed-certs-725022
	I0603 12:13:02.122038   72964 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0603 12:13:02.122054   72964 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0603 12:13:02.122072   72964 main.go:141] libmachine: (embed-certs-725022) Calling .GetSSHHostname
	I0603 12:13:02.120815   72964 main.go:141] libmachine: (embed-certs-725022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:41:8c", ip: ""} in network mk-embed-certs-725022: {Iface:virbr3 ExpiryTime:2024-06-03 12:58:10 +0000 UTC Type:0 Mac:52:54:00:ba:41:8c Iaid: IPaddr:192.168.72.245 Prefix:24 Hostname:embed-certs-725022 Clientid:01:52:54:00:ba:41:8c}
	I0603 12:13:02.122134   72964 main.go:141] libmachine: (embed-certs-725022) DBG | domain embed-certs-725022 has defined IP address 192.168.72.245 and MAC address 52:54:00:ba:41:8c in network mk-embed-certs-725022
	I0603 12:13:02.120873   72964 main.go:141] libmachine: (embed-certs-725022) Calling .GetSSHPort
	I0603 12:13:02.121231   72964 main.go:141] libmachine: Using API Version  1
	I0603 12:13:02.122186   72964 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 12:13:02.122623   72964 main.go:141] libmachine: () Calling .GetMachineName
	I0603 12:13:02.122637   72964 main.go:141] libmachine: (embed-certs-725022) Calling .GetSSHKeyPath
	I0603 12:13:02.122823   72964 main.go:141] libmachine: (embed-certs-725022) Calling .GetSSHUsername
	I0603 12:13:02.123306   72964 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 12:13:02.123365   72964 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 12:13:02.123751   72964 sshutil.go:53] new ssh client: &{IP:192.168.72.245 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19008-7755/.minikube/machines/embed-certs-725022/id_rsa Username:docker}
	I0603 12:13:02.125086   72964 main.go:141] libmachine: (embed-certs-725022) DBG | domain embed-certs-725022 has defined MAC address 52:54:00:ba:41:8c in network mk-embed-certs-725022
	I0603 12:13:02.125450   72964 main.go:141] libmachine: (embed-certs-725022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:41:8c", ip: ""} in network mk-embed-certs-725022: {Iface:virbr3 ExpiryTime:2024-06-03 12:58:10 +0000 UTC Type:0 Mac:52:54:00:ba:41:8c Iaid: IPaddr:192.168.72.245 Prefix:24 Hostname:embed-certs-725022 Clientid:01:52:54:00:ba:41:8c}
	I0603 12:13:02.125474   72964 main.go:141] libmachine: (embed-certs-725022) DBG | domain embed-certs-725022 has defined IP address 192.168.72.245 and MAC address 52:54:00:ba:41:8c in network mk-embed-certs-725022
	I0603 12:13:02.125627   72964 main.go:141] libmachine: (embed-certs-725022) Calling .GetSSHPort
	I0603 12:13:02.125863   72964 main.go:141] libmachine: (embed-certs-725022) Calling .GetSSHKeyPath
	I0603 12:13:02.126050   72964 main.go:141] libmachine: (embed-certs-725022) Calling .GetSSHUsername
	I0603 12:13:02.126199   72964 sshutil.go:53] new ssh client: &{IP:192.168.72.245 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19008-7755/.minikube/machines/embed-certs-725022/id_rsa Username:docker}
	I0603 12:13:02.140680   72964 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38775
	I0603 12:13:02.141121   72964 main.go:141] libmachine: () Calling .GetVersion
	I0603 12:13:02.141624   72964 main.go:141] libmachine: Using API Version  1
	I0603 12:13:02.141649   72964 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 12:13:02.142002   72964 main.go:141] libmachine: () Calling .GetMachineName
	I0603 12:13:02.142377   72964 main.go:141] libmachine: (embed-certs-725022) Calling .GetState
	I0603 12:13:02.144249   72964 main.go:141] libmachine: (embed-certs-725022) Calling .DriverName
	I0603 12:13:02.144453   72964 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0603 12:13:02.144469   72964 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0603 12:13:02.144486   72964 main.go:141] libmachine: (embed-certs-725022) Calling .GetSSHHostname
	I0603 12:13:02.147627   72964 main.go:141] libmachine: (embed-certs-725022) DBG | domain embed-certs-725022 has defined MAC address 52:54:00:ba:41:8c in network mk-embed-certs-725022
	I0603 12:13:02.148109   72964 main.go:141] libmachine: (embed-certs-725022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:41:8c", ip: ""} in network mk-embed-certs-725022: {Iface:virbr3 ExpiryTime:2024-06-03 12:58:10 +0000 UTC Type:0 Mac:52:54:00:ba:41:8c Iaid: IPaddr:192.168.72.245 Prefix:24 Hostname:embed-certs-725022 Clientid:01:52:54:00:ba:41:8c}
	I0603 12:13:02.148129   72964 main.go:141] libmachine: (embed-certs-725022) DBG | domain embed-certs-725022 has defined IP address 192.168.72.245 and MAC address 52:54:00:ba:41:8c in network mk-embed-certs-725022
	I0603 12:13:02.148304   72964 main.go:141] libmachine: (embed-certs-725022) Calling .GetSSHPort
	I0603 12:13:02.148486   72964 main.go:141] libmachine: (embed-certs-725022) Calling .GetSSHKeyPath
	I0603 12:13:02.148604   72964 main.go:141] libmachine: (embed-certs-725022) Calling .GetSSHUsername
	I0603 12:13:02.148741   72964 sshutil.go:53] new ssh client: &{IP:192.168.72.245 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19008-7755/.minikube/machines/embed-certs-725022/id_rsa Username:docker}
	I0603 12:13:02.304095   72964 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0603 12:13:02.338638   72964 node_ready.go:35] waiting up to 6m0s for node "embed-certs-725022" to be "Ready" ...
	I0603 12:13:02.347843   72964 node_ready.go:49] node "embed-certs-725022" has status "Ready":"True"
	I0603 12:13:02.347872   72964 node_ready.go:38] duration metric: took 9.197667ms for node "embed-certs-725022" to be "Ready" ...
	I0603 12:13:02.347885   72964 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0603 12:13:02.353074   72964 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-4gbj2" in "kube-system" namespace to be "Ready" ...
	I0603 12:13:02.437841   72964 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0603 12:13:02.477856   72964 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0603 12:13:02.477876   72964 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0603 12:13:02.487138   72964 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0603 12:13:02.530568   72964 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0603 12:13:02.530591   72964 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0603 12:13:02.606906   72964 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0603 12:13:02.606933   72964 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0603 12:13:02.708268   72964 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0603 12:13:03.372809   72964 main.go:141] libmachine: Making call to close driver server
	I0603 12:13:03.372886   72964 main.go:141] libmachine: (embed-certs-725022) Calling .Close
	I0603 12:13:03.372924   72964 main.go:141] libmachine: Making call to close driver server
	I0603 12:13:03.372982   72964 main.go:141] libmachine: (embed-certs-725022) Calling .Close
	I0603 12:13:03.373369   72964 main.go:141] libmachine: Successfully made call to close driver server
	I0603 12:13:03.373457   72964 main.go:141] libmachine: Making call to close connection to plugin binary
	I0603 12:13:03.373472   72964 main.go:141] libmachine: Making call to close driver server
	I0603 12:13:03.373480   72964 main.go:141] libmachine: (embed-certs-725022) Calling .Close
	I0603 12:13:03.373412   72964 main.go:141] libmachine: Successfully made call to close driver server
	I0603 12:13:03.373510   72964 main.go:141] libmachine: Making call to close connection to plugin binary
	I0603 12:13:03.373522   72964 main.go:141] libmachine: Making call to close driver server
	I0603 12:13:03.373533   72964 main.go:141] libmachine: (embed-certs-725022) Calling .Close
	I0603 12:13:03.373417   72964 main.go:141] libmachine: (embed-certs-725022) DBG | Closing plugin on server side
	I0603 12:13:03.373431   72964 main.go:141] libmachine: (embed-certs-725022) DBG | Closing plugin on server side
	I0603 12:13:03.373858   72964 main.go:141] libmachine: Successfully made call to close driver server
	I0603 12:13:03.373873   72964 main.go:141] libmachine: Making call to close connection to plugin binary
	I0603 12:13:03.374065   72964 main.go:141] libmachine: (embed-certs-725022) DBG | Closing plugin on server side
	I0603 12:13:03.374087   72964 main.go:141] libmachine: Successfully made call to close driver server
	I0603 12:13:03.374093   72964 main.go:141] libmachine: Making call to close connection to plugin binary
	I0603 12:13:03.374168   72964 main.go:141] libmachine: (embed-certs-725022) DBG | Closing plugin on server side
	I0603 12:13:03.404799   72964 main.go:141] libmachine: Making call to close driver server
	I0603 12:13:03.404825   72964 main.go:141] libmachine: (embed-certs-725022) Calling .Close
	I0603 12:13:03.405101   72964 main.go:141] libmachine: (embed-certs-725022) DBG | Closing plugin on server side
	I0603 12:13:03.405101   72964 main.go:141] libmachine: Successfully made call to close driver server
	I0603 12:13:03.405125   72964 main.go:141] libmachine: Making call to close connection to plugin binary
	I0603 12:13:03.855630   72964 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.147319188s)
	I0603 12:13:03.855683   72964 main.go:141] libmachine: Making call to close driver server
	I0603 12:13:03.855700   72964 main.go:141] libmachine: (embed-certs-725022) Calling .Close
	I0603 12:13:03.856046   72964 main.go:141] libmachine: (embed-certs-725022) DBG | Closing plugin on server side
	I0603 12:13:03.856085   72964 main.go:141] libmachine: Successfully made call to close driver server
	I0603 12:13:03.856099   72964 main.go:141] libmachine: Making call to close connection to plugin binary
	I0603 12:13:03.856108   72964 main.go:141] libmachine: Making call to close driver server
	I0603 12:13:03.856119   72964 main.go:141] libmachine: (embed-certs-725022) Calling .Close
	I0603 12:13:03.856408   72964 main.go:141] libmachine: Successfully made call to close driver server
	I0603 12:13:03.856426   72964 main.go:141] libmachine: Making call to close connection to plugin binary
	I0603 12:13:03.856436   72964 addons.go:475] Verifying addon metrics-server=true in "embed-certs-725022"
	I0603 12:13:03.858229   72964 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0603 12:13:03.859384   72964 addons.go:510] duration metric: took 1.785186744s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
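	With storage-provisioner, default-storageclass and metrics-server enabled, a hedged way to verify the metrics-server addon outside the test harness would be the checks below. The APIService name is the conventional one registered by metrics-server, and the deployment name is inferred from the pod name later in this log; neither is printed verbatim here:
	
	kubectl --context embed-certs-725022 -n kube-system get deploy metrics-server
	kubectl --context embed-certs-725022 get apiservice v1beta1.metrics.k8s.io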
	I0603 12:13:04.360708   72964 pod_ready.go:102] pod "coredns-7db6d8ff4d-4gbj2" in "kube-system" namespace has status "Ready":"False"
	I0603 12:13:04.860041   72964 pod_ready.go:92] pod "coredns-7db6d8ff4d-4gbj2" in "kube-system" namespace has status "Ready":"True"
	I0603 12:13:04.860064   72964 pod_ready.go:81] duration metric: took 2.506957346s for pod "coredns-7db6d8ff4d-4gbj2" in "kube-system" namespace to be "Ready" ...
	I0603 12:13:04.860077   72964 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-x9fw5" in "kube-system" namespace to be "Ready" ...
	I0603 12:13:04.864947   72964 pod_ready.go:92] pod "coredns-7db6d8ff4d-x9fw5" in "kube-system" namespace has status "Ready":"True"
	I0603 12:13:04.864967   72964 pod_ready.go:81] duration metric: took 4.883476ms for pod "coredns-7db6d8ff4d-x9fw5" in "kube-system" namespace to be "Ready" ...
	I0603 12:13:04.864975   72964 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-725022" in "kube-system" namespace to be "Ready" ...
	I0603 12:13:04.869979   72964 pod_ready.go:92] pod "etcd-embed-certs-725022" in "kube-system" namespace has status "Ready":"True"
	I0603 12:13:04.870000   72964 pod_ready.go:81] duration metric: took 5.018776ms for pod "etcd-embed-certs-725022" in "kube-system" namespace to be "Ready" ...
	I0603 12:13:04.870012   72964 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-725022" in "kube-system" namespace to be "Ready" ...
	I0603 12:13:04.875292   72964 pod_ready.go:92] pod "kube-apiserver-embed-certs-725022" in "kube-system" namespace has status "Ready":"True"
	I0603 12:13:04.875309   72964 pod_ready.go:81] duration metric: took 5.289101ms for pod "kube-apiserver-embed-certs-725022" in "kube-system" namespace to be "Ready" ...
	I0603 12:13:04.875317   72964 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-725022" in "kube-system" namespace to be "Ready" ...
	I0603 12:13:04.883604   72964 pod_ready.go:92] pod "kube-controller-manager-embed-certs-725022" in "kube-system" namespace has status "Ready":"True"
	I0603 12:13:04.883619   72964 pod_ready.go:81] duration metric: took 8.297056ms for pod "kube-controller-manager-embed-certs-725022" in "kube-system" namespace to be "Ready" ...
	I0603 12:13:04.883627   72964 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-7qp6h" in "kube-system" namespace to be "Ready" ...
	I0603 12:13:05.257971   72964 pod_ready.go:92] pod "kube-proxy-7qp6h" in "kube-system" namespace has status "Ready":"True"
	I0603 12:13:05.257994   72964 pod_ready.go:81] duration metric: took 374.360354ms for pod "kube-proxy-7qp6h" in "kube-system" namespace to be "Ready" ...
	I0603 12:13:05.258003   72964 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-725022" in "kube-system" namespace to be "Ready" ...
	I0603 12:13:05.657811   72964 pod_ready.go:92] pod "kube-scheduler-embed-certs-725022" in "kube-system" namespace has status "Ready":"True"
	I0603 12:13:05.657838   72964 pod_ready.go:81] duration metric: took 399.828323ms for pod "kube-scheduler-embed-certs-725022" in "kube-system" namespace to be "Ready" ...
	I0603 12:13:05.657849   72964 pod_ready.go:38] duration metric: took 3.309954137s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0603 12:13:05.657866   72964 api_server.go:52] waiting for apiserver process to appear ...
	I0603 12:13:05.657920   72964 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:13:05.673837   72964 api_server.go:72] duration metric: took 3.599705436s to wait for apiserver process to appear ...
	I0603 12:13:05.673858   72964 api_server.go:88] waiting for apiserver healthz status ...
	I0603 12:13:05.673876   72964 api_server.go:253] Checking apiserver healthz at https://192.168.72.245:8443/healthz ...
	I0603 12:13:05.679549   72964 api_server.go:279] https://192.168.72.245:8443/healthz returned 200:
	ok
	I0603 12:13:05.680688   72964 api_server.go:141] control plane version: v1.30.1
	I0603 12:13:05.680709   72964 api_server.go:131] duration metric: took 6.844232ms to wait for apiserver health ...
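	The healthz probe above is a plain HTTPS GET against the apiserver endpoint shown in the log. Reproduced by hand it looks like the following; the --cacert path is an assumption based on the certs directory used earlier, and -k may be substituted to skip verification:
	
	curl --cacert /var/lib/minikube/certs/ca.crt https://192.168.72.245:8443/healthz
	# expected body: ok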
	I0603 12:13:05.680717   72964 system_pods.go:43] waiting for kube-system pods to appear ...
	I0603 12:13:05.861416   72964 system_pods.go:59] 9 kube-system pods found
	I0603 12:13:05.861452   72964 system_pods.go:61] "coredns-7db6d8ff4d-4gbj2" [0e46c731-84e4-4cb2-8125-2b61c10916a3] Running
	I0603 12:13:05.861459   72964 system_pods.go:61] "coredns-7db6d8ff4d-x9fw5" [1ed6c0e0-2d13-410f-bdf1-6620fb2503ed] Running
	I0603 12:13:05.861469   72964 system_pods.go:61] "etcd-embed-certs-725022" [7c8767c0-ca82-495c-92fa-759b698ebd0f] Running
	I0603 12:13:05.861475   72964 system_pods.go:61] "kube-apiserver-embed-certs-725022" [fe019ffc-5b0c-4271-a9dd-830262d1edd9] Running
	I0603 12:13:05.861479   72964 system_pods.go:61] "kube-controller-manager-embed-certs-725022" [8bde2240-7021-4ab7-9e51-2a7b921c4bf1] Running
	I0603 12:13:05.861483   72964 system_pods.go:61] "kube-proxy-7qp6h" [7869cd1d-785d-401d-aceb-854cffd63d73] Running
	I0603 12:13:05.861489   72964 system_pods.go:61] "kube-scheduler-embed-certs-725022" [ff93e1d0-8bb2-4026-b9d2-1710dd9f18b7] Running
	I0603 12:13:05.861497   72964 system_pods.go:61] "metrics-server-569cc877fc-jgmbs" [148d8ece-e094-4df9-989a-1bc59a33b7ca] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0603 12:13:05.861504   72964 system_pods.go:61] "storage-provisioner" [cde9aa2d-6a26-4f83-b5df-ae24b22df27a] Running
	I0603 12:13:05.861515   72964 system_pods.go:74] duration metric: took 180.791789ms to wait for pod list to return data ...
	I0603 12:13:05.861526   72964 default_sa.go:34] waiting for default service account to be created ...
	I0603 12:13:06.058059   72964 default_sa.go:45] found service account: "default"
	I0603 12:13:06.058088   72964 default_sa.go:55] duration metric: took 196.551592ms for default service account to be created ...
	I0603 12:13:06.058100   72964 system_pods.go:116] waiting for k8s-apps to be running ...
	I0603 12:13:06.261793   72964 system_pods.go:86] 9 kube-system pods found
	I0603 12:13:06.261828   72964 system_pods.go:89] "coredns-7db6d8ff4d-4gbj2" [0e46c731-84e4-4cb2-8125-2b61c10916a3] Running
	I0603 12:13:06.261835   72964 system_pods.go:89] "coredns-7db6d8ff4d-x9fw5" [1ed6c0e0-2d13-410f-bdf1-6620fb2503ed] Running
	I0603 12:13:06.261840   72964 system_pods.go:89] "etcd-embed-certs-725022" [7c8767c0-ca82-495c-92fa-759b698ebd0f] Running
	I0603 12:13:06.261846   72964 system_pods.go:89] "kube-apiserver-embed-certs-725022" [fe019ffc-5b0c-4271-a9dd-830262d1edd9] Running
	I0603 12:13:06.261853   72964 system_pods.go:89] "kube-controller-manager-embed-certs-725022" [8bde2240-7021-4ab7-9e51-2a7b921c4bf1] Running
	I0603 12:13:06.261860   72964 system_pods.go:89] "kube-proxy-7qp6h" [7869cd1d-785d-401d-aceb-854cffd63d73] Running
	I0603 12:13:06.261866   72964 system_pods.go:89] "kube-scheduler-embed-certs-725022" [ff93e1d0-8bb2-4026-b9d2-1710dd9f18b7] Running
	I0603 12:13:06.261877   72964 system_pods.go:89] "metrics-server-569cc877fc-jgmbs" [148d8ece-e094-4df9-989a-1bc59a33b7ca] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0603 12:13:06.261888   72964 system_pods.go:89] "storage-provisioner" [cde9aa2d-6a26-4f83-b5df-ae24b22df27a] Running
	I0603 12:13:06.261898   72964 system_pods.go:126] duration metric: took 203.791167ms to wait for k8s-apps to be running ...
	I0603 12:13:06.261910   72964 system_svc.go:44] waiting for kubelet service to be running ....
	I0603 12:13:06.261965   72964 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0603 12:13:06.277270   72964 system_svc.go:56] duration metric: took 15.351048ms WaitForService to wait for kubelet
	I0603 12:13:06.277313   72964 kubeadm.go:576] duration metric: took 4.203172406s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0603 12:13:06.277333   72964 node_conditions.go:102] verifying NodePressure condition ...
	I0603 12:13:06.458480   72964 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0603 12:13:06.458508   72964 node_conditions.go:123] node cpu capacity is 2
	I0603 12:13:06.458519   72964 node_conditions.go:105] duration metric: took 181.181522ms to run NodePressure ...
	I0603 12:13:06.458530   72964 start.go:240] waiting for startup goroutines ...
	I0603 12:13:06.458536   72964 start.go:245] waiting for cluster config update ...
	I0603 12:13:06.458546   72964 start.go:254] writing updated cluster config ...
	I0603 12:13:06.458796   72964 ssh_runner.go:195] Run: rm -f paused
	I0603 12:13:06.511692   72964 start.go:600] kubectl: 1.30.1, cluster: 1.30.1 (minor skew: 0)
	I0603 12:13:06.513617   72964 out.go:177] * Done! kubectl is now configured to use "embed-certs-725022" cluster and "default" namespace by default
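	At this point the embed-certs-725022 profile is up and the kubeconfig has been switched to it. A couple of illustrative follow-up checks (not part of the test run) would be:
	
	kubectl config current-context                          # embed-certs-725022
	kubectl --context embed-certs-725022 get nodes -o wide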
	I0603 12:13:32.215819   73662 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0603 12:13:32.216031   73662 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0603 12:13:32.216075   73662 kubeadm.go:309] 
	I0603 12:13:32.216149   73662 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0603 12:13:32.216254   73662 kubeadm.go:309] 		timed out waiting for the condition
	I0603 12:13:32.216284   73662 kubeadm.go:309] 
	I0603 12:13:32.216349   73662 kubeadm.go:309] 	This error is likely caused by:
	I0603 12:13:32.216394   73662 kubeadm.go:309] 		- The kubelet is not running
	I0603 12:13:32.216554   73662 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0603 12:13:32.216577   73662 kubeadm.go:309] 
	I0603 12:13:32.216688   73662 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0603 12:13:32.216722   73662 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0603 12:13:32.216764   73662 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0603 12:13:32.216773   73662 kubeadm.go:309] 
	I0603 12:13:32.216888   73662 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0603 12:13:32.217006   73662 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0603 12:13:32.217031   73662 kubeadm.go:309] 
	I0603 12:13:32.217165   73662 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0603 12:13:32.217278   73662 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0603 12:13:32.217412   73662 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0603 12:13:32.217594   73662 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0603 12:13:32.217618   73662 kubeadm.go:309] 
	I0603 12:13:32.218376   73662 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0603 12:13:32.218449   73662 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0603 12:13:32.218578   73662 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	W0603 12:13:32.218719   73662 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0603 12:13:32.218776   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
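	Before the retry that follows, the troubleshooting steps suggested in the kubeadm output above, adapted to the CRI-O socket this job uses, amount to:
	
	sudo systemctl status kubelet
	sudo journalctl -xeu kubelet | tail -n 50
	sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause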
	I0603 12:13:32.678357   73662 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0603 12:13:32.693276   73662 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0603 12:13:32.702964   73662 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0603 12:13:32.702986   73662 kubeadm.go:156] found existing configuration files:
	
	I0603 12:13:32.703025   73662 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0603 12:13:32.712508   73662 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0603 12:13:32.712555   73662 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0603 12:13:32.722219   73662 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0603 12:13:32.731648   73662 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0603 12:13:32.731702   73662 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0603 12:13:32.741195   73662 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0603 12:13:32.750711   73662 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0603 12:13:32.750764   73662 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0603 12:13:32.760654   73662 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0603 12:13:32.769838   73662 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0603 12:13:32.769881   73662 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0603 12:13:32.780973   73662 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0603 12:13:32.850830   73662 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0603 12:13:32.850883   73662 kubeadm.go:309] [preflight] Running pre-flight checks
	I0603 12:13:32.999201   73662 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0603 12:13:32.999328   73662 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0603 12:13:32.999428   73662 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0603 12:13:33.184771   73662 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0603 12:13:33.187327   73662 out.go:204]   - Generating certificates and keys ...
	I0603 12:13:33.187398   73662 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0603 12:13:33.187487   73662 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0603 12:13:33.187586   73662 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0603 12:13:33.187682   73662 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0603 12:13:33.187788   73662 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0603 12:13:33.187887   73662 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0603 12:13:33.187981   73662 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0603 12:13:33.188107   73662 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0603 12:13:33.188522   73662 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0603 12:13:33.188801   73662 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0603 12:13:33.188880   73662 kubeadm.go:309] [certs] Using the existing "sa" key
	I0603 12:13:33.188991   73662 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0603 12:13:33.334289   73662 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0603 12:13:33.523806   73662 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0603 12:13:33.699531   73662 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0603 12:13:33.750555   73662 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0603 12:13:33.769976   73662 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0603 12:13:33.770924   73662 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0603 12:13:33.770986   73662 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0603 12:13:33.921095   73662 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0603 12:13:33.923915   73662 out.go:204]   - Booting up control plane ...
	I0603 12:13:33.924071   73662 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0603 12:13:33.930998   73662 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0603 12:13:33.934088   73662 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0603 12:13:33.935783   73662 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0603 12:13:33.939727   73662 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0603 12:14:13.940542   73662 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0603 12:14:13.940993   73662 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0603 12:14:13.941324   73662 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0603 12:14:18.941485   73662 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0603 12:14:18.941730   73662 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0603 12:14:28.942021   73662 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0603 12:14:28.942229   73662 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0603 12:14:48.942823   73662 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0603 12:14:48.943115   73662 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0603 12:15:28.944455   73662 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0603 12:15:28.944758   73662 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0603 12:15:28.944781   73662 kubeadm.go:309] 
	I0603 12:15:28.944835   73662 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0603 12:15:28.944914   73662 kubeadm.go:309] 		timed out waiting for the condition
	I0603 12:15:28.944925   73662 kubeadm.go:309] 
	I0603 12:15:28.944965   73662 kubeadm.go:309] 	This error is likely caused by:
	I0603 12:15:28.945008   73662 kubeadm.go:309] 		- The kubelet is not running
	I0603 12:15:28.945152   73662 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0603 12:15:28.945168   73662 kubeadm.go:309] 
	I0603 12:15:28.945322   73662 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0603 12:15:28.945378   73662 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0603 12:15:28.945423   73662 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0603 12:15:28.945433   73662 kubeadm.go:309] 
	I0603 12:15:28.945568   73662 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0603 12:15:28.945695   73662 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0603 12:15:28.945717   73662 kubeadm.go:309] 
	I0603 12:15:28.945883   73662 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0603 12:15:28.946014   73662 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0603 12:15:28.946123   73662 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0603 12:15:28.946234   73662 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0603 12:15:28.946263   73662 kubeadm.go:309] 
	I0603 12:15:28.947236   73662 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0603 12:15:28.947323   73662 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0603 12:15:28.947455   73662 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	I0603 12:15:28.947531   73662 kubeadm.go:393] duration metric: took 7m57.88734097s to StartCluster
	I0603 12:15:28.947585   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 12:15:28.947638   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 12:15:28.993664   73662 cri.go:89] found id: ""
	I0603 12:15:28.993694   73662 logs.go:276] 0 containers: []
	W0603 12:15:28.993705   73662 logs.go:278] No container was found matching "kube-apiserver"
	I0603 12:15:28.993712   73662 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 12:15:28.993774   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 12:15:29.030686   73662 cri.go:89] found id: ""
	I0603 12:15:29.030720   73662 logs.go:276] 0 containers: []
	W0603 12:15:29.030730   73662 logs.go:278] No container was found matching "etcd"
	I0603 12:15:29.030738   73662 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 12:15:29.030803   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 12:15:29.067047   73662 cri.go:89] found id: ""
	I0603 12:15:29.067076   73662 logs.go:276] 0 containers: []
	W0603 12:15:29.067086   73662 logs.go:278] No container was found matching "coredns"
	I0603 12:15:29.067092   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 12:15:29.067154   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 12:15:29.107392   73662 cri.go:89] found id: ""
	I0603 12:15:29.107416   73662 logs.go:276] 0 containers: []
	W0603 12:15:29.107424   73662 logs.go:278] No container was found matching "kube-scheduler"
	I0603 12:15:29.107430   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 12:15:29.107483   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 12:15:29.159886   73662 cri.go:89] found id: ""
	I0603 12:15:29.159916   73662 logs.go:276] 0 containers: []
	W0603 12:15:29.159925   73662 logs.go:278] No container was found matching "kube-proxy"
	I0603 12:15:29.159934   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 12:15:29.159994   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 12:15:29.195187   73662 cri.go:89] found id: ""
	I0603 12:15:29.195218   73662 logs.go:276] 0 containers: []
	W0603 12:15:29.195229   73662 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 12:15:29.195236   73662 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 12:15:29.195295   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 12:15:29.233622   73662 cri.go:89] found id: ""
	I0603 12:15:29.233648   73662 logs.go:276] 0 containers: []
	W0603 12:15:29.233656   73662 logs.go:278] No container was found matching "kindnet"
	I0603 12:15:29.233662   73662 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 12:15:29.233717   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 12:15:29.272849   73662 cri.go:89] found id: ""
	I0603 12:15:29.272874   73662 logs.go:276] 0 containers: []
	W0603 12:15:29.272882   73662 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 12:15:29.272891   73662 logs.go:123] Gathering logs for CRI-O ...
	I0603 12:15:29.272901   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 12:15:29.383220   73662 logs.go:123] Gathering logs for container status ...
	I0603 12:15:29.383256   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 12:15:29.424045   73662 logs.go:123] Gathering logs for kubelet ...
	I0603 12:15:29.424076   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 12:15:29.475712   73662 logs.go:123] Gathering logs for dmesg ...
	I0603 12:15:29.475743   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 12:15:29.489841   73662 logs.go:123] Gathering logs for describe nodes ...
	I0603 12:15:29.489868   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 12:15:29.572988   73662 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	W0603 12:15:29.573030   73662 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0603 12:15:29.573068   73662 out.go:239] * 
	W0603 12:15:29.573117   73662 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0603 12:15:29.573138   73662 out.go:239] * 
	W0603 12:15:29.573869   73662 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0603 12:15:29.577458   73662 out.go:177] 
	W0603 12:15:29.578659   73662 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0603 12:15:29.578700   73662 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0603 12:15:29.578716   73662 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0603 12:15:29.580176   73662 out.go:177] 
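	A minimal sketch of the workaround named in the suggestion above, assuming a generic profile name (<profile> is a placeholder, not taken from this run): first inspect the kubelet journal on the node, then retry the start with the extra kubelet config the suggestion points at:
	
		sudo journalctl -xeu kubelet
		minikube start -p <profile> --extra-config=kubelet.cgroup-driver=systemd
	
	The cgroup-driver value here simply mirrors the flag quoted in the suggestion and the linked issue; the journal output is what would show whether the kubelet's cgroup driver disagrees with the CRI-O runtime on the node.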
	
	
	==> CRI-O <==
	Jun 03 12:21:22 default-k8s-diff-port-196710 crio[718]: time="2024-06-03 12:21:22.686946026Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1717417282686915698,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133260,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=eaf00e62-0039-4fa1-8c65-70bb70f52564 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 03 12:21:22 default-k8s-diff-port-196710 crio[718]: time="2024-06-03 12:21:22.687852185Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=eadbef8c-a747-47e7-a1b1-4c68081781fe name=/runtime.v1.RuntimeService/ListContainers
	Jun 03 12:21:22 default-k8s-diff-port-196710 crio[718]: time="2024-06-03 12:21:22.687907083Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=eadbef8c-a747-47e7-a1b1-4c68081781fe name=/runtime.v1.RuntimeService/ListContainers
	Jun 03 12:21:22 default-k8s-diff-port-196710 crio[718]: time="2024-06-03 12:21:22.688156280Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:3f837113d05b0531663797495d73bc896224b9a6ab02d0fe3c02cd3c156895be,PodSandboxId:bb07783cf2f0189056abe938846b8704a51bc93e387368309ff3fe1803ba0f50,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1717416738895876479,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8bc80b69-d8f9-4d6a-9bf4-4a41d875a735,},Annotations:map[string]string{io.kubernetes.container.hash: 2581e734,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:38f81b9ecc23e0288f01cdb7927b2262c7c4829c009d526d0191ed082a1e4fa0,PodSandboxId:969b22e069ac9a53ed92dd28a15c7a6b2f9aefe4297bd0ad86363dee073ab272,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1717416738414293764,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-pbndv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 91d83622-9883-407e-b0f4-eb2d18cd2483,},Annotations:map[string]string{io.kubernetes.container.hash: 3e7c2529,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:55f5cd73dfa8fa07fbabc94bea14e9b6986664d022ceb7197c03f85ca5ad7543,PodSandboxId:14cc90803a07908405a43112b39a140d20f4cafe7439893777a371946dc4cc46,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1717416738304111473,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-fvgqr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: c908a302-8c40-46aa-9e98-92baa297a7ed,},Annotations:map[string]string{io.kubernetes.container.hash: 79697988,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ba8f260f4f1470e1baa981a7b6c5a8b69258e02906671f2ff9d5b6da4130643c,PodSandboxId:a2764dd88f0a0dfd7eaffd42fd29874e5c1d62454db4d8bf43da24803581300d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING
,CreatedAt:1717416737667169029,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-j4gzg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2e603f37-93e0-429d-97b8-e9b997c26101,},Annotations:map[string]string{io.kubernetes.container.hash: 54df7384,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5b367908648772f0d2858869b62792dda2fa40783a9edb86115c821eb7424e80,PodSandboxId:92c573d6e4e9264edc09c4988c8d0a23d78a7967e054a0f2021af5bfb5b664df,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:171741671750593990
3,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-196710,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8f6295a4fec0c60d8c3d9920313cf2d1,},Annotations:map[string]string{io.kubernetes.container.hash: 24735c84,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b97df11bb5da4ba695d9765453b0f9f37298d6e914ae7586c0359aa3c72a6a4f,PodSandboxId:0da5999e2969085c068480e8b107353c83cdd0ab313203bf9e95041d129fe2b5,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1717416717492874076,Labels:map
[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-196710,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e4bc6d209f2d0bf892bab0b260232a49,},Annotations:map[string]string{io.kubernetes.container.hash: 264b8a30,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dde9542f2848b9387f40071ed004994aebbf0e7b76409c9adff20fab6b868f6d,PodSandboxId:5991627e3ce5c5eb675b1eb04e964578d6bdf31e04fba24fdf5869bea146181b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1717416717528870201,Labels:map[string]strin
g{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-196710,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eab2734d7cba77ab32aa054371b78738,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:458d45e7061f123547eb39c6b4c985a9a06f0399195f1c6137493845029be051,PodSandboxId:1294592d2da1d354e97d2823a3479688b1deead2dd58f93b5aa972adda9a5f7b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1717416717452883347,Labels:
map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-196710,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 60475779e4c9a7355be04c17bf5751a8,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=eadbef8c-a747-47e7-a1b1-4c68081781fe name=/runtime.v1.RuntimeService/ListContainers
	Jun 03 12:21:22 default-k8s-diff-port-196710 crio[718]: time="2024-06-03 12:21:22.737148721Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=7331cdf2-c0e0-470d-9d71-ca43d14bb4e9 name=/runtime.v1.RuntimeService/Version
	Jun 03 12:21:22 default-k8s-diff-port-196710 crio[718]: time="2024-06-03 12:21:22.737330752Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=7331cdf2-c0e0-470d-9d71-ca43d14bb4e9 name=/runtime.v1.RuntimeService/Version
	Jun 03 12:21:22 default-k8s-diff-port-196710 crio[718]: time="2024-06-03 12:21:22.739539300Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=43d397ba-39ee-4359-b309-2f9b3ed934fe name=/runtime.v1.ImageService/ImageFsInfo
	Jun 03 12:21:22 default-k8s-diff-port-196710 crio[718]: time="2024-06-03 12:21:22.740077970Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1717417282740051578,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133260,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=43d397ba-39ee-4359-b309-2f9b3ed934fe name=/runtime.v1.ImageService/ImageFsInfo
	Jun 03 12:21:22 default-k8s-diff-port-196710 crio[718]: time="2024-06-03 12:21:22.741373622Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=54f8d221-4b2b-4ff0-9862-00e3923427c2 name=/runtime.v1.RuntimeService/ListContainers
	Jun 03 12:21:22 default-k8s-diff-port-196710 crio[718]: time="2024-06-03 12:21:22.741495398Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=54f8d221-4b2b-4ff0-9862-00e3923427c2 name=/runtime.v1.RuntimeService/ListContainers
	Jun 03 12:21:22 default-k8s-diff-port-196710 crio[718]: time="2024-06-03 12:21:22.741984582Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:3f837113d05b0531663797495d73bc896224b9a6ab02d0fe3c02cd3c156895be,PodSandboxId:bb07783cf2f0189056abe938846b8704a51bc93e387368309ff3fe1803ba0f50,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1717416738895876479,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8bc80b69-d8f9-4d6a-9bf4-4a41d875a735,},Annotations:map[string]string{io.kubernetes.container.hash: 2581e734,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:38f81b9ecc23e0288f01cdb7927b2262c7c4829c009d526d0191ed082a1e4fa0,PodSandboxId:969b22e069ac9a53ed92dd28a15c7a6b2f9aefe4297bd0ad86363dee073ab272,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1717416738414293764,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-pbndv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 91d83622-9883-407e-b0f4-eb2d18cd2483,},Annotations:map[string]string{io.kubernetes.container.hash: 3e7c2529,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:55f5cd73dfa8fa07fbabc94bea14e9b6986664d022ceb7197c03f85ca5ad7543,PodSandboxId:14cc90803a07908405a43112b39a140d20f4cafe7439893777a371946dc4cc46,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1717416738304111473,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-fvgqr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: c908a302-8c40-46aa-9e98-92baa297a7ed,},Annotations:map[string]string{io.kubernetes.container.hash: 79697988,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ba8f260f4f1470e1baa981a7b6c5a8b69258e02906671f2ff9d5b6da4130643c,PodSandboxId:a2764dd88f0a0dfd7eaffd42fd29874e5c1d62454db4d8bf43da24803581300d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING
,CreatedAt:1717416737667169029,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-j4gzg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2e603f37-93e0-429d-97b8-e9b997c26101,},Annotations:map[string]string{io.kubernetes.container.hash: 54df7384,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5b367908648772f0d2858869b62792dda2fa40783a9edb86115c821eb7424e80,PodSandboxId:92c573d6e4e9264edc09c4988c8d0a23d78a7967e054a0f2021af5bfb5b664df,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:171741671750593990
3,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-196710,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8f6295a4fec0c60d8c3d9920313cf2d1,},Annotations:map[string]string{io.kubernetes.container.hash: 24735c84,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b97df11bb5da4ba695d9765453b0f9f37298d6e914ae7586c0359aa3c72a6a4f,PodSandboxId:0da5999e2969085c068480e8b107353c83cdd0ab313203bf9e95041d129fe2b5,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1717416717492874076,Labels:map
[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-196710,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e4bc6d209f2d0bf892bab0b260232a49,},Annotations:map[string]string{io.kubernetes.container.hash: 264b8a30,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dde9542f2848b9387f40071ed004994aebbf0e7b76409c9adff20fab6b868f6d,PodSandboxId:5991627e3ce5c5eb675b1eb04e964578d6bdf31e04fba24fdf5869bea146181b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1717416717528870201,Labels:map[string]strin
g{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-196710,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eab2734d7cba77ab32aa054371b78738,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:458d45e7061f123547eb39c6b4c985a9a06f0399195f1c6137493845029be051,PodSandboxId:1294592d2da1d354e97d2823a3479688b1deead2dd58f93b5aa972adda9a5f7b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1717416717452883347,Labels:
map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-196710,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 60475779e4c9a7355be04c17bf5751a8,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=54f8d221-4b2b-4ff0-9862-00e3923427c2 name=/runtime.v1.RuntimeService/ListContainers
	Jun 03 12:21:22 default-k8s-diff-port-196710 crio[718]: time="2024-06-03 12:21:22.783005736Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=d127e93d-4b64-42e7-a45d-63791ffe9a4e name=/runtime.v1.RuntimeService/Version
	Jun 03 12:21:22 default-k8s-diff-port-196710 crio[718]: time="2024-06-03 12:21:22.783125722Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=d127e93d-4b64-42e7-a45d-63791ffe9a4e name=/runtime.v1.RuntimeService/Version
	Jun 03 12:21:22 default-k8s-diff-port-196710 crio[718]: time="2024-06-03 12:21:22.785167744Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=ce162d94-3df0-45c1-84b6-522d441c8e66 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 03 12:21:22 default-k8s-diff-port-196710 crio[718]: time="2024-06-03 12:21:22.785567855Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1717417282785547827,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133260,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=ce162d94-3df0-45c1-84b6-522d441c8e66 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 03 12:21:22 default-k8s-diff-port-196710 crio[718]: time="2024-06-03 12:21:22.786511840Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=5ad6ac48-5cad-42e0-967c-c3eb8b0dfbab name=/runtime.v1.RuntimeService/ListContainers
	Jun 03 12:21:22 default-k8s-diff-port-196710 crio[718]: time="2024-06-03 12:21:22.786572645Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=5ad6ac48-5cad-42e0-967c-c3eb8b0dfbab name=/runtime.v1.RuntimeService/ListContainers
	Jun 03 12:21:22 default-k8s-diff-port-196710 crio[718]: time="2024-06-03 12:21:22.786805316Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:3f837113d05b0531663797495d73bc896224b9a6ab02d0fe3c02cd3c156895be,PodSandboxId:bb07783cf2f0189056abe938846b8704a51bc93e387368309ff3fe1803ba0f50,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1717416738895876479,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8bc80b69-d8f9-4d6a-9bf4-4a41d875a735,},Annotations:map[string]string{io.kubernetes.container.hash: 2581e734,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:38f81b9ecc23e0288f01cdb7927b2262c7c4829c009d526d0191ed082a1e4fa0,PodSandboxId:969b22e069ac9a53ed92dd28a15c7a6b2f9aefe4297bd0ad86363dee073ab272,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1717416738414293764,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-pbndv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 91d83622-9883-407e-b0f4-eb2d18cd2483,},Annotations:map[string]string{io.kubernetes.container.hash: 3e7c2529,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:55f5cd73dfa8fa07fbabc94bea14e9b6986664d022ceb7197c03f85ca5ad7543,PodSandboxId:14cc90803a07908405a43112b39a140d20f4cafe7439893777a371946dc4cc46,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1717416738304111473,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-fvgqr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: c908a302-8c40-46aa-9e98-92baa297a7ed,},Annotations:map[string]string{io.kubernetes.container.hash: 79697988,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ba8f260f4f1470e1baa981a7b6c5a8b69258e02906671f2ff9d5b6da4130643c,PodSandboxId:a2764dd88f0a0dfd7eaffd42fd29874e5c1d62454db4d8bf43da24803581300d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING
,CreatedAt:1717416737667169029,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-j4gzg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2e603f37-93e0-429d-97b8-e9b997c26101,},Annotations:map[string]string{io.kubernetes.container.hash: 54df7384,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5b367908648772f0d2858869b62792dda2fa40783a9edb86115c821eb7424e80,PodSandboxId:92c573d6e4e9264edc09c4988c8d0a23d78a7967e054a0f2021af5bfb5b664df,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:171741671750593990
3,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-196710,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8f6295a4fec0c60d8c3d9920313cf2d1,},Annotations:map[string]string{io.kubernetes.container.hash: 24735c84,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b97df11bb5da4ba695d9765453b0f9f37298d6e914ae7586c0359aa3c72a6a4f,PodSandboxId:0da5999e2969085c068480e8b107353c83cdd0ab313203bf9e95041d129fe2b5,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1717416717492874076,Labels:map
[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-196710,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e4bc6d209f2d0bf892bab0b260232a49,},Annotations:map[string]string{io.kubernetes.container.hash: 264b8a30,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dde9542f2848b9387f40071ed004994aebbf0e7b76409c9adff20fab6b868f6d,PodSandboxId:5991627e3ce5c5eb675b1eb04e964578d6bdf31e04fba24fdf5869bea146181b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1717416717528870201,Labels:map[string]strin
g{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-196710,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eab2734d7cba77ab32aa054371b78738,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:458d45e7061f123547eb39c6b4c985a9a06f0399195f1c6137493845029be051,PodSandboxId:1294592d2da1d354e97d2823a3479688b1deead2dd58f93b5aa972adda9a5f7b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1717416717452883347,Labels:
map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-196710,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 60475779e4c9a7355be04c17bf5751a8,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=5ad6ac48-5cad-42e0-967c-c3eb8b0dfbab name=/runtime.v1.RuntimeService/ListContainers
	Jun 03 12:21:22 default-k8s-diff-port-196710 crio[718]: time="2024-06-03 12:21:22.825313111Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=8b0a8aaa-8d47-4edb-879a-da147b6fa6d3 name=/runtime.v1.RuntimeService/Version
	Jun 03 12:21:22 default-k8s-diff-port-196710 crio[718]: time="2024-06-03 12:21:22.825386042Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=8b0a8aaa-8d47-4edb-879a-da147b6fa6d3 name=/runtime.v1.RuntimeService/Version
	Jun 03 12:21:22 default-k8s-diff-port-196710 crio[718]: time="2024-06-03 12:21:22.826480856Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=ce2a9945-d29b-47cf-b390-6e965f91638d name=/runtime.v1.ImageService/ImageFsInfo
	Jun 03 12:21:22 default-k8s-diff-port-196710 crio[718]: time="2024-06-03 12:21:22.826978092Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1717417282826956110,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133260,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=ce2a9945-d29b-47cf-b390-6e965f91638d name=/runtime.v1.ImageService/ImageFsInfo
	Jun 03 12:21:22 default-k8s-diff-port-196710 crio[718]: time="2024-06-03 12:21:22.827830385Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ee1973f2-1977-425a-9877-0f93ee316e55 name=/runtime.v1.RuntimeService/ListContainers
	Jun 03 12:21:22 default-k8s-diff-port-196710 crio[718]: time="2024-06-03 12:21:22.827883393Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ee1973f2-1977-425a-9877-0f93ee316e55 name=/runtime.v1.RuntimeService/ListContainers
	Jun 03 12:21:22 default-k8s-diff-port-196710 crio[718]: time="2024-06-03 12:21:22.828325055Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:3f837113d05b0531663797495d73bc896224b9a6ab02d0fe3c02cd3c156895be,PodSandboxId:bb07783cf2f0189056abe938846b8704a51bc93e387368309ff3fe1803ba0f50,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1717416738895876479,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8bc80b69-d8f9-4d6a-9bf4-4a41d875a735,},Annotations:map[string]string{io.kubernetes.container.hash: 2581e734,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:38f81b9ecc23e0288f01cdb7927b2262c7c4829c009d526d0191ed082a1e4fa0,PodSandboxId:969b22e069ac9a53ed92dd28a15c7a6b2f9aefe4297bd0ad86363dee073ab272,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1717416738414293764,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-pbndv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 91d83622-9883-407e-b0f4-eb2d18cd2483,},Annotations:map[string]string{io.kubernetes.container.hash: 3e7c2529,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:55f5cd73dfa8fa07fbabc94bea14e9b6986664d022ceb7197c03f85ca5ad7543,PodSandboxId:14cc90803a07908405a43112b39a140d20f4cafe7439893777a371946dc4cc46,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1717416738304111473,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-fvgqr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: c908a302-8c40-46aa-9e98-92baa297a7ed,},Annotations:map[string]string{io.kubernetes.container.hash: 79697988,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ba8f260f4f1470e1baa981a7b6c5a8b69258e02906671f2ff9d5b6da4130643c,PodSandboxId:a2764dd88f0a0dfd7eaffd42fd29874e5c1d62454db4d8bf43da24803581300d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING
,CreatedAt:1717416737667169029,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-j4gzg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2e603f37-93e0-429d-97b8-e9b997c26101,},Annotations:map[string]string{io.kubernetes.container.hash: 54df7384,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5b367908648772f0d2858869b62792dda2fa40783a9edb86115c821eb7424e80,PodSandboxId:92c573d6e4e9264edc09c4988c8d0a23d78a7967e054a0f2021af5bfb5b664df,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:171741671750593990
3,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-196710,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8f6295a4fec0c60d8c3d9920313cf2d1,},Annotations:map[string]string{io.kubernetes.container.hash: 24735c84,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b97df11bb5da4ba695d9765453b0f9f37298d6e914ae7586c0359aa3c72a6a4f,PodSandboxId:0da5999e2969085c068480e8b107353c83cdd0ab313203bf9e95041d129fe2b5,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1717416717492874076,Labels:map
[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-196710,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e4bc6d209f2d0bf892bab0b260232a49,},Annotations:map[string]string{io.kubernetes.container.hash: 264b8a30,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dde9542f2848b9387f40071ed004994aebbf0e7b76409c9adff20fab6b868f6d,PodSandboxId:5991627e3ce5c5eb675b1eb04e964578d6bdf31e04fba24fdf5869bea146181b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1717416717528870201,Labels:map[string]strin
g{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-196710,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eab2734d7cba77ab32aa054371b78738,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:458d45e7061f123547eb39c6b4c985a9a06f0399195f1c6137493845029be051,PodSandboxId:1294592d2da1d354e97d2823a3479688b1deead2dd58f93b5aa972adda9a5f7b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1717416717452883347,Labels:
map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-196710,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 60475779e4c9a7355be04c17bf5751a8,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=ee1973f2-1977-425a-9877-0f93ee316e55 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	3f837113d05b0       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   9 minutes ago       Running             storage-provisioner       0                   bb07783cf2f01       storage-provisioner
	38f81b9ecc23e       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   9 minutes ago       Running             coredns                   0                   969b22e069ac9       coredns-7db6d8ff4d-pbndv
	55f5cd73dfa8f       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   9 minutes ago       Running             coredns                   0                   14cc90803a079       coredns-7db6d8ff4d-fvgqr
	ba8f260f4f147       747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd   9 minutes ago       Running             kube-proxy                0                   a2764dd88f0a0       kube-proxy-j4gzg
	dde9542f2848b       25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c   9 minutes ago       Running             kube-controller-manager   2                   5991627e3ce5c       kube-controller-manager-default-k8s-diff-port-196710
	5b36790864877       91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a   9 minutes ago       Running             kube-apiserver            2                   92c573d6e4e92       kube-apiserver-default-k8s-diff-port-196710
	b97df11bb5da4       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899   9 minutes ago       Running             etcd                      2                   0da5999e29690       etcd-default-k8s-diff-port-196710
	458d45e7061f1       a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035   9 minutes ago       Running             kube-scheduler            2                   1294592d2da1d       kube-scheduler-default-k8s-diff-port-196710
	
	
	==> coredns [38f81b9ecc23e0288f01cdb7927b2262c7c4829c009d526d0191ed082a1e4fa0] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> coredns [55f5cd73dfa8fa07fbabc94bea14e9b6986664d022ceb7197c03f85ca5ad7543] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-196710
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-196710
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=599070631c2216ebc936292d491e4fe10e15b9d8
	                    minikube.k8s.io/name=default-k8s-diff-port-196710
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_06_03T12_12_03_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 03 Jun 2024 12:12:00 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-196710
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 03 Jun 2024 12:21:14 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 03 Jun 2024 12:17:28 +0000   Mon, 03 Jun 2024 12:11:58 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 03 Jun 2024 12:17:28 +0000   Mon, 03 Jun 2024 12:11:58 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 03 Jun 2024 12:17:28 +0000   Mon, 03 Jun 2024 12:11:58 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 03 Jun 2024 12:17:28 +0000   Mon, 03 Jun 2024 12:12:00 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.61.60
	  Hostname:    default-k8s-diff-port-196710
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 30cc90e3d4ba4851bf3941aebea98abf
	  System UUID:                30cc90e3-d4ba-4851-bf39-41aebea98abf
	  Boot ID:                    8d17ce40-dc25-4e83-ab19-730863a4a2c0
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.1
	  Kube-Proxy Version:         v1.30.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7db6d8ff4d-fvgqr                                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m6s
	  kube-system                 coredns-7db6d8ff4d-pbndv                                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m6s
	  kube-system                 etcd-default-k8s-diff-port-196710                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         9m21s
	  kube-system                 kube-apiserver-default-k8s-diff-port-196710             250m (12%)    0 (0%)      0 (0%)           0 (0%)         9m21s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-196710    200m (10%)    0 (0%)      0 (0%)           0 (0%)         9m21s
	  kube-system                 kube-proxy-j4gzg                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m7s
	  kube-system                 kube-scheduler-default-k8s-diff-port-196710             100m (5%)     0 (0%)      0 (0%)           0 (0%)         9m21s
	  kube-system                 metrics-server-569cc877fc-lxvbp                         100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         9m5s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m5s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             440Mi (20%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 9m4s                   kube-proxy       
	  Normal  Starting                 9m27s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  9m27s (x8 over 9m27s)  kubelet          Node default-k8s-diff-port-196710 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m27s (x8 over 9m27s)  kubelet          Node default-k8s-diff-port-196710 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m27s (x7 over 9m27s)  kubelet          Node default-k8s-diff-port-196710 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  9m27s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 9m21s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  9m21s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  9m21s                  kubelet          Node default-k8s-diff-port-196710 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m21s                  kubelet          Node default-k8s-diff-port-196710 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m21s                  kubelet          Node default-k8s-diff-port-196710 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           9m7s                   node-controller  Node default-k8s-diff-port-196710 event: Registered Node default-k8s-diff-port-196710 in Controller
	
	
	==> dmesg <==
	[  +0.045057] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.557494] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.367168] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.641054] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[Jun 3 12:07] systemd-fstab-generator[634]: Ignoring "noauto" option for root device
	[  +0.056268] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.071079] systemd-fstab-generator[646]: Ignoring "noauto" option for root device
	[  +0.181169] systemd-fstab-generator[660]: Ignoring "noauto" option for root device
	[  +0.160826] systemd-fstab-generator[672]: Ignoring "noauto" option for root device
	[  +0.331192] systemd-fstab-generator[702]: Ignoring "noauto" option for root device
	[  +4.611511] systemd-fstab-generator[801]: Ignoring "noauto" option for root device
	[  +0.061098] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.655618] systemd-fstab-generator[923]: Ignoring "noauto" option for root device
	[  +5.704467] kauditd_printk_skb: 97 callbacks suppressed
	[  +6.225727] kauditd_printk_skb: 79 callbacks suppressed
	[  +6.951922] kauditd_printk_skb: 2 callbacks suppressed
	[Jun 3 12:11] kauditd_printk_skb: 7 callbacks suppressed
	[  +1.474182] systemd-fstab-generator[3574]: Ignoring "noauto" option for root device
	[Jun 3 12:12] kauditd_printk_skb: 55 callbacks suppressed
	[  +1.638418] systemd-fstab-generator[3897]: Ignoring "noauto" option for root device
	[ +14.408380] systemd-fstab-generator[4101]: Ignoring "noauto" option for root device
	[  +0.099724] kauditd_printk_skb: 14 callbacks suppressed
	[Jun 3 12:13] kauditd_printk_skb: 84 callbacks suppressed
	
	
	==> etcd [b97df11bb5da4ba695d9765453b0f9f37298d6e914ae7586c0359aa3c72a6a4f] <==
	{"level":"info","ts":"2024-06-03T12:11:57.841118Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"5950dcfe76ab9ff7 switched to configuration voters=(6435886852983201783)"}
	{"level":"info","ts":"2024-06-03T12:11:57.842888Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"5bb75673341f887b","local-member-id":"5950dcfe76ab9ff7","added-peer-id":"5950dcfe76ab9ff7","added-peer-peer-urls":["https://192.168.61.60:2380"]}
	{"level":"info","ts":"2024-06-03T12:11:57.876988Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.61.60:2380"}
	{"level":"info","ts":"2024-06-03T12:11:57.878082Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.61.60:2380"}
	{"level":"info","ts":"2024-06-03T12:11:57.878312Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-06-03T12:11:57.886881Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"5950dcfe76ab9ff7","initial-advertise-peer-urls":["https://192.168.61.60:2380"],"listen-peer-urls":["https://192.168.61.60:2380"],"advertise-client-urls":["https://192.168.61.60:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.61.60:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-06-03T12:11:57.887176Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-06-03T12:11:58.32179Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"5950dcfe76ab9ff7 is starting a new election at term 1"}
	{"level":"info","ts":"2024-06-03T12:11:58.321913Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"5950dcfe76ab9ff7 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-06-03T12:11:58.321974Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"5950dcfe76ab9ff7 received MsgPreVoteResp from 5950dcfe76ab9ff7 at term 1"}
	{"level":"info","ts":"2024-06-03T12:11:58.322014Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"5950dcfe76ab9ff7 became candidate at term 2"}
	{"level":"info","ts":"2024-06-03T12:11:58.322039Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"5950dcfe76ab9ff7 received MsgVoteResp from 5950dcfe76ab9ff7 at term 2"}
	{"level":"info","ts":"2024-06-03T12:11:58.322067Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"5950dcfe76ab9ff7 became leader at term 2"}
	{"level":"info","ts":"2024-06-03T12:11:58.322105Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 5950dcfe76ab9ff7 elected leader 5950dcfe76ab9ff7 at term 2"}
	{"level":"info","ts":"2024-06-03T12:11:58.326005Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"5950dcfe76ab9ff7","local-member-attributes":"{Name:default-k8s-diff-port-196710 ClientURLs:[https://192.168.61.60:2379]}","request-path":"/0/members/5950dcfe76ab9ff7/attributes","cluster-id":"5bb75673341f887b","publish-timeout":"7s"}
	{"level":"info","ts":"2024-06-03T12:11:58.327739Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-06-03T12:11:58.328111Z","caller":"etcdserver/server.go:2578","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-06-03T12:11:58.328252Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-06-03T12:11:58.334223Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-06-03T12:11:58.334325Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"5bb75673341f887b","local-member-id":"5950dcfe76ab9ff7","cluster-version":"3.5"}
	{"level":"info","ts":"2024-06-03T12:11:58.334399Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-06-03T12:11:58.334435Z","caller":"etcdserver/server.go:2602","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-06-03T12:11:58.337819Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-06-03T12:11:58.337854Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-06-03T12:11:58.343222Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.61.60:2379"}
	
	
	==> kernel <==
	 12:21:23 up 14 min,  0 users,  load average: 0.32, 0.21, 0.13
	Linux default-k8s-diff-port-196710 5.10.207 #1 SMP Wed May 22 22:17:16 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [5b367908648772f0d2858869b62792dda2fa40783a9edb86115c821eb7424e80] <==
	I0603 12:15:19.372470       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0603 12:17:00.028998       1 handler_proxy.go:93] no RequestInfo found in the context
	E0603 12:17:00.029647       1 controller.go:146] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	W0603 12:17:01.030486       1 handler_proxy.go:93] no RequestInfo found in the context
	W0603 12:17:01.030613       1 handler_proxy.go:93] no RequestInfo found in the context
	E0603 12:17:01.030811       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0603 12:17:01.030843       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	E0603 12:17:01.030812       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0603 12:17:01.032042       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0603 12:18:01.030938       1 handler_proxy.go:93] no RequestInfo found in the context
	E0603 12:18:01.031267       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0603 12:18:01.031356       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0603 12:18:01.032226       1 handler_proxy.go:93] no RequestInfo found in the context
	E0603 12:18:01.032314       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0603 12:18:01.032354       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0603 12:20:01.031622       1 handler_proxy.go:93] no RequestInfo found in the context
	E0603 12:20:01.031974       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0603 12:20:01.032006       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0603 12:20:01.032751       1 handler_proxy.go:93] no RequestInfo found in the context
	E0603 12:20:01.032979       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0603 12:20:01.033019       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [dde9542f2848b9387f40071ed004994aebbf0e7b76409c9adff20fab6b868f6d] <==
	I0603 12:15:46.630622       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0603 12:16:16.188293       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0603 12:16:16.641315       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0603 12:16:46.193625       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0603 12:16:46.651215       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0603 12:17:16.199145       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0603 12:17:16.659466       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0603 12:17:46.209615       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0603 12:17:46.668505       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0603 12:18:14.891528       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-569cc877fc" duration="376.47µs"
	E0603 12:18:16.216586       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0603 12:18:16.676496       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0603 12:18:28.887842       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-569cc877fc" duration="224.286µs"
	E0603 12:18:46.221869       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0603 12:18:46.684263       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0603 12:19:16.227275       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0603 12:19:16.692765       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0603 12:19:46.232832       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0603 12:19:46.700814       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0603 12:20:16.239209       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0603 12:20:16.708906       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0603 12:20:46.244908       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0603 12:20:46.716234       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0603 12:21:16.250093       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0603 12:21:16.725053       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [ba8f260f4f1470e1baa981a7b6c5a8b69258e02906671f2ff9d5b6da4130643c] <==
	I0603 12:12:18.080204       1 server_linux.go:69] "Using iptables proxy"
	I0603 12:12:18.354128       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.61.60"]
	I0603 12:12:18.801660       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0603 12:12:18.803081       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0603 12:12:18.803169       1 server_linux.go:165] "Using iptables Proxier"
	I0603 12:12:18.807556       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0603 12:12:18.809090       1 server.go:872] "Version info" version="v1.30.1"
	I0603 12:12:18.809410       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0603 12:12:18.811029       1 config.go:192] "Starting service config controller"
	I0603 12:12:18.811063       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0603 12:12:18.811143       1 config.go:101] "Starting endpoint slice config controller"
	I0603 12:12:18.811161       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0603 12:12:18.811900       1 config.go:319] "Starting node config controller"
	I0603 12:12:18.811936       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0603 12:12:18.912003       1 shared_informer.go:320] Caches are synced for node config
	I0603 12:12:18.912063       1 shared_informer.go:320] Caches are synced for service config
	I0603 12:12:18.912102       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [458d45e7061f123547eb39c6b4c985a9a06f0399195f1c6137493845029be051] <==
	W0603 12:12:00.035055       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0603 12:12:00.035091       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0603 12:12:00.035143       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0603 12:12:00.035172       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0603 12:12:00.854395       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0603 12:12:00.854453       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0603 12:12:00.891275       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0603 12:12:00.891337       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0603 12:12:00.948612       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0603 12:12:00.949232       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0603 12:12:00.996342       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0603 12:12:00.996685       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0603 12:12:01.033007       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0603 12:12:01.033059       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0603 12:12:01.045058       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0603 12:12:01.045142       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0603 12:12:01.177617       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0603 12:12:01.177646       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0603 12:12:01.244165       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0603 12:12:01.244240       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0603 12:12:01.262102       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0603 12:12:01.262468       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0603 12:12:01.298028       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0603 12:12:01.298143       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0603 12:12:04.517128       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jun 03 12:19:02 default-k8s-diff-port-196710 kubelet[3904]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jun 03 12:19:02 default-k8s-diff-port-196710 kubelet[3904]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jun 03 12:19:02 default-k8s-diff-port-196710 kubelet[3904]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 03 12:19:02 default-k8s-diff-port-196710 kubelet[3904]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jun 03 12:19:05 default-k8s-diff-port-196710 kubelet[3904]: E0603 12:19:05.866841    3904 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-lxvbp" podUID="36c7a3c5-6b64-42d2-93ed-2f6cf8234a7f"
	Jun 03 12:19:16 default-k8s-diff-port-196710 kubelet[3904]: E0603 12:19:16.867211    3904 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-lxvbp" podUID="36c7a3c5-6b64-42d2-93ed-2f6cf8234a7f"
	Jun 03 12:19:27 default-k8s-diff-port-196710 kubelet[3904]: E0603 12:19:27.867014    3904 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-lxvbp" podUID="36c7a3c5-6b64-42d2-93ed-2f6cf8234a7f"
	Jun 03 12:19:42 default-k8s-diff-port-196710 kubelet[3904]: E0603 12:19:42.867983    3904 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-lxvbp" podUID="36c7a3c5-6b64-42d2-93ed-2f6cf8234a7f"
	Jun 03 12:19:56 default-k8s-diff-port-196710 kubelet[3904]: E0603 12:19:56.871630    3904 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-lxvbp" podUID="36c7a3c5-6b64-42d2-93ed-2f6cf8234a7f"
	Jun 03 12:20:02 default-k8s-diff-port-196710 kubelet[3904]: E0603 12:20:02.885447    3904 iptables.go:577] "Could not set up iptables canary" err=<
	Jun 03 12:20:02 default-k8s-diff-port-196710 kubelet[3904]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jun 03 12:20:02 default-k8s-diff-port-196710 kubelet[3904]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jun 03 12:20:02 default-k8s-diff-port-196710 kubelet[3904]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 03 12:20:02 default-k8s-diff-port-196710 kubelet[3904]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jun 03 12:20:08 default-k8s-diff-port-196710 kubelet[3904]: E0603 12:20:08.867788    3904 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-lxvbp" podUID="36c7a3c5-6b64-42d2-93ed-2f6cf8234a7f"
	Jun 03 12:20:20 default-k8s-diff-port-196710 kubelet[3904]: E0603 12:20:20.868866    3904 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-lxvbp" podUID="36c7a3c5-6b64-42d2-93ed-2f6cf8234a7f"
	Jun 03 12:20:33 default-k8s-diff-port-196710 kubelet[3904]: E0603 12:20:33.866815    3904 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-lxvbp" podUID="36c7a3c5-6b64-42d2-93ed-2f6cf8234a7f"
	Jun 03 12:20:48 default-k8s-diff-port-196710 kubelet[3904]: E0603 12:20:48.866321    3904 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-lxvbp" podUID="36c7a3c5-6b64-42d2-93ed-2f6cf8234a7f"
	Jun 03 12:21:01 default-k8s-diff-port-196710 kubelet[3904]: E0603 12:21:01.867247    3904 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-lxvbp" podUID="36c7a3c5-6b64-42d2-93ed-2f6cf8234a7f"
	Jun 03 12:21:02 default-k8s-diff-port-196710 kubelet[3904]: E0603 12:21:02.886290    3904 iptables.go:577] "Could not set up iptables canary" err=<
	Jun 03 12:21:02 default-k8s-diff-port-196710 kubelet[3904]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jun 03 12:21:02 default-k8s-diff-port-196710 kubelet[3904]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jun 03 12:21:02 default-k8s-diff-port-196710 kubelet[3904]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 03 12:21:02 default-k8s-diff-port-196710 kubelet[3904]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jun 03 12:21:15 default-k8s-diff-port-196710 kubelet[3904]: E0603 12:21:15.866321    3904 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-lxvbp" podUID="36c7a3c5-6b64-42d2-93ed-2f6cf8234a7f"
	
	
	==> storage-provisioner [3f837113d05b0531663797495d73bc896224b9a6ab02d0fe3c02cd3c156895be] <==
	I0603 12:12:19.024003       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0603 12:12:19.087620       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0603 12:12:19.087800       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0603 12:12:19.200059       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0603 12:12:19.200196       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-196710_1de1ea92-7376-4d57-816b-e247ab67fb90!
	I0603 12:12:19.202832       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"6ae0bc27-769d-44cb-9d0e-4216ece97ab8", APIVersion:"v1", ResourceVersion:"450", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-196710_1de1ea92-7376-4d57-816b-e247ab67fb90 became leader
	I0603 12:12:19.300509       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-196710_1de1ea92-7376-4d57-816b-e247ab67fb90!
	

                                                
                                                
-- /stdout --
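The kubelet entries in the log above repeatedly fail the iptables canary because ip6tables cannot initialize the nat table, which the error message itself attributes to a missing kernel module ("do you need to insmod?"). A hypothetical spot check from the host, reusing the binary and profile name that appear in this report (not a command the test itself runs), would be:

	out/minikube-linux-amd64 -p default-k8s-diff-port-196710 ssh "lsmod | grep ip6table || echo ip6table modules not loaded"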
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-196710 -n default-k8s-diff-port-196710
helpers_test.go:261: (dbg) Run:  kubectl --context default-k8s-diff-port-196710 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-569cc877fc-lxvbp
helpers_test.go:274: ======> post-mortem[TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context default-k8s-diff-port-196710 describe pod metrics-server-569cc877fc-lxvbp
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-196710 describe pod metrics-server-569cc877fc-lxvbp: exit status 1 (62.58786ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-569cc877fc-lxvbp" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context default-k8s-diff-port-196710 describe pod metrics-server-569cc877fc-lxvbp: exit status 1
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (546.09s)
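Note that the only non-running pod at teardown, metrics-server-569cc877fc-lxvbp, is backing off pulling fake.domain/registry.k8s.io/echoserver:1.4; that registry is deliberately bogus (the addon is enabled with --registries=MetricsServer=fake.domain, as the Audit table below shows), so the ImagePullBackOff is expected noise rather than the cause of this timeout. A hypothetical way to confirm the image reference while the cluster is still up, assuming the standard metrics-server deployment name:

	kubectl --context default-k8s-diff-port-196710 -n kube-system get deployment metrics-server -o jsonpath='{.spec.template.spec.containers[*].image}'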

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (544.23s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E0603 12:13:48.589975   15028 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/auto-034991/client.crt: no such file or directory
E0603 12:14:24.405714   15028 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/custom-flannel-034991/client.crt: no such file or directory
E0603 12:14:34.178017   15028 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/kindnet-034991/client.crt: no such file or directory
E0603 12:15:11.634236   15028 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/auto-034991/client.crt: no such file or directory
E0603 12:15:15.090086   15028 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/addons-926744/client.crt: no such file or directory
E0603 12:15:19.213422   15028 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/functional-835483/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:274: ***** TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-725022 -n embed-certs-725022
start_stop_delete_test.go:274: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2024-06-03 12:22:07.026149072 +0000 UTC m=+6226.930568952
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
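The wait that times out here is the test's 9m0s poll for a pod labelled k8s-app=kubernetes-dashboard in the kubernetes-dashboard namespace. A rough manual equivalent, assuming the same kube context is still reachable, would be:

	kubectl --context embed-certs-725022 -n kubernetes-dashboard wait --for=condition=Ready pod -l k8s-app=kubernetes-dashboard --timeout=9m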
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-725022 -n embed-certs-725022
helpers_test.go:244: <<< TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-725022 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-725022 logs -n 25: (2.088033224s)
helpers_test.go:252: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p calico-034991 sudo cat                              | calico-034991                | jenkins | v1.33.1 | 03 Jun 24 11:58 UTC | 03 Jun 24 11:58 UTC |
	|         | /etc/containerd/config.toml                            |                              |         |         |                     |                     |
	| ssh     | -p calico-034991 sudo                                  | calico-034991                | jenkins | v1.33.1 | 03 Jun 24 11:58 UTC | 03 Jun 24 11:58 UTC |
	|         | containerd config dump                                 |                              |         |         |                     |                     |
	| ssh     | -p calico-034991 sudo                                  | calico-034991                | jenkins | v1.33.1 | 03 Jun 24 11:58 UTC | 03 Jun 24 11:58 UTC |
	|         | systemctl status crio --all                            |                              |         |         |                     |                     |
	|         | --full --no-pager                                      |                              |         |         |                     |                     |
	| ssh     | -p calico-034991 sudo                                  | calico-034991                | jenkins | v1.33.1 | 03 Jun 24 11:58 UTC | 03 Jun 24 11:58 UTC |
	|         | systemctl cat crio --no-pager                          |                              |         |         |                     |                     |
	| ssh     | -p calico-034991 sudo find                             | calico-034991                | jenkins | v1.33.1 | 03 Jun 24 11:58 UTC | 03 Jun 24 11:58 UTC |
	|         | /etc/crio -type f -exec sh -c                          |                              |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                   |                              |         |         |                     |                     |
	| ssh     | -p calico-034991 sudo crio                             | calico-034991                | jenkins | v1.33.1 | 03 Jun 24 11:58 UTC | 03 Jun 24 11:58 UTC |
	|         | config                                                 |                              |         |         |                     |                     |
	| delete  | -p calico-034991                                       | calico-034991                | jenkins | v1.33.1 | 03 Jun 24 11:58 UTC | 03 Jun 24 11:58 UTC |
	| delete  | -p                                                     | disable-driver-mounts-231568 | jenkins | v1.33.1 | 03 Jun 24 11:58 UTC | 03 Jun 24 11:58 UTC |
	|         | disable-driver-mounts-231568                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-196710 | jenkins | v1.33.1 | 03 Jun 24 11:58 UTC | 03 Jun 24 11:59 UTC |
	|         | default-k8s-diff-port-196710                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-725022            | embed-certs-725022           | jenkins | v1.33.1 | 03 Jun 24 11:59 UTC | 03 Jun 24 11:59 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-725022                                  | embed-certs-725022           | jenkins | v1.33.1 | 03 Jun 24 11:59 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-602118             | no-preload-602118            | jenkins | v1.33.1 | 03 Jun 24 11:59 UTC | 03 Jun 24 11:59 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-602118                                   | no-preload-602118            | jenkins | v1.33.1 | 03 Jun 24 11:59 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-196710  | default-k8s-diff-port-196710 | jenkins | v1.33.1 | 03 Jun 24 11:59 UTC | 03 Jun 24 11:59 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-196710 | jenkins | v1.33.1 | 03 Jun 24 11:59 UTC |                     |
	|         | default-k8s-diff-port-196710                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-905554        | old-k8s-version-905554       | jenkins | v1.33.1 | 03 Jun 24 12:01 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-725022                 | embed-certs-725022           | jenkins | v1.33.1 | 03 Jun 24 12:01 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-725022                                  | embed-certs-725022           | jenkins | v1.33.1 | 03 Jun 24 12:01 UTC | 03 Jun 24 12:13 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.1                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-602118                  | no-preload-602118            | jenkins | v1.33.1 | 03 Jun 24 12:01 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-602118                                   | no-preload-602118            | jenkins | v1.33.1 | 03 Jun 24 12:02 UTC | 03 Jun 24 12:12 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.1                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-196710       | default-k8s-diff-port-196710 | jenkins | v1.33.1 | 03 Jun 24 12:02 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-196710 | jenkins | v1.33.1 | 03 Jun 24 12:02 UTC | 03 Jun 24 12:12 UTC |
	|         | default-k8s-diff-port-196710                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.1                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-905554                              | old-k8s-version-905554       | jenkins | v1.33.1 | 03 Jun 24 12:02 UTC | 03 Jun 24 12:02 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-905554             | old-k8s-version-905554       | jenkins | v1.33.1 | 03 Jun 24 12:02 UTC | 03 Jun 24 12:03 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-905554                              | old-k8s-version-905554       | jenkins | v1.33.1 | 03 Jun 24 12:03 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/06/03 12:03:00
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0603 12:03:00.091233   73662 out.go:291] Setting OutFile to fd 1 ...
	I0603 12:03:00.091511   73662 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0603 12:03:00.091522   73662 out.go:304] Setting ErrFile to fd 2...
	I0603 12:03:00.091533   73662 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0603 12:03:00.091747   73662 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19008-7755/.minikube/bin
	I0603 12:03:00.092302   73662 out.go:298] Setting JSON to false
	I0603 12:03:00.093203   73662 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":6325,"bootTime":1717409855,"procs":200,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1060-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0603 12:03:00.093258   73662 start.go:139] virtualization: kvm guest
	I0603 12:03:00.095496   73662 out.go:177] * [old-k8s-version-905554] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0603 12:03:00.097136   73662 out.go:177]   - MINIKUBE_LOCATION=19008
	I0603 12:03:00.097143   73662 notify.go:220] Checking for updates...
	I0603 12:03:00.098729   73662 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0603 12:03:00.100123   73662 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19008-7755/kubeconfig
	I0603 12:03:00.101401   73662 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19008-7755/.minikube
	I0603 12:03:00.102776   73662 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0603 12:03:00.104123   73662 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0603 12:03:00.105823   73662 config.go:182] Loaded profile config "old-k8s-version-905554": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0603 12:03:00.106265   73662 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 12:03:00.106313   73662 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 12:03:00.120941   73662 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43635
	I0603 12:03:00.121275   73662 main.go:141] libmachine: () Calling .GetVersion
	I0603 12:03:00.121783   73662 main.go:141] libmachine: Using API Version  1
	I0603 12:03:00.121807   73662 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 12:03:00.122090   73662 main.go:141] libmachine: () Calling .GetMachineName
	I0603 12:03:00.122253   73662 main.go:141] libmachine: (old-k8s-version-905554) Calling .DriverName
	I0603 12:03:00.124037   73662 out.go:177] * Kubernetes 1.30.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.1
	I0603 12:03:00.125329   73662 driver.go:392] Setting default libvirt URI to qemu:///system
	I0603 12:03:00.125608   73662 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 12:03:00.125644   73662 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 12:03:00.139840   73662 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46571
	I0603 12:03:00.140215   73662 main.go:141] libmachine: () Calling .GetVersion
	I0603 12:03:00.140599   73662 main.go:141] libmachine: Using API Version  1
	I0603 12:03:00.140623   73662 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 12:03:00.140906   73662 main.go:141] libmachine: () Calling .GetMachineName
	I0603 12:03:00.141069   73662 main.go:141] libmachine: (old-k8s-version-905554) Calling .DriverName
	I0603 12:03:00.174375   73662 out.go:177] * Using the kvm2 driver based on existing profile
	I0603 12:03:00.175650   73662 start.go:297] selected driver: kvm2
	I0603 12:03:00.175667   73662 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-905554 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{K
ubernetesVersion:v1.20.0 ClusterName:old-k8s-version-905554 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.155 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:2628
0h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0603 12:03:00.175770   73662 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0603 12:03:00.176396   73662 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0603 12:03:00.176476   73662 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19008-7755/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0603 12:03:00.191380   73662 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0603 12:03:00.191738   73662 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0603 12:03:00.191796   73662 cni.go:84] Creating CNI manager for ""
	I0603 12:03:00.191809   73662 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0603 12:03:00.191847   73662 start.go:340] cluster config:
	{Name:old-k8s-version-905554 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-905554 Namespace:default
APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.155 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2
000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0603 12:03:00.191975   73662 iso.go:125] acquiring lock: {Name:mkdc8e745fc6a0fd8e502f6ad2510510ae9abf27 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0603 12:03:00.193899   73662 out.go:177] * Starting "old-k8s-version-905554" primary control-plane node in "old-k8s-version-905554" cluster
	I0603 12:03:04.175308   72964 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.245:22: connect: no route to host
	I0603 12:03:00.195191   73662 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0603 12:03:00.195231   73662 preload.go:147] Found local preload: /home/jenkins/minikube-integration/19008-7755/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0603 12:03:00.195240   73662 cache.go:56] Caching tarball of preloaded images
	I0603 12:03:00.195331   73662 preload.go:173] Found /home/jenkins/minikube-integration/19008-7755/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0603 12:03:00.195345   73662 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0603 12:03:00.195441   73662 profile.go:143] Saving config to /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/old-k8s-version-905554/config.json ...
	I0603 12:03:00.195620   73662 start.go:360] acquireMachinesLock for old-k8s-version-905554: {Name:mk7389fd163f37e28f7d4d842bab151f7e27bc7c Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0603 12:03:07.247321   72964 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.245:22: connect: no route to host
	I0603 12:03:13.327307   72964 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.245:22: connect: no route to host
	I0603 12:03:16.399349   72964 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.245:22: connect: no route to host
	I0603 12:03:22.479291   72964 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.245:22: connect: no route to host
	I0603 12:03:25.551304   72964 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.245:22: connect: no route to host
	I0603 12:03:31.631290   72964 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.245:22: connect: no route to host
	I0603 12:03:34.703297   72964 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.245:22: connect: no route to host
	I0603 12:03:40.783313   72964 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.245:22: connect: no route to host
	I0603 12:03:43.855312   72964 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.245:22: connect: no route to host
	I0603 12:03:49.935253   72964 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.245:22: connect: no route to host
	I0603 12:03:53.007321   72964 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.245:22: connect: no route to host
	I0603 12:03:59.087310   72964 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.245:22: connect: no route to host
	I0603 12:04:02.159408   72964 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.245:22: connect: no route to host
	I0603 12:04:08.239374   72964 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.245:22: connect: no route to host
	I0603 12:04:11.311346   72964 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.245:22: connect: no route to host
	I0603 12:04:17.391313   72964 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.245:22: connect: no route to host
	I0603 12:04:20.463280   72964 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.245:22: connect: no route to host
	I0603 12:04:26.543359   72964 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.245:22: connect: no route to host
	I0603 12:04:29.615273   72964 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.245:22: connect: no route to host
	I0603 12:04:35.695325   72964 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.245:22: connect: no route to host
	I0603 12:04:38.767328   72964 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.245:22: connect: no route to host
	I0603 12:04:44.847321   72964 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.245:22: connect: no route to host
	I0603 12:04:47.919323   72964 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.245:22: connect: no route to host
	I0603 12:04:53.999275   72964 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.245:22: connect: no route to host
	I0603 12:04:57.071278   72964 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.245:22: connect: no route to host
	I0603 12:05:03.151359   72964 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.245:22: connect: no route to host
	I0603 12:05:06.223409   72964 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.245:22: connect: no route to host
	I0603 12:05:12.303278   72964 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.245:22: connect: no route to host
	I0603 12:05:15.375349   72964 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.245:22: connect: no route to host
	I0603 12:05:21.455288   72964 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.245:22: connect: no route to host
	I0603 12:05:24.527374   72964 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.245:22: connect: no route to host
	I0603 12:05:30.607297   72964 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.245:22: connect: no route to host
	I0603 12:05:33.679325   72964 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.245:22: connect: no route to host
	I0603 12:05:39.759247   72964 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.245:22: connect: no route to host
	I0603 12:05:42.831304   72964 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.245:22: connect: no route to host
	I0603 12:05:48.911327   72964 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.245:22: connect: no route to host
	I0603 12:05:51.983403   72964 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.245:22: connect: no route to host
	I0603 12:05:58.063364   72964 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.245:22: connect: no route to host
	I0603 12:06:01.135268   72964 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.245:22: connect: no route to host
	I0603 12:06:07.215311   72964 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.245:22: connect: no route to host
	I0603 12:06:10.287358   72964 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.245:22: connect: no route to host
	I0603 12:06:16.367324   72964 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.245:22: connect: no route to host
	I0603 12:06:19.439350   72964 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.245:22: connect: no route to host
	I0603 12:06:22.443361   73179 start.go:364] duration metric: took 4m16.965076383s to acquireMachinesLock for "no-preload-602118"
	I0603 12:06:22.443417   73179 start.go:96] Skipping create...Using existing machine configuration
	I0603 12:06:22.443423   73179 fix.go:54] fixHost starting: 
	I0603 12:06:22.443783   73179 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 12:06:22.443812   73179 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 12:06:22.458838   73179 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35011
	I0603 12:06:22.459247   73179 main.go:141] libmachine: () Calling .GetVersion
	I0603 12:06:22.459645   73179 main.go:141] libmachine: Using API Version  1
	I0603 12:06:22.459662   73179 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 12:06:22.459991   73179 main.go:141] libmachine: () Calling .GetMachineName
	I0603 12:06:22.460181   73179 main.go:141] libmachine: (no-preload-602118) Calling .DriverName
	I0603 12:06:22.460333   73179 main.go:141] libmachine: (no-preload-602118) Calling .GetState
	I0603 12:06:22.461743   73179 fix.go:112] recreateIfNeeded on no-preload-602118: state=Stopped err=<nil>
	I0603 12:06:22.461765   73179 main.go:141] libmachine: (no-preload-602118) Calling .DriverName
	W0603 12:06:22.461946   73179 fix.go:138] unexpected machine state, will restart: <nil>
	I0603 12:06:22.463492   73179 out.go:177] * Restarting existing kvm2 VM for "no-preload-602118" ...
	I0603 12:06:22.440994   72964 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0603 12:06:22.441029   72964 main.go:141] libmachine: (embed-certs-725022) Calling .GetMachineName
	I0603 12:06:22.441366   72964 buildroot.go:166] provisioning hostname "embed-certs-725022"
	I0603 12:06:22.441382   72964 main.go:141] libmachine: (embed-certs-725022) Calling .GetMachineName
	I0603 12:06:22.441594   72964 main.go:141] libmachine: (embed-certs-725022) Calling .GetSSHHostname
	I0603 12:06:22.443211   72964 machine.go:97] duration metric: took 4m37.428820472s to provisionDockerMachine
	I0603 12:06:22.443252   72964 fix.go:56] duration metric: took 4m37.449227063s for fixHost
	I0603 12:06:22.443258   72964 start.go:83] releasing machines lock for "embed-certs-725022", held for 4m37.449246886s
	W0603 12:06:22.443279   72964 start.go:713] error starting host: provision: host is not running
	W0603 12:06:22.443377   72964 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0603 12:06:22.443391   72964 start.go:728] Will try again in 5 seconds ...
	I0603 12:06:22.464734   73179 main.go:141] libmachine: (no-preload-602118) Calling .Start
	I0603 12:06:22.464901   73179 main.go:141] libmachine: (no-preload-602118) Ensuring networks are active...
	I0603 12:06:22.465632   73179 main.go:141] libmachine: (no-preload-602118) Ensuring network default is active
	I0603 12:06:22.465908   73179 main.go:141] libmachine: (no-preload-602118) Ensuring network mk-no-preload-602118 is active
	I0603 12:06:22.466273   73179 main.go:141] libmachine: (no-preload-602118) Getting domain xml...
	I0603 12:06:22.466923   73179 main.go:141] libmachine: (no-preload-602118) Creating domain...
	I0603 12:06:23.644255   73179 main.go:141] libmachine: (no-preload-602118) Waiting to get IP...
	I0603 12:06:23.645290   73179 main.go:141] libmachine: (no-preload-602118) DBG | domain no-preload-602118 has defined MAC address 52:54:00:ac:6c:91 in network mk-no-preload-602118
	I0603 12:06:23.645661   73179 main.go:141] libmachine: (no-preload-602118) DBG | unable to find current IP address of domain no-preload-602118 in network mk-no-preload-602118
	I0603 12:06:23.645846   73179 main.go:141] libmachine: (no-preload-602118) DBG | I0603 12:06:23.645673   74346 retry.go:31] will retry after 270.126449ms: waiting for machine to come up
	I0603 12:06:23.917313   73179 main.go:141] libmachine: (no-preload-602118) DBG | domain no-preload-602118 has defined MAC address 52:54:00:ac:6c:91 in network mk-no-preload-602118
	I0603 12:06:23.917691   73179 main.go:141] libmachine: (no-preload-602118) DBG | unable to find current IP address of domain no-preload-602118 in network mk-no-preload-602118
	I0603 12:06:23.917724   73179 main.go:141] libmachine: (no-preload-602118) DBG | I0603 12:06:23.917635   74346 retry.go:31] will retry after 385.827167ms: waiting for machine to come up
	I0603 12:06:24.305342   73179 main.go:141] libmachine: (no-preload-602118) DBG | domain no-preload-602118 has defined MAC address 52:54:00:ac:6c:91 in network mk-no-preload-602118
	I0603 12:06:24.305787   73179 main.go:141] libmachine: (no-preload-602118) DBG | unable to find current IP address of domain no-preload-602118 in network mk-no-preload-602118
	I0603 12:06:24.305809   73179 main.go:141] libmachine: (no-preload-602118) DBG | I0603 12:06:24.305756   74346 retry.go:31] will retry after 361.435978ms: waiting for machine to come up
	I0603 12:06:24.669132   73179 main.go:141] libmachine: (no-preload-602118) DBG | domain no-preload-602118 has defined MAC address 52:54:00:ac:6c:91 in network mk-no-preload-602118
	I0603 12:06:24.669489   73179 main.go:141] libmachine: (no-preload-602118) DBG | unable to find current IP address of domain no-preload-602118 in network mk-no-preload-602118
	I0603 12:06:24.669510   73179 main.go:141] libmachine: (no-preload-602118) DBG | I0603 12:06:24.669460   74346 retry.go:31] will retry after 420.041485ms: waiting for machine to come up
	I0603 12:06:25.090925   73179 main.go:141] libmachine: (no-preload-602118) DBG | domain no-preload-602118 has defined MAC address 52:54:00:ac:6c:91 in network mk-no-preload-602118
	I0603 12:06:25.091348   73179 main.go:141] libmachine: (no-preload-602118) DBG | unable to find current IP address of domain no-preload-602118 in network mk-no-preload-602118
	I0603 12:06:25.091378   73179 main.go:141] libmachine: (no-preload-602118) DBG | I0603 12:06:25.091293   74346 retry.go:31] will retry after 624.215107ms: waiting for machine to come up
	I0603 12:06:27.445060   72964 start.go:360] acquireMachinesLock for embed-certs-725022: {Name:mk7389fd163f37e28f7d4d842bab151f7e27bc7c Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0603 12:06:25.717004   73179 main.go:141] libmachine: (no-preload-602118) DBG | domain no-preload-602118 has defined MAC address 52:54:00:ac:6c:91 in network mk-no-preload-602118
	I0603 12:06:25.717428   73179 main.go:141] libmachine: (no-preload-602118) DBG | unable to find current IP address of domain no-preload-602118 in network mk-no-preload-602118
	I0603 12:06:25.717459   73179 main.go:141] libmachine: (no-preload-602118) DBG | I0603 12:06:25.717376   74346 retry.go:31] will retry after 589.80788ms: waiting for machine to come up
	I0603 12:06:26.309117   73179 main.go:141] libmachine: (no-preload-602118) DBG | domain no-preload-602118 has defined MAC address 52:54:00:ac:6c:91 in network mk-no-preload-602118
	I0603 12:06:26.309553   73179 main.go:141] libmachine: (no-preload-602118) DBG | unable to find current IP address of domain no-preload-602118 in network mk-no-preload-602118
	I0603 12:06:26.309573   73179 main.go:141] libmachine: (no-preload-602118) DBG | I0603 12:06:26.309525   74346 retry.go:31] will retry after 1.045937243s: waiting for machine to come up
	I0603 12:06:27.356628   73179 main.go:141] libmachine: (no-preload-602118) DBG | domain no-preload-602118 has defined MAC address 52:54:00:ac:6c:91 in network mk-no-preload-602118
	I0603 12:06:27.357021   73179 main.go:141] libmachine: (no-preload-602118) DBG | unable to find current IP address of domain no-preload-602118 in network mk-no-preload-602118
	I0603 12:06:27.357091   73179 main.go:141] libmachine: (no-preload-602118) DBG | I0603 12:06:27.357005   74346 retry.go:31] will retry after 1.111448638s: waiting for machine to come up
	I0603 12:06:28.469530   73179 main.go:141] libmachine: (no-preload-602118) DBG | domain no-preload-602118 has defined MAC address 52:54:00:ac:6c:91 in network mk-no-preload-602118
	I0603 12:06:28.469988   73179 main.go:141] libmachine: (no-preload-602118) DBG | unable to find current IP address of domain no-preload-602118 in network mk-no-preload-602118
	I0603 12:06:28.470019   73179 main.go:141] libmachine: (no-preload-602118) DBG | I0603 12:06:28.469937   74346 retry.go:31] will retry after 1.80245369s: waiting for machine to come up
	I0603 12:06:30.274889   73179 main.go:141] libmachine: (no-preload-602118) DBG | domain no-preload-602118 has defined MAC address 52:54:00:ac:6c:91 in network mk-no-preload-602118
	I0603 12:06:30.275389   73179 main.go:141] libmachine: (no-preload-602118) DBG | unable to find current IP address of domain no-preload-602118 in network mk-no-preload-602118
	I0603 12:06:30.275422   73179 main.go:141] libmachine: (no-preload-602118) DBG | I0603 12:06:30.275339   74346 retry.go:31] will retry after 1.896022361s: waiting for machine to come up
	I0603 12:06:32.173697   73179 main.go:141] libmachine: (no-preload-602118) DBG | domain no-preload-602118 has defined MAC address 52:54:00:ac:6c:91 in network mk-no-preload-602118
	I0603 12:06:32.174116   73179 main.go:141] libmachine: (no-preload-602118) DBG | unable to find current IP address of domain no-preload-602118 in network mk-no-preload-602118
	I0603 12:06:32.174147   73179 main.go:141] libmachine: (no-preload-602118) DBG | I0603 12:06:32.174065   74346 retry.go:31] will retry after 2.13920116s: waiting for machine to come up
	I0603 12:06:34.315196   73179 main.go:141] libmachine: (no-preload-602118) DBG | domain no-preload-602118 has defined MAC address 52:54:00:ac:6c:91 in network mk-no-preload-602118
	I0603 12:06:34.315598   73179 main.go:141] libmachine: (no-preload-602118) DBG | unable to find current IP address of domain no-preload-602118 in network mk-no-preload-602118
	I0603 12:06:34.315629   73179 main.go:141] libmachine: (no-preload-602118) DBG | I0603 12:06:34.315556   74346 retry.go:31] will retry after 3.168755933s: waiting for machine to come up
	I0603 12:06:37.485424   73179 main.go:141] libmachine: (no-preload-602118) DBG | domain no-preload-602118 has defined MAC address 52:54:00:ac:6c:91 in network mk-no-preload-602118
	I0603 12:06:37.485804   73179 main.go:141] libmachine: (no-preload-602118) DBG | unable to find current IP address of domain no-preload-602118 in network mk-no-preload-602118
	I0603 12:06:37.485840   73179 main.go:141] libmachine: (no-preload-602118) DBG | I0603 12:06:37.485767   74346 retry.go:31] will retry after 3.278336467s: waiting for machine to come up
	I0603 12:06:42.080144   73294 start.go:364] duration metric: took 4m27.397961658s to acquireMachinesLock for "default-k8s-diff-port-196710"
	I0603 12:06:42.080213   73294 start.go:96] Skipping create...Using existing machine configuration
	I0603 12:06:42.080220   73294 fix.go:54] fixHost starting: 
	I0603 12:06:42.080611   73294 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 12:06:42.080640   73294 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 12:06:42.096874   73294 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35397
	I0603 12:06:42.097286   73294 main.go:141] libmachine: () Calling .GetVersion
	I0603 12:06:42.097763   73294 main.go:141] libmachine: Using API Version  1
	I0603 12:06:42.097789   73294 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 12:06:42.098191   73294 main.go:141] libmachine: () Calling .GetMachineName
	I0603 12:06:42.098383   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .DriverName
	I0603 12:06:42.098513   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .GetState
	I0603 12:06:42.099866   73294 fix.go:112] recreateIfNeeded on default-k8s-diff-port-196710: state=Stopped err=<nil>
	I0603 12:06:42.099890   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .DriverName
	W0603 12:06:42.100034   73294 fix.go:138] unexpected machine state, will restart: <nil>
	I0603 12:06:42.102388   73294 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-196710" ...
	I0603 12:06:40.768113   73179 main.go:141] libmachine: (no-preload-602118) DBG | domain no-preload-602118 has defined MAC address 52:54:00:ac:6c:91 in network mk-no-preload-602118
	I0603 12:06:40.768689   73179 main.go:141] libmachine: (no-preload-602118) Found IP for machine: 192.168.50.245
	I0603 12:06:40.768705   73179 main.go:141] libmachine: (no-preload-602118) Reserving static IP address...
	I0603 12:06:40.768717   73179 main.go:141] libmachine: (no-preload-602118) DBG | domain no-preload-602118 has current primary IP address 192.168.50.245 and MAC address 52:54:00:ac:6c:91 in network mk-no-preload-602118
	I0603 12:06:40.769262   73179 main.go:141] libmachine: (no-preload-602118) Reserved static IP address: 192.168.50.245
	I0603 12:06:40.769291   73179 main.go:141] libmachine: (no-preload-602118) Waiting for SSH to be available...
	I0603 12:06:40.769306   73179 main.go:141] libmachine: (no-preload-602118) DBG | found host DHCP lease matching {name: "no-preload-602118", mac: "52:54:00:ac:6c:91", ip: "192.168.50.245"} in network mk-no-preload-602118: {Iface:virbr2 ExpiryTime:2024-06-03 13:06:33 +0000 UTC Type:0 Mac:52:54:00:ac:6c:91 Iaid: IPaddr:192.168.50.245 Prefix:24 Hostname:no-preload-602118 Clientid:01:52:54:00:ac:6c:91}
	I0603 12:06:40.769324   73179 main.go:141] libmachine: (no-preload-602118) DBG | skip adding static IP to network mk-no-preload-602118 - found existing host DHCP lease matching {name: "no-preload-602118", mac: "52:54:00:ac:6c:91", ip: "192.168.50.245"}
	I0603 12:06:40.769336   73179 main.go:141] libmachine: (no-preload-602118) DBG | Getting to WaitForSSH function...
	I0603 12:06:40.771708   73179 main.go:141] libmachine: (no-preload-602118) DBG | domain no-preload-602118 has defined MAC address 52:54:00:ac:6c:91 in network mk-no-preload-602118
	I0603 12:06:40.772029   73179 main.go:141] libmachine: (no-preload-602118) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:6c:91", ip: ""} in network mk-no-preload-602118: {Iface:virbr2 ExpiryTime:2024-06-03 13:06:33 +0000 UTC Type:0 Mac:52:54:00:ac:6c:91 Iaid: IPaddr:192.168.50.245 Prefix:24 Hostname:no-preload-602118 Clientid:01:52:54:00:ac:6c:91}
	I0603 12:06:40.772056   73179 main.go:141] libmachine: (no-preload-602118) DBG | domain no-preload-602118 has defined IP address 192.168.50.245 and MAC address 52:54:00:ac:6c:91 in network mk-no-preload-602118
	I0603 12:06:40.772179   73179 main.go:141] libmachine: (no-preload-602118) DBG | Using SSH client type: external
	I0603 12:06:40.772203   73179 main.go:141] libmachine: (no-preload-602118) DBG | Using SSH private key: /home/jenkins/minikube-integration/19008-7755/.minikube/machines/no-preload-602118/id_rsa (-rw-------)
	I0603 12:06:40.772247   73179 main.go:141] libmachine: (no-preload-602118) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.245 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19008-7755/.minikube/machines/no-preload-602118/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0603 12:06:40.772276   73179 main.go:141] libmachine: (no-preload-602118) DBG | About to run SSH command:
	I0603 12:06:40.772292   73179 main.go:141] libmachine: (no-preload-602118) DBG | exit 0
	I0603 12:06:40.898941   73179 main.go:141] libmachine: (no-preload-602118) DBG | SSH cmd err, output: <nil>: 
	I0603 12:06:40.899308   73179 main.go:141] libmachine: (no-preload-602118) Calling .GetConfigRaw
	I0603 12:06:40.899900   73179 main.go:141] libmachine: (no-preload-602118) Calling .GetIP
	I0603 12:06:40.902486   73179 main.go:141] libmachine: (no-preload-602118) DBG | domain no-preload-602118 has defined MAC address 52:54:00:ac:6c:91 in network mk-no-preload-602118
	I0603 12:06:40.902835   73179 main.go:141] libmachine: (no-preload-602118) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:6c:91", ip: ""} in network mk-no-preload-602118: {Iface:virbr2 ExpiryTime:2024-06-03 13:06:33 +0000 UTC Type:0 Mac:52:54:00:ac:6c:91 Iaid: IPaddr:192.168.50.245 Prefix:24 Hostname:no-preload-602118 Clientid:01:52:54:00:ac:6c:91}
	I0603 12:06:40.902863   73179 main.go:141] libmachine: (no-preload-602118) DBG | domain no-preload-602118 has defined IP address 192.168.50.245 and MAC address 52:54:00:ac:6c:91 in network mk-no-preload-602118
	I0603 12:06:40.903133   73179 profile.go:143] Saving config to /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/no-preload-602118/config.json ...
	I0603 12:06:40.903331   73179 machine.go:94] provisionDockerMachine start ...
	I0603 12:06:40.903348   73179 main.go:141] libmachine: (no-preload-602118) Calling .DriverName
	I0603 12:06:40.903530   73179 main.go:141] libmachine: (no-preload-602118) Calling .GetSSHHostname
	I0603 12:06:40.905503   73179 main.go:141] libmachine: (no-preload-602118) DBG | domain no-preload-602118 has defined MAC address 52:54:00:ac:6c:91 in network mk-no-preload-602118
	I0603 12:06:40.905783   73179 main.go:141] libmachine: (no-preload-602118) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:6c:91", ip: ""} in network mk-no-preload-602118: {Iface:virbr2 ExpiryTime:2024-06-03 13:06:33 +0000 UTC Type:0 Mac:52:54:00:ac:6c:91 Iaid: IPaddr:192.168.50.245 Prefix:24 Hostname:no-preload-602118 Clientid:01:52:54:00:ac:6c:91}
	I0603 12:06:40.905816   73179 main.go:141] libmachine: (no-preload-602118) DBG | domain no-preload-602118 has defined IP address 192.168.50.245 and MAC address 52:54:00:ac:6c:91 in network mk-no-preload-602118
	I0603 12:06:40.905911   73179 main.go:141] libmachine: (no-preload-602118) Calling .GetSSHPort
	I0603 12:06:40.906094   73179 main.go:141] libmachine: (no-preload-602118) Calling .GetSSHKeyPath
	I0603 12:06:40.906253   73179 main.go:141] libmachine: (no-preload-602118) Calling .GetSSHKeyPath
	I0603 12:06:40.906416   73179 main.go:141] libmachine: (no-preload-602118) Calling .GetSSHUsername
	I0603 12:06:40.906579   73179 main.go:141] libmachine: Using SSH client type: native
	I0603 12:06:40.906760   73179 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.50.245 22 <nil> <nil>}
	I0603 12:06:40.906771   73179 main.go:141] libmachine: About to run SSH command:
	hostname
	I0603 12:06:41.015416   73179 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0603 12:06:41.015443   73179 main.go:141] libmachine: (no-preload-602118) Calling .GetMachineName
	I0603 12:06:41.015832   73179 buildroot.go:166] provisioning hostname "no-preload-602118"
	I0603 12:06:41.015861   73179 main.go:141] libmachine: (no-preload-602118) Calling .GetMachineName
	I0603 12:06:41.016078   73179 main.go:141] libmachine: (no-preload-602118) Calling .GetSSHHostname
	I0603 12:06:41.018606   73179 main.go:141] libmachine: (no-preload-602118) DBG | domain no-preload-602118 has defined MAC address 52:54:00:ac:6c:91 in network mk-no-preload-602118
	I0603 12:06:41.018898   73179 main.go:141] libmachine: (no-preload-602118) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:6c:91", ip: ""} in network mk-no-preload-602118: {Iface:virbr2 ExpiryTime:2024-06-03 13:06:33 +0000 UTC Type:0 Mac:52:54:00:ac:6c:91 Iaid: IPaddr:192.168.50.245 Prefix:24 Hostname:no-preload-602118 Clientid:01:52:54:00:ac:6c:91}
	I0603 12:06:41.018928   73179 main.go:141] libmachine: (no-preload-602118) DBG | domain no-preload-602118 has defined IP address 192.168.50.245 and MAC address 52:54:00:ac:6c:91 in network mk-no-preload-602118
	I0603 12:06:41.019125   73179 main.go:141] libmachine: (no-preload-602118) Calling .GetSSHPort
	I0603 12:06:41.019310   73179 main.go:141] libmachine: (no-preload-602118) Calling .GetSSHKeyPath
	I0603 12:06:41.019476   73179 main.go:141] libmachine: (no-preload-602118) Calling .GetSSHKeyPath
	I0603 12:06:41.019597   73179 main.go:141] libmachine: (no-preload-602118) Calling .GetSSHUsername
	I0603 12:06:41.019753   73179 main.go:141] libmachine: Using SSH client type: native
	I0603 12:06:41.019948   73179 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.50.245 22 <nil> <nil>}
	I0603 12:06:41.019961   73179 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-602118 && echo "no-preload-602118" | sudo tee /etc/hostname
	I0603 12:06:41.145267   73179 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-602118
	
	I0603 12:06:41.145298   73179 main.go:141] libmachine: (no-preload-602118) Calling .GetSSHHostname
	I0603 12:06:41.148117   73179 main.go:141] libmachine: (no-preload-602118) DBG | domain no-preload-602118 has defined MAC address 52:54:00:ac:6c:91 in network mk-no-preload-602118
	I0603 12:06:41.148416   73179 main.go:141] libmachine: (no-preload-602118) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:6c:91", ip: ""} in network mk-no-preload-602118: {Iface:virbr2 ExpiryTime:2024-06-03 13:06:33 +0000 UTC Type:0 Mac:52:54:00:ac:6c:91 Iaid: IPaddr:192.168.50.245 Prefix:24 Hostname:no-preload-602118 Clientid:01:52:54:00:ac:6c:91}
	I0603 12:06:41.148444   73179 main.go:141] libmachine: (no-preload-602118) DBG | domain no-preload-602118 has defined IP address 192.168.50.245 and MAC address 52:54:00:ac:6c:91 in network mk-no-preload-602118
	I0603 12:06:41.148692   73179 main.go:141] libmachine: (no-preload-602118) Calling .GetSSHPort
	I0603 12:06:41.148914   73179 main.go:141] libmachine: (no-preload-602118) Calling .GetSSHKeyPath
	I0603 12:06:41.149068   73179 main.go:141] libmachine: (no-preload-602118) Calling .GetSSHKeyPath
	I0603 12:06:41.149199   73179 main.go:141] libmachine: (no-preload-602118) Calling .GetSSHUsername
	I0603 12:06:41.149316   73179 main.go:141] libmachine: Using SSH client type: native
	I0603 12:06:41.149475   73179 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.50.245 22 <nil> <nil>}
	I0603 12:06:41.149490   73179 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-602118' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-602118/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-602118' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0603 12:06:41.267803   73179 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0603 12:06:41.267841   73179 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19008-7755/.minikube CaCertPath:/home/jenkins/minikube-integration/19008-7755/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19008-7755/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19008-7755/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19008-7755/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19008-7755/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19008-7755/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19008-7755/.minikube}
	I0603 12:06:41.267859   73179 buildroot.go:174] setting up certificates
	I0603 12:06:41.267869   73179 provision.go:84] configureAuth start
	I0603 12:06:41.267877   73179 main.go:141] libmachine: (no-preload-602118) Calling .GetMachineName
	I0603 12:06:41.268155   73179 main.go:141] libmachine: (no-preload-602118) Calling .GetIP
	I0603 12:06:41.270862   73179 main.go:141] libmachine: (no-preload-602118) DBG | domain no-preload-602118 has defined MAC address 52:54:00:ac:6c:91 in network mk-no-preload-602118
	I0603 12:06:41.271249   73179 main.go:141] libmachine: (no-preload-602118) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:6c:91", ip: ""} in network mk-no-preload-602118: {Iface:virbr2 ExpiryTime:2024-06-03 13:06:33 +0000 UTC Type:0 Mac:52:54:00:ac:6c:91 Iaid: IPaddr:192.168.50.245 Prefix:24 Hostname:no-preload-602118 Clientid:01:52:54:00:ac:6c:91}
	I0603 12:06:41.271271   73179 main.go:141] libmachine: (no-preload-602118) DBG | domain no-preload-602118 has defined IP address 192.168.50.245 and MAC address 52:54:00:ac:6c:91 in network mk-no-preload-602118
	I0603 12:06:41.271414   73179 main.go:141] libmachine: (no-preload-602118) Calling .GetSSHHostname
	I0603 12:06:41.273376   73179 main.go:141] libmachine: (no-preload-602118) DBG | domain no-preload-602118 has defined MAC address 52:54:00:ac:6c:91 in network mk-no-preload-602118
	I0603 12:06:41.273689   73179 main.go:141] libmachine: (no-preload-602118) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:6c:91", ip: ""} in network mk-no-preload-602118: {Iface:virbr2 ExpiryTime:2024-06-03 13:06:33 +0000 UTC Type:0 Mac:52:54:00:ac:6c:91 Iaid: IPaddr:192.168.50.245 Prefix:24 Hostname:no-preload-602118 Clientid:01:52:54:00:ac:6c:91}
	I0603 12:06:41.273715   73179 main.go:141] libmachine: (no-preload-602118) DBG | domain no-preload-602118 has defined IP address 192.168.50.245 and MAC address 52:54:00:ac:6c:91 in network mk-no-preload-602118
	I0603 12:06:41.273831   73179 provision.go:143] copyHostCerts
	I0603 12:06:41.273907   73179 exec_runner.go:144] found /home/jenkins/minikube-integration/19008-7755/.minikube/ca.pem, removing ...
	I0603 12:06:41.273926   73179 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19008-7755/.minikube/ca.pem
	I0603 12:06:41.274002   73179 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19008-7755/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19008-7755/.minikube/ca.pem (1082 bytes)
	I0603 12:06:41.274128   73179 exec_runner.go:144] found /home/jenkins/minikube-integration/19008-7755/.minikube/cert.pem, removing ...
	I0603 12:06:41.274138   73179 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19008-7755/.minikube/cert.pem
	I0603 12:06:41.274173   73179 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19008-7755/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19008-7755/.minikube/cert.pem (1123 bytes)
	I0603 12:06:41.274248   73179 exec_runner.go:144] found /home/jenkins/minikube-integration/19008-7755/.minikube/key.pem, removing ...
	I0603 12:06:41.274259   73179 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19008-7755/.minikube/key.pem
	I0603 12:06:41.274296   73179 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19008-7755/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19008-7755/.minikube/key.pem (1679 bytes)
	I0603 12:06:41.274369   73179 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19008-7755/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19008-7755/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19008-7755/.minikube/certs/ca-key.pem org=jenkins.no-preload-602118 san=[127.0.0.1 192.168.50.245 localhost minikube no-preload-602118]
	I0603 12:06:41.377976   73179 provision.go:177] copyRemoteCerts
	I0603 12:06:41.378029   73179 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0603 12:06:41.378053   73179 main.go:141] libmachine: (no-preload-602118) Calling .GetSSHHostname
	I0603 12:06:41.380502   73179 main.go:141] libmachine: (no-preload-602118) DBG | domain no-preload-602118 has defined MAC address 52:54:00:ac:6c:91 in network mk-no-preload-602118
	I0603 12:06:41.380818   73179 main.go:141] libmachine: (no-preload-602118) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:6c:91", ip: ""} in network mk-no-preload-602118: {Iface:virbr2 ExpiryTime:2024-06-03 13:06:33 +0000 UTC Type:0 Mac:52:54:00:ac:6c:91 Iaid: IPaddr:192.168.50.245 Prefix:24 Hostname:no-preload-602118 Clientid:01:52:54:00:ac:6c:91}
	I0603 12:06:41.380839   73179 main.go:141] libmachine: (no-preload-602118) DBG | domain no-preload-602118 has defined IP address 192.168.50.245 and MAC address 52:54:00:ac:6c:91 in network mk-no-preload-602118
	I0603 12:06:41.381002   73179 main.go:141] libmachine: (no-preload-602118) Calling .GetSSHPort
	I0603 12:06:41.381171   73179 main.go:141] libmachine: (no-preload-602118) Calling .GetSSHKeyPath
	I0603 12:06:41.381345   73179 main.go:141] libmachine: (no-preload-602118) Calling .GetSSHUsername
	I0603 12:06:41.381462   73179 sshutil.go:53] new ssh client: &{IP:192.168.50.245 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19008-7755/.minikube/machines/no-preload-602118/id_rsa Username:docker}
	I0603 12:06:41.465434   73179 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0603 12:06:41.492636   73179 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0603 12:06:41.516229   73179 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0603 12:06:41.538729   73179 provision.go:87] duration metric: took 270.850705ms to configureAuth
	I0603 12:06:41.538751   73179 buildroot.go:189] setting minikube options for container-runtime
	I0603 12:06:41.538913   73179 config.go:182] Loaded profile config "no-preload-602118": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0603 12:06:41.538998   73179 main.go:141] libmachine: (no-preload-602118) Calling .GetSSHHostname
	I0603 12:06:41.541514   73179 main.go:141] libmachine: (no-preload-602118) DBG | domain no-preload-602118 has defined MAC address 52:54:00:ac:6c:91 in network mk-no-preload-602118
	I0603 12:06:41.541818   73179 main.go:141] libmachine: (no-preload-602118) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:6c:91", ip: ""} in network mk-no-preload-602118: {Iface:virbr2 ExpiryTime:2024-06-03 13:06:33 +0000 UTC Type:0 Mac:52:54:00:ac:6c:91 Iaid: IPaddr:192.168.50.245 Prefix:24 Hostname:no-preload-602118 Clientid:01:52:54:00:ac:6c:91}
	I0603 12:06:41.541843   73179 main.go:141] libmachine: (no-preload-602118) DBG | domain no-preload-602118 has defined IP address 192.168.50.245 and MAC address 52:54:00:ac:6c:91 in network mk-no-preload-602118
	I0603 12:06:41.541966   73179 main.go:141] libmachine: (no-preload-602118) Calling .GetSSHPort
	I0603 12:06:41.542166   73179 main.go:141] libmachine: (no-preload-602118) Calling .GetSSHKeyPath
	I0603 12:06:41.542350   73179 main.go:141] libmachine: (no-preload-602118) Calling .GetSSHKeyPath
	I0603 12:06:41.542483   73179 main.go:141] libmachine: (no-preload-602118) Calling .GetSSHUsername
	I0603 12:06:41.542666   73179 main.go:141] libmachine: Using SSH client type: native
	I0603 12:06:41.542809   73179 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.50.245 22 <nil> <nil>}
	I0603 12:06:41.542823   73179 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0603 12:06:41.837735   73179 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0603 12:06:41.837766   73179 machine.go:97] duration metric: took 934.421104ms to provisionDockerMachine
	I0603 12:06:41.837780   73179 start.go:293] postStartSetup for "no-preload-602118" (driver="kvm2")
	I0603 12:06:41.837791   73179 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0603 12:06:41.837808   73179 main.go:141] libmachine: (no-preload-602118) Calling .DriverName
	I0603 12:06:41.838173   73179 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0603 12:06:41.838200   73179 main.go:141] libmachine: (no-preload-602118) Calling .GetSSHHostname
	I0603 12:06:41.840498   73179 main.go:141] libmachine: (no-preload-602118) DBG | domain no-preload-602118 has defined MAC address 52:54:00:ac:6c:91 in network mk-no-preload-602118
	I0603 12:06:41.840832   73179 main.go:141] libmachine: (no-preload-602118) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:6c:91", ip: ""} in network mk-no-preload-602118: {Iface:virbr2 ExpiryTime:2024-06-03 13:06:33 +0000 UTC Type:0 Mac:52:54:00:ac:6c:91 Iaid: IPaddr:192.168.50.245 Prefix:24 Hostname:no-preload-602118 Clientid:01:52:54:00:ac:6c:91}
	I0603 12:06:41.840861   73179 main.go:141] libmachine: (no-preload-602118) DBG | domain no-preload-602118 has defined IP address 192.168.50.245 and MAC address 52:54:00:ac:6c:91 in network mk-no-preload-602118
	I0603 12:06:41.840990   73179 main.go:141] libmachine: (no-preload-602118) Calling .GetSSHPort
	I0603 12:06:41.841179   73179 main.go:141] libmachine: (no-preload-602118) Calling .GetSSHKeyPath
	I0603 12:06:41.841351   73179 main.go:141] libmachine: (no-preload-602118) Calling .GetSSHUsername
	I0603 12:06:41.841473   73179 sshutil.go:53] new ssh client: &{IP:192.168.50.245 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19008-7755/.minikube/machines/no-preload-602118/id_rsa Username:docker}
	I0603 12:06:41.926168   73179 ssh_runner.go:195] Run: cat /etc/os-release
	I0603 12:06:41.930420   73179 info.go:137] Remote host: Buildroot 2023.02.9
	I0603 12:06:41.930450   73179 filesync.go:126] Scanning /home/jenkins/minikube-integration/19008-7755/.minikube/addons for local assets ...
	I0603 12:06:41.930509   73179 filesync.go:126] Scanning /home/jenkins/minikube-integration/19008-7755/.minikube/files for local assets ...
	I0603 12:06:41.930583   73179 filesync.go:149] local asset: /home/jenkins/minikube-integration/19008-7755/.minikube/files/etc/ssl/certs/150282.pem -> 150282.pem in /etc/ssl/certs
	I0603 12:06:41.930661   73179 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0603 12:06:41.940412   73179 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/files/etc/ssl/certs/150282.pem --> /etc/ssl/certs/150282.pem (1708 bytes)
	I0603 12:06:41.963912   73179 start.go:296] duration metric: took 126.115944ms for postStartSetup
	I0603 12:06:41.963949   73179 fix.go:56] duration metric: took 19.520525784s for fixHost
	I0603 12:06:41.963991   73179 main.go:141] libmachine: (no-preload-602118) Calling .GetSSHHostname
	I0603 12:06:41.966591   73179 main.go:141] libmachine: (no-preload-602118) DBG | domain no-preload-602118 has defined MAC address 52:54:00:ac:6c:91 in network mk-no-preload-602118
	I0603 12:06:41.966928   73179 main.go:141] libmachine: (no-preload-602118) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:6c:91", ip: ""} in network mk-no-preload-602118: {Iface:virbr2 ExpiryTime:2024-06-03 13:06:33 +0000 UTC Type:0 Mac:52:54:00:ac:6c:91 Iaid: IPaddr:192.168.50.245 Prefix:24 Hostname:no-preload-602118 Clientid:01:52:54:00:ac:6c:91}
	I0603 12:06:41.966946   73179 main.go:141] libmachine: (no-preload-602118) DBG | domain no-preload-602118 has defined IP address 192.168.50.245 and MAC address 52:54:00:ac:6c:91 in network mk-no-preload-602118
	I0603 12:06:41.967081   73179 main.go:141] libmachine: (no-preload-602118) Calling .GetSSHPort
	I0603 12:06:41.967272   73179 main.go:141] libmachine: (no-preload-602118) Calling .GetSSHKeyPath
	I0603 12:06:41.967423   73179 main.go:141] libmachine: (no-preload-602118) Calling .GetSSHKeyPath
	I0603 12:06:41.967608   73179 main.go:141] libmachine: (no-preload-602118) Calling .GetSSHUsername
	I0603 12:06:41.967762   73179 main.go:141] libmachine: Using SSH client type: native
	I0603 12:06:41.967918   73179 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.50.245 22 <nil> <nil>}
	I0603 12:06:41.967927   73179 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0603 12:06:42.079982   73179 main.go:141] libmachine: SSH cmd err, output: <nil>: 1717416402.057236225
	
	I0603 12:06:42.080009   73179 fix.go:216] guest clock: 1717416402.057236225
	I0603 12:06:42.080015   73179 fix.go:229] Guest: 2024-06-03 12:06:42.057236225 +0000 UTC Remote: 2024-06-03 12:06:41.963952729 +0000 UTC m=+276.629989589 (delta=93.283496ms)
	I0603 12:06:42.080041   73179 fix.go:200] guest clock delta is within tolerance: 93.283496ms
	I0603 12:06:42.080045   73179 start.go:83] releasing machines lock for "no-preload-602118", held for 19.636648914s
	I0603 12:06:42.080070   73179 main.go:141] libmachine: (no-preload-602118) Calling .DriverName
	I0603 12:06:42.080311   73179 main.go:141] libmachine: (no-preload-602118) Calling .GetIP
	I0603 12:06:42.083162   73179 main.go:141] libmachine: (no-preload-602118) DBG | domain no-preload-602118 has defined MAC address 52:54:00:ac:6c:91 in network mk-no-preload-602118
	I0603 12:06:42.083519   73179 main.go:141] libmachine: (no-preload-602118) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:6c:91", ip: ""} in network mk-no-preload-602118: {Iface:virbr2 ExpiryTime:2024-06-03 13:06:33 +0000 UTC Type:0 Mac:52:54:00:ac:6c:91 Iaid: IPaddr:192.168.50.245 Prefix:24 Hostname:no-preload-602118 Clientid:01:52:54:00:ac:6c:91}
	I0603 12:06:42.083544   73179 main.go:141] libmachine: (no-preload-602118) DBG | domain no-preload-602118 has defined IP address 192.168.50.245 and MAC address 52:54:00:ac:6c:91 in network mk-no-preload-602118
	I0603 12:06:42.083733   73179 main.go:141] libmachine: (no-preload-602118) Calling .DriverName
	I0603 12:06:42.084238   73179 main.go:141] libmachine: (no-preload-602118) Calling .DriverName
	I0603 12:06:42.084405   73179 main.go:141] libmachine: (no-preload-602118) Calling .DriverName
	I0603 12:06:42.084458   73179 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0603 12:06:42.084528   73179 main.go:141] libmachine: (no-preload-602118) Calling .GetSSHHostname
	I0603 12:06:42.084607   73179 ssh_runner.go:195] Run: cat /version.json
	I0603 12:06:42.084632   73179 main.go:141] libmachine: (no-preload-602118) Calling .GetSSHHostname
	I0603 12:06:42.087630   73179 main.go:141] libmachine: (no-preload-602118) DBG | domain no-preload-602118 has defined MAC address 52:54:00:ac:6c:91 in network mk-no-preload-602118
	I0603 12:06:42.087927   73179 main.go:141] libmachine: (no-preload-602118) DBG | domain no-preload-602118 has defined MAC address 52:54:00:ac:6c:91 in network mk-no-preload-602118
	I0603 12:06:42.087958   73179 main.go:141] libmachine: (no-preload-602118) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:6c:91", ip: ""} in network mk-no-preload-602118: {Iface:virbr2 ExpiryTime:2024-06-03 13:06:33 +0000 UTC Type:0 Mac:52:54:00:ac:6c:91 Iaid: IPaddr:192.168.50.245 Prefix:24 Hostname:no-preload-602118 Clientid:01:52:54:00:ac:6c:91}
	I0603 12:06:42.087981   73179 main.go:141] libmachine: (no-preload-602118) DBG | domain no-preload-602118 has defined IP address 192.168.50.245 and MAC address 52:54:00:ac:6c:91 in network mk-no-preload-602118
	I0603 12:06:42.088083   73179 main.go:141] libmachine: (no-preload-602118) Calling .GetSSHPort
	I0603 12:06:42.088261   73179 main.go:141] libmachine: (no-preload-602118) Calling .GetSSHKeyPath
	I0603 12:06:42.088441   73179 main.go:141] libmachine: (no-preload-602118) Calling .GetSSHUsername
	I0603 12:06:42.088463   73179 main.go:141] libmachine: (no-preload-602118) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:6c:91", ip: ""} in network mk-no-preload-602118: {Iface:virbr2 ExpiryTime:2024-06-03 13:06:33 +0000 UTC Type:0 Mac:52:54:00:ac:6c:91 Iaid: IPaddr:192.168.50.245 Prefix:24 Hostname:no-preload-602118 Clientid:01:52:54:00:ac:6c:91}
	I0603 12:06:42.088507   73179 main.go:141] libmachine: (no-preload-602118) DBG | domain no-preload-602118 has defined IP address 192.168.50.245 and MAC address 52:54:00:ac:6c:91 in network mk-no-preload-602118
	I0603 12:06:42.088592   73179 sshutil.go:53] new ssh client: &{IP:192.168.50.245 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19008-7755/.minikube/machines/no-preload-602118/id_rsa Username:docker}
	I0603 12:06:42.088666   73179 main.go:141] libmachine: (no-preload-602118) Calling .GetSSHPort
	I0603 12:06:42.088800   73179 main.go:141] libmachine: (no-preload-602118) Calling .GetSSHKeyPath
	I0603 12:06:42.088961   73179 main.go:141] libmachine: (no-preload-602118) Calling .GetSSHUsername
	I0603 12:06:42.089101   73179 sshutil.go:53] new ssh client: &{IP:192.168.50.245 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19008-7755/.minikube/machines/no-preload-602118/id_rsa Username:docker}
	I0603 12:06:42.192400   73179 ssh_runner.go:195] Run: systemctl --version
	I0603 12:06:42.198773   73179 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0603 12:06:42.345931   73179 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0603 12:06:42.351818   73179 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0603 12:06:42.351877   73179 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0603 12:06:42.368582   73179 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0603 12:06:42.368609   73179 start.go:494] detecting cgroup driver to use...
	I0603 12:06:42.368680   73179 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0603 12:06:42.384411   73179 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0603 12:06:42.398006   73179 docker.go:217] disabling cri-docker service (if available) ...
	I0603 12:06:42.398052   73179 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0603 12:06:42.412680   73179 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0603 12:06:42.427157   73179 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0603 12:06:42.537162   73179 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0603 12:06:42.683438   73179 docker.go:233] disabling docker service ...
	I0603 12:06:42.683505   73179 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0603 12:06:42.697969   73179 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0603 12:06:42.711164   73179 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0603 12:06:42.835194   73179 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0603 12:06:42.947116   73179 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0603 12:06:42.961398   73179 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0603 12:06:42.980179   73179 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0603 12:06:42.980227   73179 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 12:06:42.990583   73179 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0603 12:06:42.990642   73179 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 12:06:43.001031   73179 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 12:06:43.012124   73179 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 12:06:43.023143   73179 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0603 12:06:43.034535   73179 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 12:06:43.045854   73179 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 12:06:43.063071   73179 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
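	The sed edits just above amount to a small drop-in change to /etc/crio/crio.conf.d/02-crio.conf. A minimal sketch of the resulting fragment, reconstructed only from those commands (the [crio.image]/[crio.runtime] section headers and any surrounding keys are assumed, not shown in the log):
	
	  # sketch, not verbatim file contents
	  [crio.image]
	  pause_image = "registry.k8s.io/pause:3.9"
	
	  [crio.runtime]
	  cgroup_manager = "cgroupfs"   # matches the kubelet cgroupDriver set further below
	  conmon_cgroup = "pod"
	  default_sysctls = [
	    "net.ipv4.ip_unprivileged_port_start=0",
	  ]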
	I0603 12:06:43.074257   73179 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0603 12:06:43.083914   73179 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0603 12:06:43.083965   73179 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0603 12:06:43.098285   73179 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0603 12:06:43.108034   73179 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0603 12:06:43.219068   73179 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0603 12:06:43.376591   73179 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0603 12:06:43.376655   73179 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0603 12:06:43.381868   73179 start.go:562] Will wait 60s for crictl version
	I0603 12:06:43.381939   73179 ssh_runner.go:195] Run: which crictl
	I0603 12:06:43.385730   73179 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0603 12:06:43.423331   73179 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0603 12:06:43.423428   73179 ssh_runner.go:195] Run: crio --version
	I0603 12:06:43.450760   73179 ssh_runner.go:195] Run: crio --version
	I0603 12:06:43.479690   73179 out.go:177] * Preparing Kubernetes v1.30.1 on CRI-O 1.29.1 ...
	I0603 12:06:42.103653   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .Start
	I0603 12:06:42.103818   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Ensuring networks are active...
	I0603 12:06:42.104660   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Ensuring network default is active
	I0603 12:06:42.104985   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Ensuring network mk-default-k8s-diff-port-196710 is active
	I0603 12:06:42.105332   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Getting domain xml...
	I0603 12:06:42.106264   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Creating domain...
	I0603 12:06:43.347118   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Waiting to get IP...
	I0603 12:06:43.347855   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | domain default-k8s-diff-port-196710 has defined MAC address 52:54:00:9c:61:49 in network mk-default-k8s-diff-port-196710
	I0603 12:06:43.348279   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | unable to find current IP address of domain default-k8s-diff-port-196710 in network mk-default-k8s-diff-port-196710
	I0603 12:06:43.348337   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | I0603 12:06:43.348249   74483 retry.go:31] will retry after 307.61274ms: waiting for machine to come up
	I0603 12:06:43.657720   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | domain default-k8s-diff-port-196710 has defined MAC address 52:54:00:9c:61:49 in network mk-default-k8s-diff-port-196710
	I0603 12:06:43.658162   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | unable to find current IP address of domain default-k8s-diff-port-196710 in network mk-default-k8s-diff-port-196710
	I0603 12:06:43.658188   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | I0603 12:06:43.658129   74483 retry.go:31] will retry after 387.079794ms: waiting for machine to come up
	I0603 12:06:44.046798   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | domain default-k8s-diff-port-196710 has defined MAC address 52:54:00:9c:61:49 in network mk-default-k8s-diff-port-196710
	I0603 12:06:44.047345   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | unable to find current IP address of domain default-k8s-diff-port-196710 in network mk-default-k8s-diff-port-196710
	I0603 12:06:44.047376   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | I0603 12:06:44.047279   74483 retry.go:31] will retry after 482.224139ms: waiting for machine to come up
	I0603 12:06:44.531107   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | domain default-k8s-diff-port-196710 has defined MAC address 52:54:00:9c:61:49 in network mk-default-k8s-diff-port-196710
	I0603 12:06:44.531588   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | unable to find current IP address of domain default-k8s-diff-port-196710 in network mk-default-k8s-diff-port-196710
	I0603 12:06:44.531615   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | I0603 12:06:44.531542   74483 retry.go:31] will retry after 438.288195ms: waiting for machine to come up
	I0603 12:06:43.481020   73179 main.go:141] libmachine: (no-preload-602118) Calling .GetIP
	I0603 12:06:43.483887   73179 main.go:141] libmachine: (no-preload-602118) DBG | domain no-preload-602118 has defined MAC address 52:54:00:ac:6c:91 in network mk-no-preload-602118
	I0603 12:06:43.484297   73179 main.go:141] libmachine: (no-preload-602118) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:6c:91", ip: ""} in network mk-no-preload-602118: {Iface:virbr2 ExpiryTime:2024-06-03 13:06:33 +0000 UTC Type:0 Mac:52:54:00:ac:6c:91 Iaid: IPaddr:192.168.50.245 Prefix:24 Hostname:no-preload-602118 Clientid:01:52:54:00:ac:6c:91}
	I0603 12:06:43.484324   73179 main.go:141] libmachine: (no-preload-602118) DBG | domain no-preload-602118 has defined IP address 192.168.50.245 and MAC address 52:54:00:ac:6c:91 in network mk-no-preload-602118
	I0603 12:06:43.484533   73179 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0603 12:06:43.488769   73179 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0603 12:06:43.501433   73179 kubeadm.go:877] updating cluster {Name:no-preload-602118 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.30.1 ClusterName:no-preload-602118 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.245 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:f
alse MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0603 12:06:43.501583   73179 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime crio
	I0603 12:06:43.501644   73179 ssh_runner.go:195] Run: sudo crictl images --output json
	I0603 12:06:43.537382   73179 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.1". assuming images are not preloaded.
	I0603 12:06:43.537407   73179 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.30.1 registry.k8s.io/kube-controller-manager:v1.30.1 registry.k8s.io/kube-scheduler:v1.30.1 registry.k8s.io/kube-proxy:v1.30.1 registry.k8s.io/pause:3.9 registry.k8s.io/etcd:3.5.12-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0603 12:06:43.537504   73179 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.30.1
	I0603 12:06:43.537483   73179 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.30.1
	I0603 12:06:43.537484   73179 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.12-0
	I0603 12:06:43.537597   73179 image.go:134] retrieving image: registry.k8s.io/pause:3.9
	I0603 12:06:43.537483   73179 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0603 12:06:43.537618   73179 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.30.1
	I0603 12:06:43.537612   73179 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.30.1
	I0603 12:06:43.537771   73179 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0603 12:06:43.539200   73179 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.30.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.30.1
	I0603 12:06:43.539472   73179 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.30.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.30.1
	I0603 12:06:43.539491   73179 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.30.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.30.1
	I0603 12:06:43.539504   73179 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.30.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.30.1
	I0603 12:06:43.539530   73179 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.12-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.12-0
	I0603 12:06:43.539565   73179 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0603 12:06:43.539472   73179 image.go:177] daemon lookup for registry.k8s.io/pause:3.9: Error response from daemon: No such image: registry.k8s.io/pause:3.9
	I0603 12:06:43.539934   73179 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0603 12:06:43.694144   73179 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.12-0
	I0603 12:06:43.714990   73179 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.30.1
	I0603 12:06:43.720270   73179 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.30.1
	I0603 12:06:43.734481   73179 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.30.1
	I0603 12:06:43.751928   73179 cache_images.go:116] "registry.k8s.io/etcd:3.5.12-0" needs transfer: "registry.k8s.io/etcd:3.5.12-0" does not exist at hash "3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899" in container runtime
	I0603 12:06:43.751970   73179 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.12-0
	I0603 12:06:43.752018   73179 ssh_runner.go:195] Run: which crictl
	I0603 12:06:43.780362   73179 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.30.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.30.1" does not exist at hash "91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a" in container runtime
	I0603 12:06:43.780408   73179 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.30.1
	I0603 12:06:43.780455   73179 ssh_runner.go:195] Run: which crictl
	I0603 12:06:43.798376   73179 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.30.1" needs transfer: "registry.k8s.io/kube-proxy:v1.30.1" does not exist at hash "747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd" in container runtime
	I0603 12:06:43.798415   73179 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.30.1
	I0603 12:06:43.798465   73179 ssh_runner.go:195] Run: which crictl
	I0603 12:06:43.801422   73179 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.9
	I0603 12:06:43.811338   73179 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0603 12:06:43.823969   73179 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.30.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.30.1" does not exist at hash "25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c" in container runtime
	I0603 12:06:43.824052   73179 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.30.1
	I0603 12:06:43.823979   73179 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.12-0
	I0603 12:06:43.824096   73179 ssh_runner.go:195] Run: which crictl
	I0603 12:06:43.824106   73179 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.30.1
	I0603 12:06:43.824088   73179 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.30.1
	I0603 12:06:43.861957   73179 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.30.1
	I0603 12:06:44.001291   73179 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0603 12:06:44.001312   73179 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/19008-7755/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.30.1
	I0603 12:06:44.001344   73179 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0603 12:06:44.001390   73179 ssh_runner.go:195] Run: which crictl
	I0603 12:06:44.001454   73179 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.30.1
	I0603 12:06:44.001472   73179 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/19008-7755/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.12-0
	I0603 12:06:44.001405   73179 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.30.1
	I0603 12:06:44.001544   73179 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.12-0
	I0603 12:06:44.001405   73179 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/19008-7755/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.30.1
	I0603 12:06:44.001520   73179 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.30.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.30.1" does not exist at hash "a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035" in container runtime
	I0603 12:06:44.001622   73179 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.30.1
	I0603 12:06:44.001627   73179 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.30.1
	I0603 12:06:44.001685   73179 ssh_runner.go:195] Run: which crictl
	I0603 12:06:44.014801   73179 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.30.1 (exists)
	I0603 12:06:44.014820   73179 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.30.1
	I0603 12:06:44.014858   73179 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.30.1
	I0603 12:06:44.049018   73179 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.12-0 (exists)
	I0603 12:06:44.049106   73179 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/19008-7755/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.30.1
	I0603 12:06:44.049138   73179 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0603 12:06:44.049149   73179 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.30.1
	I0603 12:06:44.049193   73179 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.30.1
	I0603 12:06:44.049202   73179 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.30.1 (exists)
	I0603 12:06:44.414960   73179 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0603 12:06:44.971603   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | domain default-k8s-diff-port-196710 has defined MAC address 52:54:00:9c:61:49 in network mk-default-k8s-diff-port-196710
	I0603 12:06:44.971986   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | unable to find current IP address of domain default-k8s-diff-port-196710 in network mk-default-k8s-diff-port-196710
	I0603 12:06:44.972027   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | I0603 12:06:44.971941   74483 retry.go:31] will retry after 696.415219ms: waiting for machine to come up
	I0603 12:06:45.669711   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | domain default-k8s-diff-port-196710 has defined MAC address 52:54:00:9c:61:49 in network mk-default-k8s-diff-port-196710
	I0603 12:06:45.670032   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | unable to find current IP address of domain default-k8s-diff-port-196710 in network mk-default-k8s-diff-port-196710
	I0603 12:06:45.670064   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | I0603 12:06:45.670011   74483 retry.go:31] will retry after 706.751938ms: waiting for machine to come up
	I0603 12:06:46.378097   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | domain default-k8s-diff-port-196710 has defined MAC address 52:54:00:9c:61:49 in network mk-default-k8s-diff-port-196710
	I0603 12:06:46.378510   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | unable to find current IP address of domain default-k8s-diff-port-196710 in network mk-default-k8s-diff-port-196710
	I0603 12:06:46.378552   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | I0603 12:06:46.378484   74483 retry.go:31] will retry after 1.039219665s: waiting for machine to come up
	I0603 12:06:47.419138   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | domain default-k8s-diff-port-196710 has defined MAC address 52:54:00:9c:61:49 in network mk-default-k8s-diff-port-196710
	I0603 12:06:47.419573   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | unable to find current IP address of domain default-k8s-diff-port-196710 in network mk-default-k8s-diff-port-196710
	I0603 12:06:47.419601   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | I0603 12:06:47.419520   74483 retry.go:31] will retry after 1.138110516s: waiting for machine to come up
	I0603 12:06:48.559728   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | domain default-k8s-diff-port-196710 has defined MAC address 52:54:00:9c:61:49 in network mk-default-k8s-diff-port-196710
	I0603 12:06:48.560297   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | unable to find current IP address of domain default-k8s-diff-port-196710 in network mk-default-k8s-diff-port-196710
	I0603 12:06:48.560320   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | I0603 12:06:48.560259   74483 retry.go:31] will retry after 1.175521014s: waiting for machine to come up
	I0603 12:06:46.011238   73179 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.30.1: (1.996357708s)
	I0603 12:06:46.011274   73179 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/19008-7755/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.30.1 from cache
	I0603 12:06:46.011313   73179 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.12-0
	I0603 12:06:46.011322   73179 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.30.1: (1.96210268s)
	I0603 12:06:46.011332   73179 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.30.1: (1.962169544s)
	I0603 12:06:46.011353   73179 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.30.1 (exists)
	I0603 12:06:46.011367   73179 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/19008-7755/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.30.1
	I0603 12:06:46.011386   73179 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.12-0
	I0603 12:06:46.011397   73179 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1: (1.962226902s)
	I0603 12:06:46.011424   73179 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/19008-7755/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0603 12:06:46.011426   73179 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (1.596439345s)
	I0603 12:06:46.011451   73179 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.30.1
	I0603 12:06:46.011474   73179 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0603 12:06:46.011483   73179 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.1
	I0603 12:06:46.011508   73179 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0603 12:06:46.011545   73179 ssh_runner.go:195] Run: which crictl
	I0603 12:06:46.020596   73179 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.30.1 (exists)
	I0603 12:06:46.020599   73179 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0603 12:06:46.020749   73179 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I0603 12:06:49.747952   73179 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (3.727320079s)
	I0603 12:06:49.748008   73179 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/19008-7755/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0603 12:06:49.748024   73179 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.12-0: (3.736616522s)
	I0603 12:06:49.748048   73179 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/19008-7755/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.12-0 from cache
	I0603 12:06:49.748074   73179 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.30.1
	I0603 12:06:49.748108   73179 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0603 12:06:49.748120   73179 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.30.1
	I0603 12:06:49.753125   73179 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0603 12:06:49.737515   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | domain default-k8s-diff-port-196710 has defined MAC address 52:54:00:9c:61:49 in network mk-default-k8s-diff-port-196710
	I0603 12:06:49.738009   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | unable to find current IP address of domain default-k8s-diff-port-196710 in network mk-default-k8s-diff-port-196710
	I0603 12:06:49.738036   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | I0603 12:06:49.737954   74483 retry.go:31] will retry after 2.132134762s: waiting for machine to come up
	I0603 12:06:51.872423   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | domain default-k8s-diff-port-196710 has defined MAC address 52:54:00:9c:61:49 in network mk-default-k8s-diff-port-196710
	I0603 12:06:51.872917   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | unable to find current IP address of domain default-k8s-diff-port-196710 in network mk-default-k8s-diff-port-196710
	I0603 12:06:51.872945   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | I0603 12:06:51.872857   74483 retry.go:31] will retry after 2.778528878s: waiting for machine to come up
	I0603 12:06:52.416845   73179 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.30.1: (2.668695263s)
	I0603 12:06:52.416881   73179 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/19008-7755/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.30.1 from cache
	I0603 12:06:52.416909   73179 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.30.1
	I0603 12:06:52.417012   73179 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.30.1
	I0603 12:06:54.588430   73179 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.30.1: (2.171386022s)
	I0603 12:06:54.588455   73179 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/19008-7755/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.30.1 from cache
	I0603 12:06:54.588480   73179 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.30.1
	I0603 12:06:54.588528   73179 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.30.1
	I0603 12:06:54.653098   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | domain default-k8s-diff-port-196710 has defined MAC address 52:54:00:9c:61:49 in network mk-default-k8s-diff-port-196710
	I0603 12:06:54.653566   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | unable to find current IP address of domain default-k8s-diff-port-196710 in network mk-default-k8s-diff-port-196710
	I0603 12:06:54.653596   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | I0603 12:06:54.653504   74483 retry.go:31] will retry after 2.88020763s: waiting for machine to come up
	I0603 12:06:57.535688   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | domain default-k8s-diff-port-196710 has defined MAC address 52:54:00:9c:61:49 in network mk-default-k8s-diff-port-196710
	I0603 12:06:57.536303   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | unable to find current IP address of domain default-k8s-diff-port-196710 in network mk-default-k8s-diff-port-196710
	I0603 12:06:57.536331   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | I0603 12:06:57.536246   74483 retry.go:31] will retry after 4.007108619s: waiting for machine to come up
	I0603 12:06:55.946565   73179 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.30.1: (1.358013442s)
	I0603 12:06:55.946595   73179 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/19008-7755/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.30.1 from cache
	I0603 12:06:55.946618   73179 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0603 12:06:55.946654   73179 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I0603 12:06:57.739662   73179 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (1.792982594s)
	I0603 12:06:57.739693   73179 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/19008-7755/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0603 12:06:57.739720   73179 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0603 12:06:57.739766   73179 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0603 12:06:58.592007   73179 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/19008-7755/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0603 12:06:58.592049   73179 cache_images.go:123] Successfully loaded all cached images
	I0603 12:06:58.592075   73179 cache_images.go:92] duration metric: took 15.054652125s to LoadCachedImages
	I0603 12:06:58.592096   73179 kubeadm.go:928] updating node { 192.168.50.245 8443 v1.30.1 crio true true} ...
	I0603 12:06:58.592210   73179 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-602118 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.245
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.1 ClusterName:no-preload-602118 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0603 12:06:58.592278   73179 ssh_runner.go:195] Run: crio config
	I0603 12:06:58.637533   73179 cni.go:84] Creating CNI manager for ""
	I0603 12:06:58.637561   73179 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0603 12:06:58.637583   73179 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0603 12:06:58.637620   73179 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.245 APIServerPort:8443 KubernetesVersion:v1.30.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-602118 NodeName:no-preload-602118 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.245"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.245 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0603 12:06:58.637822   73179 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.245
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-602118"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.245
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.245"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0603 12:06:58.637918   73179 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.1
	I0603 12:06:58.649096   73179 binaries.go:44] Found k8s binaries, skipping transfer
	I0603 12:06:58.649150   73179 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0603 12:06:58.658815   73179 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0603 12:06:58.675538   73179 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0603 12:06:58.692443   73179 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2161 bytes)
	I0603 12:06:58.709416   73179 ssh_runner.go:195] Run: grep 192.168.50.245	control-plane.minikube.internal$ /etc/hosts
	I0603 12:06:58.713241   73179 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.245	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0603 12:06:58.725522   73179 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0603 12:06:58.846624   73179 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0603 12:06:58.864101   73179 certs.go:68] Setting up /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/no-preload-602118 for IP: 192.168.50.245
	I0603 12:06:58.864129   73179 certs.go:194] generating shared ca certs ...
	I0603 12:06:58.864149   73179 certs.go:226] acquiring lock for ca certs: {Name:mk50092df3bdecbf959926adc377ee06b75aacd9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 12:06:58.864311   73179 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19008-7755/.minikube/ca.key
	I0603 12:06:58.864362   73179 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19008-7755/.minikube/proxy-client-ca.key
	I0603 12:06:58.864376   73179 certs.go:256] generating profile certs ...
	I0603 12:06:58.864473   73179 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/no-preload-602118/client.key
	I0603 12:06:58.864551   73179 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/no-preload-602118/apiserver.key.eef28f92
	I0603 12:06:58.864602   73179 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/no-preload-602118/proxy-client.key
	I0603 12:06:58.864744   73179 certs.go:484] found cert: /home/jenkins/minikube-integration/19008-7755/.minikube/certs/15028.pem (1338 bytes)
	W0603 12:06:58.864786   73179 certs.go:480] ignoring /home/jenkins/minikube-integration/19008-7755/.minikube/certs/15028_empty.pem, impossibly tiny 0 bytes
	I0603 12:06:58.864800   73179 certs.go:484] found cert: /home/jenkins/minikube-integration/19008-7755/.minikube/certs/ca-key.pem (1679 bytes)
	I0603 12:06:58.864836   73179 certs.go:484] found cert: /home/jenkins/minikube-integration/19008-7755/.minikube/certs/ca.pem (1082 bytes)
	I0603 12:06:58.864869   73179 certs.go:484] found cert: /home/jenkins/minikube-integration/19008-7755/.minikube/certs/cert.pem (1123 bytes)
	I0603 12:06:58.864900   73179 certs.go:484] found cert: /home/jenkins/minikube-integration/19008-7755/.minikube/certs/key.pem (1679 bytes)
	I0603 12:06:58.865039   73179 certs.go:484] found cert: /home/jenkins/minikube-integration/19008-7755/.minikube/files/etc/ssl/certs/150282.pem (1708 bytes)
	I0603 12:06:58.865705   73179 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0603 12:06:58.898291   73179 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0603 12:06:58.923481   73179 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0603 12:06:58.955249   73179 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0603 12:06:58.986524   73179 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/no-preload-602118/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0603 12:06:59.037456   73179 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/no-preload-602118/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0603 12:06:59.061989   73179 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/no-preload-602118/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0603 12:06:59.085738   73179 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/no-preload-602118/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0603 12:06:59.109202   73179 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0603 12:06:59.132149   73179 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/certs/15028.pem --> /usr/share/ca-certificates/15028.pem (1338 bytes)
	I0603 12:06:59.154957   73179 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/files/etc/ssl/certs/150282.pem --> /usr/share/ca-certificates/150282.pem (1708 bytes)
	I0603 12:06:59.177797   73179 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0603 12:06:59.194816   73179 ssh_runner.go:195] Run: openssl version
	I0603 12:06:59.200714   73179 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0603 12:06:59.211392   73179 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0603 12:06:59.215900   73179 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jun  3 10:39 /usr/share/ca-certificates/minikubeCA.pem
	I0603 12:06:59.215950   73179 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0603 12:06:59.221796   73179 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0603 12:06:59.232655   73179 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15028.pem && ln -fs /usr/share/ca-certificates/15028.pem /etc/ssl/certs/15028.pem"
	I0603 12:06:59.243679   73179 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15028.pem
	I0603 12:06:59.248120   73179 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jun  3 10:51 /usr/share/ca-certificates/15028.pem
	I0603 12:06:59.248168   73179 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15028.pem
	I0603 12:06:59.253816   73179 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/15028.pem /etc/ssl/certs/51391683.0"
	I0603 12:06:59.264416   73179 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/150282.pem && ln -fs /usr/share/ca-certificates/150282.pem /etc/ssl/certs/150282.pem"
	I0603 12:06:59.275143   73179 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/150282.pem
	I0603 12:06:59.279393   73179 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jun  3 10:51 /usr/share/ca-certificates/150282.pem
	I0603 12:06:59.279431   73179 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/150282.pem
	I0603 12:06:59.285269   73179 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/150282.pem /etc/ssl/certs/3ec20f2e.0"
	I0603 12:06:59.295789   73179 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0603 12:06:59.300138   73179 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0603 12:06:59.305722   73179 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0603 12:06:59.311381   73179 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0603 12:06:59.317037   73179 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0603 12:06:59.322539   73179 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0603 12:06:59.328067   73179 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0603 12:06:59.333575   73179 kubeadm.go:391] StartCluster: {Name:no-preload-602118 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:no-preload-602118 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.245 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0603 12:06:59.333659   73179 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0603 12:06:59.333712   73179 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0603 12:06:59.374413   73179 cri.go:89] found id: ""
	I0603 12:06:59.374471   73179 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0603 12:06:59.384802   73179 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0603 12:06:59.384819   73179 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0603 12:06:59.384832   73179 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0603 12:06:59.384878   73179 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0603 12:06:59.394669   73179 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0603 12:06:59.395564   73179 kubeconfig.go:125] found "no-preload-602118" server: "https://192.168.50.245:8443"
	I0603 12:06:59.397420   73179 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0603 12:06:59.407251   73179 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.50.245
	I0603 12:06:59.407281   73179 kubeadm.go:1154] stopping kube-system containers ...
	I0603 12:06:59.407295   73179 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0603 12:06:59.407347   73179 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0603 12:06:59.452986   73179 cri.go:89] found id: ""
	I0603 12:06:59.453067   73179 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0603 12:06:59.470164   73179 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0603 12:06:59.480228   73179 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0603 12:06:59.480249   73179 kubeadm.go:156] found existing configuration files:
	
	I0603 12:06:59.480291   73179 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0603 12:06:59.489923   73179 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0603 12:06:59.489968   73179 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0603 12:06:59.499530   73179 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0603 12:06:59.508336   73179 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0603 12:06:59.508376   73179 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0603 12:06:59.517665   73179 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0603 12:06:59.526660   73179 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0603 12:06:59.526697   73179 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0603 12:06:59.535973   73179 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0603 12:06:59.544846   73179 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0603 12:06:59.544885   73179 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0603 12:06:59.554342   73179 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0603 12:06:59.563632   73179 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0603 12:06:59.673234   73179 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0603 12:07:02.883984   73662 start.go:364] duration metric: took 4m2.688332749s to acquireMachinesLock for "old-k8s-version-905554"
	I0603 12:07:02.884045   73662 start.go:96] Skipping create...Using existing machine configuration
	I0603 12:07:02.884052   73662 fix.go:54] fixHost starting: 
	I0603 12:07:02.884482   73662 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 12:07:02.884520   73662 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 12:07:02.905120   73662 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45229
	I0603 12:07:02.905571   73662 main.go:141] libmachine: () Calling .GetVersion
	I0603 12:07:02.906128   73662 main.go:141] libmachine: Using API Version  1
	I0603 12:07:02.906157   73662 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 12:07:02.906519   73662 main.go:141] libmachine: () Calling .GetMachineName
	I0603 12:07:02.906709   73662 main.go:141] libmachine: (old-k8s-version-905554) Calling .DriverName
	I0603 12:07:02.906852   73662 main.go:141] libmachine: (old-k8s-version-905554) Calling .GetState
	I0603 12:07:02.908371   73662 fix.go:112] recreateIfNeeded on old-k8s-version-905554: state=Stopped err=<nil>
	I0603 12:07:02.908412   73662 main.go:141] libmachine: (old-k8s-version-905554) Calling .DriverName
	W0603 12:07:02.908577   73662 fix.go:138] unexpected machine state, will restart: <nil>
	I0603 12:07:02.910440   73662 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-905554" ...
	I0603 12:07:01.548241   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | domain default-k8s-diff-port-196710 has defined MAC address 52:54:00:9c:61:49 in network mk-default-k8s-diff-port-196710
	I0603 12:07:01.548698   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Found IP for machine: 192.168.61.60
	I0603 12:07:01.548720   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Reserving static IP address...
	I0603 12:07:01.548734   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | domain default-k8s-diff-port-196710 has current primary IP address 192.168.61.60 and MAC address 52:54:00:9c:61:49 in network mk-default-k8s-diff-port-196710
	I0603 12:07:01.549093   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-196710", mac: "52:54:00:9c:61:49", ip: "192.168.61.60"} in network mk-default-k8s-diff-port-196710: {Iface:virbr4 ExpiryTime:2024-06-03 12:58:47 +0000 UTC Type:0 Mac:52:54:00:9c:61:49 Iaid: IPaddr:192.168.61.60 Prefix:24 Hostname:default-k8s-diff-port-196710 Clientid:01:52:54:00:9c:61:49}
	I0603 12:07:01.549127   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | skip adding static IP to network mk-default-k8s-diff-port-196710 - found existing host DHCP lease matching {name: "default-k8s-diff-port-196710", mac: "52:54:00:9c:61:49", ip: "192.168.61.60"}
	I0603 12:07:01.549141   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Reserved static IP address: 192.168.61.60
	I0603 12:07:01.549161   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | Getting to WaitForSSH function...
	I0603 12:07:01.549171   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Waiting for SSH to be available...
	I0603 12:07:01.551680   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | domain default-k8s-diff-port-196710 has defined MAC address 52:54:00:9c:61:49 in network mk-default-k8s-diff-port-196710
	I0603 12:07:01.551959   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:61:49", ip: ""} in network mk-default-k8s-diff-port-196710: {Iface:virbr4 ExpiryTime:2024-06-03 12:58:47 +0000 UTC Type:0 Mac:52:54:00:9c:61:49 Iaid: IPaddr:192.168.61.60 Prefix:24 Hostname:default-k8s-diff-port-196710 Clientid:01:52:54:00:9c:61:49}
	I0603 12:07:01.551996   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | domain default-k8s-diff-port-196710 has defined IP address 192.168.61.60 and MAC address 52:54:00:9c:61:49 in network mk-default-k8s-diff-port-196710
	I0603 12:07:01.552051   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | Using SSH client type: external
	I0603 12:07:01.552124   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | Using SSH private key: /home/jenkins/minikube-integration/19008-7755/.minikube/machines/default-k8s-diff-port-196710/id_rsa (-rw-------)
	I0603 12:07:01.552160   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.60 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19008-7755/.minikube/machines/default-k8s-diff-port-196710/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0603 12:07:01.552181   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | About to run SSH command:
	I0603 12:07:01.552194   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | exit 0
	I0603 12:07:01.674944   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | SSH cmd err, output: <nil>: 
	I0603 12:07:01.675373   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .GetConfigRaw
	I0603 12:07:01.676105   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .GetIP
	I0603 12:07:01.678480   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | domain default-k8s-diff-port-196710 has defined MAC address 52:54:00:9c:61:49 in network mk-default-k8s-diff-port-196710
	I0603 12:07:01.678823   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:61:49", ip: ""} in network mk-default-k8s-diff-port-196710: {Iface:virbr4 ExpiryTime:2024-06-03 12:58:47 +0000 UTC Type:0 Mac:52:54:00:9c:61:49 Iaid: IPaddr:192.168.61.60 Prefix:24 Hostname:default-k8s-diff-port-196710 Clientid:01:52:54:00:9c:61:49}
	I0603 12:07:01.678854   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | domain default-k8s-diff-port-196710 has defined IP address 192.168.61.60 and MAC address 52:54:00:9c:61:49 in network mk-default-k8s-diff-port-196710
	I0603 12:07:01.679088   73294 profile.go:143] Saving config to /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/default-k8s-diff-port-196710/config.json ...
	I0603 12:07:01.679311   73294 machine.go:94] provisionDockerMachine start ...
	I0603 12:07:01.679332   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .DriverName
	I0603 12:07:01.679525   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .GetSSHHostname
	I0603 12:07:01.681641   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | domain default-k8s-diff-port-196710 has defined MAC address 52:54:00:9c:61:49 in network mk-default-k8s-diff-port-196710
	I0603 12:07:01.681931   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:61:49", ip: ""} in network mk-default-k8s-diff-port-196710: {Iface:virbr4 ExpiryTime:2024-06-03 12:58:47 +0000 UTC Type:0 Mac:52:54:00:9c:61:49 Iaid: IPaddr:192.168.61.60 Prefix:24 Hostname:default-k8s-diff-port-196710 Clientid:01:52:54:00:9c:61:49}
	I0603 12:07:01.681964   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | domain default-k8s-diff-port-196710 has defined IP address 192.168.61.60 and MAC address 52:54:00:9c:61:49 in network mk-default-k8s-diff-port-196710
	I0603 12:07:01.682121   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .GetSSHPort
	I0603 12:07:01.682291   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .GetSSHKeyPath
	I0603 12:07:01.682466   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .GetSSHKeyPath
	I0603 12:07:01.682611   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .GetSSHUsername
	I0603 12:07:01.682753   73294 main.go:141] libmachine: Using SSH client type: native
	I0603 12:07:01.682949   73294 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.61.60 22 <nil> <nil>}
	I0603 12:07:01.682962   73294 main.go:141] libmachine: About to run SSH command:
	hostname
	I0603 12:07:01.787146   73294 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0603 12:07:01.787176   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .GetMachineName
	I0603 12:07:01.787425   73294 buildroot.go:166] provisioning hostname "default-k8s-diff-port-196710"
	I0603 12:07:01.787448   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .GetMachineName
	I0603 12:07:01.787638   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .GetSSHHostname
	I0603 12:07:01.790151   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | domain default-k8s-diff-port-196710 has defined MAC address 52:54:00:9c:61:49 in network mk-default-k8s-diff-port-196710
	I0603 12:07:01.790487   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:61:49", ip: ""} in network mk-default-k8s-diff-port-196710: {Iface:virbr4 ExpiryTime:2024-06-03 12:58:47 +0000 UTC Type:0 Mac:52:54:00:9c:61:49 Iaid: IPaddr:192.168.61.60 Prefix:24 Hostname:default-k8s-diff-port-196710 Clientid:01:52:54:00:9c:61:49}
	I0603 12:07:01.790512   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | domain default-k8s-diff-port-196710 has defined IP address 192.168.61.60 and MAC address 52:54:00:9c:61:49 in network mk-default-k8s-diff-port-196710
	I0603 12:07:01.790646   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .GetSSHPort
	I0603 12:07:01.790812   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .GetSSHKeyPath
	I0603 12:07:01.790964   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .GetSSHKeyPath
	I0603 12:07:01.791133   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .GetSSHUsername
	I0603 12:07:01.791272   73294 main.go:141] libmachine: Using SSH client type: native
	I0603 12:07:01.791477   73294 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.61.60 22 <nil> <nil>}
	I0603 12:07:01.791496   73294 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-196710 && echo "default-k8s-diff-port-196710" | sudo tee /etc/hostname
	I0603 12:07:01.916785   73294 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-196710
	
	I0603 12:07:01.916820   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .GetSSHHostname
	I0603 12:07:01.919809   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | domain default-k8s-diff-port-196710 has defined MAC address 52:54:00:9c:61:49 in network mk-default-k8s-diff-port-196710
	I0603 12:07:01.920225   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:61:49", ip: ""} in network mk-default-k8s-diff-port-196710: {Iface:virbr4 ExpiryTime:2024-06-03 12:58:47 +0000 UTC Type:0 Mac:52:54:00:9c:61:49 Iaid: IPaddr:192.168.61.60 Prefix:24 Hostname:default-k8s-diff-port-196710 Clientid:01:52:54:00:9c:61:49}
	I0603 12:07:01.920264   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | domain default-k8s-diff-port-196710 has defined IP address 192.168.61.60 and MAC address 52:54:00:9c:61:49 in network mk-default-k8s-diff-port-196710
	I0603 12:07:01.920552   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .GetSSHPort
	I0603 12:07:01.920756   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .GetSSHKeyPath
	I0603 12:07:01.920947   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .GetSSHKeyPath
	I0603 12:07:01.921145   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .GetSSHUsername
	I0603 12:07:01.921363   73294 main.go:141] libmachine: Using SSH client type: native
	I0603 12:07:01.921645   73294 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.61.60 22 <nil> <nil>}
	I0603 12:07:01.921671   73294 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-196710' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-196710/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-196710' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0603 12:07:02.048767   73294 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0603 12:07:02.048797   73294 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19008-7755/.minikube CaCertPath:/home/jenkins/minikube-integration/19008-7755/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19008-7755/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19008-7755/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19008-7755/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19008-7755/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19008-7755/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19008-7755/.minikube}
	I0603 12:07:02.048851   73294 buildroot.go:174] setting up certificates
	I0603 12:07:02.048866   73294 provision.go:84] configureAuth start
	I0603 12:07:02.048883   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .GetMachineName
	I0603 12:07:02.049168   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .GetIP
	I0603 12:07:02.051709   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | domain default-k8s-diff-port-196710 has defined MAC address 52:54:00:9c:61:49 in network mk-default-k8s-diff-port-196710
	I0603 12:07:02.052111   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:61:49", ip: ""} in network mk-default-k8s-diff-port-196710: {Iface:virbr4 ExpiryTime:2024-06-03 12:58:47 +0000 UTC Type:0 Mac:52:54:00:9c:61:49 Iaid: IPaddr:192.168.61.60 Prefix:24 Hostname:default-k8s-diff-port-196710 Clientid:01:52:54:00:9c:61:49}
	I0603 12:07:02.052151   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | domain default-k8s-diff-port-196710 has defined IP address 192.168.61.60 and MAC address 52:54:00:9c:61:49 in network mk-default-k8s-diff-port-196710
	I0603 12:07:02.052295   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .GetSSHHostname
	I0603 12:07:02.054716   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | domain default-k8s-diff-port-196710 has defined MAC address 52:54:00:9c:61:49 in network mk-default-k8s-diff-port-196710
	I0603 12:07:02.055073   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:61:49", ip: ""} in network mk-default-k8s-diff-port-196710: {Iface:virbr4 ExpiryTime:2024-06-03 12:58:47 +0000 UTC Type:0 Mac:52:54:00:9c:61:49 Iaid: IPaddr:192.168.61.60 Prefix:24 Hostname:default-k8s-diff-port-196710 Clientid:01:52:54:00:9c:61:49}
	I0603 12:07:02.055106   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | domain default-k8s-diff-port-196710 has defined IP address 192.168.61.60 and MAC address 52:54:00:9c:61:49 in network mk-default-k8s-diff-port-196710
	I0603 12:07:02.055262   73294 provision.go:143] copyHostCerts
	I0603 12:07:02.055334   73294 exec_runner.go:144] found /home/jenkins/minikube-integration/19008-7755/.minikube/ca.pem, removing ...
	I0603 12:07:02.055349   73294 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19008-7755/.minikube/ca.pem
	I0603 12:07:02.055408   73294 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19008-7755/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19008-7755/.minikube/ca.pem (1082 bytes)
	I0603 12:07:02.055527   73294 exec_runner.go:144] found /home/jenkins/minikube-integration/19008-7755/.minikube/cert.pem, removing ...
	I0603 12:07:02.055539   73294 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19008-7755/.minikube/cert.pem
	I0603 12:07:02.055568   73294 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19008-7755/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19008-7755/.minikube/cert.pem (1123 bytes)
	I0603 12:07:02.055648   73294 exec_runner.go:144] found /home/jenkins/minikube-integration/19008-7755/.minikube/key.pem, removing ...
	I0603 12:07:02.055659   73294 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19008-7755/.minikube/key.pem
	I0603 12:07:02.055684   73294 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19008-7755/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19008-7755/.minikube/key.pem (1679 bytes)
	I0603 12:07:02.055753   73294 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19008-7755/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19008-7755/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19008-7755/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-196710 san=[127.0.0.1 192.168.61.60 default-k8s-diff-port-196710 localhost minikube]
	I0603 12:07:02.172134   73294 provision.go:177] copyRemoteCerts
	I0603 12:07:02.172192   73294 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0603 12:07:02.172217   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .GetSSHHostname
	I0603 12:07:02.175333   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | domain default-k8s-diff-port-196710 has defined MAC address 52:54:00:9c:61:49 in network mk-default-k8s-diff-port-196710
	I0603 12:07:02.175724   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:61:49", ip: ""} in network mk-default-k8s-diff-port-196710: {Iface:virbr4 ExpiryTime:2024-06-03 12:58:47 +0000 UTC Type:0 Mac:52:54:00:9c:61:49 Iaid: IPaddr:192.168.61.60 Prefix:24 Hostname:default-k8s-diff-port-196710 Clientid:01:52:54:00:9c:61:49}
	I0603 12:07:02.175749   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | domain default-k8s-diff-port-196710 has defined IP address 192.168.61.60 and MAC address 52:54:00:9c:61:49 in network mk-default-k8s-diff-port-196710
	I0603 12:07:02.175996   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .GetSSHPort
	I0603 12:07:02.176203   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .GetSSHKeyPath
	I0603 12:07:02.176405   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .GetSSHUsername
	I0603 12:07:02.176599   73294 sshutil.go:53] new ssh client: &{IP:192.168.61.60 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19008-7755/.minikube/machines/default-k8s-diff-port-196710/id_rsa Username:docker}
	I0603 12:07:02.273410   73294 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0603 12:07:02.302337   73294 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0603 12:07:02.326471   73294 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0603 12:07:02.350709   73294 provision.go:87] duration metric: took 301.827273ms to configureAuth
	I0603 12:07:02.350742   73294 buildroot.go:189] setting minikube options for container-runtime
	I0603 12:07:02.350977   73294 config.go:182] Loaded profile config "default-k8s-diff-port-196710": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0603 12:07:02.351086   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .GetSSHHostname
	I0603 12:07:02.354023   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | domain default-k8s-diff-port-196710 has defined MAC address 52:54:00:9c:61:49 in network mk-default-k8s-diff-port-196710
	I0603 12:07:02.354434   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:61:49", ip: ""} in network mk-default-k8s-diff-port-196710: {Iface:virbr4 ExpiryTime:2024-06-03 12:58:47 +0000 UTC Type:0 Mac:52:54:00:9c:61:49 Iaid: IPaddr:192.168.61.60 Prefix:24 Hostname:default-k8s-diff-port-196710 Clientid:01:52:54:00:9c:61:49}
	I0603 12:07:02.354465   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | domain default-k8s-diff-port-196710 has defined IP address 192.168.61.60 and MAC address 52:54:00:9c:61:49 in network mk-default-k8s-diff-port-196710
	I0603 12:07:02.354613   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .GetSSHPort
	I0603 12:07:02.354813   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .GetSSHKeyPath
	I0603 12:07:02.354996   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .GetSSHKeyPath
	I0603 12:07:02.355176   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .GetSSHUsername
	I0603 12:07:02.355385   73294 main.go:141] libmachine: Using SSH client type: native
	I0603 12:07:02.355603   73294 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.61.60 22 <nil> <nil>}
	I0603 12:07:02.355633   73294 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0603 12:07:02.636420   73294 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0603 12:07:02.636453   73294 machine.go:97] duration metric: took 957.127741ms to provisionDockerMachine
	I0603 12:07:02.636467   73294 start.go:293] postStartSetup for "default-k8s-diff-port-196710" (driver="kvm2")
	I0603 12:07:02.636480   73294 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0603 12:07:02.636507   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .DriverName
	I0603 12:07:02.636828   73294 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0603 12:07:02.636860   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .GetSSHHostname
	I0603 12:07:02.639699   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | domain default-k8s-diff-port-196710 has defined MAC address 52:54:00:9c:61:49 in network mk-default-k8s-diff-port-196710
	I0603 12:07:02.640122   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:61:49", ip: ""} in network mk-default-k8s-diff-port-196710: {Iface:virbr4 ExpiryTime:2024-06-03 12:58:47 +0000 UTC Type:0 Mac:52:54:00:9c:61:49 Iaid: IPaddr:192.168.61.60 Prefix:24 Hostname:default-k8s-diff-port-196710 Clientid:01:52:54:00:9c:61:49}
	I0603 12:07:02.640155   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | domain default-k8s-diff-port-196710 has defined IP address 192.168.61.60 and MAC address 52:54:00:9c:61:49 in network mk-default-k8s-diff-port-196710
	I0603 12:07:02.640282   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .GetSSHPort
	I0603 12:07:02.640462   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .GetSSHKeyPath
	I0603 12:07:02.640647   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .GetSSHUsername
	I0603 12:07:02.640907   73294 sshutil.go:53] new ssh client: &{IP:192.168.61.60 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19008-7755/.minikube/machines/default-k8s-diff-port-196710/id_rsa Username:docker}
	I0603 12:07:02.729745   73294 ssh_runner.go:195] Run: cat /etc/os-release
	I0603 12:07:02.734393   73294 info.go:137] Remote host: Buildroot 2023.02.9
	I0603 12:07:02.734414   73294 filesync.go:126] Scanning /home/jenkins/minikube-integration/19008-7755/.minikube/addons for local assets ...
	I0603 12:07:02.734476   73294 filesync.go:126] Scanning /home/jenkins/minikube-integration/19008-7755/.minikube/files for local assets ...
	I0603 12:07:02.734545   73294 filesync.go:149] local asset: /home/jenkins/minikube-integration/19008-7755/.minikube/files/etc/ssl/certs/150282.pem -> 150282.pem in /etc/ssl/certs
	I0603 12:07:02.734623   73294 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0603 12:07:02.744239   73294 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/files/etc/ssl/certs/150282.pem --> /etc/ssl/certs/150282.pem (1708 bytes)
	I0603 12:07:02.770883   73294 start.go:296] duration metric: took 134.402064ms for postStartSetup
	I0603 12:07:02.770918   73294 fix.go:56] duration metric: took 20.69069756s for fixHost
	I0603 12:07:02.770940   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .GetSSHHostname
	I0603 12:07:02.773675   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | domain default-k8s-diff-port-196710 has defined MAC address 52:54:00:9c:61:49 in network mk-default-k8s-diff-port-196710
	I0603 12:07:02.773977   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:61:49", ip: ""} in network mk-default-k8s-diff-port-196710: {Iface:virbr4 ExpiryTime:2024-06-03 12:58:47 +0000 UTC Type:0 Mac:52:54:00:9c:61:49 Iaid: IPaddr:192.168.61.60 Prefix:24 Hostname:default-k8s-diff-port-196710 Clientid:01:52:54:00:9c:61:49}
	I0603 12:07:02.774010   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | domain default-k8s-diff-port-196710 has defined IP address 192.168.61.60 and MAC address 52:54:00:9c:61:49 in network mk-default-k8s-diff-port-196710
	I0603 12:07:02.774111   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .GetSSHPort
	I0603 12:07:02.774329   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .GetSSHKeyPath
	I0603 12:07:02.774482   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .GetSSHKeyPath
	I0603 12:07:02.774635   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .GetSSHUsername
	I0603 12:07:02.774814   73294 main.go:141] libmachine: Using SSH client type: native
	I0603 12:07:02.774984   73294 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.61.60 22 <nil> <nil>}
	I0603 12:07:02.774998   73294 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0603 12:07:02.883831   73294 main.go:141] libmachine: SSH cmd err, output: <nil>: 1717416422.860813739
	
	I0603 12:07:02.883859   73294 fix.go:216] guest clock: 1717416422.860813739
	I0603 12:07:02.883870   73294 fix.go:229] Guest: 2024-06-03 12:07:02.860813739 +0000 UTC Remote: 2024-06-03 12:07:02.770922212 +0000 UTC m=+288.221479764 (delta=89.891527ms)
	I0603 12:07:02.883896   73294 fix.go:200] guest clock delta is within tolerance: 89.891527ms
	I0603 12:07:02.883902   73294 start.go:83] releasing machines lock for "default-k8s-diff-port-196710", held for 20.803713434s
	I0603 12:07:02.883935   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .DriverName
	I0603 12:07:02.884217   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .GetIP
	I0603 12:07:02.887393   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | domain default-k8s-diff-port-196710 has defined MAC address 52:54:00:9c:61:49 in network mk-default-k8s-diff-port-196710
	I0603 12:07:02.887758   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:61:49", ip: ""} in network mk-default-k8s-diff-port-196710: {Iface:virbr4 ExpiryTime:2024-06-03 12:58:47 +0000 UTC Type:0 Mac:52:54:00:9c:61:49 Iaid: IPaddr:192.168.61.60 Prefix:24 Hostname:default-k8s-diff-port-196710 Clientid:01:52:54:00:9c:61:49}
	I0603 12:07:02.887789   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | domain default-k8s-diff-port-196710 has defined IP address 192.168.61.60 and MAC address 52:54:00:9c:61:49 in network mk-default-k8s-diff-port-196710
	I0603 12:07:02.887954   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .DriverName
	I0603 12:07:02.888465   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .DriverName
	I0603 12:07:02.888616   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .DriverName
	I0603 12:07:02.888698   73294 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0603 12:07:02.888770   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .GetSSHHostname
	I0603 12:07:02.888871   73294 ssh_runner.go:195] Run: cat /version.json
	I0603 12:07:02.888913   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .GetSSHHostname
	I0603 12:07:02.891596   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | domain default-k8s-diff-port-196710 has defined MAC address 52:54:00:9c:61:49 in network mk-default-k8s-diff-port-196710
	I0603 12:07:02.891957   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | domain default-k8s-diff-port-196710 has defined MAC address 52:54:00:9c:61:49 in network mk-default-k8s-diff-port-196710
	I0603 12:07:02.892009   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:61:49", ip: ""} in network mk-default-k8s-diff-port-196710: {Iface:virbr4 ExpiryTime:2024-06-03 12:58:47 +0000 UTC Type:0 Mac:52:54:00:9c:61:49 Iaid: IPaddr:192.168.61.60 Prefix:24 Hostname:default-k8s-diff-port-196710 Clientid:01:52:54:00:9c:61:49}
	I0603 12:07:02.892051   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | domain default-k8s-diff-port-196710 has defined IP address 192.168.61.60 and MAC address 52:54:00:9c:61:49 in network mk-default-k8s-diff-port-196710
	I0603 12:07:02.892250   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .GetSSHPort
	I0603 12:07:02.892422   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .GetSSHKeyPath
	I0603 12:07:02.892436   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:61:49", ip: ""} in network mk-default-k8s-diff-port-196710: {Iface:virbr4 ExpiryTime:2024-06-03 12:58:47 +0000 UTC Type:0 Mac:52:54:00:9c:61:49 Iaid: IPaddr:192.168.61.60 Prefix:24 Hostname:default-k8s-diff-port-196710 Clientid:01:52:54:00:9c:61:49}
	I0603 12:07:02.892453   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | domain default-k8s-diff-port-196710 has defined IP address 192.168.61.60 and MAC address 52:54:00:9c:61:49 in network mk-default-k8s-diff-port-196710
	I0603 12:07:02.892601   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .GetSSHPort
	I0603 12:07:02.892636   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .GetSSHUsername
	I0603 12:07:02.892758   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .GetSSHKeyPath
	I0603 12:07:02.892777   73294 sshutil.go:53] new ssh client: &{IP:192.168.61.60 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19008-7755/.minikube/machines/default-k8s-diff-port-196710/id_rsa Username:docker}
	I0603 12:07:02.892941   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .GetSSHUsername
	I0603 12:07:02.893092   73294 sshutil.go:53] new ssh client: &{IP:192.168.61.60 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19008-7755/.minikube/machines/default-k8s-diff-port-196710/id_rsa Username:docker}
	I0603 12:07:02.998124   73294 ssh_runner.go:195] Run: systemctl --version
	I0603 12:07:03.005653   73294 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0603 12:07:03.152446   73294 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0603 12:07:03.160607   73294 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0603 12:07:03.160674   73294 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0603 12:07:03.176490   73294 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0603 12:07:03.176513   73294 start.go:494] detecting cgroup driver to use...
	I0603 12:07:03.176581   73294 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0603 12:07:03.195427   73294 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0603 12:07:03.211343   73294 docker.go:217] disabling cri-docker service (if available) ...
	I0603 12:07:03.211398   73294 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0603 12:07:03.227943   73294 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0603 12:07:03.245409   73294 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0603 12:07:03.384124   73294 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0603 12:07:03.529899   73294 docker.go:233] disabling docker service ...
	I0603 12:07:03.529984   73294 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0603 12:07:03.545971   73294 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0603 12:07:03.559981   73294 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0603 12:07:03.726303   73294 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0603 12:07:03.850915   73294 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0603 12:07:03.865591   73294 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0603 12:07:03.884498   73294 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0603 12:07:03.884558   73294 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 12:07:03.897708   73294 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0603 12:07:03.897772   73294 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 12:07:03.912146   73294 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 12:07:03.926435   73294 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 12:07:03.940520   73294 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0603 12:07:03.955122   73294 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 12:07:03.972518   73294 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 12:07:03.997707   73294 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 12:07:04.009020   73294 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0603 12:07:04.024118   73294 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0603 12:07:04.024185   73294 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0603 12:07:04.043959   73294 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0603 12:07:04.057417   73294 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0603 12:07:04.195354   73294 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0603 12:07:04.365103   73294 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0603 12:07:04.365195   73294 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0603 12:07:04.370764   73294 start.go:562] Will wait 60s for crictl version
	I0603 12:07:04.370822   73294 ssh_runner.go:195] Run: which crictl
	I0603 12:07:04.375203   73294 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0603 12:07:04.430761   73294 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0603 12:07:04.430843   73294 ssh_runner.go:195] Run: crio --version
	I0603 12:07:04.471171   73294 ssh_runner.go:195] Run: crio --version
	I0603 12:07:04.506684   73294 out.go:177] * Preparing Kubernetes v1.30.1 on CRI-O 1.29.1 ...
	I0603 12:07:04.508144   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .GetIP
	I0603 12:07:04.510945   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | domain default-k8s-diff-port-196710 has defined MAC address 52:54:00:9c:61:49 in network mk-default-k8s-diff-port-196710
	I0603 12:07:04.511375   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:61:49", ip: ""} in network mk-default-k8s-diff-port-196710: {Iface:virbr4 ExpiryTime:2024-06-03 12:58:47 +0000 UTC Type:0 Mac:52:54:00:9c:61:49 Iaid: IPaddr:192.168.61.60 Prefix:24 Hostname:default-k8s-diff-port-196710 Clientid:01:52:54:00:9c:61:49}
	I0603 12:07:04.511406   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | domain default-k8s-diff-port-196710 has defined IP address 192.168.61.60 and MAC address 52:54:00:9c:61:49 in network mk-default-k8s-diff-port-196710
	I0603 12:07:04.511607   73294 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0603 12:07:04.516367   73294 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0603 12:07:04.532203   73294 kubeadm.go:877] updating cluster {Name:default-k8s-diff-port-196710 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernete
sVersion:v1.30.1 ClusterName:default-k8s-diff-port-196710 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.60 Port:8444 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpirati
on:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0603 12:07:04.532326   73294 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime crio
	I0603 12:07:04.532409   73294 ssh_runner.go:195] Run: sudo crictl images --output json
	I0603 12:07:04.576446   73294 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.1". assuming images are not preloaded.
	I0603 12:07:04.576523   73294 ssh_runner.go:195] Run: which lz4
	I0603 12:07:04.580901   73294 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
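The cri-o reconfiguration above (pause image, cgroup manager, conmon_cgroup, and the unprivileged-port sysctl) is performed by running sed over SSH against /etc/crio/crio.conf.d/02-crio.conf. Below is a minimal Go sketch of equivalent in-memory rewrites, for illustration only; the helper name rewriteCrioConf and the sample input are assumptions, not minikube code.

// crioconf_sketch.go - illustrative only; not minikube's actual implementation.
// Mirrors the rewrites logged above: set the pause image, force the cgroupfs
// cgroup manager, and add the unprivileged-port sysctl if it is missing.
package main

import (
	"fmt"
	"regexp"
)

func rewriteCrioConf(conf, pauseImage, cgroupManager string) string {
	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(conf, fmt.Sprintf("pause_image = %q", pauseImage))
	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(conf, fmt.Sprintf("cgroup_manager = %q", cgroupManager))
	// Append default_sysctls with the unprivileged-port entry when absent,
	// matching the grep/sed pair in the log above.
	if !regexp.MustCompile(`(?m)^ *default_sysctls`).MatchString(conf) {
		conf += "\ndefault_sysctls = [\n  \"net.ipv4.ip_unprivileged_port_start=0\",\n]\n"
	}
	return conf
}

func main() {
	in := "[crio.image]\npause_image = \"registry.k8s.io/pause:3.2\"\n[crio.runtime]\ncgroup_manager = \"systemd\"\n"
	fmt.Print(rewriteCrioConf(in, "registry.k8s.io/pause:3.9", "cgroupfs"))
}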
	I0603 12:07:02.911700   73662 main.go:141] libmachine: (old-k8s-version-905554) Calling .Start
	I0603 12:07:02.911842   73662 main.go:141] libmachine: (old-k8s-version-905554) Ensuring networks are active...
	I0603 12:07:02.912570   73662 main.go:141] libmachine: (old-k8s-version-905554) Ensuring network default is active
	I0603 12:07:02.912896   73662 main.go:141] libmachine: (old-k8s-version-905554) Ensuring network mk-old-k8s-version-905554 is active
	I0603 12:07:02.913324   73662 main.go:141] libmachine: (old-k8s-version-905554) Getting domain xml...
	I0603 12:07:02.914147   73662 main.go:141] libmachine: (old-k8s-version-905554) Creating domain...
	I0603 12:07:04.233691   73662 main.go:141] libmachine: (old-k8s-version-905554) Waiting to get IP...
	I0603 12:07:04.234800   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | domain old-k8s-version-905554 has defined MAC address 52:54:00:3d:ed:07 in network mk-old-k8s-version-905554
	I0603 12:07:04.235276   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | unable to find current IP address of domain old-k8s-version-905554 in network mk-old-k8s-version-905554
	I0603 12:07:04.235378   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | I0603 12:07:04.235243   74674 retry.go:31] will retry after 297.546447ms: waiting for machine to come up
	I0603 12:07:04.534942   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | domain old-k8s-version-905554 has defined MAC address 52:54:00:3d:ed:07 in network mk-old-k8s-version-905554
	I0603 12:07:04.535492   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | unable to find current IP address of domain old-k8s-version-905554 in network mk-old-k8s-version-905554
	I0603 12:07:04.535522   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | I0603 12:07:04.535456   74674 retry.go:31] will retry after 385.160833ms: waiting for machine to come up
	I0603 12:07:04.922824   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | domain old-k8s-version-905554 has defined MAC address 52:54:00:3d:ed:07 in network mk-old-k8s-version-905554
	I0603 12:07:04.923312   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | unable to find current IP address of domain old-k8s-version-905554 in network mk-old-k8s-version-905554
	I0603 12:07:04.923336   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | I0603 12:07:04.923267   74674 retry.go:31] will retry after 363.309555ms: waiting for machine to come up
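The block above shows libmachine polling for the restarted old-k8s-version-905554 domain to obtain a DHCP lease, retrying with short randomized delays. The following self-contained Go sketch shows that retry pattern; lookupIP is a hypothetical stand-in for querying the hypervisor's leases, not libmachine's real API.

// waitforip_sketch.go - illustrative only.
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

var errNoLease = errors.New("no DHCP lease yet")

// lookupIP is a hypothetical stand-in for matching the machine's MAC address
// against the network's current DHCP leases.
func lookupIP(attempt int) (string, error) {
	if attempt < 5 {
		return "", errNoLease
	}
	return "192.168.61.60", nil
}

func waitForIP(timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	for attempt := 0; time.Now().Before(deadline); attempt++ {
		if ip, err := lookupIP(attempt); err == nil {
			return ip, nil
		}
		// Short randomized backoff, similar in spirit to the retry intervals logged above.
		wait := time.Duration(200+rand.Intn(400)) * time.Millisecond
		fmt.Printf("will retry after %v: waiting for machine to come up\n", wait)
		time.Sleep(wait)
	}
	return "", fmt.Errorf("timed out after %v waiting for an IP", timeout)
}

func main() {
	ip, err := waitForIP(30 * time.Second)
	fmt.Println(ip, err)
}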
	I0603 12:07:01.017968   73179 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.344700881s)
	I0603 12:07:01.017993   73179 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0603 12:07:01.214414   73179 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0603 12:07:01.291063   73179 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0603 12:07:01.420874   73179 api_server.go:52] waiting for apiserver process to appear ...
	I0603 12:07:01.420977   73179 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:07:01.921439   73179 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:07:02.421904   73179 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:07:02.445051   73179 api_server.go:72] duration metric: took 1.024176056s to wait for apiserver process to appear ...
	I0603 12:07:02.445083   73179 api_server.go:88] waiting for apiserver healthz status ...
	I0603 12:07:02.445112   73179 api_server.go:253] Checking apiserver healthz at https://192.168.50.245:8443/healthz ...
	I0603 12:07:02.445614   73179 api_server.go:269] stopped: https://192.168.50.245:8443/healthz: Get "https://192.168.50.245:8443/healthz": dial tcp 192.168.50.245:8443: connect: connection refused
	I0603 12:07:02.945547   73179 api_server.go:253] Checking apiserver healthz at https://192.168.50.245:8443/healthz ...
	I0603 12:07:05.426682   73179 api_server.go:279] https://192.168.50.245:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0603 12:07:05.426713   73179 api_server.go:103] status: https://192.168.50.245:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0603 12:07:05.426726   73179 api_server.go:253] Checking apiserver healthz at https://192.168.50.245:8443/healthz ...
	I0603 12:07:05.474343   73179 api_server.go:279] https://192.168.50.245:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0603 12:07:05.474380   73179 api_server.go:103] status: https://192.168.50.245:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0603 12:07:05.474399   73179 api_server.go:253] Checking apiserver healthz at https://192.168.50.245:8443/healthz ...
	I0603 12:07:05.578473   73179 api_server.go:279] https://192.168.50.245:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0603 12:07:05.578520   73179 api_server.go:103] status: https://192.168.50.245:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0603 12:07:05.945708   73179 api_server.go:253] Checking apiserver healthz at https://192.168.50.245:8443/healthz ...
	I0603 12:07:05.952298   73179 api_server.go:279] https://192.168.50.245:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0603 12:07:05.952338   73179 api_server.go:103] status: https://192.168.50.245:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0603 12:07:06.445920   73179 api_server.go:253] Checking apiserver healthz at https://192.168.50.245:8443/healthz ...
	I0603 12:07:06.454769   73179 api_server.go:279] https://192.168.50.245:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0603 12:07:06.454805   73179 api_server.go:103] status: https://192.168.50.245:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0603 12:07:06.945370   73179 api_server.go:253] Checking apiserver healthz at https://192.168.50.245:8443/healthz ...
	I0603 12:07:06.952157   73179 api_server.go:279] https://192.168.50.245:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0603 12:07:06.952193   73179 api_server.go:103] status: https://192.168.50.245:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0603 12:07:07.445973   73179 api_server.go:253] Checking apiserver healthz at https://192.168.50.245:8443/healthz ...
	I0603 12:07:07.457436   73179 api_server.go:279] https://192.168.50.245:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0603 12:07:07.457471   73179 api_server.go:103] status: https://192.168.50.245:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0603 12:07:07.945237   73179 api_server.go:253] Checking apiserver healthz at https://192.168.50.245:8443/healthz ...
	I0603 12:07:07.952135   73179 api_server.go:279] https://192.168.50.245:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0603 12:07:07.952168   73179 api_server.go:103] status: https://192.168.50.245:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0603 12:07:08.445763   73179 api_server.go:253] Checking apiserver healthz at https://192.168.50.245:8443/healthz ...
	I0603 12:07:08.450319   73179 api_server.go:279] https://192.168.50.245:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0603 12:07:08.450346   73179 api_server.go:103] status: https://192.168.50.245:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0603 12:07:08.945476   73179 api_server.go:253] Checking apiserver healthz at https://192.168.50.245:8443/healthz ...
	I0603 12:07:08.950139   73179 api_server.go:279] https://192.168.50.245:8443/healthz returned 200:
	ok
	I0603 12:07:08.956975   73179 api_server.go:141] control plane version: v1.30.1
	I0603 12:07:08.957002   73179 api_server.go:131] duration metric: took 6.511911305s to wait for apiserver health ...
	I0603 12:07:08.957012   73179 cni.go:84] Creating CNI manager for ""
	I0603 12:07:08.957020   73179 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0603 12:07:08.958965   73179 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
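The sequence above is the usual apiserver readiness progression: /healthz first rejects the anonymous probe (403), then returns 500 while post-start hooks (RBAC bootstrap, bootstrap-controller, apiservice registration) complete, and finally 200. A rough Go sketch of such a polling loop follows; it is illustrative only, not minikube's api_server.go, and it assumes skipping TLS verification is acceptable for the bootstrap probe.

// healthz_sketch.go - illustrative only.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// The apiserver serves a self-signed certificate during bootstrap, so the
		// probe skips verification; real requests authenticate once the cluster is up.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
			fmt.Printf("%s returned %d, retrying\n", url, resp.StatusCode)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver never became healthy within %v", timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.50.245:8443/healthz", time.Minute); err != nil {
		fmt.Println(err)
	}
}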
	I0603 12:07:04.585614   73294 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0603 12:07:04.585642   73294 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (394537501 bytes)
	I0603 12:07:06.088296   73294 crio.go:462] duration metric: took 1.507429412s to copy over tarball
	I0603 12:07:06.088376   73294 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0603 12:07:08.432866   73294 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.344418631s)
	I0603 12:07:08.432898   73294 crio.go:469] duration metric: took 2.344572918s to extract the tarball
	I0603 12:07:08.432921   73294 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0603 12:07:08.472509   73294 ssh_runner.go:195] Run: sudo crictl images --output json
	I0603 12:07:08.529017   73294 crio.go:514] all images are preloaded for cri-o runtime.
	I0603 12:07:08.529040   73294 cache_images.go:84] Images are preloaded, skipping loading
	I0603 12:07:08.529052   73294 kubeadm.go:928] updating node { 192.168.61.60 8444 v1.30.1 crio true true} ...
	I0603 12:07:08.529180   73294 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-196710 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.60
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.1 ClusterName:default-k8s-diff-port-196710 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0603 12:07:08.529244   73294 ssh_runner.go:195] Run: crio config
	I0603 12:07:08.581601   73294 cni.go:84] Creating CNI manager for ""
	I0603 12:07:08.581625   73294 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0603 12:07:08.581641   73294 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0603 12:07:08.581667   73294 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.60 APIServerPort:8444 KubernetesVersion:v1.30.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-196710 NodeName:default-k8s-diff-port-196710 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.60"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.60 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/
ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0603 12:07:08.581854   73294 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.60
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-196710"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.60
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.60"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0603 12:07:08.581931   73294 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.1
	I0603 12:07:08.595708   73294 binaries.go:44] Found k8s binaries, skipping transfer
	I0603 12:07:08.595778   73294 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0603 12:07:08.608914   73294 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (327 bytes)
	I0603 12:07:08.627009   73294 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0603 12:07:08.643755   73294 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2169 bytes)
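The kubelet drop-in and kubeadm config shown above are rendered from the node's parameters (Kubernetes version, hostname override, node IP) and copied to the machine via scp. The small Go sketch below uses text/template to render an ExecStart override of the same shape; the template text is modeled on the log output above, not taken from minikube's source.

// kubeletflags_sketch.go - illustrative only.
package main

import (
	"os"
	"text/template"
)

type node struct {
	KubernetesVersion string
	Hostname          string
	NodeIP            string
}

// dropIn is an assumed template mirroring the [Service] override logged earlier
// in this block, not the exact file minikube writes.
const dropIn = `[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/{{.KubernetesVersion}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override={{.Hostname}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}
`

func main() {
	t := template.Must(template.New("dropin").Parse(dropIn))
	// Values taken from the default-k8s-diff-port-196710 node in the log above.
	if err := t.Execute(os.Stdout, node{
		KubernetesVersion: "v1.30.1",
		Hostname:          "default-k8s-diff-port-196710",
		NodeIP:            "192.168.61.60",
	}); err != nil {
		panic(err)
	}
}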
	I0603 12:07:08.661803   73294 ssh_runner.go:195] Run: grep 192.168.61.60	control-plane.minikube.internal$ /etc/hosts
	I0603 12:07:08.665764   73294 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.60	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0603 12:07:08.678440   73294 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0603 12:07:08.797052   73294 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0603 12:07:08.814618   73294 certs.go:68] Setting up /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/default-k8s-diff-port-196710 for IP: 192.168.61.60
	I0603 12:07:08.814645   73294 certs.go:194] generating shared ca certs ...
	I0603 12:07:08.814665   73294 certs.go:226] acquiring lock for ca certs: {Name:mk50092df3bdecbf959926adc377ee06b75aacd9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 12:07:08.814863   73294 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19008-7755/.minikube/ca.key
	I0603 12:07:08.814931   73294 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19008-7755/.minikube/proxy-client-ca.key
	I0603 12:07:08.814945   73294 certs.go:256] generating profile certs ...
	I0603 12:07:08.815072   73294 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/default-k8s-diff-port-196710/client.key
	I0603 12:07:08.815150   73294 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/default-k8s-diff-port-196710/apiserver.key.fd40708e
	I0603 12:07:08.815210   73294 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/default-k8s-diff-port-196710/proxy-client.key
	I0603 12:07:08.815370   73294 certs.go:484] found cert: /home/jenkins/minikube-integration/19008-7755/.minikube/certs/15028.pem (1338 bytes)
	W0603 12:07:08.815408   73294 certs.go:480] ignoring /home/jenkins/minikube-integration/19008-7755/.minikube/certs/15028_empty.pem, impossibly tiny 0 bytes
	I0603 12:07:08.815421   73294 certs.go:484] found cert: /home/jenkins/minikube-integration/19008-7755/.minikube/certs/ca-key.pem (1679 bytes)
	I0603 12:07:08.815467   73294 certs.go:484] found cert: /home/jenkins/minikube-integration/19008-7755/.minikube/certs/ca.pem (1082 bytes)
	I0603 12:07:08.815501   73294 certs.go:484] found cert: /home/jenkins/minikube-integration/19008-7755/.minikube/certs/cert.pem (1123 bytes)
	I0603 12:07:08.815529   73294 certs.go:484] found cert: /home/jenkins/minikube-integration/19008-7755/.minikube/certs/key.pem (1679 bytes)
	I0603 12:07:08.815581   73294 certs.go:484] found cert: /home/jenkins/minikube-integration/19008-7755/.minikube/files/etc/ssl/certs/150282.pem (1708 bytes)
	I0603 12:07:08.816420   73294 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0603 12:07:08.852241   73294 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0603 12:07:08.892369   73294 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0603 12:07:08.924242   73294 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0603 12:07:08.952908   73294 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/default-k8s-diff-port-196710/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0603 12:07:09.002060   73294 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/default-k8s-diff-port-196710/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0603 12:07:09.035617   73294 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/default-k8s-diff-port-196710/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0603 12:07:09.063304   73294 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/default-k8s-diff-port-196710/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0603 12:07:09.090994   73294 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/certs/15028.pem --> /usr/share/ca-certificates/15028.pem (1338 bytes)
	I0603 12:07:09.122568   73294 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/files/etc/ssl/certs/150282.pem --> /usr/share/ca-certificates/150282.pem (1708 bytes)
	I0603 12:07:09.150432   73294 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0603 12:07:09.178940   73294 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0603 12:07:09.202491   73294 ssh_runner.go:195] Run: openssl version
	I0603 12:07:09.211182   73294 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15028.pem && ln -fs /usr/share/ca-certificates/15028.pem /etc/ssl/certs/15028.pem"
	I0603 12:07:09.226290   73294 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15028.pem
	I0603 12:07:09.232034   73294 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jun  3 10:51 /usr/share/ca-certificates/15028.pem
	I0603 12:07:09.232103   73294 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15028.pem
	I0603 12:07:09.240592   73294 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/15028.pem /etc/ssl/certs/51391683.0"
	I0603 12:07:09.255018   73294 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/150282.pem && ln -fs /usr/share/ca-certificates/150282.pem /etc/ssl/certs/150282.pem"
	I0603 12:07:09.267194   73294 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/150282.pem
	I0603 12:07:09.272575   73294 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jun  3 10:51 /usr/share/ca-certificates/150282.pem
	I0603 12:07:09.272658   73294 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/150282.pem
	I0603 12:07:09.280687   73294 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/150282.pem /etc/ssl/certs/3ec20f2e.0"
	I0603 12:07:09.296232   73294 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0603 12:07:09.309706   73294 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0603 12:07:09.315596   73294 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jun  3 10:39 /usr/share/ca-certificates/minikubeCA.pem
	I0603 12:07:09.315661   73294 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0603 12:07:09.323283   73294 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0603 12:07:09.337780   73294 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0603 12:07:09.343627   73294 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0603 12:07:09.351742   73294 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0603 12:07:09.360465   73294 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0603 12:07:09.366733   73294 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0603 12:07:09.373061   73294 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0603 12:07:09.379649   73294 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0603 12:07:09.385610   73294 kubeadm.go:391] StartCluster: {Name:default-k8s-diff-port-196710 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVe
rsion:v1.30.1 ClusterName:default-k8s-diff-port-196710 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.60 Port:8444 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:
26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0603 12:07:09.385694   73294 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0603 12:07:09.385732   73294 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0603 12:07:09.434544   73294 cri.go:89] found id: ""
	I0603 12:07:09.434636   73294 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0603 12:07:09.446209   73294 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0603 12:07:09.446231   73294 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0603 12:07:09.446236   73294 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0603 12:07:09.446283   73294 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0603 12:07:09.456225   73294 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0603 12:07:09.457266   73294 kubeconfig.go:125] found "default-k8s-diff-port-196710" server: "https://192.168.61.60:8444"
	I0603 12:07:09.459519   73294 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0603 12:07:09.468977   73294 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.61.60
	I0603 12:07:09.469007   73294 kubeadm.go:1154] stopping kube-system containers ...
	I0603 12:07:09.469020   73294 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0603 12:07:09.469070   73294 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0603 12:07:09.508306   73294 cri.go:89] found id: ""
	I0603 12:07:09.508408   73294 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0603 12:07:09.526082   73294 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0603 12:07:09.536331   73294 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0603 12:07:09.536361   73294 kubeadm.go:156] found existing configuration files:
	
	I0603 12:07:09.536430   73294 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0603 12:07:09.549053   73294 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0603 12:07:09.549121   73294 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0603 12:07:09.562617   73294 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0603 12:07:09.574968   73294 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0603 12:07:09.575023   73294 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0603 12:07:05.287726   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | domain old-k8s-version-905554 has defined MAC address 52:54:00:3d:ed:07 in network mk-old-k8s-version-905554
	I0603 12:07:05.288228   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | unable to find current IP address of domain old-k8s-version-905554 in network mk-old-k8s-version-905554
	I0603 12:07:05.288264   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | I0603 12:07:05.288180   74674 retry.go:31] will retry after 401.575259ms: waiting for machine to come up
	I0603 12:07:05.691523   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | domain old-k8s-version-905554 has defined MAC address 52:54:00:3d:ed:07 in network mk-old-k8s-version-905554
	I0603 12:07:05.691945   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | unable to find current IP address of domain old-k8s-version-905554 in network mk-old-k8s-version-905554
	I0603 12:07:05.691977   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | I0603 12:07:05.691899   74674 retry.go:31] will retry after 473.67071ms: waiting for machine to come up
	I0603 12:07:06.167720   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | domain old-k8s-version-905554 has defined MAC address 52:54:00:3d:ed:07 in network mk-old-k8s-version-905554
	I0603 12:07:06.168286   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | unable to find current IP address of domain old-k8s-version-905554 in network mk-old-k8s-version-905554
	I0603 12:07:06.168317   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | I0603 12:07:06.168229   74674 retry.go:31] will retry after 610.631851ms: waiting for machine to come up
	I0603 12:07:06.780253   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | domain old-k8s-version-905554 has defined MAC address 52:54:00:3d:ed:07 in network mk-old-k8s-version-905554
	I0603 12:07:06.780747   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | unable to find current IP address of domain old-k8s-version-905554 in network mk-old-k8s-version-905554
	I0603 12:07:06.780771   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | I0603 12:07:06.780699   74674 retry.go:31] will retry after 1.150068976s: waiting for machine to come up
	I0603 12:07:07.932831   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | domain old-k8s-version-905554 has defined MAC address 52:54:00:3d:ed:07 in network mk-old-k8s-version-905554
	I0603 12:07:07.933375   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | unable to find current IP address of domain old-k8s-version-905554 in network mk-old-k8s-version-905554
	I0603 12:07:07.933409   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | I0603 12:07:07.933282   74674 retry.go:31] will retry after 900.546424ms: waiting for machine to come up
	I0603 12:07:08.835303   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | domain old-k8s-version-905554 has defined MAC address 52:54:00:3d:ed:07 in network mk-old-k8s-version-905554
	I0603 12:07:08.835794   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | unable to find current IP address of domain old-k8s-version-905554 in network mk-old-k8s-version-905554
	I0603 12:07:08.835827   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | I0603 12:07:08.835739   74674 retry.go:31] will retry after 1.64990511s: waiting for machine to come up
	I0603 12:07:08.960402   73179 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0603 12:07:08.971814   73179 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0603 12:07:08.989522   73179 system_pods.go:43] waiting for kube-system pods to appear ...
	I0603 12:07:09.001926   73179 system_pods.go:59] 8 kube-system pods found
	I0603 12:07:09.001960   73179 system_pods.go:61] "coredns-7db6d8ff4d-pv665" [58d7a423-2ac7-4a57-a76f-e8dfaeac9732] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0603 12:07:09.001975   73179 system_pods.go:61] "etcd-no-preload-602118" [3a6a1eb1-0234-47d8-8eaa-e6f2de5fc7b8] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0603 12:07:09.001987   73179 system_pods.go:61] "kube-apiserver-no-preload-602118" [d6b168b3-1605-4e04-8c6a-c5c22a080a10] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0603 12:07:09.001998   73179 system_pods.go:61] "kube-controller-manager-no-preload-602118" [b045e945-f022-443d-b0f6-17f0b283f8fa] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0603 12:07:09.002010   73179 system_pods.go:61] "kube-proxy-r9fkt" [10eef751-51d7-4794-9805-26587a395a5b] Running
	I0603 12:07:09.002019   73179 system_pods.go:61] "kube-scheduler-no-preload-602118" [2032b4c9-ff95-4435-bbb2-ad6f87598555] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0603 12:07:09.002030   73179 system_pods.go:61] "metrics-server-569cc877fc-jgjzt" [ac1aac82-0d34-47e1-b9c5-4f1f501c8bd0] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0603 12:07:09.002035   73179 system_pods.go:61] "storage-provisioner" [6d38abd9-e1e6-4e71-b96f-4653971b511f] Running
	I0603 12:07:09.002044   73179 system_pods.go:74] duration metric: took 12.497722ms to wait for pod list to return data ...
	I0603 12:07:09.002059   73179 node_conditions.go:102] verifying NodePressure condition ...
	I0603 12:07:09.005347   73179 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0603 12:07:09.005374   73179 node_conditions.go:123] node cpu capacity is 2
	I0603 12:07:09.005394   73179 node_conditions.go:105] duration metric: took 3.3294ms to run NodePressure ...
	I0603 12:07:09.005414   73179 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0603 12:07:09.274344   73179 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0603 12:07:09.280021   73179 kubeadm.go:733] kubelet initialised
	I0603 12:07:09.280042   73179 kubeadm.go:734] duration metric: took 5.676641ms waiting for restarted kubelet to initialise ...
	I0603 12:07:09.280056   73179 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0603 12:07:09.285090   73179 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-pv665" in "kube-system" namespace to be "Ready" ...
	I0603 12:07:09.290457   73179 pod_ready.go:97] node "no-preload-602118" hosting pod "coredns-7db6d8ff4d-pv665" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-602118" has status "Ready":"False"
	I0603 12:07:09.290478   73179 pod_ready.go:81] duration metric: took 5.366255ms for pod "coredns-7db6d8ff4d-pv665" in "kube-system" namespace to be "Ready" ...
	E0603 12:07:09.290487   73179 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-602118" hosting pod "coredns-7db6d8ff4d-pv665" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-602118" has status "Ready":"False"
	I0603 12:07:09.290495   73179 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-602118" in "kube-system" namespace to be "Ready" ...
	I0603 12:07:09.296847   73179 pod_ready.go:97] node "no-preload-602118" hosting pod "etcd-no-preload-602118" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-602118" has status "Ready":"False"
	I0603 12:07:09.296872   73179 pod_ready.go:81] duration metric: took 6.368777ms for pod "etcd-no-preload-602118" in "kube-system" namespace to be "Ready" ...
	E0603 12:07:09.296883   73179 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-602118" hosting pod "etcd-no-preload-602118" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-602118" has status "Ready":"False"
	I0603 12:07:09.296895   73179 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-602118" in "kube-system" namespace to be "Ready" ...
	I0603 12:07:09.300895   73179 pod_ready.go:97] node "no-preload-602118" hosting pod "kube-apiserver-no-preload-602118" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-602118" has status "Ready":"False"
	I0603 12:07:09.300914   73179 pod_ready.go:81] duration metric: took 4.012614ms for pod "kube-apiserver-no-preload-602118" in "kube-system" namespace to be "Ready" ...
	E0603 12:07:09.300922   73179 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-602118" hosting pod "kube-apiserver-no-preload-602118" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-602118" has status "Ready":"False"
	I0603 12:07:09.300927   73179 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-602118" in "kube-system" namespace to be "Ready" ...
	I0603 12:07:09.394237   73179 pod_ready.go:97] node "no-preload-602118" hosting pod "kube-controller-manager-no-preload-602118" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-602118" has status "Ready":"False"
	I0603 12:07:09.394267   73179 pod_ready.go:81] duration metric: took 93.331406ms for pod "kube-controller-manager-no-preload-602118" in "kube-system" namespace to be "Ready" ...
	E0603 12:07:09.394280   73179 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-602118" hosting pod "kube-controller-manager-no-preload-602118" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-602118" has status "Ready":"False"
	I0603 12:07:09.394289   73179 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-r9fkt" in "kube-system" namespace to be "Ready" ...
	I0603 12:07:09.585502   73294 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0603 12:07:09.969462   73294 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0603 12:07:09.969522   73294 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0603 12:07:09.979025   73294 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0603 12:07:09.987866   73294 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0603 12:07:09.987920   73294 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0603 12:07:09.997090   73294 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0603 12:07:10.006350   73294 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0603 12:07:10.214287   73294 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0603 12:07:11.298009   73294 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.083680634s)
	I0603 12:07:11.298064   73294 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0603 12:07:11.562011   73294 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0603 12:07:11.680895   73294 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0603 12:07:11.790078   73294 api_server.go:52] waiting for apiserver process to appear ...
	I0603 12:07:11.790166   73294 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:07:12.291115   73294 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:07:12.790366   73294 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:07:12.840813   73294 api_server.go:72] duration metric: took 1.050741427s to wait for apiserver process to appear ...
	I0603 12:07:12.840845   73294 api_server.go:88] waiting for apiserver healthz status ...
	I0603 12:07:12.840869   73294 api_server.go:253] Checking apiserver healthz at https://192.168.61.60:8444/healthz ...
	I0603 12:07:12.841376   73294 api_server.go:269] stopped: https://192.168.61.60:8444/healthz: Get "https://192.168.61.60:8444/healthz": dial tcp 192.168.61.60:8444: connect: connection refused
	I0603 12:07:13.341000   73294 api_server.go:253] Checking apiserver healthz at https://192.168.61.60:8444/healthz ...
	I0603 12:07:10.487141   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | domain old-k8s-version-905554 has defined MAC address 52:54:00:3d:ed:07 in network mk-old-k8s-version-905554
	I0603 12:07:10.564570   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | unable to find current IP address of domain old-k8s-version-905554 in network mk-old-k8s-version-905554
	I0603 12:07:10.564611   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | I0603 12:07:10.487617   74674 retry.go:31] will retry after 1.948227414s: waiting for machine to come up
	I0603 12:07:12.438091   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | domain old-k8s-version-905554 has defined MAC address 52:54:00:3d:ed:07 in network mk-old-k8s-version-905554
	I0603 12:07:12.438596   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | unable to find current IP address of domain old-k8s-version-905554 in network mk-old-k8s-version-905554
	I0603 12:07:12.438620   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | I0603 12:07:12.438540   74674 retry.go:31] will retry after 2.378980516s: waiting for machine to come up
	I0603 12:07:14.819161   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | domain old-k8s-version-905554 has defined MAC address 52:54:00:3d:ed:07 in network mk-old-k8s-version-905554
	I0603 12:07:14.819782   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | unable to find current IP address of domain old-k8s-version-905554 in network mk-old-k8s-version-905554
	I0603 12:07:14.819806   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | I0603 12:07:14.819722   74674 retry.go:31] will retry after 2.362614226s: waiting for machine to come up
	I0603 12:07:11.067879   73179 pod_ready.go:92] pod "kube-proxy-r9fkt" in "kube-system" namespace has status "Ready":"True"
	I0603 12:07:11.067907   73179 pod_ready.go:81] duration metric: took 1.673607925s for pod "kube-proxy-r9fkt" in "kube-system" namespace to be "Ready" ...
	I0603 12:07:11.067922   73179 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-602118" in "kube-system" namespace to be "Ready" ...
	I0603 12:07:13.078490   73179 pod_ready.go:102] pod "kube-scheduler-no-preload-602118" in "kube-system" namespace has status "Ready":"False"
	I0603 12:07:15.451457   73294 api_server.go:279] https://192.168.61.60:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0603 12:07:15.451491   73294 api_server.go:103] status: https://192.168.61.60:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0603 12:07:15.451509   73294 api_server.go:253] Checking apiserver healthz at https://192.168.61.60:8444/healthz ...
	I0603 12:07:15.474239   73294 api_server.go:279] https://192.168.61.60:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0603 12:07:15.474272   73294 api_server.go:103] status: https://192.168.61.60:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0603 12:07:15.841786   73294 api_server.go:253] Checking apiserver healthz at https://192.168.61.60:8444/healthz ...
	I0603 12:07:15.846026   73294 api_server.go:279] https://192.168.61.60:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0603 12:07:15.846051   73294 api_server.go:103] status: https://192.168.61.60:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0603 12:07:16.341687   73294 api_server.go:253] Checking apiserver healthz at https://192.168.61.60:8444/healthz ...
	I0603 12:07:16.348062   73294 api_server.go:279] https://192.168.61.60:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0603 12:07:16.348097   73294 api_server.go:103] status: https://192.168.61.60:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0603 12:07:16.841677   73294 api_server.go:253] Checking apiserver healthz at https://192.168.61.60:8444/healthz ...
	I0603 12:07:16.851931   73294 api_server.go:279] https://192.168.61.60:8444/healthz returned 200:
	ok
	I0603 12:07:16.861724   73294 api_server.go:141] control plane version: v1.30.1
	I0603 12:07:16.861752   73294 api_server.go:131] duration metric: took 4.020899633s to wait for apiserver health ...
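	[Editor's illustration] The lines above show minikube polling the restarted apiserver's /healthz endpoint until it answers 200, retrying through the intermediate 403 ("system:anonymous" before RBAC bootstrap) and 500 (poststarthooks still failing) responses. Below is a minimal, self-contained Go sketch of that polling pattern. The function name, retry interval, timeout and hard-coded URL are assumptions for illustration only; this is not minikube's actual api_server.go code.

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	// waitForHealthz keeps hitting the apiserver healthz URL until it returns
	// 200 "ok" or the overall timeout expires. Non-200 responses (403, 500)
	// are treated as "not ready yet" and retried, matching the log above.
	func waitForHealthz(url string, timeout time.Duration) error {
		// The apiserver presents a self-signed certificate during bring-up,
		// so certificate verification is skipped in this sketch.
		client := &http.Client{
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
			Timeout:   5 * time.Second,
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					fmt.Printf("healthz returned 200: %s\n", body)
					return nil
				}
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("timed out waiting for %s", url)
	}

	func main() {
		// URL taken from the log above; the 4m budget is an assumption.
		if err := waitForHealthz("https://192.168.61.60:8444/healthz", 4*time.Minute); err != nil {
			panic(err)
		}
	}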
	I0603 12:07:16.861762   73294 cni.go:84] Creating CNI manager for ""
	I0603 12:07:16.861782   73294 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0603 12:07:16.863553   73294 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0603 12:07:16.864875   73294 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0603 12:07:16.875581   73294 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0603 12:07:16.895092   73294 system_pods.go:43] waiting for kube-system pods to appear ...
	I0603 12:07:16.906573   73294 system_pods.go:59] 8 kube-system pods found
	I0603 12:07:16.906609   73294 system_pods.go:61] "coredns-7db6d8ff4d-wrw9f" [0125eb3a-9a5a-4bb3-a175-0e49b4392d1e] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0603 12:07:16.906621   73294 system_pods.go:61] "etcd-default-k8s-diff-port-196710" [2189cad5-b6e7-4cc5-9ce8-22ba18abce59] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0603 12:07:16.906631   73294 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-196710" [1aee234a-8876-4594-a0d6-7c7dfb7f4d3e] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0603 12:07:16.906640   73294 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-196710" [18029d80-921c-477c-a82f-26eb1a068b97] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0603 12:07:16.906650   73294 system_pods.go:61] "kube-proxy-84l9f" [5568c7a8-5237-4240-a9dc-6436b156010c] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0603 12:07:16.906673   73294 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-196710" [9fafec03-b5fb-4ea4-98df-0798cd8a01a1] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0603 12:07:16.906681   73294 system_pods.go:61] "metrics-server-569cc877fc-tnhbj" [352fbe10-2f52-434e-91fc-84fbf439a42d] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0603 12:07:16.906690   73294 system_pods.go:61] "storage-provisioner" [24c5e290-d3d7-4523-9432-c7591fa95e18] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0603 12:07:16.906700   73294 system_pods.go:74] duration metric: took 11.592885ms to wait for pod list to return data ...
	I0603 12:07:16.906719   73294 node_conditions.go:102] verifying NodePressure condition ...
	I0603 12:07:16.910038   73294 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0603 12:07:16.910065   73294 node_conditions.go:123] node cpu capacity is 2
	I0603 12:07:16.910079   73294 node_conditions.go:105] duration metric: took 3.350705ms to run NodePressure ...
	I0603 12:07:16.910101   73294 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0603 12:07:17.203847   73294 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0603 12:07:17.208169   73294 kubeadm.go:733] kubelet initialised
	I0603 12:07:17.208196   73294 kubeadm.go:734] duration metric: took 4.31857ms waiting for restarted kubelet to initialise ...
	I0603 12:07:17.208206   73294 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0603 12:07:17.213480   73294 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-wrw9f" in "kube-system" namespace to be "Ready" ...
	I0603 12:07:17.227906   73294 pod_ready.go:97] node "default-k8s-diff-port-196710" hosting pod "coredns-7db6d8ff4d-wrw9f" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-196710" has status "Ready":"False"
	I0603 12:07:17.227931   73294 pod_ready.go:81] duration metric: took 14.426593ms for pod "coredns-7db6d8ff4d-wrw9f" in "kube-system" namespace to be "Ready" ...
	E0603 12:07:17.227941   73294 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-196710" hosting pod "coredns-7db6d8ff4d-wrw9f" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-196710" has status "Ready":"False"
	I0603 12:07:17.227949   73294 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-196710" in "kube-system" namespace to be "Ready" ...
	I0603 12:07:17.231837   73294 pod_ready.go:97] node "default-k8s-diff-port-196710" hosting pod "etcd-default-k8s-diff-port-196710" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-196710" has status "Ready":"False"
	I0603 12:07:17.231867   73294 pod_ready.go:81] duration metric: took 3.906779ms for pod "etcd-default-k8s-diff-port-196710" in "kube-system" namespace to be "Ready" ...
	E0603 12:07:17.231881   73294 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-196710" hosting pod "etcd-default-k8s-diff-port-196710" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-196710" has status "Ready":"False"
	I0603 12:07:17.231890   73294 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-196710" in "kube-system" namespace to be "Ready" ...
	I0603 12:07:17.238497   73294 pod_ready.go:97] node "default-k8s-diff-port-196710" hosting pod "kube-apiserver-default-k8s-diff-port-196710" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-196710" has status "Ready":"False"
	I0603 12:07:17.238525   73294 pod_ready.go:81] duration metric: took 6.62644ms for pod "kube-apiserver-default-k8s-diff-port-196710" in "kube-system" namespace to be "Ready" ...
	E0603 12:07:17.238537   73294 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-196710" hosting pod "kube-apiserver-default-k8s-diff-port-196710" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-196710" has status "Ready":"False"
	I0603 12:07:17.238557   73294 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-196710" in "kube-system" namespace to be "Ready" ...
	I0603 12:07:17.298265   73294 pod_ready.go:97] node "default-k8s-diff-port-196710" hosting pod "kube-controller-manager-default-k8s-diff-port-196710" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-196710" has status "Ready":"False"
	I0603 12:07:17.298293   73294 pod_ready.go:81] duration metric: took 59.722372ms for pod "kube-controller-manager-default-k8s-diff-port-196710" in "kube-system" namespace to be "Ready" ...
	E0603 12:07:17.298303   73294 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-196710" hosting pod "kube-controller-manager-default-k8s-diff-port-196710" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-196710" has status "Ready":"False"
	I0603 12:07:17.298310   73294 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-84l9f" in "kube-system" namespace to be "Ready" ...
	I0603 12:07:18.098358   73294 pod_ready.go:92] pod "kube-proxy-84l9f" in "kube-system" namespace has status "Ready":"True"
	I0603 12:07:18.098388   73294 pod_ready.go:81] duration metric: took 800.069928ms for pod "kube-proxy-84l9f" in "kube-system" namespace to be "Ready" ...
	I0603 12:07:18.098401   73294 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-196710" in "kube-system" namespace to be "Ready" ...
	I0603 12:07:17.184410   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | domain old-k8s-version-905554 has defined MAC address 52:54:00:3d:ed:07 in network mk-old-k8s-version-905554
	I0603 12:07:17.184937   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | unable to find current IP address of domain old-k8s-version-905554 in network mk-old-k8s-version-905554
	I0603 12:07:17.184967   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | I0603 12:07:17.184893   74674 retry.go:31] will retry after 3.787322948s: waiting for machine to come up
	I0603 12:07:15.574365   73179 pod_ready.go:102] pod "kube-scheduler-no-preload-602118" in "kube-system" namespace has status "Ready":"False"
	I0603 12:07:17.575261   73179 pod_ready.go:102] pod "kube-scheduler-no-preload-602118" in "kube-system" namespace has status "Ready":"False"
	I0603 12:07:20.073582   73179 pod_ready.go:102] pod "kube-scheduler-no-preload-602118" in "kube-system" namespace has status "Ready":"False"
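	[Editor's illustration] The pod_ready.go lines above record the "wait up to 4m0s for each system-critical pod to report Ready" loop. The following client-go sketch shows that pattern in isolation; the kubeconfig path, polling interval, namespace and pod name are placeholders chosen for the example, not minikube's implementation.

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Placeholder kubeconfig path; minikube would use the profile's kubeconfig.
		cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
		if err != nil {
			panic(err)
		}
		client, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		// Poll every 2s and give up after 4m0s, the same budget the log reports.
		err = wait.PollUntilContextTimeout(context.Background(), 2*time.Second, 4*time.Minute, true,
			func(ctx context.Context) (bool, error) {
				pod, err := client.CoreV1().Pods("kube-system").Get(ctx, "kube-scheduler-no-preload-602118", metav1.GetOptions{})
				if err != nil {
					return false, nil // transient API errors: keep retrying
				}
				// A pod counts as ready once its PodReady condition is True.
				for _, cond := range pod.Status.Conditions {
					if cond.Type == corev1.PodReady {
						return cond.Status == corev1.ConditionTrue, nil
					}
				}
				return false, nil
			})
		if err != nil {
			panic(err)
		}
		fmt.Println("pod is Ready")
	}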
	I0603 12:07:22.423964   72964 start.go:364] duration metric: took 54.978859199s to acquireMachinesLock for "embed-certs-725022"
	I0603 12:07:22.424033   72964 start.go:96] Skipping create...Using existing machine configuration
	I0603 12:07:22.424044   72964 fix.go:54] fixHost starting: 
	I0603 12:07:22.424484   72964 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 12:07:22.424521   72964 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 12:07:22.446913   72964 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45395
	I0603 12:07:22.447356   72964 main.go:141] libmachine: () Calling .GetVersion
	I0603 12:07:22.447895   72964 main.go:141] libmachine: Using API Version  1
	I0603 12:07:22.447926   72964 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 12:07:22.448408   72964 main.go:141] libmachine: () Calling .GetMachineName
	I0603 12:07:22.448648   72964 main.go:141] libmachine: (embed-certs-725022) Calling .DriverName
	I0603 12:07:22.448838   72964 main.go:141] libmachine: (embed-certs-725022) Calling .GetState
	I0603 12:07:22.450953   72964 fix.go:112] recreateIfNeeded on embed-certs-725022: state=Stopped err=<nil>
	I0603 12:07:22.450977   72964 main.go:141] libmachine: (embed-certs-725022) Calling .DriverName
	W0603 12:07:22.451199   72964 fix.go:138] unexpected machine state, will restart: <nil>
	I0603 12:07:22.513348   72964 out.go:177] * Restarting existing kvm2 VM for "embed-certs-725022" ...
	I0603 12:07:20.975695   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | domain old-k8s-version-905554 has defined MAC address 52:54:00:3d:ed:07 in network mk-old-k8s-version-905554
	I0603 12:07:20.976290   73662 main.go:141] libmachine: (old-k8s-version-905554) Found IP for machine: 192.168.39.155
	I0603 12:07:20.976345   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | domain old-k8s-version-905554 has current primary IP address 192.168.39.155 and MAC address 52:54:00:3d:ed:07 in network mk-old-k8s-version-905554
	I0603 12:07:20.976358   73662 main.go:141] libmachine: (old-k8s-version-905554) Reserving static IP address...
	I0603 12:07:20.976837   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | found host DHCP lease matching {name: "old-k8s-version-905554", mac: "52:54:00:3d:ed:07", ip: "192.168.39.155"} in network mk-old-k8s-version-905554: {Iface:virbr1 ExpiryTime:2024-06-03 13:07:14 +0000 UTC Type:0 Mac:52:54:00:3d:ed:07 Iaid: IPaddr:192.168.39.155 Prefix:24 Hostname:old-k8s-version-905554 Clientid:01:52:54:00:3d:ed:07}
	I0603 12:07:20.976864   73662 main.go:141] libmachine: (old-k8s-version-905554) Reserved static IP address: 192.168.39.155
	I0603 12:07:20.976883   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | skip adding static IP to network mk-old-k8s-version-905554 - found existing host DHCP lease matching {name: "old-k8s-version-905554", mac: "52:54:00:3d:ed:07", ip: "192.168.39.155"}
	I0603 12:07:20.976894   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | Getting to WaitForSSH function...
	I0603 12:07:20.976902   73662 main.go:141] libmachine: (old-k8s-version-905554) Waiting for SSH to be available...
	I0603 12:07:20.978969   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | domain old-k8s-version-905554 has defined MAC address 52:54:00:3d:ed:07 in network mk-old-k8s-version-905554
	I0603 12:07:20.979326   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:ed:07", ip: ""} in network mk-old-k8s-version-905554: {Iface:virbr1 ExpiryTime:2024-06-03 13:07:14 +0000 UTC Type:0 Mac:52:54:00:3d:ed:07 Iaid: IPaddr:192.168.39.155 Prefix:24 Hostname:old-k8s-version-905554 Clientid:01:52:54:00:3d:ed:07}
	I0603 12:07:20.979361   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | domain old-k8s-version-905554 has defined IP address 192.168.39.155 and MAC address 52:54:00:3d:ed:07 in network mk-old-k8s-version-905554
	I0603 12:07:20.979458   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | Using SSH client type: external
	I0603 12:07:20.979488   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | Using SSH private key: /home/jenkins/minikube-integration/19008-7755/.minikube/machines/old-k8s-version-905554/id_rsa (-rw-------)
	I0603 12:07:20.979525   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.155 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19008-7755/.minikube/machines/old-k8s-version-905554/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0603 12:07:20.979540   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | About to run SSH command:
	I0603 12:07:20.979564   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | exit 0
	I0603 12:07:21.103178   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | SSH cmd err, output: <nil>: 
	I0603 12:07:21.103557   73662 main.go:141] libmachine: (old-k8s-version-905554) Calling .GetConfigRaw
	I0603 12:07:21.104215   73662 main.go:141] libmachine: (old-k8s-version-905554) Calling .GetIP
	I0603 12:07:21.107017   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | domain old-k8s-version-905554 has defined MAC address 52:54:00:3d:ed:07 in network mk-old-k8s-version-905554
	I0603 12:07:21.107397   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:ed:07", ip: ""} in network mk-old-k8s-version-905554: {Iface:virbr1 ExpiryTime:2024-06-03 13:07:14 +0000 UTC Type:0 Mac:52:54:00:3d:ed:07 Iaid: IPaddr:192.168.39.155 Prefix:24 Hostname:old-k8s-version-905554 Clientid:01:52:54:00:3d:ed:07}
	I0603 12:07:21.107424   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | domain old-k8s-version-905554 has defined IP address 192.168.39.155 and MAC address 52:54:00:3d:ed:07 in network mk-old-k8s-version-905554
	I0603 12:07:21.107619   73662 profile.go:143] Saving config to /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/old-k8s-version-905554/config.json ...
	I0603 12:07:21.107782   73662 machine.go:94] provisionDockerMachine start ...
	I0603 12:07:21.107809   73662 main.go:141] libmachine: (old-k8s-version-905554) Calling .DriverName
	I0603 12:07:21.107979   73662 main.go:141] libmachine: (old-k8s-version-905554) Calling .GetSSHHostname
	I0603 12:07:21.110021   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | domain old-k8s-version-905554 has defined MAC address 52:54:00:3d:ed:07 in network mk-old-k8s-version-905554
	I0603 12:07:21.110389   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:ed:07", ip: ""} in network mk-old-k8s-version-905554: {Iface:virbr1 ExpiryTime:2024-06-03 13:07:14 +0000 UTC Type:0 Mac:52:54:00:3d:ed:07 Iaid: IPaddr:192.168.39.155 Prefix:24 Hostname:old-k8s-version-905554 Clientid:01:52:54:00:3d:ed:07}
	I0603 12:07:21.110414   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | domain old-k8s-version-905554 has defined IP address 192.168.39.155 and MAC address 52:54:00:3d:ed:07 in network mk-old-k8s-version-905554
	I0603 12:07:21.110540   73662 main.go:141] libmachine: (old-k8s-version-905554) Calling .GetSSHPort
	I0603 12:07:21.110719   73662 main.go:141] libmachine: (old-k8s-version-905554) Calling .GetSSHKeyPath
	I0603 12:07:21.110880   73662 main.go:141] libmachine: (old-k8s-version-905554) Calling .GetSSHKeyPath
	I0603 12:07:21.111026   73662 main.go:141] libmachine: (old-k8s-version-905554) Calling .GetSSHUsername
	I0603 12:07:21.111239   73662 main.go:141] libmachine: Using SSH client type: native
	I0603 12:07:21.111467   73662 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.155 22 <nil> <nil>}
	I0603 12:07:21.111484   73662 main.go:141] libmachine: About to run SSH command:
	hostname
	I0603 12:07:21.219123   73662 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0603 12:07:21.219148   73662 main.go:141] libmachine: (old-k8s-version-905554) Calling .GetMachineName
	I0603 12:07:21.219379   73662 buildroot.go:166] provisioning hostname "old-k8s-version-905554"
	I0603 12:07:21.219403   73662 main.go:141] libmachine: (old-k8s-version-905554) Calling .GetMachineName
	I0603 12:07:21.219571   73662 main.go:141] libmachine: (old-k8s-version-905554) Calling .GetSSHHostname
	I0603 12:07:21.222603   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | domain old-k8s-version-905554 has defined MAC address 52:54:00:3d:ed:07 in network mk-old-k8s-version-905554
	I0603 12:07:21.223000   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:ed:07", ip: ""} in network mk-old-k8s-version-905554: {Iface:virbr1 ExpiryTime:2024-06-03 13:07:14 +0000 UTC Type:0 Mac:52:54:00:3d:ed:07 Iaid: IPaddr:192.168.39.155 Prefix:24 Hostname:old-k8s-version-905554 Clientid:01:52:54:00:3d:ed:07}
	I0603 12:07:21.223058   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | domain old-k8s-version-905554 has defined IP address 192.168.39.155 and MAC address 52:54:00:3d:ed:07 in network mk-old-k8s-version-905554
	I0603 12:07:21.223210   73662 main.go:141] libmachine: (old-k8s-version-905554) Calling .GetSSHPort
	I0603 12:07:21.223406   73662 main.go:141] libmachine: (old-k8s-version-905554) Calling .GetSSHKeyPath
	I0603 12:07:21.223573   73662 main.go:141] libmachine: (old-k8s-version-905554) Calling .GetSSHKeyPath
	I0603 12:07:21.223741   73662 main.go:141] libmachine: (old-k8s-version-905554) Calling .GetSSHUsername
	I0603 12:07:21.223926   73662 main.go:141] libmachine: Using SSH client type: native
	I0603 12:07:21.224087   73662 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.155 22 <nil> <nil>}
	I0603 12:07:21.224099   73662 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-905554 && echo "old-k8s-version-905554" | sudo tee /etc/hostname
	I0603 12:07:21.346108   73662 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-905554
	
	I0603 12:07:21.346135   73662 main.go:141] libmachine: (old-k8s-version-905554) Calling .GetSSHHostname
	I0603 12:07:21.348801   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | domain old-k8s-version-905554 has defined MAC address 52:54:00:3d:ed:07 in network mk-old-k8s-version-905554
	I0603 12:07:21.349099   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:ed:07", ip: ""} in network mk-old-k8s-version-905554: {Iface:virbr1 ExpiryTime:2024-06-03 13:07:14 +0000 UTC Type:0 Mac:52:54:00:3d:ed:07 Iaid: IPaddr:192.168.39.155 Prefix:24 Hostname:old-k8s-version-905554 Clientid:01:52:54:00:3d:ed:07}
	I0603 12:07:21.349129   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | domain old-k8s-version-905554 has defined IP address 192.168.39.155 and MAC address 52:54:00:3d:ed:07 in network mk-old-k8s-version-905554
	I0603 12:07:21.349295   73662 main.go:141] libmachine: (old-k8s-version-905554) Calling .GetSSHPort
	I0603 12:07:21.349498   73662 main.go:141] libmachine: (old-k8s-version-905554) Calling .GetSSHKeyPath
	I0603 12:07:21.349680   73662 main.go:141] libmachine: (old-k8s-version-905554) Calling .GetSSHKeyPath
	I0603 12:07:21.349849   73662 main.go:141] libmachine: (old-k8s-version-905554) Calling .GetSSHUsername
	I0603 12:07:21.350036   73662 main.go:141] libmachine: Using SSH client type: native
	I0603 12:07:21.350187   73662 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.155 22 <nil> <nil>}
	I0603 12:07:21.350204   73662 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-905554' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-905554/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-905554' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0603 12:07:21.467941   73662 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0603 12:07:21.467970   73662 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19008-7755/.minikube CaCertPath:/home/jenkins/minikube-integration/19008-7755/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19008-7755/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19008-7755/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19008-7755/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19008-7755/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19008-7755/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19008-7755/.minikube}
	I0603 12:07:21.467999   73662 buildroot.go:174] setting up certificates
	I0603 12:07:21.468008   73662 provision.go:84] configureAuth start
	I0603 12:07:21.468021   73662 main.go:141] libmachine: (old-k8s-version-905554) Calling .GetMachineName
	I0603 12:07:21.468308   73662 main.go:141] libmachine: (old-k8s-version-905554) Calling .GetIP
	I0603 12:07:21.470801   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | domain old-k8s-version-905554 has defined MAC address 52:54:00:3d:ed:07 in network mk-old-k8s-version-905554
	I0603 12:07:21.471158   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:ed:07", ip: ""} in network mk-old-k8s-version-905554: {Iface:virbr1 ExpiryTime:2024-06-03 13:07:14 +0000 UTC Type:0 Mac:52:54:00:3d:ed:07 Iaid: IPaddr:192.168.39.155 Prefix:24 Hostname:old-k8s-version-905554 Clientid:01:52:54:00:3d:ed:07}
	I0603 12:07:21.471185   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | domain old-k8s-version-905554 has defined IP address 192.168.39.155 and MAC address 52:54:00:3d:ed:07 in network mk-old-k8s-version-905554
	I0603 12:07:21.471336   73662 main.go:141] libmachine: (old-k8s-version-905554) Calling .GetSSHHostname
	I0603 12:07:21.473733   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | domain old-k8s-version-905554 has defined MAC address 52:54:00:3d:ed:07 in network mk-old-k8s-version-905554
	I0603 12:07:21.474058   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:ed:07", ip: ""} in network mk-old-k8s-version-905554: {Iface:virbr1 ExpiryTime:2024-06-03 13:07:14 +0000 UTC Type:0 Mac:52:54:00:3d:ed:07 Iaid: IPaddr:192.168.39.155 Prefix:24 Hostname:old-k8s-version-905554 Clientid:01:52:54:00:3d:ed:07}
	I0603 12:07:21.474092   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | domain old-k8s-version-905554 has defined IP address 192.168.39.155 and MAC address 52:54:00:3d:ed:07 in network mk-old-k8s-version-905554
	I0603 12:07:21.474276   73662 provision.go:143] copyHostCerts
	I0603 12:07:21.474355   73662 exec_runner.go:144] found /home/jenkins/minikube-integration/19008-7755/.minikube/ca.pem, removing ...
	I0603 12:07:21.474370   73662 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19008-7755/.minikube/ca.pem
	I0603 12:07:21.474429   73662 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19008-7755/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19008-7755/.minikube/ca.pem (1082 bytes)
	I0603 12:07:21.474534   73662 exec_runner.go:144] found /home/jenkins/minikube-integration/19008-7755/.minikube/cert.pem, removing ...
	I0603 12:07:21.474546   73662 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19008-7755/.minikube/cert.pem
	I0603 12:07:21.474577   73662 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19008-7755/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19008-7755/.minikube/cert.pem (1123 bytes)
	I0603 12:07:21.474645   73662 exec_runner.go:144] found /home/jenkins/minikube-integration/19008-7755/.minikube/key.pem, removing ...
	I0603 12:07:21.474654   73662 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19008-7755/.minikube/key.pem
	I0603 12:07:21.474680   73662 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19008-7755/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19008-7755/.minikube/key.pem (1679 bytes)
	I0603 12:07:21.474738   73662 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19008-7755/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19008-7755/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19008-7755/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-905554 san=[127.0.0.1 192.168.39.155 localhost minikube old-k8s-version-905554]
	I0603 12:07:21.720184   73662 provision.go:177] copyRemoteCerts
	I0603 12:07:21.720251   73662 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0603 12:07:21.720284   73662 main.go:141] libmachine: (old-k8s-version-905554) Calling .GetSSHHostname
	I0603 12:07:21.723338   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | domain old-k8s-version-905554 has defined MAC address 52:54:00:3d:ed:07 in network mk-old-k8s-version-905554
	I0603 12:07:21.723752   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:ed:07", ip: ""} in network mk-old-k8s-version-905554: {Iface:virbr1 ExpiryTime:2024-06-03 13:07:14 +0000 UTC Type:0 Mac:52:54:00:3d:ed:07 Iaid: IPaddr:192.168.39.155 Prefix:24 Hostname:old-k8s-version-905554 Clientid:01:52:54:00:3d:ed:07}
	I0603 12:07:21.723786   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | domain old-k8s-version-905554 has defined IP address 192.168.39.155 and MAC address 52:54:00:3d:ed:07 in network mk-old-k8s-version-905554
	I0603 12:07:21.723993   73662 main.go:141] libmachine: (old-k8s-version-905554) Calling .GetSSHPort
	I0603 12:07:21.724208   73662 main.go:141] libmachine: (old-k8s-version-905554) Calling .GetSSHKeyPath
	I0603 12:07:21.724394   73662 main.go:141] libmachine: (old-k8s-version-905554) Calling .GetSSHUsername
	I0603 12:07:21.724615   73662 sshutil.go:53] new ssh client: &{IP:192.168.39.155 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19008-7755/.minikube/machines/old-k8s-version-905554/id_rsa Username:docker}
	I0603 12:07:21.809640   73662 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0603 12:07:21.834750   73662 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0603 12:07:21.858691   73662 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0603 12:07:21.887839   73662 provision.go:87] duration metric: took 419.817381ms to configureAuth
	I0603 12:07:21.887871   73662 buildroot.go:189] setting minikube options for container-runtime
	I0603 12:07:21.888061   73662 config.go:182] Loaded profile config "old-k8s-version-905554": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0603 12:07:21.888145   73662 main.go:141] libmachine: (old-k8s-version-905554) Calling .GetSSHHostname
	I0603 12:07:21.891350   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | domain old-k8s-version-905554 has defined MAC address 52:54:00:3d:ed:07 in network mk-old-k8s-version-905554
	I0603 12:07:21.891737   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:ed:07", ip: ""} in network mk-old-k8s-version-905554: {Iface:virbr1 ExpiryTime:2024-06-03 13:07:14 +0000 UTC Type:0 Mac:52:54:00:3d:ed:07 Iaid: IPaddr:192.168.39.155 Prefix:24 Hostname:old-k8s-version-905554 Clientid:01:52:54:00:3d:ed:07}
	I0603 12:07:21.891773   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | domain old-k8s-version-905554 has defined IP address 192.168.39.155 and MAC address 52:54:00:3d:ed:07 in network mk-old-k8s-version-905554
	I0603 12:07:21.891933   73662 main.go:141] libmachine: (old-k8s-version-905554) Calling .GetSSHPort
	I0603 12:07:21.892084   73662 main.go:141] libmachine: (old-k8s-version-905554) Calling .GetSSHKeyPath
	I0603 12:07:21.892278   73662 main.go:141] libmachine: (old-k8s-version-905554) Calling .GetSSHKeyPath
	I0603 12:07:21.892447   73662 main.go:141] libmachine: (old-k8s-version-905554) Calling .GetSSHUsername
	I0603 12:07:21.892608   73662 main.go:141] libmachine: Using SSH client type: native
	I0603 12:07:21.892822   73662 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.155 22 <nil> <nil>}
	I0603 12:07:21.892845   73662 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0603 12:07:22.173662   73662 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0603 12:07:22.173691   73662 machine.go:97] duration metric: took 1.065894044s to provisionDockerMachine
	I0603 12:07:22.173705   73662 start.go:293] postStartSetup for "old-k8s-version-905554" (driver="kvm2")
	I0603 12:07:22.173718   73662 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0603 12:07:22.173738   73662 main.go:141] libmachine: (old-k8s-version-905554) Calling .DriverName
	I0603 12:07:22.174119   73662 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0603 12:07:22.174154   73662 main.go:141] libmachine: (old-k8s-version-905554) Calling .GetSSHHostname
	I0603 12:07:22.176861   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | domain old-k8s-version-905554 has defined MAC address 52:54:00:3d:ed:07 in network mk-old-k8s-version-905554
	I0603 12:07:22.177152   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:ed:07", ip: ""} in network mk-old-k8s-version-905554: {Iface:virbr1 ExpiryTime:2024-06-03 13:07:14 +0000 UTC Type:0 Mac:52:54:00:3d:ed:07 Iaid: IPaddr:192.168.39.155 Prefix:24 Hostname:old-k8s-version-905554 Clientid:01:52:54:00:3d:ed:07}
	I0603 12:07:22.177184   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | domain old-k8s-version-905554 has defined IP address 192.168.39.155 and MAC address 52:54:00:3d:ed:07 in network mk-old-k8s-version-905554
	I0603 12:07:22.177325   73662 main.go:141] libmachine: (old-k8s-version-905554) Calling .GetSSHPort
	I0603 12:07:22.177505   73662 main.go:141] libmachine: (old-k8s-version-905554) Calling .GetSSHKeyPath
	I0603 12:07:22.177632   73662 main.go:141] libmachine: (old-k8s-version-905554) Calling .GetSSHUsername
	I0603 12:07:22.177764   73662 sshutil.go:53] new ssh client: &{IP:192.168.39.155 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19008-7755/.minikube/machines/old-k8s-version-905554/id_rsa Username:docker}
	I0603 12:07:22.263119   73662 ssh_runner.go:195] Run: cat /etc/os-release
	I0603 12:07:22.269815   73662 info.go:137] Remote host: Buildroot 2023.02.9
	I0603 12:07:22.269844   73662 filesync.go:126] Scanning /home/jenkins/minikube-integration/19008-7755/.minikube/addons for local assets ...
	I0603 12:07:22.269937   73662 filesync.go:126] Scanning /home/jenkins/minikube-integration/19008-7755/.minikube/files for local assets ...
	I0603 12:07:22.270041   73662 filesync.go:149] local asset: /home/jenkins/minikube-integration/19008-7755/.minikube/files/etc/ssl/certs/150282.pem -> 150282.pem in /etc/ssl/certs
	I0603 12:07:22.270320   73662 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0603 12:07:22.284032   73662 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/files/etc/ssl/certs/150282.pem --> /etc/ssl/certs/150282.pem (1708 bytes)
	I0603 12:07:22.309226   73662 start.go:296] duration metric: took 135.507592ms for postStartSetup
	I0603 12:07:22.309267   73662 fix.go:56] duration metric: took 19.425215079s for fixHost
	I0603 12:07:22.309291   73662 main.go:141] libmachine: (old-k8s-version-905554) Calling .GetSSHHostname
	I0603 12:07:22.311759   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | domain old-k8s-version-905554 has defined MAC address 52:54:00:3d:ed:07 in network mk-old-k8s-version-905554
	I0603 12:07:22.312031   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:ed:07", ip: ""} in network mk-old-k8s-version-905554: {Iface:virbr1 ExpiryTime:2024-06-03 13:07:14 +0000 UTC Type:0 Mac:52:54:00:3d:ed:07 Iaid: IPaddr:192.168.39.155 Prefix:24 Hostname:old-k8s-version-905554 Clientid:01:52:54:00:3d:ed:07}
	I0603 12:07:22.312062   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | domain old-k8s-version-905554 has defined IP address 192.168.39.155 and MAC address 52:54:00:3d:ed:07 in network mk-old-k8s-version-905554
	I0603 12:07:22.312244   73662 main.go:141] libmachine: (old-k8s-version-905554) Calling .GetSSHPort
	I0603 12:07:22.312436   73662 main.go:141] libmachine: (old-k8s-version-905554) Calling .GetSSHKeyPath
	I0603 12:07:22.312602   73662 main.go:141] libmachine: (old-k8s-version-905554) Calling .GetSSHKeyPath
	I0603 12:07:22.312740   73662 main.go:141] libmachine: (old-k8s-version-905554) Calling .GetSSHUsername
	I0603 12:07:22.312877   73662 main.go:141] libmachine: Using SSH client type: native
	I0603 12:07:22.313072   73662 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.155 22 <nil> <nil>}
	I0603 12:07:22.313088   73662 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0603 12:07:22.423838   73662 main.go:141] libmachine: SSH cmd err, output: <nil>: 1717416442.379680785
	
	I0603 12:07:22.423857   73662 fix.go:216] guest clock: 1717416442.379680785
	I0603 12:07:22.423864   73662 fix.go:229] Guest: 2024-06-03 12:07:22.379680785 +0000 UTC Remote: 2024-06-03 12:07:22.30927263 +0000 UTC m=+262.252197630 (delta=70.408155ms)
	I0603 12:07:22.423886   73662 fix.go:200] guest clock delta is within tolerance: 70.408155ms
	I0603 12:07:22.423892   73662 start.go:83] releasing machines lock for "old-k8s-version-905554", held for 19.539865965s
	I0603 12:07:22.423927   73662 main.go:141] libmachine: (old-k8s-version-905554) Calling .DriverName
	I0603 12:07:22.424202   73662 main.go:141] libmachine: (old-k8s-version-905554) Calling .GetIP
	I0603 12:07:22.427358   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | domain old-k8s-version-905554 has defined MAC address 52:54:00:3d:ed:07 in network mk-old-k8s-version-905554
	I0603 12:07:22.427799   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:ed:07", ip: ""} in network mk-old-k8s-version-905554: {Iface:virbr1 ExpiryTime:2024-06-03 13:07:14 +0000 UTC Type:0 Mac:52:54:00:3d:ed:07 Iaid: IPaddr:192.168.39.155 Prefix:24 Hostname:old-k8s-version-905554 Clientid:01:52:54:00:3d:ed:07}
	I0603 12:07:22.427833   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | domain old-k8s-version-905554 has defined IP address 192.168.39.155 and MAC address 52:54:00:3d:ed:07 in network mk-old-k8s-version-905554
	I0603 12:07:22.428006   73662 main.go:141] libmachine: (old-k8s-version-905554) Calling .DriverName
	I0603 12:07:22.428619   73662 main.go:141] libmachine: (old-k8s-version-905554) Calling .DriverName
	I0603 12:07:22.428817   73662 main.go:141] libmachine: (old-k8s-version-905554) Calling .DriverName
	I0603 12:07:22.428898   73662 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0603 12:07:22.428951   73662 main.go:141] libmachine: (old-k8s-version-905554) Calling .GetSSHHostname
	I0603 12:07:22.429242   73662 ssh_runner.go:195] Run: cat /version.json
	I0603 12:07:22.429269   73662 main.go:141] libmachine: (old-k8s-version-905554) Calling .GetSSHHostname
	I0603 12:07:22.431998   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | domain old-k8s-version-905554 has defined MAC address 52:54:00:3d:ed:07 in network mk-old-k8s-version-905554
	I0603 12:07:22.432244   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | domain old-k8s-version-905554 has defined MAC address 52:54:00:3d:ed:07 in network mk-old-k8s-version-905554
	I0603 12:07:22.432333   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:ed:07", ip: ""} in network mk-old-k8s-version-905554: {Iface:virbr1 ExpiryTime:2024-06-03 13:07:14 +0000 UTC Type:0 Mac:52:54:00:3d:ed:07 Iaid: IPaddr:192.168.39.155 Prefix:24 Hostname:old-k8s-version-905554 Clientid:01:52:54:00:3d:ed:07}
	I0603 12:07:22.432365   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | domain old-k8s-version-905554 has defined IP address 192.168.39.155 and MAC address 52:54:00:3d:ed:07 in network mk-old-k8s-version-905554
	I0603 12:07:22.432608   73662 main.go:141] libmachine: (old-k8s-version-905554) Calling .GetSSHPort
	I0603 12:07:22.432779   73662 main.go:141] libmachine: (old-k8s-version-905554) Calling .GetSSHKeyPath
	I0603 12:07:22.432797   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:ed:07", ip: ""} in network mk-old-k8s-version-905554: {Iface:virbr1 ExpiryTime:2024-06-03 13:07:14 +0000 UTC Type:0 Mac:52:54:00:3d:ed:07 Iaid: IPaddr:192.168.39.155 Prefix:24 Hostname:old-k8s-version-905554 Clientid:01:52:54:00:3d:ed:07}
	I0603 12:07:22.432818   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | domain old-k8s-version-905554 has defined IP address 192.168.39.155 and MAC address 52:54:00:3d:ed:07 in network mk-old-k8s-version-905554
	I0603 12:07:22.433032   73662 main.go:141] libmachine: (old-k8s-version-905554) Calling .GetSSHPort
	I0603 12:07:22.433044   73662 main.go:141] libmachine: (old-k8s-version-905554) Calling .GetSSHUsername
	I0603 12:07:22.433244   73662 sshutil.go:53] new ssh client: &{IP:192.168.39.155 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19008-7755/.minikube/machines/old-k8s-version-905554/id_rsa Username:docker}
	I0603 12:07:22.433260   73662 main.go:141] libmachine: (old-k8s-version-905554) Calling .GetSSHKeyPath
	I0603 12:07:22.433489   73662 main.go:141] libmachine: (old-k8s-version-905554) Calling .GetSSHUsername
	I0603 12:07:22.433629   73662 sshutil.go:53] new ssh client: &{IP:192.168.39.155 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19008-7755/.minikube/machines/old-k8s-version-905554/id_rsa Username:docker}
	I0603 12:07:22.512743   73662 ssh_runner.go:195] Run: systemctl --version
	I0603 12:07:22.538343   73662 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0603 12:07:22.691125   73662 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0603 12:07:22.697547   73662 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0603 12:07:22.697594   73662 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0603 12:07:22.714213   73662 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0603 12:07:22.714237   73662 start.go:494] detecting cgroup driver to use...
	I0603 12:07:22.714302   73662 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0603 12:07:22.735173   73662 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0603 12:07:22.749345   73662 docker.go:217] disabling cri-docker service (if available) ...
	I0603 12:07:22.749403   73662 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0603 12:07:22.763133   73662 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0603 12:07:22.776844   73662 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0603 12:07:22.906859   73662 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0603 12:07:23.071700   73662 docker.go:233] disabling docker service ...
	I0603 12:07:23.071767   73662 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0603 12:07:23.088439   73662 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0603 12:07:23.102097   73662 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0603 12:07:23.238693   73662 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0603 12:07:23.390561   73662 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0603 12:07:23.410039   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0603 12:07:23.434983   73662 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0603 12:07:23.435125   73662 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 12:07:23.448358   73662 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0603 12:07:23.448430   73662 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 12:07:23.460973   73662 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 12:07:23.473384   73662 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 12:07:23.486096   73662 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0603 12:07:23.498744   73662 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0603 12:07:23.510913   73662 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0603 12:07:23.510968   73662 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0603 12:07:23.527740   73662 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0603 12:07:23.542547   73662 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0603 12:07:23.719963   73662 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0603 12:07:23.875772   73662 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0603 12:07:23.875843   73662 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0603 12:07:23.882164   73662 start.go:562] Will wait 60s for crictl version
	I0603 12:07:23.882250   73662 ssh_runner.go:195] Run: which crictl
	I0603 12:07:23.886841   73662 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0603 12:07:23.933867   73662 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0603 12:07:23.933952   73662 ssh_runner.go:195] Run: crio --version
	I0603 12:07:23.965258   73662 ssh_runner.go:195] Run: crio --version
	I0603 12:07:23.995457   73662 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0603 12:07:20.104355   73294 pod_ready.go:102] pod "kube-scheduler-default-k8s-diff-port-196710" in "kube-system" namespace has status "Ready":"False"
	I0603 12:07:22.104808   73294 pod_ready.go:102] pod "kube-scheduler-default-k8s-diff-port-196710" in "kube-system" namespace has status "Ready":"False"
	I0603 12:07:23.106090   73294 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-196710" in "kube-system" namespace has status "Ready":"True"
	I0603 12:07:23.106109   73294 pod_ready.go:81] duration metric: took 5.007700483s for pod "kube-scheduler-default-k8s-diff-port-196710" in "kube-system" namespace to be "Ready" ...
	I0603 12:07:23.106118   73294 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace to be "Ready" ...
	I0603 12:07:22.514715   72964 main.go:141] libmachine: (embed-certs-725022) Calling .Start
	I0603 12:07:22.514937   72964 main.go:141] libmachine: (embed-certs-725022) Ensuring networks are active...
	I0603 12:07:22.515826   72964 main.go:141] libmachine: (embed-certs-725022) Ensuring network default is active
	I0603 12:07:22.516261   72964 main.go:141] libmachine: (embed-certs-725022) Ensuring network mk-embed-certs-725022 is active
	I0603 12:07:22.516748   72964 main.go:141] libmachine: (embed-certs-725022) Getting domain xml...
	I0603 12:07:22.517639   72964 main.go:141] libmachine: (embed-certs-725022) Creating domain...
	I0603 12:07:23.858964   72964 main.go:141] libmachine: (embed-certs-725022) Waiting to get IP...
	I0603 12:07:23.859920   72964 main.go:141] libmachine: (embed-certs-725022) DBG | domain embed-certs-725022 has defined MAC address 52:54:00:ba:41:8c in network mk-embed-certs-725022
	I0603 12:07:23.860386   72964 main.go:141] libmachine: (embed-certs-725022) DBG | unable to find current IP address of domain embed-certs-725022 in network mk-embed-certs-725022
	I0603 12:07:23.860418   72964 main.go:141] libmachine: (embed-certs-725022) DBG | I0603 12:07:23.860352   74834 retry.go:31] will retry after 246.280691ms: waiting for machine to come up
	I0603 12:07:24.108680   72964 main.go:141] libmachine: (embed-certs-725022) DBG | domain embed-certs-725022 has defined MAC address 52:54:00:ba:41:8c in network mk-embed-certs-725022
	I0603 12:07:24.109222   72964 main.go:141] libmachine: (embed-certs-725022) DBG | unable to find current IP address of domain embed-certs-725022 in network mk-embed-certs-725022
	I0603 12:07:24.109349   72964 main.go:141] libmachine: (embed-certs-725022) DBG | I0603 12:07:24.109272   74834 retry.go:31] will retry after 291.625816ms: waiting for machine to come up
	I0603 12:07:24.402895   72964 main.go:141] libmachine: (embed-certs-725022) DBG | domain embed-certs-725022 has defined MAC address 52:54:00:ba:41:8c in network mk-embed-certs-725022
	I0603 12:07:24.403357   72964 main.go:141] libmachine: (embed-certs-725022) DBG | unable to find current IP address of domain embed-certs-725022 in network mk-embed-certs-725022
	I0603 12:07:24.403383   72964 main.go:141] libmachine: (embed-certs-725022) DBG | I0603 12:07:24.403319   74834 retry.go:31] will retry after 466.605521ms: waiting for machine to come up
	I0603 12:07:24.872278   72964 main.go:141] libmachine: (embed-certs-725022) DBG | domain embed-certs-725022 has defined MAC address 52:54:00:ba:41:8c in network mk-embed-certs-725022
	I0603 12:07:24.872823   72964 main.go:141] libmachine: (embed-certs-725022) DBG | unable to find current IP address of domain embed-certs-725022 in network mk-embed-certs-725022
	I0603 12:07:24.872847   72964 main.go:141] libmachine: (embed-certs-725022) DBG | I0603 12:07:24.872783   74834 retry.go:31] will retry after 382.19855ms: waiting for machine to come up
	I0603 12:07:23.996608   73662 main.go:141] libmachine: (old-k8s-version-905554) Calling .GetIP
	I0603 12:07:23.999648   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | domain old-k8s-version-905554 has defined MAC address 52:54:00:3d:ed:07 in network mk-old-k8s-version-905554
	I0603 12:07:23.999982   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:ed:07", ip: ""} in network mk-old-k8s-version-905554: {Iface:virbr1 ExpiryTime:2024-06-03 13:07:14 +0000 UTC Type:0 Mac:52:54:00:3d:ed:07 Iaid: IPaddr:192.168.39.155 Prefix:24 Hostname:old-k8s-version-905554 Clientid:01:52:54:00:3d:ed:07}
	I0603 12:07:24.000010   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | domain old-k8s-version-905554 has defined IP address 192.168.39.155 and MAC address 52:54:00:3d:ed:07 in network mk-old-k8s-version-905554
	I0603 12:07:24.000257   73662 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0603 12:07:24.004569   73662 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0603 12:07:24.019027   73662 kubeadm.go:877] updating cluster {Name:old-k8s-version-905554 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-905554 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.155 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0603 12:07:24.019206   73662 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0603 12:07:24.019257   73662 ssh_runner.go:195] Run: sudo crictl images --output json
	I0603 12:07:24.068916   73662 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0603 12:07:24.069007   73662 ssh_runner.go:195] Run: which lz4
	I0603 12:07:24.074831   73662 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0603 12:07:24.081154   73662 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0603 12:07:24.081186   73662 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0603 12:07:22.074657   73179 pod_ready.go:92] pod "kube-scheduler-no-preload-602118" in "kube-system" namespace has status "Ready":"True"
	I0603 12:07:22.074691   73179 pod_ready.go:81] duration metric: took 11.006759361s for pod "kube-scheduler-no-preload-602118" in "kube-system" namespace to be "Ready" ...
	I0603 12:07:22.074706   73179 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace to be "Ready" ...
	I0603 12:07:24.081308   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:07:25.114101   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:07:27.115528   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:07:25.256326   72964 main.go:141] libmachine: (embed-certs-725022) DBG | domain embed-certs-725022 has defined MAC address 52:54:00:ba:41:8c in network mk-embed-certs-725022
	I0603 12:07:25.256830   72964 main.go:141] libmachine: (embed-certs-725022) DBG | unable to find current IP address of domain embed-certs-725022 in network mk-embed-certs-725022
	I0603 12:07:25.256856   72964 main.go:141] libmachine: (embed-certs-725022) DBG | I0603 12:07:25.256779   74834 retry.go:31] will retry after 541.296238ms: waiting for machine to come up
	I0603 12:07:25.799738   72964 main.go:141] libmachine: (embed-certs-725022) DBG | domain embed-certs-725022 has defined MAC address 52:54:00:ba:41:8c in network mk-embed-certs-725022
	I0603 12:07:25.800308   72964 main.go:141] libmachine: (embed-certs-725022) DBG | unable to find current IP address of domain embed-certs-725022 in network mk-embed-certs-725022
	I0603 12:07:25.800340   72964 main.go:141] libmachine: (embed-certs-725022) DBG | I0603 12:07:25.800260   74834 retry.go:31] will retry after 605.157326ms: waiting for machine to come up
	I0603 12:07:26.406748   72964 main.go:141] libmachine: (embed-certs-725022) DBG | domain embed-certs-725022 has defined MAC address 52:54:00:ba:41:8c in network mk-embed-certs-725022
	I0603 12:07:26.407332   72964 main.go:141] libmachine: (embed-certs-725022) DBG | unable to find current IP address of domain embed-certs-725022 in network mk-embed-certs-725022
	I0603 12:07:26.407357   72964 main.go:141] libmachine: (embed-certs-725022) DBG | I0603 12:07:26.407281   74834 retry.go:31] will retry after 830.816526ms: waiting for machine to come up
	I0603 12:07:27.239300   72964 main.go:141] libmachine: (embed-certs-725022) DBG | domain embed-certs-725022 has defined MAC address 52:54:00:ba:41:8c in network mk-embed-certs-725022
	I0603 12:07:27.239746   72964 main.go:141] libmachine: (embed-certs-725022) DBG | unable to find current IP address of domain embed-certs-725022 in network mk-embed-certs-725022
	I0603 12:07:27.239777   72964 main.go:141] libmachine: (embed-certs-725022) DBG | I0603 12:07:27.239708   74834 retry.go:31] will retry after 994.729433ms: waiting for machine to come up
	I0603 12:07:28.236261   72964 main.go:141] libmachine: (embed-certs-725022) DBG | domain embed-certs-725022 has defined MAC address 52:54:00:ba:41:8c in network mk-embed-certs-725022
	I0603 12:07:28.236839   72964 main.go:141] libmachine: (embed-certs-725022) DBG | unable to find current IP address of domain embed-certs-725022 in network mk-embed-certs-725022
	I0603 12:07:28.236865   72964 main.go:141] libmachine: (embed-certs-725022) DBG | I0603 12:07:28.236783   74834 retry.go:31] will retry after 1.756001067s: waiting for machine to come up
	I0603 12:07:25.794532   73662 crio.go:462] duration metric: took 1.71973848s to copy over tarball
	I0603 12:07:25.794618   73662 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0603 12:07:28.897711   73662 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.103055845s)
	I0603 12:07:28.897742   73662 crio.go:469] duration metric: took 3.103177549s to extract the tarball
	I0603 12:07:28.897752   73662 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0603 12:07:28.945269   73662 ssh_runner.go:195] Run: sudo crictl images --output json
	I0603 12:07:28.982973   73662 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0603 12:07:28.982998   73662 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0603 12:07:28.983068   73662 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0603 12:07:28.983099   73662 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0603 12:07:28.983134   73662 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0603 12:07:28.983191   73662 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0603 12:07:28.983104   73662 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0603 12:07:28.983282   73662 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0603 12:07:28.983280   73662 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0603 12:07:28.983525   73662 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0603 12:07:28.984988   73662 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0603 12:07:28.985005   73662 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0603 12:07:28.984997   73662 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0603 12:07:28.985007   73662 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0603 12:07:28.985026   73662 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0603 12:07:28.985190   73662 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0603 12:07:28.985244   73662 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0603 12:07:28.985288   73662 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0603 12:07:29.136387   73662 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0603 12:07:29.155867   73662 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0603 12:07:29.173686   73662 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0603 12:07:29.181970   73662 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0603 12:07:29.185877   73662 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0603 12:07:29.188684   73662 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0603 12:07:29.201080   73662 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0603 12:07:29.201134   73662 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0603 12:07:29.201174   73662 ssh_runner.go:195] Run: which crictl
	I0603 12:07:29.252186   73662 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0603 12:07:29.252232   73662 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0603 12:07:29.252308   73662 ssh_runner.go:195] Run: which crictl
	I0603 12:07:29.272578   73662 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0603 12:07:29.306804   73662 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0603 12:07:29.306856   73662 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0603 12:07:29.306880   73662 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0603 12:07:29.306901   73662 ssh_runner.go:195] Run: which crictl
	I0603 12:07:29.306915   73662 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0603 12:07:29.306928   73662 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0603 12:07:29.306952   73662 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0603 12:07:29.306961   73662 ssh_runner.go:195] Run: which crictl
	I0603 12:07:29.306988   73662 ssh_runner.go:195] Run: which crictl
	I0603 12:07:29.322141   73662 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0603 12:07:29.322220   73662 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0603 12:07:29.322238   73662 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0603 12:07:29.322264   73662 ssh_runner.go:195] Run: which crictl
	I0603 12:07:29.322210   73662 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0603 12:07:29.378678   73662 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0603 12:07:29.378717   73662 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0603 12:07:29.378726   73662 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0603 12:07:29.378775   73662 ssh_runner.go:195] Run: which crictl
	I0603 12:07:29.378831   73662 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0603 12:07:29.378898   73662 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0603 12:07:29.401173   73662 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/19008-7755/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0603 12:07:29.401229   73662 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0603 12:07:29.401396   73662 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/19008-7755/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0603 12:07:29.450497   73662 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/19008-7755/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0603 12:07:29.450531   73662 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0603 12:07:29.488109   73662 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/19008-7755/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0603 12:07:29.488191   73662 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/19008-7755/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0603 12:07:29.488191   73662 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/19008-7755/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0603 12:07:29.504909   73662 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/19008-7755/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0603 12:07:29.931311   73662 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0603 12:07:30.078311   73662 cache_images.go:92] duration metric: took 1.095295059s to LoadCachedImages
	W0603 12:07:30.078412   73662 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/19008-7755/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0: no such file or directory
	I0603 12:07:30.078431   73662 kubeadm.go:928] updating node { 192.168.39.155 8443 v1.20.0 crio true true} ...
	I0603 12:07:30.078568   73662 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-905554 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.39.155
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-905554 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0603 12:07:30.078660   73662 ssh_runner.go:195] Run: crio config
	I0603 12:07:26.083566   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:07:28.084560   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:07:29.721426   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:07:32.114026   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:07:29.994115   72964 main.go:141] libmachine: (embed-certs-725022) DBG | domain embed-certs-725022 has defined MAC address 52:54:00:ba:41:8c in network mk-embed-certs-725022
	I0603 12:07:29.994576   72964 main.go:141] libmachine: (embed-certs-725022) DBG | unable to find current IP address of domain embed-certs-725022 in network mk-embed-certs-725022
	I0603 12:07:29.994654   72964 main.go:141] libmachine: (embed-certs-725022) DBG | I0603 12:07:29.994561   74834 retry.go:31] will retry after 1.667170312s: waiting for machine to come up
	I0603 12:07:31.664242   72964 main.go:141] libmachine: (embed-certs-725022) DBG | domain embed-certs-725022 has defined MAC address 52:54:00:ba:41:8c in network mk-embed-certs-725022
	I0603 12:07:31.664797   72964 main.go:141] libmachine: (embed-certs-725022) DBG | unable to find current IP address of domain embed-certs-725022 in network mk-embed-certs-725022
	I0603 12:07:31.664826   72964 main.go:141] libmachine: (embed-certs-725022) DBG | I0603 12:07:31.664752   74834 retry.go:31] will retry after 2.156675381s: waiting for machine to come up
	I0603 12:07:33.823700   72964 main.go:141] libmachine: (embed-certs-725022) DBG | domain embed-certs-725022 has defined MAC address 52:54:00:ba:41:8c in network mk-embed-certs-725022
	I0603 12:07:33.824202   72964 main.go:141] libmachine: (embed-certs-725022) DBG | unable to find current IP address of domain embed-certs-725022 in network mk-embed-certs-725022
	I0603 12:07:33.824241   72964 main.go:141] libmachine: (embed-certs-725022) DBG | I0603 12:07:33.824145   74834 retry.go:31] will retry after 3.067424613s: waiting for machine to come up
	I0603 12:07:30.129601   73662 cni.go:84] Creating CNI manager for ""
	I0603 12:07:30.180858   73662 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0603 12:07:30.180884   73662 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0603 12:07:30.180918   73662 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.155 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-905554 NodeName:old-k8s-version-905554 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.155"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.155 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0603 12:07:30.181104   73662 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.155
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-905554"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.155
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.155"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0603 12:07:30.181180   73662 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0603 12:07:30.192139   73662 binaries.go:44] Found k8s binaries, skipping transfer
	I0603 12:07:30.192202   73662 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0603 12:07:30.202078   73662 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0603 12:07:30.222968   73662 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0603 12:07:30.242794   73662 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I0603 12:07:30.263578   73662 ssh_runner.go:195] Run: grep 192.168.39.155	control-plane.minikube.internal$ /etc/hosts
	I0603 12:07:30.267535   73662 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.155	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0603 12:07:30.280543   73662 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0603 12:07:30.421251   73662 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0603 12:07:30.441243   73662 certs.go:68] Setting up /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/old-k8s-version-905554 for IP: 192.168.39.155
	I0603 12:07:30.441269   73662 certs.go:194] generating shared ca certs ...
	I0603 12:07:30.441299   73662 certs.go:226] acquiring lock for ca certs: {Name:mk50092df3bdecbf959926adc377ee06b75aacd9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 12:07:30.441485   73662 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19008-7755/.minikube/ca.key
	I0603 12:07:30.441546   73662 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19008-7755/.minikube/proxy-client-ca.key
	I0603 12:07:30.441559   73662 certs.go:256] generating profile certs ...
	I0603 12:07:30.441675   73662 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/old-k8s-version-905554/client.key
	I0603 12:07:30.465464   73662 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/old-k8s-version-905554/apiserver.key.0d34b22c
	I0603 12:07:30.465562   73662 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/old-k8s-version-905554/proxy-client.key
	I0603 12:07:30.465730   73662 certs.go:484] found cert: /home/jenkins/minikube-integration/19008-7755/.minikube/certs/15028.pem (1338 bytes)
	W0603 12:07:30.465775   73662 certs.go:480] ignoring /home/jenkins/minikube-integration/19008-7755/.minikube/certs/15028_empty.pem, impossibly tiny 0 bytes
	I0603 12:07:30.465787   73662 certs.go:484] found cert: /home/jenkins/minikube-integration/19008-7755/.minikube/certs/ca-key.pem (1679 bytes)
	I0603 12:07:30.465818   73662 certs.go:484] found cert: /home/jenkins/minikube-integration/19008-7755/.minikube/certs/ca.pem (1082 bytes)
	I0603 12:07:30.465855   73662 certs.go:484] found cert: /home/jenkins/minikube-integration/19008-7755/.minikube/certs/cert.pem (1123 bytes)
	I0603 12:07:30.465884   73662 certs.go:484] found cert: /home/jenkins/minikube-integration/19008-7755/.minikube/certs/key.pem (1679 bytes)
	I0603 12:07:30.465941   73662 certs.go:484] found cert: /home/jenkins/minikube-integration/19008-7755/.minikube/files/etc/ssl/certs/150282.pem (1708 bytes)
	I0603 12:07:30.466831   73662 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0603 12:07:30.517957   73662 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0603 12:07:30.554072   73662 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0603 12:07:30.610727   73662 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0603 12:07:30.663149   73662 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/old-k8s-version-905554/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0603 12:07:30.702313   73662 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/old-k8s-version-905554/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0603 12:07:30.735841   73662 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/old-k8s-version-905554/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0603 12:07:30.761517   73662 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/old-k8s-version-905554/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0603 12:07:30.793872   73662 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/files/etc/ssl/certs/150282.pem --> /usr/share/ca-certificates/150282.pem (1708 bytes)
	I0603 12:07:30.821613   73662 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0603 12:07:30.848030   73662 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/certs/15028.pem --> /usr/share/ca-certificates/15028.pem (1338 bytes)
	I0603 12:07:30.875016   73662 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0603 12:07:30.901749   73662 ssh_runner.go:195] Run: openssl version
	I0603 12:07:30.911485   73662 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15028.pem && ln -fs /usr/share/ca-certificates/15028.pem /etc/ssl/certs/15028.pem"
	I0603 12:07:30.923791   73662 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15028.pem
	I0603 12:07:30.928808   73662 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jun  3 10:51 /usr/share/ca-certificates/15028.pem
	I0603 12:07:30.928858   73662 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15028.pem
	I0603 12:07:30.934925   73662 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/15028.pem /etc/ssl/certs/51391683.0"
	I0603 12:07:30.946930   73662 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/150282.pem && ln -fs /usr/share/ca-certificates/150282.pem /etc/ssl/certs/150282.pem"
	I0603 12:07:30.958809   73662 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/150282.pem
	I0603 12:07:30.963687   73662 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jun  3 10:51 /usr/share/ca-certificates/150282.pem
	I0603 12:07:30.963748   73662 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/150282.pem
	I0603 12:07:30.969671   73662 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/150282.pem /etc/ssl/certs/3ec20f2e.0"
	I0603 12:07:30.981918   73662 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0603 12:07:30.994005   73662 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0603 12:07:30.999126   73662 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jun  3 10:39 /usr/share/ca-certificates/minikubeCA.pem
	I0603 12:07:30.999190   73662 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0603 12:07:31.005828   73662 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0603 12:07:31.017320   73662 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0603 12:07:31.021993   73662 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0603 12:07:31.028420   73662 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0603 12:07:31.034719   73662 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0603 12:07:31.041565   73662 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0603 12:07:31.048142   73662 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0603 12:07:31.053992   73662 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0603 12:07:31.060197   73662 kubeadm.go:391] StartCluster: {Name:old-k8s-version-905554 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-905554 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.155 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0603 12:07:31.060324   73662 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0603 12:07:31.060361   73662 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0603 12:07:31.102996   73662 cri.go:89] found id: ""
	I0603 12:07:31.103083   73662 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0603 12:07:31.114546   73662 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0603 12:07:31.114566   73662 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0603 12:07:31.114573   73662 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0603 12:07:31.114619   73662 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0603 12:07:31.126042   73662 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0603 12:07:31.127358   73662 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-905554" does not appear in /home/jenkins/minikube-integration/19008-7755/kubeconfig
	I0603 12:07:31.128029   73662 kubeconfig.go:62] /home/jenkins/minikube-integration/19008-7755/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-905554" cluster setting kubeconfig missing "old-k8s-version-905554" context setting]
	I0603 12:07:31.128862   73662 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19008-7755/kubeconfig: {Name:mk2f45bf5da44c5570e14124ae482cac309d97b6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 12:07:31.247021   73662 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0603 12:07:31.258013   73662 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.39.155
	I0603 12:07:31.258054   73662 kubeadm.go:1154] stopping kube-system containers ...
	I0603 12:07:31.258065   73662 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0603 12:07:31.258119   73662 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0603 12:07:31.301991   73662 cri.go:89] found id: ""
	I0603 12:07:31.302065   73662 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0603 12:07:31.326132   73662 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0603 12:07:31.337333   73662 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0603 12:07:31.337355   73662 kubeadm.go:156] found existing configuration files:
	
	I0603 12:07:31.337396   73662 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0603 12:07:31.347256   73662 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0603 12:07:31.347300   73662 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0603 12:07:31.357463   73662 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0603 12:07:31.367810   73662 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0603 12:07:31.367867   73662 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0603 12:07:31.378092   73662 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0603 12:07:31.388911   73662 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0603 12:07:31.388959   73662 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0603 12:07:31.400327   73662 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0603 12:07:31.411937   73662 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0603 12:07:31.411984   73662 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0603 12:07:31.423929   73662 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
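The grep/rm pairs above are a stale-config sweep: each kubeconfig-style file under /etc/kubernetes is kept only if it already references the expected control-plane endpoint, otherwise it is removed (here all four files are absent, so every grep exits 2 and every rm is a no-op). Collapsed into a loop, the same sweep looks roughly like:

    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
      sudo grep -q "https://control-plane.minikube.internal:8443" "/etc/kubernetes/$f" \
        || sudo rm -f "/etc/kubernetes/$f"   # drop configs that point at the wrong endpoint (or are missing)
    done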
	I0603 12:07:31.435914   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0603 12:07:31.563621   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0603 12:07:32.980144   73662 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.416481613s)
	I0603 12:07:32.980178   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0603 12:07:33.219383   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0603 12:07:33.320755   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
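Rather than a full "kubeadm init", the restart path above re-runs individual init phases against the same generated config. Collapsed into one loop (binary path and version as logged for this profile), the sequence is:

    KUBEADM_CFG=/var/tmp/minikube/kubeadm.yaml
    BIN=/var/lib/minikube/binaries/v1.20.0
    # $phase is left unquoted on purpose so e.g. "certs all" expands to two arguments
    for phase in "certs all" "kubeconfig all" "kubelet-start" "control-plane all" "etcd local"; do
      sudo env PATH="$BIN:$PATH" kubeadm init phase $phase --config "$KUBEADM_CFG"
    done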
	I0603 12:07:33.437964   73662 api_server.go:52] waiting for apiserver process to appear ...
	I0603 12:07:33.438070   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:07:33.938124   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:07:34.439012   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:07:34.938293   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
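The repeating pgrep lines here (and in the later bursts below) are a wait loop: poll roughly twice a second for a kube-apiserver process started by minikube until one appears or the overall wait times out. In shell terms, approximately:

    until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
      sleep 0.5   # the log shows ~500ms between attempts
    done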
	I0603 12:07:30.584019   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:07:33.081286   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:07:35.081436   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:07:34.613763   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:07:37.112059   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:07:39.113186   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:07:36.892928   72964 main.go:141] libmachine: (embed-certs-725022) DBG | domain embed-certs-725022 has defined MAC address 52:54:00:ba:41:8c in network mk-embed-certs-725022
	I0603 12:07:36.893405   72964 main.go:141] libmachine: (embed-certs-725022) DBG | unable to find current IP address of domain embed-certs-725022 in network mk-embed-certs-725022
	I0603 12:07:36.893432   72964 main.go:141] libmachine: (embed-certs-725022) DBG | I0603 12:07:36.893358   74834 retry.go:31] will retry after 3.786690644s: waiting for machine to come up
	I0603 12:07:35.438655   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:07:35.938894   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:07:36.438790   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:07:36.938720   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:07:37.438183   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:07:37.938442   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:07:38.438341   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:07:38.938738   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:07:39.438262   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:07:39.938743   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:07:37.082484   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:07:39.580732   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:07:40.682151   72964 main.go:141] libmachine: (embed-certs-725022) DBG | domain embed-certs-725022 has defined MAC address 52:54:00:ba:41:8c in network mk-embed-certs-725022
	I0603 12:07:40.682828   72964 main.go:141] libmachine: (embed-certs-725022) Found IP for machine: 192.168.72.245
	I0603 12:07:40.682854   72964 main.go:141] libmachine: (embed-certs-725022) Reserving static IP address...
	I0603 12:07:40.682870   72964 main.go:141] libmachine: (embed-certs-725022) DBG | domain embed-certs-725022 has current primary IP address 192.168.72.245 and MAC address 52:54:00:ba:41:8c in network mk-embed-certs-725022
	I0603 12:07:40.683307   72964 main.go:141] libmachine: (embed-certs-725022) DBG | found host DHCP lease matching {name: "embed-certs-725022", mac: "52:54:00:ba:41:8c", ip: "192.168.72.245"} in network mk-embed-certs-725022: {Iface:virbr3 ExpiryTime:2024-06-03 12:58:10 +0000 UTC Type:0 Mac:52:54:00:ba:41:8c Iaid: IPaddr:192.168.72.245 Prefix:24 Hostname:embed-certs-725022 Clientid:01:52:54:00:ba:41:8c}
	I0603 12:07:40.683347   72964 main.go:141] libmachine: (embed-certs-725022) DBG | skip adding static IP to network mk-embed-certs-725022 - found existing host DHCP lease matching {name: "embed-certs-725022", mac: "52:54:00:ba:41:8c", ip: "192.168.72.245"}
	I0603 12:07:40.683361   72964 main.go:141] libmachine: (embed-certs-725022) Reserved static IP address: 192.168.72.245
	I0603 12:07:40.683376   72964 main.go:141] libmachine: (embed-certs-725022) Waiting for SSH to be available...
	I0603 12:07:40.683392   72964 main.go:141] libmachine: (embed-certs-725022) DBG | Getting to WaitForSSH function...
	I0603 12:07:40.685575   72964 main.go:141] libmachine: (embed-certs-725022) DBG | domain embed-certs-725022 has defined MAC address 52:54:00:ba:41:8c in network mk-embed-certs-725022
	I0603 12:07:40.685946   72964 main.go:141] libmachine: (embed-certs-725022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:41:8c", ip: ""} in network mk-embed-certs-725022: {Iface:virbr3 ExpiryTime:2024-06-03 12:58:10 +0000 UTC Type:0 Mac:52:54:00:ba:41:8c Iaid: IPaddr:192.168.72.245 Prefix:24 Hostname:embed-certs-725022 Clientid:01:52:54:00:ba:41:8c}
	I0603 12:07:40.685977   72964 main.go:141] libmachine: (embed-certs-725022) DBG | domain embed-certs-725022 has defined IP address 192.168.72.245 and MAC address 52:54:00:ba:41:8c in network mk-embed-certs-725022
	I0603 12:07:40.686080   72964 main.go:141] libmachine: (embed-certs-725022) DBG | Using SSH client type: external
	I0603 12:07:40.686100   72964 main.go:141] libmachine: (embed-certs-725022) DBG | Using SSH private key: /home/jenkins/minikube-integration/19008-7755/.minikube/machines/embed-certs-725022/id_rsa (-rw-------)
	I0603 12:07:40.686134   72964 main.go:141] libmachine: (embed-certs-725022) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.245 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19008-7755/.minikube/machines/embed-certs-725022/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0603 12:07:40.686148   72964 main.go:141] libmachine: (embed-certs-725022) DBG | About to run SSH command:
	I0603 12:07:40.686161   72964 main.go:141] libmachine: (embed-certs-725022) DBG | exit 0
	I0603 12:07:40.811149   72964 main.go:141] libmachine: (embed-certs-725022) DBG | SSH cmd err, output: <nil>: 
	I0603 12:07:40.811536   72964 main.go:141] libmachine: (embed-certs-725022) Calling .GetConfigRaw
	I0603 12:07:40.812126   72964 main.go:141] libmachine: (embed-certs-725022) Calling .GetIP
	I0603 12:07:40.814686   72964 main.go:141] libmachine: (embed-certs-725022) DBG | domain embed-certs-725022 has defined MAC address 52:54:00:ba:41:8c in network mk-embed-certs-725022
	I0603 12:07:40.815141   72964 main.go:141] libmachine: (embed-certs-725022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:41:8c", ip: ""} in network mk-embed-certs-725022: {Iface:virbr3 ExpiryTime:2024-06-03 12:58:10 +0000 UTC Type:0 Mac:52:54:00:ba:41:8c Iaid: IPaddr:192.168.72.245 Prefix:24 Hostname:embed-certs-725022 Clientid:01:52:54:00:ba:41:8c}
	I0603 12:07:40.815179   72964 main.go:141] libmachine: (embed-certs-725022) DBG | domain embed-certs-725022 has defined IP address 192.168.72.245 and MAC address 52:54:00:ba:41:8c in network mk-embed-certs-725022
	I0603 12:07:40.815390   72964 profile.go:143] Saving config to /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/embed-certs-725022/config.json ...
	I0603 12:07:40.815589   72964 machine.go:94] provisionDockerMachine start ...
	I0603 12:07:40.815607   72964 main.go:141] libmachine: (embed-certs-725022) Calling .DriverName
	I0603 12:07:40.815830   72964 main.go:141] libmachine: (embed-certs-725022) Calling .GetSSHHostname
	I0603 12:07:40.818127   72964 main.go:141] libmachine: (embed-certs-725022) DBG | domain embed-certs-725022 has defined MAC address 52:54:00:ba:41:8c in network mk-embed-certs-725022
	I0603 12:07:40.818454   72964 main.go:141] libmachine: (embed-certs-725022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:41:8c", ip: ""} in network mk-embed-certs-725022: {Iface:virbr3 ExpiryTime:2024-06-03 12:58:10 +0000 UTC Type:0 Mac:52:54:00:ba:41:8c Iaid: IPaddr:192.168.72.245 Prefix:24 Hostname:embed-certs-725022 Clientid:01:52:54:00:ba:41:8c}
	I0603 12:07:40.818484   72964 main.go:141] libmachine: (embed-certs-725022) DBG | domain embed-certs-725022 has defined IP address 192.168.72.245 and MAC address 52:54:00:ba:41:8c in network mk-embed-certs-725022
	I0603 12:07:40.818622   72964 main.go:141] libmachine: (embed-certs-725022) Calling .GetSSHPort
	I0603 12:07:40.818812   72964 main.go:141] libmachine: (embed-certs-725022) Calling .GetSSHKeyPath
	I0603 12:07:40.818964   72964 main.go:141] libmachine: (embed-certs-725022) Calling .GetSSHKeyPath
	I0603 12:07:40.819111   72964 main.go:141] libmachine: (embed-certs-725022) Calling .GetSSHUsername
	I0603 12:07:40.819244   72964 main.go:141] libmachine: Using SSH client type: native
	I0603 12:07:40.819393   72964 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.72.245 22 <nil> <nil>}
	I0603 12:07:40.819402   72964 main.go:141] libmachine: About to run SSH command:
	hostname
	I0603 12:07:40.923243   72964 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0603 12:07:40.923272   72964 main.go:141] libmachine: (embed-certs-725022) Calling .GetMachineName
	I0603 12:07:40.923539   72964 buildroot.go:166] provisioning hostname "embed-certs-725022"
	I0603 12:07:40.923568   72964 main.go:141] libmachine: (embed-certs-725022) Calling .GetMachineName
	I0603 12:07:40.923739   72964 main.go:141] libmachine: (embed-certs-725022) Calling .GetSSHHostname
	I0603 12:07:40.926340   72964 main.go:141] libmachine: (embed-certs-725022) DBG | domain embed-certs-725022 has defined MAC address 52:54:00:ba:41:8c in network mk-embed-certs-725022
	I0603 12:07:40.926743   72964 main.go:141] libmachine: (embed-certs-725022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:41:8c", ip: ""} in network mk-embed-certs-725022: {Iface:virbr3 ExpiryTime:2024-06-03 12:58:10 +0000 UTC Type:0 Mac:52:54:00:ba:41:8c Iaid: IPaddr:192.168.72.245 Prefix:24 Hostname:embed-certs-725022 Clientid:01:52:54:00:ba:41:8c}
	I0603 12:07:40.926776   72964 main.go:141] libmachine: (embed-certs-725022) DBG | domain embed-certs-725022 has defined IP address 192.168.72.245 and MAC address 52:54:00:ba:41:8c in network mk-embed-certs-725022
	I0603 12:07:40.926892   72964 main.go:141] libmachine: (embed-certs-725022) Calling .GetSSHPort
	I0603 12:07:40.927096   72964 main.go:141] libmachine: (embed-certs-725022) Calling .GetSSHKeyPath
	I0603 12:07:40.927259   72964 main.go:141] libmachine: (embed-certs-725022) Calling .GetSSHKeyPath
	I0603 12:07:40.927412   72964 main.go:141] libmachine: (embed-certs-725022) Calling .GetSSHUsername
	I0603 12:07:40.927570   72964 main.go:141] libmachine: Using SSH client type: native
	I0603 12:07:40.927720   72964 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.72.245 22 <nil> <nil>}
	I0603 12:07:40.927737   72964 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-725022 && echo "embed-certs-725022" | sudo tee /etc/hostname
	I0603 12:07:41.045367   72964 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-725022
	
	I0603 12:07:41.045392   72964 main.go:141] libmachine: (embed-certs-725022) Calling .GetSSHHostname
	I0603 12:07:41.048214   72964 main.go:141] libmachine: (embed-certs-725022) DBG | domain embed-certs-725022 has defined MAC address 52:54:00:ba:41:8c in network mk-embed-certs-725022
	I0603 12:07:41.048621   72964 main.go:141] libmachine: (embed-certs-725022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:41:8c", ip: ""} in network mk-embed-certs-725022: {Iface:virbr3 ExpiryTime:2024-06-03 12:58:10 +0000 UTC Type:0 Mac:52:54:00:ba:41:8c Iaid: IPaddr:192.168.72.245 Prefix:24 Hostname:embed-certs-725022 Clientid:01:52:54:00:ba:41:8c}
	I0603 12:07:41.048653   72964 main.go:141] libmachine: (embed-certs-725022) DBG | domain embed-certs-725022 has defined IP address 192.168.72.245 and MAC address 52:54:00:ba:41:8c in network mk-embed-certs-725022
	I0603 12:07:41.048776   72964 main.go:141] libmachine: (embed-certs-725022) Calling .GetSSHPort
	I0603 12:07:41.048959   72964 main.go:141] libmachine: (embed-certs-725022) Calling .GetSSHKeyPath
	I0603 12:07:41.049140   72964 main.go:141] libmachine: (embed-certs-725022) Calling .GetSSHKeyPath
	I0603 12:07:41.049270   72964 main.go:141] libmachine: (embed-certs-725022) Calling .GetSSHUsername
	I0603 12:07:41.049434   72964 main.go:141] libmachine: Using SSH client type: native
	I0603 12:07:41.049729   72964 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.72.245 22 <nil> <nil>}
	I0603 12:07:41.049757   72964 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-725022' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-725022/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-725022' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0603 12:07:41.160646   72964 main.go:141] libmachine: SSH cmd err, output: <nil>: 
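Hostname provisioning is the pair of SSH commands above: write the name to /etc/hostname, then make sure /etc/hosts resolves it via 127.0.1.1. A condensed sketch of the same steps run directly on the guest, with the hostname taken from this profile:

    NEW_HOSTNAME=embed-certs-725022
    sudo hostname "$NEW_HOSTNAME" && echo "$NEW_HOSTNAME" | sudo tee /etc/hostname
    # add a 127.0.1.1 mapping only if no line in /etc/hosts already ends with the name
    grep -xq ".*\s$NEW_HOSTNAME" /etc/hosts \
      || echo "127.0.1.1 $NEW_HOSTNAME" | sudo tee -a /etc/hosts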
	I0603 12:07:41.160671   72964 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19008-7755/.minikube CaCertPath:/home/jenkins/minikube-integration/19008-7755/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19008-7755/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19008-7755/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19008-7755/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19008-7755/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19008-7755/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19008-7755/.minikube}
	I0603 12:07:41.160703   72964 buildroot.go:174] setting up certificates
	I0603 12:07:41.160715   72964 provision.go:84] configureAuth start
	I0603 12:07:41.160728   72964 main.go:141] libmachine: (embed-certs-725022) Calling .GetMachineName
	I0603 12:07:41.160998   72964 main.go:141] libmachine: (embed-certs-725022) Calling .GetIP
	I0603 12:07:41.163693   72964 main.go:141] libmachine: (embed-certs-725022) DBG | domain embed-certs-725022 has defined MAC address 52:54:00:ba:41:8c in network mk-embed-certs-725022
	I0603 12:07:41.164248   72964 main.go:141] libmachine: (embed-certs-725022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:41:8c", ip: ""} in network mk-embed-certs-725022: {Iface:virbr3 ExpiryTime:2024-06-03 12:58:10 +0000 UTC Type:0 Mac:52:54:00:ba:41:8c Iaid: IPaddr:192.168.72.245 Prefix:24 Hostname:embed-certs-725022 Clientid:01:52:54:00:ba:41:8c}
	I0603 12:07:41.164280   72964 main.go:141] libmachine: (embed-certs-725022) DBG | domain embed-certs-725022 has defined IP address 192.168.72.245 and MAC address 52:54:00:ba:41:8c in network mk-embed-certs-725022
	I0603 12:07:41.164462   72964 main.go:141] libmachine: (embed-certs-725022) Calling .GetSSHHostname
	I0603 12:07:41.166598   72964 main.go:141] libmachine: (embed-certs-725022) DBG | domain embed-certs-725022 has defined MAC address 52:54:00:ba:41:8c in network mk-embed-certs-725022
	I0603 12:07:41.166975   72964 main.go:141] libmachine: (embed-certs-725022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:41:8c", ip: ""} in network mk-embed-certs-725022: {Iface:virbr3 ExpiryTime:2024-06-03 12:58:10 +0000 UTC Type:0 Mac:52:54:00:ba:41:8c Iaid: IPaddr:192.168.72.245 Prefix:24 Hostname:embed-certs-725022 Clientid:01:52:54:00:ba:41:8c}
	I0603 12:07:41.166999   72964 main.go:141] libmachine: (embed-certs-725022) DBG | domain embed-certs-725022 has defined IP address 192.168.72.245 and MAC address 52:54:00:ba:41:8c in network mk-embed-certs-725022
	I0603 12:07:41.167156   72964 provision.go:143] copyHostCerts
	I0603 12:07:41.167231   72964 exec_runner.go:144] found /home/jenkins/minikube-integration/19008-7755/.minikube/ca.pem, removing ...
	I0603 12:07:41.167246   72964 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19008-7755/.minikube/ca.pem
	I0603 12:07:41.167311   72964 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19008-7755/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19008-7755/.minikube/ca.pem (1082 bytes)
	I0603 12:07:41.167503   72964 exec_runner.go:144] found /home/jenkins/minikube-integration/19008-7755/.minikube/cert.pem, removing ...
	I0603 12:07:41.167516   72964 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19008-7755/.minikube/cert.pem
	I0603 12:07:41.167548   72964 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19008-7755/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19008-7755/.minikube/cert.pem (1123 bytes)
	I0603 12:07:41.167649   72964 exec_runner.go:144] found /home/jenkins/minikube-integration/19008-7755/.minikube/key.pem, removing ...
	I0603 12:07:41.167660   72964 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19008-7755/.minikube/key.pem
	I0603 12:07:41.167688   72964 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19008-7755/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19008-7755/.minikube/key.pem (1679 bytes)
	I0603 12:07:41.167767   72964 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19008-7755/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19008-7755/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19008-7755/.minikube/certs/ca-key.pem org=jenkins.embed-certs-725022 san=[127.0.0.1 192.168.72.245 embed-certs-725022 localhost minikube]
	I0603 12:07:41.404074   72964 provision.go:177] copyRemoteCerts
	I0603 12:07:41.404201   72964 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0603 12:07:41.404234   72964 main.go:141] libmachine: (embed-certs-725022) Calling .GetSSHHostname
	I0603 12:07:41.407206   72964 main.go:141] libmachine: (embed-certs-725022) DBG | domain embed-certs-725022 has defined MAC address 52:54:00:ba:41:8c in network mk-embed-certs-725022
	I0603 12:07:41.407582   72964 main.go:141] libmachine: (embed-certs-725022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:41:8c", ip: ""} in network mk-embed-certs-725022: {Iface:virbr3 ExpiryTime:2024-06-03 12:58:10 +0000 UTC Type:0 Mac:52:54:00:ba:41:8c Iaid: IPaddr:192.168.72.245 Prefix:24 Hostname:embed-certs-725022 Clientid:01:52:54:00:ba:41:8c}
	I0603 12:07:41.407607   72964 main.go:141] libmachine: (embed-certs-725022) DBG | domain embed-certs-725022 has defined IP address 192.168.72.245 and MAC address 52:54:00:ba:41:8c in network mk-embed-certs-725022
	I0603 12:07:41.407790   72964 main.go:141] libmachine: (embed-certs-725022) Calling .GetSSHPort
	I0603 12:07:41.408001   72964 main.go:141] libmachine: (embed-certs-725022) Calling .GetSSHKeyPath
	I0603 12:07:41.408187   72964 main.go:141] libmachine: (embed-certs-725022) Calling .GetSSHUsername
	I0603 12:07:41.408359   72964 sshutil.go:53] new ssh client: &{IP:192.168.72.245 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19008-7755/.minikube/machines/embed-certs-725022/id_rsa Username:docker}
	I0603 12:07:41.488870   72964 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0603 12:07:41.513102   72964 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0603 12:07:41.537653   72964 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0603 12:07:41.561756   72964 provision.go:87] duration metric: took 401.027097ms to configureAuth
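configureAuth finishes with the three certificate pushes above. minikube streams them over its own SSH session; an approximate manual equivalent with the same identity file and target (staging through /tmp because /etc/docker needs root) would be:

    KEY=/home/jenkins/minikube-integration/19008-7755/.minikube/machines/embed-certs-725022/id_rsa
    MK=/home/jenkins/minikube-integration/19008-7755/.minikube
    scp -i "$KEY" -o StrictHostKeyChecking=no \
      "$MK/certs/ca.pem" "$MK/machines/server.pem" "$MK/machines/server-key.pem" \
      docker@192.168.72.245:/tmp/
    ssh -i "$KEY" -o StrictHostKeyChecking=no docker@192.168.72.245 \
      'sudo mkdir -p /etc/docker && sudo mv /tmp/ca.pem /tmp/server.pem /tmp/server-key.pem /etc/docker/'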
	I0603 12:07:41.561789   72964 buildroot.go:189] setting minikube options for container-runtime
	I0603 12:07:41.561954   72964 config.go:182] Loaded profile config "embed-certs-725022": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0603 12:07:41.562020   72964 main.go:141] libmachine: (embed-certs-725022) Calling .GetSSHHostname
	I0603 12:07:41.564899   72964 main.go:141] libmachine: (embed-certs-725022) DBG | domain embed-certs-725022 has defined MAC address 52:54:00:ba:41:8c in network mk-embed-certs-725022
	I0603 12:07:41.565376   72964 main.go:141] libmachine: (embed-certs-725022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:41:8c", ip: ""} in network mk-embed-certs-725022: {Iface:virbr3 ExpiryTime:2024-06-03 12:58:10 +0000 UTC Type:0 Mac:52:54:00:ba:41:8c Iaid: IPaddr:192.168.72.245 Prefix:24 Hostname:embed-certs-725022 Clientid:01:52:54:00:ba:41:8c}
	I0603 12:07:41.565416   72964 main.go:141] libmachine: (embed-certs-725022) DBG | domain embed-certs-725022 has defined IP address 192.168.72.245 and MAC address 52:54:00:ba:41:8c in network mk-embed-certs-725022
	I0603 12:07:41.565571   72964 main.go:141] libmachine: (embed-certs-725022) Calling .GetSSHPort
	I0603 12:07:41.565754   72964 main.go:141] libmachine: (embed-certs-725022) Calling .GetSSHKeyPath
	I0603 12:07:41.565952   72964 main.go:141] libmachine: (embed-certs-725022) Calling .GetSSHKeyPath
	I0603 12:07:41.566096   72964 main.go:141] libmachine: (embed-certs-725022) Calling .GetSSHUsername
	I0603 12:07:41.566223   72964 main.go:141] libmachine: Using SSH client type: native
	I0603 12:07:41.566408   72964 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.72.245 22 <nil> <nil>}
	I0603 12:07:41.566431   72964 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0603 12:07:41.834677   72964 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0603 12:07:41.834699   72964 machine.go:97] duration metric: took 1.019099045s to provisionDockerMachine
	I0603 12:07:41.834713   72964 start.go:293] postStartSetup for "embed-certs-725022" (driver="kvm2")
	I0603 12:07:41.834727   72964 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0603 12:07:41.834746   72964 main.go:141] libmachine: (embed-certs-725022) Calling .DriverName
	I0603 12:07:41.835098   72964 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0603 12:07:41.835139   72964 main.go:141] libmachine: (embed-certs-725022) Calling .GetSSHHostname
	I0603 12:07:41.838003   72964 main.go:141] libmachine: (embed-certs-725022) DBG | domain embed-certs-725022 has defined MAC address 52:54:00:ba:41:8c in network mk-embed-certs-725022
	I0603 12:07:41.838369   72964 main.go:141] libmachine: (embed-certs-725022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:41:8c", ip: ""} in network mk-embed-certs-725022: {Iface:virbr3 ExpiryTime:2024-06-03 12:58:10 +0000 UTC Type:0 Mac:52:54:00:ba:41:8c Iaid: IPaddr:192.168.72.245 Prefix:24 Hostname:embed-certs-725022 Clientid:01:52:54:00:ba:41:8c}
	I0603 12:07:41.838398   72964 main.go:141] libmachine: (embed-certs-725022) DBG | domain embed-certs-725022 has defined IP address 192.168.72.245 and MAC address 52:54:00:ba:41:8c in network mk-embed-certs-725022
	I0603 12:07:41.838464   72964 main.go:141] libmachine: (embed-certs-725022) Calling .GetSSHPort
	I0603 12:07:41.838655   72964 main.go:141] libmachine: (embed-certs-725022) Calling .GetSSHKeyPath
	I0603 12:07:41.838793   72964 main.go:141] libmachine: (embed-certs-725022) Calling .GetSSHUsername
	I0603 12:07:41.838932   72964 sshutil.go:53] new ssh client: &{IP:192.168.72.245 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19008-7755/.minikube/machines/embed-certs-725022/id_rsa Username:docker}
	I0603 12:07:41.922364   72964 ssh_runner.go:195] Run: cat /etc/os-release
	I0603 12:07:41.926548   72964 info.go:137] Remote host: Buildroot 2023.02.9
	I0603 12:07:41.926573   72964 filesync.go:126] Scanning /home/jenkins/minikube-integration/19008-7755/.minikube/addons for local assets ...
	I0603 12:07:41.926649   72964 filesync.go:126] Scanning /home/jenkins/minikube-integration/19008-7755/.minikube/files for local assets ...
	I0603 12:07:41.926757   72964 filesync.go:149] local asset: /home/jenkins/minikube-integration/19008-7755/.minikube/files/etc/ssl/certs/150282.pem -> 150282.pem in /etc/ssl/certs
	I0603 12:07:41.926853   72964 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0603 12:07:41.937060   72964 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/files/etc/ssl/certs/150282.pem --> /etc/ssl/certs/150282.pem (1708 bytes)
	I0603 12:07:41.962618   72964 start.go:296] duration metric: took 127.891542ms for postStartSetup
	I0603 12:07:41.962650   72964 fix.go:56] duration metric: took 19.538606992s for fixHost
	I0603 12:07:41.962679   72964 main.go:141] libmachine: (embed-certs-725022) Calling .GetSSHHostname
	I0603 12:07:41.965879   72964 main.go:141] libmachine: (embed-certs-725022) DBG | domain embed-certs-725022 has defined MAC address 52:54:00:ba:41:8c in network mk-embed-certs-725022
	I0603 12:07:41.966201   72964 main.go:141] libmachine: (embed-certs-725022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:41:8c", ip: ""} in network mk-embed-certs-725022: {Iface:virbr3 ExpiryTime:2024-06-03 12:58:10 +0000 UTC Type:0 Mac:52:54:00:ba:41:8c Iaid: IPaddr:192.168.72.245 Prefix:24 Hostname:embed-certs-725022 Clientid:01:52:54:00:ba:41:8c}
	I0603 12:07:41.966228   72964 main.go:141] libmachine: (embed-certs-725022) DBG | domain embed-certs-725022 has defined IP address 192.168.72.245 and MAC address 52:54:00:ba:41:8c in network mk-embed-certs-725022
	I0603 12:07:41.966409   72964 main.go:141] libmachine: (embed-certs-725022) Calling .GetSSHPort
	I0603 12:07:41.966608   72964 main.go:141] libmachine: (embed-certs-725022) Calling .GetSSHKeyPath
	I0603 12:07:41.966776   72964 main.go:141] libmachine: (embed-certs-725022) Calling .GetSSHKeyPath
	I0603 12:07:41.966939   72964 main.go:141] libmachine: (embed-certs-725022) Calling .GetSSHUsername
	I0603 12:07:41.967174   72964 main.go:141] libmachine: Using SSH client type: native
	I0603 12:07:41.967334   72964 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.72.245 22 <nil> <nil>}
	I0603 12:07:41.967345   72964 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0603 12:07:42.067942   72964 main.go:141] libmachine: SSH cmd err, output: <nil>: 1717416462.037866239
	
	I0603 12:07:42.067964   72964 fix.go:216] guest clock: 1717416462.037866239
	I0603 12:07:42.067973   72964 fix.go:229] Guest: 2024-06-03 12:07:42.037866239 +0000 UTC Remote: 2024-06-03 12:07:41.962653397 +0000 UTC m=+357.104782857 (delta=75.212842ms)
	I0603 12:07:42.067997   72964 fix.go:200] guest clock delta is within tolerance: 75.212842ms
	I0603 12:07:42.068004   72964 start.go:83] releasing machines lock for "embed-certs-725022", held for 19.643998665s
	I0603 12:07:42.068026   72964 main.go:141] libmachine: (embed-certs-725022) Calling .DriverName
	I0603 12:07:42.068359   72964 main.go:141] libmachine: (embed-certs-725022) Calling .GetIP
	I0603 12:07:42.071337   72964 main.go:141] libmachine: (embed-certs-725022) DBG | domain embed-certs-725022 has defined MAC address 52:54:00:ba:41:8c in network mk-embed-certs-725022
	I0603 12:07:42.071783   72964 main.go:141] libmachine: (embed-certs-725022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:41:8c", ip: ""} in network mk-embed-certs-725022: {Iface:virbr3 ExpiryTime:2024-06-03 12:58:10 +0000 UTC Type:0 Mac:52:54:00:ba:41:8c Iaid: IPaddr:192.168.72.245 Prefix:24 Hostname:embed-certs-725022 Clientid:01:52:54:00:ba:41:8c}
	I0603 12:07:42.071813   72964 main.go:141] libmachine: (embed-certs-725022) DBG | domain embed-certs-725022 has defined IP address 192.168.72.245 and MAC address 52:54:00:ba:41:8c in network mk-embed-certs-725022
	I0603 12:07:42.071980   72964 main.go:141] libmachine: (embed-certs-725022) Calling .DriverName
	I0603 12:07:42.072618   72964 main.go:141] libmachine: (embed-certs-725022) Calling .DriverName
	I0603 12:07:42.072806   72964 main.go:141] libmachine: (embed-certs-725022) Calling .DriverName
	I0603 12:07:42.072890   72964 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0603 12:07:42.072943   72964 main.go:141] libmachine: (embed-certs-725022) Calling .GetSSHHostname
	I0603 12:07:42.073038   72964 ssh_runner.go:195] Run: cat /version.json
	I0603 12:07:42.073079   72964 main.go:141] libmachine: (embed-certs-725022) Calling .GetSSHHostname
	I0603 12:07:42.075688   72964 main.go:141] libmachine: (embed-certs-725022) DBG | domain embed-certs-725022 has defined MAC address 52:54:00:ba:41:8c in network mk-embed-certs-725022
	I0603 12:07:42.075970   72964 main.go:141] libmachine: (embed-certs-725022) DBG | domain embed-certs-725022 has defined MAC address 52:54:00:ba:41:8c in network mk-embed-certs-725022
	I0603 12:07:42.076186   72964 main.go:141] libmachine: (embed-certs-725022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:41:8c", ip: ""} in network mk-embed-certs-725022: {Iface:virbr3 ExpiryTime:2024-06-03 12:58:10 +0000 UTC Type:0 Mac:52:54:00:ba:41:8c Iaid: IPaddr:192.168.72.245 Prefix:24 Hostname:embed-certs-725022 Clientid:01:52:54:00:ba:41:8c}
	I0603 12:07:42.076212   72964 main.go:141] libmachine: (embed-certs-725022) DBG | domain embed-certs-725022 has defined IP address 192.168.72.245 and MAC address 52:54:00:ba:41:8c in network mk-embed-certs-725022
	I0603 12:07:42.076458   72964 main.go:141] libmachine: (embed-certs-725022) Calling .GetSSHPort
	I0603 12:07:42.076465   72964 main.go:141] libmachine: (embed-certs-725022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:41:8c", ip: ""} in network mk-embed-certs-725022: {Iface:virbr3 ExpiryTime:2024-06-03 12:58:10 +0000 UTC Type:0 Mac:52:54:00:ba:41:8c Iaid: IPaddr:192.168.72.245 Prefix:24 Hostname:embed-certs-725022 Clientid:01:52:54:00:ba:41:8c}
	I0603 12:07:42.076501   72964 main.go:141] libmachine: (embed-certs-725022) DBG | domain embed-certs-725022 has defined IP address 192.168.72.245 and MAC address 52:54:00:ba:41:8c in network mk-embed-certs-725022
	I0603 12:07:42.076625   72964 main.go:141] libmachine: (embed-certs-725022) Calling .GetSSHKeyPath
	I0603 12:07:42.076694   72964 main.go:141] libmachine: (embed-certs-725022) Calling .GetSSHPort
	I0603 12:07:42.076815   72964 main.go:141] libmachine: (embed-certs-725022) Calling .GetSSHUsername
	I0603 12:07:42.076900   72964 main.go:141] libmachine: (embed-certs-725022) Calling .GetSSHKeyPath
	I0603 12:07:42.076993   72964 sshutil.go:53] new ssh client: &{IP:192.168.72.245 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19008-7755/.minikube/machines/embed-certs-725022/id_rsa Username:docker}
	I0603 12:07:42.077071   72964 main.go:141] libmachine: (embed-certs-725022) Calling .GetSSHUsername
	I0603 12:07:42.077227   72964 sshutil.go:53] new ssh client: &{IP:192.168.72.245 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19008-7755/.minikube/machines/embed-certs-725022/id_rsa Username:docker}
	I0603 12:07:42.178869   72964 ssh_runner.go:195] Run: systemctl --version
	I0603 12:07:42.184948   72964 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0603 12:07:42.333045   72964 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0603 12:07:42.339178   72964 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0603 12:07:42.339249   72964 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0603 12:07:42.356377   72964 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0603 12:07:42.356399   72964 start.go:494] detecting cgroup driver to use...
	I0603 12:07:42.356453   72964 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0603 12:07:42.374098   72964 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0603 12:07:42.387377   72964 docker.go:217] disabling cri-docker service (if available) ...
	I0603 12:07:42.387429   72964 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0603 12:07:42.400193   72964 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0603 12:07:42.413009   72964 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0603 12:07:42.524443   72964 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0603 12:07:42.670114   72964 docker.go:233] disabling docker service ...
	I0603 12:07:42.670194   72964 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0603 12:07:42.686085   72964 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0603 12:07:42.699222   72964 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0603 12:07:42.849018   72964 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0603 12:07:42.987143   72964 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0603 12:07:43.001493   72964 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0603 12:07:43.020011   72964 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0603 12:07:43.020077   72964 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 12:07:43.030835   72964 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0603 12:07:43.030903   72964 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 12:07:43.041325   72964 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 12:07:43.051229   72964 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 12:07:43.061184   72964 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0603 12:07:43.071245   72964 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 12:07:43.082466   72964 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 12:07:43.100381   72964 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
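The run of sed commands above rewrites the CRI-O drop-in /etc/crio/crio.conf.d/02-crio.conf: pin the pause image, switch the cgroup manager to cgroupfs, move conmon into the pod cgroup, and open unprivileged port 0 via default_sysctls. The two central substitutions, condensed into one invocation:

    sudo sed -i \
      -e 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' \
      -e 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' \
      /etc/crio/crio.conf.d/02-crio.conf
    # CRI-O is restarted a few lines further down, after the netfilter preparation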
	I0603 12:07:43.112802   72964 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0603 12:07:43.123404   72964 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0603 12:07:43.123452   72964 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0603 12:07:43.136935   72964 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
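The status-255 sysctl just above is expected rather than fatal: net.bridge.bridge-nf-call-iptables only exists once br_netfilter is loaded, which is exactly what the follow-up modprobe does before IPv4 forwarding is switched on. The fallback pattern in isolation:

    sudo sysctl net.bridge.bridge-nf-call-iptables >/dev/null 2>&1 \
      || sudo modprobe br_netfilter          # load the module when the sysctl is not exposed yet
    echo 1 | sudo tee /proc/sys/net/ipv4/ip_forward >/dev/null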
	I0603 12:07:43.145996   72964 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0603 12:07:43.269844   72964 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0603 12:07:43.404166   72964 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0603 12:07:43.404238   72964 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0603 12:07:43.411376   72964 start.go:562] Will wait 60s for crictl version
	I0603 12:07:43.411419   72964 ssh_runner.go:195] Run: which crictl
	I0603 12:07:43.415081   72964 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0603 12:07:43.455429   72964 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0603 12:07:43.455514   72964 ssh_runner.go:195] Run: crio --version
	I0603 12:07:43.483743   72964 ssh_runner.go:195] Run: crio --version
	I0603 12:07:43.516513   72964 out.go:177] * Preparing Kubernetes v1.30.1 on CRI-O 1.29.1 ...
	I0603 12:07:41.613036   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:07:43.613398   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:07:43.517710   72964 main.go:141] libmachine: (embed-certs-725022) Calling .GetIP
	I0603 12:07:43.520057   72964 main.go:141] libmachine: (embed-certs-725022) DBG | domain embed-certs-725022 has defined MAC address 52:54:00:ba:41:8c in network mk-embed-certs-725022
	I0603 12:07:43.520336   72964 main.go:141] libmachine: (embed-certs-725022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:41:8c", ip: ""} in network mk-embed-certs-725022: {Iface:virbr3 ExpiryTime:2024-06-03 12:58:10 +0000 UTC Type:0 Mac:52:54:00:ba:41:8c Iaid: IPaddr:192.168.72.245 Prefix:24 Hostname:embed-certs-725022 Clientid:01:52:54:00:ba:41:8c}
	I0603 12:07:43.520365   72964 main.go:141] libmachine: (embed-certs-725022) DBG | domain embed-certs-725022 has defined IP address 192.168.72.245 and MAC address 52:54:00:ba:41:8c in network mk-embed-certs-725022
	I0603 12:07:43.520579   72964 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0603 12:07:43.524653   72964 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0603 12:07:43.537864   72964 kubeadm.go:877] updating cluster {Name:embed-certs-725022 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:embed-certs-725022 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.245 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0603 12:07:43.537984   72964 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime crio
	I0603 12:07:43.538045   72964 ssh_runner.go:195] Run: sudo crictl images --output json
	I0603 12:07:43.574677   72964 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.1". assuming images are not preloaded.
	I0603 12:07:43.574738   72964 ssh_runner.go:195] Run: which lz4
	I0603 12:07:43.579297   72964 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0603 12:07:43.583831   72964 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0603 12:07:43.583865   72964 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (394537501 bytes)
	I0603 12:07:40.438270   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:07:40.938253   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:07:41.438610   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:07:41.938408   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:07:42.438825   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:07:42.938492   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:07:43.439013   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:07:43.938232   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:07:44.438816   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:07:44.938476   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:07:41.581827   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:07:44.084271   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:07:46.113319   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:07:48.117970   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:07:45.006860   72964 crio.go:462] duration metric: took 1.427589912s to copy over tarball
	I0603 12:07:45.006945   72964 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0603 12:07:47.289942   72964 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.282964729s)
	I0603 12:07:47.289966   72964 crio.go:469] duration metric: took 2.283075477s to extract the tarball
	I0603 12:07:47.289973   72964 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0603 12:07:47.330106   72964 ssh_runner.go:195] Run: sudo crictl images --output json
	I0603 12:07:47.377154   72964 crio.go:514] all images are preloaded for cri-o runtime.
	I0603 12:07:47.377180   72964 cache_images.go:84] Images are preloaded, skipping loading
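The preload path above copies a ~395 MB lz4 tarball of container images into the guest, unpacks it under /var so CRI-O's image store is populated without pulling, removes the archive, and re-checks crictl. On the guest, those steps amount to:

    sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
    sudo rm /preloaded.tar.lz4
    sudo crictl images --output json   # the preloaded images should now be listed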
	I0603 12:07:47.377189   72964 kubeadm.go:928] updating node { 192.168.72.245 8443 v1.30.1 crio true true} ...
	I0603 12:07:47.377334   72964 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-725022 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.245
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.1 ClusterName:embed-certs-725022 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0603 12:07:47.377416   72964 ssh_runner.go:195] Run: crio config
	I0603 12:07:47.436104   72964 cni.go:84] Creating CNI manager for ""
	I0603 12:07:47.436125   72964 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0603 12:07:47.436137   72964 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0603 12:07:47.436165   72964 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.245 APIServerPort:8443 KubernetesVersion:v1.30.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-725022 NodeName:embed-certs-725022 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.245"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.245 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0603 12:07:47.436330   72964 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.245
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-725022"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.245
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.245"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0603 12:07:47.436402   72964 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.1
	I0603 12:07:47.447427   72964 binaries.go:44] Found k8s binaries, skipping transfer
	I0603 12:07:47.447498   72964 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0603 12:07:47.459332   72964 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I0603 12:07:47.477962   72964 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0603 12:07:47.495897   72964 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2162 bytes)
	I0603 12:07:47.513033   72964 ssh_runner.go:195] Run: grep 192.168.72.245	control-plane.minikube.internal$ /etc/hosts
	I0603 12:07:47.517042   72964 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.245	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
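Both hosts-file updates in this section (host.minikube.internal earlier and control-plane.minikube.internal here) use the same idempotent rewrite shown in the command above: filter out any previous entry, append the fresh mapping, and copy the result back with sudo. Pulled out of the logged one-liner:

    ENTRY=$'192.168.72.245\tcontrol-plane.minikube.internal'
    { grep -v $'\tcontrol-plane.minikube.internal$' /etc/hosts; echo "$ENTRY"; } > /tmp/h.$$
    sudo cp /tmp/h.$$ /etc/hosts && rm /tmp/h.$$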
	I0603 12:07:47.529663   72964 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0603 12:07:47.649313   72964 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0603 12:07:47.666234   72964 certs.go:68] Setting up /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/embed-certs-725022 for IP: 192.168.72.245
	I0603 12:07:47.666258   72964 certs.go:194] generating shared ca certs ...
	I0603 12:07:47.666279   72964 certs.go:226] acquiring lock for ca certs: {Name:mk50092df3bdecbf959926adc377ee06b75aacd9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 12:07:47.666440   72964 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19008-7755/.minikube/ca.key
	I0603 12:07:47.666477   72964 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19008-7755/.minikube/proxy-client-ca.key
	I0603 12:07:47.666487   72964 certs.go:256] generating profile certs ...
	I0603 12:07:47.666567   72964 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/embed-certs-725022/client.key
	I0603 12:07:47.666623   72964 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/embed-certs-725022/apiserver.key.8c3ea0d5
	I0603 12:07:47.666712   72964 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/embed-certs-725022/proxy-client.key
	I0603 12:07:47.666874   72964 certs.go:484] found cert: /home/jenkins/minikube-integration/19008-7755/.minikube/certs/15028.pem (1338 bytes)
	W0603 12:07:47.666916   72964 certs.go:480] ignoring /home/jenkins/minikube-integration/19008-7755/.minikube/certs/15028_empty.pem, impossibly tiny 0 bytes
	I0603 12:07:47.666926   72964 certs.go:484] found cert: /home/jenkins/minikube-integration/19008-7755/.minikube/certs/ca-key.pem (1679 bytes)
	I0603 12:07:47.666947   72964 certs.go:484] found cert: /home/jenkins/minikube-integration/19008-7755/.minikube/certs/ca.pem (1082 bytes)
	I0603 12:07:47.666968   72964 certs.go:484] found cert: /home/jenkins/minikube-integration/19008-7755/.minikube/certs/cert.pem (1123 bytes)
	I0603 12:07:47.666988   72964 certs.go:484] found cert: /home/jenkins/minikube-integration/19008-7755/.minikube/certs/key.pem (1679 bytes)
	I0603 12:07:47.667026   72964 certs.go:484] found cert: /home/jenkins/minikube-integration/19008-7755/.minikube/files/etc/ssl/certs/150282.pem (1708 bytes)
	I0603 12:07:47.667721   72964 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0603 12:07:47.705180   72964 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0603 12:07:47.748552   72964 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0603 12:07:47.780173   72964 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0603 12:07:47.812902   72964 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/embed-certs-725022/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0603 12:07:47.844793   72964 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/embed-certs-725022/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0603 12:07:47.875181   72964 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/embed-certs-725022/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0603 12:07:47.899905   72964 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/embed-certs-725022/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0603 12:07:47.925039   72964 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0603 12:07:47.950701   72964 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/certs/15028.pem --> /usr/share/ca-certificates/15028.pem (1338 bytes)
	I0603 12:07:47.975798   72964 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/files/etc/ssl/certs/150282.pem --> /usr/share/ca-certificates/150282.pem (1708 bytes)
	I0603 12:07:48.002827   72964 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0603 12:07:48.021050   72964 ssh_runner.go:195] Run: openssl version
	I0603 12:07:48.027977   72964 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0603 12:07:48.043764   72964 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0603 12:07:48.050265   72964 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jun  3 10:39 /usr/share/ca-certificates/minikubeCA.pem
	I0603 12:07:48.050315   72964 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0603 12:07:48.056387   72964 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0603 12:07:48.067816   72964 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15028.pem && ln -fs /usr/share/ca-certificates/15028.pem /etc/ssl/certs/15028.pem"
	I0603 12:07:48.083715   72964 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15028.pem
	I0603 12:07:48.088813   72964 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jun  3 10:51 /usr/share/ca-certificates/15028.pem
	I0603 12:07:48.088870   72964 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15028.pem
	I0603 12:07:48.094833   72964 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/15028.pem /etc/ssl/certs/51391683.0"
	I0603 12:07:48.108005   72964 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/150282.pem && ln -fs /usr/share/ca-certificates/150282.pem /etc/ssl/certs/150282.pem"
	I0603 12:07:48.120434   72964 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/150282.pem
	I0603 12:07:48.125542   72964 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jun  3 10:51 /usr/share/ca-certificates/150282.pem
	I0603 12:07:48.125603   72964 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/150282.pem
	I0603 12:07:48.132060   72964 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/150282.pem /etc/ssl/certs/3ec20f2e.0"
	I0603 12:07:48.143594   72964 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0603 12:07:48.148392   72964 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0603 12:07:48.154571   72964 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0603 12:07:48.160573   72964 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0603 12:07:48.167146   72964 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0603 12:07:48.175232   72964 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0603 12:07:48.182197   72964 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
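Note: the openssl calls above ("x509 -noout -checkend 86400") ask whether each control-plane certificate will still be valid 24 hours from now. A hedged Go equivalent using crypto/x509 is sketched below; the certificate path in main is only illustrative, taken from one of the files the test probes.

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
	"time"
)

// checkend reports whether the PEM certificate at path expires within d,
// mirroring `openssl x509 -checkend <seconds>`.
func checkend(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("%s: no PEM block found", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	// Expiring within d means NotAfter falls before now+d.
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	expiring, err := checkend("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("expires within 24h:", expiring)
}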
	I0603 12:07:48.188588   72964 kubeadm.go:391] StartCluster: {Name:embed-certs-725022 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:embed-certs-725022 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.245 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0603 12:07:48.188680   72964 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0603 12:07:48.188733   72964 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0603 12:07:48.229134   72964 cri.go:89] found id: ""
	I0603 12:07:48.229215   72964 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0603 12:07:48.241663   72964 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0603 12:07:48.241687   72964 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0603 12:07:48.241692   72964 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0603 12:07:48.241756   72964 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0603 12:07:48.252641   72964 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0603 12:07:48.253644   72964 kubeconfig.go:125] found "embed-certs-725022" server: "https://192.168.72.245:8443"
	I0603 12:07:48.255726   72964 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0603 12:07:48.265816   72964 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.72.245
	I0603 12:07:48.265849   72964 kubeadm.go:1154] stopping kube-system containers ...
	I0603 12:07:48.265862   72964 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0603 12:07:48.265956   72964 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0603 12:07:48.306408   72964 cri.go:89] found id: ""
	I0603 12:07:48.306471   72964 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0603 12:07:48.324859   72964 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0603 12:07:48.336076   72964 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0603 12:07:48.336098   72964 kubeadm.go:156] found existing configuration files:
	
	I0603 12:07:48.336159   72964 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0603 12:07:48.347274   72964 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0603 12:07:48.347328   72964 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0603 12:07:48.358447   72964 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0603 12:07:48.369460   72964 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0603 12:07:48.369509   72964 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0603 12:07:48.379714   72964 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0603 12:07:48.390460   72964 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0603 12:07:48.390506   72964 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0603 12:07:48.401178   72964 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0603 12:07:48.411383   72964 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0603 12:07:48.411423   72964 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0603 12:07:48.421813   72964 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0603 12:07:48.434585   72964 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0603 12:07:48.561075   72964 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0603 12:07:49.278187   72964 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0603 12:07:49.504897   72964 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0603 12:07:49.559494   72964 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0603 12:07:49.634949   72964 api_server.go:52] waiting for apiserver process to appear ...
	I0603 12:07:49.635051   72964 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:07:45.438738   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:07:45.939144   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:07:46.438431   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:07:46.938360   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:07:47.438811   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:07:47.938857   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:07:48.438849   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:07:48.938531   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:07:49.438876   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:07:49.938908   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:07:46.581939   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:07:48.584466   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:07:50.635461   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:07:53.112719   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:07:50.135411   72964 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:07:50.635951   72964 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:07:51.136119   72964 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:07:51.158722   72964 api_server.go:72] duration metric: took 1.52377732s to wait for apiserver process to appear ...
	I0603 12:07:51.158747   72964 api_server.go:88] waiting for apiserver healthz status ...
	I0603 12:07:51.158767   72964 api_server.go:253] Checking apiserver healthz at https://192.168.72.245:8443/healthz ...
	I0603 12:07:54.082978   72964 api_server.go:279] https://192.168.72.245:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0603 12:07:54.083005   72964 api_server.go:103] status: https://192.168.72.245:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0603 12:07:54.083017   72964 api_server.go:253] Checking apiserver healthz at https://192.168.72.245:8443/healthz ...
	I0603 12:07:54.092290   72964 api_server.go:279] https://192.168.72.245:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0603 12:07:54.092311   72964 api_server.go:103] status: https://192.168.72.245:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0603 12:07:54.159522   72964 api_server.go:253] Checking apiserver healthz at https://192.168.72.245:8443/healthz ...
	I0603 12:07:54.173284   72964 api_server.go:279] https://192.168.72.245:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0603 12:07:54.173308   72964 api_server.go:103] status: https://192.168.72.245:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0603 12:07:54.658949   72964 api_server.go:253] Checking apiserver healthz at https://192.168.72.245:8443/healthz ...
	I0603 12:07:54.663966   72964 api_server.go:279] https://192.168.72.245:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0603 12:07:54.663991   72964 api_server.go:103] status: https://192.168.72.245:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0603 12:07:50.438966   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:07:50.938952   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:07:51.439179   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:07:51.938804   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:07:52.438327   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:07:52.938677   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:07:53.438995   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:07:53.938976   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:07:54.438174   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:07:54.938412   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:07:50.641189   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:07:53.081531   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:07:55.081845   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:07:55.159125   72964 api_server.go:253] Checking apiserver healthz at https://192.168.72.245:8443/healthz ...
	I0603 12:07:55.168267   72964 api_server.go:279] https://192.168.72.245:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0603 12:07:55.168307   72964 api_server.go:103] status: https://192.168.72.245:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0603 12:07:55.658824   72964 api_server.go:253] Checking apiserver healthz at https://192.168.72.245:8443/healthz ...
	I0603 12:07:55.663523   72964 api_server.go:279] https://192.168.72.245:8443/healthz returned 200:
	ok
	I0603 12:07:55.670352   72964 api_server.go:141] control plane version: v1.30.1
	I0603 12:07:55.670383   72964 api_server.go:131] duration metric: took 4.511629799s to wait for apiserver health ...
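Note: the retry loop above polls https://192.168.72.245:8443/healthz, treating 403 (anonymous user before RBAC bootstrap) and 500 (post-start hooks still failing) as "not ready" until a plain 200 "ok" comes back. A minimal sketch of such a poller follows, assuming anonymous access and skipping TLS verification because no client certificate is presented; the endpoint and timeout are taken from the log, everything else is illustrative.

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"log"
	"net/http"
	"time"
)

func main() {
	// Endpoint as shown in the log above; adjust for your cluster.
	const healthz = "https://192.168.72.245:8443/healthz"

	client := &http.Client{
		Timeout: 5 * time.Second,
		// Anonymous probe with no client cert, so skip verification like `curl -k` would.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}

	deadline := time.Now().Add(4 * time.Minute)
	for time.Now().Before(deadline) {
		resp, err := client.Get(healthz)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Println("apiserver healthy:", string(body))
				return
			}
			log.Printf("healthz returned %d, retrying", resp.StatusCode)
		} else {
			log.Printf("healthz request failed: %v", err)
		}
		time.Sleep(500 * time.Millisecond)
	}
	log.Fatal("apiserver did not become healthy before the deadline")
}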
	I0603 12:07:55.670391   72964 cni.go:84] Creating CNI manager for ""
	I0603 12:07:55.670397   72964 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0603 12:07:55.672360   72964 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0603 12:07:55.113539   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:07:57.613236   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:07:55.673720   72964 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0603 12:07:55.686773   72964 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0603 12:07:55.716937   72964 system_pods.go:43] waiting for kube-system pods to appear ...
	I0603 12:07:55.729237   72964 system_pods.go:59] 8 kube-system pods found
	I0603 12:07:55.729267   72964 system_pods.go:61] "coredns-7db6d8ff4d-thrfl" [efc31931-5040-4bb9-92e0-cdda477b38b2] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0603 12:07:55.729274   72964 system_pods.go:61] "etcd-embed-certs-725022" [47be7787-e8ae-4a63-9209-943edeec91b6] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0603 12:07:55.729281   72964 system_pods.go:61] "kube-apiserver-embed-certs-725022" [2812f362-ddb8-4f45-bdfe-ba5d90f3b33f] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0603 12:07:55.729287   72964 system_pods.go:61] "kube-controller-manager-embed-certs-725022" [97666e49-31ac-41c0-a49c-0db51d6c07b6] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0603 12:07:55.729294   72964 system_pods.go:61] "kube-proxy-d5ztj" [854c88f3-f0ab-4885-95a0-8134db48fc84] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0603 12:07:55.729300   72964 system_pods.go:61] "kube-scheduler-embed-certs-725022" [df602caf-2ca4-4963-b724-5a6e8de65c78] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0603 12:07:55.729306   72964 system_pods.go:61] "metrics-server-569cc877fc-8jrnd" [3087c05b-9a8e-4bf7-bbe7-79f3c5540bf7] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0603 12:07:55.729313   72964 system_pods.go:61] "storage-provisioner" [68eeb37a-7098-4e87-8384-3399c2bbc583] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0603 12:07:55.729319   72964 system_pods.go:74] duration metric: took 12.368001ms to wait for pod list to return data ...
	I0603 12:07:55.729329   72964 node_conditions.go:102] verifying NodePressure condition ...
	I0603 12:07:55.733006   72964 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0603 12:07:55.733024   72964 node_conditions.go:123] node cpu capacity is 2
	I0603 12:07:55.733033   72964 node_conditions.go:105] duration metric: took 3.699303ms to run NodePressure ...
	I0603 12:07:55.733047   72964 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0603 12:07:56.040149   72964 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0603 12:07:56.050355   72964 kubeadm.go:733] kubelet initialised
	I0603 12:07:56.050376   72964 kubeadm.go:734] duration metric: took 10.199837ms waiting for restarted kubelet to initialise ...
	I0603 12:07:56.050383   72964 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0603 12:07:56.055536   72964 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-thrfl" in "kube-system" namespace to be "Ready" ...
	I0603 12:07:58.062682   72964 pod_ready.go:102] pod "coredns-7db6d8ff4d-thrfl" in "kube-system" namespace has status "Ready":"False"
	I0603 12:07:55.438798   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:07:55.938263   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:07:56.438870   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:07:56.938915   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:07:57.438799   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:07:57.938972   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:07:58.438367   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:07:58.939045   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:07:59.439020   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:07:59.938716   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:07:57.581813   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:08:00.080226   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:08:00.113886   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:08:02.613795   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:08:00.062724   72964 pod_ready.go:102] pod "coredns-7db6d8ff4d-thrfl" in "kube-system" namespace has status "Ready":"False"
	I0603 12:08:02.062937   72964 pod_ready.go:102] pod "coredns-7db6d8ff4d-thrfl" in "kube-system" namespace has status "Ready":"False"
	I0603 12:08:04.565302   72964 pod_ready.go:102] pod "coredns-7db6d8ff4d-thrfl" in "kube-system" namespace has status "Ready":"False"
	I0603 12:08:00.438789   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:00.938973   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:01.439098   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:01.938892   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:02.438978   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:02.938317   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:03.438969   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:03.938274   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:04.438255   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:04.938545   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:02.081713   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:08:04.082219   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:08:05.112940   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:08:07.113191   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:08:07.075333   72964 pod_ready.go:92] pod "coredns-7db6d8ff4d-thrfl" in "kube-system" namespace has status "Ready":"True"
	I0603 12:08:07.075361   72964 pod_ready.go:81] duration metric: took 11.019801293s for pod "coredns-7db6d8ff4d-thrfl" in "kube-system" namespace to be "Ready" ...
	I0603 12:08:07.075375   72964 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-725022" in "kube-system" namespace to be "Ready" ...
	I0603 12:08:08.583435   72964 pod_ready.go:92] pod "etcd-embed-certs-725022" in "kube-system" namespace has status "Ready":"True"
	I0603 12:08:08.583459   72964 pod_ready.go:81] duration metric: took 1.508076213s for pod "etcd-embed-certs-725022" in "kube-system" namespace to be "Ready" ...
	I0603 12:08:08.583468   72964 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-725022" in "kube-system" namespace to be "Ready" ...
	I0603 12:08:08.588791   72964 pod_ready.go:92] pod "kube-apiserver-embed-certs-725022" in "kube-system" namespace has status "Ready":"True"
	I0603 12:08:08.588817   72964 pod_ready.go:81] duration metric: took 5.342068ms for pod "kube-apiserver-embed-certs-725022" in "kube-system" namespace to be "Ready" ...
	I0603 12:08:08.588836   72964 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-725022" in "kube-system" namespace to be "Ready" ...
	I0603 12:08:08.593258   72964 pod_ready.go:92] pod "kube-controller-manager-embed-certs-725022" in "kube-system" namespace has status "Ready":"True"
	I0603 12:08:08.593279   72964 pod_ready.go:81] duration metric: took 4.43483ms for pod "kube-controller-manager-embed-certs-725022" in "kube-system" namespace to be "Ready" ...
	I0603 12:08:08.593292   72964 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-d5ztj" in "kube-system" namespace to be "Ready" ...
	I0603 12:08:08.601106   72964 pod_ready.go:92] pod "kube-proxy-d5ztj" in "kube-system" namespace has status "Ready":"True"
	I0603 12:08:08.601125   72964 pod_ready.go:81] duration metric: took 7.826962ms for pod "kube-proxy-d5ztj" in "kube-system" namespace to be "Ready" ...
	I0603 12:08:08.601133   72964 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-725022" in "kube-system" namespace to be "Ready" ...
	I0603 12:08:08.660242   72964 pod_ready.go:92] pod "kube-scheduler-embed-certs-725022" in "kube-system" namespace has status "Ready":"True"
	I0603 12:08:08.660275   72964 pod_ready.go:81] duration metric: took 59.134528ms for pod "kube-scheduler-embed-certs-725022" in "kube-system" namespace to be "Ready" ...
	I0603 12:08:08.660297   72964 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace to be "Ready" ...
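Note: each pod_ready line above waits for the Ready condition on a named kube-system pod. A hedged client-go sketch of the same check is shown below; the pod name and namespace come from the log, while the kubeconfig path is only an assumption for illustration (the test uses the profile's generated kubeconfig).

package main

import (
	"context"
	"fmt"
	"log"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isReady reports whether the pod carries a Ready=True condition.
func isReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
			return true
		}
	}
	return false
}

func main() {
	// Illustrative kubeconfig path; replace with the kubeconfig for your profile.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		log.Fatal(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}

	ctx, cancel := context.WithTimeout(context.Background(), 4*time.Minute)
	defer cancel()
	for {
		pod, err := client.CoreV1().Pods("kube-system").Get(ctx, "metrics-server-569cc877fc-8jrnd", metav1.GetOptions{})
		if err == nil && isReady(pod) {
			fmt.Println("pod is Ready")
			return
		}
		select {
		case <-ctx.Done():
			log.Fatal("timed out waiting for pod to be Ready")
		case <-time.After(2 * time.Second):
		}
	}
}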
	I0603 12:08:05.438368   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:05.938174   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:06.438995   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:06.939167   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:07.438451   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:07.938651   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:08.438892   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:08.938182   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:09.438548   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:09.938352   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:06.580980   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:08:08.583476   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:08:09.612231   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:08:11.613131   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:08:14.115179   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:08:10.667171   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:08:13.166284   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:08:10.438932   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:10.938156   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:11.438911   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:11.939064   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:12.438578   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:12.938389   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:13.438469   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:13.939000   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:14.438219   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:14.938949   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:11.081492   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:08:13.581052   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:08:16.612649   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:08:19.112795   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:08:15.166468   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:08:17.166591   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:08:19.666737   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:08:15.438709   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:15.938471   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:16.438909   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:16.939131   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:17.438995   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:17.938810   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:18.438615   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:18.938920   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:19.438966   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:19.938696   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:15.581276   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:08:17.581764   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:08:19.582048   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:08:21.116274   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:08:23.613288   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:08:21.667736   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:08:23.667798   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:08:20.438818   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:20.938625   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:21.439129   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:21.938488   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:22.438452   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:22.938328   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:23.438557   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:23.938427   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:24.438391   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:24.939088   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:22.080444   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:08:24.081387   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:08:26.113843   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:08:28.612076   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:08:26.165833   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:08:28.169171   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:08:25.439153   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:25.939073   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:26.438157   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:26.938755   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:27.438244   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:27.938149   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:28.439131   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:28.938855   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:29.439027   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:29.938159   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:26.081716   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:08:28.582162   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:08:30.613632   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:08:33.111746   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:08:30.667602   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:08:33.168233   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:08:30.438727   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:30.938281   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:31.438203   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:31.938903   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:32.438731   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:32.938479   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:33.438133   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 12:08:33.438202   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 12:08:33.480006   73662 cri.go:89] found id: ""
	I0603 12:08:33.480044   73662 logs.go:276] 0 containers: []
	W0603 12:08:33.480056   73662 logs.go:278] No container was found matching "kube-apiserver"
	I0603 12:08:33.480066   73662 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 12:08:33.480126   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 12:08:33.519446   73662 cri.go:89] found id: ""
	I0603 12:08:33.519469   73662 logs.go:276] 0 containers: []
	W0603 12:08:33.519476   73662 logs.go:278] No container was found matching "etcd"
	I0603 12:08:33.519480   73662 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 12:08:33.519536   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 12:08:33.553602   73662 cri.go:89] found id: ""
	I0603 12:08:33.553624   73662 logs.go:276] 0 containers: []
	W0603 12:08:33.553631   73662 logs.go:278] No container was found matching "coredns"
	I0603 12:08:33.553637   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 12:08:33.553692   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 12:08:33.588061   73662 cri.go:89] found id: ""
	I0603 12:08:33.588085   73662 logs.go:276] 0 containers: []
	W0603 12:08:33.588094   73662 logs.go:278] No container was found matching "kube-scheduler"
	I0603 12:08:33.588103   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 12:08:33.588155   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 12:08:33.623960   73662 cri.go:89] found id: ""
	I0603 12:08:33.623983   73662 logs.go:276] 0 containers: []
	W0603 12:08:33.623993   73662 logs.go:278] No container was found matching "kube-proxy"
	I0603 12:08:33.624000   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 12:08:33.624071   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 12:08:33.658829   73662 cri.go:89] found id: ""
	I0603 12:08:33.658873   73662 logs.go:276] 0 containers: []
	W0603 12:08:33.658885   73662 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 12:08:33.658893   73662 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 12:08:33.658956   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 12:08:33.699501   73662 cri.go:89] found id: ""
	I0603 12:08:33.699526   73662 logs.go:276] 0 containers: []
	W0603 12:08:33.699536   73662 logs.go:278] No container was found matching "kindnet"
	I0603 12:08:33.699544   73662 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 12:08:33.699601   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 12:08:33.732293   73662 cri.go:89] found id: ""
	I0603 12:08:33.732327   73662 logs.go:276] 0 containers: []
	W0603 12:08:33.732338   73662 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 12:08:33.732348   73662 logs.go:123] Gathering logs for kubelet ...
	I0603 12:08:33.732361   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 12:08:33.783990   73662 logs.go:123] Gathering logs for dmesg ...
	I0603 12:08:33.784027   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 12:08:33.800684   73662 logs.go:123] Gathering logs for describe nodes ...
	I0603 12:08:33.800711   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 12:08:33.939661   73662 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 12:08:33.939685   73662 logs.go:123] Gathering logs for CRI-O ...
	I0603 12:08:33.939699   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 12:08:34.006442   73662 logs.go:123] Gathering logs for container status ...
	I0603 12:08:34.006473   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 12:08:31.081400   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:08:33.582139   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:08:35.112488   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:08:37.113080   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:08:35.666988   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:08:38.166862   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:08:36.549129   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:36.562476   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 12:08:36.562536   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 12:08:36.600035   73662 cri.go:89] found id: ""
	I0603 12:08:36.600074   73662 logs.go:276] 0 containers: []
	W0603 12:08:36.600084   73662 logs.go:278] No container was found matching "kube-apiserver"
	I0603 12:08:36.600091   73662 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 12:08:36.600147   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 12:08:36.661954   73662 cri.go:89] found id: ""
	I0603 12:08:36.661981   73662 logs.go:276] 0 containers: []
	W0603 12:08:36.661989   73662 logs.go:278] No container was found matching "etcd"
	I0603 12:08:36.661996   73662 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 12:08:36.662082   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 12:08:36.699538   73662 cri.go:89] found id: ""
	I0603 12:08:36.699561   73662 logs.go:276] 0 containers: []
	W0603 12:08:36.699569   73662 logs.go:278] No container was found matching "coredns"
	I0603 12:08:36.699574   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 12:08:36.699619   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 12:08:36.735256   73662 cri.go:89] found id: ""
	I0603 12:08:36.735283   73662 logs.go:276] 0 containers: []
	W0603 12:08:36.735291   73662 logs.go:278] No container was found matching "kube-scheduler"
	I0603 12:08:36.735296   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 12:08:36.735356   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 12:08:36.779862   73662 cri.go:89] found id: ""
	I0603 12:08:36.779888   73662 logs.go:276] 0 containers: []
	W0603 12:08:36.779895   73662 logs.go:278] No container was found matching "kube-proxy"
	I0603 12:08:36.779900   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 12:08:36.779946   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 12:08:36.818146   73662 cri.go:89] found id: ""
	I0603 12:08:36.818180   73662 logs.go:276] 0 containers: []
	W0603 12:08:36.818190   73662 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 12:08:36.818198   73662 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 12:08:36.818256   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 12:08:36.855408   73662 cri.go:89] found id: ""
	I0603 12:08:36.855436   73662 logs.go:276] 0 containers: []
	W0603 12:08:36.855447   73662 logs.go:278] No container was found matching "kindnet"
	I0603 12:08:36.855455   73662 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 12:08:36.855521   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 12:08:36.891656   73662 cri.go:89] found id: ""
	I0603 12:08:36.891686   73662 logs.go:276] 0 containers: []
	W0603 12:08:36.891697   73662 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 12:08:36.891709   73662 logs.go:123] Gathering logs for container status ...
	I0603 12:08:36.891725   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 12:08:36.937992   73662 logs.go:123] Gathering logs for kubelet ...
	I0603 12:08:36.938025   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 12:08:36.992422   73662 logs.go:123] Gathering logs for dmesg ...
	I0603 12:08:36.992456   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 12:08:37.007064   73662 logs.go:123] Gathering logs for describe nodes ...
	I0603 12:08:37.007093   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 12:08:37.088103   73662 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 12:08:37.088124   73662 logs.go:123] Gathering logs for CRI-O ...
	I0603 12:08:37.088136   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 12:08:39.660794   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:39.674617   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 12:08:39.674694   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 12:08:39.711446   73662 cri.go:89] found id: ""
	I0603 12:08:39.711482   73662 logs.go:276] 0 containers: []
	W0603 12:08:39.711493   73662 logs.go:278] No container was found matching "kube-apiserver"
	I0603 12:08:39.711501   73662 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 12:08:39.711565   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 12:08:39.745918   73662 cri.go:89] found id: ""
	I0603 12:08:39.745947   73662 logs.go:276] 0 containers: []
	W0603 12:08:39.745957   73662 logs.go:278] No container was found matching "etcd"
	I0603 12:08:39.745964   73662 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 12:08:39.746013   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 12:08:39.780713   73662 cri.go:89] found id: ""
	I0603 12:08:39.780739   73662 logs.go:276] 0 containers: []
	W0603 12:08:39.780760   73662 logs.go:278] No container was found matching "coredns"
	I0603 12:08:39.780777   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 12:08:39.780839   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 12:08:39.815657   73662 cri.go:89] found id: ""
	I0603 12:08:39.815685   73662 logs.go:276] 0 containers: []
	W0603 12:08:39.815696   73662 logs.go:278] No container was found matching "kube-scheduler"
	I0603 12:08:39.815703   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 12:08:39.815769   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 12:08:39.849403   73662 cri.go:89] found id: ""
	I0603 12:08:39.849439   73662 logs.go:276] 0 containers: []
	W0603 12:08:39.849449   73662 logs.go:278] No container was found matching "kube-proxy"
	I0603 12:08:39.849456   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 12:08:39.849524   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 12:08:39.884830   73662 cri.go:89] found id: ""
	I0603 12:08:39.884876   73662 logs.go:276] 0 containers: []
	W0603 12:08:39.884887   73662 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 12:08:39.884894   73662 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 12:08:39.884954   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 12:08:39.917820   73662 cri.go:89] found id: ""
	I0603 12:08:39.917853   73662 logs.go:276] 0 containers: []
	W0603 12:08:39.917863   73662 logs.go:278] No container was found matching "kindnet"
	I0603 12:08:39.917871   73662 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 12:08:39.917928   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 12:08:39.955294   73662 cri.go:89] found id: ""
	I0603 12:08:39.955330   73662 logs.go:276] 0 containers: []
	W0603 12:08:39.955340   73662 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 12:08:39.955350   73662 logs.go:123] Gathering logs for container status ...
	I0603 12:08:39.955364   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 12:08:39.997553   73662 logs.go:123] Gathering logs for kubelet ...
	I0603 12:08:39.997577   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 12:08:40.052216   73662 logs.go:123] Gathering logs for dmesg ...
	I0603 12:08:40.052251   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 12:08:40.066377   73662 logs.go:123] Gathering logs for describe nodes ...
	I0603 12:08:40.066405   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0603 12:08:36.080739   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:08:38.580681   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:08:39.611998   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:08:41.613058   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:08:44.112634   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:08:40.168134   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:08:42.666329   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:08:44.666738   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	W0603 12:08:40.145631   73662 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 12:08:40.145653   73662 logs.go:123] Gathering logs for CRI-O ...
	I0603 12:08:40.145668   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 12:08:42.718782   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:42.732121   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 12:08:42.732197   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 12:08:42.766418   73662 cri.go:89] found id: ""
	I0603 12:08:42.766443   73662 logs.go:276] 0 containers: []
	W0603 12:08:42.766451   73662 logs.go:278] No container was found matching "kube-apiserver"
	I0603 12:08:42.766456   73662 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 12:08:42.766503   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 12:08:42.809790   73662 cri.go:89] found id: ""
	I0603 12:08:42.809821   73662 logs.go:276] 0 containers: []
	W0603 12:08:42.809830   73662 logs.go:278] No container was found matching "etcd"
	I0603 12:08:42.809836   73662 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 12:08:42.809893   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 12:08:42.843410   73662 cri.go:89] found id: ""
	I0603 12:08:42.843439   73662 logs.go:276] 0 containers: []
	W0603 12:08:42.843446   73662 logs.go:278] No container was found matching "coredns"
	I0603 12:08:42.843456   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 12:08:42.843510   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 12:08:42.879150   73662 cri.go:89] found id: ""
	I0603 12:08:42.879177   73662 logs.go:276] 0 containers: []
	W0603 12:08:42.879186   73662 logs.go:278] No container was found matching "kube-scheduler"
	I0603 12:08:42.879193   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 12:08:42.879256   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 12:08:42.914565   73662 cri.go:89] found id: ""
	I0603 12:08:42.914598   73662 logs.go:276] 0 containers: []
	W0603 12:08:42.914609   73662 logs.go:278] No container was found matching "kube-proxy"
	I0603 12:08:42.914616   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 12:08:42.914680   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 12:08:42.949467   73662 cri.go:89] found id: ""
	I0603 12:08:42.949496   73662 logs.go:276] 0 containers: []
	W0603 12:08:42.949506   73662 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 12:08:42.949513   73662 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 12:08:42.949563   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 12:08:42.984235   73662 cri.go:89] found id: ""
	I0603 12:08:42.984257   73662 logs.go:276] 0 containers: []
	W0603 12:08:42.984264   73662 logs.go:278] No container was found matching "kindnet"
	I0603 12:08:42.984269   73662 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 12:08:42.984314   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 12:08:43.027786   73662 cri.go:89] found id: ""
	I0603 12:08:43.027816   73662 logs.go:276] 0 containers: []
	W0603 12:08:43.027827   73662 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 12:08:43.027838   73662 logs.go:123] Gathering logs for kubelet ...
	I0603 12:08:43.027852   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 12:08:43.099184   73662 logs.go:123] Gathering logs for dmesg ...
	I0603 12:08:43.099212   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 12:08:43.124733   73662 logs.go:123] Gathering logs for describe nodes ...
	I0603 12:08:43.124755   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 12:08:43.194716   73662 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 12:08:43.194741   73662 logs.go:123] Gathering logs for CRI-O ...
	I0603 12:08:43.194759   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 12:08:43.275948   73662 logs.go:123] Gathering logs for container status ...
	I0603 12:08:43.275982   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 12:08:41.080968   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:08:43.081892   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:08:45.082261   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:08:46.113795   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:08:48.612577   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:08:47.166497   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:08:49.167122   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:08:45.819178   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:45.832301   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 12:08:45.832391   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 12:08:45.867947   73662 cri.go:89] found id: ""
	I0603 12:08:45.867979   73662 logs.go:276] 0 containers: []
	W0603 12:08:45.867990   73662 logs.go:278] No container was found matching "kube-apiserver"
	I0603 12:08:45.867998   73662 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 12:08:45.868050   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 12:08:45.909498   73662 cri.go:89] found id: ""
	I0603 12:08:45.909529   73662 logs.go:276] 0 containers: []
	W0603 12:08:45.909541   73662 logs.go:278] No container was found matching "etcd"
	I0603 12:08:45.909552   73662 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 12:08:45.909614   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 12:08:45.942313   73662 cri.go:89] found id: ""
	I0603 12:08:45.942343   73662 logs.go:276] 0 containers: []
	W0603 12:08:45.942353   73662 logs.go:278] No container was found matching "coredns"
	I0603 12:08:45.942361   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 12:08:45.942425   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 12:08:45.976217   73662 cri.go:89] found id: ""
	I0603 12:08:45.976246   73662 logs.go:276] 0 containers: []
	W0603 12:08:45.976254   73662 logs.go:278] No container was found matching "kube-scheduler"
	I0603 12:08:45.976260   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 12:08:45.976306   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 12:08:46.010553   73662 cri.go:89] found id: ""
	I0603 12:08:46.010583   73662 logs.go:276] 0 containers: []
	W0603 12:08:46.010593   73662 logs.go:278] No container was found matching "kube-proxy"
	I0603 12:08:46.010599   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 12:08:46.010675   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 12:08:46.048459   73662 cri.go:89] found id: ""
	I0603 12:08:46.048481   73662 logs.go:276] 0 containers: []
	W0603 12:08:46.048489   73662 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 12:08:46.048495   73662 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 12:08:46.048540   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 12:08:46.084823   73662 cri.go:89] found id: ""
	I0603 12:08:46.084852   73662 logs.go:276] 0 containers: []
	W0603 12:08:46.084862   73662 logs.go:278] No container was found matching "kindnet"
	I0603 12:08:46.084869   73662 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 12:08:46.084920   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 12:08:46.129011   73662 cri.go:89] found id: ""
	I0603 12:08:46.129036   73662 logs.go:276] 0 containers: []
	W0603 12:08:46.129046   73662 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 12:08:46.129055   73662 logs.go:123] Gathering logs for dmesg ...
	I0603 12:08:46.129069   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 12:08:46.144145   73662 logs.go:123] Gathering logs for describe nodes ...
	I0603 12:08:46.144179   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 12:08:46.213800   73662 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 12:08:46.213826   73662 logs.go:123] Gathering logs for CRI-O ...
	I0603 12:08:46.213841   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 12:08:46.294423   73662 logs.go:123] Gathering logs for container status ...
	I0603 12:08:46.294453   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 12:08:46.334408   73662 logs.go:123] Gathering logs for kubelet ...
	I0603 12:08:46.334436   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 12:08:48.888798   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:48.901815   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 12:08:48.901876   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 12:08:48.935266   73662 cri.go:89] found id: ""
	I0603 12:08:48.935290   73662 logs.go:276] 0 containers: []
	W0603 12:08:48.935301   73662 logs.go:278] No container was found matching "kube-apiserver"
	I0603 12:08:48.935308   73662 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 12:08:48.935375   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 12:08:48.969640   73662 cri.go:89] found id: ""
	I0603 12:08:48.969666   73662 logs.go:276] 0 containers: []
	W0603 12:08:48.969673   73662 logs.go:278] No container was found matching "etcd"
	I0603 12:08:48.969678   73662 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 12:08:48.969739   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 12:08:49.003697   73662 cri.go:89] found id: ""
	I0603 12:08:49.003725   73662 logs.go:276] 0 containers: []
	W0603 12:08:49.003736   73662 logs.go:278] No container was found matching "coredns"
	I0603 12:08:49.003743   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 12:08:49.003800   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 12:08:49.037808   73662 cri.go:89] found id: ""
	I0603 12:08:49.037837   73662 logs.go:276] 0 containers: []
	W0603 12:08:49.037847   73662 logs.go:278] No container was found matching "kube-scheduler"
	I0603 12:08:49.037879   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 12:08:49.037947   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 12:08:49.071844   73662 cri.go:89] found id: ""
	I0603 12:08:49.071875   73662 logs.go:276] 0 containers: []
	W0603 12:08:49.071885   73662 logs.go:278] No container was found matching "kube-proxy"
	I0603 12:08:49.071892   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 12:08:49.071952   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 12:08:49.107907   73662 cri.go:89] found id: ""
	I0603 12:08:49.107934   73662 logs.go:276] 0 containers: []
	W0603 12:08:49.107945   73662 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 12:08:49.107952   73662 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 12:08:49.108012   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 12:08:49.144847   73662 cri.go:89] found id: ""
	I0603 12:08:49.144869   73662 logs.go:276] 0 containers: []
	W0603 12:08:49.144876   73662 logs.go:278] No container was found matching "kindnet"
	I0603 12:08:49.144882   73662 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 12:08:49.144944   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 12:08:49.183910   73662 cri.go:89] found id: ""
	I0603 12:08:49.183931   73662 logs.go:276] 0 containers: []
	W0603 12:08:49.183940   73662 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 12:08:49.183951   73662 logs.go:123] Gathering logs for kubelet ...
	I0603 12:08:49.183964   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 12:08:49.237344   73662 logs.go:123] Gathering logs for dmesg ...
	I0603 12:08:49.237376   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 12:08:49.251612   73662 logs.go:123] Gathering logs for describe nodes ...
	I0603 12:08:49.251636   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 12:08:49.317211   73662 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 12:08:49.317236   73662 logs.go:123] Gathering logs for CRI-O ...
	I0603 12:08:49.317251   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 12:08:49.394414   73662 logs.go:123] Gathering logs for container status ...
	I0603 12:08:49.394455   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 12:08:47.581577   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:08:50.080726   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:08:51.112151   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:08:53.112224   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:08:51.666596   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:08:54.166060   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:08:51.937686   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:51.950390   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 12:08:51.950466   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 12:08:51.984341   73662 cri.go:89] found id: ""
	I0603 12:08:51.984365   73662 logs.go:276] 0 containers: []
	W0603 12:08:51.984372   73662 logs.go:278] No container was found matching "kube-apiserver"
	I0603 12:08:51.984378   73662 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 12:08:51.984426   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 12:08:52.017828   73662 cri.go:89] found id: ""
	I0603 12:08:52.017857   73662 logs.go:276] 0 containers: []
	W0603 12:08:52.017866   73662 logs.go:278] No container was found matching "etcd"
	I0603 12:08:52.017872   73662 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 12:08:52.017918   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 12:08:52.057283   73662 cri.go:89] found id: ""
	I0603 12:08:52.057314   73662 logs.go:276] 0 containers: []
	W0603 12:08:52.057324   73662 logs.go:278] No container was found matching "coredns"
	I0603 12:08:52.057331   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 12:08:52.057391   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 12:08:52.102270   73662 cri.go:89] found id: ""
	I0603 12:08:52.102303   73662 logs.go:276] 0 containers: []
	W0603 12:08:52.102313   73662 logs.go:278] No container was found matching "kube-scheduler"
	I0603 12:08:52.102321   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 12:08:52.102383   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 12:08:52.137361   73662 cri.go:89] found id: ""
	I0603 12:08:52.137386   73662 logs.go:276] 0 containers: []
	W0603 12:08:52.137393   73662 logs.go:278] No container was found matching "kube-proxy"
	I0603 12:08:52.137399   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 12:08:52.137463   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 12:08:52.171765   73662 cri.go:89] found id: ""
	I0603 12:08:52.171791   73662 logs.go:276] 0 containers: []
	W0603 12:08:52.171800   73662 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 12:08:52.171807   73662 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 12:08:52.171854   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 12:08:52.204688   73662 cri.go:89] found id: ""
	I0603 12:08:52.204715   73662 logs.go:276] 0 containers: []
	W0603 12:08:52.204722   73662 logs.go:278] No container was found matching "kindnet"
	I0603 12:08:52.204728   73662 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 12:08:52.204780   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 12:08:52.242547   73662 cri.go:89] found id: ""
	I0603 12:08:52.242571   73662 logs.go:276] 0 containers: []
	W0603 12:08:52.242579   73662 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 12:08:52.242586   73662 logs.go:123] Gathering logs for CRI-O ...
	I0603 12:08:52.242599   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 12:08:52.319089   73662 logs.go:123] Gathering logs for container status ...
	I0603 12:08:52.319122   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 12:08:52.360879   73662 logs.go:123] Gathering logs for kubelet ...
	I0603 12:08:52.360910   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 12:08:52.413601   73662 logs.go:123] Gathering logs for dmesg ...
	I0603 12:08:52.413641   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 12:08:52.428336   73662 logs.go:123] Gathering logs for describe nodes ...
	I0603 12:08:52.428370   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 12:08:52.500089   73662 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 12:08:55.001244   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:55.015217   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 12:08:55.015286   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 12:08:55.055825   73662 cri.go:89] found id: ""
	I0603 12:08:55.055906   73662 logs.go:276] 0 containers: []
	W0603 12:08:55.055922   73662 logs.go:278] No container was found matching "kube-apiserver"
	I0603 12:08:55.055930   73662 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 12:08:55.055993   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 12:08:52.080957   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:08:54.081055   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:08:55.113083   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:08:57.612727   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:08:56.166588   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:08:58.167503   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:08:55.092456   73662 cri.go:89] found id: ""
	I0603 12:08:55.093688   73662 logs.go:276] 0 containers: []
	W0603 12:08:55.093711   73662 logs.go:278] No container was found matching "etcd"
	I0603 12:08:55.093723   73662 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 12:08:55.093787   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 12:08:55.131165   73662 cri.go:89] found id: ""
	I0603 12:08:55.131193   73662 logs.go:276] 0 containers: []
	W0603 12:08:55.131203   73662 logs.go:278] No container was found matching "coredns"
	I0603 12:08:55.131210   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 12:08:55.131260   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 12:08:55.168170   73662 cri.go:89] found id: ""
	I0603 12:08:55.168188   73662 logs.go:276] 0 containers: []
	W0603 12:08:55.168194   73662 logs.go:278] No container was found matching "kube-scheduler"
	I0603 12:08:55.168200   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 12:08:55.168247   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 12:08:55.203409   73662 cri.go:89] found id: ""
	I0603 12:08:55.203434   73662 logs.go:276] 0 containers: []
	W0603 12:08:55.203441   73662 logs.go:278] No container was found matching "kube-proxy"
	I0603 12:08:55.203446   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 12:08:55.203491   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 12:08:55.239971   73662 cri.go:89] found id: ""
	I0603 12:08:55.239997   73662 logs.go:276] 0 containers: []
	W0603 12:08:55.240009   73662 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 12:08:55.240016   73662 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 12:08:55.240077   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 12:08:55.275115   73662 cri.go:89] found id: ""
	I0603 12:08:55.275144   73662 logs.go:276] 0 containers: []
	W0603 12:08:55.275154   73662 logs.go:278] No container was found matching "kindnet"
	I0603 12:08:55.275162   73662 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 12:08:55.275221   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 12:08:55.309384   73662 cri.go:89] found id: ""
	I0603 12:08:55.309414   73662 logs.go:276] 0 containers: []
	W0603 12:08:55.309425   73662 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 12:08:55.309435   73662 logs.go:123] Gathering logs for dmesg ...
	I0603 12:08:55.309451   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 12:08:55.323455   73662 logs.go:123] Gathering logs for describe nodes ...
	I0603 12:08:55.323485   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 12:08:55.397581   73662 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 12:08:55.397606   73662 logs.go:123] Gathering logs for CRI-O ...
	I0603 12:08:55.397617   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 12:08:55.473046   73662 logs.go:123] Gathering logs for container status ...
	I0603 12:08:55.473079   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 12:08:55.515248   73662 logs.go:123] Gathering logs for kubelet ...
	I0603 12:08:55.515282   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 12:08:58.067416   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:58.081175   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 12:08:58.081241   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 12:08:58.121654   73662 cri.go:89] found id: ""
	I0603 12:08:58.121680   73662 logs.go:276] 0 containers: []
	W0603 12:08:58.121691   73662 logs.go:278] No container was found matching "kube-apiserver"
	I0603 12:08:58.121698   73662 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 12:08:58.121774   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 12:08:58.159599   73662 cri.go:89] found id: ""
	I0603 12:08:58.159623   73662 logs.go:276] 0 containers: []
	W0603 12:08:58.159631   73662 logs.go:278] No container was found matching "etcd"
	I0603 12:08:58.159636   73662 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 12:08:58.159689   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 12:08:58.197518   73662 cri.go:89] found id: ""
	I0603 12:08:58.197545   73662 logs.go:276] 0 containers: []
	W0603 12:08:58.197553   73662 logs.go:278] No container was found matching "coredns"
	I0603 12:08:58.197558   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 12:08:58.197603   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 12:08:58.232433   73662 cri.go:89] found id: ""
	I0603 12:08:58.232463   73662 logs.go:276] 0 containers: []
	W0603 12:08:58.232474   73662 logs.go:278] No container was found matching "kube-scheduler"
	I0603 12:08:58.232479   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 12:08:58.232529   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 12:08:58.268209   73662 cri.go:89] found id: ""
	I0603 12:08:58.268234   73662 logs.go:276] 0 containers: []
	W0603 12:08:58.268242   73662 logs.go:278] No container was found matching "kube-proxy"
	I0603 12:08:58.268248   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 12:08:58.268307   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 12:08:58.302091   73662 cri.go:89] found id: ""
	I0603 12:08:58.302118   73662 logs.go:276] 0 containers: []
	W0603 12:08:58.302129   73662 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 12:08:58.302136   73662 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 12:08:58.302195   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 12:08:58.336539   73662 cri.go:89] found id: ""
	I0603 12:08:58.336567   73662 logs.go:276] 0 containers: []
	W0603 12:08:58.336574   73662 logs.go:278] No container was found matching "kindnet"
	I0603 12:08:58.336579   73662 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 12:08:58.336627   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 12:08:58.369263   73662 cri.go:89] found id: ""
	I0603 12:08:58.369294   73662 logs.go:276] 0 containers: []
	W0603 12:08:58.369305   73662 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 12:08:58.369316   73662 logs.go:123] Gathering logs for container status ...
	I0603 12:08:58.369329   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 12:08:58.408651   73662 logs.go:123] Gathering logs for kubelet ...
	I0603 12:08:58.408683   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 12:08:58.463551   73662 logs.go:123] Gathering logs for dmesg ...
	I0603 12:08:58.463578   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 12:08:58.478781   73662 logs.go:123] Gathering logs for describe nodes ...
	I0603 12:08:58.478808   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 12:08:58.556604   73662 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 12:08:58.556631   73662 logs.go:123] Gathering logs for CRI-O ...
	I0603 12:08:58.556646   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 12:08:56.580284   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:08:58.582526   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:09:00.112533   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:09:02.113462   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:09:00.666282   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:09:02.666684   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:09:04.666822   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:09:01.135368   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:09:01.148448   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 12:09:01.148517   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 12:09:01.184913   73662 cri.go:89] found id: ""
	I0603 12:09:01.184936   73662 logs.go:276] 0 containers: []
	W0603 12:09:01.184947   73662 logs.go:278] No container was found matching "kube-apiserver"
	I0603 12:09:01.184955   73662 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 12:09:01.185017   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 12:09:01.221508   73662 cri.go:89] found id: ""
	I0603 12:09:01.221538   73662 logs.go:276] 0 containers: []
	W0603 12:09:01.221547   73662 logs.go:278] No container was found matching "etcd"
	I0603 12:09:01.221552   73662 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 12:09:01.221613   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 12:09:01.256588   73662 cri.go:89] found id: ""
	I0603 12:09:01.256617   73662 logs.go:276] 0 containers: []
	W0603 12:09:01.256627   73662 logs.go:278] No container was found matching "coredns"
	I0603 12:09:01.256634   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 12:09:01.256696   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 12:09:01.292874   73662 cri.go:89] found id: ""
	I0603 12:09:01.292898   73662 logs.go:276] 0 containers: []
	W0603 12:09:01.292906   73662 logs.go:278] No container was found matching "kube-scheduler"
	I0603 12:09:01.292913   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 12:09:01.292957   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 12:09:01.330607   73662 cri.go:89] found id: ""
	I0603 12:09:01.330636   73662 logs.go:276] 0 containers: []
	W0603 12:09:01.330646   73662 logs.go:278] No container was found matching "kube-proxy"
	I0603 12:09:01.330652   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 12:09:01.330698   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 12:09:01.366053   73662 cri.go:89] found id: ""
	I0603 12:09:01.366090   73662 logs.go:276] 0 containers: []
	W0603 12:09:01.366102   73662 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 12:09:01.366110   73662 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 12:09:01.366168   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 12:09:01.403446   73662 cri.go:89] found id: ""
	I0603 12:09:01.403476   73662 logs.go:276] 0 containers: []
	W0603 12:09:01.403489   73662 logs.go:278] No container was found matching "kindnet"
	I0603 12:09:01.403495   73662 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 12:09:01.403558   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 12:09:01.445413   73662 cri.go:89] found id: ""
	I0603 12:09:01.445444   73662 logs.go:276] 0 containers: []
	W0603 12:09:01.445456   73662 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 12:09:01.445467   73662 logs.go:123] Gathering logs for describe nodes ...
	I0603 12:09:01.445485   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 12:09:01.521804   73662 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 12:09:01.521831   73662 logs.go:123] Gathering logs for CRI-O ...
	I0603 12:09:01.521846   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 12:09:01.601841   73662 logs.go:123] Gathering logs for container status ...
	I0603 12:09:01.601869   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 12:09:01.642642   73662 logs.go:123] Gathering logs for kubelet ...
	I0603 12:09:01.642685   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 12:09:01.700512   73662 logs.go:123] Gathering logs for dmesg ...
	I0603 12:09:01.700547   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 12:09:04.216853   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:09:04.229827   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 12:09:04.229910   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 12:09:04.265194   73662 cri.go:89] found id: ""
	I0603 12:09:04.265223   73662 logs.go:276] 0 containers: []
	W0603 12:09:04.265230   73662 logs.go:278] No container was found matching "kube-apiserver"
	I0603 12:09:04.265235   73662 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 12:09:04.265294   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 12:09:04.301157   73662 cri.go:89] found id: ""
	I0603 12:09:04.301186   73662 logs.go:276] 0 containers: []
	W0603 12:09:04.301193   73662 logs.go:278] No container was found matching "etcd"
	I0603 12:09:04.301199   73662 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 12:09:04.301249   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 12:09:04.335992   73662 cri.go:89] found id: ""
	I0603 12:09:04.336014   73662 logs.go:276] 0 containers: []
	W0603 12:09:04.336024   73662 logs.go:278] No container was found matching "coredns"
	I0603 12:09:04.336031   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 12:09:04.336090   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 12:09:04.371342   73662 cri.go:89] found id: ""
	I0603 12:09:04.371375   73662 logs.go:276] 0 containers: []
	W0603 12:09:04.371386   73662 logs.go:278] No container was found matching "kube-scheduler"
	I0603 12:09:04.371393   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 12:09:04.371452   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 12:09:04.406439   73662 cri.go:89] found id: ""
	I0603 12:09:04.406466   73662 logs.go:276] 0 containers: []
	W0603 12:09:04.406476   73662 logs.go:278] No container was found matching "kube-proxy"
	I0603 12:09:04.406483   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 12:09:04.406540   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 12:09:04.438426   73662 cri.go:89] found id: ""
	I0603 12:09:04.438448   73662 logs.go:276] 0 containers: []
	W0603 12:09:04.438458   73662 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 12:09:04.438467   73662 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 12:09:04.438525   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 12:09:04.471465   73662 cri.go:89] found id: ""
	I0603 12:09:04.471494   73662 logs.go:276] 0 containers: []
	W0603 12:09:04.471504   73662 logs.go:278] No container was found matching "kindnet"
	I0603 12:09:04.471512   73662 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 12:09:04.471576   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 12:09:04.507994   73662 cri.go:89] found id: ""
	I0603 12:09:04.508016   73662 logs.go:276] 0 containers: []
	W0603 12:09:04.508023   73662 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 12:09:04.508031   73662 logs.go:123] Gathering logs for kubelet ...
	I0603 12:09:04.508042   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 12:09:04.558973   73662 logs.go:123] Gathering logs for dmesg ...
	I0603 12:09:04.559007   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 12:09:04.576157   73662 logs.go:123] Gathering logs for describe nodes ...
	I0603 12:09:04.576190   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 12:09:04.653262   73662 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 12:09:04.653282   73662 logs.go:123] Gathering logs for CRI-O ...
	I0603 12:09:04.653293   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 12:09:04.732195   73662 logs.go:123] Gathering logs for container status ...
	I0603 12:09:04.732228   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 12:09:01.081232   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:09:03.083123   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:09:05.083243   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:09:04.612842   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:09:07.113160   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:09:06.667720   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:09:09.167160   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:09:07.282253   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:09:07.296478   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 12:09:07.296549   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 12:09:07.331591   73662 cri.go:89] found id: ""
	I0603 12:09:07.331614   73662 logs.go:276] 0 containers: []
	W0603 12:09:07.331621   73662 logs.go:278] No container was found matching "kube-apiserver"
	I0603 12:09:07.331626   73662 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 12:09:07.331676   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 12:09:07.367333   73662 cri.go:89] found id: ""
	I0603 12:09:07.367356   73662 logs.go:276] 0 containers: []
	W0603 12:09:07.367363   73662 logs.go:278] No container was found matching "etcd"
	I0603 12:09:07.367369   73662 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 12:09:07.367426   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 12:09:07.406446   73662 cri.go:89] found id: ""
	I0603 12:09:07.406471   73662 logs.go:276] 0 containers: []
	W0603 12:09:07.406479   73662 logs.go:278] No container was found matching "coredns"
	I0603 12:09:07.406485   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 12:09:07.406544   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 12:09:07.441610   73662 cri.go:89] found id: ""
	I0603 12:09:07.441632   73662 logs.go:276] 0 containers: []
	W0603 12:09:07.441640   73662 logs.go:278] No container was found matching "kube-scheduler"
	I0603 12:09:07.441646   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 12:09:07.441699   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 12:09:07.476479   73662 cri.go:89] found id: ""
	I0603 12:09:07.476501   73662 logs.go:276] 0 containers: []
	W0603 12:09:07.476508   73662 logs.go:278] No container was found matching "kube-proxy"
	I0603 12:09:07.476513   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 12:09:07.476586   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 12:09:07.513712   73662 cri.go:89] found id: ""
	I0603 12:09:07.513740   73662 logs.go:276] 0 containers: []
	W0603 12:09:07.513750   73662 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 12:09:07.513758   73662 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 12:09:07.513816   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 12:09:07.552169   73662 cri.go:89] found id: ""
	I0603 12:09:07.552195   73662 logs.go:276] 0 containers: []
	W0603 12:09:07.552206   73662 logs.go:278] No container was found matching "kindnet"
	I0603 12:09:07.552213   73662 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 12:09:07.552274   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 12:09:07.591926   73662 cri.go:89] found id: ""
	I0603 12:09:07.591950   73662 logs.go:276] 0 containers: []
	W0603 12:09:07.591956   73662 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 12:09:07.591963   73662 logs.go:123] Gathering logs for describe nodes ...
	I0603 12:09:07.591974   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 12:09:07.672408   73662 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 12:09:07.672429   73662 logs.go:123] Gathering logs for CRI-O ...
	I0603 12:09:07.672440   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 12:09:07.752948   73662 logs.go:123] Gathering logs for container status ...
	I0603 12:09:07.752980   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 12:09:07.791942   73662 logs.go:123] Gathering logs for kubelet ...
	I0603 12:09:07.791975   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 12:09:07.849187   73662 logs.go:123] Gathering logs for dmesg ...
	I0603 12:09:07.849222   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 12:09:07.586314   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:09:10.082310   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:09:09.612757   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:09:11.612893   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:09:13.613395   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:09:11.669965   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:09:14.165493   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:09:10.364466   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:09:10.377895   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 12:09:10.377967   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 12:09:10.412039   73662 cri.go:89] found id: ""
	I0603 12:09:10.412062   73662 logs.go:276] 0 containers: []
	W0603 12:09:10.412070   73662 logs.go:278] No container was found matching "kube-apiserver"
	I0603 12:09:10.412082   73662 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 12:09:10.412137   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 12:09:10.444562   73662 cri.go:89] found id: ""
	I0603 12:09:10.444585   73662 logs.go:276] 0 containers: []
	W0603 12:09:10.444594   73662 logs.go:278] No container was found matching "etcd"
	I0603 12:09:10.444602   73662 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 12:09:10.444657   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 12:09:10.479651   73662 cri.go:89] found id: ""
	I0603 12:09:10.479674   73662 logs.go:276] 0 containers: []
	W0603 12:09:10.479681   73662 logs.go:278] No container was found matching "coredns"
	I0603 12:09:10.479687   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 12:09:10.479742   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 12:09:10.518978   73662 cri.go:89] found id: ""
	I0603 12:09:10.519000   73662 logs.go:276] 0 containers: []
	W0603 12:09:10.519011   73662 logs.go:278] No container was found matching "kube-scheduler"
	I0603 12:09:10.519019   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 12:09:10.519100   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 12:09:10.553848   73662 cri.go:89] found id: ""
	I0603 12:09:10.553873   73662 logs.go:276] 0 containers: []
	W0603 12:09:10.553880   73662 logs.go:278] No container was found matching "kube-proxy"
	I0603 12:09:10.553885   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 12:09:10.553933   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 12:09:10.592081   73662 cri.go:89] found id: ""
	I0603 12:09:10.592107   73662 logs.go:276] 0 containers: []
	W0603 12:09:10.592116   73662 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 12:09:10.592124   73662 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 12:09:10.592176   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 12:09:10.629138   73662 cri.go:89] found id: ""
	I0603 12:09:10.629164   73662 logs.go:276] 0 containers: []
	W0603 12:09:10.629175   73662 logs.go:278] No container was found matching "kindnet"
	I0603 12:09:10.629181   73662 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 12:09:10.629233   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 12:09:10.666660   73662 cri.go:89] found id: ""
	I0603 12:09:10.666686   73662 logs.go:276] 0 containers: []
	W0603 12:09:10.666695   73662 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 12:09:10.666705   73662 logs.go:123] Gathering logs for CRI-O ...
	I0603 12:09:10.666723   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 12:09:10.747856   73662 logs.go:123] Gathering logs for container status ...
	I0603 12:09:10.747892   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 12:09:10.792403   73662 logs.go:123] Gathering logs for kubelet ...
	I0603 12:09:10.792442   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 12:09:10.844484   73662 logs.go:123] Gathering logs for dmesg ...
	I0603 12:09:10.844520   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 12:09:10.857822   73662 logs.go:123] Gathering logs for describe nodes ...
	I0603 12:09:10.857848   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 12:09:10.927434   73662 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 12:09:13.428260   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:09:13.442354   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 12:09:13.442418   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 12:09:13.480908   73662 cri.go:89] found id: ""
	I0603 12:09:13.480938   73662 logs.go:276] 0 containers: []
	W0603 12:09:13.480948   73662 logs.go:278] No container was found matching "kube-apiserver"
	I0603 12:09:13.480953   73662 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 12:09:13.481002   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 12:09:13.513942   73662 cri.go:89] found id: ""
	I0603 12:09:13.513966   73662 logs.go:276] 0 containers: []
	W0603 12:09:13.513979   73662 logs.go:278] No container was found matching "etcd"
	I0603 12:09:13.513985   73662 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 12:09:13.514042   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 12:09:13.548849   73662 cri.go:89] found id: ""
	I0603 12:09:13.548881   73662 logs.go:276] 0 containers: []
	W0603 12:09:13.548892   73662 logs.go:278] No container was found matching "coredns"
	I0603 12:09:13.548900   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 12:09:13.548961   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 12:09:13.587857   73662 cri.go:89] found id: ""
	I0603 12:09:13.587880   73662 logs.go:276] 0 containers: []
	W0603 12:09:13.587887   73662 logs.go:278] No container was found matching "kube-scheduler"
	I0603 12:09:13.587893   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 12:09:13.587941   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 12:09:13.623386   73662 cri.go:89] found id: ""
	I0603 12:09:13.623408   73662 logs.go:276] 0 containers: []
	W0603 12:09:13.623415   73662 logs.go:278] No container was found matching "kube-proxy"
	I0603 12:09:13.623421   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 12:09:13.623473   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 12:09:13.662721   73662 cri.go:89] found id: ""
	I0603 12:09:13.662755   73662 logs.go:276] 0 containers: []
	W0603 12:09:13.662774   73662 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 12:09:13.662782   73662 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 12:09:13.662847   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 12:09:13.697244   73662 cri.go:89] found id: ""
	I0603 12:09:13.697272   73662 logs.go:276] 0 containers: []
	W0603 12:09:13.697279   73662 logs.go:278] No container was found matching "kindnet"
	I0603 12:09:13.697284   73662 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 12:09:13.697342   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 12:09:13.734987   73662 cri.go:89] found id: ""
	I0603 12:09:13.735014   73662 logs.go:276] 0 containers: []
	W0603 12:09:13.735020   73662 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 12:09:13.735030   73662 logs.go:123] Gathering logs for kubelet ...
	I0603 12:09:13.735055   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 12:09:13.792422   73662 logs.go:123] Gathering logs for dmesg ...
	I0603 12:09:13.792463   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 12:09:13.807174   73662 logs.go:123] Gathering logs for describe nodes ...
	I0603 12:09:13.807220   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 12:09:13.880940   73662 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 12:09:13.880962   73662 logs.go:123] Gathering logs for CRI-O ...
	I0603 12:09:13.880976   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 12:09:13.970760   73662 logs.go:123] Gathering logs for container status ...
	I0603 12:09:13.970800   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 12:09:12.581261   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:09:14.581335   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:09:16.113403   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:09:18.113699   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:09:16.166578   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:09:18.167436   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:09:16.519306   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:09:16.534161   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 12:09:16.534213   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 12:09:16.571503   73662 cri.go:89] found id: ""
	I0603 12:09:16.571533   73662 logs.go:276] 0 containers: []
	W0603 12:09:16.571544   73662 logs.go:278] No container was found matching "kube-apiserver"
	I0603 12:09:16.571553   73662 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 12:09:16.571603   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 12:09:16.610388   73662 cri.go:89] found id: ""
	I0603 12:09:16.610425   73662 logs.go:276] 0 containers: []
	W0603 12:09:16.610434   73662 logs.go:278] No container was found matching "etcd"
	I0603 12:09:16.610442   73662 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 12:09:16.610501   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 12:09:16.654132   73662 cri.go:89] found id: ""
	I0603 12:09:16.654173   73662 logs.go:276] 0 containers: []
	W0603 12:09:16.654184   73662 logs.go:278] No container was found matching "coredns"
	I0603 12:09:16.654196   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 12:09:16.654288   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 12:09:16.695091   73662 cri.go:89] found id: ""
	I0603 12:09:16.695120   73662 logs.go:276] 0 containers: []
	W0603 12:09:16.695130   73662 logs.go:278] No container was found matching "kube-scheduler"
	I0603 12:09:16.695137   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 12:09:16.695198   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 12:09:16.729916   73662 cri.go:89] found id: ""
	I0603 12:09:16.729941   73662 logs.go:276] 0 containers: []
	W0603 12:09:16.729950   73662 logs.go:278] No container was found matching "kube-proxy"
	I0603 12:09:16.729958   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 12:09:16.730019   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 12:09:16.763653   73662 cri.go:89] found id: ""
	I0603 12:09:16.763675   73662 logs.go:276] 0 containers: []
	W0603 12:09:16.763683   73662 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 12:09:16.763688   73662 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 12:09:16.763734   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 12:09:16.801834   73662 cri.go:89] found id: ""
	I0603 12:09:16.801867   73662 logs.go:276] 0 containers: []
	W0603 12:09:16.801877   73662 logs.go:278] No container was found matching "kindnet"
	I0603 12:09:16.801885   73662 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 12:09:16.801946   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 12:09:16.836959   73662 cri.go:89] found id: ""
	I0603 12:09:16.836983   73662 logs.go:276] 0 containers: []
	W0603 12:09:16.836995   73662 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 12:09:16.837006   73662 logs.go:123] Gathering logs for dmesg ...
	I0603 12:09:16.837023   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 12:09:16.850264   73662 logs.go:123] Gathering logs for describe nodes ...
	I0603 12:09:16.850294   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 12:09:16.943870   73662 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 12:09:16.943897   73662 logs.go:123] Gathering logs for CRI-O ...
	I0603 12:09:16.943914   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 12:09:17.028230   73662 logs.go:123] Gathering logs for container status ...
	I0603 12:09:17.028269   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 12:09:17.071944   73662 logs.go:123] Gathering logs for kubelet ...
	I0603 12:09:17.071975   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 12:09:19.627246   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:09:19.641441   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 12:09:19.641513   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 12:09:19.680111   73662 cri.go:89] found id: ""
	I0603 12:09:19.680135   73662 logs.go:276] 0 containers: []
	W0603 12:09:19.680144   73662 logs.go:278] No container was found matching "kube-apiserver"
	I0603 12:09:19.680152   73662 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 12:09:19.680210   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 12:09:19.717357   73662 cri.go:89] found id: ""
	I0603 12:09:19.717386   73662 logs.go:276] 0 containers: []
	W0603 12:09:19.717396   73662 logs.go:278] No container was found matching "etcd"
	I0603 12:09:19.717403   73662 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 12:09:19.717467   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 12:09:19.753540   73662 cri.go:89] found id: ""
	I0603 12:09:19.753567   73662 logs.go:276] 0 containers: []
	W0603 12:09:19.753575   73662 logs.go:278] No container was found matching "coredns"
	I0603 12:09:19.753581   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 12:09:19.753627   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 12:09:19.790421   73662 cri.go:89] found id: ""
	I0603 12:09:19.790454   73662 logs.go:276] 0 containers: []
	W0603 12:09:19.790466   73662 logs.go:278] No container was found matching "kube-scheduler"
	I0603 12:09:19.790474   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 12:09:19.790532   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 12:09:19.828908   73662 cri.go:89] found id: ""
	I0603 12:09:19.828932   73662 logs.go:276] 0 containers: []
	W0603 12:09:19.828940   73662 logs.go:278] No container was found matching "kube-proxy"
	I0603 12:09:19.828946   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 12:09:19.829007   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 12:09:19.864576   73662 cri.go:89] found id: ""
	I0603 12:09:19.864609   73662 logs.go:276] 0 containers: []
	W0603 12:09:19.864618   73662 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 12:09:19.864624   73662 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 12:09:19.864679   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 12:09:19.899294   73662 cri.go:89] found id: ""
	I0603 12:09:19.899317   73662 logs.go:276] 0 containers: []
	W0603 12:09:19.899324   73662 logs.go:278] No container was found matching "kindnet"
	I0603 12:09:19.899330   73662 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 12:09:19.899397   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 12:09:19.933855   73662 cri.go:89] found id: ""
	I0603 12:09:19.933883   73662 logs.go:276] 0 containers: []
	W0603 12:09:19.933894   73662 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 12:09:19.933905   73662 logs.go:123] Gathering logs for container status ...
	I0603 12:09:19.933920   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 12:09:19.972676   73662 logs.go:123] Gathering logs for kubelet ...
	I0603 12:09:19.972703   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 12:09:20.025882   73662 logs.go:123] Gathering logs for dmesg ...
	I0603 12:09:20.025913   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 12:09:20.040706   73662 logs.go:123] Gathering logs for describe nodes ...
	I0603 12:09:20.040733   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0603 12:09:17.080807   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:09:19.581996   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:09:20.612561   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:09:23.112691   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:09:20.667356   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:09:23.167076   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	W0603 12:09:20.115483   73662 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 12:09:20.115506   73662 logs.go:123] Gathering logs for CRI-O ...
	I0603 12:09:20.115521   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 12:09:22.692138   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:09:22.706079   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 12:09:22.706155   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 12:09:22.742755   73662 cri.go:89] found id: ""
	I0603 12:09:22.742776   73662 logs.go:276] 0 containers: []
	W0603 12:09:22.742784   73662 logs.go:278] No container was found matching "kube-apiserver"
	I0603 12:09:22.742789   73662 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 12:09:22.742845   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 12:09:22.779522   73662 cri.go:89] found id: ""
	I0603 12:09:22.779549   73662 logs.go:276] 0 containers: []
	W0603 12:09:22.779557   73662 logs.go:278] No container was found matching "etcd"
	I0603 12:09:22.779563   73662 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 12:09:22.779615   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 12:09:22.813864   73662 cri.go:89] found id: ""
	I0603 12:09:22.813892   73662 logs.go:276] 0 containers: []
	W0603 12:09:22.813902   73662 logs.go:278] No container was found matching "coredns"
	I0603 12:09:22.813909   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 12:09:22.813967   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 12:09:22.848111   73662 cri.go:89] found id: ""
	I0603 12:09:22.848138   73662 logs.go:276] 0 containers: []
	W0603 12:09:22.848149   73662 logs.go:278] No container was found matching "kube-scheduler"
	I0603 12:09:22.848157   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 12:09:22.848213   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 12:09:22.899733   73662 cri.go:89] found id: ""
	I0603 12:09:22.899765   73662 logs.go:276] 0 containers: []
	W0603 12:09:22.899775   73662 logs.go:278] No container was found matching "kube-proxy"
	I0603 12:09:22.899781   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 12:09:22.899846   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 12:09:22.941237   73662 cri.go:89] found id: ""
	I0603 12:09:22.941266   73662 logs.go:276] 0 containers: []
	W0603 12:09:22.941276   73662 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 12:09:22.941282   73662 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 12:09:22.941330   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 12:09:22.981500   73662 cri.go:89] found id: ""
	I0603 12:09:22.981523   73662 logs.go:276] 0 containers: []
	W0603 12:09:22.981531   73662 logs.go:278] No container was found matching "kindnet"
	I0603 12:09:22.981536   73662 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 12:09:22.981580   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 12:09:23.016893   73662 cri.go:89] found id: ""
	I0603 12:09:23.016921   73662 logs.go:276] 0 containers: []
	W0603 12:09:23.016933   73662 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 12:09:23.016943   73662 logs.go:123] Gathering logs for container status ...
	I0603 12:09:23.016958   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 12:09:23.056019   73662 logs.go:123] Gathering logs for kubelet ...
	I0603 12:09:23.056052   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 12:09:23.112565   73662 logs.go:123] Gathering logs for dmesg ...
	I0603 12:09:23.112594   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 12:09:23.127475   73662 logs.go:123] Gathering logs for describe nodes ...
	I0603 12:09:23.127504   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 12:09:23.204939   73662 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 12:09:23.204959   73662 logs.go:123] Gathering logs for CRI-O ...
	I0603 12:09:23.204971   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 12:09:21.584829   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:09:24.081361   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:09:25.112860   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:09:27.113465   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:09:29.114788   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:09:25.167597   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:09:27.666395   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:09:29.668658   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:09:25.781506   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:09:25.794896   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 12:09:25.794971   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 12:09:25.831669   73662 cri.go:89] found id: ""
	I0603 12:09:25.831699   73662 logs.go:276] 0 containers: []
	W0603 12:09:25.831710   73662 logs.go:278] No container was found matching "kube-apiserver"
	I0603 12:09:25.831718   73662 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 12:09:25.831775   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 12:09:25.865198   73662 cri.go:89] found id: ""
	I0603 12:09:25.865224   73662 logs.go:276] 0 containers: []
	W0603 12:09:25.865233   73662 logs.go:278] No container was found matching "etcd"
	I0603 12:09:25.865241   73662 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 12:09:25.865296   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 12:09:25.900280   73662 cri.go:89] found id: ""
	I0603 12:09:25.900316   73662 logs.go:276] 0 containers: []
	W0603 12:09:25.900339   73662 logs.go:278] No container was found matching "coredns"
	I0603 12:09:25.900347   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 12:09:25.900409   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 12:09:25.934727   73662 cri.go:89] found id: ""
	I0603 12:09:25.934759   73662 logs.go:276] 0 containers: []
	W0603 12:09:25.934770   73662 logs.go:278] No container was found matching "kube-scheduler"
	I0603 12:09:25.934778   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 12:09:25.934837   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 12:09:25.970760   73662 cri.go:89] found id: ""
	I0603 12:09:25.970785   73662 logs.go:276] 0 containers: []
	W0603 12:09:25.970795   73662 logs.go:278] No container was found matching "kube-proxy"
	I0603 12:09:25.970800   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 12:09:25.970846   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 12:09:26.005580   73662 cri.go:89] found id: ""
	I0603 12:09:26.005608   73662 logs.go:276] 0 containers: []
	W0603 12:09:26.005617   73662 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 12:09:26.005622   73662 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 12:09:26.005670   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 12:09:26.042168   73662 cri.go:89] found id: ""
	I0603 12:09:26.042192   73662 logs.go:276] 0 containers: []
	W0603 12:09:26.042200   73662 logs.go:278] No container was found matching "kindnet"
	I0603 12:09:26.042206   73662 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 12:09:26.042256   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 12:09:26.081180   73662 cri.go:89] found id: ""
	I0603 12:09:26.081211   73662 logs.go:276] 0 containers: []
	W0603 12:09:26.081226   73662 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 12:09:26.081237   73662 logs.go:123] Gathering logs for describe nodes ...
	I0603 12:09:26.081252   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 12:09:26.156298   73662 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 12:09:26.156320   73662 logs.go:123] Gathering logs for CRI-O ...
	I0603 12:09:26.156333   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 12:09:26.241945   73662 logs.go:123] Gathering logs for container status ...
	I0603 12:09:26.241976   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 12:09:26.282363   73662 logs.go:123] Gathering logs for kubelet ...
	I0603 12:09:26.282391   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 12:09:26.336717   73662 logs.go:123] Gathering logs for dmesg ...
	I0603 12:09:26.336747   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 12:09:28.851601   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:09:28.865866   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 12:09:28.865930   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 12:09:28.901850   73662 cri.go:89] found id: ""
	I0603 12:09:28.901877   73662 logs.go:276] 0 containers: []
	W0603 12:09:28.901884   73662 logs.go:278] No container was found matching "kube-apiserver"
	I0603 12:09:28.901890   73662 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 12:09:28.901953   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 12:09:28.939384   73662 cri.go:89] found id: ""
	I0603 12:09:28.939414   73662 logs.go:276] 0 containers: []
	W0603 12:09:28.939431   73662 logs.go:278] No container was found matching "etcd"
	I0603 12:09:28.939438   73662 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 12:09:28.939501   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 12:09:28.974836   73662 cri.go:89] found id: ""
	I0603 12:09:28.974859   73662 logs.go:276] 0 containers: []
	W0603 12:09:28.974866   73662 logs.go:278] No container was found matching "coredns"
	I0603 12:09:28.974872   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 12:09:28.974929   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 12:09:29.020057   73662 cri.go:89] found id: ""
	I0603 12:09:29.020082   73662 logs.go:276] 0 containers: []
	W0603 12:09:29.020090   73662 logs.go:278] No container was found matching "kube-scheduler"
	I0603 12:09:29.020095   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 12:09:29.020154   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 12:09:29.065836   73662 cri.go:89] found id: ""
	I0603 12:09:29.065868   73662 logs.go:276] 0 containers: []
	W0603 12:09:29.065880   73662 logs.go:278] No container was found matching "kube-proxy"
	I0603 12:09:29.065887   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 12:09:29.065948   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 12:09:29.103326   73662 cri.go:89] found id: ""
	I0603 12:09:29.103352   73662 logs.go:276] 0 containers: []
	W0603 12:09:29.103362   73662 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 12:09:29.103369   73662 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 12:09:29.103432   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 12:09:29.141516   73662 cri.go:89] found id: ""
	I0603 12:09:29.141543   73662 logs.go:276] 0 containers: []
	W0603 12:09:29.141554   73662 logs.go:278] No container was found matching "kindnet"
	I0603 12:09:29.141561   73662 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 12:09:29.141615   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 12:09:29.177881   73662 cri.go:89] found id: ""
	I0603 12:09:29.177906   73662 logs.go:276] 0 containers: []
	W0603 12:09:29.177916   73662 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 12:09:29.177923   73662 logs.go:123] Gathering logs for kubelet ...
	I0603 12:09:29.177934   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 12:09:29.231307   73662 logs.go:123] Gathering logs for dmesg ...
	I0603 12:09:29.231338   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 12:09:29.248629   73662 logs.go:123] Gathering logs for describe nodes ...
	I0603 12:09:29.248676   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 12:09:29.348230   73662 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 12:09:29.348255   73662 logs.go:123] Gathering logs for CRI-O ...
	I0603 12:09:29.348272   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 12:09:29.433016   73662 logs.go:123] Gathering logs for container status ...
	I0603 12:09:29.433049   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 12:09:26.082319   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:09:28.581095   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:09:31.615220   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:09:34.112437   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:09:32.166628   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:09:34.167092   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:09:31.973677   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:09:31.988457   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 12:09:31.988518   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 12:09:32.028424   73662 cri.go:89] found id: ""
	I0603 12:09:32.028450   73662 logs.go:276] 0 containers: []
	W0603 12:09:32.028458   73662 logs.go:278] No container was found matching "kube-apiserver"
	I0603 12:09:32.028464   73662 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 12:09:32.028518   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 12:09:32.069388   73662 cri.go:89] found id: ""
	I0603 12:09:32.069413   73662 logs.go:276] 0 containers: []
	W0603 12:09:32.069421   73662 logs.go:278] No container was found matching "etcd"
	I0603 12:09:32.069427   73662 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 12:09:32.069480   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 12:09:32.106557   73662 cri.go:89] found id: ""
	I0603 12:09:32.106590   73662 logs.go:276] 0 containers: []
	W0603 12:09:32.106601   73662 logs.go:278] No container was found matching "coredns"
	I0603 12:09:32.106608   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 12:09:32.106677   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 12:09:32.142460   73662 cri.go:89] found id: ""
	I0603 12:09:32.142488   73662 logs.go:276] 0 containers: []
	W0603 12:09:32.142499   73662 logs.go:278] No container was found matching "kube-scheduler"
	I0603 12:09:32.142507   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 12:09:32.142560   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 12:09:32.177513   73662 cri.go:89] found id: ""
	I0603 12:09:32.177540   73662 logs.go:276] 0 containers: []
	W0603 12:09:32.177553   73662 logs.go:278] No container was found matching "kube-proxy"
	I0603 12:09:32.177559   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 12:09:32.177620   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 12:09:32.212011   73662 cri.go:89] found id: ""
	I0603 12:09:32.212038   73662 logs.go:276] 0 containers: []
	W0603 12:09:32.212048   73662 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 12:09:32.212055   73662 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 12:09:32.212121   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 12:09:32.247928   73662 cri.go:89] found id: ""
	I0603 12:09:32.247953   73662 logs.go:276] 0 containers: []
	W0603 12:09:32.247960   73662 logs.go:278] No container was found matching "kindnet"
	I0603 12:09:32.247965   73662 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 12:09:32.248020   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 12:09:32.287818   73662 cri.go:89] found id: ""
	I0603 12:09:32.287845   73662 logs.go:276] 0 containers: []
	W0603 12:09:32.287852   73662 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 12:09:32.287859   73662 logs.go:123] Gathering logs for kubelet ...
	I0603 12:09:32.287874   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 12:09:32.340406   73662 logs.go:123] Gathering logs for dmesg ...
	I0603 12:09:32.340439   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 12:09:32.355148   73662 logs.go:123] Gathering logs for describe nodes ...
	I0603 12:09:32.355178   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 12:09:32.429270   73662 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 12:09:32.429299   73662 logs.go:123] Gathering logs for CRI-O ...
	I0603 12:09:32.429314   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 12:09:32.505607   73662 logs.go:123] Gathering logs for container status ...
	I0603 12:09:32.505635   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 12:09:35.044751   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:09:35.067197   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 12:09:35.067273   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 12:09:30.581123   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:09:32.581201   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:09:34.581895   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:09:36.612660   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:09:38.614151   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:09:36.666568   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:09:38.666678   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:09:35.130828   73662 cri.go:89] found id: ""
	I0603 12:09:35.130853   73662 logs.go:276] 0 containers: []
	W0603 12:09:35.130911   73662 logs.go:278] No container was found matching "kube-apiserver"
	I0603 12:09:35.130929   73662 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 12:09:35.130987   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 12:09:35.168321   73662 cri.go:89] found id: ""
	I0603 12:09:35.168348   73662 logs.go:276] 0 containers: []
	W0603 12:09:35.168355   73662 logs.go:278] No container was found matching "etcd"
	I0603 12:09:35.168360   73662 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 12:09:35.168403   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 12:09:35.200918   73662 cri.go:89] found id: ""
	I0603 12:09:35.200942   73662 logs.go:276] 0 containers: []
	W0603 12:09:35.200952   73662 logs.go:278] No container was found matching "coredns"
	I0603 12:09:35.200960   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 12:09:35.201020   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 12:09:35.235667   73662 cri.go:89] found id: ""
	I0603 12:09:35.235694   73662 logs.go:276] 0 containers: []
	W0603 12:09:35.235705   73662 logs.go:278] No container was found matching "kube-scheduler"
	I0603 12:09:35.235713   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 12:09:35.235773   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 12:09:35.269565   73662 cri.go:89] found id: ""
	I0603 12:09:35.269600   73662 logs.go:276] 0 containers: []
	W0603 12:09:35.269608   73662 logs.go:278] No container was found matching "kube-proxy"
	I0603 12:09:35.269613   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 12:09:35.269670   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 12:09:35.304452   73662 cri.go:89] found id: ""
	I0603 12:09:35.304480   73662 logs.go:276] 0 containers: []
	W0603 12:09:35.304488   73662 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 12:09:35.304495   73662 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 12:09:35.304560   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 12:09:35.337756   73662 cri.go:89] found id: ""
	I0603 12:09:35.337782   73662 logs.go:276] 0 containers: []
	W0603 12:09:35.337789   73662 logs.go:278] No container was found matching "kindnet"
	I0603 12:09:35.337794   73662 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 12:09:35.337844   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 12:09:35.374738   73662 cri.go:89] found id: ""
	I0603 12:09:35.374762   73662 logs.go:276] 0 containers: []
	W0603 12:09:35.374773   73662 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 12:09:35.374804   73662 logs.go:123] Gathering logs for dmesg ...
	I0603 12:09:35.374831   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 12:09:35.389588   73662 logs.go:123] Gathering logs for describe nodes ...
	I0603 12:09:35.389618   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 12:09:35.470162   73662 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 12:09:35.470184   73662 logs.go:123] Gathering logs for CRI-O ...
	I0603 12:09:35.470200   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 12:09:35.554518   73662 logs.go:123] Gathering logs for container status ...
	I0603 12:09:35.554560   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 12:09:35.594727   73662 logs.go:123] Gathering logs for kubelet ...
	I0603 12:09:35.594763   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 12:09:38.154151   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:09:38.169099   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 12:09:38.169165   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 12:09:38.205410   73662 cri.go:89] found id: ""
	I0603 12:09:38.205437   73662 logs.go:276] 0 containers: []
	W0603 12:09:38.205444   73662 logs.go:278] No container was found matching "kube-apiserver"
	I0603 12:09:38.205450   73662 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 12:09:38.205502   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 12:09:38.238950   73662 cri.go:89] found id: ""
	I0603 12:09:38.238979   73662 logs.go:276] 0 containers: []
	W0603 12:09:38.238990   73662 logs.go:278] No container was found matching "etcd"
	I0603 12:09:38.238997   73662 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 12:09:38.239072   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 12:09:38.272117   73662 cri.go:89] found id: ""
	I0603 12:09:38.272146   73662 logs.go:276] 0 containers: []
	W0603 12:09:38.272157   73662 logs.go:278] No container was found matching "coredns"
	I0603 12:09:38.272164   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 12:09:38.272232   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 12:09:38.306778   73662 cri.go:89] found id: ""
	I0603 12:09:38.306815   73662 logs.go:276] 0 containers: []
	W0603 12:09:38.306826   73662 logs.go:278] No container was found matching "kube-scheduler"
	I0603 12:09:38.306834   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 12:09:38.306894   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 12:09:38.344438   73662 cri.go:89] found id: ""
	I0603 12:09:38.344464   73662 logs.go:276] 0 containers: []
	W0603 12:09:38.344471   73662 logs.go:278] No container was found matching "kube-proxy"
	I0603 12:09:38.344476   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 12:09:38.344528   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 12:09:38.384347   73662 cri.go:89] found id: ""
	I0603 12:09:38.384373   73662 logs.go:276] 0 containers: []
	W0603 12:09:38.384384   73662 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 12:09:38.384392   73662 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 12:09:38.384440   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 12:09:38.424500   73662 cri.go:89] found id: ""
	I0603 12:09:38.424526   73662 logs.go:276] 0 containers: []
	W0603 12:09:38.424536   73662 logs.go:278] No container was found matching "kindnet"
	I0603 12:09:38.424543   73662 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 12:09:38.424601   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 12:09:38.459649   73662 cri.go:89] found id: ""
	I0603 12:09:38.459678   73662 logs.go:276] 0 containers: []
	W0603 12:09:38.459685   73662 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 12:09:38.459693   73662 logs.go:123] Gathering logs for kubelet ...
	I0603 12:09:38.459705   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 12:09:38.511193   73662 logs.go:123] Gathering logs for dmesg ...
	I0603 12:09:38.511226   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 12:09:38.525367   73662 logs.go:123] Gathering logs for describe nodes ...
	I0603 12:09:38.525394   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 12:09:38.596534   73662 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 12:09:38.596555   73662 logs.go:123] Gathering logs for CRI-O ...
	I0603 12:09:38.596568   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 12:09:38.675204   73662 logs.go:123] Gathering logs for container status ...
	I0603 12:09:38.675233   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 12:09:37.082229   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:09:39.083400   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:09:41.113187   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:09:43.612824   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:09:41.165676   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:09:43.166246   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:09:41.217825   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:09:41.232019   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 12:09:41.232077   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 12:09:41.267920   73662 cri.go:89] found id: ""
	I0603 12:09:41.267944   73662 logs.go:276] 0 containers: []
	W0603 12:09:41.267951   73662 logs.go:278] No container was found matching "kube-apiserver"
	I0603 12:09:41.267956   73662 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 12:09:41.268002   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 12:09:41.306326   73662 cri.go:89] found id: ""
	I0603 12:09:41.306353   73662 logs.go:276] 0 containers: []
	W0603 12:09:41.306364   73662 logs.go:278] No container was found matching "etcd"
	I0603 12:09:41.306371   73662 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 12:09:41.306439   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 12:09:41.339922   73662 cri.go:89] found id: ""
	I0603 12:09:41.339950   73662 logs.go:276] 0 containers: []
	W0603 12:09:41.339960   73662 logs.go:278] No container was found matching "coredns"
	I0603 12:09:41.339968   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 12:09:41.340030   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 12:09:41.374394   73662 cri.go:89] found id: ""
	I0603 12:09:41.374424   73662 logs.go:276] 0 containers: []
	W0603 12:09:41.374432   73662 logs.go:278] No container was found matching "kube-scheduler"
	I0603 12:09:41.374438   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 12:09:41.374490   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 12:09:41.412699   73662 cri.go:89] found id: ""
	I0603 12:09:41.412725   73662 logs.go:276] 0 containers: []
	W0603 12:09:41.412733   73662 logs.go:278] No container was found matching "kube-proxy"
	I0603 12:09:41.412738   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 12:09:41.412792   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 12:09:41.455158   73662 cri.go:89] found id: ""
	I0603 12:09:41.455186   73662 logs.go:276] 0 containers: []
	W0603 12:09:41.455195   73662 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 12:09:41.455201   73662 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 12:09:41.455250   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 12:09:41.493873   73662 cri.go:89] found id: ""
	I0603 12:09:41.493899   73662 logs.go:276] 0 containers: []
	W0603 12:09:41.493907   73662 logs.go:278] No container was found matching "kindnet"
	I0603 12:09:41.493912   73662 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 12:09:41.493961   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 12:09:41.533128   73662 cri.go:89] found id: ""
	I0603 12:09:41.533157   73662 logs.go:276] 0 containers: []
	W0603 12:09:41.533168   73662 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 12:09:41.533179   73662 logs.go:123] Gathering logs for container status ...
	I0603 12:09:41.533192   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 12:09:41.569504   73662 logs.go:123] Gathering logs for kubelet ...
	I0603 12:09:41.569532   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 12:09:41.623155   73662 logs.go:123] Gathering logs for dmesg ...
	I0603 12:09:41.623182   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 12:09:41.637320   73662 logs.go:123] Gathering logs for describe nodes ...
	I0603 12:09:41.637344   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 12:09:41.717063   73662 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 12:09:41.717080   73662 logs.go:123] Gathering logs for CRI-O ...
	I0603 12:09:41.717091   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 12:09:44.301694   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:09:44.317073   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 12:09:44.317128   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 12:09:44.359170   73662 cri.go:89] found id: ""
	I0603 12:09:44.359220   73662 logs.go:276] 0 containers: []
	W0603 12:09:44.359230   73662 logs.go:278] No container was found matching "kube-apiserver"
	I0603 12:09:44.359239   73662 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 12:09:44.359294   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 12:09:44.399820   73662 cri.go:89] found id: ""
	I0603 12:09:44.399844   73662 logs.go:276] 0 containers: []
	W0603 12:09:44.399854   73662 logs.go:278] No container was found matching "etcd"
	I0603 12:09:44.399862   73662 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 12:09:44.399928   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 12:09:44.439447   73662 cri.go:89] found id: ""
	I0603 12:09:44.439474   73662 logs.go:276] 0 containers: []
	W0603 12:09:44.439481   73662 logs.go:278] No container was found matching "coredns"
	I0603 12:09:44.439487   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 12:09:44.439540   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 12:09:44.475880   73662 cri.go:89] found id: ""
	I0603 12:09:44.475906   73662 logs.go:276] 0 containers: []
	W0603 12:09:44.475917   73662 logs.go:278] No container was found matching "kube-scheduler"
	I0603 12:09:44.475922   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 12:09:44.475980   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 12:09:44.511294   73662 cri.go:89] found id: ""
	I0603 12:09:44.511330   73662 logs.go:276] 0 containers: []
	W0603 12:09:44.511341   73662 logs.go:278] No container was found matching "kube-proxy"
	I0603 12:09:44.511348   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 12:09:44.511401   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 12:09:44.547348   73662 cri.go:89] found id: ""
	I0603 12:09:44.547373   73662 logs.go:276] 0 containers: []
	W0603 12:09:44.547380   73662 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 12:09:44.547385   73662 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 12:09:44.547430   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 12:09:44.586452   73662 cri.go:89] found id: ""
	I0603 12:09:44.586476   73662 logs.go:276] 0 containers: []
	W0603 12:09:44.586483   73662 logs.go:278] No container was found matching "kindnet"
	I0603 12:09:44.586488   73662 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 12:09:44.586543   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 12:09:44.625804   73662 cri.go:89] found id: ""
	I0603 12:09:44.625824   73662 logs.go:276] 0 containers: []
	W0603 12:09:44.625831   73662 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 12:09:44.625839   73662 logs.go:123] Gathering logs for kubelet ...
	I0603 12:09:44.625848   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 12:09:44.680963   73662 logs.go:123] Gathering logs for dmesg ...
	I0603 12:09:44.680996   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 12:09:44.695920   73662 logs.go:123] Gathering logs for describe nodes ...
	I0603 12:09:44.695945   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 12:09:44.766704   73662 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 12:09:44.766735   73662 logs.go:123] Gathering logs for CRI-O ...
	I0603 12:09:44.766750   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 12:09:44.849452   73662 logs.go:123] Gathering logs for container status ...
	I0603 12:09:44.849484   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 12:09:41.581194   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:09:44.081266   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:09:45.613719   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:09:47.613834   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:09:45.166682   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:09:47.667916   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:09:47.391851   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:09:47.406886   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 12:09:47.406941   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 12:09:47.441654   73662 cri.go:89] found id: ""
	I0603 12:09:47.441676   73662 logs.go:276] 0 containers: []
	W0603 12:09:47.441686   73662 logs.go:278] No container was found matching "kube-apiserver"
	I0603 12:09:47.441692   73662 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 12:09:47.441739   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 12:09:47.475605   73662 cri.go:89] found id: ""
	I0603 12:09:47.475634   73662 logs.go:276] 0 containers: []
	W0603 12:09:47.475644   73662 logs.go:278] No container was found matching "etcd"
	I0603 12:09:47.475651   73662 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 12:09:47.475707   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 12:09:47.511558   73662 cri.go:89] found id: ""
	I0603 12:09:47.511582   73662 logs.go:276] 0 containers: []
	W0603 12:09:47.511590   73662 logs.go:278] No container was found matching "coredns"
	I0603 12:09:47.511595   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 12:09:47.511653   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 12:09:47.545327   73662 cri.go:89] found id: ""
	I0603 12:09:47.545359   73662 logs.go:276] 0 containers: []
	W0603 12:09:47.545370   73662 logs.go:278] No container was found matching "kube-scheduler"
	I0603 12:09:47.545378   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 12:09:47.545442   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 12:09:47.581846   73662 cri.go:89] found id: ""
	I0603 12:09:47.581875   73662 logs.go:276] 0 containers: []
	W0603 12:09:47.581884   73662 logs.go:278] No container was found matching "kube-proxy"
	I0603 12:09:47.581892   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 12:09:47.581953   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 12:09:47.618872   73662 cri.go:89] found id: ""
	I0603 12:09:47.618893   73662 logs.go:276] 0 containers: []
	W0603 12:09:47.618901   73662 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 12:09:47.618908   73662 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 12:09:47.618964   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 12:09:47.663659   73662 cri.go:89] found id: ""
	I0603 12:09:47.663689   73662 logs.go:276] 0 containers: []
	W0603 12:09:47.663700   73662 logs.go:278] No container was found matching "kindnet"
	I0603 12:09:47.663708   73662 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 12:09:47.663766   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 12:09:47.697189   73662 cri.go:89] found id: ""
	I0603 12:09:47.697217   73662 logs.go:276] 0 containers: []
	W0603 12:09:47.697228   73662 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 12:09:47.697238   73662 logs.go:123] Gathering logs for dmesg ...
	I0603 12:09:47.697254   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 12:09:47.711787   73662 logs.go:123] Gathering logs for describe nodes ...
	I0603 12:09:47.711812   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 12:09:47.784073   73662 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 12:09:47.784095   73662 logs.go:123] Gathering logs for CRI-O ...
	I0603 12:09:47.784106   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 12:09:47.866792   73662 logs.go:123] Gathering logs for container status ...
	I0603 12:09:47.866824   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 12:09:47.907650   73662 logs.go:123] Gathering logs for kubelet ...
	I0603 12:09:47.907701   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 12:09:46.081705   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:09:48.581286   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:09:50.115365   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:09:52.612108   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:09:50.166286   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:09:52.166751   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:09:54.171218   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
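	The interleaved pod_ready.go lines come from three parallel test profiles polling their metrics-server pods, which never leave "Ready":"False" in this window. A sketch of inspecting the same Ready condition directly with kubectl, assuming a working kubeconfig context for the profile (`<profile>` is a placeholder) and the upstream metrics-server label `k8s-app=metrics-server`; the pod name is taken from the log above:
	
	  kubectl --context <profile> -n kube-system get pods -l k8s-app=metrics-server
	  # prints "True" once the pod's Ready condition flips
	  kubectl --context <profile> -n kube-system get pod metrics-server-569cc877fc-8jrnd \
	    -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'
	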
	I0603 12:09:50.458815   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:09:50.473498   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 12:09:50.473561   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 12:09:50.514762   73662 cri.go:89] found id: ""
	I0603 12:09:50.514788   73662 logs.go:276] 0 containers: []
	W0603 12:09:50.514796   73662 logs.go:278] No container was found matching "kube-apiserver"
	I0603 12:09:50.514801   73662 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 12:09:50.514877   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 12:09:50.548449   73662 cri.go:89] found id: ""
	I0603 12:09:50.548481   73662 logs.go:276] 0 containers: []
	W0603 12:09:50.548492   73662 logs.go:278] No container was found matching "etcd"
	I0603 12:09:50.548498   73662 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 12:09:50.548560   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 12:09:50.584636   73662 cri.go:89] found id: ""
	I0603 12:09:50.584658   73662 logs.go:276] 0 containers: []
	W0603 12:09:50.584665   73662 logs.go:278] No container was found matching "coredns"
	I0603 12:09:50.584671   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 12:09:50.584718   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 12:09:50.619934   73662 cri.go:89] found id: ""
	I0603 12:09:50.619964   73662 logs.go:276] 0 containers: []
	W0603 12:09:50.619974   73662 logs.go:278] No container was found matching "kube-scheduler"
	I0603 12:09:50.619983   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 12:09:50.620041   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 12:09:50.656062   73662 cri.go:89] found id: ""
	I0603 12:09:50.656093   73662 logs.go:276] 0 containers: []
	W0603 12:09:50.656105   73662 logs.go:278] No container was found matching "kube-proxy"
	I0603 12:09:50.656117   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 12:09:50.656166   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 12:09:50.693539   73662 cri.go:89] found id: ""
	I0603 12:09:50.693566   73662 logs.go:276] 0 containers: []
	W0603 12:09:50.693573   73662 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 12:09:50.693582   73662 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 12:09:50.693637   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 12:09:50.727999   73662 cri.go:89] found id: ""
	I0603 12:09:50.728029   73662 logs.go:276] 0 containers: []
	W0603 12:09:50.728049   73662 logs.go:278] No container was found matching "kindnet"
	I0603 12:09:50.728057   73662 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 12:09:50.728118   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 12:09:50.767370   73662 cri.go:89] found id: ""
	I0603 12:09:50.767417   73662 logs.go:276] 0 containers: []
	W0603 12:09:50.767434   73662 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 12:09:50.767444   73662 logs.go:123] Gathering logs for describe nodes ...
	I0603 12:09:50.767460   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 12:09:50.844078   73662 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 12:09:50.844098   73662 logs.go:123] Gathering logs for CRI-O ...
	I0603 12:09:50.844111   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 12:09:50.922082   73662 logs.go:123] Gathering logs for container status ...
	I0603 12:09:50.922119   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 12:09:50.964841   73662 logs.go:123] Gathering logs for kubelet ...
	I0603 12:09:50.964878   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 12:09:51.016783   73662 logs.go:123] Gathering logs for dmesg ...
	I0603 12:09:51.016823   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 12:09:53.533274   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:09:53.547218   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 12:09:53.547272   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 12:09:53.584537   73662 cri.go:89] found id: ""
	I0603 12:09:53.584561   73662 logs.go:276] 0 containers: []
	W0603 12:09:53.584571   73662 logs.go:278] No container was found matching "kube-apiserver"
	I0603 12:09:53.584578   73662 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 12:09:53.584634   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 12:09:53.618652   73662 cri.go:89] found id: ""
	I0603 12:09:53.618678   73662 logs.go:276] 0 containers: []
	W0603 12:09:53.618688   73662 logs.go:278] No container was found matching "etcd"
	I0603 12:09:53.618695   73662 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 12:09:53.618749   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 12:09:53.654094   73662 cri.go:89] found id: ""
	I0603 12:09:53.654120   73662 logs.go:276] 0 containers: []
	W0603 12:09:53.654127   73662 logs.go:278] No container was found matching "coredns"
	I0603 12:09:53.654140   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 12:09:53.654196   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 12:09:53.691381   73662 cri.go:89] found id: ""
	I0603 12:09:53.691409   73662 logs.go:276] 0 containers: []
	W0603 12:09:53.691420   73662 logs.go:278] No container was found matching "kube-scheduler"
	I0603 12:09:53.691428   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 12:09:53.691493   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 12:09:53.728294   73662 cri.go:89] found id: ""
	I0603 12:09:53.728331   73662 logs.go:276] 0 containers: []
	W0603 12:09:53.728341   73662 logs.go:278] No container was found matching "kube-proxy"
	I0603 12:09:53.728349   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 12:09:53.728426   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 12:09:53.764973   73662 cri.go:89] found id: ""
	I0603 12:09:53.765005   73662 logs.go:276] 0 containers: []
	W0603 12:09:53.765016   73662 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 12:09:53.765023   73662 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 12:09:53.765087   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 12:09:53.803694   73662 cri.go:89] found id: ""
	I0603 12:09:53.803717   73662 logs.go:276] 0 containers: []
	W0603 12:09:53.803724   73662 logs.go:278] No container was found matching "kindnet"
	I0603 12:09:53.803729   73662 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 12:09:53.803776   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 12:09:53.841924   73662 cri.go:89] found id: ""
	I0603 12:09:53.841949   73662 logs.go:276] 0 containers: []
	W0603 12:09:53.841957   73662 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 12:09:53.841964   73662 logs.go:123] Gathering logs for kubelet ...
	I0603 12:09:53.841982   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 12:09:53.895701   73662 logs.go:123] Gathering logs for dmesg ...
	I0603 12:09:53.895738   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 12:09:53.909498   73662 logs.go:123] Gathering logs for describe nodes ...
	I0603 12:09:53.909524   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 12:09:53.985195   73662 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 12:09:53.985218   73662 logs.go:123] Gathering logs for CRI-O ...
	I0603 12:09:53.985234   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 12:09:54.065799   73662 logs.go:123] Gathering logs for container status ...
	I0603 12:09:54.065831   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 12:09:50.581958   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:09:53.081289   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:09:55.081589   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:09:54.612358   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:09:56.616081   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:09:59.112698   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:09:56.667243   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:09:59.167672   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:09:56.606887   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:09:56.621376   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 12:09:56.621437   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 12:09:56.660334   73662 cri.go:89] found id: ""
	I0603 12:09:56.660358   73662 logs.go:276] 0 containers: []
	W0603 12:09:56.660368   73662 logs.go:278] No container was found matching "kube-apiserver"
	I0603 12:09:56.660375   73662 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 12:09:56.660434   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 12:09:56.695706   73662 cri.go:89] found id: ""
	I0603 12:09:56.695734   73662 logs.go:276] 0 containers: []
	W0603 12:09:56.695742   73662 logs.go:278] No container was found matching "etcd"
	I0603 12:09:56.695747   73662 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 12:09:56.695791   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 12:09:56.730634   73662 cri.go:89] found id: ""
	I0603 12:09:56.730656   73662 logs.go:276] 0 containers: []
	W0603 12:09:56.730664   73662 logs.go:278] No container was found matching "coredns"
	I0603 12:09:56.730670   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 12:09:56.730715   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 12:09:56.765374   73662 cri.go:89] found id: ""
	I0603 12:09:56.765407   73662 logs.go:276] 0 containers: []
	W0603 12:09:56.765414   73662 logs.go:278] No container was found matching "kube-scheduler"
	I0603 12:09:56.765420   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 12:09:56.765467   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 12:09:56.801230   73662 cri.go:89] found id: ""
	I0603 12:09:56.801254   73662 logs.go:276] 0 containers: []
	W0603 12:09:56.801262   73662 logs.go:278] No container was found matching "kube-proxy"
	I0603 12:09:56.801267   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 12:09:56.801335   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 12:09:56.835988   73662 cri.go:89] found id: ""
	I0603 12:09:56.836015   73662 logs.go:276] 0 containers: []
	W0603 12:09:56.836026   73662 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 12:09:56.836034   73662 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 12:09:56.836093   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 12:09:56.870099   73662 cri.go:89] found id: ""
	I0603 12:09:56.870124   73662 logs.go:276] 0 containers: []
	W0603 12:09:56.870131   73662 logs.go:278] No container was found matching "kindnet"
	I0603 12:09:56.870136   73662 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 12:09:56.870183   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 12:09:56.904755   73662 cri.go:89] found id: ""
	I0603 12:09:56.904780   73662 logs.go:276] 0 containers: []
	W0603 12:09:56.904790   73662 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 12:09:56.904801   73662 logs.go:123] Gathering logs for kubelet ...
	I0603 12:09:56.904812   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 12:09:56.956824   73662 logs.go:123] Gathering logs for dmesg ...
	I0603 12:09:56.956854   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 12:09:56.971675   73662 logs.go:123] Gathering logs for describe nodes ...
	I0603 12:09:56.971702   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 12:09:57.042337   73662 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 12:09:57.042359   73662 logs.go:123] Gathering logs for CRI-O ...
	I0603 12:09:57.042375   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 12:09:57.129450   73662 logs.go:123] Gathering logs for container status ...
	I0603 12:09:57.129480   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 12:09:59.669256   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:09:59.683392   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 12:09:59.683452   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 12:09:59.718035   73662 cri.go:89] found id: ""
	I0603 12:09:59.718062   73662 logs.go:276] 0 containers: []
	W0603 12:09:59.718073   73662 logs.go:278] No container was found matching "kube-apiserver"
	I0603 12:09:59.718081   73662 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 12:09:59.718141   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 12:09:59.756638   73662 cri.go:89] found id: ""
	I0603 12:09:59.756666   73662 logs.go:276] 0 containers: []
	W0603 12:09:59.756678   73662 logs.go:278] No container was found matching "etcd"
	I0603 12:09:59.756686   73662 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 12:09:59.756751   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 12:09:59.794710   73662 cri.go:89] found id: ""
	I0603 12:09:59.794753   73662 logs.go:276] 0 containers: []
	W0603 12:09:59.794764   73662 logs.go:278] No container was found matching "coredns"
	I0603 12:09:59.794771   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 12:09:59.794835   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 12:09:59.829717   73662 cri.go:89] found id: ""
	I0603 12:09:59.829745   73662 logs.go:276] 0 containers: []
	W0603 12:09:59.829755   73662 logs.go:278] No container was found matching "kube-scheduler"
	I0603 12:09:59.829763   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 12:09:59.829819   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 12:09:59.863959   73662 cri.go:89] found id: ""
	I0603 12:09:59.863996   73662 logs.go:276] 0 containers: []
	W0603 12:09:59.864005   73662 logs.go:278] No container was found matching "kube-proxy"
	I0603 12:09:59.864010   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 12:09:59.864070   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 12:09:59.900553   73662 cri.go:89] found id: ""
	I0603 12:09:59.900577   73662 logs.go:276] 0 containers: []
	W0603 12:09:59.900585   73662 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 12:09:59.900590   73662 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 12:09:59.900664   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 12:09:59.935702   73662 cri.go:89] found id: ""
	I0603 12:09:59.935727   73662 logs.go:276] 0 containers: []
	W0603 12:09:59.935735   73662 logs.go:278] No container was found matching "kindnet"
	I0603 12:09:59.935741   73662 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 12:09:59.935800   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 12:09:59.971017   73662 cri.go:89] found id: ""
	I0603 12:09:59.971064   73662 logs.go:276] 0 containers: []
	W0603 12:09:59.971076   73662 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 12:09:59.971086   73662 logs.go:123] Gathering logs for dmesg ...
	I0603 12:09:59.971102   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 12:09:59.985406   73662 logs.go:123] Gathering logs for describe nodes ...
	I0603 12:09:59.985431   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 12:10:00.064341   73662 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 12:10:00.064372   73662 logs.go:123] Gathering logs for CRI-O ...
	I0603 12:10:00.064388   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 12:09:57.081724   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:09:59.581454   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:10:01.113236   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:10:03.116142   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:10:01.667557   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:10:04.166825   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:10:00.152803   73662 logs.go:123] Gathering logs for container status ...
	I0603 12:10:00.152850   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 12:10:00.198301   73662 logs.go:123] Gathering logs for kubelet ...
	I0603 12:10:00.198341   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 12:10:02.749662   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:10:02.762938   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 12:10:02.762999   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 12:10:02.800269   73662 cri.go:89] found id: ""
	I0603 12:10:02.800296   73662 logs.go:276] 0 containers: []
	W0603 12:10:02.800305   73662 logs.go:278] No container was found matching "kube-apiserver"
	I0603 12:10:02.800311   73662 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 12:10:02.800373   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 12:10:02.841326   73662 cri.go:89] found id: ""
	I0603 12:10:02.841350   73662 logs.go:276] 0 containers: []
	W0603 12:10:02.841357   73662 logs.go:278] No container was found matching "etcd"
	I0603 12:10:02.841363   73662 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 12:10:02.841409   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 12:10:02.879309   73662 cri.go:89] found id: ""
	I0603 12:10:02.879343   73662 logs.go:276] 0 containers: []
	W0603 12:10:02.879353   73662 logs.go:278] No container was found matching "coredns"
	I0603 12:10:02.879361   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 12:10:02.879423   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 12:10:02.919666   73662 cri.go:89] found id: ""
	I0603 12:10:02.919695   73662 logs.go:276] 0 containers: []
	W0603 12:10:02.919707   73662 logs.go:278] No container was found matching "kube-scheduler"
	I0603 12:10:02.919714   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 12:10:02.919761   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 12:10:02.954790   73662 cri.go:89] found id: ""
	I0603 12:10:02.954814   73662 logs.go:276] 0 containers: []
	W0603 12:10:02.954822   73662 logs.go:278] No container was found matching "kube-proxy"
	I0603 12:10:02.954827   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 12:10:02.954884   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 12:10:02.994472   73662 cri.go:89] found id: ""
	I0603 12:10:02.994515   73662 logs.go:276] 0 containers: []
	W0603 12:10:02.994527   73662 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 12:10:02.994535   73662 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 12:10:02.994598   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 12:10:03.034482   73662 cri.go:89] found id: ""
	I0603 12:10:03.034509   73662 logs.go:276] 0 containers: []
	W0603 12:10:03.034520   73662 logs.go:278] No container was found matching "kindnet"
	I0603 12:10:03.034526   73662 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 12:10:03.034591   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 12:10:03.072971   73662 cri.go:89] found id: ""
	I0603 12:10:03.073002   73662 logs.go:276] 0 containers: []
	W0603 12:10:03.073011   73662 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 12:10:03.073025   73662 logs.go:123] Gathering logs for dmesg ...
	I0603 12:10:03.073043   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 12:10:03.088043   73662 logs.go:123] Gathering logs for describe nodes ...
	I0603 12:10:03.088074   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 12:10:03.186799   73662 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 12:10:03.186829   73662 logs.go:123] Gathering logs for CRI-O ...
	I0603 12:10:03.186842   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 12:10:03.266685   73662 logs.go:123] Gathering logs for container status ...
	I0603 12:10:03.266724   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 12:10:03.317400   73662 logs.go:123] Gathering logs for kubelet ...
	I0603 12:10:03.317433   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 12:10:01.582398   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:10:04.082658   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:10:05.613678   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:10:08.112518   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:10:06.167099   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:10:08.167502   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
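	Every "describe nodes" attempt in this log fails with connection refused on localhost:8443, which is consistent with the empty kube-apiserver listings above. A quick sketch of probing that endpoint from inside the node, assuming `minikube ssh` access (`<profile>` is a placeholder); "connection refused" means nothing is listening on the apiserver port yet, while any HTTP response at all means the apiserver is up:
	
	  minikube ssh -p <profile> -- curl -sk https://localhost:8443/healthz
	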
	I0603 12:10:05.870335   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:10:05.884377   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 12:10:05.884469   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 12:10:05.924617   73662 cri.go:89] found id: ""
	I0603 12:10:05.924647   73662 logs.go:276] 0 containers: []
	W0603 12:10:05.924659   73662 logs.go:278] No container was found matching "kube-apiserver"
	I0603 12:10:05.924667   73662 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 12:10:05.924724   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 12:10:05.971569   73662 cri.go:89] found id: ""
	I0603 12:10:05.971605   73662 logs.go:276] 0 containers: []
	W0603 12:10:05.971615   73662 logs.go:278] No container was found matching "etcd"
	I0603 12:10:05.971623   73662 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 12:10:05.971683   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 12:10:06.010190   73662 cri.go:89] found id: ""
	I0603 12:10:06.010211   73662 logs.go:276] 0 containers: []
	W0603 12:10:06.010218   73662 logs.go:278] No container was found matching "coredns"
	I0603 12:10:06.010223   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 12:10:06.010270   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 12:10:06.056228   73662 cri.go:89] found id: ""
	I0603 12:10:06.056258   73662 logs.go:276] 0 containers: []
	W0603 12:10:06.056269   73662 logs.go:278] No container was found matching "kube-scheduler"
	I0603 12:10:06.056276   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 12:10:06.056338   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 12:10:06.096139   73662 cri.go:89] found id: ""
	I0603 12:10:06.096171   73662 logs.go:276] 0 containers: []
	W0603 12:10:06.096182   73662 logs.go:278] No container was found matching "kube-proxy"
	I0603 12:10:06.096192   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 12:10:06.096261   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 12:10:06.135290   73662 cri.go:89] found id: ""
	I0603 12:10:06.135327   73662 logs.go:276] 0 containers: []
	W0603 12:10:06.135338   73662 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 12:10:06.135346   73662 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 12:10:06.135412   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 12:10:06.177281   73662 cri.go:89] found id: ""
	I0603 12:10:06.177311   73662 logs.go:276] 0 containers: []
	W0603 12:10:06.177328   73662 logs.go:278] No container was found matching "kindnet"
	I0603 12:10:06.177335   73662 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 12:10:06.177395   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 12:10:06.216791   73662 cri.go:89] found id: ""
	I0603 12:10:06.216823   73662 logs.go:276] 0 containers: []
	W0603 12:10:06.216835   73662 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 12:10:06.216845   73662 logs.go:123] Gathering logs for kubelet ...
	I0603 12:10:06.216874   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 12:10:06.272731   73662 logs.go:123] Gathering logs for dmesg ...
	I0603 12:10:06.272772   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 12:10:06.289080   73662 logs.go:123] Gathering logs for describe nodes ...
	I0603 12:10:06.289118   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 12:10:06.358105   73662 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 12:10:06.358134   73662 logs.go:123] Gathering logs for CRI-O ...
	I0603 12:10:06.358148   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 12:10:06.433071   73662 logs.go:123] Gathering logs for container status ...
	I0603 12:10:06.433107   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 12:10:08.974934   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:10:08.988808   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 12:10:08.988883   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 12:10:09.023595   73662 cri.go:89] found id: ""
	I0603 12:10:09.023620   73662 logs.go:276] 0 containers: []
	W0603 12:10:09.023627   73662 logs.go:278] No container was found matching "kube-apiserver"
	I0603 12:10:09.023633   73662 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 12:10:09.023683   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 12:10:09.060962   73662 cri.go:89] found id: ""
	I0603 12:10:09.060990   73662 logs.go:276] 0 containers: []
	W0603 12:10:09.061000   73662 logs.go:278] No container was found matching "etcd"
	I0603 12:10:09.061006   73662 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 12:10:09.061080   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 12:10:09.099923   73662 cri.go:89] found id: ""
	I0603 12:10:09.099952   73662 logs.go:276] 0 containers: []
	W0603 12:10:09.099961   73662 logs.go:278] No container was found matching "coredns"
	I0603 12:10:09.099970   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 12:10:09.100030   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 12:10:09.138521   73662 cri.go:89] found id: ""
	I0603 12:10:09.138547   73662 logs.go:276] 0 containers: []
	W0603 12:10:09.138555   73662 logs.go:278] No container was found matching "kube-scheduler"
	I0603 12:10:09.138561   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 12:10:09.138614   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 12:10:09.178492   73662 cri.go:89] found id: ""
	I0603 12:10:09.178519   73662 logs.go:276] 0 containers: []
	W0603 12:10:09.178529   73662 logs.go:278] No container was found matching "kube-proxy"
	I0603 12:10:09.178537   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 12:10:09.178603   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 12:10:09.215779   73662 cri.go:89] found id: ""
	I0603 12:10:09.215812   73662 logs.go:276] 0 containers: []
	W0603 12:10:09.215819   73662 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 12:10:09.215832   73662 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 12:10:09.215894   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 12:10:09.250800   73662 cri.go:89] found id: ""
	I0603 12:10:09.250835   73662 logs.go:276] 0 containers: []
	W0603 12:10:09.250845   73662 logs.go:278] No container was found matching "kindnet"
	I0603 12:10:09.250852   73662 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 12:10:09.250913   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 12:10:09.286742   73662 cri.go:89] found id: ""
	I0603 12:10:09.286773   73662 logs.go:276] 0 containers: []
	W0603 12:10:09.286784   73662 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 12:10:09.286794   73662 logs.go:123] Gathering logs for kubelet ...
	I0603 12:10:09.286808   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 12:10:09.341156   73662 logs.go:123] Gathering logs for dmesg ...
	I0603 12:10:09.341189   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 12:10:09.356237   73662 logs.go:123] Gathering logs for describe nodes ...
	I0603 12:10:09.356273   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 12:10:09.436633   73662 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 12:10:09.436654   73662 logs.go:123] Gathering logs for CRI-O ...
	I0603 12:10:09.436666   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 12:10:09.519296   73662 logs.go:123] Gathering logs for container status ...
	I0603 12:10:09.519336   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 12:10:06.581573   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:10:09.081354   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:10:10.113408   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:10:12.113838   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:10:10.168197   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:10:12.667631   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:10:14.667886   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:10:12.090458   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:10:12.105250   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 12:10:12.105324   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 12:10:12.143229   73662 cri.go:89] found id: ""
	I0603 12:10:12.143257   73662 logs.go:276] 0 containers: []
	W0603 12:10:12.143268   73662 logs.go:278] No container was found matching "kube-apiserver"
	I0603 12:10:12.143276   73662 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 12:10:12.143345   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 12:10:12.183319   73662 cri.go:89] found id: ""
	I0603 12:10:12.183343   73662 logs.go:276] 0 containers: []
	W0603 12:10:12.183353   73662 logs.go:278] No container was found matching "etcd"
	I0603 12:10:12.183361   73662 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 12:10:12.183421   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 12:10:12.221154   73662 cri.go:89] found id: ""
	I0603 12:10:12.221178   73662 logs.go:276] 0 containers: []
	W0603 12:10:12.221186   73662 logs.go:278] No container was found matching "coredns"
	I0603 12:10:12.221191   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 12:10:12.221252   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 12:10:12.256387   73662 cri.go:89] found id: ""
	I0603 12:10:12.256417   73662 logs.go:276] 0 containers: []
	W0603 12:10:12.256428   73662 logs.go:278] No container was found matching "kube-scheduler"
	I0603 12:10:12.256436   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 12:10:12.256492   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 12:10:12.298777   73662 cri.go:89] found id: ""
	I0603 12:10:12.298807   73662 logs.go:276] 0 containers: []
	W0603 12:10:12.298817   73662 logs.go:278] No container was found matching "kube-proxy"
	I0603 12:10:12.298825   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 12:10:12.298883   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 12:10:12.337031   73662 cri.go:89] found id: ""
	I0603 12:10:12.337060   73662 logs.go:276] 0 containers: []
	W0603 12:10:12.337070   73662 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 12:10:12.337077   73662 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 12:10:12.337136   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 12:10:12.373729   73662 cri.go:89] found id: ""
	I0603 12:10:12.373759   73662 logs.go:276] 0 containers: []
	W0603 12:10:12.373766   73662 logs.go:278] No container was found matching "kindnet"
	I0603 12:10:12.373772   73662 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 12:10:12.373823   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 12:10:12.408295   73662 cri.go:89] found id: ""
	I0603 12:10:12.408337   73662 logs.go:276] 0 containers: []
	W0603 12:10:12.408346   73662 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 12:10:12.408357   73662 logs.go:123] Gathering logs for kubelet ...
	I0603 12:10:12.408371   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 12:10:12.458814   73662 logs.go:123] Gathering logs for dmesg ...
	I0603 12:10:12.458844   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 12:10:12.471995   73662 logs.go:123] Gathering logs for describe nodes ...
	I0603 12:10:12.472020   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 12:10:12.542342   73662 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 12:10:12.542364   73662 logs.go:123] Gathering logs for CRI-O ...
	I0603 12:10:12.542379   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 12:10:12.620295   73662 logs.go:123] Gathering logs for container status ...
	I0603 12:10:12.620328   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 12:10:11.081820   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:10:13.580873   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:10:14.613837   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:10:16.613987   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:10:18.614774   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:10:17.166332   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:10:19.167726   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:10:15.162145   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:10:15.178057   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 12:10:15.178110   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 12:10:15.217189   73662 cri.go:89] found id: ""
	I0603 12:10:15.217218   73662 logs.go:276] 0 containers: []
	W0603 12:10:15.217228   73662 logs.go:278] No container was found matching "kube-apiserver"
	I0603 12:10:15.217235   73662 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 12:10:15.217291   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 12:10:15.265380   73662 cri.go:89] found id: ""
	I0603 12:10:15.265419   73662 logs.go:276] 0 containers: []
	W0603 12:10:15.265430   73662 logs.go:278] No container was found matching "etcd"
	I0603 12:10:15.265438   73662 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 12:10:15.265500   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 12:10:15.310671   73662 cri.go:89] found id: ""
	I0603 12:10:15.310736   73662 logs.go:276] 0 containers: []
	W0603 12:10:15.310772   73662 logs.go:278] No container was found matching "coredns"
	I0603 12:10:15.310787   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 12:10:15.310884   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 12:10:15.377888   73662 cri.go:89] found id: ""
	I0603 12:10:15.377914   73662 logs.go:276] 0 containers: []
	W0603 12:10:15.377921   73662 logs.go:278] No container was found matching "kube-scheduler"
	I0603 12:10:15.377928   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 12:10:15.377972   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 12:10:15.415472   73662 cri.go:89] found id: ""
	I0603 12:10:15.415502   73662 logs.go:276] 0 containers: []
	W0603 12:10:15.415510   73662 logs.go:278] No container was found matching "kube-proxy"
	I0603 12:10:15.415516   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 12:10:15.415563   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 12:10:15.450721   73662 cri.go:89] found id: ""
	I0603 12:10:15.450748   73662 logs.go:276] 0 containers: []
	W0603 12:10:15.450755   73662 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 12:10:15.450760   73662 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 12:10:15.450814   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 12:10:15.484329   73662 cri.go:89] found id: ""
	I0603 12:10:15.484356   73662 logs.go:276] 0 containers: []
	W0603 12:10:15.484363   73662 logs.go:278] No container was found matching "kindnet"
	I0603 12:10:15.484368   73662 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 12:10:15.484426   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 12:10:15.516976   73662 cri.go:89] found id: ""
	I0603 12:10:15.517005   73662 logs.go:276] 0 containers: []
	W0603 12:10:15.517015   73662 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 12:10:15.517025   73662 logs.go:123] Gathering logs for kubelet ...
	I0603 12:10:15.517038   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 12:10:15.569023   73662 logs.go:123] Gathering logs for dmesg ...
	I0603 12:10:15.569053   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 12:10:15.583710   73662 logs.go:123] Gathering logs for describe nodes ...
	I0603 12:10:15.583737   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 12:10:15.656403   73662 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 12:10:15.656426   73662 logs.go:123] Gathering logs for CRI-O ...
	I0603 12:10:15.656438   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 12:10:15.745585   73662 logs.go:123] Gathering logs for container status ...
	I0603 12:10:15.745619   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 12:10:18.290608   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:10:18.305165   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 12:10:18.305238   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 12:10:18.341905   73662 cri.go:89] found id: ""
	I0603 12:10:18.341929   73662 logs.go:276] 0 containers: []
	W0603 12:10:18.341939   73662 logs.go:278] No container was found matching "kube-apiserver"
	I0603 12:10:18.341945   73662 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 12:10:18.342001   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 12:10:18.378313   73662 cri.go:89] found id: ""
	I0603 12:10:18.378341   73662 logs.go:276] 0 containers: []
	W0603 12:10:18.378348   73662 logs.go:278] No container was found matching "etcd"
	I0603 12:10:18.378354   73662 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 12:10:18.378401   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 12:10:18.413366   73662 cri.go:89] found id: ""
	I0603 12:10:18.413414   73662 logs.go:276] 0 containers: []
	W0603 12:10:18.413424   73662 logs.go:278] No container was found matching "coredns"
	I0603 12:10:18.413432   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 12:10:18.413492   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 12:10:18.448694   73662 cri.go:89] found id: ""
	I0603 12:10:18.448727   73662 logs.go:276] 0 containers: []
	W0603 12:10:18.448738   73662 logs.go:278] No container was found matching "kube-scheduler"
	I0603 12:10:18.448745   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 12:10:18.448802   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 12:10:18.482640   73662 cri.go:89] found id: ""
	I0603 12:10:18.482678   73662 logs.go:276] 0 containers: []
	W0603 12:10:18.482689   73662 logs.go:278] No container was found matching "kube-proxy"
	I0603 12:10:18.482696   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 12:10:18.482757   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 12:10:18.520929   73662 cri.go:89] found id: ""
	I0603 12:10:18.520962   73662 logs.go:276] 0 containers: []
	W0603 12:10:18.520975   73662 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 12:10:18.520983   73662 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 12:10:18.521045   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 12:10:18.558678   73662 cri.go:89] found id: ""
	I0603 12:10:18.558712   73662 logs.go:276] 0 containers: []
	W0603 12:10:18.558723   73662 logs.go:278] No container was found matching "kindnet"
	I0603 12:10:18.558730   73662 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 12:10:18.558788   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 12:10:18.597574   73662 cri.go:89] found id: ""
	I0603 12:10:18.597599   73662 logs.go:276] 0 containers: []
	W0603 12:10:18.597609   73662 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 12:10:18.597619   73662 logs.go:123] Gathering logs for kubelet ...
	I0603 12:10:18.597633   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 12:10:18.652569   73662 logs.go:123] Gathering logs for dmesg ...
	I0603 12:10:18.652596   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 12:10:18.667829   73662 logs.go:123] Gathering logs for describe nodes ...
	I0603 12:10:18.667861   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 12:10:18.740869   73662 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 12:10:18.740888   73662 logs.go:123] Gathering logs for CRI-O ...
	I0603 12:10:18.740899   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 12:10:18.822108   73662 logs.go:123] Gathering logs for container status ...
	I0603 12:10:18.822143   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 12:10:15.581618   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:10:18.081181   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:10:21.113841   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:10:23.612530   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:10:21.667682   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:10:24.167351   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:10:21.363741   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:10:21.377941   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 12:10:21.378011   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 12:10:21.414406   73662 cri.go:89] found id: ""
	I0603 12:10:21.414434   73662 logs.go:276] 0 containers: []
	W0603 12:10:21.414446   73662 logs.go:278] No container was found matching "kube-apiserver"
	I0603 12:10:21.414454   73662 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 12:10:21.414513   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 12:10:21.449028   73662 cri.go:89] found id: ""
	I0603 12:10:21.449065   73662 logs.go:276] 0 containers: []
	W0603 12:10:21.449074   73662 logs.go:278] No container was found matching "etcd"
	I0603 12:10:21.449080   73662 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 12:10:21.449126   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 12:10:21.483017   73662 cri.go:89] found id: ""
	I0603 12:10:21.483052   73662 logs.go:276] 0 containers: []
	W0603 12:10:21.483064   73662 logs.go:278] No container was found matching "coredns"
	I0603 12:10:21.483071   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 12:10:21.483120   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 12:10:21.519195   73662 cri.go:89] found id: ""
	I0603 12:10:21.519227   73662 logs.go:276] 0 containers: []
	W0603 12:10:21.519237   73662 logs.go:278] No container was found matching "kube-scheduler"
	I0603 12:10:21.519245   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 12:10:21.519304   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 12:10:21.556228   73662 cri.go:89] found id: ""
	I0603 12:10:21.556257   73662 logs.go:276] 0 containers: []
	W0603 12:10:21.556270   73662 logs.go:278] No container was found matching "kube-proxy"
	I0603 12:10:21.556276   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 12:10:21.556337   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 12:10:21.594772   73662 cri.go:89] found id: ""
	I0603 12:10:21.594798   73662 logs.go:276] 0 containers: []
	W0603 12:10:21.594808   73662 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 12:10:21.594817   73662 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 12:10:21.594875   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 12:10:21.629808   73662 cri.go:89] found id: ""
	I0603 12:10:21.629830   73662 logs.go:276] 0 containers: []
	W0603 12:10:21.629837   73662 logs.go:278] No container was found matching "kindnet"
	I0603 12:10:21.629843   73662 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 12:10:21.629891   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 12:10:21.675237   73662 cri.go:89] found id: ""
	I0603 12:10:21.675263   73662 logs.go:276] 0 containers: []
	W0603 12:10:21.675272   73662 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 12:10:21.675282   73662 logs.go:123] Gathering logs for kubelet ...
	I0603 12:10:21.675295   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 12:10:21.730416   73662 logs.go:123] Gathering logs for dmesg ...
	I0603 12:10:21.730445   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 12:10:21.744442   73662 logs.go:123] Gathering logs for describe nodes ...
	I0603 12:10:21.744467   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 12:10:21.826282   73662 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 12:10:21.826308   73662 logs.go:123] Gathering logs for CRI-O ...
	I0603 12:10:21.826324   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 12:10:21.911387   73662 logs.go:123] Gathering logs for container status ...
	I0603 12:10:21.911422   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 12:10:24.454912   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:10:24.469992   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 12:10:24.470069   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 12:10:24.509462   73662 cri.go:89] found id: ""
	I0603 12:10:24.509501   73662 logs.go:276] 0 containers: []
	W0603 12:10:24.509516   73662 logs.go:278] No container was found matching "kube-apiserver"
	I0603 12:10:24.509523   73662 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 12:10:24.509588   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 12:10:24.543878   73662 cri.go:89] found id: ""
	I0603 12:10:24.543902   73662 logs.go:276] 0 containers: []
	W0603 12:10:24.543910   73662 logs.go:278] No container was found matching "etcd"
	I0603 12:10:24.543916   73662 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 12:10:24.543969   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 12:10:24.582712   73662 cri.go:89] found id: ""
	I0603 12:10:24.582741   73662 logs.go:276] 0 containers: []
	W0603 12:10:24.582752   73662 logs.go:278] No container was found matching "coredns"
	I0603 12:10:24.582759   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 12:10:24.582824   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 12:10:24.620533   73662 cri.go:89] found id: ""
	I0603 12:10:24.620560   73662 logs.go:276] 0 containers: []
	W0603 12:10:24.620571   73662 logs.go:278] No container was found matching "kube-scheduler"
	I0603 12:10:24.620577   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 12:10:24.620629   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 12:10:24.658750   73662 cri.go:89] found id: ""
	I0603 12:10:24.658774   73662 logs.go:276] 0 containers: []
	W0603 12:10:24.658781   73662 logs.go:278] No container was found matching "kube-proxy"
	I0603 12:10:24.658787   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 12:10:24.658830   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 12:10:24.697870   73662 cri.go:89] found id: ""
	I0603 12:10:24.697898   73662 logs.go:276] 0 containers: []
	W0603 12:10:24.697914   73662 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 12:10:24.697922   73662 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 12:10:24.697982   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 12:10:24.733557   73662 cri.go:89] found id: ""
	I0603 12:10:24.733583   73662 logs.go:276] 0 containers: []
	W0603 12:10:24.733593   73662 logs.go:278] No container was found matching "kindnet"
	I0603 12:10:24.733601   73662 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 12:10:24.733658   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 12:10:24.767874   73662 cri.go:89] found id: ""
	I0603 12:10:24.767901   73662 logs.go:276] 0 containers: []
	W0603 12:10:24.767910   73662 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 12:10:24.767920   73662 logs.go:123] Gathering logs for kubelet ...
	I0603 12:10:24.767934   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 12:10:24.821155   73662 logs.go:123] Gathering logs for dmesg ...
	I0603 12:10:24.821188   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 12:10:24.835506   73662 logs.go:123] Gathering logs for describe nodes ...
	I0603 12:10:24.835533   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 12:10:24.911295   73662 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 12:10:24.911317   73662 logs.go:123] Gathering logs for CRI-O ...
	I0603 12:10:24.911331   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 12:10:24.998831   73662 logs.go:123] Gathering logs for container status ...
	I0603 12:10:24.998870   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 12:10:20.581174   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:10:22.582071   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:10:25.081112   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:10:26.113580   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:10:28.113842   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:10:26.167517   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:10:28.666601   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:10:27.547553   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:10:27.562219   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 12:10:27.562283   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 12:10:27.604320   73662 cri.go:89] found id: ""
	I0603 12:10:27.604354   73662 logs.go:276] 0 containers: []
	W0603 12:10:27.604362   73662 logs.go:278] No container was found matching "kube-apiserver"
	I0603 12:10:27.604368   73662 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 12:10:27.604431   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 12:10:27.645069   73662 cri.go:89] found id: ""
	I0603 12:10:27.645093   73662 logs.go:276] 0 containers: []
	W0603 12:10:27.645100   73662 logs.go:278] No container was found matching "etcd"
	I0603 12:10:27.645105   73662 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 12:10:27.645208   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 12:10:27.682961   73662 cri.go:89] found id: ""
	I0603 12:10:27.682984   73662 logs.go:276] 0 containers: []
	W0603 12:10:27.682992   73662 logs.go:278] No container was found matching "coredns"
	I0603 12:10:27.682997   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 12:10:27.683065   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 12:10:27.716279   73662 cri.go:89] found id: ""
	I0603 12:10:27.716310   73662 logs.go:276] 0 containers: []
	W0603 12:10:27.716321   73662 logs.go:278] No container was found matching "kube-scheduler"
	I0603 12:10:27.716330   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 12:10:27.716405   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 12:10:27.758347   73662 cri.go:89] found id: ""
	I0603 12:10:27.758380   73662 logs.go:276] 0 containers: []
	W0603 12:10:27.758390   73662 logs.go:278] No container was found matching "kube-proxy"
	I0603 12:10:27.758397   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 12:10:27.758446   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 12:10:27.798212   73662 cri.go:89] found id: ""
	I0603 12:10:27.798240   73662 logs.go:276] 0 containers: []
	W0603 12:10:27.798249   73662 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 12:10:27.798258   73662 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 12:10:27.798318   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 12:10:27.831688   73662 cri.go:89] found id: ""
	I0603 12:10:27.831709   73662 logs.go:276] 0 containers: []
	W0603 12:10:27.831716   73662 logs.go:278] No container was found matching "kindnet"
	I0603 12:10:27.831722   73662 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 12:10:27.831776   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 12:10:27.864395   73662 cri.go:89] found id: ""
	I0603 12:10:27.864423   73662 logs.go:276] 0 containers: []
	W0603 12:10:27.864433   73662 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 12:10:27.864444   73662 logs.go:123] Gathering logs for kubelet ...
	I0603 12:10:27.864463   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 12:10:27.915528   73662 logs.go:123] Gathering logs for dmesg ...
	I0603 12:10:27.915556   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 12:10:27.929783   73662 logs.go:123] Gathering logs for describe nodes ...
	I0603 12:10:27.929806   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 12:10:28.005168   73662 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 12:10:28.005245   73662 logs.go:123] Gathering logs for CRI-O ...
	I0603 12:10:28.005267   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 12:10:28.090748   73662 logs.go:123] Gathering logs for container status ...
	I0603 12:10:28.090779   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 12:10:27.582855   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:10:30.081021   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:10:30.615472   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:10:33.112833   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:10:30.668051   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:10:33.167211   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:10:30.631148   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:10:30.645518   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 12:10:30.645590   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 12:10:30.684016   73662 cri.go:89] found id: ""
	I0603 12:10:30.684044   73662 logs.go:276] 0 containers: []
	W0603 12:10:30.684054   73662 logs.go:278] No container was found matching "kube-apiserver"
	I0603 12:10:30.684062   73662 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 12:10:30.684129   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 12:10:30.720344   73662 cri.go:89] found id: ""
	I0603 12:10:30.720371   73662 logs.go:276] 0 containers: []
	W0603 12:10:30.720379   73662 logs.go:278] No container was found matching "etcd"
	I0603 12:10:30.720384   73662 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 12:10:30.720437   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 12:10:30.754123   73662 cri.go:89] found id: ""
	I0603 12:10:30.754156   73662 logs.go:276] 0 containers: []
	W0603 12:10:30.754167   73662 logs.go:278] No container was found matching "coredns"
	I0603 12:10:30.754175   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 12:10:30.754228   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 12:10:30.788398   73662 cri.go:89] found id: ""
	I0603 12:10:30.788425   73662 logs.go:276] 0 containers: []
	W0603 12:10:30.788436   73662 logs.go:278] No container was found matching "kube-scheduler"
	I0603 12:10:30.788455   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 12:10:30.788523   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 12:10:30.826122   73662 cri.go:89] found id: ""
	I0603 12:10:30.826149   73662 logs.go:276] 0 containers: []
	W0603 12:10:30.826157   73662 logs.go:278] No container was found matching "kube-proxy"
	I0603 12:10:30.826163   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 12:10:30.826221   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 12:10:30.862886   73662 cri.go:89] found id: ""
	I0603 12:10:30.862917   73662 logs.go:276] 0 containers: []
	W0603 12:10:30.862930   73662 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 12:10:30.862938   73662 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 12:10:30.862995   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 12:10:30.897587   73662 cri.go:89] found id: ""
	I0603 12:10:30.897616   73662 logs.go:276] 0 containers: []
	W0603 12:10:30.897628   73662 logs.go:278] No container was found matching "kindnet"
	I0603 12:10:30.897635   73662 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 12:10:30.897692   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 12:10:30.936463   73662 cri.go:89] found id: ""
	I0603 12:10:30.936493   73662 logs.go:276] 0 containers: []
	W0603 12:10:30.936510   73662 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 12:10:30.936521   73662 logs.go:123] Gathering logs for kubelet ...
	I0603 12:10:30.936535   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 12:10:30.987304   73662 logs.go:123] Gathering logs for dmesg ...
	I0603 12:10:30.987341   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 12:10:31.001608   73662 logs.go:123] Gathering logs for describe nodes ...
	I0603 12:10:31.001636   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 12:10:31.079366   73662 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 12:10:31.079385   73662 logs.go:123] Gathering logs for CRI-O ...
	I0603 12:10:31.079398   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 12:10:31.158814   73662 logs.go:123] Gathering logs for container status ...
	I0603 12:10:31.158851   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 12:10:33.699524   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:10:33.713194   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 12:10:33.713256   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 12:10:33.747030   73662 cri.go:89] found id: ""
	I0603 12:10:33.747073   73662 logs.go:276] 0 containers: []
	W0603 12:10:33.747084   73662 logs.go:278] No container was found matching "kube-apiserver"
	I0603 12:10:33.747092   73662 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 12:10:33.747151   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 12:10:33.781873   73662 cri.go:89] found id: ""
	I0603 12:10:33.781909   73662 logs.go:276] 0 containers: []
	W0603 12:10:33.781920   73662 logs.go:278] No container was found matching "etcd"
	I0603 12:10:33.781927   73662 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 12:10:33.781992   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 12:10:33.828337   73662 cri.go:89] found id: ""
	I0603 12:10:33.828366   73662 logs.go:276] 0 containers: []
	W0603 12:10:33.828374   73662 logs.go:278] No container was found matching "coredns"
	I0603 12:10:33.828380   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 12:10:33.828433   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 12:10:33.868051   73662 cri.go:89] found id: ""
	I0603 12:10:33.868089   73662 logs.go:276] 0 containers: []
	W0603 12:10:33.868101   73662 logs.go:278] No container was found matching "kube-scheduler"
	I0603 12:10:33.868109   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 12:10:33.868168   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 12:10:33.913693   73662 cri.go:89] found id: ""
	I0603 12:10:33.913725   73662 logs.go:276] 0 containers: []
	W0603 12:10:33.913736   73662 logs.go:278] No container was found matching "kube-proxy"
	I0603 12:10:33.913743   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 12:10:33.913824   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 12:10:33.952082   73662 cri.go:89] found id: ""
	I0603 12:10:33.952111   73662 logs.go:276] 0 containers: []
	W0603 12:10:33.952122   73662 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 12:10:33.952129   73662 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 12:10:33.952183   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 12:10:33.994921   73662 cri.go:89] found id: ""
	I0603 12:10:33.994944   73662 logs.go:276] 0 containers: []
	W0603 12:10:33.994952   73662 logs.go:278] No container was found matching "kindnet"
	I0603 12:10:33.994959   73662 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 12:10:33.995008   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 12:10:34.033315   73662 cri.go:89] found id: ""
	I0603 12:10:34.033346   73662 logs.go:276] 0 containers: []
	W0603 12:10:34.033357   73662 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 12:10:34.033368   73662 logs.go:123] Gathering logs for kubelet ...
	I0603 12:10:34.033381   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 12:10:34.087719   73662 logs.go:123] Gathering logs for dmesg ...
	I0603 12:10:34.087746   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 12:10:34.101109   73662 logs.go:123] Gathering logs for describe nodes ...
	I0603 12:10:34.101134   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 12:10:34.180100   73662 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 12:10:34.180121   73662 logs.go:123] Gathering logs for CRI-O ...
	I0603 12:10:34.180135   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 12:10:34.255838   73662 logs.go:123] Gathering logs for container status ...
	I0603 12:10:34.255870   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 12:10:32.583080   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:10:35.081454   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:10:35.113238   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:10:37.611978   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:10:35.668549   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:10:38.166687   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:10:36.800845   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:10:36.815775   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 12:10:36.815834   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 12:10:36.849970   73662 cri.go:89] found id: ""
	I0603 12:10:36.849999   73662 logs.go:276] 0 containers: []
	W0603 12:10:36.850009   73662 logs.go:278] No container was found matching "kube-apiserver"
	I0603 12:10:36.850015   73662 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 12:10:36.850063   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 12:10:36.886418   73662 cri.go:89] found id: ""
	I0603 12:10:36.886448   73662 logs.go:276] 0 containers: []
	W0603 12:10:36.886456   73662 logs.go:278] No container was found matching "etcd"
	I0603 12:10:36.886461   73662 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 12:10:36.886506   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 12:10:36.919671   73662 cri.go:89] found id: ""
	I0603 12:10:36.919696   73662 logs.go:276] 0 containers: []
	W0603 12:10:36.919703   73662 logs.go:278] No container was found matching "coredns"
	I0603 12:10:36.919710   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 12:10:36.919766   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 12:10:36.954412   73662 cri.go:89] found id: ""
	I0603 12:10:36.954436   73662 logs.go:276] 0 containers: []
	W0603 12:10:36.954446   73662 logs.go:278] No container was found matching "kube-scheduler"
	I0603 12:10:36.954453   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 12:10:36.954513   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 12:10:36.989805   73662 cri.go:89] found id: ""
	I0603 12:10:36.989836   73662 logs.go:276] 0 containers: []
	W0603 12:10:36.989848   73662 logs.go:278] No container was found matching "kube-proxy"
	I0603 12:10:36.989856   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 12:10:36.989930   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 12:10:37.023883   73662 cri.go:89] found id: ""
	I0603 12:10:37.023913   73662 logs.go:276] 0 containers: []
	W0603 12:10:37.023922   73662 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 12:10:37.023930   73662 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 12:10:37.023995   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 12:10:37.058617   73662 cri.go:89] found id: ""
	I0603 12:10:37.058646   73662 logs.go:276] 0 containers: []
	W0603 12:10:37.058654   73662 logs.go:278] No container was found matching "kindnet"
	I0603 12:10:37.058661   73662 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 12:10:37.058719   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 12:10:37.093143   73662 cri.go:89] found id: ""
	I0603 12:10:37.093167   73662 logs.go:276] 0 containers: []
	W0603 12:10:37.093177   73662 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 12:10:37.093192   73662 logs.go:123] Gathering logs for container status ...
	I0603 12:10:37.093208   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 12:10:37.133117   73662 logs.go:123] Gathering logs for kubelet ...
	I0603 12:10:37.133147   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 12:10:37.188143   73662 logs.go:123] Gathering logs for dmesg ...
	I0603 12:10:37.188174   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 12:10:37.202654   73662 logs.go:123] Gathering logs for describe nodes ...
	I0603 12:10:37.202687   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 12:10:37.276401   73662 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 12:10:37.276429   73662 logs.go:123] Gathering logs for CRI-O ...
	I0603 12:10:37.276443   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 12:10:39.855590   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:10:39.870119   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 12:10:39.870189   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 12:10:39.907496   73662 cri.go:89] found id: ""
	I0603 12:10:39.907527   73662 logs.go:276] 0 containers: []
	W0603 12:10:39.907537   73662 logs.go:278] No container was found matching "kube-apiserver"
	I0603 12:10:39.907545   73662 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 12:10:39.907607   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 12:10:39.942745   73662 cri.go:89] found id: ""
	I0603 12:10:39.942774   73662 logs.go:276] 0 containers: []
	W0603 12:10:39.942784   73662 logs.go:278] No container was found matching "etcd"
	I0603 12:10:39.942791   73662 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 12:10:39.942853   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 12:10:39.981620   73662 cri.go:89] found id: ""
	I0603 12:10:39.981649   73662 logs.go:276] 0 containers: []
	W0603 12:10:39.981660   73662 logs.go:278] No container was found matching "coredns"
	I0603 12:10:39.981667   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 12:10:39.981718   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 12:10:40.020121   73662 cri.go:89] found id: ""
	I0603 12:10:40.020155   73662 logs.go:276] 0 containers: []
	W0603 12:10:40.020167   73662 logs.go:278] No container was found matching "kube-scheduler"
	I0603 12:10:40.020175   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 12:10:40.020240   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 12:10:40.059547   73662 cri.go:89] found id: ""
	I0603 12:10:40.059580   73662 logs.go:276] 0 containers: []
	W0603 12:10:40.059591   73662 logs.go:278] No container was found matching "kube-proxy"
	I0603 12:10:40.059598   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 12:10:40.059659   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 12:10:37.082294   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:10:39.581774   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:10:39.614702   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:10:42.112933   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:10:44.113960   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:10:40.167350   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:10:42.667457   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:10:40.097365   73662 cri.go:89] found id: ""
	I0603 12:10:40.097386   73662 logs.go:276] 0 containers: []
	W0603 12:10:40.097393   73662 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 12:10:40.097400   73662 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 12:10:40.097441   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 12:10:40.132635   73662 cri.go:89] found id: ""
	I0603 12:10:40.132657   73662 logs.go:276] 0 containers: []
	W0603 12:10:40.132664   73662 logs.go:278] No container was found matching "kindnet"
	I0603 12:10:40.132670   73662 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 12:10:40.132725   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 12:10:40.165849   73662 cri.go:89] found id: ""
	I0603 12:10:40.165875   73662 logs.go:276] 0 containers: []
	W0603 12:10:40.165885   73662 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 12:10:40.165895   73662 logs.go:123] Gathering logs for kubelet ...
	I0603 12:10:40.165910   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 12:10:40.218842   73662 logs.go:123] Gathering logs for dmesg ...
	I0603 12:10:40.218871   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 12:10:40.232800   73662 logs.go:123] Gathering logs for describe nodes ...
	I0603 12:10:40.232825   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 12:10:40.300026   73662 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 12:10:40.300050   73662 logs.go:123] Gathering logs for CRI-O ...
	I0603 12:10:40.300065   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 12:10:40.376985   73662 logs.go:123] Gathering logs for container status ...
	I0603 12:10:40.377017   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 12:10:42.916093   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:10:42.930099   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 12:10:42.930157   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 12:10:42.965541   73662 cri.go:89] found id: ""
	I0603 12:10:42.965565   73662 logs.go:276] 0 containers: []
	W0603 12:10:42.965575   73662 logs.go:278] No container was found matching "kube-apiserver"
	I0603 12:10:42.965582   73662 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 12:10:42.965639   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 12:10:43.000837   73662 cri.go:89] found id: ""
	I0603 12:10:43.000863   73662 logs.go:276] 0 containers: []
	W0603 12:10:43.000871   73662 logs.go:278] No container was found matching "etcd"
	I0603 12:10:43.000877   73662 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 12:10:43.000930   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 12:10:43.036557   73662 cri.go:89] found id: ""
	I0603 12:10:43.036593   73662 logs.go:276] 0 containers: []
	W0603 12:10:43.036605   73662 logs.go:278] No container was found matching "coredns"
	I0603 12:10:43.036626   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 12:10:43.036695   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 12:10:43.076479   73662 cri.go:89] found id: ""
	I0603 12:10:43.076507   73662 logs.go:276] 0 containers: []
	W0603 12:10:43.076515   73662 logs.go:278] No container was found matching "kube-scheduler"
	I0603 12:10:43.076521   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 12:10:43.076571   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 12:10:43.116301   73662 cri.go:89] found id: ""
	I0603 12:10:43.116328   73662 logs.go:276] 0 containers: []
	W0603 12:10:43.116338   73662 logs.go:278] No container was found matching "kube-proxy"
	I0603 12:10:43.116345   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 12:10:43.116393   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 12:10:43.150538   73662 cri.go:89] found id: ""
	I0603 12:10:43.150576   73662 logs.go:276] 0 containers: []
	W0603 12:10:43.150587   73662 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 12:10:43.150594   73662 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 12:10:43.150662   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 12:10:43.183948   73662 cri.go:89] found id: ""
	I0603 12:10:43.183976   73662 logs.go:276] 0 containers: []
	W0603 12:10:43.183987   73662 logs.go:278] No container was found matching "kindnet"
	I0603 12:10:43.183996   73662 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 12:10:43.184048   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 12:10:43.217610   73662 cri.go:89] found id: ""
	I0603 12:10:43.217636   73662 logs.go:276] 0 containers: []
	W0603 12:10:43.217643   73662 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 12:10:43.217651   73662 logs.go:123] Gathering logs for dmesg ...
	I0603 12:10:43.217669   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 12:10:43.231630   73662 logs.go:123] Gathering logs for describe nodes ...
	I0603 12:10:43.231655   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 12:10:43.298061   73662 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 12:10:43.298079   73662 logs.go:123] Gathering logs for CRI-O ...
	I0603 12:10:43.298092   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 12:10:43.388176   73662 logs.go:123] Gathering logs for container status ...
	I0603 12:10:43.388212   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 12:10:43.426277   73662 logs.go:123] Gathering logs for kubelet ...
	I0603 12:10:43.426303   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 12:10:42.081320   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:10:44.083275   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:10:46.612864   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:10:48.613666   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:10:45.166933   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:10:47.666784   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:10:45.977882   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:10:45.991655   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 12:10:45.991716   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 12:10:46.030455   73662 cri.go:89] found id: ""
	I0603 12:10:46.030483   73662 logs.go:276] 0 containers: []
	W0603 12:10:46.030492   73662 logs.go:278] No container was found matching "kube-apiserver"
	I0603 12:10:46.030497   73662 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 12:10:46.030542   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 12:10:46.065983   73662 cri.go:89] found id: ""
	I0603 12:10:46.066019   73662 logs.go:276] 0 containers: []
	W0603 12:10:46.066028   73662 logs.go:278] No container was found matching "etcd"
	I0603 12:10:46.066037   73662 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 12:10:46.066089   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 12:10:46.102788   73662 cri.go:89] found id: ""
	I0603 12:10:46.102816   73662 logs.go:276] 0 containers: []
	W0603 12:10:46.102824   73662 logs.go:278] No container was found matching "coredns"
	I0603 12:10:46.102830   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 12:10:46.102878   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 12:10:46.141588   73662 cri.go:89] found id: ""
	I0603 12:10:46.141615   73662 logs.go:276] 0 containers: []
	W0603 12:10:46.141625   73662 logs.go:278] No container was found matching "kube-scheduler"
	I0603 12:10:46.141634   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 12:10:46.141686   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 12:10:46.176109   73662 cri.go:89] found id: ""
	I0603 12:10:46.176133   73662 logs.go:276] 0 containers: []
	W0603 12:10:46.176140   73662 logs.go:278] No container was found matching "kube-proxy"
	I0603 12:10:46.176146   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 12:10:46.176199   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 12:10:46.211660   73662 cri.go:89] found id: ""
	I0603 12:10:46.211687   73662 logs.go:276] 0 containers: []
	W0603 12:10:46.211699   73662 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 12:10:46.211706   73662 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 12:10:46.211766   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 12:10:46.247703   73662 cri.go:89] found id: ""
	I0603 12:10:46.247724   73662 logs.go:276] 0 containers: []
	W0603 12:10:46.247731   73662 logs.go:278] No container was found matching "kindnet"
	I0603 12:10:46.247737   73662 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 12:10:46.247780   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 12:10:46.280647   73662 cri.go:89] found id: ""
	I0603 12:10:46.280666   73662 logs.go:276] 0 containers: []
	W0603 12:10:46.280673   73662 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 12:10:46.280681   73662 logs.go:123] Gathering logs for CRI-O ...
	I0603 12:10:46.280692   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 12:10:46.358965   73662 logs.go:123] Gathering logs for container status ...
	I0603 12:10:46.359007   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 12:10:46.402361   73662 logs.go:123] Gathering logs for kubelet ...
	I0603 12:10:46.402393   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 12:10:46.455346   73662 logs.go:123] Gathering logs for dmesg ...
	I0603 12:10:46.455378   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 12:10:46.468953   73662 logs.go:123] Gathering logs for describe nodes ...
	I0603 12:10:46.468979   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 12:10:46.543642   73662 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 12:10:49.044028   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:10:49.059160   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 12:10:49.059237   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 12:10:49.094538   73662 cri.go:89] found id: ""
	I0603 12:10:49.094562   73662 logs.go:276] 0 containers: []
	W0603 12:10:49.094572   73662 logs.go:278] No container was found matching "kube-apiserver"
	I0603 12:10:49.094579   73662 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 12:10:49.094639   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 12:10:49.152691   73662 cri.go:89] found id: ""
	I0603 12:10:49.152718   73662 logs.go:276] 0 containers: []
	W0603 12:10:49.152729   73662 logs.go:278] No container was found matching "etcd"
	I0603 12:10:49.152736   73662 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 12:10:49.152794   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 12:10:49.190598   73662 cri.go:89] found id: ""
	I0603 12:10:49.190624   73662 logs.go:276] 0 containers: []
	W0603 12:10:49.190632   73662 logs.go:278] No container was found matching "coredns"
	I0603 12:10:49.190637   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 12:10:49.190696   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 12:10:49.224713   73662 cri.go:89] found id: ""
	I0603 12:10:49.224735   73662 logs.go:276] 0 containers: []
	W0603 12:10:49.224746   73662 logs.go:278] No container was found matching "kube-scheduler"
	I0603 12:10:49.224752   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 12:10:49.224814   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 12:10:49.261124   73662 cri.go:89] found id: ""
	I0603 12:10:49.261151   73662 logs.go:276] 0 containers: []
	W0603 12:10:49.261159   73662 logs.go:278] No container was found matching "kube-proxy"
	I0603 12:10:49.261164   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 12:10:49.261218   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 12:10:49.297702   73662 cri.go:89] found id: ""
	I0603 12:10:49.297727   73662 logs.go:276] 0 containers: []
	W0603 12:10:49.297734   73662 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 12:10:49.297739   73662 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 12:10:49.297788   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 12:10:49.337168   73662 cri.go:89] found id: ""
	I0603 12:10:49.337194   73662 logs.go:276] 0 containers: []
	W0603 12:10:49.337202   73662 logs.go:278] No container was found matching "kindnet"
	I0603 12:10:49.337208   73662 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 12:10:49.337273   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 12:10:49.378570   73662 cri.go:89] found id: ""
	I0603 12:10:49.378594   73662 logs.go:276] 0 containers: []
	W0603 12:10:49.378602   73662 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 12:10:49.378611   73662 logs.go:123] Gathering logs for kubelet ...
	I0603 12:10:49.378623   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 12:10:49.431727   73662 logs.go:123] Gathering logs for dmesg ...
	I0603 12:10:49.431761   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 12:10:49.446359   73662 logs.go:123] Gathering logs for describe nodes ...
	I0603 12:10:49.446383   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 12:10:49.515520   73662 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 12:10:49.515539   73662 logs.go:123] Gathering logs for CRI-O ...
	I0603 12:10:49.515551   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 12:10:49.600658   73662 logs.go:123] Gathering logs for container status ...
	I0603 12:10:49.600697   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 12:10:46.580695   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:10:48.581909   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:10:51.111776   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:10:53.613132   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:10:50.171016   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:10:52.667473   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:10:52.146131   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:10:52.159370   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 12:10:52.159441   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 12:10:52.200541   73662 cri.go:89] found id: ""
	I0603 12:10:52.200571   73662 logs.go:276] 0 containers: []
	W0603 12:10:52.200578   73662 logs.go:278] No container was found matching "kube-apiserver"
	I0603 12:10:52.200583   73662 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 12:10:52.200643   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 12:10:52.243779   73662 cri.go:89] found id: ""
	I0603 12:10:52.243808   73662 logs.go:276] 0 containers: []
	W0603 12:10:52.243819   73662 logs.go:278] No container was found matching "etcd"
	I0603 12:10:52.243827   73662 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 12:10:52.243885   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 12:10:52.278098   73662 cri.go:89] found id: ""
	I0603 12:10:52.278133   73662 logs.go:276] 0 containers: []
	W0603 12:10:52.278142   73662 logs.go:278] No container was found matching "coredns"
	I0603 12:10:52.278148   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 12:10:52.278201   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 12:10:52.310844   73662 cri.go:89] found id: ""
	I0603 12:10:52.310873   73662 logs.go:276] 0 containers: []
	W0603 12:10:52.310884   73662 logs.go:278] No container was found matching "kube-scheduler"
	I0603 12:10:52.310892   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 12:10:52.310947   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 12:10:52.346131   73662 cri.go:89] found id: ""
	I0603 12:10:52.346160   73662 logs.go:276] 0 containers: []
	W0603 12:10:52.346170   73662 logs.go:278] No container was found matching "kube-proxy"
	I0603 12:10:52.346186   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 12:10:52.346252   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 12:10:52.383384   73662 cri.go:89] found id: ""
	I0603 12:10:52.383412   73662 logs.go:276] 0 containers: []
	W0603 12:10:52.383420   73662 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 12:10:52.383426   73662 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 12:10:52.383472   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 12:10:52.415110   73662 cri.go:89] found id: ""
	I0603 12:10:52.415141   73662 logs.go:276] 0 containers: []
	W0603 12:10:52.415152   73662 logs.go:278] No container was found matching "kindnet"
	I0603 12:10:52.415159   73662 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 12:10:52.415228   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 12:10:52.449473   73662 cri.go:89] found id: ""
	I0603 12:10:52.449503   73662 logs.go:276] 0 containers: []
	W0603 12:10:52.449511   73662 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 12:10:52.449520   73662 logs.go:123] Gathering logs for kubelet ...
	I0603 12:10:52.449535   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 12:10:52.501303   73662 logs.go:123] Gathering logs for dmesg ...
	I0603 12:10:52.501331   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 12:10:52.515125   73662 logs.go:123] Gathering logs for describe nodes ...
	I0603 12:10:52.515155   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 12:10:52.587250   73662 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 12:10:52.587273   73662 logs.go:123] Gathering logs for CRI-O ...
	I0603 12:10:52.587289   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 12:10:52.677387   73662 logs.go:123] Gathering logs for container status ...
	I0603 12:10:52.677417   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 12:10:51.081196   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:10:53.081389   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:10:55.082150   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:10:55.618759   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:10:58.112642   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:10:55.166477   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:10:57.666759   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:10:59.667117   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:10:55.216868   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:10:55.231081   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 12:10:55.231148   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 12:10:55.269023   73662 cri.go:89] found id: ""
	I0603 12:10:55.269060   73662 logs.go:276] 0 containers: []
	W0603 12:10:55.269071   73662 logs.go:278] No container was found matching "kube-apiserver"
	I0603 12:10:55.269078   73662 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 12:10:55.269140   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 12:10:55.304553   73662 cri.go:89] found id: ""
	I0603 12:10:55.304584   73662 logs.go:276] 0 containers: []
	W0603 12:10:55.304594   73662 logs.go:278] No container was found matching "etcd"
	I0603 12:10:55.304602   73662 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 12:10:55.304653   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 12:10:55.337397   73662 cri.go:89] found id: ""
	I0603 12:10:55.337417   73662 logs.go:276] 0 containers: []
	W0603 12:10:55.337426   73662 logs.go:278] No container was found matching "coredns"
	I0603 12:10:55.337431   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 12:10:55.337477   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 12:10:55.378338   73662 cri.go:89] found id: ""
	I0603 12:10:55.378360   73662 logs.go:276] 0 containers: []
	W0603 12:10:55.378369   73662 logs.go:278] No container was found matching "kube-scheduler"
	I0603 12:10:55.378376   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 12:10:55.378434   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 12:10:55.419463   73662 cri.go:89] found id: ""
	I0603 12:10:55.419488   73662 logs.go:276] 0 containers: []
	W0603 12:10:55.419506   73662 logs.go:278] No container was found matching "kube-proxy"
	I0603 12:10:55.419513   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 12:10:55.419570   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 12:10:55.459581   73662 cri.go:89] found id: ""
	I0603 12:10:55.459609   73662 logs.go:276] 0 containers: []
	W0603 12:10:55.459616   73662 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 12:10:55.459622   73662 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 12:10:55.459686   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 12:10:55.496314   73662 cri.go:89] found id: ""
	I0603 12:10:55.496345   73662 logs.go:276] 0 containers: []
	W0603 12:10:55.496355   73662 logs.go:278] No container was found matching "kindnet"
	I0603 12:10:55.496362   73662 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 12:10:55.496412   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 12:10:55.539728   73662 cri.go:89] found id: ""
	I0603 12:10:55.539756   73662 logs.go:276] 0 containers: []
	W0603 12:10:55.539768   73662 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 12:10:55.539779   73662 logs.go:123] Gathering logs for container status ...
	I0603 12:10:55.539794   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 12:10:55.603474   73662 logs.go:123] Gathering logs for kubelet ...
	I0603 12:10:55.603502   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 12:10:55.668368   73662 logs.go:123] Gathering logs for dmesg ...
	I0603 12:10:55.668405   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 12:10:55.683121   73662 logs.go:123] Gathering logs for describe nodes ...
	I0603 12:10:55.683151   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 12:10:55.751059   73662 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 12:10:55.751096   73662 logs.go:123] Gathering logs for CRI-O ...
	I0603 12:10:55.751113   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 12:10:58.325699   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:10:58.340070   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 12:10:58.340142   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 12:10:58.376205   73662 cri.go:89] found id: ""
	I0603 12:10:58.376240   73662 logs.go:276] 0 containers: []
	W0603 12:10:58.376251   73662 logs.go:278] No container was found matching "kube-apiserver"
	I0603 12:10:58.376258   73662 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 12:10:58.376325   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 12:10:58.409491   73662 cri.go:89] found id: ""
	I0603 12:10:58.409521   73662 logs.go:276] 0 containers: []
	W0603 12:10:58.409533   73662 logs.go:278] No container was found matching "etcd"
	I0603 12:10:58.409540   73662 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 12:10:58.409601   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 12:10:58.442738   73662 cri.go:89] found id: ""
	I0603 12:10:58.442768   73662 logs.go:276] 0 containers: []
	W0603 12:10:58.442779   73662 logs.go:278] No container was found matching "coredns"
	I0603 12:10:58.442787   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 12:10:58.442849   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 12:10:58.478390   73662 cri.go:89] found id: ""
	I0603 12:10:58.478417   73662 logs.go:276] 0 containers: []
	W0603 12:10:58.478425   73662 logs.go:278] No container was found matching "kube-scheduler"
	I0603 12:10:58.478430   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 12:10:58.478477   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 12:10:58.513652   73662 cri.go:89] found id: ""
	I0603 12:10:58.513683   73662 logs.go:276] 0 containers: []
	W0603 12:10:58.513694   73662 logs.go:278] No container was found matching "kube-proxy"
	I0603 12:10:58.513702   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 12:10:58.513762   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 12:10:58.546490   73662 cri.go:89] found id: ""
	I0603 12:10:58.546513   73662 logs.go:276] 0 containers: []
	W0603 12:10:58.546526   73662 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 12:10:58.546532   73662 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 12:10:58.546578   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 12:10:58.585772   73662 cri.go:89] found id: ""
	I0603 12:10:58.585796   73662 logs.go:276] 0 containers: []
	W0603 12:10:58.585803   73662 logs.go:278] No container was found matching "kindnet"
	I0603 12:10:58.585809   73662 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 12:10:58.585852   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 12:10:58.623108   73662 cri.go:89] found id: ""
	I0603 12:10:58.623126   73662 logs.go:276] 0 containers: []
	W0603 12:10:58.623133   73662 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 12:10:58.623140   73662 logs.go:123] Gathering logs for dmesg ...
	I0603 12:10:58.623150   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 12:10:58.636866   73662 logs.go:123] Gathering logs for describe nodes ...
	I0603 12:10:58.636892   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 12:10:58.709496   73662 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 12:10:58.709537   73662 logs.go:123] Gathering logs for CRI-O ...
	I0603 12:10:58.709549   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 12:10:58.785370   73662 logs.go:123] Gathering logs for container status ...
	I0603 12:10:58.785401   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 12:10:58.826456   73662 logs.go:123] Gathering logs for kubelet ...
	I0603 12:10:58.826482   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 12:10:57.581002   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:10:59.582082   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:11:00.114280   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:11:02.114479   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:11:01.668216   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:11:04.165821   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:11:01.379144   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:11:01.396357   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 12:11:01.396423   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 12:11:01.459762   73662 cri.go:89] found id: ""
	I0603 12:11:01.459798   73662 logs.go:276] 0 containers: []
	W0603 12:11:01.459809   73662 logs.go:278] No container was found matching "kube-apiserver"
	I0603 12:11:01.459817   73662 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 12:11:01.459877   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 12:11:01.517986   73662 cri.go:89] found id: ""
	I0603 12:11:01.518019   73662 logs.go:276] 0 containers: []
	W0603 12:11:01.518034   73662 logs.go:278] No container was found matching "etcd"
	I0603 12:11:01.518048   73662 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 12:11:01.518111   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 12:11:01.550571   73662 cri.go:89] found id: ""
	I0603 12:11:01.550599   73662 logs.go:276] 0 containers: []
	W0603 12:11:01.550611   73662 logs.go:278] No container was found matching "coredns"
	I0603 12:11:01.550618   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 12:11:01.550670   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 12:11:01.585185   73662 cri.go:89] found id: ""
	I0603 12:11:01.585210   73662 logs.go:276] 0 containers: []
	W0603 12:11:01.585221   73662 logs.go:278] No container was found matching "kube-scheduler"
	I0603 12:11:01.585230   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 12:11:01.585288   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 12:11:01.629706   73662 cri.go:89] found id: ""
	I0603 12:11:01.629734   73662 logs.go:276] 0 containers: []
	W0603 12:11:01.629744   73662 logs.go:278] No container was found matching "kube-proxy"
	I0603 12:11:01.629751   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 12:11:01.629815   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 12:11:01.667272   73662 cri.go:89] found id: ""
	I0603 12:11:01.667310   73662 logs.go:276] 0 containers: []
	W0603 12:11:01.667321   73662 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 12:11:01.667332   73662 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 12:11:01.667390   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 12:11:01.703379   73662 cri.go:89] found id: ""
	I0603 12:11:01.703409   73662 logs.go:276] 0 containers: []
	W0603 12:11:01.703419   73662 logs.go:278] No container was found matching "kindnet"
	I0603 12:11:01.703426   73662 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 12:11:01.703480   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 12:11:01.737944   73662 cri.go:89] found id: ""
	I0603 12:11:01.737972   73662 logs.go:276] 0 containers: []
	W0603 12:11:01.737979   73662 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 12:11:01.737987   73662 logs.go:123] Gathering logs for kubelet ...
	I0603 12:11:01.737997   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 12:11:01.786485   73662 logs.go:123] Gathering logs for dmesg ...
	I0603 12:11:01.786513   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 12:11:01.799760   73662 logs.go:123] Gathering logs for describe nodes ...
	I0603 12:11:01.799783   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 12:11:01.875617   73662 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 12:11:01.875639   73662 logs.go:123] Gathering logs for CRI-O ...
	I0603 12:11:01.875651   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 12:11:01.963485   73662 logs.go:123] Gathering logs for container status ...
	I0603 12:11:01.963529   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 12:11:04.507299   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:11:04.522138   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 12:11:04.522190   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 12:11:04.558117   73662 cri.go:89] found id: ""
	I0603 12:11:04.558145   73662 logs.go:276] 0 containers: []
	W0603 12:11:04.558155   73662 logs.go:278] No container was found matching "kube-apiserver"
	I0603 12:11:04.558162   73662 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 12:11:04.558222   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 12:11:04.595700   73662 cri.go:89] found id: ""
	I0603 12:11:04.595726   73662 logs.go:276] 0 containers: []
	W0603 12:11:04.595737   73662 logs.go:278] No container was found matching "etcd"
	I0603 12:11:04.595748   73662 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 12:11:04.595806   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 12:11:04.631793   73662 cri.go:89] found id: ""
	I0603 12:11:04.631823   73662 logs.go:276] 0 containers: []
	W0603 12:11:04.631832   73662 logs.go:278] No container was found matching "coredns"
	I0603 12:11:04.631839   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 12:11:04.631897   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 12:11:04.666362   73662 cri.go:89] found id: ""
	I0603 12:11:04.666392   73662 logs.go:276] 0 containers: []
	W0603 12:11:04.666401   73662 logs.go:278] No container was found matching "kube-scheduler"
	I0603 12:11:04.666408   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 12:11:04.666471   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 12:11:04.701446   73662 cri.go:89] found id: ""
	I0603 12:11:04.701476   73662 logs.go:276] 0 containers: []
	W0603 12:11:04.701487   73662 logs.go:278] No container was found matching "kube-proxy"
	I0603 12:11:04.701495   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 12:11:04.701555   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 12:11:04.736290   73662 cri.go:89] found id: ""
	I0603 12:11:04.736311   73662 logs.go:276] 0 containers: []
	W0603 12:11:04.736322   73662 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 12:11:04.736330   73662 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 12:11:04.736389   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 12:11:04.769705   73662 cri.go:89] found id: ""
	I0603 12:11:04.769725   73662 logs.go:276] 0 containers: []
	W0603 12:11:04.769732   73662 logs.go:278] No container was found matching "kindnet"
	I0603 12:11:04.769737   73662 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 12:11:04.769779   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 12:11:04.804875   73662 cri.go:89] found id: ""
	I0603 12:11:04.804898   73662 logs.go:276] 0 containers: []
	W0603 12:11:04.804909   73662 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 12:11:04.804927   73662 logs.go:123] Gathering logs for dmesg ...
	I0603 12:11:04.804941   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 12:11:04.818083   73662 logs.go:123] Gathering logs for describe nodes ...
	I0603 12:11:04.818112   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 12:11:04.890971   73662 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 12:11:04.891002   73662 logs.go:123] Gathering logs for CRI-O ...
	I0603 12:11:04.891017   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 12:11:04.970710   73662 logs.go:123] Gathering logs for container status ...
	I0603 12:11:04.970755   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 12:11:05.012247   73662 logs.go:123] Gathering logs for kubelet ...
	I0603 12:11:05.012282   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 12:11:01.582124   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:11:03.586504   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:11:04.612589   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:11:07.114578   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:11:06.166693   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:11:08.166916   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:11:07.567462   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:11:07.583533   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 12:11:07.583628   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 12:11:07.621078   73662 cri.go:89] found id: ""
	I0603 12:11:07.621102   73662 logs.go:276] 0 containers: []
	W0603 12:11:07.621110   73662 logs.go:278] No container was found matching "kube-apiserver"
	I0603 12:11:07.621119   73662 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 12:11:07.621178   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 12:11:07.656011   73662 cri.go:89] found id: ""
	I0603 12:11:07.656040   73662 logs.go:276] 0 containers: []
	W0603 12:11:07.656049   73662 logs.go:278] No container was found matching "etcd"
	I0603 12:11:07.656056   73662 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 12:11:07.656117   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 12:11:07.694711   73662 cri.go:89] found id: ""
	I0603 12:11:07.694741   73662 logs.go:276] 0 containers: []
	W0603 12:11:07.694751   73662 logs.go:278] No container was found matching "coredns"
	I0603 12:11:07.694759   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 12:11:07.694816   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 12:11:07.731139   73662 cri.go:89] found id: ""
	I0603 12:11:07.731168   73662 logs.go:276] 0 containers: []
	W0603 12:11:07.731178   73662 logs.go:278] No container was found matching "kube-scheduler"
	I0603 12:11:07.731185   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 12:11:07.731242   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 12:11:07.769734   73662 cri.go:89] found id: ""
	I0603 12:11:07.769763   73662 logs.go:276] 0 containers: []
	W0603 12:11:07.769772   73662 logs.go:278] No container was found matching "kube-proxy"
	I0603 12:11:07.769780   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 12:11:07.769838   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 12:11:07.804874   73662 cri.go:89] found id: ""
	I0603 12:11:07.804905   73662 logs.go:276] 0 containers: []
	W0603 12:11:07.804917   73662 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 12:11:07.804925   73662 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 12:11:07.804984   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 12:11:07.843901   73662 cri.go:89] found id: ""
	I0603 12:11:07.843931   73662 logs.go:276] 0 containers: []
	W0603 12:11:07.843941   73662 logs.go:278] No container was found matching "kindnet"
	I0603 12:11:07.843949   73662 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 12:11:07.844001   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 12:11:07.878763   73662 cri.go:89] found id: ""
	I0603 12:11:07.878792   73662 logs.go:276] 0 containers: []
	W0603 12:11:07.878803   73662 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 12:11:07.878814   73662 logs.go:123] Gathering logs for CRI-O ...
	I0603 12:11:07.878829   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 12:11:07.958064   73662 logs.go:123] Gathering logs for container status ...
	I0603 12:11:07.958095   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 12:11:08.000115   73662 logs.go:123] Gathering logs for kubelet ...
	I0603 12:11:08.000144   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 12:11:08.057652   73662 logs.go:123] Gathering logs for dmesg ...
	I0603 12:11:08.057685   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 12:11:08.071731   73662 logs.go:123] Gathering logs for describe nodes ...
	I0603 12:11:08.071759   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 12:11:08.148184   73662 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 12:11:06.080555   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:11:08.080661   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:11:10.081918   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:11:09.613756   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:11:12.112723   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:11:14.114236   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:11:10.167662   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:11:12.666872   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:11:10.649338   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:11:10.662870   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 12:11:10.662945   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 12:11:10.698461   73662 cri.go:89] found id: ""
	I0603 12:11:10.698492   73662 logs.go:276] 0 containers: []
	W0603 12:11:10.698500   73662 logs.go:278] No container was found matching "kube-apiserver"
	I0603 12:11:10.698507   73662 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 12:11:10.698560   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 12:11:10.733955   73662 cri.go:89] found id: ""
	I0603 12:11:10.733987   73662 logs.go:276] 0 containers: []
	W0603 12:11:10.733999   73662 logs.go:278] No container was found matching "etcd"
	I0603 12:11:10.734006   73662 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 12:11:10.734064   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 12:11:10.769578   73662 cri.go:89] found id: ""
	I0603 12:11:10.769605   73662 logs.go:276] 0 containers: []
	W0603 12:11:10.769615   73662 logs.go:278] No container was found matching "coredns"
	I0603 12:11:10.769622   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 12:11:10.769682   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 12:11:10.803353   73662 cri.go:89] found id: ""
	I0603 12:11:10.803382   73662 logs.go:276] 0 containers: []
	W0603 12:11:10.803393   73662 logs.go:278] No container was found matching "kube-scheduler"
	I0603 12:11:10.803401   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 12:11:10.803459   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 12:11:10.839791   73662 cri.go:89] found id: ""
	I0603 12:11:10.839819   73662 logs.go:276] 0 containers: []
	W0603 12:11:10.839828   73662 logs.go:278] No container was found matching "kube-proxy"
	I0603 12:11:10.839835   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 12:11:10.839894   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 12:11:10.878216   73662 cri.go:89] found id: ""
	I0603 12:11:10.878249   73662 logs.go:276] 0 containers: []
	W0603 12:11:10.878259   73662 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 12:11:10.878265   73662 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 12:11:10.878333   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 12:11:10.912606   73662 cri.go:89] found id: ""
	I0603 12:11:10.912637   73662 logs.go:276] 0 containers: []
	W0603 12:11:10.912645   73662 logs.go:278] No container was found matching "kindnet"
	I0603 12:11:10.912650   73662 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 12:11:10.912709   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 12:11:10.946669   73662 cri.go:89] found id: ""
	I0603 12:11:10.946699   73662 logs.go:276] 0 containers: []
	W0603 12:11:10.946708   73662 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 12:11:10.946718   73662 logs.go:123] Gathering logs for kubelet ...
	I0603 12:11:10.946733   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 12:11:10.996044   73662 logs.go:123] Gathering logs for dmesg ...
	I0603 12:11:10.996077   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 12:11:11.009522   73662 logs.go:123] Gathering logs for describe nodes ...
	I0603 12:11:11.009573   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 12:11:11.081623   73662 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 12:11:11.081642   73662 logs.go:123] Gathering logs for CRI-O ...
	I0603 12:11:11.081652   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 12:11:11.162795   73662 logs.go:123] Gathering logs for container status ...
	I0603 12:11:11.162826   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 12:11:13.704492   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:11:13.718870   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 12:11:13.718939   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 12:11:13.757818   73662 cri.go:89] found id: ""
	I0603 12:11:13.757842   73662 logs.go:276] 0 containers: []
	W0603 12:11:13.757850   73662 logs.go:278] No container was found matching "kube-apiserver"
	I0603 12:11:13.757859   73662 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 12:11:13.757904   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 12:11:13.791959   73662 cri.go:89] found id: ""
	I0603 12:11:13.791989   73662 logs.go:276] 0 containers: []
	W0603 12:11:13.792003   73662 logs.go:278] No container was found matching "etcd"
	I0603 12:11:13.792010   73662 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 12:11:13.792072   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 12:11:13.827443   73662 cri.go:89] found id: ""
	I0603 12:11:13.827471   73662 logs.go:276] 0 containers: []
	W0603 12:11:13.827478   73662 logs.go:278] No container was found matching "coredns"
	I0603 12:11:13.827484   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 12:11:13.827538   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 12:11:13.862237   73662 cri.go:89] found id: ""
	I0603 12:11:13.862267   73662 logs.go:276] 0 containers: []
	W0603 12:11:13.862277   73662 logs.go:278] No container was found matching "kube-scheduler"
	I0603 12:11:13.862284   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 12:11:13.862375   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 12:11:13.898873   73662 cri.go:89] found id: ""
	I0603 12:11:13.898906   73662 logs.go:276] 0 containers: []
	W0603 12:11:13.898917   73662 logs.go:278] No container was found matching "kube-proxy"
	I0603 12:11:13.898924   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 12:11:13.898981   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 12:11:13.932870   73662 cri.go:89] found id: ""
	I0603 12:11:13.932899   73662 logs.go:276] 0 containers: []
	W0603 12:11:13.932908   73662 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 12:11:13.932913   73662 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 12:11:13.932960   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 12:11:13.968575   73662 cri.go:89] found id: ""
	I0603 12:11:13.968597   73662 logs.go:276] 0 containers: []
	W0603 12:11:13.968605   73662 logs.go:278] No container was found matching "kindnet"
	I0603 12:11:13.968610   73662 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 12:11:13.968663   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 12:11:14.007252   73662 cri.go:89] found id: ""
	I0603 12:11:14.007281   73662 logs.go:276] 0 containers: []
	W0603 12:11:14.007291   73662 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 12:11:14.007302   73662 logs.go:123] Gathering logs for describe nodes ...
	I0603 12:11:14.007317   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 12:11:14.080572   73662 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 12:11:14.080595   73662 logs.go:123] Gathering logs for CRI-O ...
	I0603 12:11:14.080607   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 12:11:14.171851   73662 logs.go:123] Gathering logs for container status ...
	I0603 12:11:14.171886   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 12:11:14.212697   73662 logs.go:123] Gathering logs for kubelet ...
	I0603 12:11:14.212726   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 12:11:14.264925   73662 logs.go:123] Gathering logs for dmesg ...
	I0603 12:11:14.264958   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 12:11:12.580430   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:11:14.581407   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:11:16.615592   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:11:19.111956   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:11:15.166724   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:11:17.667851   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:11:16.780783   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:11:16.795029   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 12:11:16.795127   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 12:11:16.833178   73662 cri.go:89] found id: ""
	I0603 12:11:16.833208   73662 logs.go:276] 0 containers: []
	W0603 12:11:16.833218   73662 logs.go:278] No container was found matching "kube-apiserver"
	I0603 12:11:16.833226   73662 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 12:11:16.833287   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 12:11:16.869318   73662 cri.go:89] found id: ""
	I0603 12:11:16.869349   73662 logs.go:276] 0 containers: []
	W0603 12:11:16.869359   73662 logs.go:278] No container was found matching "etcd"
	I0603 12:11:16.869366   73662 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 12:11:16.869429   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 12:11:16.902810   73662 cri.go:89] found id: ""
	I0603 12:11:16.902836   73662 logs.go:276] 0 containers: []
	W0603 12:11:16.902843   73662 logs.go:278] No container was found matching "coredns"
	I0603 12:11:16.902849   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 12:11:16.902893   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 12:11:16.936404   73662 cri.go:89] found id: ""
	I0603 12:11:16.936432   73662 logs.go:276] 0 containers: []
	W0603 12:11:16.936442   73662 logs.go:278] No container was found matching "kube-scheduler"
	I0603 12:11:16.936449   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 12:11:16.936505   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 12:11:16.971056   73662 cri.go:89] found id: ""
	I0603 12:11:16.971083   73662 logs.go:276] 0 containers: []
	W0603 12:11:16.971092   73662 logs.go:278] No container was found matching "kube-proxy"
	I0603 12:11:16.971097   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 12:11:16.971147   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 12:11:17.005389   73662 cri.go:89] found id: ""
	I0603 12:11:17.005416   73662 logs.go:276] 0 containers: []
	W0603 12:11:17.005427   73662 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 12:11:17.005435   73662 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 12:11:17.005491   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 12:11:17.047093   73662 cri.go:89] found id: ""
	I0603 12:11:17.047118   73662 logs.go:276] 0 containers: []
	W0603 12:11:17.047126   73662 logs.go:278] No container was found matching "kindnet"
	I0603 12:11:17.047131   73662 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 12:11:17.047187   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 12:11:17.093020   73662 cri.go:89] found id: ""
	I0603 12:11:17.093049   73662 logs.go:276] 0 containers: []
	W0603 12:11:17.093057   73662 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 12:11:17.093068   73662 logs.go:123] Gathering logs for CRI-O ...
	I0603 12:11:17.093081   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 12:11:17.177970   73662 logs.go:123] Gathering logs for container status ...
	I0603 12:11:17.178001   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 12:11:17.219530   73662 logs.go:123] Gathering logs for kubelet ...
	I0603 12:11:17.219563   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 12:11:17.272776   73662 logs.go:123] Gathering logs for dmesg ...
	I0603 12:11:17.272808   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 12:11:17.287573   73662 logs.go:123] Gathering logs for describe nodes ...
	I0603 12:11:17.287610   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 12:11:17.361020   73662 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 12:11:19.861599   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:11:19.874988   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 12:11:19.875075   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 12:11:19.910641   73662 cri.go:89] found id: ""
	I0603 12:11:19.910664   73662 logs.go:276] 0 containers: []
	W0603 12:11:19.910672   73662 logs.go:278] No container was found matching "kube-apiserver"
	I0603 12:11:19.910678   73662 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 12:11:19.910738   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 12:11:19.947432   73662 cri.go:89] found id: ""
	I0603 12:11:19.947457   73662 logs.go:276] 0 containers: []
	W0603 12:11:19.947465   73662 logs.go:278] No container was found matching "etcd"
	I0603 12:11:19.947475   73662 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 12:11:19.947528   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 12:11:19.986254   73662 cri.go:89] found id: ""
	I0603 12:11:19.986284   73662 logs.go:276] 0 containers: []
	W0603 12:11:19.986296   73662 logs.go:278] No container was found matching "coredns"
	I0603 12:11:19.986303   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 12:11:19.986370   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 12:11:20.022447   73662 cri.go:89] found id: ""
	I0603 12:11:20.022477   73662 logs.go:276] 0 containers: []
	W0603 12:11:20.022488   73662 logs.go:278] No container was found matching "kube-scheduler"
	I0603 12:11:20.022496   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 12:11:20.022555   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 12:11:20.056731   73662 cri.go:89] found id: ""
	I0603 12:11:20.056755   73662 logs.go:276] 0 containers: []
	W0603 12:11:20.056763   73662 logs.go:278] No container was found matching "kube-proxy"
	I0603 12:11:20.056769   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 12:11:20.056819   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 12:11:17.081290   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:11:19.581301   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:11:21.113769   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:11:23.106545   73294 pod_ready.go:81] duration metric: took 4m0.000411778s for pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace to be "Ready" ...
	E0603 12:11:23.106575   73294 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace to be "Ready" (will not retry!)
	I0603 12:11:23.106597   73294 pod_ready.go:38] duration metric: took 4m5.898372288s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0603 12:11:23.106627   73294 kubeadm.go:591] duration metric: took 4m13.660386139s to restartPrimaryControlPlane
	W0603 12:11:23.106692   73294 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0603 12:11:23.106750   73294 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0603 12:11:20.168291   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:11:22.667983   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:11:24.668130   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:11:20.095511   73662 cri.go:89] found id: ""
	I0603 12:11:20.095537   73662 logs.go:276] 0 containers: []
	W0603 12:11:20.095547   73662 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 12:11:20.095552   73662 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 12:11:20.095595   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 12:11:20.130562   73662 cri.go:89] found id: ""
	I0603 12:11:20.130581   73662 logs.go:276] 0 containers: []
	W0603 12:11:20.130589   73662 logs.go:278] No container was found matching "kindnet"
	I0603 12:11:20.130594   73662 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 12:11:20.130648   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 12:11:20.165231   73662 cri.go:89] found id: ""
	I0603 12:11:20.165257   73662 logs.go:276] 0 containers: []
	W0603 12:11:20.165267   73662 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 12:11:20.165276   73662 logs.go:123] Gathering logs for kubelet ...
	I0603 12:11:20.165290   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 12:11:20.221790   73662 logs.go:123] Gathering logs for dmesg ...
	I0603 12:11:20.221826   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 12:11:20.237415   73662 logs.go:123] Gathering logs for describe nodes ...
	I0603 12:11:20.237440   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 12:11:20.310615   73662 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 12:11:20.310641   73662 logs.go:123] Gathering logs for CRI-O ...
	I0603 12:11:20.310657   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 12:11:20.385667   73662 logs.go:123] Gathering logs for container status ...
	I0603 12:11:20.385701   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 12:11:22.925911   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:11:22.938958   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 12:11:22.939047   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 12:11:22.981898   73662 cri.go:89] found id: ""
	I0603 12:11:22.981928   73662 logs.go:276] 0 containers: []
	W0603 12:11:22.981939   73662 logs.go:278] No container was found matching "kube-apiserver"
	I0603 12:11:22.981954   73662 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 12:11:22.982026   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 12:11:23.025590   73662 cri.go:89] found id: ""
	I0603 12:11:23.025624   73662 logs.go:276] 0 containers: []
	W0603 12:11:23.025632   73662 logs.go:278] No container was found matching "etcd"
	I0603 12:11:23.025638   73662 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 12:11:23.025691   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 12:11:23.072938   73662 cri.go:89] found id: ""
	I0603 12:11:23.072968   73662 logs.go:276] 0 containers: []
	W0603 12:11:23.072980   73662 logs.go:278] No container was found matching "coredns"
	I0603 12:11:23.072988   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 12:11:23.073057   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 12:11:23.114546   73662 cri.go:89] found id: ""
	I0603 12:11:23.114573   73662 logs.go:276] 0 containers: []
	W0603 12:11:23.114582   73662 logs.go:278] No container was found matching "kube-scheduler"
	I0603 12:11:23.114589   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 12:11:23.114654   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 12:11:23.152203   73662 cri.go:89] found id: ""
	I0603 12:11:23.152229   73662 logs.go:276] 0 containers: []
	W0603 12:11:23.152236   73662 logs.go:278] No container was found matching "kube-proxy"
	I0603 12:11:23.152242   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 12:11:23.152289   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 12:11:23.204179   73662 cri.go:89] found id: ""
	I0603 12:11:23.204228   73662 logs.go:276] 0 containers: []
	W0603 12:11:23.204240   73662 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 12:11:23.204247   73662 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 12:11:23.204308   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 12:11:23.244217   73662 cri.go:89] found id: ""
	I0603 12:11:23.244246   73662 logs.go:276] 0 containers: []
	W0603 12:11:23.244256   73662 logs.go:278] No container was found matching "kindnet"
	I0603 12:11:23.244264   73662 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 12:11:23.244326   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 12:11:23.286094   73662 cri.go:89] found id: ""
	I0603 12:11:23.286173   73662 logs.go:276] 0 containers: []
	W0603 12:11:23.286190   73662 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 12:11:23.286201   73662 logs.go:123] Gathering logs for kubelet ...
	I0603 12:11:23.286215   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 12:11:23.357802   73662 logs.go:123] Gathering logs for dmesg ...
	I0603 12:11:23.357850   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 12:11:23.376808   73662 logs.go:123] Gathering logs for describe nodes ...
	I0603 12:11:23.376839   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 12:11:23.470658   73662 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 12:11:23.470691   73662 logs.go:123] Gathering logs for CRI-O ...
	I0603 12:11:23.470705   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 12:11:23.584192   73662 logs.go:123] Gathering logs for container status ...
	I0603 12:11:23.584241   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 12:11:22.075519   73179 pod_ready.go:81] duration metric: took 4m0.000796038s for pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace to be "Ready" ...
	E0603 12:11:22.075561   73179 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace to be "Ready" (will not retry!)
	I0603 12:11:22.075598   73179 pod_ready.go:38] duration metric: took 4m12.795532428s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0603 12:11:22.075626   73179 kubeadm.go:591] duration metric: took 4m22.69078868s to restartPrimaryControlPlane
	W0603 12:11:22.075677   73179 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0603 12:11:22.075720   73179 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0603 12:11:27.170198   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:11:29.667670   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:11:26.132511   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:11:26.150549   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 12:11:26.150619   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 12:11:26.196791   73662 cri.go:89] found id: ""
	I0603 12:11:26.196817   73662 logs.go:276] 0 containers: []
	W0603 12:11:26.196827   73662 logs.go:278] No container was found matching "kube-apiserver"
	I0603 12:11:26.196834   73662 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 12:11:26.196912   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 12:11:26.233584   73662 cri.go:89] found id: ""
	I0603 12:11:26.233614   73662 logs.go:276] 0 containers: []
	W0603 12:11:26.233624   73662 logs.go:278] No container was found matching "etcd"
	I0603 12:11:26.233631   73662 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 12:11:26.233692   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 12:11:26.272648   73662 cri.go:89] found id: ""
	I0603 12:11:26.272677   73662 logs.go:276] 0 containers: []
	W0603 12:11:26.272688   73662 logs.go:278] No container was found matching "coredns"
	I0603 12:11:26.272696   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 12:11:26.272758   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 12:11:26.313775   73662 cri.go:89] found id: ""
	I0603 12:11:26.313806   73662 logs.go:276] 0 containers: []
	W0603 12:11:26.313817   73662 logs.go:278] No container was found matching "kube-scheduler"
	I0603 12:11:26.313824   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 12:11:26.313883   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 12:11:26.355591   73662 cri.go:89] found id: ""
	I0603 12:11:26.355626   73662 logs.go:276] 0 containers: []
	W0603 12:11:26.355638   73662 logs.go:278] No container was found matching "kube-proxy"
	I0603 12:11:26.355646   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 12:11:26.355711   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 12:11:26.406265   73662 cri.go:89] found id: ""
	I0603 12:11:26.406299   73662 logs.go:276] 0 containers: []
	W0603 12:11:26.406306   73662 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 12:11:26.406318   73662 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 12:11:26.406378   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 12:11:26.443279   73662 cri.go:89] found id: ""
	I0603 12:11:26.443321   73662 logs.go:276] 0 containers: []
	W0603 12:11:26.443333   73662 logs.go:278] No container was found matching "kindnet"
	I0603 12:11:26.443340   73662 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 12:11:26.443403   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 12:11:26.479300   73662 cri.go:89] found id: ""
	I0603 12:11:26.479334   73662 logs.go:276] 0 containers: []
	W0603 12:11:26.479346   73662 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 12:11:26.479358   73662 logs.go:123] Gathering logs for kubelet ...
	I0603 12:11:26.479371   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 12:11:26.531360   73662 logs.go:123] Gathering logs for dmesg ...
	I0603 12:11:26.531394   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 12:11:26.547939   73662 logs.go:123] Gathering logs for describe nodes ...
	I0603 12:11:26.547973   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 12:11:26.625987   73662 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 12:11:26.626016   73662 logs.go:123] Gathering logs for CRI-O ...
	I0603 12:11:26.626032   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 12:11:26.714014   73662 logs.go:123] Gathering logs for container status ...
	I0603 12:11:26.714054   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 12:11:29.267203   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:11:29.281448   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 12:11:29.281522   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 12:11:29.315484   73662 cri.go:89] found id: ""
	I0603 12:11:29.315512   73662 logs.go:276] 0 containers: []
	W0603 12:11:29.315519   73662 logs.go:278] No container was found matching "kube-apiserver"
	I0603 12:11:29.315530   73662 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 12:11:29.315586   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 12:11:29.357054   73662 cri.go:89] found id: ""
	I0603 12:11:29.357084   73662 logs.go:276] 0 containers: []
	W0603 12:11:29.357095   73662 logs.go:278] No container was found matching "etcd"
	I0603 12:11:29.357103   73662 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 12:11:29.357163   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 12:11:29.402434   73662 cri.go:89] found id: ""
	I0603 12:11:29.402461   73662 logs.go:276] 0 containers: []
	W0603 12:11:29.402471   73662 logs.go:278] No container was found matching "coredns"
	I0603 12:11:29.402478   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 12:11:29.402520   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 12:11:29.437822   73662 cri.go:89] found id: ""
	I0603 12:11:29.437854   73662 logs.go:276] 0 containers: []
	W0603 12:11:29.437865   73662 logs.go:278] No container was found matching "kube-scheduler"
	I0603 12:11:29.437871   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 12:11:29.437917   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 12:11:29.474637   73662 cri.go:89] found id: ""
	I0603 12:11:29.474658   73662 logs.go:276] 0 containers: []
	W0603 12:11:29.474665   73662 logs.go:278] No container was found matching "kube-proxy"
	I0603 12:11:29.474671   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 12:11:29.474725   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 12:11:29.508547   73662 cri.go:89] found id: ""
	I0603 12:11:29.508573   73662 logs.go:276] 0 containers: []
	W0603 12:11:29.508580   73662 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 12:11:29.508586   73662 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 12:11:29.508630   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 12:11:29.544524   73662 cri.go:89] found id: ""
	I0603 12:11:29.544553   73662 logs.go:276] 0 containers: []
	W0603 12:11:29.544561   73662 logs.go:278] No container was found matching "kindnet"
	I0603 12:11:29.544567   73662 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 12:11:29.544621   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 12:11:29.582549   73662 cri.go:89] found id: ""
	I0603 12:11:29.582582   73662 logs.go:276] 0 containers: []
	W0603 12:11:29.582593   73662 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 12:11:29.582604   73662 logs.go:123] Gathering logs for kubelet ...
	I0603 12:11:29.582618   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 12:11:29.641931   73662 logs.go:123] Gathering logs for dmesg ...
	I0603 12:11:29.641977   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 12:11:29.664918   73662 logs.go:123] Gathering logs for describe nodes ...
	I0603 12:11:29.664948   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 12:11:29.740591   73662 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 12:11:29.740615   73662 logs.go:123] Gathering logs for CRI-O ...
	I0603 12:11:29.740629   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 12:11:29.814456   73662 logs.go:123] Gathering logs for container status ...
	I0603 12:11:29.814489   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 12:11:32.166042   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:11:34.166283   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:11:32.359122   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:11:32.373552   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 12:11:32.373623   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 12:11:32.408431   73662 cri.go:89] found id: ""
	I0603 12:11:32.408461   73662 logs.go:276] 0 containers: []
	W0603 12:11:32.408471   73662 logs.go:278] No container was found matching "kube-apiserver"
	I0603 12:11:32.408479   73662 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 12:11:32.408533   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 12:11:32.444242   73662 cri.go:89] found id: ""
	I0603 12:11:32.444266   73662 logs.go:276] 0 containers: []
	W0603 12:11:32.444273   73662 logs.go:278] No container was found matching "etcd"
	I0603 12:11:32.444279   73662 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 12:11:32.444323   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 12:11:32.477205   73662 cri.go:89] found id: ""
	I0603 12:11:32.477230   73662 logs.go:276] 0 containers: []
	W0603 12:11:32.477237   73662 logs.go:278] No container was found matching "coredns"
	I0603 12:11:32.477243   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 12:11:32.477298   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 12:11:32.512434   73662 cri.go:89] found id: ""
	I0603 12:11:32.512482   73662 logs.go:276] 0 containers: []
	W0603 12:11:32.512494   73662 logs.go:278] No container was found matching "kube-scheduler"
	I0603 12:11:32.512501   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 12:11:32.512559   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 12:11:32.545619   73662 cri.go:89] found id: ""
	I0603 12:11:32.545645   73662 logs.go:276] 0 containers: []
	W0603 12:11:32.545655   73662 logs.go:278] No container was found matching "kube-proxy"
	I0603 12:11:32.545662   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 12:11:32.545715   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 12:11:32.579093   73662 cri.go:89] found id: ""
	I0603 12:11:32.579121   73662 logs.go:276] 0 containers: []
	W0603 12:11:32.579131   73662 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 12:11:32.579138   73662 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 12:11:32.579196   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 12:11:32.616826   73662 cri.go:89] found id: ""
	I0603 12:11:32.616851   73662 logs.go:276] 0 containers: []
	W0603 12:11:32.616858   73662 logs.go:278] No container was found matching "kindnet"
	I0603 12:11:32.616864   73662 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 12:11:32.616917   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 12:11:32.660083   73662 cri.go:89] found id: ""
	I0603 12:11:32.660113   73662 logs.go:276] 0 containers: []
	W0603 12:11:32.660124   73662 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 12:11:32.660132   73662 logs.go:123] Gathering logs for container status ...
	I0603 12:11:32.660143   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 12:11:32.697974   73662 logs.go:123] Gathering logs for kubelet ...
	I0603 12:11:32.698002   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 12:11:32.748797   73662 logs.go:123] Gathering logs for dmesg ...
	I0603 12:11:32.748835   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 12:11:32.762517   73662 logs.go:123] Gathering logs for describe nodes ...
	I0603 12:11:32.762580   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 12:11:32.838358   73662 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 12:11:32.838383   73662 logs.go:123] Gathering logs for CRI-O ...
	I0603 12:11:32.838397   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 12:11:35.419197   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:11:35.432481   73662 kubeadm.go:591] duration metric: took 4m4.317900598s to restartPrimaryControlPlane
	W0603 12:11:35.432560   73662 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0603 12:11:35.432591   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0603 12:11:35.895615   73662 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0603 12:11:35.910673   73662 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0603 12:11:35.921333   73662 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0603 12:11:35.931736   73662 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0603 12:11:35.931750   73662 kubeadm.go:156] found existing configuration files:
	
	I0603 12:11:35.931783   73662 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0603 12:11:35.940883   73662 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0603 12:11:35.940924   73662 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0603 12:11:35.950780   73662 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0603 12:11:35.959947   73662 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0603 12:11:35.959999   73662 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0603 12:11:35.969824   73662 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0603 12:11:35.979347   73662 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0603 12:11:35.979393   73662 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0603 12:11:35.988704   73662 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0603 12:11:35.997726   73662 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0603 12:11:35.997785   73662 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0603 12:11:36.007165   73662 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0603 12:11:36.080667   73662 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0603 12:11:36.080794   73662 kubeadm.go:309] [preflight] Running pre-flight checks
	I0603 12:11:36.220642   73662 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0603 12:11:36.220814   73662 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0603 12:11:36.220967   73662 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0603 12:11:36.421569   73662 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0603 12:11:36.423141   73662 out.go:204]   - Generating certificates and keys ...
	I0603 12:11:36.423237   73662 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0603 12:11:36.423328   73662 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0603 12:11:36.423428   73662 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0603 12:11:36.423535   73662 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0603 12:11:36.423630   73662 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0603 12:11:36.423713   73662 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0603 12:11:36.423795   73662 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0603 12:11:36.423880   73662 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0603 12:11:36.423985   73662 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0603 12:11:36.424079   73662 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0603 12:11:36.424140   73662 kubeadm.go:309] [certs] Using the existing "sa" key
	I0603 12:11:36.424218   73662 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0603 12:11:36.576702   73662 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0603 12:11:36.704239   73662 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0603 12:11:36.981759   73662 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0603 12:11:37.031992   73662 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0603 12:11:37.052994   73662 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0603 12:11:37.054403   73662 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0603 12:11:37.054471   73662 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0603 12:11:37.196201   73662 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0603 12:11:36.168314   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:11:38.667358   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:11:37.198112   73662 out.go:204]   - Booting up control plane ...
	I0603 12:11:37.198252   73662 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0603 12:11:37.202872   73662 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0603 12:11:37.203965   73662 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0603 12:11:37.204734   73662 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0603 12:11:37.207204   73662 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0603 12:11:41.166509   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:11:43.168695   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:11:45.667381   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:11:48.167362   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:11:50.167570   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:11:52.668348   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:11:54.671004   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:11:54.178477   73179 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (32.102731378s)
	I0603 12:11:54.178554   73179 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0603 12:11:54.194599   73179 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0603 12:11:54.204770   73179 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0603 12:11:54.215290   73179 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0603 12:11:54.215315   73179 kubeadm.go:156] found existing configuration files:
	
	I0603 12:11:54.215355   73179 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0603 12:11:54.224420   73179 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0603 12:11:54.224478   73179 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0603 12:11:54.233706   73179 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0603 12:11:54.242358   73179 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0603 12:11:54.242399   73179 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0603 12:11:54.251531   73179 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0603 12:11:54.260911   73179 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0603 12:11:54.260950   73179 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0603 12:11:54.270219   73179 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0603 12:11:54.279141   73179 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0603 12:11:54.279194   73179 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0603 12:11:54.288343   73179 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0603 12:11:54.477591   73179 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0603 12:11:55.081260   73294 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (31.974475191s)
	I0603 12:11:55.081350   73294 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0603 12:11:55.098545   73294 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0603 12:11:55.109266   73294 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0603 12:11:55.118891   73294 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0603 12:11:55.118917   73294 kubeadm.go:156] found existing configuration files:
	
	I0603 12:11:55.118964   73294 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0603 12:11:55.128412   73294 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0603 12:11:55.128466   73294 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0603 12:11:55.137942   73294 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0603 12:11:55.146937   73294 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0603 12:11:55.146986   73294 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0603 12:11:55.156388   73294 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0603 12:11:55.167156   73294 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0603 12:11:55.167206   73294 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0603 12:11:55.176591   73294 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0603 12:11:55.185483   73294 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0603 12:11:55.185530   73294 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0603 12:11:55.195271   73294 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0603 12:11:55.251253   73294 kubeadm.go:309] [init] Using Kubernetes version: v1.30.1
	I0603 12:11:55.251344   73294 kubeadm.go:309] [preflight] Running pre-flight checks
	I0603 12:11:55.396358   73294 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0603 12:11:55.396519   73294 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0603 12:11:55.396681   73294 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0603 12:11:55.603493   73294 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0603 12:11:55.605797   73294 out.go:204]   - Generating certificates and keys ...
	I0603 12:11:55.605901   73294 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0603 12:11:55.605995   73294 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0603 12:11:55.606143   73294 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0603 12:11:55.606253   73294 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0603 12:11:55.606357   73294 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0603 12:11:55.606440   73294 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0603 12:11:55.606539   73294 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0603 12:11:55.606623   73294 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0603 12:11:55.606738   73294 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0603 12:11:55.606844   73294 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0603 12:11:55.606907   73294 kubeadm.go:309] [certs] Using the existing "sa" key
	I0603 12:11:55.606990   73294 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0603 12:11:55.749342   73294 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0603 12:11:55.918787   73294 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0603 12:11:56.058383   73294 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0603 12:11:56.306167   73294 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0603 12:11:56.365029   73294 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0603 12:11:56.365722   73294 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0603 12:11:56.368197   73294 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0603 12:11:56.369833   73294 out.go:204]   - Booting up control plane ...
	I0603 12:11:56.369950   73294 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0603 12:11:56.370081   73294 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0603 12:11:56.370175   73294 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0603 12:11:56.388879   73294 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0603 12:11:56.391420   73294 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0603 12:11:56.391490   73294 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0603 12:11:56.528206   73294 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0603 12:11:56.528341   73294 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0603 12:11:57.029861   73294 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 501.458956ms
	I0603 12:11:57.029944   73294 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0603 12:11:57.165921   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:11:59.168287   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:12:02.031156   73294 kubeadm.go:309] [api-check] The API server is healthy after 5.001477077s
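	(The two probes above — kubelet-check and api-check — can be reproduced by hand roughly as follows; this is an illustrative sketch that assumes kubeadm's default kubelet healthz port 10248 and the API server's /healthz path on this profile's port 8444.)
	    curl -sf http://localhost:10248/healthz                                 # kubelet health
	    curl -sk https://control-plane.minikube.internal:8444/healthz           # API server health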
	I0603 12:12:02.053326   73294 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0603 12:12:02.086541   73294 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0603 12:12:02.127446   73294 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0603 12:12:02.127715   73294 kubeadm.go:309] [mark-control-plane] Marking the node default-k8s-diff-port-196710 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0603 12:12:02.138683   73294 kubeadm.go:309] [bootstrap-token] Using token: 20dsgk.zbmo4be5tg5i1a9b
	I0603 12:12:02.140047   73294 out.go:204]   - Configuring RBAC rules ...
	I0603 12:12:02.140170   73294 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0603 12:12:02.149933   73294 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0603 12:12:02.160136   73294 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0603 12:12:02.168638   73294 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0603 12:12:02.173242   73294 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0603 12:12:02.177001   73294 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0603 12:12:02.438936   73294 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0603 12:12:02.892616   73294 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0603 12:12:03.438400   73294 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0603 12:12:03.440008   73294 kubeadm.go:309] 
	I0603 12:12:03.440093   73294 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0603 12:12:03.440101   73294 kubeadm.go:309] 
	I0603 12:12:03.440183   73294 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0603 12:12:03.440191   73294 kubeadm.go:309] 
	I0603 12:12:03.440217   73294 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0603 12:12:03.440308   73294 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0603 12:12:03.440416   73294 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0603 12:12:03.440438   73294 kubeadm.go:309] 
	I0603 12:12:03.440537   73294 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0603 12:12:03.440559   73294 kubeadm.go:309] 
	I0603 12:12:03.440649   73294 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0603 12:12:03.440659   73294 kubeadm.go:309] 
	I0603 12:12:03.440739   73294 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0603 12:12:03.440813   73294 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0603 12:12:03.440884   73294 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0603 12:12:03.440891   73294 kubeadm.go:309] 
	I0603 12:12:03.440959   73294 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0603 12:12:03.441059   73294 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0603 12:12:03.441077   73294 kubeadm.go:309] 
	I0603 12:12:03.441195   73294 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8444 --token 20dsgk.zbmo4be5tg5i1a9b \
	I0603 12:12:03.441383   73294 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:efe6cbdd58a590fb6c3b56a05a1648145bfb13b8a8cd2383ea34b710fa987e0b \
	I0603 12:12:03.441413   73294 kubeadm.go:309] 	--control-plane 
	I0603 12:12:03.441422   73294 kubeadm.go:309] 
	I0603 12:12:03.441561   73294 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0603 12:12:03.441580   73294 kubeadm.go:309] 
	I0603 12:12:03.441699   73294 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8444 --token 20dsgk.zbmo4be5tg5i1a9b \
	I0603 12:12:03.441848   73294 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:efe6cbdd58a590fb6c3b56a05a1648145bfb13b8a8cd2383ea34b710fa987e0b 
	I0603 12:12:03.442240   73294 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
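	(If the join command printed above ever needed to be re-derived, the standard kubeadm approach is to list the bootstrap tokens and recompute the CA cert hash; a sketch, assuming the cluster PKI lives under the certificateDir /var/lib/minikube/certs logged earlier.)
	    sudo /var/lib/minikube/binaries/v1.30.1/kubeadm token list
	    # recompute --discovery-token-ca-cert-hash from the cluster CA
	    openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
	      | openssl rsa -pubin -outform der 2>/dev/null \
	      | openssl dgst -sha256 -hex | sed 's/^.* //'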
	I0603 12:12:03.442374   73294 cni.go:84] Creating CNI manager for ""
	I0603 12:12:03.442392   73294 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0603 12:12:03.444302   73294 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0603 12:12:03.644388   73179 kubeadm.go:309] [init] Using Kubernetes version: v1.30.1
	I0603 12:12:03.644489   73179 kubeadm.go:309] [preflight] Running pre-flight checks
	I0603 12:12:03.644596   73179 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0603 12:12:03.644742   73179 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0603 12:12:03.644874   73179 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0603 12:12:03.644953   73179 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0603 12:12:03.646392   73179 out.go:204]   - Generating certificates and keys ...
	I0603 12:12:03.646520   73179 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0603 12:12:03.646605   73179 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0603 12:12:03.646715   73179 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0603 12:12:03.646801   73179 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0603 12:12:03.646896   73179 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0603 12:12:03.646980   73179 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0603 12:12:03.647082   73179 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0603 12:12:03.647168   73179 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0603 12:12:03.647266   73179 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0603 12:12:03.647383   73179 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0603 12:12:03.647448   73179 kubeadm.go:309] [certs] Using the existing "sa" key
	I0603 12:12:03.647527   73179 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0603 12:12:03.647596   73179 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0603 12:12:03.647678   73179 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0603 12:12:03.647753   73179 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0603 12:12:03.647850   73179 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0603 12:12:03.647939   73179 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0603 12:12:03.648064   73179 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0603 12:12:03.648163   73179 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0603 12:12:03.649552   73179 out.go:204]   - Booting up control plane ...
	I0603 12:12:03.649660   73179 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0603 12:12:03.649772   73179 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0603 12:12:03.649884   73179 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0603 12:12:03.650017   73179 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0603 12:12:03.650139   73179 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0603 12:12:03.650211   73179 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0603 12:12:03.650408   73179 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0603 12:12:03.650515   73179 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0603 12:12:03.650591   73179 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 1.002065022s
	I0603 12:12:03.650698   73179 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0603 12:12:03.650789   73179 kubeadm.go:309] [api-check] The API server is healthy after 5.002076943s
	I0603 12:12:03.650915   73179 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0603 12:12:03.651093   73179 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0603 12:12:03.651168   73179 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0603 12:12:03.651414   73179 kubeadm.go:309] [mark-control-plane] Marking the node no-preload-602118 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0603 12:12:03.651488   73179 kubeadm.go:309] [bootstrap-token] Using token: shx5vv.etzadsstlalifeo7
	I0603 12:12:03.652942   73179 out.go:204]   - Configuring RBAC rules ...
	I0603 12:12:03.653061   73179 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0603 12:12:03.653174   73179 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0603 12:12:03.653347   73179 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0603 12:12:03.653531   73179 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0603 12:12:03.653674   73179 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0603 12:12:03.653781   73179 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0603 12:12:03.653925   73179 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0603 12:12:03.653965   73179 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0603 12:12:03.654004   73179 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0603 12:12:03.654010   73179 kubeadm.go:309] 
	I0603 12:12:03.654057   73179 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0603 12:12:03.654063   73179 kubeadm.go:309] 
	I0603 12:12:03.654125   73179 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0603 12:12:03.654131   73179 kubeadm.go:309] 
	I0603 12:12:03.654151   73179 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0603 12:12:03.654199   73179 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0603 12:12:03.654242   73179 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0603 12:12:03.654250   73179 kubeadm.go:309] 
	I0603 12:12:03.654300   73179 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0603 12:12:03.654306   73179 kubeadm.go:309] 
	I0603 12:12:03.654350   73179 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0603 12:12:03.654356   73179 kubeadm.go:309] 
	I0603 12:12:03.654397   73179 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0603 12:12:03.654467   73179 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0603 12:12:03.654524   73179 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0603 12:12:03.654530   73179 kubeadm.go:309] 
	I0603 12:12:03.654595   73179 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0603 12:12:03.654658   73179 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0603 12:12:03.654664   73179 kubeadm.go:309] 
	I0603 12:12:03.654729   73179 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token shx5vv.etzadsstlalifeo7 \
	I0603 12:12:03.654845   73179 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:efe6cbdd58a590fb6c3b56a05a1648145bfb13b8a8cd2383ea34b710fa987e0b \
	I0603 12:12:03.654880   73179 kubeadm.go:309] 	--control-plane 
	I0603 12:12:03.654886   73179 kubeadm.go:309] 
	I0603 12:12:03.655004   73179 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0603 12:12:03.655019   73179 kubeadm.go:309] 
	I0603 12:12:03.655117   73179 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token shx5vv.etzadsstlalifeo7 \
	I0603 12:12:03.655267   73179 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:efe6cbdd58a590fb6c3b56a05a1648145bfb13b8a8cd2383ea34b710fa987e0b 
	I0603 12:12:03.655306   73179 cni.go:84] Creating CNI manager for ""
	I0603 12:12:03.655316   73179 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0603 12:12:03.656746   73179 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0603 12:12:03.445612   73294 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0603 12:12:03.459114   73294 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0603 12:12:03.479003   73294 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0603 12:12:03.479128   73294 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:12:03.479139   73294 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-196710 minikube.k8s.io/updated_at=2024_06_03T12_12_03_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=599070631c2216ebc936292d491e4fe10e15b9d8 minikube.k8s.io/name=default-k8s-diff-port-196710 minikube.k8s.io/primary=true
	I0603 12:12:03.506970   73294 ops.go:34] apiserver oom_adj: -16
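	(The CNI and RBAC bootstrap steps above, condensed into the commands minikube runs over SSH; the 496-byte 1-k8s.conflist is copied from memory and its contents are not reproduced in the log.)
	    # create the CNI config dir; the bridge conflist is scp'd in as /etc/cni/net.d/1-k8s.conflist
	    sudo mkdir -p /etc/cni/net.d
	    # sanity-check the apiserver's OOM score adjustment (-16 in this run)
	    cat /proc/$(pgrep kube-apiserver)/oom_adj
	    # grant cluster-admin to kube-system:default and label the node as the primary control plane
	    sudo /var/lib/minikube/binaries/v1.30.1/kubectl create clusterrolebinding minikube-rbac \
	      --clusterrole=cluster-admin --serviceaccount=kube-system:default \
	      --kubeconfig=/var/lib/minikube/kubeconfig
	    sudo /var/lib/minikube/binaries/v1.30.1/kubectl label --overwrite nodes default-k8s-diff-port-196710 \
	      minikube.k8s.io/primary=true --kubeconfig=/var/lib/minikube/kubeconfig   # plus version/commit/updated_at labels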
	I0603 12:12:03.684097   73294 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:12:04.185124   73294 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:12:01.667542   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:12:03.669066   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:12:03.657886   73179 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0603 12:12:03.672430   73179 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0603 12:12:03.693536   73179 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0603 12:12:03.693627   73179 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:12:03.693658   73179 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-602118 minikube.k8s.io/updated_at=2024_06_03T12_12_03_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=599070631c2216ebc936292d491e4fe10e15b9d8 minikube.k8s.io/name=no-preload-602118 minikube.k8s.io/primary=true
	I0603 12:12:03.730215   73179 ops.go:34] apiserver oom_adj: -16
	I0603 12:12:03.897726   73179 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:12:04.398585   73179 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:12:04.898543   73179 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:12:04.684589   73294 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:12:05.184999   73294 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:12:05.685081   73294 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:12:06.185212   73294 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:12:06.684565   73294 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:12:07.184862   73294 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:12:07.684542   73294 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:12:08.184516   73294 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:12:08.684333   73294 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:12:09.184426   73294 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:12:06.166490   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:12:08.167169   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:12:08.661107   72964 pod_ready.go:81] duration metric: took 4m0.000791246s for pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace to be "Ready" ...
	E0603 12:12:08.661143   72964 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace to be "Ready" (will not retry!)
	I0603 12:12:08.661161   72964 pod_ready.go:38] duration metric: took 4m12.610770004s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0603 12:12:08.661187   72964 kubeadm.go:591] duration metric: took 4m20.419490743s to restartPrimaryControlPlane
	W0603 12:12:08.661235   72964 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0603 12:12:08.661255   72964 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
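	(The metrics-server pod never became Ready within the 4m0s budget, so minikube gives up on restarting the existing control plane and falls back to the kubeadm reset just issued above. A hypothetical manual triage, not part of the test run, would inspect the pod's events and status — assuming the addon's usual k8s-app=metrics-server label.)
	    kubectl -n kube-system get pods -l k8s-app=metrics-server -o wide
	    kubectl -n kube-system describe pod -l k8s-app=metrics-server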
	I0603 12:12:05.398640   73179 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:12:05.898522   73179 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:12:06.397948   73179 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:12:06.897958   73179 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:12:07.397912   73179 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:12:07.898059   73179 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:12:08.398372   73179 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:12:08.897877   73179 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:12:09.397861   73179 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:12:09.898541   73179 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:12:09.684787   73294 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:12:10.184277   73294 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:12:10.684146   73294 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:12:11.184402   73294 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:12:11.684199   73294 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:12:12.184770   73294 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:12:12.684964   73294 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:12:13.184228   73294 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:12:13.684160   73294 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:12:14.184443   73294 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:12:10.398126   73179 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:12:10.898790   73179 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:12:11.398275   73179 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:12:11.897874   73179 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:12:12.398040   73179 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:12:12.898813   73179 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:12:13.398175   73179 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:12:13.897789   73179 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:12:14.398202   73179 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:12:14.898444   73179 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:12:15.398430   73179 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:12:15.897913   73179 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:12:15.999563   73179 kubeadm.go:1107] duration metric: took 12.305979901s to wait for elevateKubeSystemPrivileges
	W0603 12:12:15.999608   73179 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0603 12:12:15.999618   73179 kubeadm.go:393] duration metric: took 5m16.666049314s to StartCluster
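	(The long run of repeated "kubectl get sa default" lines above is minikube's elevateKubeSystemPrivileges step polling until the "default" ServiceAccount exists in the freshly initialized cluster; roughly equivalent to the loop below, with the retry interval an assumption.)
	    until sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default \
	          --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
	      sleep 0.5   # retry until the ServiceAccount controller has created "default"
	    done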
	I0603 12:12:15.999646   73179 settings.go:142] acquiring lock: {Name:mkda1bdbbfe91266270f1d999e6d56fc2830d6f8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 12:12:15.999745   73179 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19008-7755/kubeconfig
	I0603 12:12:16.002178   73179 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19008-7755/kubeconfig: {Name:mk2f45bf5da44c5570e14124ae482cac309d97b6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 12:12:16.002496   73179 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.50.245 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0603 12:12:16.003826   73179 out.go:177] * Verifying Kubernetes components...
	I0603 12:12:16.002629   73179 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0603 12:12:16.002754   73179 config.go:182] Loaded profile config "no-preload-602118": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0603 12:12:16.005034   73179 addons.go:69] Setting storage-provisioner=true in profile "no-preload-602118"
	I0603 12:12:16.005049   73179 addons.go:69] Setting metrics-server=true in profile "no-preload-602118"
	I0603 12:12:16.005048   73179 addons.go:69] Setting default-storageclass=true in profile "no-preload-602118"
	I0603 12:12:16.005080   73179 addons.go:234] Setting addon metrics-server=true in "no-preload-602118"
	W0603 12:12:16.005095   73179 addons.go:243] addon metrics-server should already be in state true
	I0603 12:12:16.005095   73179 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-602118"
	I0603 12:12:16.005121   73179 host.go:66] Checking if "no-preload-602118" exists ...
	I0603 12:12:16.005082   73179 addons.go:234] Setting addon storage-provisioner=true in "no-preload-602118"
	W0603 12:12:16.005147   73179 addons.go:243] addon storage-provisioner should already be in state true
	I0603 12:12:16.005184   73179 host.go:66] Checking if "no-preload-602118" exists ...
	I0603 12:12:16.005039   73179 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0603 12:12:16.005558   73179 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 12:12:16.005568   73179 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 12:12:16.005562   73179 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 12:12:16.005594   73179 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 12:12:16.005613   73179 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 12:12:16.005592   73179 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 12:12:16.025576   73179 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37907
	I0603 12:12:16.025614   73179 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33735
	I0603 12:12:16.025580   73179 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34403
	I0603 12:12:16.026031   73179 main.go:141] libmachine: () Calling .GetVersion
	I0603 12:12:16.026071   73179 main.go:141] libmachine: () Calling .GetVersion
	I0603 12:12:16.026136   73179 main.go:141] libmachine: () Calling .GetVersion
	I0603 12:12:16.026534   73179 main.go:141] libmachine: Using API Version  1
	I0603 12:12:16.026549   73179 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 12:12:16.026534   73179 main.go:141] libmachine: Using API Version  1
	I0603 12:12:16.026662   73179 main.go:141] libmachine: Using API Version  1
	I0603 12:12:16.026762   73179 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 12:12:16.026781   73179 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 12:12:16.026868   73179 main.go:141] libmachine: () Calling .GetMachineName
	I0603 12:12:16.027104   73179 main.go:141] libmachine: () Calling .GetMachineName
	I0603 12:12:16.027174   73179 main.go:141] libmachine: () Calling .GetMachineName
	I0603 12:12:16.027270   73179 main.go:141] libmachine: (no-preload-602118) Calling .GetState
	I0603 12:12:16.027448   73179 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 12:12:16.027481   73179 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 12:12:16.027667   73179 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 12:12:16.027693   73179 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 12:12:16.031436   73179 addons.go:234] Setting addon default-storageclass=true in "no-preload-602118"
	W0603 12:12:16.031458   73179 addons.go:243] addon default-storageclass should already be in state true
	I0603 12:12:16.031487   73179 host.go:66] Checking if "no-preload-602118" exists ...
	I0603 12:12:16.031838   73179 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 12:12:16.031870   73179 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 12:12:16.043477   73179 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43369
	I0603 12:12:16.043659   73179 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38809
	I0603 12:12:16.044102   73179 main.go:141] libmachine: () Calling .GetVersion
	I0603 12:12:16.044124   73179 main.go:141] libmachine: () Calling .GetVersion
	I0603 12:12:16.044746   73179 main.go:141] libmachine: Using API Version  1
	I0603 12:12:16.044763   73179 main.go:141] libmachine: Using API Version  1
	I0603 12:12:16.044767   73179 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 12:12:16.044779   73179 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 12:12:16.045175   73179 main.go:141] libmachine: () Calling .GetMachineName
	I0603 12:12:16.045364   73179 main.go:141] libmachine: (no-preload-602118) Calling .GetState
	I0603 12:12:16.045406   73179 main.go:141] libmachine: () Calling .GetMachineName
	I0603 12:12:16.045571   73179 main.go:141] libmachine: (no-preload-602118) Calling .GetState
	I0603 12:12:16.047312   73179 main.go:141] libmachine: (no-preload-602118) Calling .DriverName
	I0603 12:12:16.047741   73179 main.go:141] libmachine: (no-preload-602118) Calling .DriverName
	I0603 12:12:16.049538   73179 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0603 12:12:16.048146   73179 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35375
	I0603 12:12:16.050862   73179 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0603 12:12:16.050892   73179 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0603 12:12:16.050897   73179 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0603 12:12:16.050908   73179 main.go:141] libmachine: (no-preload-602118) Calling .GetSSHHostname
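	(The addon plumbing above — metrics-server, storage-provisioner, default-storageclass — is roughly what the user-facing commands below would do on an already-running profile; the test drives the Go addons API directly rather than the CLI.)
	    minikube -p no-preload-602118 addons enable metrics-server
	    minikube -p no-preload-602118 addons enable storage-provisioner
	    minikube -p no-preload-602118 addons enable default-storageclass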
	I0603 12:12:14.684713   73294 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:12:15.184206   73294 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:12:15.684798   73294 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:12:16.184405   73294 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:12:16.684720   73294 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:12:16.818407   73294 kubeadm.go:1107] duration metric: took 13.339334124s to wait for elevateKubeSystemPrivileges
	W0603 12:12:16.818450   73294 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0603 12:12:16.818460   73294 kubeadm.go:393] duration metric: took 5m7.432855804s to StartCluster
	I0603 12:12:16.818480   73294 settings.go:142] acquiring lock: {Name:mkda1bdbbfe91266270f1d999e6d56fc2830d6f8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 12:12:16.818573   73294 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19008-7755/kubeconfig
	I0603 12:12:16.821192   73294 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19008-7755/kubeconfig: {Name:mk2f45bf5da44c5570e14124ae482cac309d97b6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 12:12:16.821483   73294 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.61.60 Port:8444 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0603 12:12:16.823082   73294 out.go:177] * Verifying Kubernetes components...
	I0603 12:12:16.821572   73294 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0603 12:12:16.821670   73294 config.go:182] Loaded profile config "default-k8s-diff-port-196710": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0603 12:12:16.824703   73294 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0603 12:12:16.824719   73294 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-196710"
	I0603 12:12:16.824760   73294 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-196710"
	I0603 12:12:16.824710   73294 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-196710"
	W0603 12:12:16.824772   73294 addons.go:243] addon metrics-server should already be in state true
	I0603 12:12:16.824795   73294 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-196710"
	I0603 12:12:16.824802   73294 host.go:66] Checking if "default-k8s-diff-port-196710" exists ...
	W0603 12:12:16.824808   73294 addons.go:243] addon storage-provisioner should already be in state true
	I0603 12:12:16.824723   73294 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-196710"
	I0603 12:12:16.824843   73294 host.go:66] Checking if "default-k8s-diff-port-196710" exists ...
	I0603 12:12:16.824851   73294 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-196710"
	I0603 12:12:16.825222   73294 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 12:12:16.825241   73294 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 12:12:16.825250   73294 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 12:12:16.825264   73294 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 12:12:16.825228   73294 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 12:12:16.825354   73294 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 12:12:16.843187   73294 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41289
	I0603 12:12:16.843659   73294 main.go:141] libmachine: () Calling .GetVersion
	I0603 12:12:16.844379   73294 main.go:141] libmachine: Using API Version  1
	I0603 12:12:16.844407   73294 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 12:12:16.844784   73294 main.go:141] libmachine: () Calling .GetMachineName
	I0603 12:12:16.845314   73294 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 12:12:16.845353   73294 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 12:12:16.845975   73294 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46095
	I0603 12:12:16.846379   73294 main.go:141] libmachine: () Calling .GetVersion
	I0603 12:12:16.846856   73294 main.go:141] libmachine: Using API Version  1
	I0603 12:12:16.846875   73294 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 12:12:16.847307   73294 main.go:141] libmachine: () Calling .GetMachineName
	I0603 12:12:16.847921   73294 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 12:12:16.847944   73294 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 12:12:16.848622   73294 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45613
	I0603 12:12:16.849007   73294 main.go:141] libmachine: () Calling .GetVersion
	I0603 12:12:16.849505   73294 main.go:141] libmachine: Using API Version  1
	I0603 12:12:16.849527   73294 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 12:12:16.849888   73294 main.go:141] libmachine: () Calling .GetMachineName
	I0603 12:12:16.850120   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .GetState
	I0603 12:12:16.853711   73294 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-196710"
	W0603 12:12:16.853732   73294 addons.go:243] addon default-storageclass should already be in state true
	I0603 12:12:16.853758   73294 host.go:66] Checking if "default-k8s-diff-port-196710" exists ...
	I0603 12:12:16.854106   73294 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 12:12:16.854143   73294 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 12:12:16.874485   73294 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41485
	I0603 12:12:16.874543   73294 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40823
	I0603 12:12:16.875013   73294 main.go:141] libmachine: () Calling .GetVersion
	I0603 12:12:16.875431   73294 main.go:141] libmachine: () Calling .GetVersion
	I0603 12:12:16.875601   73294 main.go:141] libmachine: Using API Version  1
	I0603 12:12:16.875619   73294 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 12:12:16.875983   73294 main.go:141] libmachine: () Calling .GetMachineName
	I0603 12:12:16.875970   73294 main.go:141] libmachine: Using API Version  1
	I0603 12:12:16.876141   73294 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 12:12:16.876153   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .GetState
	I0603 12:12:16.876623   73294 main.go:141] libmachine: () Calling .GetMachineName
	I0603 12:12:16.877005   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .GetState
	I0603 12:12:16.878149   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .DriverName
	I0603 12:12:16.879857   73294 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0603 12:12:16.881339   73294 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0603 12:12:16.881357   73294 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0603 12:12:16.881384   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .GetSSHHostname
	I0603 12:12:16.883128   73294 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42307
	I0603 12:12:16.883690   73294 main.go:141] libmachine: () Calling .GetVersion
	I0603 12:12:16.883973   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .DriverName
	I0603 12:12:16.884247   73294 main.go:141] libmachine: Using API Version  1
	I0603 12:12:16.884263   73294 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 12:12:16.885697   73294 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0603 12:12:16.052190   73179 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0603 12:12:16.052208   73179 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0603 12:12:16.052226   73179 main.go:141] libmachine: (no-preload-602118) Calling .GetSSHHostname
	I0603 12:12:16.051450   73179 main.go:141] libmachine: () Calling .GetVersion
	I0603 12:12:16.053253   73179 main.go:141] libmachine: Using API Version  1
	I0603 12:12:16.053274   73179 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 12:12:16.053684   73179 main.go:141] libmachine: () Calling .GetMachineName
	I0603 12:12:16.054284   73179 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 12:12:16.054309   73179 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 12:12:16.054504   73179 main.go:141] libmachine: (no-preload-602118) DBG | domain no-preload-602118 has defined MAC address 52:54:00:ac:6c:91 in network mk-no-preload-602118
	I0603 12:12:16.054885   73179 main.go:141] libmachine: (no-preload-602118) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:6c:91", ip: ""} in network mk-no-preload-602118: {Iface:virbr2 ExpiryTime:2024-06-03 13:06:33 +0000 UTC Type:0 Mac:52:54:00:ac:6c:91 Iaid: IPaddr:192.168.50.245 Prefix:24 Hostname:no-preload-602118 Clientid:01:52:54:00:ac:6c:91}
	I0603 12:12:16.054916   73179 main.go:141] libmachine: (no-preload-602118) DBG | domain no-preload-602118 has defined IP address 192.168.50.245 and MAC address 52:54:00:ac:6c:91 in network mk-no-preload-602118
	I0603 12:12:16.055640   73179 main.go:141] libmachine: (no-preload-602118) Calling .GetSSHPort
	I0603 12:12:16.055804   73179 main.go:141] libmachine: (no-preload-602118) Calling .GetSSHKeyPath
	I0603 12:12:16.055873   73179 main.go:141] libmachine: (no-preload-602118) DBG | domain no-preload-602118 has defined MAC address 52:54:00:ac:6c:91 in network mk-no-preload-602118
	I0603 12:12:16.055952   73179 main.go:141] libmachine: (no-preload-602118) Calling .GetSSHUsername
	I0603 12:12:16.056079   73179 sshutil.go:53] new ssh client: &{IP:192.168.50.245 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19008-7755/.minikube/machines/no-preload-602118/id_rsa Username:docker}
	I0603 12:12:16.056405   73179 main.go:141] libmachine: (no-preload-602118) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:6c:91", ip: ""} in network mk-no-preload-602118: {Iface:virbr2 ExpiryTime:2024-06-03 13:06:33 +0000 UTC Type:0 Mac:52:54:00:ac:6c:91 Iaid: IPaddr:192.168.50.245 Prefix:24 Hostname:no-preload-602118 Clientid:01:52:54:00:ac:6c:91}
	I0603 12:12:16.056431   73179 main.go:141] libmachine: (no-preload-602118) DBG | domain no-preload-602118 has defined IP address 192.168.50.245 and MAC address 52:54:00:ac:6c:91 in network mk-no-preload-602118
	I0603 12:12:16.056465   73179 main.go:141] libmachine: (no-preload-602118) Calling .GetSSHPort
	I0603 12:12:16.056633   73179 main.go:141] libmachine: (no-preload-602118) Calling .GetSSHKeyPath
	I0603 12:12:16.056879   73179 main.go:141] libmachine: (no-preload-602118) Calling .GetSSHUsername
	I0603 12:12:16.057006   73179 sshutil.go:53] new ssh client: &{IP:192.168.50.245 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19008-7755/.minikube/machines/no-preload-602118/id_rsa Username:docker}
	I0603 12:12:16.072215   73179 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39537
	I0603 12:12:16.072581   73179 main.go:141] libmachine: () Calling .GetVersion
	I0603 12:12:16.072913   73179 main.go:141] libmachine: Using API Version  1
	I0603 12:12:16.072924   73179 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 12:12:16.073189   73179 main.go:141] libmachine: () Calling .GetMachineName
	I0603 12:12:16.073304   73179 main.go:141] libmachine: (no-preload-602118) Calling .GetState
	I0603 12:12:16.074771   73179 main.go:141] libmachine: (no-preload-602118) Calling .DriverName
	I0603 12:12:16.074941   73179 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0603 12:12:16.074953   73179 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0603 12:12:16.074964   73179 main.go:141] libmachine: (no-preload-602118) Calling .GetSSHHostname
	I0603 12:12:16.077122   73179 main.go:141] libmachine: (no-preload-602118) DBG | domain no-preload-602118 has defined MAC address 52:54:00:ac:6c:91 in network mk-no-preload-602118
	I0603 12:12:16.077439   73179 main.go:141] libmachine: (no-preload-602118) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:6c:91", ip: ""} in network mk-no-preload-602118: {Iface:virbr2 ExpiryTime:2024-06-03 13:06:33 +0000 UTC Type:0 Mac:52:54:00:ac:6c:91 Iaid: IPaddr:192.168.50.245 Prefix:24 Hostname:no-preload-602118 Clientid:01:52:54:00:ac:6c:91}
	I0603 12:12:16.077456   73179 main.go:141] libmachine: (no-preload-602118) DBG | domain no-preload-602118 has defined IP address 192.168.50.245 and MAC address 52:54:00:ac:6c:91 in network mk-no-preload-602118
	I0603 12:12:16.077666   73179 main.go:141] libmachine: (no-preload-602118) Calling .GetSSHPort
	I0603 12:12:16.077790   73179 main.go:141] libmachine: (no-preload-602118) Calling .GetSSHKeyPath
	I0603 12:12:16.077893   73179 main.go:141] libmachine: (no-preload-602118) Calling .GetSSHUsername
	I0603 12:12:16.078025   73179 sshutil.go:53] new ssh client: &{IP:192.168.50.245 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19008-7755/.minikube/machines/no-preload-602118/id_rsa Username:docker}
	I0603 12:12:16.204391   73179 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0603 12:12:16.224077   73179 node_ready.go:35] waiting up to 6m0s for node "no-preload-602118" to be "Ready" ...
	I0603 12:12:16.234147   73179 node_ready.go:49] node "no-preload-602118" has status "Ready":"True"
	I0603 12:12:16.234165   73179 node_ready.go:38] duration metric: took 10.052016ms for node "no-preload-602118" to be "Ready" ...
	I0603 12:12:16.234174   73179 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0603 12:12:16.239106   73179 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-602118" in "kube-system" namespace to be "Ready" ...
	I0603 12:12:16.245931   73179 pod_ready.go:92] pod "etcd-no-preload-602118" in "kube-system" namespace has status "Ready":"True"
	I0603 12:12:16.245951   73179 pod_ready.go:81] duration metric: took 6.818123ms for pod "etcd-no-preload-602118" in "kube-system" namespace to be "Ready" ...
	I0603 12:12:16.245959   73179 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-602118" in "kube-system" namespace to be "Ready" ...
	I0603 12:12:16.251349   73179 pod_ready.go:92] pod "kube-apiserver-no-preload-602118" in "kube-system" namespace has status "Ready":"True"
	I0603 12:12:16.251368   73179 pod_ready.go:81] duration metric: took 5.403445ms for pod "kube-apiserver-no-preload-602118" in "kube-system" namespace to be "Ready" ...
	I0603 12:12:16.251379   73179 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-602118" in "kube-system" namespace to be "Ready" ...
	I0603 12:12:16.259769   73179 pod_ready.go:92] pod "kube-controller-manager-no-preload-602118" in "kube-system" namespace has status "Ready":"True"
	I0603 12:12:16.259787   73179 pod_ready.go:81] duration metric: took 8.400968ms for pod "kube-controller-manager-no-preload-602118" in "kube-system" namespace to be "Ready" ...
	I0603 12:12:16.259797   73179 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-602118" in "kube-system" namespace to be "Ready" ...
	I0603 12:12:16.271311   73179 pod_ready.go:92] pod "kube-scheduler-no-preload-602118" in "kube-system" namespace has status "Ready":"True"
	I0603 12:12:16.271335   73179 pod_ready.go:81] duration metric: took 11.529418ms for pod "kube-scheduler-no-preload-602118" in "kube-system" namespace to be "Ready" ...
	I0603 12:12:16.271344   73179 pod_ready.go:38] duration metric: took 37.160711ms for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0603 12:12:16.271361   73179 api_server.go:52] waiting for apiserver process to appear ...
	I0603 12:12:16.271414   73179 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:12:16.299864   73179 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0603 12:12:16.312742   73179 api_server.go:72] duration metric: took 310.202333ms to wait for apiserver process to appear ...
	I0603 12:12:16.312769   73179 api_server.go:88] waiting for apiserver healthz status ...
	I0603 12:12:16.312789   73179 api_server.go:253] Checking apiserver healthz at https://192.168.50.245:8443/healthz ...
	I0603 12:12:16.332856   73179 api_server.go:279] https://192.168.50.245:8443/healthz returned 200:
	ok
	I0603 12:12:16.334897   73179 api_server.go:141] control plane version: v1.30.1
	I0603 12:12:16.334922   73179 api_server.go:131] duration metric: took 22.144726ms to wait for apiserver health ...
	I0603 12:12:16.334932   73179 system_pods.go:43] waiting for kube-system pods to appear ...
	I0603 12:12:16.354509   73179 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0603 12:12:16.377512   73179 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0603 12:12:16.377540   73179 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0603 12:12:16.428770   73179 system_pods.go:59] 4 kube-system pods found
	I0603 12:12:16.428807   73179 system_pods.go:61] "etcd-no-preload-602118" [d66d136a-b6c8-411c-b820-3e80f773accf] Running
	I0603 12:12:16.428815   73179 system_pods.go:61] "kube-apiserver-no-preload-602118" [299b92ab-ea99-45f5-b55d-b66a06daeaeb] Running
	I0603 12:12:16.428820   73179 system_pods.go:61] "kube-controller-manager-no-preload-602118" [16fd06ad-ff2d-4392-b453-9a8ed782b581] Running
	I0603 12:12:16.428825   73179 system_pods.go:61] "kube-scheduler-no-preload-602118" [71ebde21-9840-43f5-b0a3-424e1fd2000e] Running
	I0603 12:12:16.428833   73179 system_pods.go:74] duration metric: took 93.893548ms to wait for pod list to return data ...
	I0603 12:12:16.428841   73179 default_sa.go:34] waiting for default service account to be created ...
	I0603 12:12:16.438619   73179 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0603 12:12:16.438645   73179 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0603 12:12:16.495189   73179 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0603 12:12:16.495218   73179 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0603 12:12:16.543072   73179 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0603 12:12:16.666123   73179 default_sa.go:45] found service account: "default"
	I0603 12:12:16.666154   73179 default_sa.go:55] duration metric: took 237.305488ms for default service account to be created ...
	I0603 12:12:16.666163   73179 system_pods.go:116] waiting for k8s-apps to be running ...
	I0603 12:12:16.860342   73179 system_pods.go:86] 7 kube-system pods found
	I0603 12:12:16.860387   73179 system_pods.go:89] "coredns-7db6d8ff4d-5gmj5" [474da426-9414-4a30-8b19-14e555e192de] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0603 12:12:16.860401   73179 system_pods.go:89] "coredns-7db6d8ff4d-dwptw" [7a0437fe-8e83-4acc-a92a-af29bf06db93] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0603 12:12:16.860410   73179 system_pods.go:89] "etcd-no-preload-602118" [d66d136a-b6c8-411c-b820-3e80f773accf] Running
	I0603 12:12:16.860419   73179 system_pods.go:89] "kube-apiserver-no-preload-602118" [299b92ab-ea99-45f5-b55d-b66a06daeaeb] Running
	I0603 12:12:16.860427   73179 system_pods.go:89] "kube-controller-manager-no-preload-602118" [16fd06ad-ff2d-4392-b453-9a8ed782b581] Running
	I0603 12:12:16.860436   73179 system_pods.go:89] "kube-proxy-tfxkl" [d6502635-478f-443c-8186-ab0616fcf4ac] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0603 12:12:16.860443   73179 system_pods.go:89] "kube-scheduler-no-preload-602118" [71ebde21-9840-43f5-b0a3-424e1fd2000e] Running
	I0603 12:12:16.860466   73179 retry.go:31] will retry after 306.693518ms: missing components: kube-dns, kube-proxy
	I0603 12:12:17.184783   73179 system_pods.go:86] 7 kube-system pods found
	I0603 12:12:17.184828   73179 system_pods.go:89] "coredns-7db6d8ff4d-5gmj5" [474da426-9414-4a30-8b19-14e555e192de] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0603 12:12:17.184840   73179 system_pods.go:89] "coredns-7db6d8ff4d-dwptw" [7a0437fe-8e83-4acc-a92a-af29bf06db93] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0603 12:12:17.184852   73179 system_pods.go:89] "etcd-no-preload-602118" [d66d136a-b6c8-411c-b820-3e80f773accf] Running
	I0603 12:12:17.184860   73179 system_pods.go:89] "kube-apiserver-no-preload-602118" [299b92ab-ea99-45f5-b55d-b66a06daeaeb] Running
	I0603 12:12:17.184868   73179 system_pods.go:89] "kube-controller-manager-no-preload-602118" [16fd06ad-ff2d-4392-b453-9a8ed782b581] Running
	I0603 12:12:17.184880   73179 system_pods.go:89] "kube-proxy-tfxkl" [d6502635-478f-443c-8186-ab0616fcf4ac] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0603 12:12:17.184891   73179 system_pods.go:89] "kube-scheduler-no-preload-602118" [71ebde21-9840-43f5-b0a3-424e1fd2000e] Running
	I0603 12:12:17.184916   73179 retry.go:31] will retry after 329.094905ms: missing components: kube-dns, kube-proxy
	I0603 12:12:17.415182   73179 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.060631588s)
	I0603 12:12:17.415242   73179 main.go:141] libmachine: Making call to close driver server
	I0603 12:12:17.415255   73179 main.go:141] libmachine: (no-preload-602118) Calling .Close
	I0603 12:12:17.415284   73179 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.115379891s)
	I0603 12:12:17.415326   73179 main.go:141] libmachine: Making call to close driver server
	I0603 12:12:17.415336   73179 main.go:141] libmachine: (no-preload-602118) Calling .Close
	I0603 12:12:17.415714   73179 main.go:141] libmachine: (no-preload-602118) DBG | Closing plugin on server side
	I0603 12:12:17.415719   73179 main.go:141] libmachine: (no-preload-602118) DBG | Closing plugin on server side
	I0603 12:12:17.415725   73179 main.go:141] libmachine: Successfully made call to close driver server
	I0603 12:12:17.415745   73179 main.go:141] libmachine: Making call to close connection to plugin binary
	I0603 12:12:17.415751   73179 main.go:141] libmachine: Successfully made call to close driver server
	I0603 12:12:17.415779   73179 main.go:141] libmachine: Making call to close connection to plugin binary
	I0603 12:12:17.415793   73179 main.go:141] libmachine: Making call to close driver server
	I0603 12:12:17.415804   73179 main.go:141] libmachine: (no-preload-602118) Calling .Close
	I0603 12:12:17.415753   73179 main.go:141] libmachine: Making call to close driver server
	I0603 12:12:17.415859   73179 main.go:141] libmachine: (no-preload-602118) Calling .Close
	I0603 12:12:17.416049   73179 main.go:141] libmachine: Successfully made call to close driver server
	I0603 12:12:17.416063   73179 main.go:141] libmachine: Making call to close connection to plugin binary
	I0603 12:12:17.417320   73179 main.go:141] libmachine: (no-preload-602118) DBG | Closing plugin on server side
	I0603 12:12:17.417366   73179 main.go:141] libmachine: Successfully made call to close driver server
	I0603 12:12:17.417391   73179 main.go:141] libmachine: Making call to close connection to plugin binary
	I0603 12:12:17.434040   73179 main.go:141] libmachine: Making call to close driver server
	I0603 12:12:17.434072   73179 main.go:141] libmachine: (no-preload-602118) Calling .Close
	I0603 12:12:17.434410   73179 main.go:141] libmachine: (no-preload-602118) DBG | Closing plugin on server side
	I0603 12:12:17.434434   73179 main.go:141] libmachine: Successfully made call to close driver server
	I0603 12:12:17.434445   73179 main.go:141] libmachine: Making call to close connection to plugin binary
	I0603 12:12:17.527445   73179 system_pods.go:86] 8 kube-system pods found
	I0603 12:12:17.527486   73179 system_pods.go:89] "coredns-7db6d8ff4d-5gmj5" [474da426-9414-4a30-8b19-14e555e192de] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0603 12:12:17.527499   73179 system_pods.go:89] "coredns-7db6d8ff4d-dwptw" [7a0437fe-8e83-4acc-a92a-af29bf06db93] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0603 12:12:17.527508   73179 system_pods.go:89] "etcd-no-preload-602118" [d66d136a-b6c8-411c-b820-3e80f773accf] Running
	I0603 12:12:17.527516   73179 system_pods.go:89] "kube-apiserver-no-preload-602118" [299b92ab-ea99-45f5-b55d-b66a06daeaeb] Running
	I0603 12:12:17.527524   73179 system_pods.go:89] "kube-controller-manager-no-preload-602118" [16fd06ad-ff2d-4392-b453-9a8ed782b581] Running
	I0603 12:12:17.527533   73179 system_pods.go:89] "kube-proxy-tfxkl" [d6502635-478f-443c-8186-ab0616fcf4ac] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0603 12:12:17.527540   73179 system_pods.go:89] "kube-scheduler-no-preload-602118" [71ebde21-9840-43f5-b0a3-424e1fd2000e] Running
	I0603 12:12:17.527551   73179 system_pods.go:89] "storage-provisioner" [9d9e7c2b-91a9-4394-8a08-a2c076d4b42d] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0603 12:12:17.527591   73179 retry.go:31] will retry after 346.068859ms: missing components: kube-dns, kube-proxy
	I0603 12:12:17.908653   73179 system_pods.go:86] 9 kube-system pods found
	I0603 12:12:17.908695   73179 system_pods.go:89] "coredns-7db6d8ff4d-5gmj5" [474da426-9414-4a30-8b19-14e555e192de] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0603 12:12:17.908706   73179 system_pods.go:89] "coredns-7db6d8ff4d-dwptw" [7a0437fe-8e83-4acc-a92a-af29bf06db93] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0603 12:12:17.908713   73179 system_pods.go:89] "etcd-no-preload-602118" [d66d136a-b6c8-411c-b820-3e80f773accf] Running
	I0603 12:12:17.908721   73179 system_pods.go:89] "kube-apiserver-no-preload-602118" [299b92ab-ea99-45f5-b55d-b66a06daeaeb] Running
	I0603 12:12:17.908728   73179 system_pods.go:89] "kube-controller-manager-no-preload-602118" [16fd06ad-ff2d-4392-b453-9a8ed782b581] Running
	I0603 12:12:17.908736   73179 system_pods.go:89] "kube-proxy-tfxkl" [d6502635-478f-443c-8186-ab0616fcf4ac] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0603 12:12:17.908743   73179 system_pods.go:89] "kube-scheduler-no-preload-602118" [71ebde21-9840-43f5-b0a3-424e1fd2000e] Running
	I0603 12:12:17.908753   73179 system_pods.go:89] "metrics-server-569cc877fc-zpzbw" [b28cb265-532b-41ea-a242-001a85174a35] Pending
	I0603 12:12:17.908761   73179 system_pods.go:89] "storage-provisioner" [9d9e7c2b-91a9-4394-8a08-a2c076d4b42d] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0603 12:12:17.908779   73179 retry.go:31] will retry after 517.651766ms: missing components: kube-dns, kube-proxy
	I0603 12:12:18.135778   73179 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.592660253s)
	I0603 12:12:18.135904   73179 main.go:141] libmachine: Making call to close driver server
	I0603 12:12:18.135945   73179 main.go:141] libmachine: (no-preload-602118) Calling .Close
	I0603 12:12:18.137972   73179 main.go:141] libmachine: (no-preload-602118) DBG | Closing plugin on server side
	I0603 12:12:18.138016   73179 main.go:141] libmachine: Successfully made call to close driver server
	I0603 12:12:18.138040   73179 main.go:141] libmachine: Making call to close connection to plugin binary
	I0603 12:12:18.138060   73179 main.go:141] libmachine: Making call to close driver server
	I0603 12:12:18.138071   73179 main.go:141] libmachine: (no-preload-602118) Calling .Close
	I0603 12:12:18.138394   73179 main.go:141] libmachine: (no-preload-602118) DBG | Closing plugin on server side
	I0603 12:12:18.138435   73179 main.go:141] libmachine: Successfully made call to close driver server
	I0603 12:12:18.138452   73179 main.go:141] libmachine: Making call to close connection to plugin binary
	I0603 12:12:18.138467   73179 addons.go:475] Verifying addon metrics-server=true in "no-preload-602118"
	I0603 12:12:18.139950   73179 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0603 12:12:16.887014   73294 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0603 12:12:16.887031   73294 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0603 12:12:16.887059   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .GetSSHHostname
	I0603 12:12:16.884952   73294 main.go:141] libmachine: () Calling .GetMachineName
	I0603 12:12:16.885388   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | domain default-k8s-diff-port-196710 has defined MAC address 52:54:00:9c:61:49 in network mk-default-k8s-diff-port-196710
	I0603 12:12:16.887151   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:61:49", ip: ""} in network mk-default-k8s-diff-port-196710: {Iface:virbr4 ExpiryTime:2024-06-03 12:58:47 +0000 UTC Type:0 Mac:52:54:00:9c:61:49 Iaid: IPaddr:192.168.61.60 Prefix:24 Hostname:default-k8s-diff-port-196710 Clientid:01:52:54:00:9c:61:49}
	I0603 12:12:16.887173   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | domain default-k8s-diff-port-196710 has defined IP address 192.168.61.60 and MAC address 52:54:00:9c:61:49 in network mk-default-k8s-diff-port-196710
	I0603 12:12:16.887719   73294 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 12:12:16.887741   73294 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 12:12:16.887932   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .GetSSHPort
	I0603 12:12:16.888207   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .GetSSHKeyPath
	I0603 12:12:16.888429   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .GetSSHUsername
	I0603 12:12:16.889197   73294 sshutil.go:53] new ssh client: &{IP:192.168.61.60 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19008-7755/.minikube/machines/default-k8s-diff-port-196710/id_rsa Username:docker}
	I0603 12:12:16.891158   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | domain default-k8s-diff-port-196710 has defined MAC address 52:54:00:9c:61:49 in network mk-default-k8s-diff-port-196710
	I0603 12:12:16.891613   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:61:49", ip: ""} in network mk-default-k8s-diff-port-196710: {Iface:virbr4 ExpiryTime:2024-06-03 12:58:47 +0000 UTC Type:0 Mac:52:54:00:9c:61:49 Iaid: IPaddr:192.168.61.60 Prefix:24 Hostname:default-k8s-diff-port-196710 Clientid:01:52:54:00:9c:61:49}
	I0603 12:12:16.891639   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | domain default-k8s-diff-port-196710 has defined IP address 192.168.61.60 and MAC address 52:54:00:9c:61:49 in network mk-default-k8s-diff-port-196710
	I0603 12:12:16.891801   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .GetSSHPort
	I0603 12:12:16.891979   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .GetSSHKeyPath
	I0603 12:12:16.892107   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .GetSSHUsername
	I0603 12:12:16.892220   73294 sshutil.go:53] new ssh client: &{IP:192.168.61.60 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19008-7755/.minikube/machines/default-k8s-diff-port-196710/id_rsa Username:docker}
	I0603 12:12:16.909637   73294 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35155
	I0603 12:12:16.910191   73294 main.go:141] libmachine: () Calling .GetVersion
	I0603 12:12:16.910809   73294 main.go:141] libmachine: Using API Version  1
	I0603 12:12:16.910836   73294 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 12:12:16.911344   73294 main.go:141] libmachine: () Calling .GetMachineName
	I0603 12:12:16.911542   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .GetState
	I0603 12:12:16.913489   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .DriverName
	I0603 12:12:16.913704   73294 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0603 12:12:16.913718   73294 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0603 12:12:16.913735   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .GetSSHHostname
	I0603 12:12:16.917538   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | domain default-k8s-diff-port-196710 has defined MAC address 52:54:00:9c:61:49 in network mk-default-k8s-diff-port-196710
	I0603 12:12:16.917994   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:61:49", ip: ""} in network mk-default-k8s-diff-port-196710: {Iface:virbr4 ExpiryTime:2024-06-03 12:58:47 +0000 UTC Type:0 Mac:52:54:00:9c:61:49 Iaid: IPaddr:192.168.61.60 Prefix:24 Hostname:default-k8s-diff-port-196710 Clientid:01:52:54:00:9c:61:49}
	I0603 12:12:16.918020   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | domain default-k8s-diff-port-196710 has defined IP address 192.168.61.60 and MAC address 52:54:00:9c:61:49 in network mk-default-k8s-diff-port-196710
	I0603 12:12:16.918116   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .GetSSHPort
	I0603 12:12:16.918243   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .GetSSHKeyPath
	I0603 12:12:16.918349   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .GetSSHUsername
	I0603 12:12:16.918445   73294 sshutil.go:53] new ssh client: &{IP:192.168.61.60 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19008-7755/.minikube/machines/default-k8s-diff-port-196710/id_rsa Username:docker}
	I0603 12:12:17.046824   73294 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0603 12:12:17.064066   73294 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-196710" to be "Ready" ...
	I0603 12:12:17.084082   73294 node_ready.go:49] node "default-k8s-diff-port-196710" has status "Ready":"True"
	I0603 12:12:17.084108   73294 node_ready.go:38] duration metric: took 19.978467ms for node "default-k8s-diff-port-196710" to be "Ready" ...
	I0603 12:12:17.084116   73294 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0603 12:12:17.095774   73294 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-fvgqr" in "kube-system" namespace to be "Ready" ...
	I0603 12:12:17.168174   73294 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0603 12:12:17.168200   73294 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0603 12:12:17.200793   73294 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0603 12:12:17.203132   73294 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0603 12:12:17.245827   73294 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0603 12:12:17.245855   73294 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0603 12:12:17.310865   73294 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0603 12:12:17.310894   73294 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0603 12:12:17.449447   73294 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0603 12:12:18.385411   73294 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.184578024s)
	I0603 12:12:18.385465   73294 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.182295951s)
	I0603 12:12:18.385505   73294 main.go:141] libmachine: Making call to close driver server
	I0603 12:12:18.385520   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .Close
	I0603 12:12:18.385470   73294 main.go:141] libmachine: Making call to close driver server
	I0603 12:12:18.385562   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .Close
	I0603 12:12:18.385878   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | Closing plugin on server side
	I0603 12:12:18.385905   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | Closing plugin on server side
	I0603 12:12:18.385954   73294 main.go:141] libmachine: Successfully made call to close driver server
	I0603 12:12:18.385971   73294 main.go:141] libmachine: Making call to close connection to plugin binary
	I0603 12:12:18.385980   73294 main.go:141] libmachine: Making call to close driver server
	I0603 12:12:18.386009   73294 main.go:141] libmachine: Successfully made call to close driver server
	I0603 12:12:18.386026   73294 main.go:141] libmachine: Making call to close connection to plugin binary
	I0603 12:12:18.386035   73294 main.go:141] libmachine: Making call to close driver server
	I0603 12:12:18.386043   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .Close
	I0603 12:12:18.386094   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .Close
	I0603 12:12:18.386336   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | Closing plugin on server side
	I0603 12:12:18.386374   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | Closing plugin on server side
	I0603 12:12:18.386425   73294 main.go:141] libmachine: Successfully made call to close driver server
	I0603 12:12:18.386460   73294 main.go:141] libmachine: Making call to close connection to plugin binary
	I0603 12:12:18.387994   73294 main.go:141] libmachine: Successfully made call to close driver server
	I0603 12:12:18.388012   73294 main.go:141] libmachine: Making call to close connection to plugin binary
	I0603 12:12:18.423011   73294 main.go:141] libmachine: Making call to close driver server
	I0603 12:12:18.423058   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .Close
	I0603 12:12:18.423412   73294 main.go:141] libmachine: Successfully made call to close driver server
	I0603 12:12:18.423433   73294 main.go:141] libmachine: Making call to close connection to plugin binary
	I0603 12:12:18.423473   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | Closing plugin on server side
	I0603 12:12:18.697521   73294 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.24802602s)
	I0603 12:12:18.697564   73294 main.go:141] libmachine: Making call to close driver server
	I0603 12:12:18.697575   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .Close
	I0603 12:12:18.697960   73294 main.go:141] libmachine: Successfully made call to close driver server
	I0603 12:12:18.697982   73294 main.go:141] libmachine: Making call to close connection to plugin binary
	I0603 12:12:18.698043   73294 main.go:141] libmachine: Making call to close driver server
	I0603 12:12:18.698061   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .Close
	I0603 12:12:18.698312   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | Closing plugin on server side
	I0603 12:12:18.698391   73294 main.go:141] libmachine: Successfully made call to close driver server
	I0603 12:12:18.698408   73294 main.go:141] libmachine: Making call to close connection to plugin binary
	I0603 12:12:18.698425   73294 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-196710"
	I0603 12:12:18.700421   73294 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0603 12:12:18.698680   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | Closing plugin on server side
	I0603 12:12:18.701834   73294 addons.go:510] duration metric: took 1.880261237s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0603 12:12:19.125961   73294 pod_ready.go:92] pod "coredns-7db6d8ff4d-fvgqr" in "kube-system" namespace has status "Ready":"True"
	I0603 12:12:19.125993   73294 pod_ready.go:81] duration metric: took 2.03019096s for pod "coredns-7db6d8ff4d-fvgqr" in "kube-system" namespace to be "Ready" ...
	I0603 12:12:19.126008   73294 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-196710" in "kube-system" namespace to be "Ready" ...
	I0603 12:12:19.142691   73294 pod_ready.go:92] pod "etcd-default-k8s-diff-port-196710" in "kube-system" namespace has status "Ready":"True"
	I0603 12:12:19.142711   73294 pod_ready.go:81] duration metric: took 16.694827ms for pod "etcd-default-k8s-diff-port-196710" in "kube-system" namespace to be "Ready" ...
	I0603 12:12:19.142721   73294 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-196710" in "kube-system" namespace to be "Ready" ...
	I0603 12:12:19.166768   73294 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-196710" in "kube-system" namespace has status "Ready":"True"
	I0603 12:12:19.166793   73294 pod_ready.go:81] duration metric: took 24.064572ms for pod "kube-apiserver-default-k8s-diff-port-196710" in "kube-system" namespace to be "Ready" ...
	I0603 12:12:19.166806   73294 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-196710" in "kube-system" namespace to be "Ready" ...
	I0603 12:12:19.177902   73294 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-196710" in "kube-system" namespace has status "Ready":"True"
	I0603 12:12:19.177917   73294 pod_ready.go:81] duration metric: took 11.103943ms for pod "kube-controller-manager-default-k8s-diff-port-196710" in "kube-system" namespace to be "Ready" ...
	I0603 12:12:19.177926   73294 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-j4gzg" in "kube-system" namespace to be "Ready" ...
	I0603 12:12:19.191217   73294 pod_ready.go:92] pod "kube-proxy-j4gzg" in "kube-system" namespace has status "Ready":"True"
	I0603 12:12:19.191242   73294 pod_ready.go:81] duration metric: took 13.306857ms for pod "kube-proxy-j4gzg" in "kube-system" namespace to be "Ready" ...
	I0603 12:12:19.191255   73294 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-196710" in "kube-system" namespace to be "Ready" ...
	I0603 12:12:19.499792   73294 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-196710" in "kube-system" namespace has status "Ready":"True"
	I0603 12:12:19.499815   73294 pod_ready.go:81] duration metric: took 308.552918ms for pod "kube-scheduler-default-k8s-diff-port-196710" in "kube-system" namespace to be "Ready" ...
	I0603 12:12:19.499823   73294 pod_ready.go:38] duration metric: took 2.415698619s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0603 12:12:19.499837   73294 api_server.go:52] waiting for apiserver process to appear ...
	I0603 12:12:19.499881   73294 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:12:19.516655   73294 api_server.go:72] duration metric: took 2.695130179s to wait for apiserver process to appear ...
	I0603 12:12:19.516686   73294 api_server.go:88] waiting for apiserver healthz status ...
	I0603 12:12:19.516707   73294 api_server.go:253] Checking apiserver healthz at https://192.168.61.60:8444/healthz ...
	I0603 12:12:19.521037   73294 api_server.go:279] https://192.168.61.60:8444/healthz returned 200:
	ok
	I0603 12:12:19.521988   73294 api_server.go:141] control plane version: v1.30.1
	I0603 12:12:19.522006   73294 api_server.go:131] duration metric: took 5.313149ms to wait for apiserver health ...
	I0603 12:12:19.522015   73294 system_pods.go:43] waiting for kube-system pods to appear ...
	I0603 12:12:18.141333   73179 addons.go:510] duration metric: took 2.138708426s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0603 12:12:18.445201   73179 system_pods.go:86] 9 kube-system pods found
	I0603 12:12:18.445243   73179 system_pods.go:89] "coredns-7db6d8ff4d-5gmj5" [474da426-9414-4a30-8b19-14e555e192de] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0603 12:12:18.445255   73179 system_pods.go:89] "coredns-7db6d8ff4d-dwptw" [7a0437fe-8e83-4acc-a92a-af29bf06db93] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0603 12:12:18.445266   73179 system_pods.go:89] "etcd-no-preload-602118" [d66d136a-b6c8-411c-b820-3e80f773accf] Running
	I0603 12:12:18.445275   73179 system_pods.go:89] "kube-apiserver-no-preload-602118" [299b92ab-ea99-45f5-b55d-b66a06daeaeb] Running
	I0603 12:12:18.445282   73179 system_pods.go:89] "kube-controller-manager-no-preload-602118" [16fd06ad-ff2d-4392-b453-9a8ed782b581] Running
	I0603 12:12:18.445289   73179 system_pods.go:89] "kube-proxy-tfxkl" [d6502635-478f-443c-8186-ab0616fcf4ac] Running
	I0603 12:12:18.445296   73179 system_pods.go:89] "kube-scheduler-no-preload-602118" [71ebde21-9840-43f5-b0a3-424e1fd2000e] Running
	I0603 12:12:18.445309   73179 system_pods.go:89] "metrics-server-569cc877fc-zpzbw" [b28cb265-532b-41ea-a242-001a85174a35] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0603 12:12:18.445318   73179 system_pods.go:89] "storage-provisioner" [9d9e7c2b-91a9-4394-8a08-a2c076d4b42d] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0603 12:12:18.445347   73179 retry.go:31] will retry after 493.36636ms: missing components: kube-dns
	I0603 12:12:18.950981   73179 system_pods.go:86] 9 kube-system pods found
	I0603 12:12:18.951013   73179 system_pods.go:89] "coredns-7db6d8ff4d-5gmj5" [474da426-9414-4a30-8b19-14e555e192de] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0603 12:12:18.951022   73179 system_pods.go:89] "coredns-7db6d8ff4d-dwptw" [7a0437fe-8e83-4acc-a92a-af29bf06db93] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0603 12:12:18.951028   73179 system_pods.go:89] "etcd-no-preload-602118" [d66d136a-b6c8-411c-b820-3e80f773accf] Running
	I0603 12:12:18.951033   73179 system_pods.go:89] "kube-apiserver-no-preload-602118" [299b92ab-ea99-45f5-b55d-b66a06daeaeb] Running
	I0603 12:12:18.951071   73179 system_pods.go:89] "kube-controller-manager-no-preload-602118" [16fd06ad-ff2d-4392-b453-9a8ed782b581] Running
	I0603 12:12:18.951079   73179 system_pods.go:89] "kube-proxy-tfxkl" [d6502635-478f-443c-8186-ab0616fcf4ac] Running
	I0603 12:12:18.951085   73179 system_pods.go:89] "kube-scheduler-no-preload-602118" [71ebde21-9840-43f5-b0a3-424e1fd2000e] Running
	I0603 12:12:18.951093   73179 system_pods.go:89] "metrics-server-569cc877fc-zpzbw" [b28cb265-532b-41ea-a242-001a85174a35] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0603 12:12:18.951106   73179 system_pods.go:89] "storage-provisioner" [9d9e7c2b-91a9-4394-8a08-a2c076d4b42d] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0603 12:12:18.951123   73179 retry.go:31] will retry after 784.878622ms: missing components: kube-dns
	I0603 12:12:19.743268   73179 system_pods.go:86] 9 kube-system pods found
	I0603 12:12:19.743302   73179 system_pods.go:89] "coredns-7db6d8ff4d-5gmj5" [474da426-9414-4a30-8b19-14e555e192de] Running
	I0603 12:12:19.743310   73179 system_pods.go:89] "coredns-7db6d8ff4d-dwptw" [7a0437fe-8e83-4acc-a92a-af29bf06db93] Running
	I0603 12:12:19.743323   73179 system_pods.go:89] "etcd-no-preload-602118" [d66d136a-b6c8-411c-b820-3e80f773accf] Running
	I0603 12:12:19.743330   73179 system_pods.go:89] "kube-apiserver-no-preload-602118" [299b92ab-ea99-45f5-b55d-b66a06daeaeb] Running
	I0603 12:12:19.743337   73179 system_pods.go:89] "kube-controller-manager-no-preload-602118" [16fd06ad-ff2d-4392-b453-9a8ed782b581] Running
	I0603 12:12:19.743343   73179 system_pods.go:89] "kube-proxy-tfxkl" [d6502635-478f-443c-8186-ab0616fcf4ac] Running
	I0603 12:12:19.743349   73179 system_pods.go:89] "kube-scheduler-no-preload-602118" [71ebde21-9840-43f5-b0a3-424e1fd2000e] Running
	I0603 12:12:19.743365   73179 system_pods.go:89] "metrics-server-569cc877fc-zpzbw" [b28cb265-532b-41ea-a242-001a85174a35] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0603 12:12:19.743376   73179 system_pods.go:89] "storage-provisioner" [9d9e7c2b-91a9-4394-8a08-a2c076d4b42d] Running
	I0603 12:12:19.743388   73179 system_pods.go:126] duration metric: took 3.077217613s to wait for k8s-apps to be running ...
	I0603 12:12:19.743399   73179 system_svc.go:44] waiting for kubelet service to be running ....
	I0603 12:12:19.743440   73179 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0603 12:12:19.759127   73179 system_svc.go:56] duration metric: took 15.720008ms WaitForService to wait for kubelet
	I0603 12:12:19.759152   73179 kubeadm.go:576] duration metric: took 3.756617312s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0603 12:12:19.759177   73179 node_conditions.go:102] verifying NodePressure condition ...
	I0603 12:12:19.761858   73179 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0603 12:12:19.761876   73179 node_conditions.go:123] node cpu capacity is 2
	I0603 12:12:19.761885   73179 node_conditions.go:105] duration metric: took 2.703518ms to run NodePressure ...
	I0603 12:12:19.761894   73179 start.go:240] waiting for startup goroutines ...
	I0603 12:12:19.761901   73179 start.go:245] waiting for cluster config update ...
	I0603 12:12:19.761910   73179 start.go:254] writing updated cluster config ...
	I0603 12:12:19.762150   73179 ssh_runner.go:195] Run: rm -f paused
	I0603 12:12:19.808158   73179 start.go:600] kubectl: 1.30.1, cluster: 1.30.1 (minor skew: 0)
	I0603 12:12:19.810271   73179 out.go:177] * Done! kubectl is now configured to use "no-preload-602118" cluster and "default" namespace by default
	I0603 12:12:17.205144   73662 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0603 12:12:17.215420   73662 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0603 12:12:17.215687   73662 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0603 12:12:19.703391   73294 system_pods.go:59] 9 kube-system pods found
	I0603 12:12:19.703422   73294 system_pods.go:61] "coredns-7db6d8ff4d-fvgqr" [c908a302-8c40-46aa-9e98-92baa297a7ed] Running
	I0603 12:12:19.703428   73294 system_pods.go:61] "coredns-7db6d8ff4d-pbndv" [91d83622-9883-407e-b0f4-eb2d18cd2483] Running
	I0603 12:12:19.703434   73294 system_pods.go:61] "etcd-default-k8s-diff-port-196710" [29eaf8a6-0759-4f27-9b6e-55beeba8f955] Running
	I0603 12:12:19.703439   73294 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-196710" [7bfa3724-0917-40be-89fe-fe5c67f4fd45] Running
	I0603 12:12:19.703444   73294 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-196710" [50e0af3b-d47c-4113-be78-9cf18060b505] Running
	I0603 12:12:19.703448   73294 system_pods.go:61] "kube-proxy-j4gzg" [2e603f37-93e0-429d-97b8-e9b997c26101] Running
	I0603 12:12:19.703453   73294 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-196710" [e50842a0-71ed-4c9e-811e-9b6bda31dfd0] Running
	I0603 12:12:19.703461   73294 system_pods.go:61] "metrics-server-569cc877fc-lxvbp" [36c7a3c5-6b64-42d2-93ed-2f6cf8234a7f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0603 12:12:19.703469   73294 system_pods.go:61] "storage-provisioner" [8bc80b69-d8f9-4d6a-9bf4-4a41d875a735] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0603 12:12:19.703483   73294 system_pods.go:74] duration metric: took 181.460766ms to wait for pod list to return data ...
	I0603 12:12:19.703494   73294 default_sa.go:34] waiting for default service account to be created ...
	I0603 12:12:19.899579   73294 default_sa.go:45] found service account: "default"
	I0603 12:12:19.899607   73294 default_sa.go:55] duration metric: took 196.097132ms for default service account to be created ...
	I0603 12:12:19.899617   73294 system_pods.go:116] waiting for k8s-apps to be running ...
	I0603 12:12:20.104618   73294 system_pods.go:86] 9 kube-system pods found
	I0603 12:12:20.104648   73294 system_pods.go:89] "coredns-7db6d8ff4d-fvgqr" [c908a302-8c40-46aa-9e98-92baa297a7ed] Running
	I0603 12:12:20.104656   73294 system_pods.go:89] "coredns-7db6d8ff4d-pbndv" [91d83622-9883-407e-b0f4-eb2d18cd2483] Running
	I0603 12:12:20.104662   73294 system_pods.go:89] "etcd-default-k8s-diff-port-196710" [29eaf8a6-0759-4f27-9b6e-55beeba8f955] Running
	I0603 12:12:20.104669   73294 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-196710" [7bfa3724-0917-40be-89fe-fe5c67f4fd45] Running
	I0603 12:12:20.104676   73294 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-196710" [50e0af3b-d47c-4113-be78-9cf18060b505] Running
	I0603 12:12:20.104682   73294 system_pods.go:89] "kube-proxy-j4gzg" [2e603f37-93e0-429d-97b8-e9b997c26101] Running
	I0603 12:12:20.104690   73294 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-196710" [e50842a0-71ed-4c9e-811e-9b6bda31dfd0] Running
	I0603 12:12:20.104704   73294 system_pods.go:89] "metrics-server-569cc877fc-lxvbp" [36c7a3c5-6b64-42d2-93ed-2f6cf8234a7f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0603 12:12:20.104716   73294 system_pods.go:89] "storage-provisioner" [8bc80b69-d8f9-4d6a-9bf4-4a41d875a735] Running
	I0603 12:12:20.104733   73294 system_pods.go:126] duration metric: took 205.107424ms to wait for k8s-apps to be running ...
	I0603 12:12:20.104746   73294 system_svc.go:44] waiting for kubelet service to be running ....
	I0603 12:12:20.104794   73294 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0603 12:12:20.120345   73294 system_svc.go:56] duration metric: took 15.592236ms WaitForService to wait for kubelet
	I0603 12:12:20.120374   73294 kubeadm.go:576] duration metric: took 3.298854629s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0603 12:12:20.120398   73294 node_conditions.go:102] verifying NodePressure condition ...
	I0603 12:12:20.299539   73294 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0603 12:12:20.299565   73294 node_conditions.go:123] node cpu capacity is 2
	I0603 12:12:20.299579   73294 node_conditions.go:105] duration metric: took 179.17433ms to run NodePressure ...
	I0603 12:12:20.299593   73294 start.go:240] waiting for startup goroutines ...
	I0603 12:12:20.299602   73294 start.go:245] waiting for cluster config update ...
	I0603 12:12:20.299613   73294 start.go:254] writing updated cluster config ...
	I0603 12:12:20.299896   73294 ssh_runner.go:195] Run: rm -f paused
	I0603 12:12:20.351961   73294 start.go:600] kubectl: 1.30.1, cluster: 1.30.1 (minor skew: 0)
	I0603 12:12:20.354040   73294 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-196710" cluster and "default" namespace by default
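At this point the default-k8s-diff-port-196710 profile has been written as the active kubectl context. A minimal way to exercise it by hand (assuming the default kubeconfig minikube writes to; this is a sketch, not part of the test harness):

    # switch to (or back to) the profile's context and list its pods
    kubectl config use-context default-k8s-diff-port-196710
    kubectl --context default-k8s-diff-port-196710 get pods -A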
	I0603 12:12:22.215864   73662 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0603 12:12:22.216210   73662 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0603 12:12:32.215921   73662 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0603 12:12:32.216130   73662 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
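The probe kubeadm keeps retrying here (for the v1.20.0 old-k8s-version start under PID 73662) can be reproduced directly on the guest; "connection refused" means the kubelet process is not listening at all, rather than answering unhealthy. A minimal manual check, run over ssh on the node:

    # kubelet's local healthz endpoint that kubeadm polls during wait-control-plane
    curl -sSL http://localhost:10248/healthz
    # if the connection is refused, inspect the kubelet unit itself
    sudo systemctl status kubelet --no-pager
    sudo journalctl -xeu kubelet --no-pager | tail -n 50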
	I0603 12:12:40.270116   72964 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (31.60882832s)
	I0603 12:12:40.270214   72964 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0603 12:12:40.288350   72964 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0603 12:12:40.298477   72964 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0603 12:12:40.308047   72964 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0603 12:12:40.308063   72964 kubeadm.go:156] found existing configuration files:
	
	I0603 12:12:40.308095   72964 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0603 12:12:40.317173   72964 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0603 12:12:40.317221   72964 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0603 12:12:40.326431   72964 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0603 12:12:40.335372   72964 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0603 12:12:40.335421   72964 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0603 12:12:40.345520   72964 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0603 12:12:40.354836   72964 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0603 12:12:40.354881   72964 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0603 12:12:40.364667   72964 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0603 12:12:40.375714   72964 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0603 12:12:40.375768   72964 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
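The grep/rm sequence above is minikube's stale-kubeconfig cleanup: each file kubeadm may have left behind is kept only if it already points at the expected control-plane endpoint. A hand-rolled equivalent, mirroring the commands in the log (a sketch, not minikube's own code):

    endpoint="https://control-plane.minikube.internal:8443"
    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
      # keep the file only if it already targets the expected endpoint
      sudo grep -q "$endpoint" "/etc/kubernetes/$f" 2>/dev/null || sudo rm -f "/etc/kubernetes/$f"
    done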
	I0603 12:12:40.387249   72964 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0603 12:12:40.587569   72964 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0603 12:12:49.228482   72964 kubeadm.go:309] [init] Using Kubernetes version: v1.30.1
	I0603 12:12:49.228556   72964 kubeadm.go:309] [preflight] Running pre-flight checks
	I0603 12:12:49.228654   72964 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0603 12:12:49.228817   72964 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0603 12:12:49.228965   72964 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0603 12:12:49.229056   72964 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0603 12:12:49.230616   72964 out.go:204]   - Generating certificates and keys ...
	I0603 12:12:49.230705   72964 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0603 12:12:49.230778   72964 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0603 12:12:49.230884   72964 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0603 12:12:49.230943   72964 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0603 12:12:49.231001   72964 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0603 12:12:49.231071   72964 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0603 12:12:49.231302   72964 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0603 12:12:49.231400   72964 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0603 12:12:49.231487   72964 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0603 12:12:49.231595   72964 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0603 12:12:49.231645   72964 kubeadm.go:309] [certs] Using the existing "sa" key
	I0603 12:12:49.231731   72964 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0603 12:12:49.231842   72964 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0603 12:12:49.231930   72964 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0603 12:12:49.232009   72964 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0603 12:12:49.232105   72964 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0603 12:12:49.232188   72964 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0603 12:12:49.232305   72964 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0603 12:12:49.232392   72964 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0603 12:12:49.234435   72964 out.go:204]   - Booting up control plane ...
	I0603 12:12:49.234513   72964 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0603 12:12:49.234592   72964 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0603 12:12:49.234680   72964 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0603 12:12:49.234803   72964 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0603 12:12:49.234936   72964 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0603 12:12:49.235006   72964 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0603 12:12:49.235182   72964 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0603 12:12:49.235283   72964 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0603 12:12:49.235361   72964 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 502.484209ms
	I0603 12:12:49.235428   72964 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0603 12:12:49.235507   72964 kubeadm.go:309] [api-check] The API server is healthy after 5.001411221s
	I0603 12:12:49.235621   72964 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0603 12:12:49.235730   72964 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0603 12:12:49.235778   72964 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0603 12:12:49.235941   72964 kubeadm.go:309] [mark-control-plane] Marking the node embed-certs-725022 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0603 12:12:49.236026   72964 kubeadm.go:309] [bootstrap-token] Using token: 0tfgxu.iied44jkidnxw3ef
	I0603 12:12:49.237200   72964 out.go:204]   - Configuring RBAC rules ...
	I0603 12:12:49.237290   72964 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0603 12:12:49.237369   72964 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0603 12:12:49.237497   72964 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0603 12:12:49.237671   72964 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0603 12:12:49.237782   72964 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0603 12:12:49.237879   72964 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0603 12:12:49.238007   72964 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0603 12:12:49.238092   72964 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0603 12:12:49.238156   72964 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0603 12:12:49.238166   72964 kubeadm.go:309] 
	I0603 12:12:49.238242   72964 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0603 12:12:49.238250   72964 kubeadm.go:309] 
	I0603 12:12:49.238351   72964 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0603 12:12:49.238359   72964 kubeadm.go:309] 
	I0603 12:12:49.238392   72964 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0603 12:12:49.238472   72964 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0603 12:12:49.238549   72964 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0603 12:12:49.238558   72964 kubeadm.go:309] 
	I0603 12:12:49.238641   72964 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0603 12:12:49.238649   72964 kubeadm.go:309] 
	I0603 12:12:49.238722   72964 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0603 12:12:49.238737   72964 kubeadm.go:309] 
	I0603 12:12:49.238810   72964 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0603 12:12:49.238874   72964 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0603 12:12:49.238931   72964 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0603 12:12:49.238937   72964 kubeadm.go:309] 
	I0603 12:12:49.239007   72964 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0603 12:12:49.239103   72964 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0603 12:12:49.239112   72964 kubeadm.go:309] 
	I0603 12:12:49.239179   72964 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token 0tfgxu.iied44jkidnxw3ef \
	I0603 12:12:49.239305   72964 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:efe6cbdd58a590fb6c3b56a05a1648145bfb13b8a8cd2383ea34b710fa987e0b \
	I0603 12:12:49.239341   72964 kubeadm.go:309] 	--control-plane 
	I0603 12:12:49.239355   72964 kubeadm.go:309] 
	I0603 12:12:49.239457   72964 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0603 12:12:49.239466   72964 kubeadm.go:309] 
	I0603 12:12:49.239574   72964 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token 0tfgxu.iied44jkidnxw3ef \
	I0603 12:12:49.239677   72964 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:efe6cbdd58a590fb6c3b56a05a1648145bfb13b8a8cd2383ea34b710fa987e0b 
	I0603 12:12:49.239688   72964 cni.go:84] Creating CNI manager for ""
	I0603 12:12:49.239694   72964 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0603 12:12:49.241096   72964 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0603 12:12:49.242158   72964 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0603 12:12:49.253535   72964 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
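The 496-byte 1-k8s.conflist copied here is minikube's bridge CNI configuration; its exact contents are not echoed in the log. For orientation, a generic bridge conflist of the same shape looks roughly like the following (field values are illustrative assumptions, not the bytes minikube wrote):

    sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
    {
      "cniVersion": "0.4.0",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }
    EOF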
	I0603 12:12:49.272592   72964 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0603 12:12:49.272655   72964 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:12:49.272699   72964 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-725022 minikube.k8s.io/updated_at=2024_06_03T12_12_49_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=599070631c2216ebc936292d491e4fe10e15b9d8 minikube.k8s.io/name=embed-certs-725022 minikube.k8s.io/primary=true
	I0603 12:12:49.301181   72964 ops.go:34] apiserver oom_adj: -16
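The label command above stamps the new node with minikube metadata, and the oom_adj read confirms the apiserver is shielded from the OOM killer (-16). To spot-check both by hand (sketch; node name taken from the log):

    kubectl get node embed-certs-725022 --show-labels | tr ',' '\n' | grep minikube.k8s.io
    sudo cat /proc/$(pgrep kube-apiserver)/oom_adj    # expect -16 for the apiserver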
	I0603 12:12:49.473931   72964 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:12:49.974552   72964 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:12:50.474107   72964 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:12:50.974508   72964 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:12:51.474202   72964 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:12:51.974903   72964 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:12:52.474722   72964 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:12:52.973981   72964 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:12:53.473979   72964 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:12:53.974372   72964 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:12:54.474057   72964 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:12:52.215684   73662 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0603 12:12:52.215951   73662 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0603 12:12:54.974299   72964 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:12:55.474704   72964 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:12:55.973998   72964 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:12:56.474351   72964 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:12:56.974942   72964 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:12:57.474651   72964 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:12:57.974575   72964 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:12:58.474054   72964 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:12:58.974928   72964 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:12:59.474724   72964 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:12:59.974538   72964 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:13:00.474341   72964 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:13:00.974134   72964 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:13:01.474970   72964 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:13:01.974549   72964 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:13:02.071778   72964 kubeadm.go:1107] duration metric: took 12.799179684s to wait for elevateKubeSystemPrivileges
	W0603 12:13:02.071819   72964 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0603 12:13:02.071826   72964 kubeadm.go:393] duration metric: took 5m13.883244188s to StartCluster
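The burst of "get sa default" runs above (roughly every 500ms from 12:12:49 to 12:13:02) is minikube waiting for the default ServiceAccount to appear before treating the kube-system cluster-admin binding as usable. A rough by-hand equivalent of that wait (a sketch, not minikube's code):

    K=/var/lib/minikube/binaries/v1.30.1/kubectl
    # poll until the default ServiceAccount exists, then continue
    until sudo $K get sa default --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
      sleep 0.5
    done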
	I0603 12:13:02.071847   72964 settings.go:142] acquiring lock: {Name:mkda1bdbbfe91266270f1d999e6d56fc2830d6f8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 12:13:02.071926   72964 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19008-7755/kubeconfig
	I0603 12:13:02.073849   72964 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19008-7755/kubeconfig: {Name:mk2f45bf5da44c5570e14124ae482cac309d97b6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 12:13:02.074094   72964 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.72.245 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0603 12:13:02.075473   72964 out.go:177] * Verifying Kubernetes components...
	I0603 12:13:02.074201   72964 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0603 12:13:02.074273   72964 config.go:182] Loaded profile config "embed-certs-725022": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0603 12:13:02.076687   72964 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0603 12:13:02.076702   72964 addons.go:69] Setting default-storageclass=true in profile "embed-certs-725022"
	I0603 12:13:02.076709   72964 addons.go:69] Setting metrics-server=true in profile "embed-certs-725022"
	I0603 12:13:02.076735   72964 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-725022"
	I0603 12:13:02.076739   72964 addons.go:234] Setting addon metrics-server=true in "embed-certs-725022"
	W0603 12:13:02.076747   72964 addons.go:243] addon metrics-server should already be in state true
	I0603 12:13:02.076779   72964 host.go:66] Checking if "embed-certs-725022" exists ...
	I0603 12:13:02.077065   72964 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 12:13:02.077105   72964 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 12:13:02.077123   72964 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 12:13:02.077144   72964 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 12:13:02.076690   72964 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-725022"
	I0603 12:13:02.077321   72964 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-725022"
	W0603 12:13:02.077330   72964 addons.go:243] addon storage-provisioner should already be in state true
	I0603 12:13:02.077353   72964 host.go:66] Checking if "embed-certs-725022" exists ...
	I0603 12:13:02.077701   72964 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 12:13:02.077727   72964 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 12:13:02.093285   72964 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38087
	I0603 12:13:02.093594   72964 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41067
	I0603 12:13:02.093714   72964 main.go:141] libmachine: () Calling .GetVersion
	I0603 12:13:02.094085   72964 main.go:141] libmachine: () Calling .GetVersion
	I0603 12:13:02.094294   72964 main.go:141] libmachine: Using API Version  1
	I0603 12:13:02.094315   72964 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 12:13:02.094587   72964 main.go:141] libmachine: Using API Version  1
	I0603 12:13:02.094609   72964 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 12:13:02.094689   72964 main.go:141] libmachine: () Calling .GetMachineName
	I0603 12:13:02.094950   72964 main.go:141] libmachine: () Calling .GetMachineName
	I0603 12:13:02.095244   72964 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 12:13:02.095268   72964 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 12:13:02.095454   72964 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 12:13:02.095491   72964 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 12:13:02.096441   72964 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39221
	I0603 12:13:02.097030   72964 main.go:141] libmachine: () Calling .GetVersion
	I0603 12:13:02.097568   72964 main.go:141] libmachine: Using API Version  1
	I0603 12:13:02.097590   72964 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 12:13:02.097931   72964 main.go:141] libmachine: () Calling .GetMachineName
	I0603 12:13:02.098114   72964 main.go:141] libmachine: (embed-certs-725022) Calling .GetState
	I0603 12:13:02.101980   72964 addons.go:234] Setting addon default-storageclass=true in "embed-certs-725022"
	W0603 12:13:02.102004   72964 addons.go:243] addon default-storageclass should already be in state true
	I0603 12:13:02.102030   72964 host.go:66] Checking if "embed-certs-725022" exists ...
	I0603 12:13:02.102405   72964 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 12:13:02.102443   72964 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 12:13:02.110825   72964 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44273
	I0603 12:13:02.111295   72964 main.go:141] libmachine: () Calling .GetVersion
	I0603 12:13:02.111721   72964 main.go:141] libmachine: Using API Version  1
	I0603 12:13:02.111743   72964 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 12:13:02.112109   72964 main.go:141] libmachine: () Calling .GetMachineName
	I0603 12:13:02.112287   72964 main.go:141] libmachine: (embed-certs-725022) Calling .GetState
	I0603 12:13:02.112969   72964 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46567
	I0603 12:13:02.113391   72964 main.go:141] libmachine: () Calling .GetVersion
	I0603 12:13:02.113883   72964 main.go:141] libmachine: Using API Version  1
	I0603 12:13:02.113898   72964 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 12:13:02.113960   72964 main.go:141] libmachine: (embed-certs-725022) Calling .DriverName
	I0603 12:13:02.115733   72964 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0603 12:13:02.114328   72964 main.go:141] libmachine: () Calling .GetMachineName
	I0603 12:13:02.116913   72964 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0603 12:13:02.116925   72964 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0603 12:13:02.116937   72964 main.go:141] libmachine: (embed-certs-725022) Calling .GetSSHHostname
	I0603 12:13:02.117042   72964 main.go:141] libmachine: (embed-certs-725022) Calling .GetState
	I0603 12:13:02.119310   72964 main.go:141] libmachine: (embed-certs-725022) Calling .DriverName
	I0603 12:13:02.119549   72964 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45585
	I0603 12:13:02.120720   72964 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0603 12:13:02.119998   72964 main.go:141] libmachine: () Calling .GetVersion
	I0603 12:13:02.120276   72964 main.go:141] libmachine: (embed-certs-725022) DBG | domain embed-certs-725022 has defined MAC address 52:54:00:ba:41:8c in network mk-embed-certs-725022
	I0603 12:13:02.122038   72964 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0603 12:13:02.122054   72964 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0603 12:13:02.122072   72964 main.go:141] libmachine: (embed-certs-725022) Calling .GetSSHHostname
	I0603 12:13:02.120815   72964 main.go:141] libmachine: (embed-certs-725022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:41:8c", ip: ""} in network mk-embed-certs-725022: {Iface:virbr3 ExpiryTime:2024-06-03 12:58:10 +0000 UTC Type:0 Mac:52:54:00:ba:41:8c Iaid: IPaddr:192.168.72.245 Prefix:24 Hostname:embed-certs-725022 Clientid:01:52:54:00:ba:41:8c}
	I0603 12:13:02.122134   72964 main.go:141] libmachine: (embed-certs-725022) DBG | domain embed-certs-725022 has defined IP address 192.168.72.245 and MAC address 52:54:00:ba:41:8c in network mk-embed-certs-725022
	I0603 12:13:02.120873   72964 main.go:141] libmachine: (embed-certs-725022) Calling .GetSSHPort
	I0603 12:13:02.121231   72964 main.go:141] libmachine: Using API Version  1
	I0603 12:13:02.122186   72964 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 12:13:02.122623   72964 main.go:141] libmachine: () Calling .GetMachineName
	I0603 12:13:02.122637   72964 main.go:141] libmachine: (embed-certs-725022) Calling .GetSSHKeyPath
	I0603 12:13:02.122823   72964 main.go:141] libmachine: (embed-certs-725022) Calling .GetSSHUsername
	I0603 12:13:02.123306   72964 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 12:13:02.123365   72964 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 12:13:02.123751   72964 sshutil.go:53] new ssh client: &{IP:192.168.72.245 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19008-7755/.minikube/machines/embed-certs-725022/id_rsa Username:docker}
	I0603 12:13:02.125086   72964 main.go:141] libmachine: (embed-certs-725022) DBG | domain embed-certs-725022 has defined MAC address 52:54:00:ba:41:8c in network mk-embed-certs-725022
	I0603 12:13:02.125450   72964 main.go:141] libmachine: (embed-certs-725022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:41:8c", ip: ""} in network mk-embed-certs-725022: {Iface:virbr3 ExpiryTime:2024-06-03 12:58:10 +0000 UTC Type:0 Mac:52:54:00:ba:41:8c Iaid: IPaddr:192.168.72.245 Prefix:24 Hostname:embed-certs-725022 Clientid:01:52:54:00:ba:41:8c}
	I0603 12:13:02.125474   72964 main.go:141] libmachine: (embed-certs-725022) DBG | domain embed-certs-725022 has defined IP address 192.168.72.245 and MAC address 52:54:00:ba:41:8c in network mk-embed-certs-725022
	I0603 12:13:02.125627   72964 main.go:141] libmachine: (embed-certs-725022) Calling .GetSSHPort
	I0603 12:13:02.125863   72964 main.go:141] libmachine: (embed-certs-725022) Calling .GetSSHKeyPath
	I0603 12:13:02.126050   72964 main.go:141] libmachine: (embed-certs-725022) Calling .GetSSHUsername
	I0603 12:13:02.126199   72964 sshutil.go:53] new ssh client: &{IP:192.168.72.245 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19008-7755/.minikube/machines/embed-certs-725022/id_rsa Username:docker}
	I0603 12:13:02.140680   72964 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38775
	I0603 12:13:02.141121   72964 main.go:141] libmachine: () Calling .GetVersion
	I0603 12:13:02.141624   72964 main.go:141] libmachine: Using API Version  1
	I0603 12:13:02.141649   72964 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 12:13:02.142002   72964 main.go:141] libmachine: () Calling .GetMachineName
	I0603 12:13:02.142377   72964 main.go:141] libmachine: (embed-certs-725022) Calling .GetState
	I0603 12:13:02.144249   72964 main.go:141] libmachine: (embed-certs-725022) Calling .DriverName
	I0603 12:13:02.144453   72964 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0603 12:13:02.144469   72964 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0603 12:13:02.144486   72964 main.go:141] libmachine: (embed-certs-725022) Calling .GetSSHHostname
	I0603 12:13:02.147627   72964 main.go:141] libmachine: (embed-certs-725022) DBG | domain embed-certs-725022 has defined MAC address 52:54:00:ba:41:8c in network mk-embed-certs-725022
	I0603 12:13:02.148109   72964 main.go:141] libmachine: (embed-certs-725022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:41:8c", ip: ""} in network mk-embed-certs-725022: {Iface:virbr3 ExpiryTime:2024-06-03 12:58:10 +0000 UTC Type:0 Mac:52:54:00:ba:41:8c Iaid: IPaddr:192.168.72.245 Prefix:24 Hostname:embed-certs-725022 Clientid:01:52:54:00:ba:41:8c}
	I0603 12:13:02.148129   72964 main.go:141] libmachine: (embed-certs-725022) DBG | domain embed-certs-725022 has defined IP address 192.168.72.245 and MAC address 52:54:00:ba:41:8c in network mk-embed-certs-725022
	I0603 12:13:02.148304   72964 main.go:141] libmachine: (embed-certs-725022) Calling .GetSSHPort
	I0603 12:13:02.148486   72964 main.go:141] libmachine: (embed-certs-725022) Calling .GetSSHKeyPath
	I0603 12:13:02.148604   72964 main.go:141] libmachine: (embed-certs-725022) Calling .GetSSHUsername
	I0603 12:13:02.148741   72964 sshutil.go:53] new ssh client: &{IP:192.168.72.245 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19008-7755/.minikube/machines/embed-certs-725022/id_rsa Username:docker}
	I0603 12:13:02.304095   72964 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0603 12:13:02.338638   72964 node_ready.go:35] waiting up to 6m0s for node "embed-certs-725022" to be "Ready" ...
	I0603 12:13:02.347843   72964 node_ready.go:49] node "embed-certs-725022" has status "Ready":"True"
	I0603 12:13:02.347872   72964 node_ready.go:38] duration metric: took 9.197667ms for node "embed-certs-725022" to be "Ready" ...
	I0603 12:13:02.347885   72964 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0603 12:13:02.353074   72964 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-4gbj2" in "kube-system" namespace to be "Ready" ...
	I0603 12:13:02.437841   72964 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0603 12:13:02.477856   72964 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0603 12:13:02.477876   72964 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0603 12:13:02.487138   72964 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0603 12:13:02.530568   72964 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0603 12:13:02.530591   72964 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0603 12:13:02.606906   72964 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0603 12:13:02.606933   72964 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0603 12:13:02.708268   72964 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0603 12:13:03.372809   72964 main.go:141] libmachine: Making call to close driver server
	I0603 12:13:03.372886   72964 main.go:141] libmachine: (embed-certs-725022) Calling .Close
	I0603 12:13:03.372924   72964 main.go:141] libmachine: Making call to close driver server
	I0603 12:13:03.372982   72964 main.go:141] libmachine: (embed-certs-725022) Calling .Close
	I0603 12:13:03.373369   72964 main.go:141] libmachine: Successfully made call to close driver server
	I0603 12:13:03.373457   72964 main.go:141] libmachine: Making call to close connection to plugin binary
	I0603 12:13:03.373472   72964 main.go:141] libmachine: Making call to close driver server
	I0603 12:13:03.373480   72964 main.go:141] libmachine: (embed-certs-725022) Calling .Close
	I0603 12:13:03.373412   72964 main.go:141] libmachine: Successfully made call to close driver server
	I0603 12:13:03.373510   72964 main.go:141] libmachine: Making call to close connection to plugin binary
	I0603 12:13:03.373522   72964 main.go:141] libmachine: Making call to close driver server
	I0603 12:13:03.373533   72964 main.go:141] libmachine: (embed-certs-725022) Calling .Close
	I0603 12:13:03.373417   72964 main.go:141] libmachine: (embed-certs-725022) DBG | Closing plugin on server side
	I0603 12:13:03.373431   72964 main.go:141] libmachine: (embed-certs-725022) DBG | Closing plugin on server side
	I0603 12:13:03.373858   72964 main.go:141] libmachine: Successfully made call to close driver server
	I0603 12:13:03.373873   72964 main.go:141] libmachine: Making call to close connection to plugin binary
	I0603 12:13:03.374065   72964 main.go:141] libmachine: (embed-certs-725022) DBG | Closing plugin on server side
	I0603 12:13:03.374087   72964 main.go:141] libmachine: Successfully made call to close driver server
	I0603 12:13:03.374093   72964 main.go:141] libmachine: Making call to close connection to plugin binary
	I0603 12:13:03.374168   72964 main.go:141] libmachine: (embed-certs-725022) DBG | Closing plugin on server side
	I0603 12:13:03.404799   72964 main.go:141] libmachine: Making call to close driver server
	I0603 12:13:03.404825   72964 main.go:141] libmachine: (embed-certs-725022) Calling .Close
	I0603 12:13:03.405101   72964 main.go:141] libmachine: (embed-certs-725022) DBG | Closing plugin on server side
	I0603 12:13:03.405101   72964 main.go:141] libmachine: Successfully made call to close driver server
	I0603 12:13:03.405125   72964 main.go:141] libmachine: Making call to close connection to plugin binary
	I0603 12:13:03.855630   72964 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.147319188s)
	I0603 12:13:03.855683   72964 main.go:141] libmachine: Making call to close driver server
	I0603 12:13:03.855700   72964 main.go:141] libmachine: (embed-certs-725022) Calling .Close
	I0603 12:13:03.856046   72964 main.go:141] libmachine: (embed-certs-725022) DBG | Closing plugin on server side
	I0603 12:13:03.856085   72964 main.go:141] libmachine: Successfully made call to close driver server
	I0603 12:13:03.856099   72964 main.go:141] libmachine: Making call to close connection to plugin binary
	I0603 12:13:03.856108   72964 main.go:141] libmachine: Making call to close driver server
	I0603 12:13:03.856119   72964 main.go:141] libmachine: (embed-certs-725022) Calling .Close
	I0603 12:13:03.856408   72964 main.go:141] libmachine: Successfully made call to close driver server
	I0603 12:13:03.856426   72964 main.go:141] libmachine: Making call to close connection to plugin binary
	I0603 12:13:03.856436   72964 addons.go:475] Verifying addon metrics-server=true in "embed-certs-725022"
	I0603 12:13:03.858229   72964 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0603 12:13:03.859384   72964 addons.go:510] duration metric: took 1.785186744s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
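With the addons reported enabled, a quick manual check that metrics-server actually becomes ready (the related tests in this report time out waiting for it) could look like this; the object names are the standard ones the addon manifests create:

    kubectl -n kube-system get deploy metrics-server
    kubectl get apiservice v1beta1.metrics.k8s.io    # Available=True once it is serving
    kubectl top nodes                                # errors until the first metrics scrape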
	I0603 12:13:04.360708   72964 pod_ready.go:102] pod "coredns-7db6d8ff4d-4gbj2" in "kube-system" namespace has status "Ready":"False"
	I0603 12:13:04.860041   72964 pod_ready.go:92] pod "coredns-7db6d8ff4d-4gbj2" in "kube-system" namespace has status "Ready":"True"
	I0603 12:13:04.860064   72964 pod_ready.go:81] duration metric: took 2.506957346s for pod "coredns-7db6d8ff4d-4gbj2" in "kube-system" namespace to be "Ready" ...
	I0603 12:13:04.860077   72964 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-x9fw5" in "kube-system" namespace to be "Ready" ...
	I0603 12:13:04.864947   72964 pod_ready.go:92] pod "coredns-7db6d8ff4d-x9fw5" in "kube-system" namespace has status "Ready":"True"
	I0603 12:13:04.864967   72964 pod_ready.go:81] duration metric: took 4.883476ms for pod "coredns-7db6d8ff4d-x9fw5" in "kube-system" namespace to be "Ready" ...
	I0603 12:13:04.864975   72964 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-725022" in "kube-system" namespace to be "Ready" ...
	I0603 12:13:04.869979   72964 pod_ready.go:92] pod "etcd-embed-certs-725022" in "kube-system" namespace has status "Ready":"True"
	I0603 12:13:04.870000   72964 pod_ready.go:81] duration metric: took 5.018776ms for pod "etcd-embed-certs-725022" in "kube-system" namespace to be "Ready" ...
	I0603 12:13:04.870012   72964 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-725022" in "kube-system" namespace to be "Ready" ...
	I0603 12:13:04.875292   72964 pod_ready.go:92] pod "kube-apiserver-embed-certs-725022" in "kube-system" namespace has status "Ready":"True"
	I0603 12:13:04.875309   72964 pod_ready.go:81] duration metric: took 5.289101ms for pod "kube-apiserver-embed-certs-725022" in "kube-system" namespace to be "Ready" ...
	I0603 12:13:04.875317   72964 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-725022" in "kube-system" namespace to be "Ready" ...
	I0603 12:13:04.883604   72964 pod_ready.go:92] pod "kube-controller-manager-embed-certs-725022" in "kube-system" namespace has status "Ready":"True"
	I0603 12:13:04.883619   72964 pod_ready.go:81] duration metric: took 8.297056ms for pod "kube-controller-manager-embed-certs-725022" in "kube-system" namespace to be "Ready" ...
	I0603 12:13:04.883627   72964 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-7qp6h" in "kube-system" namespace to be "Ready" ...
	I0603 12:13:05.257971   72964 pod_ready.go:92] pod "kube-proxy-7qp6h" in "kube-system" namespace has status "Ready":"True"
	I0603 12:13:05.257994   72964 pod_ready.go:81] duration metric: took 374.360354ms for pod "kube-proxy-7qp6h" in "kube-system" namespace to be "Ready" ...
	I0603 12:13:05.258003   72964 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-725022" in "kube-system" namespace to be "Ready" ...
	I0603 12:13:05.657811   72964 pod_ready.go:92] pod "kube-scheduler-embed-certs-725022" in "kube-system" namespace has status "Ready":"True"
	I0603 12:13:05.657838   72964 pod_ready.go:81] duration metric: took 399.828323ms for pod "kube-scheduler-embed-certs-725022" in "kube-system" namespace to be "Ready" ...
	I0603 12:13:05.657849   72964 pod_ready.go:38] duration metric: took 3.309954137s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0603 12:13:05.657866   72964 api_server.go:52] waiting for apiserver process to appear ...
	I0603 12:13:05.657920   72964 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:13:05.673837   72964 api_server.go:72] duration metric: took 3.599705436s to wait for apiserver process to appear ...
	I0603 12:13:05.673858   72964 api_server.go:88] waiting for apiserver healthz status ...
	I0603 12:13:05.673876   72964 api_server.go:253] Checking apiserver healthz at https://192.168.72.245:8443/healthz ...
	I0603 12:13:05.679549   72964 api_server.go:279] https://192.168.72.245:8443/healthz returned 200:
	ok
	I0603 12:13:05.680688   72964 api_server.go:141] control plane version: v1.30.1
	I0603 12:13:05.680709   72964 api_server.go:131] duration metric: took 6.844232ms to wait for apiserver health ...
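The healthz probe above can be reproduced from the host; /healthz is readable without client credentials under default RBAC, so skipping certificate verification is enough for a quick check (sketch):

    curl -sk https://192.168.72.245:8443/healthz && echo    # expect "ok"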
	I0603 12:13:05.680717   72964 system_pods.go:43] waiting for kube-system pods to appear ...
	I0603 12:13:05.861416   72964 system_pods.go:59] 9 kube-system pods found
	I0603 12:13:05.861452   72964 system_pods.go:61] "coredns-7db6d8ff4d-4gbj2" [0e46c731-84e4-4cb2-8125-2b61c10916a3] Running
	I0603 12:13:05.861459   72964 system_pods.go:61] "coredns-7db6d8ff4d-x9fw5" [1ed6c0e0-2d13-410f-bdf1-6620fb2503ed] Running
	I0603 12:13:05.861469   72964 system_pods.go:61] "etcd-embed-certs-725022" [7c8767c0-ca82-495c-92fa-759b698ebd0f] Running
	I0603 12:13:05.861475   72964 system_pods.go:61] "kube-apiserver-embed-certs-725022" [fe019ffc-5b0c-4271-a9dd-830262d1edd9] Running
	I0603 12:13:05.861479   72964 system_pods.go:61] "kube-controller-manager-embed-certs-725022" [8bde2240-7021-4ab7-9e51-2a7b921c4bf1] Running
	I0603 12:13:05.861483   72964 system_pods.go:61] "kube-proxy-7qp6h" [7869cd1d-785d-401d-aceb-854cffd63d73] Running
	I0603 12:13:05.861489   72964 system_pods.go:61] "kube-scheduler-embed-certs-725022" [ff93e1d0-8bb2-4026-b9d2-1710dd9f18b7] Running
	I0603 12:13:05.861497   72964 system_pods.go:61] "metrics-server-569cc877fc-jgmbs" [148d8ece-e094-4df9-989a-1bc59a33b7ca] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0603 12:13:05.861504   72964 system_pods.go:61] "storage-provisioner" [cde9aa2d-6a26-4f83-b5df-ae24b22df27a] Running
	I0603 12:13:05.861515   72964 system_pods.go:74] duration metric: took 180.791789ms to wait for pod list to return data ...
	I0603 12:13:05.861526   72964 default_sa.go:34] waiting for default service account to be created ...
	I0603 12:13:06.058059   72964 default_sa.go:45] found service account: "default"
	I0603 12:13:06.058088   72964 default_sa.go:55] duration metric: took 196.551592ms for default service account to be created ...
	I0603 12:13:06.058100   72964 system_pods.go:116] waiting for k8s-apps to be running ...
	I0603 12:13:06.261793   72964 system_pods.go:86] 9 kube-system pods found
	I0603 12:13:06.261828   72964 system_pods.go:89] "coredns-7db6d8ff4d-4gbj2" [0e46c731-84e4-4cb2-8125-2b61c10916a3] Running
	I0603 12:13:06.261835   72964 system_pods.go:89] "coredns-7db6d8ff4d-x9fw5" [1ed6c0e0-2d13-410f-bdf1-6620fb2503ed] Running
	I0603 12:13:06.261840   72964 system_pods.go:89] "etcd-embed-certs-725022" [7c8767c0-ca82-495c-92fa-759b698ebd0f] Running
	I0603 12:13:06.261846   72964 system_pods.go:89] "kube-apiserver-embed-certs-725022" [fe019ffc-5b0c-4271-a9dd-830262d1edd9] Running
	I0603 12:13:06.261853   72964 system_pods.go:89] "kube-controller-manager-embed-certs-725022" [8bde2240-7021-4ab7-9e51-2a7b921c4bf1] Running
	I0603 12:13:06.261860   72964 system_pods.go:89] "kube-proxy-7qp6h" [7869cd1d-785d-401d-aceb-854cffd63d73] Running
	I0603 12:13:06.261866   72964 system_pods.go:89] "kube-scheduler-embed-certs-725022" [ff93e1d0-8bb2-4026-b9d2-1710dd9f18b7] Running
	I0603 12:13:06.261877   72964 system_pods.go:89] "metrics-server-569cc877fc-jgmbs" [148d8ece-e094-4df9-989a-1bc59a33b7ca] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0603 12:13:06.261888   72964 system_pods.go:89] "storage-provisioner" [cde9aa2d-6a26-4f83-b5df-ae24b22df27a] Running
	I0603 12:13:06.261898   72964 system_pods.go:126] duration metric: took 203.791167ms to wait for k8s-apps to be running ...
	I0603 12:13:06.261910   72964 system_svc.go:44] waiting for kubelet service to be running ....
	I0603 12:13:06.261965   72964 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0603 12:13:06.277270   72964 system_svc.go:56] duration metric: took 15.351048ms WaitForService to wait for kubelet
	I0603 12:13:06.277313   72964 kubeadm.go:576] duration metric: took 4.203172406s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0603 12:13:06.277333   72964 node_conditions.go:102] verifying NodePressure condition ...
	I0603 12:13:06.458480   72964 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0603 12:13:06.458508   72964 node_conditions.go:123] node cpu capacity is 2
	I0603 12:13:06.458519   72964 node_conditions.go:105] duration metric: took 181.181522ms to run NodePressure ...
	I0603 12:13:06.458530   72964 start.go:240] waiting for startup goroutines ...
	I0603 12:13:06.458536   72964 start.go:245] waiting for cluster config update ...
	I0603 12:13:06.458546   72964 start.go:254] writing updated cluster config ...
	I0603 12:13:06.458796   72964 ssh_runner.go:195] Run: rm -f paused
	I0603 12:13:06.511692   72964 start.go:600] kubectl: 1.30.1, cluster: 1.30.1 (minor skew: 0)
	I0603 12:13:06.513617   72964 out.go:177] * Done! kubectl is now configured to use "embed-certs-725022" cluster and "default" namespace by default
	I0603 12:13:32.215819   73662 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0603 12:13:32.216031   73662 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0603 12:13:32.216075   73662 kubeadm.go:309] 
	I0603 12:13:32.216149   73662 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0603 12:13:32.216254   73662 kubeadm.go:309] 		timed out waiting for the condition
	I0603 12:13:32.216284   73662 kubeadm.go:309] 
	I0603 12:13:32.216349   73662 kubeadm.go:309] 	This error is likely caused by:
	I0603 12:13:32.216394   73662 kubeadm.go:309] 		- The kubelet is not running
	I0603 12:13:32.216554   73662 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0603 12:13:32.216577   73662 kubeadm.go:309] 
	I0603 12:13:32.216688   73662 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0603 12:13:32.216722   73662 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0603 12:13:32.216764   73662 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0603 12:13:32.216773   73662 kubeadm.go:309] 
	I0603 12:13:32.216888   73662 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0603 12:13:32.217006   73662 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0603 12:13:32.217031   73662 kubeadm.go:309] 
	I0603 12:13:32.217165   73662 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0603 12:13:32.217278   73662 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0603 12:13:32.217412   73662 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0603 12:13:32.217594   73662 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0603 12:13:32.217618   73662 kubeadm.go:309] 
	I0603 12:13:32.218376   73662 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0603 12:13:32.218449   73662 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0603 12:13:32.218578   73662 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	W0603 12:13:32.218719   73662 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
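	Note: the troubleshooting commands quoted in the kubeadm output above can be run directly on the affected node (for example via `minikube ssh`). A minimal sketch, reusing only the paths and endpoints that appear in this log (the CRI-O socket at /var/run/crio/crio.sock and the kubelet healthz port 10248):

	# check whether the kubelet service is up and why it may have exited
	sudo systemctl status kubelet
	sudo journalctl -xeu kubelet | tail -n 50
	# probe the kubelet health endpoint that kubeadm polls during wait-control-plane
	curl -sS http://localhost:10248/healthz || echo "kubelet healthz not responding"
	# list any control-plane containers CRI-O may have started, then inspect their logs
	sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause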
	
	I0603 12:13:32.218776   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0603 12:13:32.678357   73662 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0603 12:13:32.693276   73662 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0603 12:13:32.702964   73662 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0603 12:13:32.702986   73662 kubeadm.go:156] found existing configuration files:
	
	I0603 12:13:32.703025   73662 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0603 12:13:32.712508   73662 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0603 12:13:32.712555   73662 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0603 12:13:32.722219   73662 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0603 12:13:32.731648   73662 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0603 12:13:32.731702   73662 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0603 12:13:32.741195   73662 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0603 12:13:32.750711   73662 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0603 12:13:32.750764   73662 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0603 12:13:32.760654   73662 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0603 12:13:32.769838   73662 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0603 12:13:32.769881   73662 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
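	Note: the stale-config cleanup logged by kubeadm.go:162 above amounts to a per-file check: keep a kubeconfig only if it references the expected control-plane endpoint, otherwise remove it. An illustrative bash sketch of that behaviour (not minikube's actual implementation):

	# mirror the grep / rm -f sequence shown in the log for each kubeconfig file
	for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
	  sudo grep -q "https://control-plane.minikube.internal:8443" "/etc/kubernetes/$f" \
	    || sudo rm -f "/etc/kubernetes/$f"
	done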
	I0603 12:13:32.780973   73662 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0603 12:13:32.850830   73662 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0603 12:13:32.850883   73662 kubeadm.go:309] [preflight] Running pre-flight checks
	I0603 12:13:32.999201   73662 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0603 12:13:32.999328   73662 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0603 12:13:32.999428   73662 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0603 12:13:33.184771   73662 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0603 12:13:33.187327   73662 out.go:204]   - Generating certificates and keys ...
	I0603 12:13:33.187398   73662 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0603 12:13:33.187487   73662 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0603 12:13:33.187586   73662 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0603 12:13:33.187682   73662 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0603 12:13:33.187788   73662 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0603 12:13:33.187887   73662 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0603 12:13:33.187981   73662 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0603 12:13:33.188107   73662 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0603 12:13:33.188522   73662 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0603 12:13:33.188801   73662 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0603 12:13:33.188880   73662 kubeadm.go:309] [certs] Using the existing "sa" key
	I0603 12:13:33.188991   73662 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0603 12:13:33.334289   73662 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0603 12:13:33.523806   73662 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0603 12:13:33.699531   73662 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0603 12:13:33.750555   73662 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0603 12:13:33.769976   73662 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0603 12:13:33.770924   73662 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0603 12:13:33.770986   73662 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0603 12:13:33.921095   73662 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0603 12:13:33.923915   73662 out.go:204]   - Booting up control plane ...
	I0603 12:13:33.924071   73662 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0603 12:13:33.930998   73662 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0603 12:13:33.934088   73662 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0603 12:13:33.935783   73662 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0603 12:13:33.939727   73662 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0603 12:14:13.940542   73662 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0603 12:14:13.940993   73662 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0603 12:14:13.941324   73662 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0603 12:14:18.941485   73662 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0603 12:14:18.941730   73662 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0603 12:14:28.942021   73662 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0603 12:14:28.942229   73662 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0603 12:14:48.942823   73662 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0603 12:14:48.943115   73662 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0603 12:15:28.944455   73662 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0603 12:15:28.944758   73662 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0603 12:15:28.944781   73662 kubeadm.go:309] 
	I0603 12:15:28.944835   73662 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0603 12:15:28.944914   73662 kubeadm.go:309] 		timed out waiting for the condition
	I0603 12:15:28.944925   73662 kubeadm.go:309] 
	I0603 12:15:28.944965   73662 kubeadm.go:309] 	This error is likely caused by:
	I0603 12:15:28.945008   73662 kubeadm.go:309] 		- The kubelet is not running
	I0603 12:15:28.945152   73662 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0603 12:15:28.945168   73662 kubeadm.go:309] 
	I0603 12:15:28.945322   73662 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0603 12:15:28.945378   73662 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0603 12:15:28.945423   73662 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0603 12:15:28.945433   73662 kubeadm.go:309] 
	I0603 12:15:28.945568   73662 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0603 12:15:28.945695   73662 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0603 12:15:28.945717   73662 kubeadm.go:309] 
	I0603 12:15:28.945883   73662 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0603 12:15:28.946014   73662 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0603 12:15:28.946123   73662 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0603 12:15:28.946234   73662 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0603 12:15:28.946263   73662 kubeadm.go:309] 
	I0603 12:15:28.947236   73662 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0603 12:15:28.947323   73662 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0603 12:15:28.947455   73662 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	I0603 12:15:28.947531   73662 kubeadm.go:393] duration metric: took 7m57.88734097s to StartCluster
	I0603 12:15:28.947585   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 12:15:28.947638   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 12:15:28.993664   73662 cri.go:89] found id: ""
	I0603 12:15:28.993694   73662 logs.go:276] 0 containers: []
	W0603 12:15:28.993705   73662 logs.go:278] No container was found matching "kube-apiserver"
	I0603 12:15:28.993712   73662 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 12:15:28.993774   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 12:15:29.030686   73662 cri.go:89] found id: ""
	I0603 12:15:29.030720   73662 logs.go:276] 0 containers: []
	W0603 12:15:29.030730   73662 logs.go:278] No container was found matching "etcd"
	I0603 12:15:29.030738   73662 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 12:15:29.030803   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 12:15:29.067047   73662 cri.go:89] found id: ""
	I0603 12:15:29.067076   73662 logs.go:276] 0 containers: []
	W0603 12:15:29.067086   73662 logs.go:278] No container was found matching "coredns"
	I0603 12:15:29.067092   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 12:15:29.067154   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 12:15:29.107392   73662 cri.go:89] found id: ""
	I0603 12:15:29.107416   73662 logs.go:276] 0 containers: []
	W0603 12:15:29.107424   73662 logs.go:278] No container was found matching "kube-scheduler"
	I0603 12:15:29.107430   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 12:15:29.107483   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 12:15:29.159886   73662 cri.go:89] found id: ""
	I0603 12:15:29.159916   73662 logs.go:276] 0 containers: []
	W0603 12:15:29.159925   73662 logs.go:278] No container was found matching "kube-proxy"
	I0603 12:15:29.159934   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 12:15:29.159994   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 12:15:29.195187   73662 cri.go:89] found id: ""
	I0603 12:15:29.195218   73662 logs.go:276] 0 containers: []
	W0603 12:15:29.195229   73662 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 12:15:29.195236   73662 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 12:15:29.195295   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 12:15:29.233622   73662 cri.go:89] found id: ""
	I0603 12:15:29.233648   73662 logs.go:276] 0 containers: []
	W0603 12:15:29.233656   73662 logs.go:278] No container was found matching "kindnet"
	I0603 12:15:29.233662   73662 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 12:15:29.233717   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 12:15:29.272849   73662 cri.go:89] found id: ""
	I0603 12:15:29.272874   73662 logs.go:276] 0 containers: []
	W0603 12:15:29.272882   73662 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 12:15:29.272891   73662 logs.go:123] Gathering logs for CRI-O ...
	I0603 12:15:29.272901   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 12:15:29.383220   73662 logs.go:123] Gathering logs for container status ...
	I0603 12:15:29.383256   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 12:15:29.424045   73662 logs.go:123] Gathering logs for kubelet ...
	I0603 12:15:29.424076   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 12:15:29.475712   73662 logs.go:123] Gathering logs for dmesg ...
	I0603 12:15:29.475743   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 12:15:29.489841   73662 logs.go:123] Gathering logs for describe nodes ...
	I0603 12:15:29.489868   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 12:15:29.572988   73662 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
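	Note: the refused connection above is consistent with the earlier crictl queries: no kube-apiserver container exists, so nothing is serving localhost:8443. A quick confirmation on the node (assuming ss from iproute2 is available in the minikube guest; netstat can be used otherwise):

	sudo crictl ps -a --name=kube-apiserver
	sudo ss -tlnp | grep ':8443' || echo "no listener on 8443"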
	W0603 12:15:29.573030   73662 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0603 12:15:29.573068   73662 out.go:239] * 
	W0603 12:15:29.573117   73662 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0603 12:15:29.573138   73662 out.go:239] * 
	W0603 12:15:29.573869   73662 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0603 12:15:29.577458   73662 out.go:177] 
	W0603 12:15:29.578659   73662 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0603 12:15:29.578700   73662 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0603 12:15:29.578716   73662 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
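	Note: the suggestion above points at a possible kubelet/CRI-O cgroup-driver mismatch. A hedged sketch of how that could be checked before retrying, using the kubelet config path written earlier in this log (/var/lib/kubelet/config.yaml); the `crio config` command and the `<profile>` placeholder are assumptions for illustration, not taken from this run:

	# compare the cgroup driver kubelet was configured with against CRI-O's cgroup manager
	sudo grep -i cgroupDriver /var/lib/kubelet/config.yaml
	sudo crio config 2>/dev/null | grep -i cgroup_manager
	# retry with the flag minikube recommends
	minikube start -p <profile> --extra-config=kubelet.cgroup-driver=systemd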
	I0603 12:15:29.580176   73662 out.go:177] 
	
	
	==> CRI-O <==
	Jun 03 12:22:08 embed-certs-725022 crio[714]: time="2024-06-03 12:22:08.677015566Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=065f56b1-ea5e-4ed5-be55-14734e0ce5bc name=/runtime.v1.RuntimeService/Version
	Jun 03 12:22:08 embed-certs-725022 crio[714]: time="2024-06-03 12:22:08.678281365Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=fe4297d4-61f2-4cc5-95d4-2890e278be94 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 03 12:22:08 embed-certs-725022 crio[714]: time="2024-06-03 12:22:08.678675263Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1717417328678654058,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133260,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=fe4297d4-61f2-4cc5-95d4-2890e278be94 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 03 12:22:08 embed-certs-725022 crio[714]: time="2024-06-03 12:22:08.679467972Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=657cf98a-7ed4-42fb-aff2-c7f579a356ed name=/runtime.v1.RuntimeService/ListContainers
	Jun 03 12:22:08 embed-certs-725022 crio[714]: time="2024-06-03 12:22:08.679541647Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=657cf98a-7ed4-42fb-aff2-c7f579a356ed name=/runtime.v1.RuntimeService/ListContainers
	Jun 03 12:22:08 embed-certs-725022 crio[714]: time="2024-06-03 12:22:08.679776286Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:81efa28c7c7dd2a51a3f9a51e1a522ccf7a05e1e1baf0cea4ab447dce79f38bf,PodSandboxId:a5fd62b332d2e0f47de6d2e54dc8c97d65174923bfb278dd0e94cdfd2de334ef,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1717416783962830866,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cde9aa2d-6a26-4f83-b5df-ae24b22df27a,},Annotations:map[string]string{io.kubernetes.container.hash: f90461b5,io.kubernetes.container.restartCount: 0,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a7c67fb6c2145a21d4a3c1ef199af7d66bd41033a039015f271ba728ca06da0c,PodSandboxId:9408a07b4022b2322f3a058bd2a166203feefb543cc8f98fc352c0d40e2956e9,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1717416783589635760,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-x9fw5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1ed6c0e0-2d13-410f-bdf1-6620fb2503ed,},Annotations:map[string]string{io.kubernetes.container.hash: 151e7a97,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UD
P\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a2de82fcd9e26961a754f75e28a9615763855656f2c01077e5a39ba8e39e0388,PodSandboxId:d3d567d1a7ca6d85a49de00cb845f1c09098d5ad0402f2a21f402af5d745d48c,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1717416783493324578,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-4gbj2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0
e46c731-84e4-4cb2-8125-2b61c10916a3,},Annotations:map[string]string{io.kubernetes.container.hash: ce098b15,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:60795f3f2672bcb7b61e9b0e595d76ca8b666340913395131c61f92125dcf8d7,PodSandboxId:9e776b8b96d7b979fed7fd4862d6218fd94255e35326bdbcd020d4eb196e26ff,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt
:1717416782427911951,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-7qp6h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7869cd1d-785d-401d-aceb-854cffd63d73,},Annotations:map[string]string{io.kubernetes.container.hash: f004a87a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:be0185f4a9d1211ca7e7bbca26e8776fce45302381682c942aef1604e398e050,PodSandboxId:ba01582226858948b03eeb5c35bad675afd2ef261e42adee83c11179d36ba8b9,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1717416762970487271,Labels:map[string]
string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-725022,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3a8815b41f8de9ce6a4245aba1cc52be,},Annotations:map[string]string{io.kubernetes.container.hash: 2b34b2ca,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ef7d5692a59fa66dd7a2449ce50a23eb01aaaf7a99529a20323360c0ed999b68,PodSandboxId:5acdf3478feee4eb74296aa47f18358f10b4aaa5c0b7c7bcbc8dac3780de96f7,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1717416762957568928,Labels:map[string]string{io.kubernetes.container.n
ame: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-725022,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 38effa66b97159d08749fd23b6d37e6f,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2a4234747916814afaa8ee7a7a63a5ae355d6e907cffb138f828ed0676d9e7ce,PodSandboxId:25651ba709a5c48e60114630f713cd72dc4776f9484d7b98583855990d1b368b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1717416762997958517,Labels:map[string]string{io.kubernetes.container.name
: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-725022,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6cd351b07ac0ddcdf3965a97f9c3e0b5,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3cce6b24a5e40947bc39fbf2b9781d4c8694a66b6cd4b7c6d487e65dc24aff6a,PodSandboxId:40beb5c2d8ec5a58ad01a7658540bf6827ed1b9822ddd2f18bd52cbee506b037,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1717416762938298324,Labels:map[string]string{io.kubernetes.containe
r.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-725022,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e29b26fbef49942c734e3993559250ae,},Annotations:map[string]string{io.kubernetes.container.hash: ee6f4948,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=657cf98a-7ed4-42fb-aff2-c7f579a356ed name=/runtime.v1.RuntimeService/ListContainers
	Jun 03 12:22:08 embed-certs-725022 crio[714]: time="2024-06-03 12:22:08.697376598Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=406b6362-4e39-4624-a6d4-eaa11173489d name=/runtime.v1.RuntimeService/ListContainers
	Jun 03 12:22:08 embed-certs-725022 crio[714]: time="2024-06-03 12:22:08.697450079Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=406b6362-4e39-4624-a6d4-eaa11173489d name=/runtime.v1.RuntimeService/ListContainers
	Jun 03 12:22:08 embed-certs-725022 crio[714]: time="2024-06-03 12:22:08.697958056Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:81efa28c7c7dd2a51a3f9a51e1a522ccf7a05e1e1baf0cea4ab447dce79f38bf,PodSandboxId:a5fd62b332d2e0f47de6d2e54dc8c97d65174923bfb278dd0e94cdfd2de334ef,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1717416783962830866,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cde9aa2d-6a26-4f83-b5df-ae24b22df27a,},Annotations:map[string]string{io.kubernetes.container.hash: f90461b5,io.kubernetes.container.restartCount: 0,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a7c67fb6c2145a21d4a3c1ef199af7d66bd41033a039015f271ba728ca06da0c,PodSandboxId:9408a07b4022b2322f3a058bd2a166203feefb543cc8f98fc352c0d40e2956e9,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1717416783589635760,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-x9fw5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1ed6c0e0-2d13-410f-bdf1-6620fb2503ed,},Annotations:map[string]string{io.kubernetes.container.hash: 151e7a97,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UD
P\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a2de82fcd9e26961a754f75e28a9615763855656f2c01077e5a39ba8e39e0388,PodSandboxId:d3d567d1a7ca6d85a49de00cb845f1c09098d5ad0402f2a21f402af5d745d48c,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1717416783493324578,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-4gbj2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0
e46c731-84e4-4cb2-8125-2b61c10916a3,},Annotations:map[string]string{io.kubernetes.container.hash: ce098b15,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:60795f3f2672bcb7b61e9b0e595d76ca8b666340913395131c61f92125dcf8d7,PodSandboxId:9e776b8b96d7b979fed7fd4862d6218fd94255e35326bdbcd020d4eb196e26ff,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt
:1717416782427911951,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-7qp6h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7869cd1d-785d-401d-aceb-854cffd63d73,},Annotations:map[string]string{io.kubernetes.container.hash: f004a87a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:be0185f4a9d1211ca7e7bbca26e8776fce45302381682c942aef1604e398e050,PodSandboxId:ba01582226858948b03eeb5c35bad675afd2ef261e42adee83c11179d36ba8b9,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1717416762970487271,Labels:map[string]
string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-725022,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3a8815b41f8de9ce6a4245aba1cc52be,},Annotations:map[string]string{io.kubernetes.container.hash: 2b34b2ca,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ef7d5692a59fa66dd7a2449ce50a23eb01aaaf7a99529a20323360c0ed999b68,PodSandboxId:5acdf3478feee4eb74296aa47f18358f10b4aaa5c0b7c7bcbc8dac3780de96f7,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1717416762957568928,Labels:map[string]string{io.kubernetes.container.n
ame: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-725022,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 38effa66b97159d08749fd23b6d37e6f,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2a4234747916814afaa8ee7a7a63a5ae355d6e907cffb138f828ed0676d9e7ce,PodSandboxId:25651ba709a5c48e60114630f713cd72dc4776f9484d7b98583855990d1b368b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1717416762997958517,Labels:map[string]string{io.kubernetes.container.name
: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-725022,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6cd351b07ac0ddcdf3965a97f9c3e0b5,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3cce6b24a5e40947bc39fbf2b9781d4c8694a66b6cd4b7c6d487e65dc24aff6a,PodSandboxId:40beb5c2d8ec5a58ad01a7658540bf6827ed1b9822ddd2f18bd52cbee506b037,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1717416762938298324,Labels:map[string]string{io.kubernetes.containe
r.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-725022,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e29b26fbef49942c734e3993559250ae,},Annotations:map[string]string{io.kubernetes.container.hash: ee6f4948,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=406b6362-4e39-4624-a6d4-eaa11173489d name=/runtime.v1.RuntimeService/ListContainers
	Jun 03 12:22:08 embed-certs-725022 crio[714]: time="2024-06-03 12:22:08.698624872Z" level=debug msg="Request: &ContainerStatusRequest{ContainerId:81efa28c7c7dd2a51a3f9a51e1a522ccf7a05e1e1baf0cea4ab447dce79f38bf,Verbose:false,}" file="otel-collector/interceptors.go:62" id=7579c225-4402-4e6d-9cfe-eaad2153ca2f name=/runtime.v1.RuntimeService/ContainerStatus
	Jun 03 12:22:08 embed-certs-725022 crio[714]: time="2024-06-03 12:22:08.698814794Z" level=debug msg="Response: &ContainerStatusResponse{Status:&ContainerStatus{Id:81efa28c7c7dd2a51a3f9a51e1a522ccf7a05e1e1baf0cea4ab447dce79f38bf,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},State:CONTAINER_RUNNING,CreatedAt:1717416784000272049,StartedAt:1717416784049114561,FinishedAt:0,ExitCode:0,Image:&ImageSpec{Image:gcr.io/k8s-minikube/storage-provisioner:v5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Reason:,Message:,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cde9aa2d-6a26-4f83-b5df-ae24b22df27a,},Annotations:map[string]string{io.kubernetes.container.hash: f90461b5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/ter
mination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},Mounts:[]*Mount{&Mount{ContainerPath:/tmp,HostPath:/tmp,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/etc/hosts,HostPath:/var/lib/kubelet/pods/cde9aa2d-6a26-4f83-b5df-ae24b22df27a/etc-hosts,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/dev/termination-log,HostPath:/var/lib/kubelet/pods/cde9aa2d-6a26-4f83-b5df-ae24b22df27a/containers/storage-provisioner/8f290f9a,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/var/run/secrets/kubernetes.io/serviceaccount,HostPath:/var/lib/kubelet/pods/cde9aa2d-6a26-4f83-b5df-ae24b22df27a/volumes/kubernetes.io~projected/kube-api-access-6825m,Readonly:true,SelinuxRelabel:fal
se,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},},LogPath:/var/log/pods/kube-system_storage-provisioner_cde9aa2d-6a26-4f83-b5df-ae24b22df27a/storage-provisioner/0.log,Resources:&ContainerResources{Linux:&LinuxContainerResources{CpuPeriod:100000,CpuQuota:0,CpuShares:2,MemoryLimitInBytes:0,OomScoreAdj:1000,CpusetCpus:,CpusetMems:,HugepageLimits:[]*HugepageLimit{&HugepageLimit{PageSize:2MB,Limit:0,},},Unified:map[string]string{memory.oom.group: 1,memory.swap.max: 0,},MemorySwapLimitInBytes:0,},Windows:nil,},},Info:map[string]string{},}" file="otel-collector/interceptors.go:74" id=7579c225-4402-4e6d-9cfe-eaad2153ca2f name=/runtime.v1.RuntimeService/ContainerStatus
	Jun 03 12:22:08 embed-certs-725022 crio[714]: time="2024-06-03 12:22:08.699312416Z" level=debug msg="Request: &ContainerStatusRequest{ContainerId:a7c67fb6c2145a21d4a3c1ef199af7d66bd41033a039015f271ba728ca06da0c,Verbose:false,}" file="otel-collector/interceptors.go:62" id=5fb53c49-b37a-4cea-aef2-85caa0f8cbc0 name=/runtime.v1.RuntimeService/ContainerStatus
	Jun 03 12:22:08 embed-certs-725022 crio[714]: time="2024-06-03 12:22:08.699427951Z" level=debug msg="Response: &ContainerStatusResponse{Status:&ContainerStatus{Id:a7c67fb6c2145a21d4a3c1ef199af7d66bd41033a039015f271ba728ca06da0c,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},State:CONTAINER_RUNNING,CreatedAt:1717416783788013808,StartedAt:1717416783819933349,FinishedAt:0,ExitCode:0,Image:&ImageSpec{Image:registry.k8s.io/coredns/coredns:v1.11.1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Reason:,Message:,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-x9fw5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1ed6c0e0-2d13-410f-bdf1-6620fb2503ed,},Annotations:map[string]string{io.kubernetes.container.hash: 151e7a97,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"c
ontainerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},Mounts:[]*Mount{&Mount{ContainerPath:/etc/coredns,HostPath:/var/lib/kubelet/pods/1ed6c0e0-2d13-410f-bdf1-6620fb2503ed/volumes/kubernetes.io~configmap/config-volume,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/etc/hosts,HostPath:/var/lib/kubelet/pods/1ed6c0e0-2d13-410f-bdf1-6620fb2503ed/etc-hosts,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/dev/termination-log,HostPath:/var/lib/kubelet/pods/1ed6c0e0-2d13-410f-bdf1-6620fb2503ed/containers/coredns/dbe83d71,Readonly:false,SelinuxRelabel:false,Propagatio
n:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/var/run/secrets/kubernetes.io/serviceaccount,HostPath:/var/lib/kubelet/pods/1ed6c0e0-2d13-410f-bdf1-6620fb2503ed/volumes/kubernetes.io~projected/kube-api-access-v985v,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},},LogPath:/var/log/pods/kube-system_coredns-7db6d8ff4d-x9fw5_1ed6c0e0-2d13-410f-bdf1-6620fb2503ed/coredns/0.log,Resources:&ContainerResources{Linux:&LinuxContainerResources{CpuPeriod:100000,CpuQuota:0,CpuShares:102,MemoryLimitInBytes:178257920,OomScoreAdj:967,CpusetCpus:,CpusetMems:,HugepageLimits:[]*HugepageLimit{&HugepageLimit{PageSize:2MB,Limit:0,},},Unified:map[string]string{memory.oom.group: 1,memory.swap.max: 0,},MemorySwapLimitInBytes:178257920,},Windows:nil,},},Info:map[string]string{},}" file="otel-collector/interceptors.go:74" id=5fb53c49-b37a-4cea-aef2-85caa0f8cbc0 name=/runtime.v1.RuntimeService/ContainerStatus
	Jun 03 12:22:08 embed-certs-725022 crio[714]: time="2024-06-03 12:22:08.700300681Z" level=debug msg="Request: &ContainerStatusRequest{ContainerId:a2de82fcd9e26961a754f75e28a9615763855656f2c01077e5a39ba8e39e0388,Verbose:false,}" file="otel-collector/interceptors.go:62" id=dbd6ffe6-c68a-490b-9fd4-604324beeaee name=/runtime.v1.RuntimeService/ContainerStatus
	Jun 03 12:22:08 embed-certs-725022 crio[714]: time="2024-06-03 12:22:08.700625152Z" level=debug msg="Response: &ContainerStatusResponse{Status:&ContainerStatus{Id:a2de82fcd9e26961a754f75e28a9615763855656f2c01077e5a39ba8e39e0388,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},State:CONTAINER_RUNNING,CreatedAt:1717416783567608571,StartedAt:1717416783619050689,FinishedAt:0,ExitCode:0,Image:&ImageSpec{Image:registry.k8s.io/coredns/coredns:v1.11.1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Reason:,Message:,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-4gbj2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0e46c731-84e4-4cb2-8125-2b61c10916a3,},Annotations:map[string]string{io.kubernetes.container.hash: ce098b15,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"c
ontainerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},Mounts:[]*Mount{&Mount{ContainerPath:/etc/coredns,HostPath:/var/lib/kubelet/pods/0e46c731-84e4-4cb2-8125-2b61c10916a3/volumes/kubernetes.io~configmap/config-volume,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/etc/hosts,HostPath:/var/lib/kubelet/pods/0e46c731-84e4-4cb2-8125-2b61c10916a3/etc-hosts,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/dev/termination-log,HostPath:/var/lib/kubelet/pods/0e46c731-84e4-4cb2-8125-2b61c10916a3/containers/coredns/764d419b,Readonly:false,SelinuxRelabel:false,Propagatio
n:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/var/run/secrets/kubernetes.io/serviceaccount,HostPath:/var/lib/kubelet/pods/0e46c731-84e4-4cb2-8125-2b61c10916a3/volumes/kubernetes.io~projected/kube-api-access-4gzc4,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},},LogPath:/var/log/pods/kube-system_coredns-7db6d8ff4d-4gbj2_0e46c731-84e4-4cb2-8125-2b61c10916a3/coredns/0.log,Resources:&ContainerResources{Linux:&LinuxContainerResources{CpuPeriod:100000,CpuQuota:0,CpuShares:102,MemoryLimitInBytes:178257920,OomScoreAdj:967,CpusetCpus:,CpusetMems:,HugepageLimits:[]*HugepageLimit{&HugepageLimit{PageSize:2MB,Limit:0,},},Unified:map[string]string{memory.oom.group: 1,memory.swap.max: 0,},MemorySwapLimitInBytes:178257920,},Windows:nil,},},Info:map[string]string{},}" file="otel-collector/interceptors.go:74" id=dbd6ffe6-c68a-490b-9fd4-604324beeaee name=/runtime.v1.RuntimeService/ContainerStatus
	Jun 03 12:22:08 embed-certs-725022 crio[714]: time="2024-06-03 12:22:08.701076598Z" level=debug msg="Request: &ContainerStatusRequest{ContainerId:60795f3f2672bcb7b61e9b0e595d76ca8b666340913395131c61f92125dcf8d7,Verbose:false,}" file="otel-collector/interceptors.go:62" id=34451dbc-7645-4f7e-8c9f-f57d877cc8c3 name=/runtime.v1.RuntimeService/ContainerStatus
	Jun 03 12:22:08 embed-certs-725022 crio[714]: time="2024-06-03 12:22:08.701186985Z" level=debug msg="Response: &ContainerStatusResponse{Status:&ContainerStatus{Id:60795f3f2672bcb7b61e9b0e595d76ca8b666340913395131c61f92125dcf8d7,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},State:CONTAINER_RUNNING,CreatedAt:1717416782579098235,StartedAt:1717416782749370722,FinishedAt:0,ExitCode:0,Image:&ImageSpec{Image:registry.k8s.io/kube-proxy:v1.30.1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Reason:,Message:,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-7qp6h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7869cd1d-785d-401d-aceb-854cffd63d73,},Annotations:map[string]string{io.kubernetes.container.hash: f004a87a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.co
ntainer.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},Mounts:[]*Mount{&Mount{ContainerPath:/run/xtables.lock,HostPath:/run/xtables.lock,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/lib/modules,HostPath:/lib/modules,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/etc/hosts,HostPath:/var/lib/kubelet/pods/7869cd1d-785d-401d-aceb-854cffd63d73/etc-hosts,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/dev/termination-log,HostPath:/var/lib/kubelet/pods/7869cd1d-785d-401d-aceb-854cffd63d73/containers/kube-proxy/2c6cb2b9,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/var/lib/kube-proxy,HostPath:/var
/lib/kubelet/pods/7869cd1d-785d-401d-aceb-854cffd63d73/volumes/kubernetes.io~configmap/kube-proxy,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/var/run/secrets/kubernetes.io/serviceaccount,HostPath:/var/lib/kubelet/pods/7869cd1d-785d-401d-aceb-854cffd63d73/volumes/kubernetes.io~projected/kube-api-access-7x48h,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},},LogPath:/var/log/pods/kube-system_kube-proxy-7qp6h_7869cd1d-785d-401d-aceb-854cffd63d73/kube-proxy/0.log,Resources:&ContainerResources{Linux:&LinuxContainerResources{CpuPeriod:100000,CpuQuota:0,CpuShares:2,MemoryLimitInBytes:0,OomScoreAdj:-997,CpusetCpus:,CpusetMems:,HugepageLimits:[]*HugepageLimit{&HugepageLimit{PageSize:2MB,Limit:0,},},Unified:map[string]string{memory.oom.group: 1,memory.swap.max: 0,},MemorySwapLimitInBytes:0,},Windows:nil,},},Info:map[string]string{},}" file="otel-
collector/interceptors.go:74" id=34451dbc-7645-4f7e-8c9f-f57d877cc8c3 name=/runtime.v1.RuntimeService/ContainerStatus
	Jun 03 12:22:08 embed-certs-725022 crio[714]: time="2024-06-03 12:22:08.701644549Z" level=debug msg="Request: &ContainerStatusRequest{ContainerId:be0185f4a9d1211ca7e7bbca26e8776fce45302381682c942aef1604e398e050,Verbose:false,}" file="otel-collector/interceptors.go:62" id=b91b06b1-a9f4-4e3e-aa77-41eb5df2d153 name=/runtime.v1.RuntimeService/ContainerStatus
	Jun 03 12:22:08 embed-certs-725022 crio[714]: time="2024-06-03 12:22:08.701862604Z" level=debug msg="Response: &ContainerStatusResponse{Status:&ContainerStatus{Id:be0185f4a9d1211ca7e7bbca26e8776fce45302381682c942aef1604e398e050,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},State:CONTAINER_RUNNING,CreatedAt:1717416763115387677,StartedAt:1717416763226520888,FinishedAt:0,ExitCode:0,Image:&ImageSpec{Image:registry.k8s.io/etcd:3.5.12-0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Reason:,Message:,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-725022,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3a8815b41f8de9ce6a4245aba1cc52be,},Annotations:map[string]string{io.kubernetes.container.hash: 2b34b2ca,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.termin
ationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},Mounts:[]*Mount{&Mount{ContainerPath:/etc/hosts,HostPath:/var/lib/kubelet/pods/3a8815b41f8de9ce6a4245aba1cc52be/etc-hosts,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/dev/termination-log,HostPath:/var/lib/kubelet/pods/3a8815b41f8de9ce6a4245aba1cc52be/containers/etcd/0743b82d,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/var/lib/minikube/etcd,HostPath:/var/lib/minikube/etcd,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/var/lib/minikube/certs/etcd,HostPath:/var/lib/minikube/certs/etcd,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},},LogPath:/var/log/pods/kube-system_etc
d-embed-certs-725022_3a8815b41f8de9ce6a4245aba1cc52be/etcd/2.log,Resources:&ContainerResources{Linux:&LinuxContainerResources{CpuPeriod:100000,CpuQuota:0,CpuShares:102,MemoryLimitInBytes:0,OomScoreAdj:-997,CpusetCpus:,CpusetMems:,HugepageLimits:[]*HugepageLimit{&HugepageLimit{PageSize:2MB,Limit:0,},},Unified:map[string]string{memory.oom.group: 1,memory.swap.max: 0,},MemorySwapLimitInBytes:0,},Windows:nil,},},Info:map[string]string{},}" file="otel-collector/interceptors.go:74" id=b91b06b1-a9f4-4e3e-aa77-41eb5df2d153 name=/runtime.v1.RuntimeService/ContainerStatus
	Jun 03 12:22:08 embed-certs-725022 crio[714]: time="2024-06-03 12:22:08.702280957Z" level=debug msg="Request: &ContainerStatusRequest{ContainerId:ef7d5692a59fa66dd7a2449ce50a23eb01aaaf7a99529a20323360c0ed999b68,Verbose:false,}" file="otel-collector/interceptors.go:62" id=cf4650e5-f371-4f39-a285-00c109b454fe name=/runtime.v1.RuntimeService/ContainerStatus
	Jun 03 12:22:08 embed-certs-725022 crio[714]: time="2024-06-03 12:22:08.702502145Z" level=debug msg="Response: &ContainerStatusResponse{Status:&ContainerStatus{Id:ef7d5692a59fa66dd7a2449ce50a23eb01aaaf7a99529a20323360c0ed999b68,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},State:CONTAINER_RUNNING,CreatedAt:1717416763098621660,StartedAt:1717416763226868490,FinishedAt:0,ExitCode:0,Image:&ImageSpec{Image:registry.k8s.io/kube-scheduler:v1.30.1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Reason:,Message:,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-725022,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 38effa66b97159d08749fd23b6d37e6f,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termina
tion-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},Mounts:[]*Mount{&Mount{ContainerPath:/etc/hosts,HostPath:/var/lib/kubelet/pods/38effa66b97159d08749fd23b6d37e6f/etc-hosts,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/dev/termination-log,HostPath:/var/lib/kubelet/pods/38effa66b97159d08749fd23b6d37e6f/containers/kube-scheduler/d589ba38,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/etc/kubernetes/scheduler.conf,HostPath:/etc/kubernetes/scheduler.conf,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},},LogPath:/var/log/pods/kube-system_kube-scheduler-embed-certs-725022_38effa66b97159d08749fd23b6d37e6f/kube-scheduler/2.log,Resources:&ContainerResources{Linux:&LinuxContainerResources{Cp
uPeriod:100000,CpuQuota:0,CpuShares:102,MemoryLimitInBytes:0,OomScoreAdj:-997,CpusetCpus:,CpusetMems:,HugepageLimits:[]*HugepageLimit{&HugepageLimit{PageSize:2MB,Limit:0,},},Unified:map[string]string{memory.oom.group: 1,memory.swap.max: 0,},MemorySwapLimitInBytes:0,},Windows:nil,},},Info:map[string]string{},}" file="otel-collector/interceptors.go:74" id=cf4650e5-f371-4f39-a285-00c109b454fe name=/runtime.v1.RuntimeService/ContainerStatus
	Jun 03 12:22:08 embed-certs-725022 crio[714]: time="2024-06-03 12:22:08.702966719Z" level=debug msg="Request: &ContainerStatusRequest{ContainerId:2a4234747916814afaa8ee7a7a63a5ae355d6e907cffb138f828ed0676d9e7ce,Verbose:false,}" file="otel-collector/interceptors.go:62" id=ee20dfe9-91e3-4fa1-9d2f-20bb3e640673 name=/runtime.v1.RuntimeService/ContainerStatus
	Jun 03 12:22:08 embed-certs-725022 crio[714]: time="2024-06-03 12:22:08.703109024Z" level=debug msg="Response: &ContainerStatusResponse{Status:&ContainerStatus{Id:2a4234747916814afaa8ee7a7a63a5ae355d6e907cffb138f828ed0676d9e7ce,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},State:CONTAINER_RUNNING,CreatedAt:1717416763061403335,StartedAt:1717416763150404703,FinishedAt:0,ExitCode:0,Image:&ImageSpec{Image:registry.k8s.io/kube-controller-manager:v1.30.1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Reason:,Message:,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-725022,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6cd351b07ac0ddcdf3965a97f9c3e0b5,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.
terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},Mounts:[]*Mount{&Mount{ContainerPath:/etc/hosts,HostPath:/var/lib/kubelet/pods/6cd351b07ac0ddcdf3965a97f9c3e0b5/etc-hosts,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/dev/termination-log,HostPath:/var/lib/kubelet/pods/6cd351b07ac0ddcdf3965a97f9c3e0b5/containers/kube-controller-manager/83a312d2,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/etc/ssl/certs,HostPath:/etc/ssl/certs,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/etc/kubernetes/controller-manager.conf,HostPath:/etc/kubernetes/controller-manager.conf,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVA
TE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/usr/share/ca-certificates,HostPath:/usr/share/ca-certificates,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/var/lib/minikube/certs,HostPath:/var/lib/minikube/certs,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/usr/libexec/kubernetes/kubelet-plugins/volume/exec,HostPath:/usr/libexec/kubernetes/kubelet-plugins/volume/exec,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},},LogPath:/var/log/pods/kube-system_kube-controller-manager-embed-certs-725022_6cd351b07ac0ddcdf3965a97f9c3e0b5/kube-controller-manager/2.log,Resources:&ContainerResources{Linux:&LinuxContainerResources{CpuPeriod:100000,CpuQuota:0,CpuShares:204,MemoryLimitInBytes:0,OomScoreAdj:-997,CpusetCpus:,Cpus
etMems:,HugepageLimits:[]*HugepageLimit{&HugepageLimit{PageSize:2MB,Limit:0,},},Unified:map[string]string{memory.oom.group: 1,memory.swap.max: 0,},MemorySwapLimitInBytes:0,},Windows:nil,},},Info:map[string]string{},}" file="otel-collector/interceptors.go:74" id=ee20dfe9-91e3-4fa1-9d2f-20bb3e640673 name=/runtime.v1.RuntimeService/ContainerStatus
	Jun 03 12:22:08 embed-certs-725022 crio[714]: time="2024-06-03 12:22:08.703552702Z" level=debug msg="Request: &ContainerStatusRequest{ContainerId:3cce6b24a5e40947bc39fbf2b9781d4c8694a66b6cd4b7c6d487e65dc24aff6a,Verbose:false,}" file="otel-collector/interceptors.go:62" id=a8bf7705-75da-4f21-8b11-2026f2fb1ddb name=/runtime.v1.RuntimeService/ContainerStatus
	Jun 03 12:22:08 embed-certs-725022 crio[714]: time="2024-06-03 12:22:08.704016942Z" level=debug msg="Response: &ContainerStatusResponse{Status:&ContainerStatus{Id:3cce6b24a5e40947bc39fbf2b9781d4c8694a66b6cd4b7c6d487e65dc24aff6a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},State:CONTAINER_RUNNING,CreatedAt:1717416763031526411,StartedAt:1717416763130776893,FinishedAt:0,ExitCode:0,Image:&ImageSpec{Image:registry.k8s.io/kube-apiserver:v1.30.1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Reason:,Message:,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-725022,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e29b26fbef49942c734e3993559250ae,},Annotations:map[string]string{io.kubernetes.container.hash: ee6f4948,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termina
tion-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},Mounts:[]*Mount{&Mount{ContainerPath:/etc/hosts,HostPath:/var/lib/kubelet/pods/e29b26fbef49942c734e3993559250ae/etc-hosts,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/dev/termination-log,HostPath:/var/lib/kubelet/pods/e29b26fbef49942c734e3993559250ae/containers/kube-apiserver/83c4ddf5,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/etc/ssl/certs,HostPath:/etc/ssl/certs,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/usr/share/ca-certificates,HostPath:/usr/share/ca-certificates,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{Conta
inerPath:/var/lib/minikube/certs,HostPath:/var/lib/minikube/certs,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},},LogPath:/var/log/pods/kube-system_kube-apiserver-embed-certs-725022_e29b26fbef49942c734e3993559250ae/kube-apiserver/2.log,Resources:&ContainerResources{Linux:&LinuxContainerResources{CpuPeriod:100000,CpuQuota:0,CpuShares:256,MemoryLimitInBytes:0,OomScoreAdj:-997,CpusetCpus:,CpusetMems:,HugepageLimits:[]*HugepageLimit{&HugepageLimit{PageSize:2MB,Limit:0,},},Unified:map[string]string{memory.oom.group: 1,memory.swap.max: 0,},MemorySwapLimitInBytes:0,},Windows:nil,},},Info:map[string]string{},}" file="otel-collector/interceptors.go:74" id=a8bf7705-75da-4f21-8b11-2026f2fb1ddb name=/runtime.v1.RuntimeService/ContainerStatus
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	81efa28c7c7dd       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   9 minutes ago       Running             storage-provisioner       0                   a5fd62b332d2e       storage-provisioner
	a7c67fb6c2145       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   9 minutes ago       Running             coredns                   0                   9408a07b4022b       coredns-7db6d8ff4d-x9fw5
	a2de82fcd9e26       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   9 minutes ago       Running             coredns                   0                   d3d567d1a7ca6       coredns-7db6d8ff4d-4gbj2
	60795f3f2672b       747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd   9 minutes ago       Running             kube-proxy                0                   9e776b8b96d7b       kube-proxy-7qp6h
	2a42347479168       25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c   9 minutes ago       Running             kube-controller-manager   2                   25651ba709a5c       kube-controller-manager-embed-certs-725022
	be0185f4a9d12       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899   9 minutes ago       Running             etcd                      2                   ba01582226858       etcd-embed-certs-725022
	ef7d5692a59fa       a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035   9 minutes ago       Running             kube-scheduler            2                   5acdf3478feee       kube-scheduler-embed-certs-725022
	3cce6b24a5e40       91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a   9 minutes ago       Running             kube-apiserver            2                   40beb5c2d8ec5       kube-apiserver-embed-certs-725022
	
	
	==> coredns [a2de82fcd9e26961a754f75e28a9615763855656f2c01077e5a39ba8e39e0388] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> coredns [a7c67fb6c2145a21d4a3c1ef199af7d66bd41033a039015f271ba728ca06da0c] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> describe nodes <==
	Name:               embed-certs-725022
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-725022
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=599070631c2216ebc936292d491e4fe10e15b9d8
	                    minikube.k8s.io/name=embed-certs-725022
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_06_03T12_12_49_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 03 Jun 2024 12:12:45 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-725022
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 03 Jun 2024 12:21:59 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 03 Jun 2024 12:18:14 +0000   Mon, 03 Jun 2024 12:12:43 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 03 Jun 2024 12:18:14 +0000   Mon, 03 Jun 2024 12:12:43 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 03 Jun 2024 12:18:14 +0000   Mon, 03 Jun 2024 12:12:43 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 03 Jun 2024 12:18:14 +0000   Mon, 03 Jun 2024 12:12:46 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.72.245
	  Hostname:    embed-certs-725022
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 cc393dd3c9a947b68657c20168268eeb
	  System UUID:                cc393dd3-c9a9-47b6-8657-c20168268eeb
	  Boot ID:                    36ae111d-de49-4f7f-b605-475a321541fa
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.1
	  Kube-Proxy Version:         v1.30.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7db6d8ff4d-4gbj2                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m6s
	  kube-system                 coredns-7db6d8ff4d-x9fw5                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m6s
	  kube-system                 etcd-embed-certs-725022                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         9m20s
	  kube-system                 kube-apiserver-embed-certs-725022             250m (12%)    0 (0%)      0 (0%)           0 (0%)         9m20s
	  kube-system                 kube-controller-manager-embed-certs-725022    200m (10%)    0 (0%)      0 (0%)           0 (0%)         9m20s
	  kube-system                 kube-proxy-7qp6h                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m7s
	  kube-system                 kube-scheduler-embed-certs-725022             100m (5%)     0 (0%)      0 (0%)           0 (0%)         9m20s
	  kube-system                 metrics-server-569cc877fc-jgmbs               100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         9m5s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m5s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             440Mi (20%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 9m5s                   kube-proxy       
	  Normal  NodeHasSufficientMemory  9m26s (x8 over 9m26s)  kubelet          Node embed-certs-725022 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m26s (x8 over 9m26s)  kubelet          Node embed-certs-725022 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m26s (x7 over 9m26s)  kubelet          Node embed-certs-725022 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  9m26s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 9m20s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  9m20s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  9m20s                  kubelet          Node embed-certs-725022 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m20s                  kubelet          Node embed-certs-725022 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m20s                  kubelet          Node embed-certs-725022 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           9m7s                   node-controller  Node embed-certs-725022 event: Registered Node embed-certs-725022 in Controller
	
	
	==> dmesg <==
	[  +0.052805] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.040215] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +5.021337] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.497873] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.573664] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000008] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.954650] systemd-fstab-generator[631]: Ignoring "noauto" option for root device
	[  +0.058269] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.066848] systemd-fstab-generator[643]: Ignoring "noauto" option for root device
	[  +0.164980] systemd-fstab-generator[657]: Ignoring "noauto" option for root device
	[  +0.158383] systemd-fstab-generator[669]: Ignoring "noauto" option for root device
	[  +0.293994] systemd-fstab-generator[698]: Ignoring "noauto" option for root device
	[  +4.383061] systemd-fstab-generator[795]: Ignoring "noauto" option for root device
	[  +0.058654] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.768908] systemd-fstab-generator[920]: Ignoring "noauto" option for root device
	[  +5.640525] kauditd_printk_skb: 97 callbacks suppressed
	[Jun 3 12:08] kauditd_printk_skb: 79 callbacks suppressed
	[Jun 3 12:12] kauditd_printk_skb: 6 callbacks suppressed
	[  +1.873005] systemd-fstab-generator[3605]: Ignoring "noauto" option for root device
	[  +6.383917] systemd-fstab-generator[3929]: Ignoring "noauto" option for root device
	[  +0.087666] kauditd_printk_skb: 57 callbacks suppressed
	[Jun 3 12:13] systemd-fstab-generator[4138]: Ignoring "noauto" option for root device
	[  +0.146312] kauditd_printk_skb: 12 callbacks suppressed
	[Jun 3 12:14] kauditd_printk_skb: 86 callbacks suppressed
	
	
	==> etcd [be0185f4a9d1211ca7e7bbca26e8776fce45302381682c942aef1604e398e050] <==
	{"level":"info","ts":"2024-06-03T12:12:43.381238Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-06-03T12:12:43.381435Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"c3d3313e1e359742","initial-advertise-peer-urls":["https://192.168.72.245:2380"],"listen-peer-urls":["https://192.168.72.245:2380"],"advertise-client-urls":["https://192.168.72.245:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.72.245:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-06-03T12:12:43.381482Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-06-03T12:12:43.381568Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.72.245:2380"}
	{"level":"info","ts":"2024-06-03T12:12:43.381598Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.72.245:2380"}
	{"level":"info","ts":"2024-06-03T12:12:43.382622Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c3d3313e1e359742 switched to configuration voters=(14110676200346457922)"}
	{"level":"info","ts":"2024-06-03T12:12:43.386799Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"3b0c78e7fea9a901","local-member-id":"c3d3313e1e359742","added-peer-id":"c3d3313e1e359742","added-peer-peer-urls":["https://192.168.72.245:2380"]}
	{"level":"info","ts":"2024-06-03T12:12:43.73179Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c3d3313e1e359742 is starting a new election at term 1"}
	{"level":"info","ts":"2024-06-03T12:12:43.731875Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c3d3313e1e359742 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-06-03T12:12:43.731894Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c3d3313e1e359742 received MsgPreVoteResp from c3d3313e1e359742 at term 1"}
	{"level":"info","ts":"2024-06-03T12:12:43.731905Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c3d3313e1e359742 became candidate at term 2"}
	{"level":"info","ts":"2024-06-03T12:12:43.73191Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c3d3313e1e359742 received MsgVoteResp from c3d3313e1e359742 at term 2"}
	{"level":"info","ts":"2024-06-03T12:12:43.731918Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c3d3313e1e359742 became leader at term 2"}
	{"level":"info","ts":"2024-06-03T12:12:43.731929Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: c3d3313e1e359742 elected leader c3d3313e1e359742 at term 2"}
	{"level":"info","ts":"2024-06-03T12:12:43.735504Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"c3d3313e1e359742","local-member-attributes":"{Name:embed-certs-725022 ClientURLs:[https://192.168.72.245:2379]}","request-path":"/0/members/c3d3313e1e359742/attributes","cluster-id":"3b0c78e7fea9a901","publish-timeout":"7s"}
	{"level":"info","ts":"2024-06-03T12:12:43.735657Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-06-03T12:12:43.735808Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-06-03T12:12:43.740008Z","caller":"etcdserver/server.go:2578","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-06-03T12:12:43.749459Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.72.245:2379"}
	{"level":"info","ts":"2024-06-03T12:12:43.749886Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"3b0c78e7fea9a901","local-member-id":"c3d3313e1e359742","cluster-version":"3.5"}
	{"level":"info","ts":"2024-06-03T12:12:43.750003Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-06-03T12:12:43.750047Z","caller":"etcdserver/server.go:2602","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-06-03T12:12:43.752768Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-06-03T12:12:43.754742Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-06-03T12:12:43.755608Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 12:22:09 up 14 min,  0 users,  load average: 0.53, 0.28, 0.16
	Linux embed-certs-725022 5.10.207 #1 SMP Wed May 22 22:17:16 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [3cce6b24a5e40947bc39fbf2b9781d4c8694a66b6cd4b7c6d487e65dc24aff6a] <==
	I0603 12:16:04.532920       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0603 12:17:45.639080       1 handler_proxy.go:93] no RequestInfo found in the context
	E0603 12:17:45.639482       1 controller.go:146] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	W0603 12:17:46.640410       1 handler_proxy.go:93] no RequestInfo found in the context
	E0603 12:17:46.640456       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0603 12:17:46.640467       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0603 12:17:46.640545       1 handler_proxy.go:93] no RequestInfo found in the context
	E0603 12:17:46.640589       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0603 12:17:46.641749       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0603 12:18:46.640812       1 handler_proxy.go:93] no RequestInfo found in the context
	E0603 12:18:46.640887       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0603 12:18:46.640900       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0603 12:18:46.642947       1 handler_proxy.go:93] no RequestInfo found in the context
	E0603 12:18:46.643110       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0603 12:18:46.643141       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0603 12:20:46.641346       1 handler_proxy.go:93] no RequestInfo found in the context
	E0603 12:20:46.641645       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0603 12:20:46.641675       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0603 12:20:46.643610       1 handler_proxy.go:93] no RequestInfo found in the context
	E0603 12:20:46.643680       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0603 12:20:46.643686       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [2a4234747916814afaa8ee7a7a63a5ae355d6e907cffb138f828ed0676d9e7ce] <==
	I0603 12:16:31.837378       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0603 12:17:01.374950       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0603 12:17:01.845162       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0603 12:17:31.379314       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0603 12:17:31.852628       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0603 12:18:01.386094       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0603 12:18:01.860305       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0603 12:18:31.391435       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0603 12:18:31.867788       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0603 12:18:42.540964       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-569cc877fc" duration="528.812µs"
	I0603 12:18:56.536771       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-569cc877fc" duration="862.348µs"
	E0603 12:19:01.396978       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0603 12:19:01.877751       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0603 12:19:31.402118       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0603 12:19:31.885270       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0603 12:20:01.409321       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0603 12:20:01.894903       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0603 12:20:31.417221       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0603 12:20:31.903464       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0603 12:21:01.422067       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0603 12:21:01.911185       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0603 12:21:31.427415       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0603 12:21:31.919677       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0603 12:22:01.433331       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0603 12:22:01.929473       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [60795f3f2672bcb7b61e9b0e595d76ca8b666340913395131c61f92125dcf8d7] <==
	I0603 12:13:02.955916       1 server_linux.go:69] "Using iptables proxy"
	I0603 12:13:02.986572       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.72.245"]
	I0603 12:13:03.096979       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0603 12:13:03.097032       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0603 12:13:03.097049       1 server_linux.go:165] "Using iptables Proxier"
	I0603 12:13:03.102960       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0603 12:13:03.103154       1 server.go:872] "Version info" version="v1.30.1"
	I0603 12:13:03.103167       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0603 12:13:03.104382       1 config.go:192] "Starting service config controller"
	I0603 12:13:03.104401       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0603 12:13:03.104433       1 config.go:101] "Starting endpoint slice config controller"
	I0603 12:13:03.104436       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0603 12:13:03.106384       1 config.go:319] "Starting node config controller"
	I0603 12:13:03.106394       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0603 12:13:03.205861       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0603 12:13:03.205927       1 shared_informer.go:320] Caches are synced for service config
	I0603 12:13:03.207318       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [ef7d5692a59fa66dd7a2449ce50a23eb01aaaf7a99529a20323360c0ed999b68] <==
	W0603 12:12:45.652953       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0603 12:12:45.653187       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0603 12:12:46.516064       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0603 12:12:46.516184       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0603 12:12:46.546322       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0603 12:12:46.546665       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0603 12:12:46.552971       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0603 12:12:46.553228       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0603 12:12:46.631556       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0603 12:12:46.631874       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0603 12:12:46.729881       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0603 12:12:46.729930       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0603 12:12:46.736301       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0603 12:12:46.736354       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0603 12:12:46.751279       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0603 12:12:46.751405       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0603 12:12:46.850466       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0603 12:12:46.851112       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0603 12:12:46.914993       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0603 12:12:46.915236       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0603 12:12:46.920558       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0603 12:12:46.920678       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0603 12:12:47.087056       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0603 12:12:47.087119       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0603 12:12:49.447827       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jun 03 12:19:48 embed-certs-725022 kubelet[3936]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jun 03 12:19:48 embed-certs-725022 kubelet[3936]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jun 03 12:19:48 embed-certs-725022 kubelet[3936]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 03 12:19:48 embed-certs-725022 kubelet[3936]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jun 03 12:19:59 embed-certs-725022 kubelet[3936]: E0603 12:19:59.521943    3936 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-jgmbs" podUID="148d8ece-e094-4df9-989a-1bc59a33b7ca"
	Jun 03 12:20:12 embed-certs-725022 kubelet[3936]: E0603 12:20:12.522783    3936 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-jgmbs" podUID="148d8ece-e094-4df9-989a-1bc59a33b7ca"
	Jun 03 12:20:25 embed-certs-725022 kubelet[3936]: E0603 12:20:25.522830    3936 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-jgmbs" podUID="148d8ece-e094-4df9-989a-1bc59a33b7ca"
	Jun 03 12:20:39 embed-certs-725022 kubelet[3936]: E0603 12:20:39.521552    3936 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-jgmbs" podUID="148d8ece-e094-4df9-989a-1bc59a33b7ca"
	Jun 03 12:20:48 embed-certs-725022 kubelet[3936]: E0603 12:20:48.546230    3936 iptables.go:577] "Could not set up iptables canary" err=<
	Jun 03 12:20:48 embed-certs-725022 kubelet[3936]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jun 03 12:20:48 embed-certs-725022 kubelet[3936]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jun 03 12:20:48 embed-certs-725022 kubelet[3936]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 03 12:20:48 embed-certs-725022 kubelet[3936]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jun 03 12:20:53 embed-certs-725022 kubelet[3936]: E0603 12:20:53.522185    3936 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-jgmbs" podUID="148d8ece-e094-4df9-989a-1bc59a33b7ca"
	Jun 03 12:21:04 embed-certs-725022 kubelet[3936]: E0603 12:21:04.522212    3936 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-jgmbs" podUID="148d8ece-e094-4df9-989a-1bc59a33b7ca"
	Jun 03 12:21:15 embed-certs-725022 kubelet[3936]: E0603 12:21:15.522375    3936 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-jgmbs" podUID="148d8ece-e094-4df9-989a-1bc59a33b7ca"
	Jun 03 12:21:28 embed-certs-725022 kubelet[3936]: E0603 12:21:28.522100    3936 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-jgmbs" podUID="148d8ece-e094-4df9-989a-1bc59a33b7ca"
	Jun 03 12:21:41 embed-certs-725022 kubelet[3936]: E0603 12:21:41.521861    3936 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-jgmbs" podUID="148d8ece-e094-4df9-989a-1bc59a33b7ca"
	Jun 03 12:21:48 embed-certs-725022 kubelet[3936]: E0603 12:21:48.545925    3936 iptables.go:577] "Could not set up iptables canary" err=<
	Jun 03 12:21:48 embed-certs-725022 kubelet[3936]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jun 03 12:21:48 embed-certs-725022 kubelet[3936]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jun 03 12:21:48 embed-certs-725022 kubelet[3936]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 03 12:21:48 embed-certs-725022 kubelet[3936]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jun 03 12:21:52 embed-certs-725022 kubelet[3936]: E0603 12:21:52.523499    3936 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-jgmbs" podUID="148d8ece-e094-4df9-989a-1bc59a33b7ca"
	Jun 03 12:22:05 embed-certs-725022 kubelet[3936]: E0603 12:22:05.522573    3936 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-jgmbs" podUID="148d8ece-e094-4df9-989a-1bc59a33b7ca"
	
	
	==> storage-provisioner [81efa28c7c7dd2a51a3f9a51e1a522ccf7a05e1e1baf0cea4ab447dce79f38bf] <==
	I0603 12:13:04.066369       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0603 12:13:04.094510       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0603 12:13:04.094604       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0603 12:13:04.112396       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0603 12:13:04.112935       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-725022_ac8ea1fe-0ae5-4f31-b8b0-7d9ff5347de6!
	I0603 12:13:04.113150       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"5033375e-f80d-4568-bcf5-5027938c3121", APIVersion:"v1", ResourceVersion:"438", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-725022_ac8ea1fe-0ae5-4f31-b8b0-7d9ff5347de6 became leader
	I0603 12:13:04.213489       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-725022_ac8ea1fe-0ae5-4f31-b8b0-7d9ff5347de6!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-725022 -n embed-certs-725022
helpers_test.go:261: (dbg) Run:  kubectl --context embed-certs-725022 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-569cc877fc-jgmbs
helpers_test.go:274: ======> post-mortem[TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context embed-certs-725022 describe pod metrics-server-569cc877fc-jgmbs
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context embed-certs-725022 describe pod metrics-server-569cc877fc-jgmbs: exit status 1 (60.673347ms)

** stderr ** 
	Error from server (NotFound): pods "metrics-server-569cc877fc-jgmbs" not found

** /stderr **
helpers_test.go:279: kubectl --context embed-certs-725022 describe pod metrics-server-569cc877fc-jgmbs: exit status 1
--- FAIL: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (544.23s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (543.46s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.155:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.155:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.155:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.155:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.155:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.155:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.155:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.155:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.155:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.155:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.155:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.155:8443: connect: connection refused
E0603 12:15:38.324555   15028 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/flannel-034991/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.155:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.155:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.155:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.155:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.155:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.155:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.155:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.155:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.155:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.155:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.155:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.155:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.155:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.155:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.155:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.155:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.155:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.155:8443: connect: connection refused
E0603 12:15:47.450711   15028 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/custom-flannel-034991/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.155:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.155:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.155:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.155:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.155:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.155:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.155:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.155:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.155:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.155:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.155:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.155:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.155:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.155:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.155:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.155:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.155:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.155:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.155:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.155:8443: connect: connection refused
E0603 12:15:57.223190   15028 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/kindnet-034991/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.155:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.155:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.155:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.155:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.155:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.155:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.155:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.155:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.155:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.155:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.155:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.155:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.155:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.155:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.155:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.155:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.155:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.155:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.155:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.155:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.155:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.155:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.155:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.155:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.155:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.155:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.155:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.155:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.155:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.155:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.155:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.155:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.155:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.155:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.155:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.155:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.155:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.155:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.155:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.155:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.155:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.155:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.155:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.155:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.155:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.155:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.155:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.155:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.155:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.155:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.155:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.155:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.155:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.155:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.155:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.155:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.155:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.155:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.155:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.155:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.155:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.155:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.155:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.155:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.155:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.155:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.155:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.155:8443: connect: connection refused
E0603 12:16:32.014864   15028 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/enable-default-cni-034991/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.155:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.155:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.155:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.155:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.155:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.155:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.155:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.155:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.155:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.155:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.155:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.155:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.155:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.155:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.155:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.155:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.155:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.155:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.155:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.155:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.155:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.155:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.155:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.155:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.155:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.155:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.155:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.155:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.155:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.155:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.155:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.155:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.155:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.155:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.155:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.155:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.155:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.155:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.155:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.155:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.155:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.155:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.155:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.155:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.155:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.155:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.155:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.155:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.155:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.155:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.155:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.155:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.155:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.155:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.155:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.155:8443: connect: connection refused
E0603 12:16:59.128820   15028 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/bridge-034991/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.155:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.155:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.155:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.155:8443: connect: connection refused
E0603 12:17:01.370388   15028 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/flannel-034991/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.155:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.155:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.155:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.155:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.155:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.155:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.155:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.155:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.155:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.155:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.155:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.155:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.155:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.155:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.155:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.155:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.155:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.155:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.155:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.155:8443: connect: connection refused
E0603 12:17:12.038281   15028 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/addons-926744/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.155:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.155:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.155:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.155:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.155:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.155:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.155:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.155:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.155:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.155:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.155:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.155:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.155:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.155:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.155:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.155:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.155:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.155:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.155:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.155:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.155:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.155:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.155:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.155:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.155:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.155:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.155:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.155:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.155:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.155:8443: connect: connection refused
E0603 12:17:54.814814   15028 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/calico-034991/client.crt: no such file or directory
E0603 12:17:55.060289   15028 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/enable-default-cni-034991/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.155:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.155:8443: connect: connection refused
E0603 12:18:22.173839   15028 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/bridge-034991/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.155:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.155:8443: connect: connection refused
E0603 12:18:48.589700   15028 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/auto-034991/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.155:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.155:8443: connect: connection refused
E0603 12:19:17.858899   15028 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/calico-034991/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.155:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.155:8443: connect: connection refused
E0603 12:19:24.405323   15028 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/custom-flannel-034991/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.155:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.155:8443: connect: connection refused
E0603 12:19:34.178403   15028 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/kindnet-034991/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.155:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.155:8443: connect: connection refused
E0603 12:20:19.213097   15028 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/functional-835483/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.155:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.155:8443: connect: connection refused
(previous warning repeated 18 more times)
E0603 12:20:38.324259   15028 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/flannel-034991/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.155:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.155:8443: connect: connection refused
(previous warning repeated 52 more times)
E0603 12:21:32.014742   15028 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/enable-default-cni-034991/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.155:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.155:8443: connect: connection refused
(previous warning repeated 26 more times)
E0603 12:21:59.128015   15028 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/bridge-034991/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.155:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.155:8443: connect: connection refused
(previous warning repeated 12 more times)
E0603 12:22:12.038170   15028 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/addons-926744/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.155:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.155:8443: connect: connection refused
(previous warning repeated 42 more times)
E0603 12:22:54.814618   15028 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/calico-034991/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.155:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.155:8443: connect: connection refused
[last message repeated 27 more times]
E0603 12:23:22.262021   15028 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/functional-835483/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.155:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.155:8443: connect: connection refused
[last message repeated 25 more times]
E0603 12:23:48.590704   15028 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/auto-034991/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.155:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.155:8443: connect: connection refused
[last message repeated 35 more times]
E0603 12:24:24.404686   15028 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/custom-flannel-034991/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.155:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.155:8443: connect: connection refused
[last message repeated 7 more times]
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline
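The polling above is the harness repeatedly listing the dashboard pods through the profile's apiserver at 192.168.39.155:8443 and getting connection refused. A rough manual equivalent of that list call (a sketch only; it assumes the kubeconfig context carries the profile name old-k8s-version-905554, as minikube normally configures) is:

    # hypothetical reproduction of the pod list the helper keeps retrying;
    # the context name is assumed to match the minikube profile
    kubectl --context old-k8s-version-905554 \
      get pods -n kubernetes-dashboard -l k8s-app=kubernetes-dashboard
    # while the apiserver is down this fails with the same connection-refused error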
start_stop_delete_test.go:274: ***** TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-905554 -n old-k8s-version-905554
start_stop_delete_test.go:274: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-905554 -n old-k8s-version-905554: exit status 2 (229.7784ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:274: status error: exit status 2 (may be ok)
start_stop_delete_test.go:274: "old-k8s-version-905554" apiserver is not running, skipping kubectl commands (state="Stopped")
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
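For reference, the post-mortem that follows is driven by plain CLI calls that can be re-run by hand (commands taken verbatim from the harness output; out/minikube-linux-amd64 is the binary path used in this test workspace). The {{.APIServer}} check above reports Stopped while the {{.Host}} check below reports Running: the VM is up but kube-apiserver is not serving on 192.168.39.155:8443, consistent with the connection-refused polling.

    # node/apiserver state for the profile, as queried by the harness
    out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-905554 -n old-k8s-version-905554
    out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-905554 -n old-k8s-version-905554
    # last 25 log lines collected for the post-mortem
    out/minikube-linux-amd64 -p old-k8s-version-905554 logs -n 25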
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-905554 -n old-k8s-version-905554
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-905554 -n old-k8s-version-905554: exit status 2 (221.250854ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-905554 logs -n 25
E0603 12:24:34.177788   15028 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/kindnet-034991/client.crt: no such file or directory
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-905554 logs -n 25: (1.597552686s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p calico-034991 sudo cat                              | calico-034991                | jenkins | v1.33.1 | 03 Jun 24 11:58 UTC | 03 Jun 24 11:58 UTC |
	|         | /etc/containerd/config.toml                            |                              |         |         |                     |                     |
	| ssh     | -p calico-034991 sudo                                  | calico-034991                | jenkins | v1.33.1 | 03 Jun 24 11:58 UTC | 03 Jun 24 11:58 UTC |
	|         | containerd config dump                                 |                              |         |         |                     |                     |
	| ssh     | -p calico-034991 sudo                                  | calico-034991                | jenkins | v1.33.1 | 03 Jun 24 11:58 UTC | 03 Jun 24 11:58 UTC |
	|         | systemctl status crio --all                            |                              |         |         |                     |                     |
	|         | --full --no-pager                                      |                              |         |         |                     |                     |
	| ssh     | -p calico-034991 sudo                                  | calico-034991                | jenkins | v1.33.1 | 03 Jun 24 11:58 UTC | 03 Jun 24 11:58 UTC |
	|         | systemctl cat crio --no-pager                          |                              |         |         |                     |                     |
	| ssh     | -p calico-034991 sudo find                             | calico-034991                | jenkins | v1.33.1 | 03 Jun 24 11:58 UTC | 03 Jun 24 11:58 UTC |
	|         | /etc/crio -type f -exec sh -c                          |                              |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                   |                              |         |         |                     |                     |
	| ssh     | -p calico-034991 sudo crio                             | calico-034991                | jenkins | v1.33.1 | 03 Jun 24 11:58 UTC | 03 Jun 24 11:58 UTC |
	|         | config                                                 |                              |         |         |                     |                     |
	| delete  | -p calico-034991                                       | calico-034991                | jenkins | v1.33.1 | 03 Jun 24 11:58 UTC | 03 Jun 24 11:58 UTC |
	| delete  | -p                                                     | disable-driver-mounts-231568 | jenkins | v1.33.1 | 03 Jun 24 11:58 UTC | 03 Jun 24 11:58 UTC |
	|         | disable-driver-mounts-231568                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-196710 | jenkins | v1.33.1 | 03 Jun 24 11:58 UTC | 03 Jun 24 11:59 UTC |
	|         | default-k8s-diff-port-196710                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-725022            | embed-certs-725022           | jenkins | v1.33.1 | 03 Jun 24 11:59 UTC | 03 Jun 24 11:59 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-725022                                  | embed-certs-725022           | jenkins | v1.33.1 | 03 Jun 24 11:59 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-602118             | no-preload-602118            | jenkins | v1.33.1 | 03 Jun 24 11:59 UTC | 03 Jun 24 11:59 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-602118                                   | no-preload-602118            | jenkins | v1.33.1 | 03 Jun 24 11:59 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-196710  | default-k8s-diff-port-196710 | jenkins | v1.33.1 | 03 Jun 24 11:59 UTC | 03 Jun 24 11:59 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-196710 | jenkins | v1.33.1 | 03 Jun 24 11:59 UTC |                     |
	|         | default-k8s-diff-port-196710                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-905554        | old-k8s-version-905554       | jenkins | v1.33.1 | 03 Jun 24 12:01 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-725022                 | embed-certs-725022           | jenkins | v1.33.1 | 03 Jun 24 12:01 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-725022                                  | embed-certs-725022           | jenkins | v1.33.1 | 03 Jun 24 12:01 UTC | 03 Jun 24 12:13 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.1                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-602118                  | no-preload-602118            | jenkins | v1.33.1 | 03 Jun 24 12:01 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-602118                                   | no-preload-602118            | jenkins | v1.33.1 | 03 Jun 24 12:02 UTC | 03 Jun 24 12:12 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.1                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-196710       | default-k8s-diff-port-196710 | jenkins | v1.33.1 | 03 Jun 24 12:02 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-196710 | jenkins | v1.33.1 | 03 Jun 24 12:02 UTC | 03 Jun 24 12:12 UTC |
	|         | default-k8s-diff-port-196710                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.1                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-905554                              | old-k8s-version-905554       | jenkins | v1.33.1 | 03 Jun 24 12:02 UTC | 03 Jun 24 12:02 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-905554             | old-k8s-version-905554       | jenkins | v1.33.1 | 03 Jun 24 12:02 UTC | 03 Jun 24 12:03 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-905554                              | old-k8s-version-905554       | jenkins | v1.33.1 | 03 Jun 24 12:03 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/06/03 12:03:00
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0603 12:03:00.091233   73662 out.go:291] Setting OutFile to fd 1 ...
	I0603 12:03:00.091511   73662 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0603 12:03:00.091522   73662 out.go:304] Setting ErrFile to fd 2...
	I0603 12:03:00.091533   73662 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0603 12:03:00.091747   73662 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19008-7755/.minikube/bin
	I0603 12:03:00.092302   73662 out.go:298] Setting JSON to false
	I0603 12:03:00.093203   73662 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":6325,"bootTime":1717409855,"procs":200,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1060-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0603 12:03:00.093258   73662 start.go:139] virtualization: kvm guest
	I0603 12:03:00.095496   73662 out.go:177] * [old-k8s-version-905554] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0603 12:03:00.097136   73662 out.go:177]   - MINIKUBE_LOCATION=19008
	I0603 12:03:00.097143   73662 notify.go:220] Checking for updates...
	I0603 12:03:00.098729   73662 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0603 12:03:00.100123   73662 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19008-7755/kubeconfig
	I0603 12:03:00.101401   73662 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19008-7755/.minikube
	I0603 12:03:00.102776   73662 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0603 12:03:00.104123   73662 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0603 12:03:00.105823   73662 config.go:182] Loaded profile config "old-k8s-version-905554": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0603 12:03:00.106265   73662 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 12:03:00.106313   73662 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 12:03:00.120941   73662 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43635
	I0603 12:03:00.121275   73662 main.go:141] libmachine: () Calling .GetVersion
	I0603 12:03:00.121783   73662 main.go:141] libmachine: Using API Version  1
	I0603 12:03:00.121807   73662 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 12:03:00.122090   73662 main.go:141] libmachine: () Calling .GetMachineName
	I0603 12:03:00.122253   73662 main.go:141] libmachine: (old-k8s-version-905554) Calling .DriverName
	I0603 12:03:00.124037   73662 out.go:177] * Kubernetes 1.30.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.1
	I0603 12:03:00.125329   73662 driver.go:392] Setting default libvirt URI to qemu:///system
	I0603 12:03:00.125608   73662 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 12:03:00.125644   73662 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 12:03:00.139840   73662 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46571
	I0603 12:03:00.140215   73662 main.go:141] libmachine: () Calling .GetVersion
	I0603 12:03:00.140599   73662 main.go:141] libmachine: Using API Version  1
	I0603 12:03:00.140623   73662 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 12:03:00.140906   73662 main.go:141] libmachine: () Calling .GetMachineName
	I0603 12:03:00.141069   73662 main.go:141] libmachine: (old-k8s-version-905554) Calling .DriverName
	I0603 12:03:00.174375   73662 out.go:177] * Using the kvm2 driver based on existing profile
	I0603 12:03:00.175650   73662 start.go:297] selected driver: kvm2
	I0603 12:03:00.175667   73662 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-905554 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{K
ubernetesVersion:v1.20.0 ClusterName:old-k8s-version-905554 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.155 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:2628
0h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0603 12:03:00.175770   73662 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0603 12:03:00.176396   73662 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0603 12:03:00.176476   73662 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19008-7755/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0603 12:03:00.191380   73662 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0603 12:03:00.191738   73662 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0603 12:03:00.191796   73662 cni.go:84] Creating CNI manager for ""
	I0603 12:03:00.191809   73662 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0603 12:03:00.191847   73662 start.go:340] cluster config:
	{Name:old-k8s-version-905554 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-905554 Namespace:default
APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.155 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2
000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0603 12:03:00.191975   73662 iso.go:125] acquiring lock: {Name:mkdc8e745fc6a0fd8e502f6ad2510510ae9abf27 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0603 12:03:00.193899   73662 out.go:177] * Starting "old-k8s-version-905554" primary control-plane node in "old-k8s-version-905554" cluster
	I0603 12:03:04.175308   72964 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.245:22: connect: no route to host
	I0603 12:03:00.195191   73662 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0603 12:03:00.195231   73662 preload.go:147] Found local preload: /home/jenkins/minikube-integration/19008-7755/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0603 12:03:00.195240   73662 cache.go:56] Caching tarball of preloaded images
	I0603 12:03:00.195331   73662 preload.go:173] Found /home/jenkins/minikube-integration/19008-7755/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0603 12:03:00.195345   73662 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0603 12:03:00.195441   73662 profile.go:143] Saving config to /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/old-k8s-version-905554/config.json ...
	I0603 12:03:00.195620   73662 start.go:360] acquireMachinesLock for old-k8s-version-905554: {Name:mk7389fd163f37e28f7d4d842bab151f7e27bc7c Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0603 12:03:07.247321   72964 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.245:22: connect: no route to host
	I0603 12:03:13.327307   72964 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.245:22: connect: no route to host
	I0603 12:03:16.399349   72964 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.245:22: connect: no route to host
	I0603 12:03:22.479291   72964 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.245:22: connect: no route to host
	I0603 12:03:25.551304   72964 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.245:22: connect: no route to host
	I0603 12:03:31.631290   72964 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.245:22: connect: no route to host
	I0603 12:03:34.703297   72964 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.245:22: connect: no route to host
	I0603 12:03:40.783313   72964 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.245:22: connect: no route to host
	I0603 12:03:43.855312   72964 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.245:22: connect: no route to host
	I0603 12:03:49.935253   72964 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.245:22: connect: no route to host
	I0603 12:03:53.007321   72964 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.245:22: connect: no route to host
	I0603 12:03:59.087310   72964 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.245:22: connect: no route to host
	I0603 12:04:02.159408   72964 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.245:22: connect: no route to host
	I0603 12:04:08.239374   72964 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.245:22: connect: no route to host
	I0603 12:04:11.311346   72964 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.245:22: connect: no route to host
	I0603 12:04:17.391313   72964 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.245:22: connect: no route to host
	I0603 12:04:20.463280   72964 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.245:22: connect: no route to host
	I0603 12:04:26.543359   72964 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.245:22: connect: no route to host
	I0603 12:04:29.615273   72964 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.245:22: connect: no route to host
	I0603 12:04:35.695325   72964 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.245:22: connect: no route to host
	I0603 12:04:38.767328   72964 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.245:22: connect: no route to host
	I0603 12:04:44.847321   72964 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.245:22: connect: no route to host
	I0603 12:04:47.919323   72964 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.245:22: connect: no route to host
	I0603 12:04:53.999275   72964 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.245:22: connect: no route to host
	I0603 12:04:57.071278   72964 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.245:22: connect: no route to host
	I0603 12:05:03.151359   72964 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.245:22: connect: no route to host
	I0603 12:05:06.223409   72964 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.245:22: connect: no route to host
	I0603 12:05:12.303278   72964 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.245:22: connect: no route to host
	I0603 12:05:15.375349   72964 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.245:22: connect: no route to host
	I0603 12:05:21.455288   72964 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.245:22: connect: no route to host
	I0603 12:05:24.527374   72964 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.245:22: connect: no route to host
	I0603 12:05:30.607297   72964 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.245:22: connect: no route to host
	I0603 12:05:33.679325   72964 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.245:22: connect: no route to host
	I0603 12:05:39.759247   72964 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.245:22: connect: no route to host
	I0603 12:05:42.831304   72964 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.245:22: connect: no route to host
	I0603 12:05:48.911327   72964 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.245:22: connect: no route to host
	I0603 12:05:51.983403   72964 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.245:22: connect: no route to host
	I0603 12:05:58.063364   72964 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.245:22: connect: no route to host
	I0603 12:06:01.135268   72964 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.245:22: connect: no route to host
	I0603 12:06:07.215311   72964 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.245:22: connect: no route to host
	I0603 12:06:10.287358   72964 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.245:22: connect: no route to host
	I0603 12:06:16.367324   72964 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.245:22: connect: no route to host
	I0603 12:06:19.439350   72964 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.245:22: connect: no route to host
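
Each "no route to host" line above is one failed TCP connection to the guest's SSH port while the embed-certs VM is still down; the dialer keeps probing until the host answers. A tiny illustrative probe in Go (address and timeout copied from the log purely for the example, not minikube's actual code):

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Probe the guest's SSH port; a powered-off or unreachable VM typically
	// returns "connect: no route to host" or times out.
	addr := net.JoinHostPort("192.168.72.245", "22")
	conn, err := net.DialTimeout("tcp", addr, 10*time.Second)
	if err != nil {
		fmt.Println("dial failed:", err)
		return
	}
	defer conn.Close()
	fmt.Println("ssh port reachable:", conn.RemoteAddr())
}
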
	I0603 12:06:22.443361   73179 start.go:364] duration metric: took 4m16.965076383s to acquireMachinesLock for "no-preload-602118"
	I0603 12:06:22.443417   73179 start.go:96] Skipping create...Using existing machine configuration
	I0603 12:06:22.443423   73179 fix.go:54] fixHost starting: 
	I0603 12:06:22.443783   73179 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 12:06:22.443812   73179 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 12:06:22.458838   73179 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35011
	I0603 12:06:22.459247   73179 main.go:141] libmachine: () Calling .GetVersion
	I0603 12:06:22.459645   73179 main.go:141] libmachine: Using API Version  1
	I0603 12:06:22.459662   73179 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 12:06:22.459991   73179 main.go:141] libmachine: () Calling .GetMachineName
	I0603 12:06:22.460181   73179 main.go:141] libmachine: (no-preload-602118) Calling .DriverName
	I0603 12:06:22.460333   73179 main.go:141] libmachine: (no-preload-602118) Calling .GetState
	I0603 12:06:22.461743   73179 fix.go:112] recreateIfNeeded on no-preload-602118: state=Stopped err=<nil>
	I0603 12:06:22.461765   73179 main.go:141] libmachine: (no-preload-602118) Calling .DriverName
	W0603 12:06:22.461946   73179 fix.go:138] unexpected machine state, will restart: <nil>
	I0603 12:06:22.463492   73179 out.go:177] * Restarting existing kvm2 VM for "no-preload-602118" ...
	I0603 12:06:22.440994   72964 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0603 12:06:22.441029   72964 main.go:141] libmachine: (embed-certs-725022) Calling .GetMachineName
	I0603 12:06:22.441366   72964 buildroot.go:166] provisioning hostname "embed-certs-725022"
	I0603 12:06:22.441382   72964 main.go:141] libmachine: (embed-certs-725022) Calling .GetMachineName
	I0603 12:06:22.441594   72964 main.go:141] libmachine: (embed-certs-725022) Calling .GetSSHHostname
	I0603 12:06:22.443211   72964 machine.go:97] duration metric: took 4m37.428820472s to provisionDockerMachine
	I0603 12:06:22.443252   72964 fix.go:56] duration metric: took 4m37.449227063s for fixHost
	I0603 12:06:22.443258   72964 start.go:83] releasing machines lock for "embed-certs-725022", held for 4m37.449246886s
	W0603 12:06:22.443279   72964 start.go:713] error starting host: provision: host is not running
	W0603 12:06:22.443377   72964 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0603 12:06:22.443391   72964 start.go:728] Will try again in 5 seconds ...
	I0603 12:06:22.464734   73179 main.go:141] libmachine: (no-preload-602118) Calling .Start
	I0603 12:06:22.464901   73179 main.go:141] libmachine: (no-preload-602118) Ensuring networks are active...
	I0603 12:06:22.465632   73179 main.go:141] libmachine: (no-preload-602118) Ensuring network default is active
	I0603 12:06:22.465908   73179 main.go:141] libmachine: (no-preload-602118) Ensuring network mk-no-preload-602118 is active
	I0603 12:06:22.466273   73179 main.go:141] libmachine: (no-preload-602118) Getting domain xml...
	I0603 12:06:22.466923   73179 main.go:141] libmachine: (no-preload-602118) Creating domain...
	I0603 12:06:23.644255   73179 main.go:141] libmachine: (no-preload-602118) Waiting to get IP...
	I0603 12:06:23.645290   73179 main.go:141] libmachine: (no-preload-602118) DBG | domain no-preload-602118 has defined MAC address 52:54:00:ac:6c:91 in network mk-no-preload-602118
	I0603 12:06:23.645661   73179 main.go:141] libmachine: (no-preload-602118) DBG | unable to find current IP address of domain no-preload-602118 in network mk-no-preload-602118
	I0603 12:06:23.645846   73179 main.go:141] libmachine: (no-preload-602118) DBG | I0603 12:06:23.645673   74346 retry.go:31] will retry after 270.126449ms: waiting for machine to come up
	I0603 12:06:23.917313   73179 main.go:141] libmachine: (no-preload-602118) DBG | domain no-preload-602118 has defined MAC address 52:54:00:ac:6c:91 in network mk-no-preload-602118
	I0603 12:06:23.917691   73179 main.go:141] libmachine: (no-preload-602118) DBG | unable to find current IP address of domain no-preload-602118 in network mk-no-preload-602118
	I0603 12:06:23.917724   73179 main.go:141] libmachine: (no-preload-602118) DBG | I0603 12:06:23.917635   74346 retry.go:31] will retry after 385.827167ms: waiting for machine to come up
	I0603 12:06:24.305342   73179 main.go:141] libmachine: (no-preload-602118) DBG | domain no-preload-602118 has defined MAC address 52:54:00:ac:6c:91 in network mk-no-preload-602118
	I0603 12:06:24.305787   73179 main.go:141] libmachine: (no-preload-602118) DBG | unable to find current IP address of domain no-preload-602118 in network mk-no-preload-602118
	I0603 12:06:24.305809   73179 main.go:141] libmachine: (no-preload-602118) DBG | I0603 12:06:24.305756   74346 retry.go:31] will retry after 361.435978ms: waiting for machine to come up
	I0603 12:06:24.669132   73179 main.go:141] libmachine: (no-preload-602118) DBG | domain no-preload-602118 has defined MAC address 52:54:00:ac:6c:91 in network mk-no-preload-602118
	I0603 12:06:24.669489   73179 main.go:141] libmachine: (no-preload-602118) DBG | unable to find current IP address of domain no-preload-602118 in network mk-no-preload-602118
	I0603 12:06:24.669510   73179 main.go:141] libmachine: (no-preload-602118) DBG | I0603 12:06:24.669460   74346 retry.go:31] will retry after 420.041485ms: waiting for machine to come up
	I0603 12:06:25.090925   73179 main.go:141] libmachine: (no-preload-602118) DBG | domain no-preload-602118 has defined MAC address 52:54:00:ac:6c:91 in network mk-no-preload-602118
	I0603 12:06:25.091348   73179 main.go:141] libmachine: (no-preload-602118) DBG | unable to find current IP address of domain no-preload-602118 in network mk-no-preload-602118
	I0603 12:06:25.091378   73179 main.go:141] libmachine: (no-preload-602118) DBG | I0603 12:06:25.091293   74346 retry.go:31] will retry after 624.215107ms: waiting for machine to come up
	I0603 12:06:27.445060   72964 start.go:360] acquireMachinesLock for embed-certs-725022: {Name:mk7389fd163f37e28f7d4d842bab151f7e27bc7c Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0603 12:06:25.717004   73179 main.go:141] libmachine: (no-preload-602118) DBG | domain no-preload-602118 has defined MAC address 52:54:00:ac:6c:91 in network mk-no-preload-602118
	I0603 12:06:25.717428   73179 main.go:141] libmachine: (no-preload-602118) DBG | unable to find current IP address of domain no-preload-602118 in network mk-no-preload-602118
	I0603 12:06:25.717459   73179 main.go:141] libmachine: (no-preload-602118) DBG | I0603 12:06:25.717376   74346 retry.go:31] will retry after 589.80788ms: waiting for machine to come up
	I0603 12:06:26.309117   73179 main.go:141] libmachine: (no-preload-602118) DBG | domain no-preload-602118 has defined MAC address 52:54:00:ac:6c:91 in network mk-no-preload-602118
	I0603 12:06:26.309553   73179 main.go:141] libmachine: (no-preload-602118) DBG | unable to find current IP address of domain no-preload-602118 in network mk-no-preload-602118
	I0603 12:06:26.309573   73179 main.go:141] libmachine: (no-preload-602118) DBG | I0603 12:06:26.309525   74346 retry.go:31] will retry after 1.045937243s: waiting for machine to come up
	I0603 12:06:27.356628   73179 main.go:141] libmachine: (no-preload-602118) DBG | domain no-preload-602118 has defined MAC address 52:54:00:ac:6c:91 in network mk-no-preload-602118
	I0603 12:06:27.357021   73179 main.go:141] libmachine: (no-preload-602118) DBG | unable to find current IP address of domain no-preload-602118 in network mk-no-preload-602118
	I0603 12:06:27.357091   73179 main.go:141] libmachine: (no-preload-602118) DBG | I0603 12:06:27.357005   74346 retry.go:31] will retry after 1.111448638s: waiting for machine to come up
	I0603 12:06:28.469530   73179 main.go:141] libmachine: (no-preload-602118) DBG | domain no-preload-602118 has defined MAC address 52:54:00:ac:6c:91 in network mk-no-preload-602118
	I0603 12:06:28.469988   73179 main.go:141] libmachine: (no-preload-602118) DBG | unable to find current IP address of domain no-preload-602118 in network mk-no-preload-602118
	I0603 12:06:28.470019   73179 main.go:141] libmachine: (no-preload-602118) DBG | I0603 12:06:28.469937   74346 retry.go:31] will retry after 1.80245369s: waiting for machine to come up
	I0603 12:06:30.274889   73179 main.go:141] libmachine: (no-preload-602118) DBG | domain no-preload-602118 has defined MAC address 52:54:00:ac:6c:91 in network mk-no-preload-602118
	I0603 12:06:30.275389   73179 main.go:141] libmachine: (no-preload-602118) DBG | unable to find current IP address of domain no-preload-602118 in network mk-no-preload-602118
	I0603 12:06:30.275422   73179 main.go:141] libmachine: (no-preload-602118) DBG | I0603 12:06:30.275339   74346 retry.go:31] will retry after 1.896022361s: waiting for machine to come up
	I0603 12:06:32.173697   73179 main.go:141] libmachine: (no-preload-602118) DBG | domain no-preload-602118 has defined MAC address 52:54:00:ac:6c:91 in network mk-no-preload-602118
	I0603 12:06:32.174116   73179 main.go:141] libmachine: (no-preload-602118) DBG | unable to find current IP address of domain no-preload-602118 in network mk-no-preload-602118
	I0603 12:06:32.174147   73179 main.go:141] libmachine: (no-preload-602118) DBG | I0603 12:06:32.174065   74346 retry.go:31] will retry after 2.13920116s: waiting for machine to come up
	I0603 12:06:34.315196   73179 main.go:141] libmachine: (no-preload-602118) DBG | domain no-preload-602118 has defined MAC address 52:54:00:ac:6c:91 in network mk-no-preload-602118
	I0603 12:06:34.315598   73179 main.go:141] libmachine: (no-preload-602118) DBG | unable to find current IP address of domain no-preload-602118 in network mk-no-preload-602118
	I0603 12:06:34.315629   73179 main.go:141] libmachine: (no-preload-602118) DBG | I0603 12:06:34.315556   74346 retry.go:31] will retry after 3.168755933s: waiting for machine to come up
	I0603 12:06:37.485424   73179 main.go:141] libmachine: (no-preload-602118) DBG | domain no-preload-602118 has defined MAC address 52:54:00:ac:6c:91 in network mk-no-preload-602118
	I0603 12:06:37.485804   73179 main.go:141] libmachine: (no-preload-602118) DBG | unable to find current IP address of domain no-preload-602118 in network mk-no-preload-602118
	I0603 12:06:37.485840   73179 main.go:141] libmachine: (no-preload-602118) DBG | I0603 12:06:37.485767   74346 retry.go:31] will retry after 3.278336467s: waiting for machine to come up
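
The repeated "will retry after …" lines are a poll-with-backoff loop: minikube keeps asking libvirt for the guest's DHCP lease and sleeps a growing, jittered interval between attempts. A minimal Go sketch of that pattern (generic; the delays and the lease check are placeholders, not minikube's retry.go):

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// pollWithBackoff retries check until it succeeds or the deadline passes,
// sleeping a little longer (with jitter) after each failed attempt.
func pollWithBackoff(check func() error, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	delay := 250 * time.Millisecond
	for attempt := 1; ; attempt++ {
		err := check()
		if err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("timed out after %d attempts: %w", attempt, err)
		}
		// Jitter keeps concurrent waiters from polling in lockstep.
		sleep := delay + time.Duration(rand.Int63n(int64(delay)))
		fmt.Printf("will retry after %v: waiting for machine to come up\n", sleep)
		time.Sleep(sleep)
		if delay < 5*time.Second {
			delay += delay / 2
		}
	}
}

func main() {
	start := time.Now()
	err := pollWithBackoff(func() error {
		// Placeholder check; the real code asks libvirt for the domain's DHCP lease.
		if time.Since(start) < 3*time.Second {
			return errors.New("unable to find current IP address")
		}
		return nil
	}, 30*time.Second)
	fmt.Println("result:", err)
}
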
	I0603 12:06:42.080144   73294 start.go:364] duration metric: took 4m27.397961658s to acquireMachinesLock for "default-k8s-diff-port-196710"
	I0603 12:06:42.080213   73294 start.go:96] Skipping create...Using existing machine configuration
	I0603 12:06:42.080220   73294 fix.go:54] fixHost starting: 
	I0603 12:06:42.080611   73294 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 12:06:42.080640   73294 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 12:06:42.096874   73294 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35397
	I0603 12:06:42.097286   73294 main.go:141] libmachine: () Calling .GetVersion
	I0603 12:06:42.097763   73294 main.go:141] libmachine: Using API Version  1
	I0603 12:06:42.097789   73294 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 12:06:42.098191   73294 main.go:141] libmachine: () Calling .GetMachineName
	I0603 12:06:42.098383   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .DriverName
	I0603 12:06:42.098513   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .GetState
	I0603 12:06:42.099866   73294 fix.go:112] recreateIfNeeded on default-k8s-diff-port-196710: state=Stopped err=<nil>
	I0603 12:06:42.099890   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .DriverName
	W0603 12:06:42.100034   73294 fix.go:138] unexpected machine state, will restart: <nil>
	I0603 12:06:42.102388   73294 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-196710" ...
	I0603 12:06:40.768113   73179 main.go:141] libmachine: (no-preload-602118) DBG | domain no-preload-602118 has defined MAC address 52:54:00:ac:6c:91 in network mk-no-preload-602118
	I0603 12:06:40.768689   73179 main.go:141] libmachine: (no-preload-602118) Found IP for machine: 192.168.50.245
	I0603 12:06:40.768705   73179 main.go:141] libmachine: (no-preload-602118) Reserving static IP address...
	I0603 12:06:40.768717   73179 main.go:141] libmachine: (no-preload-602118) DBG | domain no-preload-602118 has current primary IP address 192.168.50.245 and MAC address 52:54:00:ac:6c:91 in network mk-no-preload-602118
	I0603 12:06:40.769262   73179 main.go:141] libmachine: (no-preload-602118) Reserved static IP address: 192.168.50.245
	I0603 12:06:40.769291   73179 main.go:141] libmachine: (no-preload-602118) Waiting for SSH to be available...
	I0603 12:06:40.769306   73179 main.go:141] libmachine: (no-preload-602118) DBG | found host DHCP lease matching {name: "no-preload-602118", mac: "52:54:00:ac:6c:91", ip: "192.168.50.245"} in network mk-no-preload-602118: {Iface:virbr2 ExpiryTime:2024-06-03 13:06:33 +0000 UTC Type:0 Mac:52:54:00:ac:6c:91 Iaid: IPaddr:192.168.50.245 Prefix:24 Hostname:no-preload-602118 Clientid:01:52:54:00:ac:6c:91}
	I0603 12:06:40.769324   73179 main.go:141] libmachine: (no-preload-602118) DBG | skip adding static IP to network mk-no-preload-602118 - found existing host DHCP lease matching {name: "no-preload-602118", mac: "52:54:00:ac:6c:91", ip: "192.168.50.245"}
	I0603 12:06:40.769336   73179 main.go:141] libmachine: (no-preload-602118) DBG | Getting to WaitForSSH function...
	I0603 12:06:40.771708   73179 main.go:141] libmachine: (no-preload-602118) DBG | domain no-preload-602118 has defined MAC address 52:54:00:ac:6c:91 in network mk-no-preload-602118
	I0603 12:06:40.772029   73179 main.go:141] libmachine: (no-preload-602118) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:6c:91", ip: ""} in network mk-no-preload-602118: {Iface:virbr2 ExpiryTime:2024-06-03 13:06:33 +0000 UTC Type:0 Mac:52:54:00:ac:6c:91 Iaid: IPaddr:192.168.50.245 Prefix:24 Hostname:no-preload-602118 Clientid:01:52:54:00:ac:6c:91}
	I0603 12:06:40.772056   73179 main.go:141] libmachine: (no-preload-602118) DBG | domain no-preload-602118 has defined IP address 192.168.50.245 and MAC address 52:54:00:ac:6c:91 in network mk-no-preload-602118
	I0603 12:06:40.772179   73179 main.go:141] libmachine: (no-preload-602118) DBG | Using SSH client type: external
	I0603 12:06:40.772203   73179 main.go:141] libmachine: (no-preload-602118) DBG | Using SSH private key: /home/jenkins/minikube-integration/19008-7755/.minikube/machines/no-preload-602118/id_rsa (-rw-------)
	I0603 12:06:40.772247   73179 main.go:141] libmachine: (no-preload-602118) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.245 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19008-7755/.minikube/machines/no-preload-602118/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0603 12:06:40.772276   73179 main.go:141] libmachine: (no-preload-602118) DBG | About to run SSH command:
	I0603 12:06:40.772292   73179 main.go:141] libmachine: (no-preload-602118) DBG | exit 0
	I0603 12:06:40.898941   73179 main.go:141] libmachine: (no-preload-602118) DBG | SSH cmd err, output: <nil>: 
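
For the initial WaitForSSH probe the "external" client shells out to /usr/bin/ssh with host-key checking disabled and the machine's private key, running "exit 0" until it succeeds. An approximate reconstruction of that invocation with os/exec (the key path below is an example, not the real layout):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Roughly what the "external" SSH client does: shell out to /usr/bin/ssh
	// with throwaway host-key handling, authenticating with the machine's key.
	args := []string{
		"-F", "/dev/null",
		"-o", "ConnectionAttempts=3",
		"-o", "ConnectTimeout=10",
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"-o", "IdentitiesOnly=yes",
		"-i", "/home/jenkins/.minikube/machines/example/id_rsa", // example path
		"-p", "22",
		"docker@192.168.50.245",
		"exit 0", // the probe command from the log
	}
	out, err := exec.Command("/usr/bin/ssh", args...).CombinedOutput()
	fmt.Printf("output: %q err: %v\n", out, err)
}
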
	I0603 12:06:40.899308   73179 main.go:141] libmachine: (no-preload-602118) Calling .GetConfigRaw
	I0603 12:06:40.899900   73179 main.go:141] libmachine: (no-preload-602118) Calling .GetIP
	I0603 12:06:40.902486   73179 main.go:141] libmachine: (no-preload-602118) DBG | domain no-preload-602118 has defined MAC address 52:54:00:ac:6c:91 in network mk-no-preload-602118
	I0603 12:06:40.902835   73179 main.go:141] libmachine: (no-preload-602118) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:6c:91", ip: ""} in network mk-no-preload-602118: {Iface:virbr2 ExpiryTime:2024-06-03 13:06:33 +0000 UTC Type:0 Mac:52:54:00:ac:6c:91 Iaid: IPaddr:192.168.50.245 Prefix:24 Hostname:no-preload-602118 Clientid:01:52:54:00:ac:6c:91}
	I0603 12:06:40.902863   73179 main.go:141] libmachine: (no-preload-602118) DBG | domain no-preload-602118 has defined IP address 192.168.50.245 and MAC address 52:54:00:ac:6c:91 in network mk-no-preload-602118
	I0603 12:06:40.903133   73179 profile.go:143] Saving config to /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/no-preload-602118/config.json ...
	I0603 12:06:40.903331   73179 machine.go:94] provisionDockerMachine start ...
	I0603 12:06:40.903348   73179 main.go:141] libmachine: (no-preload-602118) Calling .DriverName
	I0603 12:06:40.903530   73179 main.go:141] libmachine: (no-preload-602118) Calling .GetSSHHostname
	I0603 12:06:40.905503   73179 main.go:141] libmachine: (no-preload-602118) DBG | domain no-preload-602118 has defined MAC address 52:54:00:ac:6c:91 in network mk-no-preload-602118
	I0603 12:06:40.905783   73179 main.go:141] libmachine: (no-preload-602118) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:6c:91", ip: ""} in network mk-no-preload-602118: {Iface:virbr2 ExpiryTime:2024-06-03 13:06:33 +0000 UTC Type:0 Mac:52:54:00:ac:6c:91 Iaid: IPaddr:192.168.50.245 Prefix:24 Hostname:no-preload-602118 Clientid:01:52:54:00:ac:6c:91}
	I0603 12:06:40.905816   73179 main.go:141] libmachine: (no-preload-602118) DBG | domain no-preload-602118 has defined IP address 192.168.50.245 and MAC address 52:54:00:ac:6c:91 in network mk-no-preload-602118
	I0603 12:06:40.905911   73179 main.go:141] libmachine: (no-preload-602118) Calling .GetSSHPort
	I0603 12:06:40.906094   73179 main.go:141] libmachine: (no-preload-602118) Calling .GetSSHKeyPath
	I0603 12:06:40.906253   73179 main.go:141] libmachine: (no-preload-602118) Calling .GetSSHKeyPath
	I0603 12:06:40.906416   73179 main.go:141] libmachine: (no-preload-602118) Calling .GetSSHUsername
	I0603 12:06:40.906579   73179 main.go:141] libmachine: Using SSH client type: native
	I0603 12:06:40.906760   73179 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.50.245 22 <nil> <nil>}
	I0603 12:06:40.906771   73179 main.go:141] libmachine: About to run SSH command:
	hostname
	I0603 12:06:41.015416   73179 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0603 12:06:41.015443   73179 main.go:141] libmachine: (no-preload-602118) Calling .GetMachineName
	I0603 12:06:41.015832   73179 buildroot.go:166] provisioning hostname "no-preload-602118"
	I0603 12:06:41.015861   73179 main.go:141] libmachine: (no-preload-602118) Calling .GetMachineName
	I0603 12:06:41.016078   73179 main.go:141] libmachine: (no-preload-602118) Calling .GetSSHHostname
	I0603 12:06:41.018606   73179 main.go:141] libmachine: (no-preload-602118) DBG | domain no-preload-602118 has defined MAC address 52:54:00:ac:6c:91 in network mk-no-preload-602118
	I0603 12:06:41.018898   73179 main.go:141] libmachine: (no-preload-602118) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:6c:91", ip: ""} in network mk-no-preload-602118: {Iface:virbr2 ExpiryTime:2024-06-03 13:06:33 +0000 UTC Type:0 Mac:52:54:00:ac:6c:91 Iaid: IPaddr:192.168.50.245 Prefix:24 Hostname:no-preload-602118 Clientid:01:52:54:00:ac:6c:91}
	I0603 12:06:41.018928   73179 main.go:141] libmachine: (no-preload-602118) DBG | domain no-preload-602118 has defined IP address 192.168.50.245 and MAC address 52:54:00:ac:6c:91 in network mk-no-preload-602118
	I0603 12:06:41.019125   73179 main.go:141] libmachine: (no-preload-602118) Calling .GetSSHPort
	I0603 12:06:41.019310   73179 main.go:141] libmachine: (no-preload-602118) Calling .GetSSHKeyPath
	I0603 12:06:41.019476   73179 main.go:141] libmachine: (no-preload-602118) Calling .GetSSHKeyPath
	I0603 12:06:41.019597   73179 main.go:141] libmachine: (no-preload-602118) Calling .GetSSHUsername
	I0603 12:06:41.019753   73179 main.go:141] libmachine: Using SSH client type: native
	I0603 12:06:41.019948   73179 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.50.245 22 <nil> <nil>}
	I0603 12:06:41.019961   73179 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-602118 && echo "no-preload-602118" | sudo tee /etc/hostname
	I0603 12:06:41.145267   73179 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-602118
	
	I0603 12:06:41.145298   73179 main.go:141] libmachine: (no-preload-602118) Calling .GetSSHHostname
	I0603 12:06:41.148117   73179 main.go:141] libmachine: (no-preload-602118) DBG | domain no-preload-602118 has defined MAC address 52:54:00:ac:6c:91 in network mk-no-preload-602118
	I0603 12:06:41.148416   73179 main.go:141] libmachine: (no-preload-602118) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:6c:91", ip: ""} in network mk-no-preload-602118: {Iface:virbr2 ExpiryTime:2024-06-03 13:06:33 +0000 UTC Type:0 Mac:52:54:00:ac:6c:91 Iaid: IPaddr:192.168.50.245 Prefix:24 Hostname:no-preload-602118 Clientid:01:52:54:00:ac:6c:91}
	I0603 12:06:41.148444   73179 main.go:141] libmachine: (no-preload-602118) DBG | domain no-preload-602118 has defined IP address 192.168.50.245 and MAC address 52:54:00:ac:6c:91 in network mk-no-preload-602118
	I0603 12:06:41.148692   73179 main.go:141] libmachine: (no-preload-602118) Calling .GetSSHPort
	I0603 12:06:41.148914   73179 main.go:141] libmachine: (no-preload-602118) Calling .GetSSHKeyPath
	I0603 12:06:41.149068   73179 main.go:141] libmachine: (no-preload-602118) Calling .GetSSHKeyPath
	I0603 12:06:41.149199   73179 main.go:141] libmachine: (no-preload-602118) Calling .GetSSHUsername
	I0603 12:06:41.149316   73179 main.go:141] libmachine: Using SSH client type: native
	I0603 12:06:41.149475   73179 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.50.245 22 <nil> <nil>}
	I0603 12:06:41.149490   73179 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-602118' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-602118/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-602118' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0603 12:06:41.267803   73179 main.go:141] libmachine: SSH cmd err, output: <nil>: 
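
"Using SSH client type: native" means the provisioning snippets above (hostname, /etc/hostname, /etc/hosts) run through an in-process Go SSH client rather than the ssh binary. A hedged sketch of executing one such remote command with golang.org/x/crypto/ssh (key path and hostname are placeholders, not minikube's code):

package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	// Load the per-machine private key (example path, not the real layout).
	keyBytes, err := os.ReadFile("/home/jenkins/.minikube/machines/example/id_rsa")
	if err != nil {
		panic(err)
	}
	signer, err := ssh.ParsePrivateKey(keyBytes)
	if err != nil {
		panic(err)
	}

	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // test VMs have throwaway host keys
	}
	client, err := ssh.Dial("tcp", "192.168.50.245:22", cfg)
	if err != nil {
		panic(err)
	}
	defer client.Close()

	session, err := client.NewSession()
	if err != nil {
		panic(err)
	}
	defer session.Close()

	out, err := session.CombinedOutput(`sudo hostname example && echo example | sudo tee /etc/hostname`)
	fmt.Printf("output: %s err: %v\n", out, err)
}
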
	I0603 12:06:41.267841   73179 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19008-7755/.minikube CaCertPath:/home/jenkins/minikube-integration/19008-7755/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19008-7755/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19008-7755/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19008-7755/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19008-7755/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19008-7755/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19008-7755/.minikube}
	I0603 12:06:41.267859   73179 buildroot.go:174] setting up certificates
	I0603 12:06:41.267869   73179 provision.go:84] configureAuth start
	I0603 12:06:41.267877   73179 main.go:141] libmachine: (no-preload-602118) Calling .GetMachineName
	I0603 12:06:41.268155   73179 main.go:141] libmachine: (no-preload-602118) Calling .GetIP
	I0603 12:06:41.270862   73179 main.go:141] libmachine: (no-preload-602118) DBG | domain no-preload-602118 has defined MAC address 52:54:00:ac:6c:91 in network mk-no-preload-602118
	I0603 12:06:41.271249   73179 main.go:141] libmachine: (no-preload-602118) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:6c:91", ip: ""} in network mk-no-preload-602118: {Iface:virbr2 ExpiryTime:2024-06-03 13:06:33 +0000 UTC Type:0 Mac:52:54:00:ac:6c:91 Iaid: IPaddr:192.168.50.245 Prefix:24 Hostname:no-preload-602118 Clientid:01:52:54:00:ac:6c:91}
	I0603 12:06:41.271271   73179 main.go:141] libmachine: (no-preload-602118) DBG | domain no-preload-602118 has defined IP address 192.168.50.245 and MAC address 52:54:00:ac:6c:91 in network mk-no-preload-602118
	I0603 12:06:41.271414   73179 main.go:141] libmachine: (no-preload-602118) Calling .GetSSHHostname
	I0603 12:06:41.273376   73179 main.go:141] libmachine: (no-preload-602118) DBG | domain no-preload-602118 has defined MAC address 52:54:00:ac:6c:91 in network mk-no-preload-602118
	I0603 12:06:41.273689   73179 main.go:141] libmachine: (no-preload-602118) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:6c:91", ip: ""} in network mk-no-preload-602118: {Iface:virbr2 ExpiryTime:2024-06-03 13:06:33 +0000 UTC Type:0 Mac:52:54:00:ac:6c:91 Iaid: IPaddr:192.168.50.245 Prefix:24 Hostname:no-preload-602118 Clientid:01:52:54:00:ac:6c:91}
	I0603 12:06:41.273715   73179 main.go:141] libmachine: (no-preload-602118) DBG | domain no-preload-602118 has defined IP address 192.168.50.245 and MAC address 52:54:00:ac:6c:91 in network mk-no-preload-602118
	I0603 12:06:41.273831   73179 provision.go:143] copyHostCerts
	I0603 12:06:41.273907   73179 exec_runner.go:144] found /home/jenkins/minikube-integration/19008-7755/.minikube/ca.pem, removing ...
	I0603 12:06:41.273926   73179 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19008-7755/.minikube/ca.pem
	I0603 12:06:41.274002   73179 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19008-7755/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19008-7755/.minikube/ca.pem (1082 bytes)
	I0603 12:06:41.274128   73179 exec_runner.go:144] found /home/jenkins/minikube-integration/19008-7755/.minikube/cert.pem, removing ...
	I0603 12:06:41.274138   73179 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19008-7755/.minikube/cert.pem
	I0603 12:06:41.274173   73179 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19008-7755/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19008-7755/.minikube/cert.pem (1123 bytes)
	I0603 12:06:41.274248   73179 exec_runner.go:144] found /home/jenkins/minikube-integration/19008-7755/.minikube/key.pem, removing ...
	I0603 12:06:41.274259   73179 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19008-7755/.minikube/key.pem
	I0603 12:06:41.274296   73179 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19008-7755/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19008-7755/.minikube/key.pem (1679 bytes)
	I0603 12:06:41.274369   73179 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19008-7755/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19008-7755/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19008-7755/.minikube/certs/ca-key.pem org=jenkins.no-preload-602118 san=[127.0.0.1 192.168.50.245 localhost minikube no-preload-602118]
	I0603 12:06:41.377976   73179 provision.go:177] copyRemoteCerts
	I0603 12:06:41.378029   73179 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0603 12:06:41.378053   73179 main.go:141] libmachine: (no-preload-602118) Calling .GetSSHHostname
	I0603 12:06:41.380502   73179 main.go:141] libmachine: (no-preload-602118) DBG | domain no-preload-602118 has defined MAC address 52:54:00:ac:6c:91 in network mk-no-preload-602118
	I0603 12:06:41.380818   73179 main.go:141] libmachine: (no-preload-602118) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:6c:91", ip: ""} in network mk-no-preload-602118: {Iface:virbr2 ExpiryTime:2024-06-03 13:06:33 +0000 UTC Type:0 Mac:52:54:00:ac:6c:91 Iaid: IPaddr:192.168.50.245 Prefix:24 Hostname:no-preload-602118 Clientid:01:52:54:00:ac:6c:91}
	I0603 12:06:41.380839   73179 main.go:141] libmachine: (no-preload-602118) DBG | domain no-preload-602118 has defined IP address 192.168.50.245 and MAC address 52:54:00:ac:6c:91 in network mk-no-preload-602118
	I0603 12:06:41.381002   73179 main.go:141] libmachine: (no-preload-602118) Calling .GetSSHPort
	I0603 12:06:41.381171   73179 main.go:141] libmachine: (no-preload-602118) Calling .GetSSHKeyPath
	I0603 12:06:41.381345   73179 main.go:141] libmachine: (no-preload-602118) Calling .GetSSHUsername
	I0603 12:06:41.381462   73179 sshutil.go:53] new ssh client: &{IP:192.168.50.245 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19008-7755/.minikube/machines/no-preload-602118/id_rsa Username:docker}
	I0603 12:06:41.465434   73179 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0603 12:06:41.492636   73179 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0603 12:06:41.516229   73179 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0603 12:06:41.538729   73179 provision.go:87] duration metric: took 270.850705ms to configureAuth
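
configureAuth regenerates a server certificate whose SANs cover 127.0.0.1, the machine IP, localhost, minikube, and the profile name, then copies it onto the guest. A simplified Go sketch of producing a certificate with those SANs (self-signed here for brevity; the real flow signs with the shared minikube CA from .minikube/certs):

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.no-preload-602118"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageKeyEncipherment | x509.KeyUsageDigitalSignature,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SANs mirror the ones listed in the log line above.
		DNSNames:    []string{"localhost", "minikube", "no-preload-602118"},
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.50.245")},
	}
	// Self-signed: template doubles as parent. minikube instead signs with ca.pem/ca-key.pem.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
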
	I0603 12:06:41.538751   73179 buildroot.go:189] setting minikube options for container-runtime
	I0603 12:06:41.538913   73179 config.go:182] Loaded profile config "no-preload-602118": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0603 12:06:41.538998   73179 main.go:141] libmachine: (no-preload-602118) Calling .GetSSHHostname
	I0603 12:06:41.541514   73179 main.go:141] libmachine: (no-preload-602118) DBG | domain no-preload-602118 has defined MAC address 52:54:00:ac:6c:91 in network mk-no-preload-602118
	I0603 12:06:41.541818   73179 main.go:141] libmachine: (no-preload-602118) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:6c:91", ip: ""} in network mk-no-preload-602118: {Iface:virbr2 ExpiryTime:2024-06-03 13:06:33 +0000 UTC Type:0 Mac:52:54:00:ac:6c:91 Iaid: IPaddr:192.168.50.245 Prefix:24 Hostname:no-preload-602118 Clientid:01:52:54:00:ac:6c:91}
	I0603 12:06:41.541843   73179 main.go:141] libmachine: (no-preload-602118) DBG | domain no-preload-602118 has defined IP address 192.168.50.245 and MAC address 52:54:00:ac:6c:91 in network mk-no-preload-602118
	I0603 12:06:41.541966   73179 main.go:141] libmachine: (no-preload-602118) Calling .GetSSHPort
	I0603 12:06:41.542166   73179 main.go:141] libmachine: (no-preload-602118) Calling .GetSSHKeyPath
	I0603 12:06:41.542350   73179 main.go:141] libmachine: (no-preload-602118) Calling .GetSSHKeyPath
	I0603 12:06:41.542483   73179 main.go:141] libmachine: (no-preload-602118) Calling .GetSSHUsername
	I0603 12:06:41.542666   73179 main.go:141] libmachine: Using SSH client type: native
	I0603 12:06:41.542809   73179 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.50.245 22 <nil> <nil>}
	I0603 12:06:41.542823   73179 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0603 12:06:41.837735   73179 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0603 12:06:41.837766   73179 machine.go:97] duration metric: took 934.421104ms to provisionDockerMachine
	I0603 12:06:41.837780   73179 start.go:293] postStartSetup for "no-preload-602118" (driver="kvm2")
	I0603 12:06:41.837791   73179 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0603 12:06:41.837808   73179 main.go:141] libmachine: (no-preload-602118) Calling .DriverName
	I0603 12:06:41.838173   73179 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0603 12:06:41.838200   73179 main.go:141] libmachine: (no-preload-602118) Calling .GetSSHHostname
	I0603 12:06:41.840498   73179 main.go:141] libmachine: (no-preload-602118) DBG | domain no-preload-602118 has defined MAC address 52:54:00:ac:6c:91 in network mk-no-preload-602118
	I0603 12:06:41.840832   73179 main.go:141] libmachine: (no-preload-602118) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:6c:91", ip: ""} in network mk-no-preload-602118: {Iface:virbr2 ExpiryTime:2024-06-03 13:06:33 +0000 UTC Type:0 Mac:52:54:00:ac:6c:91 Iaid: IPaddr:192.168.50.245 Prefix:24 Hostname:no-preload-602118 Clientid:01:52:54:00:ac:6c:91}
	I0603 12:06:41.840861   73179 main.go:141] libmachine: (no-preload-602118) DBG | domain no-preload-602118 has defined IP address 192.168.50.245 and MAC address 52:54:00:ac:6c:91 in network mk-no-preload-602118
	I0603 12:06:41.840990   73179 main.go:141] libmachine: (no-preload-602118) Calling .GetSSHPort
	I0603 12:06:41.841179   73179 main.go:141] libmachine: (no-preload-602118) Calling .GetSSHKeyPath
	I0603 12:06:41.841351   73179 main.go:141] libmachine: (no-preload-602118) Calling .GetSSHUsername
	I0603 12:06:41.841473   73179 sshutil.go:53] new ssh client: &{IP:192.168.50.245 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19008-7755/.minikube/machines/no-preload-602118/id_rsa Username:docker}
	I0603 12:06:41.926168   73179 ssh_runner.go:195] Run: cat /etc/os-release
	I0603 12:06:41.930420   73179 info.go:137] Remote host: Buildroot 2023.02.9
	I0603 12:06:41.930450   73179 filesync.go:126] Scanning /home/jenkins/minikube-integration/19008-7755/.minikube/addons for local assets ...
	I0603 12:06:41.930509   73179 filesync.go:126] Scanning /home/jenkins/minikube-integration/19008-7755/.minikube/files for local assets ...
	I0603 12:06:41.930583   73179 filesync.go:149] local asset: /home/jenkins/minikube-integration/19008-7755/.minikube/files/etc/ssl/certs/150282.pem -> 150282.pem in /etc/ssl/certs
	I0603 12:06:41.930661   73179 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0603 12:06:41.940412   73179 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/files/etc/ssl/certs/150282.pem --> /etc/ssl/certs/150282.pem (1708 bytes)
	I0603 12:06:41.963912   73179 start.go:296] duration metric: took 126.115944ms for postStartSetup
	I0603 12:06:41.963949   73179 fix.go:56] duration metric: took 19.520525784s for fixHost
	I0603 12:06:41.963991   73179 main.go:141] libmachine: (no-preload-602118) Calling .GetSSHHostname
	I0603 12:06:41.966591   73179 main.go:141] libmachine: (no-preload-602118) DBG | domain no-preload-602118 has defined MAC address 52:54:00:ac:6c:91 in network mk-no-preload-602118
	I0603 12:06:41.966928   73179 main.go:141] libmachine: (no-preload-602118) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:6c:91", ip: ""} in network mk-no-preload-602118: {Iface:virbr2 ExpiryTime:2024-06-03 13:06:33 +0000 UTC Type:0 Mac:52:54:00:ac:6c:91 Iaid: IPaddr:192.168.50.245 Prefix:24 Hostname:no-preload-602118 Clientid:01:52:54:00:ac:6c:91}
	I0603 12:06:41.966946   73179 main.go:141] libmachine: (no-preload-602118) DBG | domain no-preload-602118 has defined IP address 192.168.50.245 and MAC address 52:54:00:ac:6c:91 in network mk-no-preload-602118
	I0603 12:06:41.967081   73179 main.go:141] libmachine: (no-preload-602118) Calling .GetSSHPort
	I0603 12:06:41.967272   73179 main.go:141] libmachine: (no-preload-602118) Calling .GetSSHKeyPath
	I0603 12:06:41.967423   73179 main.go:141] libmachine: (no-preload-602118) Calling .GetSSHKeyPath
	I0603 12:06:41.967608   73179 main.go:141] libmachine: (no-preload-602118) Calling .GetSSHUsername
	I0603 12:06:41.967762   73179 main.go:141] libmachine: Using SSH client type: native
	I0603 12:06:41.967918   73179 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.50.245 22 <nil> <nil>}
	I0603 12:06:41.967927   73179 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0603 12:06:42.079982   73179 main.go:141] libmachine: SSH cmd err, output: <nil>: 1717416402.057236225
	
	I0603 12:06:42.080009   73179 fix.go:216] guest clock: 1717416402.057236225
	I0603 12:06:42.080015   73179 fix.go:229] Guest: 2024-06-03 12:06:42.057236225 +0000 UTC Remote: 2024-06-03 12:06:41.963952729 +0000 UTC m=+276.629989589 (delta=93.283496ms)
	I0603 12:06:42.080041   73179 fix.go:200] guest clock delta is within tolerance: 93.283496ms
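
The guest-clock lines compare the VM's wall clock (read over SSH) against the host's and only resync when the difference exceeds a tolerance; here the 93 ms delta passes. A rough sketch of that comparison (the 2 s threshold is an assumption for illustration, not minikube's constant):

package main

import (
	"fmt"
	"time"
)

func main() {
	// Guest time would normally come from running `date` over SSH;
	// hard-coded here from the log for illustration.
	guest := time.Unix(1717416402, 57236225)
	remote := time.Date(2024, 6, 3, 12, 6, 41, 963952729, time.UTC)

	delta := guest.Sub(remote)
	if delta < 0 {
		delta = -delta
	}
	const tolerance = 2 * time.Second // assumed threshold
	if delta <= tolerance {
		fmt.Printf("guest clock delta is within tolerance: %v\n", delta)
	} else {
		fmt.Printf("guest clock skewed by %v, would resync\n", delta)
	}
}
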
	I0603 12:06:42.080045   73179 start.go:83] releasing machines lock for "no-preload-602118", held for 19.636648914s
	I0603 12:06:42.080070   73179 main.go:141] libmachine: (no-preload-602118) Calling .DriverName
	I0603 12:06:42.080311   73179 main.go:141] libmachine: (no-preload-602118) Calling .GetIP
	I0603 12:06:42.083162   73179 main.go:141] libmachine: (no-preload-602118) DBG | domain no-preload-602118 has defined MAC address 52:54:00:ac:6c:91 in network mk-no-preload-602118
	I0603 12:06:42.083519   73179 main.go:141] libmachine: (no-preload-602118) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:6c:91", ip: ""} in network mk-no-preload-602118: {Iface:virbr2 ExpiryTime:2024-06-03 13:06:33 +0000 UTC Type:0 Mac:52:54:00:ac:6c:91 Iaid: IPaddr:192.168.50.245 Prefix:24 Hostname:no-preload-602118 Clientid:01:52:54:00:ac:6c:91}
	I0603 12:06:42.083544   73179 main.go:141] libmachine: (no-preload-602118) DBG | domain no-preload-602118 has defined IP address 192.168.50.245 and MAC address 52:54:00:ac:6c:91 in network mk-no-preload-602118
	I0603 12:06:42.083733   73179 main.go:141] libmachine: (no-preload-602118) Calling .DriverName
	I0603 12:06:42.084238   73179 main.go:141] libmachine: (no-preload-602118) Calling .DriverName
	I0603 12:06:42.084405   73179 main.go:141] libmachine: (no-preload-602118) Calling .DriverName
	I0603 12:06:42.084458   73179 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0603 12:06:42.084528   73179 main.go:141] libmachine: (no-preload-602118) Calling .GetSSHHostname
	I0603 12:06:42.084607   73179 ssh_runner.go:195] Run: cat /version.json
	I0603 12:06:42.084632   73179 main.go:141] libmachine: (no-preload-602118) Calling .GetSSHHostname
	I0603 12:06:42.087630   73179 main.go:141] libmachine: (no-preload-602118) DBG | domain no-preload-602118 has defined MAC address 52:54:00:ac:6c:91 in network mk-no-preload-602118
	I0603 12:06:42.087927   73179 main.go:141] libmachine: (no-preload-602118) DBG | domain no-preload-602118 has defined MAC address 52:54:00:ac:6c:91 in network mk-no-preload-602118
	I0603 12:06:42.087958   73179 main.go:141] libmachine: (no-preload-602118) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:6c:91", ip: ""} in network mk-no-preload-602118: {Iface:virbr2 ExpiryTime:2024-06-03 13:06:33 +0000 UTC Type:0 Mac:52:54:00:ac:6c:91 Iaid: IPaddr:192.168.50.245 Prefix:24 Hostname:no-preload-602118 Clientid:01:52:54:00:ac:6c:91}
	I0603 12:06:42.087981   73179 main.go:141] libmachine: (no-preload-602118) DBG | domain no-preload-602118 has defined IP address 192.168.50.245 and MAC address 52:54:00:ac:6c:91 in network mk-no-preload-602118
	I0603 12:06:42.088083   73179 main.go:141] libmachine: (no-preload-602118) Calling .GetSSHPort
	I0603 12:06:42.088261   73179 main.go:141] libmachine: (no-preload-602118) Calling .GetSSHKeyPath
	I0603 12:06:42.088441   73179 main.go:141] libmachine: (no-preload-602118) Calling .GetSSHUsername
	I0603 12:06:42.088463   73179 main.go:141] libmachine: (no-preload-602118) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:6c:91", ip: ""} in network mk-no-preload-602118: {Iface:virbr2 ExpiryTime:2024-06-03 13:06:33 +0000 UTC Type:0 Mac:52:54:00:ac:6c:91 Iaid: IPaddr:192.168.50.245 Prefix:24 Hostname:no-preload-602118 Clientid:01:52:54:00:ac:6c:91}
	I0603 12:06:42.088507   73179 main.go:141] libmachine: (no-preload-602118) DBG | domain no-preload-602118 has defined IP address 192.168.50.245 and MAC address 52:54:00:ac:6c:91 in network mk-no-preload-602118
	I0603 12:06:42.088592   73179 sshutil.go:53] new ssh client: &{IP:192.168.50.245 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19008-7755/.minikube/machines/no-preload-602118/id_rsa Username:docker}
	I0603 12:06:42.088666   73179 main.go:141] libmachine: (no-preload-602118) Calling .GetSSHPort
	I0603 12:06:42.088800   73179 main.go:141] libmachine: (no-preload-602118) Calling .GetSSHKeyPath
	I0603 12:06:42.088961   73179 main.go:141] libmachine: (no-preload-602118) Calling .GetSSHUsername
	I0603 12:06:42.089101   73179 sshutil.go:53] new ssh client: &{IP:192.168.50.245 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19008-7755/.minikube/machines/no-preload-602118/id_rsa Username:docker}
	I0603 12:06:42.192400   73179 ssh_runner.go:195] Run: systemctl --version
	I0603 12:06:42.198773   73179 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0603 12:06:42.345931   73179 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0603 12:06:42.351818   73179 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0603 12:06:42.351877   73179 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0603 12:06:42.368582   73179 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
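
For anyone reproducing this CNI cleanup by hand, the find/mv step above amounts to the following shell sketch (directory, name patterns, and the .mk_disabled suffix are taken from the log; the -exec form is rewritten for clarity and is not minikube's exact invocation):

    # Rename any bridge/podman CNI configs so they stop taking effect,
    # keeping them recoverable under a .mk_disabled suffix.
    sudo find /etc/cni/net.d -maxdepth 1 -type f \
      \( \( -name '*bridge*' -or -name '*podman*' \) -and -not -name '*.mk_disabled' \) \
      -printf "%p, " -exec sh -c 'mv "$1" "$1.mk_disabled"' _ {} \;
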
	I0603 12:06:42.368609   73179 start.go:494] detecting cgroup driver to use...
	I0603 12:06:42.368680   73179 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0603 12:06:42.384411   73179 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0603 12:06:42.398006   73179 docker.go:217] disabling cri-docker service (if available) ...
	I0603 12:06:42.398052   73179 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0603 12:06:42.412680   73179 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0603 12:06:42.427157   73179 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0603 12:06:42.537162   73179 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0603 12:06:42.683438   73179 docker.go:233] disabling docker service ...
	I0603 12:06:42.683505   73179 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0603 12:06:42.697969   73179 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0603 12:06:42.711164   73179 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0603 12:06:42.835194   73179 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0603 12:06:42.947116   73179 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0603 12:06:42.961398   73179 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0603 12:06:42.980179   73179 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0603 12:06:42.980227   73179 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 12:06:42.990583   73179 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0603 12:06:42.990642   73179 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 12:06:43.001031   73179 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 12:06:43.012124   73179 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 12:06:43.023143   73179 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0603 12:06:43.034535   73179 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 12:06:43.045854   73179 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 12:06:43.063071   73179 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 12:06:43.074257   73179 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0603 12:06:43.083914   73179 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0603 12:06:43.083965   73179 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0603 12:06:43.098285   73179 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0603 12:06:43.108034   73179 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0603 12:06:43.219068   73179 ssh_runner.go:195] Run: sudo systemctl restart crio
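
Condensed, the CRI-O preparation logged above comes down to the following shell steps (a recap of the commands already shown, not an excerpt from minikube's source; config path and image tag as logged):

    # Point CRI-O at the expected pause image and the cgroupfs driver.
    sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf
    sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf
    sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf
    sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf

    # Kernel prerequisites: bridge traffic through iptables, IPv4 forwarding.
    sudo modprobe br_netfilter
    sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"

    # Apply the changes.
    sudo systemctl daemon-reload
    sudo systemctl restart crio
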
	I0603 12:06:43.376591   73179 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0603 12:06:43.376655   73179 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0603 12:06:43.381868   73179 start.go:562] Will wait 60s for crictl version
	I0603 12:06:43.381939   73179 ssh_runner.go:195] Run: which crictl
	I0603 12:06:43.385730   73179 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0603 12:06:43.423331   73179 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0603 12:06:43.423428   73179 ssh_runner.go:195] Run: crio --version
	I0603 12:06:43.450760   73179 ssh_runner.go:195] Run: crio --version
	I0603 12:06:43.479690   73179 out.go:177] * Preparing Kubernetes v1.30.1 on CRI-O 1.29.1 ...
	I0603 12:06:42.103653   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .Start
	I0603 12:06:42.103818   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Ensuring networks are active...
	I0603 12:06:42.104660   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Ensuring network default is active
	I0603 12:06:42.104985   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Ensuring network mk-default-k8s-diff-port-196710 is active
	I0603 12:06:42.105332   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Getting domain xml...
	I0603 12:06:42.106264   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Creating domain...
	I0603 12:06:43.347118   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Waiting to get IP...
	I0603 12:06:43.347855   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | domain default-k8s-diff-port-196710 has defined MAC address 52:54:00:9c:61:49 in network mk-default-k8s-diff-port-196710
	I0603 12:06:43.348279   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | unable to find current IP address of domain default-k8s-diff-port-196710 in network mk-default-k8s-diff-port-196710
	I0603 12:06:43.348337   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | I0603 12:06:43.348249   74483 retry.go:31] will retry after 307.61274ms: waiting for machine to come up
	I0603 12:06:43.657720   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | domain default-k8s-diff-port-196710 has defined MAC address 52:54:00:9c:61:49 in network mk-default-k8s-diff-port-196710
	I0603 12:06:43.658162   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | unable to find current IP address of domain default-k8s-diff-port-196710 in network mk-default-k8s-diff-port-196710
	I0603 12:06:43.658188   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | I0603 12:06:43.658129   74483 retry.go:31] will retry after 387.079794ms: waiting for machine to come up
	I0603 12:06:44.046798   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | domain default-k8s-diff-port-196710 has defined MAC address 52:54:00:9c:61:49 in network mk-default-k8s-diff-port-196710
	I0603 12:06:44.047345   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | unable to find current IP address of domain default-k8s-diff-port-196710 in network mk-default-k8s-diff-port-196710
	I0603 12:06:44.047376   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | I0603 12:06:44.047279   74483 retry.go:31] will retry after 482.224139ms: waiting for machine to come up
	I0603 12:06:44.531107   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | domain default-k8s-diff-port-196710 has defined MAC address 52:54:00:9c:61:49 in network mk-default-k8s-diff-port-196710
	I0603 12:06:44.531588   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | unable to find current IP address of domain default-k8s-diff-port-196710 in network mk-default-k8s-diff-port-196710
	I0603 12:06:44.531615   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | I0603 12:06:44.531542   74483 retry.go:31] will retry after 438.288195ms: waiting for machine to come up
	I0603 12:06:43.481020   73179 main.go:141] libmachine: (no-preload-602118) Calling .GetIP
	I0603 12:06:43.483887   73179 main.go:141] libmachine: (no-preload-602118) DBG | domain no-preload-602118 has defined MAC address 52:54:00:ac:6c:91 in network mk-no-preload-602118
	I0603 12:06:43.484297   73179 main.go:141] libmachine: (no-preload-602118) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:6c:91", ip: ""} in network mk-no-preload-602118: {Iface:virbr2 ExpiryTime:2024-06-03 13:06:33 +0000 UTC Type:0 Mac:52:54:00:ac:6c:91 Iaid: IPaddr:192.168.50.245 Prefix:24 Hostname:no-preload-602118 Clientid:01:52:54:00:ac:6c:91}
	I0603 12:06:43.484324   73179 main.go:141] libmachine: (no-preload-602118) DBG | domain no-preload-602118 has defined IP address 192.168.50.245 and MAC address 52:54:00:ac:6c:91 in network mk-no-preload-602118
	I0603 12:06:43.484533   73179 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0603 12:06:43.488769   73179 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
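
The /etc/hosts edit above is the usual idempotent pattern: drop any existing host.minikube.internal line, append the current mapping, and install the rebuilt file with sudo. The same one-liner, unpacked (IP and hostname as logged):

    # Rebuild /etc/hosts without a stale host.minikube.internal entry,
    # append the current mapping, then install the result with sudo.
    { grep -v $'\thost.minikube.internal$' /etc/hosts
      printf '192.168.50.1\thost.minikube.internal\n'
    } > /tmp/h.$$
    sudo cp /tmp/h.$$ /etc/hosts
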
	I0603 12:06:43.501433   73179 kubeadm.go:877] updating cluster {Name:no-preload-602118 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:no-preload-602118 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.245 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0603 12:06:43.501583   73179 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime crio
	I0603 12:06:43.501644   73179 ssh_runner.go:195] Run: sudo crictl images --output json
	I0603 12:06:43.537382   73179 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.1". assuming images are not preloaded.
	I0603 12:06:43.537407   73179 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.30.1 registry.k8s.io/kube-controller-manager:v1.30.1 registry.k8s.io/kube-scheduler:v1.30.1 registry.k8s.io/kube-proxy:v1.30.1 registry.k8s.io/pause:3.9 registry.k8s.io/etcd:3.5.12-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0603 12:06:43.537504   73179 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.30.1
	I0603 12:06:43.537483   73179 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.30.1
	I0603 12:06:43.537484   73179 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.12-0
	I0603 12:06:43.537597   73179 image.go:134] retrieving image: registry.k8s.io/pause:3.9
	I0603 12:06:43.537483   73179 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0603 12:06:43.537618   73179 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.30.1
	I0603 12:06:43.537612   73179 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.30.1
	I0603 12:06:43.537771   73179 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0603 12:06:43.539200   73179 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.30.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.30.1
	I0603 12:06:43.539472   73179 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.30.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.30.1
	I0603 12:06:43.539491   73179 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.30.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.30.1
	I0603 12:06:43.539504   73179 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.30.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.30.1
	I0603 12:06:43.539530   73179 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.12-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.12-0
	I0603 12:06:43.539565   73179 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0603 12:06:43.539472   73179 image.go:177] daemon lookup for registry.k8s.io/pause:3.9: Error response from daemon: No such image: registry.k8s.io/pause:3.9
	I0603 12:06:43.539934   73179 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0603 12:06:43.694144   73179 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.12-0
	I0603 12:06:43.714990   73179 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.30.1
	I0603 12:06:43.720270   73179 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.30.1
	I0603 12:06:43.734481   73179 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.30.1
	I0603 12:06:43.751928   73179 cache_images.go:116] "registry.k8s.io/etcd:3.5.12-0" needs transfer: "registry.k8s.io/etcd:3.5.12-0" does not exist at hash "3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899" in container runtime
	I0603 12:06:43.751970   73179 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.12-0
	I0603 12:06:43.752018   73179 ssh_runner.go:195] Run: which crictl
	I0603 12:06:43.780362   73179 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.30.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.30.1" does not exist at hash "91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a" in container runtime
	I0603 12:06:43.780408   73179 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.30.1
	I0603 12:06:43.780455   73179 ssh_runner.go:195] Run: which crictl
	I0603 12:06:43.798376   73179 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.30.1" needs transfer: "registry.k8s.io/kube-proxy:v1.30.1" does not exist at hash "747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd" in container runtime
	I0603 12:06:43.798415   73179 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.30.1
	I0603 12:06:43.798465   73179 ssh_runner.go:195] Run: which crictl
	I0603 12:06:43.801422   73179 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.9
	I0603 12:06:43.811338   73179 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0603 12:06:43.823969   73179 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.30.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.30.1" does not exist at hash "25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c" in container runtime
	I0603 12:06:43.824052   73179 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.30.1
	I0603 12:06:43.823979   73179 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.12-0
	I0603 12:06:43.824096   73179 ssh_runner.go:195] Run: which crictl
	I0603 12:06:43.824106   73179 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.30.1
	I0603 12:06:43.824088   73179 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.30.1
	I0603 12:06:43.861957   73179 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.30.1
	I0603 12:06:44.001291   73179 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0603 12:06:44.001312   73179 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/19008-7755/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.30.1
	I0603 12:06:44.001344   73179 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0603 12:06:44.001390   73179 ssh_runner.go:195] Run: which crictl
	I0603 12:06:44.001454   73179 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.30.1
	I0603 12:06:44.001472   73179 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/19008-7755/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.12-0
	I0603 12:06:44.001405   73179 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.30.1
	I0603 12:06:44.001544   73179 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.12-0
	I0603 12:06:44.001405   73179 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/19008-7755/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.30.1
	I0603 12:06:44.001520   73179 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.30.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.30.1" does not exist at hash "a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035" in container runtime
	I0603 12:06:44.001622   73179 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.30.1
	I0603 12:06:44.001627   73179 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.30.1
	I0603 12:06:44.001685   73179 ssh_runner.go:195] Run: which crictl
	I0603 12:06:44.014801   73179 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.30.1 (exists)
	I0603 12:06:44.014820   73179 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.30.1
	I0603 12:06:44.014858   73179 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.30.1
	I0603 12:06:44.049018   73179 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.12-0 (exists)
	I0603 12:06:44.049106   73179 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/19008-7755/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.30.1
	I0603 12:06:44.049138   73179 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0603 12:06:44.049149   73179 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.30.1
	I0603 12:06:44.049193   73179 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.30.1
	I0603 12:06:44.049202   73179 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.30.1 (exists)
	I0603 12:06:44.414960   73179 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0603 12:06:44.971603   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | domain default-k8s-diff-port-196710 has defined MAC address 52:54:00:9c:61:49 in network mk-default-k8s-diff-port-196710
	I0603 12:06:44.971986   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | unable to find current IP address of domain default-k8s-diff-port-196710 in network mk-default-k8s-diff-port-196710
	I0603 12:06:44.972027   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | I0603 12:06:44.971941   74483 retry.go:31] will retry after 696.415219ms: waiting for machine to come up
	I0603 12:06:45.669711   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | domain default-k8s-diff-port-196710 has defined MAC address 52:54:00:9c:61:49 in network mk-default-k8s-diff-port-196710
	I0603 12:06:45.670032   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | unable to find current IP address of domain default-k8s-diff-port-196710 in network mk-default-k8s-diff-port-196710
	I0603 12:06:45.670064   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | I0603 12:06:45.670011   74483 retry.go:31] will retry after 706.751938ms: waiting for machine to come up
	I0603 12:06:46.378097   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | domain default-k8s-diff-port-196710 has defined MAC address 52:54:00:9c:61:49 in network mk-default-k8s-diff-port-196710
	I0603 12:06:46.378510   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | unable to find current IP address of domain default-k8s-diff-port-196710 in network mk-default-k8s-diff-port-196710
	I0603 12:06:46.378552   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | I0603 12:06:46.378484   74483 retry.go:31] will retry after 1.039219665s: waiting for machine to come up
	I0603 12:06:47.419138   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | domain default-k8s-diff-port-196710 has defined MAC address 52:54:00:9c:61:49 in network mk-default-k8s-diff-port-196710
	I0603 12:06:47.419573   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | unable to find current IP address of domain default-k8s-diff-port-196710 in network mk-default-k8s-diff-port-196710
	I0603 12:06:47.419601   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | I0603 12:06:47.419520   74483 retry.go:31] will retry after 1.138110516s: waiting for machine to come up
	I0603 12:06:48.559728   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | domain default-k8s-diff-port-196710 has defined MAC address 52:54:00:9c:61:49 in network mk-default-k8s-diff-port-196710
	I0603 12:06:48.560297   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | unable to find current IP address of domain default-k8s-diff-port-196710 in network mk-default-k8s-diff-port-196710
	I0603 12:06:48.560320   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | I0603 12:06:48.560259   74483 retry.go:31] will retry after 1.175521014s: waiting for machine to come up
	I0603 12:06:46.011238   73179 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.30.1: (1.996357708s)
	I0603 12:06:46.011274   73179 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/19008-7755/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.30.1 from cache
	I0603 12:06:46.011313   73179 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.12-0
	I0603 12:06:46.011322   73179 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.30.1: (1.96210268s)
	I0603 12:06:46.011332   73179 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.30.1: (1.962169544s)
	I0603 12:06:46.011353   73179 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.30.1 (exists)
	I0603 12:06:46.011367   73179 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/19008-7755/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.30.1
	I0603 12:06:46.011386   73179 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.12-0
	I0603 12:06:46.011397   73179 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1: (1.962226902s)
	I0603 12:06:46.011424   73179 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/19008-7755/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0603 12:06:46.011426   73179 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (1.596439345s)
	I0603 12:06:46.011451   73179 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.30.1
	I0603 12:06:46.011474   73179 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0603 12:06:46.011483   73179 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.1
	I0603 12:06:46.011508   73179 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0603 12:06:46.011545   73179 ssh_runner.go:195] Run: which crictl
	I0603 12:06:46.020596   73179 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.30.1 (exists)
	I0603 12:06:46.020599   73179 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0603 12:06:46.020749   73179 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I0603 12:06:49.747952   73179 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (3.727320079s)
	I0603 12:06:49.748008   73179 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/19008-7755/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0603 12:06:49.748024   73179 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.12-0: (3.736616522s)
	I0603 12:06:49.748048   73179 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/19008-7755/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.12-0 from cache
	I0603 12:06:49.748074   73179 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.30.1
	I0603 12:06:49.748108   73179 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0603 12:06:49.748120   73179 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.30.1
	I0603 12:06:49.753125   73179 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0603 12:06:49.737515   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | domain default-k8s-diff-port-196710 has defined MAC address 52:54:00:9c:61:49 in network mk-default-k8s-diff-port-196710
	I0603 12:06:49.738009   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | unable to find current IP address of domain default-k8s-diff-port-196710 in network mk-default-k8s-diff-port-196710
	I0603 12:06:49.738036   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | I0603 12:06:49.737954   74483 retry.go:31] will retry after 2.132134762s: waiting for machine to come up
	I0603 12:06:51.872423   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | domain default-k8s-diff-port-196710 has defined MAC address 52:54:00:9c:61:49 in network mk-default-k8s-diff-port-196710
	I0603 12:06:51.872917   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | unable to find current IP address of domain default-k8s-diff-port-196710 in network mk-default-k8s-diff-port-196710
	I0603 12:06:51.872945   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | I0603 12:06:51.872857   74483 retry.go:31] will retry after 2.778528878s: waiting for machine to come up
	I0603 12:06:52.416845   73179 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.30.1: (2.668695263s)
	I0603 12:06:52.416881   73179 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/19008-7755/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.30.1 from cache
	I0603 12:06:52.416909   73179 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.30.1
	I0603 12:06:52.417012   73179 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.30.1
	I0603 12:06:54.588430   73179 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.30.1: (2.171386022s)
	I0603 12:06:54.588455   73179 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/19008-7755/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.30.1 from cache
	I0603 12:06:54.588480   73179 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.30.1
	I0603 12:06:54.588528   73179 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.30.1
	I0603 12:06:54.653098   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | domain default-k8s-diff-port-196710 has defined MAC address 52:54:00:9c:61:49 in network mk-default-k8s-diff-port-196710
	I0603 12:06:54.653566   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | unable to find current IP address of domain default-k8s-diff-port-196710 in network mk-default-k8s-diff-port-196710
	I0603 12:06:54.653596   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | I0603 12:06:54.653504   74483 retry.go:31] will retry after 2.88020763s: waiting for machine to come up
	I0603 12:06:57.535688   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | domain default-k8s-diff-port-196710 has defined MAC address 52:54:00:9c:61:49 in network mk-default-k8s-diff-port-196710
	I0603 12:06:57.536303   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | unable to find current IP address of domain default-k8s-diff-port-196710 in network mk-default-k8s-diff-port-196710
	I0603 12:06:57.536331   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | I0603 12:06:57.536246   74483 retry.go:31] will retry after 4.007108619s: waiting for machine to come up
	I0603 12:06:55.946565   73179 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.30.1: (1.358013442s)
	I0603 12:06:55.946595   73179 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/19008-7755/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.30.1 from cache
	I0603 12:06:55.946618   73179 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0603 12:06:55.946654   73179 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I0603 12:06:57.739662   73179 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (1.792982594s)
	I0603 12:06:57.739693   73179 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/19008-7755/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0603 12:06:57.739720   73179 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0603 12:06:57.739766   73179 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0603 12:06:58.592007   73179 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/19008-7755/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0603 12:06:58.592049   73179 cache_images.go:123] Successfully loaded all cached images
	I0603 12:06:58.592075   73179 cache_images.go:92] duration metric: took 15.054652125s to LoadCachedImages
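
Because no preload tarball matches this Kubernetes version and runtime, every required image is dropped from the runtime and re-loaded from minikube's on-disk cache. Reduced to one image, the pattern looks like this (paths and names as they appear in the log; a sketch, not minikube code):

    # Remove any stale copy of the image, then load the cached tarball via podman
    # so it lands in the image store CRI-O reads from.
    sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.12-0
    sudo podman load -i /var/lib/minikube/images/etcd_3.5.12-0

    # Confirm the runtime now reports the image.
    sudo crictl images --output json
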
	I0603 12:06:58.592096   73179 kubeadm.go:928] updating node { 192.168.50.245 8443 v1.30.1 crio true true} ...
	I0603 12:06:58.592210   73179 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-602118 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.245
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.1 ClusterName:no-preload-602118 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0603 12:06:58.592278   73179 ssh_runner.go:195] Run: crio config
	I0603 12:06:58.637533   73179 cni.go:84] Creating CNI manager for ""
	I0603 12:06:58.637561   73179 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0603 12:06:58.637583   73179 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0603 12:06:58.637620   73179 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.245 APIServerPort:8443 KubernetesVersion:v1.30.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-602118 NodeName:no-preload-602118 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.245"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.245 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0603 12:06:58.637822   73179 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.245
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-602118"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.245
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.245"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0603 12:06:58.637918   73179 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.1
	I0603 12:06:58.649096   73179 binaries.go:44] Found k8s binaries, skipping transfer
	I0603 12:06:58.649150   73179 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0603 12:06:58.658815   73179 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0603 12:06:58.675538   73179 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0603 12:06:58.692443   73179 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2161 bytes)
	I0603 12:06:58.709416   73179 ssh_runner.go:195] Run: grep 192.168.50.245	control-plane.minikube.internal$ /etc/hosts
	I0603 12:06:58.713241   73179 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.245	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0603 12:06:58.725522   73179 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0603 12:06:58.846624   73179 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0603 12:06:58.864101   73179 certs.go:68] Setting up /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/no-preload-602118 for IP: 192.168.50.245
	I0603 12:06:58.864129   73179 certs.go:194] generating shared ca certs ...
	I0603 12:06:58.864149   73179 certs.go:226] acquiring lock for ca certs: {Name:mk50092df3bdecbf959926adc377ee06b75aacd9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 12:06:58.864311   73179 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19008-7755/.minikube/ca.key
	I0603 12:06:58.864362   73179 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19008-7755/.minikube/proxy-client-ca.key
	I0603 12:06:58.864376   73179 certs.go:256] generating profile certs ...
	I0603 12:06:58.864473   73179 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/no-preload-602118/client.key
	I0603 12:06:58.864551   73179 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/no-preload-602118/apiserver.key.eef28f92
	I0603 12:06:58.864602   73179 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/no-preload-602118/proxy-client.key
	I0603 12:06:58.864744   73179 certs.go:484] found cert: /home/jenkins/minikube-integration/19008-7755/.minikube/certs/15028.pem (1338 bytes)
	W0603 12:06:58.864786   73179 certs.go:480] ignoring /home/jenkins/minikube-integration/19008-7755/.minikube/certs/15028_empty.pem, impossibly tiny 0 bytes
	I0603 12:06:58.864800   73179 certs.go:484] found cert: /home/jenkins/minikube-integration/19008-7755/.minikube/certs/ca-key.pem (1679 bytes)
	I0603 12:06:58.864836   73179 certs.go:484] found cert: /home/jenkins/minikube-integration/19008-7755/.minikube/certs/ca.pem (1082 bytes)
	I0603 12:06:58.864869   73179 certs.go:484] found cert: /home/jenkins/minikube-integration/19008-7755/.minikube/certs/cert.pem (1123 bytes)
	I0603 12:06:58.864900   73179 certs.go:484] found cert: /home/jenkins/minikube-integration/19008-7755/.minikube/certs/key.pem (1679 bytes)
	I0603 12:06:58.865039   73179 certs.go:484] found cert: /home/jenkins/minikube-integration/19008-7755/.minikube/files/etc/ssl/certs/150282.pem (1708 bytes)
	I0603 12:06:58.865705   73179 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0603 12:06:58.898291   73179 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0603 12:06:58.923481   73179 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0603 12:06:58.955249   73179 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0603 12:06:58.986524   73179 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/no-preload-602118/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0603 12:06:59.037456   73179 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/no-preload-602118/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0603 12:06:59.061989   73179 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/no-preload-602118/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0603 12:06:59.085738   73179 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/no-preload-602118/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0603 12:06:59.109202   73179 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0603 12:06:59.132149   73179 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/certs/15028.pem --> /usr/share/ca-certificates/15028.pem (1338 bytes)
	I0603 12:06:59.154957   73179 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/files/etc/ssl/certs/150282.pem --> /usr/share/ca-certificates/150282.pem (1708 bytes)
	I0603 12:06:59.177797   73179 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0603 12:06:59.194816   73179 ssh_runner.go:195] Run: openssl version
	I0603 12:06:59.200714   73179 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0603 12:06:59.211392   73179 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0603 12:06:59.215900   73179 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jun  3 10:39 /usr/share/ca-certificates/minikubeCA.pem
	I0603 12:06:59.215950   73179 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0603 12:06:59.221796   73179 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0603 12:06:59.232655   73179 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15028.pem && ln -fs /usr/share/ca-certificates/15028.pem /etc/ssl/certs/15028.pem"
	I0603 12:06:59.243679   73179 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15028.pem
	I0603 12:06:59.248120   73179 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jun  3 10:51 /usr/share/ca-certificates/15028.pem
	I0603 12:06:59.248168   73179 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15028.pem
	I0603 12:06:59.253816   73179 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/15028.pem /etc/ssl/certs/51391683.0"
	I0603 12:06:59.264416   73179 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/150282.pem && ln -fs /usr/share/ca-certificates/150282.pem /etc/ssl/certs/150282.pem"
	I0603 12:06:59.275143   73179 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/150282.pem
	I0603 12:06:59.279393   73179 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jun  3 10:51 /usr/share/ca-certificates/150282.pem
	I0603 12:06:59.279431   73179 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/150282.pem
	I0603 12:06:59.285269   73179 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/150282.pem /etc/ssl/certs/3ec20f2e.0"
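
The three blocks above install minikube's CA certificates into the guest's trust store using OpenSSL's subject-hash naming: each PEM is linked into /etc/ssl/certs both under its own name and under <hash>.0 (the b5213941.0, 51391683.0 and 3ec20f2e.0 names in the log are exactly such hashes). A generic sketch of that idiom:

    # Link a CA into the trust store under its OpenSSL subject hash so that
    # certificate verification can locate it.
    pem=/usr/share/ca-certificates/minikubeCA.pem
    hash=$(openssl x509 -hash -noout -in "$pem")
    sudo ln -fs "$pem" /etc/ssl/certs/minikubeCA.pem
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${hash}.0"
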
	I0603 12:06:59.295789   73179 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0603 12:06:59.300138   73179 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0603 12:06:59.305722   73179 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0603 12:06:59.311381   73179 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0603 12:06:59.317037   73179 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0603 12:06:59.322539   73179 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0603 12:06:59.328067   73179 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
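
Before deciding whether to regenerate anything, the restart path checks each existing control-plane certificate for validity over the next 24 hours using openssl's -checkend flag (86400 seconds), as in the commands above. For a single cert the check is simply:

    # Exit status 0: certificate is still valid 24h from now; non-zero: renew it.
    openssl x509 -noout -checkend 86400 \
      -in /var/lib/minikube/certs/apiserver-kubelet-client.crt \
      && echo "still valid" || echo "expiring - regenerate"
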
	I0603 12:06:59.333575   73179 kubeadm.go:391] StartCluster: {Name:no-preload-602118 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:no-preload-602118 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.245 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0603 12:06:59.333659   73179 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0603 12:06:59.333712   73179 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0603 12:06:59.374413   73179 cri.go:89] found id: ""
	I0603 12:06:59.374471   73179 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0603 12:06:59.384802   73179 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0603 12:06:59.384819   73179 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0603 12:06:59.384832   73179 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0603 12:06:59.384878   73179 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0603 12:06:59.394669   73179 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0603 12:06:59.395564   73179 kubeconfig.go:125] found "no-preload-602118" server: "https://192.168.50.245:8443"
	I0603 12:06:59.397420   73179 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0603 12:06:59.407251   73179 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.50.245
	I0603 12:06:59.407281   73179 kubeadm.go:1154] stopping kube-system containers ...
	I0603 12:06:59.407295   73179 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0603 12:06:59.407347   73179 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0603 12:06:59.452986   73179 cri.go:89] found id: ""
	I0603 12:06:59.453067   73179 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0603 12:06:59.470164   73179 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0603 12:06:59.480228   73179 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0603 12:06:59.480249   73179 kubeadm.go:156] found existing configuration files:
	
	I0603 12:06:59.480291   73179 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0603 12:06:59.489923   73179 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0603 12:06:59.489968   73179 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0603 12:06:59.499530   73179 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0603 12:06:59.508336   73179 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0603 12:06:59.508376   73179 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0603 12:06:59.517665   73179 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0603 12:06:59.526660   73179 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0603 12:06:59.526697   73179 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0603 12:06:59.535973   73179 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0603 12:06:59.544846   73179 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0603 12:06:59.544885   73179 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0603 12:06:59.554342   73179 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0603 12:06:59.563632   73179 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0603 12:06:59.673234   73179 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
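
Rather than a full kubeadm init, the restart path re-runs only the phases it needs against the rendered config, using the kubeadm binary staged under /var/lib/minikube/binaries. In outline (the same two invocations as logged):

    # Regenerate certs and kubeconfigs from the previously written kubeadm.yaml.
    sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" \
      kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml
    sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" \
      kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml
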
	I0603 12:07:02.883984   73662 start.go:364] duration metric: took 4m2.688332749s to acquireMachinesLock for "old-k8s-version-905554"
	I0603 12:07:02.884045   73662 start.go:96] Skipping create...Using existing machine configuration
	I0603 12:07:02.884052   73662 fix.go:54] fixHost starting: 
	I0603 12:07:02.884482   73662 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 12:07:02.884520   73662 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 12:07:02.905120   73662 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45229
	I0603 12:07:02.905571   73662 main.go:141] libmachine: () Calling .GetVersion
	I0603 12:07:02.906128   73662 main.go:141] libmachine: Using API Version  1
	I0603 12:07:02.906157   73662 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 12:07:02.906519   73662 main.go:141] libmachine: () Calling .GetMachineName
	I0603 12:07:02.906709   73662 main.go:141] libmachine: (old-k8s-version-905554) Calling .DriverName
	I0603 12:07:02.906852   73662 main.go:141] libmachine: (old-k8s-version-905554) Calling .GetState
	I0603 12:07:02.908371   73662 fix.go:112] recreateIfNeeded on old-k8s-version-905554: state=Stopped err=<nil>
	I0603 12:07:02.908412   73662 main.go:141] libmachine: (old-k8s-version-905554) Calling .DriverName
	W0603 12:07:02.908577   73662 fix.go:138] unexpected machine state, will restart: <nil>
	I0603 12:07:02.910440   73662 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-905554" ...
	I0603 12:07:01.548241   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | domain default-k8s-diff-port-196710 has defined MAC address 52:54:00:9c:61:49 in network mk-default-k8s-diff-port-196710
	I0603 12:07:01.548698   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Found IP for machine: 192.168.61.60
	I0603 12:07:01.548720   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Reserving static IP address...
	I0603 12:07:01.548734   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | domain default-k8s-diff-port-196710 has current primary IP address 192.168.61.60 and MAC address 52:54:00:9c:61:49 in network mk-default-k8s-diff-port-196710
	I0603 12:07:01.549093   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-196710", mac: "52:54:00:9c:61:49", ip: "192.168.61.60"} in network mk-default-k8s-diff-port-196710: {Iface:virbr4 ExpiryTime:2024-06-03 12:58:47 +0000 UTC Type:0 Mac:52:54:00:9c:61:49 Iaid: IPaddr:192.168.61.60 Prefix:24 Hostname:default-k8s-diff-port-196710 Clientid:01:52:54:00:9c:61:49}
	I0603 12:07:01.549127   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | skip adding static IP to network mk-default-k8s-diff-port-196710 - found existing host DHCP lease matching {name: "default-k8s-diff-port-196710", mac: "52:54:00:9c:61:49", ip: "192.168.61.60"}
	I0603 12:07:01.549141   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Reserved static IP address: 192.168.61.60
	I0603 12:07:01.549161   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | Getting to WaitForSSH function...
	I0603 12:07:01.549171   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Waiting for SSH to be available...
	I0603 12:07:01.551680   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | domain default-k8s-diff-port-196710 has defined MAC address 52:54:00:9c:61:49 in network mk-default-k8s-diff-port-196710
	I0603 12:07:01.551959   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:61:49", ip: ""} in network mk-default-k8s-diff-port-196710: {Iface:virbr4 ExpiryTime:2024-06-03 12:58:47 +0000 UTC Type:0 Mac:52:54:00:9c:61:49 Iaid: IPaddr:192.168.61.60 Prefix:24 Hostname:default-k8s-diff-port-196710 Clientid:01:52:54:00:9c:61:49}
	I0603 12:07:01.551996   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | domain default-k8s-diff-port-196710 has defined IP address 192.168.61.60 and MAC address 52:54:00:9c:61:49 in network mk-default-k8s-diff-port-196710
	I0603 12:07:01.552051   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | Using SSH client type: external
	I0603 12:07:01.552124   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | Using SSH private key: /home/jenkins/minikube-integration/19008-7755/.minikube/machines/default-k8s-diff-port-196710/id_rsa (-rw-------)
	I0603 12:07:01.552160   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.60 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19008-7755/.minikube/machines/default-k8s-diff-port-196710/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0603 12:07:01.552181   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | About to run SSH command:
	I0603 12:07:01.552194   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | exit 0
	I0603 12:07:01.674944   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | SSH cmd err, output: <nil>: 
	I0603 12:07:01.675373   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .GetConfigRaw
	I0603 12:07:01.676105   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .GetIP
	I0603 12:07:01.678480   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | domain default-k8s-diff-port-196710 has defined MAC address 52:54:00:9c:61:49 in network mk-default-k8s-diff-port-196710
	I0603 12:07:01.678823   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:61:49", ip: ""} in network mk-default-k8s-diff-port-196710: {Iface:virbr4 ExpiryTime:2024-06-03 12:58:47 +0000 UTC Type:0 Mac:52:54:00:9c:61:49 Iaid: IPaddr:192.168.61.60 Prefix:24 Hostname:default-k8s-diff-port-196710 Clientid:01:52:54:00:9c:61:49}
	I0603 12:07:01.678854   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | domain default-k8s-diff-port-196710 has defined IP address 192.168.61.60 and MAC address 52:54:00:9c:61:49 in network mk-default-k8s-diff-port-196710
	I0603 12:07:01.679088   73294 profile.go:143] Saving config to /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/default-k8s-diff-port-196710/config.json ...
	I0603 12:07:01.679311   73294 machine.go:94] provisionDockerMachine start ...
	I0603 12:07:01.679332   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .DriverName
	I0603 12:07:01.679525   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .GetSSHHostname
	I0603 12:07:01.681641   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | domain default-k8s-diff-port-196710 has defined MAC address 52:54:00:9c:61:49 in network mk-default-k8s-diff-port-196710
	I0603 12:07:01.681931   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:61:49", ip: ""} in network mk-default-k8s-diff-port-196710: {Iface:virbr4 ExpiryTime:2024-06-03 12:58:47 +0000 UTC Type:0 Mac:52:54:00:9c:61:49 Iaid: IPaddr:192.168.61.60 Prefix:24 Hostname:default-k8s-diff-port-196710 Clientid:01:52:54:00:9c:61:49}
	I0603 12:07:01.681964   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | domain default-k8s-diff-port-196710 has defined IP address 192.168.61.60 and MAC address 52:54:00:9c:61:49 in network mk-default-k8s-diff-port-196710
	I0603 12:07:01.682121   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .GetSSHPort
	I0603 12:07:01.682291   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .GetSSHKeyPath
	I0603 12:07:01.682466   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .GetSSHKeyPath
	I0603 12:07:01.682611   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .GetSSHUsername
	I0603 12:07:01.682753   73294 main.go:141] libmachine: Using SSH client type: native
	I0603 12:07:01.682949   73294 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.61.60 22 <nil> <nil>}
	I0603 12:07:01.682962   73294 main.go:141] libmachine: About to run SSH command:
	hostname
	I0603 12:07:01.787146   73294 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0603 12:07:01.787176   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .GetMachineName
	I0603 12:07:01.787425   73294 buildroot.go:166] provisioning hostname "default-k8s-diff-port-196710"
	I0603 12:07:01.787448   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .GetMachineName
	I0603 12:07:01.787638   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .GetSSHHostname
	I0603 12:07:01.790151   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | domain default-k8s-diff-port-196710 has defined MAC address 52:54:00:9c:61:49 in network mk-default-k8s-diff-port-196710
	I0603 12:07:01.790487   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:61:49", ip: ""} in network mk-default-k8s-diff-port-196710: {Iface:virbr4 ExpiryTime:2024-06-03 12:58:47 +0000 UTC Type:0 Mac:52:54:00:9c:61:49 Iaid: IPaddr:192.168.61.60 Prefix:24 Hostname:default-k8s-diff-port-196710 Clientid:01:52:54:00:9c:61:49}
	I0603 12:07:01.790512   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | domain default-k8s-diff-port-196710 has defined IP address 192.168.61.60 and MAC address 52:54:00:9c:61:49 in network mk-default-k8s-diff-port-196710
	I0603 12:07:01.790646   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .GetSSHPort
	I0603 12:07:01.790812   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .GetSSHKeyPath
	I0603 12:07:01.790964   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .GetSSHKeyPath
	I0603 12:07:01.791133   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .GetSSHUsername
	I0603 12:07:01.791272   73294 main.go:141] libmachine: Using SSH client type: native
	I0603 12:07:01.791477   73294 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.61.60 22 <nil> <nil>}
	I0603 12:07:01.791496   73294 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-196710 && echo "default-k8s-diff-port-196710" | sudo tee /etc/hostname
	I0603 12:07:01.916785   73294 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-196710
	
	I0603 12:07:01.916820   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .GetSSHHostname
	I0603 12:07:01.919809   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | domain default-k8s-diff-port-196710 has defined MAC address 52:54:00:9c:61:49 in network mk-default-k8s-diff-port-196710
	I0603 12:07:01.920225   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:61:49", ip: ""} in network mk-default-k8s-diff-port-196710: {Iface:virbr4 ExpiryTime:2024-06-03 12:58:47 +0000 UTC Type:0 Mac:52:54:00:9c:61:49 Iaid: IPaddr:192.168.61.60 Prefix:24 Hostname:default-k8s-diff-port-196710 Clientid:01:52:54:00:9c:61:49}
	I0603 12:07:01.920264   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | domain default-k8s-diff-port-196710 has defined IP address 192.168.61.60 and MAC address 52:54:00:9c:61:49 in network mk-default-k8s-diff-port-196710
	I0603 12:07:01.920552   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .GetSSHPort
	I0603 12:07:01.920756   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .GetSSHKeyPath
	I0603 12:07:01.920947   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .GetSSHKeyPath
	I0603 12:07:01.921145   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .GetSSHUsername
	I0603 12:07:01.921363   73294 main.go:141] libmachine: Using SSH client type: native
	I0603 12:07:01.921645   73294 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.61.60 22 <nil> <nil>}
	I0603 12:07:01.921671   73294 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-196710' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-196710/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-196710' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0603 12:07:02.048767   73294 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0603 12:07:02.048797   73294 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19008-7755/.minikube CaCertPath:/home/jenkins/minikube-integration/19008-7755/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19008-7755/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19008-7755/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19008-7755/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19008-7755/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19008-7755/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19008-7755/.minikube}
	I0603 12:07:02.048851   73294 buildroot.go:174] setting up certificates
	I0603 12:07:02.048866   73294 provision.go:84] configureAuth start
	I0603 12:07:02.048883   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .GetMachineName
	I0603 12:07:02.049168   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .GetIP
	I0603 12:07:02.051709   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | domain default-k8s-diff-port-196710 has defined MAC address 52:54:00:9c:61:49 in network mk-default-k8s-diff-port-196710
	I0603 12:07:02.052111   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:61:49", ip: ""} in network mk-default-k8s-diff-port-196710: {Iface:virbr4 ExpiryTime:2024-06-03 12:58:47 +0000 UTC Type:0 Mac:52:54:00:9c:61:49 Iaid: IPaddr:192.168.61.60 Prefix:24 Hostname:default-k8s-diff-port-196710 Clientid:01:52:54:00:9c:61:49}
	I0603 12:07:02.052151   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | domain default-k8s-diff-port-196710 has defined IP address 192.168.61.60 and MAC address 52:54:00:9c:61:49 in network mk-default-k8s-diff-port-196710
	I0603 12:07:02.052295   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .GetSSHHostname
	I0603 12:07:02.054716   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | domain default-k8s-diff-port-196710 has defined MAC address 52:54:00:9c:61:49 in network mk-default-k8s-diff-port-196710
	I0603 12:07:02.055073   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:61:49", ip: ""} in network mk-default-k8s-diff-port-196710: {Iface:virbr4 ExpiryTime:2024-06-03 12:58:47 +0000 UTC Type:0 Mac:52:54:00:9c:61:49 Iaid: IPaddr:192.168.61.60 Prefix:24 Hostname:default-k8s-diff-port-196710 Clientid:01:52:54:00:9c:61:49}
	I0603 12:07:02.055106   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | domain default-k8s-diff-port-196710 has defined IP address 192.168.61.60 and MAC address 52:54:00:9c:61:49 in network mk-default-k8s-diff-port-196710
	I0603 12:07:02.055262   73294 provision.go:143] copyHostCerts
	I0603 12:07:02.055334   73294 exec_runner.go:144] found /home/jenkins/minikube-integration/19008-7755/.minikube/ca.pem, removing ...
	I0603 12:07:02.055349   73294 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19008-7755/.minikube/ca.pem
	I0603 12:07:02.055408   73294 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19008-7755/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19008-7755/.minikube/ca.pem (1082 bytes)
	I0603 12:07:02.055527   73294 exec_runner.go:144] found /home/jenkins/minikube-integration/19008-7755/.minikube/cert.pem, removing ...
	I0603 12:07:02.055539   73294 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19008-7755/.minikube/cert.pem
	I0603 12:07:02.055568   73294 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19008-7755/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19008-7755/.minikube/cert.pem (1123 bytes)
	I0603 12:07:02.055648   73294 exec_runner.go:144] found /home/jenkins/minikube-integration/19008-7755/.minikube/key.pem, removing ...
	I0603 12:07:02.055659   73294 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19008-7755/.minikube/key.pem
	I0603 12:07:02.055684   73294 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19008-7755/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19008-7755/.minikube/key.pem (1679 bytes)
	I0603 12:07:02.055753   73294 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19008-7755/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19008-7755/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19008-7755/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-196710 san=[127.0.0.1 192.168.61.60 default-k8s-diff-port-196710 localhost minikube]
	I0603 12:07:02.172134   73294 provision.go:177] copyRemoteCerts
	I0603 12:07:02.172192   73294 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0603 12:07:02.172217   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .GetSSHHostname
	I0603 12:07:02.175333   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | domain default-k8s-diff-port-196710 has defined MAC address 52:54:00:9c:61:49 in network mk-default-k8s-diff-port-196710
	I0603 12:07:02.175724   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:61:49", ip: ""} in network mk-default-k8s-diff-port-196710: {Iface:virbr4 ExpiryTime:2024-06-03 12:58:47 +0000 UTC Type:0 Mac:52:54:00:9c:61:49 Iaid: IPaddr:192.168.61.60 Prefix:24 Hostname:default-k8s-diff-port-196710 Clientid:01:52:54:00:9c:61:49}
	I0603 12:07:02.175749   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | domain default-k8s-diff-port-196710 has defined IP address 192.168.61.60 and MAC address 52:54:00:9c:61:49 in network mk-default-k8s-diff-port-196710
	I0603 12:07:02.175996   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .GetSSHPort
	I0603 12:07:02.176203   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .GetSSHKeyPath
	I0603 12:07:02.176405   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .GetSSHUsername
	I0603 12:07:02.176599   73294 sshutil.go:53] new ssh client: &{IP:192.168.61.60 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19008-7755/.minikube/machines/default-k8s-diff-port-196710/id_rsa Username:docker}
	I0603 12:07:02.273410   73294 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0603 12:07:02.302337   73294 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0603 12:07:02.326471   73294 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0603 12:07:02.350709   73294 provision.go:87] duration metric: took 301.827273ms to configureAuth
	I0603 12:07:02.350742   73294 buildroot.go:189] setting minikube options for container-runtime
	I0603 12:07:02.350977   73294 config.go:182] Loaded profile config "default-k8s-diff-port-196710": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0603 12:07:02.351086   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .GetSSHHostname
	I0603 12:07:02.354023   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | domain default-k8s-diff-port-196710 has defined MAC address 52:54:00:9c:61:49 in network mk-default-k8s-diff-port-196710
	I0603 12:07:02.354434   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:61:49", ip: ""} in network mk-default-k8s-diff-port-196710: {Iface:virbr4 ExpiryTime:2024-06-03 12:58:47 +0000 UTC Type:0 Mac:52:54:00:9c:61:49 Iaid: IPaddr:192.168.61.60 Prefix:24 Hostname:default-k8s-diff-port-196710 Clientid:01:52:54:00:9c:61:49}
	I0603 12:07:02.354465   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | domain default-k8s-diff-port-196710 has defined IP address 192.168.61.60 and MAC address 52:54:00:9c:61:49 in network mk-default-k8s-diff-port-196710
	I0603 12:07:02.354613   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .GetSSHPort
	I0603 12:07:02.354813   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .GetSSHKeyPath
	I0603 12:07:02.354996   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .GetSSHKeyPath
	I0603 12:07:02.355176   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .GetSSHUsername
	I0603 12:07:02.355385   73294 main.go:141] libmachine: Using SSH client type: native
	I0603 12:07:02.355603   73294 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.61.60 22 <nil> <nil>}
	I0603 12:07:02.355633   73294 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0603 12:07:02.636420   73294 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0603 12:07:02.636453   73294 machine.go:97] duration metric: took 957.127741ms to provisionDockerMachine
	I0603 12:07:02.636467   73294 start.go:293] postStartSetup for "default-k8s-diff-port-196710" (driver="kvm2")
	I0603 12:07:02.636480   73294 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0603 12:07:02.636507   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .DriverName
	I0603 12:07:02.636828   73294 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0603 12:07:02.636860   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .GetSSHHostname
	I0603 12:07:02.639699   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | domain default-k8s-diff-port-196710 has defined MAC address 52:54:00:9c:61:49 in network mk-default-k8s-diff-port-196710
	I0603 12:07:02.640122   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:61:49", ip: ""} in network mk-default-k8s-diff-port-196710: {Iface:virbr4 ExpiryTime:2024-06-03 12:58:47 +0000 UTC Type:0 Mac:52:54:00:9c:61:49 Iaid: IPaddr:192.168.61.60 Prefix:24 Hostname:default-k8s-diff-port-196710 Clientid:01:52:54:00:9c:61:49}
	I0603 12:07:02.640155   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | domain default-k8s-diff-port-196710 has defined IP address 192.168.61.60 and MAC address 52:54:00:9c:61:49 in network mk-default-k8s-diff-port-196710
	I0603 12:07:02.640282   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .GetSSHPort
	I0603 12:07:02.640462   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .GetSSHKeyPath
	I0603 12:07:02.640647   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .GetSSHUsername
	I0603 12:07:02.640907   73294 sshutil.go:53] new ssh client: &{IP:192.168.61.60 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19008-7755/.minikube/machines/default-k8s-diff-port-196710/id_rsa Username:docker}
	I0603 12:07:02.729745   73294 ssh_runner.go:195] Run: cat /etc/os-release
	I0603 12:07:02.734393   73294 info.go:137] Remote host: Buildroot 2023.02.9
	I0603 12:07:02.734414   73294 filesync.go:126] Scanning /home/jenkins/minikube-integration/19008-7755/.minikube/addons for local assets ...
	I0603 12:07:02.734476   73294 filesync.go:126] Scanning /home/jenkins/minikube-integration/19008-7755/.minikube/files for local assets ...
	I0603 12:07:02.734545   73294 filesync.go:149] local asset: /home/jenkins/minikube-integration/19008-7755/.minikube/files/etc/ssl/certs/150282.pem -> 150282.pem in /etc/ssl/certs
	I0603 12:07:02.734623   73294 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0603 12:07:02.744239   73294 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/files/etc/ssl/certs/150282.pem --> /etc/ssl/certs/150282.pem (1708 bytes)
	I0603 12:07:02.770883   73294 start.go:296] duration metric: took 134.402064ms for postStartSetup
	I0603 12:07:02.770918   73294 fix.go:56] duration metric: took 20.69069756s for fixHost
	I0603 12:07:02.770940   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .GetSSHHostname
	I0603 12:07:02.773675   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | domain default-k8s-diff-port-196710 has defined MAC address 52:54:00:9c:61:49 in network mk-default-k8s-diff-port-196710
	I0603 12:07:02.773977   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:61:49", ip: ""} in network mk-default-k8s-diff-port-196710: {Iface:virbr4 ExpiryTime:2024-06-03 12:58:47 +0000 UTC Type:0 Mac:52:54:00:9c:61:49 Iaid: IPaddr:192.168.61.60 Prefix:24 Hostname:default-k8s-diff-port-196710 Clientid:01:52:54:00:9c:61:49}
	I0603 12:07:02.774010   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | domain default-k8s-diff-port-196710 has defined IP address 192.168.61.60 and MAC address 52:54:00:9c:61:49 in network mk-default-k8s-diff-port-196710
	I0603 12:07:02.774111   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .GetSSHPort
	I0603 12:07:02.774329   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .GetSSHKeyPath
	I0603 12:07:02.774482   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .GetSSHKeyPath
	I0603 12:07:02.774635   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .GetSSHUsername
	I0603 12:07:02.774814   73294 main.go:141] libmachine: Using SSH client type: native
	I0603 12:07:02.774984   73294 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.61.60 22 <nil> <nil>}
	I0603 12:07:02.774998   73294 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0603 12:07:02.883831   73294 main.go:141] libmachine: SSH cmd err, output: <nil>: 1717416422.860813739
	
	I0603 12:07:02.883859   73294 fix.go:216] guest clock: 1717416422.860813739
	I0603 12:07:02.883870   73294 fix.go:229] Guest: 2024-06-03 12:07:02.860813739 +0000 UTC Remote: 2024-06-03 12:07:02.770922212 +0000 UTC m=+288.221479764 (delta=89.891527ms)
	I0603 12:07:02.883896   73294 fix.go:200] guest clock delta is within tolerance: 89.891527ms
	I0603 12:07:02.883902   73294 start.go:83] releasing machines lock for "default-k8s-diff-port-196710", held for 20.803713434s
	I0603 12:07:02.883935   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .DriverName
	I0603 12:07:02.884217   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .GetIP
	I0603 12:07:02.887393   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | domain default-k8s-diff-port-196710 has defined MAC address 52:54:00:9c:61:49 in network mk-default-k8s-diff-port-196710
	I0603 12:07:02.887758   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:61:49", ip: ""} in network mk-default-k8s-diff-port-196710: {Iface:virbr4 ExpiryTime:2024-06-03 12:58:47 +0000 UTC Type:0 Mac:52:54:00:9c:61:49 Iaid: IPaddr:192.168.61.60 Prefix:24 Hostname:default-k8s-diff-port-196710 Clientid:01:52:54:00:9c:61:49}
	I0603 12:07:02.887789   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | domain default-k8s-diff-port-196710 has defined IP address 192.168.61.60 and MAC address 52:54:00:9c:61:49 in network mk-default-k8s-diff-port-196710
	I0603 12:07:02.887954   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .DriverName
	I0603 12:07:02.888465   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .DriverName
	I0603 12:07:02.888616   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .DriverName
	I0603 12:07:02.888698   73294 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0603 12:07:02.888770   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .GetSSHHostname
	I0603 12:07:02.888871   73294 ssh_runner.go:195] Run: cat /version.json
	I0603 12:07:02.888913   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .GetSSHHostname
	I0603 12:07:02.891596   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | domain default-k8s-diff-port-196710 has defined MAC address 52:54:00:9c:61:49 in network mk-default-k8s-diff-port-196710
	I0603 12:07:02.891957   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | domain default-k8s-diff-port-196710 has defined MAC address 52:54:00:9c:61:49 in network mk-default-k8s-diff-port-196710
	I0603 12:07:02.892009   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:61:49", ip: ""} in network mk-default-k8s-diff-port-196710: {Iface:virbr4 ExpiryTime:2024-06-03 12:58:47 +0000 UTC Type:0 Mac:52:54:00:9c:61:49 Iaid: IPaddr:192.168.61.60 Prefix:24 Hostname:default-k8s-diff-port-196710 Clientid:01:52:54:00:9c:61:49}
	I0603 12:07:02.892051   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | domain default-k8s-diff-port-196710 has defined IP address 192.168.61.60 and MAC address 52:54:00:9c:61:49 in network mk-default-k8s-diff-port-196710
	I0603 12:07:02.892250   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .GetSSHPort
	I0603 12:07:02.892422   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .GetSSHKeyPath
	I0603 12:07:02.892436   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:61:49", ip: ""} in network mk-default-k8s-diff-port-196710: {Iface:virbr4 ExpiryTime:2024-06-03 12:58:47 +0000 UTC Type:0 Mac:52:54:00:9c:61:49 Iaid: IPaddr:192.168.61.60 Prefix:24 Hostname:default-k8s-diff-port-196710 Clientid:01:52:54:00:9c:61:49}
	I0603 12:07:02.892453   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | domain default-k8s-diff-port-196710 has defined IP address 192.168.61.60 and MAC address 52:54:00:9c:61:49 in network mk-default-k8s-diff-port-196710
	I0603 12:07:02.892601   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .GetSSHPort
	I0603 12:07:02.892636   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .GetSSHUsername
	I0603 12:07:02.892758   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .GetSSHKeyPath
	I0603 12:07:02.892777   73294 sshutil.go:53] new ssh client: &{IP:192.168.61.60 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19008-7755/.minikube/machines/default-k8s-diff-port-196710/id_rsa Username:docker}
	I0603 12:07:02.892941   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .GetSSHUsername
	I0603 12:07:02.893092   73294 sshutil.go:53] new ssh client: &{IP:192.168.61.60 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19008-7755/.minikube/machines/default-k8s-diff-port-196710/id_rsa Username:docker}
	I0603 12:07:02.998124   73294 ssh_runner.go:195] Run: systemctl --version
	I0603 12:07:03.005653   73294 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0603 12:07:03.152446   73294 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0603 12:07:03.160607   73294 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0603 12:07:03.160674   73294 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0603 12:07:03.176490   73294 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0603 12:07:03.176513   73294 start.go:494] detecting cgroup driver to use...
	I0603 12:07:03.176581   73294 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0603 12:07:03.195427   73294 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0603 12:07:03.211343   73294 docker.go:217] disabling cri-docker service (if available) ...
	I0603 12:07:03.211398   73294 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0603 12:07:03.227943   73294 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0603 12:07:03.245409   73294 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0603 12:07:03.384124   73294 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0603 12:07:03.529899   73294 docker.go:233] disabling docker service ...
	I0603 12:07:03.529984   73294 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0603 12:07:03.545971   73294 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0603 12:07:03.559981   73294 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0603 12:07:03.726303   73294 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0603 12:07:03.850915   73294 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0603 12:07:03.865591   73294 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0603 12:07:03.884498   73294 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0603 12:07:03.884558   73294 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 12:07:03.897708   73294 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0603 12:07:03.897772   73294 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 12:07:03.912146   73294 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 12:07:03.926435   73294 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 12:07:03.940520   73294 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0603 12:07:03.955122   73294 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 12:07:03.972518   73294 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 12:07:03.997707   73294 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 12:07:04.009020   73294 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0603 12:07:04.024118   73294 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0603 12:07:04.024185   73294 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0603 12:07:04.043959   73294 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0603 12:07:04.057417   73294 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0603 12:07:04.195354   73294 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0603 12:07:04.365103   73294 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0603 12:07:04.365195   73294 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0603 12:07:04.370764   73294 start.go:562] Will wait 60s for crictl version
	I0603 12:07:04.370822   73294 ssh_runner.go:195] Run: which crictl
	I0603 12:07:04.375203   73294 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0603 12:07:04.430761   73294 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0603 12:07:04.430843   73294 ssh_runner.go:195] Run: crio --version
	I0603 12:07:04.471171   73294 ssh_runner.go:195] Run: crio --version
	I0603 12:07:04.506684   73294 out.go:177] * Preparing Kubernetes v1.30.1 on CRI-O 1.29.1 ...
	I0603 12:07:04.508144   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .GetIP
	I0603 12:07:04.510945   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | domain default-k8s-diff-port-196710 has defined MAC address 52:54:00:9c:61:49 in network mk-default-k8s-diff-port-196710
	I0603 12:07:04.511375   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:61:49", ip: ""} in network mk-default-k8s-diff-port-196710: {Iface:virbr4 ExpiryTime:2024-06-03 12:58:47 +0000 UTC Type:0 Mac:52:54:00:9c:61:49 Iaid: IPaddr:192.168.61.60 Prefix:24 Hostname:default-k8s-diff-port-196710 Clientid:01:52:54:00:9c:61:49}
	I0603 12:07:04.511406   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | domain default-k8s-diff-port-196710 has defined IP address 192.168.61.60 and MAC address 52:54:00:9c:61:49 in network mk-default-k8s-diff-port-196710
	I0603 12:07:04.511607   73294 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0603 12:07:04.516367   73294 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0603 12:07:04.532203   73294 kubeadm.go:877] updating cluster {Name:default-k8s-diff-port-196710 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:default-k8s-diff-port-196710 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.60 Port:8444 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0603 12:07:04.532326   73294 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime crio
	I0603 12:07:04.532409   73294 ssh_runner.go:195] Run: sudo crictl images --output json
	I0603 12:07:04.576446   73294 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.1". assuming images are not preloaded.
	I0603 12:07:04.576523   73294 ssh_runner.go:195] Run: which lz4
	I0603 12:07:04.580901   73294 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0603 12:07:02.911700   73662 main.go:141] libmachine: (old-k8s-version-905554) Calling .Start
	I0603 12:07:02.911842   73662 main.go:141] libmachine: (old-k8s-version-905554) Ensuring networks are active...
	I0603 12:07:02.912570   73662 main.go:141] libmachine: (old-k8s-version-905554) Ensuring network default is active
	I0603 12:07:02.912896   73662 main.go:141] libmachine: (old-k8s-version-905554) Ensuring network mk-old-k8s-version-905554 is active
	I0603 12:07:02.913324   73662 main.go:141] libmachine: (old-k8s-version-905554) Getting domain xml...
	I0603 12:07:02.914147   73662 main.go:141] libmachine: (old-k8s-version-905554) Creating domain...
	I0603 12:07:04.233691   73662 main.go:141] libmachine: (old-k8s-version-905554) Waiting to get IP...
	I0603 12:07:04.234800   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | domain old-k8s-version-905554 has defined MAC address 52:54:00:3d:ed:07 in network mk-old-k8s-version-905554
	I0603 12:07:04.235276   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | unable to find current IP address of domain old-k8s-version-905554 in network mk-old-k8s-version-905554
	I0603 12:07:04.235378   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | I0603 12:07:04.235243   74674 retry.go:31] will retry after 297.546447ms: waiting for machine to come up
	I0603 12:07:04.534942   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | domain old-k8s-version-905554 has defined MAC address 52:54:00:3d:ed:07 in network mk-old-k8s-version-905554
	I0603 12:07:04.535492   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | unable to find current IP address of domain old-k8s-version-905554 in network mk-old-k8s-version-905554
	I0603 12:07:04.535522   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | I0603 12:07:04.535456   74674 retry.go:31] will retry after 385.160833ms: waiting for machine to come up
	I0603 12:07:04.922824   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | domain old-k8s-version-905554 has defined MAC address 52:54:00:3d:ed:07 in network mk-old-k8s-version-905554
	I0603 12:07:04.923312   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | unable to find current IP address of domain old-k8s-version-905554 in network mk-old-k8s-version-905554
	I0603 12:07:04.923336   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | I0603 12:07:04.923267   74674 retry.go:31] will retry after 363.309555ms: waiting for machine to come up
	I0603 12:07:01.017968   73179 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.344700881s)
	I0603 12:07:01.017993   73179 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0603 12:07:01.214414   73179 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0603 12:07:01.291063   73179 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0603 12:07:01.420874   73179 api_server.go:52] waiting for apiserver process to appear ...
	I0603 12:07:01.420977   73179 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:07:01.921439   73179 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:07:02.421904   73179 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:07:02.445051   73179 api_server.go:72] duration metric: took 1.024176056s to wait for apiserver process to appear ...
	I0603 12:07:02.445083   73179 api_server.go:88] waiting for apiserver healthz status ...
	I0603 12:07:02.445112   73179 api_server.go:253] Checking apiserver healthz at https://192.168.50.245:8443/healthz ...
	I0603 12:07:02.445614   73179 api_server.go:269] stopped: https://192.168.50.245:8443/healthz: Get "https://192.168.50.245:8443/healthz": dial tcp 192.168.50.245:8443: connect: connection refused
	I0603 12:07:02.945547   73179 api_server.go:253] Checking apiserver healthz at https://192.168.50.245:8443/healthz ...
	I0603 12:07:05.426682   73179 api_server.go:279] https://192.168.50.245:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0603 12:07:05.426713   73179 api_server.go:103] status: https://192.168.50.245:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0603 12:07:05.426726   73179 api_server.go:253] Checking apiserver healthz at https://192.168.50.245:8443/healthz ...
	I0603 12:07:05.474343   73179 api_server.go:279] https://192.168.50.245:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0603 12:07:05.474380   73179 api_server.go:103] status: https://192.168.50.245:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0603 12:07:05.474399   73179 api_server.go:253] Checking apiserver healthz at https://192.168.50.245:8443/healthz ...
	I0603 12:07:05.578473   73179 api_server.go:279] https://192.168.50.245:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0603 12:07:05.578520   73179 api_server.go:103] status: https://192.168.50.245:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0603 12:07:05.945708   73179 api_server.go:253] Checking apiserver healthz at https://192.168.50.245:8443/healthz ...
	I0603 12:07:05.952298   73179 api_server.go:279] https://192.168.50.245:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0603 12:07:05.952338   73179 api_server.go:103] status: https://192.168.50.245:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0603 12:07:06.445920   73179 api_server.go:253] Checking apiserver healthz at https://192.168.50.245:8443/healthz ...
	I0603 12:07:06.454769   73179 api_server.go:279] https://192.168.50.245:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0603 12:07:06.454805   73179 api_server.go:103] status: https://192.168.50.245:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0603 12:07:06.945370   73179 api_server.go:253] Checking apiserver healthz at https://192.168.50.245:8443/healthz ...
	I0603 12:07:06.952157   73179 api_server.go:279] https://192.168.50.245:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0603 12:07:06.952193   73179 api_server.go:103] status: https://192.168.50.245:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0603 12:07:07.445973   73179 api_server.go:253] Checking apiserver healthz at https://192.168.50.245:8443/healthz ...
	I0603 12:07:07.457436   73179 api_server.go:279] https://192.168.50.245:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0603 12:07:07.457471   73179 api_server.go:103] status: https://192.168.50.245:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0603 12:07:07.945237   73179 api_server.go:253] Checking apiserver healthz at https://192.168.50.245:8443/healthz ...
	I0603 12:07:07.952135   73179 api_server.go:279] https://192.168.50.245:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0603 12:07:07.952168   73179 api_server.go:103] status: https://192.168.50.245:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0603 12:07:08.445763   73179 api_server.go:253] Checking apiserver healthz at https://192.168.50.245:8443/healthz ...
	I0603 12:07:08.450319   73179 api_server.go:279] https://192.168.50.245:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0603 12:07:08.450346   73179 api_server.go:103] status: https://192.168.50.245:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0603 12:07:08.945476   73179 api_server.go:253] Checking apiserver healthz at https://192.168.50.245:8443/healthz ...
	I0603 12:07:08.950139   73179 api_server.go:279] https://192.168.50.245:8443/healthz returned 200:
	ok
	I0603 12:07:08.956975   73179 api_server.go:141] control plane version: v1.30.1
	I0603 12:07:08.957002   73179 api_server.go:131] duration metric: took 6.511911305s to wait for apiserver health ...
	I0603 12:07:08.957012   73179 cni.go:84] Creating CNI manager for ""
	I0603 12:07:08.957020   73179 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0603 12:07:08.958965   73179 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0603 12:07:04.585614   73294 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0603 12:07:04.585642   73294 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (394537501 bytes)
	I0603 12:07:06.088296   73294 crio.go:462] duration metric: took 1.507429412s to copy over tarball
	I0603 12:07:06.088376   73294 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0603 12:07:08.432866   73294 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.344418631s)
	I0603 12:07:08.432898   73294 crio.go:469] duration metric: took 2.344572918s to extract the tarball
	I0603 12:07:08.432921   73294 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0603 12:07:08.472509   73294 ssh_runner.go:195] Run: sudo crictl images --output json
	I0603 12:07:08.529017   73294 crio.go:514] all images are preloaded for cri-o runtime.
	I0603 12:07:08.529040   73294 cache_images.go:84] Images are preloaded, skipping loading
	I0603 12:07:08.529052   73294 kubeadm.go:928] updating node { 192.168.61.60 8444 v1.30.1 crio true true} ...
	I0603 12:07:08.529180   73294 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-196710 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.60
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.1 ClusterName:default-k8s-diff-port-196710 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0603 12:07:08.529244   73294 ssh_runner.go:195] Run: crio config
	I0603 12:07:08.581601   73294 cni.go:84] Creating CNI manager for ""
	I0603 12:07:08.581625   73294 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0603 12:07:08.581641   73294 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0603 12:07:08.581667   73294 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.60 APIServerPort:8444 KubernetesVersion:v1.30.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-196710 NodeName:default-k8s-diff-port-196710 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.60"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.60 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0603 12:07:08.581854   73294 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.60
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-196710"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.60
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.60"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0603 12:07:08.581931   73294 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.1
	I0603 12:07:08.595708   73294 binaries.go:44] Found k8s binaries, skipping transfer
	I0603 12:07:08.595778   73294 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0603 12:07:08.608914   73294 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (327 bytes)
	I0603 12:07:08.627009   73294 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0603 12:07:08.643755   73294 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2169 bytes)
	I0603 12:07:08.661803   73294 ssh_runner.go:195] Run: grep 192.168.61.60	control-plane.minikube.internal$ /etc/hosts
	I0603 12:07:08.665764   73294 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.60	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0603 12:07:08.678440   73294 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0603 12:07:08.797052   73294 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0603 12:07:08.814618   73294 certs.go:68] Setting up /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/default-k8s-diff-port-196710 for IP: 192.168.61.60
	I0603 12:07:08.814645   73294 certs.go:194] generating shared ca certs ...
	I0603 12:07:08.814665   73294 certs.go:226] acquiring lock for ca certs: {Name:mk50092df3bdecbf959926adc377ee06b75aacd9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 12:07:08.814863   73294 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19008-7755/.minikube/ca.key
	I0603 12:07:08.814931   73294 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19008-7755/.minikube/proxy-client-ca.key
	I0603 12:07:08.814945   73294 certs.go:256] generating profile certs ...
	I0603 12:07:08.815072   73294 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/default-k8s-diff-port-196710/client.key
	I0603 12:07:08.815150   73294 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/default-k8s-diff-port-196710/apiserver.key.fd40708e
	I0603 12:07:08.815210   73294 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/default-k8s-diff-port-196710/proxy-client.key
	I0603 12:07:08.815370   73294 certs.go:484] found cert: /home/jenkins/minikube-integration/19008-7755/.minikube/certs/15028.pem (1338 bytes)
	W0603 12:07:08.815408   73294 certs.go:480] ignoring /home/jenkins/minikube-integration/19008-7755/.minikube/certs/15028_empty.pem, impossibly tiny 0 bytes
	I0603 12:07:08.815421   73294 certs.go:484] found cert: /home/jenkins/minikube-integration/19008-7755/.minikube/certs/ca-key.pem (1679 bytes)
	I0603 12:07:08.815467   73294 certs.go:484] found cert: /home/jenkins/minikube-integration/19008-7755/.minikube/certs/ca.pem (1082 bytes)
	I0603 12:07:08.815501   73294 certs.go:484] found cert: /home/jenkins/minikube-integration/19008-7755/.minikube/certs/cert.pem (1123 bytes)
	I0603 12:07:08.815529   73294 certs.go:484] found cert: /home/jenkins/minikube-integration/19008-7755/.minikube/certs/key.pem (1679 bytes)
	I0603 12:07:08.815581   73294 certs.go:484] found cert: /home/jenkins/minikube-integration/19008-7755/.minikube/files/etc/ssl/certs/150282.pem (1708 bytes)
	I0603 12:07:08.816420   73294 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0603 12:07:08.852241   73294 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0603 12:07:08.892369   73294 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0603 12:07:08.924242   73294 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0603 12:07:08.952908   73294 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/default-k8s-diff-port-196710/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0603 12:07:09.002060   73294 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/default-k8s-diff-port-196710/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0603 12:07:09.035617   73294 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/default-k8s-diff-port-196710/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0603 12:07:09.063304   73294 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/default-k8s-diff-port-196710/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0603 12:07:09.090994   73294 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/certs/15028.pem --> /usr/share/ca-certificates/15028.pem (1338 bytes)
	I0603 12:07:09.122568   73294 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/files/etc/ssl/certs/150282.pem --> /usr/share/ca-certificates/150282.pem (1708 bytes)
	I0603 12:07:09.150432   73294 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0603 12:07:09.178940   73294 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0603 12:07:09.202491   73294 ssh_runner.go:195] Run: openssl version
	I0603 12:07:09.211182   73294 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15028.pem && ln -fs /usr/share/ca-certificates/15028.pem /etc/ssl/certs/15028.pem"
	I0603 12:07:09.226290   73294 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15028.pem
	I0603 12:07:09.232034   73294 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jun  3 10:51 /usr/share/ca-certificates/15028.pem
	I0603 12:07:09.232103   73294 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15028.pem
	I0603 12:07:09.240592   73294 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/15028.pem /etc/ssl/certs/51391683.0"
	I0603 12:07:09.255018   73294 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/150282.pem && ln -fs /usr/share/ca-certificates/150282.pem /etc/ssl/certs/150282.pem"
	I0603 12:07:09.267194   73294 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/150282.pem
	I0603 12:07:09.272575   73294 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jun  3 10:51 /usr/share/ca-certificates/150282.pem
	I0603 12:07:09.272658   73294 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/150282.pem
	I0603 12:07:09.280687   73294 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/150282.pem /etc/ssl/certs/3ec20f2e.0"
	I0603 12:07:09.296232   73294 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0603 12:07:09.309706   73294 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0603 12:07:09.315596   73294 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jun  3 10:39 /usr/share/ca-certificates/minikubeCA.pem
	I0603 12:07:09.315661   73294 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0603 12:07:09.323283   73294 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0603 12:07:09.337780   73294 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0603 12:07:09.343627   73294 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0603 12:07:09.351742   73294 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0603 12:07:09.360465   73294 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0603 12:07:09.366733   73294 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0603 12:07:09.373061   73294 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0603 12:07:09.379649   73294 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0603 12:07:09.385610   73294 kubeadm.go:391] StartCluster: {Name:default-k8s-diff-port-196710 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:default-k8s-diff-port-196710 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.60 Port:8444 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0603 12:07:09.385694   73294 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0603 12:07:09.385732   73294 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0603 12:07:09.434544   73294 cri.go:89] found id: ""
	I0603 12:07:09.434636   73294 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0603 12:07:09.446209   73294 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0603 12:07:09.446231   73294 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0603 12:07:09.446236   73294 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0603 12:07:09.446283   73294 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0603 12:07:09.456225   73294 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0603 12:07:09.457266   73294 kubeconfig.go:125] found "default-k8s-diff-port-196710" server: "https://192.168.61.60:8444"
	I0603 12:07:09.459519   73294 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0603 12:07:09.468977   73294 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.61.60
	I0603 12:07:09.469007   73294 kubeadm.go:1154] stopping kube-system containers ...
	I0603 12:07:09.469020   73294 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0603 12:07:09.469070   73294 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0603 12:07:09.508306   73294 cri.go:89] found id: ""
	I0603 12:07:09.508408   73294 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0603 12:07:09.526082   73294 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0603 12:07:09.536331   73294 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0603 12:07:09.536361   73294 kubeadm.go:156] found existing configuration files:
	
	I0603 12:07:09.536430   73294 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0603 12:07:09.549053   73294 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0603 12:07:09.549121   73294 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0603 12:07:09.562617   73294 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0603 12:07:09.574968   73294 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0603 12:07:09.575023   73294 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0603 12:07:05.287726   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | domain old-k8s-version-905554 has defined MAC address 52:54:00:3d:ed:07 in network mk-old-k8s-version-905554
	I0603 12:07:05.288228   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | unable to find current IP address of domain old-k8s-version-905554 in network mk-old-k8s-version-905554
	I0603 12:07:05.288264   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | I0603 12:07:05.288180   74674 retry.go:31] will retry after 401.575259ms: waiting for machine to come up
	I0603 12:07:05.691523   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | domain old-k8s-version-905554 has defined MAC address 52:54:00:3d:ed:07 in network mk-old-k8s-version-905554
	I0603 12:07:05.691945   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | unable to find current IP address of domain old-k8s-version-905554 in network mk-old-k8s-version-905554
	I0603 12:07:05.691977   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | I0603 12:07:05.691899   74674 retry.go:31] will retry after 473.67071ms: waiting for machine to come up
	I0603 12:07:06.167720   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | domain old-k8s-version-905554 has defined MAC address 52:54:00:3d:ed:07 in network mk-old-k8s-version-905554
	I0603 12:07:06.168286   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | unable to find current IP address of domain old-k8s-version-905554 in network mk-old-k8s-version-905554
	I0603 12:07:06.168317   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | I0603 12:07:06.168229   74674 retry.go:31] will retry after 610.631851ms: waiting for machine to come up
	I0603 12:07:06.780253   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | domain old-k8s-version-905554 has defined MAC address 52:54:00:3d:ed:07 in network mk-old-k8s-version-905554
	I0603 12:07:06.780747   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | unable to find current IP address of domain old-k8s-version-905554 in network mk-old-k8s-version-905554
	I0603 12:07:06.780771   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | I0603 12:07:06.780699   74674 retry.go:31] will retry after 1.150068976s: waiting for machine to come up
	I0603 12:07:07.932831   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | domain old-k8s-version-905554 has defined MAC address 52:54:00:3d:ed:07 in network mk-old-k8s-version-905554
	I0603 12:07:07.933375   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | unable to find current IP address of domain old-k8s-version-905554 in network mk-old-k8s-version-905554
	I0603 12:07:07.933409   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | I0603 12:07:07.933282   74674 retry.go:31] will retry after 900.546424ms: waiting for machine to come up
	I0603 12:07:08.835303   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | domain old-k8s-version-905554 has defined MAC address 52:54:00:3d:ed:07 in network mk-old-k8s-version-905554
	I0603 12:07:08.835794   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | unable to find current IP address of domain old-k8s-version-905554 in network mk-old-k8s-version-905554
	I0603 12:07:08.835827   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | I0603 12:07:08.835739   74674 retry.go:31] will retry after 1.64990511s: waiting for machine to come up
	I0603 12:07:08.960402   73179 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0603 12:07:08.971814   73179 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0603 12:07:08.989522   73179 system_pods.go:43] waiting for kube-system pods to appear ...
	I0603 12:07:09.001926   73179 system_pods.go:59] 8 kube-system pods found
	I0603 12:07:09.001960   73179 system_pods.go:61] "coredns-7db6d8ff4d-pv665" [58d7a423-2ac7-4a57-a76f-e8dfaeac9732] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0603 12:07:09.001975   73179 system_pods.go:61] "etcd-no-preload-602118" [3a6a1eb1-0234-47d8-8eaa-e6f2de5fc7b8] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0603 12:07:09.001987   73179 system_pods.go:61] "kube-apiserver-no-preload-602118" [d6b168b3-1605-4e04-8c6a-c5c22a080a10] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0603 12:07:09.001998   73179 system_pods.go:61] "kube-controller-manager-no-preload-602118" [b045e945-f022-443d-b0f6-17f0b283f8fa] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0603 12:07:09.002010   73179 system_pods.go:61] "kube-proxy-r9fkt" [10eef751-51d7-4794-9805-26587a395a5b] Running
	I0603 12:07:09.002019   73179 system_pods.go:61] "kube-scheduler-no-preload-602118" [2032b4c9-ff95-4435-bbb2-ad6f87598555] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0603 12:07:09.002030   73179 system_pods.go:61] "metrics-server-569cc877fc-jgjzt" [ac1aac82-0d34-47e1-b9c5-4f1f501c8bd0] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0603 12:07:09.002035   73179 system_pods.go:61] "storage-provisioner" [6d38abd9-e1e6-4e71-b96f-4653971b511f] Running
	I0603 12:07:09.002044   73179 system_pods.go:74] duration metric: took 12.497722ms to wait for pod list to return data ...
	I0603 12:07:09.002059   73179 node_conditions.go:102] verifying NodePressure condition ...
	I0603 12:07:09.005347   73179 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0603 12:07:09.005374   73179 node_conditions.go:123] node cpu capacity is 2
	I0603 12:07:09.005394   73179 node_conditions.go:105] duration metric: took 3.3294ms to run NodePressure ...
	I0603 12:07:09.005414   73179 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0603 12:07:09.274344   73179 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0603 12:07:09.280021   73179 kubeadm.go:733] kubelet initialised
	I0603 12:07:09.280042   73179 kubeadm.go:734] duration metric: took 5.676641ms waiting for restarted kubelet to initialise ...
	I0603 12:07:09.280056   73179 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0603 12:07:09.285090   73179 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-pv665" in "kube-system" namespace to be "Ready" ...
	I0603 12:07:09.290457   73179 pod_ready.go:97] node "no-preload-602118" hosting pod "coredns-7db6d8ff4d-pv665" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-602118" has status "Ready":"False"
	I0603 12:07:09.290478   73179 pod_ready.go:81] duration metric: took 5.366255ms for pod "coredns-7db6d8ff4d-pv665" in "kube-system" namespace to be "Ready" ...
	E0603 12:07:09.290487   73179 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-602118" hosting pod "coredns-7db6d8ff4d-pv665" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-602118" has status "Ready":"False"
	I0603 12:07:09.290495   73179 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-602118" in "kube-system" namespace to be "Ready" ...
	I0603 12:07:09.296847   73179 pod_ready.go:97] node "no-preload-602118" hosting pod "etcd-no-preload-602118" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-602118" has status "Ready":"False"
	I0603 12:07:09.296872   73179 pod_ready.go:81] duration metric: took 6.368777ms for pod "etcd-no-preload-602118" in "kube-system" namespace to be "Ready" ...
	E0603 12:07:09.296883   73179 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-602118" hosting pod "etcd-no-preload-602118" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-602118" has status "Ready":"False"
	I0603 12:07:09.296895   73179 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-602118" in "kube-system" namespace to be "Ready" ...
	I0603 12:07:09.300895   73179 pod_ready.go:97] node "no-preload-602118" hosting pod "kube-apiserver-no-preload-602118" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-602118" has status "Ready":"False"
	I0603 12:07:09.300914   73179 pod_ready.go:81] duration metric: took 4.012614ms for pod "kube-apiserver-no-preload-602118" in "kube-system" namespace to be "Ready" ...
	E0603 12:07:09.300922   73179 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-602118" hosting pod "kube-apiserver-no-preload-602118" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-602118" has status "Ready":"False"
	I0603 12:07:09.300927   73179 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-602118" in "kube-system" namespace to be "Ready" ...
	I0603 12:07:09.394237   73179 pod_ready.go:97] node "no-preload-602118" hosting pod "kube-controller-manager-no-preload-602118" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-602118" has status "Ready":"False"
	I0603 12:07:09.394267   73179 pod_ready.go:81] duration metric: took 93.331406ms for pod "kube-controller-manager-no-preload-602118" in "kube-system" namespace to be "Ready" ...
	E0603 12:07:09.394280   73179 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-602118" hosting pod "kube-controller-manager-no-preload-602118" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-602118" has status "Ready":"False"
	I0603 12:07:09.394289   73179 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-r9fkt" in "kube-system" namespace to be "Ready" ...
	I0603 12:07:09.585502   73294 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0603 12:07:09.969462   73294 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0603 12:07:09.969522   73294 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0603 12:07:09.979025   73294 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0603 12:07:09.987866   73294 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0603 12:07:09.987920   73294 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0603 12:07:09.997090   73294 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0603 12:07:10.006350   73294 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0603 12:07:10.214287   73294 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0603 12:07:11.298009   73294 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.083680634s)
	I0603 12:07:11.298064   73294 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0603 12:07:11.562011   73294 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0603 12:07:11.680895   73294 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0603 12:07:11.790078   73294 api_server.go:52] waiting for apiserver process to appear ...
	I0603 12:07:11.790166   73294 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:07:12.291115   73294 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:07:12.790366   73294 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:07:12.840813   73294 api_server.go:72] duration metric: took 1.050741427s to wait for apiserver process to appear ...
	I0603 12:07:12.840845   73294 api_server.go:88] waiting for apiserver healthz status ...
	I0603 12:07:12.840869   73294 api_server.go:253] Checking apiserver healthz at https://192.168.61.60:8444/healthz ...
	I0603 12:07:12.841376   73294 api_server.go:269] stopped: https://192.168.61.60:8444/healthz: Get "https://192.168.61.60:8444/healthz": dial tcp 192.168.61.60:8444: connect: connection refused
	I0603 12:07:13.341000   73294 api_server.go:253] Checking apiserver healthz at https://192.168.61.60:8444/healthz ...
	I0603 12:07:10.487141   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | domain old-k8s-version-905554 has defined MAC address 52:54:00:3d:ed:07 in network mk-old-k8s-version-905554
	I0603 12:07:10.564570   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | unable to find current IP address of domain old-k8s-version-905554 in network mk-old-k8s-version-905554
	I0603 12:07:10.564611   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | I0603 12:07:10.487617   74674 retry.go:31] will retry after 1.948227414s: waiting for machine to come up
	I0603 12:07:12.438091   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | domain old-k8s-version-905554 has defined MAC address 52:54:00:3d:ed:07 in network mk-old-k8s-version-905554
	I0603 12:07:12.438596   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | unable to find current IP address of domain old-k8s-version-905554 in network mk-old-k8s-version-905554
	I0603 12:07:12.438620   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | I0603 12:07:12.438540   74674 retry.go:31] will retry after 2.378980516s: waiting for machine to come up
	I0603 12:07:14.819161   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | domain old-k8s-version-905554 has defined MAC address 52:54:00:3d:ed:07 in network mk-old-k8s-version-905554
	I0603 12:07:14.819782   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | unable to find current IP address of domain old-k8s-version-905554 in network mk-old-k8s-version-905554
	I0603 12:07:14.819806   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | I0603 12:07:14.819722   74674 retry.go:31] will retry after 2.362614226s: waiting for machine to come up
	I0603 12:07:11.067879   73179 pod_ready.go:92] pod "kube-proxy-r9fkt" in "kube-system" namespace has status "Ready":"True"
	I0603 12:07:11.067907   73179 pod_ready.go:81] duration metric: took 1.673607925s for pod "kube-proxy-r9fkt" in "kube-system" namespace to be "Ready" ...
	I0603 12:07:11.067922   73179 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-602118" in "kube-system" namespace to be "Ready" ...
	I0603 12:07:13.078490   73179 pod_ready.go:102] pod "kube-scheduler-no-preload-602118" in "kube-system" namespace has status "Ready":"False"
	I0603 12:07:15.451457   73294 api_server.go:279] https://192.168.61.60:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0603 12:07:15.451491   73294 api_server.go:103] status: https://192.168.61.60:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0603 12:07:15.451509   73294 api_server.go:253] Checking apiserver healthz at https://192.168.61.60:8444/healthz ...
	I0603 12:07:15.474239   73294 api_server.go:279] https://192.168.61.60:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0603 12:07:15.474272   73294 api_server.go:103] status: https://192.168.61.60:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0603 12:07:15.841786   73294 api_server.go:253] Checking apiserver healthz at https://192.168.61.60:8444/healthz ...
	I0603 12:07:15.846026   73294 api_server.go:279] https://192.168.61.60:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0603 12:07:15.846051   73294 api_server.go:103] status: https://192.168.61.60:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0603 12:07:16.341687   73294 api_server.go:253] Checking apiserver healthz at https://192.168.61.60:8444/healthz ...
	I0603 12:07:16.348062   73294 api_server.go:279] https://192.168.61.60:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0603 12:07:16.348097   73294 api_server.go:103] status: https://192.168.61.60:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0603 12:07:16.841677   73294 api_server.go:253] Checking apiserver healthz at https://192.168.61.60:8444/healthz ...
	I0603 12:07:16.851931   73294 api_server.go:279] https://192.168.61.60:8444/healthz returned 200:
	ok
	I0603 12:07:16.861724   73294 api_server.go:141] control plane version: v1.30.1
	I0603 12:07:16.861752   73294 api_server.go:131] duration metric: took 4.020899633s to wait for apiserver health ...
	I0603 12:07:16.861762   73294 cni.go:84] Creating CNI manager for ""
	I0603 12:07:16.861782   73294 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0603 12:07:16.863553   73294 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0603 12:07:16.864875   73294 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0603 12:07:16.875581   73294 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0603 12:07:16.895092   73294 system_pods.go:43] waiting for kube-system pods to appear ...
	I0603 12:07:16.906573   73294 system_pods.go:59] 8 kube-system pods found
	I0603 12:07:16.906609   73294 system_pods.go:61] "coredns-7db6d8ff4d-wrw9f" [0125eb3a-9a5a-4bb3-a175-0e49b4392d1e] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0603 12:07:16.906621   73294 system_pods.go:61] "etcd-default-k8s-diff-port-196710" [2189cad5-b6e7-4cc5-9ce8-22ba18abce59] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0603 12:07:16.906631   73294 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-196710" [1aee234a-8876-4594-a0d6-7c7dfb7f4d3e] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0603 12:07:16.906640   73294 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-196710" [18029d80-921c-477c-a82f-26eb1a068b97] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0603 12:07:16.906650   73294 system_pods.go:61] "kube-proxy-84l9f" [5568c7a8-5237-4240-a9dc-6436b156010c] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0603 12:07:16.906673   73294 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-196710" [9fafec03-b5fb-4ea4-98df-0798cd8a01a1] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0603 12:07:16.906681   73294 system_pods.go:61] "metrics-server-569cc877fc-tnhbj" [352fbe10-2f52-434e-91fc-84fbf439a42d] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0603 12:07:16.906690   73294 system_pods.go:61] "storage-provisioner" [24c5e290-d3d7-4523-9432-c7591fa95e18] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0603 12:07:16.906700   73294 system_pods.go:74] duration metric: took 11.592885ms to wait for pod list to return data ...
	I0603 12:07:16.906719   73294 node_conditions.go:102] verifying NodePressure condition ...
	I0603 12:07:16.910038   73294 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0603 12:07:16.910065   73294 node_conditions.go:123] node cpu capacity is 2
	I0603 12:07:16.910079   73294 node_conditions.go:105] duration metric: took 3.350705ms to run NodePressure ...
	I0603 12:07:16.910101   73294 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0603 12:07:17.203847   73294 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0603 12:07:17.208169   73294 kubeadm.go:733] kubelet initialised
	I0603 12:07:17.208196   73294 kubeadm.go:734] duration metric: took 4.31857ms waiting for restarted kubelet to initialise ...
	I0603 12:07:17.208206   73294 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0603 12:07:17.213480   73294 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-wrw9f" in "kube-system" namespace to be "Ready" ...
	I0603 12:07:17.227906   73294 pod_ready.go:97] node "default-k8s-diff-port-196710" hosting pod "coredns-7db6d8ff4d-wrw9f" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-196710" has status "Ready":"False"
	I0603 12:07:17.227931   73294 pod_ready.go:81] duration metric: took 14.426593ms for pod "coredns-7db6d8ff4d-wrw9f" in "kube-system" namespace to be "Ready" ...
	E0603 12:07:17.227941   73294 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-196710" hosting pod "coredns-7db6d8ff4d-wrw9f" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-196710" has status "Ready":"False"
	I0603 12:07:17.227949   73294 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-196710" in "kube-system" namespace to be "Ready" ...
	I0603 12:07:17.231837   73294 pod_ready.go:97] node "default-k8s-diff-port-196710" hosting pod "etcd-default-k8s-diff-port-196710" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-196710" has status "Ready":"False"
	I0603 12:07:17.231867   73294 pod_ready.go:81] duration metric: took 3.906779ms for pod "etcd-default-k8s-diff-port-196710" in "kube-system" namespace to be "Ready" ...
	E0603 12:07:17.231881   73294 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-196710" hosting pod "etcd-default-k8s-diff-port-196710" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-196710" has status "Ready":"False"
	I0603 12:07:17.231890   73294 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-196710" in "kube-system" namespace to be "Ready" ...
	I0603 12:07:17.238497   73294 pod_ready.go:97] node "default-k8s-diff-port-196710" hosting pod "kube-apiserver-default-k8s-diff-port-196710" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-196710" has status "Ready":"False"
	I0603 12:07:17.238525   73294 pod_ready.go:81] duration metric: took 6.62644ms for pod "kube-apiserver-default-k8s-diff-port-196710" in "kube-system" namespace to be "Ready" ...
	E0603 12:07:17.238537   73294 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-196710" hosting pod "kube-apiserver-default-k8s-diff-port-196710" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-196710" has status "Ready":"False"
	I0603 12:07:17.238557   73294 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-196710" in "kube-system" namespace to be "Ready" ...
	I0603 12:07:17.298265   73294 pod_ready.go:97] node "default-k8s-diff-port-196710" hosting pod "kube-controller-manager-default-k8s-diff-port-196710" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-196710" has status "Ready":"False"
	I0603 12:07:17.298293   73294 pod_ready.go:81] duration metric: took 59.722372ms for pod "kube-controller-manager-default-k8s-diff-port-196710" in "kube-system" namespace to be "Ready" ...
	E0603 12:07:17.298303   73294 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-196710" hosting pod "kube-controller-manager-default-k8s-diff-port-196710" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-196710" has status "Ready":"False"
	I0603 12:07:17.298310   73294 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-84l9f" in "kube-system" namespace to be "Ready" ...
	I0603 12:07:18.098358   73294 pod_ready.go:92] pod "kube-proxy-84l9f" in "kube-system" namespace has status "Ready":"True"
	I0603 12:07:18.098388   73294 pod_ready.go:81] duration metric: took 800.069928ms for pod "kube-proxy-84l9f" in "kube-system" namespace to be "Ready" ...
	I0603 12:07:18.098401   73294 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-196710" in "kube-system" namespace to be "Ready" ...
	I0603 12:07:17.184410   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | domain old-k8s-version-905554 has defined MAC address 52:54:00:3d:ed:07 in network mk-old-k8s-version-905554
	I0603 12:07:17.184937   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | unable to find current IP address of domain old-k8s-version-905554 in network mk-old-k8s-version-905554
	I0603 12:07:17.184967   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | I0603 12:07:17.184893   74674 retry.go:31] will retry after 3.787322948s: waiting for machine to come up
	I0603 12:07:15.574365   73179 pod_ready.go:102] pod "kube-scheduler-no-preload-602118" in "kube-system" namespace has status "Ready":"False"
	I0603 12:07:17.575261   73179 pod_ready.go:102] pod "kube-scheduler-no-preload-602118" in "kube-system" namespace has status "Ready":"False"
	I0603 12:07:20.073582   73179 pod_ready.go:102] pod "kube-scheduler-no-preload-602118" in "kube-system" namespace has status "Ready":"False"
	I0603 12:07:22.423964   72964 start.go:364] duration metric: took 54.978859199s to acquireMachinesLock for "embed-certs-725022"
	I0603 12:07:22.424033   72964 start.go:96] Skipping create...Using existing machine configuration
	I0603 12:07:22.424044   72964 fix.go:54] fixHost starting: 
	I0603 12:07:22.424484   72964 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 12:07:22.424521   72964 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 12:07:22.446913   72964 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45395
	I0603 12:07:22.447356   72964 main.go:141] libmachine: () Calling .GetVersion
	I0603 12:07:22.447895   72964 main.go:141] libmachine: Using API Version  1
	I0603 12:07:22.447926   72964 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 12:07:22.448408   72964 main.go:141] libmachine: () Calling .GetMachineName
	I0603 12:07:22.448648   72964 main.go:141] libmachine: (embed-certs-725022) Calling .DriverName
	I0603 12:07:22.448838   72964 main.go:141] libmachine: (embed-certs-725022) Calling .GetState
	I0603 12:07:22.450953   72964 fix.go:112] recreateIfNeeded on embed-certs-725022: state=Stopped err=<nil>
	I0603 12:07:22.450977   72964 main.go:141] libmachine: (embed-certs-725022) Calling .DriverName
	W0603 12:07:22.451199   72964 fix.go:138] unexpected machine state, will restart: <nil>
	I0603 12:07:22.513348   72964 out.go:177] * Restarting existing kvm2 VM for "embed-certs-725022" ...
	I0603 12:07:20.975695   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | domain old-k8s-version-905554 has defined MAC address 52:54:00:3d:ed:07 in network mk-old-k8s-version-905554
	I0603 12:07:20.976290   73662 main.go:141] libmachine: (old-k8s-version-905554) Found IP for machine: 192.168.39.155
	I0603 12:07:20.976345   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | domain old-k8s-version-905554 has current primary IP address 192.168.39.155 and MAC address 52:54:00:3d:ed:07 in network mk-old-k8s-version-905554
	I0603 12:07:20.976358   73662 main.go:141] libmachine: (old-k8s-version-905554) Reserving static IP address...
	I0603 12:07:20.976837   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | found host DHCP lease matching {name: "old-k8s-version-905554", mac: "52:54:00:3d:ed:07", ip: "192.168.39.155"} in network mk-old-k8s-version-905554: {Iface:virbr1 ExpiryTime:2024-06-03 13:07:14 +0000 UTC Type:0 Mac:52:54:00:3d:ed:07 Iaid: IPaddr:192.168.39.155 Prefix:24 Hostname:old-k8s-version-905554 Clientid:01:52:54:00:3d:ed:07}
	I0603 12:07:20.976864   73662 main.go:141] libmachine: (old-k8s-version-905554) Reserved static IP address: 192.168.39.155
	I0603 12:07:20.976883   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | skip adding static IP to network mk-old-k8s-version-905554 - found existing host DHCP lease matching {name: "old-k8s-version-905554", mac: "52:54:00:3d:ed:07", ip: "192.168.39.155"}
	I0603 12:07:20.976894   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | Getting to WaitForSSH function...
	I0603 12:07:20.976902   73662 main.go:141] libmachine: (old-k8s-version-905554) Waiting for SSH to be available...
	I0603 12:07:20.978969   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | domain old-k8s-version-905554 has defined MAC address 52:54:00:3d:ed:07 in network mk-old-k8s-version-905554
	I0603 12:07:20.979326   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:ed:07", ip: ""} in network mk-old-k8s-version-905554: {Iface:virbr1 ExpiryTime:2024-06-03 13:07:14 +0000 UTC Type:0 Mac:52:54:00:3d:ed:07 Iaid: IPaddr:192.168.39.155 Prefix:24 Hostname:old-k8s-version-905554 Clientid:01:52:54:00:3d:ed:07}
	I0603 12:07:20.979361   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | domain old-k8s-version-905554 has defined IP address 192.168.39.155 and MAC address 52:54:00:3d:ed:07 in network mk-old-k8s-version-905554
	I0603 12:07:20.979458   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | Using SSH client type: external
	I0603 12:07:20.979488   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | Using SSH private key: /home/jenkins/minikube-integration/19008-7755/.minikube/machines/old-k8s-version-905554/id_rsa (-rw-------)
	I0603 12:07:20.979525   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.155 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19008-7755/.minikube/machines/old-k8s-version-905554/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0603 12:07:20.979540   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | About to run SSH command:
	I0603 12:07:20.979564   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | exit 0
	I0603 12:07:21.103178   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | SSH cmd err, output: <nil>: 
	I0603 12:07:21.103557   73662 main.go:141] libmachine: (old-k8s-version-905554) Calling .GetConfigRaw
	I0603 12:07:21.104215   73662 main.go:141] libmachine: (old-k8s-version-905554) Calling .GetIP
	I0603 12:07:21.107017   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | domain old-k8s-version-905554 has defined MAC address 52:54:00:3d:ed:07 in network mk-old-k8s-version-905554
	I0603 12:07:21.107397   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:ed:07", ip: ""} in network mk-old-k8s-version-905554: {Iface:virbr1 ExpiryTime:2024-06-03 13:07:14 +0000 UTC Type:0 Mac:52:54:00:3d:ed:07 Iaid: IPaddr:192.168.39.155 Prefix:24 Hostname:old-k8s-version-905554 Clientid:01:52:54:00:3d:ed:07}
	I0603 12:07:21.107424   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | domain old-k8s-version-905554 has defined IP address 192.168.39.155 and MAC address 52:54:00:3d:ed:07 in network mk-old-k8s-version-905554
	I0603 12:07:21.107619   73662 profile.go:143] Saving config to /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/old-k8s-version-905554/config.json ...
	I0603 12:07:21.107782   73662 machine.go:94] provisionDockerMachine start ...
	I0603 12:07:21.107809   73662 main.go:141] libmachine: (old-k8s-version-905554) Calling .DriverName
	I0603 12:07:21.107979   73662 main.go:141] libmachine: (old-k8s-version-905554) Calling .GetSSHHostname
	I0603 12:07:21.110021   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | domain old-k8s-version-905554 has defined MAC address 52:54:00:3d:ed:07 in network mk-old-k8s-version-905554
	I0603 12:07:21.110389   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:ed:07", ip: ""} in network mk-old-k8s-version-905554: {Iface:virbr1 ExpiryTime:2024-06-03 13:07:14 +0000 UTC Type:0 Mac:52:54:00:3d:ed:07 Iaid: IPaddr:192.168.39.155 Prefix:24 Hostname:old-k8s-version-905554 Clientid:01:52:54:00:3d:ed:07}
	I0603 12:07:21.110414   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | domain old-k8s-version-905554 has defined IP address 192.168.39.155 and MAC address 52:54:00:3d:ed:07 in network mk-old-k8s-version-905554
	I0603 12:07:21.110540   73662 main.go:141] libmachine: (old-k8s-version-905554) Calling .GetSSHPort
	I0603 12:07:21.110719   73662 main.go:141] libmachine: (old-k8s-version-905554) Calling .GetSSHKeyPath
	I0603 12:07:21.110880   73662 main.go:141] libmachine: (old-k8s-version-905554) Calling .GetSSHKeyPath
	I0603 12:07:21.111026   73662 main.go:141] libmachine: (old-k8s-version-905554) Calling .GetSSHUsername
	I0603 12:07:21.111239   73662 main.go:141] libmachine: Using SSH client type: native
	I0603 12:07:21.111467   73662 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.155 22 <nil> <nil>}
	I0603 12:07:21.111484   73662 main.go:141] libmachine: About to run SSH command:
	hostname
	I0603 12:07:21.219123   73662 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0603 12:07:21.219148   73662 main.go:141] libmachine: (old-k8s-version-905554) Calling .GetMachineName
	I0603 12:07:21.219379   73662 buildroot.go:166] provisioning hostname "old-k8s-version-905554"
	I0603 12:07:21.219403   73662 main.go:141] libmachine: (old-k8s-version-905554) Calling .GetMachineName
	I0603 12:07:21.219571   73662 main.go:141] libmachine: (old-k8s-version-905554) Calling .GetSSHHostname
	I0603 12:07:21.222603   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | domain old-k8s-version-905554 has defined MAC address 52:54:00:3d:ed:07 in network mk-old-k8s-version-905554
	I0603 12:07:21.223000   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:ed:07", ip: ""} in network mk-old-k8s-version-905554: {Iface:virbr1 ExpiryTime:2024-06-03 13:07:14 +0000 UTC Type:0 Mac:52:54:00:3d:ed:07 Iaid: IPaddr:192.168.39.155 Prefix:24 Hostname:old-k8s-version-905554 Clientid:01:52:54:00:3d:ed:07}
	I0603 12:07:21.223058   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | domain old-k8s-version-905554 has defined IP address 192.168.39.155 and MAC address 52:54:00:3d:ed:07 in network mk-old-k8s-version-905554
	I0603 12:07:21.223210   73662 main.go:141] libmachine: (old-k8s-version-905554) Calling .GetSSHPort
	I0603 12:07:21.223406   73662 main.go:141] libmachine: (old-k8s-version-905554) Calling .GetSSHKeyPath
	I0603 12:07:21.223573   73662 main.go:141] libmachine: (old-k8s-version-905554) Calling .GetSSHKeyPath
	I0603 12:07:21.223741   73662 main.go:141] libmachine: (old-k8s-version-905554) Calling .GetSSHUsername
	I0603 12:07:21.223926   73662 main.go:141] libmachine: Using SSH client type: native
	I0603 12:07:21.224087   73662 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.155 22 <nil> <nil>}
	I0603 12:07:21.224099   73662 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-905554 && echo "old-k8s-version-905554" | sudo tee /etc/hostname
	I0603 12:07:21.346108   73662 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-905554
	
	I0603 12:07:21.346135   73662 main.go:141] libmachine: (old-k8s-version-905554) Calling .GetSSHHostname
	I0603 12:07:21.348801   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | domain old-k8s-version-905554 has defined MAC address 52:54:00:3d:ed:07 in network mk-old-k8s-version-905554
	I0603 12:07:21.349099   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:ed:07", ip: ""} in network mk-old-k8s-version-905554: {Iface:virbr1 ExpiryTime:2024-06-03 13:07:14 +0000 UTC Type:0 Mac:52:54:00:3d:ed:07 Iaid: IPaddr:192.168.39.155 Prefix:24 Hostname:old-k8s-version-905554 Clientid:01:52:54:00:3d:ed:07}
	I0603 12:07:21.349129   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | domain old-k8s-version-905554 has defined IP address 192.168.39.155 and MAC address 52:54:00:3d:ed:07 in network mk-old-k8s-version-905554
	I0603 12:07:21.349295   73662 main.go:141] libmachine: (old-k8s-version-905554) Calling .GetSSHPort
	I0603 12:07:21.349498   73662 main.go:141] libmachine: (old-k8s-version-905554) Calling .GetSSHKeyPath
	I0603 12:07:21.349680   73662 main.go:141] libmachine: (old-k8s-version-905554) Calling .GetSSHKeyPath
	I0603 12:07:21.349849   73662 main.go:141] libmachine: (old-k8s-version-905554) Calling .GetSSHUsername
	I0603 12:07:21.350036   73662 main.go:141] libmachine: Using SSH client type: native
	I0603 12:07:21.350187   73662 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.155 22 <nil> <nil>}
	I0603 12:07:21.350204   73662 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-905554' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-905554/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-905554' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0603 12:07:21.467941   73662 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0603 12:07:21.467970   73662 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19008-7755/.minikube CaCertPath:/home/jenkins/minikube-integration/19008-7755/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19008-7755/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19008-7755/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19008-7755/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19008-7755/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19008-7755/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19008-7755/.minikube}
	I0603 12:07:21.467999   73662 buildroot.go:174] setting up certificates
	I0603 12:07:21.468008   73662 provision.go:84] configureAuth start
	I0603 12:07:21.468021   73662 main.go:141] libmachine: (old-k8s-version-905554) Calling .GetMachineName
	I0603 12:07:21.468308   73662 main.go:141] libmachine: (old-k8s-version-905554) Calling .GetIP
	I0603 12:07:21.470801   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | domain old-k8s-version-905554 has defined MAC address 52:54:00:3d:ed:07 in network mk-old-k8s-version-905554
	I0603 12:07:21.471158   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:ed:07", ip: ""} in network mk-old-k8s-version-905554: {Iface:virbr1 ExpiryTime:2024-06-03 13:07:14 +0000 UTC Type:0 Mac:52:54:00:3d:ed:07 Iaid: IPaddr:192.168.39.155 Prefix:24 Hostname:old-k8s-version-905554 Clientid:01:52:54:00:3d:ed:07}
	I0603 12:07:21.471185   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | domain old-k8s-version-905554 has defined IP address 192.168.39.155 and MAC address 52:54:00:3d:ed:07 in network mk-old-k8s-version-905554
	I0603 12:07:21.471336   73662 main.go:141] libmachine: (old-k8s-version-905554) Calling .GetSSHHostname
	I0603 12:07:21.473733   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | domain old-k8s-version-905554 has defined MAC address 52:54:00:3d:ed:07 in network mk-old-k8s-version-905554
	I0603 12:07:21.474058   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:ed:07", ip: ""} in network mk-old-k8s-version-905554: {Iface:virbr1 ExpiryTime:2024-06-03 13:07:14 +0000 UTC Type:0 Mac:52:54:00:3d:ed:07 Iaid: IPaddr:192.168.39.155 Prefix:24 Hostname:old-k8s-version-905554 Clientid:01:52:54:00:3d:ed:07}
	I0603 12:07:21.474092   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | domain old-k8s-version-905554 has defined IP address 192.168.39.155 and MAC address 52:54:00:3d:ed:07 in network mk-old-k8s-version-905554
	I0603 12:07:21.474276   73662 provision.go:143] copyHostCerts
	I0603 12:07:21.474355   73662 exec_runner.go:144] found /home/jenkins/minikube-integration/19008-7755/.minikube/ca.pem, removing ...
	I0603 12:07:21.474370   73662 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19008-7755/.minikube/ca.pem
	I0603 12:07:21.474429   73662 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19008-7755/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19008-7755/.minikube/ca.pem (1082 bytes)
	I0603 12:07:21.474534   73662 exec_runner.go:144] found /home/jenkins/minikube-integration/19008-7755/.minikube/cert.pem, removing ...
	I0603 12:07:21.474546   73662 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19008-7755/.minikube/cert.pem
	I0603 12:07:21.474577   73662 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19008-7755/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19008-7755/.minikube/cert.pem (1123 bytes)
	I0603 12:07:21.474645   73662 exec_runner.go:144] found /home/jenkins/minikube-integration/19008-7755/.minikube/key.pem, removing ...
	I0603 12:07:21.474654   73662 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19008-7755/.minikube/key.pem
	I0603 12:07:21.474680   73662 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19008-7755/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19008-7755/.minikube/key.pem (1679 bytes)
	I0603 12:07:21.474738   73662 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19008-7755/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19008-7755/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19008-7755/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-905554 san=[127.0.0.1 192.168.39.155 localhost minikube old-k8s-version-905554]
	I0603 12:07:21.720184   73662 provision.go:177] copyRemoteCerts
	I0603 12:07:21.720251   73662 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0603 12:07:21.720284   73662 main.go:141] libmachine: (old-k8s-version-905554) Calling .GetSSHHostname
	I0603 12:07:21.723338   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | domain old-k8s-version-905554 has defined MAC address 52:54:00:3d:ed:07 in network mk-old-k8s-version-905554
	I0603 12:07:21.723752   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:ed:07", ip: ""} in network mk-old-k8s-version-905554: {Iface:virbr1 ExpiryTime:2024-06-03 13:07:14 +0000 UTC Type:0 Mac:52:54:00:3d:ed:07 Iaid: IPaddr:192.168.39.155 Prefix:24 Hostname:old-k8s-version-905554 Clientid:01:52:54:00:3d:ed:07}
	I0603 12:07:21.723786   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | domain old-k8s-version-905554 has defined IP address 192.168.39.155 and MAC address 52:54:00:3d:ed:07 in network mk-old-k8s-version-905554
	I0603 12:07:21.723993   73662 main.go:141] libmachine: (old-k8s-version-905554) Calling .GetSSHPort
	I0603 12:07:21.724208   73662 main.go:141] libmachine: (old-k8s-version-905554) Calling .GetSSHKeyPath
	I0603 12:07:21.724394   73662 main.go:141] libmachine: (old-k8s-version-905554) Calling .GetSSHUsername
	I0603 12:07:21.724615   73662 sshutil.go:53] new ssh client: &{IP:192.168.39.155 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19008-7755/.minikube/machines/old-k8s-version-905554/id_rsa Username:docker}
	I0603 12:07:21.809640   73662 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0603 12:07:21.834750   73662 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0603 12:07:21.858691   73662 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0603 12:07:21.887839   73662 provision.go:87] duration metric: took 419.817381ms to configureAuth
	I0603 12:07:21.887871   73662 buildroot.go:189] setting minikube options for container-runtime
	I0603 12:07:21.888061   73662 config.go:182] Loaded profile config "old-k8s-version-905554": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0603 12:07:21.888145   73662 main.go:141] libmachine: (old-k8s-version-905554) Calling .GetSSHHostname
	I0603 12:07:21.891350   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | domain old-k8s-version-905554 has defined MAC address 52:54:00:3d:ed:07 in network mk-old-k8s-version-905554
	I0603 12:07:21.891737   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:ed:07", ip: ""} in network mk-old-k8s-version-905554: {Iface:virbr1 ExpiryTime:2024-06-03 13:07:14 +0000 UTC Type:0 Mac:52:54:00:3d:ed:07 Iaid: IPaddr:192.168.39.155 Prefix:24 Hostname:old-k8s-version-905554 Clientid:01:52:54:00:3d:ed:07}
	I0603 12:07:21.891773   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | domain old-k8s-version-905554 has defined IP address 192.168.39.155 and MAC address 52:54:00:3d:ed:07 in network mk-old-k8s-version-905554
	I0603 12:07:21.891933   73662 main.go:141] libmachine: (old-k8s-version-905554) Calling .GetSSHPort
	I0603 12:07:21.892084   73662 main.go:141] libmachine: (old-k8s-version-905554) Calling .GetSSHKeyPath
	I0603 12:07:21.892278   73662 main.go:141] libmachine: (old-k8s-version-905554) Calling .GetSSHKeyPath
	I0603 12:07:21.892447   73662 main.go:141] libmachine: (old-k8s-version-905554) Calling .GetSSHUsername
	I0603 12:07:21.892608   73662 main.go:141] libmachine: Using SSH client type: native
	I0603 12:07:21.892822   73662 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.155 22 <nil> <nil>}
	I0603 12:07:21.892845   73662 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0603 12:07:22.173662   73662 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0603 12:07:22.173691   73662 machine.go:97] duration metric: took 1.065894044s to provisionDockerMachine
	I0603 12:07:22.173705   73662 start.go:293] postStartSetup for "old-k8s-version-905554" (driver="kvm2")
	I0603 12:07:22.173718   73662 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0603 12:07:22.173738   73662 main.go:141] libmachine: (old-k8s-version-905554) Calling .DriverName
	I0603 12:07:22.174119   73662 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0603 12:07:22.174154   73662 main.go:141] libmachine: (old-k8s-version-905554) Calling .GetSSHHostname
	I0603 12:07:22.176861   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | domain old-k8s-version-905554 has defined MAC address 52:54:00:3d:ed:07 in network mk-old-k8s-version-905554
	I0603 12:07:22.177152   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:ed:07", ip: ""} in network mk-old-k8s-version-905554: {Iface:virbr1 ExpiryTime:2024-06-03 13:07:14 +0000 UTC Type:0 Mac:52:54:00:3d:ed:07 Iaid: IPaddr:192.168.39.155 Prefix:24 Hostname:old-k8s-version-905554 Clientid:01:52:54:00:3d:ed:07}
	I0603 12:07:22.177184   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | domain old-k8s-version-905554 has defined IP address 192.168.39.155 and MAC address 52:54:00:3d:ed:07 in network mk-old-k8s-version-905554
	I0603 12:07:22.177325   73662 main.go:141] libmachine: (old-k8s-version-905554) Calling .GetSSHPort
	I0603 12:07:22.177505   73662 main.go:141] libmachine: (old-k8s-version-905554) Calling .GetSSHKeyPath
	I0603 12:07:22.177632   73662 main.go:141] libmachine: (old-k8s-version-905554) Calling .GetSSHUsername
	I0603 12:07:22.177764   73662 sshutil.go:53] new ssh client: &{IP:192.168.39.155 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19008-7755/.minikube/machines/old-k8s-version-905554/id_rsa Username:docker}
	I0603 12:07:22.263119   73662 ssh_runner.go:195] Run: cat /etc/os-release
	I0603 12:07:22.269815   73662 info.go:137] Remote host: Buildroot 2023.02.9
	I0603 12:07:22.269844   73662 filesync.go:126] Scanning /home/jenkins/minikube-integration/19008-7755/.minikube/addons for local assets ...
	I0603 12:07:22.269937   73662 filesync.go:126] Scanning /home/jenkins/minikube-integration/19008-7755/.minikube/files for local assets ...
	I0603 12:07:22.270041   73662 filesync.go:149] local asset: /home/jenkins/minikube-integration/19008-7755/.minikube/files/etc/ssl/certs/150282.pem -> 150282.pem in /etc/ssl/certs
	I0603 12:07:22.270320   73662 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0603 12:07:22.284032   73662 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/files/etc/ssl/certs/150282.pem --> /etc/ssl/certs/150282.pem (1708 bytes)
	I0603 12:07:22.309226   73662 start.go:296] duration metric: took 135.507592ms for postStartSetup
	I0603 12:07:22.309267   73662 fix.go:56] duration metric: took 19.425215079s for fixHost
	I0603 12:07:22.309291   73662 main.go:141] libmachine: (old-k8s-version-905554) Calling .GetSSHHostname
	I0603 12:07:22.311759   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | domain old-k8s-version-905554 has defined MAC address 52:54:00:3d:ed:07 in network mk-old-k8s-version-905554
	I0603 12:07:22.312031   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:ed:07", ip: ""} in network mk-old-k8s-version-905554: {Iface:virbr1 ExpiryTime:2024-06-03 13:07:14 +0000 UTC Type:0 Mac:52:54:00:3d:ed:07 Iaid: IPaddr:192.168.39.155 Prefix:24 Hostname:old-k8s-version-905554 Clientid:01:52:54:00:3d:ed:07}
	I0603 12:07:22.312062   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | domain old-k8s-version-905554 has defined IP address 192.168.39.155 and MAC address 52:54:00:3d:ed:07 in network mk-old-k8s-version-905554
	I0603 12:07:22.312244   73662 main.go:141] libmachine: (old-k8s-version-905554) Calling .GetSSHPort
	I0603 12:07:22.312436   73662 main.go:141] libmachine: (old-k8s-version-905554) Calling .GetSSHKeyPath
	I0603 12:07:22.312602   73662 main.go:141] libmachine: (old-k8s-version-905554) Calling .GetSSHKeyPath
	I0603 12:07:22.312740   73662 main.go:141] libmachine: (old-k8s-version-905554) Calling .GetSSHUsername
	I0603 12:07:22.312877   73662 main.go:141] libmachine: Using SSH client type: native
	I0603 12:07:22.313072   73662 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.155 22 <nil> <nil>}
	I0603 12:07:22.313088   73662 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0603 12:07:22.423838   73662 main.go:141] libmachine: SSH cmd err, output: <nil>: 1717416442.379680785
	
	I0603 12:07:22.423857   73662 fix.go:216] guest clock: 1717416442.379680785
	I0603 12:07:22.423864   73662 fix.go:229] Guest: 2024-06-03 12:07:22.379680785 +0000 UTC Remote: 2024-06-03 12:07:22.30927263 +0000 UTC m=+262.252197630 (delta=70.408155ms)
	I0603 12:07:22.423886   73662 fix.go:200] guest clock delta is within tolerance: 70.408155ms
	I0603 12:07:22.423892   73662 start.go:83] releasing machines lock for "old-k8s-version-905554", held for 19.539865965s
	I0603 12:07:22.423927   73662 main.go:141] libmachine: (old-k8s-version-905554) Calling .DriverName
	I0603 12:07:22.424202   73662 main.go:141] libmachine: (old-k8s-version-905554) Calling .GetIP
	I0603 12:07:22.427358   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | domain old-k8s-version-905554 has defined MAC address 52:54:00:3d:ed:07 in network mk-old-k8s-version-905554
	I0603 12:07:22.427799   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:ed:07", ip: ""} in network mk-old-k8s-version-905554: {Iface:virbr1 ExpiryTime:2024-06-03 13:07:14 +0000 UTC Type:0 Mac:52:54:00:3d:ed:07 Iaid: IPaddr:192.168.39.155 Prefix:24 Hostname:old-k8s-version-905554 Clientid:01:52:54:00:3d:ed:07}
	I0603 12:07:22.427833   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | domain old-k8s-version-905554 has defined IP address 192.168.39.155 and MAC address 52:54:00:3d:ed:07 in network mk-old-k8s-version-905554
	I0603 12:07:22.428006   73662 main.go:141] libmachine: (old-k8s-version-905554) Calling .DriverName
	I0603 12:07:22.428619   73662 main.go:141] libmachine: (old-k8s-version-905554) Calling .DriverName
	I0603 12:07:22.428817   73662 main.go:141] libmachine: (old-k8s-version-905554) Calling .DriverName
	I0603 12:07:22.428898   73662 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0603 12:07:22.428951   73662 main.go:141] libmachine: (old-k8s-version-905554) Calling .GetSSHHostname
	I0603 12:07:22.429242   73662 ssh_runner.go:195] Run: cat /version.json
	I0603 12:07:22.429269   73662 main.go:141] libmachine: (old-k8s-version-905554) Calling .GetSSHHostname
	I0603 12:07:22.431998   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | domain old-k8s-version-905554 has defined MAC address 52:54:00:3d:ed:07 in network mk-old-k8s-version-905554
	I0603 12:07:22.432244   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | domain old-k8s-version-905554 has defined MAC address 52:54:00:3d:ed:07 in network mk-old-k8s-version-905554
	I0603 12:07:22.432333   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:ed:07", ip: ""} in network mk-old-k8s-version-905554: {Iface:virbr1 ExpiryTime:2024-06-03 13:07:14 +0000 UTC Type:0 Mac:52:54:00:3d:ed:07 Iaid: IPaddr:192.168.39.155 Prefix:24 Hostname:old-k8s-version-905554 Clientid:01:52:54:00:3d:ed:07}
	I0603 12:07:22.432365   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | domain old-k8s-version-905554 has defined IP address 192.168.39.155 and MAC address 52:54:00:3d:ed:07 in network mk-old-k8s-version-905554
	I0603 12:07:22.432608   73662 main.go:141] libmachine: (old-k8s-version-905554) Calling .GetSSHPort
	I0603 12:07:22.432779   73662 main.go:141] libmachine: (old-k8s-version-905554) Calling .GetSSHKeyPath
	I0603 12:07:22.432797   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:ed:07", ip: ""} in network mk-old-k8s-version-905554: {Iface:virbr1 ExpiryTime:2024-06-03 13:07:14 +0000 UTC Type:0 Mac:52:54:00:3d:ed:07 Iaid: IPaddr:192.168.39.155 Prefix:24 Hostname:old-k8s-version-905554 Clientid:01:52:54:00:3d:ed:07}
	I0603 12:07:22.432818   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | domain old-k8s-version-905554 has defined IP address 192.168.39.155 and MAC address 52:54:00:3d:ed:07 in network mk-old-k8s-version-905554
	I0603 12:07:22.433032   73662 main.go:141] libmachine: (old-k8s-version-905554) Calling .GetSSHPort
	I0603 12:07:22.433044   73662 main.go:141] libmachine: (old-k8s-version-905554) Calling .GetSSHUsername
	I0603 12:07:22.433244   73662 sshutil.go:53] new ssh client: &{IP:192.168.39.155 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19008-7755/.minikube/machines/old-k8s-version-905554/id_rsa Username:docker}
	I0603 12:07:22.433260   73662 main.go:141] libmachine: (old-k8s-version-905554) Calling .GetSSHKeyPath
	I0603 12:07:22.433489   73662 main.go:141] libmachine: (old-k8s-version-905554) Calling .GetSSHUsername
	I0603 12:07:22.433629   73662 sshutil.go:53] new ssh client: &{IP:192.168.39.155 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19008-7755/.minikube/machines/old-k8s-version-905554/id_rsa Username:docker}
	I0603 12:07:22.512743   73662 ssh_runner.go:195] Run: systemctl --version
	I0603 12:07:22.538343   73662 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0603 12:07:22.691125   73662 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0603 12:07:22.697547   73662 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0603 12:07:22.697594   73662 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0603 12:07:22.714213   73662 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0603 12:07:22.714237   73662 start.go:494] detecting cgroup driver to use...
	I0603 12:07:22.714302   73662 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0603 12:07:22.735173   73662 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0603 12:07:22.749345   73662 docker.go:217] disabling cri-docker service (if available) ...
	I0603 12:07:22.749403   73662 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0603 12:07:22.763133   73662 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0603 12:07:22.776844   73662 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0603 12:07:22.906859   73662 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0603 12:07:23.071700   73662 docker.go:233] disabling docker service ...
	I0603 12:07:23.071767   73662 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0603 12:07:23.088439   73662 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0603 12:07:23.102097   73662 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0603 12:07:23.238693   73662 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0603 12:07:23.390561   73662 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0603 12:07:23.410039   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0603 12:07:23.434983   73662 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0603 12:07:23.435125   73662 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 12:07:23.448358   73662 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0603 12:07:23.448430   73662 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 12:07:23.460973   73662 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 12:07:23.473384   73662 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 12:07:23.486096   73662 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0603 12:07:23.498744   73662 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0603 12:07:23.510913   73662 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0603 12:07:23.510968   73662 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0603 12:07:23.527740   73662 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0603 12:07:23.542547   73662 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0603 12:07:23.719963   73662 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0603 12:07:23.875772   73662 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0603 12:07:23.875843   73662 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0603 12:07:23.882164   73662 start.go:562] Will wait 60s for crictl version
	I0603 12:07:23.882250   73662 ssh_runner.go:195] Run: which crictl
	I0603 12:07:23.886841   73662 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0603 12:07:23.933867   73662 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0603 12:07:23.933952   73662 ssh_runner.go:195] Run: crio --version
	I0603 12:07:23.965258   73662 ssh_runner.go:195] Run: crio --version
	I0603 12:07:23.995457   73662 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0603 12:07:20.104355   73294 pod_ready.go:102] pod "kube-scheduler-default-k8s-diff-port-196710" in "kube-system" namespace has status "Ready":"False"
	I0603 12:07:22.104808   73294 pod_ready.go:102] pod "kube-scheduler-default-k8s-diff-port-196710" in "kube-system" namespace has status "Ready":"False"
	I0603 12:07:23.106090   73294 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-196710" in "kube-system" namespace has status "Ready":"True"
	I0603 12:07:23.106109   73294 pod_ready.go:81] duration metric: took 5.007700483s for pod "kube-scheduler-default-k8s-diff-port-196710" in "kube-system" namespace to be "Ready" ...
	I0603 12:07:23.106118   73294 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace to be "Ready" ...
	I0603 12:07:22.514715   72964 main.go:141] libmachine: (embed-certs-725022) Calling .Start
	I0603 12:07:22.514937   72964 main.go:141] libmachine: (embed-certs-725022) Ensuring networks are active...
	I0603 12:07:22.515826   72964 main.go:141] libmachine: (embed-certs-725022) Ensuring network default is active
	I0603 12:07:22.516261   72964 main.go:141] libmachine: (embed-certs-725022) Ensuring network mk-embed-certs-725022 is active
	I0603 12:07:22.516748   72964 main.go:141] libmachine: (embed-certs-725022) Getting domain xml...
	I0603 12:07:22.517639   72964 main.go:141] libmachine: (embed-certs-725022) Creating domain...
	I0603 12:07:23.858964   72964 main.go:141] libmachine: (embed-certs-725022) Waiting to get IP...
	I0603 12:07:23.859920   72964 main.go:141] libmachine: (embed-certs-725022) DBG | domain embed-certs-725022 has defined MAC address 52:54:00:ba:41:8c in network mk-embed-certs-725022
	I0603 12:07:23.860386   72964 main.go:141] libmachine: (embed-certs-725022) DBG | unable to find current IP address of domain embed-certs-725022 in network mk-embed-certs-725022
	I0603 12:07:23.860418   72964 main.go:141] libmachine: (embed-certs-725022) DBG | I0603 12:07:23.860352   74834 retry.go:31] will retry after 246.280691ms: waiting for machine to come up
	I0603 12:07:24.108680   72964 main.go:141] libmachine: (embed-certs-725022) DBG | domain embed-certs-725022 has defined MAC address 52:54:00:ba:41:8c in network mk-embed-certs-725022
	I0603 12:07:24.109222   72964 main.go:141] libmachine: (embed-certs-725022) DBG | unable to find current IP address of domain embed-certs-725022 in network mk-embed-certs-725022
	I0603 12:07:24.109349   72964 main.go:141] libmachine: (embed-certs-725022) DBG | I0603 12:07:24.109272   74834 retry.go:31] will retry after 291.625816ms: waiting for machine to come up
	I0603 12:07:24.402895   72964 main.go:141] libmachine: (embed-certs-725022) DBG | domain embed-certs-725022 has defined MAC address 52:54:00:ba:41:8c in network mk-embed-certs-725022
	I0603 12:07:24.403357   72964 main.go:141] libmachine: (embed-certs-725022) DBG | unable to find current IP address of domain embed-certs-725022 in network mk-embed-certs-725022
	I0603 12:07:24.403383   72964 main.go:141] libmachine: (embed-certs-725022) DBG | I0603 12:07:24.403319   74834 retry.go:31] will retry after 466.605521ms: waiting for machine to come up
	I0603 12:07:24.872278   72964 main.go:141] libmachine: (embed-certs-725022) DBG | domain embed-certs-725022 has defined MAC address 52:54:00:ba:41:8c in network mk-embed-certs-725022
	I0603 12:07:24.872823   72964 main.go:141] libmachine: (embed-certs-725022) DBG | unable to find current IP address of domain embed-certs-725022 in network mk-embed-certs-725022
	I0603 12:07:24.872847   72964 main.go:141] libmachine: (embed-certs-725022) DBG | I0603 12:07:24.872783   74834 retry.go:31] will retry after 382.19855ms: waiting for machine to come up
	I0603 12:07:23.996608   73662 main.go:141] libmachine: (old-k8s-version-905554) Calling .GetIP
	I0603 12:07:23.999648   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | domain old-k8s-version-905554 has defined MAC address 52:54:00:3d:ed:07 in network mk-old-k8s-version-905554
	I0603 12:07:23.999982   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:ed:07", ip: ""} in network mk-old-k8s-version-905554: {Iface:virbr1 ExpiryTime:2024-06-03 13:07:14 +0000 UTC Type:0 Mac:52:54:00:3d:ed:07 Iaid: IPaddr:192.168.39.155 Prefix:24 Hostname:old-k8s-version-905554 Clientid:01:52:54:00:3d:ed:07}
	I0603 12:07:24.000010   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | domain old-k8s-version-905554 has defined IP address 192.168.39.155 and MAC address 52:54:00:3d:ed:07 in network mk-old-k8s-version-905554
	I0603 12:07:24.000257   73662 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0603 12:07:24.004569   73662 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0603 12:07:24.019027   73662 kubeadm.go:877] updating cluster {Name:old-k8s-version-905554 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-905554 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.155 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0603 12:07:24.019206   73662 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0603 12:07:24.019257   73662 ssh_runner.go:195] Run: sudo crictl images --output json
	I0603 12:07:24.068916   73662 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0603 12:07:24.069007   73662 ssh_runner.go:195] Run: which lz4
	I0603 12:07:24.074831   73662 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0603 12:07:24.081154   73662 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0603 12:07:24.081186   73662 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0603 12:07:22.074657   73179 pod_ready.go:92] pod "kube-scheduler-no-preload-602118" in "kube-system" namespace has status "Ready":"True"
	I0603 12:07:22.074691   73179 pod_ready.go:81] duration metric: took 11.006759361s for pod "kube-scheduler-no-preload-602118" in "kube-system" namespace to be "Ready" ...
	I0603 12:07:22.074706   73179 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace to be "Ready" ...
	I0603 12:07:24.081308   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:07:25.114101   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:07:27.115528   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:07:25.256326   72964 main.go:141] libmachine: (embed-certs-725022) DBG | domain embed-certs-725022 has defined MAC address 52:54:00:ba:41:8c in network mk-embed-certs-725022
	I0603 12:07:25.256830   72964 main.go:141] libmachine: (embed-certs-725022) DBG | unable to find current IP address of domain embed-certs-725022 in network mk-embed-certs-725022
	I0603 12:07:25.256856   72964 main.go:141] libmachine: (embed-certs-725022) DBG | I0603 12:07:25.256779   74834 retry.go:31] will retry after 541.296238ms: waiting for machine to come up
	I0603 12:07:25.799738   72964 main.go:141] libmachine: (embed-certs-725022) DBG | domain embed-certs-725022 has defined MAC address 52:54:00:ba:41:8c in network mk-embed-certs-725022
	I0603 12:07:25.800308   72964 main.go:141] libmachine: (embed-certs-725022) DBG | unable to find current IP address of domain embed-certs-725022 in network mk-embed-certs-725022
	I0603 12:07:25.800340   72964 main.go:141] libmachine: (embed-certs-725022) DBG | I0603 12:07:25.800260   74834 retry.go:31] will retry after 605.157326ms: waiting for machine to come up
	I0603 12:07:26.406748   72964 main.go:141] libmachine: (embed-certs-725022) DBG | domain embed-certs-725022 has defined MAC address 52:54:00:ba:41:8c in network mk-embed-certs-725022
	I0603 12:07:26.407332   72964 main.go:141] libmachine: (embed-certs-725022) DBG | unable to find current IP address of domain embed-certs-725022 in network mk-embed-certs-725022
	I0603 12:07:26.407357   72964 main.go:141] libmachine: (embed-certs-725022) DBG | I0603 12:07:26.407281   74834 retry.go:31] will retry after 830.816526ms: waiting for machine to come up
	I0603 12:07:27.239300   72964 main.go:141] libmachine: (embed-certs-725022) DBG | domain embed-certs-725022 has defined MAC address 52:54:00:ba:41:8c in network mk-embed-certs-725022
	I0603 12:07:27.239746   72964 main.go:141] libmachine: (embed-certs-725022) DBG | unable to find current IP address of domain embed-certs-725022 in network mk-embed-certs-725022
	I0603 12:07:27.239777   72964 main.go:141] libmachine: (embed-certs-725022) DBG | I0603 12:07:27.239708   74834 retry.go:31] will retry after 994.729433ms: waiting for machine to come up
	I0603 12:07:28.236261   72964 main.go:141] libmachine: (embed-certs-725022) DBG | domain embed-certs-725022 has defined MAC address 52:54:00:ba:41:8c in network mk-embed-certs-725022
	I0603 12:07:28.236839   72964 main.go:141] libmachine: (embed-certs-725022) DBG | unable to find current IP address of domain embed-certs-725022 in network mk-embed-certs-725022
	I0603 12:07:28.236865   72964 main.go:141] libmachine: (embed-certs-725022) DBG | I0603 12:07:28.236783   74834 retry.go:31] will retry after 1.756001067s: waiting for machine to come up
	I0603 12:07:25.794532   73662 crio.go:462] duration metric: took 1.71973848s to copy over tarball
	I0603 12:07:25.794618   73662 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0603 12:07:28.897711   73662 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.103055845s)
	I0603 12:07:28.897742   73662 crio.go:469] duration metric: took 3.103177549s to extract the tarball
	I0603 12:07:28.897752   73662 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0603 12:07:28.945269   73662 ssh_runner.go:195] Run: sudo crictl images --output json
	I0603 12:07:28.982973   73662 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0603 12:07:28.982998   73662 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0603 12:07:28.983068   73662 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0603 12:07:28.983099   73662 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0603 12:07:28.983134   73662 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0603 12:07:28.983191   73662 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0603 12:07:28.983104   73662 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0603 12:07:28.983282   73662 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0603 12:07:28.983280   73662 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0603 12:07:28.983525   73662 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0603 12:07:28.984988   73662 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0603 12:07:28.985005   73662 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0603 12:07:28.984997   73662 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0603 12:07:28.985007   73662 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0603 12:07:28.985026   73662 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0603 12:07:28.985190   73662 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0603 12:07:28.985244   73662 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0603 12:07:28.985288   73662 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0603 12:07:29.136387   73662 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0603 12:07:29.155867   73662 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0603 12:07:29.173686   73662 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0603 12:07:29.181970   73662 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0603 12:07:29.185877   73662 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0603 12:07:29.188684   73662 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0603 12:07:29.201080   73662 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0603 12:07:29.201134   73662 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0603 12:07:29.201174   73662 ssh_runner.go:195] Run: which crictl
	I0603 12:07:29.252186   73662 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0603 12:07:29.252232   73662 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0603 12:07:29.252308   73662 ssh_runner.go:195] Run: which crictl
	I0603 12:07:29.272578   73662 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0603 12:07:29.306804   73662 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0603 12:07:29.306856   73662 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0603 12:07:29.306880   73662 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0603 12:07:29.306901   73662 ssh_runner.go:195] Run: which crictl
	I0603 12:07:29.306915   73662 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0603 12:07:29.306928   73662 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0603 12:07:29.306952   73662 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0603 12:07:29.306961   73662 ssh_runner.go:195] Run: which crictl
	I0603 12:07:29.306988   73662 ssh_runner.go:195] Run: which crictl
	I0603 12:07:29.322141   73662 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0603 12:07:29.322220   73662 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0603 12:07:29.322238   73662 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0603 12:07:29.322264   73662 ssh_runner.go:195] Run: which crictl
	I0603 12:07:29.322210   73662 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0603 12:07:29.378678   73662 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0603 12:07:29.378717   73662 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0603 12:07:29.378726   73662 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0603 12:07:29.378775   73662 ssh_runner.go:195] Run: which crictl
	I0603 12:07:29.378831   73662 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0603 12:07:29.378898   73662 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0603 12:07:29.401173   73662 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/19008-7755/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0603 12:07:29.401229   73662 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0603 12:07:29.401396   73662 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/19008-7755/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0603 12:07:29.450497   73662 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/19008-7755/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0603 12:07:29.450531   73662 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0603 12:07:29.488109   73662 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/19008-7755/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0603 12:07:29.488191   73662 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/19008-7755/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0603 12:07:29.488191   73662 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/19008-7755/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0603 12:07:29.504909   73662 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/19008-7755/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0603 12:07:29.931311   73662 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0603 12:07:30.078311   73662 cache_images.go:92] duration metric: took 1.095295059s to LoadCachedImages
	W0603 12:07:30.078412   73662 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/19008-7755/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0: no such file or directory
	I0603 12:07:30.078431   73662 kubeadm.go:928] updating node { 192.168.39.155 8443 v1.20.0 crio true true} ...
	I0603 12:07:30.078568   73662 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-905554 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.39.155
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-905554 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0603 12:07:30.078660   73662 ssh_runner.go:195] Run: crio config
	I0603 12:07:26.083566   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:07:28.084560   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:07:29.721426   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:07:32.114026   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:07:29.994115   72964 main.go:141] libmachine: (embed-certs-725022) DBG | domain embed-certs-725022 has defined MAC address 52:54:00:ba:41:8c in network mk-embed-certs-725022
	I0603 12:07:29.994576   72964 main.go:141] libmachine: (embed-certs-725022) DBG | unable to find current IP address of domain embed-certs-725022 in network mk-embed-certs-725022
	I0603 12:07:29.994654   72964 main.go:141] libmachine: (embed-certs-725022) DBG | I0603 12:07:29.994561   74834 retry.go:31] will retry after 1.667170312s: waiting for machine to come up
	I0603 12:07:31.664242   72964 main.go:141] libmachine: (embed-certs-725022) DBG | domain embed-certs-725022 has defined MAC address 52:54:00:ba:41:8c in network mk-embed-certs-725022
	I0603 12:07:31.664797   72964 main.go:141] libmachine: (embed-certs-725022) DBG | unable to find current IP address of domain embed-certs-725022 in network mk-embed-certs-725022
	I0603 12:07:31.664826   72964 main.go:141] libmachine: (embed-certs-725022) DBG | I0603 12:07:31.664752   74834 retry.go:31] will retry after 2.156675381s: waiting for machine to come up
	I0603 12:07:33.823700   72964 main.go:141] libmachine: (embed-certs-725022) DBG | domain embed-certs-725022 has defined MAC address 52:54:00:ba:41:8c in network mk-embed-certs-725022
	I0603 12:07:33.824202   72964 main.go:141] libmachine: (embed-certs-725022) DBG | unable to find current IP address of domain embed-certs-725022 in network mk-embed-certs-725022
	I0603 12:07:33.824241   72964 main.go:141] libmachine: (embed-certs-725022) DBG | I0603 12:07:33.824145   74834 retry.go:31] will retry after 3.067424613s: waiting for machine to come up
	I0603 12:07:30.129601   73662 cni.go:84] Creating CNI manager for ""
	I0603 12:07:30.180858   73662 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0603 12:07:30.180884   73662 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0603 12:07:30.180918   73662 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.155 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-905554 NodeName:old-k8s-version-905554 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.155"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.155 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0603 12:07:30.181104   73662 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.155
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-905554"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.155
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.155"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0603 12:07:30.181180   73662 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0603 12:07:30.192139   73662 binaries.go:44] Found k8s binaries, skipping transfer
	I0603 12:07:30.192202   73662 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0603 12:07:30.202078   73662 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0603 12:07:30.222968   73662 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0603 12:07:30.242794   73662 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I0603 12:07:30.263578   73662 ssh_runner.go:195] Run: grep 192.168.39.155	control-plane.minikube.internal$ /etc/hosts
	I0603 12:07:30.267535   73662 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.155	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0603 12:07:30.280543   73662 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0603 12:07:30.421251   73662 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0603 12:07:30.441243   73662 certs.go:68] Setting up /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/old-k8s-version-905554 for IP: 192.168.39.155
	I0603 12:07:30.441269   73662 certs.go:194] generating shared ca certs ...
	I0603 12:07:30.441299   73662 certs.go:226] acquiring lock for ca certs: {Name:mk50092df3bdecbf959926adc377ee06b75aacd9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 12:07:30.441485   73662 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19008-7755/.minikube/ca.key
	I0603 12:07:30.441546   73662 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19008-7755/.minikube/proxy-client-ca.key
	I0603 12:07:30.441559   73662 certs.go:256] generating profile certs ...
	I0603 12:07:30.441675   73662 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/old-k8s-version-905554/client.key
	I0603 12:07:30.465464   73662 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/old-k8s-version-905554/apiserver.key.0d34b22c
	I0603 12:07:30.465562   73662 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/old-k8s-version-905554/proxy-client.key
	I0603 12:07:30.465730   73662 certs.go:484] found cert: /home/jenkins/minikube-integration/19008-7755/.minikube/certs/15028.pem (1338 bytes)
	W0603 12:07:30.465775   73662 certs.go:480] ignoring /home/jenkins/minikube-integration/19008-7755/.minikube/certs/15028_empty.pem, impossibly tiny 0 bytes
	I0603 12:07:30.465787   73662 certs.go:484] found cert: /home/jenkins/minikube-integration/19008-7755/.minikube/certs/ca-key.pem (1679 bytes)
	I0603 12:07:30.465818   73662 certs.go:484] found cert: /home/jenkins/minikube-integration/19008-7755/.minikube/certs/ca.pem (1082 bytes)
	I0603 12:07:30.465855   73662 certs.go:484] found cert: /home/jenkins/minikube-integration/19008-7755/.minikube/certs/cert.pem (1123 bytes)
	I0603 12:07:30.465884   73662 certs.go:484] found cert: /home/jenkins/minikube-integration/19008-7755/.minikube/certs/key.pem (1679 bytes)
	I0603 12:07:30.465941   73662 certs.go:484] found cert: /home/jenkins/minikube-integration/19008-7755/.minikube/files/etc/ssl/certs/150282.pem (1708 bytes)
	I0603 12:07:30.466831   73662 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0603 12:07:30.517957   73662 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0603 12:07:30.554072   73662 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0603 12:07:30.610727   73662 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0603 12:07:30.663149   73662 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/old-k8s-version-905554/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0603 12:07:30.702313   73662 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/old-k8s-version-905554/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0603 12:07:30.735841   73662 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/old-k8s-version-905554/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0603 12:07:30.761517   73662 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/old-k8s-version-905554/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0603 12:07:30.793872   73662 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/files/etc/ssl/certs/150282.pem --> /usr/share/ca-certificates/150282.pem (1708 bytes)
	I0603 12:07:30.821613   73662 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0603 12:07:30.848030   73662 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/certs/15028.pem --> /usr/share/ca-certificates/15028.pem (1338 bytes)
	I0603 12:07:30.875016   73662 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0603 12:07:30.901749   73662 ssh_runner.go:195] Run: openssl version
	I0603 12:07:30.911485   73662 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15028.pem && ln -fs /usr/share/ca-certificates/15028.pem /etc/ssl/certs/15028.pem"
	I0603 12:07:30.923791   73662 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15028.pem
	I0603 12:07:30.928808   73662 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jun  3 10:51 /usr/share/ca-certificates/15028.pem
	I0603 12:07:30.928858   73662 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15028.pem
	I0603 12:07:30.934925   73662 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/15028.pem /etc/ssl/certs/51391683.0"
	I0603 12:07:30.946930   73662 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/150282.pem && ln -fs /usr/share/ca-certificates/150282.pem /etc/ssl/certs/150282.pem"
	I0603 12:07:30.958809   73662 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/150282.pem
	I0603 12:07:30.963687   73662 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jun  3 10:51 /usr/share/ca-certificates/150282.pem
	I0603 12:07:30.963748   73662 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/150282.pem
	I0603 12:07:30.969671   73662 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/150282.pem /etc/ssl/certs/3ec20f2e.0"
	I0603 12:07:30.981918   73662 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0603 12:07:30.994005   73662 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0603 12:07:30.999126   73662 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jun  3 10:39 /usr/share/ca-certificates/minikubeCA.pem
	I0603 12:07:30.999190   73662 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0603 12:07:31.005828   73662 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0603 12:07:31.017320   73662 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0603 12:07:31.021993   73662 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0603 12:07:31.028420   73662 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0603 12:07:31.034719   73662 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0603 12:07:31.041565   73662 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0603 12:07:31.048142   73662 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0603 12:07:31.053992   73662 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0603 12:07:31.060197   73662 kubeadm.go:391] StartCluster: {Name:old-k8s-version-905554 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-905554 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.155 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0603 12:07:31.060324   73662 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0603 12:07:31.060361   73662 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0603 12:07:31.102996   73662 cri.go:89] found id: ""
	I0603 12:07:31.103083   73662 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0603 12:07:31.114546   73662 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0603 12:07:31.114566   73662 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0603 12:07:31.114573   73662 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0603 12:07:31.114619   73662 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0603 12:07:31.126042   73662 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0603 12:07:31.127358   73662 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-905554" does not appear in /home/jenkins/minikube-integration/19008-7755/kubeconfig
	I0603 12:07:31.128029   73662 kubeconfig.go:62] /home/jenkins/minikube-integration/19008-7755/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-905554" cluster setting kubeconfig missing "old-k8s-version-905554" context setting]
	I0603 12:07:31.128862   73662 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19008-7755/kubeconfig: {Name:mk2f45bf5da44c5570e14124ae482cac309d97b6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 12:07:31.247021   73662 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0603 12:07:31.258013   73662 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.39.155
	I0603 12:07:31.258054   73662 kubeadm.go:1154] stopping kube-system containers ...
	I0603 12:07:31.258065   73662 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0603 12:07:31.258119   73662 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0603 12:07:31.301991   73662 cri.go:89] found id: ""
	I0603 12:07:31.302065   73662 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0603 12:07:31.326132   73662 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0603 12:07:31.337333   73662 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0603 12:07:31.337355   73662 kubeadm.go:156] found existing configuration files:
	
	I0603 12:07:31.337396   73662 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0603 12:07:31.347256   73662 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0603 12:07:31.347300   73662 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0603 12:07:31.357463   73662 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0603 12:07:31.367810   73662 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0603 12:07:31.367867   73662 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0603 12:07:31.378092   73662 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0603 12:07:31.388911   73662 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0603 12:07:31.388959   73662 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0603 12:07:31.400327   73662 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0603 12:07:31.411937   73662 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0603 12:07:31.411984   73662 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0603 12:07:31.423929   73662 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0603 12:07:31.435914   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0603 12:07:31.563621   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0603 12:07:32.980144   73662 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.416481613s)
	I0603 12:07:32.980178   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0603 12:07:33.219383   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0603 12:07:33.320755   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0603 12:07:33.437964   73662 api_server.go:52] waiting for apiserver process to appear ...
	I0603 12:07:33.438070   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:07:33.938124   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:07:34.439012   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:07:34.938293   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:07:30.584019   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:07:33.081286   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:07:35.081436   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:07:34.613763   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:07:37.112059   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:07:39.113186   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:07:36.892928   72964 main.go:141] libmachine: (embed-certs-725022) DBG | domain embed-certs-725022 has defined MAC address 52:54:00:ba:41:8c in network mk-embed-certs-725022
	I0603 12:07:36.893405   72964 main.go:141] libmachine: (embed-certs-725022) DBG | unable to find current IP address of domain embed-certs-725022 in network mk-embed-certs-725022
	I0603 12:07:36.893432   72964 main.go:141] libmachine: (embed-certs-725022) DBG | I0603 12:07:36.893358   74834 retry.go:31] will retry after 3.786690644s: waiting for machine to come up
	I0603 12:07:35.438655   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:07:35.938894   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:07:36.438790   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:07:36.938720   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:07:37.438183   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:07:37.938442   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:07:38.438341   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:07:38.938738   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:07:39.438262   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:07:39.938743   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:07:37.082484   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:07:39.580732   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:07:40.682151   72964 main.go:141] libmachine: (embed-certs-725022) DBG | domain embed-certs-725022 has defined MAC address 52:54:00:ba:41:8c in network mk-embed-certs-725022
	I0603 12:07:40.682828   72964 main.go:141] libmachine: (embed-certs-725022) Found IP for machine: 192.168.72.245
	I0603 12:07:40.682854   72964 main.go:141] libmachine: (embed-certs-725022) Reserving static IP address...
	I0603 12:07:40.682870   72964 main.go:141] libmachine: (embed-certs-725022) DBG | domain embed-certs-725022 has current primary IP address 192.168.72.245 and MAC address 52:54:00:ba:41:8c in network mk-embed-certs-725022
	I0603 12:07:40.683307   72964 main.go:141] libmachine: (embed-certs-725022) DBG | found host DHCP lease matching {name: "embed-certs-725022", mac: "52:54:00:ba:41:8c", ip: "192.168.72.245"} in network mk-embed-certs-725022: {Iface:virbr3 ExpiryTime:2024-06-03 12:58:10 +0000 UTC Type:0 Mac:52:54:00:ba:41:8c Iaid: IPaddr:192.168.72.245 Prefix:24 Hostname:embed-certs-725022 Clientid:01:52:54:00:ba:41:8c}
	I0603 12:07:40.683347   72964 main.go:141] libmachine: (embed-certs-725022) DBG | skip adding static IP to network mk-embed-certs-725022 - found existing host DHCP lease matching {name: "embed-certs-725022", mac: "52:54:00:ba:41:8c", ip: "192.168.72.245"}
	I0603 12:07:40.683361   72964 main.go:141] libmachine: (embed-certs-725022) Reserved static IP address: 192.168.72.245
	I0603 12:07:40.683376   72964 main.go:141] libmachine: (embed-certs-725022) Waiting for SSH to be available...
	I0603 12:07:40.683392   72964 main.go:141] libmachine: (embed-certs-725022) DBG | Getting to WaitForSSH function...
	I0603 12:07:40.685575   72964 main.go:141] libmachine: (embed-certs-725022) DBG | domain embed-certs-725022 has defined MAC address 52:54:00:ba:41:8c in network mk-embed-certs-725022
	I0603 12:07:40.685946   72964 main.go:141] libmachine: (embed-certs-725022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:41:8c", ip: ""} in network mk-embed-certs-725022: {Iface:virbr3 ExpiryTime:2024-06-03 12:58:10 +0000 UTC Type:0 Mac:52:54:00:ba:41:8c Iaid: IPaddr:192.168.72.245 Prefix:24 Hostname:embed-certs-725022 Clientid:01:52:54:00:ba:41:8c}
	I0603 12:07:40.685977   72964 main.go:141] libmachine: (embed-certs-725022) DBG | domain embed-certs-725022 has defined IP address 192.168.72.245 and MAC address 52:54:00:ba:41:8c in network mk-embed-certs-725022
	I0603 12:07:40.686080   72964 main.go:141] libmachine: (embed-certs-725022) DBG | Using SSH client type: external
	I0603 12:07:40.686100   72964 main.go:141] libmachine: (embed-certs-725022) DBG | Using SSH private key: /home/jenkins/minikube-integration/19008-7755/.minikube/machines/embed-certs-725022/id_rsa (-rw-------)
	I0603 12:07:40.686134   72964 main.go:141] libmachine: (embed-certs-725022) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.245 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19008-7755/.minikube/machines/embed-certs-725022/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0603 12:07:40.686148   72964 main.go:141] libmachine: (embed-certs-725022) DBG | About to run SSH command:
	I0603 12:07:40.686161   72964 main.go:141] libmachine: (embed-certs-725022) DBG | exit 0
	I0603 12:07:40.811149   72964 main.go:141] libmachine: (embed-certs-725022) DBG | SSH cmd err, output: <nil>: 
	I0603 12:07:40.811536   72964 main.go:141] libmachine: (embed-certs-725022) Calling .GetConfigRaw
	I0603 12:07:40.812126   72964 main.go:141] libmachine: (embed-certs-725022) Calling .GetIP
	I0603 12:07:40.814686   72964 main.go:141] libmachine: (embed-certs-725022) DBG | domain embed-certs-725022 has defined MAC address 52:54:00:ba:41:8c in network mk-embed-certs-725022
	I0603 12:07:40.815141   72964 main.go:141] libmachine: (embed-certs-725022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:41:8c", ip: ""} in network mk-embed-certs-725022: {Iface:virbr3 ExpiryTime:2024-06-03 12:58:10 +0000 UTC Type:0 Mac:52:54:00:ba:41:8c Iaid: IPaddr:192.168.72.245 Prefix:24 Hostname:embed-certs-725022 Clientid:01:52:54:00:ba:41:8c}
	I0603 12:07:40.815179   72964 main.go:141] libmachine: (embed-certs-725022) DBG | domain embed-certs-725022 has defined IP address 192.168.72.245 and MAC address 52:54:00:ba:41:8c in network mk-embed-certs-725022
	I0603 12:07:40.815390   72964 profile.go:143] Saving config to /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/embed-certs-725022/config.json ...
	I0603 12:07:40.815589   72964 machine.go:94] provisionDockerMachine start ...
	I0603 12:07:40.815607   72964 main.go:141] libmachine: (embed-certs-725022) Calling .DriverName
	I0603 12:07:40.815830   72964 main.go:141] libmachine: (embed-certs-725022) Calling .GetSSHHostname
	I0603 12:07:40.818127   72964 main.go:141] libmachine: (embed-certs-725022) DBG | domain embed-certs-725022 has defined MAC address 52:54:00:ba:41:8c in network mk-embed-certs-725022
	I0603 12:07:40.818454   72964 main.go:141] libmachine: (embed-certs-725022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:41:8c", ip: ""} in network mk-embed-certs-725022: {Iface:virbr3 ExpiryTime:2024-06-03 12:58:10 +0000 UTC Type:0 Mac:52:54:00:ba:41:8c Iaid: IPaddr:192.168.72.245 Prefix:24 Hostname:embed-certs-725022 Clientid:01:52:54:00:ba:41:8c}
	I0603 12:07:40.818484   72964 main.go:141] libmachine: (embed-certs-725022) DBG | domain embed-certs-725022 has defined IP address 192.168.72.245 and MAC address 52:54:00:ba:41:8c in network mk-embed-certs-725022
	I0603 12:07:40.818622   72964 main.go:141] libmachine: (embed-certs-725022) Calling .GetSSHPort
	I0603 12:07:40.818812   72964 main.go:141] libmachine: (embed-certs-725022) Calling .GetSSHKeyPath
	I0603 12:07:40.818964   72964 main.go:141] libmachine: (embed-certs-725022) Calling .GetSSHKeyPath
	I0603 12:07:40.819111   72964 main.go:141] libmachine: (embed-certs-725022) Calling .GetSSHUsername
	I0603 12:07:40.819244   72964 main.go:141] libmachine: Using SSH client type: native
	I0603 12:07:40.819393   72964 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.72.245 22 <nil> <nil>}
	I0603 12:07:40.819402   72964 main.go:141] libmachine: About to run SSH command:
	hostname
	I0603 12:07:40.923243   72964 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0603 12:07:40.923272   72964 main.go:141] libmachine: (embed-certs-725022) Calling .GetMachineName
	I0603 12:07:40.923539   72964 buildroot.go:166] provisioning hostname "embed-certs-725022"
	I0603 12:07:40.923568   72964 main.go:141] libmachine: (embed-certs-725022) Calling .GetMachineName
	I0603 12:07:40.923739   72964 main.go:141] libmachine: (embed-certs-725022) Calling .GetSSHHostname
	I0603 12:07:40.926340   72964 main.go:141] libmachine: (embed-certs-725022) DBG | domain embed-certs-725022 has defined MAC address 52:54:00:ba:41:8c in network mk-embed-certs-725022
	I0603 12:07:40.926743   72964 main.go:141] libmachine: (embed-certs-725022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:41:8c", ip: ""} in network mk-embed-certs-725022: {Iface:virbr3 ExpiryTime:2024-06-03 12:58:10 +0000 UTC Type:0 Mac:52:54:00:ba:41:8c Iaid: IPaddr:192.168.72.245 Prefix:24 Hostname:embed-certs-725022 Clientid:01:52:54:00:ba:41:8c}
	I0603 12:07:40.926776   72964 main.go:141] libmachine: (embed-certs-725022) DBG | domain embed-certs-725022 has defined IP address 192.168.72.245 and MAC address 52:54:00:ba:41:8c in network mk-embed-certs-725022
	I0603 12:07:40.926892   72964 main.go:141] libmachine: (embed-certs-725022) Calling .GetSSHPort
	I0603 12:07:40.927096   72964 main.go:141] libmachine: (embed-certs-725022) Calling .GetSSHKeyPath
	I0603 12:07:40.927259   72964 main.go:141] libmachine: (embed-certs-725022) Calling .GetSSHKeyPath
	I0603 12:07:40.927412   72964 main.go:141] libmachine: (embed-certs-725022) Calling .GetSSHUsername
	I0603 12:07:40.927570   72964 main.go:141] libmachine: Using SSH client type: native
	I0603 12:07:40.927720   72964 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.72.245 22 <nil> <nil>}
	I0603 12:07:40.927737   72964 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-725022 && echo "embed-certs-725022" | sudo tee /etc/hostname
	I0603 12:07:41.045367   72964 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-725022
	
	I0603 12:07:41.045392   72964 main.go:141] libmachine: (embed-certs-725022) Calling .GetSSHHostname
	I0603 12:07:41.048214   72964 main.go:141] libmachine: (embed-certs-725022) DBG | domain embed-certs-725022 has defined MAC address 52:54:00:ba:41:8c in network mk-embed-certs-725022
	I0603 12:07:41.048621   72964 main.go:141] libmachine: (embed-certs-725022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:41:8c", ip: ""} in network mk-embed-certs-725022: {Iface:virbr3 ExpiryTime:2024-06-03 12:58:10 +0000 UTC Type:0 Mac:52:54:00:ba:41:8c Iaid: IPaddr:192.168.72.245 Prefix:24 Hostname:embed-certs-725022 Clientid:01:52:54:00:ba:41:8c}
	I0603 12:07:41.048653   72964 main.go:141] libmachine: (embed-certs-725022) DBG | domain embed-certs-725022 has defined IP address 192.168.72.245 and MAC address 52:54:00:ba:41:8c in network mk-embed-certs-725022
	I0603 12:07:41.048776   72964 main.go:141] libmachine: (embed-certs-725022) Calling .GetSSHPort
	I0603 12:07:41.048959   72964 main.go:141] libmachine: (embed-certs-725022) Calling .GetSSHKeyPath
	I0603 12:07:41.049140   72964 main.go:141] libmachine: (embed-certs-725022) Calling .GetSSHKeyPath
	I0603 12:07:41.049270   72964 main.go:141] libmachine: (embed-certs-725022) Calling .GetSSHUsername
	I0603 12:07:41.049434   72964 main.go:141] libmachine: Using SSH client type: native
	I0603 12:07:41.049729   72964 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.72.245 22 <nil> <nil>}
	I0603 12:07:41.049757   72964 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-725022' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-725022/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-725022' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0603 12:07:41.160646   72964 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0603 12:07:41.160671   72964 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19008-7755/.minikube CaCertPath:/home/jenkins/minikube-integration/19008-7755/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19008-7755/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19008-7755/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19008-7755/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19008-7755/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19008-7755/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19008-7755/.minikube}
	I0603 12:07:41.160703   72964 buildroot.go:174] setting up certificates
	I0603 12:07:41.160715   72964 provision.go:84] configureAuth start
	I0603 12:07:41.160728   72964 main.go:141] libmachine: (embed-certs-725022) Calling .GetMachineName
	I0603 12:07:41.160998   72964 main.go:141] libmachine: (embed-certs-725022) Calling .GetIP
	I0603 12:07:41.163693   72964 main.go:141] libmachine: (embed-certs-725022) DBG | domain embed-certs-725022 has defined MAC address 52:54:00:ba:41:8c in network mk-embed-certs-725022
	I0603 12:07:41.164248   72964 main.go:141] libmachine: (embed-certs-725022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:41:8c", ip: ""} in network mk-embed-certs-725022: {Iface:virbr3 ExpiryTime:2024-06-03 12:58:10 +0000 UTC Type:0 Mac:52:54:00:ba:41:8c Iaid: IPaddr:192.168.72.245 Prefix:24 Hostname:embed-certs-725022 Clientid:01:52:54:00:ba:41:8c}
	I0603 12:07:41.164280   72964 main.go:141] libmachine: (embed-certs-725022) DBG | domain embed-certs-725022 has defined IP address 192.168.72.245 and MAC address 52:54:00:ba:41:8c in network mk-embed-certs-725022
	I0603 12:07:41.164462   72964 main.go:141] libmachine: (embed-certs-725022) Calling .GetSSHHostname
	I0603 12:07:41.166598   72964 main.go:141] libmachine: (embed-certs-725022) DBG | domain embed-certs-725022 has defined MAC address 52:54:00:ba:41:8c in network mk-embed-certs-725022
	I0603 12:07:41.166975   72964 main.go:141] libmachine: (embed-certs-725022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:41:8c", ip: ""} in network mk-embed-certs-725022: {Iface:virbr3 ExpiryTime:2024-06-03 12:58:10 +0000 UTC Type:0 Mac:52:54:00:ba:41:8c Iaid: IPaddr:192.168.72.245 Prefix:24 Hostname:embed-certs-725022 Clientid:01:52:54:00:ba:41:8c}
	I0603 12:07:41.166999   72964 main.go:141] libmachine: (embed-certs-725022) DBG | domain embed-certs-725022 has defined IP address 192.168.72.245 and MAC address 52:54:00:ba:41:8c in network mk-embed-certs-725022
	I0603 12:07:41.167156   72964 provision.go:143] copyHostCerts
	I0603 12:07:41.167231   72964 exec_runner.go:144] found /home/jenkins/minikube-integration/19008-7755/.minikube/ca.pem, removing ...
	I0603 12:07:41.167246   72964 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19008-7755/.minikube/ca.pem
	I0603 12:07:41.167311   72964 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19008-7755/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19008-7755/.minikube/ca.pem (1082 bytes)
	I0603 12:07:41.167503   72964 exec_runner.go:144] found /home/jenkins/minikube-integration/19008-7755/.minikube/cert.pem, removing ...
	I0603 12:07:41.167516   72964 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19008-7755/.minikube/cert.pem
	I0603 12:07:41.167548   72964 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19008-7755/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19008-7755/.minikube/cert.pem (1123 bytes)
	I0603 12:07:41.167649   72964 exec_runner.go:144] found /home/jenkins/minikube-integration/19008-7755/.minikube/key.pem, removing ...
	I0603 12:07:41.167660   72964 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19008-7755/.minikube/key.pem
	I0603 12:07:41.167688   72964 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19008-7755/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19008-7755/.minikube/key.pem (1679 bytes)
	I0603 12:07:41.167767   72964 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19008-7755/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19008-7755/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19008-7755/.minikube/certs/ca-key.pem org=jenkins.embed-certs-725022 san=[127.0.0.1 192.168.72.245 embed-certs-725022 localhost minikube]
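
The SAN list in the line above (127.0.0.1, the VM IP, the machine name, localhost, minikube) is what the Docker-machine style server certificate is issued for. The following is a compact, self-contained sketch of issuing such a certificate in Go; the CA is generated in memory rather than loaded from ca.pem/ca-key.pem, errors are elided for brevity, and the whole listing should be read as illustrative rather than as minikube's actual provisioning code.

    package main

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"encoding/pem"
    	"fmt"
    	"math/big"
    	"net"
    	"time"
    )

    func main() {
    	// In-memory CA (the real flow loads ca.pem / ca-key.pem from .minikube/certs).
    	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
    	caTpl := &x509.Certificate{
    		SerialNumber:          big.NewInt(1),
    		Subject:               pkix.Name{Organization: []string{"minikubeCA"}},
    		NotBefore:             time.Now(),
    		NotAfter:              time.Now().Add(3 * 365 * 24 * time.Hour),
    		IsCA:                  true,
    		KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
    		BasicConstraintsValid: true,
    	}
    	caDER, _ := x509.CreateCertificate(rand.Reader, caTpl, caTpl, &caKey.PublicKey, caKey)
    	caCert, _ := x509.ParseCertificate(caDER)

    	// Server certificate carrying the SANs from the log line above.
    	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
    	srvTpl := &x509.Certificate{
    		SerialNumber: big.NewInt(2),
    		Subject:      pkix.Name{Organization: []string{"jenkins.embed-certs-725022"}},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    		DNSNames:     []string{"embed-certs-725022", "localhost", "minikube"},
    		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.72.245")},
    	}
    	srvDER, _ := x509.CreateCertificate(rand.Reader, srvTpl, caCert, &srvKey.PublicKey, caKey)
    	fmt.Print(string(pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: srvDER})))
    }
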
	I0603 12:07:41.404074   72964 provision.go:177] copyRemoteCerts
	I0603 12:07:41.404201   72964 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0603 12:07:41.404234   72964 main.go:141] libmachine: (embed-certs-725022) Calling .GetSSHHostname
	I0603 12:07:41.407206   72964 main.go:141] libmachine: (embed-certs-725022) DBG | domain embed-certs-725022 has defined MAC address 52:54:00:ba:41:8c in network mk-embed-certs-725022
	I0603 12:07:41.407582   72964 main.go:141] libmachine: (embed-certs-725022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:41:8c", ip: ""} in network mk-embed-certs-725022: {Iface:virbr3 ExpiryTime:2024-06-03 12:58:10 +0000 UTC Type:0 Mac:52:54:00:ba:41:8c Iaid: IPaddr:192.168.72.245 Prefix:24 Hostname:embed-certs-725022 Clientid:01:52:54:00:ba:41:8c}
	I0603 12:07:41.407607   72964 main.go:141] libmachine: (embed-certs-725022) DBG | domain embed-certs-725022 has defined IP address 192.168.72.245 and MAC address 52:54:00:ba:41:8c in network mk-embed-certs-725022
	I0603 12:07:41.407790   72964 main.go:141] libmachine: (embed-certs-725022) Calling .GetSSHPort
	I0603 12:07:41.408001   72964 main.go:141] libmachine: (embed-certs-725022) Calling .GetSSHKeyPath
	I0603 12:07:41.408187   72964 main.go:141] libmachine: (embed-certs-725022) Calling .GetSSHUsername
	I0603 12:07:41.408359   72964 sshutil.go:53] new ssh client: &{IP:192.168.72.245 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19008-7755/.minikube/machines/embed-certs-725022/id_rsa Username:docker}
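
The sshutil line above opens a plain key-based SSH connection to the guest (user docker, port 22, the machine's id_rsa). A minimal sketch of the same kind of client with golang.org/x/crypto/ssh follows; the host, key path, and command are placeholders, and skipping host-key verification is only tolerable against a throwaway test VM.

    package main

    import (
    	"fmt"
    	"os"

    	"golang.org/x/crypto/ssh"
    )

    func main() {
    	keyBytes, err := os.ReadFile("/home/jenkins/.minikube/machines/embed-certs-725022/id_rsa") // placeholder path
    	if err != nil {
    		panic(err)
    	}
    	signer, err := ssh.ParsePrivateKey(keyBytes)
    	if err != nil {
    		panic(err)
    	}
    	cfg := &ssh.ClientConfig{
    		User:            "docker",
    		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
    		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // test VMs only; never do this against real hosts
    	}
    	client, err := ssh.Dial("tcp", "192.168.72.245:22", cfg)
    	if err != nil {
    		panic(err)
    	}
    	defer client.Close()

    	session, err := client.NewSession()
    	if err != nil {
    		panic(err)
    	}
    	defer session.Close()

    	out, err := session.CombinedOutput("cat /etc/os-release")
    	fmt.Println(string(out), err)
    }
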
	I0603 12:07:41.488870   72964 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0603 12:07:41.513102   72964 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0603 12:07:41.537653   72964 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0603 12:07:41.561756   72964 provision.go:87] duration metric: took 401.027097ms to configureAuth
	I0603 12:07:41.561789   72964 buildroot.go:189] setting minikube options for container-runtime
	I0603 12:07:41.561954   72964 config.go:182] Loaded profile config "embed-certs-725022": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0603 12:07:41.562020   72964 main.go:141] libmachine: (embed-certs-725022) Calling .GetSSHHostname
	I0603 12:07:41.564899   72964 main.go:141] libmachine: (embed-certs-725022) DBG | domain embed-certs-725022 has defined MAC address 52:54:00:ba:41:8c in network mk-embed-certs-725022
	I0603 12:07:41.565376   72964 main.go:141] libmachine: (embed-certs-725022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:41:8c", ip: ""} in network mk-embed-certs-725022: {Iface:virbr3 ExpiryTime:2024-06-03 12:58:10 +0000 UTC Type:0 Mac:52:54:00:ba:41:8c Iaid: IPaddr:192.168.72.245 Prefix:24 Hostname:embed-certs-725022 Clientid:01:52:54:00:ba:41:8c}
	I0603 12:07:41.565416   72964 main.go:141] libmachine: (embed-certs-725022) DBG | domain embed-certs-725022 has defined IP address 192.168.72.245 and MAC address 52:54:00:ba:41:8c in network mk-embed-certs-725022
	I0603 12:07:41.565571   72964 main.go:141] libmachine: (embed-certs-725022) Calling .GetSSHPort
	I0603 12:07:41.565754   72964 main.go:141] libmachine: (embed-certs-725022) Calling .GetSSHKeyPath
	I0603 12:07:41.565952   72964 main.go:141] libmachine: (embed-certs-725022) Calling .GetSSHKeyPath
	I0603 12:07:41.566096   72964 main.go:141] libmachine: (embed-certs-725022) Calling .GetSSHUsername
	I0603 12:07:41.566223   72964 main.go:141] libmachine: Using SSH client type: native
	I0603 12:07:41.566408   72964 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.72.245 22 <nil> <nil>}
	I0603 12:07:41.566431   72964 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0603 12:07:41.834677   72964 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0603 12:07:41.834699   72964 machine.go:97] duration metric: took 1.019099045s to provisionDockerMachine
	I0603 12:07:41.834713   72964 start.go:293] postStartSetup for "embed-certs-725022" (driver="kvm2")
	I0603 12:07:41.834727   72964 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0603 12:07:41.834746   72964 main.go:141] libmachine: (embed-certs-725022) Calling .DriverName
	I0603 12:07:41.835098   72964 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0603 12:07:41.835139   72964 main.go:141] libmachine: (embed-certs-725022) Calling .GetSSHHostname
	I0603 12:07:41.838003   72964 main.go:141] libmachine: (embed-certs-725022) DBG | domain embed-certs-725022 has defined MAC address 52:54:00:ba:41:8c in network mk-embed-certs-725022
	I0603 12:07:41.838369   72964 main.go:141] libmachine: (embed-certs-725022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:41:8c", ip: ""} in network mk-embed-certs-725022: {Iface:virbr3 ExpiryTime:2024-06-03 12:58:10 +0000 UTC Type:0 Mac:52:54:00:ba:41:8c Iaid: IPaddr:192.168.72.245 Prefix:24 Hostname:embed-certs-725022 Clientid:01:52:54:00:ba:41:8c}
	I0603 12:07:41.838398   72964 main.go:141] libmachine: (embed-certs-725022) DBG | domain embed-certs-725022 has defined IP address 192.168.72.245 and MAC address 52:54:00:ba:41:8c in network mk-embed-certs-725022
	I0603 12:07:41.838464   72964 main.go:141] libmachine: (embed-certs-725022) Calling .GetSSHPort
	I0603 12:07:41.838655   72964 main.go:141] libmachine: (embed-certs-725022) Calling .GetSSHKeyPath
	I0603 12:07:41.838793   72964 main.go:141] libmachine: (embed-certs-725022) Calling .GetSSHUsername
	I0603 12:07:41.838932   72964 sshutil.go:53] new ssh client: &{IP:192.168.72.245 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19008-7755/.minikube/machines/embed-certs-725022/id_rsa Username:docker}
	I0603 12:07:41.922364   72964 ssh_runner.go:195] Run: cat /etc/os-release
	I0603 12:07:41.926548   72964 info.go:137] Remote host: Buildroot 2023.02.9
	I0603 12:07:41.926573   72964 filesync.go:126] Scanning /home/jenkins/minikube-integration/19008-7755/.minikube/addons for local assets ...
	I0603 12:07:41.926649   72964 filesync.go:126] Scanning /home/jenkins/minikube-integration/19008-7755/.minikube/files for local assets ...
	I0603 12:07:41.926757   72964 filesync.go:149] local asset: /home/jenkins/minikube-integration/19008-7755/.minikube/files/etc/ssl/certs/150282.pem -> 150282.pem in /etc/ssl/certs
	I0603 12:07:41.926853   72964 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0603 12:07:41.937060   72964 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/files/etc/ssl/certs/150282.pem --> /etc/ssl/certs/150282.pem (1708 bytes)
	I0603 12:07:41.962618   72964 start.go:296] duration metric: took 127.891542ms for postStartSetup
	I0603 12:07:41.962650   72964 fix.go:56] duration metric: took 19.538606992s for fixHost
	I0603 12:07:41.962679   72964 main.go:141] libmachine: (embed-certs-725022) Calling .GetSSHHostname
	I0603 12:07:41.965879   72964 main.go:141] libmachine: (embed-certs-725022) DBG | domain embed-certs-725022 has defined MAC address 52:54:00:ba:41:8c in network mk-embed-certs-725022
	I0603 12:07:41.966201   72964 main.go:141] libmachine: (embed-certs-725022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:41:8c", ip: ""} in network mk-embed-certs-725022: {Iface:virbr3 ExpiryTime:2024-06-03 12:58:10 +0000 UTC Type:0 Mac:52:54:00:ba:41:8c Iaid: IPaddr:192.168.72.245 Prefix:24 Hostname:embed-certs-725022 Clientid:01:52:54:00:ba:41:8c}
	I0603 12:07:41.966228   72964 main.go:141] libmachine: (embed-certs-725022) DBG | domain embed-certs-725022 has defined IP address 192.168.72.245 and MAC address 52:54:00:ba:41:8c in network mk-embed-certs-725022
	I0603 12:07:41.966409   72964 main.go:141] libmachine: (embed-certs-725022) Calling .GetSSHPort
	I0603 12:07:41.966608   72964 main.go:141] libmachine: (embed-certs-725022) Calling .GetSSHKeyPath
	I0603 12:07:41.966776   72964 main.go:141] libmachine: (embed-certs-725022) Calling .GetSSHKeyPath
	I0603 12:07:41.966939   72964 main.go:141] libmachine: (embed-certs-725022) Calling .GetSSHUsername
	I0603 12:07:41.967174   72964 main.go:141] libmachine: Using SSH client type: native
	I0603 12:07:41.967334   72964 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.72.245 22 <nil> <nil>}
	I0603 12:07:41.967345   72964 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0603 12:07:42.067942   72964 main.go:141] libmachine: SSH cmd err, output: <nil>: 1717416462.037866239
	
	I0603 12:07:42.067964   72964 fix.go:216] guest clock: 1717416462.037866239
	I0603 12:07:42.067973   72964 fix.go:229] Guest: 2024-06-03 12:07:42.037866239 +0000 UTC Remote: 2024-06-03 12:07:41.962653397 +0000 UTC m=+357.104782857 (delta=75.212842ms)
	I0603 12:07:42.067997   72964 fix.go:200] guest clock delta is within tolerance: 75.212842ms
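
The fix lines above compare the guest clock (read over SSH with date +%s.%N) against the host timestamp and accept the host's view when the delta is inside a small tolerance. A sketch of that comparison is below; parseGuestClock and the 2s tolerance are assumptions made for illustration, since the log only reports that the delta was "within tolerance".

    package main

    import (
    	"fmt"
    	"strconv"
    	"strings"
    	"time"
    )

    // parseGuestClock turns "1717416462.037866239" (seconds.nanoseconds) into a time.Time.
    func parseGuestClock(out string) (time.Time, error) {
    	parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
    	sec, err := strconv.ParseInt(parts[0], 10, 64)
    	if err != nil {
    		return time.Time{}, err
    	}
    	var nsec int64
    	if len(parts) == 2 {
    		// Right-pad to 9 digits so ".03" means 30ms, not 3ns.
    		frac := (parts[1] + "000000000")[:9]
    		if nsec, err = strconv.ParseInt(frac, 10, 64); err != nil {
    			return time.Time{}, err
    		}
    	}
    	return time.Unix(sec, nsec), nil
    }

    func main() {
    	guest, err := parseGuestClock("1717416462.037866239")
    	if err != nil {
    		panic(err)
    	}
    	host := time.Date(2024, 6, 3, 12, 7, 41, 962653397, time.UTC) // host timestamp from the log
    	delta := guest.Sub(host)
    	if delta < 0 {
    		delta = -delta
    	}
    	const tolerance = 2 * time.Second // assumed value, not taken from minikube
    	fmt.Printf("delta=%v withinTolerance=%v\n", delta, delta <= tolerance)
    }
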
	I0603 12:07:42.068004   72964 start.go:83] releasing machines lock for "embed-certs-725022", held for 19.643998665s
	I0603 12:07:42.068026   72964 main.go:141] libmachine: (embed-certs-725022) Calling .DriverName
	I0603 12:07:42.068359   72964 main.go:141] libmachine: (embed-certs-725022) Calling .GetIP
	I0603 12:07:42.071337   72964 main.go:141] libmachine: (embed-certs-725022) DBG | domain embed-certs-725022 has defined MAC address 52:54:00:ba:41:8c in network mk-embed-certs-725022
	I0603 12:07:42.071783   72964 main.go:141] libmachine: (embed-certs-725022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:41:8c", ip: ""} in network mk-embed-certs-725022: {Iface:virbr3 ExpiryTime:2024-06-03 12:58:10 +0000 UTC Type:0 Mac:52:54:00:ba:41:8c Iaid: IPaddr:192.168.72.245 Prefix:24 Hostname:embed-certs-725022 Clientid:01:52:54:00:ba:41:8c}
	I0603 12:07:42.071813   72964 main.go:141] libmachine: (embed-certs-725022) DBG | domain embed-certs-725022 has defined IP address 192.168.72.245 and MAC address 52:54:00:ba:41:8c in network mk-embed-certs-725022
	I0603 12:07:42.071980   72964 main.go:141] libmachine: (embed-certs-725022) Calling .DriverName
	I0603 12:07:42.072618   72964 main.go:141] libmachine: (embed-certs-725022) Calling .DriverName
	I0603 12:07:42.072806   72964 main.go:141] libmachine: (embed-certs-725022) Calling .DriverName
	I0603 12:07:42.072890   72964 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0603 12:07:42.072943   72964 main.go:141] libmachine: (embed-certs-725022) Calling .GetSSHHostname
	I0603 12:07:42.073038   72964 ssh_runner.go:195] Run: cat /version.json
	I0603 12:07:42.073079   72964 main.go:141] libmachine: (embed-certs-725022) Calling .GetSSHHostname
	I0603 12:07:42.075688   72964 main.go:141] libmachine: (embed-certs-725022) DBG | domain embed-certs-725022 has defined MAC address 52:54:00:ba:41:8c in network mk-embed-certs-725022
	I0603 12:07:42.075970   72964 main.go:141] libmachine: (embed-certs-725022) DBG | domain embed-certs-725022 has defined MAC address 52:54:00:ba:41:8c in network mk-embed-certs-725022
	I0603 12:07:42.076186   72964 main.go:141] libmachine: (embed-certs-725022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:41:8c", ip: ""} in network mk-embed-certs-725022: {Iface:virbr3 ExpiryTime:2024-06-03 12:58:10 +0000 UTC Type:0 Mac:52:54:00:ba:41:8c Iaid: IPaddr:192.168.72.245 Prefix:24 Hostname:embed-certs-725022 Clientid:01:52:54:00:ba:41:8c}
	I0603 12:07:42.076212   72964 main.go:141] libmachine: (embed-certs-725022) DBG | domain embed-certs-725022 has defined IP address 192.168.72.245 and MAC address 52:54:00:ba:41:8c in network mk-embed-certs-725022
	I0603 12:07:42.076458   72964 main.go:141] libmachine: (embed-certs-725022) Calling .GetSSHPort
	I0603 12:07:42.076465   72964 main.go:141] libmachine: (embed-certs-725022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:41:8c", ip: ""} in network mk-embed-certs-725022: {Iface:virbr3 ExpiryTime:2024-06-03 12:58:10 +0000 UTC Type:0 Mac:52:54:00:ba:41:8c Iaid: IPaddr:192.168.72.245 Prefix:24 Hostname:embed-certs-725022 Clientid:01:52:54:00:ba:41:8c}
	I0603 12:07:42.076501   72964 main.go:141] libmachine: (embed-certs-725022) DBG | domain embed-certs-725022 has defined IP address 192.168.72.245 and MAC address 52:54:00:ba:41:8c in network mk-embed-certs-725022
	I0603 12:07:42.076625   72964 main.go:141] libmachine: (embed-certs-725022) Calling .GetSSHKeyPath
	I0603 12:07:42.076694   72964 main.go:141] libmachine: (embed-certs-725022) Calling .GetSSHPort
	I0603 12:07:42.076815   72964 main.go:141] libmachine: (embed-certs-725022) Calling .GetSSHUsername
	I0603 12:07:42.076900   72964 main.go:141] libmachine: (embed-certs-725022) Calling .GetSSHKeyPath
	I0603 12:07:42.076993   72964 sshutil.go:53] new ssh client: &{IP:192.168.72.245 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19008-7755/.minikube/machines/embed-certs-725022/id_rsa Username:docker}
	I0603 12:07:42.077071   72964 main.go:141] libmachine: (embed-certs-725022) Calling .GetSSHUsername
	I0603 12:07:42.077227   72964 sshutil.go:53] new ssh client: &{IP:192.168.72.245 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19008-7755/.minikube/machines/embed-certs-725022/id_rsa Username:docker}
	I0603 12:07:42.178869   72964 ssh_runner.go:195] Run: systemctl --version
	I0603 12:07:42.184948   72964 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0603 12:07:42.333045   72964 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0603 12:07:42.339178   72964 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0603 12:07:42.339249   72964 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0603 12:07:42.356377   72964 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0603 12:07:42.356399   72964 start.go:494] detecting cgroup driver to use...
	I0603 12:07:42.356453   72964 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0603 12:07:42.374098   72964 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0603 12:07:42.387377   72964 docker.go:217] disabling cri-docker service (if available) ...
	I0603 12:07:42.387429   72964 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0603 12:07:42.400193   72964 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0603 12:07:42.413009   72964 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0603 12:07:42.524443   72964 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0603 12:07:42.670114   72964 docker.go:233] disabling docker service ...
	I0603 12:07:42.670194   72964 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0603 12:07:42.686085   72964 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0603 12:07:42.699222   72964 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0603 12:07:42.849018   72964 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0603 12:07:42.987143   72964 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0603 12:07:43.001493   72964 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0603 12:07:43.020011   72964 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0603 12:07:43.020077   72964 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 12:07:43.030835   72964 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0603 12:07:43.030903   72964 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 12:07:43.041325   72964 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 12:07:43.051229   72964 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 12:07:43.061184   72964 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0603 12:07:43.071245   72964 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 12:07:43.082466   72964 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 12:07:43.100381   72964 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
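
The series of sed runs above rewrites /etc/crio/crio.conf.d/02-crio.conf in place: pin the pause image, force the cgroupfs cgroup manager, reset conmon_cgroup to "pod", and open net.ipv4.ip_unprivileged_port_start=0 through default_sysctls. A small Go sketch of the same rewrites applied to the config text follows; it is regex-based like the sed commands and makes no attempt at real TOML parsing.

    package main

    import (
    	"fmt"
    	"regexp"
    )

    // rewriteCrioConf mirrors the sed edits from the log: pause image, cgroup
    // manager, conmon cgroup, and the unprivileged-port sysctl.
    func rewriteCrioConf(conf string) string {
    	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
    		ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.9"`)
    	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
    		ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)
    	conf = regexp.MustCompile(`(?m)^conmon_cgroup = .*\n`).ReplaceAllString(conf, "")
    	conf = regexp.MustCompile(`(?m)^cgroup_manager = .*$`).
    		ReplaceAllString(conf, "$0\nconmon_cgroup = \"pod\"")
    	if !regexp.MustCompile(`(?m)^ *default_sysctls`).MatchString(conf) {
    		conf += "default_sysctls = [\n  \"net.ipv4.ip_unprivileged_port_start=0\",\n]\n"
    	}
    	return conf
    }

    func main() {
    	in := "pause_image = \"registry.k8s.io/pause:3.8\"\ncgroup_manager = \"systemd\"\nconmon_cgroup = \"system.slice\"\n"
    	fmt.Print(rewriteCrioConf(in))
    }
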
	I0603 12:07:43.112802   72964 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0603 12:07:43.123404   72964 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0603 12:07:43.123452   72964 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0603 12:07:43.136935   72964 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0603 12:07:43.145996   72964 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0603 12:07:43.269844   72964 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0603 12:07:43.404166   72964 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0603 12:07:43.404238   72964 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0603 12:07:43.411376   72964 start.go:562] Will wait 60s for crictl version
	I0603 12:07:43.411419   72964 ssh_runner.go:195] Run: which crictl
	I0603 12:07:43.415081   72964 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0603 12:07:43.455429   72964 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0603 12:07:43.455514   72964 ssh_runner.go:195] Run: crio --version
	I0603 12:07:43.483743   72964 ssh_runner.go:195] Run: crio --version
	I0603 12:07:43.516513   72964 out.go:177] * Preparing Kubernetes v1.30.1 on CRI-O 1.29.1 ...
	I0603 12:07:41.613036   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:07:43.613398   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:07:43.517710   72964 main.go:141] libmachine: (embed-certs-725022) Calling .GetIP
	I0603 12:07:43.520057   72964 main.go:141] libmachine: (embed-certs-725022) DBG | domain embed-certs-725022 has defined MAC address 52:54:00:ba:41:8c in network mk-embed-certs-725022
	I0603 12:07:43.520336   72964 main.go:141] libmachine: (embed-certs-725022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:41:8c", ip: ""} in network mk-embed-certs-725022: {Iface:virbr3 ExpiryTime:2024-06-03 12:58:10 +0000 UTC Type:0 Mac:52:54:00:ba:41:8c Iaid: IPaddr:192.168.72.245 Prefix:24 Hostname:embed-certs-725022 Clientid:01:52:54:00:ba:41:8c}
	I0603 12:07:43.520365   72964 main.go:141] libmachine: (embed-certs-725022) DBG | domain embed-certs-725022 has defined IP address 192.168.72.245 and MAC address 52:54:00:ba:41:8c in network mk-embed-certs-725022
	I0603 12:07:43.520579   72964 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0603 12:07:43.524653   72964 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0603 12:07:43.537864   72964 kubeadm.go:877] updating cluster {Name:embed-certs-725022 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.30.1 ClusterName:embed-certs-725022 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.245 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:
false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0603 12:07:43.537984   72964 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime crio
	I0603 12:07:43.538045   72964 ssh_runner.go:195] Run: sudo crictl images --output json
	I0603 12:07:43.574677   72964 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.1". assuming images are not preloaded.
	I0603 12:07:43.574738   72964 ssh_runner.go:195] Run: which lz4
	I0603 12:07:43.579297   72964 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0603 12:07:43.583831   72964 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0603 12:07:43.583865   72964 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (394537501 bytes)
	I0603 12:07:40.438270   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:07:40.938253   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:07:41.438610   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:07:41.938408   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:07:42.438825   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:07:42.938492   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:07:43.439013   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:07:43.938232   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:07:44.438816   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:07:44.938476   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
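
The half-second cadence of the sudo pgrep -xnf kube-apiserver.*minikube.* runs above is the apiserver-process wait loop: keep probing until pgrep reports a PID or a deadline passes. A local sketch of that loop with os/exec follows; it runs pgrep directly rather than over SSH, and the 60s deadline is an assumption rather than the value minikube uses.

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    	"time"
    )

    // waitForProcess polls pgrep until the pattern matches a running process or
    // the deadline expires, mirroring the 500ms retry cadence seen in the log.
    func waitForProcess(pattern string, timeout time.Duration) (string, error) {
    	deadline := time.Now().Add(timeout)
    	for {
    		out, err := exec.Command("pgrep", "-xnf", pattern).Output()
    		if err == nil {
    			return strings.TrimSpace(string(out)), nil // newest matching PID
    		}
    		if time.Now().After(deadline) {
    			return "", fmt.Errorf("no process matching %q after %v", pattern, timeout)
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    }

    func main() {
    	pid, err := waitForProcess("kube-apiserver.*minikube.*", 60*time.Second)
    	fmt.Println(pid, err)
    }
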
	I0603 12:07:41.581827   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:07:44.084271   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:07:46.113319   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:07:48.117970   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:07:45.006860   72964 crio.go:462] duration metric: took 1.427589912s to copy over tarball
	I0603 12:07:45.006945   72964 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0603 12:07:47.289942   72964 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.282964729s)
	I0603 12:07:47.289966   72964 crio.go:469] duration metric: took 2.283075477s to extract the tarball
	I0603 12:07:47.289973   72964 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0603 12:07:47.330106   72964 ssh_runner.go:195] Run: sudo crictl images --output json
	I0603 12:07:47.377154   72964 crio.go:514] all images are preloaded for cri-o runtime.
	I0603 12:07:47.377180   72964 cache_images.go:84] Images are preloaded, skipping loading
	I0603 12:07:47.377189   72964 kubeadm.go:928] updating node { 192.168.72.245 8443 v1.30.1 crio true true} ...
	I0603 12:07:47.377334   72964 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-725022 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.245
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.1 ClusterName:embed-certs-725022 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
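
The [Unit]/[Service] fragment above is the systemd drop-in (10-kubeadm.conf) that pins the kubelet flags: bootstrap kubeconfig, config.yaml, hostname override, and node IP. Below is a sketch that renders such a drop-in from a few parameters with text/template; the struct and template text are illustrative and not minikube's actual template.

    package main

    import (
    	"os"
    	"text/template"
    )

    // kubeletUnit holds the handful of values that vary per node.
    type kubeletUnit struct {
    	KubeletPath string
    	Hostname    string
    	NodeIP      string
    }

    const dropIn = `[Unit]
    Wants=crio.service

    [Service]
    ExecStart=
    ExecStart={{.KubeletPath}} --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override={{.Hostname}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

    [Install]
    `

    func main() {
    	tpl := template.Must(template.New("kubelet").Parse(dropIn))
    	_ = tpl.Execute(os.Stdout, kubeletUnit{
    		KubeletPath: "/var/lib/minikube/binaries/v1.30.1/kubelet",
    		Hostname:    "embed-certs-725022",
    		NodeIP:      "192.168.72.245",
    	})
    }
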
	I0603 12:07:47.377416   72964 ssh_runner.go:195] Run: crio config
	I0603 12:07:47.436104   72964 cni.go:84] Creating CNI manager for ""
	I0603 12:07:47.436125   72964 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0603 12:07:47.436137   72964 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0603 12:07:47.436165   72964 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.245 APIServerPort:8443 KubernetesVersion:v1.30.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-725022 NodeName:embed-certs-725022 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.245"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.245 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodP
ath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0603 12:07:47.436330   72964 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.245
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-725022"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.245
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.245"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0603 12:07:47.436402   72964 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.1
	I0603 12:07:47.447427   72964 binaries.go:44] Found k8s binaries, skipping transfer
	I0603 12:07:47.447498   72964 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0603 12:07:47.459332   72964 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I0603 12:07:47.477962   72964 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0603 12:07:47.495897   72964 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2162 bytes)
	I0603 12:07:47.513033   72964 ssh_runner.go:195] Run: grep 192.168.72.245	control-plane.minikube.internal$ /etc/hosts
	I0603 12:07:47.517042   72964 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.245	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0603 12:07:47.529663   72964 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0603 12:07:47.649313   72964 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0603 12:07:47.666234   72964 certs.go:68] Setting up /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/embed-certs-725022 for IP: 192.168.72.245
	I0603 12:07:47.666258   72964 certs.go:194] generating shared ca certs ...
	I0603 12:07:47.666279   72964 certs.go:226] acquiring lock for ca certs: {Name:mk50092df3bdecbf959926adc377ee06b75aacd9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 12:07:47.666440   72964 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19008-7755/.minikube/ca.key
	I0603 12:07:47.666477   72964 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19008-7755/.minikube/proxy-client-ca.key
	I0603 12:07:47.666487   72964 certs.go:256] generating profile certs ...
	I0603 12:07:47.666567   72964 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/embed-certs-725022/client.key
	I0603 12:07:47.666623   72964 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/embed-certs-725022/apiserver.key.8c3ea0d5
	I0603 12:07:47.666712   72964 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/embed-certs-725022/proxy-client.key
	I0603 12:07:47.666874   72964 certs.go:484] found cert: /home/jenkins/minikube-integration/19008-7755/.minikube/certs/15028.pem (1338 bytes)
	W0603 12:07:47.666916   72964 certs.go:480] ignoring /home/jenkins/minikube-integration/19008-7755/.minikube/certs/15028_empty.pem, impossibly tiny 0 bytes
	I0603 12:07:47.666926   72964 certs.go:484] found cert: /home/jenkins/minikube-integration/19008-7755/.minikube/certs/ca-key.pem (1679 bytes)
	I0603 12:07:47.666947   72964 certs.go:484] found cert: /home/jenkins/minikube-integration/19008-7755/.minikube/certs/ca.pem (1082 bytes)
	I0603 12:07:47.666968   72964 certs.go:484] found cert: /home/jenkins/minikube-integration/19008-7755/.minikube/certs/cert.pem (1123 bytes)
	I0603 12:07:47.666988   72964 certs.go:484] found cert: /home/jenkins/minikube-integration/19008-7755/.minikube/certs/key.pem (1679 bytes)
	I0603 12:07:47.667026   72964 certs.go:484] found cert: /home/jenkins/minikube-integration/19008-7755/.minikube/files/etc/ssl/certs/150282.pem (1708 bytes)
	I0603 12:07:47.667721   72964 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0603 12:07:47.705180   72964 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0603 12:07:47.748552   72964 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0603 12:07:47.780173   72964 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0603 12:07:47.812902   72964 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/embed-certs-725022/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0603 12:07:47.844793   72964 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/embed-certs-725022/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0603 12:07:47.875181   72964 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/embed-certs-725022/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0603 12:07:47.899905   72964 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/embed-certs-725022/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0603 12:07:47.925039   72964 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0603 12:07:47.950701   72964 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/certs/15028.pem --> /usr/share/ca-certificates/15028.pem (1338 bytes)
	I0603 12:07:47.975798   72964 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/files/etc/ssl/certs/150282.pem --> /usr/share/ca-certificates/150282.pem (1708 bytes)
	I0603 12:07:48.002827   72964 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0603 12:07:48.021050   72964 ssh_runner.go:195] Run: openssl version
	I0603 12:07:48.027977   72964 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0603 12:07:48.043764   72964 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0603 12:07:48.050265   72964 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jun  3 10:39 /usr/share/ca-certificates/minikubeCA.pem
	I0603 12:07:48.050315   72964 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0603 12:07:48.056387   72964 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0603 12:07:48.067816   72964 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15028.pem && ln -fs /usr/share/ca-certificates/15028.pem /etc/ssl/certs/15028.pem"
	I0603 12:07:48.083715   72964 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15028.pem
	I0603 12:07:48.088813   72964 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jun  3 10:51 /usr/share/ca-certificates/15028.pem
	I0603 12:07:48.088870   72964 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15028.pem
	I0603 12:07:48.094833   72964 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/15028.pem /etc/ssl/certs/51391683.0"
	I0603 12:07:48.108005   72964 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/150282.pem && ln -fs /usr/share/ca-certificates/150282.pem /etc/ssl/certs/150282.pem"
	I0603 12:07:48.120434   72964 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/150282.pem
	I0603 12:07:48.125542   72964 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jun  3 10:51 /usr/share/ca-certificates/150282.pem
	I0603 12:07:48.125603   72964 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/150282.pem
	I0603 12:07:48.132060   72964 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/150282.pem /etc/ssl/certs/3ec20f2e.0"
	I0603 12:07:48.143594   72964 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0603 12:07:48.148392   72964 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0603 12:07:48.154571   72964 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0603 12:07:48.160573   72964 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0603 12:07:48.167146   72964 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0603 12:07:48.175232   72964 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0603 12:07:48.182197   72964 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
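
Each openssl x509 -noout -in ... -checkend 86400 run above asks whether the certificate becomes invalid within the next 24 hours; a non-zero exit would force regeneration before the control plane is restarted. The same check in Go with crypto/x509 is sketched below; the PEM path is a placeholder.

    package main

    import (
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"os"
    	"time"
    )

    // expiresWithin reports whether the first certificate in the PEM file
    // becomes invalid inside the given window (the log uses 86400s = 24h).
    func expiresWithin(path string, window time.Duration) (bool, error) {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return false, err
    	}
    	block, _ := pem.Decode(data)
    	if block == nil {
    		return false, fmt.Errorf("no PEM block in %s", path)
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		return false, err
    	}
    	return time.Now().Add(window).After(cert.NotAfter), nil
    }

    func main() {
    	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
    	fmt.Println(soon, err)
    }
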
	I0603 12:07:48.188588   72964 kubeadm.go:391] StartCluster: {Name:embed-certs-725022 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30
.1 ClusterName:embed-certs-725022 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.245 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fal
se MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0603 12:07:48.188680   72964 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0603 12:07:48.188733   72964 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0603 12:07:48.229134   72964 cri.go:89] found id: ""
	I0603 12:07:48.229215   72964 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0603 12:07:48.241663   72964 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0603 12:07:48.241687   72964 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0603 12:07:48.241692   72964 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0603 12:07:48.241756   72964 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0603 12:07:48.252641   72964 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0603 12:07:48.253644   72964 kubeconfig.go:125] found "embed-certs-725022" server: "https://192.168.72.245:8443"
	I0603 12:07:48.255726   72964 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0603 12:07:48.265816   72964 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.72.245
	I0603 12:07:48.265849   72964 kubeadm.go:1154] stopping kube-system containers ...
	I0603 12:07:48.265862   72964 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0603 12:07:48.265956   72964 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0603 12:07:48.306408   72964 cri.go:89] found id: ""
	I0603 12:07:48.306471   72964 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0603 12:07:48.324859   72964 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0603 12:07:48.336076   72964 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0603 12:07:48.336098   72964 kubeadm.go:156] found existing configuration files:
	
	I0603 12:07:48.336159   72964 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0603 12:07:48.347274   72964 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0603 12:07:48.347328   72964 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0603 12:07:48.358447   72964 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0603 12:07:48.369460   72964 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0603 12:07:48.369509   72964 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0603 12:07:48.379714   72964 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0603 12:07:48.390460   72964 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0603 12:07:48.390506   72964 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0603 12:07:48.401178   72964 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0603 12:07:48.411383   72964 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0603 12:07:48.411423   72964 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
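
The four grep/rm pairs above implement the stale-kubeconfig cleanup: each of admin.conf, kubelet.conf, controller-manager.conf, and scheduler.conf is removed unless it already points at https://control-plane.minikube.internal:8443 (in this run they are simply missing, so the greps fail and the rm -f calls are no-ops). A sketch of the same loop follows; running it for real requires root and it should be read as illustrative only.

    package main

    import (
    	"fmt"
    	"os"
    	"strings"
    )

    func main() {
    	const endpoint = "https://control-plane.minikube.internal:8443"
    	files := []string{
    		"/etc/kubernetes/admin.conf",
    		"/etc/kubernetes/kubelet.conf",
    		"/etc/kubernetes/controller-manager.conf",
    		"/etc/kubernetes/scheduler.conf",
    	}
    	for _, f := range files {
    		data, err := os.ReadFile(f)
    		if err == nil && strings.Contains(string(data), endpoint) {
    			continue // already points at the expected control plane, keep it
    		}
    		// Missing or pointing elsewhere: remove it so kubeadm regenerates it.
    		if err := os.Remove(f); err != nil && !os.IsNotExist(err) {
    			fmt.Println("remove failed:", err)
    		}
    	}
    }
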
	I0603 12:07:48.421813   72964 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0603 12:07:48.434585   72964 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0603 12:07:48.561075   72964 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0603 12:07:49.278187   72964 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0603 12:07:49.504897   72964 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0603 12:07:49.559494   72964 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0603 12:07:49.634949   72964 api_server.go:52] waiting for apiserver process to appear ...
	I0603 12:07:49.635051   72964 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:07:45.438738   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:07:45.939144   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:07:46.438431   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:07:46.938360   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:07:47.438811   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:07:47.938857   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:07:48.438849   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:07:48.938531   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:07:49.438876   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:07:49.938908   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:07:46.581939   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:07:48.584466   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:07:50.635461   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:07:53.112719   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:07:50.135411   72964 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:07:50.635951   72964 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:07:51.136119   72964 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:07:51.158722   72964 api_server.go:72] duration metric: took 1.52377732s to wait for apiserver process to appear ...
	I0603 12:07:51.158747   72964 api_server.go:88] waiting for apiserver healthz status ...
	I0603 12:07:51.158767   72964 api_server.go:253] Checking apiserver healthz at https://192.168.72.245:8443/healthz ...
	I0603 12:07:54.082978   72964 api_server.go:279] https://192.168.72.245:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0603 12:07:54.083005   72964 api_server.go:103] status: https://192.168.72.245:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0603 12:07:54.083017   72964 api_server.go:253] Checking apiserver healthz at https://192.168.72.245:8443/healthz ...
	I0603 12:07:54.092290   72964 api_server.go:279] https://192.168.72.245:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0603 12:07:54.092311   72964 api_server.go:103] status: https://192.168.72.245:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0603 12:07:54.159522   72964 api_server.go:253] Checking apiserver healthz at https://192.168.72.245:8443/healthz ...
	I0603 12:07:54.173284   72964 api_server.go:279] https://192.168.72.245:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0603 12:07:54.173308   72964 api_server.go:103] status: https://192.168.72.245:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0603 12:07:54.658949   72964 api_server.go:253] Checking apiserver healthz at https://192.168.72.245:8443/healthz ...
	I0603 12:07:54.663966   72964 api_server.go:279] https://192.168.72.245:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0603 12:07:54.663991   72964 api_server.go:103] status: https://192.168.72.245:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0603 12:07:50.438966   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:07:50.938952   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:07:51.439179   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:07:51.938804   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:07:52.438327   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:07:52.938677   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:07:53.438995   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:07:53.938976   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:07:54.438174   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:07:54.938412   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:07:50.641189   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:07:53.081531   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:07:55.081845   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:07:55.159125   72964 api_server.go:253] Checking apiserver healthz at https://192.168.72.245:8443/healthz ...
	I0603 12:07:55.168267   72964 api_server.go:279] https://192.168.72.245:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0603 12:07:55.168307   72964 api_server.go:103] status: https://192.168.72.245:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0603 12:07:55.658824   72964 api_server.go:253] Checking apiserver healthz at https://192.168.72.245:8443/healthz ...
	I0603 12:07:55.663523   72964 api_server.go:279] https://192.168.72.245:8443/healthz returned 200:
	ok
	I0603 12:07:55.670352   72964 api_server.go:141] control plane version: v1.30.1
	I0603 12:07:55.670383   72964 api_server.go:131] duration metric: took 4.511629799s to wait for apiserver health ...
	I0603 12:07:55.670391   72964 cni.go:84] Creating CNI manager for ""
	I0603 12:07:55.670397   72964 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0603 12:07:55.672360   72964 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0603 12:07:55.113539   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:07:57.613236   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:07:55.673720   72964 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0603 12:07:55.686773   72964 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0603 12:07:55.716937   72964 system_pods.go:43] waiting for kube-system pods to appear ...
	I0603 12:07:55.729237   72964 system_pods.go:59] 8 kube-system pods found
	I0603 12:07:55.729267   72964 system_pods.go:61] "coredns-7db6d8ff4d-thrfl" [efc31931-5040-4bb9-92e0-cdda477b38b2] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0603 12:07:55.729274   72964 system_pods.go:61] "etcd-embed-certs-725022" [47be7787-e8ae-4a63-9209-943edeec91b6] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0603 12:07:55.729281   72964 system_pods.go:61] "kube-apiserver-embed-certs-725022" [2812f362-ddb8-4f45-bdfe-ba5d90f3b33f] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0603 12:07:55.729287   72964 system_pods.go:61] "kube-controller-manager-embed-certs-725022" [97666e49-31ac-41c0-a49c-0db51d6c07b6] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0603 12:07:55.729294   72964 system_pods.go:61] "kube-proxy-d5ztj" [854c88f3-f0ab-4885-95a0-8134db48fc84] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0603 12:07:55.729300   72964 system_pods.go:61] "kube-scheduler-embed-certs-725022" [df602caf-2ca4-4963-b724-5a6e8de65c78] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0603 12:07:55.729306   72964 system_pods.go:61] "metrics-server-569cc877fc-8jrnd" [3087c05b-9a8e-4bf7-bbe7-79f3c5540bf7] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0603 12:07:55.729313   72964 system_pods.go:61] "storage-provisioner" [68eeb37a-7098-4e87-8384-3399c2bbc583] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0603 12:07:55.729319   72964 system_pods.go:74] duration metric: took 12.368001ms to wait for pod list to return data ...
	I0603 12:07:55.729329   72964 node_conditions.go:102] verifying NodePressure condition ...
	I0603 12:07:55.733006   72964 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0603 12:07:55.733024   72964 node_conditions.go:123] node cpu capacity is 2
	I0603 12:07:55.733033   72964 node_conditions.go:105] duration metric: took 3.699303ms to run NodePressure ...
	I0603 12:07:55.733047   72964 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0603 12:07:56.040149   72964 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0603 12:07:56.050355   72964 kubeadm.go:733] kubelet initialised
	I0603 12:07:56.050376   72964 kubeadm.go:734] duration metric: took 10.199837ms waiting for restarted kubelet to initialise ...
	I0603 12:07:56.050383   72964 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0603 12:07:56.055536   72964 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-thrfl" in "kube-system" namespace to be "Ready" ...
	I0603 12:07:58.062682   72964 pod_ready.go:102] pod "coredns-7db6d8ff4d-thrfl" in "kube-system" namespace has status "Ready":"False"
	I0603 12:07:55.438798   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:07:55.938263   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:07:56.438870   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:07:56.938915   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:07:57.438799   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:07:57.938972   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:07:58.438367   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:07:58.939045   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:07:59.439020   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:07:59.938716   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:07:57.581813   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:08:00.080226   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:08:00.113886   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:08:02.613795   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:08:00.062724   72964 pod_ready.go:102] pod "coredns-7db6d8ff4d-thrfl" in "kube-system" namespace has status "Ready":"False"
	I0603 12:08:02.062937   72964 pod_ready.go:102] pod "coredns-7db6d8ff4d-thrfl" in "kube-system" namespace has status "Ready":"False"
	I0603 12:08:04.565302   72964 pod_ready.go:102] pod "coredns-7db6d8ff4d-thrfl" in "kube-system" namespace has status "Ready":"False"
	I0603 12:08:00.438789   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:00.938973   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:01.439098   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:01.938892   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:02.438978   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:02.938317   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:03.438969   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:03.938274   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:04.438255   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:04.938545   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:02.081713   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:08:04.082219   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:08:05.112940   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:08:07.113191   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:08:07.075333   72964 pod_ready.go:92] pod "coredns-7db6d8ff4d-thrfl" in "kube-system" namespace has status "Ready":"True"
	I0603 12:08:07.075361   72964 pod_ready.go:81] duration metric: took 11.019801293s for pod "coredns-7db6d8ff4d-thrfl" in "kube-system" namespace to be "Ready" ...
	I0603 12:08:07.075375   72964 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-725022" in "kube-system" namespace to be "Ready" ...
	I0603 12:08:08.583435   72964 pod_ready.go:92] pod "etcd-embed-certs-725022" in "kube-system" namespace has status "Ready":"True"
	I0603 12:08:08.583459   72964 pod_ready.go:81] duration metric: took 1.508076213s for pod "etcd-embed-certs-725022" in "kube-system" namespace to be "Ready" ...
	I0603 12:08:08.583468   72964 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-725022" in "kube-system" namespace to be "Ready" ...
	I0603 12:08:08.588791   72964 pod_ready.go:92] pod "kube-apiserver-embed-certs-725022" in "kube-system" namespace has status "Ready":"True"
	I0603 12:08:08.588817   72964 pod_ready.go:81] duration metric: took 5.342068ms for pod "kube-apiserver-embed-certs-725022" in "kube-system" namespace to be "Ready" ...
	I0603 12:08:08.588836   72964 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-725022" in "kube-system" namespace to be "Ready" ...
	I0603 12:08:08.593258   72964 pod_ready.go:92] pod "kube-controller-manager-embed-certs-725022" in "kube-system" namespace has status "Ready":"True"
	I0603 12:08:08.593279   72964 pod_ready.go:81] duration metric: took 4.43483ms for pod "kube-controller-manager-embed-certs-725022" in "kube-system" namespace to be "Ready" ...
	I0603 12:08:08.593292   72964 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-d5ztj" in "kube-system" namespace to be "Ready" ...
	I0603 12:08:08.601106   72964 pod_ready.go:92] pod "kube-proxy-d5ztj" in "kube-system" namespace has status "Ready":"True"
	I0603 12:08:08.601125   72964 pod_ready.go:81] duration metric: took 7.826962ms for pod "kube-proxy-d5ztj" in "kube-system" namespace to be "Ready" ...
	I0603 12:08:08.601133   72964 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-725022" in "kube-system" namespace to be "Ready" ...
	I0603 12:08:08.660242   72964 pod_ready.go:92] pod "kube-scheduler-embed-certs-725022" in "kube-system" namespace has status "Ready":"True"
	I0603 12:08:08.660275   72964 pod_ready.go:81] duration metric: took 59.134528ms for pod "kube-scheduler-embed-certs-725022" in "kube-system" namespace to be "Ready" ...
	I0603 12:08:08.660297   72964 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace to be "Ready" ...
	I0603 12:08:05.438368   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:05.938174   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:06.438995   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:06.939167   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:07.438451   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:07.938651   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:08.438892   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:08.938182   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:09.438548   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:09.938352   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:06.580980   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:08:08.583476   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:08:09.612231   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:08:11.613131   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:08:14.115179   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:08:10.667171   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:08:13.166284   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:08:10.438932   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:10.938156   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:11.438911   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:11.939064   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:12.438578   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:12.938389   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:13.438469   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:13.939000   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:14.438219   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:14.938949   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:11.081492   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:08:13.581052   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:08:16.612649   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:08:19.112795   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:08:15.166468   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:08:17.166591   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:08:19.666737   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:08:15.438709   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:15.938471   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:16.438909   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:16.939131   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:17.438995   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:17.938810   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:18.438615   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:18.938920   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:19.438966   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:19.938696   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:15.581276   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:08:17.581764   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:08:19.582048   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:08:21.116274   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:08:23.613288   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:08:21.667736   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:08:23.667798   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:08:20.438818   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:20.938625   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:21.439129   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:21.938488   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:22.438452   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:22.938328   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:23.438557   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:23.938427   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:24.438391   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:24.939088   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:22.080444   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:08:24.081387   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:08:26.113843   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:08:28.612076   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:08:26.165833   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:08:28.169171   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:08:25.439153   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:25.939073   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:26.438157   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:26.938755   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:27.438244   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:27.938149   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:28.439131   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:28.938855   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:29.439027   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:29.938159   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:26.081716   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:08:28.582162   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:08:30.613632   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:08:33.111746   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:08:30.667602   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:08:33.168233   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:08:30.438727   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:30.938281   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:31.438203   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:31.938903   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:32.438731   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:32.938479   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:33.438133   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 12:08:33.438202   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 12:08:33.480006   73662 cri.go:89] found id: ""
	I0603 12:08:33.480044   73662 logs.go:276] 0 containers: []
	W0603 12:08:33.480056   73662 logs.go:278] No container was found matching "kube-apiserver"
	I0603 12:08:33.480066   73662 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 12:08:33.480126   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 12:08:33.519446   73662 cri.go:89] found id: ""
	I0603 12:08:33.519469   73662 logs.go:276] 0 containers: []
	W0603 12:08:33.519476   73662 logs.go:278] No container was found matching "etcd"
	I0603 12:08:33.519480   73662 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 12:08:33.519536   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 12:08:33.553602   73662 cri.go:89] found id: ""
	I0603 12:08:33.553624   73662 logs.go:276] 0 containers: []
	W0603 12:08:33.553631   73662 logs.go:278] No container was found matching "coredns"
	I0603 12:08:33.553637   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 12:08:33.553692   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 12:08:33.588061   73662 cri.go:89] found id: ""
	I0603 12:08:33.588085   73662 logs.go:276] 0 containers: []
	W0603 12:08:33.588094   73662 logs.go:278] No container was found matching "kube-scheduler"
	I0603 12:08:33.588103   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 12:08:33.588155   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 12:08:33.623960   73662 cri.go:89] found id: ""
	I0603 12:08:33.623983   73662 logs.go:276] 0 containers: []
	W0603 12:08:33.623993   73662 logs.go:278] No container was found matching "kube-proxy"
	I0603 12:08:33.624000   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 12:08:33.624071   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 12:08:33.658829   73662 cri.go:89] found id: ""
	I0603 12:08:33.658873   73662 logs.go:276] 0 containers: []
	W0603 12:08:33.658885   73662 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 12:08:33.658893   73662 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 12:08:33.658956   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 12:08:33.699501   73662 cri.go:89] found id: ""
	I0603 12:08:33.699526   73662 logs.go:276] 0 containers: []
	W0603 12:08:33.699536   73662 logs.go:278] No container was found matching "kindnet"
	I0603 12:08:33.699544   73662 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 12:08:33.699601   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 12:08:33.732293   73662 cri.go:89] found id: ""
	I0603 12:08:33.732327   73662 logs.go:276] 0 containers: []
	W0603 12:08:33.732338   73662 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 12:08:33.732348   73662 logs.go:123] Gathering logs for kubelet ...
	I0603 12:08:33.732361   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 12:08:33.783990   73662 logs.go:123] Gathering logs for dmesg ...
	I0603 12:08:33.784027   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 12:08:33.800684   73662 logs.go:123] Gathering logs for describe nodes ...
	I0603 12:08:33.800711   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 12:08:33.939661   73662 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 12:08:33.939685   73662 logs.go:123] Gathering logs for CRI-O ...
	I0603 12:08:33.939699   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 12:08:34.006442   73662 logs.go:123] Gathering logs for container status ...
	I0603 12:08:34.006473   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 12:08:31.081400   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:08:33.582139   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:08:35.112488   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:08:37.113080   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:08:35.666988   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:08:38.166862   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:08:36.549129   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:36.562476   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 12:08:36.562536   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 12:08:36.600035   73662 cri.go:89] found id: ""
	I0603 12:08:36.600074   73662 logs.go:276] 0 containers: []
	W0603 12:08:36.600084   73662 logs.go:278] No container was found matching "kube-apiserver"
	I0603 12:08:36.600091   73662 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 12:08:36.600147   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 12:08:36.661954   73662 cri.go:89] found id: ""
	I0603 12:08:36.661981   73662 logs.go:276] 0 containers: []
	W0603 12:08:36.661989   73662 logs.go:278] No container was found matching "etcd"
	I0603 12:08:36.661996   73662 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 12:08:36.662082   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 12:08:36.699538   73662 cri.go:89] found id: ""
	I0603 12:08:36.699561   73662 logs.go:276] 0 containers: []
	W0603 12:08:36.699569   73662 logs.go:278] No container was found matching "coredns"
	I0603 12:08:36.699574   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 12:08:36.699619   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 12:08:36.735256   73662 cri.go:89] found id: ""
	I0603 12:08:36.735283   73662 logs.go:276] 0 containers: []
	W0603 12:08:36.735291   73662 logs.go:278] No container was found matching "kube-scheduler"
	I0603 12:08:36.735296   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 12:08:36.735356   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 12:08:36.779862   73662 cri.go:89] found id: ""
	I0603 12:08:36.779888   73662 logs.go:276] 0 containers: []
	W0603 12:08:36.779895   73662 logs.go:278] No container was found matching "kube-proxy"
	I0603 12:08:36.779900   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 12:08:36.779946   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 12:08:36.818146   73662 cri.go:89] found id: ""
	I0603 12:08:36.818180   73662 logs.go:276] 0 containers: []
	W0603 12:08:36.818190   73662 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 12:08:36.818198   73662 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 12:08:36.818256   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 12:08:36.855408   73662 cri.go:89] found id: ""
	I0603 12:08:36.855436   73662 logs.go:276] 0 containers: []
	W0603 12:08:36.855447   73662 logs.go:278] No container was found matching "kindnet"
	I0603 12:08:36.855455   73662 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 12:08:36.855521   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 12:08:36.891656   73662 cri.go:89] found id: ""
	I0603 12:08:36.891686   73662 logs.go:276] 0 containers: []
	W0603 12:08:36.891697   73662 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 12:08:36.891709   73662 logs.go:123] Gathering logs for container status ...
	I0603 12:08:36.891725   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 12:08:36.937992   73662 logs.go:123] Gathering logs for kubelet ...
	I0603 12:08:36.938025   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 12:08:36.992422   73662 logs.go:123] Gathering logs for dmesg ...
	I0603 12:08:36.992456   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 12:08:37.007064   73662 logs.go:123] Gathering logs for describe nodes ...
	I0603 12:08:37.007093   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 12:08:37.088103   73662 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 12:08:37.088124   73662 logs.go:123] Gathering logs for CRI-O ...
	I0603 12:08:37.088136   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 12:08:39.660794   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:39.674617   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 12:08:39.674694   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 12:08:39.711446   73662 cri.go:89] found id: ""
	I0603 12:08:39.711482   73662 logs.go:276] 0 containers: []
	W0603 12:08:39.711493   73662 logs.go:278] No container was found matching "kube-apiserver"
	I0603 12:08:39.711501   73662 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 12:08:39.711565   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 12:08:39.745918   73662 cri.go:89] found id: ""
	I0603 12:08:39.745947   73662 logs.go:276] 0 containers: []
	W0603 12:08:39.745957   73662 logs.go:278] No container was found matching "etcd"
	I0603 12:08:39.745964   73662 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 12:08:39.746013   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 12:08:39.780713   73662 cri.go:89] found id: ""
	I0603 12:08:39.780739   73662 logs.go:276] 0 containers: []
	W0603 12:08:39.780760   73662 logs.go:278] No container was found matching "coredns"
	I0603 12:08:39.780777   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 12:08:39.780839   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 12:08:39.815657   73662 cri.go:89] found id: ""
	I0603 12:08:39.815685   73662 logs.go:276] 0 containers: []
	W0603 12:08:39.815696   73662 logs.go:278] No container was found matching "kube-scheduler"
	I0603 12:08:39.815703   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 12:08:39.815769   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 12:08:39.849403   73662 cri.go:89] found id: ""
	I0603 12:08:39.849439   73662 logs.go:276] 0 containers: []
	W0603 12:08:39.849449   73662 logs.go:278] No container was found matching "kube-proxy"
	I0603 12:08:39.849456   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 12:08:39.849524   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 12:08:39.884830   73662 cri.go:89] found id: ""
	I0603 12:08:39.884876   73662 logs.go:276] 0 containers: []
	W0603 12:08:39.884887   73662 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 12:08:39.884894   73662 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 12:08:39.884954   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 12:08:39.917820   73662 cri.go:89] found id: ""
	I0603 12:08:39.917853   73662 logs.go:276] 0 containers: []
	W0603 12:08:39.917863   73662 logs.go:278] No container was found matching "kindnet"
	I0603 12:08:39.917871   73662 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 12:08:39.917928   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 12:08:39.955294   73662 cri.go:89] found id: ""
	I0603 12:08:39.955330   73662 logs.go:276] 0 containers: []
	W0603 12:08:39.955340   73662 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 12:08:39.955350   73662 logs.go:123] Gathering logs for container status ...
	I0603 12:08:39.955364   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 12:08:39.997553   73662 logs.go:123] Gathering logs for kubelet ...
	I0603 12:08:39.997577   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 12:08:40.052216   73662 logs.go:123] Gathering logs for dmesg ...
	I0603 12:08:40.052251   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 12:08:40.066377   73662 logs.go:123] Gathering logs for describe nodes ...
	I0603 12:08:40.066405   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0603 12:08:36.080739   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:08:38.580681   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:08:39.611998   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:08:41.613058   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:08:44.112634   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:08:40.168134   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:08:42.666329   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:08:44.666738   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	W0603 12:08:40.145631   73662 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 12:08:40.145653   73662 logs.go:123] Gathering logs for CRI-O ...
	I0603 12:08:40.145668   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 12:08:42.718782   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:42.732121   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 12:08:42.732197   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 12:08:42.766418   73662 cri.go:89] found id: ""
	I0603 12:08:42.766443   73662 logs.go:276] 0 containers: []
	W0603 12:08:42.766451   73662 logs.go:278] No container was found matching "kube-apiserver"
	I0603 12:08:42.766456   73662 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 12:08:42.766503   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 12:08:42.809790   73662 cri.go:89] found id: ""
	I0603 12:08:42.809821   73662 logs.go:276] 0 containers: []
	W0603 12:08:42.809830   73662 logs.go:278] No container was found matching "etcd"
	I0603 12:08:42.809836   73662 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 12:08:42.809893   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 12:08:42.843410   73662 cri.go:89] found id: ""
	I0603 12:08:42.843439   73662 logs.go:276] 0 containers: []
	W0603 12:08:42.843446   73662 logs.go:278] No container was found matching "coredns"
	I0603 12:08:42.843456   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 12:08:42.843510   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 12:08:42.879150   73662 cri.go:89] found id: ""
	I0603 12:08:42.879177   73662 logs.go:276] 0 containers: []
	W0603 12:08:42.879186   73662 logs.go:278] No container was found matching "kube-scheduler"
	I0603 12:08:42.879193   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 12:08:42.879256   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 12:08:42.914565   73662 cri.go:89] found id: ""
	I0603 12:08:42.914598   73662 logs.go:276] 0 containers: []
	W0603 12:08:42.914609   73662 logs.go:278] No container was found matching "kube-proxy"
	I0603 12:08:42.914616   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 12:08:42.914680   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 12:08:42.949467   73662 cri.go:89] found id: ""
	I0603 12:08:42.949496   73662 logs.go:276] 0 containers: []
	W0603 12:08:42.949506   73662 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 12:08:42.949513   73662 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 12:08:42.949563   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 12:08:42.984235   73662 cri.go:89] found id: ""
	I0603 12:08:42.984257   73662 logs.go:276] 0 containers: []
	W0603 12:08:42.984264   73662 logs.go:278] No container was found matching "kindnet"
	I0603 12:08:42.984269   73662 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 12:08:42.984314   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 12:08:43.027786   73662 cri.go:89] found id: ""
	I0603 12:08:43.027816   73662 logs.go:276] 0 containers: []
	W0603 12:08:43.027827   73662 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 12:08:43.027838   73662 logs.go:123] Gathering logs for kubelet ...
	I0603 12:08:43.027852   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 12:08:43.099184   73662 logs.go:123] Gathering logs for dmesg ...
	I0603 12:08:43.099212   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 12:08:43.124733   73662 logs.go:123] Gathering logs for describe nodes ...
	I0603 12:08:43.124755   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 12:08:43.194716   73662 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 12:08:43.194741   73662 logs.go:123] Gathering logs for CRI-O ...
	I0603 12:08:43.194759   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 12:08:43.275948   73662 logs.go:123] Gathering logs for container status ...
	I0603 12:08:43.275982   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 12:08:41.080968   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:08:43.081892   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:08:45.082261   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:08:46.113795   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:08:48.612577   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:08:47.166497   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:08:49.167122   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:08:45.819178   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:45.832301   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 12:08:45.832391   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 12:08:45.867947   73662 cri.go:89] found id: ""
	I0603 12:08:45.867979   73662 logs.go:276] 0 containers: []
	W0603 12:08:45.867990   73662 logs.go:278] No container was found matching "kube-apiserver"
	I0603 12:08:45.867998   73662 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 12:08:45.868050   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 12:08:45.909498   73662 cri.go:89] found id: ""
	I0603 12:08:45.909529   73662 logs.go:276] 0 containers: []
	W0603 12:08:45.909541   73662 logs.go:278] No container was found matching "etcd"
	I0603 12:08:45.909552   73662 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 12:08:45.909614   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 12:08:45.942313   73662 cri.go:89] found id: ""
	I0603 12:08:45.942343   73662 logs.go:276] 0 containers: []
	W0603 12:08:45.942353   73662 logs.go:278] No container was found matching "coredns"
	I0603 12:08:45.942361   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 12:08:45.942425   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 12:08:45.976217   73662 cri.go:89] found id: ""
	I0603 12:08:45.976246   73662 logs.go:276] 0 containers: []
	W0603 12:08:45.976254   73662 logs.go:278] No container was found matching "kube-scheduler"
	I0603 12:08:45.976260   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 12:08:45.976306   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 12:08:46.010553   73662 cri.go:89] found id: ""
	I0603 12:08:46.010583   73662 logs.go:276] 0 containers: []
	W0603 12:08:46.010593   73662 logs.go:278] No container was found matching "kube-proxy"
	I0603 12:08:46.010599   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 12:08:46.010675   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 12:08:46.048459   73662 cri.go:89] found id: ""
	I0603 12:08:46.048481   73662 logs.go:276] 0 containers: []
	W0603 12:08:46.048489   73662 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 12:08:46.048495   73662 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 12:08:46.048540   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 12:08:46.084823   73662 cri.go:89] found id: ""
	I0603 12:08:46.084852   73662 logs.go:276] 0 containers: []
	W0603 12:08:46.084862   73662 logs.go:278] No container was found matching "kindnet"
	I0603 12:08:46.084869   73662 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 12:08:46.084920   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 12:08:46.129011   73662 cri.go:89] found id: ""
	I0603 12:08:46.129036   73662 logs.go:276] 0 containers: []
	W0603 12:08:46.129046   73662 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 12:08:46.129055   73662 logs.go:123] Gathering logs for dmesg ...
	I0603 12:08:46.129069   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 12:08:46.144145   73662 logs.go:123] Gathering logs for describe nodes ...
	I0603 12:08:46.144179   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 12:08:46.213800   73662 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 12:08:46.213826   73662 logs.go:123] Gathering logs for CRI-O ...
	I0603 12:08:46.213841   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 12:08:46.294423   73662 logs.go:123] Gathering logs for container status ...
	I0603 12:08:46.294453   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 12:08:46.334408   73662 logs.go:123] Gathering logs for kubelet ...
	I0603 12:08:46.334436   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 12:08:48.888798   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:48.901815   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 12:08:48.901876   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 12:08:48.935266   73662 cri.go:89] found id: ""
	I0603 12:08:48.935290   73662 logs.go:276] 0 containers: []
	W0603 12:08:48.935301   73662 logs.go:278] No container was found matching "kube-apiserver"
	I0603 12:08:48.935308   73662 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 12:08:48.935375   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 12:08:48.969640   73662 cri.go:89] found id: ""
	I0603 12:08:48.969666   73662 logs.go:276] 0 containers: []
	W0603 12:08:48.969673   73662 logs.go:278] No container was found matching "etcd"
	I0603 12:08:48.969678   73662 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 12:08:48.969739   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 12:08:49.003697   73662 cri.go:89] found id: ""
	I0603 12:08:49.003725   73662 logs.go:276] 0 containers: []
	W0603 12:08:49.003736   73662 logs.go:278] No container was found matching "coredns"
	I0603 12:08:49.003743   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 12:08:49.003800   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 12:08:49.037808   73662 cri.go:89] found id: ""
	I0603 12:08:49.037837   73662 logs.go:276] 0 containers: []
	W0603 12:08:49.037847   73662 logs.go:278] No container was found matching "kube-scheduler"
	I0603 12:08:49.037879   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 12:08:49.037947   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 12:08:49.071844   73662 cri.go:89] found id: ""
	I0603 12:08:49.071875   73662 logs.go:276] 0 containers: []
	W0603 12:08:49.071885   73662 logs.go:278] No container was found matching "kube-proxy"
	I0603 12:08:49.071892   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 12:08:49.071952   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 12:08:49.107907   73662 cri.go:89] found id: ""
	I0603 12:08:49.107934   73662 logs.go:276] 0 containers: []
	W0603 12:08:49.107945   73662 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 12:08:49.107952   73662 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 12:08:49.108012   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 12:08:49.144847   73662 cri.go:89] found id: ""
	I0603 12:08:49.144869   73662 logs.go:276] 0 containers: []
	W0603 12:08:49.144876   73662 logs.go:278] No container was found matching "kindnet"
	I0603 12:08:49.144882   73662 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 12:08:49.144944   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 12:08:49.183910   73662 cri.go:89] found id: ""
	I0603 12:08:49.183931   73662 logs.go:276] 0 containers: []
	W0603 12:08:49.183940   73662 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 12:08:49.183951   73662 logs.go:123] Gathering logs for kubelet ...
	I0603 12:08:49.183964   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 12:08:49.237344   73662 logs.go:123] Gathering logs for dmesg ...
	I0603 12:08:49.237376   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 12:08:49.251612   73662 logs.go:123] Gathering logs for describe nodes ...
	I0603 12:08:49.251636   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 12:08:49.317211   73662 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 12:08:49.317236   73662 logs.go:123] Gathering logs for CRI-O ...
	I0603 12:08:49.317251   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 12:08:49.394414   73662 logs.go:123] Gathering logs for container status ...
	I0603 12:08:49.394455   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 12:08:47.581577   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:08:50.080726   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:08:51.112151   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:08:53.112224   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:08:51.666596   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:08:54.166060   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:08:51.937686   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:51.950390   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 12:08:51.950466   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 12:08:51.984341   73662 cri.go:89] found id: ""
	I0603 12:08:51.984365   73662 logs.go:276] 0 containers: []
	W0603 12:08:51.984372   73662 logs.go:278] No container was found matching "kube-apiserver"
	I0603 12:08:51.984378   73662 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 12:08:51.984426   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 12:08:52.017828   73662 cri.go:89] found id: ""
	I0603 12:08:52.017857   73662 logs.go:276] 0 containers: []
	W0603 12:08:52.017866   73662 logs.go:278] No container was found matching "etcd"
	I0603 12:08:52.017872   73662 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 12:08:52.017918   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 12:08:52.057283   73662 cri.go:89] found id: ""
	I0603 12:08:52.057314   73662 logs.go:276] 0 containers: []
	W0603 12:08:52.057324   73662 logs.go:278] No container was found matching "coredns"
	I0603 12:08:52.057331   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 12:08:52.057391   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 12:08:52.102270   73662 cri.go:89] found id: ""
	I0603 12:08:52.102303   73662 logs.go:276] 0 containers: []
	W0603 12:08:52.102313   73662 logs.go:278] No container was found matching "kube-scheduler"
	I0603 12:08:52.102321   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 12:08:52.102383   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 12:08:52.137361   73662 cri.go:89] found id: ""
	I0603 12:08:52.137386   73662 logs.go:276] 0 containers: []
	W0603 12:08:52.137393   73662 logs.go:278] No container was found matching "kube-proxy"
	I0603 12:08:52.137399   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 12:08:52.137463   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 12:08:52.171765   73662 cri.go:89] found id: ""
	I0603 12:08:52.171791   73662 logs.go:276] 0 containers: []
	W0603 12:08:52.171800   73662 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 12:08:52.171807   73662 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 12:08:52.171854   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 12:08:52.204688   73662 cri.go:89] found id: ""
	I0603 12:08:52.204715   73662 logs.go:276] 0 containers: []
	W0603 12:08:52.204722   73662 logs.go:278] No container was found matching "kindnet"
	I0603 12:08:52.204728   73662 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 12:08:52.204780   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 12:08:52.242547   73662 cri.go:89] found id: ""
	I0603 12:08:52.242571   73662 logs.go:276] 0 containers: []
	W0603 12:08:52.242579   73662 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 12:08:52.242586   73662 logs.go:123] Gathering logs for CRI-O ...
	I0603 12:08:52.242599   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 12:08:52.319089   73662 logs.go:123] Gathering logs for container status ...
	I0603 12:08:52.319122   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 12:08:52.360879   73662 logs.go:123] Gathering logs for kubelet ...
	I0603 12:08:52.360910   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 12:08:52.413601   73662 logs.go:123] Gathering logs for dmesg ...
	I0603 12:08:52.413641   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 12:08:52.428336   73662 logs.go:123] Gathering logs for describe nodes ...
	I0603 12:08:52.428370   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 12:08:52.500089   73662 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 12:08:55.001244   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:55.015217   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 12:08:55.015286   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 12:08:55.055825   73662 cri.go:89] found id: ""
	I0603 12:08:55.055906   73662 logs.go:276] 0 containers: []
	W0603 12:08:55.055922   73662 logs.go:278] No container was found matching "kube-apiserver"
	I0603 12:08:55.055930   73662 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 12:08:55.055993   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 12:08:52.080957   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:08:54.081055   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:08:55.113083   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:08:57.612727   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:08:56.166588   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:08:58.167503   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:08:55.092456   73662 cri.go:89] found id: ""
	I0603 12:08:55.093688   73662 logs.go:276] 0 containers: []
	W0603 12:08:55.093711   73662 logs.go:278] No container was found matching "etcd"
	I0603 12:08:55.093723   73662 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 12:08:55.093787   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 12:08:55.131165   73662 cri.go:89] found id: ""
	I0603 12:08:55.131193   73662 logs.go:276] 0 containers: []
	W0603 12:08:55.131203   73662 logs.go:278] No container was found matching "coredns"
	I0603 12:08:55.131210   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 12:08:55.131260   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 12:08:55.168170   73662 cri.go:89] found id: ""
	I0603 12:08:55.168188   73662 logs.go:276] 0 containers: []
	W0603 12:08:55.168194   73662 logs.go:278] No container was found matching "kube-scheduler"
	I0603 12:08:55.168200   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 12:08:55.168247   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 12:08:55.203409   73662 cri.go:89] found id: ""
	I0603 12:08:55.203434   73662 logs.go:276] 0 containers: []
	W0603 12:08:55.203441   73662 logs.go:278] No container was found matching "kube-proxy"
	I0603 12:08:55.203446   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 12:08:55.203491   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 12:08:55.239971   73662 cri.go:89] found id: ""
	I0603 12:08:55.239997   73662 logs.go:276] 0 containers: []
	W0603 12:08:55.240009   73662 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 12:08:55.240016   73662 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 12:08:55.240077   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 12:08:55.275115   73662 cri.go:89] found id: ""
	I0603 12:08:55.275144   73662 logs.go:276] 0 containers: []
	W0603 12:08:55.275154   73662 logs.go:278] No container was found matching "kindnet"
	I0603 12:08:55.275162   73662 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 12:08:55.275221   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 12:08:55.309384   73662 cri.go:89] found id: ""
	I0603 12:08:55.309414   73662 logs.go:276] 0 containers: []
	W0603 12:08:55.309425   73662 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 12:08:55.309435   73662 logs.go:123] Gathering logs for dmesg ...
	I0603 12:08:55.309451   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 12:08:55.323455   73662 logs.go:123] Gathering logs for describe nodes ...
	I0603 12:08:55.323485   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 12:08:55.397581   73662 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 12:08:55.397606   73662 logs.go:123] Gathering logs for CRI-O ...
	I0603 12:08:55.397617   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 12:08:55.473046   73662 logs.go:123] Gathering logs for container status ...
	I0603 12:08:55.473079   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 12:08:55.515248   73662 logs.go:123] Gathering logs for kubelet ...
	I0603 12:08:55.515282   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 12:08:58.067416   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:58.081175   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 12:08:58.081241   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 12:08:58.121654   73662 cri.go:89] found id: ""
	I0603 12:08:58.121680   73662 logs.go:276] 0 containers: []
	W0603 12:08:58.121691   73662 logs.go:278] No container was found matching "kube-apiserver"
	I0603 12:08:58.121698   73662 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 12:08:58.121774   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 12:08:58.159599   73662 cri.go:89] found id: ""
	I0603 12:08:58.159623   73662 logs.go:276] 0 containers: []
	W0603 12:08:58.159631   73662 logs.go:278] No container was found matching "etcd"
	I0603 12:08:58.159636   73662 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 12:08:58.159689   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 12:08:58.197518   73662 cri.go:89] found id: ""
	I0603 12:08:58.197545   73662 logs.go:276] 0 containers: []
	W0603 12:08:58.197553   73662 logs.go:278] No container was found matching "coredns"
	I0603 12:08:58.197558   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 12:08:58.197603   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 12:08:58.232433   73662 cri.go:89] found id: ""
	I0603 12:08:58.232463   73662 logs.go:276] 0 containers: []
	W0603 12:08:58.232474   73662 logs.go:278] No container was found matching "kube-scheduler"
	I0603 12:08:58.232479   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 12:08:58.232529   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 12:08:58.268209   73662 cri.go:89] found id: ""
	I0603 12:08:58.268234   73662 logs.go:276] 0 containers: []
	W0603 12:08:58.268242   73662 logs.go:278] No container was found matching "kube-proxy"
	I0603 12:08:58.268248   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 12:08:58.268307   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 12:08:58.302091   73662 cri.go:89] found id: ""
	I0603 12:08:58.302118   73662 logs.go:276] 0 containers: []
	W0603 12:08:58.302129   73662 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 12:08:58.302136   73662 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 12:08:58.302195   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 12:08:58.336539   73662 cri.go:89] found id: ""
	I0603 12:08:58.336567   73662 logs.go:276] 0 containers: []
	W0603 12:08:58.336574   73662 logs.go:278] No container was found matching "kindnet"
	I0603 12:08:58.336579   73662 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 12:08:58.336627   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 12:08:58.369263   73662 cri.go:89] found id: ""
	I0603 12:08:58.369294   73662 logs.go:276] 0 containers: []
	W0603 12:08:58.369305   73662 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 12:08:58.369316   73662 logs.go:123] Gathering logs for container status ...
	I0603 12:08:58.369329   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 12:08:58.408651   73662 logs.go:123] Gathering logs for kubelet ...
	I0603 12:08:58.408683   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 12:08:58.463551   73662 logs.go:123] Gathering logs for dmesg ...
	I0603 12:08:58.463578   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 12:08:58.478781   73662 logs.go:123] Gathering logs for describe nodes ...
	I0603 12:08:58.478808   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 12:08:58.556604   73662 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 12:08:58.556631   73662 logs.go:123] Gathering logs for CRI-O ...
	I0603 12:08:58.556646   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 12:08:56.580284   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:08:58.582526   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:09:00.112533   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:09:02.113462   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:09:00.666282   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:09:02.666684   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:09:04.666822   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:09:01.135368   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:09:01.148448   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 12:09:01.148517   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 12:09:01.184913   73662 cri.go:89] found id: ""
	I0603 12:09:01.184936   73662 logs.go:276] 0 containers: []
	W0603 12:09:01.184947   73662 logs.go:278] No container was found matching "kube-apiserver"
	I0603 12:09:01.184955   73662 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 12:09:01.185017   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 12:09:01.221508   73662 cri.go:89] found id: ""
	I0603 12:09:01.221538   73662 logs.go:276] 0 containers: []
	W0603 12:09:01.221547   73662 logs.go:278] No container was found matching "etcd"
	I0603 12:09:01.221552   73662 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 12:09:01.221613   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 12:09:01.256588   73662 cri.go:89] found id: ""
	I0603 12:09:01.256617   73662 logs.go:276] 0 containers: []
	W0603 12:09:01.256627   73662 logs.go:278] No container was found matching "coredns"
	I0603 12:09:01.256634   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 12:09:01.256696   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 12:09:01.292874   73662 cri.go:89] found id: ""
	I0603 12:09:01.292898   73662 logs.go:276] 0 containers: []
	W0603 12:09:01.292906   73662 logs.go:278] No container was found matching "kube-scheduler"
	I0603 12:09:01.292913   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 12:09:01.292957   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 12:09:01.330607   73662 cri.go:89] found id: ""
	I0603 12:09:01.330636   73662 logs.go:276] 0 containers: []
	W0603 12:09:01.330646   73662 logs.go:278] No container was found matching "kube-proxy"
	I0603 12:09:01.330652   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 12:09:01.330698   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 12:09:01.366053   73662 cri.go:89] found id: ""
	I0603 12:09:01.366090   73662 logs.go:276] 0 containers: []
	W0603 12:09:01.366102   73662 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 12:09:01.366110   73662 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 12:09:01.366168   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 12:09:01.403446   73662 cri.go:89] found id: ""
	I0603 12:09:01.403476   73662 logs.go:276] 0 containers: []
	W0603 12:09:01.403489   73662 logs.go:278] No container was found matching "kindnet"
	I0603 12:09:01.403495   73662 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 12:09:01.403558   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 12:09:01.445413   73662 cri.go:89] found id: ""
	I0603 12:09:01.445444   73662 logs.go:276] 0 containers: []
	W0603 12:09:01.445456   73662 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 12:09:01.445467   73662 logs.go:123] Gathering logs for describe nodes ...
	I0603 12:09:01.445485   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 12:09:01.521804   73662 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 12:09:01.521831   73662 logs.go:123] Gathering logs for CRI-O ...
	I0603 12:09:01.521846   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 12:09:01.601841   73662 logs.go:123] Gathering logs for container status ...
	I0603 12:09:01.601869   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 12:09:01.642642   73662 logs.go:123] Gathering logs for kubelet ...
	I0603 12:09:01.642685   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 12:09:01.700512   73662 logs.go:123] Gathering logs for dmesg ...
	I0603 12:09:01.700547   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 12:09:04.216853   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:09:04.229827   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 12:09:04.229910   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 12:09:04.265194   73662 cri.go:89] found id: ""
	I0603 12:09:04.265223   73662 logs.go:276] 0 containers: []
	W0603 12:09:04.265230   73662 logs.go:278] No container was found matching "kube-apiserver"
	I0603 12:09:04.265235   73662 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 12:09:04.265294   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 12:09:04.301157   73662 cri.go:89] found id: ""
	I0603 12:09:04.301186   73662 logs.go:276] 0 containers: []
	W0603 12:09:04.301193   73662 logs.go:278] No container was found matching "etcd"
	I0603 12:09:04.301199   73662 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 12:09:04.301249   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 12:09:04.335992   73662 cri.go:89] found id: ""
	I0603 12:09:04.336014   73662 logs.go:276] 0 containers: []
	W0603 12:09:04.336024   73662 logs.go:278] No container was found matching "coredns"
	I0603 12:09:04.336031   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 12:09:04.336090   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 12:09:04.371342   73662 cri.go:89] found id: ""
	I0603 12:09:04.371375   73662 logs.go:276] 0 containers: []
	W0603 12:09:04.371386   73662 logs.go:278] No container was found matching "kube-scheduler"
	I0603 12:09:04.371393   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 12:09:04.371452   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 12:09:04.406439   73662 cri.go:89] found id: ""
	I0603 12:09:04.406466   73662 logs.go:276] 0 containers: []
	W0603 12:09:04.406476   73662 logs.go:278] No container was found matching "kube-proxy"
	I0603 12:09:04.406483   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 12:09:04.406540   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 12:09:04.438426   73662 cri.go:89] found id: ""
	I0603 12:09:04.438448   73662 logs.go:276] 0 containers: []
	W0603 12:09:04.438458   73662 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 12:09:04.438467   73662 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 12:09:04.438525   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 12:09:04.471465   73662 cri.go:89] found id: ""
	I0603 12:09:04.471494   73662 logs.go:276] 0 containers: []
	W0603 12:09:04.471504   73662 logs.go:278] No container was found matching "kindnet"
	I0603 12:09:04.471512   73662 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 12:09:04.471576   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 12:09:04.507994   73662 cri.go:89] found id: ""
	I0603 12:09:04.508016   73662 logs.go:276] 0 containers: []
	W0603 12:09:04.508023   73662 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 12:09:04.508031   73662 logs.go:123] Gathering logs for kubelet ...
	I0603 12:09:04.508042   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 12:09:04.558973   73662 logs.go:123] Gathering logs for dmesg ...
	I0603 12:09:04.559007   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 12:09:04.576157   73662 logs.go:123] Gathering logs for describe nodes ...
	I0603 12:09:04.576190   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 12:09:04.653262   73662 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 12:09:04.653282   73662 logs.go:123] Gathering logs for CRI-O ...
	I0603 12:09:04.653293   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 12:09:04.732195   73662 logs.go:123] Gathering logs for container status ...
	I0603 12:09:04.732228   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 12:09:01.081232   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:09:03.083123   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:09:05.083243   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:09:04.612842   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:09:07.113160   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:09:06.667720   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:09:09.167160   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:09:07.282253   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:09:07.296478   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 12:09:07.296549   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 12:09:07.331591   73662 cri.go:89] found id: ""
	I0603 12:09:07.331614   73662 logs.go:276] 0 containers: []
	W0603 12:09:07.331621   73662 logs.go:278] No container was found matching "kube-apiserver"
	I0603 12:09:07.331626   73662 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 12:09:07.331676   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 12:09:07.367333   73662 cri.go:89] found id: ""
	I0603 12:09:07.367356   73662 logs.go:276] 0 containers: []
	W0603 12:09:07.367363   73662 logs.go:278] No container was found matching "etcd"
	I0603 12:09:07.367369   73662 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 12:09:07.367426   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 12:09:07.406446   73662 cri.go:89] found id: ""
	I0603 12:09:07.406471   73662 logs.go:276] 0 containers: []
	W0603 12:09:07.406479   73662 logs.go:278] No container was found matching "coredns"
	I0603 12:09:07.406485   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 12:09:07.406544   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 12:09:07.441610   73662 cri.go:89] found id: ""
	I0603 12:09:07.441632   73662 logs.go:276] 0 containers: []
	W0603 12:09:07.441640   73662 logs.go:278] No container was found matching "kube-scheduler"
	I0603 12:09:07.441646   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 12:09:07.441699   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 12:09:07.476479   73662 cri.go:89] found id: ""
	I0603 12:09:07.476501   73662 logs.go:276] 0 containers: []
	W0603 12:09:07.476508   73662 logs.go:278] No container was found matching "kube-proxy"
	I0603 12:09:07.476513   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 12:09:07.476586   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 12:09:07.513712   73662 cri.go:89] found id: ""
	I0603 12:09:07.513740   73662 logs.go:276] 0 containers: []
	W0603 12:09:07.513750   73662 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 12:09:07.513758   73662 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 12:09:07.513816   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 12:09:07.552169   73662 cri.go:89] found id: ""
	I0603 12:09:07.552195   73662 logs.go:276] 0 containers: []
	W0603 12:09:07.552206   73662 logs.go:278] No container was found matching "kindnet"
	I0603 12:09:07.552213   73662 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 12:09:07.552274   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 12:09:07.591926   73662 cri.go:89] found id: ""
	I0603 12:09:07.591950   73662 logs.go:276] 0 containers: []
	W0603 12:09:07.591956   73662 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 12:09:07.591963   73662 logs.go:123] Gathering logs for describe nodes ...
	I0603 12:09:07.591974   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 12:09:07.672408   73662 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 12:09:07.672429   73662 logs.go:123] Gathering logs for CRI-O ...
	I0603 12:09:07.672440   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 12:09:07.752948   73662 logs.go:123] Gathering logs for container status ...
	I0603 12:09:07.752980   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 12:09:07.791942   73662 logs.go:123] Gathering logs for kubelet ...
	I0603 12:09:07.791975   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 12:09:07.849187   73662 logs.go:123] Gathering logs for dmesg ...
	I0603 12:09:07.849222   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 12:09:07.586314   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:09:10.082310   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:09:09.612757   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:09:11.612893   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:09:13.613395   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:09:11.669965   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:09:14.165493   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:09:10.364466   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:09:10.377895   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 12:09:10.377967   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 12:09:10.412039   73662 cri.go:89] found id: ""
	I0603 12:09:10.412062   73662 logs.go:276] 0 containers: []
	W0603 12:09:10.412070   73662 logs.go:278] No container was found matching "kube-apiserver"
	I0603 12:09:10.412082   73662 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 12:09:10.412137   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 12:09:10.444562   73662 cri.go:89] found id: ""
	I0603 12:09:10.444585   73662 logs.go:276] 0 containers: []
	W0603 12:09:10.444594   73662 logs.go:278] No container was found matching "etcd"
	I0603 12:09:10.444602   73662 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 12:09:10.444657   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 12:09:10.479651   73662 cri.go:89] found id: ""
	I0603 12:09:10.479674   73662 logs.go:276] 0 containers: []
	W0603 12:09:10.479681   73662 logs.go:278] No container was found matching "coredns"
	I0603 12:09:10.479687   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 12:09:10.479742   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 12:09:10.518978   73662 cri.go:89] found id: ""
	I0603 12:09:10.519000   73662 logs.go:276] 0 containers: []
	W0603 12:09:10.519011   73662 logs.go:278] No container was found matching "kube-scheduler"
	I0603 12:09:10.519019   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 12:09:10.519100   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 12:09:10.553848   73662 cri.go:89] found id: ""
	I0603 12:09:10.553873   73662 logs.go:276] 0 containers: []
	W0603 12:09:10.553880   73662 logs.go:278] No container was found matching "kube-proxy"
	I0603 12:09:10.553885   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 12:09:10.553933   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 12:09:10.592081   73662 cri.go:89] found id: ""
	I0603 12:09:10.592107   73662 logs.go:276] 0 containers: []
	W0603 12:09:10.592116   73662 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 12:09:10.592124   73662 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 12:09:10.592176   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 12:09:10.629138   73662 cri.go:89] found id: ""
	I0603 12:09:10.629164   73662 logs.go:276] 0 containers: []
	W0603 12:09:10.629175   73662 logs.go:278] No container was found matching "kindnet"
	I0603 12:09:10.629181   73662 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 12:09:10.629233   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 12:09:10.666660   73662 cri.go:89] found id: ""
	I0603 12:09:10.666686   73662 logs.go:276] 0 containers: []
	W0603 12:09:10.666695   73662 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 12:09:10.666705   73662 logs.go:123] Gathering logs for CRI-O ...
	I0603 12:09:10.666723   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 12:09:10.747856   73662 logs.go:123] Gathering logs for container status ...
	I0603 12:09:10.747892   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 12:09:10.792403   73662 logs.go:123] Gathering logs for kubelet ...
	I0603 12:09:10.792442   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 12:09:10.844484   73662 logs.go:123] Gathering logs for dmesg ...
	I0603 12:09:10.844520   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 12:09:10.857822   73662 logs.go:123] Gathering logs for describe nodes ...
	I0603 12:09:10.857848   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 12:09:10.927434   73662 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 12:09:13.428260   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:09:13.442354   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 12:09:13.442418   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 12:09:13.480908   73662 cri.go:89] found id: ""
	I0603 12:09:13.480938   73662 logs.go:276] 0 containers: []
	W0603 12:09:13.480948   73662 logs.go:278] No container was found matching "kube-apiserver"
	I0603 12:09:13.480953   73662 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 12:09:13.481002   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 12:09:13.513942   73662 cri.go:89] found id: ""
	I0603 12:09:13.513966   73662 logs.go:276] 0 containers: []
	W0603 12:09:13.513979   73662 logs.go:278] No container was found matching "etcd"
	I0603 12:09:13.513985   73662 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 12:09:13.514042   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 12:09:13.548849   73662 cri.go:89] found id: ""
	I0603 12:09:13.548881   73662 logs.go:276] 0 containers: []
	W0603 12:09:13.548892   73662 logs.go:278] No container was found matching "coredns"
	I0603 12:09:13.548900   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 12:09:13.548961   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 12:09:13.587857   73662 cri.go:89] found id: ""
	I0603 12:09:13.587880   73662 logs.go:276] 0 containers: []
	W0603 12:09:13.587887   73662 logs.go:278] No container was found matching "kube-scheduler"
	I0603 12:09:13.587893   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 12:09:13.587941   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 12:09:13.623386   73662 cri.go:89] found id: ""
	I0603 12:09:13.623408   73662 logs.go:276] 0 containers: []
	W0603 12:09:13.623415   73662 logs.go:278] No container was found matching "kube-proxy"
	I0603 12:09:13.623421   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 12:09:13.623473   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 12:09:13.662721   73662 cri.go:89] found id: ""
	I0603 12:09:13.662755   73662 logs.go:276] 0 containers: []
	W0603 12:09:13.662774   73662 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 12:09:13.662782   73662 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 12:09:13.662847   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 12:09:13.697244   73662 cri.go:89] found id: ""
	I0603 12:09:13.697272   73662 logs.go:276] 0 containers: []
	W0603 12:09:13.697279   73662 logs.go:278] No container was found matching "kindnet"
	I0603 12:09:13.697284   73662 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 12:09:13.697342   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 12:09:13.734987   73662 cri.go:89] found id: ""
	I0603 12:09:13.735014   73662 logs.go:276] 0 containers: []
	W0603 12:09:13.735020   73662 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 12:09:13.735030   73662 logs.go:123] Gathering logs for kubelet ...
	I0603 12:09:13.735055   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 12:09:13.792422   73662 logs.go:123] Gathering logs for dmesg ...
	I0603 12:09:13.792463   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 12:09:13.807174   73662 logs.go:123] Gathering logs for describe nodes ...
	I0603 12:09:13.807220   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 12:09:13.880940   73662 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 12:09:13.880962   73662 logs.go:123] Gathering logs for CRI-O ...
	I0603 12:09:13.880976   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 12:09:13.970760   73662 logs.go:123] Gathering logs for container status ...
	I0603 12:09:13.970800   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 12:09:12.581261   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:09:14.581335   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:09:16.113403   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:09:18.113699   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:09:16.166578   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:09:18.167436   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:09:16.519306   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:09:16.534161   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 12:09:16.534213   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 12:09:16.571503   73662 cri.go:89] found id: ""
	I0603 12:09:16.571533   73662 logs.go:276] 0 containers: []
	W0603 12:09:16.571544   73662 logs.go:278] No container was found matching "kube-apiserver"
	I0603 12:09:16.571553   73662 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 12:09:16.571603   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 12:09:16.610388   73662 cri.go:89] found id: ""
	I0603 12:09:16.610425   73662 logs.go:276] 0 containers: []
	W0603 12:09:16.610434   73662 logs.go:278] No container was found matching "etcd"
	I0603 12:09:16.610442   73662 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 12:09:16.610501   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 12:09:16.654132   73662 cri.go:89] found id: ""
	I0603 12:09:16.654173   73662 logs.go:276] 0 containers: []
	W0603 12:09:16.654184   73662 logs.go:278] No container was found matching "coredns"
	I0603 12:09:16.654196   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 12:09:16.654288   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 12:09:16.695091   73662 cri.go:89] found id: ""
	I0603 12:09:16.695120   73662 logs.go:276] 0 containers: []
	W0603 12:09:16.695130   73662 logs.go:278] No container was found matching "kube-scheduler"
	I0603 12:09:16.695137   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 12:09:16.695198   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 12:09:16.729916   73662 cri.go:89] found id: ""
	I0603 12:09:16.729941   73662 logs.go:276] 0 containers: []
	W0603 12:09:16.729950   73662 logs.go:278] No container was found matching "kube-proxy"
	I0603 12:09:16.729958   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 12:09:16.730019   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 12:09:16.763653   73662 cri.go:89] found id: ""
	I0603 12:09:16.763675   73662 logs.go:276] 0 containers: []
	W0603 12:09:16.763683   73662 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 12:09:16.763688   73662 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 12:09:16.763734   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 12:09:16.801834   73662 cri.go:89] found id: ""
	I0603 12:09:16.801867   73662 logs.go:276] 0 containers: []
	W0603 12:09:16.801877   73662 logs.go:278] No container was found matching "kindnet"
	I0603 12:09:16.801885   73662 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 12:09:16.801946   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 12:09:16.836959   73662 cri.go:89] found id: ""
	I0603 12:09:16.836983   73662 logs.go:276] 0 containers: []
	W0603 12:09:16.836995   73662 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 12:09:16.837006   73662 logs.go:123] Gathering logs for dmesg ...
	I0603 12:09:16.837023   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 12:09:16.850264   73662 logs.go:123] Gathering logs for describe nodes ...
	I0603 12:09:16.850294   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 12:09:16.943870   73662 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 12:09:16.943897   73662 logs.go:123] Gathering logs for CRI-O ...
	I0603 12:09:16.943914   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 12:09:17.028230   73662 logs.go:123] Gathering logs for container status ...
	I0603 12:09:17.028269   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 12:09:17.071944   73662 logs.go:123] Gathering logs for kubelet ...
	I0603 12:09:17.071975   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 12:09:19.627246   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:09:19.641441   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 12:09:19.641513   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 12:09:19.680111   73662 cri.go:89] found id: ""
	I0603 12:09:19.680135   73662 logs.go:276] 0 containers: []
	W0603 12:09:19.680144   73662 logs.go:278] No container was found matching "kube-apiserver"
	I0603 12:09:19.680152   73662 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 12:09:19.680210   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 12:09:19.717357   73662 cri.go:89] found id: ""
	I0603 12:09:19.717386   73662 logs.go:276] 0 containers: []
	W0603 12:09:19.717396   73662 logs.go:278] No container was found matching "etcd"
	I0603 12:09:19.717403   73662 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 12:09:19.717467   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 12:09:19.753540   73662 cri.go:89] found id: ""
	I0603 12:09:19.753567   73662 logs.go:276] 0 containers: []
	W0603 12:09:19.753575   73662 logs.go:278] No container was found matching "coredns"
	I0603 12:09:19.753581   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 12:09:19.753627   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 12:09:19.790421   73662 cri.go:89] found id: ""
	I0603 12:09:19.790454   73662 logs.go:276] 0 containers: []
	W0603 12:09:19.790466   73662 logs.go:278] No container was found matching "kube-scheduler"
	I0603 12:09:19.790474   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 12:09:19.790532   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 12:09:19.828908   73662 cri.go:89] found id: ""
	I0603 12:09:19.828932   73662 logs.go:276] 0 containers: []
	W0603 12:09:19.828940   73662 logs.go:278] No container was found matching "kube-proxy"
	I0603 12:09:19.828946   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 12:09:19.829007   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 12:09:19.864576   73662 cri.go:89] found id: ""
	I0603 12:09:19.864609   73662 logs.go:276] 0 containers: []
	W0603 12:09:19.864618   73662 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 12:09:19.864624   73662 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 12:09:19.864679   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 12:09:19.899294   73662 cri.go:89] found id: ""
	I0603 12:09:19.899317   73662 logs.go:276] 0 containers: []
	W0603 12:09:19.899324   73662 logs.go:278] No container was found matching "kindnet"
	I0603 12:09:19.899330   73662 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 12:09:19.899397   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 12:09:19.933855   73662 cri.go:89] found id: ""
	I0603 12:09:19.933883   73662 logs.go:276] 0 containers: []
	W0603 12:09:19.933894   73662 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 12:09:19.933905   73662 logs.go:123] Gathering logs for container status ...
	I0603 12:09:19.933920   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 12:09:19.972676   73662 logs.go:123] Gathering logs for kubelet ...
	I0603 12:09:19.972703   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 12:09:20.025882   73662 logs.go:123] Gathering logs for dmesg ...
	I0603 12:09:20.025913   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 12:09:20.040706   73662 logs.go:123] Gathering logs for describe nodes ...
	I0603 12:09:20.040733   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0603 12:09:17.080807   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:09:19.581996   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:09:20.612561   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:09:23.112691   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:09:20.667356   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:09:23.167076   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	W0603 12:09:20.115483   73662 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 12:09:20.115506   73662 logs.go:123] Gathering logs for CRI-O ...
	I0603 12:09:20.115521   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 12:09:22.692138   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:09:22.706079   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 12:09:22.706155   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 12:09:22.742755   73662 cri.go:89] found id: ""
	I0603 12:09:22.742776   73662 logs.go:276] 0 containers: []
	W0603 12:09:22.742784   73662 logs.go:278] No container was found matching "kube-apiserver"
	I0603 12:09:22.742789   73662 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 12:09:22.742845   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 12:09:22.779522   73662 cri.go:89] found id: ""
	I0603 12:09:22.779549   73662 logs.go:276] 0 containers: []
	W0603 12:09:22.779557   73662 logs.go:278] No container was found matching "etcd"
	I0603 12:09:22.779563   73662 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 12:09:22.779615   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 12:09:22.813864   73662 cri.go:89] found id: ""
	I0603 12:09:22.813892   73662 logs.go:276] 0 containers: []
	W0603 12:09:22.813902   73662 logs.go:278] No container was found matching "coredns"
	I0603 12:09:22.813909   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 12:09:22.813967   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 12:09:22.848111   73662 cri.go:89] found id: ""
	I0603 12:09:22.848138   73662 logs.go:276] 0 containers: []
	W0603 12:09:22.848149   73662 logs.go:278] No container was found matching "kube-scheduler"
	I0603 12:09:22.848157   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 12:09:22.848213   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 12:09:22.899733   73662 cri.go:89] found id: ""
	I0603 12:09:22.899765   73662 logs.go:276] 0 containers: []
	W0603 12:09:22.899775   73662 logs.go:278] No container was found matching "kube-proxy"
	I0603 12:09:22.899781   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 12:09:22.899846   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 12:09:22.941237   73662 cri.go:89] found id: ""
	I0603 12:09:22.941266   73662 logs.go:276] 0 containers: []
	W0603 12:09:22.941276   73662 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 12:09:22.941282   73662 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 12:09:22.941330   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 12:09:22.981500   73662 cri.go:89] found id: ""
	I0603 12:09:22.981523   73662 logs.go:276] 0 containers: []
	W0603 12:09:22.981531   73662 logs.go:278] No container was found matching "kindnet"
	I0603 12:09:22.981536   73662 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 12:09:22.981580   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 12:09:23.016893   73662 cri.go:89] found id: ""
	I0603 12:09:23.016921   73662 logs.go:276] 0 containers: []
	W0603 12:09:23.016933   73662 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 12:09:23.016943   73662 logs.go:123] Gathering logs for container status ...
	I0603 12:09:23.016958   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 12:09:23.056019   73662 logs.go:123] Gathering logs for kubelet ...
	I0603 12:09:23.056052   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 12:09:23.112565   73662 logs.go:123] Gathering logs for dmesg ...
	I0603 12:09:23.112594   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 12:09:23.127475   73662 logs.go:123] Gathering logs for describe nodes ...
	I0603 12:09:23.127504   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 12:09:23.204939   73662 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 12:09:23.204959   73662 logs.go:123] Gathering logs for CRI-O ...
	I0603 12:09:23.204971   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 12:09:21.584829   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:09:24.081361   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:09:25.112860   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:09:27.113465   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:09:29.114788   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:09:25.167597   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:09:27.666395   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:09:29.668658   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:09:25.781506   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:09:25.794896   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 12:09:25.794971   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 12:09:25.831669   73662 cri.go:89] found id: ""
	I0603 12:09:25.831699   73662 logs.go:276] 0 containers: []
	W0603 12:09:25.831710   73662 logs.go:278] No container was found matching "kube-apiserver"
	I0603 12:09:25.831718   73662 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 12:09:25.831775   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 12:09:25.865198   73662 cri.go:89] found id: ""
	I0603 12:09:25.865224   73662 logs.go:276] 0 containers: []
	W0603 12:09:25.865233   73662 logs.go:278] No container was found matching "etcd"
	I0603 12:09:25.865241   73662 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 12:09:25.865296   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 12:09:25.900280   73662 cri.go:89] found id: ""
	I0603 12:09:25.900316   73662 logs.go:276] 0 containers: []
	W0603 12:09:25.900339   73662 logs.go:278] No container was found matching "coredns"
	I0603 12:09:25.900347   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 12:09:25.900409   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 12:09:25.934727   73662 cri.go:89] found id: ""
	I0603 12:09:25.934759   73662 logs.go:276] 0 containers: []
	W0603 12:09:25.934770   73662 logs.go:278] No container was found matching "kube-scheduler"
	I0603 12:09:25.934778   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 12:09:25.934837   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 12:09:25.970760   73662 cri.go:89] found id: ""
	I0603 12:09:25.970785   73662 logs.go:276] 0 containers: []
	W0603 12:09:25.970795   73662 logs.go:278] No container was found matching "kube-proxy"
	I0603 12:09:25.970800   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 12:09:25.970846   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 12:09:26.005580   73662 cri.go:89] found id: ""
	I0603 12:09:26.005608   73662 logs.go:276] 0 containers: []
	W0603 12:09:26.005617   73662 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 12:09:26.005622   73662 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 12:09:26.005670   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 12:09:26.042168   73662 cri.go:89] found id: ""
	I0603 12:09:26.042192   73662 logs.go:276] 0 containers: []
	W0603 12:09:26.042200   73662 logs.go:278] No container was found matching "kindnet"
	I0603 12:09:26.042206   73662 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 12:09:26.042256   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 12:09:26.081180   73662 cri.go:89] found id: ""
	I0603 12:09:26.081211   73662 logs.go:276] 0 containers: []
	W0603 12:09:26.081226   73662 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 12:09:26.081237   73662 logs.go:123] Gathering logs for describe nodes ...
	I0603 12:09:26.081252   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 12:09:26.156298   73662 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 12:09:26.156320   73662 logs.go:123] Gathering logs for CRI-O ...
	I0603 12:09:26.156333   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 12:09:26.241945   73662 logs.go:123] Gathering logs for container status ...
	I0603 12:09:26.241976   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 12:09:26.282363   73662 logs.go:123] Gathering logs for kubelet ...
	I0603 12:09:26.282391   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 12:09:26.336717   73662 logs.go:123] Gathering logs for dmesg ...
	I0603 12:09:26.336747   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 12:09:28.851601   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:09:28.865866   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 12:09:28.865930   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 12:09:28.901850   73662 cri.go:89] found id: ""
	I0603 12:09:28.901877   73662 logs.go:276] 0 containers: []
	W0603 12:09:28.901884   73662 logs.go:278] No container was found matching "kube-apiserver"
	I0603 12:09:28.901890   73662 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 12:09:28.901953   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 12:09:28.939384   73662 cri.go:89] found id: ""
	I0603 12:09:28.939414   73662 logs.go:276] 0 containers: []
	W0603 12:09:28.939431   73662 logs.go:278] No container was found matching "etcd"
	I0603 12:09:28.939438   73662 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 12:09:28.939501   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 12:09:28.974836   73662 cri.go:89] found id: ""
	I0603 12:09:28.974859   73662 logs.go:276] 0 containers: []
	W0603 12:09:28.974866   73662 logs.go:278] No container was found matching "coredns"
	I0603 12:09:28.974872   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 12:09:28.974929   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 12:09:29.020057   73662 cri.go:89] found id: ""
	I0603 12:09:29.020082   73662 logs.go:276] 0 containers: []
	W0603 12:09:29.020090   73662 logs.go:278] No container was found matching "kube-scheduler"
	I0603 12:09:29.020095   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 12:09:29.020154   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 12:09:29.065836   73662 cri.go:89] found id: ""
	I0603 12:09:29.065868   73662 logs.go:276] 0 containers: []
	W0603 12:09:29.065880   73662 logs.go:278] No container was found matching "kube-proxy"
	I0603 12:09:29.065887   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 12:09:29.065948   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 12:09:29.103326   73662 cri.go:89] found id: ""
	I0603 12:09:29.103352   73662 logs.go:276] 0 containers: []
	W0603 12:09:29.103362   73662 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 12:09:29.103369   73662 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 12:09:29.103432   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 12:09:29.141516   73662 cri.go:89] found id: ""
	I0603 12:09:29.141543   73662 logs.go:276] 0 containers: []
	W0603 12:09:29.141554   73662 logs.go:278] No container was found matching "kindnet"
	I0603 12:09:29.141561   73662 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 12:09:29.141615   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 12:09:29.177881   73662 cri.go:89] found id: ""
	I0603 12:09:29.177906   73662 logs.go:276] 0 containers: []
	W0603 12:09:29.177916   73662 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 12:09:29.177923   73662 logs.go:123] Gathering logs for kubelet ...
	I0603 12:09:29.177934   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 12:09:29.231307   73662 logs.go:123] Gathering logs for dmesg ...
	I0603 12:09:29.231338   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 12:09:29.248629   73662 logs.go:123] Gathering logs for describe nodes ...
	I0603 12:09:29.248676   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 12:09:29.348230   73662 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 12:09:29.348255   73662 logs.go:123] Gathering logs for CRI-O ...
	I0603 12:09:29.348272   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 12:09:29.433016   73662 logs.go:123] Gathering logs for container status ...
	I0603 12:09:29.433049   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 12:09:26.082319   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:09:28.581095   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:09:31.615220   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:09:34.112437   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:09:32.166628   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:09:34.167092   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:09:31.973677   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:09:31.988457   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 12:09:31.988518   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 12:09:32.028424   73662 cri.go:89] found id: ""
	I0603 12:09:32.028450   73662 logs.go:276] 0 containers: []
	W0603 12:09:32.028458   73662 logs.go:278] No container was found matching "kube-apiserver"
	I0603 12:09:32.028464   73662 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 12:09:32.028518   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 12:09:32.069388   73662 cri.go:89] found id: ""
	I0603 12:09:32.069413   73662 logs.go:276] 0 containers: []
	W0603 12:09:32.069421   73662 logs.go:278] No container was found matching "etcd"
	I0603 12:09:32.069427   73662 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 12:09:32.069480   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 12:09:32.106557   73662 cri.go:89] found id: ""
	I0603 12:09:32.106590   73662 logs.go:276] 0 containers: []
	W0603 12:09:32.106601   73662 logs.go:278] No container was found matching "coredns"
	I0603 12:09:32.106608   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 12:09:32.106677   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 12:09:32.142460   73662 cri.go:89] found id: ""
	I0603 12:09:32.142488   73662 logs.go:276] 0 containers: []
	W0603 12:09:32.142499   73662 logs.go:278] No container was found matching "kube-scheduler"
	I0603 12:09:32.142507   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 12:09:32.142560   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 12:09:32.177513   73662 cri.go:89] found id: ""
	I0603 12:09:32.177540   73662 logs.go:276] 0 containers: []
	W0603 12:09:32.177553   73662 logs.go:278] No container was found matching "kube-proxy"
	I0603 12:09:32.177559   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 12:09:32.177620   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 12:09:32.212011   73662 cri.go:89] found id: ""
	I0603 12:09:32.212038   73662 logs.go:276] 0 containers: []
	W0603 12:09:32.212048   73662 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 12:09:32.212055   73662 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 12:09:32.212121   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 12:09:32.247928   73662 cri.go:89] found id: ""
	I0603 12:09:32.247953   73662 logs.go:276] 0 containers: []
	W0603 12:09:32.247960   73662 logs.go:278] No container was found matching "kindnet"
	I0603 12:09:32.247965   73662 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 12:09:32.248020   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 12:09:32.287818   73662 cri.go:89] found id: ""
	I0603 12:09:32.287845   73662 logs.go:276] 0 containers: []
	W0603 12:09:32.287852   73662 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 12:09:32.287859   73662 logs.go:123] Gathering logs for kubelet ...
	I0603 12:09:32.287874   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 12:09:32.340406   73662 logs.go:123] Gathering logs for dmesg ...
	I0603 12:09:32.340439   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 12:09:32.355148   73662 logs.go:123] Gathering logs for describe nodes ...
	I0603 12:09:32.355178   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 12:09:32.429270   73662 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 12:09:32.429299   73662 logs.go:123] Gathering logs for CRI-O ...
	I0603 12:09:32.429314   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 12:09:32.505607   73662 logs.go:123] Gathering logs for container status ...
	I0603 12:09:32.505635   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 12:09:35.044751   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:09:35.067197   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 12:09:35.067273   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 12:09:30.581123   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:09:32.581201   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:09:34.581895   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:09:36.612660   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:09:38.614151   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:09:36.666568   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:09:38.666678   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:09:35.130828   73662 cri.go:89] found id: ""
	I0603 12:09:35.130853   73662 logs.go:276] 0 containers: []
	W0603 12:09:35.130911   73662 logs.go:278] No container was found matching "kube-apiserver"
	I0603 12:09:35.130929   73662 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 12:09:35.130987   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 12:09:35.168321   73662 cri.go:89] found id: ""
	I0603 12:09:35.168348   73662 logs.go:276] 0 containers: []
	W0603 12:09:35.168355   73662 logs.go:278] No container was found matching "etcd"
	I0603 12:09:35.168360   73662 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 12:09:35.168403   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 12:09:35.200918   73662 cri.go:89] found id: ""
	I0603 12:09:35.200942   73662 logs.go:276] 0 containers: []
	W0603 12:09:35.200952   73662 logs.go:278] No container was found matching "coredns"
	I0603 12:09:35.200960   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 12:09:35.201020   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 12:09:35.235667   73662 cri.go:89] found id: ""
	I0603 12:09:35.235694   73662 logs.go:276] 0 containers: []
	W0603 12:09:35.235705   73662 logs.go:278] No container was found matching "kube-scheduler"
	I0603 12:09:35.235713   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 12:09:35.235773   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 12:09:35.269565   73662 cri.go:89] found id: ""
	I0603 12:09:35.269600   73662 logs.go:276] 0 containers: []
	W0603 12:09:35.269608   73662 logs.go:278] No container was found matching "kube-proxy"
	I0603 12:09:35.269613   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 12:09:35.269670   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 12:09:35.304452   73662 cri.go:89] found id: ""
	I0603 12:09:35.304480   73662 logs.go:276] 0 containers: []
	W0603 12:09:35.304488   73662 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 12:09:35.304495   73662 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 12:09:35.304560   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 12:09:35.337756   73662 cri.go:89] found id: ""
	I0603 12:09:35.337782   73662 logs.go:276] 0 containers: []
	W0603 12:09:35.337789   73662 logs.go:278] No container was found matching "kindnet"
	I0603 12:09:35.337794   73662 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 12:09:35.337844   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 12:09:35.374738   73662 cri.go:89] found id: ""
	I0603 12:09:35.374762   73662 logs.go:276] 0 containers: []
	W0603 12:09:35.374773   73662 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 12:09:35.374804   73662 logs.go:123] Gathering logs for dmesg ...
	I0603 12:09:35.374831   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 12:09:35.389588   73662 logs.go:123] Gathering logs for describe nodes ...
	I0603 12:09:35.389618   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 12:09:35.470162   73662 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 12:09:35.470184   73662 logs.go:123] Gathering logs for CRI-O ...
	I0603 12:09:35.470200   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 12:09:35.554518   73662 logs.go:123] Gathering logs for container status ...
	I0603 12:09:35.554560   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 12:09:35.594727   73662 logs.go:123] Gathering logs for kubelet ...
	I0603 12:09:35.594763   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 12:09:38.154151   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:09:38.169099   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 12:09:38.169165   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 12:09:38.205410   73662 cri.go:89] found id: ""
	I0603 12:09:38.205437   73662 logs.go:276] 0 containers: []
	W0603 12:09:38.205444   73662 logs.go:278] No container was found matching "kube-apiserver"
	I0603 12:09:38.205450   73662 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 12:09:38.205502   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 12:09:38.238950   73662 cri.go:89] found id: ""
	I0603 12:09:38.238979   73662 logs.go:276] 0 containers: []
	W0603 12:09:38.238990   73662 logs.go:278] No container was found matching "etcd"
	I0603 12:09:38.238997   73662 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 12:09:38.239072   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 12:09:38.272117   73662 cri.go:89] found id: ""
	I0603 12:09:38.272146   73662 logs.go:276] 0 containers: []
	W0603 12:09:38.272157   73662 logs.go:278] No container was found matching "coredns"
	I0603 12:09:38.272164   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 12:09:38.272232   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 12:09:38.306778   73662 cri.go:89] found id: ""
	I0603 12:09:38.306815   73662 logs.go:276] 0 containers: []
	W0603 12:09:38.306826   73662 logs.go:278] No container was found matching "kube-scheduler"
	I0603 12:09:38.306834   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 12:09:38.306894   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 12:09:38.344438   73662 cri.go:89] found id: ""
	I0603 12:09:38.344464   73662 logs.go:276] 0 containers: []
	W0603 12:09:38.344471   73662 logs.go:278] No container was found matching "kube-proxy"
	I0603 12:09:38.344476   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 12:09:38.344528   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 12:09:38.384347   73662 cri.go:89] found id: ""
	I0603 12:09:38.384373   73662 logs.go:276] 0 containers: []
	W0603 12:09:38.384384   73662 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 12:09:38.384392   73662 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 12:09:38.384440   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 12:09:38.424500   73662 cri.go:89] found id: ""
	I0603 12:09:38.424526   73662 logs.go:276] 0 containers: []
	W0603 12:09:38.424536   73662 logs.go:278] No container was found matching "kindnet"
	I0603 12:09:38.424543   73662 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 12:09:38.424601   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 12:09:38.459649   73662 cri.go:89] found id: ""
	I0603 12:09:38.459678   73662 logs.go:276] 0 containers: []
	W0603 12:09:38.459685   73662 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 12:09:38.459693   73662 logs.go:123] Gathering logs for kubelet ...
	I0603 12:09:38.459705   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 12:09:38.511193   73662 logs.go:123] Gathering logs for dmesg ...
	I0603 12:09:38.511226   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 12:09:38.525367   73662 logs.go:123] Gathering logs for describe nodes ...
	I0603 12:09:38.525394   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 12:09:38.596534   73662 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 12:09:38.596555   73662 logs.go:123] Gathering logs for CRI-O ...
	I0603 12:09:38.596568   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 12:09:38.675204   73662 logs.go:123] Gathering logs for container status ...
	I0603 12:09:38.675233   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 12:09:37.082229   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:09:39.083400   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:09:41.113187   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:09:43.612824   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:09:41.165676   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:09:43.166246   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:09:41.217825   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:09:41.232019   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 12:09:41.232077   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 12:09:41.267920   73662 cri.go:89] found id: ""
	I0603 12:09:41.267944   73662 logs.go:276] 0 containers: []
	W0603 12:09:41.267951   73662 logs.go:278] No container was found matching "kube-apiserver"
	I0603 12:09:41.267956   73662 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 12:09:41.268002   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 12:09:41.306326   73662 cri.go:89] found id: ""
	I0603 12:09:41.306353   73662 logs.go:276] 0 containers: []
	W0603 12:09:41.306364   73662 logs.go:278] No container was found matching "etcd"
	I0603 12:09:41.306371   73662 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 12:09:41.306439   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 12:09:41.339922   73662 cri.go:89] found id: ""
	I0603 12:09:41.339950   73662 logs.go:276] 0 containers: []
	W0603 12:09:41.339960   73662 logs.go:278] No container was found matching "coredns"
	I0603 12:09:41.339968   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 12:09:41.340030   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 12:09:41.374394   73662 cri.go:89] found id: ""
	I0603 12:09:41.374424   73662 logs.go:276] 0 containers: []
	W0603 12:09:41.374432   73662 logs.go:278] No container was found matching "kube-scheduler"
	I0603 12:09:41.374438   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 12:09:41.374490   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 12:09:41.412699   73662 cri.go:89] found id: ""
	I0603 12:09:41.412725   73662 logs.go:276] 0 containers: []
	W0603 12:09:41.412733   73662 logs.go:278] No container was found matching "kube-proxy"
	I0603 12:09:41.412738   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 12:09:41.412792   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 12:09:41.455158   73662 cri.go:89] found id: ""
	I0603 12:09:41.455186   73662 logs.go:276] 0 containers: []
	W0603 12:09:41.455195   73662 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 12:09:41.455201   73662 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 12:09:41.455250   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 12:09:41.493873   73662 cri.go:89] found id: ""
	I0603 12:09:41.493899   73662 logs.go:276] 0 containers: []
	W0603 12:09:41.493907   73662 logs.go:278] No container was found matching "kindnet"
	I0603 12:09:41.493912   73662 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 12:09:41.493961   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 12:09:41.533128   73662 cri.go:89] found id: ""
	I0603 12:09:41.533157   73662 logs.go:276] 0 containers: []
	W0603 12:09:41.533168   73662 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 12:09:41.533179   73662 logs.go:123] Gathering logs for container status ...
	I0603 12:09:41.533192   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 12:09:41.569504   73662 logs.go:123] Gathering logs for kubelet ...
	I0603 12:09:41.569532   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 12:09:41.623155   73662 logs.go:123] Gathering logs for dmesg ...
	I0603 12:09:41.623182   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 12:09:41.637320   73662 logs.go:123] Gathering logs for describe nodes ...
	I0603 12:09:41.637344   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 12:09:41.717063   73662 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 12:09:41.717080   73662 logs.go:123] Gathering logs for CRI-O ...
	I0603 12:09:41.717091   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 12:09:44.301694   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:09:44.317073   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 12:09:44.317128   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 12:09:44.359170   73662 cri.go:89] found id: ""
	I0603 12:09:44.359220   73662 logs.go:276] 0 containers: []
	W0603 12:09:44.359230   73662 logs.go:278] No container was found matching "kube-apiserver"
	I0603 12:09:44.359239   73662 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 12:09:44.359294   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 12:09:44.399820   73662 cri.go:89] found id: ""
	I0603 12:09:44.399844   73662 logs.go:276] 0 containers: []
	W0603 12:09:44.399854   73662 logs.go:278] No container was found matching "etcd"
	I0603 12:09:44.399862   73662 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 12:09:44.399928   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 12:09:44.439447   73662 cri.go:89] found id: ""
	I0603 12:09:44.439474   73662 logs.go:276] 0 containers: []
	W0603 12:09:44.439481   73662 logs.go:278] No container was found matching "coredns"
	I0603 12:09:44.439487   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 12:09:44.439540   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 12:09:44.475880   73662 cri.go:89] found id: ""
	I0603 12:09:44.475906   73662 logs.go:276] 0 containers: []
	W0603 12:09:44.475917   73662 logs.go:278] No container was found matching "kube-scheduler"
	I0603 12:09:44.475922   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 12:09:44.475980   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 12:09:44.511294   73662 cri.go:89] found id: ""
	I0603 12:09:44.511330   73662 logs.go:276] 0 containers: []
	W0603 12:09:44.511341   73662 logs.go:278] No container was found matching "kube-proxy"
	I0603 12:09:44.511348   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 12:09:44.511401   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 12:09:44.547348   73662 cri.go:89] found id: ""
	I0603 12:09:44.547373   73662 logs.go:276] 0 containers: []
	W0603 12:09:44.547380   73662 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 12:09:44.547385   73662 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 12:09:44.547430   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 12:09:44.586452   73662 cri.go:89] found id: ""
	I0603 12:09:44.586476   73662 logs.go:276] 0 containers: []
	W0603 12:09:44.586483   73662 logs.go:278] No container was found matching "kindnet"
	I0603 12:09:44.586488   73662 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 12:09:44.586543   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 12:09:44.625804   73662 cri.go:89] found id: ""
	I0603 12:09:44.625824   73662 logs.go:276] 0 containers: []
	W0603 12:09:44.625831   73662 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 12:09:44.625839   73662 logs.go:123] Gathering logs for kubelet ...
	I0603 12:09:44.625848   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 12:09:44.680963   73662 logs.go:123] Gathering logs for dmesg ...
	I0603 12:09:44.680996   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 12:09:44.695920   73662 logs.go:123] Gathering logs for describe nodes ...
	I0603 12:09:44.695945   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 12:09:44.766704   73662 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 12:09:44.766735   73662 logs.go:123] Gathering logs for CRI-O ...
	I0603 12:09:44.766750   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 12:09:44.849452   73662 logs.go:123] Gathering logs for container status ...
	I0603 12:09:44.849484   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 12:09:41.581194   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:09:44.081266   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:09:45.613719   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:09:47.613834   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:09:45.166682   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:09:47.667916   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:09:47.391851   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:09:47.406886   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 12:09:47.406941   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 12:09:47.441654   73662 cri.go:89] found id: ""
	I0603 12:09:47.441676   73662 logs.go:276] 0 containers: []
	W0603 12:09:47.441686   73662 logs.go:278] No container was found matching "kube-apiserver"
	I0603 12:09:47.441692   73662 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 12:09:47.441739   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 12:09:47.475605   73662 cri.go:89] found id: ""
	I0603 12:09:47.475634   73662 logs.go:276] 0 containers: []
	W0603 12:09:47.475644   73662 logs.go:278] No container was found matching "etcd"
	I0603 12:09:47.475651   73662 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 12:09:47.475707   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 12:09:47.511558   73662 cri.go:89] found id: ""
	I0603 12:09:47.511582   73662 logs.go:276] 0 containers: []
	W0603 12:09:47.511590   73662 logs.go:278] No container was found matching "coredns"
	I0603 12:09:47.511595   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 12:09:47.511653   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 12:09:47.545327   73662 cri.go:89] found id: ""
	I0603 12:09:47.545359   73662 logs.go:276] 0 containers: []
	W0603 12:09:47.545370   73662 logs.go:278] No container was found matching "kube-scheduler"
	I0603 12:09:47.545378   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 12:09:47.545442   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 12:09:47.581846   73662 cri.go:89] found id: ""
	I0603 12:09:47.581875   73662 logs.go:276] 0 containers: []
	W0603 12:09:47.581884   73662 logs.go:278] No container was found matching "kube-proxy"
	I0603 12:09:47.581892   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 12:09:47.581953   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 12:09:47.618872   73662 cri.go:89] found id: ""
	I0603 12:09:47.618893   73662 logs.go:276] 0 containers: []
	W0603 12:09:47.618901   73662 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 12:09:47.618908   73662 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 12:09:47.618964   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 12:09:47.663659   73662 cri.go:89] found id: ""
	I0603 12:09:47.663689   73662 logs.go:276] 0 containers: []
	W0603 12:09:47.663700   73662 logs.go:278] No container was found matching "kindnet"
	I0603 12:09:47.663708   73662 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 12:09:47.663766   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 12:09:47.697189   73662 cri.go:89] found id: ""
	I0603 12:09:47.697217   73662 logs.go:276] 0 containers: []
	W0603 12:09:47.697228   73662 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 12:09:47.697238   73662 logs.go:123] Gathering logs for dmesg ...
	I0603 12:09:47.697254   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 12:09:47.711787   73662 logs.go:123] Gathering logs for describe nodes ...
	I0603 12:09:47.711812   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 12:09:47.784073   73662 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 12:09:47.784095   73662 logs.go:123] Gathering logs for CRI-O ...
	I0603 12:09:47.784106   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 12:09:47.866792   73662 logs.go:123] Gathering logs for container status ...
	I0603 12:09:47.866824   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 12:09:47.907650   73662 logs.go:123] Gathering logs for kubelet ...
	I0603 12:09:47.907701   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 12:09:46.081705   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:09:48.581286   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:09:50.115365   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:09:52.612108   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:09:50.166286   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:09:52.166751   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:09:54.171218   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:09:50.458815   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:09:50.473498   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 12:09:50.473561   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 12:09:50.514762   73662 cri.go:89] found id: ""
	I0603 12:09:50.514788   73662 logs.go:276] 0 containers: []
	W0603 12:09:50.514796   73662 logs.go:278] No container was found matching "kube-apiserver"
	I0603 12:09:50.514801   73662 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 12:09:50.514877   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 12:09:50.548449   73662 cri.go:89] found id: ""
	I0603 12:09:50.548481   73662 logs.go:276] 0 containers: []
	W0603 12:09:50.548492   73662 logs.go:278] No container was found matching "etcd"
	I0603 12:09:50.548498   73662 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 12:09:50.548560   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 12:09:50.584636   73662 cri.go:89] found id: ""
	I0603 12:09:50.584658   73662 logs.go:276] 0 containers: []
	W0603 12:09:50.584665   73662 logs.go:278] No container was found matching "coredns"
	I0603 12:09:50.584671   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 12:09:50.584718   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 12:09:50.619934   73662 cri.go:89] found id: ""
	I0603 12:09:50.619964   73662 logs.go:276] 0 containers: []
	W0603 12:09:50.619974   73662 logs.go:278] No container was found matching "kube-scheduler"
	I0603 12:09:50.619983   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 12:09:50.620041   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 12:09:50.656062   73662 cri.go:89] found id: ""
	I0603 12:09:50.656093   73662 logs.go:276] 0 containers: []
	W0603 12:09:50.656105   73662 logs.go:278] No container was found matching "kube-proxy"
	I0603 12:09:50.656117   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 12:09:50.656166   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 12:09:50.693539   73662 cri.go:89] found id: ""
	I0603 12:09:50.693566   73662 logs.go:276] 0 containers: []
	W0603 12:09:50.693573   73662 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 12:09:50.693582   73662 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 12:09:50.693637   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 12:09:50.727999   73662 cri.go:89] found id: ""
	I0603 12:09:50.728029   73662 logs.go:276] 0 containers: []
	W0603 12:09:50.728049   73662 logs.go:278] No container was found matching "kindnet"
	I0603 12:09:50.728057   73662 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 12:09:50.728118   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 12:09:50.767370   73662 cri.go:89] found id: ""
	I0603 12:09:50.767417   73662 logs.go:276] 0 containers: []
	W0603 12:09:50.767434   73662 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 12:09:50.767444   73662 logs.go:123] Gathering logs for describe nodes ...
	I0603 12:09:50.767460   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 12:09:50.844078   73662 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 12:09:50.844098   73662 logs.go:123] Gathering logs for CRI-O ...
	I0603 12:09:50.844111   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 12:09:50.922082   73662 logs.go:123] Gathering logs for container status ...
	I0603 12:09:50.922119   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 12:09:50.964841   73662 logs.go:123] Gathering logs for kubelet ...
	I0603 12:09:50.964878   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 12:09:51.016783   73662 logs.go:123] Gathering logs for dmesg ...
	I0603 12:09:51.016823   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
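
Every "describe nodes" attempt above fails with "The connection to the server localhost:8443 was refused", which is consistent with the empty kube-apiserver listings: with no apiserver container running, nothing serves the API port, so kubectl cannot connect. A minimal sketch for confirming this from the node follows; port 8443 is taken from the error message, while the use of ss and the grep patterns are illustrative assumptions rather than commands recorded in this report.

    # Sketch only: confirm nothing listens on the API port and look for apiserver start attempts.
    sudo ss -ltnp | grep 8443 || echo "nothing listening on 8443"
    sudo journalctl -u kubelet -n 400 | grep -iE 'apiserver|static pod' | tail -n 20
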
	I0603 12:09:53.533274   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:09:53.547218   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 12:09:53.547272   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 12:09:53.584537   73662 cri.go:89] found id: ""
	I0603 12:09:53.584561   73662 logs.go:276] 0 containers: []
	W0603 12:09:53.584571   73662 logs.go:278] No container was found matching "kube-apiserver"
	I0603 12:09:53.584578   73662 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 12:09:53.584634   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 12:09:53.618652   73662 cri.go:89] found id: ""
	I0603 12:09:53.618678   73662 logs.go:276] 0 containers: []
	W0603 12:09:53.618688   73662 logs.go:278] No container was found matching "etcd"
	I0603 12:09:53.618695   73662 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 12:09:53.618749   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 12:09:53.654094   73662 cri.go:89] found id: ""
	I0603 12:09:53.654120   73662 logs.go:276] 0 containers: []
	W0603 12:09:53.654127   73662 logs.go:278] No container was found matching "coredns"
	I0603 12:09:53.654140   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 12:09:53.654196   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 12:09:53.691381   73662 cri.go:89] found id: ""
	I0603 12:09:53.691409   73662 logs.go:276] 0 containers: []
	W0603 12:09:53.691420   73662 logs.go:278] No container was found matching "kube-scheduler"
	I0603 12:09:53.691428   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 12:09:53.691493   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 12:09:53.728294   73662 cri.go:89] found id: ""
	I0603 12:09:53.728331   73662 logs.go:276] 0 containers: []
	W0603 12:09:53.728341   73662 logs.go:278] No container was found matching "kube-proxy"
	I0603 12:09:53.728349   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 12:09:53.728426   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 12:09:53.764973   73662 cri.go:89] found id: ""
	I0603 12:09:53.765005   73662 logs.go:276] 0 containers: []
	W0603 12:09:53.765016   73662 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 12:09:53.765023   73662 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 12:09:53.765087   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 12:09:53.803694   73662 cri.go:89] found id: ""
	I0603 12:09:53.803717   73662 logs.go:276] 0 containers: []
	W0603 12:09:53.803724   73662 logs.go:278] No container was found matching "kindnet"
	I0603 12:09:53.803729   73662 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 12:09:53.803776   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 12:09:53.841924   73662 cri.go:89] found id: ""
	I0603 12:09:53.841949   73662 logs.go:276] 0 containers: []
	W0603 12:09:53.841957   73662 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 12:09:53.841964   73662 logs.go:123] Gathering logs for kubelet ...
	I0603 12:09:53.841982   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 12:09:53.895701   73662 logs.go:123] Gathering logs for dmesg ...
	I0603 12:09:53.895738   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 12:09:53.909498   73662 logs.go:123] Gathering logs for describe nodes ...
	I0603 12:09:53.909524   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 12:09:53.985195   73662 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 12:09:53.985218   73662 logs.go:123] Gathering logs for CRI-O ...
	I0603 12:09:53.985234   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 12:09:54.065799   73662 logs.go:123] Gathering logs for container status ...
	I0603 12:09:54.065831   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 12:09:50.581958   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:09:53.081289   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:09:55.081589   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:09:54.612358   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:09:56.616081   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:09:59.112698   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:09:56.667243   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:09:59.167672   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:09:56.606887   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:09:56.621376   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 12:09:56.621437   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 12:09:56.660334   73662 cri.go:89] found id: ""
	I0603 12:09:56.660358   73662 logs.go:276] 0 containers: []
	W0603 12:09:56.660368   73662 logs.go:278] No container was found matching "kube-apiserver"
	I0603 12:09:56.660375   73662 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 12:09:56.660434   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 12:09:56.695706   73662 cri.go:89] found id: ""
	I0603 12:09:56.695734   73662 logs.go:276] 0 containers: []
	W0603 12:09:56.695742   73662 logs.go:278] No container was found matching "etcd"
	I0603 12:09:56.695747   73662 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 12:09:56.695791   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 12:09:56.730634   73662 cri.go:89] found id: ""
	I0603 12:09:56.730656   73662 logs.go:276] 0 containers: []
	W0603 12:09:56.730664   73662 logs.go:278] No container was found matching "coredns"
	I0603 12:09:56.730670   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 12:09:56.730715   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 12:09:56.765374   73662 cri.go:89] found id: ""
	I0603 12:09:56.765407   73662 logs.go:276] 0 containers: []
	W0603 12:09:56.765414   73662 logs.go:278] No container was found matching "kube-scheduler"
	I0603 12:09:56.765420   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 12:09:56.765467   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 12:09:56.801230   73662 cri.go:89] found id: ""
	I0603 12:09:56.801254   73662 logs.go:276] 0 containers: []
	W0603 12:09:56.801262   73662 logs.go:278] No container was found matching "kube-proxy"
	I0603 12:09:56.801267   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 12:09:56.801335   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 12:09:56.835988   73662 cri.go:89] found id: ""
	I0603 12:09:56.836015   73662 logs.go:276] 0 containers: []
	W0603 12:09:56.836026   73662 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 12:09:56.836034   73662 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 12:09:56.836093   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 12:09:56.870099   73662 cri.go:89] found id: ""
	I0603 12:09:56.870124   73662 logs.go:276] 0 containers: []
	W0603 12:09:56.870131   73662 logs.go:278] No container was found matching "kindnet"
	I0603 12:09:56.870136   73662 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 12:09:56.870183   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 12:09:56.904755   73662 cri.go:89] found id: ""
	I0603 12:09:56.904780   73662 logs.go:276] 0 containers: []
	W0603 12:09:56.904790   73662 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 12:09:56.904801   73662 logs.go:123] Gathering logs for kubelet ...
	I0603 12:09:56.904812   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 12:09:56.956824   73662 logs.go:123] Gathering logs for dmesg ...
	I0603 12:09:56.956854   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 12:09:56.971675   73662 logs.go:123] Gathering logs for describe nodes ...
	I0603 12:09:56.971702   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 12:09:57.042337   73662 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 12:09:57.042359   73662 logs.go:123] Gathering logs for CRI-O ...
	I0603 12:09:57.042375   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 12:09:57.129450   73662 logs.go:123] Gathering logs for container status ...
	I0603 12:09:57.129480   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
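
The "container status" step uses a fallback one-liner: it resolves crictl on the PATH (or leaves the bare name in place) and, if that invocation fails, falls back to docker, so the same collection code works on both CRI-O and Docker runtimes. A simplified long-hand sketch of that logic is below; only the one-liner shown above is what the collector actually runs, and the sketch omits the extra fallback to docker when the crictl call itself errors.

    # Sketch only: a simplified long-hand version of the fallback used above.
    if command -v crictl >/dev/null 2>&1; then
      sudo crictl ps -a        # list all CRI containers, including exited ones
    else
      sudo docker ps -a        # no crictl on the PATH, fall back to docker
    fi
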
	I0603 12:09:59.669256   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:09:59.683392   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 12:09:59.683452   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 12:09:59.718035   73662 cri.go:89] found id: ""
	I0603 12:09:59.718062   73662 logs.go:276] 0 containers: []
	W0603 12:09:59.718073   73662 logs.go:278] No container was found matching "kube-apiserver"
	I0603 12:09:59.718081   73662 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 12:09:59.718141   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 12:09:59.756638   73662 cri.go:89] found id: ""
	I0603 12:09:59.756666   73662 logs.go:276] 0 containers: []
	W0603 12:09:59.756678   73662 logs.go:278] No container was found matching "etcd"
	I0603 12:09:59.756686   73662 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 12:09:59.756751   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 12:09:59.794710   73662 cri.go:89] found id: ""
	I0603 12:09:59.794753   73662 logs.go:276] 0 containers: []
	W0603 12:09:59.794764   73662 logs.go:278] No container was found matching "coredns"
	I0603 12:09:59.794771   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 12:09:59.794835   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 12:09:59.829717   73662 cri.go:89] found id: ""
	I0603 12:09:59.829745   73662 logs.go:276] 0 containers: []
	W0603 12:09:59.829755   73662 logs.go:278] No container was found matching "kube-scheduler"
	I0603 12:09:59.829763   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 12:09:59.829819   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 12:09:59.863959   73662 cri.go:89] found id: ""
	I0603 12:09:59.863996   73662 logs.go:276] 0 containers: []
	W0603 12:09:59.864005   73662 logs.go:278] No container was found matching "kube-proxy"
	I0603 12:09:59.864010   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 12:09:59.864070   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 12:09:59.900553   73662 cri.go:89] found id: ""
	I0603 12:09:59.900577   73662 logs.go:276] 0 containers: []
	W0603 12:09:59.900585   73662 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 12:09:59.900590   73662 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 12:09:59.900664   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 12:09:59.935702   73662 cri.go:89] found id: ""
	I0603 12:09:59.935727   73662 logs.go:276] 0 containers: []
	W0603 12:09:59.935735   73662 logs.go:278] No container was found matching "kindnet"
	I0603 12:09:59.935741   73662 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 12:09:59.935800   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 12:09:59.971017   73662 cri.go:89] found id: ""
	I0603 12:09:59.971064   73662 logs.go:276] 0 containers: []
	W0603 12:09:59.971076   73662 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 12:09:59.971086   73662 logs.go:123] Gathering logs for dmesg ...
	I0603 12:09:59.971102   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 12:09:59.985406   73662 logs.go:123] Gathering logs for describe nodes ...
	I0603 12:09:59.985431   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 12:10:00.064341   73662 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 12:10:00.064372   73662 logs.go:123] Gathering logs for CRI-O ...
	I0603 12:10:00.064388   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 12:09:57.081724   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:09:59.581454   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:10:01.113236   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:10:03.116142   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:10:01.667557   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:10:04.166825   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:10:00.152803   73662 logs.go:123] Gathering logs for container status ...
	I0603 12:10:00.152850   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 12:10:00.198301   73662 logs.go:123] Gathering logs for kubelet ...
	I0603 12:10:00.198341   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 12:10:02.749662   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:10:02.762938   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 12:10:02.762999   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 12:10:02.800269   73662 cri.go:89] found id: ""
	I0603 12:10:02.800296   73662 logs.go:276] 0 containers: []
	W0603 12:10:02.800305   73662 logs.go:278] No container was found matching "kube-apiserver"
	I0603 12:10:02.800311   73662 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 12:10:02.800373   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 12:10:02.841326   73662 cri.go:89] found id: ""
	I0603 12:10:02.841350   73662 logs.go:276] 0 containers: []
	W0603 12:10:02.841357   73662 logs.go:278] No container was found matching "etcd"
	I0603 12:10:02.841363   73662 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 12:10:02.841409   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 12:10:02.879309   73662 cri.go:89] found id: ""
	I0603 12:10:02.879343   73662 logs.go:276] 0 containers: []
	W0603 12:10:02.879353   73662 logs.go:278] No container was found matching "coredns"
	I0603 12:10:02.879361   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 12:10:02.879423   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 12:10:02.919666   73662 cri.go:89] found id: ""
	I0603 12:10:02.919695   73662 logs.go:276] 0 containers: []
	W0603 12:10:02.919707   73662 logs.go:278] No container was found matching "kube-scheduler"
	I0603 12:10:02.919714   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 12:10:02.919761   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 12:10:02.954790   73662 cri.go:89] found id: ""
	I0603 12:10:02.954814   73662 logs.go:276] 0 containers: []
	W0603 12:10:02.954822   73662 logs.go:278] No container was found matching "kube-proxy"
	I0603 12:10:02.954827   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 12:10:02.954884   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 12:10:02.994472   73662 cri.go:89] found id: ""
	I0603 12:10:02.994515   73662 logs.go:276] 0 containers: []
	W0603 12:10:02.994527   73662 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 12:10:02.994535   73662 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 12:10:02.994598   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 12:10:03.034482   73662 cri.go:89] found id: ""
	I0603 12:10:03.034509   73662 logs.go:276] 0 containers: []
	W0603 12:10:03.034520   73662 logs.go:278] No container was found matching "kindnet"
	I0603 12:10:03.034526   73662 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 12:10:03.034591   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 12:10:03.072971   73662 cri.go:89] found id: ""
	I0603 12:10:03.073002   73662 logs.go:276] 0 containers: []
	W0603 12:10:03.073011   73662 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 12:10:03.073025   73662 logs.go:123] Gathering logs for dmesg ...
	I0603 12:10:03.073043   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 12:10:03.088043   73662 logs.go:123] Gathering logs for describe nodes ...
	I0603 12:10:03.088074   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 12:10:03.186799   73662 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 12:10:03.186829   73662 logs.go:123] Gathering logs for CRI-O ...
	I0603 12:10:03.186842   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 12:10:03.266685   73662 logs.go:123] Gathering logs for container status ...
	I0603 12:10:03.266724   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 12:10:03.317400   73662 logs.go:123] Gathering logs for kubelet ...
	I0603 12:10:03.317433   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 12:10:01.582398   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:10:04.082658   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:10:05.613678   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:10:08.112518   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:10:06.167099   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:10:08.167502   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:10:05.870335   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:10:05.884377   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 12:10:05.884469   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 12:10:05.924617   73662 cri.go:89] found id: ""
	I0603 12:10:05.924647   73662 logs.go:276] 0 containers: []
	W0603 12:10:05.924659   73662 logs.go:278] No container was found matching "kube-apiserver"
	I0603 12:10:05.924667   73662 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 12:10:05.924724   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 12:10:05.971569   73662 cri.go:89] found id: ""
	I0603 12:10:05.971605   73662 logs.go:276] 0 containers: []
	W0603 12:10:05.971615   73662 logs.go:278] No container was found matching "etcd"
	I0603 12:10:05.971623   73662 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 12:10:05.971683   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 12:10:06.010190   73662 cri.go:89] found id: ""
	I0603 12:10:06.010211   73662 logs.go:276] 0 containers: []
	W0603 12:10:06.010218   73662 logs.go:278] No container was found matching "coredns"
	I0603 12:10:06.010223   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 12:10:06.010270   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 12:10:06.056228   73662 cri.go:89] found id: ""
	I0603 12:10:06.056258   73662 logs.go:276] 0 containers: []
	W0603 12:10:06.056269   73662 logs.go:278] No container was found matching "kube-scheduler"
	I0603 12:10:06.056276   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 12:10:06.056338   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 12:10:06.096139   73662 cri.go:89] found id: ""
	I0603 12:10:06.096171   73662 logs.go:276] 0 containers: []
	W0603 12:10:06.096182   73662 logs.go:278] No container was found matching "kube-proxy"
	I0603 12:10:06.096192   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 12:10:06.096261   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 12:10:06.135290   73662 cri.go:89] found id: ""
	I0603 12:10:06.135327   73662 logs.go:276] 0 containers: []
	W0603 12:10:06.135338   73662 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 12:10:06.135346   73662 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 12:10:06.135412   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 12:10:06.177281   73662 cri.go:89] found id: ""
	I0603 12:10:06.177311   73662 logs.go:276] 0 containers: []
	W0603 12:10:06.177328   73662 logs.go:278] No container was found matching "kindnet"
	I0603 12:10:06.177335   73662 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 12:10:06.177395   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 12:10:06.216791   73662 cri.go:89] found id: ""
	I0603 12:10:06.216823   73662 logs.go:276] 0 containers: []
	W0603 12:10:06.216835   73662 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 12:10:06.216845   73662 logs.go:123] Gathering logs for kubelet ...
	I0603 12:10:06.216874   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 12:10:06.272731   73662 logs.go:123] Gathering logs for dmesg ...
	I0603 12:10:06.272772   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 12:10:06.289080   73662 logs.go:123] Gathering logs for describe nodes ...
	I0603 12:10:06.289118   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 12:10:06.358105   73662 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 12:10:06.358134   73662 logs.go:123] Gathering logs for CRI-O ...
	I0603 12:10:06.358148   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 12:10:06.433071   73662 logs.go:123] Gathering logs for container status ...
	I0603 12:10:06.433107   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 12:10:08.974934   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:10:08.988808   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 12:10:08.988883   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 12:10:09.023595   73662 cri.go:89] found id: ""
	I0603 12:10:09.023620   73662 logs.go:276] 0 containers: []
	W0603 12:10:09.023627   73662 logs.go:278] No container was found matching "kube-apiserver"
	I0603 12:10:09.023633   73662 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 12:10:09.023683   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 12:10:09.060962   73662 cri.go:89] found id: ""
	I0603 12:10:09.060990   73662 logs.go:276] 0 containers: []
	W0603 12:10:09.061000   73662 logs.go:278] No container was found matching "etcd"
	I0603 12:10:09.061006   73662 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 12:10:09.061080   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 12:10:09.099923   73662 cri.go:89] found id: ""
	I0603 12:10:09.099952   73662 logs.go:276] 0 containers: []
	W0603 12:10:09.099961   73662 logs.go:278] No container was found matching "coredns"
	I0603 12:10:09.099970   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 12:10:09.100030   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 12:10:09.138521   73662 cri.go:89] found id: ""
	I0603 12:10:09.138547   73662 logs.go:276] 0 containers: []
	W0603 12:10:09.138555   73662 logs.go:278] No container was found matching "kube-scheduler"
	I0603 12:10:09.138561   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 12:10:09.138614   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 12:10:09.178492   73662 cri.go:89] found id: ""
	I0603 12:10:09.178519   73662 logs.go:276] 0 containers: []
	W0603 12:10:09.178529   73662 logs.go:278] No container was found matching "kube-proxy"
	I0603 12:10:09.178537   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 12:10:09.178603   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 12:10:09.215779   73662 cri.go:89] found id: ""
	I0603 12:10:09.215812   73662 logs.go:276] 0 containers: []
	W0603 12:10:09.215819   73662 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 12:10:09.215832   73662 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 12:10:09.215894   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 12:10:09.250800   73662 cri.go:89] found id: ""
	I0603 12:10:09.250835   73662 logs.go:276] 0 containers: []
	W0603 12:10:09.250845   73662 logs.go:278] No container was found matching "kindnet"
	I0603 12:10:09.250852   73662 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 12:10:09.250913   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 12:10:09.286742   73662 cri.go:89] found id: ""
	I0603 12:10:09.286773   73662 logs.go:276] 0 containers: []
	W0603 12:10:09.286784   73662 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 12:10:09.286794   73662 logs.go:123] Gathering logs for kubelet ...
	I0603 12:10:09.286808   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 12:10:09.341156   73662 logs.go:123] Gathering logs for dmesg ...
	I0603 12:10:09.341189   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 12:10:09.356237   73662 logs.go:123] Gathering logs for describe nodes ...
	I0603 12:10:09.356273   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 12:10:09.436633   73662 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 12:10:09.436654   73662 logs.go:123] Gathering logs for CRI-O ...
	I0603 12:10:09.436666   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 12:10:09.519296   73662 logs.go:123] Gathering logs for container status ...
	I0603 12:10:09.519336   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 12:10:06.581573   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:10:09.081354   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:10:10.113408   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:10:12.113838   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:10:10.168197   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:10:12.667631   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:10:14.667886   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:10:12.090458   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:10:12.105250   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 12:10:12.105324   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 12:10:12.143229   73662 cri.go:89] found id: ""
	I0603 12:10:12.143257   73662 logs.go:276] 0 containers: []
	W0603 12:10:12.143268   73662 logs.go:278] No container was found matching "kube-apiserver"
	I0603 12:10:12.143276   73662 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 12:10:12.143345   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 12:10:12.183319   73662 cri.go:89] found id: ""
	I0603 12:10:12.183343   73662 logs.go:276] 0 containers: []
	W0603 12:10:12.183353   73662 logs.go:278] No container was found matching "etcd"
	I0603 12:10:12.183361   73662 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 12:10:12.183421   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 12:10:12.221154   73662 cri.go:89] found id: ""
	I0603 12:10:12.221178   73662 logs.go:276] 0 containers: []
	W0603 12:10:12.221186   73662 logs.go:278] No container was found matching "coredns"
	I0603 12:10:12.221191   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 12:10:12.221252   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 12:10:12.256387   73662 cri.go:89] found id: ""
	I0603 12:10:12.256417   73662 logs.go:276] 0 containers: []
	W0603 12:10:12.256428   73662 logs.go:278] No container was found matching "kube-scheduler"
	I0603 12:10:12.256436   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 12:10:12.256492   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 12:10:12.298777   73662 cri.go:89] found id: ""
	I0603 12:10:12.298807   73662 logs.go:276] 0 containers: []
	W0603 12:10:12.298817   73662 logs.go:278] No container was found matching "kube-proxy"
	I0603 12:10:12.298825   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 12:10:12.298883   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 12:10:12.337031   73662 cri.go:89] found id: ""
	I0603 12:10:12.337060   73662 logs.go:276] 0 containers: []
	W0603 12:10:12.337070   73662 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 12:10:12.337077   73662 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 12:10:12.337136   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 12:10:12.373729   73662 cri.go:89] found id: ""
	I0603 12:10:12.373759   73662 logs.go:276] 0 containers: []
	W0603 12:10:12.373766   73662 logs.go:278] No container was found matching "kindnet"
	I0603 12:10:12.373772   73662 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 12:10:12.373823   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 12:10:12.408295   73662 cri.go:89] found id: ""
	I0603 12:10:12.408337   73662 logs.go:276] 0 containers: []
	W0603 12:10:12.408346   73662 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 12:10:12.408357   73662 logs.go:123] Gathering logs for kubelet ...
	I0603 12:10:12.408371   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 12:10:12.458814   73662 logs.go:123] Gathering logs for dmesg ...
	I0603 12:10:12.458844   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 12:10:12.471995   73662 logs.go:123] Gathering logs for describe nodes ...
	I0603 12:10:12.472020   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 12:10:12.542342   73662 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 12:10:12.542364   73662 logs.go:123] Gathering logs for CRI-O ...
	I0603 12:10:12.542379   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 12:10:12.620295   73662 logs.go:123] Gathering logs for container status ...
	I0603 12:10:12.620328   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 12:10:11.081820   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:10:13.580873   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:10:14.613837   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:10:16.613987   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:10:18.614774   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:10:17.166332   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:10:19.167726   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:10:15.162145   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:10:15.178057   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 12:10:15.178110   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 12:10:15.217189   73662 cri.go:89] found id: ""
	I0603 12:10:15.217218   73662 logs.go:276] 0 containers: []
	W0603 12:10:15.217228   73662 logs.go:278] No container was found matching "kube-apiserver"
	I0603 12:10:15.217235   73662 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 12:10:15.217291   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 12:10:15.265380   73662 cri.go:89] found id: ""
	I0603 12:10:15.265419   73662 logs.go:276] 0 containers: []
	W0603 12:10:15.265430   73662 logs.go:278] No container was found matching "etcd"
	I0603 12:10:15.265438   73662 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 12:10:15.265500   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 12:10:15.310671   73662 cri.go:89] found id: ""
	I0603 12:10:15.310736   73662 logs.go:276] 0 containers: []
	W0603 12:10:15.310772   73662 logs.go:278] No container was found matching "coredns"
	I0603 12:10:15.310787   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 12:10:15.310884   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 12:10:15.377888   73662 cri.go:89] found id: ""
	I0603 12:10:15.377914   73662 logs.go:276] 0 containers: []
	W0603 12:10:15.377921   73662 logs.go:278] No container was found matching "kube-scheduler"
	I0603 12:10:15.377928   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 12:10:15.377972   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 12:10:15.415472   73662 cri.go:89] found id: ""
	I0603 12:10:15.415502   73662 logs.go:276] 0 containers: []
	W0603 12:10:15.415510   73662 logs.go:278] No container was found matching "kube-proxy"
	I0603 12:10:15.415516   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 12:10:15.415563   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 12:10:15.450721   73662 cri.go:89] found id: ""
	I0603 12:10:15.450748   73662 logs.go:276] 0 containers: []
	W0603 12:10:15.450755   73662 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 12:10:15.450760   73662 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 12:10:15.450814   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 12:10:15.484329   73662 cri.go:89] found id: ""
	I0603 12:10:15.484356   73662 logs.go:276] 0 containers: []
	W0603 12:10:15.484363   73662 logs.go:278] No container was found matching "kindnet"
	I0603 12:10:15.484368   73662 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 12:10:15.484426   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 12:10:15.516976   73662 cri.go:89] found id: ""
	I0603 12:10:15.517005   73662 logs.go:276] 0 containers: []
	W0603 12:10:15.517015   73662 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 12:10:15.517025   73662 logs.go:123] Gathering logs for kubelet ...
	I0603 12:10:15.517038   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 12:10:15.569023   73662 logs.go:123] Gathering logs for dmesg ...
	I0603 12:10:15.569053   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 12:10:15.583710   73662 logs.go:123] Gathering logs for describe nodes ...
	I0603 12:10:15.583737   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 12:10:15.656403   73662 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 12:10:15.656426   73662 logs.go:123] Gathering logs for CRI-O ...
	I0603 12:10:15.656438   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 12:10:15.745585   73662 logs.go:123] Gathering logs for container status ...
	I0603 12:10:15.745619   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 12:10:18.290608   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:10:18.305165   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 12:10:18.305238   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 12:10:18.341905   73662 cri.go:89] found id: ""
	I0603 12:10:18.341929   73662 logs.go:276] 0 containers: []
	W0603 12:10:18.341939   73662 logs.go:278] No container was found matching "kube-apiserver"
	I0603 12:10:18.341945   73662 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 12:10:18.342001   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 12:10:18.378313   73662 cri.go:89] found id: ""
	I0603 12:10:18.378341   73662 logs.go:276] 0 containers: []
	W0603 12:10:18.378348   73662 logs.go:278] No container was found matching "etcd"
	I0603 12:10:18.378354   73662 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 12:10:18.378401   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 12:10:18.413366   73662 cri.go:89] found id: ""
	I0603 12:10:18.413414   73662 logs.go:276] 0 containers: []
	W0603 12:10:18.413424   73662 logs.go:278] No container was found matching "coredns"
	I0603 12:10:18.413432   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 12:10:18.413492   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 12:10:18.448694   73662 cri.go:89] found id: ""
	I0603 12:10:18.448727   73662 logs.go:276] 0 containers: []
	W0603 12:10:18.448738   73662 logs.go:278] No container was found matching "kube-scheduler"
	I0603 12:10:18.448745   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 12:10:18.448802   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 12:10:18.482640   73662 cri.go:89] found id: ""
	I0603 12:10:18.482678   73662 logs.go:276] 0 containers: []
	W0603 12:10:18.482689   73662 logs.go:278] No container was found matching "kube-proxy"
	I0603 12:10:18.482696   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 12:10:18.482757   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 12:10:18.520929   73662 cri.go:89] found id: ""
	I0603 12:10:18.520962   73662 logs.go:276] 0 containers: []
	W0603 12:10:18.520975   73662 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 12:10:18.520983   73662 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 12:10:18.521045   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 12:10:18.558678   73662 cri.go:89] found id: ""
	I0603 12:10:18.558712   73662 logs.go:276] 0 containers: []
	W0603 12:10:18.558723   73662 logs.go:278] No container was found matching "kindnet"
	I0603 12:10:18.558730   73662 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 12:10:18.558788   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 12:10:18.597574   73662 cri.go:89] found id: ""
	I0603 12:10:18.597599   73662 logs.go:276] 0 containers: []
	W0603 12:10:18.597609   73662 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 12:10:18.597619   73662 logs.go:123] Gathering logs for kubelet ...
	I0603 12:10:18.597633   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 12:10:18.652569   73662 logs.go:123] Gathering logs for dmesg ...
	I0603 12:10:18.652596   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 12:10:18.667829   73662 logs.go:123] Gathering logs for describe nodes ...
	I0603 12:10:18.667861   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 12:10:18.740869   73662 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 12:10:18.740888   73662 logs.go:123] Gathering logs for CRI-O ...
	I0603 12:10:18.740899   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 12:10:18.822108   73662 logs.go:123] Gathering logs for container status ...
	I0603 12:10:18.822143   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 12:10:15.581618   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:10:18.081181   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:10:21.113841   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:10:23.612530   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:10:21.667682   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:10:24.167351   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:10:21.363741   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:10:21.377941   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 12:10:21.378011   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 12:10:21.414406   73662 cri.go:89] found id: ""
	I0603 12:10:21.414434   73662 logs.go:276] 0 containers: []
	W0603 12:10:21.414446   73662 logs.go:278] No container was found matching "kube-apiserver"
	I0603 12:10:21.414454   73662 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 12:10:21.414513   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 12:10:21.449028   73662 cri.go:89] found id: ""
	I0603 12:10:21.449065   73662 logs.go:276] 0 containers: []
	W0603 12:10:21.449074   73662 logs.go:278] No container was found matching "etcd"
	I0603 12:10:21.449080   73662 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 12:10:21.449126   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 12:10:21.483017   73662 cri.go:89] found id: ""
	I0603 12:10:21.483052   73662 logs.go:276] 0 containers: []
	W0603 12:10:21.483064   73662 logs.go:278] No container was found matching "coredns"
	I0603 12:10:21.483071   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 12:10:21.483120   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 12:10:21.519195   73662 cri.go:89] found id: ""
	I0603 12:10:21.519227   73662 logs.go:276] 0 containers: []
	W0603 12:10:21.519237   73662 logs.go:278] No container was found matching "kube-scheduler"
	I0603 12:10:21.519245   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 12:10:21.519304   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 12:10:21.556228   73662 cri.go:89] found id: ""
	I0603 12:10:21.556257   73662 logs.go:276] 0 containers: []
	W0603 12:10:21.556270   73662 logs.go:278] No container was found matching "kube-proxy"
	I0603 12:10:21.556276   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 12:10:21.556337   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 12:10:21.594772   73662 cri.go:89] found id: ""
	I0603 12:10:21.594798   73662 logs.go:276] 0 containers: []
	W0603 12:10:21.594808   73662 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 12:10:21.594817   73662 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 12:10:21.594875   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 12:10:21.629808   73662 cri.go:89] found id: ""
	I0603 12:10:21.629830   73662 logs.go:276] 0 containers: []
	W0603 12:10:21.629837   73662 logs.go:278] No container was found matching "kindnet"
	I0603 12:10:21.629843   73662 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 12:10:21.629891   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 12:10:21.675237   73662 cri.go:89] found id: ""
	I0603 12:10:21.675263   73662 logs.go:276] 0 containers: []
	W0603 12:10:21.675272   73662 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 12:10:21.675282   73662 logs.go:123] Gathering logs for kubelet ...
	I0603 12:10:21.675295   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 12:10:21.730416   73662 logs.go:123] Gathering logs for dmesg ...
	I0603 12:10:21.730445   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 12:10:21.744442   73662 logs.go:123] Gathering logs for describe nodes ...
	I0603 12:10:21.744467   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 12:10:21.826282   73662 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 12:10:21.826308   73662 logs.go:123] Gathering logs for CRI-O ...
	I0603 12:10:21.826324   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 12:10:21.911387   73662 logs.go:123] Gathering logs for container status ...
	I0603 12:10:21.911422   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 12:10:24.454912   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:10:24.469992   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 12:10:24.470069   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 12:10:24.509462   73662 cri.go:89] found id: ""
	I0603 12:10:24.509501   73662 logs.go:276] 0 containers: []
	W0603 12:10:24.509516   73662 logs.go:278] No container was found matching "kube-apiserver"
	I0603 12:10:24.509523   73662 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 12:10:24.509588   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 12:10:24.543878   73662 cri.go:89] found id: ""
	I0603 12:10:24.543902   73662 logs.go:276] 0 containers: []
	W0603 12:10:24.543910   73662 logs.go:278] No container was found matching "etcd"
	I0603 12:10:24.543916   73662 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 12:10:24.543969   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 12:10:24.582712   73662 cri.go:89] found id: ""
	I0603 12:10:24.582741   73662 logs.go:276] 0 containers: []
	W0603 12:10:24.582752   73662 logs.go:278] No container was found matching "coredns"
	I0603 12:10:24.582759   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 12:10:24.582824   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 12:10:24.620533   73662 cri.go:89] found id: ""
	I0603 12:10:24.620560   73662 logs.go:276] 0 containers: []
	W0603 12:10:24.620571   73662 logs.go:278] No container was found matching "kube-scheduler"
	I0603 12:10:24.620577   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 12:10:24.620629   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 12:10:24.658750   73662 cri.go:89] found id: ""
	I0603 12:10:24.658774   73662 logs.go:276] 0 containers: []
	W0603 12:10:24.658781   73662 logs.go:278] No container was found matching "kube-proxy"
	I0603 12:10:24.658787   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 12:10:24.658830   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 12:10:24.697870   73662 cri.go:89] found id: ""
	I0603 12:10:24.697898   73662 logs.go:276] 0 containers: []
	W0603 12:10:24.697914   73662 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 12:10:24.697922   73662 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 12:10:24.697982   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 12:10:24.733557   73662 cri.go:89] found id: ""
	I0603 12:10:24.733583   73662 logs.go:276] 0 containers: []
	W0603 12:10:24.733593   73662 logs.go:278] No container was found matching "kindnet"
	I0603 12:10:24.733601   73662 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 12:10:24.733658   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 12:10:24.767874   73662 cri.go:89] found id: ""
	I0603 12:10:24.767901   73662 logs.go:276] 0 containers: []
	W0603 12:10:24.767910   73662 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 12:10:24.767920   73662 logs.go:123] Gathering logs for kubelet ...
	I0603 12:10:24.767934   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 12:10:24.821155   73662 logs.go:123] Gathering logs for dmesg ...
	I0603 12:10:24.821188   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 12:10:24.835506   73662 logs.go:123] Gathering logs for describe nodes ...
	I0603 12:10:24.835533   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 12:10:24.911295   73662 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 12:10:24.911317   73662 logs.go:123] Gathering logs for CRI-O ...
	I0603 12:10:24.911331   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 12:10:24.998831   73662 logs.go:123] Gathering logs for container status ...
	I0603 12:10:24.998870   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 12:10:20.581174   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:10:22.582071   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:10:25.081112   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:10:26.113580   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:10:28.113842   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:10:26.167517   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:10:28.666601   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:10:27.547553   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:10:27.562219   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 12:10:27.562283   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 12:10:27.604320   73662 cri.go:89] found id: ""
	I0603 12:10:27.604354   73662 logs.go:276] 0 containers: []
	W0603 12:10:27.604362   73662 logs.go:278] No container was found matching "kube-apiserver"
	I0603 12:10:27.604368   73662 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 12:10:27.604431   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 12:10:27.645069   73662 cri.go:89] found id: ""
	I0603 12:10:27.645093   73662 logs.go:276] 0 containers: []
	W0603 12:10:27.645100   73662 logs.go:278] No container was found matching "etcd"
	I0603 12:10:27.645105   73662 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 12:10:27.645208   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 12:10:27.682961   73662 cri.go:89] found id: ""
	I0603 12:10:27.682984   73662 logs.go:276] 0 containers: []
	W0603 12:10:27.682992   73662 logs.go:278] No container was found matching "coredns"
	I0603 12:10:27.682997   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 12:10:27.683065   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 12:10:27.716279   73662 cri.go:89] found id: ""
	I0603 12:10:27.716310   73662 logs.go:276] 0 containers: []
	W0603 12:10:27.716321   73662 logs.go:278] No container was found matching "kube-scheduler"
	I0603 12:10:27.716330   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 12:10:27.716405   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 12:10:27.758347   73662 cri.go:89] found id: ""
	I0603 12:10:27.758380   73662 logs.go:276] 0 containers: []
	W0603 12:10:27.758390   73662 logs.go:278] No container was found matching "kube-proxy"
	I0603 12:10:27.758397   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 12:10:27.758446   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 12:10:27.798212   73662 cri.go:89] found id: ""
	I0603 12:10:27.798240   73662 logs.go:276] 0 containers: []
	W0603 12:10:27.798249   73662 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 12:10:27.798258   73662 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 12:10:27.798318   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 12:10:27.831688   73662 cri.go:89] found id: ""
	I0603 12:10:27.831709   73662 logs.go:276] 0 containers: []
	W0603 12:10:27.831716   73662 logs.go:278] No container was found matching "kindnet"
	I0603 12:10:27.831722   73662 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 12:10:27.831776   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 12:10:27.864395   73662 cri.go:89] found id: ""
	I0603 12:10:27.864423   73662 logs.go:276] 0 containers: []
	W0603 12:10:27.864433   73662 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 12:10:27.864444   73662 logs.go:123] Gathering logs for kubelet ...
	I0603 12:10:27.864463   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 12:10:27.915528   73662 logs.go:123] Gathering logs for dmesg ...
	I0603 12:10:27.915556   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 12:10:27.929783   73662 logs.go:123] Gathering logs for describe nodes ...
	I0603 12:10:27.929806   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 12:10:28.005168   73662 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 12:10:28.005245   73662 logs.go:123] Gathering logs for CRI-O ...
	I0603 12:10:28.005267   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 12:10:28.090748   73662 logs.go:123] Gathering logs for container status ...
	I0603 12:10:28.090779   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 12:10:27.582855   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:10:30.081021   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:10:30.615472   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:10:33.112833   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:10:30.668051   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:10:33.167211   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:10:30.631148   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:10:30.645518   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 12:10:30.645590   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 12:10:30.684016   73662 cri.go:89] found id: ""
	I0603 12:10:30.684044   73662 logs.go:276] 0 containers: []
	W0603 12:10:30.684054   73662 logs.go:278] No container was found matching "kube-apiserver"
	I0603 12:10:30.684062   73662 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 12:10:30.684129   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 12:10:30.720344   73662 cri.go:89] found id: ""
	I0603 12:10:30.720371   73662 logs.go:276] 0 containers: []
	W0603 12:10:30.720379   73662 logs.go:278] No container was found matching "etcd"
	I0603 12:10:30.720384   73662 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 12:10:30.720437   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 12:10:30.754123   73662 cri.go:89] found id: ""
	I0603 12:10:30.754156   73662 logs.go:276] 0 containers: []
	W0603 12:10:30.754167   73662 logs.go:278] No container was found matching "coredns"
	I0603 12:10:30.754175   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 12:10:30.754228   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 12:10:30.788398   73662 cri.go:89] found id: ""
	I0603 12:10:30.788425   73662 logs.go:276] 0 containers: []
	W0603 12:10:30.788436   73662 logs.go:278] No container was found matching "kube-scheduler"
	I0603 12:10:30.788455   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 12:10:30.788523   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 12:10:30.826122   73662 cri.go:89] found id: ""
	I0603 12:10:30.826149   73662 logs.go:276] 0 containers: []
	W0603 12:10:30.826157   73662 logs.go:278] No container was found matching "kube-proxy"
	I0603 12:10:30.826163   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 12:10:30.826221   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 12:10:30.862886   73662 cri.go:89] found id: ""
	I0603 12:10:30.862917   73662 logs.go:276] 0 containers: []
	W0603 12:10:30.862930   73662 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 12:10:30.862938   73662 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 12:10:30.862995   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 12:10:30.897587   73662 cri.go:89] found id: ""
	I0603 12:10:30.897616   73662 logs.go:276] 0 containers: []
	W0603 12:10:30.897628   73662 logs.go:278] No container was found matching "kindnet"
	I0603 12:10:30.897635   73662 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 12:10:30.897692   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 12:10:30.936463   73662 cri.go:89] found id: ""
	I0603 12:10:30.936493   73662 logs.go:276] 0 containers: []
	W0603 12:10:30.936510   73662 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 12:10:30.936521   73662 logs.go:123] Gathering logs for kubelet ...
	I0603 12:10:30.936535   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 12:10:30.987304   73662 logs.go:123] Gathering logs for dmesg ...
	I0603 12:10:30.987341   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 12:10:31.001608   73662 logs.go:123] Gathering logs for describe nodes ...
	I0603 12:10:31.001636   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 12:10:31.079366   73662 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 12:10:31.079385   73662 logs.go:123] Gathering logs for CRI-O ...
	I0603 12:10:31.079398   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 12:10:31.158814   73662 logs.go:123] Gathering logs for container status ...
	I0603 12:10:31.158851   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 12:10:33.699524   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:10:33.713194   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 12:10:33.713256   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 12:10:33.747030   73662 cri.go:89] found id: ""
	I0603 12:10:33.747073   73662 logs.go:276] 0 containers: []
	W0603 12:10:33.747084   73662 logs.go:278] No container was found matching "kube-apiserver"
	I0603 12:10:33.747092   73662 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 12:10:33.747151   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 12:10:33.781873   73662 cri.go:89] found id: ""
	I0603 12:10:33.781909   73662 logs.go:276] 0 containers: []
	W0603 12:10:33.781920   73662 logs.go:278] No container was found matching "etcd"
	I0603 12:10:33.781927   73662 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 12:10:33.781992   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 12:10:33.828337   73662 cri.go:89] found id: ""
	I0603 12:10:33.828366   73662 logs.go:276] 0 containers: []
	W0603 12:10:33.828374   73662 logs.go:278] No container was found matching "coredns"
	I0603 12:10:33.828380   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 12:10:33.828433   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 12:10:33.868051   73662 cri.go:89] found id: ""
	I0603 12:10:33.868089   73662 logs.go:276] 0 containers: []
	W0603 12:10:33.868101   73662 logs.go:278] No container was found matching "kube-scheduler"
	I0603 12:10:33.868109   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 12:10:33.868168   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 12:10:33.913693   73662 cri.go:89] found id: ""
	I0603 12:10:33.913725   73662 logs.go:276] 0 containers: []
	W0603 12:10:33.913736   73662 logs.go:278] No container was found matching "kube-proxy"
	I0603 12:10:33.913743   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 12:10:33.913824   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 12:10:33.952082   73662 cri.go:89] found id: ""
	I0603 12:10:33.952111   73662 logs.go:276] 0 containers: []
	W0603 12:10:33.952122   73662 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 12:10:33.952129   73662 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 12:10:33.952183   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 12:10:33.994921   73662 cri.go:89] found id: ""
	I0603 12:10:33.994944   73662 logs.go:276] 0 containers: []
	W0603 12:10:33.994952   73662 logs.go:278] No container was found matching "kindnet"
	I0603 12:10:33.994959   73662 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 12:10:33.995008   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 12:10:34.033315   73662 cri.go:89] found id: ""
	I0603 12:10:34.033346   73662 logs.go:276] 0 containers: []
	W0603 12:10:34.033357   73662 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 12:10:34.033368   73662 logs.go:123] Gathering logs for kubelet ...
	I0603 12:10:34.033381   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 12:10:34.087719   73662 logs.go:123] Gathering logs for dmesg ...
	I0603 12:10:34.087746   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 12:10:34.101109   73662 logs.go:123] Gathering logs for describe nodes ...
	I0603 12:10:34.101134   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 12:10:34.180100   73662 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 12:10:34.180121   73662 logs.go:123] Gathering logs for CRI-O ...
	I0603 12:10:34.180135   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 12:10:34.255838   73662 logs.go:123] Gathering logs for container status ...
	I0603 12:10:34.255870   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 12:10:32.583080   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:10:35.081454   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:10:35.113238   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:10:37.611978   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:10:35.668549   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:10:38.166687   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:10:36.800845   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:10:36.815775   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 12:10:36.815834   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 12:10:36.849970   73662 cri.go:89] found id: ""
	I0603 12:10:36.849999   73662 logs.go:276] 0 containers: []
	W0603 12:10:36.850009   73662 logs.go:278] No container was found matching "kube-apiserver"
	I0603 12:10:36.850015   73662 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 12:10:36.850063   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 12:10:36.886418   73662 cri.go:89] found id: ""
	I0603 12:10:36.886448   73662 logs.go:276] 0 containers: []
	W0603 12:10:36.886456   73662 logs.go:278] No container was found matching "etcd"
	I0603 12:10:36.886461   73662 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 12:10:36.886506   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 12:10:36.919671   73662 cri.go:89] found id: ""
	I0603 12:10:36.919696   73662 logs.go:276] 0 containers: []
	W0603 12:10:36.919703   73662 logs.go:278] No container was found matching "coredns"
	I0603 12:10:36.919710   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 12:10:36.919766   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 12:10:36.954412   73662 cri.go:89] found id: ""
	I0603 12:10:36.954436   73662 logs.go:276] 0 containers: []
	W0603 12:10:36.954446   73662 logs.go:278] No container was found matching "kube-scheduler"
	I0603 12:10:36.954453   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 12:10:36.954513   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 12:10:36.989805   73662 cri.go:89] found id: ""
	I0603 12:10:36.989836   73662 logs.go:276] 0 containers: []
	W0603 12:10:36.989848   73662 logs.go:278] No container was found matching "kube-proxy"
	I0603 12:10:36.989856   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 12:10:36.989930   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 12:10:37.023883   73662 cri.go:89] found id: ""
	I0603 12:10:37.023913   73662 logs.go:276] 0 containers: []
	W0603 12:10:37.023922   73662 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 12:10:37.023930   73662 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 12:10:37.023995   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 12:10:37.058617   73662 cri.go:89] found id: ""
	I0603 12:10:37.058646   73662 logs.go:276] 0 containers: []
	W0603 12:10:37.058654   73662 logs.go:278] No container was found matching "kindnet"
	I0603 12:10:37.058661   73662 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 12:10:37.058719   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 12:10:37.093143   73662 cri.go:89] found id: ""
	I0603 12:10:37.093167   73662 logs.go:276] 0 containers: []
	W0603 12:10:37.093177   73662 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 12:10:37.093192   73662 logs.go:123] Gathering logs for container status ...
	I0603 12:10:37.093208   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 12:10:37.133117   73662 logs.go:123] Gathering logs for kubelet ...
	I0603 12:10:37.133147   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 12:10:37.188143   73662 logs.go:123] Gathering logs for dmesg ...
	I0603 12:10:37.188174   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 12:10:37.202654   73662 logs.go:123] Gathering logs for describe nodes ...
	I0603 12:10:37.202687   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 12:10:37.276401   73662 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 12:10:37.276429   73662 logs.go:123] Gathering logs for CRI-O ...
	I0603 12:10:37.276443   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 12:10:39.855590   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:10:39.870119   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 12:10:39.870189   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 12:10:39.907496   73662 cri.go:89] found id: ""
	I0603 12:10:39.907527   73662 logs.go:276] 0 containers: []
	W0603 12:10:39.907537   73662 logs.go:278] No container was found matching "kube-apiserver"
	I0603 12:10:39.907545   73662 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 12:10:39.907607   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 12:10:39.942745   73662 cri.go:89] found id: ""
	I0603 12:10:39.942774   73662 logs.go:276] 0 containers: []
	W0603 12:10:39.942784   73662 logs.go:278] No container was found matching "etcd"
	I0603 12:10:39.942791   73662 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 12:10:39.942853   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 12:10:39.981620   73662 cri.go:89] found id: ""
	I0603 12:10:39.981649   73662 logs.go:276] 0 containers: []
	W0603 12:10:39.981660   73662 logs.go:278] No container was found matching "coredns"
	I0603 12:10:39.981667   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 12:10:39.981718   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 12:10:40.020121   73662 cri.go:89] found id: ""
	I0603 12:10:40.020155   73662 logs.go:276] 0 containers: []
	W0603 12:10:40.020167   73662 logs.go:278] No container was found matching "kube-scheduler"
	I0603 12:10:40.020175   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 12:10:40.020240   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 12:10:40.059547   73662 cri.go:89] found id: ""
	I0603 12:10:40.059580   73662 logs.go:276] 0 containers: []
	W0603 12:10:40.059591   73662 logs.go:278] No container was found matching "kube-proxy"
	I0603 12:10:40.059598   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 12:10:40.059659   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 12:10:37.082294   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:10:39.581774   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:10:39.614702   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:10:42.112933   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:10:44.113960   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:10:40.167350   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:10:42.667457   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:10:40.097365   73662 cri.go:89] found id: ""
	I0603 12:10:40.097386   73662 logs.go:276] 0 containers: []
	W0603 12:10:40.097393   73662 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 12:10:40.097400   73662 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 12:10:40.097441   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 12:10:40.132635   73662 cri.go:89] found id: ""
	I0603 12:10:40.132657   73662 logs.go:276] 0 containers: []
	W0603 12:10:40.132664   73662 logs.go:278] No container was found matching "kindnet"
	I0603 12:10:40.132670   73662 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 12:10:40.132725   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 12:10:40.165849   73662 cri.go:89] found id: ""
	I0603 12:10:40.165875   73662 logs.go:276] 0 containers: []
	W0603 12:10:40.165885   73662 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 12:10:40.165895   73662 logs.go:123] Gathering logs for kubelet ...
	I0603 12:10:40.165910   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 12:10:40.218842   73662 logs.go:123] Gathering logs for dmesg ...
	I0603 12:10:40.218871   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 12:10:40.232800   73662 logs.go:123] Gathering logs for describe nodes ...
	I0603 12:10:40.232825   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 12:10:40.300026   73662 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 12:10:40.300050   73662 logs.go:123] Gathering logs for CRI-O ...
	I0603 12:10:40.300065   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 12:10:40.376985   73662 logs.go:123] Gathering logs for container status ...
	I0603 12:10:40.377017   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 12:10:42.916093   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:10:42.930099   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 12:10:42.930157   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 12:10:42.965541   73662 cri.go:89] found id: ""
	I0603 12:10:42.965565   73662 logs.go:276] 0 containers: []
	W0603 12:10:42.965575   73662 logs.go:278] No container was found matching "kube-apiserver"
	I0603 12:10:42.965582   73662 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 12:10:42.965639   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 12:10:43.000837   73662 cri.go:89] found id: ""
	I0603 12:10:43.000863   73662 logs.go:276] 0 containers: []
	W0603 12:10:43.000871   73662 logs.go:278] No container was found matching "etcd"
	I0603 12:10:43.000877   73662 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 12:10:43.000930   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 12:10:43.036557   73662 cri.go:89] found id: ""
	I0603 12:10:43.036593   73662 logs.go:276] 0 containers: []
	W0603 12:10:43.036605   73662 logs.go:278] No container was found matching "coredns"
	I0603 12:10:43.036626   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 12:10:43.036695   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 12:10:43.076479   73662 cri.go:89] found id: ""
	I0603 12:10:43.076507   73662 logs.go:276] 0 containers: []
	W0603 12:10:43.076515   73662 logs.go:278] No container was found matching "kube-scheduler"
	I0603 12:10:43.076521   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 12:10:43.076571   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 12:10:43.116301   73662 cri.go:89] found id: ""
	I0603 12:10:43.116328   73662 logs.go:276] 0 containers: []
	W0603 12:10:43.116338   73662 logs.go:278] No container was found matching "kube-proxy"
	I0603 12:10:43.116345   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 12:10:43.116393   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 12:10:43.150538   73662 cri.go:89] found id: ""
	I0603 12:10:43.150576   73662 logs.go:276] 0 containers: []
	W0603 12:10:43.150587   73662 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 12:10:43.150594   73662 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 12:10:43.150662   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 12:10:43.183948   73662 cri.go:89] found id: ""
	I0603 12:10:43.183976   73662 logs.go:276] 0 containers: []
	W0603 12:10:43.183987   73662 logs.go:278] No container was found matching "kindnet"
	I0603 12:10:43.183996   73662 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 12:10:43.184048   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 12:10:43.217610   73662 cri.go:89] found id: ""
	I0603 12:10:43.217636   73662 logs.go:276] 0 containers: []
	W0603 12:10:43.217643   73662 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 12:10:43.217651   73662 logs.go:123] Gathering logs for dmesg ...
	I0603 12:10:43.217669   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 12:10:43.231630   73662 logs.go:123] Gathering logs for describe nodes ...
	I0603 12:10:43.231655   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 12:10:43.298061   73662 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 12:10:43.298079   73662 logs.go:123] Gathering logs for CRI-O ...
	I0603 12:10:43.298092   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 12:10:43.388176   73662 logs.go:123] Gathering logs for container status ...
	I0603 12:10:43.388212   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 12:10:43.426277   73662 logs.go:123] Gathering logs for kubelet ...
	I0603 12:10:43.426303   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 12:10:42.081320   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:10:44.083275   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:10:46.612864   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:10:48.613666   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:10:45.166933   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:10:47.666784   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:10:45.977882   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:10:45.991655   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 12:10:45.991716   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 12:10:46.030455   73662 cri.go:89] found id: ""
	I0603 12:10:46.030483   73662 logs.go:276] 0 containers: []
	W0603 12:10:46.030492   73662 logs.go:278] No container was found matching "kube-apiserver"
	I0603 12:10:46.030497   73662 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 12:10:46.030542   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 12:10:46.065983   73662 cri.go:89] found id: ""
	I0603 12:10:46.066019   73662 logs.go:276] 0 containers: []
	W0603 12:10:46.066028   73662 logs.go:278] No container was found matching "etcd"
	I0603 12:10:46.066037   73662 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 12:10:46.066089   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 12:10:46.102788   73662 cri.go:89] found id: ""
	I0603 12:10:46.102816   73662 logs.go:276] 0 containers: []
	W0603 12:10:46.102824   73662 logs.go:278] No container was found matching "coredns"
	I0603 12:10:46.102830   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 12:10:46.102878   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 12:10:46.141588   73662 cri.go:89] found id: ""
	I0603 12:10:46.141615   73662 logs.go:276] 0 containers: []
	W0603 12:10:46.141625   73662 logs.go:278] No container was found matching "kube-scheduler"
	I0603 12:10:46.141634   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 12:10:46.141686   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 12:10:46.176109   73662 cri.go:89] found id: ""
	I0603 12:10:46.176133   73662 logs.go:276] 0 containers: []
	W0603 12:10:46.176140   73662 logs.go:278] No container was found matching "kube-proxy"
	I0603 12:10:46.176146   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 12:10:46.176199   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 12:10:46.211660   73662 cri.go:89] found id: ""
	I0603 12:10:46.211687   73662 logs.go:276] 0 containers: []
	W0603 12:10:46.211699   73662 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 12:10:46.211706   73662 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 12:10:46.211766   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 12:10:46.247703   73662 cri.go:89] found id: ""
	I0603 12:10:46.247724   73662 logs.go:276] 0 containers: []
	W0603 12:10:46.247731   73662 logs.go:278] No container was found matching "kindnet"
	I0603 12:10:46.247737   73662 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 12:10:46.247780   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 12:10:46.280647   73662 cri.go:89] found id: ""
	I0603 12:10:46.280666   73662 logs.go:276] 0 containers: []
	W0603 12:10:46.280673   73662 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 12:10:46.280681   73662 logs.go:123] Gathering logs for CRI-O ...
	I0603 12:10:46.280692   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 12:10:46.358965   73662 logs.go:123] Gathering logs for container status ...
	I0603 12:10:46.359007   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 12:10:46.402361   73662 logs.go:123] Gathering logs for kubelet ...
	I0603 12:10:46.402393   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 12:10:46.455346   73662 logs.go:123] Gathering logs for dmesg ...
	I0603 12:10:46.455378   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 12:10:46.468953   73662 logs.go:123] Gathering logs for describe nodes ...
	I0603 12:10:46.468979   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 12:10:46.543642   73662 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 12:10:49.044028   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:10:49.059160   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 12:10:49.059237   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 12:10:49.094538   73662 cri.go:89] found id: ""
	I0603 12:10:49.094562   73662 logs.go:276] 0 containers: []
	W0603 12:10:49.094572   73662 logs.go:278] No container was found matching "kube-apiserver"
	I0603 12:10:49.094579   73662 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 12:10:49.094639   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 12:10:49.152691   73662 cri.go:89] found id: ""
	I0603 12:10:49.152718   73662 logs.go:276] 0 containers: []
	W0603 12:10:49.152729   73662 logs.go:278] No container was found matching "etcd"
	I0603 12:10:49.152736   73662 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 12:10:49.152794   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 12:10:49.190598   73662 cri.go:89] found id: ""
	I0603 12:10:49.190624   73662 logs.go:276] 0 containers: []
	W0603 12:10:49.190632   73662 logs.go:278] No container was found matching "coredns"
	I0603 12:10:49.190637   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 12:10:49.190696   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 12:10:49.224713   73662 cri.go:89] found id: ""
	I0603 12:10:49.224735   73662 logs.go:276] 0 containers: []
	W0603 12:10:49.224746   73662 logs.go:278] No container was found matching "kube-scheduler"
	I0603 12:10:49.224752   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 12:10:49.224814   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 12:10:49.261124   73662 cri.go:89] found id: ""
	I0603 12:10:49.261151   73662 logs.go:276] 0 containers: []
	W0603 12:10:49.261159   73662 logs.go:278] No container was found matching "kube-proxy"
	I0603 12:10:49.261164   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 12:10:49.261218   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 12:10:49.297702   73662 cri.go:89] found id: ""
	I0603 12:10:49.297727   73662 logs.go:276] 0 containers: []
	W0603 12:10:49.297734   73662 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 12:10:49.297739   73662 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 12:10:49.297788   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 12:10:49.337168   73662 cri.go:89] found id: ""
	I0603 12:10:49.337194   73662 logs.go:276] 0 containers: []
	W0603 12:10:49.337202   73662 logs.go:278] No container was found matching "kindnet"
	I0603 12:10:49.337208   73662 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 12:10:49.337273   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 12:10:49.378570   73662 cri.go:89] found id: ""
	I0603 12:10:49.378594   73662 logs.go:276] 0 containers: []
	W0603 12:10:49.378602   73662 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 12:10:49.378611   73662 logs.go:123] Gathering logs for kubelet ...
	I0603 12:10:49.378623   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 12:10:49.431727   73662 logs.go:123] Gathering logs for dmesg ...
	I0603 12:10:49.431761   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 12:10:49.446359   73662 logs.go:123] Gathering logs for describe nodes ...
	I0603 12:10:49.446383   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 12:10:49.515520   73662 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 12:10:49.515539   73662 logs.go:123] Gathering logs for CRI-O ...
	I0603 12:10:49.515551   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 12:10:49.600658   73662 logs.go:123] Gathering logs for container status ...
	I0603 12:10:49.600697   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 12:10:46.580695   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:10:48.581909   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:10:51.111776   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:10:53.613132   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:10:50.171016   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:10:52.667473   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:10:52.146131   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:10:52.159370   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 12:10:52.159441   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 12:10:52.200541   73662 cri.go:89] found id: ""
	I0603 12:10:52.200571   73662 logs.go:276] 0 containers: []
	W0603 12:10:52.200578   73662 logs.go:278] No container was found matching "kube-apiserver"
	I0603 12:10:52.200583   73662 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 12:10:52.200643   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 12:10:52.243779   73662 cri.go:89] found id: ""
	I0603 12:10:52.243808   73662 logs.go:276] 0 containers: []
	W0603 12:10:52.243819   73662 logs.go:278] No container was found matching "etcd"
	I0603 12:10:52.243827   73662 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 12:10:52.243885   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 12:10:52.278098   73662 cri.go:89] found id: ""
	I0603 12:10:52.278133   73662 logs.go:276] 0 containers: []
	W0603 12:10:52.278142   73662 logs.go:278] No container was found matching "coredns"
	I0603 12:10:52.278148   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 12:10:52.278201   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 12:10:52.310844   73662 cri.go:89] found id: ""
	I0603 12:10:52.310873   73662 logs.go:276] 0 containers: []
	W0603 12:10:52.310884   73662 logs.go:278] No container was found matching "kube-scheduler"
	I0603 12:10:52.310892   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 12:10:52.310947   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 12:10:52.346131   73662 cri.go:89] found id: ""
	I0603 12:10:52.346160   73662 logs.go:276] 0 containers: []
	W0603 12:10:52.346170   73662 logs.go:278] No container was found matching "kube-proxy"
	I0603 12:10:52.346186   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 12:10:52.346252   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 12:10:52.383384   73662 cri.go:89] found id: ""
	I0603 12:10:52.383412   73662 logs.go:276] 0 containers: []
	W0603 12:10:52.383420   73662 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 12:10:52.383426   73662 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 12:10:52.383472   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 12:10:52.415110   73662 cri.go:89] found id: ""
	I0603 12:10:52.415141   73662 logs.go:276] 0 containers: []
	W0603 12:10:52.415152   73662 logs.go:278] No container was found matching "kindnet"
	I0603 12:10:52.415159   73662 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 12:10:52.415228   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 12:10:52.449473   73662 cri.go:89] found id: ""
	I0603 12:10:52.449503   73662 logs.go:276] 0 containers: []
	W0603 12:10:52.449511   73662 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 12:10:52.449520   73662 logs.go:123] Gathering logs for kubelet ...
	I0603 12:10:52.449535   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 12:10:52.501303   73662 logs.go:123] Gathering logs for dmesg ...
	I0603 12:10:52.501331   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 12:10:52.515125   73662 logs.go:123] Gathering logs for describe nodes ...
	I0603 12:10:52.515155   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 12:10:52.587250   73662 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 12:10:52.587273   73662 logs.go:123] Gathering logs for CRI-O ...
	I0603 12:10:52.587289   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 12:10:52.677387   73662 logs.go:123] Gathering logs for container status ...
	I0603 12:10:52.677417   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 12:10:51.081196   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:10:53.081389   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:10:55.082150   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:10:55.618759   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:10:58.112642   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:10:55.166477   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:10:57.666759   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:10:59.667117   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:10:55.216868   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:10:55.231081   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 12:10:55.231148   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 12:10:55.269023   73662 cri.go:89] found id: ""
	I0603 12:10:55.269060   73662 logs.go:276] 0 containers: []
	W0603 12:10:55.269071   73662 logs.go:278] No container was found matching "kube-apiserver"
	I0603 12:10:55.269078   73662 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 12:10:55.269140   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 12:10:55.304553   73662 cri.go:89] found id: ""
	I0603 12:10:55.304584   73662 logs.go:276] 0 containers: []
	W0603 12:10:55.304594   73662 logs.go:278] No container was found matching "etcd"
	I0603 12:10:55.304602   73662 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 12:10:55.304653   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 12:10:55.337397   73662 cri.go:89] found id: ""
	I0603 12:10:55.337417   73662 logs.go:276] 0 containers: []
	W0603 12:10:55.337426   73662 logs.go:278] No container was found matching "coredns"
	I0603 12:10:55.337431   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 12:10:55.337477   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 12:10:55.378338   73662 cri.go:89] found id: ""
	I0603 12:10:55.378360   73662 logs.go:276] 0 containers: []
	W0603 12:10:55.378369   73662 logs.go:278] No container was found matching "kube-scheduler"
	I0603 12:10:55.378376   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 12:10:55.378434   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 12:10:55.419463   73662 cri.go:89] found id: ""
	I0603 12:10:55.419488   73662 logs.go:276] 0 containers: []
	W0603 12:10:55.419506   73662 logs.go:278] No container was found matching "kube-proxy"
	I0603 12:10:55.419513   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 12:10:55.419570   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 12:10:55.459581   73662 cri.go:89] found id: ""
	I0603 12:10:55.459609   73662 logs.go:276] 0 containers: []
	W0603 12:10:55.459616   73662 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 12:10:55.459622   73662 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 12:10:55.459686   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 12:10:55.496314   73662 cri.go:89] found id: ""
	I0603 12:10:55.496345   73662 logs.go:276] 0 containers: []
	W0603 12:10:55.496355   73662 logs.go:278] No container was found matching "kindnet"
	I0603 12:10:55.496362   73662 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 12:10:55.496412   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 12:10:55.539728   73662 cri.go:89] found id: ""
	I0603 12:10:55.539756   73662 logs.go:276] 0 containers: []
	W0603 12:10:55.539768   73662 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 12:10:55.539779   73662 logs.go:123] Gathering logs for container status ...
	I0603 12:10:55.539794   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 12:10:55.603474   73662 logs.go:123] Gathering logs for kubelet ...
	I0603 12:10:55.603502   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 12:10:55.668368   73662 logs.go:123] Gathering logs for dmesg ...
	I0603 12:10:55.668405   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 12:10:55.683121   73662 logs.go:123] Gathering logs for describe nodes ...
	I0603 12:10:55.683151   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 12:10:55.751059   73662 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 12:10:55.751096   73662 logs.go:123] Gathering logs for CRI-O ...
	I0603 12:10:55.751113   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 12:10:58.325699   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:10:58.340070   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 12:10:58.340142   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 12:10:58.376205   73662 cri.go:89] found id: ""
	I0603 12:10:58.376240   73662 logs.go:276] 0 containers: []
	W0603 12:10:58.376251   73662 logs.go:278] No container was found matching "kube-apiserver"
	I0603 12:10:58.376258   73662 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 12:10:58.376325   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 12:10:58.409491   73662 cri.go:89] found id: ""
	I0603 12:10:58.409521   73662 logs.go:276] 0 containers: []
	W0603 12:10:58.409533   73662 logs.go:278] No container was found matching "etcd"
	I0603 12:10:58.409540   73662 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 12:10:58.409601   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 12:10:58.442738   73662 cri.go:89] found id: ""
	I0603 12:10:58.442768   73662 logs.go:276] 0 containers: []
	W0603 12:10:58.442779   73662 logs.go:278] No container was found matching "coredns"
	I0603 12:10:58.442787   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 12:10:58.442849   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 12:10:58.478390   73662 cri.go:89] found id: ""
	I0603 12:10:58.478417   73662 logs.go:276] 0 containers: []
	W0603 12:10:58.478425   73662 logs.go:278] No container was found matching "kube-scheduler"
	I0603 12:10:58.478430   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 12:10:58.478477   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 12:10:58.513652   73662 cri.go:89] found id: ""
	I0603 12:10:58.513683   73662 logs.go:276] 0 containers: []
	W0603 12:10:58.513694   73662 logs.go:278] No container was found matching "kube-proxy"
	I0603 12:10:58.513702   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 12:10:58.513762   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 12:10:58.546490   73662 cri.go:89] found id: ""
	I0603 12:10:58.546513   73662 logs.go:276] 0 containers: []
	W0603 12:10:58.546526   73662 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 12:10:58.546532   73662 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 12:10:58.546578   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 12:10:58.585772   73662 cri.go:89] found id: ""
	I0603 12:10:58.585796   73662 logs.go:276] 0 containers: []
	W0603 12:10:58.585803   73662 logs.go:278] No container was found matching "kindnet"
	I0603 12:10:58.585809   73662 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 12:10:58.585852   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 12:10:58.623108   73662 cri.go:89] found id: ""
	I0603 12:10:58.623126   73662 logs.go:276] 0 containers: []
	W0603 12:10:58.623133   73662 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 12:10:58.623140   73662 logs.go:123] Gathering logs for dmesg ...
	I0603 12:10:58.623150   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 12:10:58.636866   73662 logs.go:123] Gathering logs for describe nodes ...
	I0603 12:10:58.636892   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 12:10:58.709496   73662 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 12:10:58.709537   73662 logs.go:123] Gathering logs for CRI-O ...
	I0603 12:10:58.709549   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 12:10:58.785370   73662 logs.go:123] Gathering logs for container status ...
	I0603 12:10:58.785401   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 12:10:58.826456   73662 logs.go:123] Gathering logs for kubelet ...
	I0603 12:10:58.826482   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 12:10:57.581002   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:10:59.582082   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:11:00.114280   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:11:02.114479   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:11:01.668216   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:11:04.165821   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:11:01.379144   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:11:01.396357   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 12:11:01.396423   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 12:11:01.459762   73662 cri.go:89] found id: ""
	I0603 12:11:01.459798   73662 logs.go:276] 0 containers: []
	W0603 12:11:01.459809   73662 logs.go:278] No container was found matching "kube-apiserver"
	I0603 12:11:01.459817   73662 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 12:11:01.459877   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 12:11:01.517986   73662 cri.go:89] found id: ""
	I0603 12:11:01.518019   73662 logs.go:276] 0 containers: []
	W0603 12:11:01.518034   73662 logs.go:278] No container was found matching "etcd"
	I0603 12:11:01.518048   73662 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 12:11:01.518111   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 12:11:01.550571   73662 cri.go:89] found id: ""
	I0603 12:11:01.550599   73662 logs.go:276] 0 containers: []
	W0603 12:11:01.550611   73662 logs.go:278] No container was found matching "coredns"
	I0603 12:11:01.550618   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 12:11:01.550670   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 12:11:01.585185   73662 cri.go:89] found id: ""
	I0603 12:11:01.585210   73662 logs.go:276] 0 containers: []
	W0603 12:11:01.585221   73662 logs.go:278] No container was found matching "kube-scheduler"
	I0603 12:11:01.585230   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 12:11:01.585288   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 12:11:01.629706   73662 cri.go:89] found id: ""
	I0603 12:11:01.629734   73662 logs.go:276] 0 containers: []
	W0603 12:11:01.629744   73662 logs.go:278] No container was found matching "kube-proxy"
	I0603 12:11:01.629751   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 12:11:01.629815   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 12:11:01.667272   73662 cri.go:89] found id: ""
	I0603 12:11:01.667310   73662 logs.go:276] 0 containers: []
	W0603 12:11:01.667321   73662 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 12:11:01.667332   73662 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 12:11:01.667390   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 12:11:01.703379   73662 cri.go:89] found id: ""
	I0603 12:11:01.703409   73662 logs.go:276] 0 containers: []
	W0603 12:11:01.703419   73662 logs.go:278] No container was found matching "kindnet"
	I0603 12:11:01.703426   73662 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 12:11:01.703480   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 12:11:01.737944   73662 cri.go:89] found id: ""
	I0603 12:11:01.737972   73662 logs.go:276] 0 containers: []
	W0603 12:11:01.737979   73662 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 12:11:01.737987   73662 logs.go:123] Gathering logs for kubelet ...
	I0603 12:11:01.737997   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 12:11:01.786485   73662 logs.go:123] Gathering logs for dmesg ...
	I0603 12:11:01.786513   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 12:11:01.799760   73662 logs.go:123] Gathering logs for describe nodes ...
	I0603 12:11:01.799783   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 12:11:01.875617   73662 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 12:11:01.875639   73662 logs.go:123] Gathering logs for CRI-O ...
	I0603 12:11:01.875651   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 12:11:01.963485   73662 logs.go:123] Gathering logs for container status ...
	I0603 12:11:01.963529   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 12:11:04.507299   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:11:04.522138   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 12:11:04.522190   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 12:11:04.558117   73662 cri.go:89] found id: ""
	I0603 12:11:04.558145   73662 logs.go:276] 0 containers: []
	W0603 12:11:04.558155   73662 logs.go:278] No container was found matching "kube-apiserver"
	I0603 12:11:04.558162   73662 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 12:11:04.558222   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 12:11:04.595700   73662 cri.go:89] found id: ""
	I0603 12:11:04.595726   73662 logs.go:276] 0 containers: []
	W0603 12:11:04.595737   73662 logs.go:278] No container was found matching "etcd"
	I0603 12:11:04.595748   73662 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 12:11:04.595806   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 12:11:04.631793   73662 cri.go:89] found id: ""
	I0603 12:11:04.631823   73662 logs.go:276] 0 containers: []
	W0603 12:11:04.631832   73662 logs.go:278] No container was found matching "coredns"
	I0603 12:11:04.631839   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 12:11:04.631897   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 12:11:04.666362   73662 cri.go:89] found id: ""
	I0603 12:11:04.666392   73662 logs.go:276] 0 containers: []
	W0603 12:11:04.666401   73662 logs.go:278] No container was found matching "kube-scheduler"
	I0603 12:11:04.666408   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 12:11:04.666471   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 12:11:04.701446   73662 cri.go:89] found id: ""
	I0603 12:11:04.701476   73662 logs.go:276] 0 containers: []
	W0603 12:11:04.701487   73662 logs.go:278] No container was found matching "kube-proxy"
	I0603 12:11:04.701495   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 12:11:04.701555   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 12:11:04.736290   73662 cri.go:89] found id: ""
	I0603 12:11:04.736311   73662 logs.go:276] 0 containers: []
	W0603 12:11:04.736322   73662 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 12:11:04.736330   73662 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 12:11:04.736389   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 12:11:04.769705   73662 cri.go:89] found id: ""
	I0603 12:11:04.769725   73662 logs.go:276] 0 containers: []
	W0603 12:11:04.769732   73662 logs.go:278] No container was found matching "kindnet"
	I0603 12:11:04.769737   73662 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 12:11:04.769779   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 12:11:04.804875   73662 cri.go:89] found id: ""
	I0603 12:11:04.804898   73662 logs.go:276] 0 containers: []
	W0603 12:11:04.804909   73662 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 12:11:04.804927   73662 logs.go:123] Gathering logs for dmesg ...
	I0603 12:11:04.804941   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 12:11:04.818083   73662 logs.go:123] Gathering logs for describe nodes ...
	I0603 12:11:04.818112   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 12:11:04.890971   73662 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 12:11:04.891002   73662 logs.go:123] Gathering logs for CRI-O ...
	I0603 12:11:04.891017   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 12:11:04.970710   73662 logs.go:123] Gathering logs for container status ...
	I0603 12:11:04.970755   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 12:11:05.012247   73662 logs.go:123] Gathering logs for kubelet ...
	I0603 12:11:05.012282   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 12:11:01.582124   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:11:03.586504   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:11:04.612589   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:11:07.114578   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:11:06.166693   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:11:08.166916   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:11:07.567462   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:11:07.583533   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 12:11:07.583628   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 12:11:07.621078   73662 cri.go:89] found id: ""
	I0603 12:11:07.621102   73662 logs.go:276] 0 containers: []
	W0603 12:11:07.621110   73662 logs.go:278] No container was found matching "kube-apiserver"
	I0603 12:11:07.621119   73662 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 12:11:07.621178   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 12:11:07.656011   73662 cri.go:89] found id: ""
	I0603 12:11:07.656040   73662 logs.go:276] 0 containers: []
	W0603 12:11:07.656049   73662 logs.go:278] No container was found matching "etcd"
	I0603 12:11:07.656056   73662 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 12:11:07.656117   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 12:11:07.694711   73662 cri.go:89] found id: ""
	I0603 12:11:07.694741   73662 logs.go:276] 0 containers: []
	W0603 12:11:07.694751   73662 logs.go:278] No container was found matching "coredns"
	I0603 12:11:07.694759   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 12:11:07.694816   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 12:11:07.731139   73662 cri.go:89] found id: ""
	I0603 12:11:07.731168   73662 logs.go:276] 0 containers: []
	W0603 12:11:07.731178   73662 logs.go:278] No container was found matching "kube-scheduler"
	I0603 12:11:07.731185   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 12:11:07.731242   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 12:11:07.769734   73662 cri.go:89] found id: ""
	I0603 12:11:07.769763   73662 logs.go:276] 0 containers: []
	W0603 12:11:07.769772   73662 logs.go:278] No container was found matching "kube-proxy"
	I0603 12:11:07.769780   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 12:11:07.769838   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 12:11:07.804874   73662 cri.go:89] found id: ""
	I0603 12:11:07.804905   73662 logs.go:276] 0 containers: []
	W0603 12:11:07.804917   73662 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 12:11:07.804925   73662 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 12:11:07.804984   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 12:11:07.843901   73662 cri.go:89] found id: ""
	I0603 12:11:07.843931   73662 logs.go:276] 0 containers: []
	W0603 12:11:07.843941   73662 logs.go:278] No container was found matching "kindnet"
	I0603 12:11:07.843949   73662 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 12:11:07.844001   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 12:11:07.878763   73662 cri.go:89] found id: ""
	I0603 12:11:07.878792   73662 logs.go:276] 0 containers: []
	W0603 12:11:07.878803   73662 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 12:11:07.878814   73662 logs.go:123] Gathering logs for CRI-O ...
	I0603 12:11:07.878829   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 12:11:07.958064   73662 logs.go:123] Gathering logs for container status ...
	I0603 12:11:07.958095   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 12:11:08.000115   73662 logs.go:123] Gathering logs for kubelet ...
	I0603 12:11:08.000144   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 12:11:08.057652   73662 logs.go:123] Gathering logs for dmesg ...
	I0603 12:11:08.057685   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 12:11:08.071731   73662 logs.go:123] Gathering logs for describe nodes ...
	I0603 12:11:08.071759   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 12:11:08.148184   73662 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 12:11:06.080555   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:11:08.080661   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:11:10.081918   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:11:09.613756   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:11:12.112723   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:11:14.114236   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:11:10.167662   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:11:12.666872   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:11:10.649338   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:11:10.662870   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 12:11:10.662945   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 12:11:10.698461   73662 cri.go:89] found id: ""
	I0603 12:11:10.698492   73662 logs.go:276] 0 containers: []
	W0603 12:11:10.698500   73662 logs.go:278] No container was found matching "kube-apiserver"
	I0603 12:11:10.698507   73662 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 12:11:10.698560   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 12:11:10.733955   73662 cri.go:89] found id: ""
	I0603 12:11:10.733987   73662 logs.go:276] 0 containers: []
	W0603 12:11:10.733999   73662 logs.go:278] No container was found matching "etcd"
	I0603 12:11:10.734006   73662 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 12:11:10.734064   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 12:11:10.769578   73662 cri.go:89] found id: ""
	I0603 12:11:10.769605   73662 logs.go:276] 0 containers: []
	W0603 12:11:10.769615   73662 logs.go:278] No container was found matching "coredns"
	I0603 12:11:10.769622   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 12:11:10.769682   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 12:11:10.803353   73662 cri.go:89] found id: ""
	I0603 12:11:10.803382   73662 logs.go:276] 0 containers: []
	W0603 12:11:10.803393   73662 logs.go:278] No container was found matching "kube-scheduler"
	I0603 12:11:10.803401   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 12:11:10.803459   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 12:11:10.839791   73662 cri.go:89] found id: ""
	I0603 12:11:10.839819   73662 logs.go:276] 0 containers: []
	W0603 12:11:10.839828   73662 logs.go:278] No container was found matching "kube-proxy"
	I0603 12:11:10.839835   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 12:11:10.839894   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 12:11:10.878216   73662 cri.go:89] found id: ""
	I0603 12:11:10.878249   73662 logs.go:276] 0 containers: []
	W0603 12:11:10.878259   73662 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 12:11:10.878265   73662 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 12:11:10.878333   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 12:11:10.912606   73662 cri.go:89] found id: ""
	I0603 12:11:10.912637   73662 logs.go:276] 0 containers: []
	W0603 12:11:10.912645   73662 logs.go:278] No container was found matching "kindnet"
	I0603 12:11:10.912650   73662 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 12:11:10.912709   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 12:11:10.946669   73662 cri.go:89] found id: ""
	I0603 12:11:10.946699   73662 logs.go:276] 0 containers: []
	W0603 12:11:10.946708   73662 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 12:11:10.946718   73662 logs.go:123] Gathering logs for kubelet ...
	I0603 12:11:10.946733   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 12:11:10.996044   73662 logs.go:123] Gathering logs for dmesg ...
	I0603 12:11:10.996077   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 12:11:11.009522   73662 logs.go:123] Gathering logs for describe nodes ...
	I0603 12:11:11.009573   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 12:11:11.081623   73662 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 12:11:11.081642   73662 logs.go:123] Gathering logs for CRI-O ...
	I0603 12:11:11.081652   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 12:11:11.162795   73662 logs.go:123] Gathering logs for container status ...
	I0603 12:11:11.162826   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 12:11:13.704492   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:11:13.718870   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 12:11:13.718939   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 12:11:13.757818   73662 cri.go:89] found id: ""
	I0603 12:11:13.757842   73662 logs.go:276] 0 containers: []
	W0603 12:11:13.757850   73662 logs.go:278] No container was found matching "kube-apiserver"
	I0603 12:11:13.757859   73662 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 12:11:13.757904   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 12:11:13.791959   73662 cri.go:89] found id: ""
	I0603 12:11:13.791989   73662 logs.go:276] 0 containers: []
	W0603 12:11:13.792003   73662 logs.go:278] No container was found matching "etcd"
	I0603 12:11:13.792010   73662 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 12:11:13.792072   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 12:11:13.827443   73662 cri.go:89] found id: ""
	I0603 12:11:13.827471   73662 logs.go:276] 0 containers: []
	W0603 12:11:13.827478   73662 logs.go:278] No container was found matching "coredns"
	I0603 12:11:13.827484   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 12:11:13.827538   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 12:11:13.862237   73662 cri.go:89] found id: ""
	I0603 12:11:13.862267   73662 logs.go:276] 0 containers: []
	W0603 12:11:13.862277   73662 logs.go:278] No container was found matching "kube-scheduler"
	I0603 12:11:13.862284   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 12:11:13.862375   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 12:11:13.898873   73662 cri.go:89] found id: ""
	I0603 12:11:13.898906   73662 logs.go:276] 0 containers: []
	W0603 12:11:13.898917   73662 logs.go:278] No container was found matching "kube-proxy"
	I0603 12:11:13.898924   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 12:11:13.898981   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 12:11:13.932870   73662 cri.go:89] found id: ""
	I0603 12:11:13.932899   73662 logs.go:276] 0 containers: []
	W0603 12:11:13.932908   73662 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 12:11:13.932913   73662 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 12:11:13.932960   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 12:11:13.968575   73662 cri.go:89] found id: ""
	I0603 12:11:13.968597   73662 logs.go:276] 0 containers: []
	W0603 12:11:13.968605   73662 logs.go:278] No container was found matching "kindnet"
	I0603 12:11:13.968610   73662 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 12:11:13.968663   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 12:11:14.007252   73662 cri.go:89] found id: ""
	I0603 12:11:14.007281   73662 logs.go:276] 0 containers: []
	W0603 12:11:14.007291   73662 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 12:11:14.007302   73662 logs.go:123] Gathering logs for describe nodes ...
	I0603 12:11:14.007317   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 12:11:14.080572   73662 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 12:11:14.080595   73662 logs.go:123] Gathering logs for CRI-O ...
	I0603 12:11:14.080607   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 12:11:14.171851   73662 logs.go:123] Gathering logs for container status ...
	I0603 12:11:14.171886   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 12:11:14.212697   73662 logs.go:123] Gathering logs for kubelet ...
	I0603 12:11:14.212726   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 12:11:14.264925   73662 logs.go:123] Gathering logs for dmesg ...
	I0603 12:11:14.264958   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 12:11:12.580430   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:11:14.581407   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:11:16.615592   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:11:19.111956   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:11:15.166724   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:11:17.667851   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:11:16.780783   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:11:16.795029   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 12:11:16.795127   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 12:11:16.833178   73662 cri.go:89] found id: ""
	I0603 12:11:16.833208   73662 logs.go:276] 0 containers: []
	W0603 12:11:16.833218   73662 logs.go:278] No container was found matching "kube-apiserver"
	I0603 12:11:16.833226   73662 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 12:11:16.833287   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 12:11:16.869318   73662 cri.go:89] found id: ""
	I0603 12:11:16.869349   73662 logs.go:276] 0 containers: []
	W0603 12:11:16.869359   73662 logs.go:278] No container was found matching "etcd"
	I0603 12:11:16.869366   73662 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 12:11:16.869429   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 12:11:16.902810   73662 cri.go:89] found id: ""
	I0603 12:11:16.902836   73662 logs.go:276] 0 containers: []
	W0603 12:11:16.902843   73662 logs.go:278] No container was found matching "coredns"
	I0603 12:11:16.902849   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 12:11:16.902893   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 12:11:16.936404   73662 cri.go:89] found id: ""
	I0603 12:11:16.936432   73662 logs.go:276] 0 containers: []
	W0603 12:11:16.936442   73662 logs.go:278] No container was found matching "kube-scheduler"
	I0603 12:11:16.936449   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 12:11:16.936505   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 12:11:16.971056   73662 cri.go:89] found id: ""
	I0603 12:11:16.971083   73662 logs.go:276] 0 containers: []
	W0603 12:11:16.971092   73662 logs.go:278] No container was found matching "kube-proxy"
	I0603 12:11:16.971097   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 12:11:16.971147   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 12:11:17.005389   73662 cri.go:89] found id: ""
	I0603 12:11:17.005416   73662 logs.go:276] 0 containers: []
	W0603 12:11:17.005427   73662 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 12:11:17.005435   73662 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 12:11:17.005491   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 12:11:17.047093   73662 cri.go:89] found id: ""
	I0603 12:11:17.047118   73662 logs.go:276] 0 containers: []
	W0603 12:11:17.047126   73662 logs.go:278] No container was found matching "kindnet"
	I0603 12:11:17.047131   73662 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 12:11:17.047187   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 12:11:17.093020   73662 cri.go:89] found id: ""
	I0603 12:11:17.093049   73662 logs.go:276] 0 containers: []
	W0603 12:11:17.093057   73662 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 12:11:17.093068   73662 logs.go:123] Gathering logs for CRI-O ...
	I0603 12:11:17.093081   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 12:11:17.177970   73662 logs.go:123] Gathering logs for container status ...
	I0603 12:11:17.178001   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 12:11:17.219530   73662 logs.go:123] Gathering logs for kubelet ...
	I0603 12:11:17.219563   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 12:11:17.272776   73662 logs.go:123] Gathering logs for dmesg ...
	I0603 12:11:17.272808   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 12:11:17.287573   73662 logs.go:123] Gathering logs for describe nodes ...
	I0603 12:11:17.287610   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 12:11:17.361020   73662 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 12:11:19.861599   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:11:19.874988   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 12:11:19.875075   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 12:11:19.910641   73662 cri.go:89] found id: ""
	I0603 12:11:19.910664   73662 logs.go:276] 0 containers: []
	W0603 12:11:19.910672   73662 logs.go:278] No container was found matching "kube-apiserver"
	I0603 12:11:19.910678   73662 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 12:11:19.910738   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 12:11:19.947432   73662 cri.go:89] found id: ""
	I0603 12:11:19.947457   73662 logs.go:276] 0 containers: []
	W0603 12:11:19.947465   73662 logs.go:278] No container was found matching "etcd"
	I0603 12:11:19.947475   73662 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 12:11:19.947528   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 12:11:19.986254   73662 cri.go:89] found id: ""
	I0603 12:11:19.986284   73662 logs.go:276] 0 containers: []
	W0603 12:11:19.986296   73662 logs.go:278] No container was found matching "coredns"
	I0603 12:11:19.986303   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 12:11:19.986370   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 12:11:20.022447   73662 cri.go:89] found id: ""
	I0603 12:11:20.022477   73662 logs.go:276] 0 containers: []
	W0603 12:11:20.022488   73662 logs.go:278] No container was found matching "kube-scheduler"
	I0603 12:11:20.022496   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 12:11:20.022555   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 12:11:20.056731   73662 cri.go:89] found id: ""
	I0603 12:11:20.056755   73662 logs.go:276] 0 containers: []
	W0603 12:11:20.056763   73662 logs.go:278] No container was found matching "kube-proxy"
	I0603 12:11:20.056769   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 12:11:20.056819   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 12:11:17.081290   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:11:19.581301   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:11:21.113769   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:11:23.106545   73294 pod_ready.go:81] duration metric: took 4m0.000411778s for pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace to be "Ready" ...
	E0603 12:11:23.106575   73294 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace to be "Ready" (will not retry!)
	I0603 12:11:23.106597   73294 pod_ready.go:38] duration metric: took 4m5.898372288s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0603 12:11:23.106627   73294 kubeadm.go:591] duration metric: took 4m13.660386139s to restartPrimaryControlPlane
	W0603 12:11:23.106692   73294 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0603 12:11:23.106750   73294 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0603 12:11:20.168291   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:11:22.667983   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:11:24.668130   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:11:20.095511   73662 cri.go:89] found id: ""
	I0603 12:11:20.095537   73662 logs.go:276] 0 containers: []
	W0603 12:11:20.095547   73662 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 12:11:20.095552   73662 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 12:11:20.095595   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 12:11:20.130562   73662 cri.go:89] found id: ""
	I0603 12:11:20.130581   73662 logs.go:276] 0 containers: []
	W0603 12:11:20.130589   73662 logs.go:278] No container was found matching "kindnet"
	I0603 12:11:20.130594   73662 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 12:11:20.130648   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 12:11:20.165231   73662 cri.go:89] found id: ""
	I0603 12:11:20.165257   73662 logs.go:276] 0 containers: []
	W0603 12:11:20.165267   73662 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 12:11:20.165276   73662 logs.go:123] Gathering logs for kubelet ...
	I0603 12:11:20.165290   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 12:11:20.221790   73662 logs.go:123] Gathering logs for dmesg ...
	I0603 12:11:20.221826   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 12:11:20.237415   73662 logs.go:123] Gathering logs for describe nodes ...
	I0603 12:11:20.237440   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 12:11:20.310615   73662 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 12:11:20.310641   73662 logs.go:123] Gathering logs for CRI-O ...
	I0603 12:11:20.310657   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 12:11:20.385667   73662 logs.go:123] Gathering logs for container status ...
	I0603 12:11:20.385701   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 12:11:22.925911   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:11:22.938958   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 12:11:22.939047   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 12:11:22.981898   73662 cri.go:89] found id: ""
	I0603 12:11:22.981928   73662 logs.go:276] 0 containers: []
	W0603 12:11:22.981939   73662 logs.go:278] No container was found matching "kube-apiserver"
	I0603 12:11:22.981954   73662 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 12:11:22.982026   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 12:11:23.025590   73662 cri.go:89] found id: ""
	I0603 12:11:23.025624   73662 logs.go:276] 0 containers: []
	W0603 12:11:23.025632   73662 logs.go:278] No container was found matching "etcd"
	I0603 12:11:23.025638   73662 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 12:11:23.025691   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 12:11:23.072938   73662 cri.go:89] found id: ""
	I0603 12:11:23.072968   73662 logs.go:276] 0 containers: []
	W0603 12:11:23.072980   73662 logs.go:278] No container was found matching "coredns"
	I0603 12:11:23.072988   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 12:11:23.073057   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 12:11:23.114546   73662 cri.go:89] found id: ""
	I0603 12:11:23.114573   73662 logs.go:276] 0 containers: []
	W0603 12:11:23.114582   73662 logs.go:278] No container was found matching "kube-scheduler"
	I0603 12:11:23.114589   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 12:11:23.114654   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 12:11:23.152203   73662 cri.go:89] found id: ""
	I0603 12:11:23.152229   73662 logs.go:276] 0 containers: []
	W0603 12:11:23.152236   73662 logs.go:278] No container was found matching "kube-proxy"
	I0603 12:11:23.152242   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 12:11:23.152289   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 12:11:23.204179   73662 cri.go:89] found id: ""
	I0603 12:11:23.204228   73662 logs.go:276] 0 containers: []
	W0603 12:11:23.204240   73662 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 12:11:23.204247   73662 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 12:11:23.204308   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 12:11:23.244217   73662 cri.go:89] found id: ""
	I0603 12:11:23.244246   73662 logs.go:276] 0 containers: []
	W0603 12:11:23.244256   73662 logs.go:278] No container was found matching "kindnet"
	I0603 12:11:23.244264   73662 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 12:11:23.244326   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 12:11:23.286094   73662 cri.go:89] found id: ""
	I0603 12:11:23.286173   73662 logs.go:276] 0 containers: []
	W0603 12:11:23.286190   73662 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 12:11:23.286201   73662 logs.go:123] Gathering logs for kubelet ...
	I0603 12:11:23.286215   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 12:11:23.357802   73662 logs.go:123] Gathering logs for dmesg ...
	I0603 12:11:23.357850   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 12:11:23.376808   73662 logs.go:123] Gathering logs for describe nodes ...
	I0603 12:11:23.376839   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 12:11:23.470658   73662 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 12:11:23.470691   73662 logs.go:123] Gathering logs for CRI-O ...
	I0603 12:11:23.470705   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 12:11:23.584192   73662 logs.go:123] Gathering logs for container status ...
	I0603 12:11:23.584241   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 12:11:22.075519   73179 pod_ready.go:81] duration metric: took 4m0.000796038s for pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace to be "Ready" ...
	E0603 12:11:22.075561   73179 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace to be "Ready" (will not retry!)
	I0603 12:11:22.075598   73179 pod_ready.go:38] duration metric: took 4m12.795532428s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0603 12:11:22.075626   73179 kubeadm.go:591] duration metric: took 4m22.69078868s to restartPrimaryControlPlane
	W0603 12:11:22.075677   73179 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0603 12:11:22.075720   73179 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0603 12:11:27.170198   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:11:29.667670   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:11:26.132511   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:11:26.150549   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 12:11:26.150619   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 12:11:26.196791   73662 cri.go:89] found id: ""
	I0603 12:11:26.196817   73662 logs.go:276] 0 containers: []
	W0603 12:11:26.196827   73662 logs.go:278] No container was found matching "kube-apiserver"
	I0603 12:11:26.196834   73662 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 12:11:26.196912   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 12:11:26.233584   73662 cri.go:89] found id: ""
	I0603 12:11:26.233614   73662 logs.go:276] 0 containers: []
	W0603 12:11:26.233624   73662 logs.go:278] No container was found matching "etcd"
	I0603 12:11:26.233631   73662 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 12:11:26.233692   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 12:11:26.272648   73662 cri.go:89] found id: ""
	I0603 12:11:26.272677   73662 logs.go:276] 0 containers: []
	W0603 12:11:26.272688   73662 logs.go:278] No container was found matching "coredns"
	I0603 12:11:26.272696   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 12:11:26.272758   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 12:11:26.313775   73662 cri.go:89] found id: ""
	I0603 12:11:26.313806   73662 logs.go:276] 0 containers: []
	W0603 12:11:26.313817   73662 logs.go:278] No container was found matching "kube-scheduler"
	I0603 12:11:26.313824   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 12:11:26.313883   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 12:11:26.355591   73662 cri.go:89] found id: ""
	I0603 12:11:26.355626   73662 logs.go:276] 0 containers: []
	W0603 12:11:26.355638   73662 logs.go:278] No container was found matching "kube-proxy"
	I0603 12:11:26.355646   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 12:11:26.355711   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 12:11:26.406265   73662 cri.go:89] found id: ""
	I0603 12:11:26.406299   73662 logs.go:276] 0 containers: []
	W0603 12:11:26.406306   73662 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 12:11:26.406318   73662 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 12:11:26.406378   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 12:11:26.443279   73662 cri.go:89] found id: ""
	I0603 12:11:26.443321   73662 logs.go:276] 0 containers: []
	W0603 12:11:26.443333   73662 logs.go:278] No container was found matching "kindnet"
	I0603 12:11:26.443340   73662 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 12:11:26.443403   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 12:11:26.479300   73662 cri.go:89] found id: ""
	I0603 12:11:26.479334   73662 logs.go:276] 0 containers: []
	W0603 12:11:26.479346   73662 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 12:11:26.479358   73662 logs.go:123] Gathering logs for kubelet ...
	I0603 12:11:26.479371   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 12:11:26.531360   73662 logs.go:123] Gathering logs for dmesg ...
	I0603 12:11:26.531394   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 12:11:26.547939   73662 logs.go:123] Gathering logs for describe nodes ...
	I0603 12:11:26.547973   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 12:11:26.625987   73662 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 12:11:26.626016   73662 logs.go:123] Gathering logs for CRI-O ...
	I0603 12:11:26.626032   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 12:11:26.714014   73662 logs.go:123] Gathering logs for container status ...
	I0603 12:11:26.714054   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 12:11:29.267203   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:11:29.281448   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 12:11:29.281522   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 12:11:29.315484   73662 cri.go:89] found id: ""
	I0603 12:11:29.315512   73662 logs.go:276] 0 containers: []
	W0603 12:11:29.315519   73662 logs.go:278] No container was found matching "kube-apiserver"
	I0603 12:11:29.315530   73662 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 12:11:29.315586   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 12:11:29.357054   73662 cri.go:89] found id: ""
	I0603 12:11:29.357084   73662 logs.go:276] 0 containers: []
	W0603 12:11:29.357095   73662 logs.go:278] No container was found matching "etcd"
	I0603 12:11:29.357103   73662 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 12:11:29.357163   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 12:11:29.402434   73662 cri.go:89] found id: ""
	I0603 12:11:29.402461   73662 logs.go:276] 0 containers: []
	W0603 12:11:29.402471   73662 logs.go:278] No container was found matching "coredns"
	I0603 12:11:29.402478   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 12:11:29.402520   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 12:11:29.437822   73662 cri.go:89] found id: ""
	I0603 12:11:29.437854   73662 logs.go:276] 0 containers: []
	W0603 12:11:29.437865   73662 logs.go:278] No container was found matching "kube-scheduler"
	I0603 12:11:29.437871   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 12:11:29.437917   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 12:11:29.474637   73662 cri.go:89] found id: ""
	I0603 12:11:29.474658   73662 logs.go:276] 0 containers: []
	W0603 12:11:29.474665   73662 logs.go:278] No container was found matching "kube-proxy"
	I0603 12:11:29.474671   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 12:11:29.474725   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 12:11:29.508547   73662 cri.go:89] found id: ""
	I0603 12:11:29.508573   73662 logs.go:276] 0 containers: []
	W0603 12:11:29.508580   73662 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 12:11:29.508586   73662 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 12:11:29.508630   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 12:11:29.544524   73662 cri.go:89] found id: ""
	I0603 12:11:29.544553   73662 logs.go:276] 0 containers: []
	W0603 12:11:29.544561   73662 logs.go:278] No container was found matching "kindnet"
	I0603 12:11:29.544567   73662 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 12:11:29.544621   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 12:11:29.582549   73662 cri.go:89] found id: ""
	I0603 12:11:29.582582   73662 logs.go:276] 0 containers: []
	W0603 12:11:29.582593   73662 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 12:11:29.582604   73662 logs.go:123] Gathering logs for kubelet ...
	I0603 12:11:29.582618   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 12:11:29.641931   73662 logs.go:123] Gathering logs for dmesg ...
	I0603 12:11:29.641977   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 12:11:29.664918   73662 logs.go:123] Gathering logs for describe nodes ...
	I0603 12:11:29.664948   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 12:11:29.740591   73662 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 12:11:29.740615   73662 logs.go:123] Gathering logs for CRI-O ...
	I0603 12:11:29.740629   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 12:11:29.814456   73662 logs.go:123] Gathering logs for container status ...
	I0603 12:11:29.814489   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 12:11:32.166042   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:11:34.166283   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:11:32.359122   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:11:32.373552   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 12:11:32.373623   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 12:11:32.408431   73662 cri.go:89] found id: ""
	I0603 12:11:32.408461   73662 logs.go:276] 0 containers: []
	W0603 12:11:32.408471   73662 logs.go:278] No container was found matching "kube-apiserver"
	I0603 12:11:32.408479   73662 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 12:11:32.408533   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 12:11:32.444242   73662 cri.go:89] found id: ""
	I0603 12:11:32.444266   73662 logs.go:276] 0 containers: []
	W0603 12:11:32.444273   73662 logs.go:278] No container was found matching "etcd"
	I0603 12:11:32.444279   73662 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 12:11:32.444323   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 12:11:32.477205   73662 cri.go:89] found id: ""
	I0603 12:11:32.477230   73662 logs.go:276] 0 containers: []
	W0603 12:11:32.477237   73662 logs.go:278] No container was found matching "coredns"
	I0603 12:11:32.477243   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 12:11:32.477298   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 12:11:32.512434   73662 cri.go:89] found id: ""
	I0603 12:11:32.512482   73662 logs.go:276] 0 containers: []
	W0603 12:11:32.512494   73662 logs.go:278] No container was found matching "kube-scheduler"
	I0603 12:11:32.512501   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 12:11:32.512559   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 12:11:32.545619   73662 cri.go:89] found id: ""
	I0603 12:11:32.545645   73662 logs.go:276] 0 containers: []
	W0603 12:11:32.545655   73662 logs.go:278] No container was found matching "kube-proxy"
	I0603 12:11:32.545662   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 12:11:32.545715   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 12:11:32.579093   73662 cri.go:89] found id: ""
	I0603 12:11:32.579121   73662 logs.go:276] 0 containers: []
	W0603 12:11:32.579131   73662 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 12:11:32.579138   73662 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 12:11:32.579196   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 12:11:32.616826   73662 cri.go:89] found id: ""
	I0603 12:11:32.616851   73662 logs.go:276] 0 containers: []
	W0603 12:11:32.616858   73662 logs.go:278] No container was found matching "kindnet"
	I0603 12:11:32.616864   73662 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 12:11:32.616917   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 12:11:32.660083   73662 cri.go:89] found id: ""
	I0603 12:11:32.660113   73662 logs.go:276] 0 containers: []
	W0603 12:11:32.660124   73662 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 12:11:32.660132   73662 logs.go:123] Gathering logs for container status ...
	I0603 12:11:32.660143   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 12:11:32.697974   73662 logs.go:123] Gathering logs for kubelet ...
	I0603 12:11:32.698002   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 12:11:32.748797   73662 logs.go:123] Gathering logs for dmesg ...
	I0603 12:11:32.748835   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 12:11:32.762517   73662 logs.go:123] Gathering logs for describe nodes ...
	I0603 12:11:32.762580   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 12:11:32.838358   73662 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 12:11:32.838383   73662 logs.go:123] Gathering logs for CRI-O ...
	I0603 12:11:32.838397   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 12:11:35.419197   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:11:35.432481   73662 kubeadm.go:591] duration metric: took 4m4.317900598s to restartPrimaryControlPlane
	W0603 12:11:35.432560   73662 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0603 12:11:35.432591   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0603 12:11:35.895615   73662 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0603 12:11:35.910673   73662 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0603 12:11:35.921333   73662 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0603 12:11:35.931736   73662 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0603 12:11:35.931750   73662 kubeadm.go:156] found existing configuration files:
	
	I0603 12:11:35.931783   73662 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0603 12:11:35.940883   73662 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0603 12:11:35.940924   73662 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0603 12:11:35.950780   73662 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0603 12:11:35.959947   73662 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0603 12:11:35.959999   73662 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0603 12:11:35.969824   73662 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0603 12:11:35.979347   73662 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0603 12:11:35.979393   73662 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0603 12:11:35.988704   73662 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0603 12:11:35.997726   73662 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0603 12:11:35.997785   73662 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0603 12:11:36.007165   73662 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0603 12:11:36.080667   73662 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0603 12:11:36.080794   73662 kubeadm.go:309] [preflight] Running pre-flight checks
	I0603 12:11:36.220642   73662 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0603 12:11:36.220814   73662 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0603 12:11:36.220967   73662 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0603 12:11:36.421569   73662 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0603 12:11:36.423141   73662 out.go:204]   - Generating certificates and keys ...
	I0603 12:11:36.423237   73662 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0603 12:11:36.423328   73662 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0603 12:11:36.423428   73662 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0603 12:11:36.423535   73662 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0603 12:11:36.423630   73662 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0603 12:11:36.423713   73662 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0603 12:11:36.423795   73662 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0603 12:11:36.423880   73662 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0603 12:11:36.423985   73662 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0603 12:11:36.424079   73662 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0603 12:11:36.424140   73662 kubeadm.go:309] [certs] Using the existing "sa" key
	I0603 12:11:36.424218   73662 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0603 12:11:36.576702   73662 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0603 12:11:36.704239   73662 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0603 12:11:36.981759   73662 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0603 12:11:37.031992   73662 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0603 12:11:37.052994   73662 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0603 12:11:37.054403   73662 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0603 12:11:37.054471   73662 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0603 12:11:37.196201   73662 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0603 12:11:36.168314   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:11:38.667358   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:11:37.198112   73662 out.go:204]   - Booting up control plane ...
	I0603 12:11:37.198252   73662 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0603 12:11:37.202872   73662 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0603 12:11:37.203965   73662 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0603 12:11:37.204734   73662 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0603 12:11:37.207204   73662 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0603 12:11:41.166509   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:11:43.168695   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:11:45.667381   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:11:48.167362   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:11:50.167570   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:11:52.668348   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:11:54.671004   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:11:54.178477   73179 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (32.102731378s)
	I0603 12:11:54.178554   73179 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0603 12:11:54.194599   73179 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0603 12:11:54.204770   73179 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0603 12:11:54.215290   73179 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0603 12:11:54.215315   73179 kubeadm.go:156] found existing configuration files:
	
	I0603 12:11:54.215355   73179 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0603 12:11:54.224420   73179 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0603 12:11:54.224478   73179 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0603 12:11:54.233706   73179 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0603 12:11:54.242358   73179 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0603 12:11:54.242399   73179 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0603 12:11:54.251531   73179 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0603 12:11:54.260911   73179 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0603 12:11:54.260950   73179 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0603 12:11:54.270219   73179 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0603 12:11:54.279141   73179 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0603 12:11:54.279194   73179 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0603 12:11:54.288343   73179 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0603 12:11:54.477591   73179 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0603 12:11:55.081260   73294 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (31.974475191s)
	I0603 12:11:55.081350   73294 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0603 12:11:55.098545   73294 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0603 12:11:55.109266   73294 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0603 12:11:55.118891   73294 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0603 12:11:55.118917   73294 kubeadm.go:156] found existing configuration files:
	
	I0603 12:11:55.118964   73294 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0603 12:11:55.128412   73294 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0603 12:11:55.128466   73294 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0603 12:11:55.137942   73294 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0603 12:11:55.146937   73294 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0603 12:11:55.146986   73294 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0603 12:11:55.156388   73294 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0603 12:11:55.167156   73294 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0603 12:11:55.167206   73294 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0603 12:11:55.176591   73294 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0603 12:11:55.185483   73294 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0603 12:11:55.185530   73294 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0603 12:11:55.195271   73294 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0603 12:11:55.251253   73294 kubeadm.go:309] [init] Using Kubernetes version: v1.30.1
	I0603 12:11:55.251344   73294 kubeadm.go:309] [preflight] Running pre-flight checks
	I0603 12:11:55.396358   73294 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0603 12:11:55.396519   73294 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0603 12:11:55.396681   73294 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0603 12:11:55.603493   73294 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0603 12:11:55.605797   73294 out.go:204]   - Generating certificates and keys ...
	I0603 12:11:55.605901   73294 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0603 12:11:55.605995   73294 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0603 12:11:55.606143   73294 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0603 12:11:55.606253   73294 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0603 12:11:55.606357   73294 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0603 12:11:55.606440   73294 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0603 12:11:55.606539   73294 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0603 12:11:55.606623   73294 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0603 12:11:55.606738   73294 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0603 12:11:55.606844   73294 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0603 12:11:55.606907   73294 kubeadm.go:309] [certs] Using the existing "sa" key
	I0603 12:11:55.606990   73294 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0603 12:11:55.749342   73294 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0603 12:11:55.918787   73294 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0603 12:11:56.058383   73294 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0603 12:11:56.306167   73294 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0603 12:11:56.365029   73294 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0603 12:11:56.365722   73294 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0603 12:11:56.368197   73294 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0603 12:11:56.369833   73294 out.go:204]   - Booting up control plane ...
	I0603 12:11:56.369950   73294 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0603 12:11:56.370081   73294 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0603 12:11:56.370175   73294 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0603 12:11:56.388879   73294 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0603 12:11:56.391420   73294 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0603 12:11:56.391490   73294 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0603 12:11:56.528206   73294 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0603 12:11:56.528341   73294 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0603 12:11:57.029861   73294 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 501.458956ms
	I0603 12:11:57.029944   73294 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0603 12:11:57.165921   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:11:59.168287   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:12:02.031156   73294 kubeadm.go:309] [api-check] The API server is healthy after 5.001477077s
	I0603 12:12:02.053326   73294 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0603 12:12:02.086541   73294 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0603 12:12:02.127446   73294 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0603 12:12:02.127715   73294 kubeadm.go:309] [mark-control-plane] Marking the node default-k8s-diff-port-196710 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0603 12:12:02.138683   73294 kubeadm.go:309] [bootstrap-token] Using token: 20dsgk.zbmo4be5tg5i1a9b
	I0603 12:12:02.140047   73294 out.go:204]   - Configuring RBAC rules ...
	I0603 12:12:02.140170   73294 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0603 12:12:02.149933   73294 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0603 12:12:02.160136   73294 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0603 12:12:02.168638   73294 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0603 12:12:02.173242   73294 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0603 12:12:02.177001   73294 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0603 12:12:02.438936   73294 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0603 12:12:02.892616   73294 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0603 12:12:03.438400   73294 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0603 12:12:03.440008   73294 kubeadm.go:309] 
	I0603 12:12:03.440093   73294 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0603 12:12:03.440101   73294 kubeadm.go:309] 
	I0603 12:12:03.440183   73294 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0603 12:12:03.440191   73294 kubeadm.go:309] 
	I0603 12:12:03.440217   73294 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0603 12:12:03.440308   73294 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0603 12:12:03.440416   73294 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0603 12:12:03.440438   73294 kubeadm.go:309] 
	I0603 12:12:03.440537   73294 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0603 12:12:03.440559   73294 kubeadm.go:309] 
	I0603 12:12:03.440649   73294 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0603 12:12:03.440659   73294 kubeadm.go:309] 
	I0603 12:12:03.440739   73294 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0603 12:12:03.440813   73294 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0603 12:12:03.440884   73294 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0603 12:12:03.440891   73294 kubeadm.go:309] 
	I0603 12:12:03.440959   73294 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0603 12:12:03.441059   73294 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0603 12:12:03.441077   73294 kubeadm.go:309] 
	I0603 12:12:03.441195   73294 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8444 --token 20dsgk.zbmo4be5tg5i1a9b \
	I0603 12:12:03.441383   73294 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:efe6cbdd58a590fb6c3b56a05a1648145bfb13b8a8cd2383ea34b710fa987e0b \
	I0603 12:12:03.441413   73294 kubeadm.go:309] 	--control-plane 
	I0603 12:12:03.441422   73294 kubeadm.go:309] 
	I0603 12:12:03.441561   73294 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0603 12:12:03.441580   73294 kubeadm.go:309] 
	I0603 12:12:03.441699   73294 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8444 --token 20dsgk.zbmo4be5tg5i1a9b \
	I0603 12:12:03.441848   73294 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:efe6cbdd58a590fb6c3b56a05a1648145bfb13b8a8cd2383ea34b710fa987e0b 
	I0603 12:12:03.442240   73294 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0603 12:12:03.442374   73294 cni.go:84] Creating CNI manager for ""
	I0603 12:12:03.442392   73294 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0603 12:12:03.444302   73294 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
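	Note: the kubeadm output above ends with the join commands for this cluster, after which minikube picks the bridge CNI because the kvm2 driver is paired with the crio runtime. The conflist it then copies to /etc/cni/net.d/1-k8s.conflist a few lines further down is not reproduced in the log; the sketch below is only an illustration, under stated assumptions, of what a minimal bridge conflist and its installation could look like (the subnet, file contents, and permissions are placeholders, not the actual 496-byte file minikube writes).

	    // Illustrative sketch only (assumptions throughout, not minikube's actual file or code):
	    // write a minimal bridge CNI conflist of the kind referenced in the log above.
	    package main

	    import (
	        "log"
	        "os"
	    )

	    const bridgeConflist = `{
	      "cniVersion": "0.3.1",
	      "name": "bridge",
	      "plugins": [
	        {
	          "type": "bridge",
	          "bridge": "bridge",
	          "isGateway": true,
	          "ipMasq": true,
	          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
	        },
	        { "type": "portmap", "capabilities": { "portMappings": true } }
	      ]
	    }`

	    func main() {
	        // Placeholder path and mode; the real values are whatever the tool under test uses.
	        if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(bridgeConflist), 0644); err != nil {
	            log.Fatal(err)
	        }
	    }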
	I0603 12:12:03.644388   73179 kubeadm.go:309] [init] Using Kubernetes version: v1.30.1
	I0603 12:12:03.644489   73179 kubeadm.go:309] [preflight] Running pre-flight checks
	I0603 12:12:03.644596   73179 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0603 12:12:03.644742   73179 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0603 12:12:03.644874   73179 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0603 12:12:03.644953   73179 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0603 12:12:03.646392   73179 out.go:204]   - Generating certificates and keys ...
	I0603 12:12:03.646520   73179 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0603 12:12:03.646605   73179 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0603 12:12:03.646715   73179 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0603 12:12:03.646801   73179 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0603 12:12:03.646896   73179 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0603 12:12:03.646980   73179 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0603 12:12:03.647082   73179 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0603 12:12:03.647168   73179 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0603 12:12:03.647266   73179 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0603 12:12:03.647383   73179 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0603 12:12:03.647448   73179 kubeadm.go:309] [certs] Using the existing "sa" key
	I0603 12:12:03.647527   73179 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0603 12:12:03.647596   73179 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0603 12:12:03.647678   73179 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0603 12:12:03.647753   73179 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0603 12:12:03.647850   73179 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0603 12:12:03.647939   73179 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0603 12:12:03.648064   73179 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0603 12:12:03.648163   73179 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0603 12:12:03.649552   73179 out.go:204]   - Booting up control plane ...
	I0603 12:12:03.649660   73179 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0603 12:12:03.649772   73179 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0603 12:12:03.649884   73179 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0603 12:12:03.650017   73179 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0603 12:12:03.650139   73179 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0603 12:12:03.650211   73179 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0603 12:12:03.650408   73179 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0603 12:12:03.650515   73179 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0603 12:12:03.650591   73179 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 1.002065022s
	I0603 12:12:03.650698   73179 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0603 12:12:03.650789   73179 kubeadm.go:309] [api-check] The API server is healthy after 5.002076943s
	I0603 12:12:03.650915   73179 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0603 12:12:03.651093   73179 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0603 12:12:03.651168   73179 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0603 12:12:03.651414   73179 kubeadm.go:309] [mark-control-plane] Marking the node no-preload-602118 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0603 12:12:03.651488   73179 kubeadm.go:309] [bootstrap-token] Using token: shx5vv.etzadsstlalifeo7
	I0603 12:12:03.652942   73179 out.go:204]   - Configuring RBAC rules ...
	I0603 12:12:03.653061   73179 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0603 12:12:03.653174   73179 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0603 12:12:03.653347   73179 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0603 12:12:03.653531   73179 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0603 12:12:03.653674   73179 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0603 12:12:03.653781   73179 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0603 12:12:03.653925   73179 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0603 12:12:03.653965   73179 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0603 12:12:03.654004   73179 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0603 12:12:03.654010   73179 kubeadm.go:309] 
	I0603 12:12:03.654057   73179 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0603 12:12:03.654063   73179 kubeadm.go:309] 
	I0603 12:12:03.654125   73179 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0603 12:12:03.654131   73179 kubeadm.go:309] 
	I0603 12:12:03.654151   73179 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0603 12:12:03.654199   73179 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0603 12:12:03.654242   73179 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0603 12:12:03.654250   73179 kubeadm.go:309] 
	I0603 12:12:03.654300   73179 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0603 12:12:03.654306   73179 kubeadm.go:309] 
	I0603 12:12:03.654350   73179 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0603 12:12:03.654356   73179 kubeadm.go:309] 
	I0603 12:12:03.654397   73179 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0603 12:12:03.654467   73179 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0603 12:12:03.654524   73179 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0603 12:12:03.654530   73179 kubeadm.go:309] 
	I0603 12:12:03.654595   73179 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0603 12:12:03.654658   73179 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0603 12:12:03.654664   73179 kubeadm.go:309] 
	I0603 12:12:03.654729   73179 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token shx5vv.etzadsstlalifeo7 \
	I0603 12:12:03.654845   73179 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:efe6cbdd58a590fb6c3b56a05a1648145bfb13b8a8cd2383ea34b710fa987e0b \
	I0603 12:12:03.654880   73179 kubeadm.go:309] 	--control-plane 
	I0603 12:12:03.654886   73179 kubeadm.go:309] 
	I0603 12:12:03.655004   73179 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0603 12:12:03.655019   73179 kubeadm.go:309] 
	I0603 12:12:03.655117   73179 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token shx5vv.etzadsstlalifeo7 \
	I0603 12:12:03.655267   73179 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:efe6cbdd58a590fb6c3b56a05a1648145bfb13b8a8cd2383ea34b710fa987e0b 
	I0603 12:12:03.655306   73179 cni.go:84] Creating CNI manager for ""
	I0603 12:12:03.655316   73179 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0603 12:12:03.656746   73179 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0603 12:12:03.445612   73294 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0603 12:12:03.459114   73294 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0603 12:12:03.479003   73294 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0603 12:12:03.479128   73294 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:12:03.479139   73294 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-196710 minikube.k8s.io/updated_at=2024_06_03T12_12_03_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=599070631c2216ebc936292d491e4fe10e15b9d8 minikube.k8s.io/name=default-k8s-diff-port-196710 minikube.k8s.io/primary=true
	I0603 12:12:03.506970   73294 ops.go:34] apiserver oom_adj: -16
	I0603 12:12:03.684097   73294 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:12:04.185124   73294 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:12:01.667542   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:12:03.669066   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:12:03.657886   73179 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0603 12:12:03.672430   73179 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0603 12:12:03.693536   73179 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0603 12:12:03.693627   73179 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:12:03.693658   73179 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-602118 minikube.k8s.io/updated_at=2024_06_03T12_12_03_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=599070631c2216ebc936292d491e4fe10e15b9d8 minikube.k8s.io/name=no-preload-602118 minikube.k8s.io/primary=true
	I0603 12:12:03.730215   73179 ops.go:34] apiserver oom_adj: -16
	I0603 12:12:03.897726   73179 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:12:04.398585   73179 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:12:04.898543   73179 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:12:04.684589   73294 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:12:05.184999   73294 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:12:05.685081   73294 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:12:06.185212   73294 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:12:06.684565   73294 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:12:07.184862   73294 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:12:07.684542   73294 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:12:08.184516   73294 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:12:08.684333   73294 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:12:09.184426   73294 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:12:06.166490   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:12:08.167169   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:12:08.661107   72964 pod_ready.go:81] duration metric: took 4m0.000791246s for pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace to be "Ready" ...
	E0603 12:12:08.661143   72964 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace to be "Ready" (will not retry!)
	I0603 12:12:08.661161   72964 pod_ready.go:38] duration metric: took 4m12.610770004s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0603 12:12:08.661187   72964 kubeadm.go:591] duration metric: took 4m20.419490743s to restartPrimaryControlPlane
	W0603 12:12:08.661235   72964 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0603 12:12:08.661255   72964 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0603 12:12:05.398640   73179 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:12:05.898522   73179 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:12:06.397948   73179 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:12:06.897958   73179 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:12:07.397912   73179 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:12:07.898059   73179 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:12:08.398372   73179 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:12:08.897877   73179 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:12:09.397861   73179 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:12:09.898541   73179 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:12:09.684787   73294 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:12:10.184277   73294 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:12:10.684146   73294 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:12:11.184402   73294 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:12:11.684199   73294 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:12:12.184770   73294 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:12:12.684964   73294 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:12:13.184228   73294 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:12:13.684160   73294 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:12:14.184443   73294 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:12:10.398126   73179 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:12:10.898790   73179 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:12:11.398275   73179 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:12:11.897874   73179 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:12:12.398040   73179 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:12:12.898813   73179 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:12:13.398175   73179 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:12:13.897789   73179 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:12:14.398202   73179 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:12:14.898444   73179 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:12:15.398430   73179 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:12:15.897913   73179 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:12:15.999563   73179 kubeadm.go:1107] duration metric: took 12.305979901s to wait for elevateKubeSystemPrivileges
	W0603 12:12:15.999608   73179 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0603 12:12:15.999618   73179 kubeadm.go:393] duration metric: took 5m16.666049314s to StartCluster
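	Note: the long run of repeated "kubectl get sa default" invocations above is a poll loop: the API server is asked for the default service account until it exists, which signals that the kube-system RBAC bootstrap (elevateKubeSystemPrivileges) has completed; here it took roughly 12.3s before StartCluster finished. A minimal sketch of that kind of retry loop, assuming kubectl is on PATH and pointed at the same kubeconfig, might look like:

	    // Illustrative sketch (an assumption, not minikube's implementation): retry
	    // "kubectl get sa default" until the default service account exists or a timeout expires.
	    package main

	    import (
	        "fmt"
	        "os/exec"
	        "time"
	    )

	    func waitForDefaultSA(kubeconfig string, timeout time.Duration) error {
	        deadline := time.Now().Add(timeout)
	        for time.Now().Before(deadline) {
	            cmd := exec.Command("kubectl", "get", "sa", "default", "--kubeconfig="+kubeconfig)
	            if err := cmd.Run(); err == nil {
	                return nil // service account exists; RBAC bootstrap has finished
	            }
	            time.Sleep(500 * time.Millisecond)
	        }
	        return fmt.Errorf("default service account not created within %s", timeout)
	    }

	    func main() {
	        if err := waitForDefaultSA("/var/lib/minikube/kubeconfig", 2*time.Minute); err != nil {
	            fmt.Println(err)
	        }
	    }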
	I0603 12:12:15.999646   73179 settings.go:142] acquiring lock: {Name:mkda1bdbbfe91266270f1d999e6d56fc2830d6f8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 12:12:15.999745   73179 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19008-7755/kubeconfig
	I0603 12:12:16.002178   73179 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19008-7755/kubeconfig: {Name:mk2f45bf5da44c5570e14124ae482cac309d97b6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 12:12:16.002496   73179 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.50.245 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0603 12:12:16.003826   73179 out.go:177] * Verifying Kubernetes components...
	I0603 12:12:16.002629   73179 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0603 12:12:16.002754   73179 config.go:182] Loaded profile config "no-preload-602118": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0603 12:12:16.005034   73179 addons.go:69] Setting storage-provisioner=true in profile "no-preload-602118"
	I0603 12:12:16.005049   73179 addons.go:69] Setting metrics-server=true in profile "no-preload-602118"
	I0603 12:12:16.005048   73179 addons.go:69] Setting default-storageclass=true in profile "no-preload-602118"
	I0603 12:12:16.005080   73179 addons.go:234] Setting addon metrics-server=true in "no-preload-602118"
	W0603 12:12:16.005095   73179 addons.go:243] addon metrics-server should already be in state true
	I0603 12:12:16.005095   73179 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-602118"
	I0603 12:12:16.005121   73179 host.go:66] Checking if "no-preload-602118" exists ...
	I0603 12:12:16.005082   73179 addons.go:234] Setting addon storage-provisioner=true in "no-preload-602118"
	W0603 12:12:16.005147   73179 addons.go:243] addon storage-provisioner should already be in state true
	I0603 12:12:16.005184   73179 host.go:66] Checking if "no-preload-602118" exists ...
	I0603 12:12:16.005039   73179 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0603 12:12:16.005558   73179 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 12:12:16.005568   73179 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 12:12:16.005562   73179 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 12:12:16.005594   73179 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 12:12:16.005613   73179 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 12:12:16.005592   73179 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 12:12:16.025576   73179 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37907
	I0603 12:12:16.025614   73179 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33735
	I0603 12:12:16.025580   73179 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34403
	I0603 12:12:16.026031   73179 main.go:141] libmachine: () Calling .GetVersion
	I0603 12:12:16.026071   73179 main.go:141] libmachine: () Calling .GetVersion
	I0603 12:12:16.026136   73179 main.go:141] libmachine: () Calling .GetVersion
	I0603 12:12:16.026534   73179 main.go:141] libmachine: Using API Version  1
	I0603 12:12:16.026549   73179 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 12:12:16.026534   73179 main.go:141] libmachine: Using API Version  1
	I0603 12:12:16.026662   73179 main.go:141] libmachine: Using API Version  1
	I0603 12:12:16.026762   73179 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 12:12:16.026781   73179 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 12:12:16.026868   73179 main.go:141] libmachine: () Calling .GetMachineName
	I0603 12:12:16.027104   73179 main.go:141] libmachine: () Calling .GetMachineName
	I0603 12:12:16.027174   73179 main.go:141] libmachine: () Calling .GetMachineName
	I0603 12:12:16.027270   73179 main.go:141] libmachine: (no-preload-602118) Calling .GetState
	I0603 12:12:16.027448   73179 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 12:12:16.027481   73179 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 12:12:16.027667   73179 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 12:12:16.027693   73179 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 12:12:16.031436   73179 addons.go:234] Setting addon default-storageclass=true in "no-preload-602118"
	W0603 12:12:16.031458   73179 addons.go:243] addon default-storageclass should already be in state true
	I0603 12:12:16.031487   73179 host.go:66] Checking if "no-preload-602118" exists ...
	I0603 12:12:16.031838   73179 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 12:12:16.031870   73179 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 12:12:16.043477   73179 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43369
	I0603 12:12:16.043659   73179 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38809
	I0603 12:12:16.044102   73179 main.go:141] libmachine: () Calling .GetVersion
	I0603 12:12:16.044124   73179 main.go:141] libmachine: () Calling .GetVersion
	I0603 12:12:16.044746   73179 main.go:141] libmachine: Using API Version  1
	I0603 12:12:16.044763   73179 main.go:141] libmachine: Using API Version  1
	I0603 12:12:16.044767   73179 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 12:12:16.044779   73179 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 12:12:16.045175   73179 main.go:141] libmachine: () Calling .GetMachineName
	I0603 12:12:16.045364   73179 main.go:141] libmachine: (no-preload-602118) Calling .GetState
	I0603 12:12:16.045406   73179 main.go:141] libmachine: () Calling .GetMachineName
	I0603 12:12:16.045571   73179 main.go:141] libmachine: (no-preload-602118) Calling .GetState
	I0603 12:12:16.047312   73179 main.go:141] libmachine: (no-preload-602118) Calling .DriverName
	I0603 12:12:16.047741   73179 main.go:141] libmachine: (no-preload-602118) Calling .DriverName
	I0603 12:12:16.049538   73179 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0603 12:12:16.048146   73179 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35375
	I0603 12:12:16.050862   73179 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0603 12:12:16.050892   73179 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0603 12:12:16.050897   73179 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0603 12:12:16.050908   73179 main.go:141] libmachine: (no-preload-602118) Calling .GetSSHHostname
	I0603 12:12:14.684713   73294 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:12:15.184206   73294 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:12:15.684798   73294 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:12:16.184405   73294 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:12:16.684720   73294 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:12:16.818407   73294 kubeadm.go:1107] duration metric: took 13.339334124s to wait for elevateKubeSystemPrivileges
	W0603 12:12:16.818450   73294 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0603 12:12:16.818460   73294 kubeadm.go:393] duration metric: took 5m7.432855804s to StartCluster
	I0603 12:12:16.818480   73294 settings.go:142] acquiring lock: {Name:mkda1bdbbfe91266270f1d999e6d56fc2830d6f8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 12:12:16.818573   73294 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19008-7755/kubeconfig
	I0603 12:12:16.821192   73294 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19008-7755/kubeconfig: {Name:mk2f45bf5da44c5570e14124ae482cac309d97b6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 12:12:16.821483   73294 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.61.60 Port:8444 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0603 12:12:16.823082   73294 out.go:177] * Verifying Kubernetes components...
	I0603 12:12:16.821572   73294 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0603 12:12:16.821670   73294 config.go:182] Loaded profile config "default-k8s-diff-port-196710": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0603 12:12:16.824703   73294 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0603 12:12:16.824719   73294 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-196710"
	I0603 12:12:16.824760   73294 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-196710"
	I0603 12:12:16.824710   73294 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-196710"
	W0603 12:12:16.824772   73294 addons.go:243] addon metrics-server should already be in state true
	I0603 12:12:16.824795   73294 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-196710"
	I0603 12:12:16.824802   73294 host.go:66] Checking if "default-k8s-diff-port-196710" exists ...
	W0603 12:12:16.824808   73294 addons.go:243] addon storage-provisioner should already be in state true
	I0603 12:12:16.824723   73294 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-196710"
	I0603 12:12:16.824843   73294 host.go:66] Checking if "default-k8s-diff-port-196710" exists ...
	I0603 12:12:16.824851   73294 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-196710"
	I0603 12:12:16.825222   73294 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 12:12:16.825241   73294 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 12:12:16.825250   73294 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 12:12:16.825264   73294 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 12:12:16.825228   73294 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 12:12:16.825354   73294 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 12:12:16.843187   73294 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41289
	I0603 12:12:16.843659   73294 main.go:141] libmachine: () Calling .GetVersion
	I0603 12:12:16.844379   73294 main.go:141] libmachine: Using API Version  1
	I0603 12:12:16.844407   73294 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 12:12:16.844784   73294 main.go:141] libmachine: () Calling .GetMachineName
	I0603 12:12:16.845314   73294 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 12:12:16.845353   73294 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 12:12:16.845975   73294 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46095
	I0603 12:12:16.846379   73294 main.go:141] libmachine: () Calling .GetVersion
	I0603 12:12:16.846856   73294 main.go:141] libmachine: Using API Version  1
	I0603 12:12:16.846875   73294 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 12:12:16.847307   73294 main.go:141] libmachine: () Calling .GetMachineName
	I0603 12:12:16.847921   73294 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 12:12:16.847944   73294 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 12:12:16.848622   73294 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45613
	I0603 12:12:16.849007   73294 main.go:141] libmachine: () Calling .GetVersion
	I0603 12:12:16.849505   73294 main.go:141] libmachine: Using API Version  1
	I0603 12:12:16.849527   73294 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 12:12:16.849888   73294 main.go:141] libmachine: () Calling .GetMachineName
	I0603 12:12:16.850120   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .GetState
	I0603 12:12:16.853711   73294 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-196710"
	W0603 12:12:16.853732   73294 addons.go:243] addon default-storageclass should already be in state true
	I0603 12:12:16.853758   73294 host.go:66] Checking if "default-k8s-diff-port-196710" exists ...
	I0603 12:12:16.854106   73294 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 12:12:16.854143   73294 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 12:12:16.874485   73294 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41485
	I0603 12:12:16.874543   73294 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40823
	I0603 12:12:16.875013   73294 main.go:141] libmachine: () Calling .GetVersion
	I0603 12:12:16.875431   73294 main.go:141] libmachine: () Calling .GetVersion
	I0603 12:12:16.875601   73294 main.go:141] libmachine: Using API Version  1
	I0603 12:12:16.875619   73294 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 12:12:16.875983   73294 main.go:141] libmachine: () Calling .GetMachineName
	I0603 12:12:16.875970   73294 main.go:141] libmachine: Using API Version  1
	I0603 12:12:16.876141   73294 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 12:12:16.876153   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .GetState
	I0603 12:12:16.876623   73294 main.go:141] libmachine: () Calling .GetMachineName
	I0603 12:12:16.877005   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .GetState
	I0603 12:12:16.878149   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .DriverName
	I0603 12:12:16.879857   73294 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0603 12:12:16.881339   73294 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0603 12:12:16.881357   73294 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0603 12:12:16.881384   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .GetSSHHostname
	I0603 12:12:16.883128   73294 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42307
	I0603 12:12:16.883690   73294 main.go:141] libmachine: () Calling .GetVersion
	I0603 12:12:16.883973   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .DriverName
	I0603 12:12:16.884247   73294 main.go:141] libmachine: Using API Version  1
	I0603 12:12:16.884263   73294 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 12:12:16.885697   73294 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0603 12:12:16.052190   73179 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0603 12:12:16.052208   73179 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0603 12:12:16.052226   73179 main.go:141] libmachine: (no-preload-602118) Calling .GetSSHHostname
	I0603 12:12:16.051450   73179 main.go:141] libmachine: () Calling .GetVersion
	I0603 12:12:16.053253   73179 main.go:141] libmachine: Using API Version  1
	I0603 12:12:16.053274   73179 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 12:12:16.053684   73179 main.go:141] libmachine: () Calling .GetMachineName
	I0603 12:12:16.054284   73179 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 12:12:16.054309   73179 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 12:12:16.054504   73179 main.go:141] libmachine: (no-preload-602118) DBG | domain no-preload-602118 has defined MAC address 52:54:00:ac:6c:91 in network mk-no-preload-602118
	I0603 12:12:16.054885   73179 main.go:141] libmachine: (no-preload-602118) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:6c:91", ip: ""} in network mk-no-preload-602118: {Iface:virbr2 ExpiryTime:2024-06-03 13:06:33 +0000 UTC Type:0 Mac:52:54:00:ac:6c:91 Iaid: IPaddr:192.168.50.245 Prefix:24 Hostname:no-preload-602118 Clientid:01:52:54:00:ac:6c:91}
	I0603 12:12:16.054916   73179 main.go:141] libmachine: (no-preload-602118) DBG | domain no-preload-602118 has defined IP address 192.168.50.245 and MAC address 52:54:00:ac:6c:91 in network mk-no-preload-602118
	I0603 12:12:16.055640   73179 main.go:141] libmachine: (no-preload-602118) Calling .GetSSHPort
	I0603 12:12:16.055804   73179 main.go:141] libmachine: (no-preload-602118) Calling .GetSSHKeyPath
	I0603 12:12:16.055873   73179 main.go:141] libmachine: (no-preload-602118) DBG | domain no-preload-602118 has defined MAC address 52:54:00:ac:6c:91 in network mk-no-preload-602118
	I0603 12:12:16.055952   73179 main.go:141] libmachine: (no-preload-602118) Calling .GetSSHUsername
	I0603 12:12:16.056079   73179 sshutil.go:53] new ssh client: &{IP:192.168.50.245 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19008-7755/.minikube/machines/no-preload-602118/id_rsa Username:docker}
	I0603 12:12:16.056405   73179 main.go:141] libmachine: (no-preload-602118) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:6c:91", ip: ""} in network mk-no-preload-602118: {Iface:virbr2 ExpiryTime:2024-06-03 13:06:33 +0000 UTC Type:0 Mac:52:54:00:ac:6c:91 Iaid: IPaddr:192.168.50.245 Prefix:24 Hostname:no-preload-602118 Clientid:01:52:54:00:ac:6c:91}
	I0603 12:12:16.056431   73179 main.go:141] libmachine: (no-preload-602118) DBG | domain no-preload-602118 has defined IP address 192.168.50.245 and MAC address 52:54:00:ac:6c:91 in network mk-no-preload-602118
	I0603 12:12:16.056465   73179 main.go:141] libmachine: (no-preload-602118) Calling .GetSSHPort
	I0603 12:12:16.056633   73179 main.go:141] libmachine: (no-preload-602118) Calling .GetSSHKeyPath
	I0603 12:12:16.056879   73179 main.go:141] libmachine: (no-preload-602118) Calling .GetSSHUsername
	I0603 12:12:16.057006   73179 sshutil.go:53] new ssh client: &{IP:192.168.50.245 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19008-7755/.minikube/machines/no-preload-602118/id_rsa Username:docker}
	I0603 12:12:16.072215   73179 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39537
	I0603 12:12:16.072581   73179 main.go:141] libmachine: () Calling .GetVersion
	I0603 12:12:16.072913   73179 main.go:141] libmachine: Using API Version  1
	I0603 12:12:16.072924   73179 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 12:12:16.073189   73179 main.go:141] libmachine: () Calling .GetMachineName
	I0603 12:12:16.073304   73179 main.go:141] libmachine: (no-preload-602118) Calling .GetState
	I0603 12:12:16.074771   73179 main.go:141] libmachine: (no-preload-602118) Calling .DriverName
	I0603 12:12:16.074941   73179 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0603 12:12:16.074953   73179 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0603 12:12:16.074964   73179 main.go:141] libmachine: (no-preload-602118) Calling .GetSSHHostname
	I0603 12:12:16.077122   73179 main.go:141] libmachine: (no-preload-602118) DBG | domain no-preload-602118 has defined MAC address 52:54:00:ac:6c:91 in network mk-no-preload-602118
	I0603 12:12:16.077439   73179 main.go:141] libmachine: (no-preload-602118) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:6c:91", ip: ""} in network mk-no-preload-602118: {Iface:virbr2 ExpiryTime:2024-06-03 13:06:33 +0000 UTC Type:0 Mac:52:54:00:ac:6c:91 Iaid: IPaddr:192.168.50.245 Prefix:24 Hostname:no-preload-602118 Clientid:01:52:54:00:ac:6c:91}
	I0603 12:12:16.077456   73179 main.go:141] libmachine: (no-preload-602118) DBG | domain no-preload-602118 has defined IP address 192.168.50.245 and MAC address 52:54:00:ac:6c:91 in network mk-no-preload-602118
	I0603 12:12:16.077666   73179 main.go:141] libmachine: (no-preload-602118) Calling .GetSSHPort
	I0603 12:12:16.077790   73179 main.go:141] libmachine: (no-preload-602118) Calling .GetSSHKeyPath
	I0603 12:12:16.077893   73179 main.go:141] libmachine: (no-preload-602118) Calling .GetSSHUsername
	I0603 12:12:16.078025   73179 sshutil.go:53] new ssh client: &{IP:192.168.50.245 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19008-7755/.minikube/machines/no-preload-602118/id_rsa Username:docker}
	I0603 12:12:16.204391   73179 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0603 12:12:16.224077   73179 node_ready.go:35] waiting up to 6m0s for node "no-preload-602118" to be "Ready" ...
	I0603 12:12:16.234147   73179 node_ready.go:49] node "no-preload-602118" has status "Ready":"True"
	I0603 12:12:16.234165   73179 node_ready.go:38] duration metric: took 10.052016ms for node "no-preload-602118" to be "Ready" ...
	I0603 12:12:16.234174   73179 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0603 12:12:16.239106   73179 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-602118" in "kube-system" namespace to be "Ready" ...
	I0603 12:12:16.245931   73179 pod_ready.go:92] pod "etcd-no-preload-602118" in "kube-system" namespace has status "Ready":"True"
	I0603 12:12:16.245951   73179 pod_ready.go:81] duration metric: took 6.818123ms for pod "etcd-no-preload-602118" in "kube-system" namespace to be "Ready" ...
	I0603 12:12:16.245959   73179 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-602118" in "kube-system" namespace to be "Ready" ...
	I0603 12:12:16.251349   73179 pod_ready.go:92] pod "kube-apiserver-no-preload-602118" in "kube-system" namespace has status "Ready":"True"
	I0603 12:12:16.251368   73179 pod_ready.go:81] duration metric: took 5.403445ms for pod "kube-apiserver-no-preload-602118" in "kube-system" namespace to be "Ready" ...
	I0603 12:12:16.251379   73179 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-602118" in "kube-system" namespace to be "Ready" ...
	I0603 12:12:16.259769   73179 pod_ready.go:92] pod "kube-controller-manager-no-preload-602118" in "kube-system" namespace has status "Ready":"True"
	I0603 12:12:16.259787   73179 pod_ready.go:81] duration metric: took 8.400968ms for pod "kube-controller-manager-no-preload-602118" in "kube-system" namespace to be "Ready" ...
	I0603 12:12:16.259797   73179 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-602118" in "kube-system" namespace to be "Ready" ...
	I0603 12:12:16.271311   73179 pod_ready.go:92] pod "kube-scheduler-no-preload-602118" in "kube-system" namespace has status "Ready":"True"
	I0603 12:12:16.271335   73179 pod_ready.go:81] duration metric: took 11.529418ms for pod "kube-scheduler-no-preload-602118" in "kube-system" namespace to be "Ready" ...
	I0603 12:12:16.271344   73179 pod_ready.go:38] duration metric: took 37.160711ms for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0603 12:12:16.271361   73179 api_server.go:52] waiting for apiserver process to appear ...
	I0603 12:12:16.271414   73179 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:12:16.299864   73179 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0603 12:12:16.312742   73179 api_server.go:72] duration metric: took 310.202333ms to wait for apiserver process to appear ...
	I0603 12:12:16.312769   73179 api_server.go:88] waiting for apiserver healthz status ...
	I0603 12:12:16.312789   73179 api_server.go:253] Checking apiserver healthz at https://192.168.50.245:8443/healthz ...
	I0603 12:12:16.332856   73179 api_server.go:279] https://192.168.50.245:8443/healthz returned 200:
	ok
	I0603 12:12:16.334897   73179 api_server.go:141] control plane version: v1.30.1
	I0603 12:12:16.334922   73179 api_server.go:131] duration metric: took 22.144726ms to wait for apiserver health ...
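	Note: the healthz probe above is an HTTPS GET against https://192.168.50.245:8443/healthz, retried until it returns 200 with body "ok", after which the control-plane version is read. A minimal sketch of such a probe, assuming TLS verification is skipped for brevity rather than trusting the cluster CA, could be:

	    // Illustrative sketch (not the actual api_server.go code): poll a kube-apiserver
	    // /healthz endpoint until it reports healthy or the timeout expires.
	    package main

	    import (
	        "crypto/tls"
	        "fmt"
	        "io"
	        "net/http"
	        "time"
	    )

	    func waitForHealthz(url string, timeout time.Duration) error {
	        client := &http.Client{
	            Timeout: 5 * time.Second,
	            Transport: &http.Transport{
	                // Assumption: skip verification instead of loading the cluster CA certificate.
	                TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
	            },
	        }
	        deadline := time.Now().Add(timeout)
	        for time.Now().Before(deadline) {
	            resp, err := client.Get(url)
	            if err == nil {
	                body, _ := io.ReadAll(resp.Body)
	                resp.Body.Close()
	                if resp.StatusCode == http.StatusOK {
	                    fmt.Printf("%s returned %d: %s\n", url, resp.StatusCode, body)
	                    return nil
	                }
	            }
	            time.Sleep(500 * time.Millisecond)
	        }
	        return fmt.Errorf("apiserver at %s not healthy after %s", url, timeout)
	    }

	    func main() {
	        if err := waitForHealthz("https://192.168.50.245:8443/healthz", 2*time.Minute); err != nil {
	            fmt.Println(err)
	        }
	    }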
	I0603 12:12:16.334932   73179 system_pods.go:43] waiting for kube-system pods to appear ...
	I0603 12:12:16.354509   73179 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0603 12:12:16.377512   73179 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0603 12:12:16.377540   73179 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0603 12:12:16.428770   73179 system_pods.go:59] 4 kube-system pods found
	I0603 12:12:16.428807   73179 system_pods.go:61] "etcd-no-preload-602118" [d66d136a-b6c8-411c-b820-3e80f773accf] Running
	I0603 12:12:16.428815   73179 system_pods.go:61] "kube-apiserver-no-preload-602118" [299b92ab-ea99-45f5-b55d-b66a06daeaeb] Running
	I0603 12:12:16.428820   73179 system_pods.go:61] "kube-controller-manager-no-preload-602118" [16fd06ad-ff2d-4392-b453-9a8ed782b581] Running
	I0603 12:12:16.428825   73179 system_pods.go:61] "kube-scheduler-no-preload-602118" [71ebde21-9840-43f5-b0a3-424e1fd2000e] Running
	I0603 12:12:16.428833   73179 system_pods.go:74] duration metric: took 93.893548ms to wait for pod list to return data ...
	I0603 12:12:16.428841   73179 default_sa.go:34] waiting for default service account to be created ...
	I0603 12:12:16.438619   73179 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0603 12:12:16.438645   73179 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0603 12:12:16.495189   73179 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0603 12:12:16.495218   73179 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0603 12:12:16.543072   73179 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0603 12:12:16.666123   73179 default_sa.go:45] found service account: "default"
	I0603 12:12:16.666154   73179 default_sa.go:55] duration metric: took 237.305488ms for default service account to be created ...
	I0603 12:12:16.666163   73179 system_pods.go:116] waiting for k8s-apps to be running ...
	I0603 12:12:16.860342   73179 system_pods.go:86] 7 kube-system pods found
	I0603 12:12:16.860387   73179 system_pods.go:89] "coredns-7db6d8ff4d-5gmj5" [474da426-9414-4a30-8b19-14e555e192de] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0603 12:12:16.860401   73179 system_pods.go:89] "coredns-7db6d8ff4d-dwptw" [7a0437fe-8e83-4acc-a92a-af29bf06db93] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0603 12:12:16.860410   73179 system_pods.go:89] "etcd-no-preload-602118" [d66d136a-b6c8-411c-b820-3e80f773accf] Running
	I0603 12:12:16.860419   73179 system_pods.go:89] "kube-apiserver-no-preload-602118" [299b92ab-ea99-45f5-b55d-b66a06daeaeb] Running
	I0603 12:12:16.860427   73179 system_pods.go:89] "kube-controller-manager-no-preload-602118" [16fd06ad-ff2d-4392-b453-9a8ed782b581] Running
	I0603 12:12:16.860436   73179 system_pods.go:89] "kube-proxy-tfxkl" [d6502635-478f-443c-8186-ab0616fcf4ac] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0603 12:12:16.860443   73179 system_pods.go:89] "kube-scheduler-no-preload-602118" [71ebde21-9840-43f5-b0a3-424e1fd2000e] Running
	I0603 12:12:16.860466   73179 retry.go:31] will retry after 306.693518ms: missing components: kube-dns, kube-proxy
	I0603 12:12:17.184783   73179 system_pods.go:86] 7 kube-system pods found
	I0603 12:12:17.184828   73179 system_pods.go:89] "coredns-7db6d8ff4d-5gmj5" [474da426-9414-4a30-8b19-14e555e192de] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0603 12:12:17.184840   73179 system_pods.go:89] "coredns-7db6d8ff4d-dwptw" [7a0437fe-8e83-4acc-a92a-af29bf06db93] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0603 12:12:17.184852   73179 system_pods.go:89] "etcd-no-preload-602118" [d66d136a-b6c8-411c-b820-3e80f773accf] Running
	I0603 12:12:17.184860   73179 system_pods.go:89] "kube-apiserver-no-preload-602118" [299b92ab-ea99-45f5-b55d-b66a06daeaeb] Running
	I0603 12:12:17.184868   73179 system_pods.go:89] "kube-controller-manager-no-preload-602118" [16fd06ad-ff2d-4392-b453-9a8ed782b581] Running
	I0603 12:12:17.184880   73179 system_pods.go:89] "kube-proxy-tfxkl" [d6502635-478f-443c-8186-ab0616fcf4ac] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0603 12:12:17.184891   73179 system_pods.go:89] "kube-scheduler-no-preload-602118" [71ebde21-9840-43f5-b0a3-424e1fd2000e] Running
	I0603 12:12:17.184916   73179 retry.go:31] will retry after 329.094905ms: missing components: kube-dns, kube-proxy
	I0603 12:12:17.415182   73179 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.060631588s)
	I0603 12:12:17.415242   73179 main.go:141] libmachine: Making call to close driver server
	I0603 12:12:17.415255   73179 main.go:141] libmachine: (no-preload-602118) Calling .Close
	I0603 12:12:17.415284   73179 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.115379891s)
	I0603 12:12:17.415326   73179 main.go:141] libmachine: Making call to close driver server
	I0603 12:12:17.415336   73179 main.go:141] libmachine: (no-preload-602118) Calling .Close
	I0603 12:12:17.415714   73179 main.go:141] libmachine: (no-preload-602118) DBG | Closing plugin on server side
	I0603 12:12:17.415719   73179 main.go:141] libmachine: (no-preload-602118) DBG | Closing plugin on server side
	I0603 12:12:17.415725   73179 main.go:141] libmachine: Successfully made call to close driver server
	I0603 12:12:17.415745   73179 main.go:141] libmachine: Making call to close connection to plugin binary
	I0603 12:12:17.415751   73179 main.go:141] libmachine: Successfully made call to close driver server
	I0603 12:12:17.415779   73179 main.go:141] libmachine: Making call to close connection to plugin binary
	I0603 12:12:17.415793   73179 main.go:141] libmachine: Making call to close driver server
	I0603 12:12:17.415804   73179 main.go:141] libmachine: (no-preload-602118) Calling .Close
	I0603 12:12:17.415753   73179 main.go:141] libmachine: Making call to close driver server
	I0603 12:12:17.415859   73179 main.go:141] libmachine: (no-preload-602118) Calling .Close
	I0603 12:12:17.416049   73179 main.go:141] libmachine: Successfully made call to close driver server
	I0603 12:12:17.416063   73179 main.go:141] libmachine: Making call to close connection to plugin binary
	I0603 12:12:17.417320   73179 main.go:141] libmachine: (no-preload-602118) DBG | Closing plugin on server side
	I0603 12:12:17.417366   73179 main.go:141] libmachine: Successfully made call to close driver server
	I0603 12:12:17.417391   73179 main.go:141] libmachine: Making call to close connection to plugin binary
	I0603 12:12:17.434040   73179 main.go:141] libmachine: Making call to close driver server
	I0603 12:12:17.434072   73179 main.go:141] libmachine: (no-preload-602118) Calling .Close
	I0603 12:12:17.434410   73179 main.go:141] libmachine: (no-preload-602118) DBG | Closing plugin on server side
	I0603 12:12:17.434434   73179 main.go:141] libmachine: Successfully made call to close driver server
	I0603 12:12:17.434445   73179 main.go:141] libmachine: Making call to close connection to plugin binary
	I0603 12:12:17.527445   73179 system_pods.go:86] 8 kube-system pods found
	I0603 12:12:17.527486   73179 system_pods.go:89] "coredns-7db6d8ff4d-5gmj5" [474da426-9414-4a30-8b19-14e555e192de] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0603 12:12:17.527499   73179 system_pods.go:89] "coredns-7db6d8ff4d-dwptw" [7a0437fe-8e83-4acc-a92a-af29bf06db93] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0603 12:12:17.527508   73179 system_pods.go:89] "etcd-no-preload-602118" [d66d136a-b6c8-411c-b820-3e80f773accf] Running
	I0603 12:12:17.527516   73179 system_pods.go:89] "kube-apiserver-no-preload-602118" [299b92ab-ea99-45f5-b55d-b66a06daeaeb] Running
	I0603 12:12:17.527524   73179 system_pods.go:89] "kube-controller-manager-no-preload-602118" [16fd06ad-ff2d-4392-b453-9a8ed782b581] Running
	I0603 12:12:17.527533   73179 system_pods.go:89] "kube-proxy-tfxkl" [d6502635-478f-443c-8186-ab0616fcf4ac] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0603 12:12:17.527540   73179 system_pods.go:89] "kube-scheduler-no-preload-602118" [71ebde21-9840-43f5-b0a3-424e1fd2000e] Running
	I0603 12:12:17.527551   73179 system_pods.go:89] "storage-provisioner" [9d9e7c2b-91a9-4394-8a08-a2c076d4b42d] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0603 12:12:17.527591   73179 retry.go:31] will retry after 346.068859ms: missing components: kube-dns, kube-proxy
	I0603 12:12:17.908653   73179 system_pods.go:86] 9 kube-system pods found
	I0603 12:12:17.908695   73179 system_pods.go:89] "coredns-7db6d8ff4d-5gmj5" [474da426-9414-4a30-8b19-14e555e192de] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0603 12:12:17.908706   73179 system_pods.go:89] "coredns-7db6d8ff4d-dwptw" [7a0437fe-8e83-4acc-a92a-af29bf06db93] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0603 12:12:17.908713   73179 system_pods.go:89] "etcd-no-preload-602118" [d66d136a-b6c8-411c-b820-3e80f773accf] Running
	I0603 12:12:17.908721   73179 system_pods.go:89] "kube-apiserver-no-preload-602118" [299b92ab-ea99-45f5-b55d-b66a06daeaeb] Running
	I0603 12:12:17.908728   73179 system_pods.go:89] "kube-controller-manager-no-preload-602118" [16fd06ad-ff2d-4392-b453-9a8ed782b581] Running
	I0603 12:12:17.908736   73179 system_pods.go:89] "kube-proxy-tfxkl" [d6502635-478f-443c-8186-ab0616fcf4ac] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0603 12:12:17.908743   73179 system_pods.go:89] "kube-scheduler-no-preload-602118" [71ebde21-9840-43f5-b0a3-424e1fd2000e] Running
	I0603 12:12:17.908753   73179 system_pods.go:89] "metrics-server-569cc877fc-zpzbw" [b28cb265-532b-41ea-a242-001a85174a35] Pending
	I0603 12:12:17.908761   73179 system_pods.go:89] "storage-provisioner" [9d9e7c2b-91a9-4394-8a08-a2c076d4b42d] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0603 12:12:17.908779   73179 retry.go:31] will retry after 517.651766ms: missing components: kube-dns, kube-proxy
	I0603 12:12:18.135778   73179 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.592660253s)
	I0603 12:12:18.135904   73179 main.go:141] libmachine: Making call to close driver server
	I0603 12:12:18.135945   73179 main.go:141] libmachine: (no-preload-602118) Calling .Close
	I0603 12:12:18.137972   73179 main.go:141] libmachine: (no-preload-602118) DBG | Closing plugin on server side
	I0603 12:12:18.138016   73179 main.go:141] libmachine: Successfully made call to close driver server
	I0603 12:12:18.138040   73179 main.go:141] libmachine: Making call to close connection to plugin binary
	I0603 12:12:18.138060   73179 main.go:141] libmachine: Making call to close driver server
	I0603 12:12:18.138071   73179 main.go:141] libmachine: (no-preload-602118) Calling .Close
	I0603 12:12:18.138394   73179 main.go:141] libmachine: (no-preload-602118) DBG | Closing plugin on server side
	I0603 12:12:18.138435   73179 main.go:141] libmachine: Successfully made call to close driver server
	I0603 12:12:18.138452   73179 main.go:141] libmachine: Making call to close connection to plugin binary
	I0603 12:12:18.138467   73179 addons.go:475] Verifying addon metrics-server=true in "no-preload-602118"
	I0603 12:12:18.139950   73179 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0603 12:12:16.887014   73294 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0603 12:12:16.887031   73294 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0603 12:12:16.887059   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .GetSSHHostname
	I0603 12:12:16.884952   73294 main.go:141] libmachine: () Calling .GetMachineName
	I0603 12:12:16.885388   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | domain default-k8s-diff-port-196710 has defined MAC address 52:54:00:9c:61:49 in network mk-default-k8s-diff-port-196710
	I0603 12:12:16.887151   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:61:49", ip: ""} in network mk-default-k8s-diff-port-196710: {Iface:virbr4 ExpiryTime:2024-06-03 12:58:47 +0000 UTC Type:0 Mac:52:54:00:9c:61:49 Iaid: IPaddr:192.168.61.60 Prefix:24 Hostname:default-k8s-diff-port-196710 Clientid:01:52:54:00:9c:61:49}
	I0603 12:12:16.887173   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | domain default-k8s-diff-port-196710 has defined IP address 192.168.61.60 and MAC address 52:54:00:9c:61:49 in network mk-default-k8s-diff-port-196710
	I0603 12:12:16.887719   73294 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 12:12:16.887741   73294 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 12:12:16.887932   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .GetSSHPort
	I0603 12:12:16.888207   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .GetSSHKeyPath
	I0603 12:12:16.888429   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .GetSSHUsername
	I0603 12:12:16.889197   73294 sshutil.go:53] new ssh client: &{IP:192.168.61.60 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19008-7755/.minikube/machines/default-k8s-diff-port-196710/id_rsa Username:docker}
	I0603 12:12:16.891158   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | domain default-k8s-diff-port-196710 has defined MAC address 52:54:00:9c:61:49 in network mk-default-k8s-diff-port-196710
	I0603 12:12:16.891613   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:61:49", ip: ""} in network mk-default-k8s-diff-port-196710: {Iface:virbr4 ExpiryTime:2024-06-03 12:58:47 +0000 UTC Type:0 Mac:52:54:00:9c:61:49 Iaid: IPaddr:192.168.61.60 Prefix:24 Hostname:default-k8s-diff-port-196710 Clientid:01:52:54:00:9c:61:49}
	I0603 12:12:16.891639   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | domain default-k8s-diff-port-196710 has defined IP address 192.168.61.60 and MAC address 52:54:00:9c:61:49 in network mk-default-k8s-diff-port-196710
	I0603 12:12:16.891801   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .GetSSHPort
	I0603 12:12:16.891979   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .GetSSHKeyPath
	I0603 12:12:16.892107   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .GetSSHUsername
	I0603 12:12:16.892220   73294 sshutil.go:53] new ssh client: &{IP:192.168.61.60 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19008-7755/.minikube/machines/default-k8s-diff-port-196710/id_rsa Username:docker}
	I0603 12:12:16.909637   73294 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35155
	I0603 12:12:16.910191   73294 main.go:141] libmachine: () Calling .GetVersion
	I0603 12:12:16.910809   73294 main.go:141] libmachine: Using API Version  1
	I0603 12:12:16.910836   73294 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 12:12:16.911344   73294 main.go:141] libmachine: () Calling .GetMachineName
	I0603 12:12:16.911542   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .GetState
	I0603 12:12:16.913489   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .DriverName
	I0603 12:12:16.913704   73294 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0603 12:12:16.913718   73294 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0603 12:12:16.913735   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .GetSSHHostname
	I0603 12:12:16.917538   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | domain default-k8s-diff-port-196710 has defined MAC address 52:54:00:9c:61:49 in network mk-default-k8s-diff-port-196710
	I0603 12:12:16.917994   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:61:49", ip: ""} in network mk-default-k8s-diff-port-196710: {Iface:virbr4 ExpiryTime:2024-06-03 12:58:47 +0000 UTC Type:0 Mac:52:54:00:9c:61:49 Iaid: IPaddr:192.168.61.60 Prefix:24 Hostname:default-k8s-diff-port-196710 Clientid:01:52:54:00:9c:61:49}
	I0603 12:12:16.918020   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | domain default-k8s-diff-port-196710 has defined IP address 192.168.61.60 and MAC address 52:54:00:9c:61:49 in network mk-default-k8s-diff-port-196710
	I0603 12:12:16.918116   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .GetSSHPort
	I0603 12:12:16.918243   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .GetSSHKeyPath
	I0603 12:12:16.918349   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .GetSSHUsername
	I0603 12:12:16.918445   73294 sshutil.go:53] new ssh client: &{IP:192.168.61.60 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19008-7755/.minikube/machines/default-k8s-diff-port-196710/id_rsa Username:docker}
	I0603 12:12:17.046824   73294 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0603 12:12:17.064066   73294 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-196710" to be "Ready" ...
	I0603 12:12:17.084082   73294 node_ready.go:49] node "default-k8s-diff-port-196710" has status "Ready":"True"
	I0603 12:12:17.084108   73294 node_ready.go:38] duration metric: took 19.978467ms for node "default-k8s-diff-port-196710" to be "Ready" ...
	I0603 12:12:17.084116   73294 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0603 12:12:17.095774   73294 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-fvgqr" in "kube-system" namespace to be "Ready" ...
	I0603 12:12:17.168174   73294 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0603 12:12:17.168200   73294 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0603 12:12:17.200793   73294 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0603 12:12:17.203132   73294 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0603 12:12:17.245827   73294 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0603 12:12:17.245855   73294 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0603 12:12:17.310865   73294 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0603 12:12:17.310894   73294 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0603 12:12:17.449447   73294 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0603 12:12:18.385411   73294 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.184578024s)
	I0603 12:12:18.385465   73294 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.182295951s)
	I0603 12:12:18.385505   73294 main.go:141] libmachine: Making call to close driver server
	I0603 12:12:18.385520   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .Close
	I0603 12:12:18.385470   73294 main.go:141] libmachine: Making call to close driver server
	I0603 12:12:18.385562   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .Close
	I0603 12:12:18.385878   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | Closing plugin on server side
	I0603 12:12:18.385905   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | Closing plugin on server side
	I0603 12:12:18.385954   73294 main.go:141] libmachine: Successfully made call to close driver server
	I0603 12:12:18.385971   73294 main.go:141] libmachine: Making call to close connection to plugin binary
	I0603 12:12:18.385980   73294 main.go:141] libmachine: Making call to close driver server
	I0603 12:12:18.386009   73294 main.go:141] libmachine: Successfully made call to close driver server
	I0603 12:12:18.386026   73294 main.go:141] libmachine: Making call to close connection to plugin binary
	I0603 12:12:18.386035   73294 main.go:141] libmachine: Making call to close driver server
	I0603 12:12:18.386043   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .Close
	I0603 12:12:18.386094   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .Close
	I0603 12:12:18.386336   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | Closing plugin on server side
	I0603 12:12:18.386374   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | Closing plugin on server side
	I0603 12:12:18.386425   73294 main.go:141] libmachine: Successfully made call to close driver server
	I0603 12:12:18.386460   73294 main.go:141] libmachine: Making call to close connection to plugin binary
	I0603 12:12:18.387994   73294 main.go:141] libmachine: Successfully made call to close driver server
	I0603 12:12:18.388012   73294 main.go:141] libmachine: Making call to close connection to plugin binary
	I0603 12:12:18.423011   73294 main.go:141] libmachine: Making call to close driver server
	I0603 12:12:18.423058   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .Close
	I0603 12:12:18.423412   73294 main.go:141] libmachine: Successfully made call to close driver server
	I0603 12:12:18.423433   73294 main.go:141] libmachine: Making call to close connection to plugin binary
	I0603 12:12:18.423473   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | Closing plugin on server side
	I0603 12:12:18.697521   73294 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.24802602s)
	I0603 12:12:18.697564   73294 main.go:141] libmachine: Making call to close driver server
	I0603 12:12:18.697575   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .Close
	I0603 12:12:18.697960   73294 main.go:141] libmachine: Successfully made call to close driver server
	I0603 12:12:18.697982   73294 main.go:141] libmachine: Making call to close connection to plugin binary
	I0603 12:12:18.698043   73294 main.go:141] libmachine: Making call to close driver server
	I0603 12:12:18.698061   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .Close
	I0603 12:12:18.698312   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | Closing plugin on server side
	I0603 12:12:18.698391   73294 main.go:141] libmachine: Successfully made call to close driver server
	I0603 12:12:18.698408   73294 main.go:141] libmachine: Making call to close connection to plugin binary
	I0603 12:12:18.698425   73294 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-196710"
	I0603 12:12:18.700421   73294 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0603 12:12:18.698680   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | Closing plugin on server side
	I0603 12:12:18.701834   73294 addons.go:510] duration metric: took 1.880261237s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0603 12:12:19.125961   73294 pod_ready.go:92] pod "coredns-7db6d8ff4d-fvgqr" in "kube-system" namespace has status "Ready":"True"
	I0603 12:12:19.125993   73294 pod_ready.go:81] duration metric: took 2.03019096s for pod "coredns-7db6d8ff4d-fvgqr" in "kube-system" namespace to be "Ready" ...
	I0603 12:12:19.126008   73294 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-196710" in "kube-system" namespace to be "Ready" ...
	I0603 12:12:19.142691   73294 pod_ready.go:92] pod "etcd-default-k8s-diff-port-196710" in "kube-system" namespace has status "Ready":"True"
	I0603 12:12:19.142711   73294 pod_ready.go:81] duration metric: took 16.694827ms for pod "etcd-default-k8s-diff-port-196710" in "kube-system" namespace to be "Ready" ...
	I0603 12:12:19.142721   73294 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-196710" in "kube-system" namespace to be "Ready" ...
	I0603 12:12:19.166768   73294 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-196710" in "kube-system" namespace has status "Ready":"True"
	I0603 12:12:19.166793   73294 pod_ready.go:81] duration metric: took 24.064572ms for pod "kube-apiserver-default-k8s-diff-port-196710" in "kube-system" namespace to be "Ready" ...
	I0603 12:12:19.166806   73294 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-196710" in "kube-system" namespace to be "Ready" ...
	I0603 12:12:19.177902   73294 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-196710" in "kube-system" namespace has status "Ready":"True"
	I0603 12:12:19.177917   73294 pod_ready.go:81] duration metric: took 11.103943ms for pod "kube-controller-manager-default-k8s-diff-port-196710" in "kube-system" namespace to be "Ready" ...
	I0603 12:12:19.177926   73294 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-j4gzg" in "kube-system" namespace to be "Ready" ...
	I0603 12:12:19.191217   73294 pod_ready.go:92] pod "kube-proxy-j4gzg" in "kube-system" namespace has status "Ready":"True"
	I0603 12:12:19.191242   73294 pod_ready.go:81] duration metric: took 13.306857ms for pod "kube-proxy-j4gzg" in "kube-system" namespace to be "Ready" ...
	I0603 12:12:19.191255   73294 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-196710" in "kube-system" namespace to be "Ready" ...
	I0603 12:12:19.499792   73294 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-196710" in "kube-system" namespace has status "Ready":"True"
	I0603 12:12:19.499815   73294 pod_ready.go:81] duration metric: took 308.552918ms for pod "kube-scheduler-default-k8s-diff-port-196710" in "kube-system" namespace to be "Ready" ...
	I0603 12:12:19.499823   73294 pod_ready.go:38] duration metric: took 2.415698619s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0603 12:12:19.499837   73294 api_server.go:52] waiting for apiserver process to appear ...
	I0603 12:12:19.499881   73294 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:12:19.516655   73294 api_server.go:72] duration metric: took 2.695130179s to wait for apiserver process to appear ...
	I0603 12:12:19.516686   73294 api_server.go:88] waiting for apiserver healthz status ...
	I0603 12:12:19.516707   73294 api_server.go:253] Checking apiserver healthz at https://192.168.61.60:8444/healthz ...
	I0603 12:12:19.521037   73294 api_server.go:279] https://192.168.61.60:8444/healthz returned 200:
	ok
	I0603 12:12:19.521988   73294 api_server.go:141] control plane version: v1.30.1
	I0603 12:12:19.522006   73294 api_server.go:131] duration metric: took 5.313149ms to wait for apiserver health ...
	I0603 12:12:19.522015   73294 system_pods.go:43] waiting for kube-system pods to appear ...
	I0603 12:12:18.141333   73179 addons.go:510] duration metric: took 2.138708426s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0603 12:12:18.445201   73179 system_pods.go:86] 9 kube-system pods found
	I0603 12:12:18.445243   73179 system_pods.go:89] "coredns-7db6d8ff4d-5gmj5" [474da426-9414-4a30-8b19-14e555e192de] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0603 12:12:18.445255   73179 system_pods.go:89] "coredns-7db6d8ff4d-dwptw" [7a0437fe-8e83-4acc-a92a-af29bf06db93] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0603 12:12:18.445266   73179 system_pods.go:89] "etcd-no-preload-602118" [d66d136a-b6c8-411c-b820-3e80f773accf] Running
	I0603 12:12:18.445275   73179 system_pods.go:89] "kube-apiserver-no-preload-602118" [299b92ab-ea99-45f5-b55d-b66a06daeaeb] Running
	I0603 12:12:18.445282   73179 system_pods.go:89] "kube-controller-manager-no-preload-602118" [16fd06ad-ff2d-4392-b453-9a8ed782b581] Running
	I0603 12:12:18.445289   73179 system_pods.go:89] "kube-proxy-tfxkl" [d6502635-478f-443c-8186-ab0616fcf4ac] Running
	I0603 12:12:18.445296   73179 system_pods.go:89] "kube-scheduler-no-preload-602118" [71ebde21-9840-43f5-b0a3-424e1fd2000e] Running
	I0603 12:12:18.445309   73179 system_pods.go:89] "metrics-server-569cc877fc-zpzbw" [b28cb265-532b-41ea-a242-001a85174a35] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0603 12:12:18.445318   73179 system_pods.go:89] "storage-provisioner" [9d9e7c2b-91a9-4394-8a08-a2c076d4b42d] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0603 12:12:18.445347   73179 retry.go:31] will retry after 493.36636ms: missing components: kube-dns
	I0603 12:12:18.950981   73179 system_pods.go:86] 9 kube-system pods found
	I0603 12:12:18.951013   73179 system_pods.go:89] "coredns-7db6d8ff4d-5gmj5" [474da426-9414-4a30-8b19-14e555e192de] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0603 12:12:18.951022   73179 system_pods.go:89] "coredns-7db6d8ff4d-dwptw" [7a0437fe-8e83-4acc-a92a-af29bf06db93] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0603 12:12:18.951028   73179 system_pods.go:89] "etcd-no-preload-602118" [d66d136a-b6c8-411c-b820-3e80f773accf] Running
	I0603 12:12:18.951033   73179 system_pods.go:89] "kube-apiserver-no-preload-602118" [299b92ab-ea99-45f5-b55d-b66a06daeaeb] Running
	I0603 12:12:18.951071   73179 system_pods.go:89] "kube-controller-manager-no-preload-602118" [16fd06ad-ff2d-4392-b453-9a8ed782b581] Running
	I0603 12:12:18.951079   73179 system_pods.go:89] "kube-proxy-tfxkl" [d6502635-478f-443c-8186-ab0616fcf4ac] Running
	I0603 12:12:18.951085   73179 system_pods.go:89] "kube-scheduler-no-preload-602118" [71ebde21-9840-43f5-b0a3-424e1fd2000e] Running
	I0603 12:12:18.951093   73179 system_pods.go:89] "metrics-server-569cc877fc-zpzbw" [b28cb265-532b-41ea-a242-001a85174a35] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0603 12:12:18.951106   73179 system_pods.go:89] "storage-provisioner" [9d9e7c2b-91a9-4394-8a08-a2c076d4b42d] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0603 12:12:18.951123   73179 retry.go:31] will retry after 784.878622ms: missing components: kube-dns
	I0603 12:12:19.743268   73179 system_pods.go:86] 9 kube-system pods found
	I0603 12:12:19.743302   73179 system_pods.go:89] "coredns-7db6d8ff4d-5gmj5" [474da426-9414-4a30-8b19-14e555e192de] Running
	I0603 12:12:19.743310   73179 system_pods.go:89] "coredns-7db6d8ff4d-dwptw" [7a0437fe-8e83-4acc-a92a-af29bf06db93] Running
	I0603 12:12:19.743323   73179 system_pods.go:89] "etcd-no-preload-602118" [d66d136a-b6c8-411c-b820-3e80f773accf] Running
	I0603 12:12:19.743330   73179 system_pods.go:89] "kube-apiserver-no-preload-602118" [299b92ab-ea99-45f5-b55d-b66a06daeaeb] Running
	I0603 12:12:19.743337   73179 system_pods.go:89] "kube-controller-manager-no-preload-602118" [16fd06ad-ff2d-4392-b453-9a8ed782b581] Running
	I0603 12:12:19.743343   73179 system_pods.go:89] "kube-proxy-tfxkl" [d6502635-478f-443c-8186-ab0616fcf4ac] Running
	I0603 12:12:19.743349   73179 system_pods.go:89] "kube-scheduler-no-preload-602118" [71ebde21-9840-43f5-b0a3-424e1fd2000e] Running
	I0603 12:12:19.743365   73179 system_pods.go:89] "metrics-server-569cc877fc-zpzbw" [b28cb265-532b-41ea-a242-001a85174a35] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0603 12:12:19.743376   73179 system_pods.go:89] "storage-provisioner" [9d9e7c2b-91a9-4394-8a08-a2c076d4b42d] Running
	I0603 12:12:19.743388   73179 system_pods.go:126] duration metric: took 3.077217613s to wait for k8s-apps to be running ...
	I0603 12:12:19.743399   73179 system_svc.go:44] waiting for kubelet service to be running ....
	I0603 12:12:19.743440   73179 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0603 12:12:19.759127   73179 system_svc.go:56] duration metric: took 15.720008ms WaitForService to wait for kubelet
	I0603 12:12:19.759152   73179 kubeadm.go:576] duration metric: took 3.756617312s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0603 12:12:19.759177   73179 node_conditions.go:102] verifying NodePressure condition ...
	I0603 12:12:19.761858   73179 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0603 12:12:19.761876   73179 node_conditions.go:123] node cpu capacity is 2
	I0603 12:12:19.761885   73179 node_conditions.go:105] duration metric: took 2.703518ms to run NodePressure ...
	I0603 12:12:19.761894   73179 start.go:240] waiting for startup goroutines ...
	I0603 12:12:19.761901   73179 start.go:245] waiting for cluster config update ...
	I0603 12:12:19.761910   73179 start.go:254] writing updated cluster config ...
	I0603 12:12:19.762150   73179 ssh_runner.go:195] Run: rm -f paused
	I0603 12:12:19.808158   73179 start.go:600] kubectl: 1.30.1, cluster: 1.30.1 (minor skew: 0)
	I0603 12:12:19.810271   73179 out.go:177] * Done! kubectl is now configured to use "no-preload-602118" cluster and "default" namespace by default
	I0603 12:12:17.205144   73662 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0603 12:12:17.215420   73662 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0603 12:12:17.215687   73662 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0603 12:12:19.703391   73294 system_pods.go:59] 9 kube-system pods found
	I0603 12:12:19.703422   73294 system_pods.go:61] "coredns-7db6d8ff4d-fvgqr" [c908a302-8c40-46aa-9e98-92baa297a7ed] Running
	I0603 12:12:19.703428   73294 system_pods.go:61] "coredns-7db6d8ff4d-pbndv" [91d83622-9883-407e-b0f4-eb2d18cd2483] Running
	I0603 12:12:19.703434   73294 system_pods.go:61] "etcd-default-k8s-diff-port-196710" [29eaf8a6-0759-4f27-9b6e-55beeba8f955] Running
	I0603 12:12:19.703439   73294 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-196710" [7bfa3724-0917-40be-89fe-fe5c67f4fd45] Running
	I0603 12:12:19.703444   73294 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-196710" [50e0af3b-d47c-4113-be78-9cf18060b505] Running
	I0603 12:12:19.703448   73294 system_pods.go:61] "kube-proxy-j4gzg" [2e603f37-93e0-429d-97b8-e9b997c26101] Running
	I0603 12:12:19.703453   73294 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-196710" [e50842a0-71ed-4c9e-811e-9b6bda31dfd0] Running
	I0603 12:12:19.703461   73294 system_pods.go:61] "metrics-server-569cc877fc-lxvbp" [36c7a3c5-6b64-42d2-93ed-2f6cf8234a7f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0603 12:12:19.703469   73294 system_pods.go:61] "storage-provisioner" [8bc80b69-d8f9-4d6a-9bf4-4a41d875a735] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0603 12:12:19.703483   73294 system_pods.go:74] duration metric: took 181.460766ms to wait for pod list to return data ...
	I0603 12:12:19.703494   73294 default_sa.go:34] waiting for default service account to be created ...
	I0603 12:12:19.899579   73294 default_sa.go:45] found service account: "default"
	I0603 12:12:19.899607   73294 default_sa.go:55] duration metric: took 196.097132ms for default service account to be created ...
	I0603 12:12:19.899617   73294 system_pods.go:116] waiting for k8s-apps to be running ...
	I0603 12:12:20.104618   73294 system_pods.go:86] 9 kube-system pods found
	I0603 12:12:20.104648   73294 system_pods.go:89] "coredns-7db6d8ff4d-fvgqr" [c908a302-8c40-46aa-9e98-92baa297a7ed] Running
	I0603 12:12:20.104656   73294 system_pods.go:89] "coredns-7db6d8ff4d-pbndv" [91d83622-9883-407e-b0f4-eb2d18cd2483] Running
	I0603 12:12:20.104662   73294 system_pods.go:89] "etcd-default-k8s-diff-port-196710" [29eaf8a6-0759-4f27-9b6e-55beeba8f955] Running
	I0603 12:12:20.104669   73294 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-196710" [7bfa3724-0917-40be-89fe-fe5c67f4fd45] Running
	I0603 12:12:20.104676   73294 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-196710" [50e0af3b-d47c-4113-be78-9cf18060b505] Running
	I0603 12:12:20.104682   73294 system_pods.go:89] "kube-proxy-j4gzg" [2e603f37-93e0-429d-97b8-e9b997c26101] Running
	I0603 12:12:20.104690   73294 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-196710" [e50842a0-71ed-4c9e-811e-9b6bda31dfd0] Running
	I0603 12:12:20.104704   73294 system_pods.go:89] "metrics-server-569cc877fc-lxvbp" [36c7a3c5-6b64-42d2-93ed-2f6cf8234a7f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0603 12:12:20.104716   73294 system_pods.go:89] "storage-provisioner" [8bc80b69-d8f9-4d6a-9bf4-4a41d875a735] Running
	I0603 12:12:20.104733   73294 system_pods.go:126] duration metric: took 205.107424ms to wait for k8s-apps to be running ...
	I0603 12:12:20.104746   73294 system_svc.go:44] waiting for kubelet service to be running ....
	I0603 12:12:20.104794   73294 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0603 12:12:20.120345   73294 system_svc.go:56] duration metric: took 15.592236ms WaitForService to wait for kubelet
	I0603 12:12:20.120374   73294 kubeadm.go:576] duration metric: took 3.298854629s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0603 12:12:20.120398   73294 node_conditions.go:102] verifying NodePressure condition ...
	I0603 12:12:20.299539   73294 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0603 12:12:20.299565   73294 node_conditions.go:123] node cpu capacity is 2
	I0603 12:12:20.299579   73294 node_conditions.go:105] duration metric: took 179.17433ms to run NodePressure ...
	I0603 12:12:20.299593   73294 start.go:240] waiting for startup goroutines ...
	I0603 12:12:20.299602   73294 start.go:245] waiting for cluster config update ...
	I0603 12:12:20.299613   73294 start.go:254] writing updated cluster config ...
	I0603 12:12:20.299896   73294 ssh_runner.go:195] Run: rm -f paused
	I0603 12:12:20.351961   73294 start.go:600] kubectl: 1.30.1, cluster: 1.30.1 (minor skew: 0)
	I0603 12:12:20.354040   73294 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-196710" cluster and "default" namespace by default
	I0603 12:12:22.215864   73662 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0603 12:12:22.216210   73662 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0603 12:12:32.215921   73662 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0603 12:12:32.216130   73662 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0603 12:12:40.270116   72964 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (31.60882832s)
	I0603 12:12:40.270214   72964 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0603 12:12:40.288350   72964 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0603 12:12:40.298477   72964 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0603 12:12:40.308047   72964 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0603 12:12:40.308063   72964 kubeadm.go:156] found existing configuration files:
	
	I0603 12:12:40.308095   72964 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0603 12:12:40.317173   72964 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0603 12:12:40.317221   72964 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0603 12:12:40.326431   72964 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0603 12:12:40.335372   72964 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0603 12:12:40.335421   72964 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0603 12:12:40.345520   72964 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0603 12:12:40.354836   72964 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0603 12:12:40.354881   72964 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0603 12:12:40.364667   72964 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0603 12:12:40.375714   72964 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0603 12:12:40.375768   72964 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0603 12:12:40.387249   72964 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0603 12:12:40.587569   72964 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0603 12:12:49.228482   72964 kubeadm.go:309] [init] Using Kubernetes version: v1.30.1
	I0603 12:12:49.228556   72964 kubeadm.go:309] [preflight] Running pre-flight checks
	I0603 12:12:49.228654   72964 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0603 12:12:49.228817   72964 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0603 12:12:49.228965   72964 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0603 12:12:49.229056   72964 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0603 12:12:49.230616   72964 out.go:204]   - Generating certificates and keys ...
	I0603 12:12:49.230705   72964 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0603 12:12:49.230778   72964 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0603 12:12:49.230884   72964 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0603 12:12:49.230943   72964 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0603 12:12:49.231001   72964 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0603 12:12:49.231071   72964 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0603 12:12:49.231302   72964 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0603 12:12:49.231400   72964 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0603 12:12:49.231487   72964 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0603 12:12:49.231595   72964 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0603 12:12:49.231645   72964 kubeadm.go:309] [certs] Using the existing "sa" key
	I0603 12:12:49.231731   72964 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0603 12:12:49.231842   72964 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0603 12:12:49.231930   72964 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0603 12:12:49.232009   72964 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0603 12:12:49.232105   72964 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0603 12:12:49.232188   72964 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0603 12:12:49.232305   72964 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0603 12:12:49.232392   72964 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0603 12:12:49.234435   72964 out.go:204]   - Booting up control plane ...
	I0603 12:12:49.234513   72964 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0603 12:12:49.234592   72964 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0603 12:12:49.234680   72964 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0603 12:12:49.234803   72964 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0603 12:12:49.234936   72964 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0603 12:12:49.235006   72964 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0603 12:12:49.235182   72964 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0603 12:12:49.235283   72964 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0603 12:12:49.235361   72964 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 502.484209ms
	I0603 12:12:49.235428   72964 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0603 12:12:49.235507   72964 kubeadm.go:309] [api-check] The API server is healthy after 5.001411221s
	I0603 12:12:49.235621   72964 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0603 12:12:49.235730   72964 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0603 12:12:49.235778   72964 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0603 12:12:49.235941   72964 kubeadm.go:309] [mark-control-plane] Marking the node embed-certs-725022 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0603 12:12:49.236026   72964 kubeadm.go:309] [bootstrap-token] Using token: 0tfgxu.iied44jkidnxw3ef
	I0603 12:12:49.237200   72964 out.go:204]   - Configuring RBAC rules ...
	I0603 12:12:49.237290   72964 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0603 12:12:49.237369   72964 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0603 12:12:49.237497   72964 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0603 12:12:49.237671   72964 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0603 12:12:49.237782   72964 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0603 12:12:49.237879   72964 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0603 12:12:49.238007   72964 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0603 12:12:49.238092   72964 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0603 12:12:49.238156   72964 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0603 12:12:49.238166   72964 kubeadm.go:309] 
	I0603 12:12:49.238242   72964 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0603 12:12:49.238250   72964 kubeadm.go:309] 
	I0603 12:12:49.238351   72964 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0603 12:12:49.238359   72964 kubeadm.go:309] 
	I0603 12:12:49.238392   72964 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0603 12:12:49.238472   72964 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0603 12:12:49.238549   72964 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0603 12:12:49.238558   72964 kubeadm.go:309] 
	I0603 12:12:49.238641   72964 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0603 12:12:49.238649   72964 kubeadm.go:309] 
	I0603 12:12:49.238722   72964 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0603 12:12:49.238737   72964 kubeadm.go:309] 
	I0603 12:12:49.238810   72964 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0603 12:12:49.238874   72964 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0603 12:12:49.238931   72964 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0603 12:12:49.238937   72964 kubeadm.go:309] 
	I0603 12:12:49.239007   72964 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0603 12:12:49.239103   72964 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0603 12:12:49.239112   72964 kubeadm.go:309] 
	I0603 12:12:49.239179   72964 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token 0tfgxu.iied44jkidnxw3ef \
	I0603 12:12:49.239305   72964 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:efe6cbdd58a590fb6c3b56a05a1648145bfb13b8a8cd2383ea34b710fa987e0b \
	I0603 12:12:49.239341   72964 kubeadm.go:309] 	--control-plane 
	I0603 12:12:49.239355   72964 kubeadm.go:309] 
	I0603 12:12:49.239457   72964 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0603 12:12:49.239466   72964 kubeadm.go:309] 
	I0603 12:12:49.239574   72964 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token 0tfgxu.iied44jkidnxw3ef \
	I0603 12:12:49.239677   72964 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:efe6cbdd58a590fb6c3b56a05a1648145bfb13b8a8cd2383ea34b710fa987e0b 
	I0603 12:12:49.239688   72964 cni.go:84] Creating CNI manager for ""
	I0603 12:12:49.239694   72964 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0603 12:12:49.241096   72964 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0603 12:12:49.242158   72964 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0603 12:12:49.253535   72964 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0603 12:12:49.272592   72964 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0603 12:12:49.272655   72964 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:12:49.272699   72964 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-725022 minikube.k8s.io/updated_at=2024_06_03T12_12_49_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=599070631c2216ebc936292d491e4fe10e15b9d8 minikube.k8s.io/name=embed-certs-725022 minikube.k8s.io/primary=true
	I0603 12:12:49.301181   72964 ops.go:34] apiserver oom_adj: -16
	I0603 12:12:49.473931   72964 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:12:49.974552   72964 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:12:50.474107   72964 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:12:50.974508   72964 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:12:51.474202   72964 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:12:51.974903   72964 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:12:52.474722   72964 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:12:52.973981   72964 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:12:53.473979   72964 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:12:53.974372   72964 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:12:54.474057   72964 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:12:52.215684   73662 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0603 12:12:52.215951   73662 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0603 12:12:54.974299   72964 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:12:55.474704   72964 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:12:55.973998   72964 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:12:56.474351   72964 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:12:56.974942   72964 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:12:57.474651   72964 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:12:57.974575   72964 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:12:58.474054   72964 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:12:58.974928   72964 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:12:59.474724   72964 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:12:59.974538   72964 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:13:00.474341   72964 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:13:00.974134   72964 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:13:01.474970   72964 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:13:01.974549   72964 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:13:02.071778   72964 kubeadm.go:1107] duration metric: took 12.799179684s to wait for elevateKubeSystemPrivileges
	W0603 12:13:02.071819   72964 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0603 12:13:02.071826   72964 kubeadm.go:393] duration metric: took 5m13.883244188s to StartCluster
	I0603 12:13:02.071847   72964 settings.go:142] acquiring lock: {Name:mkda1bdbbfe91266270f1d999e6d56fc2830d6f8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 12:13:02.071926   72964 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19008-7755/kubeconfig
	I0603 12:13:02.073849   72964 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19008-7755/kubeconfig: {Name:mk2f45bf5da44c5570e14124ae482cac309d97b6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 12:13:02.074094   72964 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.72.245 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0603 12:13:02.075473   72964 out.go:177] * Verifying Kubernetes components...
	I0603 12:13:02.074201   72964 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0603 12:13:02.074273   72964 config.go:182] Loaded profile config "embed-certs-725022": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0603 12:13:02.076687   72964 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0603 12:13:02.076702   72964 addons.go:69] Setting default-storageclass=true in profile "embed-certs-725022"
	I0603 12:13:02.076709   72964 addons.go:69] Setting metrics-server=true in profile "embed-certs-725022"
	I0603 12:13:02.076735   72964 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-725022"
	I0603 12:13:02.076739   72964 addons.go:234] Setting addon metrics-server=true in "embed-certs-725022"
	W0603 12:13:02.076747   72964 addons.go:243] addon metrics-server should already be in state true
	I0603 12:13:02.076779   72964 host.go:66] Checking if "embed-certs-725022" exists ...
	I0603 12:13:02.077065   72964 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 12:13:02.077105   72964 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 12:13:02.077123   72964 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 12:13:02.077144   72964 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 12:13:02.076690   72964 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-725022"
	I0603 12:13:02.077321   72964 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-725022"
	W0603 12:13:02.077330   72964 addons.go:243] addon storage-provisioner should already be in state true
	I0603 12:13:02.077353   72964 host.go:66] Checking if "embed-certs-725022" exists ...
	I0603 12:13:02.077701   72964 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 12:13:02.077727   72964 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 12:13:02.093285   72964 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38087
	I0603 12:13:02.093594   72964 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41067
	I0603 12:13:02.093714   72964 main.go:141] libmachine: () Calling .GetVersion
	I0603 12:13:02.094085   72964 main.go:141] libmachine: () Calling .GetVersion
	I0603 12:13:02.094294   72964 main.go:141] libmachine: Using API Version  1
	I0603 12:13:02.094315   72964 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 12:13:02.094587   72964 main.go:141] libmachine: Using API Version  1
	I0603 12:13:02.094609   72964 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 12:13:02.094689   72964 main.go:141] libmachine: () Calling .GetMachineName
	I0603 12:13:02.094950   72964 main.go:141] libmachine: () Calling .GetMachineName
	I0603 12:13:02.095244   72964 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 12:13:02.095268   72964 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 12:13:02.095454   72964 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 12:13:02.095491   72964 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 12:13:02.096441   72964 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39221
	I0603 12:13:02.097030   72964 main.go:141] libmachine: () Calling .GetVersion
	I0603 12:13:02.097568   72964 main.go:141] libmachine: Using API Version  1
	I0603 12:13:02.097590   72964 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 12:13:02.097931   72964 main.go:141] libmachine: () Calling .GetMachineName
	I0603 12:13:02.098114   72964 main.go:141] libmachine: (embed-certs-725022) Calling .GetState
	I0603 12:13:02.101980   72964 addons.go:234] Setting addon default-storageclass=true in "embed-certs-725022"
	W0603 12:13:02.102004   72964 addons.go:243] addon default-storageclass should already be in state true
	I0603 12:13:02.102030   72964 host.go:66] Checking if "embed-certs-725022" exists ...
	I0603 12:13:02.102405   72964 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 12:13:02.102443   72964 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 12:13:02.110825   72964 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44273
	I0603 12:13:02.111295   72964 main.go:141] libmachine: () Calling .GetVersion
	I0603 12:13:02.111721   72964 main.go:141] libmachine: Using API Version  1
	I0603 12:13:02.111743   72964 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 12:13:02.112109   72964 main.go:141] libmachine: () Calling .GetMachineName
	I0603 12:13:02.112287   72964 main.go:141] libmachine: (embed-certs-725022) Calling .GetState
	I0603 12:13:02.112969   72964 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46567
	I0603 12:13:02.113391   72964 main.go:141] libmachine: () Calling .GetVersion
	I0603 12:13:02.113883   72964 main.go:141] libmachine: Using API Version  1
	I0603 12:13:02.113898   72964 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 12:13:02.113960   72964 main.go:141] libmachine: (embed-certs-725022) Calling .DriverName
	I0603 12:13:02.115733   72964 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0603 12:13:02.114328   72964 main.go:141] libmachine: () Calling .GetMachineName
	I0603 12:13:02.116913   72964 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0603 12:13:02.116925   72964 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0603 12:13:02.116937   72964 main.go:141] libmachine: (embed-certs-725022) Calling .GetSSHHostname
	I0603 12:13:02.117042   72964 main.go:141] libmachine: (embed-certs-725022) Calling .GetState
	I0603 12:13:02.119310   72964 main.go:141] libmachine: (embed-certs-725022) Calling .DriverName
	I0603 12:13:02.119549   72964 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45585
	I0603 12:13:02.120720   72964 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0603 12:13:02.119998   72964 main.go:141] libmachine: () Calling .GetVersion
	I0603 12:13:02.120276   72964 main.go:141] libmachine: (embed-certs-725022) DBG | domain embed-certs-725022 has defined MAC address 52:54:00:ba:41:8c in network mk-embed-certs-725022
	I0603 12:13:02.122038   72964 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0603 12:13:02.122054   72964 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0603 12:13:02.122072   72964 main.go:141] libmachine: (embed-certs-725022) Calling .GetSSHHostname
	I0603 12:13:02.120815   72964 main.go:141] libmachine: (embed-certs-725022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:41:8c", ip: ""} in network mk-embed-certs-725022: {Iface:virbr3 ExpiryTime:2024-06-03 12:58:10 +0000 UTC Type:0 Mac:52:54:00:ba:41:8c Iaid: IPaddr:192.168.72.245 Prefix:24 Hostname:embed-certs-725022 Clientid:01:52:54:00:ba:41:8c}
	I0603 12:13:02.122134   72964 main.go:141] libmachine: (embed-certs-725022) DBG | domain embed-certs-725022 has defined IP address 192.168.72.245 and MAC address 52:54:00:ba:41:8c in network mk-embed-certs-725022
	I0603 12:13:02.120873   72964 main.go:141] libmachine: (embed-certs-725022) Calling .GetSSHPort
	I0603 12:13:02.121231   72964 main.go:141] libmachine: Using API Version  1
	I0603 12:13:02.122186   72964 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 12:13:02.122623   72964 main.go:141] libmachine: () Calling .GetMachineName
	I0603 12:13:02.122637   72964 main.go:141] libmachine: (embed-certs-725022) Calling .GetSSHKeyPath
	I0603 12:13:02.122823   72964 main.go:141] libmachine: (embed-certs-725022) Calling .GetSSHUsername
	I0603 12:13:02.123306   72964 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 12:13:02.123365   72964 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 12:13:02.123751   72964 sshutil.go:53] new ssh client: &{IP:192.168.72.245 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19008-7755/.minikube/machines/embed-certs-725022/id_rsa Username:docker}
	I0603 12:13:02.125086   72964 main.go:141] libmachine: (embed-certs-725022) DBG | domain embed-certs-725022 has defined MAC address 52:54:00:ba:41:8c in network mk-embed-certs-725022
	I0603 12:13:02.125450   72964 main.go:141] libmachine: (embed-certs-725022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:41:8c", ip: ""} in network mk-embed-certs-725022: {Iface:virbr3 ExpiryTime:2024-06-03 12:58:10 +0000 UTC Type:0 Mac:52:54:00:ba:41:8c Iaid: IPaddr:192.168.72.245 Prefix:24 Hostname:embed-certs-725022 Clientid:01:52:54:00:ba:41:8c}
	I0603 12:13:02.125474   72964 main.go:141] libmachine: (embed-certs-725022) DBG | domain embed-certs-725022 has defined IP address 192.168.72.245 and MAC address 52:54:00:ba:41:8c in network mk-embed-certs-725022
	I0603 12:13:02.125627   72964 main.go:141] libmachine: (embed-certs-725022) Calling .GetSSHPort
	I0603 12:13:02.125863   72964 main.go:141] libmachine: (embed-certs-725022) Calling .GetSSHKeyPath
	I0603 12:13:02.126050   72964 main.go:141] libmachine: (embed-certs-725022) Calling .GetSSHUsername
	I0603 12:13:02.126199   72964 sshutil.go:53] new ssh client: &{IP:192.168.72.245 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19008-7755/.minikube/machines/embed-certs-725022/id_rsa Username:docker}
	I0603 12:13:02.140680   72964 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38775
	I0603 12:13:02.141121   72964 main.go:141] libmachine: () Calling .GetVersion
	I0603 12:13:02.141624   72964 main.go:141] libmachine: Using API Version  1
	I0603 12:13:02.141649   72964 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 12:13:02.142002   72964 main.go:141] libmachine: () Calling .GetMachineName
	I0603 12:13:02.142377   72964 main.go:141] libmachine: (embed-certs-725022) Calling .GetState
	I0603 12:13:02.144249   72964 main.go:141] libmachine: (embed-certs-725022) Calling .DriverName
	I0603 12:13:02.144453   72964 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0603 12:13:02.144469   72964 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0603 12:13:02.144486   72964 main.go:141] libmachine: (embed-certs-725022) Calling .GetSSHHostname
	I0603 12:13:02.147627   72964 main.go:141] libmachine: (embed-certs-725022) DBG | domain embed-certs-725022 has defined MAC address 52:54:00:ba:41:8c in network mk-embed-certs-725022
	I0603 12:13:02.148109   72964 main.go:141] libmachine: (embed-certs-725022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:41:8c", ip: ""} in network mk-embed-certs-725022: {Iface:virbr3 ExpiryTime:2024-06-03 12:58:10 +0000 UTC Type:0 Mac:52:54:00:ba:41:8c Iaid: IPaddr:192.168.72.245 Prefix:24 Hostname:embed-certs-725022 Clientid:01:52:54:00:ba:41:8c}
	I0603 12:13:02.148129   72964 main.go:141] libmachine: (embed-certs-725022) DBG | domain embed-certs-725022 has defined IP address 192.168.72.245 and MAC address 52:54:00:ba:41:8c in network mk-embed-certs-725022
	I0603 12:13:02.148304   72964 main.go:141] libmachine: (embed-certs-725022) Calling .GetSSHPort
	I0603 12:13:02.148486   72964 main.go:141] libmachine: (embed-certs-725022) Calling .GetSSHKeyPath
	I0603 12:13:02.148604   72964 main.go:141] libmachine: (embed-certs-725022) Calling .GetSSHUsername
	I0603 12:13:02.148741   72964 sshutil.go:53] new ssh client: &{IP:192.168.72.245 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19008-7755/.minikube/machines/embed-certs-725022/id_rsa Username:docker}
	I0603 12:13:02.304095   72964 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0603 12:13:02.338638   72964 node_ready.go:35] waiting up to 6m0s for node "embed-certs-725022" to be "Ready" ...
	I0603 12:13:02.347843   72964 node_ready.go:49] node "embed-certs-725022" has status "Ready":"True"
	I0603 12:13:02.347872   72964 node_ready.go:38] duration metric: took 9.197667ms for node "embed-certs-725022" to be "Ready" ...
	I0603 12:13:02.347885   72964 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0603 12:13:02.353074   72964 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-4gbj2" in "kube-system" namespace to be "Ready" ...
	I0603 12:13:02.437841   72964 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0603 12:13:02.477856   72964 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0603 12:13:02.477876   72964 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0603 12:13:02.487138   72964 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0603 12:13:02.530568   72964 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0603 12:13:02.530591   72964 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0603 12:13:02.606906   72964 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0603 12:13:02.606933   72964 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0603 12:13:02.708268   72964 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0603 12:13:03.372809   72964 main.go:141] libmachine: Making call to close driver server
	I0603 12:13:03.372886   72964 main.go:141] libmachine: (embed-certs-725022) Calling .Close
	I0603 12:13:03.372924   72964 main.go:141] libmachine: Making call to close driver server
	I0603 12:13:03.372982   72964 main.go:141] libmachine: (embed-certs-725022) Calling .Close
	I0603 12:13:03.373369   72964 main.go:141] libmachine: Successfully made call to close driver server
	I0603 12:13:03.373457   72964 main.go:141] libmachine: Making call to close connection to plugin binary
	I0603 12:13:03.373472   72964 main.go:141] libmachine: Making call to close driver server
	I0603 12:13:03.373480   72964 main.go:141] libmachine: (embed-certs-725022) Calling .Close
	I0603 12:13:03.373412   72964 main.go:141] libmachine: Successfully made call to close driver server
	I0603 12:13:03.373510   72964 main.go:141] libmachine: Making call to close connection to plugin binary
	I0603 12:13:03.373522   72964 main.go:141] libmachine: Making call to close driver server
	I0603 12:13:03.373533   72964 main.go:141] libmachine: (embed-certs-725022) Calling .Close
	I0603 12:13:03.373417   72964 main.go:141] libmachine: (embed-certs-725022) DBG | Closing plugin on server side
	I0603 12:13:03.373431   72964 main.go:141] libmachine: (embed-certs-725022) DBG | Closing plugin on server side
	I0603 12:13:03.373858   72964 main.go:141] libmachine: Successfully made call to close driver server
	I0603 12:13:03.373873   72964 main.go:141] libmachine: Making call to close connection to plugin binary
	I0603 12:13:03.374065   72964 main.go:141] libmachine: (embed-certs-725022) DBG | Closing plugin on server side
	I0603 12:13:03.374087   72964 main.go:141] libmachine: Successfully made call to close driver server
	I0603 12:13:03.374093   72964 main.go:141] libmachine: Making call to close connection to plugin binary
	I0603 12:13:03.374168   72964 main.go:141] libmachine: (embed-certs-725022) DBG | Closing plugin on server side
	I0603 12:13:03.404799   72964 main.go:141] libmachine: Making call to close driver server
	I0603 12:13:03.404825   72964 main.go:141] libmachine: (embed-certs-725022) Calling .Close
	I0603 12:13:03.405101   72964 main.go:141] libmachine: (embed-certs-725022) DBG | Closing plugin on server side
	I0603 12:13:03.405101   72964 main.go:141] libmachine: Successfully made call to close driver server
	I0603 12:13:03.405125   72964 main.go:141] libmachine: Making call to close connection to plugin binary
	I0603 12:13:03.855630   72964 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.147319188s)
	I0603 12:13:03.855683   72964 main.go:141] libmachine: Making call to close driver server
	I0603 12:13:03.855700   72964 main.go:141] libmachine: (embed-certs-725022) Calling .Close
	I0603 12:13:03.856046   72964 main.go:141] libmachine: (embed-certs-725022) DBG | Closing plugin on server side
	I0603 12:13:03.856085   72964 main.go:141] libmachine: Successfully made call to close driver server
	I0603 12:13:03.856099   72964 main.go:141] libmachine: Making call to close connection to plugin binary
	I0603 12:13:03.856108   72964 main.go:141] libmachine: Making call to close driver server
	I0603 12:13:03.856119   72964 main.go:141] libmachine: (embed-certs-725022) Calling .Close
	I0603 12:13:03.856408   72964 main.go:141] libmachine: Successfully made call to close driver server
	I0603 12:13:03.856426   72964 main.go:141] libmachine: Making call to close connection to plugin binary
	I0603 12:13:03.856436   72964 addons.go:475] Verifying addon metrics-server=true in "embed-certs-725022"
	I0603 12:13:03.858229   72964 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0603 12:13:03.859384   72964 addons.go:510] duration metric: took 1.785186744s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0603 12:13:04.360708   72964 pod_ready.go:102] pod "coredns-7db6d8ff4d-4gbj2" in "kube-system" namespace has status "Ready":"False"
	I0603 12:13:04.860041   72964 pod_ready.go:92] pod "coredns-7db6d8ff4d-4gbj2" in "kube-system" namespace has status "Ready":"True"
	I0603 12:13:04.860064   72964 pod_ready.go:81] duration metric: took 2.506957346s for pod "coredns-7db6d8ff4d-4gbj2" in "kube-system" namespace to be "Ready" ...
	I0603 12:13:04.860077   72964 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-x9fw5" in "kube-system" namespace to be "Ready" ...
	I0603 12:13:04.864947   72964 pod_ready.go:92] pod "coredns-7db6d8ff4d-x9fw5" in "kube-system" namespace has status "Ready":"True"
	I0603 12:13:04.864967   72964 pod_ready.go:81] duration metric: took 4.883476ms for pod "coredns-7db6d8ff4d-x9fw5" in "kube-system" namespace to be "Ready" ...
	I0603 12:13:04.864975   72964 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-725022" in "kube-system" namespace to be "Ready" ...
	I0603 12:13:04.869979   72964 pod_ready.go:92] pod "etcd-embed-certs-725022" in "kube-system" namespace has status "Ready":"True"
	I0603 12:13:04.870000   72964 pod_ready.go:81] duration metric: took 5.018776ms for pod "etcd-embed-certs-725022" in "kube-system" namespace to be "Ready" ...
	I0603 12:13:04.870012   72964 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-725022" in "kube-system" namespace to be "Ready" ...
	I0603 12:13:04.875292   72964 pod_ready.go:92] pod "kube-apiserver-embed-certs-725022" in "kube-system" namespace has status "Ready":"True"
	I0603 12:13:04.875309   72964 pod_ready.go:81] duration metric: took 5.289101ms for pod "kube-apiserver-embed-certs-725022" in "kube-system" namespace to be "Ready" ...
	I0603 12:13:04.875317   72964 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-725022" in "kube-system" namespace to be "Ready" ...
	I0603 12:13:04.883604   72964 pod_ready.go:92] pod "kube-controller-manager-embed-certs-725022" in "kube-system" namespace has status "Ready":"True"
	I0603 12:13:04.883619   72964 pod_ready.go:81] duration metric: took 8.297056ms for pod "kube-controller-manager-embed-certs-725022" in "kube-system" namespace to be "Ready" ...
	I0603 12:13:04.883627   72964 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-7qp6h" in "kube-system" namespace to be "Ready" ...
	I0603 12:13:05.257971   72964 pod_ready.go:92] pod "kube-proxy-7qp6h" in "kube-system" namespace has status "Ready":"True"
	I0603 12:13:05.257994   72964 pod_ready.go:81] duration metric: took 374.360354ms for pod "kube-proxy-7qp6h" in "kube-system" namespace to be "Ready" ...
	I0603 12:13:05.258003   72964 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-725022" in "kube-system" namespace to be "Ready" ...
	I0603 12:13:05.657811   72964 pod_ready.go:92] pod "kube-scheduler-embed-certs-725022" in "kube-system" namespace has status "Ready":"True"
	I0603 12:13:05.657838   72964 pod_ready.go:81] duration metric: took 399.828323ms for pod "kube-scheduler-embed-certs-725022" in "kube-system" namespace to be "Ready" ...
	I0603 12:13:05.657849   72964 pod_ready.go:38] duration metric: took 3.309954137s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0603 12:13:05.657866   72964 api_server.go:52] waiting for apiserver process to appear ...
	I0603 12:13:05.657920   72964 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:13:05.673837   72964 api_server.go:72] duration metric: took 3.599705436s to wait for apiserver process to appear ...
	I0603 12:13:05.673858   72964 api_server.go:88] waiting for apiserver healthz status ...
	I0603 12:13:05.673876   72964 api_server.go:253] Checking apiserver healthz at https://192.168.72.245:8443/healthz ...
	I0603 12:13:05.679549   72964 api_server.go:279] https://192.168.72.245:8443/healthz returned 200:
	ok
	I0603 12:13:05.680688   72964 api_server.go:141] control plane version: v1.30.1
	I0603 12:13:05.680709   72964 api_server.go:131] duration metric: took 6.844232ms to wait for apiserver health ...
	I0603 12:13:05.680717   72964 system_pods.go:43] waiting for kube-system pods to appear ...
	I0603 12:13:05.861416   72964 system_pods.go:59] 9 kube-system pods found
	I0603 12:13:05.861452   72964 system_pods.go:61] "coredns-7db6d8ff4d-4gbj2" [0e46c731-84e4-4cb2-8125-2b61c10916a3] Running
	I0603 12:13:05.861459   72964 system_pods.go:61] "coredns-7db6d8ff4d-x9fw5" [1ed6c0e0-2d13-410f-bdf1-6620fb2503ed] Running
	I0603 12:13:05.861469   72964 system_pods.go:61] "etcd-embed-certs-725022" [7c8767c0-ca82-495c-92fa-759b698ebd0f] Running
	I0603 12:13:05.861475   72964 system_pods.go:61] "kube-apiserver-embed-certs-725022" [fe019ffc-5b0c-4271-a9dd-830262d1edd9] Running
	I0603 12:13:05.861479   72964 system_pods.go:61] "kube-controller-manager-embed-certs-725022" [8bde2240-7021-4ab7-9e51-2a7b921c4bf1] Running
	I0603 12:13:05.861483   72964 system_pods.go:61] "kube-proxy-7qp6h" [7869cd1d-785d-401d-aceb-854cffd63d73] Running
	I0603 12:13:05.861489   72964 system_pods.go:61] "kube-scheduler-embed-certs-725022" [ff93e1d0-8bb2-4026-b9d2-1710dd9f18b7] Running
	I0603 12:13:05.861497   72964 system_pods.go:61] "metrics-server-569cc877fc-jgmbs" [148d8ece-e094-4df9-989a-1bc59a33b7ca] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0603 12:13:05.861504   72964 system_pods.go:61] "storage-provisioner" [cde9aa2d-6a26-4f83-b5df-ae24b22df27a] Running
	I0603 12:13:05.861515   72964 system_pods.go:74] duration metric: took 180.791789ms to wait for pod list to return data ...
	I0603 12:13:05.861526   72964 default_sa.go:34] waiting for default service account to be created ...
	I0603 12:13:06.058059   72964 default_sa.go:45] found service account: "default"
	I0603 12:13:06.058088   72964 default_sa.go:55] duration metric: took 196.551592ms for default service account to be created ...
	I0603 12:13:06.058100   72964 system_pods.go:116] waiting for k8s-apps to be running ...
	I0603 12:13:06.261793   72964 system_pods.go:86] 9 kube-system pods found
	I0603 12:13:06.261828   72964 system_pods.go:89] "coredns-7db6d8ff4d-4gbj2" [0e46c731-84e4-4cb2-8125-2b61c10916a3] Running
	I0603 12:13:06.261835   72964 system_pods.go:89] "coredns-7db6d8ff4d-x9fw5" [1ed6c0e0-2d13-410f-bdf1-6620fb2503ed] Running
	I0603 12:13:06.261840   72964 system_pods.go:89] "etcd-embed-certs-725022" [7c8767c0-ca82-495c-92fa-759b698ebd0f] Running
	I0603 12:13:06.261846   72964 system_pods.go:89] "kube-apiserver-embed-certs-725022" [fe019ffc-5b0c-4271-a9dd-830262d1edd9] Running
	I0603 12:13:06.261853   72964 system_pods.go:89] "kube-controller-manager-embed-certs-725022" [8bde2240-7021-4ab7-9e51-2a7b921c4bf1] Running
	I0603 12:13:06.261860   72964 system_pods.go:89] "kube-proxy-7qp6h" [7869cd1d-785d-401d-aceb-854cffd63d73] Running
	I0603 12:13:06.261866   72964 system_pods.go:89] "kube-scheduler-embed-certs-725022" [ff93e1d0-8bb2-4026-b9d2-1710dd9f18b7] Running
	I0603 12:13:06.261877   72964 system_pods.go:89] "metrics-server-569cc877fc-jgmbs" [148d8ece-e094-4df9-989a-1bc59a33b7ca] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0603 12:13:06.261888   72964 system_pods.go:89] "storage-provisioner" [cde9aa2d-6a26-4f83-b5df-ae24b22df27a] Running
	I0603 12:13:06.261898   72964 system_pods.go:126] duration metric: took 203.791167ms to wait for k8s-apps to be running ...
	I0603 12:13:06.261910   72964 system_svc.go:44] waiting for kubelet service to be running ....
	I0603 12:13:06.261965   72964 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0603 12:13:06.277270   72964 system_svc.go:56] duration metric: took 15.351048ms WaitForService to wait for kubelet
	I0603 12:13:06.277313   72964 kubeadm.go:576] duration metric: took 4.203172406s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0603 12:13:06.277333   72964 node_conditions.go:102] verifying NodePressure condition ...
	I0603 12:13:06.458480   72964 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0603 12:13:06.458508   72964 node_conditions.go:123] node cpu capacity is 2
	I0603 12:13:06.458519   72964 node_conditions.go:105] duration metric: took 181.181522ms to run NodePressure ...
	I0603 12:13:06.458530   72964 start.go:240] waiting for startup goroutines ...
	I0603 12:13:06.458536   72964 start.go:245] waiting for cluster config update ...
	I0603 12:13:06.458546   72964 start.go:254] writing updated cluster config ...
	I0603 12:13:06.458796   72964 ssh_runner.go:195] Run: rm -f paused
	I0603 12:13:06.511692   72964 start.go:600] kubectl: 1.30.1, cluster: 1.30.1 (minor skew: 0)
	I0603 12:13:06.513617   72964 out.go:177] * Done! kubectl is now configured to use "embed-certs-725022" cluster and "default" namespace by default
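	The readiness and health checks recorded above can be reproduced by hand against the same profile; a minimal sketch, assuming the embed-certs-725022 kubectl context and the 192.168.72.245:8443 apiserver endpoint captured in this run are still reachable (both names are taken from the log, not guaranteed to exist elsewhere):
		# list the system-critical pods minikube waited on (coredns, etcd, kube-apiserver, kube-proxy, kube-scheduler)
		kubectl --context embed-certs-725022 -n kube-system get pods -o wide
		# poll the same healthz endpoint the log checks; -k skips TLS verification for a quick manual probe
		curl -k https://192.168.72.245:8443/healthz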
	I0603 12:13:32.215819   73662 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0603 12:13:32.216031   73662 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0603 12:13:32.216075   73662 kubeadm.go:309] 
	I0603 12:13:32.216149   73662 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0603 12:13:32.216254   73662 kubeadm.go:309] 		timed out waiting for the condition
	I0603 12:13:32.216284   73662 kubeadm.go:309] 
	I0603 12:13:32.216349   73662 kubeadm.go:309] 	This error is likely caused by:
	I0603 12:13:32.216394   73662 kubeadm.go:309] 		- The kubelet is not running
	I0603 12:13:32.216554   73662 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0603 12:13:32.216577   73662 kubeadm.go:309] 
	I0603 12:13:32.216688   73662 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0603 12:13:32.216722   73662 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0603 12:13:32.216764   73662 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0603 12:13:32.216773   73662 kubeadm.go:309] 
	I0603 12:13:32.216888   73662 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0603 12:13:32.217006   73662 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0603 12:13:32.217031   73662 kubeadm.go:309] 
	I0603 12:13:32.217165   73662 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0603 12:13:32.217278   73662 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0603 12:13:32.217412   73662 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0603 12:13:32.217594   73662 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0603 12:13:32.217618   73662 kubeadm.go:309] 
	I0603 12:13:32.218376   73662 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0603 12:13:32.218449   73662 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0603 12:13:32.218578   73662 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	W0603 12:13:32.218719   73662 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0603 12:13:32.218776   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0603 12:13:32.678357   73662 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0603 12:13:32.693276   73662 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0603 12:13:32.702964   73662 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0603 12:13:32.702986   73662 kubeadm.go:156] found existing configuration files:
	
	I0603 12:13:32.703025   73662 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0603 12:13:32.712508   73662 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0603 12:13:32.712555   73662 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0603 12:13:32.722219   73662 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0603 12:13:32.731648   73662 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0603 12:13:32.731702   73662 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0603 12:13:32.741195   73662 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0603 12:13:32.750711   73662 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0603 12:13:32.750764   73662 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0603 12:13:32.760654   73662 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0603 12:13:32.769838   73662 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0603 12:13:32.769881   73662 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0603 12:13:32.780973   73662 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0603 12:13:32.850830   73662 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0603 12:13:32.850883   73662 kubeadm.go:309] [preflight] Running pre-flight checks
	I0603 12:13:32.999201   73662 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0603 12:13:32.999328   73662 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0603 12:13:32.999428   73662 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0603 12:13:33.184771   73662 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0603 12:13:33.187327   73662 out.go:204]   - Generating certificates and keys ...
	I0603 12:13:33.187398   73662 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0603 12:13:33.187487   73662 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0603 12:13:33.187586   73662 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0603 12:13:33.187682   73662 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0603 12:13:33.187788   73662 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0603 12:13:33.187887   73662 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0603 12:13:33.187981   73662 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0603 12:13:33.188107   73662 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0603 12:13:33.188522   73662 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0603 12:13:33.188801   73662 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0603 12:13:33.188880   73662 kubeadm.go:309] [certs] Using the existing "sa" key
	I0603 12:13:33.188991   73662 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0603 12:13:33.334289   73662 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0603 12:13:33.523806   73662 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0603 12:13:33.699531   73662 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0603 12:13:33.750555   73662 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0603 12:13:33.769976   73662 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0603 12:13:33.770924   73662 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0603 12:13:33.770986   73662 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0603 12:13:33.921095   73662 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0603 12:13:33.923915   73662 out.go:204]   - Booting up control plane ...
	I0603 12:13:33.924071   73662 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0603 12:13:33.930998   73662 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0603 12:13:33.934088   73662 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0603 12:13:33.935783   73662 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0603 12:13:33.939727   73662 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0603 12:14:13.940542   73662 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0603 12:14:13.940993   73662 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0603 12:14:13.941324   73662 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0603 12:14:18.941485   73662 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0603 12:14:18.941730   73662 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0603 12:14:28.942021   73662 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0603 12:14:28.942229   73662 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0603 12:14:48.942823   73662 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0603 12:14:48.943115   73662 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0603 12:15:28.944455   73662 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0603 12:15:28.944758   73662 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0603 12:15:28.944781   73662 kubeadm.go:309] 
	I0603 12:15:28.944835   73662 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0603 12:15:28.944914   73662 kubeadm.go:309] 		timed out waiting for the condition
	I0603 12:15:28.944925   73662 kubeadm.go:309] 
	I0603 12:15:28.944965   73662 kubeadm.go:309] 	This error is likely caused by:
	I0603 12:15:28.945008   73662 kubeadm.go:309] 		- The kubelet is not running
	I0603 12:15:28.945152   73662 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0603 12:15:28.945168   73662 kubeadm.go:309] 
	I0603 12:15:28.945322   73662 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0603 12:15:28.945378   73662 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0603 12:15:28.945423   73662 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0603 12:15:28.945433   73662 kubeadm.go:309] 
	I0603 12:15:28.945568   73662 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0603 12:15:28.945695   73662 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0603 12:15:28.945717   73662 kubeadm.go:309] 
	I0603 12:15:28.945883   73662 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0603 12:15:28.946014   73662 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0603 12:15:28.946123   73662 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0603 12:15:28.946234   73662 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0603 12:15:28.946263   73662 kubeadm.go:309] 
	I0603 12:15:28.947236   73662 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0603 12:15:28.947323   73662 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0603 12:15:28.947455   73662 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	I0603 12:15:28.947531   73662 kubeadm.go:393] duration metric: took 7m57.88734097s to StartCluster
	I0603 12:15:28.947585   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 12:15:28.947638   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 12:15:28.993664   73662 cri.go:89] found id: ""
	I0603 12:15:28.993694   73662 logs.go:276] 0 containers: []
	W0603 12:15:28.993705   73662 logs.go:278] No container was found matching "kube-apiserver"
	I0603 12:15:28.993712   73662 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 12:15:28.993774   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 12:15:29.030686   73662 cri.go:89] found id: ""
	I0603 12:15:29.030720   73662 logs.go:276] 0 containers: []
	W0603 12:15:29.030730   73662 logs.go:278] No container was found matching "etcd"
	I0603 12:15:29.030738   73662 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 12:15:29.030803   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 12:15:29.067047   73662 cri.go:89] found id: ""
	I0603 12:15:29.067076   73662 logs.go:276] 0 containers: []
	W0603 12:15:29.067086   73662 logs.go:278] No container was found matching "coredns"
	I0603 12:15:29.067092   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 12:15:29.067154   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 12:15:29.107392   73662 cri.go:89] found id: ""
	I0603 12:15:29.107416   73662 logs.go:276] 0 containers: []
	W0603 12:15:29.107424   73662 logs.go:278] No container was found matching "kube-scheduler"
	I0603 12:15:29.107430   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 12:15:29.107483   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 12:15:29.159886   73662 cri.go:89] found id: ""
	I0603 12:15:29.159916   73662 logs.go:276] 0 containers: []
	W0603 12:15:29.159925   73662 logs.go:278] No container was found matching "kube-proxy"
	I0603 12:15:29.159934   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 12:15:29.159994   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 12:15:29.195187   73662 cri.go:89] found id: ""
	I0603 12:15:29.195218   73662 logs.go:276] 0 containers: []
	W0603 12:15:29.195229   73662 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 12:15:29.195236   73662 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 12:15:29.195295   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 12:15:29.233622   73662 cri.go:89] found id: ""
	I0603 12:15:29.233648   73662 logs.go:276] 0 containers: []
	W0603 12:15:29.233656   73662 logs.go:278] No container was found matching "kindnet"
	I0603 12:15:29.233662   73662 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 12:15:29.233717   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 12:15:29.272849   73662 cri.go:89] found id: ""
	I0603 12:15:29.272874   73662 logs.go:276] 0 containers: []
	W0603 12:15:29.272882   73662 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 12:15:29.272891   73662 logs.go:123] Gathering logs for CRI-O ...
	I0603 12:15:29.272901   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 12:15:29.383220   73662 logs.go:123] Gathering logs for container status ...
	I0603 12:15:29.383256   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 12:15:29.424045   73662 logs.go:123] Gathering logs for kubelet ...
	I0603 12:15:29.424076   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 12:15:29.475712   73662 logs.go:123] Gathering logs for dmesg ...
	I0603 12:15:29.475743   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 12:15:29.489841   73662 logs.go:123] Gathering logs for describe nodes ...
	I0603 12:15:29.489868   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 12:15:29.572988   73662 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	W0603 12:15:29.573030   73662 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0603 12:15:29.573068   73662 out.go:239] * 
	W0603 12:15:29.573117   73662 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0603 12:15:29.573138   73662 out.go:239] * 
	W0603 12:15:29.573869   73662 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0603 12:15:29.577458   73662 out.go:177] 
	W0603 12:15:29.578659   73662 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0603 12:15:29.578700   73662 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0603 12:15:29.578716   73662 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0603 12:15:29.580176   73662 out.go:177] 
	
	
	==> CRI-O <==
	Jun 03 12:24:34 old-k8s-version-905554 crio[644]: time="2024-06-03 12:24:34.833608241Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1717417474833570867,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=8c57635c-fe7c-42f9-af34-8fbba249fc8f name=/runtime.v1.ImageService/ImageFsInfo
	Jun 03 12:24:34 old-k8s-version-905554 crio[644]: time="2024-06-03 12:24:34.834283379Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e7c6e014-2b20-4399-9c24-dfb24f71cc4e name=/runtime.v1.RuntimeService/ListContainers
	Jun 03 12:24:34 old-k8s-version-905554 crio[644]: time="2024-06-03 12:24:34.834359980Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e7c6e014-2b20-4399-9c24-dfb24f71cc4e name=/runtime.v1.RuntimeService/ListContainers
	Jun 03 12:24:34 old-k8s-version-905554 crio[644]: time="2024-06-03 12:24:34.834399098Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=e7c6e014-2b20-4399-9c24-dfb24f71cc4e name=/runtime.v1.RuntimeService/ListContainers
	Jun 03 12:24:34 old-k8s-version-905554 crio[644]: time="2024-06-03 12:24:34.867110137Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=ef8b62da-1db9-46b5-89f7-cf5c8c85f62f name=/runtime.v1.RuntimeService/Version
	Jun 03 12:24:34 old-k8s-version-905554 crio[644]: time="2024-06-03 12:24:34.867279488Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=ef8b62da-1db9-46b5-89f7-cf5c8c85f62f name=/runtime.v1.RuntimeService/Version
	Jun 03 12:24:34 old-k8s-version-905554 crio[644]: time="2024-06-03 12:24:34.868917116Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=d0dd104d-c275-47fe-ac78-2f025179bc91 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 03 12:24:34 old-k8s-version-905554 crio[644]: time="2024-06-03 12:24:34.869359507Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1717417474869335401,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d0dd104d-c275-47fe-ac78-2f025179bc91 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 03 12:24:34 old-k8s-version-905554 crio[644]: time="2024-06-03 12:24:34.869994326Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=6146accd-c72c-48ab-80a8-de34d3f1f6f1 name=/runtime.v1.RuntimeService/ListContainers
	Jun 03 12:24:34 old-k8s-version-905554 crio[644]: time="2024-06-03 12:24:34.870067565Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=6146accd-c72c-48ab-80a8-de34d3f1f6f1 name=/runtime.v1.RuntimeService/ListContainers
	Jun 03 12:24:34 old-k8s-version-905554 crio[644]: time="2024-06-03 12:24:34.870101491Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=6146accd-c72c-48ab-80a8-de34d3f1f6f1 name=/runtime.v1.RuntimeService/ListContainers
	Jun 03 12:24:34 old-k8s-version-905554 crio[644]: time="2024-06-03 12:24:34.903005589Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=979d3069-e689-4607-9ae0-b4de5876ba74 name=/runtime.v1.RuntimeService/Version
	Jun 03 12:24:34 old-k8s-version-905554 crio[644]: time="2024-06-03 12:24:34.903126358Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=979d3069-e689-4607-9ae0-b4de5876ba74 name=/runtime.v1.RuntimeService/Version
	Jun 03 12:24:34 old-k8s-version-905554 crio[644]: time="2024-06-03 12:24:34.905421982Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=2544086e-717d-4ffa-856c-eaf1c367329f name=/runtime.v1.ImageService/ImageFsInfo
	Jun 03 12:24:34 old-k8s-version-905554 crio[644]: time="2024-06-03 12:24:34.905860415Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1717417474905816262,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=2544086e-717d-4ffa-856c-eaf1c367329f name=/runtime.v1.ImageService/ImageFsInfo
	Jun 03 12:24:34 old-k8s-version-905554 crio[644]: time="2024-06-03 12:24:34.906697904Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=3d8a7b0d-e6fe-44b2-9b49-9ef78ef7d477 name=/runtime.v1.RuntimeService/ListContainers
	Jun 03 12:24:34 old-k8s-version-905554 crio[644]: time="2024-06-03 12:24:34.906793175Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=3d8a7b0d-e6fe-44b2-9b49-9ef78ef7d477 name=/runtime.v1.RuntimeService/ListContainers
	Jun 03 12:24:34 old-k8s-version-905554 crio[644]: time="2024-06-03 12:24:34.906851741Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=3d8a7b0d-e6fe-44b2-9b49-9ef78ef7d477 name=/runtime.v1.RuntimeService/ListContainers
	Jun 03 12:24:34 old-k8s-version-905554 crio[644]: time="2024-06-03 12:24:34.938886226Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=adbd7103-7e77-488e-8d20-a1c2c62b1f63 name=/runtime.v1.RuntimeService/Version
	Jun 03 12:24:34 old-k8s-version-905554 crio[644]: time="2024-06-03 12:24:34.939038906Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=adbd7103-7e77-488e-8d20-a1c2c62b1f63 name=/runtime.v1.RuntimeService/Version
	Jun 03 12:24:34 old-k8s-version-905554 crio[644]: time="2024-06-03 12:24:34.940243622Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=cc2d1085-1a59-4ba0-91d7-01e3f1d7440c name=/runtime.v1.ImageService/ImageFsInfo
	Jun 03 12:24:34 old-k8s-version-905554 crio[644]: time="2024-06-03 12:24:34.940640251Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1717417474940613187,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=cc2d1085-1a59-4ba0-91d7-01e3f1d7440c name=/runtime.v1.ImageService/ImageFsInfo
	Jun 03 12:24:34 old-k8s-version-905554 crio[644]: time="2024-06-03 12:24:34.941383626Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ad8f9488-77f1-4676-b720-ce1775aa16e2 name=/runtime.v1.RuntimeService/ListContainers
	Jun 03 12:24:34 old-k8s-version-905554 crio[644]: time="2024-06-03 12:24:34.941439193Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ad8f9488-77f1-4676-b720-ce1775aa16e2 name=/runtime.v1.RuntimeService/ListContainers
	Jun 03 12:24:34 old-k8s-version-905554 crio[644]: time="2024-06-03 12:24:34.941475641Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=ad8f9488-77f1-4676-b720-ce1775aa16e2 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Jun 3 12:07] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.067618] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.055262] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.836862] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.470521] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.722215] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000008] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.941743] systemd-fstab-generator[567]: Ignoring "noauto" option for root device
	[  +0.062404] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.063439] systemd-fstab-generator[579]: Ignoring "noauto" option for root device
	[  +0.196803] systemd-fstab-generator[593]: Ignoring "noauto" option for root device
	[  +0.150293] systemd-fstab-generator[605]: Ignoring "noauto" option for root device
	[  +0.306355] systemd-fstab-generator[630]: Ignoring "noauto" option for root device
	[  +6.730916] systemd-fstab-generator[834]: Ignoring "noauto" option for root device
	[  +0.064798] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.734692] systemd-fstab-generator[960]: Ignoring "noauto" option for root device
	[ +12.126331] kauditd_printk_skb: 46 callbacks suppressed
	[Jun 3 12:11] systemd-fstab-generator[5043]: Ignoring "noauto" option for root device
	[Jun 3 12:13] systemd-fstab-generator[5319]: Ignoring "noauto" option for root device
	[  +0.071630] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> kernel <==
	 12:24:35 up 17 min,  0 users,  load average: 0.00, 0.03, 0.06
	Linux old-k8s-version-905554 5.10.207 #1 SMP Wed May 22 22:17:16 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Jun 03 12:24:30 old-k8s-version-905554 kubelet[6499]:         /usr/local/go/src/net/dial.go:580 +0x5e5
	Jun 03 12:24:30 old-k8s-version-905554 kubelet[6499]: net.(*sysDialer).dialSerial(0xc0008ee800, 0x4f7fe40, 0xc000318ea0, 0xc0008c5dd0, 0x1, 0x1, 0x0, 0x0, 0x0, 0x0)
	Jun 03 12:24:30 old-k8s-version-905554 kubelet[6499]:         /usr/local/go/src/net/dial.go:548 +0x152
	Jun 03 12:24:30 old-k8s-version-905554 kubelet[6499]: net.(*Dialer).DialContext(0xc0001da900, 0x4f7fe00, 0xc000120018, 0x48ab5d6, 0x3, 0xc0009f8a50, 0x24, 0x0, 0x0, 0x0, ...)
	Jun 03 12:24:30 old-k8s-version-905554 kubelet[6499]:         /usr/local/go/src/net/dial.go:425 +0x6e5
	Jun 03 12:24:30 old-k8s-version-905554 kubelet[6499]: k8s.io/kubernetes/vendor/k8s.io/client-go/util/connrotation.(*Dialer).DialContext(0xc000753e60, 0x4f7fe00, 0xc000120018, 0x48ab5d6, 0x3, 0xc0009f8a50, 0x24, 0x60, 0x7fe8c403f600, 0x118, ...)
	Jun 03 12:24:30 old-k8s-version-905554 kubelet[6499]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/util/connrotation/connrotation.go:73 +0x7e
	Jun 03 12:24:30 old-k8s-version-905554 kubelet[6499]: net/http.(*Transport).dial(0xc000a0b040, 0x4f7fe00, 0xc000120018, 0x48ab5d6, 0x3, 0xc0009f8a50, 0x24, 0x0, 0x0, 0x0, ...)
	Jun 03 12:24:30 old-k8s-version-905554 kubelet[6499]:         /usr/local/go/src/net/http/transport.go:1141 +0x1fd
	Jun 03 12:24:30 old-k8s-version-905554 kubelet[6499]: net/http.(*Transport).dialConn(0xc000a0b040, 0x4f7fe00, 0xc000120018, 0x0, 0xc000101d40, 0x5, 0xc0009f8a50, 0x24, 0x0, 0xc0008ed680, ...)
	Jun 03 12:24:30 old-k8s-version-905554 kubelet[6499]:         /usr/local/go/src/net/http/transport.go:1575 +0x1abb
	Jun 03 12:24:30 old-k8s-version-905554 kubelet[6499]: net/http.(*Transport).dialConnFor(0xc000a0b040, 0xc000995a20)
	Jun 03 12:24:30 old-k8s-version-905554 kubelet[6499]:         /usr/local/go/src/net/http/transport.go:1421 +0xc6
	Jun 03 12:24:30 old-k8s-version-905554 kubelet[6499]: created by net/http.(*Transport).queueForDial
	Jun 03 12:24:30 old-k8s-version-905554 kubelet[6499]:         /usr/local/go/src/net/http/transport.go:1390 +0x40f
	Jun 03 12:24:30 old-k8s-version-905554 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Jun 03 12:24:30 old-k8s-version-905554 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Jun 03 12:24:31 old-k8s-version-905554 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 114.
	Jun 03 12:24:31 old-k8s-version-905554 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Jun 03 12:24:31 old-k8s-version-905554 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Jun 03 12:24:31 old-k8s-version-905554 kubelet[6508]: I0603 12:24:31.334679    6508 server.go:416] Version: v1.20.0
	Jun 03 12:24:31 old-k8s-version-905554 kubelet[6508]: I0603 12:24:31.335035    6508 server.go:837] Client rotation is on, will bootstrap in background
	Jun 03 12:24:31 old-k8s-version-905554 kubelet[6508]: I0603 12:24:31.337094    6508 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Jun 03 12:24:31 old-k8s-version-905554 kubelet[6508]: W0603 12:24:31.338942    6508 manager.go:159] Cannot detect current cgroup on cgroup v2
	Jun 03 12:24:31 old-k8s-version-905554 kubelet[6508]: I0603 12:24:31.339932    6508 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-905554 -n old-k8s-version-905554
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-905554 -n old-k8s-version-905554: exit status 2 (223.048929ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-905554" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (543.46s)
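For reference, the suggestion printed in the minikube output above maps to commands along these lines. This is a sketch only: the profile name is the one used in this run, and whether the kubelet.cgroup-driver override actually resolves the failure on this image is an assumption the report does not verify.

	# inspect the kubelet unit inside the VM, as the kubeadm output advises
	out/minikube-linux-amd64 -p old-k8s-version-905554 ssh "sudo journalctl -xeu kubelet | tail -n 50"
	# retry the start with the cgroup-driver override named in the suggestion
	out/minikube-linux-amd64 start -p old-k8s-version-905554 --extra-config=kubelet.cgroup-driver=systemd
	# collect full logs for a bug report, as the advice box recommends
	out/minikube-linux-amd64 -p old-k8s-version-905554 logs --file=logs.txt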

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (345.38s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
start_stop_delete_test.go:287: ***** TestStartStop/group/no-preload/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-602118 -n no-preload-602118
start_stop_delete_test.go:287: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: showing logs for failed pods as of 2024-06-03 12:27:08.538786234 +0000 UTC m=+6528.443206126
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-602118 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context no-preload-602118 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (1.821µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context no-preload-602118 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
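A manual spot-check equivalent to this assertion could look like the following sketch; the context, namespace, label selector, and deployment name are taken from the log lines above, and the jsonpath query is illustrative rather than part of the test.

	# list the dashboard pods the test waited 9m0s for
	kubectl --context no-preload-602118 -n kubernetes-dashboard get pods -l k8s-app=kubernetes-dashboard
	# print the scraper image; the test expects it to contain registry.k8s.io/echoserver:1.4
	kubectl --context no-preload-602118 -n kubernetes-dashboard get deploy dashboard-metrics-scraper -o jsonpath='{.spec.template.spec.containers[*].image}'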
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-602118 -n no-preload-602118
helpers_test.go:244: <<< TestStartStop/group/no-preload/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/no-preload/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-602118 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p no-preload-602118 logs -n 25: (1.216525115s)
helpers_test.go:252: TestStartStop/group/no-preload/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p calico-034991 sudo                                  | calico-034991                | jenkins | v1.33.1 | 03 Jun 24 11:58 UTC | 03 Jun 24 11:58 UTC |
	|         | systemctl status crio --all                            |                              |         |         |                     |                     |
	|         | --full --no-pager                                      |                              |         |         |                     |                     |
	| ssh     | -p calico-034991 sudo                                  | calico-034991                | jenkins | v1.33.1 | 03 Jun 24 11:58 UTC | 03 Jun 24 11:58 UTC |
	|         | systemctl cat crio --no-pager                          |                              |         |         |                     |                     |
	| ssh     | -p calico-034991 sudo find                             | calico-034991                | jenkins | v1.33.1 | 03 Jun 24 11:58 UTC | 03 Jun 24 11:58 UTC |
	|         | /etc/crio -type f -exec sh -c                          |                              |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                   |                              |         |         |                     |                     |
	| ssh     | -p calico-034991 sudo crio                             | calico-034991                | jenkins | v1.33.1 | 03 Jun 24 11:58 UTC | 03 Jun 24 11:58 UTC |
	|         | config                                                 |                              |         |         |                     |                     |
	| delete  | -p calico-034991                                       | calico-034991                | jenkins | v1.33.1 | 03 Jun 24 11:58 UTC | 03 Jun 24 11:58 UTC |
	| delete  | -p                                                     | disable-driver-mounts-231568 | jenkins | v1.33.1 | 03 Jun 24 11:58 UTC | 03 Jun 24 11:58 UTC |
	|         | disable-driver-mounts-231568                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-196710 | jenkins | v1.33.1 | 03 Jun 24 11:58 UTC | 03 Jun 24 11:59 UTC |
	|         | default-k8s-diff-port-196710                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-725022            | embed-certs-725022           | jenkins | v1.33.1 | 03 Jun 24 11:59 UTC | 03 Jun 24 11:59 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-725022                                  | embed-certs-725022           | jenkins | v1.33.1 | 03 Jun 24 11:59 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-602118             | no-preload-602118            | jenkins | v1.33.1 | 03 Jun 24 11:59 UTC | 03 Jun 24 11:59 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-602118                                   | no-preload-602118            | jenkins | v1.33.1 | 03 Jun 24 11:59 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-196710  | default-k8s-diff-port-196710 | jenkins | v1.33.1 | 03 Jun 24 11:59 UTC | 03 Jun 24 11:59 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-196710 | jenkins | v1.33.1 | 03 Jun 24 11:59 UTC |                     |
	|         | default-k8s-diff-port-196710                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-905554        | old-k8s-version-905554       | jenkins | v1.33.1 | 03 Jun 24 12:01 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-725022                 | embed-certs-725022           | jenkins | v1.33.1 | 03 Jun 24 12:01 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-725022                                  | embed-certs-725022           | jenkins | v1.33.1 | 03 Jun 24 12:01 UTC | 03 Jun 24 12:13 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.1                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-602118                  | no-preload-602118            | jenkins | v1.33.1 | 03 Jun 24 12:01 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-602118                                   | no-preload-602118            | jenkins | v1.33.1 | 03 Jun 24 12:02 UTC | 03 Jun 24 12:12 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.1                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-196710       | default-k8s-diff-port-196710 | jenkins | v1.33.1 | 03 Jun 24 12:02 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-196710 | jenkins | v1.33.1 | 03 Jun 24 12:02 UTC | 03 Jun 24 12:12 UTC |
	|         | default-k8s-diff-port-196710                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.1                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-905554                              | old-k8s-version-905554       | jenkins | v1.33.1 | 03 Jun 24 12:02 UTC | 03 Jun 24 12:02 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-905554             | old-k8s-version-905554       | jenkins | v1.33.1 | 03 Jun 24 12:02 UTC | 03 Jun 24 12:03 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-905554                              | old-k8s-version-905554       | jenkins | v1.33.1 | 03 Jun 24 12:03 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| delete  | -p old-k8s-version-905554                              | old-k8s-version-905554       | jenkins | v1.33.1 | 03 Jun 24 12:27 UTC | 03 Jun 24 12:27 UTC |
	| start   | -p newest-cni-756935 --memory=2200 --alsologtostderr   | newest-cni-756935            | jenkins | v1.33.1 | 03 Jun 24 12:27 UTC |                     |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.1                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/06/03 12:27:04
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0603 12:27:04.275414   80344 out.go:291] Setting OutFile to fd 1 ...
	I0603 12:27:04.275696   80344 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0603 12:27:04.275707   80344 out.go:304] Setting ErrFile to fd 2...
	I0603 12:27:04.275711   80344 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0603 12:27:04.275936   80344 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19008-7755/.minikube/bin
	I0603 12:27:04.276602   80344 out.go:298] Setting JSON to false
	I0603 12:27:04.277624   80344 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":7769,"bootTime":1717409855,"procs":205,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1060-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0603 12:27:04.277682   80344 start.go:139] virtualization: kvm guest
	I0603 12:27:04.279985   80344 out.go:177] * [newest-cni-756935] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0603 12:27:04.281962   80344 out.go:177]   - MINIKUBE_LOCATION=19008
	I0603 12:27:04.281923   80344 notify.go:220] Checking for updates...
	I0603 12:27:04.283266   80344 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0603 12:27:04.284793   80344 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19008-7755/kubeconfig
	I0603 12:27:04.286045   80344 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19008-7755/.minikube
	I0603 12:27:04.287414   80344 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0603 12:27:04.288611   80344 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0603 12:27:04.290220   80344 config.go:182] Loaded profile config "default-k8s-diff-port-196710": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0603 12:27:04.290336   80344 config.go:182] Loaded profile config "embed-certs-725022": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0603 12:27:04.290440   80344 config.go:182] Loaded profile config "no-preload-602118": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0603 12:27:04.290543   80344 driver.go:392] Setting default libvirt URI to qemu:///system
	I0603 12:27:04.328615   80344 out.go:177] * Using the kvm2 driver based on user configuration
	I0603 12:27:04.329729   80344 start.go:297] selected driver: kvm2
	I0603 12:27:04.329747   80344 start.go:901] validating driver "kvm2" against <nil>
	I0603 12:27:04.329762   80344 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0603 12:27:04.330714   80344 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0603 12:27:04.330792   80344 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19008-7755/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0603 12:27:04.346317   80344 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0603 12:27:04.346374   80344 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	W0603 12:27:04.346434   80344 out.go:239] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I0603 12:27:04.346722   80344 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0603 12:27:04.346781   80344 cni.go:84] Creating CNI manager for ""
	I0603 12:27:04.346793   80344 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0603 12:27:04.346800   80344 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0603 12:27:04.346856   80344 start.go:340] cluster config:
	{Name:newest-cni-756935 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:newest-cni-756935 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0603 12:27:04.346953   80344 iso.go:125] acquiring lock: {Name:mkdc8e745fc6a0fd8e502f6ad2510510ae9abf27 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0603 12:27:04.348876   80344 out.go:177] * Starting "newest-cni-756935" primary control-plane node in "newest-cni-756935" cluster
	I0603 12:27:04.349993   80344 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime crio
	I0603 12:27:04.350031   80344 preload.go:147] Found local preload: /home/jenkins/minikube-integration/19008-7755/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4
	I0603 12:27:04.350043   80344 cache.go:56] Caching tarball of preloaded images
	I0603 12:27:04.350128   80344 preload.go:173] Found /home/jenkins/minikube-integration/19008-7755/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0603 12:27:04.350138   80344 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on crio
	I0603 12:27:04.350216   80344 profile.go:143] Saving config to /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/newest-cni-756935/config.json ...
	I0603 12:27:04.350232   80344 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/newest-cni-756935/config.json: {Name:mke47539e9b14ee756d0e1756e2aee20fecc5c08 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 12:27:04.350350   80344 start.go:360] acquireMachinesLock for newest-cni-756935: {Name:mk7389fd163f37e28f7d4d842bab151f7e27bc7c Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0603 12:27:04.350377   80344 start.go:364] duration metric: took 14.246µs to acquireMachinesLock for "newest-cni-756935"
	I0603 12:27:04.350393   80344 start.go:93] Provisioning new machine with config: &{Name:newest-cni-756935 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:newest-cni-756935 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0603 12:27:04.350447   80344 start.go:125] createHost starting for "" (driver="kvm2")
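	
	The provisioning config logged above corresponds, roughly, to a "minikube start" invocation along the following lines. This is only a hedged reconstruction from the fields in the dump (driver, runtime, memory, feature gates, pod-network-cidr); the test's actual command line may differ or carry extra flags not shown here:
	
	  out/minikube-linux-amd64 start -p newest-cni-756935 \
	    --driver=kvm2 \
	    --container-runtime=crio \
	    --kubernetes-version=v1.30.1 \
	    --memory=2200 \
	    --feature-gates=ServerSideApply=true \
	    --network-plugin=cni \
	    --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16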
	
	
	==> CRI-O <==
	Jun 03 12:27:09 no-preload-602118 crio[725]: time="2024-06-03 12:27:09.157715538Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1717417629157693072,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:99934,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=dde92f47-d76b-4f5f-8cf3-55bfbb072784 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 03 12:27:09 no-preload-602118 crio[725]: time="2024-06-03 12:27:09.158470814Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=883e39d5-67a7-43c6-bcdb-44f801e7ba04 name=/runtime.v1.RuntimeService/ListContainers
	Jun 03 12:27:09 no-preload-602118 crio[725]: time="2024-06-03 12:27:09.158545306Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=883e39d5-67a7-43c6-bcdb-44f801e7ba04 name=/runtime.v1.RuntimeService/ListContainers
	Jun 03 12:27:09 no-preload-602118 crio[725]: time="2024-06-03 12:27:09.158793576Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:fb248b003c8613b37b12ff79e1f222cab5c038f18c53dd238b97760ebdd1686a,PodSandboxId:2f9f6e560130ad503a3fb16cd826de68b079d3d261c3ffd9adc7f38a9347fae3,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1717416738073641787,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9d9e7c2b-91a9-4394-8a08-a2c076d4b42d,},Annotations:map[string]string{io.kubernetes.container.hash: cf055258,io.kubernetes.container.restartCount: 0,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e9816663d632930c457f52b65f3b813075b3e6e49e03572471737d14171a2bef,PodSandboxId:0ca5bf52da27342b0de4a904a42d2aa48c23283ba6c2596613b1dafa6930796d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1717416738154688772,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-dwptw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7a0437fe-8e83-4acc-a92a-af29bf06db93,},Annotations:map[string]string{io.kubernetes.container.hash: 2dc52ed7,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP
\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:584c23eaff7fc97fc20866acace2641a918972ddde4bc15dd68a27fbc2575e93,PodSandboxId:220fbd721c9026875219d04619cd68d29f31d0a7201cb29af349244390275c37,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1717416737985994074,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-5gmj5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 47
4da426-9414-4a30-8b19-14e555e192de,},Annotations:map[string]string{io.kubernetes.container.hash: 4251bef0,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0f95b604096bb9c35ddcde873a44214fcf5bb4a1918d3767b43aeba25088ceaf,PodSandboxId:036d89d7ad7f4e90bb88f12b72cf2c85bda55787a8ea5c62e674afc2975e95a5,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:
1717416737288481480,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-tfxkl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d6502635-478f-443c-8186-ab0616fcf4ac,},Annotations:map[string]string{io.kubernetes.container.hash: c6c54951,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:998c79f6f292c8080164980650e8a76e11e68daf494b4c6c492f744b50266070,PodSandboxId:86c744cb98f883a17a7004ff42bc11b8b8552a59f6a891044c0212e97dcddc61,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1717416717516790929,Labels:map[string]s
tring{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-602118,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ae3562eee63d85017986173f61212ec0,},Annotations:map[string]string{io.kubernetes.container.hash: 60aa7df7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f7a1aa13e70aab48903fd4acfe8e726e044c09fd249ad876985082b7d2ce28dd,PodSandboxId:a448b605ab5ec3bbd85200834bdb578a6d5e0e13e90c44098ef27993c0ee4975,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1717416717487116516,Labels:map[string]string{io.kubernetes.container.nam
e: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-602118,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 17345709021d24cb267b0ce4add83645,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1d6486b810f4fea2b78f7e1b4375f6351128af8f4f98ae77b3171090ee6ba3e9,PodSandboxId:2c6440b78a8dd4e2e77af45787f6078df707872b27812b40bbac493b2053c406,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1717416717452361331,Labels:map[string]string{io.kubernetes.container.name: k
ube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-602118,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a568811ec88d614b45e242281e5693a1,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6e010bfa69d81ba01cf7bcf124df98ca87e190ccc661236d4a419343715a3ae0,PodSandboxId:769d1926d74f4c8afaa808a0c440b0bd180ec0aea00d6a5e5e6713612b2fd60b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1717416717376494113,Labels:map[string]string{io.kubernetes.container.na
me: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-602118,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 11c3fa6ec0cc81f29fe8e779d24c5099,},Annotations:map[string]string{io.kubernetes.container.hash: ad82f0a8,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=883e39d5-67a7-43c6-bcdb-44f801e7ba04 name=/runtime.v1.RuntimeService/ListContainers
	Jun 03 12:27:09 no-preload-602118 crio[725]: time="2024-06-03 12:27:09.202528308Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=df7b7c36-311e-4624-80d5-0b1401150036 name=/runtime.v1.RuntimeService/Version
	Jun 03 12:27:09 no-preload-602118 crio[725]: time="2024-06-03 12:27:09.202605299Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=df7b7c36-311e-4624-80d5-0b1401150036 name=/runtime.v1.RuntimeService/Version
	Jun 03 12:27:09 no-preload-602118 crio[725]: time="2024-06-03 12:27:09.203496138Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=45882be9-758a-429d-9c8a-84adc1075709 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 03 12:27:09 no-preload-602118 crio[725]: time="2024-06-03 12:27:09.203992482Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1717417629203966745,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:99934,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=45882be9-758a-429d-9c8a-84adc1075709 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 03 12:27:09 no-preload-602118 crio[725]: time="2024-06-03 12:27:09.204461247Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=53a470fc-f52e-46c2-a896-eade6ca3c114 name=/runtime.v1.RuntimeService/ListContainers
	Jun 03 12:27:09 no-preload-602118 crio[725]: time="2024-06-03 12:27:09.204530730Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=53a470fc-f52e-46c2-a896-eade6ca3c114 name=/runtime.v1.RuntimeService/ListContainers
	Jun 03 12:27:09 no-preload-602118 crio[725]: time="2024-06-03 12:27:09.205045334Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:fb248b003c8613b37b12ff79e1f222cab5c038f18c53dd238b97760ebdd1686a,PodSandboxId:2f9f6e560130ad503a3fb16cd826de68b079d3d261c3ffd9adc7f38a9347fae3,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1717416738073641787,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9d9e7c2b-91a9-4394-8a08-a2c076d4b42d,},Annotations:map[string]string{io.kubernetes.container.hash: cf055258,io.kubernetes.container.restartCount: 0,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e9816663d632930c457f52b65f3b813075b3e6e49e03572471737d14171a2bef,PodSandboxId:0ca5bf52da27342b0de4a904a42d2aa48c23283ba6c2596613b1dafa6930796d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1717416738154688772,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-dwptw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7a0437fe-8e83-4acc-a92a-af29bf06db93,},Annotations:map[string]string{io.kubernetes.container.hash: 2dc52ed7,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP
\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:584c23eaff7fc97fc20866acace2641a918972ddde4bc15dd68a27fbc2575e93,PodSandboxId:220fbd721c9026875219d04619cd68d29f31d0a7201cb29af349244390275c37,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1717416737985994074,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-5gmj5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 47
4da426-9414-4a30-8b19-14e555e192de,},Annotations:map[string]string{io.kubernetes.container.hash: 4251bef0,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0f95b604096bb9c35ddcde873a44214fcf5bb4a1918d3767b43aeba25088ceaf,PodSandboxId:036d89d7ad7f4e90bb88f12b72cf2c85bda55787a8ea5c62e674afc2975e95a5,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:
1717416737288481480,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-tfxkl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d6502635-478f-443c-8186-ab0616fcf4ac,},Annotations:map[string]string{io.kubernetes.container.hash: c6c54951,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:998c79f6f292c8080164980650e8a76e11e68daf494b4c6c492f744b50266070,PodSandboxId:86c744cb98f883a17a7004ff42bc11b8b8552a59f6a891044c0212e97dcddc61,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1717416717516790929,Labels:map[string]s
tring{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-602118,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ae3562eee63d85017986173f61212ec0,},Annotations:map[string]string{io.kubernetes.container.hash: 60aa7df7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f7a1aa13e70aab48903fd4acfe8e726e044c09fd249ad876985082b7d2ce28dd,PodSandboxId:a448b605ab5ec3bbd85200834bdb578a6d5e0e13e90c44098ef27993c0ee4975,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1717416717487116516,Labels:map[string]string{io.kubernetes.container.nam
e: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-602118,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 17345709021d24cb267b0ce4add83645,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1d6486b810f4fea2b78f7e1b4375f6351128af8f4f98ae77b3171090ee6ba3e9,PodSandboxId:2c6440b78a8dd4e2e77af45787f6078df707872b27812b40bbac493b2053c406,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1717416717452361331,Labels:map[string]string{io.kubernetes.container.name: k
ube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-602118,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a568811ec88d614b45e242281e5693a1,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6e010bfa69d81ba01cf7bcf124df98ca87e190ccc661236d4a419343715a3ae0,PodSandboxId:769d1926d74f4c8afaa808a0c440b0bd180ec0aea00d6a5e5e6713612b2fd60b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1717416717376494113,Labels:map[string]string{io.kubernetes.container.na
me: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-602118,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 11c3fa6ec0cc81f29fe8e779d24c5099,},Annotations:map[string]string{io.kubernetes.container.hash: ad82f0a8,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=53a470fc-f52e-46c2-a896-eade6ca3c114 name=/runtime.v1.RuntimeService/ListContainers
	Jun 03 12:27:09 no-preload-602118 crio[725]: time="2024-06-03 12:27:09.251346510Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=908735d7-d366-4d15-aff8-d7a45698ce85 name=/runtime.v1.RuntimeService/Version
	Jun 03 12:27:09 no-preload-602118 crio[725]: time="2024-06-03 12:27:09.251443561Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=908735d7-d366-4d15-aff8-d7a45698ce85 name=/runtime.v1.RuntimeService/Version
	Jun 03 12:27:09 no-preload-602118 crio[725]: time="2024-06-03 12:27:09.252593270Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=5ab3234d-f73f-460f-892b-73482e0f0930 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 03 12:27:09 no-preload-602118 crio[725]: time="2024-06-03 12:27:09.253046103Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1717417629253020940,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:99934,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=5ab3234d-f73f-460f-892b-73482e0f0930 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 03 12:27:09 no-preload-602118 crio[725]: time="2024-06-03 12:27:09.253693908Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=c5b79da8-dba1-411e-b5c8-4259a0adfe93 name=/runtime.v1.RuntimeService/ListContainers
	Jun 03 12:27:09 no-preload-602118 crio[725]: time="2024-06-03 12:27:09.253751034Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=c5b79da8-dba1-411e-b5c8-4259a0adfe93 name=/runtime.v1.RuntimeService/ListContainers
	Jun 03 12:27:09 no-preload-602118 crio[725]: time="2024-06-03 12:27:09.253973764Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:fb248b003c8613b37b12ff79e1f222cab5c038f18c53dd238b97760ebdd1686a,PodSandboxId:2f9f6e560130ad503a3fb16cd826de68b079d3d261c3ffd9adc7f38a9347fae3,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1717416738073641787,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9d9e7c2b-91a9-4394-8a08-a2c076d4b42d,},Annotations:map[string]string{io.kubernetes.container.hash: cf055258,io.kubernetes.container.restartCount: 0,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e9816663d632930c457f52b65f3b813075b3e6e49e03572471737d14171a2bef,PodSandboxId:0ca5bf52da27342b0de4a904a42d2aa48c23283ba6c2596613b1dafa6930796d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1717416738154688772,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-dwptw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7a0437fe-8e83-4acc-a92a-af29bf06db93,},Annotations:map[string]string{io.kubernetes.container.hash: 2dc52ed7,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP
\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:584c23eaff7fc97fc20866acace2641a918972ddde4bc15dd68a27fbc2575e93,PodSandboxId:220fbd721c9026875219d04619cd68d29f31d0a7201cb29af349244390275c37,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1717416737985994074,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-5gmj5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 47
4da426-9414-4a30-8b19-14e555e192de,},Annotations:map[string]string{io.kubernetes.container.hash: 4251bef0,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0f95b604096bb9c35ddcde873a44214fcf5bb4a1918d3767b43aeba25088ceaf,PodSandboxId:036d89d7ad7f4e90bb88f12b72cf2c85bda55787a8ea5c62e674afc2975e95a5,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:
1717416737288481480,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-tfxkl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d6502635-478f-443c-8186-ab0616fcf4ac,},Annotations:map[string]string{io.kubernetes.container.hash: c6c54951,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:998c79f6f292c8080164980650e8a76e11e68daf494b4c6c492f744b50266070,PodSandboxId:86c744cb98f883a17a7004ff42bc11b8b8552a59f6a891044c0212e97dcddc61,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1717416717516790929,Labels:map[string]s
tring{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-602118,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ae3562eee63d85017986173f61212ec0,},Annotations:map[string]string{io.kubernetes.container.hash: 60aa7df7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f7a1aa13e70aab48903fd4acfe8e726e044c09fd249ad876985082b7d2ce28dd,PodSandboxId:a448b605ab5ec3bbd85200834bdb578a6d5e0e13e90c44098ef27993c0ee4975,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1717416717487116516,Labels:map[string]string{io.kubernetes.container.nam
e: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-602118,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 17345709021d24cb267b0ce4add83645,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1d6486b810f4fea2b78f7e1b4375f6351128af8f4f98ae77b3171090ee6ba3e9,PodSandboxId:2c6440b78a8dd4e2e77af45787f6078df707872b27812b40bbac493b2053c406,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1717416717452361331,Labels:map[string]string{io.kubernetes.container.name: k
ube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-602118,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a568811ec88d614b45e242281e5693a1,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6e010bfa69d81ba01cf7bcf124df98ca87e190ccc661236d4a419343715a3ae0,PodSandboxId:769d1926d74f4c8afaa808a0c440b0bd180ec0aea00d6a5e5e6713612b2fd60b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1717416717376494113,Labels:map[string]string{io.kubernetes.container.na
me: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-602118,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 11c3fa6ec0cc81f29fe8e779d24c5099,},Annotations:map[string]string{io.kubernetes.container.hash: ad82f0a8,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=c5b79da8-dba1-411e-b5c8-4259a0adfe93 name=/runtime.v1.RuntimeService/ListContainers
	Jun 03 12:27:09 no-preload-602118 crio[725]: time="2024-06-03 12:27:09.290350868Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=054f25e6-e3c2-43fd-9962-3d506f8cea3a name=/runtime.v1.RuntimeService/Version
	Jun 03 12:27:09 no-preload-602118 crio[725]: time="2024-06-03 12:27:09.290442360Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=054f25e6-e3c2-43fd-9962-3d506f8cea3a name=/runtime.v1.RuntimeService/Version
	Jun 03 12:27:09 no-preload-602118 crio[725]: time="2024-06-03 12:27:09.291680712Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=a6404ec0-d23b-490b-9044-110cfc15fd42 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 03 12:27:09 no-preload-602118 crio[725]: time="2024-06-03 12:27:09.292171927Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1717417629292148127,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:99934,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=a6404ec0-d23b-490b-9044-110cfc15fd42 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 03 12:27:09 no-preload-602118 crio[725]: time="2024-06-03 12:27:09.292624914Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=463864bc-7063-4d26-b721-dfaacf076fef name=/runtime.v1.RuntimeService/ListContainers
	Jun 03 12:27:09 no-preload-602118 crio[725]: time="2024-06-03 12:27:09.292694124Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=463864bc-7063-4d26-b721-dfaacf076fef name=/runtime.v1.RuntimeService/ListContainers
	Jun 03 12:27:09 no-preload-602118 crio[725]: time="2024-06-03 12:27:09.292980779Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:fb248b003c8613b37b12ff79e1f222cab5c038f18c53dd238b97760ebdd1686a,PodSandboxId:2f9f6e560130ad503a3fb16cd826de68b079d3d261c3ffd9adc7f38a9347fae3,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1717416738073641787,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9d9e7c2b-91a9-4394-8a08-a2c076d4b42d,},Annotations:map[string]string{io.kubernetes.container.hash: cf055258,io.kubernetes.container.restartCount: 0,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e9816663d632930c457f52b65f3b813075b3e6e49e03572471737d14171a2bef,PodSandboxId:0ca5bf52da27342b0de4a904a42d2aa48c23283ba6c2596613b1dafa6930796d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1717416738154688772,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-dwptw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7a0437fe-8e83-4acc-a92a-af29bf06db93,},Annotations:map[string]string{io.kubernetes.container.hash: 2dc52ed7,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP
\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:584c23eaff7fc97fc20866acace2641a918972ddde4bc15dd68a27fbc2575e93,PodSandboxId:220fbd721c9026875219d04619cd68d29f31d0a7201cb29af349244390275c37,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1717416737985994074,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-5gmj5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 47
4da426-9414-4a30-8b19-14e555e192de,},Annotations:map[string]string{io.kubernetes.container.hash: 4251bef0,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0f95b604096bb9c35ddcde873a44214fcf5bb4a1918d3767b43aeba25088ceaf,PodSandboxId:036d89d7ad7f4e90bb88f12b72cf2c85bda55787a8ea5c62e674afc2975e95a5,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:
1717416737288481480,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-tfxkl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d6502635-478f-443c-8186-ab0616fcf4ac,},Annotations:map[string]string{io.kubernetes.container.hash: c6c54951,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:998c79f6f292c8080164980650e8a76e11e68daf494b4c6c492f744b50266070,PodSandboxId:86c744cb98f883a17a7004ff42bc11b8b8552a59f6a891044c0212e97dcddc61,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1717416717516790929,Labels:map[string]s
tring{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-602118,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ae3562eee63d85017986173f61212ec0,},Annotations:map[string]string{io.kubernetes.container.hash: 60aa7df7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f7a1aa13e70aab48903fd4acfe8e726e044c09fd249ad876985082b7d2ce28dd,PodSandboxId:a448b605ab5ec3bbd85200834bdb578a6d5e0e13e90c44098ef27993c0ee4975,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1717416717487116516,Labels:map[string]string{io.kubernetes.container.nam
e: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-602118,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 17345709021d24cb267b0ce4add83645,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1d6486b810f4fea2b78f7e1b4375f6351128af8f4f98ae77b3171090ee6ba3e9,PodSandboxId:2c6440b78a8dd4e2e77af45787f6078df707872b27812b40bbac493b2053c406,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1717416717452361331,Labels:map[string]string{io.kubernetes.container.name: k
ube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-602118,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a568811ec88d614b45e242281e5693a1,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6e010bfa69d81ba01cf7bcf124df98ca87e190ccc661236d4a419343715a3ae0,PodSandboxId:769d1926d74f4c8afaa808a0c440b0bd180ec0aea00d6a5e5e6713612b2fd60b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1717416717376494113,Labels:map[string]string{io.kubernetes.container.na
me: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-602118,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 11c3fa6ec0cc81f29fe8e779d24c5099,},Annotations:map[string]string{io.kubernetes.container.hash: ad82f0a8,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=463864bc-7063-4d26-b721-dfaacf076fef name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	e9816663d6329       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   14 minutes ago      Running             coredns                   0                   0ca5bf52da273       coredns-7db6d8ff4d-dwptw
	fb248b003c861       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   14 minutes ago      Running             storage-provisioner       0                   2f9f6e560130a       storage-provisioner
	584c23eaff7fc       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   14 minutes ago      Running             coredns                   0                   220fbd721c902       coredns-7db6d8ff4d-5gmj5
	0f95b604096bb       747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd   14 minutes ago      Running             kube-proxy                0                   036d89d7ad7f4       kube-proxy-tfxkl
	998c79f6f292c       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899   15 minutes ago      Running             etcd                      2                   86c744cb98f88       etcd-no-preload-602118
	f7a1aa13e70aa       a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035   15 minutes ago      Running             kube-scheduler            2                   a448b605ab5ec       kube-scheduler-no-preload-602118
	1d6486b810f4f       25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c   15 minutes ago      Running             kube-controller-manager   2                   2c6440b78a8dd       kube-controller-manager-no-preload-602118
	6e010bfa69d81       91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a   15 minutes ago      Running             kube-apiserver            2                   769d1926d74f4       kube-apiserver-no-preload-602118
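	
	The listing above follows crictl's container table format; an equivalent view can be pulled from the node directly (a sketch, assuming the profile name matches the node name used elsewhere in this report):
	
	  out/minikube-linux-amd64 -p no-preload-602118 ssh "sudo crictl ps -a"
	  # inspect a single container by (abbreviated) ID, e.g. the coredns container above:
	  out/minikube-linux-amd64 -p no-preload-602118 ssh "sudo crictl inspect e9816663d6329"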
	
	
	==> coredns [584c23eaff7fc97fc20866acace2641a918972ddde4bc15dd68a27fbc2575e93] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> coredns [e9816663d632930c457f52b65f3b813075b3e6e49e03572471737d14171a2bef] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
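	
	Each "==> coredns [<container-id>] <==" block above is the log of one container as seen by the runtime; the same output can be fetched either through CRI-O on the node or through the API server (a sketch, assuming the profile/context name no-preload-602118):
	
	  out/minikube-linux-amd64 -p no-preload-602118 ssh "sudo crictl logs 584c23eaff7fc"
	  kubectl --context no-preload-602118 -n kube-system logs coredns-7db6d8ff4d-5gmj5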
	
	
	==> describe nodes <==
	Name:               no-preload-602118
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-602118
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=599070631c2216ebc936292d491e4fe10e15b9d8
	                    minikube.k8s.io/name=no-preload-602118
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_06_03T12_12_03_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 03 Jun 2024 12:12:00 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-602118
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 03 Jun 2024 12:27:00 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 03 Jun 2024 12:22:33 +0000   Mon, 03 Jun 2024 12:11:58 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 03 Jun 2024 12:22:33 +0000   Mon, 03 Jun 2024 12:11:58 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 03 Jun 2024 12:22:33 +0000   Mon, 03 Jun 2024 12:11:58 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 03 Jun 2024 12:22:33 +0000   Mon, 03 Jun 2024 12:12:00 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.50.245
	  Hostname:    no-preload-602118
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 e98a529f012d4a0988904e7d0cb7a70c
	  System UUID:                e98a529f-012d-4a09-8890-4e7d0cb7a70c
	  Boot ID:                    8ea9d02b-256a-4d4f-a148-b6b987af69da
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.1
	  Kube-Proxy Version:         v1.30.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7db6d8ff4d-5gmj5                     100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     14m
	  kube-system                 coredns-7db6d8ff4d-dwptw                     100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     14m
	  kube-system                 etcd-no-preload-602118                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         15m
	  kube-system                 kube-apiserver-no-preload-602118             250m (12%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-controller-manager-no-preload-602118    200m (10%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-proxy-tfxkl                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-scheduler-no-preload-602118             100m (5%)     0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 metrics-server-569cc877fc-zpzbw              100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         14m
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             440Mi (20%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 14m   kube-proxy       
	  Normal  NodeHasSufficientMemory  15m   kubelet          Node no-preload-602118 status is now: NodeHasSufficientMemory
	  Normal  Starting                 15m   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  15m   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  15m   kubelet          Node no-preload-602118 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    15m   kubelet          Node no-preload-602118 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     15m   kubelet          Node no-preload-602118 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           14m   node-controller  Node no-preload-602118 event: Registered Node no-preload-602118 in Controller
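	
	The node description above is the usual "kubectl describe node" view; the same fields (conditions, capacity, non-terminated pods, events) can be re-checked directly against the cluster:
	
	  kubectl --context no-preload-602118 describe node no-preload-602118
	  kubectl --context no-preload-602118 get node no-preload-602118 -o wide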
	
	
	==> dmesg <==
	[  +0.040299] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.485068] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.368632] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.590241] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +8.274292] systemd-fstab-generator[641]: Ignoring "noauto" option for root device
	[  +0.054295] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.059499] systemd-fstab-generator[653]: Ignoring "noauto" option for root device
	[  +0.178092] systemd-fstab-generator[667]: Ignoring "noauto" option for root device
	[  +0.112663] systemd-fstab-generator[679]: Ignoring "noauto" option for root device
	[  +0.269422] systemd-fstab-generator[709]: Ignoring "noauto" option for root device
	[ +15.624695] systemd-fstab-generator[1234]: Ignoring "noauto" option for root device
	[  +0.064440] kauditd_printk_skb: 130 callbacks suppressed
	[Jun 3 12:07] systemd-fstab-generator[1358]: Ignoring "noauto" option for root device
	[  +5.645798] kauditd_printk_skb: 100 callbacks suppressed
	[  +7.569499] kauditd_printk_skb: 50 callbacks suppressed
	[  +7.463426] kauditd_printk_skb: 24 callbacks suppressed
	[Jun 3 12:11] kauditd_printk_skb: 9 callbacks suppressed
	[  +1.963399] systemd-fstab-generator[4013]: Ignoring "noauto" option for root device
	[Jun 3 12:12] kauditd_printk_skb: 53 callbacks suppressed
	[  +1.851515] systemd-fstab-generator[4341]: Ignoring "noauto" option for root device
	[ +13.399552] systemd-fstab-generator[4547]: Ignoring "noauto" option for root device
	[  +0.100564] kauditd_printk_skb: 14 callbacks suppressed
	[Jun 3 12:13] kauditd_printk_skb: 84 callbacks suppressed
	
	
	==> etcd [998c79f6f292c8080164980650e8a76e11e68daf494b4c6c492f744b50266070] <==
	{"level":"info","ts":"2024-06-03T12:11:57.970414Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-06-03T12:11:58.894643Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8287693677e84cf6 is starting a new election at term 1"}
	{"level":"info","ts":"2024-06-03T12:11:58.894684Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8287693677e84cf6 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-06-03T12:11:58.894703Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8287693677e84cf6 received MsgPreVoteResp from 8287693677e84cf6 at term 1"}
	{"level":"info","ts":"2024-06-03T12:11:58.894715Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8287693677e84cf6 became candidate at term 2"}
	{"level":"info","ts":"2024-06-03T12:11:58.894722Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8287693677e84cf6 received MsgVoteResp from 8287693677e84cf6 at term 2"}
	{"level":"info","ts":"2024-06-03T12:11:58.89473Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8287693677e84cf6 became leader at term 2"}
	{"level":"info","ts":"2024-06-03T12:11:58.894737Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 8287693677e84cf6 elected leader 8287693677e84cf6 at term 2"}
	{"level":"info","ts":"2024-06-03T12:11:58.899076Z","caller":"etcdserver/server.go:2578","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-06-03T12:11:58.901009Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"8287693677e84cf6","local-member-attributes":"{Name:no-preload-602118 ClientURLs:[https://192.168.50.245:2379]}","request-path":"/0/members/8287693677e84cf6/attributes","cluster-id":"6e727aea1cd049c6","publish-timeout":"7s"}
	{"level":"info","ts":"2024-06-03T12:11:58.90138Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-06-03T12:11:58.901812Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-06-03T12:11:58.902012Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"6e727aea1cd049c6","local-member-id":"8287693677e84cf6","cluster-version":"3.5"}
	{"level":"info","ts":"2024-06-03T12:11:58.902078Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-06-03T12:11:58.902124Z","caller":"etcdserver/server.go:2602","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-06-03T12:11:58.902183Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-06-03T12:11:58.902209Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-06-03T12:11:58.903723Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-06-03T12:11:58.9089Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.50.245:2379"}
	{"level":"info","ts":"2024-06-03T12:21:58.936183Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":676}
	{"level":"info","ts":"2024-06-03T12:21:58.945048Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":676,"took":"8.473688ms","hash":975296969,"current-db-size-bytes":2076672,"current-db-size":"2.1 MB","current-db-size-in-use-bytes":2076672,"current-db-size-in-use":"2.1 MB"}
	{"level":"info","ts":"2024-06-03T12:21:58.945128Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":975296969,"revision":676,"compact-revision":-1}
	{"level":"info","ts":"2024-06-03T12:26:58.943741Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":919}
	{"level":"info","ts":"2024-06-03T12:26:58.947681Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":919,"took":"3.338782ms","hash":1606516095,"current-db-size-bytes":2076672,"current-db-size":"2.1 MB","current-db-size-in-use-bytes":1482752,"current-db-size-in-use":"1.5 MB"}
	{"level":"info","ts":"2024-06-03T12:26:58.947756Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":1606516095,"revision":919,"compact-revision":676}
	
	
	==> kernel <==
	 12:27:09 up 20 min,  0 users,  load average: 0.10, 0.22, 0.17
	Linux no-preload-602118 5.10.207 #1 SMP Wed May 22 22:17:16 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [6e010bfa69d81ba01cf7bcf124df98ca87e190ccc661236d4a419343715a3ae0] <==
	I0603 12:22:01.307170       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0603 12:23:01.306360       1 handler_proxy.go:93] no RequestInfo found in the context
	E0603 12:23:01.306434       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0603 12:23:01.306443       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0603 12:23:01.307667       1 handler_proxy.go:93] no RequestInfo found in the context
	E0603 12:23:01.307752       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0603 12:23:01.307759       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0603 12:25:01.307488       1 handler_proxy.go:93] no RequestInfo found in the context
	W0603 12:25:01.307893       1 handler_proxy.go:93] no RequestInfo found in the context
	E0603 12:25:01.307912       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0603 12:25:01.308034       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	E0603 12:25:01.308001       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0603 12:25:01.310005       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0603 12:27:00.310686       1 handler_proxy.go:93] no RequestInfo found in the context
	E0603 12:27:00.311063       1 controller.go:146] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	W0603 12:27:01.311526       1 handler_proxy.go:93] no RequestInfo found in the context
	E0603 12:27:01.311637       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0603 12:27:01.311667       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0603 12:27:01.311578       1 handler_proxy.go:93] no RequestInfo found in the context
	E0603 12:27:01.311772       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0603 12:27:01.313094       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [1d6486b810f4fea2b78f7e1b4375f6351128af8f4f98ae77b3171090ee6ba3e9] <==
	I0603 12:21:16.562289       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0603 12:21:46.091121       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0603 12:21:46.571640       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0603 12:22:16.096906       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0603 12:22:16.581153       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0603 12:22:46.102168       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0603 12:22:46.590062       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0603 12:22:59.109539       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-569cc877fc" duration="173.477µs"
	I0603 12:23:14.107002       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-569cc877fc" duration="50.391µs"
	E0603 12:23:16.110757       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0603 12:23:16.598147       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0603 12:23:46.117778       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0603 12:23:46.607178       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0603 12:24:16.123758       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0603 12:24:16.615770       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0603 12:24:46.129512       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0603 12:24:46.623984       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0603 12:25:16.135661       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0603 12:25:16.631765       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0603 12:25:46.141743       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0603 12:25:46.640172       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0603 12:26:16.147969       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0603 12:26:16.647223       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0603 12:26:46.155534       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0603 12:26:46.655595       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [0f95b604096bb9c35ddcde873a44214fcf5bb4a1918d3767b43aeba25088ceaf] <==
	I0603 12:12:17.965944       1 server_linux.go:69] "Using iptables proxy"
	I0603 12:12:18.092647       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.50.245"]
	I0603 12:12:18.460283       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0603 12:12:18.460331       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0603 12:12:18.460355       1 server_linux.go:165] "Using iptables Proxier"
	I0603 12:12:18.469374       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0603 12:12:18.469630       1 server.go:872] "Version info" version="v1.30.1"
	I0603 12:12:18.469664       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0603 12:12:18.476306       1 config.go:192] "Starting service config controller"
	I0603 12:12:18.476345       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0603 12:12:18.476366       1 config.go:101] "Starting endpoint slice config controller"
	I0603 12:12:18.476369       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0603 12:12:18.476672       1 config.go:319] "Starting node config controller"
	I0603 12:12:18.476704       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0603 12:12:18.580162       1 shared_informer.go:320] Caches are synced for node config
	I0603 12:12:18.580247       1 shared_informer.go:320] Caches are synced for service config
	I0603 12:12:18.580298       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [f7a1aa13e70aab48903fd4acfe8e726e044c09fd249ad876985082b7d2ce28dd] <==
	E0603 12:12:00.331865       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0603 12:12:00.331876       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0603 12:12:00.331883       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0603 12:12:00.331890       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0603 12:12:00.331956       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0603 12:12:00.331962       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0603 12:12:00.332007       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0603 12:12:00.332015       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0603 12:12:00.332021       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0603 12:12:00.332084       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0603 12:12:01.156895       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0603 12:12:01.156944       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0603 12:12:01.211455       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0603 12:12:01.211506       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0603 12:12:01.285806       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0603 12:12:01.286304       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0603 12:12:01.313084       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0603 12:12:01.313111       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0603 12:12:01.398965       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0603 12:12:01.399016       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0603 12:12:01.424856       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0603 12:12:01.424903       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0603 12:12:01.742109       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0603 12:12:01.742146       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0603 12:12:04.620012       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jun 03 12:25:03 no-preload-602118 kubelet[4348]: E0603 12:25:03.149203    4348 iptables.go:577] "Could not set up iptables canary" err=<
	Jun 03 12:25:03 no-preload-602118 kubelet[4348]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jun 03 12:25:03 no-preload-602118 kubelet[4348]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jun 03 12:25:03 no-preload-602118 kubelet[4348]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 03 12:25:03 no-preload-602118 kubelet[4348]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jun 03 12:25:05 no-preload-602118 kubelet[4348]: E0603 12:25:05.094927    4348 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-zpzbw" podUID="b28cb265-532b-41ea-a242-001a85174a35"
	Jun 03 12:25:19 no-preload-602118 kubelet[4348]: E0603 12:25:19.092809    4348 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-zpzbw" podUID="b28cb265-532b-41ea-a242-001a85174a35"
	Jun 03 12:25:34 no-preload-602118 kubelet[4348]: E0603 12:25:34.093443    4348 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-zpzbw" podUID="b28cb265-532b-41ea-a242-001a85174a35"
	Jun 03 12:25:46 no-preload-602118 kubelet[4348]: E0603 12:25:46.092450    4348 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-zpzbw" podUID="b28cb265-532b-41ea-a242-001a85174a35"
	Jun 03 12:25:59 no-preload-602118 kubelet[4348]: E0603 12:25:59.092275    4348 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-zpzbw" podUID="b28cb265-532b-41ea-a242-001a85174a35"
	Jun 03 12:26:03 no-preload-602118 kubelet[4348]: E0603 12:26:03.148192    4348 iptables.go:577] "Could not set up iptables canary" err=<
	Jun 03 12:26:03 no-preload-602118 kubelet[4348]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jun 03 12:26:03 no-preload-602118 kubelet[4348]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jun 03 12:26:03 no-preload-602118 kubelet[4348]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 03 12:26:03 no-preload-602118 kubelet[4348]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jun 03 12:26:13 no-preload-602118 kubelet[4348]: E0603 12:26:13.093528    4348 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-zpzbw" podUID="b28cb265-532b-41ea-a242-001a85174a35"
	Jun 03 12:26:28 no-preload-602118 kubelet[4348]: E0603 12:26:28.092942    4348 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-zpzbw" podUID="b28cb265-532b-41ea-a242-001a85174a35"
	Jun 03 12:26:43 no-preload-602118 kubelet[4348]: E0603 12:26:43.092238    4348 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-zpzbw" podUID="b28cb265-532b-41ea-a242-001a85174a35"
	Jun 03 12:26:58 no-preload-602118 kubelet[4348]: E0603 12:26:58.092801    4348 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-zpzbw" podUID="b28cb265-532b-41ea-a242-001a85174a35"
	Jun 03 12:27:03 no-preload-602118 kubelet[4348]: E0603 12:27:03.150662    4348 iptables.go:577] "Could not set up iptables canary" err=<
	Jun 03 12:27:03 no-preload-602118 kubelet[4348]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jun 03 12:27:03 no-preload-602118 kubelet[4348]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jun 03 12:27:03 no-preload-602118 kubelet[4348]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 03 12:27:03 no-preload-602118 kubelet[4348]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jun 03 12:27:09 no-preload-602118 kubelet[4348]: E0603 12:27:09.094249    4348 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-zpzbw" podUID="b28cb265-532b-41ea-a242-001a85174a35"
	
	
	==> storage-provisioner [fb248b003c8613b37b12ff79e1f222cab5c038f18c53dd238b97760ebdd1686a] <==
	I0603 12:12:18.548114       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0603 12:12:18.568714       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0603 12:12:18.568811       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0603 12:12:18.592471       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0603 12:12:18.592747       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-602118_305b539c-d750-4a4a-a70e-9e6e96fc159d!
	I0603 12:12:18.593291       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"9ea1aae2-6280-4f10-a8c2-e37d926441ba", APIVersion:"v1", ResourceVersion:"404", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-602118_305b539c-d750-4a4a-a70e-9e6e96fc159d became leader
	I0603 12:12:18.693799       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-602118_305b539c-d750-4a4a-a70e-9e6e96fc159d!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-602118 -n no-preload-602118
helpers_test.go:261: (dbg) Run:  kubectl --context no-preload-602118 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-569cc877fc-zpzbw
helpers_test.go:274: ======> post-mortem[TestStartStop/group/no-preload/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context no-preload-602118 describe pod metrics-server-569cc877fc-zpzbw
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context no-preload-602118 describe pod metrics-server-569cc877fc-zpzbw: exit status 1 (61.980612ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-569cc877fc-zpzbw" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context no-preload-602118 describe pod metrics-server-569cc877fc-zpzbw: exit status 1
--- FAIL: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (345.38s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (426.84s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
start_stop_delete_test.go:287: ***** TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-196710 -n default-k8s-diff-port-196710
start_stop_delete_test.go:287: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: showing logs for failed pods as of 2024-06-03 12:28:31.580829415 +0000 UTC m=+6611.485249301
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-196710 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-196710 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (1.753µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context default-k8s-diff-port-196710 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-196710 -n default-k8s-diff-port-196710
helpers_test.go:244: <<< TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-196710 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-196710 logs -n 25: (1.278317344s)
helpers_test.go:252: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| start   | -p                                                     | default-k8s-diff-port-196710 | jenkins | v1.33.1 | 03 Jun 24 11:58 UTC | 03 Jun 24 11:59 UTC |
	|         | default-k8s-diff-port-196710                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-725022            | embed-certs-725022           | jenkins | v1.33.1 | 03 Jun 24 11:59 UTC | 03 Jun 24 11:59 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-725022                                  | embed-certs-725022           | jenkins | v1.33.1 | 03 Jun 24 11:59 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-602118             | no-preload-602118            | jenkins | v1.33.1 | 03 Jun 24 11:59 UTC | 03 Jun 24 11:59 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-602118                                   | no-preload-602118            | jenkins | v1.33.1 | 03 Jun 24 11:59 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-196710  | default-k8s-diff-port-196710 | jenkins | v1.33.1 | 03 Jun 24 11:59 UTC | 03 Jun 24 11:59 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-196710 | jenkins | v1.33.1 | 03 Jun 24 11:59 UTC |                     |
	|         | default-k8s-diff-port-196710                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-905554        | old-k8s-version-905554       | jenkins | v1.33.1 | 03 Jun 24 12:01 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-725022                 | embed-certs-725022           | jenkins | v1.33.1 | 03 Jun 24 12:01 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-725022                                  | embed-certs-725022           | jenkins | v1.33.1 | 03 Jun 24 12:01 UTC | 03 Jun 24 12:13 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.1                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-602118                  | no-preload-602118            | jenkins | v1.33.1 | 03 Jun 24 12:01 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-602118                                   | no-preload-602118            | jenkins | v1.33.1 | 03 Jun 24 12:02 UTC | 03 Jun 24 12:12 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.1                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-196710       | default-k8s-diff-port-196710 | jenkins | v1.33.1 | 03 Jun 24 12:02 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-196710 | jenkins | v1.33.1 | 03 Jun 24 12:02 UTC | 03 Jun 24 12:12 UTC |
	|         | default-k8s-diff-port-196710                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.1                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-905554                              | old-k8s-version-905554       | jenkins | v1.33.1 | 03 Jun 24 12:02 UTC | 03 Jun 24 12:02 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-905554             | old-k8s-version-905554       | jenkins | v1.33.1 | 03 Jun 24 12:02 UTC | 03 Jun 24 12:03 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-905554                              | old-k8s-version-905554       | jenkins | v1.33.1 | 03 Jun 24 12:03 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| delete  | -p old-k8s-version-905554                              | old-k8s-version-905554       | jenkins | v1.33.1 | 03 Jun 24 12:27 UTC | 03 Jun 24 12:27 UTC |
	| start   | -p newest-cni-756935 --memory=2200 --alsologtostderr   | newest-cni-756935            | jenkins | v1.33.1 | 03 Jun 24 12:27 UTC | 03 Jun 24 12:28 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.1                           |                              |         |         |                     |                     |
	| delete  | -p no-preload-602118                                   | no-preload-602118            | jenkins | v1.33.1 | 03 Jun 24 12:27 UTC | 03 Jun 24 12:27 UTC |
	| delete  | -p embed-certs-725022                                  | embed-certs-725022           | jenkins | v1.33.1 | 03 Jun 24 12:27 UTC | 03 Jun 24 12:27 UTC |
	| addons  | enable metrics-server -p newest-cni-756935             | newest-cni-756935            | jenkins | v1.33.1 | 03 Jun 24 12:28 UTC | 03 Jun 24 12:28 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p newest-cni-756935                                   | newest-cni-756935            | jenkins | v1.33.1 | 03 Jun 24 12:28 UTC | 03 Jun 24 12:28 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p newest-cni-756935                  | newest-cni-756935            | jenkins | v1.33.1 | 03 Jun 24 12:28 UTC | 03 Jun 24 12:28 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p newest-cni-756935 --memory=2200 --alsologtostderr   | newest-cni-756935            | jenkins | v1.33.1 | 03 Jun 24 12:28 UTC |                     |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.1                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/06/03 12:28:09
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0603 12:28:09.858979   81309 out.go:291] Setting OutFile to fd 1 ...
	I0603 12:28:09.859237   81309 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0603 12:28:09.859247   81309 out.go:304] Setting ErrFile to fd 2...
	I0603 12:28:09.859252   81309 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0603 12:28:09.859419   81309 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19008-7755/.minikube/bin
	I0603 12:28:09.859938   81309 out.go:298] Setting JSON to false
	I0603 12:28:09.860797   81309 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":7835,"bootTime":1717409855,"procs":187,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1060-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0603 12:28:09.860853   81309 start.go:139] virtualization: kvm guest
	I0603 12:28:09.862960   81309 out.go:177] * [newest-cni-756935] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0603 12:28:09.864440   81309 notify.go:220] Checking for updates...
	I0603 12:28:09.864460   81309 out.go:177]   - MINIKUBE_LOCATION=19008
	I0603 12:28:09.865996   81309 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0603 12:28:09.867220   81309 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19008-7755/kubeconfig
	I0603 12:28:09.868391   81309 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19008-7755/.minikube
	I0603 12:28:09.869586   81309 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0603 12:28:09.870664   81309 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0603 12:28:09.872191   81309 config.go:182] Loaded profile config "newest-cni-756935": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0603 12:28:09.872569   81309 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 12:28:09.872606   81309 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 12:28:09.887024   81309 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44391
	I0603 12:28:09.887383   81309 main.go:141] libmachine: () Calling .GetVersion
	I0603 12:28:09.887855   81309 main.go:141] libmachine: Using API Version  1
	I0603 12:28:09.887874   81309 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 12:28:09.888222   81309 main.go:141] libmachine: () Calling .GetMachineName
	I0603 12:28:09.888414   81309 main.go:141] libmachine: (newest-cni-756935) Calling .DriverName
	I0603 12:28:09.888647   81309 driver.go:392] Setting default libvirt URI to qemu:///system
	I0603 12:28:09.888901   81309 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 12:28:09.888931   81309 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 12:28:09.902655   81309 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38385
	I0603 12:28:09.903006   81309 main.go:141] libmachine: () Calling .GetVersion
	I0603 12:28:09.903453   81309 main.go:141] libmachine: Using API Version  1
	I0603 12:28:09.903471   81309 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 12:28:09.903769   81309 main.go:141] libmachine: () Calling .GetMachineName
	I0603 12:28:09.903916   81309 main.go:141] libmachine: (newest-cni-756935) Calling .DriverName
	I0603 12:28:09.936454   81309 out.go:177] * Using the kvm2 driver based on existing profile
	I0603 12:28:09.937735   81309 start.go:297] selected driver: kvm2
	I0603 12:28:09.937762   81309 start.go:901] validating driver "kvm2" against &{Name:newest-cni-756935 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.30.1 ClusterName:newest-cni-756935 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.127 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] St
artHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0603 12:28:09.937935   81309 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0603 12:28:09.938877   81309 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0603 12:28:09.938977   81309 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19008-7755/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0603 12:28:09.952590   81309 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0603 12:28:09.953012   81309 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0603 12:28:09.953043   81309 cni.go:84] Creating CNI manager for ""
	I0603 12:28:09.953053   81309 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0603 12:28:09.953114   81309 start.go:340] cluster config:
	{Name:newest-cni-756935 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:newest-cni-756935 Namespace:default APIServer
HAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.127 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network
: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0603 12:28:09.953255   81309 iso.go:125] acquiring lock: {Name:mkdc8e745fc6a0fd8e502f6ad2510510ae9abf27 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0603 12:28:09.954766   81309 out.go:177] * Starting "newest-cni-756935" primary control-plane node in "newest-cni-756935" cluster
	I0603 12:28:09.956016   81309 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime crio
	I0603 12:28:09.956060   81309 preload.go:147] Found local preload: /home/jenkins/minikube-integration/19008-7755/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4
	I0603 12:28:09.956077   81309 cache.go:56] Caching tarball of preloaded images
	I0603 12:28:09.956208   81309 preload.go:173] Found /home/jenkins/minikube-integration/19008-7755/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0603 12:28:09.956228   81309 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on crio
	I0603 12:28:09.956355   81309 profile.go:143] Saving config to /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/newest-cni-756935/config.json ...
	I0603 12:28:09.956575   81309 start.go:360] acquireMachinesLock for newest-cni-756935: {Name:mk7389fd163f37e28f7d4d842bab151f7e27bc7c Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0603 12:28:09.956622   81309 start.go:364] duration metric: took 27.688µs to acquireMachinesLock for "newest-cni-756935"
	I0603 12:28:09.956649   81309 start.go:96] Skipping create...Using existing machine configuration
	I0603 12:28:09.956658   81309 fix.go:54] fixHost starting: 
	I0603 12:28:09.956961   81309 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 12:28:09.956995   81309 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 12:28:09.970715   81309 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44593
	I0603 12:28:09.971090   81309 main.go:141] libmachine: () Calling .GetVersion
	I0603 12:28:09.971623   81309 main.go:141] libmachine: Using API Version  1
	I0603 12:28:09.971641   81309 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 12:28:09.971958   81309 main.go:141] libmachine: () Calling .GetMachineName
	I0603 12:28:09.972141   81309 main.go:141] libmachine: (newest-cni-756935) Calling .DriverName
	I0603 12:28:09.972286   81309 main.go:141] libmachine: (newest-cni-756935) Calling .GetState
	I0603 12:28:09.973792   81309 fix.go:112] recreateIfNeeded on newest-cni-756935: state=Stopped err=<nil>
	I0603 12:28:09.973813   81309 main.go:141] libmachine: (newest-cni-756935) Calling .DriverName
	W0603 12:28:09.973979   81309 fix.go:138] unexpected machine state, will restart: <nil>
	I0603 12:28:09.975753   81309 out.go:177] * Restarting existing kvm2 VM for "newest-cni-756935" ...
	I0603 12:28:09.976888   81309 main.go:141] libmachine: (newest-cni-756935) Calling .Start
	I0603 12:28:09.977056   81309 main.go:141] libmachine: (newest-cni-756935) Ensuring networks are active...
	I0603 12:28:09.977768   81309 main.go:141] libmachine: (newest-cni-756935) Ensuring network default is active
	I0603 12:28:09.978087   81309 main.go:141] libmachine: (newest-cni-756935) Ensuring network mk-newest-cni-756935 is active
	I0603 12:28:09.978473   81309 main.go:141] libmachine: (newest-cni-756935) Getting domain xml...
	I0603 12:28:09.979277   81309 main.go:141] libmachine: (newest-cni-756935) Creating domain...
	I0603 12:28:11.182609   81309 main.go:141] libmachine: (newest-cni-756935) Waiting to get IP...
	I0603 12:28:11.183463   81309 main.go:141] libmachine: (newest-cni-756935) DBG | domain newest-cni-756935 has defined MAC address 52:54:00:fc:11:a0 in network mk-newest-cni-756935
	I0603 12:28:11.183984   81309 main.go:141] libmachine: (newest-cni-756935) DBG | unable to find current IP address of domain newest-cni-756935 in network mk-newest-cni-756935
	I0603 12:28:11.184072   81309 main.go:141] libmachine: (newest-cni-756935) DBG | I0603 12:28:11.183974   81344 retry.go:31] will retry after 253.498865ms: waiting for machine to come up
	I0603 12:28:11.439543   81309 main.go:141] libmachine: (newest-cni-756935) DBG | domain newest-cni-756935 has defined MAC address 52:54:00:fc:11:a0 in network mk-newest-cni-756935
	I0603 12:28:11.439962   81309 main.go:141] libmachine: (newest-cni-756935) DBG | unable to find current IP address of domain newest-cni-756935 in network mk-newest-cni-756935
	I0603 12:28:11.439990   81309 main.go:141] libmachine: (newest-cni-756935) DBG | I0603 12:28:11.439895   81344 retry.go:31] will retry after 284.577473ms: waiting for machine to come up
	I0603 12:28:11.726479   81309 main.go:141] libmachine: (newest-cni-756935) DBG | domain newest-cni-756935 has defined MAC address 52:54:00:fc:11:a0 in network mk-newest-cni-756935
	I0603 12:28:11.726904   81309 main.go:141] libmachine: (newest-cni-756935) DBG | unable to find current IP address of domain newest-cni-756935 in network mk-newest-cni-756935
	I0603 12:28:11.726929   81309 main.go:141] libmachine: (newest-cni-756935) DBG | I0603 12:28:11.726869   81344 retry.go:31] will retry after 367.919398ms: waiting for machine to come up
	I0603 12:28:12.095996   81309 main.go:141] libmachine: (newest-cni-756935) DBG | domain newest-cni-756935 has defined MAC address 52:54:00:fc:11:a0 in network mk-newest-cni-756935
	I0603 12:28:12.096558   81309 main.go:141] libmachine: (newest-cni-756935) DBG | unable to find current IP address of domain newest-cni-756935 in network mk-newest-cni-756935
	I0603 12:28:12.096584   81309 main.go:141] libmachine: (newest-cni-756935) DBG | I0603 12:28:12.096510   81344 retry.go:31] will retry after 493.715572ms: waiting for machine to come up
	I0603 12:28:12.592176   81309 main.go:141] libmachine: (newest-cni-756935) DBG | domain newest-cni-756935 has defined MAC address 52:54:00:fc:11:a0 in network mk-newest-cni-756935
	I0603 12:28:12.592628   81309 main.go:141] libmachine: (newest-cni-756935) DBG | unable to find current IP address of domain newest-cni-756935 in network mk-newest-cni-756935
	I0603 12:28:12.592652   81309 main.go:141] libmachine: (newest-cni-756935) DBG | I0603 12:28:12.592585   81344 retry.go:31] will retry after 628.342813ms: waiting for machine to come up
	I0603 12:28:13.222422   81309 main.go:141] libmachine: (newest-cni-756935) DBG | domain newest-cni-756935 has defined MAC address 52:54:00:fc:11:a0 in network mk-newest-cni-756935
	I0603 12:28:13.222790   81309 main.go:141] libmachine: (newest-cni-756935) DBG | unable to find current IP address of domain newest-cni-756935 in network mk-newest-cni-756935
	I0603 12:28:13.222814   81309 main.go:141] libmachine: (newest-cni-756935) DBG | I0603 12:28:13.222750   81344 retry.go:31] will retry after 942.250543ms: waiting for machine to come up
	I0603 12:28:14.166825   81309 main.go:141] libmachine: (newest-cni-756935) DBG | domain newest-cni-756935 has defined MAC address 52:54:00:fc:11:a0 in network mk-newest-cni-756935
	I0603 12:28:14.167222   81309 main.go:141] libmachine: (newest-cni-756935) DBG | unable to find current IP address of domain newest-cni-756935 in network mk-newest-cni-756935
	I0603 12:28:14.167246   81309 main.go:141] libmachine: (newest-cni-756935) DBG | I0603 12:28:14.167162   81344 retry.go:31] will retry after 863.565427ms: waiting for machine to come up
	I0603 12:28:15.031853   81309 main.go:141] libmachine: (newest-cni-756935) DBG | domain newest-cni-756935 has defined MAC address 52:54:00:fc:11:a0 in network mk-newest-cni-756935
	I0603 12:28:15.032223   81309 main.go:141] libmachine: (newest-cni-756935) DBG | unable to find current IP address of domain newest-cni-756935 in network mk-newest-cni-756935
	I0603 12:28:15.032262   81309 main.go:141] libmachine: (newest-cni-756935) DBG | I0603 12:28:15.032202   81344 retry.go:31] will retry after 976.884629ms: waiting for machine to come up
	I0603 12:28:16.010611   81309 main.go:141] libmachine: (newest-cni-756935) DBG | domain newest-cni-756935 has defined MAC address 52:54:00:fc:11:a0 in network mk-newest-cni-756935
	I0603 12:28:16.011065   81309 main.go:141] libmachine: (newest-cni-756935) DBG | unable to find current IP address of domain newest-cni-756935 in network mk-newest-cni-756935
	I0603 12:28:16.011094   81309 main.go:141] libmachine: (newest-cni-756935) DBG | I0603 12:28:16.011013   81344 retry.go:31] will retry after 1.41792624s: waiting for machine to come up
	I0603 12:28:17.430509   81309 main.go:141] libmachine: (newest-cni-756935) DBG | domain newest-cni-756935 has defined MAC address 52:54:00:fc:11:a0 in network mk-newest-cni-756935
	I0603 12:28:17.430998   81309 main.go:141] libmachine: (newest-cni-756935) DBG | unable to find current IP address of domain newest-cni-756935 in network mk-newest-cni-756935
	I0603 12:28:17.431027   81309 main.go:141] libmachine: (newest-cni-756935) DBG | I0603 12:28:17.430981   81344 retry.go:31] will retry after 2.049194696s: waiting for machine to come up
	I0603 12:28:19.483211   81309 main.go:141] libmachine: (newest-cni-756935) DBG | domain newest-cni-756935 has defined MAC address 52:54:00:fc:11:a0 in network mk-newest-cni-756935
	I0603 12:28:19.483643   81309 main.go:141] libmachine: (newest-cni-756935) DBG | unable to find current IP address of domain newest-cni-756935 in network mk-newest-cni-756935
	I0603 12:28:19.483665   81309 main.go:141] libmachine: (newest-cni-756935) DBG | I0603 12:28:19.483609   81344 retry.go:31] will retry after 2.394414822s: waiting for machine to come up
	I0603 12:28:21.880199   81309 main.go:141] libmachine: (newest-cni-756935) DBG | domain newest-cni-756935 has defined MAC address 52:54:00:fc:11:a0 in network mk-newest-cni-756935
	I0603 12:28:21.880641   81309 main.go:141] libmachine: (newest-cni-756935) DBG | unable to find current IP address of domain newest-cni-756935 in network mk-newest-cni-756935
	I0603 12:28:21.880668   81309 main.go:141] libmachine: (newest-cni-756935) DBG | I0603 12:28:21.880597   81344 retry.go:31] will retry after 2.40574414s: waiting for machine to come up
	I0603 12:28:24.288191   81309 main.go:141] libmachine: (newest-cni-756935) DBG | domain newest-cni-756935 has defined MAC address 52:54:00:fc:11:a0 in network mk-newest-cni-756935
	I0603 12:28:24.288556   81309 main.go:141] libmachine: (newest-cni-756935) DBG | unable to find current IP address of domain newest-cni-756935 in network mk-newest-cni-756935
	I0603 12:28:24.288578   81309 main.go:141] libmachine: (newest-cni-756935) DBG | I0603 12:28:24.288525   81344 retry.go:31] will retry after 2.993345223s: waiting for machine to come up
	I0603 12:28:27.284419   81309 main.go:141] libmachine: (newest-cni-756935) DBG | domain newest-cni-756935 has defined MAC address 52:54:00:fc:11:a0 in network mk-newest-cni-756935
	I0603 12:28:27.284912   81309 main.go:141] libmachine: (newest-cni-756935) Found IP for machine: 192.168.39.127
	I0603 12:28:27.284936   81309 main.go:141] libmachine: (newest-cni-756935) DBG | domain newest-cni-756935 has current primary IP address 192.168.39.127 and MAC address 52:54:00:fc:11:a0 in network mk-newest-cni-756935
	I0603 12:28:27.284943   81309 main.go:141] libmachine: (newest-cni-756935) Reserving static IP address...
	I0603 12:28:27.285756   81309 main.go:141] libmachine: (newest-cni-756935) Reserved static IP address: 192.168.39.127
	I0603 12:28:27.285774   81309 main.go:141] libmachine: (newest-cni-756935) Waiting for SSH to be available...
	I0603 12:28:27.285808   81309 main.go:141] libmachine: (newest-cni-756935) DBG | found host DHCP lease matching {name: "newest-cni-756935", mac: "52:54:00:fc:11:a0", ip: "192.168.39.127"} in network mk-newest-cni-756935: {Iface:virbr1 ExpiryTime:2024-06-03 13:28:20 +0000 UTC Type:0 Mac:52:54:00:fc:11:a0 Iaid: IPaddr:192.168.39.127 Prefix:24 Hostname:newest-cni-756935 Clientid:01:52:54:00:fc:11:a0}
	I0603 12:28:27.285837   81309 main.go:141] libmachine: (newest-cni-756935) DBG | skip adding static IP to network mk-newest-cni-756935 - found existing host DHCP lease matching {name: "newest-cni-756935", mac: "52:54:00:fc:11:a0", ip: "192.168.39.127"}
	I0603 12:28:27.285852   81309 main.go:141] libmachine: (newest-cni-756935) DBG | Getting to WaitForSSH function...
	I0603 12:28:27.288740   81309 main.go:141] libmachine: (newest-cni-756935) DBG | domain newest-cni-756935 has defined MAC address 52:54:00:fc:11:a0 in network mk-newest-cni-756935
	I0603 12:28:27.289152   81309 main.go:141] libmachine: (newest-cni-756935) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:11:a0", ip: ""} in network mk-newest-cni-756935: {Iface:virbr1 ExpiryTime:2024-06-03 13:28:20 +0000 UTC Type:0 Mac:52:54:00:fc:11:a0 Iaid: IPaddr:192.168.39.127 Prefix:24 Hostname:newest-cni-756935 Clientid:01:52:54:00:fc:11:a0}
	I0603 12:28:27.289179   81309 main.go:141] libmachine: (newest-cni-756935) DBG | domain newest-cni-756935 has defined IP address 192.168.39.127 and MAC address 52:54:00:fc:11:a0 in network mk-newest-cni-756935
	I0603 12:28:27.289281   81309 main.go:141] libmachine: (newest-cni-756935) DBG | Using SSH client type: external
	I0603 12:28:27.289321   81309 main.go:141] libmachine: (newest-cni-756935) DBG | Using SSH private key: /home/jenkins/minikube-integration/19008-7755/.minikube/machines/newest-cni-756935/id_rsa (-rw-------)
	I0603 12:28:27.289364   81309 main.go:141] libmachine: (newest-cni-756935) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.127 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19008-7755/.minikube/machines/newest-cni-756935/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0603 12:28:27.289382   81309 main.go:141] libmachine: (newest-cni-756935) DBG | About to run SSH command:
	I0603 12:28:27.289392   81309 main.go:141] libmachine: (newest-cni-756935) DBG | exit 0
	I0603 12:28:27.419067   81309 main.go:141] libmachine: (newest-cni-756935) DBG | SSH cmd err, output: <nil>: 
	I0603 12:28:27.419456   81309 main.go:141] libmachine: (newest-cni-756935) Calling .GetConfigRaw
	I0603 12:28:27.420085   81309 main.go:141] libmachine: (newest-cni-756935) Calling .GetIP
	I0603 12:28:27.422563   81309 main.go:141] libmachine: (newest-cni-756935) DBG | domain newest-cni-756935 has defined MAC address 52:54:00:fc:11:a0 in network mk-newest-cni-756935
	I0603 12:28:27.422917   81309 main.go:141] libmachine: (newest-cni-756935) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:11:a0", ip: ""} in network mk-newest-cni-756935: {Iface:virbr1 ExpiryTime:2024-06-03 13:28:20 +0000 UTC Type:0 Mac:52:54:00:fc:11:a0 Iaid: IPaddr:192.168.39.127 Prefix:24 Hostname:newest-cni-756935 Clientid:01:52:54:00:fc:11:a0}
	I0603 12:28:27.422951   81309 main.go:141] libmachine: (newest-cni-756935) DBG | domain newest-cni-756935 has defined IP address 192.168.39.127 and MAC address 52:54:00:fc:11:a0 in network mk-newest-cni-756935
	I0603 12:28:27.423191   81309 profile.go:143] Saving config to /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/newest-cni-756935/config.json ...
	I0603 12:28:27.423402   81309 machine.go:94] provisionDockerMachine start ...
	I0603 12:28:27.423423   81309 main.go:141] libmachine: (newest-cni-756935) Calling .DriverName
	I0603 12:28:27.423631   81309 main.go:141] libmachine: (newest-cni-756935) Calling .GetSSHHostname
	I0603 12:28:27.426000   81309 main.go:141] libmachine: (newest-cni-756935) DBG | domain newest-cni-756935 has defined MAC address 52:54:00:fc:11:a0 in network mk-newest-cni-756935
	I0603 12:28:27.426284   81309 main.go:141] libmachine: (newest-cni-756935) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:11:a0", ip: ""} in network mk-newest-cni-756935: {Iface:virbr1 ExpiryTime:2024-06-03 13:28:20 +0000 UTC Type:0 Mac:52:54:00:fc:11:a0 Iaid: IPaddr:192.168.39.127 Prefix:24 Hostname:newest-cni-756935 Clientid:01:52:54:00:fc:11:a0}
	I0603 12:28:27.426311   81309 main.go:141] libmachine: (newest-cni-756935) DBG | domain newest-cni-756935 has defined IP address 192.168.39.127 and MAC address 52:54:00:fc:11:a0 in network mk-newest-cni-756935
	I0603 12:28:27.426462   81309 main.go:141] libmachine: (newest-cni-756935) Calling .GetSSHPort
	I0603 12:28:27.426669   81309 main.go:141] libmachine: (newest-cni-756935) Calling .GetSSHKeyPath
	I0603 12:28:27.426838   81309 main.go:141] libmachine: (newest-cni-756935) Calling .GetSSHKeyPath
	I0603 12:28:27.427008   81309 main.go:141] libmachine: (newest-cni-756935) Calling .GetSSHUsername
	I0603 12:28:27.427216   81309 main.go:141] libmachine: Using SSH client type: native
	I0603 12:28:27.427425   81309 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.127 22 <nil> <nil>}
	I0603 12:28:27.427444   81309 main.go:141] libmachine: About to run SSH command:
	hostname
	I0603 12:28:27.543580   81309 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0603 12:28:27.543608   81309 main.go:141] libmachine: (newest-cni-756935) Calling .GetMachineName
	I0603 12:28:27.543868   81309 buildroot.go:166] provisioning hostname "newest-cni-756935"
	I0603 12:28:27.543893   81309 main.go:141] libmachine: (newest-cni-756935) Calling .GetMachineName
	I0603 12:28:27.544224   81309 main.go:141] libmachine: (newest-cni-756935) Calling .GetSSHHostname
	I0603 12:28:27.547006   81309 main.go:141] libmachine: (newest-cni-756935) DBG | domain newest-cni-756935 has defined MAC address 52:54:00:fc:11:a0 in network mk-newest-cni-756935
	I0603 12:28:27.547371   81309 main.go:141] libmachine: (newest-cni-756935) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:11:a0", ip: ""} in network mk-newest-cni-756935: {Iface:virbr1 ExpiryTime:2024-06-03 13:28:20 +0000 UTC Type:0 Mac:52:54:00:fc:11:a0 Iaid: IPaddr:192.168.39.127 Prefix:24 Hostname:newest-cni-756935 Clientid:01:52:54:00:fc:11:a0}
	I0603 12:28:27.547400   81309 main.go:141] libmachine: (newest-cni-756935) DBG | domain newest-cni-756935 has defined IP address 192.168.39.127 and MAC address 52:54:00:fc:11:a0 in network mk-newest-cni-756935
	I0603 12:28:27.547512   81309 main.go:141] libmachine: (newest-cni-756935) Calling .GetSSHPort
	I0603 12:28:27.547696   81309 main.go:141] libmachine: (newest-cni-756935) Calling .GetSSHKeyPath
	I0603 12:28:27.547871   81309 main.go:141] libmachine: (newest-cni-756935) Calling .GetSSHKeyPath
	I0603 12:28:27.548069   81309 main.go:141] libmachine: (newest-cni-756935) Calling .GetSSHUsername
	I0603 12:28:27.548360   81309 main.go:141] libmachine: Using SSH client type: native
	I0603 12:28:27.548587   81309 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.127 22 <nil> <nil>}
	I0603 12:28:27.548605   81309 main.go:141] libmachine: About to run SSH command:
	sudo hostname newest-cni-756935 && echo "newest-cni-756935" | sudo tee /etc/hostname
	I0603 12:28:27.677798   81309 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-756935
	
	I0603 12:28:27.677830   81309 main.go:141] libmachine: (newest-cni-756935) Calling .GetSSHHostname
	I0603 12:28:27.680859   81309 main.go:141] libmachine: (newest-cni-756935) DBG | domain newest-cni-756935 has defined MAC address 52:54:00:fc:11:a0 in network mk-newest-cni-756935
	I0603 12:28:27.681205   81309 main.go:141] libmachine: (newest-cni-756935) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:11:a0", ip: ""} in network mk-newest-cni-756935: {Iface:virbr1 ExpiryTime:2024-06-03 13:28:20 +0000 UTC Type:0 Mac:52:54:00:fc:11:a0 Iaid: IPaddr:192.168.39.127 Prefix:24 Hostname:newest-cni-756935 Clientid:01:52:54:00:fc:11:a0}
	I0603 12:28:27.681230   81309 main.go:141] libmachine: (newest-cni-756935) DBG | domain newest-cni-756935 has defined IP address 192.168.39.127 and MAC address 52:54:00:fc:11:a0 in network mk-newest-cni-756935
	I0603 12:28:27.681410   81309 main.go:141] libmachine: (newest-cni-756935) Calling .GetSSHPort
	I0603 12:28:27.681701   81309 main.go:141] libmachine: (newest-cni-756935) Calling .GetSSHKeyPath
	I0603 12:28:27.681886   81309 main.go:141] libmachine: (newest-cni-756935) Calling .GetSSHKeyPath
	I0603 12:28:27.682023   81309 main.go:141] libmachine: (newest-cni-756935) Calling .GetSSHUsername
	I0603 12:28:27.682181   81309 main.go:141] libmachine: Using SSH client type: native
	I0603 12:28:27.682395   81309 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.127 22 <nil> <nil>}
	I0603 12:28:27.682414   81309 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-756935' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-756935/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-756935' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0603 12:28:27.806132   81309 main.go:141] libmachine: SSH cmd err, output: <nil>: 
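The /etc/hosts edit just above maps 127.0.1.1 to the node name so the node can resolve its own hostname locally. As a standalone sketch (the hostname value is taken from this log; everything else mirrors the logged shell, not the minikube source):

	# sketch: point 127.0.1.1 at the node hostname, adding or rewriting the entry as needed
	HOSTNAME=newest-cni-756935
	if ! grep -q "[[:space:]]${HOSTNAME}\$" /etc/hosts; then
	    if grep -q '^127\.0\.1\.1[[:space:]]' /etc/hosts; then
	        sudo sed -i "s/^127\.0\.1\.1[[:space:]].*/127.0.1.1 ${HOSTNAME}/" /etc/hosts
	    else
	        echo "127.0.1.1 ${HOSTNAME}" | sudo tee -a /etc/hosts
	    fi
	fi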
	I0603 12:28:27.806166   81309 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19008-7755/.minikube CaCertPath:/home/jenkins/minikube-integration/19008-7755/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19008-7755/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19008-7755/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19008-7755/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19008-7755/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19008-7755/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19008-7755/.minikube}
	I0603 12:28:27.806184   81309 buildroot.go:174] setting up certificates
	I0603 12:28:27.806191   81309 provision.go:84] configureAuth start
	I0603 12:28:27.806205   81309 main.go:141] libmachine: (newest-cni-756935) Calling .GetMachineName
	I0603 12:28:27.806481   81309 main.go:141] libmachine: (newest-cni-756935) Calling .GetIP
	I0603 12:28:27.809304   81309 main.go:141] libmachine: (newest-cni-756935) DBG | domain newest-cni-756935 has defined MAC address 52:54:00:fc:11:a0 in network mk-newest-cni-756935
	I0603 12:28:27.809698   81309 main.go:141] libmachine: (newest-cni-756935) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:11:a0", ip: ""} in network mk-newest-cni-756935: {Iface:virbr1 ExpiryTime:2024-06-03 13:28:20 +0000 UTC Type:0 Mac:52:54:00:fc:11:a0 Iaid: IPaddr:192.168.39.127 Prefix:24 Hostname:newest-cni-756935 Clientid:01:52:54:00:fc:11:a0}
	I0603 12:28:27.809728   81309 main.go:141] libmachine: (newest-cni-756935) DBG | domain newest-cni-756935 has defined IP address 192.168.39.127 and MAC address 52:54:00:fc:11:a0 in network mk-newest-cni-756935
	I0603 12:28:27.809861   81309 main.go:141] libmachine: (newest-cni-756935) Calling .GetSSHHostname
	I0603 12:28:27.812030   81309 main.go:141] libmachine: (newest-cni-756935) DBG | domain newest-cni-756935 has defined MAC address 52:54:00:fc:11:a0 in network mk-newest-cni-756935
	I0603 12:28:27.812380   81309 main.go:141] libmachine: (newest-cni-756935) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:11:a0", ip: ""} in network mk-newest-cni-756935: {Iface:virbr1 ExpiryTime:2024-06-03 13:28:20 +0000 UTC Type:0 Mac:52:54:00:fc:11:a0 Iaid: IPaddr:192.168.39.127 Prefix:24 Hostname:newest-cni-756935 Clientid:01:52:54:00:fc:11:a0}
	I0603 12:28:27.812403   81309 main.go:141] libmachine: (newest-cni-756935) DBG | domain newest-cni-756935 has defined IP address 192.168.39.127 and MAC address 52:54:00:fc:11:a0 in network mk-newest-cni-756935
	I0603 12:28:27.812505   81309 provision.go:143] copyHostCerts
	I0603 12:28:27.812570   81309 exec_runner.go:144] found /home/jenkins/minikube-integration/19008-7755/.minikube/ca.pem, removing ...
	I0603 12:28:27.812583   81309 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19008-7755/.minikube/ca.pem
	I0603 12:28:27.812644   81309 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19008-7755/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19008-7755/.minikube/ca.pem (1082 bytes)
	I0603 12:28:27.812729   81309 exec_runner.go:144] found /home/jenkins/minikube-integration/19008-7755/.minikube/cert.pem, removing ...
	I0603 12:28:27.812736   81309 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19008-7755/.minikube/cert.pem
	I0603 12:28:27.812759   81309 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19008-7755/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19008-7755/.minikube/cert.pem (1123 bytes)
	I0603 12:28:27.812806   81309 exec_runner.go:144] found /home/jenkins/minikube-integration/19008-7755/.minikube/key.pem, removing ...
	I0603 12:28:27.812813   81309 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19008-7755/.minikube/key.pem
	I0603 12:28:27.812832   81309 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19008-7755/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19008-7755/.minikube/key.pem (1679 bytes)
	I0603 12:28:27.812874   81309 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19008-7755/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19008-7755/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19008-7755/.minikube/certs/ca-key.pem org=jenkins.newest-cni-756935 san=[127.0.0.1 192.168.39.127 localhost minikube newest-cni-756935]
	I0603 12:28:27.958511   81309 provision.go:177] copyRemoteCerts
	I0603 12:28:27.958564   81309 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0603 12:28:27.958586   81309 main.go:141] libmachine: (newest-cni-756935) Calling .GetSSHHostname
	I0603 12:28:27.961096   81309 main.go:141] libmachine: (newest-cni-756935) DBG | domain newest-cni-756935 has defined MAC address 52:54:00:fc:11:a0 in network mk-newest-cni-756935
	I0603 12:28:27.961416   81309 main.go:141] libmachine: (newest-cni-756935) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:11:a0", ip: ""} in network mk-newest-cni-756935: {Iface:virbr1 ExpiryTime:2024-06-03 13:28:20 +0000 UTC Type:0 Mac:52:54:00:fc:11:a0 Iaid: IPaddr:192.168.39.127 Prefix:24 Hostname:newest-cni-756935 Clientid:01:52:54:00:fc:11:a0}
	I0603 12:28:27.961438   81309 main.go:141] libmachine: (newest-cni-756935) DBG | domain newest-cni-756935 has defined IP address 192.168.39.127 and MAC address 52:54:00:fc:11:a0 in network mk-newest-cni-756935
	I0603 12:28:27.961589   81309 main.go:141] libmachine: (newest-cni-756935) Calling .GetSSHPort
	I0603 12:28:27.961776   81309 main.go:141] libmachine: (newest-cni-756935) Calling .GetSSHKeyPath
	I0603 12:28:27.961931   81309 main.go:141] libmachine: (newest-cni-756935) Calling .GetSSHUsername
	I0603 12:28:27.962063   81309 sshutil.go:53] new ssh client: &{IP:192.168.39.127 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19008-7755/.minikube/machines/newest-cni-756935/id_rsa Username:docker}
	I0603 12:28:28.049280   81309 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0603 12:28:28.077139   81309 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0603 12:28:28.103924   81309 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0603 12:28:28.130187   81309 provision.go:87] duration metric: took 323.984868ms to configureAuth
	I0603 12:28:28.130210   81309 buildroot.go:189] setting minikube options for container-runtime
	I0603 12:28:28.130408   81309 config.go:182] Loaded profile config "newest-cni-756935": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0603 12:28:28.130493   81309 main.go:141] libmachine: (newest-cni-756935) Calling .GetSSHHostname
	I0603 12:28:28.133064   81309 main.go:141] libmachine: (newest-cni-756935) DBG | domain newest-cni-756935 has defined MAC address 52:54:00:fc:11:a0 in network mk-newest-cni-756935
	I0603 12:28:28.133434   81309 main.go:141] libmachine: (newest-cni-756935) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:11:a0", ip: ""} in network mk-newest-cni-756935: {Iface:virbr1 ExpiryTime:2024-06-03 13:28:20 +0000 UTC Type:0 Mac:52:54:00:fc:11:a0 Iaid: IPaddr:192.168.39.127 Prefix:24 Hostname:newest-cni-756935 Clientid:01:52:54:00:fc:11:a0}
	I0603 12:28:28.133466   81309 main.go:141] libmachine: (newest-cni-756935) DBG | domain newest-cni-756935 has defined IP address 192.168.39.127 and MAC address 52:54:00:fc:11:a0 in network mk-newest-cni-756935
	I0603 12:28:28.133651   81309 main.go:141] libmachine: (newest-cni-756935) Calling .GetSSHPort
	I0603 12:28:28.133841   81309 main.go:141] libmachine: (newest-cni-756935) Calling .GetSSHKeyPath
	I0603 12:28:28.133980   81309 main.go:141] libmachine: (newest-cni-756935) Calling .GetSSHKeyPath
	I0603 12:28:28.134132   81309 main.go:141] libmachine: (newest-cni-756935) Calling .GetSSHUsername
	I0603 12:28:28.134314   81309 main.go:141] libmachine: Using SSH client type: native
	I0603 12:28:28.134495   81309 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.127 22 <nil> <nil>}
	I0603 12:28:28.134521   81309 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0603 12:28:28.414807   81309 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
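The "%!s(MISSING)" in the command above appears to be a logging artifact (a Go format verb printed without an argument), not part of the command itself; the echoed output shows the file that is actually written. The same step as a standalone sketch, with the service CIDR taken from the cluster config earlier in this log:

	# sketch: hand CRI-O an --insecure-registry flag for the service CIDR via a sysconfig drop-in
	sudo mkdir -p /etc/sysconfig
	printf "\nCRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '\n" | sudo tee /etc/sysconfig/crio.minikube
	sudo systemctl restart crio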
	
	I0603 12:28:28.414831   81309 machine.go:97] duration metric: took 991.415605ms to provisionDockerMachine
	I0603 12:28:28.414867   81309 start.go:293] postStartSetup for "newest-cni-756935" (driver="kvm2")
	I0603 12:28:28.414882   81309 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0603 12:28:28.414908   81309 main.go:141] libmachine: (newest-cni-756935) Calling .DriverName
	I0603 12:28:28.415254   81309 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0603 12:28:28.415289   81309 main.go:141] libmachine: (newest-cni-756935) Calling .GetSSHHostname
	I0603 12:28:28.418228   81309 main.go:141] libmachine: (newest-cni-756935) DBG | domain newest-cni-756935 has defined MAC address 52:54:00:fc:11:a0 in network mk-newest-cni-756935
	I0603 12:28:28.418602   81309 main.go:141] libmachine: (newest-cni-756935) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:11:a0", ip: ""} in network mk-newest-cni-756935: {Iface:virbr1 ExpiryTime:2024-06-03 13:28:20 +0000 UTC Type:0 Mac:52:54:00:fc:11:a0 Iaid: IPaddr:192.168.39.127 Prefix:24 Hostname:newest-cni-756935 Clientid:01:52:54:00:fc:11:a0}
	I0603 12:28:28.418625   81309 main.go:141] libmachine: (newest-cni-756935) DBG | domain newest-cni-756935 has defined IP address 192.168.39.127 and MAC address 52:54:00:fc:11:a0 in network mk-newest-cni-756935
	I0603 12:28:28.418699   81309 main.go:141] libmachine: (newest-cni-756935) Calling .GetSSHPort
	I0603 12:28:28.418871   81309 main.go:141] libmachine: (newest-cni-756935) Calling .GetSSHKeyPath
	I0603 12:28:28.419024   81309 main.go:141] libmachine: (newest-cni-756935) Calling .GetSSHUsername
	I0603 12:28:28.419188   81309 sshutil.go:53] new ssh client: &{IP:192.168.39.127 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19008-7755/.minikube/machines/newest-cni-756935/id_rsa Username:docker}
	I0603 12:28:28.505969   81309 ssh_runner.go:195] Run: cat /etc/os-release
	I0603 12:28:28.510277   81309 info.go:137] Remote host: Buildroot 2023.02.9
	I0603 12:28:28.510301   81309 filesync.go:126] Scanning /home/jenkins/minikube-integration/19008-7755/.minikube/addons for local assets ...
	I0603 12:28:28.510357   81309 filesync.go:126] Scanning /home/jenkins/minikube-integration/19008-7755/.minikube/files for local assets ...
	I0603 12:28:28.510425   81309 filesync.go:149] local asset: /home/jenkins/minikube-integration/19008-7755/.minikube/files/etc/ssl/certs/150282.pem -> 150282.pem in /etc/ssl/certs
	I0603 12:28:28.510514   81309 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0603 12:28:28.519961   81309 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/files/etc/ssl/certs/150282.pem --> /etc/ssl/certs/150282.pem (1708 bytes)
	I0603 12:28:28.543780   81309 start.go:296] duration metric: took 128.898971ms for postStartSetup
	I0603 12:28:28.543814   81309 fix.go:56] duration metric: took 18.58715664s for fixHost
	I0603 12:28:28.543832   81309 main.go:141] libmachine: (newest-cni-756935) Calling .GetSSHHostname
	I0603 12:28:28.546353   81309 main.go:141] libmachine: (newest-cni-756935) DBG | domain newest-cni-756935 has defined MAC address 52:54:00:fc:11:a0 in network mk-newest-cni-756935
	I0603 12:28:28.546705   81309 main.go:141] libmachine: (newest-cni-756935) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:11:a0", ip: ""} in network mk-newest-cni-756935: {Iface:virbr1 ExpiryTime:2024-06-03 13:28:20 +0000 UTC Type:0 Mac:52:54:00:fc:11:a0 Iaid: IPaddr:192.168.39.127 Prefix:24 Hostname:newest-cni-756935 Clientid:01:52:54:00:fc:11:a0}
	I0603 12:28:28.546732   81309 main.go:141] libmachine: (newest-cni-756935) DBG | domain newest-cni-756935 has defined IP address 192.168.39.127 and MAC address 52:54:00:fc:11:a0 in network mk-newest-cni-756935
	I0603 12:28:28.546870   81309 main.go:141] libmachine: (newest-cni-756935) Calling .GetSSHPort
	I0603 12:28:28.547097   81309 main.go:141] libmachine: (newest-cni-756935) Calling .GetSSHKeyPath
	I0603 12:28:28.547281   81309 main.go:141] libmachine: (newest-cni-756935) Calling .GetSSHKeyPath
	I0603 12:28:28.547437   81309 main.go:141] libmachine: (newest-cni-756935) Calling .GetSSHUsername
	I0603 12:28:28.547740   81309 main.go:141] libmachine: Using SSH client type: native
	I0603 12:28:28.547994   81309 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.127 22 <nil> <nil>}
	I0603 12:28:28.548021   81309 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0603 12:28:28.663709   81309 main.go:141] libmachine: SSH cmd err, output: <nil>: 1717417708.639776582
	
	I0603 12:28:28.663731   81309 fix.go:216] guest clock: 1717417708.639776582
	I0603 12:28:28.663737   81309 fix.go:229] Guest: 2024-06-03 12:28:28.639776582 +0000 UTC Remote: 2024-06-03 12:28:28.543817407 +0000 UTC m=+18.717067670 (delta=95.959175ms)
	I0603 12:28:28.663770   81309 fix.go:200] guest clock delta is within tolerance: 95.959175ms
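The fix.go lines above compare the guest's `date +%s.%N` (the `%!s(MISSING).%!N(MISSING)` rendering is the same logging artifact) against the host time recorded just before the SSH round trip and accept the ~96ms delta as within tolerance. A rough standalone equivalent, where the IP, key path, and 1-second threshold are illustrative assumptions rather than values from the minikube source:

	# sketch: measure guest/host clock skew over SSH and complain if it exceeds a threshold
	GUEST=$(ssh -i ~/.minikube/machines/newest-cni-756935/id_rsa docker@192.168.39.127 'date +%s.%N')
	HOST=$(date +%s.%N)
	DELTA=$(echo "$HOST - $GUEST" | bc)
	echo "clock delta: ${DELTA}s"
	awk -v d="$DELTA" 'BEGIN { exit (d < 1 && d > -1) ? 0 : 1 }' || echo "guest clock outside tolerance"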
	I0603 12:28:28.663775   81309 start.go:83] releasing machines lock for "newest-cni-756935", held for 18.707142433s
	I0603 12:28:28.663793   81309 main.go:141] libmachine: (newest-cni-756935) Calling .DriverName
	I0603 12:28:28.664036   81309 main.go:141] libmachine: (newest-cni-756935) Calling .GetIP
	I0603 12:28:28.666768   81309 main.go:141] libmachine: (newest-cni-756935) DBG | domain newest-cni-756935 has defined MAC address 52:54:00:fc:11:a0 in network mk-newest-cni-756935
	I0603 12:28:28.667169   81309 main.go:141] libmachine: (newest-cni-756935) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:11:a0", ip: ""} in network mk-newest-cni-756935: {Iface:virbr1 ExpiryTime:2024-06-03 13:28:20 +0000 UTC Type:0 Mac:52:54:00:fc:11:a0 Iaid: IPaddr:192.168.39.127 Prefix:24 Hostname:newest-cni-756935 Clientid:01:52:54:00:fc:11:a0}
	I0603 12:28:28.667191   81309 main.go:141] libmachine: (newest-cni-756935) DBG | domain newest-cni-756935 has defined IP address 192.168.39.127 and MAC address 52:54:00:fc:11:a0 in network mk-newest-cni-756935
	I0603 12:28:28.667352   81309 main.go:141] libmachine: (newest-cni-756935) Calling .DriverName
	I0603 12:28:28.667887   81309 main.go:141] libmachine: (newest-cni-756935) Calling .DriverName
	I0603 12:28:28.668069   81309 main.go:141] libmachine: (newest-cni-756935) Calling .DriverName
	I0603 12:28:28.668154   81309 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0603 12:28:28.668198   81309 main.go:141] libmachine: (newest-cni-756935) Calling .GetSSHHostname
	I0603 12:28:28.668318   81309 ssh_runner.go:195] Run: cat /version.json
	I0603 12:28:28.668341   81309 main.go:141] libmachine: (newest-cni-756935) Calling .GetSSHHostname
	I0603 12:28:28.670751   81309 main.go:141] libmachine: (newest-cni-756935) DBG | domain newest-cni-756935 has defined MAC address 52:54:00:fc:11:a0 in network mk-newest-cni-756935
	I0603 12:28:28.670977   81309 main.go:141] libmachine: (newest-cni-756935) DBG | domain newest-cni-756935 has defined MAC address 52:54:00:fc:11:a0 in network mk-newest-cni-756935
	I0603 12:28:28.671080   81309 main.go:141] libmachine: (newest-cni-756935) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:11:a0", ip: ""} in network mk-newest-cni-756935: {Iface:virbr1 ExpiryTime:2024-06-03 13:28:20 +0000 UTC Type:0 Mac:52:54:00:fc:11:a0 Iaid: IPaddr:192.168.39.127 Prefix:24 Hostname:newest-cni-756935 Clientid:01:52:54:00:fc:11:a0}
	I0603 12:28:28.671110   81309 main.go:141] libmachine: (newest-cni-756935) DBG | domain newest-cni-756935 has defined IP address 192.168.39.127 and MAC address 52:54:00:fc:11:a0 in network mk-newest-cni-756935
	I0603 12:28:28.671235   81309 main.go:141] libmachine: (newest-cni-756935) Calling .GetSSHPort
	I0603 12:28:28.671295   81309 main.go:141] libmachine: (newest-cni-756935) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:11:a0", ip: ""} in network mk-newest-cni-756935: {Iface:virbr1 ExpiryTime:2024-06-03 13:28:20 +0000 UTC Type:0 Mac:52:54:00:fc:11:a0 Iaid: IPaddr:192.168.39.127 Prefix:24 Hostname:newest-cni-756935 Clientid:01:52:54:00:fc:11:a0}
	I0603 12:28:28.671342   81309 main.go:141] libmachine: (newest-cni-756935) DBG | domain newest-cni-756935 has defined IP address 192.168.39.127 and MAC address 52:54:00:fc:11:a0 in network mk-newest-cni-756935
	I0603 12:28:28.671390   81309 main.go:141] libmachine: (newest-cni-756935) Calling .GetSSHKeyPath
	I0603 12:28:28.671513   81309 main.go:141] libmachine: (newest-cni-756935) Calling .GetSSHPort
	I0603 12:28:28.671613   81309 main.go:141] libmachine: (newest-cni-756935) Calling .GetSSHUsername
	I0603 12:28:28.671677   81309 main.go:141] libmachine: (newest-cni-756935) Calling .GetSSHKeyPath
	I0603 12:28:28.671742   81309 sshutil.go:53] new ssh client: &{IP:192.168.39.127 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19008-7755/.minikube/machines/newest-cni-756935/id_rsa Username:docker}
	I0603 12:28:28.671785   81309 main.go:141] libmachine: (newest-cni-756935) Calling .GetSSHUsername
	I0603 12:28:28.671922   81309 sshutil.go:53] new ssh client: &{IP:192.168.39.127 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19008-7755/.minikube/machines/newest-cni-756935/id_rsa Username:docker}
	I0603 12:28:28.752500   81309 ssh_runner.go:195] Run: systemctl --version
	I0603 12:28:28.774899   81309 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0603 12:28:28.927371   81309 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0603 12:28:28.933410   81309 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0603 12:28:28.933470   81309 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0603 12:28:28.948650   81309 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
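The find invocation above is also printed through the broken format string (`%!p(MISSING)` stands in for a `-printf` format); per the following line, its effect is to rename any existing bridge or podman CNI configs so they stop competing with the CNI that minikube configures. A cleaned-up sketch of that rename:

	# sketch: park pre-existing bridge/podman CNI configs with minikube's .mk_disabled suffix
	sudo find /etc/cni/net.d -maxdepth 1 -type f \
	    \( \( -name '*bridge*' -o -name '*podman*' \) -a ! -name '*.mk_disabled' \) \
	    -exec sh -c 'mv "$1" "$1.mk_disabled"' _ {} \;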
	I0603 12:28:28.948670   81309 start.go:494] detecting cgroup driver to use...
	I0603 12:28:28.948716   81309 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0603 12:28:28.964473   81309 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0603 12:28:28.977649   81309 docker.go:217] disabling cri-docker service (if available) ...
	I0603 12:28:28.977688   81309 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0603 12:28:28.990529   81309 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0603 12:28:29.004395   81309 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0603 12:28:29.123128   81309 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0603 12:28:29.278913   81309 docker.go:233] disabling docker service ...
	I0603 12:28:29.278969   81309 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0603 12:28:29.293464   81309 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0603 12:28:29.306286   81309 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0603 12:28:29.417876   81309 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0603 12:28:29.529718   81309 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
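Between 12:28:28.977 and 12:28:29.529 the provisioner stops and masks the Docker-based runtime units so only CRI-O answers on the CRI socket. Gathered into one block, using the same systemctl units the log shows:

	# sketch: keep cri-dockerd and docker out of the way of CRI-O
	sudo systemctl stop -f cri-docker.socket cri-docker.service
	sudo systemctl disable cri-docker.socket
	sudo systemctl mask cri-docker.service
	sudo systemctl stop -f docker.socket docker.service
	sudo systemctl disable docker.socket
	sudo systemctl mask docker.service
	sudo systemctl is-active --quiet docker || echo "docker is inactive"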
	I0603 12:28:29.544894   81309 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0603 12:28:29.565422   81309 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0603 12:28:29.565477   81309 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 12:28:29.576092   81309 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0603 12:28:29.576151   81309 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 12:28:29.586646   81309 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 12:28:29.596862   81309 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 12:28:29.606931   81309 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0603 12:28:29.617441   81309 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 12:28:29.627429   81309 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 12:28:29.644493   81309 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
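The sed edits from 12:28:29.565 onward rewrite /etc/crio/crio.conf.d/02-crio.conf in place: pause image, cgroup driver, conmon cgroup, and an unprivileged-port sysctl. Reconstructed from those commands (the section headers are assumptions based on CRI-O's standard config layout, not a capture of the file on the VM), the drop-in ends up roughly as:

	# /etc/crio/crio.conf.d/02-crio.conf (reconstruction, not captured from the guest)
	[crio.image]
	pause_image = "registry.k8s.io/pause:3.9"

	[crio.runtime]
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"
	default_sysctls = [
	    "net.ipv4.ip_unprivileged_port_start=0",
	]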
	I0603 12:28:29.654395   81309 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0603 12:28:29.663908   81309 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0603 12:28:29.663957   81309 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0603 12:28:29.677366   81309 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0603 12:28:29.686386   81309 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0603 12:28:29.792934   81309 ssh_runner.go:195] Run: sudo systemctl restart crio
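The earlier `sysctl net.bridge.bridge-nf-call-iptables` probe failed with status 255 because that key only exists once the br_netfilter module is loaded, which is exactly what the next step does; with IPv4 forwarding enabled, CRI-O is then restarted to pick up the new configuration. The same sequence as one standalone block:

	# sketch: load br_netfilter so bridged traffic hits iptables, enable forwarding, restart CRI-O
	sudo modprobe br_netfilter
	sudo sysctl net.bridge.bridge-nf-call-iptables
	sudo sh -c 'echo 1 > /proc/sys/net/ipv4/ip_forward'
	sudo systemctl daemon-reload
	sudo systemctl restart crio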
	I0603 12:28:29.923272   81309 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0603 12:28:29.923349   81309 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0603 12:28:29.928618   81309 start.go:562] Will wait 60s for crictl version
	I0603 12:28:29.928680   81309 ssh_runner.go:195] Run: which crictl
	I0603 12:28:29.932578   81309 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0603 12:28:29.972536   81309 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0603 12:28:29.972618   81309 ssh_runner.go:195] Run: crio --version
	I0603 12:28:30.002215   81309 ssh_runner.go:195] Run: crio --version
	I0603 12:28:30.031378   81309 out.go:177] * Preparing Kubernetes v1.30.1 on CRI-O 1.29.1 ...
	I0603 12:28:30.032642   81309 main.go:141] libmachine: (newest-cni-756935) Calling .GetIP
	I0603 12:28:30.035289   81309 main.go:141] libmachine: (newest-cni-756935) DBG | domain newest-cni-756935 has defined MAC address 52:54:00:fc:11:a0 in network mk-newest-cni-756935
	I0603 12:28:30.035668   81309 main.go:141] libmachine: (newest-cni-756935) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:11:a0", ip: ""} in network mk-newest-cni-756935: {Iface:virbr1 ExpiryTime:2024-06-03 13:28:20 +0000 UTC Type:0 Mac:52:54:00:fc:11:a0 Iaid: IPaddr:192.168.39.127 Prefix:24 Hostname:newest-cni-756935 Clientid:01:52:54:00:fc:11:a0}
	I0603 12:28:30.035694   81309 main.go:141] libmachine: (newest-cni-756935) DBG | domain newest-cni-756935 has defined IP address 192.168.39.127 and MAC address 52:54:00:fc:11:a0 in network mk-newest-cni-756935
	I0603 12:28:30.035915   81309 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0603 12:28:30.039971   81309 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0603 12:28:30.054400   81309 out.go:177]   - kubeadm.pod-network-cidr=10.42.0.0/16
	
	
	==> CRI-O <==
	Jun 03 12:28:32 default-k8s-diff-port-196710 crio[718]: time="2024-06-03 12:28:32.238276736Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1717417712238248311,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133260,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=aa6970f3-e50d-4975-9f72-5a43cde44598 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 03 12:28:32 default-k8s-diff-port-196710 crio[718]: time="2024-06-03 12:28:32.239003284Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=da15c91b-f25f-4040-9ac5-022c6bbdde13 name=/runtime.v1.RuntimeService/ListContainers
	Jun 03 12:28:32 default-k8s-diff-port-196710 crio[718]: time="2024-06-03 12:28:32.239067302Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=da15c91b-f25f-4040-9ac5-022c6bbdde13 name=/runtime.v1.RuntimeService/ListContainers
	Jun 03 12:28:32 default-k8s-diff-port-196710 crio[718]: time="2024-06-03 12:28:32.239235799Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:3f837113d05b0531663797495d73bc896224b9a6ab02d0fe3c02cd3c156895be,PodSandboxId:bb07783cf2f0189056abe938846b8704a51bc93e387368309ff3fe1803ba0f50,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1717416738895876479,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8bc80b69-d8f9-4d6a-9bf4-4a41d875a735,},Annotations:map[string]string{io.kubernetes.container.hash: 2581e734,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:38f81b9ecc23e0288f01cdb7927b2262c7c4829c009d526d0191ed082a1e4fa0,PodSandboxId:969b22e069ac9a53ed92dd28a15c7a6b2f9aefe4297bd0ad86363dee073ab272,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1717416738414293764,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-pbndv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 91d83622-9883-407e-b0f4-eb2d18cd2483,},Annotations:map[string]string{io.kubernetes.container.hash: 3e7c2529,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:55f5cd73dfa8fa07fbabc94bea14e9b6986664d022ceb7197c03f85ca5ad7543,PodSandboxId:14cc90803a07908405a43112b39a140d20f4cafe7439893777a371946dc4cc46,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1717416738304111473,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-fvgqr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: c908a302-8c40-46aa-9e98-92baa297a7ed,},Annotations:map[string]string{io.kubernetes.container.hash: 79697988,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ba8f260f4f1470e1baa981a7b6c5a8b69258e02906671f2ff9d5b6da4130643c,PodSandboxId:a2764dd88f0a0dfd7eaffd42fd29874e5c1d62454db4d8bf43da24803581300d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING
,CreatedAt:1717416737667169029,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-j4gzg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2e603f37-93e0-429d-97b8-e9b997c26101,},Annotations:map[string]string{io.kubernetes.container.hash: 54df7384,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5b367908648772f0d2858869b62792dda2fa40783a9edb86115c821eb7424e80,PodSandboxId:92c573d6e4e9264edc09c4988c8d0a23d78a7967e054a0f2021af5bfb5b664df,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:171741671750593990
3,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-196710,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8f6295a4fec0c60d8c3d9920313cf2d1,},Annotations:map[string]string{io.kubernetes.container.hash: 24735c84,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b97df11bb5da4ba695d9765453b0f9f37298d6e914ae7586c0359aa3c72a6a4f,PodSandboxId:0da5999e2969085c068480e8b107353c83cdd0ab313203bf9e95041d129fe2b5,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1717416717492874076,Labels:map
[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-196710,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e4bc6d209f2d0bf892bab0b260232a49,},Annotations:map[string]string{io.kubernetes.container.hash: 264b8a30,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dde9542f2848b9387f40071ed004994aebbf0e7b76409c9adff20fab6b868f6d,PodSandboxId:5991627e3ce5c5eb675b1eb04e964578d6bdf31e04fba24fdf5869bea146181b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1717416717528870201,Labels:map[string]strin
g{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-196710,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eab2734d7cba77ab32aa054371b78738,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:458d45e7061f123547eb39c6b4c985a9a06f0399195f1c6137493845029be051,PodSandboxId:1294592d2da1d354e97d2823a3479688b1deead2dd58f93b5aa972adda9a5f7b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1717416717452883347,Labels:
map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-196710,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 60475779e4c9a7355be04c17bf5751a8,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=da15c91b-f25f-4040-9ac5-022c6bbdde13 name=/runtime.v1.RuntimeService/ListContainers
	Jun 03 12:28:32 default-k8s-diff-port-196710 crio[718]: time="2024-06-03 12:28:32.286684995Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=f5c41821-ca81-4938-a491-3f4f82416058 name=/runtime.v1.RuntimeService/Version
	Jun 03 12:28:32 default-k8s-diff-port-196710 crio[718]: time="2024-06-03 12:28:32.286824113Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=f5c41821-ca81-4938-a491-3f4f82416058 name=/runtime.v1.RuntimeService/Version
	Jun 03 12:28:32 default-k8s-diff-port-196710 crio[718]: time="2024-06-03 12:28:32.288250863Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=8287ad26-3f99-4ea5-949b-f2dff22806dd name=/runtime.v1.ImageService/ImageFsInfo
	Jun 03 12:28:32 default-k8s-diff-port-196710 crio[718]: time="2024-06-03 12:28:32.288651492Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1717417712288629482,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133260,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=8287ad26-3f99-4ea5-949b-f2dff22806dd name=/runtime.v1.ImageService/ImageFsInfo
	Jun 03 12:28:32 default-k8s-diff-port-196710 crio[718]: time="2024-06-03 12:28:32.289177252Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=08017c53-be79-44ac-a475-31dbb2cf33cd name=/runtime.v1.RuntimeService/ListContainers
	Jun 03 12:28:32 default-k8s-diff-port-196710 crio[718]: time="2024-06-03 12:28:32.289248459Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=08017c53-be79-44ac-a475-31dbb2cf33cd name=/runtime.v1.RuntimeService/ListContainers
	Jun 03 12:28:32 default-k8s-diff-port-196710 crio[718]: time="2024-06-03 12:28:32.289426224Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:3f837113d05b0531663797495d73bc896224b9a6ab02d0fe3c02cd3c156895be,PodSandboxId:bb07783cf2f0189056abe938846b8704a51bc93e387368309ff3fe1803ba0f50,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1717416738895876479,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8bc80b69-d8f9-4d6a-9bf4-4a41d875a735,},Annotations:map[string]string{io.kubernetes.container.hash: 2581e734,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:38f81b9ecc23e0288f01cdb7927b2262c7c4829c009d526d0191ed082a1e4fa0,PodSandboxId:969b22e069ac9a53ed92dd28a15c7a6b2f9aefe4297bd0ad86363dee073ab272,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1717416738414293764,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-pbndv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 91d83622-9883-407e-b0f4-eb2d18cd2483,},Annotations:map[string]string{io.kubernetes.container.hash: 3e7c2529,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:55f5cd73dfa8fa07fbabc94bea14e9b6986664d022ceb7197c03f85ca5ad7543,PodSandboxId:14cc90803a07908405a43112b39a140d20f4cafe7439893777a371946dc4cc46,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1717416738304111473,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-fvgqr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: c908a302-8c40-46aa-9e98-92baa297a7ed,},Annotations:map[string]string{io.kubernetes.container.hash: 79697988,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ba8f260f4f1470e1baa981a7b6c5a8b69258e02906671f2ff9d5b6da4130643c,PodSandboxId:a2764dd88f0a0dfd7eaffd42fd29874e5c1d62454db4d8bf43da24803581300d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING
,CreatedAt:1717416737667169029,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-j4gzg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2e603f37-93e0-429d-97b8-e9b997c26101,},Annotations:map[string]string{io.kubernetes.container.hash: 54df7384,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5b367908648772f0d2858869b62792dda2fa40783a9edb86115c821eb7424e80,PodSandboxId:92c573d6e4e9264edc09c4988c8d0a23d78a7967e054a0f2021af5bfb5b664df,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:171741671750593990
3,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-196710,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8f6295a4fec0c60d8c3d9920313cf2d1,},Annotations:map[string]string{io.kubernetes.container.hash: 24735c84,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b97df11bb5da4ba695d9765453b0f9f37298d6e914ae7586c0359aa3c72a6a4f,PodSandboxId:0da5999e2969085c068480e8b107353c83cdd0ab313203bf9e95041d129fe2b5,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1717416717492874076,Labels:map
[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-196710,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e4bc6d209f2d0bf892bab0b260232a49,},Annotations:map[string]string{io.kubernetes.container.hash: 264b8a30,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dde9542f2848b9387f40071ed004994aebbf0e7b76409c9adff20fab6b868f6d,PodSandboxId:5991627e3ce5c5eb675b1eb04e964578d6bdf31e04fba24fdf5869bea146181b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1717416717528870201,Labels:map[string]strin
g{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-196710,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eab2734d7cba77ab32aa054371b78738,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:458d45e7061f123547eb39c6b4c985a9a06f0399195f1c6137493845029be051,PodSandboxId:1294592d2da1d354e97d2823a3479688b1deead2dd58f93b5aa972adda9a5f7b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1717416717452883347,Labels:
map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-196710,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 60475779e4c9a7355be04c17bf5751a8,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=08017c53-be79-44ac-a475-31dbb2cf33cd name=/runtime.v1.RuntimeService/ListContainers
	Jun 03 12:28:32 default-k8s-diff-port-196710 crio[718]: time="2024-06-03 12:28:32.333037514Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=0e691a54-3ccd-4f3c-ab94-9c52a947f848 name=/runtime.v1.RuntimeService/Version
	Jun 03 12:28:32 default-k8s-diff-port-196710 crio[718]: time="2024-06-03 12:28:32.333173157Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=0e691a54-3ccd-4f3c-ab94-9c52a947f848 name=/runtime.v1.RuntimeService/Version
	Jun 03 12:28:32 default-k8s-diff-port-196710 crio[718]: time="2024-06-03 12:28:32.334656523Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=12d5456f-01b4-4180-b351-e838ba1aa463 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 03 12:28:32 default-k8s-diff-port-196710 crio[718]: time="2024-06-03 12:28:32.335128727Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1717417712335104890,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133260,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=12d5456f-01b4-4180-b351-e838ba1aa463 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 03 12:28:32 default-k8s-diff-port-196710 crio[718]: time="2024-06-03 12:28:32.335635545Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=8b063ac5-5668-4b4e-aa59-a858aacf3b78 name=/runtime.v1.RuntimeService/ListContainers
	Jun 03 12:28:32 default-k8s-diff-port-196710 crio[718]: time="2024-06-03 12:28:32.335751614Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=8b063ac5-5668-4b4e-aa59-a858aacf3b78 name=/runtime.v1.RuntimeService/ListContainers
	Jun 03 12:28:32 default-k8s-diff-port-196710 crio[718]: time="2024-06-03 12:28:32.336018866Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:3f837113d05b0531663797495d73bc896224b9a6ab02d0fe3c02cd3c156895be,PodSandboxId:bb07783cf2f0189056abe938846b8704a51bc93e387368309ff3fe1803ba0f50,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1717416738895876479,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8bc80b69-d8f9-4d6a-9bf4-4a41d875a735,},Annotations:map[string]string{io.kubernetes.container.hash: 2581e734,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:38f81b9ecc23e0288f01cdb7927b2262c7c4829c009d526d0191ed082a1e4fa0,PodSandboxId:969b22e069ac9a53ed92dd28a15c7a6b2f9aefe4297bd0ad86363dee073ab272,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1717416738414293764,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-pbndv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 91d83622-9883-407e-b0f4-eb2d18cd2483,},Annotations:map[string]string{io.kubernetes.container.hash: 3e7c2529,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:55f5cd73dfa8fa07fbabc94bea14e9b6986664d022ceb7197c03f85ca5ad7543,PodSandboxId:14cc90803a07908405a43112b39a140d20f4cafe7439893777a371946dc4cc46,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1717416738304111473,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-fvgqr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: c908a302-8c40-46aa-9e98-92baa297a7ed,},Annotations:map[string]string{io.kubernetes.container.hash: 79697988,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ba8f260f4f1470e1baa981a7b6c5a8b69258e02906671f2ff9d5b6da4130643c,PodSandboxId:a2764dd88f0a0dfd7eaffd42fd29874e5c1d62454db4d8bf43da24803581300d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING
,CreatedAt:1717416737667169029,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-j4gzg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2e603f37-93e0-429d-97b8-e9b997c26101,},Annotations:map[string]string{io.kubernetes.container.hash: 54df7384,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5b367908648772f0d2858869b62792dda2fa40783a9edb86115c821eb7424e80,PodSandboxId:92c573d6e4e9264edc09c4988c8d0a23d78a7967e054a0f2021af5bfb5b664df,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:171741671750593990
3,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-196710,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8f6295a4fec0c60d8c3d9920313cf2d1,},Annotations:map[string]string{io.kubernetes.container.hash: 24735c84,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b97df11bb5da4ba695d9765453b0f9f37298d6e914ae7586c0359aa3c72a6a4f,PodSandboxId:0da5999e2969085c068480e8b107353c83cdd0ab313203bf9e95041d129fe2b5,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1717416717492874076,Labels:map
[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-196710,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e4bc6d209f2d0bf892bab0b260232a49,},Annotations:map[string]string{io.kubernetes.container.hash: 264b8a30,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dde9542f2848b9387f40071ed004994aebbf0e7b76409c9adff20fab6b868f6d,PodSandboxId:5991627e3ce5c5eb675b1eb04e964578d6bdf31e04fba24fdf5869bea146181b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1717416717528870201,Labels:map[string]strin
g{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-196710,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eab2734d7cba77ab32aa054371b78738,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:458d45e7061f123547eb39c6b4c985a9a06f0399195f1c6137493845029be051,PodSandboxId:1294592d2da1d354e97d2823a3479688b1deead2dd58f93b5aa972adda9a5f7b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1717416717452883347,Labels:
map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-196710,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 60475779e4c9a7355be04c17bf5751a8,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=8b063ac5-5668-4b4e-aa59-a858aacf3b78 name=/runtime.v1.RuntimeService/ListContainers
	Jun 03 12:28:32 default-k8s-diff-port-196710 crio[718]: time="2024-06-03 12:28:32.373677515Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=f5e026d8-5f44-48b6-b036-f63c2347e0c0 name=/runtime.v1.RuntimeService/Version
	Jun 03 12:28:32 default-k8s-diff-port-196710 crio[718]: time="2024-06-03 12:28:32.373835322Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=f5e026d8-5f44-48b6-b036-f63c2347e0c0 name=/runtime.v1.RuntimeService/Version
	Jun 03 12:28:32 default-k8s-diff-port-196710 crio[718]: time="2024-06-03 12:28:32.375337365Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=9b7ec1ef-f9b2-4896-aa6b-6dfeb577be3d name=/runtime.v1.ImageService/ImageFsInfo
	Jun 03 12:28:32 default-k8s-diff-port-196710 crio[718]: time="2024-06-03 12:28:32.375977946Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1717417712375950992,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133260,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=9b7ec1ef-f9b2-4896-aa6b-6dfeb577be3d name=/runtime.v1.ImageService/ImageFsInfo
	Jun 03 12:28:32 default-k8s-diff-port-196710 crio[718]: time="2024-06-03 12:28:32.376761224Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=83c5d8c5-ca55-47df-8307-b2debc57c171 name=/runtime.v1.RuntimeService/ListContainers
	Jun 03 12:28:32 default-k8s-diff-port-196710 crio[718]: time="2024-06-03 12:28:32.376818261Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=83c5d8c5-ca55-47df-8307-b2debc57c171 name=/runtime.v1.RuntimeService/ListContainers
	Jun 03 12:28:32 default-k8s-diff-port-196710 crio[718]: time="2024-06-03 12:28:32.377035711Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:3f837113d05b0531663797495d73bc896224b9a6ab02d0fe3c02cd3c156895be,PodSandboxId:bb07783cf2f0189056abe938846b8704a51bc93e387368309ff3fe1803ba0f50,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1717416738895876479,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8bc80b69-d8f9-4d6a-9bf4-4a41d875a735,},Annotations:map[string]string{io.kubernetes.container.hash: 2581e734,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:38f81b9ecc23e0288f01cdb7927b2262c7c4829c009d526d0191ed082a1e4fa0,PodSandboxId:969b22e069ac9a53ed92dd28a15c7a6b2f9aefe4297bd0ad86363dee073ab272,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1717416738414293764,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-pbndv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 91d83622-9883-407e-b0f4-eb2d18cd2483,},Annotations:map[string]string{io.kubernetes.container.hash: 3e7c2529,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:55f5cd73dfa8fa07fbabc94bea14e9b6986664d022ceb7197c03f85ca5ad7543,PodSandboxId:14cc90803a07908405a43112b39a140d20f4cafe7439893777a371946dc4cc46,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1717416738304111473,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-fvgqr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: c908a302-8c40-46aa-9e98-92baa297a7ed,},Annotations:map[string]string{io.kubernetes.container.hash: 79697988,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ba8f260f4f1470e1baa981a7b6c5a8b69258e02906671f2ff9d5b6da4130643c,PodSandboxId:a2764dd88f0a0dfd7eaffd42fd29874e5c1d62454db4d8bf43da24803581300d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING
,CreatedAt:1717416737667169029,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-j4gzg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2e603f37-93e0-429d-97b8-e9b997c26101,},Annotations:map[string]string{io.kubernetes.container.hash: 54df7384,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5b367908648772f0d2858869b62792dda2fa40783a9edb86115c821eb7424e80,PodSandboxId:92c573d6e4e9264edc09c4988c8d0a23d78a7967e054a0f2021af5bfb5b664df,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:171741671750593990
3,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-196710,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8f6295a4fec0c60d8c3d9920313cf2d1,},Annotations:map[string]string{io.kubernetes.container.hash: 24735c84,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b97df11bb5da4ba695d9765453b0f9f37298d6e914ae7586c0359aa3c72a6a4f,PodSandboxId:0da5999e2969085c068480e8b107353c83cdd0ab313203bf9e95041d129fe2b5,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1717416717492874076,Labels:map
[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-196710,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e4bc6d209f2d0bf892bab0b260232a49,},Annotations:map[string]string{io.kubernetes.container.hash: 264b8a30,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dde9542f2848b9387f40071ed004994aebbf0e7b76409c9adff20fab6b868f6d,PodSandboxId:5991627e3ce5c5eb675b1eb04e964578d6bdf31e04fba24fdf5869bea146181b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1717416717528870201,Labels:map[string]strin
g{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-196710,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eab2734d7cba77ab32aa054371b78738,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:458d45e7061f123547eb39c6b4c985a9a06f0399195f1c6137493845029be051,PodSandboxId:1294592d2da1d354e97d2823a3479688b1deead2dd58f93b5aa972adda9a5f7b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1717416717452883347,Labels:
map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-196710,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 60475779e4c9a7355be04c17bf5751a8,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=83c5d8c5-ca55-47df-8307-b2debc57c171 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	3f837113d05b0       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   16 minutes ago      Running             storage-provisioner       0                   bb07783cf2f01       storage-provisioner
	38f81b9ecc23e       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   16 minutes ago      Running             coredns                   0                   969b22e069ac9       coredns-7db6d8ff4d-pbndv
	55f5cd73dfa8f       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   16 minutes ago      Running             coredns                   0                   14cc90803a079       coredns-7db6d8ff4d-fvgqr
	ba8f260f4f147       747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd   16 minutes ago      Running             kube-proxy                0                   a2764dd88f0a0       kube-proxy-j4gzg
	dde9542f2848b       25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c   16 minutes ago      Running             kube-controller-manager   2                   5991627e3ce5c       kube-controller-manager-default-k8s-diff-port-196710
	5b36790864877       91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a   16 minutes ago      Running             kube-apiserver            2                   92c573d6e4e92       kube-apiserver-default-k8s-diff-port-196710
	b97df11bb5da4       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899   16 minutes ago      Running             etcd                      2                   0da5999e29690       etcd-default-k8s-diff-port-196710
	458d45e7061f1       a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035   16 minutes ago      Running             kube-scheduler            2                   1294592d2da1d       kube-scheduler-default-k8s-diff-port-196710
	
	
	==> coredns [38f81b9ecc23e0288f01cdb7927b2262c7c4829c009d526d0191ed082a1e4fa0] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> coredns [55f5cd73dfa8fa07fbabc94bea14e9b6986664d022ceb7197c03f85ca5ad7543] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-196710
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-196710
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=599070631c2216ebc936292d491e4fe10e15b9d8
	                    minikube.k8s.io/name=default-k8s-diff-port-196710
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_06_03T12_12_03_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 03 Jun 2024 12:12:00 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-196710
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 03 Jun 2024 12:28:24 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 03 Jun 2024 12:27:39 +0000   Mon, 03 Jun 2024 12:11:58 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 03 Jun 2024 12:27:39 +0000   Mon, 03 Jun 2024 12:11:58 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 03 Jun 2024 12:27:39 +0000   Mon, 03 Jun 2024 12:11:58 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 03 Jun 2024 12:27:39 +0000   Mon, 03 Jun 2024 12:12:00 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.61.60
	  Hostname:    default-k8s-diff-port-196710
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 30cc90e3d4ba4851bf3941aebea98abf
	  System UUID:                30cc90e3-d4ba-4851-bf39-41aebea98abf
	  Boot ID:                    8d17ce40-dc25-4e83-ab19-730863a4a2c0
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.1
	  Kube-Proxy Version:         v1.30.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7db6d8ff4d-fvgqr                                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     16m
	  kube-system                 coredns-7db6d8ff4d-pbndv                                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     16m
	  kube-system                 etcd-default-k8s-diff-port-196710                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         16m
	  kube-system                 kube-apiserver-default-k8s-diff-port-196710             250m (12%)    0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-controller-manager-default-k8s-diff-port-196710    200m (10%)    0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-proxy-j4gzg                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-scheduler-default-k8s-diff-port-196710             100m (5%)     0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 metrics-server-569cc877fc-lxvbp                         100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         16m
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             440Mi (20%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 16m                kube-proxy       
	  Normal  Starting                 16m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  16m (x8 over 16m)  kubelet          Node default-k8s-diff-port-196710 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    16m (x8 over 16m)  kubelet          Node default-k8s-diff-port-196710 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     16m (x7 over 16m)  kubelet          Node default-k8s-diff-port-196710 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  16m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 16m                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  16m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  16m                kubelet          Node default-k8s-diff-port-196710 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    16m                kubelet          Node default-k8s-diff-port-196710 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     16m                kubelet          Node default-k8s-diff-port-196710 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           16m                node-controller  Node default-k8s-diff-port-196710 event: Registered Node default-k8s-diff-port-196710 in Controller
	
	
	==> dmesg <==
	[  +0.045057] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.557494] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.367168] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.641054] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[Jun 3 12:07] systemd-fstab-generator[634]: Ignoring "noauto" option for root device
	[  +0.056268] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.071079] systemd-fstab-generator[646]: Ignoring "noauto" option for root device
	[  +0.181169] systemd-fstab-generator[660]: Ignoring "noauto" option for root device
	[  +0.160826] systemd-fstab-generator[672]: Ignoring "noauto" option for root device
	[  +0.331192] systemd-fstab-generator[702]: Ignoring "noauto" option for root device
	[  +4.611511] systemd-fstab-generator[801]: Ignoring "noauto" option for root device
	[  +0.061098] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.655618] systemd-fstab-generator[923]: Ignoring "noauto" option for root device
	[  +5.704467] kauditd_printk_skb: 97 callbacks suppressed
	[  +6.225727] kauditd_printk_skb: 79 callbacks suppressed
	[  +6.951922] kauditd_printk_skb: 2 callbacks suppressed
	[Jun 3 12:11] kauditd_printk_skb: 7 callbacks suppressed
	[  +1.474182] systemd-fstab-generator[3574]: Ignoring "noauto" option for root device
	[Jun 3 12:12] kauditd_printk_skb: 55 callbacks suppressed
	[  +1.638418] systemd-fstab-generator[3897]: Ignoring "noauto" option for root device
	[ +14.408380] systemd-fstab-generator[4101]: Ignoring "noauto" option for root device
	[  +0.099724] kauditd_printk_skb: 14 callbacks suppressed
	[Jun 3 12:13] kauditd_printk_skb: 84 callbacks suppressed
	
	
	==> etcd [b97df11bb5da4ba695d9765453b0f9f37298d6e914ae7586c0359aa3c72a6a4f] <==
	{"level":"info","ts":"2024-06-03T12:11:58.326005Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"5950dcfe76ab9ff7","local-member-attributes":"{Name:default-k8s-diff-port-196710 ClientURLs:[https://192.168.61.60:2379]}","request-path":"/0/members/5950dcfe76ab9ff7/attributes","cluster-id":"5bb75673341f887b","publish-timeout":"7s"}
	{"level":"info","ts":"2024-06-03T12:11:58.327739Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-06-03T12:11:58.328111Z","caller":"etcdserver/server.go:2578","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-06-03T12:11:58.328252Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-06-03T12:11:58.334223Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-06-03T12:11:58.334325Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"5bb75673341f887b","local-member-id":"5950dcfe76ab9ff7","cluster-version":"3.5"}
	{"level":"info","ts":"2024-06-03T12:11:58.334399Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-06-03T12:11:58.334435Z","caller":"etcdserver/server.go:2602","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-06-03T12:11:58.337819Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-06-03T12:11:58.337854Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-06-03T12:11:58.343222Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.61.60:2379"}
	{"level":"info","ts":"2024-06-03T12:21:58.437568Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":712}
	{"level":"info","ts":"2024-06-03T12:21:58.447282Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":712,"took":"8.842246ms","hash":2717374230,"current-db-size-bytes":2236416,"current-db-size":"2.2 MB","current-db-size-in-use-bytes":2236416,"current-db-size-in-use":"2.2 MB"}
	{"level":"info","ts":"2024-06-03T12:21:58.447369Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":2717374230,"revision":712,"compact-revision":-1}
	{"level":"info","ts":"2024-06-03T12:26:58.444985Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":956}
	{"level":"info","ts":"2024-06-03T12:26:58.449814Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":956,"took":"4.380624ms","hash":3631516958,"current-db-size-bytes":2236416,"current-db-size":"2.2 MB","current-db-size-in-use-bytes":1593344,"current-db-size-in-use":"1.6 MB"}
	{"level":"info","ts":"2024-06-03T12:26:58.44987Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":3631516958,"revision":956,"compact-revision":712}
	{"level":"info","ts":"2024-06-03T12:27:36.117118Z","caller":"traceutil/trace.go:171","msg":"trace[1296569464] transaction","detail":"{read_only:false; response_revision:1231; number_of_response:1; }","duration":"190.06495ms","start":"2024-06-03T12:27:35.927017Z","end":"2024-06-03T12:27:36.117082Z","steps":["trace[1296569464] 'process raft request'  (duration: 189.967332ms)"],"step_count":1}
	{"level":"warn","ts":"2024-06-03T12:27:36.367861Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"124.114078ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/endpointslices/\" range_end:\"/registry/endpointslices0\" count_only:true ","response":"range_response_count:0 size:7"}
	{"level":"info","ts":"2024-06-03T12:27:36.367972Z","caller":"traceutil/trace.go:171","msg":"trace[739412824] range","detail":"{range_begin:/registry/endpointslices/; range_end:/registry/endpointslices0; response_count:0; response_revision:1231; }","duration":"124.516165ms","start":"2024-06-03T12:27:36.243441Z","end":"2024-06-03T12:27:36.367958Z","steps":["trace[739412824] 'count revisions from in-memory index tree'  (duration: 123.984775ms)"],"step_count":1}
	{"level":"warn","ts":"2024-06-03T12:27:37.393898Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"259.800408ms","expected-duration":"100ms","prefix":"","request":"header:<ID:11526839954981130717 > lease_revoke:<id:1ff78fde036e1196>","response":"size:27"}
	{"level":"info","ts":"2024-06-03T12:27:37.394Z","caller":"traceutil/trace.go:171","msg":"trace[1633994126] linearizableReadLoop","detail":"{readStateIndex:1434; appliedIndex:1433; }","duration":"356.467246ms","start":"2024-06-03T12:27:37.037517Z","end":"2024-06-03T12:27:37.393984Z","steps":["trace[1633994126] 'read index received'  (duration: 96.233874ms)","trace[1633994126] 'applied index is now lower than readState.Index'  (duration: 260.232267ms)"],"step_count":2}
	{"level":"warn","ts":"2024-06-03T12:27:37.394072Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"356.565365ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-06-03T12:27:37.394091Z","caller":"traceutil/trace.go:171","msg":"trace[1276225728] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1231; }","duration":"356.608442ms","start":"2024-06-03T12:27:37.037475Z","end":"2024-06-03T12:27:37.394084Z","steps":["trace[1276225728] 'agreement among raft nodes before linearized reading'  (duration: 356.55865ms)"],"step_count":1}
	{"level":"warn","ts":"2024-06-03T12:27:37.394125Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-06-03T12:27:37.037462Z","time spent":"356.651315ms","remote":"127.0.0.1:43452","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":27,"request content":"key:\"/registry/health\" "}
	
	
	==> kernel <==
	 12:28:32 up 21 min,  0 users,  load average: 0.21, 0.19, 0.12
	Linux default-k8s-diff-port-196710 5.10.207 #1 SMP Wed May 22 22:17:16 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [5b367908648772f0d2858869b62792dda2fa40783a9edb86115c821eb7424e80] <==
	I0603 12:23:01.038429       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0603 12:25:01.038069       1 handler_proxy.go:93] no RequestInfo found in the context
	E0603 12:25:01.038151       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0603 12:25:01.038167       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0603 12:25:01.039296       1 handler_proxy.go:93] no RequestInfo found in the context
	E0603 12:25:01.039520       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0603 12:25:01.039598       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0603 12:27:00.042119       1 handler_proxy.go:93] no RequestInfo found in the context
	E0603 12:27:00.042521       1 controller.go:146] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	W0603 12:27:01.042867       1 handler_proxy.go:93] no RequestInfo found in the context
	E0603 12:27:01.042937       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0603 12:27:01.042949       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0603 12:27:01.043042       1 handler_proxy.go:93] no RequestInfo found in the context
	E0603 12:27:01.043174       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0603 12:27:01.044387       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0603 12:28:01.043305       1 handler_proxy.go:93] no RequestInfo found in the context
	E0603 12:28:01.043446       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0603 12:28:01.043455       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0603 12:28:01.044600       1 handler_proxy.go:93] no RequestInfo found in the context
	E0603 12:28:01.044856       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0603 12:28:01.044915       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [dde9542f2848b9387f40071ed004994aebbf0e7b76409c9adff20fab6b868f6d] <==
	I0603 12:22:46.749891       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0603 12:23:16.272819       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0603 12:23:16.758615       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0603 12:23:22.882641       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-569cc877fc" duration="85.359µs"
	I0603 12:23:37.880815       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-569cc877fc" duration="261.972µs"
	E0603 12:23:46.278810       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0603 12:23:46.773338       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0603 12:24:16.284693       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0603 12:24:16.781346       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0603 12:24:46.290545       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0603 12:24:46.789498       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0603 12:25:16.295584       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0603 12:25:16.797675       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0603 12:25:46.301087       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0603 12:25:46.805421       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0603 12:26:16.306803       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0603 12:26:16.813910       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0603 12:26:46.312464       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0603 12:26:46.827472       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0603 12:27:16.317929       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0603 12:27:16.835184       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0603 12:27:46.323446       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0603 12:27:46.845096       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0603 12:28:16.330041       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0603 12:28:16.853395       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [ba8f260f4f1470e1baa981a7b6c5a8b69258e02906671f2ff9d5b6da4130643c] <==
	I0603 12:12:18.080204       1 server_linux.go:69] "Using iptables proxy"
	I0603 12:12:18.354128       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.61.60"]
	I0603 12:12:18.801660       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0603 12:12:18.803081       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0603 12:12:18.803169       1 server_linux.go:165] "Using iptables Proxier"
	I0603 12:12:18.807556       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0603 12:12:18.809090       1 server.go:872] "Version info" version="v1.30.1"
	I0603 12:12:18.809410       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0603 12:12:18.811029       1 config.go:192] "Starting service config controller"
	I0603 12:12:18.811063       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0603 12:12:18.811143       1 config.go:101] "Starting endpoint slice config controller"
	I0603 12:12:18.811161       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0603 12:12:18.811900       1 config.go:319] "Starting node config controller"
	I0603 12:12:18.811936       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0603 12:12:18.912003       1 shared_informer.go:320] Caches are synced for node config
	I0603 12:12:18.912063       1 shared_informer.go:320] Caches are synced for service config
	I0603 12:12:18.912102       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [458d45e7061f123547eb39c6b4c985a9a06f0399195f1c6137493845029be051] <==
	W0603 12:12:00.035055       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0603 12:12:00.035091       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0603 12:12:00.035143       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0603 12:12:00.035172       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0603 12:12:00.854395       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0603 12:12:00.854453       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0603 12:12:00.891275       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0603 12:12:00.891337       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0603 12:12:00.948612       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0603 12:12:00.949232       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0603 12:12:00.996342       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0603 12:12:00.996685       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0603 12:12:01.033007       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0603 12:12:01.033059       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0603 12:12:01.045058       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0603 12:12:01.045142       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0603 12:12:01.177617       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0603 12:12:01.177646       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0603 12:12:01.244165       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0603 12:12:01.244240       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0603 12:12:01.262102       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0603 12:12:01.262468       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0603 12:12:01.298028       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0603 12:12:01.298143       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0603 12:12:04.517128       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jun 03 12:26:02 default-k8s-diff-port-196710 kubelet[3904]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jun 03 12:26:13 default-k8s-diff-port-196710 kubelet[3904]: E0603 12:26:13.867240    3904 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-lxvbp" podUID="36c7a3c5-6b64-42d2-93ed-2f6cf8234a7f"
	Jun 03 12:26:28 default-k8s-diff-port-196710 kubelet[3904]: E0603 12:26:28.867381    3904 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-lxvbp" podUID="36c7a3c5-6b64-42d2-93ed-2f6cf8234a7f"
	Jun 03 12:26:39 default-k8s-diff-port-196710 kubelet[3904]: E0603 12:26:39.867251    3904 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-lxvbp" podUID="36c7a3c5-6b64-42d2-93ed-2f6cf8234a7f"
	Jun 03 12:26:54 default-k8s-diff-port-196710 kubelet[3904]: E0603 12:26:54.866564    3904 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-lxvbp" podUID="36c7a3c5-6b64-42d2-93ed-2f6cf8234a7f"
	Jun 03 12:27:02 default-k8s-diff-port-196710 kubelet[3904]: E0603 12:27:02.887859    3904 iptables.go:577] "Could not set up iptables canary" err=<
	Jun 03 12:27:02 default-k8s-diff-port-196710 kubelet[3904]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jun 03 12:27:02 default-k8s-diff-port-196710 kubelet[3904]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jun 03 12:27:02 default-k8s-diff-port-196710 kubelet[3904]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 03 12:27:02 default-k8s-diff-port-196710 kubelet[3904]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jun 03 12:27:08 default-k8s-diff-port-196710 kubelet[3904]: E0603 12:27:08.866591    3904 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-lxvbp" podUID="36c7a3c5-6b64-42d2-93ed-2f6cf8234a7f"
	Jun 03 12:27:20 default-k8s-diff-port-196710 kubelet[3904]: E0603 12:27:20.866244    3904 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-lxvbp" podUID="36c7a3c5-6b64-42d2-93ed-2f6cf8234a7f"
	Jun 03 12:27:31 default-k8s-diff-port-196710 kubelet[3904]: E0603 12:27:31.868087    3904 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-lxvbp" podUID="36c7a3c5-6b64-42d2-93ed-2f6cf8234a7f"
	Jun 03 12:27:43 default-k8s-diff-port-196710 kubelet[3904]: E0603 12:27:43.866752    3904 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-lxvbp" podUID="36c7a3c5-6b64-42d2-93ed-2f6cf8234a7f"
	Jun 03 12:27:57 default-k8s-diff-port-196710 kubelet[3904]: E0603 12:27:57.869592    3904 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-lxvbp" podUID="36c7a3c5-6b64-42d2-93ed-2f6cf8234a7f"
	Jun 03 12:28:02 default-k8s-diff-port-196710 kubelet[3904]: E0603 12:28:02.885114    3904 iptables.go:577] "Could not set up iptables canary" err=<
	Jun 03 12:28:02 default-k8s-diff-port-196710 kubelet[3904]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jun 03 12:28:02 default-k8s-diff-port-196710 kubelet[3904]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jun 03 12:28:02 default-k8s-diff-port-196710 kubelet[3904]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 03 12:28:02 default-k8s-diff-port-196710 kubelet[3904]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jun 03 12:28:09 default-k8s-diff-port-196710 kubelet[3904]: E0603 12:28:09.867182    3904 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-lxvbp" podUID="36c7a3c5-6b64-42d2-93ed-2f6cf8234a7f"
	Jun 03 12:28:22 default-k8s-diff-port-196710 kubelet[3904]: E0603 12:28:22.910023    3904 remote_image.go:180] "PullImage from image service failed" err="rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Jun 03 12:28:22 default-k8s-diff-port-196710 kubelet[3904]: E0603 12:28:22.910229    3904 kuberuntime_image.go:55] "Failed to pull image" err="pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Jun 03 12:28:22 default-k8s-diff-port-196710 kubelet[3904]: E0603 12:28:22.911089    3904 kuberuntime_manager.go:1256] container &Container{Name:metrics-server,Image:fake.domain/registry.k8s.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=60s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{209715200 0} {<nil>}  BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-24ljb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod metrics-server-569cc877fc-lxvbp_kube-system(36c7a3c5-6b64-42d2-93ed-2f6cf8234a7f): ErrImagePull: pinging container registry fake.domain: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain: no such host
	Jun 03 12:28:22 default-k8s-diff-port-196710 kubelet[3904]: E0603 12:28:22.911186    3904 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-569cc877fc-lxvbp" podUID="36c7a3c5-6b64-42d2-93ed-2f6cf8234a7f"
	
	
	==> storage-provisioner [3f837113d05b0531663797495d73bc896224b9a6ab02d0fe3c02cd3c156895be] <==
	I0603 12:12:19.024003       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0603 12:12:19.087620       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0603 12:12:19.087800       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0603 12:12:19.200059       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0603 12:12:19.200196       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-196710_1de1ea92-7376-4d57-816b-e247ab67fb90!
	I0603 12:12:19.202832       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"6ae0bc27-769d-44cb-9d0e-4216ece97ab8", APIVersion:"v1", ResourceVersion:"450", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-196710_1de1ea92-7376-4d57-816b-e247ab67fb90 became leader
	I0603 12:12:19.300509       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-196710_1de1ea92-7376-4d57-816b-e247ab67fb90!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-196710 -n default-k8s-diff-port-196710
helpers_test.go:261: (dbg) Run:  kubectl --context default-k8s-diff-port-196710 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-569cc877fc-lxvbp
helpers_test.go:274: ======> post-mortem[TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context default-k8s-diff-port-196710 describe pod metrics-server-569cc877fc-lxvbp
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-196710 describe pod metrics-server-569cc877fc-lxvbp: exit status 1 (64.375916ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-569cc877fc-lxvbp" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context default-k8s-diff-port-196710 describe pod metrics-server-569cc877fc-lxvbp: exit status 1
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (426.84s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (316.61s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
start_stop_delete_test.go:287: ***** TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-725022 -n embed-certs-725022
start_stop_delete_test.go:287: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: showing logs for failed pods as of 2024-06-03 12:27:25.820630469 +0000 UTC m=+6545.725050351
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-725022 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context embed-certs-725022 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (1.865µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context embed-certs-725022 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-725022 -n embed-certs-725022
helpers_test.go:244: <<< TestStartStop/group/embed-certs/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/embed-certs/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-725022 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-725022 logs -n 25: (1.17076217s)
helpers_test.go:252: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p calico-034991 sudo                                  | calico-034991                | jenkins | v1.33.1 | 03 Jun 24 11:58 UTC | 03 Jun 24 11:58 UTC |
	|         | systemctl cat crio --no-pager                          |                              |         |         |                     |                     |
	| ssh     | -p calico-034991 sudo find                             | calico-034991                | jenkins | v1.33.1 | 03 Jun 24 11:58 UTC | 03 Jun 24 11:58 UTC |
	|         | /etc/crio -type f -exec sh -c                          |                              |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                   |                              |         |         |                     |                     |
	| ssh     | -p calico-034991 sudo crio                             | calico-034991                | jenkins | v1.33.1 | 03 Jun 24 11:58 UTC | 03 Jun 24 11:58 UTC |
	|         | config                                                 |                              |         |         |                     |                     |
	| delete  | -p calico-034991                                       | calico-034991                | jenkins | v1.33.1 | 03 Jun 24 11:58 UTC | 03 Jun 24 11:58 UTC |
	| delete  | -p                                                     | disable-driver-mounts-231568 | jenkins | v1.33.1 | 03 Jun 24 11:58 UTC | 03 Jun 24 11:58 UTC |
	|         | disable-driver-mounts-231568                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-196710 | jenkins | v1.33.1 | 03 Jun 24 11:58 UTC | 03 Jun 24 11:59 UTC |
	|         | default-k8s-diff-port-196710                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-725022            | embed-certs-725022           | jenkins | v1.33.1 | 03 Jun 24 11:59 UTC | 03 Jun 24 11:59 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-725022                                  | embed-certs-725022           | jenkins | v1.33.1 | 03 Jun 24 11:59 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-602118             | no-preload-602118            | jenkins | v1.33.1 | 03 Jun 24 11:59 UTC | 03 Jun 24 11:59 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-602118                                   | no-preload-602118            | jenkins | v1.33.1 | 03 Jun 24 11:59 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-196710  | default-k8s-diff-port-196710 | jenkins | v1.33.1 | 03 Jun 24 11:59 UTC | 03 Jun 24 11:59 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-196710 | jenkins | v1.33.1 | 03 Jun 24 11:59 UTC |                     |
	|         | default-k8s-diff-port-196710                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-905554        | old-k8s-version-905554       | jenkins | v1.33.1 | 03 Jun 24 12:01 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-725022                 | embed-certs-725022           | jenkins | v1.33.1 | 03 Jun 24 12:01 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-725022                                  | embed-certs-725022           | jenkins | v1.33.1 | 03 Jun 24 12:01 UTC | 03 Jun 24 12:13 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.1                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-602118                  | no-preload-602118            | jenkins | v1.33.1 | 03 Jun 24 12:01 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-602118                                   | no-preload-602118            | jenkins | v1.33.1 | 03 Jun 24 12:02 UTC | 03 Jun 24 12:12 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.1                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-196710       | default-k8s-diff-port-196710 | jenkins | v1.33.1 | 03 Jun 24 12:02 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-196710 | jenkins | v1.33.1 | 03 Jun 24 12:02 UTC | 03 Jun 24 12:12 UTC |
	|         | default-k8s-diff-port-196710                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.1                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-905554                              | old-k8s-version-905554       | jenkins | v1.33.1 | 03 Jun 24 12:02 UTC | 03 Jun 24 12:02 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-905554             | old-k8s-version-905554       | jenkins | v1.33.1 | 03 Jun 24 12:02 UTC | 03 Jun 24 12:03 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-905554                              | old-k8s-version-905554       | jenkins | v1.33.1 | 03 Jun 24 12:03 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| delete  | -p old-k8s-version-905554                              | old-k8s-version-905554       | jenkins | v1.33.1 | 03 Jun 24 12:27 UTC | 03 Jun 24 12:27 UTC |
	| start   | -p newest-cni-756935 --memory=2200 --alsologtostderr   | newest-cni-756935            | jenkins | v1.33.1 | 03 Jun 24 12:27 UTC |                     |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.1                           |                              |         |         |                     |                     |
	| delete  | -p no-preload-602118                                   | no-preload-602118            | jenkins | v1.33.1 | 03 Jun 24 12:27 UTC | 03 Jun 24 12:27 UTC |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/06/03 12:27:04
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0603 12:27:04.275414   80344 out.go:291] Setting OutFile to fd 1 ...
	I0603 12:27:04.275696   80344 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0603 12:27:04.275707   80344 out.go:304] Setting ErrFile to fd 2...
	I0603 12:27:04.275711   80344 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0603 12:27:04.275936   80344 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19008-7755/.minikube/bin
	I0603 12:27:04.276602   80344 out.go:298] Setting JSON to false
	I0603 12:27:04.277624   80344 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":7769,"bootTime":1717409855,"procs":205,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1060-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0603 12:27:04.277682   80344 start.go:139] virtualization: kvm guest
	I0603 12:27:04.279985   80344 out.go:177] * [newest-cni-756935] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0603 12:27:04.281962   80344 out.go:177]   - MINIKUBE_LOCATION=19008
	I0603 12:27:04.281923   80344 notify.go:220] Checking for updates...
	I0603 12:27:04.283266   80344 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0603 12:27:04.284793   80344 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19008-7755/kubeconfig
	I0603 12:27:04.286045   80344 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19008-7755/.minikube
	I0603 12:27:04.287414   80344 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0603 12:27:04.288611   80344 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0603 12:27:04.290220   80344 config.go:182] Loaded profile config "default-k8s-diff-port-196710": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0603 12:27:04.290336   80344 config.go:182] Loaded profile config "embed-certs-725022": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0603 12:27:04.290440   80344 config.go:182] Loaded profile config "no-preload-602118": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0603 12:27:04.290543   80344 driver.go:392] Setting default libvirt URI to qemu:///system
	I0603 12:27:04.328615   80344 out.go:177] * Using the kvm2 driver based on user configuration
	I0603 12:27:04.329729   80344 start.go:297] selected driver: kvm2
	I0603 12:27:04.329747   80344 start.go:901] validating driver "kvm2" against <nil>
	I0603 12:27:04.329762   80344 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0603 12:27:04.330714   80344 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0603 12:27:04.330792   80344 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19008-7755/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0603 12:27:04.346317   80344 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0603 12:27:04.346374   80344 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	W0603 12:27:04.346434   80344 out.go:239] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I0603 12:27:04.346722   80344 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0603 12:27:04.346781   80344 cni.go:84] Creating CNI manager for ""
	I0603 12:27:04.346793   80344 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0603 12:27:04.346800   80344 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0603 12:27:04.346856   80344 start.go:340] cluster config:
	{Name:newest-cni-756935 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:newest-cni-756935 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0603 12:27:04.346953   80344 iso.go:125] acquiring lock: {Name:mkdc8e745fc6a0fd8e502f6ad2510510ae9abf27 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0603 12:27:04.348876   80344 out.go:177] * Starting "newest-cni-756935" primary control-plane node in "newest-cni-756935" cluster
	I0603 12:27:04.349993   80344 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime crio
	I0603 12:27:04.350031   80344 preload.go:147] Found local preload: /home/jenkins/minikube-integration/19008-7755/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4
	I0603 12:27:04.350043   80344 cache.go:56] Caching tarball of preloaded images
	I0603 12:27:04.350128   80344 preload.go:173] Found /home/jenkins/minikube-integration/19008-7755/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0603 12:27:04.350138   80344 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on crio
	I0603 12:27:04.350216   80344 profile.go:143] Saving config to /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/newest-cni-756935/config.json ...
	I0603 12:27:04.350232   80344 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/newest-cni-756935/config.json: {Name:mke47539e9b14ee756d0e1756e2aee20fecc5c08 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 12:27:04.350350   80344 start.go:360] acquireMachinesLock for newest-cni-756935: {Name:mk7389fd163f37e28f7d4d842bab151f7e27bc7c Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0603 12:27:04.350377   80344 start.go:364] duration metric: took 14.246µs to acquireMachinesLock for "newest-cni-756935"
	I0603 12:27:04.350393   80344 start.go:93] Provisioning new machine with config: &{Name:newest-cni-756935 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:newest-cni-756935 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0603 12:27:04.350447   80344 start.go:125] createHost starting for "" (driver="kvm2")
	I0603 12:27:04.351920   80344 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0603 12:27:04.352044   80344 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 12:27:04.352085   80344 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 12:27:04.365657   80344 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42779
	I0603 12:27:04.366129   80344 main.go:141] libmachine: () Calling .GetVersion
	I0603 12:27:04.366708   80344 main.go:141] libmachine: Using API Version  1
	I0603 12:27:04.366731   80344 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 12:27:04.367010   80344 main.go:141] libmachine: () Calling .GetMachineName
	I0603 12:27:04.367220   80344 main.go:141] libmachine: (newest-cni-756935) Calling .GetMachineName
	I0603 12:27:04.367357   80344 main.go:141] libmachine: (newest-cni-756935) Calling .DriverName
	I0603 12:27:04.367514   80344 start.go:159] libmachine.API.Create for "newest-cni-756935" (driver="kvm2")
	I0603 12:27:04.367545   80344 client.go:168] LocalClient.Create starting
	I0603 12:27:04.367576   80344 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19008-7755/.minikube/certs/ca.pem
	I0603 12:27:04.367608   80344 main.go:141] libmachine: Decoding PEM data...
	I0603 12:27:04.367621   80344 main.go:141] libmachine: Parsing certificate...
	I0603 12:27:04.367666   80344 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19008-7755/.minikube/certs/cert.pem
	I0603 12:27:04.367682   80344 main.go:141] libmachine: Decoding PEM data...
	I0603 12:27:04.367693   80344 main.go:141] libmachine: Parsing certificate...
	I0603 12:27:04.367708   80344 main.go:141] libmachine: Running pre-create checks...
	I0603 12:27:04.367716   80344 main.go:141] libmachine: (newest-cni-756935) Calling .PreCreateCheck
	I0603 12:27:04.368011   80344 main.go:141] libmachine: (newest-cni-756935) Calling .GetConfigRaw
	I0603 12:27:04.368418   80344 main.go:141] libmachine: Creating machine...
	I0603 12:27:04.368430   80344 main.go:141] libmachine: (newest-cni-756935) Calling .Create
	I0603 12:27:04.368562   80344 main.go:141] libmachine: (newest-cni-756935) Creating KVM machine...
	I0603 12:27:04.369670   80344 main.go:141] libmachine: (newest-cni-756935) DBG | found existing default KVM network
	I0603 12:27:04.371088   80344 main.go:141] libmachine: (newest-cni-756935) DBG | I0603 12:27:04.370947   80367 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00010f800}
	I0603 12:27:04.371123   80344 main.go:141] libmachine: (newest-cni-756935) DBG | created network xml: 
	I0603 12:27:04.371138   80344 main.go:141] libmachine: (newest-cni-756935) DBG | <network>
	I0603 12:27:04.371170   80344 main.go:141] libmachine: (newest-cni-756935) DBG |   <name>mk-newest-cni-756935</name>
	I0603 12:27:04.371207   80344 main.go:141] libmachine: (newest-cni-756935) DBG |   <dns enable='no'/>
	I0603 12:27:04.371220   80344 main.go:141] libmachine: (newest-cni-756935) DBG |   
	I0603 12:27:04.371230   80344 main.go:141] libmachine: (newest-cni-756935) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0603 12:27:04.371240   80344 main.go:141] libmachine: (newest-cni-756935) DBG |     <dhcp>
	I0603 12:27:04.371246   80344 main.go:141] libmachine: (newest-cni-756935) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0603 12:27:04.371254   80344 main.go:141] libmachine: (newest-cni-756935) DBG |     </dhcp>
	I0603 12:27:04.371262   80344 main.go:141] libmachine: (newest-cni-756935) DBG |   </ip>
	I0603 12:27:04.371339   80344 main.go:141] libmachine: (newest-cni-756935) DBG |   
	I0603 12:27:04.371365   80344 main.go:141] libmachine: (newest-cni-756935) DBG | </network>
	I0603 12:27:04.371377   80344 main.go:141] libmachine: (newest-cni-756935) DBG | 
	I0603 12:27:04.376085   80344 main.go:141] libmachine: (newest-cni-756935) DBG | trying to create private KVM network mk-newest-cni-756935 192.168.39.0/24...
	I0603 12:27:04.444801   80344 main.go:141] libmachine: (newest-cni-756935) DBG | private KVM network mk-newest-cni-756935 192.168.39.0/24 created
	I0603 12:27:04.444827   80344 main.go:141] libmachine: (newest-cni-756935) Setting up store path in /home/jenkins/minikube-integration/19008-7755/.minikube/machines/newest-cni-756935 ...
	I0603 12:27:04.444849   80344 main.go:141] libmachine: (newest-cni-756935) Building disk image from file:///home/jenkins/minikube-integration/19008-7755/.minikube/cache/iso/amd64/minikube-v1.33.1-1716398070-18934-amd64.iso
	I0603 12:27:04.444917   80344 main.go:141] libmachine: (newest-cni-756935) DBG | I0603 12:27:04.444854   80367 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19008-7755/.minikube
	I0603 12:27:04.445079   80344 main.go:141] libmachine: (newest-cni-756935) Downloading /home/jenkins/minikube-integration/19008-7755/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19008-7755/.minikube/cache/iso/amd64/minikube-v1.33.1-1716398070-18934-amd64.iso...
	I0603 12:27:04.690944   80344 main.go:141] libmachine: (newest-cni-756935) DBG | I0603 12:27:04.690819   80367 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19008-7755/.minikube/machines/newest-cni-756935/id_rsa...
	I0603 12:27:04.899996   80344 main.go:141] libmachine: (newest-cni-756935) DBG | I0603 12:27:04.899874   80367 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19008-7755/.minikube/machines/newest-cni-756935/newest-cni-756935.rawdisk...
	I0603 12:27:04.900030   80344 main.go:141] libmachine: (newest-cni-756935) DBG | Writing magic tar header
	I0603 12:27:04.900046   80344 main.go:141] libmachine: (newest-cni-756935) DBG | Writing SSH key tar header
	I0603 12:27:04.900058   80344 main.go:141] libmachine: (newest-cni-756935) DBG | I0603 12:27:04.899999   80367 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19008-7755/.minikube/machines/newest-cni-756935 ...
	I0603 12:27:04.900136   80344 main.go:141] libmachine: (newest-cni-756935) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19008-7755/.minikube/machines/newest-cni-756935
	I0603 12:27:04.900162   80344 main.go:141] libmachine: (newest-cni-756935) Setting executable bit set on /home/jenkins/minikube-integration/19008-7755/.minikube/machines/newest-cni-756935 (perms=drwx------)
	I0603 12:27:04.900177   80344 main.go:141] libmachine: (newest-cni-756935) Setting executable bit set on /home/jenkins/minikube-integration/19008-7755/.minikube/machines (perms=drwxr-xr-x)
	I0603 12:27:04.900194   80344 main.go:141] libmachine: (newest-cni-756935) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19008-7755/.minikube/machines
	I0603 12:27:04.900210   80344 main.go:141] libmachine: (newest-cni-756935) Setting executable bit set on /home/jenkins/minikube-integration/19008-7755/.minikube (perms=drwxr-xr-x)
	I0603 12:27:04.900228   80344 main.go:141] libmachine: (newest-cni-756935) Setting executable bit set on /home/jenkins/minikube-integration/19008-7755 (perms=drwxrwxr-x)
	I0603 12:27:04.900242   80344 main.go:141] libmachine: (newest-cni-756935) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0603 12:27:04.900273   80344 main.go:141] libmachine: (newest-cni-756935) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0603 12:27:04.900289   80344 main.go:141] libmachine: (newest-cni-756935) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19008-7755/.minikube
	I0603 12:27:04.900296   80344 main.go:141] libmachine: (newest-cni-756935) Creating domain...
	I0603 12:27:04.900311   80344 main.go:141] libmachine: (newest-cni-756935) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19008-7755
	I0603 12:27:04.900324   80344 main.go:141] libmachine: (newest-cni-756935) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0603 12:27:04.900336   80344 main.go:141] libmachine: (newest-cni-756935) DBG | Checking permissions on dir: /home/jenkins
	I0603 12:27:04.900345   80344 main.go:141] libmachine: (newest-cni-756935) DBG | Checking permissions on dir: /home
	I0603 12:27:04.900382   80344 main.go:141] libmachine: (newest-cni-756935) DBG | Skipping /home - not owner
	I0603 12:27:04.901580   80344 main.go:141] libmachine: (newest-cni-756935) define libvirt domain using xml: 
	I0603 12:27:04.901603   80344 main.go:141] libmachine: (newest-cni-756935) <domain type='kvm'>
	I0603 12:27:04.901613   80344 main.go:141] libmachine: (newest-cni-756935)   <name>newest-cni-756935</name>
	I0603 12:27:04.901621   80344 main.go:141] libmachine: (newest-cni-756935)   <memory unit='MiB'>2200</memory>
	I0603 12:27:04.901630   80344 main.go:141] libmachine: (newest-cni-756935)   <vcpu>2</vcpu>
	I0603 12:27:04.901641   80344 main.go:141] libmachine: (newest-cni-756935)   <features>
	I0603 12:27:04.901652   80344 main.go:141] libmachine: (newest-cni-756935)     <acpi/>
	I0603 12:27:04.901661   80344 main.go:141] libmachine: (newest-cni-756935)     <apic/>
	I0603 12:27:04.901671   80344 main.go:141] libmachine: (newest-cni-756935)     <pae/>
	I0603 12:27:04.901679   80344 main.go:141] libmachine: (newest-cni-756935)     
	I0603 12:27:04.901687   80344 main.go:141] libmachine: (newest-cni-756935)   </features>
	I0603 12:27:04.901699   80344 main.go:141] libmachine: (newest-cni-756935)   <cpu mode='host-passthrough'>
	I0603 12:27:04.901714   80344 main.go:141] libmachine: (newest-cni-756935)   
	I0603 12:27:04.901733   80344 main.go:141] libmachine: (newest-cni-756935)   </cpu>
	I0603 12:27:04.901741   80344 main.go:141] libmachine: (newest-cni-756935)   <os>
	I0603 12:27:04.901751   80344 main.go:141] libmachine: (newest-cni-756935)     <type>hvm</type>
	I0603 12:27:04.901763   80344 main.go:141] libmachine: (newest-cni-756935)     <boot dev='cdrom'/>
	I0603 12:27:04.901772   80344 main.go:141] libmachine: (newest-cni-756935)     <boot dev='hd'/>
	I0603 12:27:04.901783   80344 main.go:141] libmachine: (newest-cni-756935)     <bootmenu enable='no'/>
	I0603 12:27:04.901807   80344 main.go:141] libmachine: (newest-cni-756935)   </os>
	I0603 12:27:04.901826   80344 main.go:141] libmachine: (newest-cni-756935)   <devices>
	I0603 12:27:04.901836   80344 main.go:141] libmachine: (newest-cni-756935)     <disk type='file' device='cdrom'>
	I0603 12:27:04.901847   80344 main.go:141] libmachine: (newest-cni-756935)       <source file='/home/jenkins/minikube-integration/19008-7755/.minikube/machines/newest-cni-756935/boot2docker.iso'/>
	I0603 12:27:04.901856   80344 main.go:141] libmachine: (newest-cni-756935)       <target dev='hdc' bus='scsi'/>
	I0603 12:27:04.901863   80344 main.go:141] libmachine: (newest-cni-756935)       <readonly/>
	I0603 12:27:04.901869   80344 main.go:141] libmachine: (newest-cni-756935)     </disk>
	I0603 12:27:04.901877   80344 main.go:141] libmachine: (newest-cni-756935)     <disk type='file' device='disk'>
	I0603 12:27:04.901884   80344 main.go:141] libmachine: (newest-cni-756935)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0603 12:27:04.901894   80344 main.go:141] libmachine: (newest-cni-756935)       <source file='/home/jenkins/minikube-integration/19008-7755/.minikube/machines/newest-cni-756935/newest-cni-756935.rawdisk'/>
	I0603 12:27:04.901902   80344 main.go:141] libmachine: (newest-cni-756935)       <target dev='hda' bus='virtio'/>
	I0603 12:27:04.901909   80344 main.go:141] libmachine: (newest-cni-756935)     </disk>
	I0603 12:27:04.901915   80344 main.go:141] libmachine: (newest-cni-756935)     <interface type='network'>
	I0603 12:27:04.901925   80344 main.go:141] libmachine: (newest-cni-756935)       <source network='mk-newest-cni-756935'/>
	I0603 12:27:04.901955   80344 main.go:141] libmachine: (newest-cni-756935)       <model type='virtio'/>
	I0603 12:27:04.901979   80344 main.go:141] libmachine: (newest-cni-756935)     </interface>
	I0603 12:27:04.901994   80344 main.go:141] libmachine: (newest-cni-756935)     <interface type='network'>
	I0603 12:27:04.902005   80344 main.go:141] libmachine: (newest-cni-756935)       <source network='default'/>
	I0603 12:27:04.902017   80344 main.go:141] libmachine: (newest-cni-756935)       <model type='virtio'/>
	I0603 12:27:04.902027   80344 main.go:141] libmachine: (newest-cni-756935)     </interface>
	I0603 12:27:04.902038   80344 main.go:141] libmachine: (newest-cni-756935)     <serial type='pty'>
	I0603 12:27:04.902049   80344 main.go:141] libmachine: (newest-cni-756935)       <target port='0'/>
	I0603 12:27:04.902060   80344 main.go:141] libmachine: (newest-cni-756935)     </serial>
	I0603 12:27:04.902073   80344 main.go:141] libmachine: (newest-cni-756935)     <console type='pty'>
	I0603 12:27:04.902091   80344 main.go:141] libmachine: (newest-cni-756935)       <target type='serial' port='0'/>
	I0603 12:27:04.902102   80344 main.go:141] libmachine: (newest-cni-756935)     </console>
	I0603 12:27:04.902115   80344 main.go:141] libmachine: (newest-cni-756935)     <rng model='virtio'>
	I0603 12:27:04.902127   80344 main.go:141] libmachine: (newest-cni-756935)       <backend model='random'>/dev/random</backend>
	I0603 12:27:04.902152   80344 main.go:141] libmachine: (newest-cni-756935)     </rng>
	I0603 12:27:04.902166   80344 main.go:141] libmachine: (newest-cni-756935)     
	I0603 12:27:04.902178   80344 main.go:141] libmachine: (newest-cni-756935)     
	I0603 12:27:04.902184   80344 main.go:141] libmachine: (newest-cni-756935)   </devices>
	I0603 12:27:04.902194   80344 main.go:141] libmachine: (newest-cni-756935) </domain>
	I0603 12:27:04.902202   80344 main.go:141] libmachine: (newest-cni-756935) 
	I0603 12:27:04.906908   80344 main.go:141] libmachine: (newest-cni-756935) DBG | domain newest-cni-756935 has defined MAC address 52:54:00:77:c0:08 in network default
	I0603 12:27:04.907451   80344 main.go:141] libmachine: (newest-cni-756935) Ensuring networks are active...
	I0603 12:27:04.907477   80344 main.go:141] libmachine: (newest-cni-756935) DBG | domain newest-cni-756935 has defined MAC address 52:54:00:fc:11:a0 in network mk-newest-cni-756935
	I0603 12:27:04.908190   80344 main.go:141] libmachine: (newest-cni-756935) Ensuring network default is active
	I0603 12:27:04.908614   80344 main.go:141] libmachine: (newest-cni-756935) Ensuring network mk-newest-cni-756935 is active
	I0603 12:27:04.909284   80344 main.go:141] libmachine: (newest-cni-756935) Getting domain xml...
	I0603 12:27:04.910393   80344 main.go:141] libmachine: (newest-cni-756935) Creating domain...
	I0603 12:27:06.169268   80344 main.go:141] libmachine: (newest-cni-756935) Waiting to get IP...
	I0603 12:27:06.169996   80344 main.go:141] libmachine: (newest-cni-756935) DBG | domain newest-cni-756935 has defined MAC address 52:54:00:fc:11:a0 in network mk-newest-cni-756935
	I0603 12:27:06.170495   80344 main.go:141] libmachine: (newest-cni-756935) DBG | unable to find current IP address of domain newest-cni-756935 in network mk-newest-cni-756935
	I0603 12:27:06.170516   80344 main.go:141] libmachine: (newest-cni-756935) DBG | I0603 12:27:06.170474   80367 retry.go:31] will retry after 303.035196ms: waiting for machine to come up
	I0603 12:27:06.474940   80344 main.go:141] libmachine: (newest-cni-756935) DBG | domain newest-cni-756935 has defined MAC address 52:54:00:fc:11:a0 in network mk-newest-cni-756935
	I0603 12:27:06.475460   80344 main.go:141] libmachine: (newest-cni-756935) DBG | unable to find current IP address of domain newest-cni-756935 in network mk-newest-cni-756935
	I0603 12:27:06.475486   80344 main.go:141] libmachine: (newest-cni-756935) DBG | I0603 12:27:06.475416   80367 retry.go:31] will retry after 248.394267ms: waiting for machine to come up
	I0603 12:27:06.725890   80344 main.go:141] libmachine: (newest-cni-756935) DBG | domain newest-cni-756935 has defined MAC address 52:54:00:fc:11:a0 in network mk-newest-cni-756935
	I0603 12:27:06.726385   80344 main.go:141] libmachine: (newest-cni-756935) DBG | unable to find current IP address of domain newest-cni-756935 in network mk-newest-cni-756935
	I0603 12:27:06.726407   80344 main.go:141] libmachine: (newest-cni-756935) DBG | I0603 12:27:06.726348   80367 retry.go:31] will retry after 462.278296ms: waiting for machine to come up
	I0603 12:27:07.189866   80344 main.go:141] libmachine: (newest-cni-756935) DBG | domain newest-cni-756935 has defined MAC address 52:54:00:fc:11:a0 in network mk-newest-cni-756935
	I0603 12:27:07.190385   80344 main.go:141] libmachine: (newest-cni-756935) DBG | unable to find current IP address of domain newest-cni-756935 in network mk-newest-cni-756935
	I0603 12:27:07.190412   80344 main.go:141] libmachine: (newest-cni-756935) DBG | I0603 12:27:07.190345   80367 retry.go:31] will retry after 379.593415ms: waiting for machine to come up
	I0603 12:27:07.571950   80344 main.go:141] libmachine: (newest-cni-756935) DBG | domain newest-cni-756935 has defined MAC address 52:54:00:fc:11:a0 in network mk-newest-cni-756935
	I0603 12:27:07.572441   80344 main.go:141] libmachine: (newest-cni-756935) DBG | unable to find current IP address of domain newest-cni-756935 in network mk-newest-cni-756935
	I0603 12:27:07.572473   80344 main.go:141] libmachine: (newest-cni-756935) DBG | I0603 12:27:07.572377   80367 retry.go:31] will retry after 473.682836ms: waiting for machine to come up
	I0603 12:27:08.048083   80344 main.go:141] libmachine: (newest-cni-756935) DBG | domain newest-cni-756935 has defined MAC address 52:54:00:fc:11:a0 in network mk-newest-cni-756935
	I0603 12:27:08.048719   80344 main.go:141] libmachine: (newest-cni-756935) DBG | unable to find current IP address of domain newest-cni-756935 in network mk-newest-cni-756935
	I0603 12:27:08.048746   80344 main.go:141] libmachine: (newest-cni-756935) DBG | I0603 12:27:08.048659   80367 retry.go:31] will retry after 807.182433ms: waiting for machine to come up
	I0603 12:27:08.857554   80344 main.go:141] libmachine: (newest-cni-756935) DBG | domain newest-cni-756935 has defined MAC address 52:54:00:fc:11:a0 in network mk-newest-cni-756935
	I0603 12:27:08.857966   80344 main.go:141] libmachine: (newest-cni-756935) DBG | unable to find current IP address of domain newest-cni-756935 in network mk-newest-cni-756935
	I0603 12:27:08.857994   80344 main.go:141] libmachine: (newest-cni-756935) DBG | I0603 12:27:08.857942   80367 retry.go:31] will retry after 976.720983ms: waiting for machine to come up
	I0603 12:27:09.836238   80344 main.go:141] libmachine: (newest-cni-756935) DBG | domain newest-cni-756935 has defined MAC address 52:54:00:fc:11:a0 in network mk-newest-cni-756935
	I0603 12:27:09.836689   80344 main.go:141] libmachine: (newest-cni-756935) DBG | unable to find current IP address of domain newest-cni-756935 in network mk-newest-cni-756935
	I0603 12:27:09.836717   80344 main.go:141] libmachine: (newest-cni-756935) DBG | I0603 12:27:09.836647   80367 retry.go:31] will retry after 909.433442ms: waiting for machine to come up
	I0603 12:27:11.044245   80344 main.go:141] libmachine: (newest-cni-756935) DBG | domain newest-cni-756935 has defined MAC address 52:54:00:fc:11:a0 in network mk-newest-cni-756935
	I0603 12:27:11.044711   80344 main.go:141] libmachine: (newest-cni-756935) DBG | unable to find current IP address of domain newest-cni-756935 in network mk-newest-cni-756935
	I0603 12:27:11.044752   80344 main.go:141] libmachine: (newest-cni-756935) DBG | I0603 12:27:11.044661   80367 retry.go:31] will retry after 1.605048217s: waiting for machine to come up
	I0603 12:27:12.652225   80344 main.go:141] libmachine: (newest-cni-756935) DBG | domain newest-cni-756935 has defined MAC address 52:54:00:fc:11:a0 in network mk-newest-cni-756935
	I0603 12:27:12.652688   80344 main.go:141] libmachine: (newest-cni-756935) DBG | unable to find current IP address of domain newest-cni-756935 in network mk-newest-cni-756935
	I0603 12:27:12.652711   80344 main.go:141] libmachine: (newest-cni-756935) DBG | I0603 12:27:12.652643   80367 retry.go:31] will retry after 2.164797027s: waiting for machine to come up
	I0603 12:27:14.819162   80344 main.go:141] libmachine: (newest-cni-756935) DBG | domain newest-cni-756935 has defined MAC address 52:54:00:fc:11:a0 in network mk-newest-cni-756935
	I0603 12:27:14.819607   80344 main.go:141] libmachine: (newest-cni-756935) DBG | unable to find current IP address of domain newest-cni-756935 in network mk-newest-cni-756935
	I0603 12:27:14.819626   80344 main.go:141] libmachine: (newest-cni-756935) DBG | I0603 12:27:14.819556   80367 retry.go:31] will retry after 2.124726491s: waiting for machine to come up
	I0603 12:27:16.946168   80344 main.go:141] libmachine: (newest-cni-756935) DBG | domain newest-cni-756935 has defined MAC address 52:54:00:fc:11:a0 in network mk-newest-cni-756935
	I0603 12:27:16.946704   80344 main.go:141] libmachine: (newest-cni-756935) DBG | unable to find current IP address of domain newest-cni-756935 in network mk-newest-cni-756935
	I0603 12:27:16.946729   80344 main.go:141] libmachine: (newest-cni-756935) DBG | I0603 12:27:16.946664   80367 retry.go:31] will retry after 3.007426818s: waiting for machine to come up
	I0603 12:27:19.955393   80344 main.go:141] libmachine: (newest-cni-756935) DBG | domain newest-cni-756935 has defined MAC address 52:54:00:fc:11:a0 in network mk-newest-cni-756935
	I0603 12:27:19.955844   80344 main.go:141] libmachine: (newest-cni-756935) DBG | unable to find current IP address of domain newest-cni-756935 in network mk-newest-cni-756935
	I0603 12:27:19.955872   80344 main.go:141] libmachine: (newest-cni-756935) DBG | I0603 12:27:19.955772   80367 retry.go:31] will retry after 3.212622144s: waiting for machine to come up
	I0603 12:27:23.171661   80344 main.go:141] libmachine: (newest-cni-756935) DBG | domain newest-cni-756935 has defined MAC address 52:54:00:fc:11:a0 in network mk-newest-cni-756935
	I0603 12:27:23.172097   80344 main.go:141] libmachine: (newest-cni-756935) DBG | unable to find current IP address of domain newest-cni-756935 in network mk-newest-cni-756935
	I0603 12:27:23.172127   80344 main.go:141] libmachine: (newest-cni-756935) DBG | I0603 12:27:23.172055   80367 retry.go:31] will retry after 4.335904922s: waiting for machine to come up
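	The repeated "will retry after …: waiting for machine to come up" lines above are the driver polling for the new domain's DHCP lease with growing delays. A minimal sketch of that kind of bounded retry loop with a jittered, increasing wait is shown below; the lookupIP poll function and the timing constants are assumptions for illustration, not the behaviour of minikube's retry.go.

	// Minimal sketch of a bounded retry loop with a growing, jittered delay,
	// in the spirit of the "will retry after ..." lines above. lookupIP is a
	// stand-in for asking libvirt for the domain's DHCP lease.
	package main

	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)

	// lookupIP pretends the machine only reports an address on the 5th poll.
	func lookupIP(attempt int) (string, error) {
		if attempt < 5 {
			return "", errors.New("machine has no IP yet")
		}
		return "192.168.39.10", nil
	}

	func main() {
		delay := 300 * time.Millisecond
		deadline := time.Now().Add(2 * time.Minute)

		for attempt := 1; ; attempt++ {
			ip, err := lookupIP(attempt)
			if err == nil {
				fmt.Printf("machine is up at %s after %d attempts\n", ip, attempt)
				return
			}
			if time.Now().After(deadline) {
				fmt.Println("gave up waiting for machine to come up")
				return
			}
			// Grow the delay and add jitter so repeated polls spread out,
			// roughly matching the increasing intervals in the log.
			wait := delay + time.Duration(rand.Int63n(int64(delay)))
			fmt.Printf("will retry after %v: waiting for machine to come up\n", wait)
			time.Sleep(wait)
			delay = delay * 3 / 2
		}
	}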
	
	
	==> CRI-O <==
	Jun 03 12:27:26 embed-certs-725022 crio[714]: time="2024-06-03 12:27:26.464210375Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=0f642ffc-c0c9-458e-87ff-6ae2c43b790a name=/runtime.v1.RuntimeService/Version
	Jun 03 12:27:26 embed-certs-725022 crio[714]: time="2024-06-03 12:27:26.466462946Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=0f32e9e9-b912-4e5f-bcff-7d0f8e8a02da name=/runtime.v1.ImageService/ImageFsInfo
	Jun 03 12:27:26 embed-certs-725022 crio[714]: time="2024-06-03 12:27:26.467098910Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1717417646467066688,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133260,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=0f32e9e9-b912-4e5f-bcff-7d0f8e8a02da name=/runtime.v1.ImageService/ImageFsInfo
	Jun 03 12:27:26 embed-certs-725022 crio[714]: time="2024-06-03 12:27:26.467928990Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=907ae17b-59bd-4fa4-9723-36dc7fd36ce8 name=/runtime.v1.RuntimeService/ListContainers
	Jun 03 12:27:26 embed-certs-725022 crio[714]: time="2024-06-03 12:27:26.468005349Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=907ae17b-59bd-4fa4-9723-36dc7fd36ce8 name=/runtime.v1.RuntimeService/ListContainers
	Jun 03 12:27:26 embed-certs-725022 crio[714]: time="2024-06-03 12:27:26.468280817Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:81efa28c7c7dd2a51a3f9a51e1a522ccf7a05e1e1baf0cea4ab447dce79f38bf,PodSandboxId:a5fd62b332d2e0f47de6d2e54dc8c97d65174923bfb278dd0e94cdfd2de334ef,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1717416783962830866,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cde9aa2d-6a26-4f83-b5df-ae24b22df27a,},Annotations:map[string]string{io.kubernetes.container.hash: f90461b5,io.kubernetes.container.restartCount: 0,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a7c67fb6c2145a21d4a3c1ef199af7d66bd41033a039015f271ba728ca06da0c,PodSandboxId:9408a07b4022b2322f3a058bd2a166203feefb543cc8f98fc352c0d40e2956e9,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1717416783589635760,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-x9fw5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1ed6c0e0-2d13-410f-bdf1-6620fb2503ed,},Annotations:map[string]string{io.kubernetes.container.hash: 151e7a97,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UD
P\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a2de82fcd9e26961a754f75e28a9615763855656f2c01077e5a39ba8e39e0388,PodSandboxId:d3d567d1a7ca6d85a49de00cb845f1c09098d5ad0402f2a21f402af5d745d48c,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1717416783493324578,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-4gbj2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0
e46c731-84e4-4cb2-8125-2b61c10916a3,},Annotations:map[string]string{io.kubernetes.container.hash: ce098b15,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:60795f3f2672bcb7b61e9b0e595d76ca8b666340913395131c61f92125dcf8d7,PodSandboxId:9e776b8b96d7b979fed7fd4862d6218fd94255e35326bdbcd020d4eb196e26ff,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt
:1717416782427911951,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-7qp6h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7869cd1d-785d-401d-aceb-854cffd63d73,},Annotations:map[string]string{io.kubernetes.container.hash: f004a87a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:be0185f4a9d1211ca7e7bbca26e8776fce45302381682c942aef1604e398e050,PodSandboxId:ba01582226858948b03eeb5c35bad675afd2ef261e42adee83c11179d36ba8b9,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1717416762970487271,Labels:map[string]
string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-725022,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3a8815b41f8de9ce6a4245aba1cc52be,},Annotations:map[string]string{io.kubernetes.container.hash: 2b34b2ca,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ef7d5692a59fa66dd7a2449ce50a23eb01aaaf7a99529a20323360c0ed999b68,PodSandboxId:5acdf3478feee4eb74296aa47f18358f10b4aaa5c0b7c7bcbc8dac3780de96f7,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1717416762957568928,Labels:map[string]string{io.kubernetes.container.n
ame: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-725022,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 38effa66b97159d08749fd23b6d37e6f,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2a4234747916814afaa8ee7a7a63a5ae355d6e907cffb138f828ed0676d9e7ce,PodSandboxId:25651ba709a5c48e60114630f713cd72dc4776f9484d7b98583855990d1b368b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1717416762997958517,Labels:map[string]string{io.kubernetes.container.name
: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-725022,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6cd351b07ac0ddcdf3965a97f9c3e0b5,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3cce6b24a5e40947bc39fbf2b9781d4c8694a66b6cd4b7c6d487e65dc24aff6a,PodSandboxId:40beb5c2d8ec5a58ad01a7658540bf6827ed1b9822ddd2f18bd52cbee506b037,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1717416762938298324,Labels:map[string]string{io.kubernetes.containe
r.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-725022,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e29b26fbef49942c734e3993559250ae,},Annotations:map[string]string{io.kubernetes.container.hash: ee6f4948,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=907ae17b-59bd-4fa4-9723-36dc7fd36ce8 name=/runtime.v1.RuntimeService/ListContainers
	Jun 03 12:27:26 embed-certs-725022 crio[714]: time="2024-06-03 12:27:26.511477763Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=fec183ce-f373-447b-b8f6-5f85ecf25718 name=/runtime.v1.RuntimeService/Version
	Jun 03 12:27:26 embed-certs-725022 crio[714]: time="2024-06-03 12:27:26.511539939Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=fec183ce-f373-447b-b8f6-5f85ecf25718 name=/runtime.v1.RuntimeService/Version
	Jun 03 12:27:26 embed-certs-725022 crio[714]: time="2024-06-03 12:27:26.512878112Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=10ae9083-e34f-44d6-8280-04adcc3c3f58 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 03 12:27:26 embed-certs-725022 crio[714]: time="2024-06-03 12:27:26.513283618Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1717417646513258491,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133260,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=10ae9083-e34f-44d6-8280-04adcc3c3f58 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 03 12:27:26 embed-certs-725022 crio[714]: time="2024-06-03 12:27:26.513844702Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=c1fff150-84f7-42f6-b895-82a816275997 name=/runtime.v1.RuntimeService/ListContainers
	Jun 03 12:27:26 embed-certs-725022 crio[714]: time="2024-06-03 12:27:26.513896950Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=c1fff150-84f7-42f6-b895-82a816275997 name=/runtime.v1.RuntimeService/ListContainers
	Jun 03 12:27:26 embed-certs-725022 crio[714]: time="2024-06-03 12:27:26.514274327Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:81efa28c7c7dd2a51a3f9a51e1a522ccf7a05e1e1baf0cea4ab447dce79f38bf,PodSandboxId:a5fd62b332d2e0f47de6d2e54dc8c97d65174923bfb278dd0e94cdfd2de334ef,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1717416783962830866,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cde9aa2d-6a26-4f83-b5df-ae24b22df27a,},Annotations:map[string]string{io.kubernetes.container.hash: f90461b5,io.kubernetes.container.restartCount: 0,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a7c67fb6c2145a21d4a3c1ef199af7d66bd41033a039015f271ba728ca06da0c,PodSandboxId:9408a07b4022b2322f3a058bd2a166203feefb543cc8f98fc352c0d40e2956e9,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1717416783589635760,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-x9fw5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1ed6c0e0-2d13-410f-bdf1-6620fb2503ed,},Annotations:map[string]string{io.kubernetes.container.hash: 151e7a97,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UD
P\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a2de82fcd9e26961a754f75e28a9615763855656f2c01077e5a39ba8e39e0388,PodSandboxId:d3d567d1a7ca6d85a49de00cb845f1c09098d5ad0402f2a21f402af5d745d48c,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1717416783493324578,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-4gbj2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0
e46c731-84e4-4cb2-8125-2b61c10916a3,},Annotations:map[string]string{io.kubernetes.container.hash: ce098b15,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:60795f3f2672bcb7b61e9b0e595d76ca8b666340913395131c61f92125dcf8d7,PodSandboxId:9e776b8b96d7b979fed7fd4862d6218fd94255e35326bdbcd020d4eb196e26ff,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt
:1717416782427911951,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-7qp6h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7869cd1d-785d-401d-aceb-854cffd63d73,},Annotations:map[string]string{io.kubernetes.container.hash: f004a87a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:be0185f4a9d1211ca7e7bbca26e8776fce45302381682c942aef1604e398e050,PodSandboxId:ba01582226858948b03eeb5c35bad675afd2ef261e42adee83c11179d36ba8b9,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1717416762970487271,Labels:map[string]
string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-725022,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3a8815b41f8de9ce6a4245aba1cc52be,},Annotations:map[string]string{io.kubernetes.container.hash: 2b34b2ca,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ef7d5692a59fa66dd7a2449ce50a23eb01aaaf7a99529a20323360c0ed999b68,PodSandboxId:5acdf3478feee4eb74296aa47f18358f10b4aaa5c0b7c7bcbc8dac3780de96f7,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1717416762957568928,Labels:map[string]string{io.kubernetes.container.n
ame: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-725022,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 38effa66b97159d08749fd23b6d37e6f,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2a4234747916814afaa8ee7a7a63a5ae355d6e907cffb138f828ed0676d9e7ce,PodSandboxId:25651ba709a5c48e60114630f713cd72dc4776f9484d7b98583855990d1b368b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1717416762997958517,Labels:map[string]string{io.kubernetes.container.name
: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-725022,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6cd351b07ac0ddcdf3965a97f9c3e0b5,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3cce6b24a5e40947bc39fbf2b9781d4c8694a66b6cd4b7c6d487e65dc24aff6a,PodSandboxId:40beb5c2d8ec5a58ad01a7658540bf6827ed1b9822ddd2f18bd52cbee506b037,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1717416762938298324,Labels:map[string]string{io.kubernetes.containe
r.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-725022,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e29b26fbef49942c734e3993559250ae,},Annotations:map[string]string{io.kubernetes.container.hash: ee6f4948,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=c1fff150-84f7-42f6-b895-82a816275997 name=/runtime.v1.RuntimeService/ListContainers
	Jun 03 12:27:26 embed-certs-725022 crio[714]: time="2024-06-03 12:27:26.521111213Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:&PodSandboxFilter{Id:,State:&PodSandboxStateValue{State:SANDBOX_READY,},LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b500ce3c-37fd-4ce4-8fac-f47b9413a479 name=/runtime.v1.RuntimeService/ListPodSandbox
	Jun 03 12:27:26 embed-certs-725022 crio[714]: time="2024-06-03 12:27:26.521308872Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:8d2f709632ec4026c7d32eaf68022f7c92c91ca00e815fe7de50a9a7b53b4e8d,Metadata:&PodSandboxMetadata{Name:metrics-server-569cc877fc-jgmbs,Uid:148d8ece-e094-4df9-989a-1bc59a33b7ca,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1717416784006928264,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: metrics-server-569cc877fc-jgmbs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 148d8ece-e094-4df9-989a-1bc59a33b7ca,k8s-app: metrics-server,pod-template-hash: 569cc877fc,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-06-03T12:13:03.698947317Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:a5fd62b332d2e0f47de6d2e54dc8c97d65174923bfb278dd0e94cdfd2de334ef,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:cde9aa2d-6a26-4f83-b5df-ae24b22df27a,N
amespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1717416783681937131,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cde9aa2d-6a26-4f83-b5df-ae24b22df27a,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"vol
umes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-06-03T12:13:03.374258466Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:d3d567d1a7ca6d85a49de00cb845f1c09098d5ad0402f2a21f402af5d745d48c,Metadata:&PodSandboxMetadata{Name:coredns-7db6d8ff4d-4gbj2,Uid:0e46c731-84e4-4cb2-8125-2b61c10916a3,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1717416782566534429,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7db6d8ff4d-4gbj2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0e46c731-84e4-4cb2-8125-2b61c10916a3,k8s-app: kube-dns,pod-template-hash: 7db6d8ff4d,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-06-03T12:13:02.251255387Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:9408a07b4022b2322f3a058bd2a166203feefb543cc8f98fc352c0d40e2956e9,Metadata:&PodSandboxMetadata{Name:coredns-7db6d8ff4d-x9fw5,Uid:1ed6c0e0-2d13-410f
-bdf1-6620fb2503ed,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1717416782545265956,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7db6d8ff4d-x9fw5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1ed6c0e0-2d13-410f-bdf1-6620fb2503ed,k8s-app: kube-dns,pod-template-hash: 7db6d8ff4d,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-06-03T12:13:02.229083819Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:9e776b8b96d7b979fed7fd4862d6218fd94255e35326bdbcd020d4eb196e26ff,Metadata:&PodSandboxMetadata{Name:kube-proxy-7qp6h,Uid:7869cd1d-785d-401d-aceb-854cffd63d73,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1717416782311189880,Labels:map[string]string{controller-revision-hash: 5dbf89796d,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-7qp6h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7869cd1d-785d-401d-aceb-854cffd63d73,k8s-app: kube-proxy,pod-tem
plate-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-06-03T12:13:02.001478342Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:5acdf3478feee4eb74296aa47f18358f10b4aaa5c0b7c7bcbc8dac3780de96f7,Metadata:&PodSandboxMetadata{Name:kube-scheduler-embed-certs-725022,Uid:38effa66b97159d08749fd23b6d37e6f,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1717416762741213785,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-embed-certs-725022,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 38effa66b97159d08749fd23b6d37e6f,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 38effa66b97159d08749fd23b6d37e6f,kubernetes.io/config.seen: 2024-06-03T12:12:42.299769698Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:25651ba709a5c48e60114630f713cd72dc4776f9484d7b98583855990d1b368b,Metadata:&PodSandboxMetadata{Name:kube-controlle
r-manager-embed-certs-725022,Uid:6cd351b07ac0ddcdf3965a97f9c3e0b5,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1717416762738540081,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-embed-certs-725022,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6cd351b07ac0ddcdf3965a97f9c3e0b5,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 6cd351b07ac0ddcdf3965a97f9c3e0b5,kubernetes.io/config.seen: 2024-06-03T12:12:42.299768750Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:ba01582226858948b03eeb5c35bad675afd2ef261e42adee83c11179d36ba8b9,Metadata:&PodSandboxMetadata{Name:etcd-embed-certs-725022,Uid:3a8815b41f8de9ce6a4245aba1cc52be,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1717416762732538807,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-embed-certs-725022,io.kuberne
tes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3a8815b41f8de9ce6a4245aba1cc52be,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.72.245:2379,kubernetes.io/config.hash: 3a8815b41f8de9ce6a4245aba1cc52be,kubernetes.io/config.seen: 2024-06-03T12:12:42.299741831Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:40beb5c2d8ec5a58ad01a7658540bf6827ed1b9822ddd2f18bd52cbee506b037,Metadata:&PodSandboxMetadata{Name:kube-apiserver-embed-certs-725022,Uid:e29b26fbef49942c734e3993559250ae,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1717416762728992064,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-embed-certs-725022,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e29b26fbef49942c734e3993559250ae,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.7
2.245:8443,kubernetes.io/config.hash: e29b26fbef49942c734e3993559250ae,kubernetes.io/config.seen: 2024-06-03T12:12:42.299767127Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=b500ce3c-37fd-4ce4-8fac-f47b9413a479 name=/runtime.v1.RuntimeService/ListPodSandbox
	Jun 03 12:27:26 embed-certs-725022 crio[714]: time="2024-06-03 12:27:26.522266694Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:&ContainerStateValue{State:CONTAINER_RUNNING,},PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=0c06ef68-78f5-44e9-939d-4a978bae5b54 name=/runtime.v1.RuntimeService/ListContainers
	Jun 03 12:27:26 embed-certs-725022 crio[714]: time="2024-06-03 12:27:26.522346625Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=0c06ef68-78f5-44e9-939d-4a978bae5b54 name=/runtime.v1.RuntimeService/ListContainers
	Jun 03 12:27:26 embed-certs-725022 crio[714]: time="2024-06-03 12:27:26.522574712Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:81efa28c7c7dd2a51a3f9a51e1a522ccf7a05e1e1baf0cea4ab447dce79f38bf,PodSandboxId:a5fd62b332d2e0f47de6d2e54dc8c97d65174923bfb278dd0e94cdfd2de334ef,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1717416783962830866,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cde9aa2d-6a26-4f83-b5df-ae24b22df27a,},Annotations:map[string]string{io.kubernetes.container.hash: f90461b5,io.kubernetes.container.restartCount: 0,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a7c67fb6c2145a21d4a3c1ef199af7d66bd41033a039015f271ba728ca06da0c,PodSandboxId:9408a07b4022b2322f3a058bd2a166203feefb543cc8f98fc352c0d40e2956e9,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1717416783589635760,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-x9fw5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1ed6c0e0-2d13-410f-bdf1-6620fb2503ed,},Annotations:map[string]string{io.kubernetes.container.hash: 151e7a97,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UD
P\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a2de82fcd9e26961a754f75e28a9615763855656f2c01077e5a39ba8e39e0388,PodSandboxId:d3d567d1a7ca6d85a49de00cb845f1c09098d5ad0402f2a21f402af5d745d48c,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1717416783493324578,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-4gbj2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0
e46c731-84e4-4cb2-8125-2b61c10916a3,},Annotations:map[string]string{io.kubernetes.container.hash: ce098b15,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:60795f3f2672bcb7b61e9b0e595d76ca8b666340913395131c61f92125dcf8d7,PodSandboxId:9e776b8b96d7b979fed7fd4862d6218fd94255e35326bdbcd020d4eb196e26ff,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt
:1717416782427911951,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-7qp6h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7869cd1d-785d-401d-aceb-854cffd63d73,},Annotations:map[string]string{io.kubernetes.container.hash: f004a87a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:be0185f4a9d1211ca7e7bbca26e8776fce45302381682c942aef1604e398e050,PodSandboxId:ba01582226858948b03eeb5c35bad675afd2ef261e42adee83c11179d36ba8b9,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1717416762970487271,Labels:map[string]
string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-725022,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3a8815b41f8de9ce6a4245aba1cc52be,},Annotations:map[string]string{io.kubernetes.container.hash: 2b34b2ca,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ef7d5692a59fa66dd7a2449ce50a23eb01aaaf7a99529a20323360c0ed999b68,PodSandboxId:5acdf3478feee4eb74296aa47f18358f10b4aaa5c0b7c7bcbc8dac3780de96f7,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1717416762957568928,Labels:map[string]string{io.kubernetes.container.n
ame: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-725022,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 38effa66b97159d08749fd23b6d37e6f,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2a4234747916814afaa8ee7a7a63a5ae355d6e907cffb138f828ed0676d9e7ce,PodSandboxId:25651ba709a5c48e60114630f713cd72dc4776f9484d7b98583855990d1b368b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1717416762997958517,Labels:map[string]string{io.kubernetes.container.name
: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-725022,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6cd351b07ac0ddcdf3965a97f9c3e0b5,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3cce6b24a5e40947bc39fbf2b9781d4c8694a66b6cd4b7c6d487e65dc24aff6a,PodSandboxId:40beb5c2d8ec5a58ad01a7658540bf6827ed1b9822ddd2f18bd52cbee506b037,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1717416762938298324,Labels:map[string]string{io.kubernetes.containe
r.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-725022,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e29b26fbef49942c734e3993559250ae,},Annotations:map[string]string{io.kubernetes.container.hash: ee6f4948,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=0c06ef68-78f5-44e9-939d-4a978bae5b54 name=/runtime.v1.RuntimeService/ListContainers
	Jun 03 12:27:26 embed-certs-725022 crio[714]: time="2024-06-03 12:27:26.550462641Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=78ca3916-6f96-4b01-b671-a538da44aef6 name=/runtime.v1.RuntimeService/Version
	Jun 03 12:27:26 embed-certs-725022 crio[714]: time="2024-06-03 12:27:26.550549052Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=78ca3916-6f96-4b01-b671-a538da44aef6 name=/runtime.v1.RuntimeService/Version
	Jun 03 12:27:26 embed-certs-725022 crio[714]: time="2024-06-03 12:27:26.551644967Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=12782710-ba20-4d36-ae29-fcfed8b6b9d5 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 03 12:27:26 embed-certs-725022 crio[714]: time="2024-06-03 12:27:26.552192128Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1717417646552170508,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133260,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=12782710-ba20-4d36-ae29-fcfed8b6b9d5 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 03 12:27:26 embed-certs-725022 crio[714]: time="2024-06-03 12:27:26.552851445Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=2770346f-2255-4049-b63a-880ac8c520f1 name=/runtime.v1.RuntimeService/ListContainers
	Jun 03 12:27:26 embed-certs-725022 crio[714]: time="2024-06-03 12:27:26.552920181Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=2770346f-2255-4049-b63a-880ac8c520f1 name=/runtime.v1.RuntimeService/ListContainers
	Jun 03 12:27:26 embed-certs-725022 crio[714]: time="2024-06-03 12:27:26.553123122Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:81efa28c7c7dd2a51a3f9a51e1a522ccf7a05e1e1baf0cea4ab447dce79f38bf,PodSandboxId:a5fd62b332d2e0f47de6d2e54dc8c97d65174923bfb278dd0e94cdfd2de334ef,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1717416783962830866,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cde9aa2d-6a26-4f83-b5df-ae24b22df27a,},Annotations:map[string]string{io.kubernetes.container.hash: f90461b5,io.kubernetes.container.restartCount: 0,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a7c67fb6c2145a21d4a3c1ef199af7d66bd41033a039015f271ba728ca06da0c,PodSandboxId:9408a07b4022b2322f3a058bd2a166203feefb543cc8f98fc352c0d40e2956e9,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1717416783589635760,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-x9fw5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1ed6c0e0-2d13-410f-bdf1-6620fb2503ed,},Annotations:map[string]string{io.kubernetes.container.hash: 151e7a97,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UD
P\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a2de82fcd9e26961a754f75e28a9615763855656f2c01077e5a39ba8e39e0388,PodSandboxId:d3d567d1a7ca6d85a49de00cb845f1c09098d5ad0402f2a21f402af5d745d48c,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1717416783493324578,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-4gbj2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0
e46c731-84e4-4cb2-8125-2b61c10916a3,},Annotations:map[string]string{io.kubernetes.container.hash: ce098b15,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:60795f3f2672bcb7b61e9b0e595d76ca8b666340913395131c61f92125dcf8d7,PodSandboxId:9e776b8b96d7b979fed7fd4862d6218fd94255e35326bdbcd020d4eb196e26ff,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt
:1717416782427911951,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-7qp6h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7869cd1d-785d-401d-aceb-854cffd63d73,},Annotations:map[string]string{io.kubernetes.container.hash: f004a87a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:be0185f4a9d1211ca7e7bbca26e8776fce45302381682c942aef1604e398e050,PodSandboxId:ba01582226858948b03eeb5c35bad675afd2ef261e42adee83c11179d36ba8b9,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1717416762970487271,Labels:map[string]
string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-725022,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3a8815b41f8de9ce6a4245aba1cc52be,},Annotations:map[string]string{io.kubernetes.container.hash: 2b34b2ca,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ef7d5692a59fa66dd7a2449ce50a23eb01aaaf7a99529a20323360c0ed999b68,PodSandboxId:5acdf3478feee4eb74296aa47f18358f10b4aaa5c0b7c7bcbc8dac3780de96f7,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1717416762957568928,Labels:map[string]string{io.kubernetes.container.n
ame: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-725022,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 38effa66b97159d08749fd23b6d37e6f,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2a4234747916814afaa8ee7a7a63a5ae355d6e907cffb138f828ed0676d9e7ce,PodSandboxId:25651ba709a5c48e60114630f713cd72dc4776f9484d7b98583855990d1b368b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1717416762997958517,Labels:map[string]string{io.kubernetes.container.name
: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-725022,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6cd351b07ac0ddcdf3965a97f9c3e0b5,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3cce6b24a5e40947bc39fbf2b9781d4c8694a66b6cd4b7c6d487e65dc24aff6a,PodSandboxId:40beb5c2d8ec5a58ad01a7658540bf6827ed1b9822ddd2f18bd52cbee506b037,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1717416762938298324,Labels:map[string]string{io.kubernetes.containe
r.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-725022,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e29b26fbef49942c734e3993559250ae,},Annotations:map[string]string{io.kubernetes.container.hash: ee6f4948,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=2770346f-2255-4049-b63a-880ac8c520f1 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	81efa28c7c7dd       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   14 minutes ago      Running             storage-provisioner       0                   a5fd62b332d2e       storage-provisioner
	a7c67fb6c2145       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   14 minutes ago      Running             coredns                   0                   9408a07b4022b       coredns-7db6d8ff4d-x9fw5
	a2de82fcd9e26       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   14 minutes ago      Running             coredns                   0                   d3d567d1a7ca6       coredns-7db6d8ff4d-4gbj2
	60795f3f2672b       747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd   14 minutes ago      Running             kube-proxy                0                   9e776b8b96d7b       kube-proxy-7qp6h
	2a42347479168       25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c   14 minutes ago      Running             kube-controller-manager   2                   25651ba709a5c       kube-controller-manager-embed-certs-725022
	be0185f4a9d12       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899   14 minutes ago      Running             etcd                      2                   ba01582226858       etcd-embed-certs-725022
	ef7d5692a59fa       a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035   14 minutes ago      Running             kube-scheduler            2                   5acdf3478feee       kube-scheduler-embed-certs-725022
	3cce6b24a5e40       91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a   14 minutes ago      Running             kube-apiserver            2                   40beb5c2d8ec5       kube-apiserver-embed-certs-725022
	
	
	==> coredns [a2de82fcd9e26961a754f75e28a9615763855656f2c01077e5a39ba8e39e0388] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> coredns [a7c67fb6c2145a21d4a3c1ef199af7d66bd41033a039015f271ba728ca06da0c] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> describe nodes <==
	Name:               embed-certs-725022
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-725022
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=599070631c2216ebc936292d491e4fe10e15b9d8
	                    minikube.k8s.io/name=embed-certs-725022
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_06_03T12_12_49_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 03 Jun 2024 12:12:45 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-725022
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 03 Jun 2024 12:27:17 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 03 Jun 2024 12:23:19 +0000   Mon, 03 Jun 2024 12:12:43 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 03 Jun 2024 12:23:19 +0000   Mon, 03 Jun 2024 12:12:43 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 03 Jun 2024 12:23:19 +0000   Mon, 03 Jun 2024 12:12:43 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 03 Jun 2024 12:23:19 +0000   Mon, 03 Jun 2024 12:12:46 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.72.245
	  Hostname:    embed-certs-725022
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 cc393dd3c9a947b68657c20168268eeb
	  System UUID:                cc393dd3-c9a9-47b6-8657-c20168268eeb
	  Boot ID:                    36ae111d-de49-4f7f-b605-475a321541fa
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.1
	  Kube-Proxy Version:         v1.30.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7db6d8ff4d-4gbj2                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     14m
	  kube-system                 coredns-7db6d8ff4d-x9fw5                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     14m
	  kube-system                 etcd-embed-certs-725022                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         14m
	  kube-system                 kube-apiserver-embed-certs-725022             250m (12%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-controller-manager-embed-certs-725022    200m (10%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-proxy-7qp6h                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-scheduler-embed-certs-725022             100m (5%)     0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 metrics-server-569cc877fc-jgmbs               100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         14m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             440Mi (20%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 14m                kube-proxy       
	  Normal  NodeHasSufficientMemory  14m (x8 over 14m)  kubelet          Node embed-certs-725022 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    14m (x8 over 14m)  kubelet          Node embed-certs-725022 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     14m (x7 over 14m)  kubelet          Node embed-certs-725022 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  14m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 14m                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  14m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  14m                kubelet          Node embed-certs-725022 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    14m                kubelet          Node embed-certs-725022 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     14m                kubelet          Node embed-certs-725022 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           14m                node-controller  Node embed-certs-725022 event: Registered Node embed-certs-725022 in Controller
	
	
	==> dmesg <==
	[  +0.052805] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.040215] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +5.021337] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.497873] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.573664] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000008] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.954650] systemd-fstab-generator[631]: Ignoring "noauto" option for root device
	[  +0.058269] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.066848] systemd-fstab-generator[643]: Ignoring "noauto" option for root device
	[  +0.164980] systemd-fstab-generator[657]: Ignoring "noauto" option for root device
	[  +0.158383] systemd-fstab-generator[669]: Ignoring "noauto" option for root device
	[  +0.293994] systemd-fstab-generator[698]: Ignoring "noauto" option for root device
	[  +4.383061] systemd-fstab-generator[795]: Ignoring "noauto" option for root device
	[  +0.058654] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.768908] systemd-fstab-generator[920]: Ignoring "noauto" option for root device
	[  +5.640525] kauditd_printk_skb: 97 callbacks suppressed
	[Jun 3 12:08] kauditd_printk_skb: 79 callbacks suppressed
	[Jun 3 12:12] kauditd_printk_skb: 6 callbacks suppressed
	[  +1.873005] systemd-fstab-generator[3605]: Ignoring "noauto" option for root device
	[  +6.383917] systemd-fstab-generator[3929]: Ignoring "noauto" option for root device
	[  +0.087666] kauditd_printk_skb: 57 callbacks suppressed
	[Jun 3 12:13] systemd-fstab-generator[4138]: Ignoring "noauto" option for root device
	[  +0.146312] kauditd_printk_skb: 12 callbacks suppressed
	[Jun 3 12:14] kauditd_printk_skb: 86 callbacks suppressed
	
	
	==> etcd [be0185f4a9d1211ca7e7bbca26e8776fce45302381682c942aef1604e398e050] <==
	{"level":"info","ts":"2024-06-03T12:12:43.381568Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.72.245:2380"}
	{"level":"info","ts":"2024-06-03T12:12:43.381598Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.72.245:2380"}
	{"level":"info","ts":"2024-06-03T12:12:43.382622Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c3d3313e1e359742 switched to configuration voters=(14110676200346457922)"}
	{"level":"info","ts":"2024-06-03T12:12:43.386799Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"3b0c78e7fea9a901","local-member-id":"c3d3313e1e359742","added-peer-id":"c3d3313e1e359742","added-peer-peer-urls":["https://192.168.72.245:2380"]}
	{"level":"info","ts":"2024-06-03T12:12:43.73179Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c3d3313e1e359742 is starting a new election at term 1"}
	{"level":"info","ts":"2024-06-03T12:12:43.731875Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c3d3313e1e359742 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-06-03T12:12:43.731894Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c3d3313e1e359742 received MsgPreVoteResp from c3d3313e1e359742 at term 1"}
	{"level":"info","ts":"2024-06-03T12:12:43.731905Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c3d3313e1e359742 became candidate at term 2"}
	{"level":"info","ts":"2024-06-03T12:12:43.73191Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c3d3313e1e359742 received MsgVoteResp from c3d3313e1e359742 at term 2"}
	{"level":"info","ts":"2024-06-03T12:12:43.731918Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c3d3313e1e359742 became leader at term 2"}
	{"level":"info","ts":"2024-06-03T12:12:43.731929Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: c3d3313e1e359742 elected leader c3d3313e1e359742 at term 2"}
	{"level":"info","ts":"2024-06-03T12:12:43.735504Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"c3d3313e1e359742","local-member-attributes":"{Name:embed-certs-725022 ClientURLs:[https://192.168.72.245:2379]}","request-path":"/0/members/c3d3313e1e359742/attributes","cluster-id":"3b0c78e7fea9a901","publish-timeout":"7s"}
	{"level":"info","ts":"2024-06-03T12:12:43.735657Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-06-03T12:12:43.735808Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-06-03T12:12:43.740008Z","caller":"etcdserver/server.go:2578","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-06-03T12:12:43.749459Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.72.245:2379"}
	{"level":"info","ts":"2024-06-03T12:12:43.749886Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"3b0c78e7fea9a901","local-member-id":"c3d3313e1e359742","cluster-version":"3.5"}
	{"level":"info","ts":"2024-06-03T12:12:43.750003Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-06-03T12:12:43.750047Z","caller":"etcdserver/server.go:2602","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-06-03T12:12:43.752768Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-06-03T12:12:43.754742Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-06-03T12:12:43.755608Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-06-03T12:22:44.316972Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":711}
	{"level":"info","ts":"2024-06-03T12:22:44.326421Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":711,"took":"9.072449ms","hash":101661586,"current-db-size-bytes":2150400,"current-db-size":"2.2 MB","current-db-size-in-use-bytes":2150400,"current-db-size-in-use":"2.2 MB"}
	{"level":"info","ts":"2024-06-03T12:22:44.326497Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":101661586,"revision":711,"compact-revision":-1}
	
	
	==> kernel <==
	 12:27:26 up 20 min,  0 users,  load average: 0.20, 0.25, 0.18
	Linux embed-certs-725022 5.10.207 #1 SMP Wed May 22 22:17:16 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [3cce6b24a5e40947bc39fbf2b9781d4c8694a66b6cd4b7c6d487e65dc24aff6a] <==
	I0603 12:20:46.643686       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0603 12:22:45.646111       1 handler_proxy.go:93] no RequestInfo found in the context
	E0603 12:22:45.646209       1 controller.go:146] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	W0603 12:22:46.646856       1 handler_proxy.go:93] no RequestInfo found in the context
	W0603 12:22:46.646886       1 handler_proxy.go:93] no RequestInfo found in the context
	E0603 12:22:46.647156       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0603 12:22:46.647167       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	E0603 12:22:46.647055       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0603 12:22:46.648414       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0603 12:23:46.647493       1 handler_proxy.go:93] no RequestInfo found in the context
	E0603 12:23:46.647776       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0603 12:23:46.647809       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0603 12:23:46.649013       1 handler_proxy.go:93] no RequestInfo found in the context
	E0603 12:23:46.649071       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0603 12:23:46.649099       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0603 12:25:46.648459       1 handler_proxy.go:93] no RequestInfo found in the context
	E0603 12:25:46.648917       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0603 12:25:46.648953       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0603 12:25:46.649543       1 handler_proxy.go:93] no RequestInfo found in the context
	E0603 12:25:46.649569       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0603 12:25:46.650780       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [2a4234747916814afaa8ee7a7a63a5ae355d6e907cffb138f828ed0676d9e7ce] <==
	I0603 12:21:31.919677       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0603 12:22:01.433331       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0603 12:22:01.929473       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0603 12:22:31.439854       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0603 12:22:31.938466       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0603 12:23:01.445363       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0603 12:23:01.946869       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0603 12:23:31.452086       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0603 12:23:31.956032       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0603 12:23:47.536179       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-569cc877fc" duration="205.686µs"
	I0603 12:24:00.539311       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-569cc877fc" duration="111.785µs"
	E0603 12:24:01.457229       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0603 12:24:01.963807       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0603 12:24:31.462814       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0603 12:24:31.977970       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0603 12:25:01.467977       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0603 12:25:01.985880       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0603 12:25:31.473200       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0603 12:25:31.994890       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0603 12:26:01.479309       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0603 12:26:02.005393       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0603 12:26:31.484569       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0603 12:26:32.015645       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0603 12:27:01.490433       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0603 12:27:02.025281       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [60795f3f2672bcb7b61e9b0e595d76ca8b666340913395131c61f92125dcf8d7] <==
	I0603 12:13:02.955916       1 server_linux.go:69] "Using iptables proxy"
	I0603 12:13:02.986572       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.72.245"]
	I0603 12:13:03.096979       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0603 12:13:03.097032       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0603 12:13:03.097049       1 server_linux.go:165] "Using iptables Proxier"
	I0603 12:13:03.102960       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0603 12:13:03.103154       1 server.go:872] "Version info" version="v1.30.1"
	I0603 12:13:03.103167       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0603 12:13:03.104382       1 config.go:192] "Starting service config controller"
	I0603 12:13:03.104401       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0603 12:13:03.104433       1 config.go:101] "Starting endpoint slice config controller"
	I0603 12:13:03.104436       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0603 12:13:03.106384       1 config.go:319] "Starting node config controller"
	I0603 12:13:03.106394       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0603 12:13:03.205861       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0603 12:13:03.205927       1 shared_informer.go:320] Caches are synced for service config
	I0603 12:13:03.207318       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [ef7d5692a59fa66dd7a2449ce50a23eb01aaaf7a99529a20323360c0ed999b68] <==
	W0603 12:12:45.652953       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0603 12:12:45.653187       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0603 12:12:46.516064       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0603 12:12:46.516184       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0603 12:12:46.546322       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0603 12:12:46.546665       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0603 12:12:46.552971       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0603 12:12:46.553228       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0603 12:12:46.631556       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0603 12:12:46.631874       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0603 12:12:46.729881       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0603 12:12:46.729930       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0603 12:12:46.736301       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0603 12:12:46.736354       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0603 12:12:46.751279       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0603 12:12:46.751405       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0603 12:12:46.850466       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0603 12:12:46.851112       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0603 12:12:46.914993       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0603 12:12:46.915236       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0603 12:12:46.920558       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0603 12:12:46.920678       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0603 12:12:47.087056       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0603 12:12:47.087119       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0603 12:12:49.447827       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jun 03 12:24:48 embed-certs-725022 kubelet[3936]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jun 03 12:24:48 embed-certs-725022 kubelet[3936]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 03 12:24:48 embed-certs-725022 kubelet[3936]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jun 03 12:24:59 embed-certs-725022 kubelet[3936]: E0603 12:24:59.522527    3936 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-jgmbs" podUID="148d8ece-e094-4df9-989a-1bc59a33b7ca"
	Jun 03 12:25:10 embed-certs-725022 kubelet[3936]: E0603 12:25:10.521555    3936 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-jgmbs" podUID="148d8ece-e094-4df9-989a-1bc59a33b7ca"
	Jun 03 12:25:25 embed-certs-725022 kubelet[3936]: E0603 12:25:25.522642    3936 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-jgmbs" podUID="148d8ece-e094-4df9-989a-1bc59a33b7ca"
	Jun 03 12:25:37 embed-certs-725022 kubelet[3936]: E0603 12:25:37.521749    3936 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-jgmbs" podUID="148d8ece-e094-4df9-989a-1bc59a33b7ca"
	Jun 03 12:25:48 embed-certs-725022 kubelet[3936]: E0603 12:25:48.546072    3936 iptables.go:577] "Could not set up iptables canary" err=<
	Jun 03 12:25:48 embed-certs-725022 kubelet[3936]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jun 03 12:25:48 embed-certs-725022 kubelet[3936]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jun 03 12:25:48 embed-certs-725022 kubelet[3936]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 03 12:25:48 embed-certs-725022 kubelet[3936]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jun 03 12:25:51 embed-certs-725022 kubelet[3936]: E0603 12:25:51.521592    3936 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-jgmbs" podUID="148d8ece-e094-4df9-989a-1bc59a33b7ca"
	Jun 03 12:26:05 embed-certs-725022 kubelet[3936]: E0603 12:26:05.522023    3936 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-jgmbs" podUID="148d8ece-e094-4df9-989a-1bc59a33b7ca"
	Jun 03 12:26:17 embed-certs-725022 kubelet[3936]: E0603 12:26:17.521543    3936 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-jgmbs" podUID="148d8ece-e094-4df9-989a-1bc59a33b7ca"
	Jun 03 12:26:28 embed-certs-725022 kubelet[3936]: E0603 12:26:28.522913    3936 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-jgmbs" podUID="148d8ece-e094-4df9-989a-1bc59a33b7ca"
	Jun 03 12:26:40 embed-certs-725022 kubelet[3936]: E0603 12:26:40.522530    3936 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-jgmbs" podUID="148d8ece-e094-4df9-989a-1bc59a33b7ca"
	Jun 03 12:26:48 embed-certs-725022 kubelet[3936]: E0603 12:26:48.547211    3936 iptables.go:577] "Could not set up iptables canary" err=<
	Jun 03 12:26:48 embed-certs-725022 kubelet[3936]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jun 03 12:26:48 embed-certs-725022 kubelet[3936]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jun 03 12:26:48 embed-certs-725022 kubelet[3936]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 03 12:26:48 embed-certs-725022 kubelet[3936]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jun 03 12:26:54 embed-certs-725022 kubelet[3936]: E0603 12:26:54.522283    3936 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-jgmbs" podUID="148d8ece-e094-4df9-989a-1bc59a33b7ca"
	Jun 03 12:27:09 embed-certs-725022 kubelet[3936]: E0603 12:27:09.521931    3936 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-jgmbs" podUID="148d8ece-e094-4df9-989a-1bc59a33b7ca"
	Jun 03 12:27:24 embed-certs-725022 kubelet[3936]: E0603 12:27:24.527420    3936 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-jgmbs" podUID="148d8ece-e094-4df9-989a-1bc59a33b7ca"
	
	
	==> storage-provisioner [81efa28c7c7dd2a51a3f9a51e1a522ccf7a05e1e1baf0cea4ab447dce79f38bf] <==
	I0603 12:13:04.066369       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0603 12:13:04.094510       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0603 12:13:04.094604       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0603 12:13:04.112396       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0603 12:13:04.112935       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-725022_ac8ea1fe-0ae5-4f31-b8b0-7d9ff5347de6!
	I0603 12:13:04.113150       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"5033375e-f80d-4568-bcf5-5027938c3121", APIVersion:"v1", ResourceVersion:"438", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-725022_ac8ea1fe-0ae5-4f31-b8b0-7d9ff5347de6 became leader
	I0603 12:13:04.213489       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-725022_ac8ea1fe-0ae5-4f31-b8b0-7d9ff5347de6!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-725022 -n embed-certs-725022
helpers_test.go:261: (dbg) Run:  kubectl --context embed-certs-725022 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-569cc877fc-jgmbs
helpers_test.go:274: ======> post-mortem[TestStartStop/group/embed-certs/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context embed-certs-725022 describe pod metrics-server-569cc877fc-jgmbs
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context embed-certs-725022 describe pod metrics-server-569cc877fc-jgmbs: exit status 1 (66.768162ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-569cc877fc-jgmbs" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context embed-certs-725022 describe pod metrics-server-569cc877fc-jgmbs: exit status 1
--- FAIL: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (316.61s)
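The post-mortem above shows metrics-server stuck in ImagePullBackOff against fake.domain and the kube-apiserver reporting v1beta1.metrics.k8s.io as unavailable (503). A minimal manual-diagnosis sketch, not part of the captured test output: the context name and pod name are taken from the log, the k8s-app=metrics-server label selector is an assumption, and no output is shown because none was captured.

	# Check whether the aggregated metrics APIService is reported Available
	kubectl --context embed-certs-725022 get apiservice v1beta1.metrics.k8s.io
	# List the metrics-server pod(s); the label selector is assumed, not confirmed by the log
	kubectl --context embed-certs-725022 -n kube-system get pods -l k8s-app=metrics-server -o wide
	# Pull the events for the pod named in the kubelet ImagePullBackOff errors above
	kubectl --context embed-certs-725022 -n kube-system get events --field-selector involvedObject.name=metrics-server-569cc877fc-jgmbs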

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (146.72s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.155:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.155:8443: connect: connection refused
E0603 12:25:19.213244   15028 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/functional-835483/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.155:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.155:8443: connect: connection refused
E0603 12:25:38.324347   15028 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/flannel-034991/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.155:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.155:8443: connect: connection refused
E0603 12:26:32.014767   15028 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/enable-default-cni-034991/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.155:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.155:8443: connect: connection refused
E0603 12:26:59.128854   15028 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/bridge-034991/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.155:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.155:8443: connect: connection refused
start_stop_delete_test.go:287: ***** TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-905554 -n old-k8s-version-905554
start_stop_delete_test.go:287: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-905554 -n old-k8s-version-905554: exit status 2 (241.28418ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:287: status error: exit status 2 (may be ok)
start_stop_delete_test.go:287: "old-k8s-version-905554" apiserver is not running, skipping kubectl commands (state="Stopped")
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-905554 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context old-k8s-version-905554 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (1.941µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context old-k8s-version-905554 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
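For manual triage, a minimal sketch assuming the apiserver for profile old-k8s-version-905554 becomes reachable again: the checks the harness attempted above can be approximated with kubectl, reusing the context, namespace, label selector, and deployment name that appear in this log (the flags shown are illustrative, not part of the test itself):

	kubectl --context old-k8s-version-905554 -n kubernetes-dashboard get pods -l k8s-app=kubernetes-dashboard
	kubectl --context old-k8s-version-905554 -n kubernetes-dashboard get deploy dashboard-metrics-scraper -o jsonpath='{.spec.template.spec.containers[*].image}'

If the addon had deployed as expected, the second command would print an image containing registry.k8s.io/echoserver:1.4, which is what the assertion at start_stop_delete_test.go:297 checks for.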
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-905554 -n old-k8s-version-905554
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-905554 -n old-k8s-version-905554: exit status 2 (230.270734ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-905554 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-905554 logs -n 25: (1.540382863s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p calico-034991 sudo cat                              | calico-034991                | jenkins | v1.33.1 | 03 Jun 24 11:58 UTC | 03 Jun 24 11:58 UTC |
	|         | /etc/containerd/config.toml                            |                              |         |         |                     |                     |
	| ssh     | -p calico-034991 sudo                                  | calico-034991                | jenkins | v1.33.1 | 03 Jun 24 11:58 UTC | 03 Jun 24 11:58 UTC |
	|         | containerd config dump                                 |                              |         |         |                     |                     |
	| ssh     | -p calico-034991 sudo                                  | calico-034991                | jenkins | v1.33.1 | 03 Jun 24 11:58 UTC | 03 Jun 24 11:58 UTC |
	|         | systemctl status crio --all                            |                              |         |         |                     |                     |
	|         | --full --no-pager                                      |                              |         |         |                     |                     |
	| ssh     | -p calico-034991 sudo                                  | calico-034991                | jenkins | v1.33.1 | 03 Jun 24 11:58 UTC | 03 Jun 24 11:58 UTC |
	|         | systemctl cat crio --no-pager                          |                              |         |         |                     |                     |
	| ssh     | -p calico-034991 sudo find                             | calico-034991                | jenkins | v1.33.1 | 03 Jun 24 11:58 UTC | 03 Jun 24 11:58 UTC |
	|         | /etc/crio -type f -exec sh -c                          |                              |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                   |                              |         |         |                     |                     |
	| ssh     | -p calico-034991 sudo crio                             | calico-034991                | jenkins | v1.33.1 | 03 Jun 24 11:58 UTC | 03 Jun 24 11:58 UTC |
	|         | config                                                 |                              |         |         |                     |                     |
	| delete  | -p calico-034991                                       | calico-034991                | jenkins | v1.33.1 | 03 Jun 24 11:58 UTC | 03 Jun 24 11:58 UTC |
	| delete  | -p                                                     | disable-driver-mounts-231568 | jenkins | v1.33.1 | 03 Jun 24 11:58 UTC | 03 Jun 24 11:58 UTC |
	|         | disable-driver-mounts-231568                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-196710 | jenkins | v1.33.1 | 03 Jun 24 11:58 UTC | 03 Jun 24 11:59 UTC |
	|         | default-k8s-diff-port-196710                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-725022            | embed-certs-725022           | jenkins | v1.33.1 | 03 Jun 24 11:59 UTC | 03 Jun 24 11:59 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-725022                                  | embed-certs-725022           | jenkins | v1.33.1 | 03 Jun 24 11:59 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-602118             | no-preload-602118            | jenkins | v1.33.1 | 03 Jun 24 11:59 UTC | 03 Jun 24 11:59 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-602118                                   | no-preload-602118            | jenkins | v1.33.1 | 03 Jun 24 11:59 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-196710  | default-k8s-diff-port-196710 | jenkins | v1.33.1 | 03 Jun 24 11:59 UTC | 03 Jun 24 11:59 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-196710 | jenkins | v1.33.1 | 03 Jun 24 11:59 UTC |                     |
	|         | default-k8s-diff-port-196710                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-905554        | old-k8s-version-905554       | jenkins | v1.33.1 | 03 Jun 24 12:01 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-725022                 | embed-certs-725022           | jenkins | v1.33.1 | 03 Jun 24 12:01 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-725022                                  | embed-certs-725022           | jenkins | v1.33.1 | 03 Jun 24 12:01 UTC | 03 Jun 24 12:13 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.1                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-602118                  | no-preload-602118            | jenkins | v1.33.1 | 03 Jun 24 12:01 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-602118                                   | no-preload-602118            | jenkins | v1.33.1 | 03 Jun 24 12:02 UTC | 03 Jun 24 12:12 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.1                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-196710       | default-k8s-diff-port-196710 | jenkins | v1.33.1 | 03 Jun 24 12:02 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-196710 | jenkins | v1.33.1 | 03 Jun 24 12:02 UTC | 03 Jun 24 12:12 UTC |
	|         | default-k8s-diff-port-196710                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.1                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-905554                              | old-k8s-version-905554       | jenkins | v1.33.1 | 03 Jun 24 12:02 UTC | 03 Jun 24 12:02 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-905554             | old-k8s-version-905554       | jenkins | v1.33.1 | 03 Jun 24 12:02 UTC | 03 Jun 24 12:03 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-905554                              | old-k8s-version-905554       | jenkins | v1.33.1 | 03 Jun 24 12:03 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/06/03 12:03:00
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0603 12:03:00.091233   73662 out.go:291] Setting OutFile to fd 1 ...
	I0603 12:03:00.091511   73662 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0603 12:03:00.091522   73662 out.go:304] Setting ErrFile to fd 2...
	I0603 12:03:00.091533   73662 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0603 12:03:00.091747   73662 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19008-7755/.minikube/bin
	I0603 12:03:00.092302   73662 out.go:298] Setting JSON to false
	I0603 12:03:00.093203   73662 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":6325,"bootTime":1717409855,"procs":200,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1060-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0603 12:03:00.093258   73662 start.go:139] virtualization: kvm guest
	I0603 12:03:00.095496   73662 out.go:177] * [old-k8s-version-905554] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0603 12:03:00.097136   73662 out.go:177]   - MINIKUBE_LOCATION=19008
	I0603 12:03:00.097143   73662 notify.go:220] Checking for updates...
	I0603 12:03:00.098729   73662 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0603 12:03:00.100123   73662 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19008-7755/kubeconfig
	I0603 12:03:00.101401   73662 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19008-7755/.minikube
	I0603 12:03:00.102776   73662 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0603 12:03:00.104123   73662 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0603 12:03:00.105823   73662 config.go:182] Loaded profile config "old-k8s-version-905554": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0603 12:03:00.106265   73662 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 12:03:00.106313   73662 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 12:03:00.120941   73662 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43635
	I0603 12:03:00.121275   73662 main.go:141] libmachine: () Calling .GetVersion
	I0603 12:03:00.121783   73662 main.go:141] libmachine: Using API Version  1
	I0603 12:03:00.121807   73662 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 12:03:00.122090   73662 main.go:141] libmachine: () Calling .GetMachineName
	I0603 12:03:00.122253   73662 main.go:141] libmachine: (old-k8s-version-905554) Calling .DriverName
	I0603 12:03:00.124037   73662 out.go:177] * Kubernetes 1.30.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.1
	I0603 12:03:00.125329   73662 driver.go:392] Setting default libvirt URI to qemu:///system
	I0603 12:03:00.125608   73662 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 12:03:00.125644   73662 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 12:03:00.139840   73662 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46571
	I0603 12:03:00.140215   73662 main.go:141] libmachine: () Calling .GetVersion
	I0603 12:03:00.140599   73662 main.go:141] libmachine: Using API Version  1
	I0603 12:03:00.140623   73662 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 12:03:00.140906   73662 main.go:141] libmachine: () Calling .GetMachineName
	I0603 12:03:00.141069   73662 main.go:141] libmachine: (old-k8s-version-905554) Calling .DriverName
	I0603 12:03:00.174375   73662 out.go:177] * Using the kvm2 driver based on existing profile
	I0603 12:03:00.175650   73662 start.go:297] selected driver: kvm2
	I0603 12:03:00.175667   73662 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-905554 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-905554 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.155 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0603 12:03:00.175770   73662 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0603 12:03:00.176396   73662 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0603 12:03:00.176476   73662 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19008-7755/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0603 12:03:00.191380   73662 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0603 12:03:00.191738   73662 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0603 12:03:00.191796   73662 cni.go:84] Creating CNI manager for ""
	I0603 12:03:00.191809   73662 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0603 12:03:00.191847   73662 start.go:340] cluster config:
	{Name:old-k8s-version-905554 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-905554 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.155 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0603 12:03:00.191975   73662 iso.go:125] acquiring lock: {Name:mkdc8e745fc6a0fd8e502f6ad2510510ae9abf27 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0603 12:03:00.193899   73662 out.go:177] * Starting "old-k8s-version-905554" primary control-plane node in "old-k8s-version-905554" cluster
	I0603 12:03:04.175308   72964 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.245:22: connect: no route to host
	I0603 12:03:00.195191   73662 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0603 12:03:00.195231   73662 preload.go:147] Found local preload: /home/jenkins/minikube-integration/19008-7755/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0603 12:03:00.195240   73662 cache.go:56] Caching tarball of preloaded images
	I0603 12:03:00.195331   73662 preload.go:173] Found /home/jenkins/minikube-integration/19008-7755/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0603 12:03:00.195345   73662 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0603 12:03:00.195441   73662 profile.go:143] Saving config to /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/old-k8s-version-905554/config.json ...
	I0603 12:03:00.195620   73662 start.go:360] acquireMachinesLock for old-k8s-version-905554: {Name:mk7389fd163f37e28f7d4d842bab151f7e27bc7c Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0603 12:03:07.247321   72964 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.245:22: connect: no route to host
	I0603 12:03:13.327307   72964 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.245:22: connect: no route to host
	I0603 12:03:16.399349   72964 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.245:22: connect: no route to host
	I0603 12:03:22.479291   72964 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.245:22: connect: no route to host
	I0603 12:03:25.551304   72964 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.245:22: connect: no route to host
	I0603 12:03:31.631290   72964 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.245:22: connect: no route to host
	I0603 12:03:34.703297   72964 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.245:22: connect: no route to host
	I0603 12:03:40.783313   72964 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.245:22: connect: no route to host
	I0603 12:03:43.855312   72964 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.245:22: connect: no route to host
	I0603 12:03:49.935253   72964 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.245:22: connect: no route to host
	I0603 12:03:53.007321   72964 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.245:22: connect: no route to host
	I0603 12:03:59.087310   72964 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.245:22: connect: no route to host
	I0603 12:04:02.159408   72964 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.245:22: connect: no route to host
	I0603 12:04:08.239374   72964 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.245:22: connect: no route to host
	I0603 12:04:11.311346   72964 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.245:22: connect: no route to host
	I0603 12:04:17.391313   72964 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.245:22: connect: no route to host
	I0603 12:04:20.463280   72964 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.245:22: connect: no route to host
	I0603 12:04:26.543359   72964 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.245:22: connect: no route to host
	I0603 12:04:29.615273   72964 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.245:22: connect: no route to host
	I0603 12:04:35.695325   72964 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.245:22: connect: no route to host
	I0603 12:04:38.767328   72964 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.245:22: connect: no route to host
	I0603 12:04:44.847321   72964 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.245:22: connect: no route to host
	I0603 12:04:47.919323   72964 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.245:22: connect: no route to host
	I0603 12:04:53.999275   72964 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.245:22: connect: no route to host
	I0603 12:04:57.071278   72964 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.245:22: connect: no route to host
	I0603 12:05:03.151359   72964 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.245:22: connect: no route to host
	I0603 12:05:06.223409   72964 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.245:22: connect: no route to host
	I0603 12:05:12.303278   72964 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.245:22: connect: no route to host
	I0603 12:05:15.375349   72964 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.245:22: connect: no route to host
	I0603 12:05:21.455288   72964 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.245:22: connect: no route to host
	I0603 12:05:24.527374   72964 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.245:22: connect: no route to host
	I0603 12:05:30.607297   72964 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.245:22: connect: no route to host
	I0603 12:05:33.679325   72964 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.245:22: connect: no route to host
	I0603 12:05:39.759247   72964 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.245:22: connect: no route to host
	I0603 12:05:42.831304   72964 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.245:22: connect: no route to host
	I0603 12:05:48.911327   72964 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.245:22: connect: no route to host
	I0603 12:05:51.983403   72964 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.245:22: connect: no route to host
	I0603 12:05:58.063364   72964 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.245:22: connect: no route to host
	I0603 12:06:01.135268   72964 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.245:22: connect: no route to host
	I0603 12:06:07.215311   72964 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.245:22: connect: no route to host
	I0603 12:06:10.287358   72964 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.245:22: connect: no route to host
	I0603 12:06:16.367324   72964 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.245:22: connect: no route to host
	I0603 12:06:19.439350   72964 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.245:22: connect: no route to host
	I0603 12:06:22.443361   73179 start.go:364] duration metric: took 4m16.965076383s to acquireMachinesLock for "no-preload-602118"
	I0603 12:06:22.443417   73179 start.go:96] Skipping create...Using existing machine configuration
	I0603 12:06:22.443423   73179 fix.go:54] fixHost starting: 
	I0603 12:06:22.443783   73179 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 12:06:22.443812   73179 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 12:06:22.458838   73179 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35011
	I0603 12:06:22.459247   73179 main.go:141] libmachine: () Calling .GetVersion
	I0603 12:06:22.459645   73179 main.go:141] libmachine: Using API Version  1
	I0603 12:06:22.459662   73179 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 12:06:22.459991   73179 main.go:141] libmachine: () Calling .GetMachineName
	I0603 12:06:22.460181   73179 main.go:141] libmachine: (no-preload-602118) Calling .DriverName
	I0603 12:06:22.460333   73179 main.go:141] libmachine: (no-preload-602118) Calling .GetState
	I0603 12:06:22.461743   73179 fix.go:112] recreateIfNeeded on no-preload-602118: state=Stopped err=<nil>
	I0603 12:06:22.461765   73179 main.go:141] libmachine: (no-preload-602118) Calling .DriverName
	W0603 12:06:22.461946   73179 fix.go:138] unexpected machine state, will restart: <nil>
	I0603 12:06:22.463492   73179 out.go:177] * Restarting existing kvm2 VM for "no-preload-602118" ...
	I0603 12:06:22.440994   72964 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0603 12:06:22.441029   72964 main.go:141] libmachine: (embed-certs-725022) Calling .GetMachineName
	I0603 12:06:22.441366   72964 buildroot.go:166] provisioning hostname "embed-certs-725022"
	I0603 12:06:22.441382   72964 main.go:141] libmachine: (embed-certs-725022) Calling .GetMachineName
	I0603 12:06:22.441594   72964 main.go:141] libmachine: (embed-certs-725022) Calling .GetSSHHostname
	I0603 12:06:22.443211   72964 machine.go:97] duration metric: took 4m37.428820472s to provisionDockerMachine
	I0603 12:06:22.443252   72964 fix.go:56] duration metric: took 4m37.449227063s for fixHost
	I0603 12:06:22.443258   72964 start.go:83] releasing machines lock for "embed-certs-725022", held for 4m37.449246886s
	W0603 12:06:22.443279   72964 start.go:713] error starting host: provision: host is not running
	W0603 12:06:22.443377   72964 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0603 12:06:22.443391   72964 start.go:728] Will try again in 5 seconds ...
	I0603 12:06:22.464734   73179 main.go:141] libmachine: (no-preload-602118) Calling .Start
	I0603 12:06:22.464901   73179 main.go:141] libmachine: (no-preload-602118) Ensuring networks are active...
	I0603 12:06:22.465632   73179 main.go:141] libmachine: (no-preload-602118) Ensuring network default is active
	I0603 12:06:22.465908   73179 main.go:141] libmachine: (no-preload-602118) Ensuring network mk-no-preload-602118 is active
	I0603 12:06:22.466273   73179 main.go:141] libmachine: (no-preload-602118) Getting domain xml...
	I0603 12:06:22.466923   73179 main.go:141] libmachine: (no-preload-602118) Creating domain...
	I0603 12:06:23.644255   73179 main.go:141] libmachine: (no-preload-602118) Waiting to get IP...
	I0603 12:06:23.645290   73179 main.go:141] libmachine: (no-preload-602118) DBG | domain no-preload-602118 has defined MAC address 52:54:00:ac:6c:91 in network mk-no-preload-602118
	I0603 12:06:23.645661   73179 main.go:141] libmachine: (no-preload-602118) DBG | unable to find current IP address of domain no-preload-602118 in network mk-no-preload-602118
	I0603 12:06:23.645846   73179 main.go:141] libmachine: (no-preload-602118) DBG | I0603 12:06:23.645673   74346 retry.go:31] will retry after 270.126449ms: waiting for machine to come up
	I0603 12:06:23.917313   73179 main.go:141] libmachine: (no-preload-602118) DBG | domain no-preload-602118 has defined MAC address 52:54:00:ac:6c:91 in network mk-no-preload-602118
	I0603 12:06:23.917691   73179 main.go:141] libmachine: (no-preload-602118) DBG | unable to find current IP address of domain no-preload-602118 in network mk-no-preload-602118
	I0603 12:06:23.917724   73179 main.go:141] libmachine: (no-preload-602118) DBG | I0603 12:06:23.917635   74346 retry.go:31] will retry after 385.827167ms: waiting for machine to come up
	I0603 12:06:24.305342   73179 main.go:141] libmachine: (no-preload-602118) DBG | domain no-preload-602118 has defined MAC address 52:54:00:ac:6c:91 in network mk-no-preload-602118
	I0603 12:06:24.305787   73179 main.go:141] libmachine: (no-preload-602118) DBG | unable to find current IP address of domain no-preload-602118 in network mk-no-preload-602118
	I0603 12:06:24.305809   73179 main.go:141] libmachine: (no-preload-602118) DBG | I0603 12:06:24.305756   74346 retry.go:31] will retry after 361.435978ms: waiting for machine to come up
	I0603 12:06:24.669132   73179 main.go:141] libmachine: (no-preload-602118) DBG | domain no-preload-602118 has defined MAC address 52:54:00:ac:6c:91 in network mk-no-preload-602118
	I0603 12:06:24.669489   73179 main.go:141] libmachine: (no-preload-602118) DBG | unable to find current IP address of domain no-preload-602118 in network mk-no-preload-602118
	I0603 12:06:24.669510   73179 main.go:141] libmachine: (no-preload-602118) DBG | I0603 12:06:24.669460   74346 retry.go:31] will retry after 420.041485ms: waiting for machine to come up
	I0603 12:06:25.090925   73179 main.go:141] libmachine: (no-preload-602118) DBG | domain no-preload-602118 has defined MAC address 52:54:00:ac:6c:91 in network mk-no-preload-602118
	I0603 12:06:25.091348   73179 main.go:141] libmachine: (no-preload-602118) DBG | unable to find current IP address of domain no-preload-602118 in network mk-no-preload-602118
	I0603 12:06:25.091378   73179 main.go:141] libmachine: (no-preload-602118) DBG | I0603 12:06:25.091293   74346 retry.go:31] will retry after 624.215107ms: waiting for machine to come up
	I0603 12:06:27.445060   72964 start.go:360] acquireMachinesLock for embed-certs-725022: {Name:mk7389fd163f37e28f7d4d842bab151f7e27bc7c Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0603 12:06:25.717004   73179 main.go:141] libmachine: (no-preload-602118) DBG | domain no-preload-602118 has defined MAC address 52:54:00:ac:6c:91 in network mk-no-preload-602118
	I0603 12:06:25.717428   73179 main.go:141] libmachine: (no-preload-602118) DBG | unable to find current IP address of domain no-preload-602118 in network mk-no-preload-602118
	I0603 12:06:25.717459   73179 main.go:141] libmachine: (no-preload-602118) DBG | I0603 12:06:25.717376   74346 retry.go:31] will retry after 589.80788ms: waiting for machine to come up
	I0603 12:06:26.309117   73179 main.go:141] libmachine: (no-preload-602118) DBG | domain no-preload-602118 has defined MAC address 52:54:00:ac:6c:91 in network mk-no-preload-602118
	I0603 12:06:26.309553   73179 main.go:141] libmachine: (no-preload-602118) DBG | unable to find current IP address of domain no-preload-602118 in network mk-no-preload-602118
	I0603 12:06:26.309573   73179 main.go:141] libmachine: (no-preload-602118) DBG | I0603 12:06:26.309525   74346 retry.go:31] will retry after 1.045937243s: waiting for machine to come up
	I0603 12:06:27.356628   73179 main.go:141] libmachine: (no-preload-602118) DBG | domain no-preload-602118 has defined MAC address 52:54:00:ac:6c:91 in network mk-no-preload-602118
	I0603 12:06:27.357021   73179 main.go:141] libmachine: (no-preload-602118) DBG | unable to find current IP address of domain no-preload-602118 in network mk-no-preload-602118
	I0603 12:06:27.357091   73179 main.go:141] libmachine: (no-preload-602118) DBG | I0603 12:06:27.357005   74346 retry.go:31] will retry after 1.111448638s: waiting for machine to come up
	I0603 12:06:28.469530   73179 main.go:141] libmachine: (no-preload-602118) DBG | domain no-preload-602118 has defined MAC address 52:54:00:ac:6c:91 in network mk-no-preload-602118
	I0603 12:06:28.469988   73179 main.go:141] libmachine: (no-preload-602118) DBG | unable to find current IP address of domain no-preload-602118 in network mk-no-preload-602118
	I0603 12:06:28.470019   73179 main.go:141] libmachine: (no-preload-602118) DBG | I0603 12:06:28.469937   74346 retry.go:31] will retry after 1.80245369s: waiting for machine to come up
	I0603 12:06:30.274889   73179 main.go:141] libmachine: (no-preload-602118) DBG | domain no-preload-602118 has defined MAC address 52:54:00:ac:6c:91 in network mk-no-preload-602118
	I0603 12:06:30.275389   73179 main.go:141] libmachine: (no-preload-602118) DBG | unable to find current IP address of domain no-preload-602118 in network mk-no-preload-602118
	I0603 12:06:30.275422   73179 main.go:141] libmachine: (no-preload-602118) DBG | I0603 12:06:30.275339   74346 retry.go:31] will retry after 1.896022361s: waiting for machine to come up
	I0603 12:06:32.173697   73179 main.go:141] libmachine: (no-preload-602118) DBG | domain no-preload-602118 has defined MAC address 52:54:00:ac:6c:91 in network mk-no-preload-602118
	I0603 12:06:32.174116   73179 main.go:141] libmachine: (no-preload-602118) DBG | unable to find current IP address of domain no-preload-602118 in network mk-no-preload-602118
	I0603 12:06:32.174147   73179 main.go:141] libmachine: (no-preload-602118) DBG | I0603 12:06:32.174065   74346 retry.go:31] will retry after 2.13920116s: waiting for machine to come up
	I0603 12:06:34.315196   73179 main.go:141] libmachine: (no-preload-602118) DBG | domain no-preload-602118 has defined MAC address 52:54:00:ac:6c:91 in network mk-no-preload-602118
	I0603 12:06:34.315598   73179 main.go:141] libmachine: (no-preload-602118) DBG | unable to find current IP address of domain no-preload-602118 in network mk-no-preload-602118
	I0603 12:06:34.315629   73179 main.go:141] libmachine: (no-preload-602118) DBG | I0603 12:06:34.315556   74346 retry.go:31] will retry after 3.168755933s: waiting for machine to come up
	I0603 12:06:37.485424   73179 main.go:141] libmachine: (no-preload-602118) DBG | domain no-preload-602118 has defined MAC address 52:54:00:ac:6c:91 in network mk-no-preload-602118
	I0603 12:06:37.485804   73179 main.go:141] libmachine: (no-preload-602118) DBG | unable to find current IP address of domain no-preload-602118 in network mk-no-preload-602118
	I0603 12:06:37.485840   73179 main.go:141] libmachine: (no-preload-602118) DBG | I0603 12:06:37.485767   74346 retry.go:31] will retry after 3.278336467s: waiting for machine to come up
	I0603 12:06:42.080144   73294 start.go:364] duration metric: took 4m27.397961658s to acquireMachinesLock for "default-k8s-diff-port-196710"
	I0603 12:06:42.080213   73294 start.go:96] Skipping create...Using existing machine configuration
	I0603 12:06:42.080220   73294 fix.go:54] fixHost starting: 
	I0603 12:06:42.080611   73294 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 12:06:42.080640   73294 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 12:06:42.096874   73294 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35397
	I0603 12:06:42.097286   73294 main.go:141] libmachine: () Calling .GetVersion
	I0603 12:06:42.097763   73294 main.go:141] libmachine: Using API Version  1
	I0603 12:06:42.097789   73294 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 12:06:42.098191   73294 main.go:141] libmachine: () Calling .GetMachineName
	I0603 12:06:42.098383   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .DriverName
	I0603 12:06:42.098513   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .GetState
	I0603 12:06:42.099866   73294 fix.go:112] recreateIfNeeded on default-k8s-diff-port-196710: state=Stopped err=<nil>
	I0603 12:06:42.099890   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .DriverName
	W0603 12:06:42.100034   73294 fix.go:138] unexpected machine state, will restart: <nil>
	I0603 12:06:42.102388   73294 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-196710" ...
	I0603 12:06:40.768113   73179 main.go:141] libmachine: (no-preload-602118) DBG | domain no-preload-602118 has defined MAC address 52:54:00:ac:6c:91 in network mk-no-preload-602118
	I0603 12:06:40.768689   73179 main.go:141] libmachine: (no-preload-602118) Found IP for machine: 192.168.50.245
	I0603 12:06:40.768705   73179 main.go:141] libmachine: (no-preload-602118) Reserving static IP address...
	I0603 12:06:40.768717   73179 main.go:141] libmachine: (no-preload-602118) DBG | domain no-preload-602118 has current primary IP address 192.168.50.245 and MAC address 52:54:00:ac:6c:91 in network mk-no-preload-602118
	I0603 12:06:40.769262   73179 main.go:141] libmachine: (no-preload-602118) Reserved static IP address: 192.168.50.245
	I0603 12:06:40.769291   73179 main.go:141] libmachine: (no-preload-602118) Waiting for SSH to be available...
	I0603 12:06:40.769306   73179 main.go:141] libmachine: (no-preload-602118) DBG | found host DHCP lease matching {name: "no-preload-602118", mac: "52:54:00:ac:6c:91", ip: "192.168.50.245"} in network mk-no-preload-602118: {Iface:virbr2 ExpiryTime:2024-06-03 13:06:33 +0000 UTC Type:0 Mac:52:54:00:ac:6c:91 Iaid: IPaddr:192.168.50.245 Prefix:24 Hostname:no-preload-602118 Clientid:01:52:54:00:ac:6c:91}
	I0603 12:06:40.769324   73179 main.go:141] libmachine: (no-preload-602118) DBG | skip adding static IP to network mk-no-preload-602118 - found existing host DHCP lease matching {name: "no-preload-602118", mac: "52:54:00:ac:6c:91", ip: "192.168.50.245"}
	I0603 12:06:40.769336   73179 main.go:141] libmachine: (no-preload-602118) DBG | Getting to WaitForSSH function...
	I0603 12:06:40.771708   73179 main.go:141] libmachine: (no-preload-602118) DBG | domain no-preload-602118 has defined MAC address 52:54:00:ac:6c:91 in network mk-no-preload-602118
	I0603 12:06:40.772029   73179 main.go:141] libmachine: (no-preload-602118) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:6c:91", ip: ""} in network mk-no-preload-602118: {Iface:virbr2 ExpiryTime:2024-06-03 13:06:33 +0000 UTC Type:0 Mac:52:54:00:ac:6c:91 Iaid: IPaddr:192.168.50.245 Prefix:24 Hostname:no-preload-602118 Clientid:01:52:54:00:ac:6c:91}
	I0603 12:06:40.772056   73179 main.go:141] libmachine: (no-preload-602118) DBG | domain no-preload-602118 has defined IP address 192.168.50.245 and MAC address 52:54:00:ac:6c:91 in network mk-no-preload-602118
	I0603 12:06:40.772179   73179 main.go:141] libmachine: (no-preload-602118) DBG | Using SSH client type: external
	I0603 12:06:40.772203   73179 main.go:141] libmachine: (no-preload-602118) DBG | Using SSH private key: /home/jenkins/minikube-integration/19008-7755/.minikube/machines/no-preload-602118/id_rsa (-rw-------)
	I0603 12:06:40.772247   73179 main.go:141] libmachine: (no-preload-602118) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.245 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19008-7755/.minikube/machines/no-preload-602118/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0603 12:06:40.772276   73179 main.go:141] libmachine: (no-preload-602118) DBG | About to run SSH command:
	I0603 12:06:40.772292   73179 main.go:141] libmachine: (no-preload-602118) DBG | exit 0
	I0603 12:06:40.898941   73179 main.go:141] libmachine: (no-preload-602118) DBG | SSH cmd err, output: <nil>: 
	I0603 12:06:40.899308   73179 main.go:141] libmachine: (no-preload-602118) Calling .GetConfigRaw
	I0603 12:06:40.899900   73179 main.go:141] libmachine: (no-preload-602118) Calling .GetIP
	I0603 12:06:40.902486   73179 main.go:141] libmachine: (no-preload-602118) DBG | domain no-preload-602118 has defined MAC address 52:54:00:ac:6c:91 in network mk-no-preload-602118
	I0603 12:06:40.902835   73179 main.go:141] libmachine: (no-preload-602118) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:6c:91", ip: ""} in network mk-no-preload-602118: {Iface:virbr2 ExpiryTime:2024-06-03 13:06:33 +0000 UTC Type:0 Mac:52:54:00:ac:6c:91 Iaid: IPaddr:192.168.50.245 Prefix:24 Hostname:no-preload-602118 Clientid:01:52:54:00:ac:6c:91}
	I0603 12:06:40.902863   73179 main.go:141] libmachine: (no-preload-602118) DBG | domain no-preload-602118 has defined IP address 192.168.50.245 and MAC address 52:54:00:ac:6c:91 in network mk-no-preload-602118
	I0603 12:06:40.903133   73179 profile.go:143] Saving config to /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/no-preload-602118/config.json ...
	I0603 12:06:40.903331   73179 machine.go:94] provisionDockerMachine start ...
	I0603 12:06:40.903348   73179 main.go:141] libmachine: (no-preload-602118) Calling .DriverName
	I0603 12:06:40.903530   73179 main.go:141] libmachine: (no-preload-602118) Calling .GetSSHHostname
	I0603 12:06:40.905503   73179 main.go:141] libmachine: (no-preload-602118) DBG | domain no-preload-602118 has defined MAC address 52:54:00:ac:6c:91 in network mk-no-preload-602118
	I0603 12:06:40.905783   73179 main.go:141] libmachine: (no-preload-602118) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:6c:91", ip: ""} in network mk-no-preload-602118: {Iface:virbr2 ExpiryTime:2024-06-03 13:06:33 +0000 UTC Type:0 Mac:52:54:00:ac:6c:91 Iaid: IPaddr:192.168.50.245 Prefix:24 Hostname:no-preload-602118 Clientid:01:52:54:00:ac:6c:91}
	I0603 12:06:40.905816   73179 main.go:141] libmachine: (no-preload-602118) DBG | domain no-preload-602118 has defined IP address 192.168.50.245 and MAC address 52:54:00:ac:6c:91 in network mk-no-preload-602118
	I0603 12:06:40.905911   73179 main.go:141] libmachine: (no-preload-602118) Calling .GetSSHPort
	I0603 12:06:40.906094   73179 main.go:141] libmachine: (no-preload-602118) Calling .GetSSHKeyPath
	I0603 12:06:40.906253   73179 main.go:141] libmachine: (no-preload-602118) Calling .GetSSHKeyPath
	I0603 12:06:40.906416   73179 main.go:141] libmachine: (no-preload-602118) Calling .GetSSHUsername
	I0603 12:06:40.906579   73179 main.go:141] libmachine: Using SSH client type: native
	I0603 12:06:40.906760   73179 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.50.245 22 <nil> <nil>}
	I0603 12:06:40.906771   73179 main.go:141] libmachine: About to run SSH command:
	hostname
	I0603 12:06:41.015416   73179 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0603 12:06:41.015443   73179 main.go:141] libmachine: (no-preload-602118) Calling .GetMachineName
	I0603 12:06:41.015832   73179 buildroot.go:166] provisioning hostname "no-preload-602118"
	I0603 12:06:41.015861   73179 main.go:141] libmachine: (no-preload-602118) Calling .GetMachineName
	I0603 12:06:41.016078   73179 main.go:141] libmachine: (no-preload-602118) Calling .GetSSHHostname
	I0603 12:06:41.018606   73179 main.go:141] libmachine: (no-preload-602118) DBG | domain no-preload-602118 has defined MAC address 52:54:00:ac:6c:91 in network mk-no-preload-602118
	I0603 12:06:41.018898   73179 main.go:141] libmachine: (no-preload-602118) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:6c:91", ip: ""} in network mk-no-preload-602118: {Iface:virbr2 ExpiryTime:2024-06-03 13:06:33 +0000 UTC Type:0 Mac:52:54:00:ac:6c:91 Iaid: IPaddr:192.168.50.245 Prefix:24 Hostname:no-preload-602118 Clientid:01:52:54:00:ac:6c:91}
	I0603 12:06:41.018928   73179 main.go:141] libmachine: (no-preload-602118) DBG | domain no-preload-602118 has defined IP address 192.168.50.245 and MAC address 52:54:00:ac:6c:91 in network mk-no-preload-602118
	I0603 12:06:41.019125   73179 main.go:141] libmachine: (no-preload-602118) Calling .GetSSHPort
	I0603 12:06:41.019310   73179 main.go:141] libmachine: (no-preload-602118) Calling .GetSSHKeyPath
	I0603 12:06:41.019476   73179 main.go:141] libmachine: (no-preload-602118) Calling .GetSSHKeyPath
	I0603 12:06:41.019597   73179 main.go:141] libmachine: (no-preload-602118) Calling .GetSSHUsername
	I0603 12:06:41.019753   73179 main.go:141] libmachine: Using SSH client type: native
	I0603 12:06:41.019948   73179 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.50.245 22 <nil> <nil>}
	I0603 12:06:41.019961   73179 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-602118 && echo "no-preload-602118" | sudo tee /etc/hostname
	I0603 12:06:41.145267   73179 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-602118
	
	I0603 12:06:41.145298   73179 main.go:141] libmachine: (no-preload-602118) Calling .GetSSHHostname
	I0603 12:06:41.148117   73179 main.go:141] libmachine: (no-preload-602118) DBG | domain no-preload-602118 has defined MAC address 52:54:00:ac:6c:91 in network mk-no-preload-602118
	I0603 12:06:41.148416   73179 main.go:141] libmachine: (no-preload-602118) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:6c:91", ip: ""} in network mk-no-preload-602118: {Iface:virbr2 ExpiryTime:2024-06-03 13:06:33 +0000 UTC Type:0 Mac:52:54:00:ac:6c:91 Iaid: IPaddr:192.168.50.245 Prefix:24 Hostname:no-preload-602118 Clientid:01:52:54:00:ac:6c:91}
	I0603 12:06:41.148444   73179 main.go:141] libmachine: (no-preload-602118) DBG | domain no-preload-602118 has defined IP address 192.168.50.245 and MAC address 52:54:00:ac:6c:91 in network mk-no-preload-602118
	I0603 12:06:41.148692   73179 main.go:141] libmachine: (no-preload-602118) Calling .GetSSHPort
	I0603 12:06:41.148914   73179 main.go:141] libmachine: (no-preload-602118) Calling .GetSSHKeyPath
	I0603 12:06:41.149068   73179 main.go:141] libmachine: (no-preload-602118) Calling .GetSSHKeyPath
	I0603 12:06:41.149199   73179 main.go:141] libmachine: (no-preload-602118) Calling .GetSSHUsername
	I0603 12:06:41.149316   73179 main.go:141] libmachine: Using SSH client type: native
	I0603 12:06:41.149475   73179 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.50.245 22 <nil> <nil>}
	I0603 12:06:41.149490   73179 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-602118' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-602118/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-602118' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0603 12:06:41.267803   73179 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0603 12:06:41.267841   73179 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19008-7755/.minikube CaCertPath:/home/jenkins/minikube-integration/19008-7755/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19008-7755/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19008-7755/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19008-7755/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19008-7755/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19008-7755/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19008-7755/.minikube}
	I0603 12:06:41.267859   73179 buildroot.go:174] setting up certificates
	I0603 12:06:41.267869   73179 provision.go:84] configureAuth start
	I0603 12:06:41.267877   73179 main.go:141] libmachine: (no-preload-602118) Calling .GetMachineName
	I0603 12:06:41.268155   73179 main.go:141] libmachine: (no-preload-602118) Calling .GetIP
	I0603 12:06:41.270862   73179 main.go:141] libmachine: (no-preload-602118) DBG | domain no-preload-602118 has defined MAC address 52:54:00:ac:6c:91 in network mk-no-preload-602118
	I0603 12:06:41.271249   73179 main.go:141] libmachine: (no-preload-602118) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:6c:91", ip: ""} in network mk-no-preload-602118: {Iface:virbr2 ExpiryTime:2024-06-03 13:06:33 +0000 UTC Type:0 Mac:52:54:00:ac:6c:91 Iaid: IPaddr:192.168.50.245 Prefix:24 Hostname:no-preload-602118 Clientid:01:52:54:00:ac:6c:91}
	I0603 12:06:41.271271   73179 main.go:141] libmachine: (no-preload-602118) DBG | domain no-preload-602118 has defined IP address 192.168.50.245 and MAC address 52:54:00:ac:6c:91 in network mk-no-preload-602118
	I0603 12:06:41.271414   73179 main.go:141] libmachine: (no-preload-602118) Calling .GetSSHHostname
	I0603 12:06:41.273376   73179 main.go:141] libmachine: (no-preload-602118) DBG | domain no-preload-602118 has defined MAC address 52:54:00:ac:6c:91 in network mk-no-preload-602118
	I0603 12:06:41.273689   73179 main.go:141] libmachine: (no-preload-602118) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:6c:91", ip: ""} in network mk-no-preload-602118: {Iface:virbr2 ExpiryTime:2024-06-03 13:06:33 +0000 UTC Type:0 Mac:52:54:00:ac:6c:91 Iaid: IPaddr:192.168.50.245 Prefix:24 Hostname:no-preload-602118 Clientid:01:52:54:00:ac:6c:91}
	I0603 12:06:41.273715   73179 main.go:141] libmachine: (no-preload-602118) DBG | domain no-preload-602118 has defined IP address 192.168.50.245 and MAC address 52:54:00:ac:6c:91 in network mk-no-preload-602118
	I0603 12:06:41.273831   73179 provision.go:143] copyHostCerts
	I0603 12:06:41.273907   73179 exec_runner.go:144] found /home/jenkins/minikube-integration/19008-7755/.minikube/ca.pem, removing ...
	I0603 12:06:41.273926   73179 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19008-7755/.minikube/ca.pem
	I0603 12:06:41.274002   73179 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19008-7755/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19008-7755/.minikube/ca.pem (1082 bytes)
	I0603 12:06:41.274128   73179 exec_runner.go:144] found /home/jenkins/minikube-integration/19008-7755/.minikube/cert.pem, removing ...
	I0603 12:06:41.274138   73179 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19008-7755/.minikube/cert.pem
	I0603 12:06:41.274173   73179 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19008-7755/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19008-7755/.minikube/cert.pem (1123 bytes)
	I0603 12:06:41.274248   73179 exec_runner.go:144] found /home/jenkins/minikube-integration/19008-7755/.minikube/key.pem, removing ...
	I0603 12:06:41.274259   73179 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19008-7755/.minikube/key.pem
	I0603 12:06:41.274296   73179 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19008-7755/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19008-7755/.minikube/key.pem (1679 bytes)
	I0603 12:06:41.274369   73179 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19008-7755/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19008-7755/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19008-7755/.minikube/certs/ca-key.pem org=jenkins.no-preload-602118 san=[127.0.0.1 192.168.50.245 localhost minikube no-preload-602118]
	I0603 12:06:41.377976   73179 provision.go:177] copyRemoteCerts
	I0603 12:06:41.378029   73179 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0603 12:06:41.378053   73179 main.go:141] libmachine: (no-preload-602118) Calling .GetSSHHostname
	I0603 12:06:41.380502   73179 main.go:141] libmachine: (no-preload-602118) DBG | domain no-preload-602118 has defined MAC address 52:54:00:ac:6c:91 in network mk-no-preload-602118
	I0603 12:06:41.380818   73179 main.go:141] libmachine: (no-preload-602118) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:6c:91", ip: ""} in network mk-no-preload-602118: {Iface:virbr2 ExpiryTime:2024-06-03 13:06:33 +0000 UTC Type:0 Mac:52:54:00:ac:6c:91 Iaid: IPaddr:192.168.50.245 Prefix:24 Hostname:no-preload-602118 Clientid:01:52:54:00:ac:6c:91}
	I0603 12:06:41.380839   73179 main.go:141] libmachine: (no-preload-602118) DBG | domain no-preload-602118 has defined IP address 192.168.50.245 and MAC address 52:54:00:ac:6c:91 in network mk-no-preload-602118
	I0603 12:06:41.381002   73179 main.go:141] libmachine: (no-preload-602118) Calling .GetSSHPort
	I0603 12:06:41.381171   73179 main.go:141] libmachine: (no-preload-602118) Calling .GetSSHKeyPath
	I0603 12:06:41.381345   73179 main.go:141] libmachine: (no-preload-602118) Calling .GetSSHUsername
	I0603 12:06:41.381462   73179 sshutil.go:53] new ssh client: &{IP:192.168.50.245 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19008-7755/.minikube/machines/no-preload-602118/id_rsa Username:docker}
	I0603 12:06:41.465434   73179 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0603 12:06:41.492636   73179 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0603 12:06:41.516229   73179 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0603 12:06:41.538729   73179 provision.go:87] duration metric: took 270.850705ms to configureAuth
	I0603 12:06:41.538751   73179 buildroot.go:189] setting minikube options for container-runtime
	I0603 12:06:41.538913   73179 config.go:182] Loaded profile config "no-preload-602118": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0603 12:06:41.538998   73179 main.go:141] libmachine: (no-preload-602118) Calling .GetSSHHostname
	I0603 12:06:41.541514   73179 main.go:141] libmachine: (no-preload-602118) DBG | domain no-preload-602118 has defined MAC address 52:54:00:ac:6c:91 in network mk-no-preload-602118
	I0603 12:06:41.541818   73179 main.go:141] libmachine: (no-preload-602118) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:6c:91", ip: ""} in network mk-no-preload-602118: {Iface:virbr2 ExpiryTime:2024-06-03 13:06:33 +0000 UTC Type:0 Mac:52:54:00:ac:6c:91 Iaid: IPaddr:192.168.50.245 Prefix:24 Hostname:no-preload-602118 Clientid:01:52:54:00:ac:6c:91}
	I0603 12:06:41.541843   73179 main.go:141] libmachine: (no-preload-602118) DBG | domain no-preload-602118 has defined IP address 192.168.50.245 and MAC address 52:54:00:ac:6c:91 in network mk-no-preload-602118
	I0603 12:06:41.541966   73179 main.go:141] libmachine: (no-preload-602118) Calling .GetSSHPort
	I0603 12:06:41.542166   73179 main.go:141] libmachine: (no-preload-602118) Calling .GetSSHKeyPath
	I0603 12:06:41.542350   73179 main.go:141] libmachine: (no-preload-602118) Calling .GetSSHKeyPath
	I0603 12:06:41.542483   73179 main.go:141] libmachine: (no-preload-602118) Calling .GetSSHUsername
	I0603 12:06:41.542666   73179 main.go:141] libmachine: Using SSH client type: native
	I0603 12:06:41.542809   73179 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.50.245 22 <nil> <nil>}
	I0603 12:06:41.542823   73179 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0603 12:06:41.837735   73179 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0603 12:06:41.837766   73179 machine.go:97] duration metric: took 934.421104ms to provisionDockerMachine
	I0603 12:06:41.837780   73179 start.go:293] postStartSetup for "no-preload-602118" (driver="kvm2")
	I0603 12:06:41.837791   73179 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0603 12:06:41.837808   73179 main.go:141] libmachine: (no-preload-602118) Calling .DriverName
	I0603 12:06:41.838173   73179 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0603 12:06:41.838200   73179 main.go:141] libmachine: (no-preload-602118) Calling .GetSSHHostname
	I0603 12:06:41.840498   73179 main.go:141] libmachine: (no-preload-602118) DBG | domain no-preload-602118 has defined MAC address 52:54:00:ac:6c:91 in network mk-no-preload-602118
	I0603 12:06:41.840832   73179 main.go:141] libmachine: (no-preload-602118) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:6c:91", ip: ""} in network mk-no-preload-602118: {Iface:virbr2 ExpiryTime:2024-06-03 13:06:33 +0000 UTC Type:0 Mac:52:54:00:ac:6c:91 Iaid: IPaddr:192.168.50.245 Prefix:24 Hostname:no-preload-602118 Clientid:01:52:54:00:ac:6c:91}
	I0603 12:06:41.840861   73179 main.go:141] libmachine: (no-preload-602118) DBG | domain no-preload-602118 has defined IP address 192.168.50.245 and MAC address 52:54:00:ac:6c:91 in network mk-no-preload-602118
	I0603 12:06:41.840990   73179 main.go:141] libmachine: (no-preload-602118) Calling .GetSSHPort
	I0603 12:06:41.841179   73179 main.go:141] libmachine: (no-preload-602118) Calling .GetSSHKeyPath
	I0603 12:06:41.841351   73179 main.go:141] libmachine: (no-preload-602118) Calling .GetSSHUsername
	I0603 12:06:41.841473   73179 sshutil.go:53] new ssh client: &{IP:192.168.50.245 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19008-7755/.minikube/machines/no-preload-602118/id_rsa Username:docker}
	I0603 12:06:41.926168   73179 ssh_runner.go:195] Run: cat /etc/os-release
	I0603 12:06:41.930420   73179 info.go:137] Remote host: Buildroot 2023.02.9
	I0603 12:06:41.930450   73179 filesync.go:126] Scanning /home/jenkins/minikube-integration/19008-7755/.minikube/addons for local assets ...
	I0603 12:06:41.930509   73179 filesync.go:126] Scanning /home/jenkins/minikube-integration/19008-7755/.minikube/files for local assets ...
	I0603 12:06:41.930583   73179 filesync.go:149] local asset: /home/jenkins/minikube-integration/19008-7755/.minikube/files/etc/ssl/certs/150282.pem -> 150282.pem in /etc/ssl/certs
	I0603 12:06:41.930661   73179 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0603 12:06:41.940412   73179 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/files/etc/ssl/certs/150282.pem --> /etc/ssl/certs/150282.pem (1708 bytes)
	I0603 12:06:41.963912   73179 start.go:296] duration metric: took 126.115944ms for postStartSetup
	I0603 12:06:41.963949   73179 fix.go:56] duration metric: took 19.520525784s for fixHost
	I0603 12:06:41.963991   73179 main.go:141] libmachine: (no-preload-602118) Calling .GetSSHHostname
	I0603 12:06:41.966591   73179 main.go:141] libmachine: (no-preload-602118) DBG | domain no-preload-602118 has defined MAC address 52:54:00:ac:6c:91 in network mk-no-preload-602118
	I0603 12:06:41.966928   73179 main.go:141] libmachine: (no-preload-602118) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:6c:91", ip: ""} in network mk-no-preload-602118: {Iface:virbr2 ExpiryTime:2024-06-03 13:06:33 +0000 UTC Type:0 Mac:52:54:00:ac:6c:91 Iaid: IPaddr:192.168.50.245 Prefix:24 Hostname:no-preload-602118 Clientid:01:52:54:00:ac:6c:91}
	I0603 12:06:41.966946   73179 main.go:141] libmachine: (no-preload-602118) DBG | domain no-preload-602118 has defined IP address 192.168.50.245 and MAC address 52:54:00:ac:6c:91 in network mk-no-preload-602118
	I0603 12:06:41.967081   73179 main.go:141] libmachine: (no-preload-602118) Calling .GetSSHPort
	I0603 12:06:41.967272   73179 main.go:141] libmachine: (no-preload-602118) Calling .GetSSHKeyPath
	I0603 12:06:41.967423   73179 main.go:141] libmachine: (no-preload-602118) Calling .GetSSHKeyPath
	I0603 12:06:41.967608   73179 main.go:141] libmachine: (no-preload-602118) Calling .GetSSHUsername
	I0603 12:06:41.967762   73179 main.go:141] libmachine: Using SSH client type: native
	I0603 12:06:41.967918   73179 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.50.245 22 <nil> <nil>}
	I0603 12:06:41.967927   73179 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0603 12:06:42.079982   73179 main.go:141] libmachine: SSH cmd err, output: <nil>: 1717416402.057236225
	
	I0603 12:06:42.080009   73179 fix.go:216] guest clock: 1717416402.057236225
	I0603 12:06:42.080015   73179 fix.go:229] Guest: 2024-06-03 12:06:42.057236225 +0000 UTC Remote: 2024-06-03 12:06:41.963952729 +0000 UTC m=+276.629989589 (delta=93.283496ms)
	I0603 12:06:42.080041   73179 fix.go:200] guest clock delta is within tolerance: 93.283496ms
	I0603 12:06:42.080045   73179 start.go:83] releasing machines lock for "no-preload-602118", held for 19.636648914s
	I0603 12:06:42.080070   73179 main.go:141] libmachine: (no-preload-602118) Calling .DriverName
	I0603 12:06:42.080311   73179 main.go:141] libmachine: (no-preload-602118) Calling .GetIP
	I0603 12:06:42.083162   73179 main.go:141] libmachine: (no-preload-602118) DBG | domain no-preload-602118 has defined MAC address 52:54:00:ac:6c:91 in network mk-no-preload-602118
	I0603 12:06:42.083519   73179 main.go:141] libmachine: (no-preload-602118) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:6c:91", ip: ""} in network mk-no-preload-602118: {Iface:virbr2 ExpiryTime:2024-06-03 13:06:33 +0000 UTC Type:0 Mac:52:54:00:ac:6c:91 Iaid: IPaddr:192.168.50.245 Prefix:24 Hostname:no-preload-602118 Clientid:01:52:54:00:ac:6c:91}
	I0603 12:06:42.083544   73179 main.go:141] libmachine: (no-preload-602118) DBG | domain no-preload-602118 has defined IP address 192.168.50.245 and MAC address 52:54:00:ac:6c:91 in network mk-no-preload-602118
	I0603 12:06:42.083733   73179 main.go:141] libmachine: (no-preload-602118) Calling .DriverName
	I0603 12:06:42.084238   73179 main.go:141] libmachine: (no-preload-602118) Calling .DriverName
	I0603 12:06:42.084405   73179 main.go:141] libmachine: (no-preload-602118) Calling .DriverName
	I0603 12:06:42.084458   73179 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0603 12:06:42.084528   73179 main.go:141] libmachine: (no-preload-602118) Calling .GetSSHHostname
	I0603 12:06:42.084607   73179 ssh_runner.go:195] Run: cat /version.json
	I0603 12:06:42.084632   73179 main.go:141] libmachine: (no-preload-602118) Calling .GetSSHHostname
	I0603 12:06:42.087630   73179 main.go:141] libmachine: (no-preload-602118) DBG | domain no-preload-602118 has defined MAC address 52:54:00:ac:6c:91 in network mk-no-preload-602118
	I0603 12:06:42.087927   73179 main.go:141] libmachine: (no-preload-602118) DBG | domain no-preload-602118 has defined MAC address 52:54:00:ac:6c:91 in network mk-no-preload-602118
	I0603 12:06:42.087958   73179 main.go:141] libmachine: (no-preload-602118) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:6c:91", ip: ""} in network mk-no-preload-602118: {Iface:virbr2 ExpiryTime:2024-06-03 13:06:33 +0000 UTC Type:0 Mac:52:54:00:ac:6c:91 Iaid: IPaddr:192.168.50.245 Prefix:24 Hostname:no-preload-602118 Clientid:01:52:54:00:ac:6c:91}
	I0603 12:06:42.087981   73179 main.go:141] libmachine: (no-preload-602118) DBG | domain no-preload-602118 has defined IP address 192.168.50.245 and MAC address 52:54:00:ac:6c:91 in network mk-no-preload-602118
	I0603 12:06:42.088083   73179 main.go:141] libmachine: (no-preload-602118) Calling .GetSSHPort
	I0603 12:06:42.088261   73179 main.go:141] libmachine: (no-preload-602118) Calling .GetSSHKeyPath
	I0603 12:06:42.088441   73179 main.go:141] libmachine: (no-preload-602118) Calling .GetSSHUsername
	I0603 12:06:42.088463   73179 main.go:141] libmachine: (no-preload-602118) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:6c:91", ip: ""} in network mk-no-preload-602118: {Iface:virbr2 ExpiryTime:2024-06-03 13:06:33 +0000 UTC Type:0 Mac:52:54:00:ac:6c:91 Iaid: IPaddr:192.168.50.245 Prefix:24 Hostname:no-preload-602118 Clientid:01:52:54:00:ac:6c:91}
	I0603 12:06:42.088507   73179 main.go:141] libmachine: (no-preload-602118) DBG | domain no-preload-602118 has defined IP address 192.168.50.245 and MAC address 52:54:00:ac:6c:91 in network mk-no-preload-602118
	I0603 12:06:42.088592   73179 sshutil.go:53] new ssh client: &{IP:192.168.50.245 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19008-7755/.minikube/machines/no-preload-602118/id_rsa Username:docker}
	I0603 12:06:42.088666   73179 main.go:141] libmachine: (no-preload-602118) Calling .GetSSHPort
	I0603 12:06:42.088800   73179 main.go:141] libmachine: (no-preload-602118) Calling .GetSSHKeyPath
	I0603 12:06:42.088961   73179 main.go:141] libmachine: (no-preload-602118) Calling .GetSSHUsername
	I0603 12:06:42.089101   73179 sshutil.go:53] new ssh client: &{IP:192.168.50.245 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19008-7755/.minikube/machines/no-preload-602118/id_rsa Username:docker}
	I0603 12:06:42.192400   73179 ssh_runner.go:195] Run: systemctl --version
	I0603 12:06:42.198773   73179 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0603 12:06:42.345931   73179 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0603 12:06:42.351818   73179 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0603 12:06:42.351877   73179 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0603 12:06:42.368582   73179 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0603 12:06:42.368609   73179 start.go:494] detecting cgroup driver to use...
	I0603 12:06:42.368680   73179 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0603 12:06:42.384411   73179 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0603 12:06:42.398006   73179 docker.go:217] disabling cri-docker service (if available) ...
	I0603 12:06:42.398052   73179 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0603 12:06:42.412680   73179 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0603 12:06:42.427157   73179 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0603 12:06:42.537162   73179 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0603 12:06:42.683438   73179 docker.go:233] disabling docker service ...
	I0603 12:06:42.683505   73179 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0603 12:06:42.697969   73179 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0603 12:06:42.711164   73179 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0603 12:06:42.835194   73179 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0603 12:06:42.947116   73179 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0603 12:06:42.961398   73179 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0603 12:06:42.980179   73179 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0603 12:06:42.980227   73179 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 12:06:42.990583   73179 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0603 12:06:42.990642   73179 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 12:06:43.001031   73179 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 12:06:43.012124   73179 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 12:06:43.023143   73179 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0603 12:06:43.034535   73179 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 12:06:43.045854   73179 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 12:06:43.063071   73179 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 12:06:43.074257   73179 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0603 12:06:43.083914   73179 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0603 12:06:43.083965   73179 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0603 12:06:43.098285   73179 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0603 12:06:43.108034   73179 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0603 12:06:43.219068   73179 ssh_runner.go:195] Run: sudo systemctl restart crio
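	The Run lines from 12:06:42.961 through 12:06:43.219 above are the CRI-O preparation for this profile. Consolidated into a plain shell sketch (commands, paths, and values are taken directly from the log; this is a summary of the steps already shown, not an additional step the test performed):

	  # point crictl at the CRI-O socket
	  sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	  " | sudo tee /etc/crictl.yaml

	  # pause image, cgroupfs as cgroup manager, conmon in the pod cgroup
	  sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf
	  sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf
	  sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf
	  sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf

	  # enable bridge netfilter and IP forwarding, then restart CRI-O
	  sudo modprobe br_netfilter
	  sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	  sudo systemctl daemon-reload && sudo systemctl restart crio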
	I0603 12:06:43.376591   73179 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0603 12:06:43.376655   73179 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0603 12:06:43.381868   73179 start.go:562] Will wait 60s for crictl version
	I0603 12:06:43.381939   73179 ssh_runner.go:195] Run: which crictl
	I0603 12:06:43.385730   73179 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0603 12:06:43.423331   73179 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0603 12:06:43.423428   73179 ssh_runner.go:195] Run: crio --version
	I0603 12:06:43.450760   73179 ssh_runner.go:195] Run: crio --version
	I0603 12:06:43.479690   73179 out.go:177] * Preparing Kubernetes v1.30.1 on CRI-O 1.29.1 ...
	I0603 12:06:42.103653   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .Start
	I0603 12:06:42.103818   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Ensuring networks are active...
	I0603 12:06:42.104660   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Ensuring network default is active
	I0603 12:06:42.104985   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Ensuring network mk-default-k8s-diff-port-196710 is active
	I0603 12:06:42.105332   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Getting domain xml...
	I0603 12:06:42.106264   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Creating domain...
	I0603 12:06:43.347118   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Waiting to get IP...
	I0603 12:06:43.347855   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | domain default-k8s-diff-port-196710 has defined MAC address 52:54:00:9c:61:49 in network mk-default-k8s-diff-port-196710
	I0603 12:06:43.348279   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | unable to find current IP address of domain default-k8s-diff-port-196710 in network mk-default-k8s-diff-port-196710
	I0603 12:06:43.348337   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | I0603 12:06:43.348249   74483 retry.go:31] will retry after 307.61274ms: waiting for machine to come up
	I0603 12:06:43.657720   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | domain default-k8s-diff-port-196710 has defined MAC address 52:54:00:9c:61:49 in network mk-default-k8s-diff-port-196710
	I0603 12:06:43.658162   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | unable to find current IP address of domain default-k8s-diff-port-196710 in network mk-default-k8s-diff-port-196710
	I0603 12:06:43.658188   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | I0603 12:06:43.658129   74483 retry.go:31] will retry after 387.079794ms: waiting for machine to come up
	I0603 12:06:44.046798   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | domain default-k8s-diff-port-196710 has defined MAC address 52:54:00:9c:61:49 in network mk-default-k8s-diff-port-196710
	I0603 12:06:44.047345   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | unable to find current IP address of domain default-k8s-diff-port-196710 in network mk-default-k8s-diff-port-196710
	I0603 12:06:44.047376   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | I0603 12:06:44.047279   74483 retry.go:31] will retry after 482.224139ms: waiting for machine to come up
	I0603 12:06:44.531107   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | domain default-k8s-diff-port-196710 has defined MAC address 52:54:00:9c:61:49 in network mk-default-k8s-diff-port-196710
	I0603 12:06:44.531588   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | unable to find current IP address of domain default-k8s-diff-port-196710 in network mk-default-k8s-diff-port-196710
	I0603 12:06:44.531615   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | I0603 12:06:44.531542   74483 retry.go:31] will retry after 438.288195ms: waiting for machine to come up
	I0603 12:06:43.481020   73179 main.go:141] libmachine: (no-preload-602118) Calling .GetIP
	I0603 12:06:43.483887   73179 main.go:141] libmachine: (no-preload-602118) DBG | domain no-preload-602118 has defined MAC address 52:54:00:ac:6c:91 in network mk-no-preload-602118
	I0603 12:06:43.484297   73179 main.go:141] libmachine: (no-preload-602118) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:6c:91", ip: ""} in network mk-no-preload-602118: {Iface:virbr2 ExpiryTime:2024-06-03 13:06:33 +0000 UTC Type:0 Mac:52:54:00:ac:6c:91 Iaid: IPaddr:192.168.50.245 Prefix:24 Hostname:no-preload-602118 Clientid:01:52:54:00:ac:6c:91}
	I0603 12:06:43.484324   73179 main.go:141] libmachine: (no-preload-602118) DBG | domain no-preload-602118 has defined IP address 192.168.50.245 and MAC address 52:54:00:ac:6c:91 in network mk-no-preload-602118
	I0603 12:06:43.484533   73179 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0603 12:06:43.488769   73179 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0603 12:06:43.501433   73179 kubeadm.go:877] updating cluster {Name:no-preload-602118 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.30.1 ClusterName:no-preload-602118 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.245 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:f
alse MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0603 12:06:43.501583   73179 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime crio
	I0603 12:06:43.501644   73179 ssh_runner.go:195] Run: sudo crictl images --output json
	I0603 12:06:43.537382   73179 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.1". assuming images are not preloaded.
	I0603 12:06:43.537407   73179 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.30.1 registry.k8s.io/kube-controller-manager:v1.30.1 registry.k8s.io/kube-scheduler:v1.30.1 registry.k8s.io/kube-proxy:v1.30.1 registry.k8s.io/pause:3.9 registry.k8s.io/etcd:3.5.12-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0603 12:06:43.537504   73179 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.30.1
	I0603 12:06:43.537483   73179 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.30.1
	I0603 12:06:43.537484   73179 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.12-0
	I0603 12:06:43.537597   73179 image.go:134] retrieving image: registry.k8s.io/pause:3.9
	I0603 12:06:43.537483   73179 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0603 12:06:43.537618   73179 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.30.1
	I0603 12:06:43.537612   73179 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.30.1
	I0603 12:06:43.537771   73179 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0603 12:06:43.539200   73179 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.30.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.30.1
	I0603 12:06:43.539472   73179 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.30.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.30.1
	I0603 12:06:43.539491   73179 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.30.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.30.1
	I0603 12:06:43.539504   73179 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.30.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.30.1
	I0603 12:06:43.539530   73179 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.12-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.12-0
	I0603 12:06:43.539565   73179 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0603 12:06:43.539472   73179 image.go:177] daemon lookup for registry.k8s.io/pause:3.9: Error response from daemon: No such image: registry.k8s.io/pause:3.9
	I0603 12:06:43.539934   73179 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0603 12:06:43.694144   73179 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.12-0
	I0603 12:06:43.714990   73179 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.30.1
	I0603 12:06:43.720270   73179 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.30.1
	I0603 12:06:43.734481   73179 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.30.1
	I0603 12:06:43.751928   73179 cache_images.go:116] "registry.k8s.io/etcd:3.5.12-0" needs transfer: "registry.k8s.io/etcd:3.5.12-0" does not exist at hash "3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899" in container runtime
	I0603 12:06:43.751970   73179 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.12-0
	I0603 12:06:43.752018   73179 ssh_runner.go:195] Run: which crictl
	I0603 12:06:43.780362   73179 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.30.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.30.1" does not exist at hash "91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a" in container runtime
	I0603 12:06:43.780408   73179 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.30.1
	I0603 12:06:43.780455   73179 ssh_runner.go:195] Run: which crictl
	I0603 12:06:43.798376   73179 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.30.1" needs transfer: "registry.k8s.io/kube-proxy:v1.30.1" does not exist at hash "747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd" in container runtime
	I0603 12:06:43.798415   73179 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.30.1
	I0603 12:06:43.798465   73179 ssh_runner.go:195] Run: which crictl
	I0603 12:06:43.801422   73179 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.9
	I0603 12:06:43.811338   73179 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0603 12:06:43.823969   73179 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.30.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.30.1" does not exist at hash "25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c" in container runtime
	I0603 12:06:43.824052   73179 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.30.1
	I0603 12:06:43.823979   73179 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.12-0
	I0603 12:06:43.824096   73179 ssh_runner.go:195] Run: which crictl
	I0603 12:06:43.824106   73179 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.30.1
	I0603 12:06:43.824088   73179 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.30.1
	I0603 12:06:43.861957   73179 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.30.1
	I0603 12:06:44.001291   73179 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0603 12:06:44.001312   73179 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/19008-7755/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.30.1
	I0603 12:06:44.001344   73179 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0603 12:06:44.001390   73179 ssh_runner.go:195] Run: which crictl
	I0603 12:06:44.001454   73179 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.30.1
	I0603 12:06:44.001472   73179 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/19008-7755/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.12-0
	I0603 12:06:44.001405   73179 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.30.1
	I0603 12:06:44.001544   73179 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.12-0
	I0603 12:06:44.001405   73179 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/19008-7755/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.30.1
	I0603 12:06:44.001520   73179 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.30.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.30.1" does not exist at hash "a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035" in container runtime
	I0603 12:06:44.001622   73179 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.30.1
	I0603 12:06:44.001627   73179 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.30.1
	I0603 12:06:44.001685   73179 ssh_runner.go:195] Run: which crictl
	I0603 12:06:44.014801   73179 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.30.1 (exists)
	I0603 12:06:44.014820   73179 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.30.1
	I0603 12:06:44.014858   73179 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.30.1
	I0603 12:06:44.049018   73179 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.12-0 (exists)
	I0603 12:06:44.049106   73179 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/19008-7755/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.30.1
	I0603 12:06:44.049138   73179 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0603 12:06:44.049149   73179 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.30.1
	I0603 12:06:44.049193   73179 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.30.1
	I0603 12:06:44.049202   73179 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.30.1 (exists)
	I0603 12:06:44.414960   73179 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0603 12:06:44.971603   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | domain default-k8s-diff-port-196710 has defined MAC address 52:54:00:9c:61:49 in network mk-default-k8s-diff-port-196710
	I0603 12:06:44.971986   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | unable to find current IP address of domain default-k8s-diff-port-196710 in network mk-default-k8s-diff-port-196710
	I0603 12:06:44.972027   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | I0603 12:06:44.971941   74483 retry.go:31] will retry after 696.415219ms: waiting for machine to come up
	I0603 12:06:45.669711   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | domain default-k8s-diff-port-196710 has defined MAC address 52:54:00:9c:61:49 in network mk-default-k8s-diff-port-196710
	I0603 12:06:45.670032   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | unable to find current IP address of domain default-k8s-diff-port-196710 in network mk-default-k8s-diff-port-196710
	I0603 12:06:45.670064   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | I0603 12:06:45.670011   74483 retry.go:31] will retry after 706.751938ms: waiting for machine to come up
	I0603 12:06:46.378097   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | domain default-k8s-diff-port-196710 has defined MAC address 52:54:00:9c:61:49 in network mk-default-k8s-diff-port-196710
	I0603 12:06:46.378510   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | unable to find current IP address of domain default-k8s-diff-port-196710 in network mk-default-k8s-diff-port-196710
	I0603 12:06:46.378552   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | I0603 12:06:46.378484   74483 retry.go:31] will retry after 1.039219665s: waiting for machine to come up
	I0603 12:06:47.419138   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | domain default-k8s-diff-port-196710 has defined MAC address 52:54:00:9c:61:49 in network mk-default-k8s-diff-port-196710
	I0603 12:06:47.419573   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | unable to find current IP address of domain default-k8s-diff-port-196710 in network mk-default-k8s-diff-port-196710
	I0603 12:06:47.419601   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | I0603 12:06:47.419520   74483 retry.go:31] will retry after 1.138110516s: waiting for machine to come up
	I0603 12:06:48.559728   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | domain default-k8s-diff-port-196710 has defined MAC address 52:54:00:9c:61:49 in network mk-default-k8s-diff-port-196710
	I0603 12:06:48.560297   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | unable to find current IP address of domain default-k8s-diff-port-196710 in network mk-default-k8s-diff-port-196710
	I0603 12:06:48.560320   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | I0603 12:06:48.560259   74483 retry.go:31] will retry after 1.175521014s: waiting for machine to come up
	I0603 12:06:46.011238   73179 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.30.1: (1.996357708s)
	I0603 12:06:46.011274   73179 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/19008-7755/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.30.1 from cache
	I0603 12:06:46.011313   73179 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.12-0
	I0603 12:06:46.011322   73179 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.30.1: (1.96210268s)
	I0603 12:06:46.011332   73179 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.30.1: (1.962169544s)
	I0603 12:06:46.011353   73179 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.30.1 (exists)
	I0603 12:06:46.011367   73179 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/19008-7755/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.30.1
	I0603 12:06:46.011386   73179 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.12-0
	I0603 12:06:46.011397   73179 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1: (1.962226902s)
	I0603 12:06:46.011424   73179 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/19008-7755/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0603 12:06:46.011426   73179 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (1.596439345s)
	I0603 12:06:46.011451   73179 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.30.1
	I0603 12:06:46.011474   73179 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0603 12:06:46.011483   73179 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.1
	I0603 12:06:46.011508   73179 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0603 12:06:46.011545   73179 ssh_runner.go:195] Run: which crictl
	I0603 12:06:46.020596   73179 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.30.1 (exists)
	I0603 12:06:46.020599   73179 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0603 12:06:46.020749   73179 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I0603 12:06:49.747952   73179 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (3.727320079s)
	I0603 12:06:49.748008   73179 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/19008-7755/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0603 12:06:49.748024   73179 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.12-0: (3.736616522s)
	I0603 12:06:49.748048   73179 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/19008-7755/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.12-0 from cache
	I0603 12:06:49.748074   73179 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.30.1
	I0603 12:06:49.748108   73179 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0603 12:06:49.748120   73179 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.30.1
	I0603 12:06:49.753125   73179 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0603 12:06:49.737515   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | domain default-k8s-diff-port-196710 has defined MAC address 52:54:00:9c:61:49 in network mk-default-k8s-diff-port-196710
	I0603 12:06:49.738009   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | unable to find current IP address of domain default-k8s-diff-port-196710 in network mk-default-k8s-diff-port-196710
	I0603 12:06:49.738036   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | I0603 12:06:49.737954   74483 retry.go:31] will retry after 2.132134762s: waiting for machine to come up
	I0603 12:06:51.872423   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | domain default-k8s-diff-port-196710 has defined MAC address 52:54:00:9c:61:49 in network mk-default-k8s-diff-port-196710
	I0603 12:06:51.872917   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | unable to find current IP address of domain default-k8s-diff-port-196710 in network mk-default-k8s-diff-port-196710
	I0603 12:06:51.872945   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | I0603 12:06:51.872857   74483 retry.go:31] will retry after 2.778528878s: waiting for machine to come up
	I0603 12:06:52.416845   73179 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.30.1: (2.668695263s)
	I0603 12:06:52.416881   73179 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/19008-7755/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.30.1 from cache
	I0603 12:06:52.416909   73179 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.30.1
	I0603 12:06:52.417012   73179 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.30.1
	I0603 12:06:54.588430   73179 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.30.1: (2.171386022s)
	I0603 12:06:54.588455   73179 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/19008-7755/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.30.1 from cache
	I0603 12:06:54.588480   73179 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.30.1
	I0603 12:06:54.588528   73179 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.30.1
	I0603 12:06:54.653098   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | domain default-k8s-diff-port-196710 has defined MAC address 52:54:00:9c:61:49 in network mk-default-k8s-diff-port-196710
	I0603 12:06:54.653566   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | unable to find current IP address of domain default-k8s-diff-port-196710 in network mk-default-k8s-diff-port-196710
	I0603 12:06:54.653596   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | I0603 12:06:54.653504   74483 retry.go:31] will retry after 2.88020763s: waiting for machine to come up
	I0603 12:06:57.535688   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | domain default-k8s-diff-port-196710 has defined MAC address 52:54:00:9c:61:49 in network mk-default-k8s-diff-port-196710
	I0603 12:06:57.536303   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | unable to find current IP address of domain default-k8s-diff-port-196710 in network mk-default-k8s-diff-port-196710
	I0603 12:06:57.536331   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | I0603 12:06:57.536246   74483 retry.go:31] will retry after 4.007108619s: waiting for machine to come up
	I0603 12:06:55.946565   73179 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.30.1: (1.358013442s)
	I0603 12:06:55.946595   73179 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/19008-7755/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.30.1 from cache
	I0603 12:06:55.946618   73179 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0603 12:06:55.946654   73179 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I0603 12:06:57.739662   73179 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (1.792982594s)
	I0603 12:06:57.739693   73179 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/19008-7755/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0603 12:06:57.739720   73179 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0603 12:06:57.739766   73179 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0603 12:06:58.592007   73179 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/19008-7755/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0603 12:06:58.592049   73179 cache_images.go:123] Successfully loaded all cached images
	I0603 12:06:58.592075   73179 cache_images.go:92] duration metric: took 15.054652125s to LoadCachedImages
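	Each image above went through the same LoadCachedImages pattern: inspect podman/CRI-O storage for the expected image ID, drop the stale tag when the hash does not match, reuse or copy the cached tarball, and load it with podman (CRI-O and podman share the containers/storage backend, so a podman load makes the image visible to crictl). A minimal per-image sketch built only from the commands shown in the log, using kube-proxy as the example:

	  img=registry.k8s.io/kube-proxy:v1.30.1
	  tar=/var/lib/minikube/images/kube-proxy_v1.30.1
	  if ! sudo podman image inspect --format '{{.Id}}' "$img" >/dev/null 2>&1; then
	    sudo /usr/bin/crictl rmi "$img" || true    # drop the stale tag, if any
	    sudo podman load -i "$tar"                 # load the cached tarball
	  fi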
	I0603 12:06:58.592096   73179 kubeadm.go:928] updating node { 192.168.50.245 8443 v1.30.1 crio true true} ...
	I0603 12:06:58.592210   73179 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-602118 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.245
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.1 ClusterName:no-preload-602118 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0603 12:06:58.592278   73179 ssh_runner.go:195] Run: crio config
	I0603 12:06:58.637533   73179 cni.go:84] Creating CNI manager for ""
	I0603 12:06:58.637561   73179 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0603 12:06:58.637583   73179 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0603 12:06:58.637620   73179 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.245 APIServerPort:8443 KubernetesVersion:v1.30.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-602118 NodeName:no-preload-602118 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.245"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.245 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPat
h:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0603 12:06:58.637822   73179 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.245
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-602118"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.245
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.245"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0603 12:06:58.637918   73179 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.1
	I0603 12:06:58.649096   73179 binaries.go:44] Found k8s binaries, skipping transfer
	I0603 12:06:58.649150   73179 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0603 12:06:58.658815   73179 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0603 12:06:58.675538   73179 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0603 12:06:58.692443   73179 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2161 bytes)
	I0603 12:06:58.709416   73179 ssh_runner.go:195] Run: grep 192.168.50.245	control-plane.minikube.internal$ /etc/hosts
	I0603 12:06:58.713241   73179 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.245	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0603 12:06:58.725522   73179 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0603 12:06:58.846624   73179 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0603 12:06:58.864101   73179 certs.go:68] Setting up /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/no-preload-602118 for IP: 192.168.50.245
	I0603 12:06:58.864129   73179 certs.go:194] generating shared ca certs ...
	I0603 12:06:58.864149   73179 certs.go:226] acquiring lock for ca certs: {Name:mk50092df3bdecbf959926adc377ee06b75aacd9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 12:06:58.864311   73179 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19008-7755/.minikube/ca.key
	I0603 12:06:58.864362   73179 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19008-7755/.minikube/proxy-client-ca.key
	I0603 12:06:58.864376   73179 certs.go:256] generating profile certs ...
	I0603 12:06:58.864473   73179 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/no-preload-602118/client.key
	I0603 12:06:58.864551   73179 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/no-preload-602118/apiserver.key.eef28f92
	I0603 12:06:58.864602   73179 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/no-preload-602118/proxy-client.key
	I0603 12:06:58.864744   73179 certs.go:484] found cert: /home/jenkins/minikube-integration/19008-7755/.minikube/certs/15028.pem (1338 bytes)
	W0603 12:06:58.864786   73179 certs.go:480] ignoring /home/jenkins/minikube-integration/19008-7755/.minikube/certs/15028_empty.pem, impossibly tiny 0 bytes
	I0603 12:06:58.864800   73179 certs.go:484] found cert: /home/jenkins/minikube-integration/19008-7755/.minikube/certs/ca-key.pem (1679 bytes)
	I0603 12:06:58.864836   73179 certs.go:484] found cert: /home/jenkins/minikube-integration/19008-7755/.minikube/certs/ca.pem (1082 bytes)
	I0603 12:06:58.864869   73179 certs.go:484] found cert: /home/jenkins/minikube-integration/19008-7755/.minikube/certs/cert.pem (1123 bytes)
	I0603 12:06:58.864900   73179 certs.go:484] found cert: /home/jenkins/minikube-integration/19008-7755/.minikube/certs/key.pem (1679 bytes)
	I0603 12:06:58.865039   73179 certs.go:484] found cert: /home/jenkins/minikube-integration/19008-7755/.minikube/files/etc/ssl/certs/150282.pem (1708 bytes)
	I0603 12:06:58.865705   73179 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0603 12:06:58.898291   73179 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0603 12:06:58.923481   73179 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0603 12:06:58.955249   73179 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0603 12:06:58.986524   73179 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/no-preload-602118/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0603 12:06:59.037456   73179 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/no-preload-602118/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0603 12:06:59.061989   73179 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/no-preload-602118/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0603 12:06:59.085738   73179 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/no-preload-602118/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0603 12:06:59.109202   73179 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0603 12:06:59.132149   73179 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/certs/15028.pem --> /usr/share/ca-certificates/15028.pem (1338 bytes)
	I0603 12:06:59.154957   73179 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/files/etc/ssl/certs/150282.pem --> /usr/share/ca-certificates/150282.pem (1708 bytes)
	I0603 12:06:59.177797   73179 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0603 12:06:59.194816   73179 ssh_runner.go:195] Run: openssl version
	I0603 12:06:59.200714   73179 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0603 12:06:59.211392   73179 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0603 12:06:59.215900   73179 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jun  3 10:39 /usr/share/ca-certificates/minikubeCA.pem
	I0603 12:06:59.215950   73179 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0603 12:06:59.221796   73179 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0603 12:06:59.232655   73179 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15028.pem && ln -fs /usr/share/ca-certificates/15028.pem /etc/ssl/certs/15028.pem"
	I0603 12:06:59.243679   73179 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15028.pem
	I0603 12:06:59.248120   73179 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jun  3 10:51 /usr/share/ca-certificates/15028.pem
	I0603 12:06:59.248168   73179 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15028.pem
	I0603 12:06:59.253816   73179 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/15028.pem /etc/ssl/certs/51391683.0"
	I0603 12:06:59.264416   73179 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/150282.pem && ln -fs /usr/share/ca-certificates/150282.pem /etc/ssl/certs/150282.pem"
	I0603 12:06:59.275143   73179 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/150282.pem
	I0603 12:06:59.279393   73179 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jun  3 10:51 /usr/share/ca-certificates/150282.pem
	I0603 12:06:59.279431   73179 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/150282.pem
	I0603 12:06:59.285269   73179 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/150282.pem /etc/ssl/certs/3ec20f2e.0"
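	The three certificate blocks above all follow the standard OpenSSL trust-store pattern: place the PEM under /usr/share/ca-certificates, link it into /etc/ssl/certs, and add a <subject-hash>.0 symlink so TLS libraries can find it by hash. A sketch of one iteration, using the minikubeCA certificate from the log (whose subject hash resolves to b5213941 here):

	  cert=/usr/share/ca-certificates/minikubeCA.pem
	  sudo ln -fs "$cert" /etc/ssl/certs/minikubeCA.pem
	  hash=$(openssl x509 -hash -noout -in "$cert")    # e.g. b5213941
	  sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${hash}.0"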
	I0603 12:06:59.295789   73179 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0603 12:06:59.300138   73179 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0603 12:06:59.305722   73179 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0603 12:06:59.311381   73179 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0603 12:06:59.317037   73179 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0603 12:06:59.322539   73179 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0603 12:06:59.328067   73179 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
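
The `-checkend 86400` runs above ask openssl whether each control-plane certificate will still be valid 86400 seconds (24 hours) from now; a failing check is what makes minikube regenerate certificates on restart. A minimal Go sketch of the same check (illustrative only, not minikube's code; the path is one of the files checked above):

// certcheck.go — illustrative equivalent of `openssl x509 -noout -checkend 86400`.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the certificate at path stops being valid
// within the given window (here: 24 hours), mirroring -checkend 86400.
func expiresWithin(path string, window time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(window).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	if soon {
		fmt.Println("certificate expires within 24h; it would be regenerated")
	}
}
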
	I0603 12:06:59.333575   73179 kubeadm.go:391] StartCluster: {Name:no-preload-602118 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:no-preload-602118 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.245 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0603 12:06:59.333659   73179 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0603 12:06:59.333712   73179 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0603 12:06:59.374413   73179 cri.go:89] found id: ""
	I0603 12:06:59.374471   73179 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0603 12:06:59.384802   73179 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0603 12:06:59.384819   73179 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0603 12:06:59.384832   73179 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0603 12:06:59.384878   73179 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0603 12:06:59.394669   73179 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0603 12:06:59.395564   73179 kubeconfig.go:125] found "no-preload-602118" server: "https://192.168.50.245:8443"
	I0603 12:06:59.397420   73179 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0603 12:06:59.407251   73179 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.50.245
	I0603 12:06:59.407281   73179 kubeadm.go:1154] stopping kube-system containers ...
	I0603 12:06:59.407295   73179 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0603 12:06:59.407347   73179 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0603 12:06:59.452986   73179 cri.go:89] found id: ""
	I0603 12:06:59.453067   73179 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0603 12:06:59.470164   73179 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0603 12:06:59.480228   73179 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0603 12:06:59.480249   73179 kubeadm.go:156] found existing configuration files:
	
	I0603 12:06:59.480291   73179 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0603 12:06:59.489923   73179 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0603 12:06:59.489968   73179 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0603 12:06:59.499530   73179 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0603 12:06:59.508336   73179 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0603 12:06:59.508376   73179 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0603 12:06:59.517665   73179 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0603 12:06:59.526660   73179 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0603 12:06:59.526697   73179 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0603 12:06:59.535973   73179 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0603 12:06:59.544846   73179 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0603 12:06:59.544885   73179 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
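
Each grep above looks for the expected control-plane endpoint in an existing kubeconfig; when the endpoint is absent (or, as here, the file does not exist), the file is removed so kubeadm can regenerate it against the right endpoint. A small Go sketch of that stale-config sweep (illustrative only; minikube performs the equivalent over SSH with grep and rm):

// staleconf.go — illustrative sketch of the stale kubeconfig cleanup shown above.
package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	endpoint := "https://control-plane.minikube.internal:8443"
	confs := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, path := range confs {
		data, err := os.ReadFile(path)
		if err != nil {
			// File missing: nothing to clean up (the grep in the log exits with status 2 here).
			continue
		}
		if !strings.Contains(string(data), endpoint) {
			// Config points elsewhere: treat it as stale and delete it so kubeadm
			// regenerates it for the expected control-plane endpoint.
			if err := os.Remove(path); err != nil {
				fmt.Fprintln(os.Stderr, "remove:", err)
			}
		}
	}
}
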
	I0603 12:06:59.554342   73179 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0603 12:06:59.563632   73179 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0603 12:06:59.673234   73179 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
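
On this restart path minikube re-runs individual kubeadm init phases: certs and kubeconfig here, with kubelet-start, control-plane and etcd following a little later in this log. A rough sketch of driving those phases with os/exec (illustrative only; it assumes kubeadm is on PATH and the config file exists, whereas minikube actually runs the commands over SSH inside the VM with an adjusted PATH):

// kubeadm_phases.go — illustrative sketch of re-running kubeadm init phases in order.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	// Phases in the order the log shows them during a control-plane restart.
	phases := []string{
		"certs all",
		"kubeconfig all",
		"kubelet-start",
		"control-plane all",
		"etcd local",
	}
	for _, phase := range phases {
		args := append([]string{"init", "phase"}, strings.Fields(phase)...)
		args = append(args, "--config", "/var/tmp/minikube/kubeadm.yaml")
		cmd := exec.Command("kubeadm", args...)
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		if err := cmd.Run(); err != nil {
			fmt.Fprintf(os.Stderr, "phase %q failed: %v\n", phase, err)
			os.Exit(1)
		}
	}
}
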
	I0603 12:07:02.883984   73662 start.go:364] duration metric: took 4m2.688332749s to acquireMachinesLock for "old-k8s-version-905554"
	I0603 12:07:02.884045   73662 start.go:96] Skipping create...Using existing machine configuration
	I0603 12:07:02.884052   73662 fix.go:54] fixHost starting: 
	I0603 12:07:02.884482   73662 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 12:07:02.884520   73662 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 12:07:02.905120   73662 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45229
	I0603 12:07:02.905571   73662 main.go:141] libmachine: () Calling .GetVersion
	I0603 12:07:02.906128   73662 main.go:141] libmachine: Using API Version  1
	I0603 12:07:02.906157   73662 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 12:07:02.906519   73662 main.go:141] libmachine: () Calling .GetMachineName
	I0603 12:07:02.906709   73662 main.go:141] libmachine: (old-k8s-version-905554) Calling .DriverName
	I0603 12:07:02.906852   73662 main.go:141] libmachine: (old-k8s-version-905554) Calling .GetState
	I0603 12:07:02.908371   73662 fix.go:112] recreateIfNeeded on old-k8s-version-905554: state=Stopped err=<nil>
	I0603 12:07:02.908412   73662 main.go:141] libmachine: (old-k8s-version-905554) Calling .DriverName
	W0603 12:07:02.908577   73662 fix.go:138] unexpected machine state, will restart: <nil>
	I0603 12:07:02.910440   73662 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-905554" ...
	I0603 12:07:01.548241   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | domain default-k8s-diff-port-196710 has defined MAC address 52:54:00:9c:61:49 in network mk-default-k8s-diff-port-196710
	I0603 12:07:01.548698   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Found IP for machine: 192.168.61.60
	I0603 12:07:01.548720   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Reserving static IP address...
	I0603 12:07:01.548734   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | domain default-k8s-diff-port-196710 has current primary IP address 192.168.61.60 and MAC address 52:54:00:9c:61:49 in network mk-default-k8s-diff-port-196710
	I0603 12:07:01.549093   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-196710", mac: "52:54:00:9c:61:49", ip: "192.168.61.60"} in network mk-default-k8s-diff-port-196710: {Iface:virbr4 ExpiryTime:2024-06-03 12:58:47 +0000 UTC Type:0 Mac:52:54:00:9c:61:49 Iaid: IPaddr:192.168.61.60 Prefix:24 Hostname:default-k8s-diff-port-196710 Clientid:01:52:54:00:9c:61:49}
	I0603 12:07:01.549127   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | skip adding static IP to network mk-default-k8s-diff-port-196710 - found existing host DHCP lease matching {name: "default-k8s-diff-port-196710", mac: "52:54:00:9c:61:49", ip: "192.168.61.60"}
	I0603 12:07:01.549141   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Reserved static IP address: 192.168.61.60
	I0603 12:07:01.549161   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | Getting to WaitForSSH function...
	I0603 12:07:01.549171   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Waiting for SSH to be available...
	I0603 12:07:01.551680   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | domain default-k8s-diff-port-196710 has defined MAC address 52:54:00:9c:61:49 in network mk-default-k8s-diff-port-196710
	I0603 12:07:01.551959   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:61:49", ip: ""} in network mk-default-k8s-diff-port-196710: {Iface:virbr4 ExpiryTime:2024-06-03 12:58:47 +0000 UTC Type:0 Mac:52:54:00:9c:61:49 Iaid: IPaddr:192.168.61.60 Prefix:24 Hostname:default-k8s-diff-port-196710 Clientid:01:52:54:00:9c:61:49}
	I0603 12:07:01.551996   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | domain default-k8s-diff-port-196710 has defined IP address 192.168.61.60 and MAC address 52:54:00:9c:61:49 in network mk-default-k8s-diff-port-196710
	I0603 12:07:01.552051   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | Using SSH client type: external
	I0603 12:07:01.552124   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | Using SSH private key: /home/jenkins/minikube-integration/19008-7755/.minikube/machines/default-k8s-diff-port-196710/id_rsa (-rw-------)
	I0603 12:07:01.552160   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.60 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19008-7755/.minikube/machines/default-k8s-diff-port-196710/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0603 12:07:01.552181   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | About to run SSH command:
	I0603 12:07:01.552194   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | exit 0
	I0603 12:07:01.674944   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | SSH cmd err, output: <nil>: 
	I0603 12:07:01.675373   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .GetConfigRaw
	I0603 12:07:01.676105   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .GetIP
	I0603 12:07:01.678480   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | domain default-k8s-diff-port-196710 has defined MAC address 52:54:00:9c:61:49 in network mk-default-k8s-diff-port-196710
	I0603 12:07:01.678823   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:61:49", ip: ""} in network mk-default-k8s-diff-port-196710: {Iface:virbr4 ExpiryTime:2024-06-03 12:58:47 +0000 UTC Type:0 Mac:52:54:00:9c:61:49 Iaid: IPaddr:192.168.61.60 Prefix:24 Hostname:default-k8s-diff-port-196710 Clientid:01:52:54:00:9c:61:49}
	I0603 12:07:01.678854   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | domain default-k8s-diff-port-196710 has defined IP address 192.168.61.60 and MAC address 52:54:00:9c:61:49 in network mk-default-k8s-diff-port-196710
	I0603 12:07:01.679088   73294 profile.go:143] Saving config to /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/default-k8s-diff-port-196710/config.json ...
	I0603 12:07:01.679311   73294 machine.go:94] provisionDockerMachine start ...
	I0603 12:07:01.679332   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .DriverName
	I0603 12:07:01.679525   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .GetSSHHostname
	I0603 12:07:01.681641   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | domain default-k8s-diff-port-196710 has defined MAC address 52:54:00:9c:61:49 in network mk-default-k8s-diff-port-196710
	I0603 12:07:01.681931   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:61:49", ip: ""} in network mk-default-k8s-diff-port-196710: {Iface:virbr4 ExpiryTime:2024-06-03 12:58:47 +0000 UTC Type:0 Mac:52:54:00:9c:61:49 Iaid: IPaddr:192.168.61.60 Prefix:24 Hostname:default-k8s-diff-port-196710 Clientid:01:52:54:00:9c:61:49}
	I0603 12:07:01.681964   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | domain default-k8s-diff-port-196710 has defined IP address 192.168.61.60 and MAC address 52:54:00:9c:61:49 in network mk-default-k8s-diff-port-196710
	I0603 12:07:01.682121   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .GetSSHPort
	I0603 12:07:01.682291   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .GetSSHKeyPath
	I0603 12:07:01.682466   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .GetSSHKeyPath
	I0603 12:07:01.682611   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .GetSSHUsername
	I0603 12:07:01.682753   73294 main.go:141] libmachine: Using SSH client type: native
	I0603 12:07:01.682949   73294 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.61.60 22 <nil> <nil>}
	I0603 12:07:01.682962   73294 main.go:141] libmachine: About to run SSH command:
	hostname
	I0603 12:07:01.787146   73294 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0603 12:07:01.787176   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .GetMachineName
	I0603 12:07:01.787425   73294 buildroot.go:166] provisioning hostname "default-k8s-diff-port-196710"
	I0603 12:07:01.787448   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .GetMachineName
	I0603 12:07:01.787638   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .GetSSHHostname
	I0603 12:07:01.790151   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | domain default-k8s-diff-port-196710 has defined MAC address 52:54:00:9c:61:49 in network mk-default-k8s-diff-port-196710
	I0603 12:07:01.790487   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:61:49", ip: ""} in network mk-default-k8s-diff-port-196710: {Iface:virbr4 ExpiryTime:2024-06-03 12:58:47 +0000 UTC Type:0 Mac:52:54:00:9c:61:49 Iaid: IPaddr:192.168.61.60 Prefix:24 Hostname:default-k8s-diff-port-196710 Clientid:01:52:54:00:9c:61:49}
	I0603 12:07:01.790512   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | domain default-k8s-diff-port-196710 has defined IP address 192.168.61.60 and MAC address 52:54:00:9c:61:49 in network mk-default-k8s-diff-port-196710
	I0603 12:07:01.790646   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .GetSSHPort
	I0603 12:07:01.790812   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .GetSSHKeyPath
	I0603 12:07:01.790964   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .GetSSHKeyPath
	I0603 12:07:01.791133   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .GetSSHUsername
	I0603 12:07:01.791272   73294 main.go:141] libmachine: Using SSH client type: native
	I0603 12:07:01.791477   73294 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.61.60 22 <nil> <nil>}
	I0603 12:07:01.791496   73294 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-196710 && echo "default-k8s-diff-port-196710" | sudo tee /etc/hostname
	I0603 12:07:01.916785   73294 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-196710
	
	I0603 12:07:01.916820   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .GetSSHHostname
	I0603 12:07:01.919809   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | domain default-k8s-diff-port-196710 has defined MAC address 52:54:00:9c:61:49 in network mk-default-k8s-diff-port-196710
	I0603 12:07:01.920225   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:61:49", ip: ""} in network mk-default-k8s-diff-port-196710: {Iface:virbr4 ExpiryTime:2024-06-03 12:58:47 +0000 UTC Type:0 Mac:52:54:00:9c:61:49 Iaid: IPaddr:192.168.61.60 Prefix:24 Hostname:default-k8s-diff-port-196710 Clientid:01:52:54:00:9c:61:49}
	I0603 12:07:01.920264   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | domain default-k8s-diff-port-196710 has defined IP address 192.168.61.60 and MAC address 52:54:00:9c:61:49 in network mk-default-k8s-diff-port-196710
	I0603 12:07:01.920552   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .GetSSHPort
	I0603 12:07:01.920756   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .GetSSHKeyPath
	I0603 12:07:01.920947   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .GetSSHKeyPath
	I0603 12:07:01.921145   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .GetSSHUsername
	I0603 12:07:01.921363   73294 main.go:141] libmachine: Using SSH client type: native
	I0603 12:07:01.921645   73294 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.61.60 22 <nil> <nil>}
	I0603 12:07:01.921671   73294 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-196710' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-196710/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-196710' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0603 12:07:02.048767   73294 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0603 12:07:02.048797   73294 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19008-7755/.minikube CaCertPath:/home/jenkins/minikube-integration/19008-7755/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19008-7755/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19008-7755/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19008-7755/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19008-7755/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19008-7755/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19008-7755/.minikube}
	I0603 12:07:02.048851   73294 buildroot.go:174] setting up certificates
	I0603 12:07:02.048866   73294 provision.go:84] configureAuth start
	I0603 12:07:02.048883   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .GetMachineName
	I0603 12:07:02.049168   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .GetIP
	I0603 12:07:02.051709   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | domain default-k8s-diff-port-196710 has defined MAC address 52:54:00:9c:61:49 in network mk-default-k8s-diff-port-196710
	I0603 12:07:02.052111   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:61:49", ip: ""} in network mk-default-k8s-diff-port-196710: {Iface:virbr4 ExpiryTime:2024-06-03 12:58:47 +0000 UTC Type:0 Mac:52:54:00:9c:61:49 Iaid: IPaddr:192.168.61.60 Prefix:24 Hostname:default-k8s-diff-port-196710 Clientid:01:52:54:00:9c:61:49}
	I0603 12:07:02.052151   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | domain default-k8s-diff-port-196710 has defined IP address 192.168.61.60 and MAC address 52:54:00:9c:61:49 in network mk-default-k8s-diff-port-196710
	I0603 12:07:02.052295   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .GetSSHHostname
	I0603 12:07:02.054716   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | domain default-k8s-diff-port-196710 has defined MAC address 52:54:00:9c:61:49 in network mk-default-k8s-diff-port-196710
	I0603 12:07:02.055073   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:61:49", ip: ""} in network mk-default-k8s-diff-port-196710: {Iface:virbr4 ExpiryTime:2024-06-03 12:58:47 +0000 UTC Type:0 Mac:52:54:00:9c:61:49 Iaid: IPaddr:192.168.61.60 Prefix:24 Hostname:default-k8s-diff-port-196710 Clientid:01:52:54:00:9c:61:49}
	I0603 12:07:02.055106   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | domain default-k8s-diff-port-196710 has defined IP address 192.168.61.60 and MAC address 52:54:00:9c:61:49 in network mk-default-k8s-diff-port-196710
	I0603 12:07:02.055262   73294 provision.go:143] copyHostCerts
	I0603 12:07:02.055334   73294 exec_runner.go:144] found /home/jenkins/minikube-integration/19008-7755/.minikube/ca.pem, removing ...
	I0603 12:07:02.055349   73294 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19008-7755/.minikube/ca.pem
	I0603 12:07:02.055408   73294 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19008-7755/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19008-7755/.minikube/ca.pem (1082 bytes)
	I0603 12:07:02.055527   73294 exec_runner.go:144] found /home/jenkins/minikube-integration/19008-7755/.minikube/cert.pem, removing ...
	I0603 12:07:02.055539   73294 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19008-7755/.minikube/cert.pem
	I0603 12:07:02.055568   73294 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19008-7755/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19008-7755/.minikube/cert.pem (1123 bytes)
	I0603 12:07:02.055648   73294 exec_runner.go:144] found /home/jenkins/minikube-integration/19008-7755/.minikube/key.pem, removing ...
	I0603 12:07:02.055659   73294 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19008-7755/.minikube/key.pem
	I0603 12:07:02.055684   73294 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19008-7755/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19008-7755/.minikube/key.pem (1679 bytes)
	I0603 12:07:02.055753   73294 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19008-7755/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19008-7755/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19008-7755/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-196710 san=[127.0.0.1 192.168.61.60 default-k8s-diff-port-196710 localhost minikube]
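
The provisioning step above generates a server certificate signed by the minikube CA carrying the SANs listed in the san=[...] field. A self-contained Go sketch of producing that kind of SAN-bearing server certificate (illustrative only, not minikube's provision code; a throwaway in-memory CA stands in for the real one, and key sizes and lifetimes are arbitrary):

// servercert.go — illustrative sketch of a CA-signed server cert with SANs.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"log"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Throwaway CA, standing in for the CA under .minikube/certs/ in the log.
	caKey, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		log.Fatal(err)
	}
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(10, 0, 0),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	if err != nil {
		log.Fatal(err)
	}
	caCert, err := x509.ParseCertificate(caDER)
	if err != nil {
		log.Fatal(err)
	}
	// Server certificate with the same SAN list the log reports:
	// [127.0.0.1 192.168.61.60 default-k8s-diff-port-196710 localhost minikube]
	srvKey, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		log.Fatal(err)
	}
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "default-k8s-diff-port-196710"},
		DNSNames:     []string{"default-k8s-diff-port-196710", "localhost", "minikube"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.61.60")},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(3, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	srvDER, err := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	if err != nil {
		log.Fatal(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
}
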
	I0603 12:07:02.172134   73294 provision.go:177] copyRemoteCerts
	I0603 12:07:02.172192   73294 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0603 12:07:02.172217   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .GetSSHHostname
	I0603 12:07:02.175333   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | domain default-k8s-diff-port-196710 has defined MAC address 52:54:00:9c:61:49 in network mk-default-k8s-diff-port-196710
	I0603 12:07:02.175724   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:61:49", ip: ""} in network mk-default-k8s-diff-port-196710: {Iface:virbr4 ExpiryTime:2024-06-03 12:58:47 +0000 UTC Type:0 Mac:52:54:00:9c:61:49 Iaid: IPaddr:192.168.61.60 Prefix:24 Hostname:default-k8s-diff-port-196710 Clientid:01:52:54:00:9c:61:49}
	I0603 12:07:02.175749   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | domain default-k8s-diff-port-196710 has defined IP address 192.168.61.60 and MAC address 52:54:00:9c:61:49 in network mk-default-k8s-diff-port-196710
	I0603 12:07:02.175996   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .GetSSHPort
	I0603 12:07:02.176203   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .GetSSHKeyPath
	I0603 12:07:02.176405   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .GetSSHUsername
	I0603 12:07:02.176599   73294 sshutil.go:53] new ssh client: &{IP:192.168.61.60 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19008-7755/.minikube/machines/default-k8s-diff-port-196710/id_rsa Username:docker}
	I0603 12:07:02.273410   73294 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0603 12:07:02.302337   73294 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0603 12:07:02.326471   73294 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0603 12:07:02.350709   73294 provision.go:87] duration metric: took 301.827273ms to configureAuth
	I0603 12:07:02.350742   73294 buildroot.go:189] setting minikube options for container-runtime
	I0603 12:07:02.350977   73294 config.go:182] Loaded profile config "default-k8s-diff-port-196710": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0603 12:07:02.351086   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .GetSSHHostname
	I0603 12:07:02.354023   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | domain default-k8s-diff-port-196710 has defined MAC address 52:54:00:9c:61:49 in network mk-default-k8s-diff-port-196710
	I0603 12:07:02.354434   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:61:49", ip: ""} in network mk-default-k8s-diff-port-196710: {Iface:virbr4 ExpiryTime:2024-06-03 12:58:47 +0000 UTC Type:0 Mac:52:54:00:9c:61:49 Iaid: IPaddr:192.168.61.60 Prefix:24 Hostname:default-k8s-diff-port-196710 Clientid:01:52:54:00:9c:61:49}
	I0603 12:07:02.354465   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | domain default-k8s-diff-port-196710 has defined IP address 192.168.61.60 and MAC address 52:54:00:9c:61:49 in network mk-default-k8s-diff-port-196710
	I0603 12:07:02.354613   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .GetSSHPort
	I0603 12:07:02.354813   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .GetSSHKeyPath
	I0603 12:07:02.354996   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .GetSSHKeyPath
	I0603 12:07:02.355176   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .GetSSHUsername
	I0603 12:07:02.355385   73294 main.go:141] libmachine: Using SSH client type: native
	I0603 12:07:02.355603   73294 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.61.60 22 <nil> <nil>}
	I0603 12:07:02.355633   73294 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0603 12:07:02.636420   73294 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0603 12:07:02.636453   73294 machine.go:97] duration metric: took 957.127741ms to provisionDockerMachine
	I0603 12:07:02.636467   73294 start.go:293] postStartSetup for "default-k8s-diff-port-196710" (driver="kvm2")
	I0603 12:07:02.636480   73294 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0603 12:07:02.636507   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .DriverName
	I0603 12:07:02.636828   73294 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0603 12:07:02.636860   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .GetSSHHostname
	I0603 12:07:02.639699   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | domain default-k8s-diff-port-196710 has defined MAC address 52:54:00:9c:61:49 in network mk-default-k8s-diff-port-196710
	I0603 12:07:02.640122   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:61:49", ip: ""} in network mk-default-k8s-diff-port-196710: {Iface:virbr4 ExpiryTime:2024-06-03 12:58:47 +0000 UTC Type:0 Mac:52:54:00:9c:61:49 Iaid: IPaddr:192.168.61.60 Prefix:24 Hostname:default-k8s-diff-port-196710 Clientid:01:52:54:00:9c:61:49}
	I0603 12:07:02.640155   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | domain default-k8s-diff-port-196710 has defined IP address 192.168.61.60 and MAC address 52:54:00:9c:61:49 in network mk-default-k8s-diff-port-196710
	I0603 12:07:02.640282   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .GetSSHPort
	I0603 12:07:02.640462   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .GetSSHKeyPath
	I0603 12:07:02.640647   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .GetSSHUsername
	I0603 12:07:02.640907   73294 sshutil.go:53] new ssh client: &{IP:192.168.61.60 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19008-7755/.minikube/machines/default-k8s-diff-port-196710/id_rsa Username:docker}
	I0603 12:07:02.729745   73294 ssh_runner.go:195] Run: cat /etc/os-release
	I0603 12:07:02.734393   73294 info.go:137] Remote host: Buildroot 2023.02.9
	I0603 12:07:02.734414   73294 filesync.go:126] Scanning /home/jenkins/minikube-integration/19008-7755/.minikube/addons for local assets ...
	I0603 12:07:02.734476   73294 filesync.go:126] Scanning /home/jenkins/minikube-integration/19008-7755/.minikube/files for local assets ...
	I0603 12:07:02.734545   73294 filesync.go:149] local asset: /home/jenkins/minikube-integration/19008-7755/.minikube/files/etc/ssl/certs/150282.pem -> 150282.pem in /etc/ssl/certs
	I0603 12:07:02.734623   73294 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0603 12:07:02.744239   73294 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/files/etc/ssl/certs/150282.pem --> /etc/ssl/certs/150282.pem (1708 bytes)
	I0603 12:07:02.770883   73294 start.go:296] duration metric: took 134.402064ms for postStartSetup
	I0603 12:07:02.770918   73294 fix.go:56] duration metric: took 20.69069756s for fixHost
	I0603 12:07:02.770940   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .GetSSHHostname
	I0603 12:07:02.773675   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | domain default-k8s-diff-port-196710 has defined MAC address 52:54:00:9c:61:49 in network mk-default-k8s-diff-port-196710
	I0603 12:07:02.773977   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:61:49", ip: ""} in network mk-default-k8s-diff-port-196710: {Iface:virbr4 ExpiryTime:2024-06-03 12:58:47 +0000 UTC Type:0 Mac:52:54:00:9c:61:49 Iaid: IPaddr:192.168.61.60 Prefix:24 Hostname:default-k8s-diff-port-196710 Clientid:01:52:54:00:9c:61:49}
	I0603 12:07:02.774010   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | domain default-k8s-diff-port-196710 has defined IP address 192.168.61.60 and MAC address 52:54:00:9c:61:49 in network mk-default-k8s-diff-port-196710
	I0603 12:07:02.774111   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .GetSSHPort
	I0603 12:07:02.774329   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .GetSSHKeyPath
	I0603 12:07:02.774482   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .GetSSHKeyPath
	I0603 12:07:02.774635   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .GetSSHUsername
	I0603 12:07:02.774814   73294 main.go:141] libmachine: Using SSH client type: native
	I0603 12:07:02.774984   73294 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.61.60 22 <nil> <nil>}
	I0603 12:07:02.774998   73294 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0603 12:07:02.883831   73294 main.go:141] libmachine: SSH cmd err, output: <nil>: 1717416422.860813739
	
	I0603 12:07:02.883859   73294 fix.go:216] guest clock: 1717416422.860813739
	I0603 12:07:02.883870   73294 fix.go:229] Guest: 2024-06-03 12:07:02.860813739 +0000 UTC Remote: 2024-06-03 12:07:02.770922212 +0000 UTC m=+288.221479764 (delta=89.891527ms)
	I0603 12:07:02.883896   73294 fix.go:200] guest clock delta is within tolerance: 89.891527ms
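
The guest/host clock comparison above parses the VM's `date +%s.%N` output and checks that the delta stays within a small tolerance before releasing the machines lock. A sketch of that parsing and comparison (illustrative only; the one-second tolerance is an assumption here, the log only shows that a ~90ms delta was accepted):

// clockdelta.go — illustrative sketch of the guest clock delta check.
package main

import (
	"fmt"
	"math"
	"strconv"
	"strings"
	"time"
)

// parseGuestClock converts `date +%s.%N` output captured over SSH
// (e.g. "1717416422.860813739") into a time.Time.
func parseGuestClock(out string) (time.Time, error) {
	parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	var nsec int64
	if len(parts) == 2 {
		// Right-pad/truncate the fractional part to 9 digits so it reads as nanoseconds.
		frac := (parts[1] + "000000000")[:9]
		nsec, err = strconv.ParseInt(frac, 10, 64)
		if err != nil {
			return time.Time{}, err
		}
	}
	return time.Unix(sec, nsec), nil
}

func main() {
	guest, err := parseGuestClock("1717416422.860813739")
	if err != nil {
		panic(err)
	}
	delta := time.Since(guest)
	// Hypothetical tolerance for illustration.
	if math.Abs(delta.Seconds()) < 1.0 {
		fmt.Printf("guest clock delta %v is within tolerance\n", delta)
	} else {
		fmt.Printf("guest clock delta %v is too large; the clock would be resynced\n", delta)
	}
}
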
	I0603 12:07:02.883902   73294 start.go:83] releasing machines lock for "default-k8s-diff-port-196710", held for 20.803713434s
	I0603 12:07:02.883935   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .DriverName
	I0603 12:07:02.884217   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .GetIP
	I0603 12:07:02.887393   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | domain default-k8s-diff-port-196710 has defined MAC address 52:54:00:9c:61:49 in network mk-default-k8s-diff-port-196710
	I0603 12:07:02.887758   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:61:49", ip: ""} in network mk-default-k8s-diff-port-196710: {Iface:virbr4 ExpiryTime:2024-06-03 12:58:47 +0000 UTC Type:0 Mac:52:54:00:9c:61:49 Iaid: IPaddr:192.168.61.60 Prefix:24 Hostname:default-k8s-diff-port-196710 Clientid:01:52:54:00:9c:61:49}
	I0603 12:07:02.887789   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | domain default-k8s-diff-port-196710 has defined IP address 192.168.61.60 and MAC address 52:54:00:9c:61:49 in network mk-default-k8s-diff-port-196710
	I0603 12:07:02.887954   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .DriverName
	I0603 12:07:02.888465   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .DriverName
	I0603 12:07:02.888616   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .DriverName
	I0603 12:07:02.888698   73294 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0603 12:07:02.888770   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .GetSSHHostname
	I0603 12:07:02.888871   73294 ssh_runner.go:195] Run: cat /version.json
	I0603 12:07:02.888913   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .GetSSHHostname
	I0603 12:07:02.891596   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | domain default-k8s-diff-port-196710 has defined MAC address 52:54:00:9c:61:49 in network mk-default-k8s-diff-port-196710
	I0603 12:07:02.891957   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | domain default-k8s-diff-port-196710 has defined MAC address 52:54:00:9c:61:49 in network mk-default-k8s-diff-port-196710
	I0603 12:07:02.892009   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:61:49", ip: ""} in network mk-default-k8s-diff-port-196710: {Iface:virbr4 ExpiryTime:2024-06-03 12:58:47 +0000 UTC Type:0 Mac:52:54:00:9c:61:49 Iaid: IPaddr:192.168.61.60 Prefix:24 Hostname:default-k8s-diff-port-196710 Clientid:01:52:54:00:9c:61:49}
	I0603 12:07:02.892051   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | domain default-k8s-diff-port-196710 has defined IP address 192.168.61.60 and MAC address 52:54:00:9c:61:49 in network mk-default-k8s-diff-port-196710
	I0603 12:07:02.892250   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .GetSSHPort
	I0603 12:07:02.892422   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .GetSSHKeyPath
	I0603 12:07:02.892436   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:61:49", ip: ""} in network mk-default-k8s-diff-port-196710: {Iface:virbr4 ExpiryTime:2024-06-03 12:58:47 +0000 UTC Type:0 Mac:52:54:00:9c:61:49 Iaid: IPaddr:192.168.61.60 Prefix:24 Hostname:default-k8s-diff-port-196710 Clientid:01:52:54:00:9c:61:49}
	I0603 12:07:02.892453   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | domain default-k8s-diff-port-196710 has defined IP address 192.168.61.60 and MAC address 52:54:00:9c:61:49 in network mk-default-k8s-diff-port-196710
	I0603 12:07:02.892601   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .GetSSHPort
	I0603 12:07:02.892636   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .GetSSHUsername
	I0603 12:07:02.892758   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .GetSSHKeyPath
	I0603 12:07:02.892777   73294 sshutil.go:53] new ssh client: &{IP:192.168.61.60 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19008-7755/.minikube/machines/default-k8s-diff-port-196710/id_rsa Username:docker}
	I0603 12:07:02.892941   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .GetSSHUsername
	I0603 12:07:02.893092   73294 sshutil.go:53] new ssh client: &{IP:192.168.61.60 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19008-7755/.minikube/machines/default-k8s-diff-port-196710/id_rsa Username:docker}
	I0603 12:07:02.998124   73294 ssh_runner.go:195] Run: systemctl --version
	I0603 12:07:03.005653   73294 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0603 12:07:03.152446   73294 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0603 12:07:03.160607   73294 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0603 12:07:03.160674   73294 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0603 12:07:03.176490   73294 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0603 12:07:03.176513   73294 start.go:494] detecting cgroup driver to use...
	I0603 12:07:03.176581   73294 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0603 12:07:03.195427   73294 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0603 12:07:03.211343   73294 docker.go:217] disabling cri-docker service (if available) ...
	I0603 12:07:03.211398   73294 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0603 12:07:03.227943   73294 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0603 12:07:03.245409   73294 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0603 12:07:03.384124   73294 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0603 12:07:03.529899   73294 docker.go:233] disabling docker service ...
	I0603 12:07:03.529984   73294 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0603 12:07:03.545971   73294 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0603 12:07:03.559981   73294 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0603 12:07:03.726303   73294 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0603 12:07:03.850915   73294 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0603 12:07:03.865591   73294 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0603 12:07:03.884498   73294 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0603 12:07:03.884558   73294 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 12:07:03.897708   73294 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0603 12:07:03.897772   73294 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 12:07:03.912146   73294 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 12:07:03.926435   73294 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 12:07:03.940520   73294 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0603 12:07:03.955122   73294 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 12:07:03.972518   73294 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 12:07:03.997707   73294 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 12:07:04.009020   73294 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0603 12:07:04.024118   73294 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0603 12:07:04.024185   73294 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0603 12:07:04.043959   73294 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0603 12:07:04.057417   73294 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0603 12:07:04.195354   73294 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0603 12:07:04.365103   73294 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0603 12:07:04.365195   73294 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0603 12:07:04.370764   73294 start.go:562] Will wait 60s for crictl version
	I0603 12:07:04.370822   73294 ssh_runner.go:195] Run: which crictl
	I0603 12:07:04.375203   73294 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0603 12:07:04.430761   73294 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0603 12:07:04.430843   73294 ssh_runner.go:195] Run: crio --version
	I0603 12:07:04.471171   73294 ssh_runner.go:195] Run: crio --version
	I0603 12:07:04.506684   73294 out.go:177] * Preparing Kubernetes v1.30.1 on CRI-O 1.29.1 ...
	I0603 12:07:04.508144   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .GetIP
	I0603 12:07:04.510945   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | domain default-k8s-diff-port-196710 has defined MAC address 52:54:00:9c:61:49 in network mk-default-k8s-diff-port-196710
	I0603 12:07:04.511375   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:61:49", ip: ""} in network mk-default-k8s-diff-port-196710: {Iface:virbr4 ExpiryTime:2024-06-03 12:58:47 +0000 UTC Type:0 Mac:52:54:00:9c:61:49 Iaid: IPaddr:192.168.61.60 Prefix:24 Hostname:default-k8s-diff-port-196710 Clientid:01:52:54:00:9c:61:49}
	I0603 12:07:04.511406   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | domain default-k8s-diff-port-196710 has defined IP address 192.168.61.60 and MAC address 52:54:00:9c:61:49 in network mk-default-k8s-diff-port-196710
	I0603 12:07:04.511607   73294 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0603 12:07:04.516367   73294 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0603 12:07:04.532203   73294 kubeadm.go:877] updating cluster {Name:default-k8s-diff-port-196710 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:default-k8s-diff-port-196710 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.60 Port:8444 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0603 12:07:04.532326   73294 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime crio
	I0603 12:07:04.532409   73294 ssh_runner.go:195] Run: sudo crictl images --output json
	I0603 12:07:04.576446   73294 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.1". assuming images are not preloaded.
	I0603 12:07:04.576523   73294 ssh_runner.go:195] Run: which lz4
	I0603 12:07:04.580901   73294 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0603 12:07:02.911700   73662 main.go:141] libmachine: (old-k8s-version-905554) Calling .Start
	I0603 12:07:02.911842   73662 main.go:141] libmachine: (old-k8s-version-905554) Ensuring networks are active...
	I0603 12:07:02.912570   73662 main.go:141] libmachine: (old-k8s-version-905554) Ensuring network default is active
	I0603 12:07:02.912896   73662 main.go:141] libmachine: (old-k8s-version-905554) Ensuring network mk-old-k8s-version-905554 is active
	I0603 12:07:02.913324   73662 main.go:141] libmachine: (old-k8s-version-905554) Getting domain xml...
	I0603 12:07:02.914147   73662 main.go:141] libmachine: (old-k8s-version-905554) Creating domain...
	I0603 12:07:04.233691   73662 main.go:141] libmachine: (old-k8s-version-905554) Waiting to get IP...
	I0603 12:07:04.234800   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | domain old-k8s-version-905554 has defined MAC address 52:54:00:3d:ed:07 in network mk-old-k8s-version-905554
	I0603 12:07:04.235276   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | unable to find current IP address of domain old-k8s-version-905554 in network mk-old-k8s-version-905554
	I0603 12:07:04.235378   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | I0603 12:07:04.235243   74674 retry.go:31] will retry after 297.546447ms: waiting for machine to come up
	I0603 12:07:04.534942   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | domain old-k8s-version-905554 has defined MAC address 52:54:00:3d:ed:07 in network mk-old-k8s-version-905554
	I0603 12:07:04.535492   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | unable to find current IP address of domain old-k8s-version-905554 in network mk-old-k8s-version-905554
	I0603 12:07:04.535522   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | I0603 12:07:04.535456   74674 retry.go:31] will retry after 385.160833ms: waiting for machine to come up
	I0603 12:07:04.922824   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | domain old-k8s-version-905554 has defined MAC address 52:54:00:3d:ed:07 in network mk-old-k8s-version-905554
	I0603 12:07:04.923312   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | unable to find current IP address of domain old-k8s-version-905554 in network mk-old-k8s-version-905554
	I0603 12:07:04.923336   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | I0603 12:07:04.923267   74674 retry.go:31] will retry after 363.309555ms: waiting for machine to come up
	I0603 12:07:01.017968   73179 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.344700881s)
	I0603 12:07:01.017993   73179 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0603 12:07:01.214414   73179 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0603 12:07:01.291063   73179 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0603 12:07:01.420874   73179 api_server.go:52] waiting for apiserver process to appear ...
	I0603 12:07:01.420977   73179 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:07:01.921439   73179 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:07:02.421904   73179 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:07:02.445051   73179 api_server.go:72] duration metric: took 1.024176056s to wait for apiserver process to appear ...
	I0603 12:07:02.445083   73179 api_server.go:88] waiting for apiserver healthz status ...
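
The lines that follow poll the apiserver's /healthz endpoint until it stops refusing connections and eventually returns 200; the 403 responses are expected while anonymous access is still restricted, and the 500 body lists the post-start hooks that have not finished. A minimal polling sketch (illustrative only, not minikube's api_server.go; a real client would trust the cluster CA and authenticate rather than skip TLS verification):

// healthz.go — illustrative sketch of polling the apiserver /healthz endpoint.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	// Endpoint taken from the log; InsecureSkipVerify is for illustration only.
	url := "https://192.168.50.245:8443/healthz"
	client := &http.Client{
		Timeout:   2 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(4 * time.Minute)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err != nil {
			fmt.Println("apiserver not reachable yet:", err)
		} else {
			resp.Body.Close()
			fmt.Println("healthz returned", resp.StatusCode)
			if resp.StatusCode == http.StatusOK {
				return
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("timed out waiting for a healthy apiserver")
}
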
	I0603 12:07:02.445112   73179 api_server.go:253] Checking apiserver healthz at https://192.168.50.245:8443/healthz ...
	I0603 12:07:02.445614   73179 api_server.go:269] stopped: https://192.168.50.245:8443/healthz: Get "https://192.168.50.245:8443/healthz": dial tcp 192.168.50.245:8443: connect: connection refused
	I0603 12:07:02.945547   73179 api_server.go:253] Checking apiserver healthz at https://192.168.50.245:8443/healthz ...
	I0603 12:07:05.426682   73179 api_server.go:279] https://192.168.50.245:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0603 12:07:05.426713   73179 api_server.go:103] status: https://192.168.50.245:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0603 12:07:05.426726   73179 api_server.go:253] Checking apiserver healthz at https://192.168.50.245:8443/healthz ...
	I0603 12:07:05.474343   73179 api_server.go:279] https://192.168.50.245:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0603 12:07:05.474380   73179 api_server.go:103] status: https://192.168.50.245:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0603 12:07:05.474399   73179 api_server.go:253] Checking apiserver healthz at https://192.168.50.245:8443/healthz ...
	I0603 12:07:05.578473   73179 api_server.go:279] https://192.168.50.245:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0603 12:07:05.578520   73179 api_server.go:103] status: https://192.168.50.245:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0603 12:07:05.945708   73179 api_server.go:253] Checking apiserver healthz at https://192.168.50.245:8443/healthz ...
	I0603 12:07:05.952298   73179 api_server.go:279] https://192.168.50.245:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0603 12:07:05.952338   73179 api_server.go:103] status: https://192.168.50.245:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0603 12:07:06.445920   73179 api_server.go:253] Checking apiserver healthz at https://192.168.50.245:8443/healthz ...
	I0603 12:07:06.454769   73179 api_server.go:279] https://192.168.50.245:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0603 12:07:06.454805   73179 api_server.go:103] status: https://192.168.50.245:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0603 12:07:06.945370   73179 api_server.go:253] Checking apiserver healthz at https://192.168.50.245:8443/healthz ...
	I0603 12:07:06.952157   73179 api_server.go:279] https://192.168.50.245:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0603 12:07:06.952193   73179 api_server.go:103] status: https://192.168.50.245:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0603 12:07:07.445973   73179 api_server.go:253] Checking apiserver healthz at https://192.168.50.245:8443/healthz ...
	I0603 12:07:07.457436   73179 api_server.go:279] https://192.168.50.245:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0603 12:07:07.457471   73179 api_server.go:103] status: https://192.168.50.245:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0603 12:07:07.945237   73179 api_server.go:253] Checking apiserver healthz at https://192.168.50.245:8443/healthz ...
	I0603 12:07:07.952135   73179 api_server.go:279] https://192.168.50.245:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0603 12:07:07.952168   73179 api_server.go:103] status: https://192.168.50.245:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0603 12:07:08.445763   73179 api_server.go:253] Checking apiserver healthz at https://192.168.50.245:8443/healthz ...
	I0603 12:07:08.450319   73179 api_server.go:279] https://192.168.50.245:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0603 12:07:08.450346   73179 api_server.go:103] status: https://192.168.50.245:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0603 12:07:08.945476   73179 api_server.go:253] Checking apiserver healthz at https://192.168.50.245:8443/healthz ...
	I0603 12:07:08.950139   73179 api_server.go:279] https://192.168.50.245:8443/healthz returned 200:
	ok
	I0603 12:07:08.956975   73179 api_server.go:141] control plane version: v1.30.1
	I0603 12:07:08.957002   73179 api_server.go:131] duration metric: took 6.511911305s to wait for apiserver health ...
	I0603 12:07:08.957012   73179 cni.go:84] Creating CNI manager for ""
	I0603 12:07:08.957020   73179 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0603 12:07:08.958965   73179 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
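The healthz loop above treats the anonymous 403 and the 500 responses (poststarthooks still failing) as "not yet healthy" and only stops once the endpoint returns a plain 200 "ok". The same probe can be reproduced by hand; a sketch, assuming the apiserver endpoint from the log and skipping TLS verification as an anonymous client would:

    curl -sk -o /dev/null -w '%{http_code}\n' https://192.168.50.245:8443/healthz
    curl -sk 'https://192.168.50.245:8443/healthz?verbose'   # per-check [+]/[-] breakdown like the one logged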
	I0603 12:07:04.585614   73294 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0603 12:07:04.585642   73294 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (394537501 bytes)
	I0603 12:07:06.088296   73294 crio.go:462] duration metric: took 1.507429412s to copy over tarball
	I0603 12:07:06.088376   73294 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0603 12:07:08.432866   73294 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.344418631s)
	I0603 12:07:08.432898   73294 crio.go:469] duration metric: took 2.344572918s to extract the tarball
	I0603 12:07:08.432921   73294 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0603 12:07:08.472509   73294 ssh_runner.go:195] Run: sudo crictl images --output json
	I0603 12:07:08.529017   73294 crio.go:514] all images are preloaded for cri-o runtime.
	I0603 12:07:08.529040   73294 cache_images.go:84] Images are preloaded, skipping loading
	I0603 12:07:08.529052   73294 kubeadm.go:928] updating node { 192.168.61.60 8444 v1.30.1 crio true true} ...
	I0603 12:07:08.529180   73294 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-196710 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.60
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.1 ClusterName:default-k8s-diff-port-196710 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
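The rendered kubelet flags above end up in the systemd drop-in that is copied onto the node a few lines further down (10-kubeadm.conf alongside kubelet.service). Once written, the effective unit can be reviewed with standard systemd tooling; a sketch, not taken from the run:

    sudo systemctl cat kubelet                 # base unit plus /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
    sudo systemctl show kubelet -p ExecStart --no-pager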
	I0603 12:07:08.529244   73294 ssh_runner.go:195] Run: crio config
	I0603 12:07:08.581601   73294 cni.go:84] Creating CNI manager for ""
	I0603 12:07:08.581625   73294 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0603 12:07:08.581641   73294 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0603 12:07:08.581667   73294 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.60 APIServerPort:8444 KubernetesVersion:v1.30.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-196710 NodeName:default-k8s-diff-port-196710 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.60"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.60 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0603 12:07:08.581854   73294 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.60
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-196710"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.60
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.60"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
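The generated kubeadm.yaml above carries four documents (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration). For comparison against upstream defaults, kubeadm can print the same API versions itself; a sketch, assuming the v1.30.1 binary at the path used throughout this log:

    sudo /var/lib/minikube/binaries/v1.30.1/kubeadm config print init-defaults --component-configs KubeletConfiguration,KubeProxyConfiguration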
	I0603 12:07:08.581931   73294 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.1
	I0603 12:07:08.595708   73294 binaries.go:44] Found k8s binaries, skipping transfer
	I0603 12:07:08.595778   73294 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0603 12:07:08.608914   73294 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (327 bytes)
	I0603 12:07:08.627009   73294 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0603 12:07:08.643755   73294 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2169 bytes)
	I0603 12:07:08.661803   73294 ssh_runner.go:195] Run: grep 192.168.61.60	control-plane.minikube.internal$ /etc/hosts
	I0603 12:07:08.665764   73294 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.60	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0603 12:07:08.678440   73294 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0603 12:07:08.797052   73294 ssh_runner.go:195] Run: sudo systemctl start kubelet
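With the unit files written, /etc/hosts patched, and systemd reloaded, the kubelet is started above; whether it actually came up (or is crash-looping) can be checked with standard commands, sketched here rather than taken from the run:

    sudo systemctl is-active kubelet
    sudo journalctl -u kubelet --since '5 min ago' --no-pager | tail -n 20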
	I0603 12:07:08.814618   73294 certs.go:68] Setting up /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/default-k8s-diff-port-196710 for IP: 192.168.61.60
	I0603 12:07:08.814645   73294 certs.go:194] generating shared ca certs ...
	I0603 12:07:08.814665   73294 certs.go:226] acquiring lock for ca certs: {Name:mk50092df3bdecbf959926adc377ee06b75aacd9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 12:07:08.814863   73294 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19008-7755/.minikube/ca.key
	I0603 12:07:08.814931   73294 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19008-7755/.minikube/proxy-client-ca.key
	I0603 12:07:08.814945   73294 certs.go:256] generating profile certs ...
	I0603 12:07:08.815072   73294 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/default-k8s-diff-port-196710/client.key
	I0603 12:07:08.815150   73294 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/default-k8s-diff-port-196710/apiserver.key.fd40708e
	I0603 12:07:08.815210   73294 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/default-k8s-diff-port-196710/proxy-client.key
	I0603 12:07:08.815370   73294 certs.go:484] found cert: /home/jenkins/minikube-integration/19008-7755/.minikube/certs/15028.pem (1338 bytes)
	W0603 12:07:08.815408   73294 certs.go:480] ignoring /home/jenkins/minikube-integration/19008-7755/.minikube/certs/15028_empty.pem, impossibly tiny 0 bytes
	I0603 12:07:08.815421   73294 certs.go:484] found cert: /home/jenkins/minikube-integration/19008-7755/.minikube/certs/ca-key.pem (1679 bytes)
	I0603 12:07:08.815467   73294 certs.go:484] found cert: /home/jenkins/minikube-integration/19008-7755/.minikube/certs/ca.pem (1082 bytes)
	I0603 12:07:08.815501   73294 certs.go:484] found cert: /home/jenkins/minikube-integration/19008-7755/.minikube/certs/cert.pem (1123 bytes)
	I0603 12:07:08.815529   73294 certs.go:484] found cert: /home/jenkins/minikube-integration/19008-7755/.minikube/certs/key.pem (1679 bytes)
	I0603 12:07:08.815581   73294 certs.go:484] found cert: /home/jenkins/minikube-integration/19008-7755/.minikube/files/etc/ssl/certs/150282.pem (1708 bytes)
	I0603 12:07:08.816420   73294 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0603 12:07:08.852241   73294 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0603 12:07:08.892369   73294 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0603 12:07:08.924242   73294 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0603 12:07:08.952908   73294 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/default-k8s-diff-port-196710/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0603 12:07:09.002060   73294 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/default-k8s-diff-port-196710/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0603 12:07:09.035617   73294 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/default-k8s-diff-port-196710/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0603 12:07:09.063304   73294 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/default-k8s-diff-port-196710/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0603 12:07:09.090994   73294 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/certs/15028.pem --> /usr/share/ca-certificates/15028.pem (1338 bytes)
	I0603 12:07:09.122568   73294 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/files/etc/ssl/certs/150282.pem --> /usr/share/ca-certificates/150282.pem (1708 bytes)
	I0603 12:07:09.150432   73294 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0603 12:07:09.178940   73294 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0603 12:07:09.202491   73294 ssh_runner.go:195] Run: openssl version
	I0603 12:07:09.211182   73294 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15028.pem && ln -fs /usr/share/ca-certificates/15028.pem /etc/ssl/certs/15028.pem"
	I0603 12:07:09.226290   73294 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15028.pem
	I0603 12:07:09.232034   73294 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jun  3 10:51 /usr/share/ca-certificates/15028.pem
	I0603 12:07:09.232103   73294 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15028.pem
	I0603 12:07:09.240592   73294 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/15028.pem /etc/ssl/certs/51391683.0"
	I0603 12:07:09.255018   73294 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/150282.pem && ln -fs /usr/share/ca-certificates/150282.pem /etc/ssl/certs/150282.pem"
	I0603 12:07:09.267194   73294 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/150282.pem
	I0603 12:07:09.272575   73294 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jun  3 10:51 /usr/share/ca-certificates/150282.pem
	I0603 12:07:09.272658   73294 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/150282.pem
	I0603 12:07:09.280687   73294 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/150282.pem /etc/ssl/certs/3ec20f2e.0"
	I0603 12:07:09.296232   73294 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0603 12:07:09.309706   73294 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0603 12:07:09.315596   73294 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jun  3 10:39 /usr/share/ca-certificates/minikubeCA.pem
	I0603 12:07:09.315661   73294 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0603 12:07:09.323283   73294 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
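The openssl -hash calls above compute the subject hash that OpenSSL expects as the file name of a trusted CA (<hash>.0), which is why each certificate is then symlinked under /etc/ssl/certs. A sketch of the same convention using the minikube CA from the log:

    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # prints b5213941
    ls -l /etc/ssl/certs/b5213941.0                                           # the symlink created by the command above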
	I0603 12:07:09.337780   73294 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0603 12:07:09.343627   73294 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0603 12:07:09.351742   73294 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0603 12:07:09.360465   73294 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0603 12:07:09.366733   73294 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0603 12:07:09.373061   73294 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0603 12:07:09.379649   73294 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
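Each -checkend 86400 run above asks openssl whether the certificate will still be valid 86400 seconds (24 hours) from now: exit status 0 means it will, non-zero means it is about to expire and would need regenerating. A one-line sketch of the same check:

    openssl x509 -noout -in /var/lib/minikube/certs/apiserver.crt -checkend 86400 && echo 'still valid for >24h'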
	I0603 12:07:09.385610   73294 kubeadm.go:391] StartCluster: {Name:default-k8s-diff-port-196710 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:default-k8s-diff-port-196710 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.60 Port:8444 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0603 12:07:09.385694   73294 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0603 12:07:09.385732   73294 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0603 12:07:09.434544   73294 cri.go:89] found id: ""
	I0603 12:07:09.434636   73294 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0603 12:07:09.446209   73294 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0603 12:07:09.446231   73294 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0603 12:07:09.446236   73294 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0603 12:07:09.446283   73294 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0603 12:07:09.456225   73294 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0603 12:07:09.457266   73294 kubeconfig.go:125] found "default-k8s-diff-port-196710" server: "https://192.168.61.60:8444"
	I0603 12:07:09.459519   73294 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0603 12:07:09.468977   73294 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.61.60
	I0603 12:07:09.469007   73294 kubeadm.go:1154] stopping kube-system containers ...
	I0603 12:07:09.469020   73294 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0603 12:07:09.469070   73294 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0603 12:07:09.508306   73294 cri.go:89] found id: ""
	I0603 12:07:09.508408   73294 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0603 12:07:09.526082   73294 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0603 12:07:09.536331   73294 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0603 12:07:09.536361   73294 kubeadm.go:156] found existing configuration files:
	
	I0603 12:07:09.536430   73294 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0603 12:07:09.549053   73294 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0603 12:07:09.549121   73294 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0603 12:07:09.562617   73294 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0603 12:07:09.574968   73294 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0603 12:07:09.575023   73294 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0603 12:07:05.287726   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | domain old-k8s-version-905554 has defined MAC address 52:54:00:3d:ed:07 in network mk-old-k8s-version-905554
	I0603 12:07:05.288228   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | unable to find current IP address of domain old-k8s-version-905554 in network mk-old-k8s-version-905554
	I0603 12:07:05.288264   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | I0603 12:07:05.288180   74674 retry.go:31] will retry after 401.575259ms: waiting for machine to come up
	I0603 12:07:05.691523   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | domain old-k8s-version-905554 has defined MAC address 52:54:00:3d:ed:07 in network mk-old-k8s-version-905554
	I0603 12:07:05.691945   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | unable to find current IP address of domain old-k8s-version-905554 in network mk-old-k8s-version-905554
	I0603 12:07:05.691977   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | I0603 12:07:05.691899   74674 retry.go:31] will retry after 473.67071ms: waiting for machine to come up
	I0603 12:07:06.167720   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | domain old-k8s-version-905554 has defined MAC address 52:54:00:3d:ed:07 in network mk-old-k8s-version-905554
	I0603 12:07:06.168286   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | unable to find current IP address of domain old-k8s-version-905554 in network mk-old-k8s-version-905554
	I0603 12:07:06.168317   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | I0603 12:07:06.168229   74674 retry.go:31] will retry after 610.631851ms: waiting for machine to come up
	I0603 12:07:06.780253   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | domain old-k8s-version-905554 has defined MAC address 52:54:00:3d:ed:07 in network mk-old-k8s-version-905554
	I0603 12:07:06.780747   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | unable to find current IP address of domain old-k8s-version-905554 in network mk-old-k8s-version-905554
	I0603 12:07:06.780771   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | I0603 12:07:06.780699   74674 retry.go:31] will retry after 1.150068976s: waiting for machine to come up
	I0603 12:07:07.932831   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | domain old-k8s-version-905554 has defined MAC address 52:54:00:3d:ed:07 in network mk-old-k8s-version-905554
	I0603 12:07:07.933375   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | unable to find current IP address of domain old-k8s-version-905554 in network mk-old-k8s-version-905554
	I0603 12:07:07.933409   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | I0603 12:07:07.933282   74674 retry.go:31] will retry after 900.546424ms: waiting for machine to come up
	I0603 12:07:08.835303   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | domain old-k8s-version-905554 has defined MAC address 52:54:00:3d:ed:07 in network mk-old-k8s-version-905554
	I0603 12:07:08.835794   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | unable to find current IP address of domain old-k8s-version-905554 in network mk-old-k8s-version-905554
	I0603 12:07:08.835827   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | I0603 12:07:08.835739   74674 retry.go:31] will retry after 1.64990511s: waiting for machine to come up
	I0603 12:07:08.960402   73179 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0603 12:07:08.971814   73179 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0603 12:07:08.989522   73179 system_pods.go:43] waiting for kube-system pods to appear ...
	I0603 12:07:09.001926   73179 system_pods.go:59] 8 kube-system pods found
	I0603 12:07:09.001960   73179 system_pods.go:61] "coredns-7db6d8ff4d-pv665" [58d7a423-2ac7-4a57-a76f-e8dfaeac9732] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0603 12:07:09.001975   73179 system_pods.go:61] "etcd-no-preload-602118" [3a6a1eb1-0234-47d8-8eaa-e6f2de5fc7b8] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0603 12:07:09.001987   73179 system_pods.go:61] "kube-apiserver-no-preload-602118" [d6b168b3-1605-4e04-8c6a-c5c22a080a10] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0603 12:07:09.001998   73179 system_pods.go:61] "kube-controller-manager-no-preload-602118" [b045e945-f022-443d-b0f6-17f0b283f8fa] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0603 12:07:09.002010   73179 system_pods.go:61] "kube-proxy-r9fkt" [10eef751-51d7-4794-9805-26587a395a5b] Running
	I0603 12:07:09.002019   73179 system_pods.go:61] "kube-scheduler-no-preload-602118" [2032b4c9-ff95-4435-bbb2-ad6f87598555] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0603 12:07:09.002030   73179 system_pods.go:61] "metrics-server-569cc877fc-jgjzt" [ac1aac82-0d34-47e1-b9c5-4f1f501c8bd0] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0603 12:07:09.002035   73179 system_pods.go:61] "storage-provisioner" [6d38abd9-e1e6-4e71-b96f-4653971b511f] Running
	I0603 12:07:09.002044   73179 system_pods.go:74] duration metric: took 12.497722ms to wait for pod list to return data ...
	I0603 12:07:09.002059   73179 node_conditions.go:102] verifying NodePressure condition ...
	I0603 12:07:09.005347   73179 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0603 12:07:09.005374   73179 node_conditions.go:123] node cpu capacity is 2
	I0603 12:07:09.005394   73179 node_conditions.go:105] duration metric: took 3.3294ms to run NodePressure ...
	I0603 12:07:09.005414   73179 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0603 12:07:09.274344   73179 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0603 12:07:09.280021   73179 kubeadm.go:733] kubelet initialised
	I0603 12:07:09.280042   73179 kubeadm.go:734] duration metric: took 5.676641ms waiting for restarted kubelet to initialise ...
	I0603 12:07:09.280056   73179 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0603 12:07:09.285090   73179 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-pv665" in "kube-system" namespace to be "Ready" ...
	I0603 12:07:09.290457   73179 pod_ready.go:97] node "no-preload-602118" hosting pod "coredns-7db6d8ff4d-pv665" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-602118" has status "Ready":"False"
	I0603 12:07:09.290478   73179 pod_ready.go:81] duration metric: took 5.366255ms for pod "coredns-7db6d8ff4d-pv665" in "kube-system" namespace to be "Ready" ...
	E0603 12:07:09.290487   73179 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-602118" hosting pod "coredns-7db6d8ff4d-pv665" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-602118" has status "Ready":"False"
	I0603 12:07:09.290495   73179 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-602118" in "kube-system" namespace to be "Ready" ...
	I0603 12:07:09.296847   73179 pod_ready.go:97] node "no-preload-602118" hosting pod "etcd-no-preload-602118" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-602118" has status "Ready":"False"
	I0603 12:07:09.296872   73179 pod_ready.go:81] duration metric: took 6.368777ms for pod "etcd-no-preload-602118" in "kube-system" namespace to be "Ready" ...
	E0603 12:07:09.296883   73179 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-602118" hosting pod "etcd-no-preload-602118" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-602118" has status "Ready":"False"
	I0603 12:07:09.296895   73179 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-602118" in "kube-system" namespace to be "Ready" ...
	I0603 12:07:09.300895   73179 pod_ready.go:97] node "no-preload-602118" hosting pod "kube-apiserver-no-preload-602118" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-602118" has status "Ready":"False"
	I0603 12:07:09.300914   73179 pod_ready.go:81] duration metric: took 4.012614ms for pod "kube-apiserver-no-preload-602118" in "kube-system" namespace to be "Ready" ...
	E0603 12:07:09.300922   73179 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-602118" hosting pod "kube-apiserver-no-preload-602118" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-602118" has status "Ready":"False"
	I0603 12:07:09.300927   73179 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-602118" in "kube-system" namespace to be "Ready" ...
	I0603 12:07:09.394237   73179 pod_ready.go:97] node "no-preload-602118" hosting pod "kube-controller-manager-no-preload-602118" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-602118" has status "Ready":"False"
	I0603 12:07:09.394267   73179 pod_ready.go:81] duration metric: took 93.331406ms for pod "kube-controller-manager-no-preload-602118" in "kube-system" namespace to be "Ready" ...
	E0603 12:07:09.394280   73179 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-602118" hosting pod "kube-controller-manager-no-preload-602118" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-602118" has status "Ready":"False"
	I0603 12:07:09.394289   73179 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-r9fkt" in "kube-system" namespace to be "Ready" ...
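	The pod_ready.go lines above wait for each system-critical pod to report "Ready", but skip the check (and log WaitExtra errors) while the hosting node itself still reports Ready=False. A minimal client-go sketch of that kind of wait loop follows; the kubeconfig path is a placeholder, the pod name is taken from the log, and this is not minikube's actual implementation.

	// Sketch only: poll a kube-system pod until Ready, retrying while the
	// hosting node is NotReady, mirroring the "skipping!" lines above.
	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func nodeReady(n *corev1.Node) bool {
		for _, c := range n.Status.Conditions {
			if c.Type == corev1.NodeReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}

	func podReady(p *corev1.Pod) bool {
		for _, c := range p.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder path
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		ctx, cancel := context.WithTimeout(context.Background(), 4*time.Minute)
		defer cancel()

		const ns, name = "kube-system", "kube-proxy-r9fkt" // example pod from the log
		for {
			pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
			if err == nil {
				node, nerr := cs.CoreV1().Nodes().Get(ctx, pod.Spec.NodeName, metav1.GetOptions{})
				switch {
				case nerr == nil && !nodeReady(node):
					fmt.Printf("node %q not Ready yet, retrying\n", node.Name)
				case podReady(pod):
					fmt.Printf("pod %q is Ready\n", name)
					return
				}
			}
			select {
			case <-ctx.Done():
				fmt.Println("timed out waiting for pod")
				return
			case <-time.After(2 * time.Second):
			}
		}
	}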
	I0603 12:07:09.585502   73294 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0603 12:07:09.969462   73294 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0603 12:07:09.969522   73294 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0603 12:07:09.979025   73294 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0603 12:07:09.987866   73294 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0603 12:07:09.987920   73294 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0603 12:07:09.997090   73294 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0603 12:07:10.006350   73294 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0603 12:07:10.214287   73294 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0603 12:07:11.298009   73294 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.083680634s)
	I0603 12:07:11.298064   73294 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0603 12:07:11.562011   73294 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0603 12:07:11.680895   73294 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0603 12:07:11.790078   73294 api_server.go:52] waiting for apiserver process to appear ...
	I0603 12:07:11.790166   73294 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:07:12.291115   73294 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:07:12.790366   73294 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:07:12.840813   73294 api_server.go:72] duration metric: took 1.050741427s to wait for apiserver process to appear ...
	I0603 12:07:12.840845   73294 api_server.go:88] waiting for apiserver healthz status ...
	I0603 12:07:12.840869   73294 api_server.go:253] Checking apiserver healthz at https://192.168.61.60:8444/healthz ...
	I0603 12:07:12.841376   73294 api_server.go:269] stopped: https://192.168.61.60:8444/healthz: Get "https://192.168.61.60:8444/healthz": dial tcp 192.168.61.60:8444: connect: connection refused
	I0603 12:07:13.341000   73294 api_server.go:253] Checking apiserver healthz at https://192.168.61.60:8444/healthz ...
	I0603 12:07:10.487141   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | domain old-k8s-version-905554 has defined MAC address 52:54:00:3d:ed:07 in network mk-old-k8s-version-905554
	I0603 12:07:10.564570   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | unable to find current IP address of domain old-k8s-version-905554 in network mk-old-k8s-version-905554
	I0603 12:07:10.564611   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | I0603 12:07:10.487617   74674 retry.go:31] will retry after 1.948227414s: waiting for machine to come up
	I0603 12:07:12.438091   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | domain old-k8s-version-905554 has defined MAC address 52:54:00:3d:ed:07 in network mk-old-k8s-version-905554
	I0603 12:07:12.438596   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | unable to find current IP address of domain old-k8s-version-905554 in network mk-old-k8s-version-905554
	I0603 12:07:12.438620   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | I0603 12:07:12.438540   74674 retry.go:31] will retry after 2.378980516s: waiting for machine to come up
	I0603 12:07:14.819161   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | domain old-k8s-version-905554 has defined MAC address 52:54:00:3d:ed:07 in network mk-old-k8s-version-905554
	I0603 12:07:14.819782   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | unable to find current IP address of domain old-k8s-version-905554 in network mk-old-k8s-version-905554
	I0603 12:07:14.819806   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | I0603 12:07:14.819722   74674 retry.go:31] will retry after 2.362614226s: waiting for machine to come up
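	The retry.go lines above wait for the old-k8s-version-905554 machine to obtain an IP address, sleeping for a growing interval between attempts (1.9s, 2.4s, 3.8s, ...). A generic sketch of that retry-with-backoff pattern follows; the check function is a placeholder, whereas the real code looks up the domain's DHCP lease via libvirt.

	// Sketch only: retry a check with a growing, jittered delay until it
	// succeeds or the deadline passes.
	package main

	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)

	func retryUntil(deadline time.Time, check func() error) error {
		delay := time.Second
		for {
			err := check()
			if err == nil {
				return nil
			}
			if time.Now().After(deadline) {
				return fmt.Errorf("gave up: %w", err)
			}
			// grow the delay and add jitter, roughly like the steps in the log
			wait := delay + time.Duration(rand.Int63n(int64(delay)))
			fmt.Printf("will retry after %s: %v\n", wait, err)
			time.Sleep(wait)
			delay = delay * 3 / 2
		}
	}

	func main() {
		attempts := 0
		err := retryUntil(time.Now().Add(2*time.Minute), func() error {
			attempts++
			if attempts < 4 { // stand-in for "unable to find current IP address of domain"
				return errors.New("waiting for machine to come up")
			}
			return nil
		})
		fmt.Println("result:", err)
	}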
	I0603 12:07:11.067879   73179 pod_ready.go:92] pod "kube-proxy-r9fkt" in "kube-system" namespace has status "Ready":"True"
	I0603 12:07:11.067907   73179 pod_ready.go:81] duration metric: took 1.673607925s for pod "kube-proxy-r9fkt" in "kube-system" namespace to be "Ready" ...
	I0603 12:07:11.067922   73179 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-602118" in "kube-system" namespace to be "Ready" ...
	I0603 12:07:13.078490   73179 pod_ready.go:102] pod "kube-scheduler-no-preload-602118" in "kube-system" namespace has status "Ready":"False"
	I0603 12:07:15.451457   73294 api_server.go:279] https://192.168.61.60:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0603 12:07:15.451491   73294 api_server.go:103] status: https://192.168.61.60:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0603 12:07:15.451509   73294 api_server.go:253] Checking apiserver healthz at https://192.168.61.60:8444/healthz ...
	I0603 12:07:15.474239   73294 api_server.go:279] https://192.168.61.60:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0603 12:07:15.474272   73294 api_server.go:103] status: https://192.168.61.60:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0603 12:07:15.841786   73294 api_server.go:253] Checking apiserver healthz at https://192.168.61.60:8444/healthz ...
	I0603 12:07:15.846026   73294 api_server.go:279] https://192.168.61.60:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0603 12:07:15.846051   73294 api_server.go:103] status: https://192.168.61.60:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0603 12:07:16.341687   73294 api_server.go:253] Checking apiserver healthz at https://192.168.61.60:8444/healthz ...
	I0603 12:07:16.348062   73294 api_server.go:279] https://192.168.61.60:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0603 12:07:16.348097   73294 api_server.go:103] status: https://192.168.61.60:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0603 12:07:16.841677   73294 api_server.go:253] Checking apiserver healthz at https://192.168.61.60:8444/healthz ...
	I0603 12:07:16.851931   73294 api_server.go:279] https://192.168.61.60:8444/healthz returned 200:
	ok
	I0603 12:07:16.861724   73294 api_server.go:141] control plane version: v1.30.1
	I0603 12:07:16.861752   73294 api_server.go:131] duration metric: took 4.020899633s to wait for apiserver health ...
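	The api_server.go lines above show the healthz wait in full: connection refused while the apiserver restarts, then 403 for the anonymous probe, then 500 while the rbac/bootstrap-roles and priority-class post-start hooks finish, and finally 200. A minimal sketch of such a polling loop follows; the endpoint and timeout are placeholders taken from the log, and TLS verification is skipped because the probe has no client certificate.

	// Sketch only: poll /healthz and treat only HTTP 200 as healthy.
	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{
			Timeout: 5 * time.Second,
			Transport: &http.Transport{
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // anonymous probe, no client cert
			},
		}
		url := "https://192.168.61.60:8444/healthz" // endpoint from the log
		deadline := time.Now().Add(4 * time.Minute)

		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err != nil {
				fmt.Println("stopped:", err) // e.g. connection refused while the apiserver restarts
			} else {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					fmt.Println("healthz returned 200:", string(body))
					return
				}
				fmt.Printf("healthz returned %d, retrying\n", resp.StatusCode)
			}
			time.Sleep(500 * time.Millisecond)
		}
		fmt.Println("timed out waiting for apiserver health")
	}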
	I0603 12:07:16.861762   73294 cni.go:84] Creating CNI manager for ""
	I0603 12:07:16.861782   73294 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0603 12:07:16.863553   73294 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0603 12:07:16.864875   73294 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0603 12:07:16.875581   73294 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0603 12:07:16.895092   73294 system_pods.go:43] waiting for kube-system pods to appear ...
	I0603 12:07:16.906573   73294 system_pods.go:59] 8 kube-system pods found
	I0603 12:07:16.906609   73294 system_pods.go:61] "coredns-7db6d8ff4d-wrw9f" [0125eb3a-9a5a-4bb3-a175-0e49b4392d1e] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0603 12:07:16.906621   73294 system_pods.go:61] "etcd-default-k8s-diff-port-196710" [2189cad5-b6e7-4cc5-9ce8-22ba18abce59] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0603 12:07:16.906631   73294 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-196710" [1aee234a-8876-4594-a0d6-7c7dfb7f4d3e] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0603 12:07:16.906640   73294 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-196710" [18029d80-921c-477c-a82f-26eb1a068b97] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0603 12:07:16.906650   73294 system_pods.go:61] "kube-proxy-84l9f" [5568c7a8-5237-4240-a9dc-6436b156010c] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0603 12:07:16.906673   73294 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-196710" [9fafec03-b5fb-4ea4-98df-0798cd8a01a1] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0603 12:07:16.906681   73294 system_pods.go:61] "metrics-server-569cc877fc-tnhbj" [352fbe10-2f52-434e-91fc-84fbf439a42d] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0603 12:07:16.906690   73294 system_pods.go:61] "storage-provisioner" [24c5e290-d3d7-4523-9432-c7591fa95e18] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0603 12:07:16.906700   73294 system_pods.go:74] duration metric: took 11.592885ms to wait for pod list to return data ...
	I0603 12:07:16.906719   73294 node_conditions.go:102] verifying NodePressure condition ...
	I0603 12:07:16.910038   73294 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0603 12:07:16.910065   73294 node_conditions.go:123] node cpu capacity is 2
	I0603 12:07:16.910079   73294 node_conditions.go:105] duration metric: took 3.350705ms to run NodePressure ...
	I0603 12:07:16.910101   73294 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0603 12:07:17.203847   73294 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0603 12:07:17.208169   73294 kubeadm.go:733] kubelet initialised
	I0603 12:07:17.208196   73294 kubeadm.go:734] duration metric: took 4.31857ms waiting for restarted kubelet to initialise ...
	I0603 12:07:17.208206   73294 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0603 12:07:17.213480   73294 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-wrw9f" in "kube-system" namespace to be "Ready" ...
	I0603 12:07:17.227906   73294 pod_ready.go:97] node "default-k8s-diff-port-196710" hosting pod "coredns-7db6d8ff4d-wrw9f" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-196710" has status "Ready":"False"
	I0603 12:07:17.227931   73294 pod_ready.go:81] duration metric: took 14.426593ms for pod "coredns-7db6d8ff4d-wrw9f" in "kube-system" namespace to be "Ready" ...
	E0603 12:07:17.227941   73294 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-196710" hosting pod "coredns-7db6d8ff4d-wrw9f" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-196710" has status "Ready":"False"
	I0603 12:07:17.227949   73294 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-196710" in "kube-system" namespace to be "Ready" ...
	I0603 12:07:17.231837   73294 pod_ready.go:97] node "default-k8s-diff-port-196710" hosting pod "etcd-default-k8s-diff-port-196710" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-196710" has status "Ready":"False"
	I0603 12:07:17.231867   73294 pod_ready.go:81] duration metric: took 3.906779ms for pod "etcd-default-k8s-diff-port-196710" in "kube-system" namespace to be "Ready" ...
	E0603 12:07:17.231881   73294 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-196710" hosting pod "etcd-default-k8s-diff-port-196710" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-196710" has status "Ready":"False"
	I0603 12:07:17.231890   73294 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-196710" in "kube-system" namespace to be "Ready" ...
	I0603 12:07:17.238497   73294 pod_ready.go:97] node "default-k8s-diff-port-196710" hosting pod "kube-apiserver-default-k8s-diff-port-196710" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-196710" has status "Ready":"False"
	I0603 12:07:17.238525   73294 pod_ready.go:81] duration metric: took 6.62644ms for pod "kube-apiserver-default-k8s-diff-port-196710" in "kube-system" namespace to be "Ready" ...
	E0603 12:07:17.238537   73294 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-196710" hosting pod "kube-apiserver-default-k8s-diff-port-196710" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-196710" has status "Ready":"False"
	I0603 12:07:17.238557   73294 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-196710" in "kube-system" namespace to be "Ready" ...
	I0603 12:07:17.298265   73294 pod_ready.go:97] node "default-k8s-diff-port-196710" hosting pod "kube-controller-manager-default-k8s-diff-port-196710" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-196710" has status "Ready":"False"
	I0603 12:07:17.298293   73294 pod_ready.go:81] duration metric: took 59.722372ms for pod "kube-controller-manager-default-k8s-diff-port-196710" in "kube-system" namespace to be "Ready" ...
	E0603 12:07:17.298303   73294 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-196710" hosting pod "kube-controller-manager-default-k8s-diff-port-196710" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-196710" has status "Ready":"False"
	I0603 12:07:17.298310   73294 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-84l9f" in "kube-system" namespace to be "Ready" ...
	I0603 12:07:18.098358   73294 pod_ready.go:92] pod "kube-proxy-84l9f" in "kube-system" namespace has status "Ready":"True"
	I0603 12:07:18.098388   73294 pod_ready.go:81] duration metric: took 800.069928ms for pod "kube-proxy-84l9f" in "kube-system" namespace to be "Ready" ...
	I0603 12:07:18.098401   73294 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-196710" in "kube-system" namespace to be "Ready" ...
	I0603 12:07:17.184410   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | domain old-k8s-version-905554 has defined MAC address 52:54:00:3d:ed:07 in network mk-old-k8s-version-905554
	I0603 12:07:17.184937   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | unable to find current IP address of domain old-k8s-version-905554 in network mk-old-k8s-version-905554
	I0603 12:07:17.184967   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | I0603 12:07:17.184893   74674 retry.go:31] will retry after 3.787322948s: waiting for machine to come up
	I0603 12:07:15.574365   73179 pod_ready.go:102] pod "kube-scheduler-no-preload-602118" in "kube-system" namespace has status "Ready":"False"
	I0603 12:07:17.575261   73179 pod_ready.go:102] pod "kube-scheduler-no-preload-602118" in "kube-system" namespace has status "Ready":"False"
	I0603 12:07:20.073582   73179 pod_ready.go:102] pod "kube-scheduler-no-preload-602118" in "kube-system" namespace has status "Ready":"False"
	I0603 12:07:22.423964   72964 start.go:364] duration metric: took 54.978859199s to acquireMachinesLock for "embed-certs-725022"
	I0603 12:07:22.424033   72964 start.go:96] Skipping create...Using existing machine configuration
	I0603 12:07:22.424044   72964 fix.go:54] fixHost starting: 
	I0603 12:07:22.424484   72964 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 12:07:22.424521   72964 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 12:07:22.446913   72964 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45395
	I0603 12:07:22.447356   72964 main.go:141] libmachine: () Calling .GetVersion
	I0603 12:07:22.447895   72964 main.go:141] libmachine: Using API Version  1
	I0603 12:07:22.447926   72964 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 12:07:22.448408   72964 main.go:141] libmachine: () Calling .GetMachineName
	I0603 12:07:22.448648   72964 main.go:141] libmachine: (embed-certs-725022) Calling .DriverName
	I0603 12:07:22.448838   72964 main.go:141] libmachine: (embed-certs-725022) Calling .GetState
	I0603 12:07:22.450953   72964 fix.go:112] recreateIfNeeded on embed-certs-725022: state=Stopped err=<nil>
	I0603 12:07:22.450977   72964 main.go:141] libmachine: (embed-certs-725022) Calling .DriverName
	W0603 12:07:22.451199   72964 fix.go:138] unexpected machine state, will restart: <nil>
	I0603 12:07:22.513348   72964 out.go:177] * Restarting existing kvm2 VM for "embed-certs-725022" ...
	I0603 12:07:20.975695   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | domain old-k8s-version-905554 has defined MAC address 52:54:00:3d:ed:07 in network mk-old-k8s-version-905554
	I0603 12:07:20.976290   73662 main.go:141] libmachine: (old-k8s-version-905554) Found IP for machine: 192.168.39.155
	I0603 12:07:20.976345   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | domain old-k8s-version-905554 has current primary IP address 192.168.39.155 and MAC address 52:54:00:3d:ed:07 in network mk-old-k8s-version-905554
	I0603 12:07:20.976358   73662 main.go:141] libmachine: (old-k8s-version-905554) Reserving static IP address...
	I0603 12:07:20.976837   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | found host DHCP lease matching {name: "old-k8s-version-905554", mac: "52:54:00:3d:ed:07", ip: "192.168.39.155"} in network mk-old-k8s-version-905554: {Iface:virbr1 ExpiryTime:2024-06-03 13:07:14 +0000 UTC Type:0 Mac:52:54:00:3d:ed:07 Iaid: IPaddr:192.168.39.155 Prefix:24 Hostname:old-k8s-version-905554 Clientid:01:52:54:00:3d:ed:07}
	I0603 12:07:20.976864   73662 main.go:141] libmachine: (old-k8s-version-905554) Reserved static IP address: 192.168.39.155
	I0603 12:07:20.976883   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | skip adding static IP to network mk-old-k8s-version-905554 - found existing host DHCP lease matching {name: "old-k8s-version-905554", mac: "52:54:00:3d:ed:07", ip: "192.168.39.155"}
	I0603 12:07:20.976894   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | Getting to WaitForSSH function...
	I0603 12:07:20.976902   73662 main.go:141] libmachine: (old-k8s-version-905554) Waiting for SSH to be available...
	I0603 12:07:20.978969   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | domain old-k8s-version-905554 has defined MAC address 52:54:00:3d:ed:07 in network mk-old-k8s-version-905554
	I0603 12:07:20.979326   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:ed:07", ip: ""} in network mk-old-k8s-version-905554: {Iface:virbr1 ExpiryTime:2024-06-03 13:07:14 +0000 UTC Type:0 Mac:52:54:00:3d:ed:07 Iaid: IPaddr:192.168.39.155 Prefix:24 Hostname:old-k8s-version-905554 Clientid:01:52:54:00:3d:ed:07}
	I0603 12:07:20.979361   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | domain old-k8s-version-905554 has defined IP address 192.168.39.155 and MAC address 52:54:00:3d:ed:07 in network mk-old-k8s-version-905554
	I0603 12:07:20.979458   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | Using SSH client type: external
	I0603 12:07:20.979488   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | Using SSH private key: /home/jenkins/minikube-integration/19008-7755/.minikube/machines/old-k8s-version-905554/id_rsa (-rw-------)
	I0603 12:07:20.979525   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.155 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19008-7755/.minikube/machines/old-k8s-version-905554/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0603 12:07:20.979540   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | About to run SSH command:
	I0603 12:07:20.979564   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | exit 0
	I0603 12:07:21.103178   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | SSH cmd err, output: <nil>: 
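	The WaitForSSH step above simply runs `exit 0` through the external ssh client with non-interactive options until the command succeeds. A minimal sketch of that loop follows; the key path, address, and timeout are copied from the log and stand in for whatever the real machine uses.

	// Sketch only: retry "ssh ... exit 0" until the guest accepts connections.
	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	func main() {
		args := []string{
			"-o", "ConnectTimeout=10",
			"-o", "StrictHostKeyChecking=no",
			"-o", "UserKnownHostsFile=/dev/null",
			"-o", "PasswordAuthentication=no",
			"-i", "/home/jenkins/minikube-integration/19008-7755/.minikube/machines/old-k8s-version-905554/id_rsa",
			"-p", "22",
			"docker@192.168.39.155",
			"exit 0",
		}
		deadline := time.Now().Add(3 * time.Minute)
		for time.Now().Before(deadline) {
			if err := exec.Command("ssh", args...).Run(); err == nil {
				fmt.Println("SSH is available")
				return
			}
			time.Sleep(2 * time.Second)
		}
		fmt.Println("timed out waiting for SSH")
	}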
	I0603 12:07:21.103557   73662 main.go:141] libmachine: (old-k8s-version-905554) Calling .GetConfigRaw
	I0603 12:07:21.104215   73662 main.go:141] libmachine: (old-k8s-version-905554) Calling .GetIP
	I0603 12:07:21.107017   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | domain old-k8s-version-905554 has defined MAC address 52:54:00:3d:ed:07 in network mk-old-k8s-version-905554
	I0603 12:07:21.107397   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:ed:07", ip: ""} in network mk-old-k8s-version-905554: {Iface:virbr1 ExpiryTime:2024-06-03 13:07:14 +0000 UTC Type:0 Mac:52:54:00:3d:ed:07 Iaid: IPaddr:192.168.39.155 Prefix:24 Hostname:old-k8s-version-905554 Clientid:01:52:54:00:3d:ed:07}
	I0603 12:07:21.107424   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | domain old-k8s-version-905554 has defined IP address 192.168.39.155 and MAC address 52:54:00:3d:ed:07 in network mk-old-k8s-version-905554
	I0603 12:07:21.107619   73662 profile.go:143] Saving config to /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/old-k8s-version-905554/config.json ...
	I0603 12:07:21.107782   73662 machine.go:94] provisionDockerMachine start ...
	I0603 12:07:21.107809   73662 main.go:141] libmachine: (old-k8s-version-905554) Calling .DriverName
	I0603 12:07:21.107979   73662 main.go:141] libmachine: (old-k8s-version-905554) Calling .GetSSHHostname
	I0603 12:07:21.110021   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | domain old-k8s-version-905554 has defined MAC address 52:54:00:3d:ed:07 in network mk-old-k8s-version-905554
	I0603 12:07:21.110389   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:ed:07", ip: ""} in network mk-old-k8s-version-905554: {Iface:virbr1 ExpiryTime:2024-06-03 13:07:14 +0000 UTC Type:0 Mac:52:54:00:3d:ed:07 Iaid: IPaddr:192.168.39.155 Prefix:24 Hostname:old-k8s-version-905554 Clientid:01:52:54:00:3d:ed:07}
	I0603 12:07:21.110414   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | domain old-k8s-version-905554 has defined IP address 192.168.39.155 and MAC address 52:54:00:3d:ed:07 in network mk-old-k8s-version-905554
	I0603 12:07:21.110540   73662 main.go:141] libmachine: (old-k8s-version-905554) Calling .GetSSHPort
	I0603 12:07:21.110719   73662 main.go:141] libmachine: (old-k8s-version-905554) Calling .GetSSHKeyPath
	I0603 12:07:21.110880   73662 main.go:141] libmachine: (old-k8s-version-905554) Calling .GetSSHKeyPath
	I0603 12:07:21.111026   73662 main.go:141] libmachine: (old-k8s-version-905554) Calling .GetSSHUsername
	I0603 12:07:21.111239   73662 main.go:141] libmachine: Using SSH client type: native
	I0603 12:07:21.111467   73662 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.155 22 <nil> <nil>}
	I0603 12:07:21.111484   73662 main.go:141] libmachine: About to run SSH command:
	hostname
	I0603 12:07:21.219123   73662 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0603 12:07:21.219148   73662 main.go:141] libmachine: (old-k8s-version-905554) Calling .GetMachineName
	I0603 12:07:21.219379   73662 buildroot.go:166] provisioning hostname "old-k8s-version-905554"
	I0603 12:07:21.219403   73662 main.go:141] libmachine: (old-k8s-version-905554) Calling .GetMachineName
	I0603 12:07:21.219571   73662 main.go:141] libmachine: (old-k8s-version-905554) Calling .GetSSHHostname
	I0603 12:07:21.222603   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | domain old-k8s-version-905554 has defined MAC address 52:54:00:3d:ed:07 in network mk-old-k8s-version-905554
	I0603 12:07:21.223000   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:ed:07", ip: ""} in network mk-old-k8s-version-905554: {Iface:virbr1 ExpiryTime:2024-06-03 13:07:14 +0000 UTC Type:0 Mac:52:54:00:3d:ed:07 Iaid: IPaddr:192.168.39.155 Prefix:24 Hostname:old-k8s-version-905554 Clientid:01:52:54:00:3d:ed:07}
	I0603 12:07:21.223058   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | domain old-k8s-version-905554 has defined IP address 192.168.39.155 and MAC address 52:54:00:3d:ed:07 in network mk-old-k8s-version-905554
	I0603 12:07:21.223210   73662 main.go:141] libmachine: (old-k8s-version-905554) Calling .GetSSHPort
	I0603 12:07:21.223406   73662 main.go:141] libmachine: (old-k8s-version-905554) Calling .GetSSHKeyPath
	I0603 12:07:21.223573   73662 main.go:141] libmachine: (old-k8s-version-905554) Calling .GetSSHKeyPath
	I0603 12:07:21.223741   73662 main.go:141] libmachine: (old-k8s-version-905554) Calling .GetSSHUsername
	I0603 12:07:21.223926   73662 main.go:141] libmachine: Using SSH client type: native
	I0603 12:07:21.224087   73662 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.155 22 <nil> <nil>}
	I0603 12:07:21.224099   73662 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-905554 && echo "old-k8s-version-905554" | sudo tee /etc/hostname
	I0603 12:07:21.346108   73662 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-905554
	
	I0603 12:07:21.346135   73662 main.go:141] libmachine: (old-k8s-version-905554) Calling .GetSSHHostname
	I0603 12:07:21.348801   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | domain old-k8s-version-905554 has defined MAC address 52:54:00:3d:ed:07 in network mk-old-k8s-version-905554
	I0603 12:07:21.349099   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:ed:07", ip: ""} in network mk-old-k8s-version-905554: {Iface:virbr1 ExpiryTime:2024-06-03 13:07:14 +0000 UTC Type:0 Mac:52:54:00:3d:ed:07 Iaid: IPaddr:192.168.39.155 Prefix:24 Hostname:old-k8s-version-905554 Clientid:01:52:54:00:3d:ed:07}
	I0603 12:07:21.349129   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | domain old-k8s-version-905554 has defined IP address 192.168.39.155 and MAC address 52:54:00:3d:ed:07 in network mk-old-k8s-version-905554
	I0603 12:07:21.349295   73662 main.go:141] libmachine: (old-k8s-version-905554) Calling .GetSSHPort
	I0603 12:07:21.349498   73662 main.go:141] libmachine: (old-k8s-version-905554) Calling .GetSSHKeyPath
	I0603 12:07:21.349680   73662 main.go:141] libmachine: (old-k8s-version-905554) Calling .GetSSHKeyPath
	I0603 12:07:21.349849   73662 main.go:141] libmachine: (old-k8s-version-905554) Calling .GetSSHUsername
	I0603 12:07:21.350036   73662 main.go:141] libmachine: Using SSH client type: native
	I0603 12:07:21.350187   73662 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.155 22 <nil> <nil>}
	I0603 12:07:21.350204   73662 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-905554' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-905554/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-905554' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0603 12:07:21.467941   73662 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0603 12:07:21.467970   73662 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19008-7755/.minikube CaCertPath:/home/jenkins/minikube-integration/19008-7755/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19008-7755/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19008-7755/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19008-7755/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19008-7755/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19008-7755/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19008-7755/.minikube}
	I0603 12:07:21.467999   73662 buildroot.go:174] setting up certificates
	I0603 12:07:21.468008   73662 provision.go:84] configureAuth start
	I0603 12:07:21.468021   73662 main.go:141] libmachine: (old-k8s-version-905554) Calling .GetMachineName
	I0603 12:07:21.468308   73662 main.go:141] libmachine: (old-k8s-version-905554) Calling .GetIP
	I0603 12:07:21.470801   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | domain old-k8s-version-905554 has defined MAC address 52:54:00:3d:ed:07 in network mk-old-k8s-version-905554
	I0603 12:07:21.471158   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:ed:07", ip: ""} in network mk-old-k8s-version-905554: {Iface:virbr1 ExpiryTime:2024-06-03 13:07:14 +0000 UTC Type:0 Mac:52:54:00:3d:ed:07 Iaid: IPaddr:192.168.39.155 Prefix:24 Hostname:old-k8s-version-905554 Clientid:01:52:54:00:3d:ed:07}
	I0603 12:07:21.471185   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | domain old-k8s-version-905554 has defined IP address 192.168.39.155 and MAC address 52:54:00:3d:ed:07 in network mk-old-k8s-version-905554
	I0603 12:07:21.471336   73662 main.go:141] libmachine: (old-k8s-version-905554) Calling .GetSSHHostname
	I0603 12:07:21.473733   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | domain old-k8s-version-905554 has defined MAC address 52:54:00:3d:ed:07 in network mk-old-k8s-version-905554
	I0603 12:07:21.474058   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:ed:07", ip: ""} in network mk-old-k8s-version-905554: {Iface:virbr1 ExpiryTime:2024-06-03 13:07:14 +0000 UTC Type:0 Mac:52:54:00:3d:ed:07 Iaid: IPaddr:192.168.39.155 Prefix:24 Hostname:old-k8s-version-905554 Clientid:01:52:54:00:3d:ed:07}
	I0603 12:07:21.474092   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | domain old-k8s-version-905554 has defined IP address 192.168.39.155 and MAC address 52:54:00:3d:ed:07 in network mk-old-k8s-version-905554
	I0603 12:07:21.474276   73662 provision.go:143] copyHostCerts
	I0603 12:07:21.474355   73662 exec_runner.go:144] found /home/jenkins/minikube-integration/19008-7755/.minikube/ca.pem, removing ...
	I0603 12:07:21.474370   73662 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19008-7755/.minikube/ca.pem
	I0603 12:07:21.474429   73662 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19008-7755/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19008-7755/.minikube/ca.pem (1082 bytes)
	I0603 12:07:21.474534   73662 exec_runner.go:144] found /home/jenkins/minikube-integration/19008-7755/.minikube/cert.pem, removing ...
	I0603 12:07:21.474546   73662 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19008-7755/.minikube/cert.pem
	I0603 12:07:21.474577   73662 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19008-7755/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19008-7755/.minikube/cert.pem (1123 bytes)
	I0603 12:07:21.474645   73662 exec_runner.go:144] found /home/jenkins/minikube-integration/19008-7755/.minikube/key.pem, removing ...
	I0603 12:07:21.474654   73662 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19008-7755/.minikube/key.pem
	I0603 12:07:21.474680   73662 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19008-7755/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19008-7755/.minikube/key.pem (1679 bytes)
	I0603 12:07:21.474738   73662 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19008-7755/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19008-7755/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19008-7755/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-905554 san=[127.0.0.1 192.168.39.155 localhost minikube old-k8s-version-905554]
	I0603 12:07:21.720184   73662 provision.go:177] copyRemoteCerts
	I0603 12:07:21.720251   73662 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0603 12:07:21.720284   73662 main.go:141] libmachine: (old-k8s-version-905554) Calling .GetSSHHostname
	I0603 12:07:21.723338   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | domain old-k8s-version-905554 has defined MAC address 52:54:00:3d:ed:07 in network mk-old-k8s-version-905554
	I0603 12:07:21.723752   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:ed:07", ip: ""} in network mk-old-k8s-version-905554: {Iface:virbr1 ExpiryTime:2024-06-03 13:07:14 +0000 UTC Type:0 Mac:52:54:00:3d:ed:07 Iaid: IPaddr:192.168.39.155 Prefix:24 Hostname:old-k8s-version-905554 Clientid:01:52:54:00:3d:ed:07}
	I0603 12:07:21.723786   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | domain old-k8s-version-905554 has defined IP address 192.168.39.155 and MAC address 52:54:00:3d:ed:07 in network mk-old-k8s-version-905554
	I0603 12:07:21.723993   73662 main.go:141] libmachine: (old-k8s-version-905554) Calling .GetSSHPort
	I0603 12:07:21.724208   73662 main.go:141] libmachine: (old-k8s-version-905554) Calling .GetSSHKeyPath
	I0603 12:07:21.724394   73662 main.go:141] libmachine: (old-k8s-version-905554) Calling .GetSSHUsername
	I0603 12:07:21.724615   73662 sshutil.go:53] new ssh client: &{IP:192.168.39.155 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19008-7755/.minikube/machines/old-k8s-version-905554/id_rsa Username:docker}
	I0603 12:07:21.809640   73662 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0603 12:07:21.834750   73662 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0603 12:07:21.858691   73662 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0603 12:07:21.887839   73662 provision.go:87] duration metric: took 419.817381ms to configureAuth
	I0603 12:07:21.887871   73662 buildroot.go:189] setting minikube options for container-runtime
	I0603 12:07:21.888061   73662 config.go:182] Loaded profile config "old-k8s-version-905554": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0603 12:07:21.888145   73662 main.go:141] libmachine: (old-k8s-version-905554) Calling .GetSSHHostname
	I0603 12:07:21.891350   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | domain old-k8s-version-905554 has defined MAC address 52:54:00:3d:ed:07 in network mk-old-k8s-version-905554
	I0603 12:07:21.891737   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:ed:07", ip: ""} in network mk-old-k8s-version-905554: {Iface:virbr1 ExpiryTime:2024-06-03 13:07:14 +0000 UTC Type:0 Mac:52:54:00:3d:ed:07 Iaid: IPaddr:192.168.39.155 Prefix:24 Hostname:old-k8s-version-905554 Clientid:01:52:54:00:3d:ed:07}
	I0603 12:07:21.891773   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | domain old-k8s-version-905554 has defined IP address 192.168.39.155 and MAC address 52:54:00:3d:ed:07 in network mk-old-k8s-version-905554
	I0603 12:07:21.891933   73662 main.go:141] libmachine: (old-k8s-version-905554) Calling .GetSSHPort
	I0603 12:07:21.892084   73662 main.go:141] libmachine: (old-k8s-version-905554) Calling .GetSSHKeyPath
	I0603 12:07:21.892278   73662 main.go:141] libmachine: (old-k8s-version-905554) Calling .GetSSHKeyPath
	I0603 12:07:21.892447   73662 main.go:141] libmachine: (old-k8s-version-905554) Calling .GetSSHUsername
	I0603 12:07:21.892608   73662 main.go:141] libmachine: Using SSH client type: native
	I0603 12:07:21.892822   73662 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.155 22 <nil> <nil>}
	I0603 12:07:21.892845   73662 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0603 12:07:22.173662   73662 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0603 12:07:22.173691   73662 machine.go:97] duration metric: took 1.065894044s to provisionDockerMachine
	I0603 12:07:22.173705   73662 start.go:293] postStartSetup for "old-k8s-version-905554" (driver="kvm2")
	I0603 12:07:22.173718   73662 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0603 12:07:22.173738   73662 main.go:141] libmachine: (old-k8s-version-905554) Calling .DriverName
	I0603 12:07:22.174119   73662 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0603 12:07:22.174154   73662 main.go:141] libmachine: (old-k8s-version-905554) Calling .GetSSHHostname
	I0603 12:07:22.176861   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | domain old-k8s-version-905554 has defined MAC address 52:54:00:3d:ed:07 in network mk-old-k8s-version-905554
	I0603 12:07:22.177152   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:ed:07", ip: ""} in network mk-old-k8s-version-905554: {Iface:virbr1 ExpiryTime:2024-06-03 13:07:14 +0000 UTC Type:0 Mac:52:54:00:3d:ed:07 Iaid: IPaddr:192.168.39.155 Prefix:24 Hostname:old-k8s-version-905554 Clientid:01:52:54:00:3d:ed:07}
	I0603 12:07:22.177184   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | domain old-k8s-version-905554 has defined IP address 192.168.39.155 and MAC address 52:54:00:3d:ed:07 in network mk-old-k8s-version-905554
	I0603 12:07:22.177325   73662 main.go:141] libmachine: (old-k8s-version-905554) Calling .GetSSHPort
	I0603 12:07:22.177505   73662 main.go:141] libmachine: (old-k8s-version-905554) Calling .GetSSHKeyPath
	I0603 12:07:22.177632   73662 main.go:141] libmachine: (old-k8s-version-905554) Calling .GetSSHUsername
	I0603 12:07:22.177764   73662 sshutil.go:53] new ssh client: &{IP:192.168.39.155 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19008-7755/.minikube/machines/old-k8s-version-905554/id_rsa Username:docker}
	I0603 12:07:22.263119   73662 ssh_runner.go:195] Run: cat /etc/os-release
	I0603 12:07:22.269815   73662 info.go:137] Remote host: Buildroot 2023.02.9
	I0603 12:07:22.269844   73662 filesync.go:126] Scanning /home/jenkins/minikube-integration/19008-7755/.minikube/addons for local assets ...
	I0603 12:07:22.269937   73662 filesync.go:126] Scanning /home/jenkins/minikube-integration/19008-7755/.minikube/files for local assets ...
	I0603 12:07:22.270041   73662 filesync.go:149] local asset: /home/jenkins/minikube-integration/19008-7755/.minikube/files/etc/ssl/certs/150282.pem -> 150282.pem in /etc/ssl/certs
	I0603 12:07:22.270320   73662 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0603 12:07:22.284032   73662 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/files/etc/ssl/certs/150282.pem --> /etc/ssl/certs/150282.pem (1708 bytes)
	I0603 12:07:22.309226   73662 start.go:296] duration metric: took 135.507592ms for postStartSetup
	I0603 12:07:22.309267   73662 fix.go:56] duration metric: took 19.425215079s for fixHost
	I0603 12:07:22.309291   73662 main.go:141] libmachine: (old-k8s-version-905554) Calling .GetSSHHostname
	I0603 12:07:22.311759   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | domain old-k8s-version-905554 has defined MAC address 52:54:00:3d:ed:07 in network mk-old-k8s-version-905554
	I0603 12:07:22.312031   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:ed:07", ip: ""} in network mk-old-k8s-version-905554: {Iface:virbr1 ExpiryTime:2024-06-03 13:07:14 +0000 UTC Type:0 Mac:52:54:00:3d:ed:07 Iaid: IPaddr:192.168.39.155 Prefix:24 Hostname:old-k8s-version-905554 Clientid:01:52:54:00:3d:ed:07}
	I0603 12:07:22.312062   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | domain old-k8s-version-905554 has defined IP address 192.168.39.155 and MAC address 52:54:00:3d:ed:07 in network mk-old-k8s-version-905554
	I0603 12:07:22.312244   73662 main.go:141] libmachine: (old-k8s-version-905554) Calling .GetSSHPort
	I0603 12:07:22.312436   73662 main.go:141] libmachine: (old-k8s-version-905554) Calling .GetSSHKeyPath
	I0603 12:07:22.312602   73662 main.go:141] libmachine: (old-k8s-version-905554) Calling .GetSSHKeyPath
	I0603 12:07:22.312740   73662 main.go:141] libmachine: (old-k8s-version-905554) Calling .GetSSHUsername
	I0603 12:07:22.312877   73662 main.go:141] libmachine: Using SSH client type: native
	I0603 12:07:22.313072   73662 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.155 22 <nil> <nil>}
	I0603 12:07:22.313088   73662 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0603 12:07:22.423838   73662 main.go:141] libmachine: SSH cmd err, output: <nil>: 1717416442.379680785
	
	I0603 12:07:22.423857   73662 fix.go:216] guest clock: 1717416442.379680785
	I0603 12:07:22.423864   73662 fix.go:229] Guest: 2024-06-03 12:07:22.379680785 +0000 UTC Remote: 2024-06-03 12:07:22.30927263 +0000 UTC m=+262.252197630 (delta=70.408155ms)
	I0603 12:07:22.423886   73662 fix.go:200] guest clock delta is within tolerance: 70.408155ms
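	The fix.go lines above compare the guest clock (read over SSH with `date`) against the host clock and accept the existing machine only if the delta stays within a tolerance. A tiny sketch of that comparison follows, with the guest timestamp hard-coded from the log and a one-second tolerance assumed for illustration.

	// Sketch only: compute guest-vs-host clock delta and check it against a tolerance.
	package main

	import (
		"fmt"
		"time"
	)

	func main() {
		const tolerance = time.Second // assumed tolerance for this sketch

		guest := time.Unix(1717416442, 379680785) // 1717416442.379680785 from the log
		host := guest.Add(-70408155 * time.Nanosecond)

		delta := guest.Sub(host)
		if delta < 0 {
			delta = -delta
		}
		if delta <= tolerance {
			fmt.Printf("guest clock delta is within tolerance: %s\n", delta)
		} else {
			fmt.Printf("guest clock delta %s exceeds tolerance, time sync needed\n", delta)
		}
	}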
	I0603 12:07:22.423892   73662 start.go:83] releasing machines lock for "old-k8s-version-905554", held for 19.539865965s
	I0603 12:07:22.423927   73662 main.go:141] libmachine: (old-k8s-version-905554) Calling .DriverName
	I0603 12:07:22.424202   73662 main.go:141] libmachine: (old-k8s-version-905554) Calling .GetIP
	I0603 12:07:22.427358   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | domain old-k8s-version-905554 has defined MAC address 52:54:00:3d:ed:07 in network mk-old-k8s-version-905554
	I0603 12:07:22.427799   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:ed:07", ip: ""} in network mk-old-k8s-version-905554: {Iface:virbr1 ExpiryTime:2024-06-03 13:07:14 +0000 UTC Type:0 Mac:52:54:00:3d:ed:07 Iaid: IPaddr:192.168.39.155 Prefix:24 Hostname:old-k8s-version-905554 Clientid:01:52:54:00:3d:ed:07}
	I0603 12:07:22.427833   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | domain old-k8s-version-905554 has defined IP address 192.168.39.155 and MAC address 52:54:00:3d:ed:07 in network mk-old-k8s-version-905554
	I0603 12:07:22.428006   73662 main.go:141] libmachine: (old-k8s-version-905554) Calling .DriverName
	I0603 12:07:22.428619   73662 main.go:141] libmachine: (old-k8s-version-905554) Calling .DriverName
	I0603 12:07:22.428817   73662 main.go:141] libmachine: (old-k8s-version-905554) Calling .DriverName
	I0603 12:07:22.428898   73662 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0603 12:07:22.428951   73662 main.go:141] libmachine: (old-k8s-version-905554) Calling .GetSSHHostname
	I0603 12:07:22.429242   73662 ssh_runner.go:195] Run: cat /version.json
	I0603 12:07:22.429269   73662 main.go:141] libmachine: (old-k8s-version-905554) Calling .GetSSHHostname
	I0603 12:07:22.431998   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | domain old-k8s-version-905554 has defined MAC address 52:54:00:3d:ed:07 in network mk-old-k8s-version-905554
	I0603 12:07:22.432244   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | domain old-k8s-version-905554 has defined MAC address 52:54:00:3d:ed:07 in network mk-old-k8s-version-905554
	I0603 12:07:22.432333   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:ed:07", ip: ""} in network mk-old-k8s-version-905554: {Iface:virbr1 ExpiryTime:2024-06-03 13:07:14 +0000 UTC Type:0 Mac:52:54:00:3d:ed:07 Iaid: IPaddr:192.168.39.155 Prefix:24 Hostname:old-k8s-version-905554 Clientid:01:52:54:00:3d:ed:07}
	I0603 12:07:22.432365   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | domain old-k8s-version-905554 has defined IP address 192.168.39.155 and MAC address 52:54:00:3d:ed:07 in network mk-old-k8s-version-905554
	I0603 12:07:22.432608   73662 main.go:141] libmachine: (old-k8s-version-905554) Calling .GetSSHPort
	I0603 12:07:22.432779   73662 main.go:141] libmachine: (old-k8s-version-905554) Calling .GetSSHKeyPath
	I0603 12:07:22.432797   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:ed:07", ip: ""} in network mk-old-k8s-version-905554: {Iface:virbr1 ExpiryTime:2024-06-03 13:07:14 +0000 UTC Type:0 Mac:52:54:00:3d:ed:07 Iaid: IPaddr:192.168.39.155 Prefix:24 Hostname:old-k8s-version-905554 Clientid:01:52:54:00:3d:ed:07}
	I0603 12:07:22.432818   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | domain old-k8s-version-905554 has defined IP address 192.168.39.155 and MAC address 52:54:00:3d:ed:07 in network mk-old-k8s-version-905554
	I0603 12:07:22.433032   73662 main.go:141] libmachine: (old-k8s-version-905554) Calling .GetSSHPort
	I0603 12:07:22.433044   73662 main.go:141] libmachine: (old-k8s-version-905554) Calling .GetSSHUsername
	I0603 12:07:22.433244   73662 sshutil.go:53] new ssh client: &{IP:192.168.39.155 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19008-7755/.minikube/machines/old-k8s-version-905554/id_rsa Username:docker}
	I0603 12:07:22.433260   73662 main.go:141] libmachine: (old-k8s-version-905554) Calling .GetSSHKeyPath
	I0603 12:07:22.433489   73662 main.go:141] libmachine: (old-k8s-version-905554) Calling .GetSSHUsername
	I0603 12:07:22.433629   73662 sshutil.go:53] new ssh client: &{IP:192.168.39.155 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19008-7755/.minikube/machines/old-k8s-version-905554/id_rsa Username:docker}
	I0603 12:07:22.512743   73662 ssh_runner.go:195] Run: systemctl --version
	I0603 12:07:22.538343   73662 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0603 12:07:22.691125   73662 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0603 12:07:22.697547   73662 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0603 12:07:22.697594   73662 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0603 12:07:22.714213   73662 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0603 12:07:22.714237   73662 start.go:494] detecting cgroup driver to use...
	I0603 12:07:22.714302   73662 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0603 12:07:22.735173   73662 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0603 12:07:22.749345   73662 docker.go:217] disabling cri-docker service (if available) ...
	I0603 12:07:22.749403   73662 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0603 12:07:22.763133   73662 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0603 12:07:22.776844   73662 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0603 12:07:22.906859   73662 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0603 12:07:23.071700   73662 docker.go:233] disabling docker service ...
	I0603 12:07:23.071767   73662 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0603 12:07:23.088439   73662 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0603 12:07:23.102097   73662 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0603 12:07:23.238693   73662 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0603 12:07:23.390561   73662 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0603 12:07:23.410039   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0603 12:07:23.434983   73662 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0603 12:07:23.435125   73662 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 12:07:23.448358   73662 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0603 12:07:23.448430   73662 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 12:07:23.460973   73662 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 12:07:23.473384   73662 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 12:07:23.486096   73662 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0603 12:07:23.498744   73662 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0603 12:07:23.510913   73662 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0603 12:07:23.510968   73662 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0603 12:07:23.527740   73662 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0603 12:07:23.542547   73662 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0603 12:07:23.719963   73662 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0603 12:07:23.875772   73662 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0603 12:07:23.875843   73662 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0603 12:07:23.882164   73662 start.go:562] Will wait 60s for crictl version
	I0603 12:07:23.882250   73662 ssh_runner.go:195] Run: which crictl
	I0603 12:07:23.886841   73662 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0603 12:07:23.933867   73662 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0603 12:07:23.933952   73662 ssh_runner.go:195] Run: crio --version
	I0603 12:07:23.965258   73662 ssh_runner.go:195] Run: crio --version
	I0603 12:07:23.995457   73662 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0603 12:07:20.104355   73294 pod_ready.go:102] pod "kube-scheduler-default-k8s-diff-port-196710" in "kube-system" namespace has status "Ready":"False"
	I0603 12:07:22.104808   73294 pod_ready.go:102] pod "kube-scheduler-default-k8s-diff-port-196710" in "kube-system" namespace has status "Ready":"False"
	I0603 12:07:23.106090   73294 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-196710" in "kube-system" namespace has status "Ready":"True"
	I0603 12:07:23.106109   73294 pod_ready.go:81] duration metric: took 5.007700483s for pod "kube-scheduler-default-k8s-diff-port-196710" in "kube-system" namespace to be "Ready" ...
	I0603 12:07:23.106118   73294 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace to be "Ready" ...
	I0603 12:07:22.514715   72964 main.go:141] libmachine: (embed-certs-725022) Calling .Start
	I0603 12:07:22.514937   72964 main.go:141] libmachine: (embed-certs-725022) Ensuring networks are active...
	I0603 12:07:22.515826   72964 main.go:141] libmachine: (embed-certs-725022) Ensuring network default is active
	I0603 12:07:22.516261   72964 main.go:141] libmachine: (embed-certs-725022) Ensuring network mk-embed-certs-725022 is active
	I0603 12:07:22.516748   72964 main.go:141] libmachine: (embed-certs-725022) Getting domain xml...
	I0603 12:07:22.517639   72964 main.go:141] libmachine: (embed-certs-725022) Creating domain...
	I0603 12:07:23.858964   72964 main.go:141] libmachine: (embed-certs-725022) Waiting to get IP...
	I0603 12:07:23.859920   72964 main.go:141] libmachine: (embed-certs-725022) DBG | domain embed-certs-725022 has defined MAC address 52:54:00:ba:41:8c in network mk-embed-certs-725022
	I0603 12:07:23.860386   72964 main.go:141] libmachine: (embed-certs-725022) DBG | unable to find current IP address of domain embed-certs-725022 in network mk-embed-certs-725022
	I0603 12:07:23.860418   72964 main.go:141] libmachine: (embed-certs-725022) DBG | I0603 12:07:23.860352   74834 retry.go:31] will retry after 246.280691ms: waiting for machine to come up
	I0603 12:07:24.108680   72964 main.go:141] libmachine: (embed-certs-725022) DBG | domain embed-certs-725022 has defined MAC address 52:54:00:ba:41:8c in network mk-embed-certs-725022
	I0603 12:07:24.109222   72964 main.go:141] libmachine: (embed-certs-725022) DBG | unable to find current IP address of domain embed-certs-725022 in network mk-embed-certs-725022
	I0603 12:07:24.109349   72964 main.go:141] libmachine: (embed-certs-725022) DBG | I0603 12:07:24.109272   74834 retry.go:31] will retry after 291.625816ms: waiting for machine to come up
	I0603 12:07:24.402895   72964 main.go:141] libmachine: (embed-certs-725022) DBG | domain embed-certs-725022 has defined MAC address 52:54:00:ba:41:8c in network mk-embed-certs-725022
	I0603 12:07:24.403357   72964 main.go:141] libmachine: (embed-certs-725022) DBG | unable to find current IP address of domain embed-certs-725022 in network mk-embed-certs-725022
	I0603 12:07:24.403383   72964 main.go:141] libmachine: (embed-certs-725022) DBG | I0603 12:07:24.403319   74834 retry.go:31] will retry after 466.605521ms: waiting for machine to come up
	I0603 12:07:24.872278   72964 main.go:141] libmachine: (embed-certs-725022) DBG | domain embed-certs-725022 has defined MAC address 52:54:00:ba:41:8c in network mk-embed-certs-725022
	I0603 12:07:24.872823   72964 main.go:141] libmachine: (embed-certs-725022) DBG | unable to find current IP address of domain embed-certs-725022 in network mk-embed-certs-725022
	I0603 12:07:24.872847   72964 main.go:141] libmachine: (embed-certs-725022) DBG | I0603 12:07:24.872783   74834 retry.go:31] will retry after 382.19855ms: waiting for machine to come up
	I0603 12:07:23.996608   73662 main.go:141] libmachine: (old-k8s-version-905554) Calling .GetIP
	I0603 12:07:23.999648   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | domain old-k8s-version-905554 has defined MAC address 52:54:00:3d:ed:07 in network mk-old-k8s-version-905554
	I0603 12:07:23.999982   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:ed:07", ip: ""} in network mk-old-k8s-version-905554: {Iface:virbr1 ExpiryTime:2024-06-03 13:07:14 +0000 UTC Type:0 Mac:52:54:00:3d:ed:07 Iaid: IPaddr:192.168.39.155 Prefix:24 Hostname:old-k8s-version-905554 Clientid:01:52:54:00:3d:ed:07}
	I0603 12:07:24.000010   73662 main.go:141] libmachine: (old-k8s-version-905554) DBG | domain old-k8s-version-905554 has defined IP address 192.168.39.155 and MAC address 52:54:00:3d:ed:07 in network mk-old-k8s-version-905554
	I0603 12:07:24.000257   73662 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0603 12:07:24.004569   73662 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0603 12:07:24.019027   73662 kubeadm.go:877] updating cluster {Name:old-k8s-version-905554 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-905554 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.155 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0603 12:07:24.019206   73662 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0603 12:07:24.019257   73662 ssh_runner.go:195] Run: sudo crictl images --output json
	I0603 12:07:24.068916   73662 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0603 12:07:24.069007   73662 ssh_runner.go:195] Run: which lz4
	I0603 12:07:24.074831   73662 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0603 12:07:24.081154   73662 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0603 12:07:24.081186   73662 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0603 12:07:22.074657   73179 pod_ready.go:92] pod "kube-scheduler-no-preload-602118" in "kube-system" namespace has status "Ready":"True"
	I0603 12:07:22.074691   73179 pod_ready.go:81] duration metric: took 11.006759361s for pod "kube-scheduler-no-preload-602118" in "kube-system" namespace to be "Ready" ...
	I0603 12:07:22.074706   73179 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace to be "Ready" ...
	I0603 12:07:24.081308   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:07:25.114101   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:07:27.115528   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:07:25.256326   72964 main.go:141] libmachine: (embed-certs-725022) DBG | domain embed-certs-725022 has defined MAC address 52:54:00:ba:41:8c in network mk-embed-certs-725022
	I0603 12:07:25.256830   72964 main.go:141] libmachine: (embed-certs-725022) DBG | unable to find current IP address of domain embed-certs-725022 in network mk-embed-certs-725022
	I0603 12:07:25.256856   72964 main.go:141] libmachine: (embed-certs-725022) DBG | I0603 12:07:25.256779   74834 retry.go:31] will retry after 541.296238ms: waiting for machine to come up
	I0603 12:07:25.799738   72964 main.go:141] libmachine: (embed-certs-725022) DBG | domain embed-certs-725022 has defined MAC address 52:54:00:ba:41:8c in network mk-embed-certs-725022
	I0603 12:07:25.800308   72964 main.go:141] libmachine: (embed-certs-725022) DBG | unable to find current IP address of domain embed-certs-725022 in network mk-embed-certs-725022
	I0603 12:07:25.800340   72964 main.go:141] libmachine: (embed-certs-725022) DBG | I0603 12:07:25.800260   74834 retry.go:31] will retry after 605.157326ms: waiting for machine to come up
	I0603 12:07:26.406748   72964 main.go:141] libmachine: (embed-certs-725022) DBG | domain embed-certs-725022 has defined MAC address 52:54:00:ba:41:8c in network mk-embed-certs-725022
	I0603 12:07:26.407332   72964 main.go:141] libmachine: (embed-certs-725022) DBG | unable to find current IP address of domain embed-certs-725022 in network mk-embed-certs-725022
	I0603 12:07:26.407357   72964 main.go:141] libmachine: (embed-certs-725022) DBG | I0603 12:07:26.407281   74834 retry.go:31] will retry after 830.816526ms: waiting for machine to come up
	I0603 12:07:27.239300   72964 main.go:141] libmachine: (embed-certs-725022) DBG | domain embed-certs-725022 has defined MAC address 52:54:00:ba:41:8c in network mk-embed-certs-725022
	I0603 12:07:27.239746   72964 main.go:141] libmachine: (embed-certs-725022) DBG | unable to find current IP address of domain embed-certs-725022 in network mk-embed-certs-725022
	I0603 12:07:27.239777   72964 main.go:141] libmachine: (embed-certs-725022) DBG | I0603 12:07:27.239708   74834 retry.go:31] will retry after 994.729433ms: waiting for machine to come up
	I0603 12:07:28.236261   72964 main.go:141] libmachine: (embed-certs-725022) DBG | domain embed-certs-725022 has defined MAC address 52:54:00:ba:41:8c in network mk-embed-certs-725022
	I0603 12:07:28.236839   72964 main.go:141] libmachine: (embed-certs-725022) DBG | unable to find current IP address of domain embed-certs-725022 in network mk-embed-certs-725022
	I0603 12:07:28.236865   72964 main.go:141] libmachine: (embed-certs-725022) DBG | I0603 12:07:28.236783   74834 retry.go:31] will retry after 1.756001067s: waiting for machine to come up
	I0603 12:07:25.794532   73662 crio.go:462] duration metric: took 1.71973848s to copy over tarball
	I0603 12:07:25.794618   73662 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0603 12:07:28.897711   73662 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.103055845s)
	I0603 12:07:28.897742   73662 crio.go:469] duration metric: took 3.103177549s to extract the tarball
	I0603 12:07:28.897752   73662 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0603 12:07:28.945269   73662 ssh_runner.go:195] Run: sudo crictl images --output json
	I0603 12:07:28.982973   73662 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0603 12:07:28.982998   73662 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0603 12:07:28.983068   73662 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0603 12:07:28.983099   73662 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0603 12:07:28.983134   73662 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0603 12:07:28.983191   73662 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0603 12:07:28.983104   73662 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0603 12:07:28.983282   73662 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0603 12:07:28.983280   73662 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0603 12:07:28.983525   73662 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0603 12:07:28.984988   73662 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0603 12:07:28.985005   73662 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0603 12:07:28.984997   73662 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0603 12:07:28.985007   73662 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0603 12:07:28.985026   73662 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0603 12:07:28.985190   73662 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0603 12:07:28.985244   73662 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0603 12:07:28.985288   73662 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0603 12:07:29.136387   73662 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0603 12:07:29.155867   73662 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0603 12:07:29.173686   73662 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0603 12:07:29.181970   73662 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0603 12:07:29.185877   73662 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0603 12:07:29.188684   73662 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0603 12:07:29.201080   73662 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0603 12:07:29.201134   73662 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0603 12:07:29.201174   73662 ssh_runner.go:195] Run: which crictl
	I0603 12:07:29.252186   73662 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0603 12:07:29.252232   73662 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0603 12:07:29.252308   73662 ssh_runner.go:195] Run: which crictl
	I0603 12:07:29.272578   73662 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0603 12:07:29.306804   73662 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0603 12:07:29.306856   73662 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0603 12:07:29.306880   73662 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0603 12:07:29.306901   73662 ssh_runner.go:195] Run: which crictl
	I0603 12:07:29.306915   73662 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0603 12:07:29.306928   73662 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0603 12:07:29.306952   73662 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0603 12:07:29.306961   73662 ssh_runner.go:195] Run: which crictl
	I0603 12:07:29.306988   73662 ssh_runner.go:195] Run: which crictl
	I0603 12:07:29.322141   73662 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0603 12:07:29.322220   73662 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0603 12:07:29.322238   73662 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0603 12:07:29.322264   73662 ssh_runner.go:195] Run: which crictl
	I0603 12:07:29.322210   73662 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0603 12:07:29.378678   73662 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0603 12:07:29.378717   73662 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0603 12:07:29.378726   73662 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0603 12:07:29.378775   73662 ssh_runner.go:195] Run: which crictl
	I0603 12:07:29.378831   73662 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0603 12:07:29.378898   73662 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0603 12:07:29.401173   73662 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/19008-7755/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0603 12:07:29.401229   73662 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0603 12:07:29.401396   73662 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/19008-7755/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0603 12:07:29.450497   73662 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/19008-7755/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0603 12:07:29.450531   73662 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0603 12:07:29.488109   73662 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/19008-7755/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0603 12:07:29.488191   73662 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/19008-7755/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0603 12:07:29.488191   73662 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/19008-7755/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0603 12:07:29.504909   73662 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/19008-7755/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0603 12:07:29.931311   73662 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0603 12:07:30.078311   73662 cache_images.go:92] duration metric: took 1.095295059s to LoadCachedImages
	W0603 12:07:30.078412   73662 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/19008-7755/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0: no such file or directory
	I0603 12:07:30.078431   73662 kubeadm.go:928] updating node { 192.168.39.155 8443 v1.20.0 crio true true} ...
	I0603 12:07:30.078568   73662 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-905554 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.39.155
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-905554 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0603 12:07:30.078660   73662 ssh_runner.go:195] Run: crio config
	I0603 12:07:26.083566   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:07:28.084560   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:07:29.721426   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:07:32.114026   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:07:29.994115   72964 main.go:141] libmachine: (embed-certs-725022) DBG | domain embed-certs-725022 has defined MAC address 52:54:00:ba:41:8c in network mk-embed-certs-725022
	I0603 12:07:29.994576   72964 main.go:141] libmachine: (embed-certs-725022) DBG | unable to find current IP address of domain embed-certs-725022 in network mk-embed-certs-725022
	I0603 12:07:29.994654   72964 main.go:141] libmachine: (embed-certs-725022) DBG | I0603 12:07:29.994561   74834 retry.go:31] will retry after 1.667170312s: waiting for machine to come up
	I0603 12:07:31.664242   72964 main.go:141] libmachine: (embed-certs-725022) DBG | domain embed-certs-725022 has defined MAC address 52:54:00:ba:41:8c in network mk-embed-certs-725022
	I0603 12:07:31.664797   72964 main.go:141] libmachine: (embed-certs-725022) DBG | unable to find current IP address of domain embed-certs-725022 in network mk-embed-certs-725022
	I0603 12:07:31.664826   72964 main.go:141] libmachine: (embed-certs-725022) DBG | I0603 12:07:31.664752   74834 retry.go:31] will retry after 2.156675381s: waiting for machine to come up
	I0603 12:07:33.823700   72964 main.go:141] libmachine: (embed-certs-725022) DBG | domain embed-certs-725022 has defined MAC address 52:54:00:ba:41:8c in network mk-embed-certs-725022
	I0603 12:07:33.824202   72964 main.go:141] libmachine: (embed-certs-725022) DBG | unable to find current IP address of domain embed-certs-725022 in network mk-embed-certs-725022
	I0603 12:07:33.824241   72964 main.go:141] libmachine: (embed-certs-725022) DBG | I0603 12:07:33.824145   74834 retry.go:31] will retry after 3.067424613s: waiting for machine to come up
	I0603 12:07:30.129601   73662 cni.go:84] Creating CNI manager for ""
	I0603 12:07:30.180858   73662 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0603 12:07:30.180884   73662 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0603 12:07:30.180918   73662 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.155 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-905554 NodeName:old-k8s-version-905554 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.155"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.155 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0603 12:07:30.181104   73662 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.155
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-905554"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.155
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.155"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0603 12:07:30.181180   73662 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0603 12:07:30.192139   73662 binaries.go:44] Found k8s binaries, skipping transfer
	I0603 12:07:30.192202   73662 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0603 12:07:30.202078   73662 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0603 12:07:30.222968   73662 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0603 12:07:30.242794   73662 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I0603 12:07:30.263578   73662 ssh_runner.go:195] Run: grep 192.168.39.155	control-plane.minikube.internal$ /etc/hosts
	I0603 12:07:30.267535   73662 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.155	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0603 12:07:30.280543   73662 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0603 12:07:30.421251   73662 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0603 12:07:30.441243   73662 certs.go:68] Setting up /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/old-k8s-version-905554 for IP: 192.168.39.155
	I0603 12:07:30.441269   73662 certs.go:194] generating shared ca certs ...
	I0603 12:07:30.441299   73662 certs.go:226] acquiring lock for ca certs: {Name:mk50092df3bdecbf959926adc377ee06b75aacd9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 12:07:30.441485   73662 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19008-7755/.minikube/ca.key
	I0603 12:07:30.441546   73662 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19008-7755/.minikube/proxy-client-ca.key
	I0603 12:07:30.441559   73662 certs.go:256] generating profile certs ...
	I0603 12:07:30.441675   73662 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/old-k8s-version-905554/client.key
	I0603 12:07:30.465464   73662 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/old-k8s-version-905554/apiserver.key.0d34b22c
	I0603 12:07:30.465562   73662 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/old-k8s-version-905554/proxy-client.key
	I0603 12:07:30.465730   73662 certs.go:484] found cert: /home/jenkins/minikube-integration/19008-7755/.minikube/certs/15028.pem (1338 bytes)
	W0603 12:07:30.465775   73662 certs.go:480] ignoring /home/jenkins/minikube-integration/19008-7755/.minikube/certs/15028_empty.pem, impossibly tiny 0 bytes
	I0603 12:07:30.465787   73662 certs.go:484] found cert: /home/jenkins/minikube-integration/19008-7755/.minikube/certs/ca-key.pem (1679 bytes)
	I0603 12:07:30.465818   73662 certs.go:484] found cert: /home/jenkins/minikube-integration/19008-7755/.minikube/certs/ca.pem (1082 bytes)
	I0603 12:07:30.465855   73662 certs.go:484] found cert: /home/jenkins/minikube-integration/19008-7755/.minikube/certs/cert.pem (1123 bytes)
	I0603 12:07:30.465884   73662 certs.go:484] found cert: /home/jenkins/minikube-integration/19008-7755/.minikube/certs/key.pem (1679 bytes)
	I0603 12:07:30.465941   73662 certs.go:484] found cert: /home/jenkins/minikube-integration/19008-7755/.minikube/files/etc/ssl/certs/150282.pem (1708 bytes)
	I0603 12:07:30.466831   73662 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0603 12:07:30.517957   73662 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0603 12:07:30.554072   73662 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0603 12:07:30.610727   73662 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0603 12:07:30.663149   73662 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/old-k8s-version-905554/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0603 12:07:30.702313   73662 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/old-k8s-version-905554/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0603 12:07:30.735841   73662 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/old-k8s-version-905554/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0603 12:07:30.761517   73662 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/old-k8s-version-905554/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0603 12:07:30.793872   73662 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/files/etc/ssl/certs/150282.pem --> /usr/share/ca-certificates/150282.pem (1708 bytes)
	I0603 12:07:30.821613   73662 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0603 12:07:30.848030   73662 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/certs/15028.pem --> /usr/share/ca-certificates/15028.pem (1338 bytes)
	I0603 12:07:30.875016   73662 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0603 12:07:30.901749   73662 ssh_runner.go:195] Run: openssl version
	I0603 12:07:30.911485   73662 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15028.pem && ln -fs /usr/share/ca-certificates/15028.pem /etc/ssl/certs/15028.pem"
	I0603 12:07:30.923791   73662 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15028.pem
	I0603 12:07:30.928808   73662 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jun  3 10:51 /usr/share/ca-certificates/15028.pem
	I0603 12:07:30.928858   73662 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15028.pem
	I0603 12:07:30.934925   73662 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/15028.pem /etc/ssl/certs/51391683.0"
	I0603 12:07:30.946930   73662 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/150282.pem && ln -fs /usr/share/ca-certificates/150282.pem /etc/ssl/certs/150282.pem"
	I0603 12:07:30.958809   73662 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/150282.pem
	I0603 12:07:30.963687   73662 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jun  3 10:51 /usr/share/ca-certificates/150282.pem
	I0603 12:07:30.963748   73662 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/150282.pem
	I0603 12:07:30.969671   73662 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/150282.pem /etc/ssl/certs/3ec20f2e.0"
	I0603 12:07:30.981918   73662 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0603 12:07:30.994005   73662 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0603 12:07:30.999126   73662 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jun  3 10:39 /usr/share/ca-certificates/minikubeCA.pem
	I0603 12:07:30.999190   73662 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0603 12:07:31.005828   73662 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0603 12:07:31.017320   73662 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0603 12:07:31.021993   73662 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0603 12:07:31.028420   73662 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0603 12:07:31.034719   73662 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0603 12:07:31.041565   73662 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0603 12:07:31.048142   73662 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0603 12:07:31.053992   73662 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0603 12:07:31.060197   73662 kubeadm.go:391] StartCluster: {Name:old-k8s-version-905554 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-905554 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.155 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0603 12:07:31.060324   73662 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0603 12:07:31.060361   73662 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0603 12:07:31.102996   73662 cri.go:89] found id: ""
	I0603 12:07:31.103083   73662 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0603 12:07:31.114546   73662 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0603 12:07:31.114566   73662 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0603 12:07:31.114573   73662 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0603 12:07:31.114619   73662 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0603 12:07:31.126042   73662 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0603 12:07:31.127358   73662 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-905554" does not appear in /home/jenkins/minikube-integration/19008-7755/kubeconfig
	I0603 12:07:31.128029   73662 kubeconfig.go:62] /home/jenkins/minikube-integration/19008-7755/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-905554" cluster setting kubeconfig missing "old-k8s-version-905554" context setting]
	I0603 12:07:31.128862   73662 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19008-7755/kubeconfig: {Name:mk2f45bf5da44c5570e14124ae482cac309d97b6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 12:07:31.247021   73662 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0603 12:07:31.258013   73662 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.39.155
	I0603 12:07:31.258054   73662 kubeadm.go:1154] stopping kube-system containers ...
	I0603 12:07:31.258065   73662 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0603 12:07:31.258119   73662 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0603 12:07:31.301991   73662 cri.go:89] found id: ""
	I0603 12:07:31.302065   73662 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0603 12:07:31.326132   73662 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0603 12:07:31.337333   73662 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0603 12:07:31.337355   73662 kubeadm.go:156] found existing configuration files:
	
	I0603 12:07:31.337396   73662 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0603 12:07:31.347256   73662 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0603 12:07:31.347300   73662 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0603 12:07:31.357463   73662 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0603 12:07:31.367810   73662 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0603 12:07:31.367867   73662 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0603 12:07:31.378092   73662 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0603 12:07:31.388911   73662 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0603 12:07:31.388959   73662 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0603 12:07:31.400327   73662 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0603 12:07:31.411937   73662 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0603 12:07:31.411984   73662 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0603 12:07:31.423929   73662 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0603 12:07:31.435914   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0603 12:07:31.563621   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0603 12:07:32.980144   73662 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.416481613s)
	I0603 12:07:32.980178   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0603 12:07:33.219383   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0603 12:07:33.320755   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0603 12:07:33.437964   73662 api_server.go:52] waiting for apiserver process to appear ...
	I0603 12:07:33.438070   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:07:33.938124   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:07:34.439012   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:07:34.938293   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:07:30.584019   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:07:33.081286   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:07:35.081436   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:07:34.613763   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:07:37.112059   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:07:39.113186   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:07:36.892928   72964 main.go:141] libmachine: (embed-certs-725022) DBG | domain embed-certs-725022 has defined MAC address 52:54:00:ba:41:8c in network mk-embed-certs-725022
	I0603 12:07:36.893405   72964 main.go:141] libmachine: (embed-certs-725022) DBG | unable to find current IP address of domain embed-certs-725022 in network mk-embed-certs-725022
	I0603 12:07:36.893432   72964 main.go:141] libmachine: (embed-certs-725022) DBG | I0603 12:07:36.893358   74834 retry.go:31] will retry after 3.786690644s: waiting for machine to come up
	I0603 12:07:35.438655   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:07:35.938894   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:07:36.438790   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:07:36.938720   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:07:37.438183   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:07:37.938442   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:07:38.438341   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:07:38.938738   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:07:39.438262   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:07:39.938743   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:07:37.082484   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:07:39.580732   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:07:40.682151   72964 main.go:141] libmachine: (embed-certs-725022) DBG | domain embed-certs-725022 has defined MAC address 52:54:00:ba:41:8c in network mk-embed-certs-725022
	I0603 12:07:40.682828   72964 main.go:141] libmachine: (embed-certs-725022) Found IP for machine: 192.168.72.245
	I0603 12:07:40.682854   72964 main.go:141] libmachine: (embed-certs-725022) Reserving static IP address...
	I0603 12:07:40.682870   72964 main.go:141] libmachine: (embed-certs-725022) DBG | domain embed-certs-725022 has current primary IP address 192.168.72.245 and MAC address 52:54:00:ba:41:8c in network mk-embed-certs-725022
	I0603 12:07:40.683307   72964 main.go:141] libmachine: (embed-certs-725022) DBG | found host DHCP lease matching {name: "embed-certs-725022", mac: "52:54:00:ba:41:8c", ip: "192.168.72.245"} in network mk-embed-certs-725022: {Iface:virbr3 ExpiryTime:2024-06-03 12:58:10 +0000 UTC Type:0 Mac:52:54:00:ba:41:8c Iaid: IPaddr:192.168.72.245 Prefix:24 Hostname:embed-certs-725022 Clientid:01:52:54:00:ba:41:8c}
	I0603 12:07:40.683347   72964 main.go:141] libmachine: (embed-certs-725022) DBG | skip adding static IP to network mk-embed-certs-725022 - found existing host DHCP lease matching {name: "embed-certs-725022", mac: "52:54:00:ba:41:8c", ip: "192.168.72.245"}
	I0603 12:07:40.683361   72964 main.go:141] libmachine: (embed-certs-725022) Reserved static IP address: 192.168.72.245
	I0603 12:07:40.683376   72964 main.go:141] libmachine: (embed-certs-725022) Waiting for SSH to be available...
	I0603 12:07:40.683392   72964 main.go:141] libmachine: (embed-certs-725022) DBG | Getting to WaitForSSH function...
	I0603 12:07:40.685575   72964 main.go:141] libmachine: (embed-certs-725022) DBG | domain embed-certs-725022 has defined MAC address 52:54:00:ba:41:8c in network mk-embed-certs-725022
	I0603 12:07:40.685946   72964 main.go:141] libmachine: (embed-certs-725022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:41:8c", ip: ""} in network mk-embed-certs-725022: {Iface:virbr3 ExpiryTime:2024-06-03 12:58:10 +0000 UTC Type:0 Mac:52:54:00:ba:41:8c Iaid: IPaddr:192.168.72.245 Prefix:24 Hostname:embed-certs-725022 Clientid:01:52:54:00:ba:41:8c}
	I0603 12:07:40.685977   72964 main.go:141] libmachine: (embed-certs-725022) DBG | domain embed-certs-725022 has defined IP address 192.168.72.245 and MAC address 52:54:00:ba:41:8c in network mk-embed-certs-725022
	I0603 12:07:40.686080   72964 main.go:141] libmachine: (embed-certs-725022) DBG | Using SSH client type: external
	I0603 12:07:40.686100   72964 main.go:141] libmachine: (embed-certs-725022) DBG | Using SSH private key: /home/jenkins/minikube-integration/19008-7755/.minikube/machines/embed-certs-725022/id_rsa (-rw-------)
	I0603 12:07:40.686134   72964 main.go:141] libmachine: (embed-certs-725022) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.245 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19008-7755/.minikube/machines/embed-certs-725022/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0603 12:07:40.686148   72964 main.go:141] libmachine: (embed-certs-725022) DBG | About to run SSH command:
	I0603 12:07:40.686161   72964 main.go:141] libmachine: (embed-certs-725022) DBG | exit 0
	I0603 12:07:40.811149   72964 main.go:141] libmachine: (embed-certs-725022) DBG | SSH cmd err, output: <nil>: 
	I0603 12:07:40.811536   72964 main.go:141] libmachine: (embed-certs-725022) Calling .GetConfigRaw
	I0603 12:07:40.812126   72964 main.go:141] libmachine: (embed-certs-725022) Calling .GetIP
	I0603 12:07:40.814686   72964 main.go:141] libmachine: (embed-certs-725022) DBG | domain embed-certs-725022 has defined MAC address 52:54:00:ba:41:8c in network mk-embed-certs-725022
	I0603 12:07:40.815141   72964 main.go:141] libmachine: (embed-certs-725022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:41:8c", ip: ""} in network mk-embed-certs-725022: {Iface:virbr3 ExpiryTime:2024-06-03 12:58:10 +0000 UTC Type:0 Mac:52:54:00:ba:41:8c Iaid: IPaddr:192.168.72.245 Prefix:24 Hostname:embed-certs-725022 Clientid:01:52:54:00:ba:41:8c}
	I0603 12:07:40.815179   72964 main.go:141] libmachine: (embed-certs-725022) DBG | domain embed-certs-725022 has defined IP address 192.168.72.245 and MAC address 52:54:00:ba:41:8c in network mk-embed-certs-725022
	I0603 12:07:40.815390   72964 profile.go:143] Saving config to /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/embed-certs-725022/config.json ...
	I0603 12:07:40.815589   72964 machine.go:94] provisionDockerMachine start ...
	I0603 12:07:40.815607   72964 main.go:141] libmachine: (embed-certs-725022) Calling .DriverName
	I0603 12:07:40.815830   72964 main.go:141] libmachine: (embed-certs-725022) Calling .GetSSHHostname
	I0603 12:07:40.818127   72964 main.go:141] libmachine: (embed-certs-725022) DBG | domain embed-certs-725022 has defined MAC address 52:54:00:ba:41:8c in network mk-embed-certs-725022
	I0603 12:07:40.818454   72964 main.go:141] libmachine: (embed-certs-725022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:41:8c", ip: ""} in network mk-embed-certs-725022: {Iface:virbr3 ExpiryTime:2024-06-03 12:58:10 +0000 UTC Type:0 Mac:52:54:00:ba:41:8c Iaid: IPaddr:192.168.72.245 Prefix:24 Hostname:embed-certs-725022 Clientid:01:52:54:00:ba:41:8c}
	I0603 12:07:40.818484   72964 main.go:141] libmachine: (embed-certs-725022) DBG | domain embed-certs-725022 has defined IP address 192.168.72.245 and MAC address 52:54:00:ba:41:8c in network mk-embed-certs-725022
	I0603 12:07:40.818622   72964 main.go:141] libmachine: (embed-certs-725022) Calling .GetSSHPort
	I0603 12:07:40.818812   72964 main.go:141] libmachine: (embed-certs-725022) Calling .GetSSHKeyPath
	I0603 12:07:40.818964   72964 main.go:141] libmachine: (embed-certs-725022) Calling .GetSSHKeyPath
	I0603 12:07:40.819111   72964 main.go:141] libmachine: (embed-certs-725022) Calling .GetSSHUsername
	I0603 12:07:40.819244   72964 main.go:141] libmachine: Using SSH client type: native
	I0603 12:07:40.819393   72964 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.72.245 22 <nil> <nil>}
	I0603 12:07:40.819402   72964 main.go:141] libmachine: About to run SSH command:
	hostname
	I0603 12:07:40.923243   72964 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0603 12:07:40.923272   72964 main.go:141] libmachine: (embed-certs-725022) Calling .GetMachineName
	I0603 12:07:40.923539   72964 buildroot.go:166] provisioning hostname "embed-certs-725022"
	I0603 12:07:40.923568   72964 main.go:141] libmachine: (embed-certs-725022) Calling .GetMachineName
	I0603 12:07:40.923739   72964 main.go:141] libmachine: (embed-certs-725022) Calling .GetSSHHostname
	I0603 12:07:40.926340   72964 main.go:141] libmachine: (embed-certs-725022) DBG | domain embed-certs-725022 has defined MAC address 52:54:00:ba:41:8c in network mk-embed-certs-725022
	I0603 12:07:40.926743   72964 main.go:141] libmachine: (embed-certs-725022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:41:8c", ip: ""} in network mk-embed-certs-725022: {Iface:virbr3 ExpiryTime:2024-06-03 12:58:10 +0000 UTC Type:0 Mac:52:54:00:ba:41:8c Iaid: IPaddr:192.168.72.245 Prefix:24 Hostname:embed-certs-725022 Clientid:01:52:54:00:ba:41:8c}
	I0603 12:07:40.926776   72964 main.go:141] libmachine: (embed-certs-725022) DBG | domain embed-certs-725022 has defined IP address 192.168.72.245 and MAC address 52:54:00:ba:41:8c in network mk-embed-certs-725022
	I0603 12:07:40.926892   72964 main.go:141] libmachine: (embed-certs-725022) Calling .GetSSHPort
	I0603 12:07:40.927096   72964 main.go:141] libmachine: (embed-certs-725022) Calling .GetSSHKeyPath
	I0603 12:07:40.927259   72964 main.go:141] libmachine: (embed-certs-725022) Calling .GetSSHKeyPath
	I0603 12:07:40.927412   72964 main.go:141] libmachine: (embed-certs-725022) Calling .GetSSHUsername
	I0603 12:07:40.927570   72964 main.go:141] libmachine: Using SSH client type: native
	I0603 12:07:40.927720   72964 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.72.245 22 <nil> <nil>}
	I0603 12:07:40.927737   72964 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-725022 && echo "embed-certs-725022" | sudo tee /etc/hostname
	I0603 12:07:41.045367   72964 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-725022
	
	I0603 12:07:41.045392   72964 main.go:141] libmachine: (embed-certs-725022) Calling .GetSSHHostname
	I0603 12:07:41.048214   72964 main.go:141] libmachine: (embed-certs-725022) DBG | domain embed-certs-725022 has defined MAC address 52:54:00:ba:41:8c in network mk-embed-certs-725022
	I0603 12:07:41.048621   72964 main.go:141] libmachine: (embed-certs-725022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:41:8c", ip: ""} in network mk-embed-certs-725022: {Iface:virbr3 ExpiryTime:2024-06-03 12:58:10 +0000 UTC Type:0 Mac:52:54:00:ba:41:8c Iaid: IPaddr:192.168.72.245 Prefix:24 Hostname:embed-certs-725022 Clientid:01:52:54:00:ba:41:8c}
	I0603 12:07:41.048653   72964 main.go:141] libmachine: (embed-certs-725022) DBG | domain embed-certs-725022 has defined IP address 192.168.72.245 and MAC address 52:54:00:ba:41:8c in network mk-embed-certs-725022
	I0603 12:07:41.048776   72964 main.go:141] libmachine: (embed-certs-725022) Calling .GetSSHPort
	I0603 12:07:41.048959   72964 main.go:141] libmachine: (embed-certs-725022) Calling .GetSSHKeyPath
	I0603 12:07:41.049140   72964 main.go:141] libmachine: (embed-certs-725022) Calling .GetSSHKeyPath
	I0603 12:07:41.049270   72964 main.go:141] libmachine: (embed-certs-725022) Calling .GetSSHUsername
	I0603 12:07:41.049434   72964 main.go:141] libmachine: Using SSH client type: native
	I0603 12:07:41.049729   72964 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.72.245 22 <nil> <nil>}
	I0603 12:07:41.049757   72964 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-725022' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-725022/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-725022' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0603 12:07:41.160646   72964 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0603 12:07:41.160671   72964 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19008-7755/.minikube CaCertPath:/home/jenkins/minikube-integration/19008-7755/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19008-7755/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19008-7755/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19008-7755/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19008-7755/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19008-7755/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19008-7755/.minikube}
	I0603 12:07:41.160703   72964 buildroot.go:174] setting up certificates
	I0603 12:07:41.160715   72964 provision.go:84] configureAuth start
	I0603 12:07:41.160728   72964 main.go:141] libmachine: (embed-certs-725022) Calling .GetMachineName
	I0603 12:07:41.160998   72964 main.go:141] libmachine: (embed-certs-725022) Calling .GetIP
	I0603 12:07:41.163693   72964 main.go:141] libmachine: (embed-certs-725022) DBG | domain embed-certs-725022 has defined MAC address 52:54:00:ba:41:8c in network mk-embed-certs-725022
	I0603 12:07:41.164248   72964 main.go:141] libmachine: (embed-certs-725022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:41:8c", ip: ""} in network mk-embed-certs-725022: {Iface:virbr3 ExpiryTime:2024-06-03 12:58:10 +0000 UTC Type:0 Mac:52:54:00:ba:41:8c Iaid: IPaddr:192.168.72.245 Prefix:24 Hostname:embed-certs-725022 Clientid:01:52:54:00:ba:41:8c}
	I0603 12:07:41.164280   72964 main.go:141] libmachine: (embed-certs-725022) DBG | domain embed-certs-725022 has defined IP address 192.168.72.245 and MAC address 52:54:00:ba:41:8c in network mk-embed-certs-725022
	I0603 12:07:41.164462   72964 main.go:141] libmachine: (embed-certs-725022) Calling .GetSSHHostname
	I0603 12:07:41.166598   72964 main.go:141] libmachine: (embed-certs-725022) DBG | domain embed-certs-725022 has defined MAC address 52:54:00:ba:41:8c in network mk-embed-certs-725022
	I0603 12:07:41.166975   72964 main.go:141] libmachine: (embed-certs-725022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:41:8c", ip: ""} in network mk-embed-certs-725022: {Iface:virbr3 ExpiryTime:2024-06-03 12:58:10 +0000 UTC Type:0 Mac:52:54:00:ba:41:8c Iaid: IPaddr:192.168.72.245 Prefix:24 Hostname:embed-certs-725022 Clientid:01:52:54:00:ba:41:8c}
	I0603 12:07:41.166999   72964 main.go:141] libmachine: (embed-certs-725022) DBG | domain embed-certs-725022 has defined IP address 192.168.72.245 and MAC address 52:54:00:ba:41:8c in network mk-embed-certs-725022
	I0603 12:07:41.167156   72964 provision.go:143] copyHostCerts
	I0603 12:07:41.167231   72964 exec_runner.go:144] found /home/jenkins/minikube-integration/19008-7755/.minikube/ca.pem, removing ...
	I0603 12:07:41.167246   72964 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19008-7755/.minikube/ca.pem
	I0603 12:07:41.167311   72964 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19008-7755/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19008-7755/.minikube/ca.pem (1082 bytes)
	I0603 12:07:41.167503   72964 exec_runner.go:144] found /home/jenkins/minikube-integration/19008-7755/.minikube/cert.pem, removing ...
	I0603 12:07:41.167516   72964 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19008-7755/.minikube/cert.pem
	I0603 12:07:41.167548   72964 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19008-7755/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19008-7755/.minikube/cert.pem (1123 bytes)
	I0603 12:07:41.167649   72964 exec_runner.go:144] found /home/jenkins/minikube-integration/19008-7755/.minikube/key.pem, removing ...
	I0603 12:07:41.167660   72964 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19008-7755/.minikube/key.pem
	I0603 12:07:41.167688   72964 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19008-7755/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19008-7755/.minikube/key.pem (1679 bytes)
	I0603 12:07:41.167767   72964 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19008-7755/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19008-7755/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19008-7755/.minikube/certs/ca-key.pem org=jenkins.embed-certs-725022 san=[127.0.0.1 192.168.72.245 embed-certs-725022 localhost minikube]
	I0603 12:07:41.404074   72964 provision.go:177] copyRemoteCerts
	I0603 12:07:41.404201   72964 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0603 12:07:41.404234   72964 main.go:141] libmachine: (embed-certs-725022) Calling .GetSSHHostname
	I0603 12:07:41.407206   72964 main.go:141] libmachine: (embed-certs-725022) DBG | domain embed-certs-725022 has defined MAC address 52:54:00:ba:41:8c in network mk-embed-certs-725022
	I0603 12:07:41.407582   72964 main.go:141] libmachine: (embed-certs-725022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:41:8c", ip: ""} in network mk-embed-certs-725022: {Iface:virbr3 ExpiryTime:2024-06-03 12:58:10 +0000 UTC Type:0 Mac:52:54:00:ba:41:8c Iaid: IPaddr:192.168.72.245 Prefix:24 Hostname:embed-certs-725022 Clientid:01:52:54:00:ba:41:8c}
	I0603 12:07:41.407607   72964 main.go:141] libmachine: (embed-certs-725022) DBG | domain embed-certs-725022 has defined IP address 192.168.72.245 and MAC address 52:54:00:ba:41:8c in network mk-embed-certs-725022
	I0603 12:07:41.407790   72964 main.go:141] libmachine: (embed-certs-725022) Calling .GetSSHPort
	I0603 12:07:41.408001   72964 main.go:141] libmachine: (embed-certs-725022) Calling .GetSSHKeyPath
	I0603 12:07:41.408187   72964 main.go:141] libmachine: (embed-certs-725022) Calling .GetSSHUsername
	I0603 12:07:41.408359   72964 sshutil.go:53] new ssh client: &{IP:192.168.72.245 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19008-7755/.minikube/machines/embed-certs-725022/id_rsa Username:docker}
	I0603 12:07:41.488870   72964 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0603 12:07:41.513102   72964 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0603 12:07:41.537653   72964 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0603 12:07:41.561756   72964 provision.go:87] duration metric: took 401.027097ms to configureAuth
	I0603 12:07:41.561789   72964 buildroot.go:189] setting minikube options for container-runtime
	I0603 12:07:41.561954   72964 config.go:182] Loaded profile config "embed-certs-725022": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0603 12:07:41.562020   72964 main.go:141] libmachine: (embed-certs-725022) Calling .GetSSHHostname
	I0603 12:07:41.564899   72964 main.go:141] libmachine: (embed-certs-725022) DBG | domain embed-certs-725022 has defined MAC address 52:54:00:ba:41:8c in network mk-embed-certs-725022
	I0603 12:07:41.565376   72964 main.go:141] libmachine: (embed-certs-725022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:41:8c", ip: ""} in network mk-embed-certs-725022: {Iface:virbr3 ExpiryTime:2024-06-03 12:58:10 +0000 UTC Type:0 Mac:52:54:00:ba:41:8c Iaid: IPaddr:192.168.72.245 Prefix:24 Hostname:embed-certs-725022 Clientid:01:52:54:00:ba:41:8c}
	I0603 12:07:41.565416   72964 main.go:141] libmachine: (embed-certs-725022) DBG | domain embed-certs-725022 has defined IP address 192.168.72.245 and MAC address 52:54:00:ba:41:8c in network mk-embed-certs-725022
	I0603 12:07:41.565571   72964 main.go:141] libmachine: (embed-certs-725022) Calling .GetSSHPort
	I0603 12:07:41.565754   72964 main.go:141] libmachine: (embed-certs-725022) Calling .GetSSHKeyPath
	I0603 12:07:41.565952   72964 main.go:141] libmachine: (embed-certs-725022) Calling .GetSSHKeyPath
	I0603 12:07:41.566096   72964 main.go:141] libmachine: (embed-certs-725022) Calling .GetSSHUsername
	I0603 12:07:41.566223   72964 main.go:141] libmachine: Using SSH client type: native
	I0603 12:07:41.566408   72964 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.72.245 22 <nil> <nil>}
	I0603 12:07:41.566431   72964 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0603 12:07:41.834677   72964 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0603 12:07:41.834699   72964 machine.go:97] duration metric: took 1.019099045s to provisionDockerMachine
	I0603 12:07:41.834713   72964 start.go:293] postStartSetup for "embed-certs-725022" (driver="kvm2")
	I0603 12:07:41.834727   72964 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0603 12:07:41.834746   72964 main.go:141] libmachine: (embed-certs-725022) Calling .DriverName
	I0603 12:07:41.835098   72964 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0603 12:07:41.835139   72964 main.go:141] libmachine: (embed-certs-725022) Calling .GetSSHHostname
	I0603 12:07:41.838003   72964 main.go:141] libmachine: (embed-certs-725022) DBG | domain embed-certs-725022 has defined MAC address 52:54:00:ba:41:8c in network mk-embed-certs-725022
	I0603 12:07:41.838369   72964 main.go:141] libmachine: (embed-certs-725022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:41:8c", ip: ""} in network mk-embed-certs-725022: {Iface:virbr3 ExpiryTime:2024-06-03 12:58:10 +0000 UTC Type:0 Mac:52:54:00:ba:41:8c Iaid: IPaddr:192.168.72.245 Prefix:24 Hostname:embed-certs-725022 Clientid:01:52:54:00:ba:41:8c}
	I0603 12:07:41.838398   72964 main.go:141] libmachine: (embed-certs-725022) DBG | domain embed-certs-725022 has defined IP address 192.168.72.245 and MAC address 52:54:00:ba:41:8c in network mk-embed-certs-725022
	I0603 12:07:41.838464   72964 main.go:141] libmachine: (embed-certs-725022) Calling .GetSSHPort
	I0603 12:07:41.838655   72964 main.go:141] libmachine: (embed-certs-725022) Calling .GetSSHKeyPath
	I0603 12:07:41.838793   72964 main.go:141] libmachine: (embed-certs-725022) Calling .GetSSHUsername
	I0603 12:07:41.838932   72964 sshutil.go:53] new ssh client: &{IP:192.168.72.245 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19008-7755/.minikube/machines/embed-certs-725022/id_rsa Username:docker}
	I0603 12:07:41.922364   72964 ssh_runner.go:195] Run: cat /etc/os-release
	I0603 12:07:41.926548   72964 info.go:137] Remote host: Buildroot 2023.02.9
	I0603 12:07:41.926573   72964 filesync.go:126] Scanning /home/jenkins/minikube-integration/19008-7755/.minikube/addons for local assets ...
	I0603 12:07:41.926649   72964 filesync.go:126] Scanning /home/jenkins/minikube-integration/19008-7755/.minikube/files for local assets ...
	I0603 12:07:41.926757   72964 filesync.go:149] local asset: /home/jenkins/minikube-integration/19008-7755/.minikube/files/etc/ssl/certs/150282.pem -> 150282.pem in /etc/ssl/certs
	I0603 12:07:41.926853   72964 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0603 12:07:41.937060   72964 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/files/etc/ssl/certs/150282.pem --> /etc/ssl/certs/150282.pem (1708 bytes)
	I0603 12:07:41.962618   72964 start.go:296] duration metric: took 127.891542ms for postStartSetup
	I0603 12:07:41.962650   72964 fix.go:56] duration metric: took 19.538606992s for fixHost
	I0603 12:07:41.962679   72964 main.go:141] libmachine: (embed-certs-725022) Calling .GetSSHHostname
	I0603 12:07:41.965879   72964 main.go:141] libmachine: (embed-certs-725022) DBG | domain embed-certs-725022 has defined MAC address 52:54:00:ba:41:8c in network mk-embed-certs-725022
	I0603 12:07:41.966201   72964 main.go:141] libmachine: (embed-certs-725022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:41:8c", ip: ""} in network mk-embed-certs-725022: {Iface:virbr3 ExpiryTime:2024-06-03 12:58:10 +0000 UTC Type:0 Mac:52:54:00:ba:41:8c Iaid: IPaddr:192.168.72.245 Prefix:24 Hostname:embed-certs-725022 Clientid:01:52:54:00:ba:41:8c}
	I0603 12:07:41.966228   72964 main.go:141] libmachine: (embed-certs-725022) DBG | domain embed-certs-725022 has defined IP address 192.168.72.245 and MAC address 52:54:00:ba:41:8c in network mk-embed-certs-725022
	I0603 12:07:41.966409   72964 main.go:141] libmachine: (embed-certs-725022) Calling .GetSSHPort
	I0603 12:07:41.966608   72964 main.go:141] libmachine: (embed-certs-725022) Calling .GetSSHKeyPath
	I0603 12:07:41.966776   72964 main.go:141] libmachine: (embed-certs-725022) Calling .GetSSHKeyPath
	I0603 12:07:41.966939   72964 main.go:141] libmachine: (embed-certs-725022) Calling .GetSSHUsername
	I0603 12:07:41.967174   72964 main.go:141] libmachine: Using SSH client type: native
	I0603 12:07:41.967334   72964 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.72.245 22 <nil> <nil>}
	I0603 12:07:41.967345   72964 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0603 12:07:42.067942   72964 main.go:141] libmachine: SSH cmd err, output: <nil>: 1717416462.037866239
	
	I0603 12:07:42.067964   72964 fix.go:216] guest clock: 1717416462.037866239
	I0603 12:07:42.067973   72964 fix.go:229] Guest: 2024-06-03 12:07:42.037866239 +0000 UTC Remote: 2024-06-03 12:07:41.962653397 +0000 UTC m=+357.104782857 (delta=75.212842ms)
	I0603 12:07:42.067997   72964 fix.go:200] guest clock delta is within tolerance: 75.212842ms
	I0603 12:07:42.068004   72964 start.go:83] releasing machines lock for "embed-certs-725022", held for 19.643998665s
	I0603 12:07:42.068026   72964 main.go:141] libmachine: (embed-certs-725022) Calling .DriverName
	I0603 12:07:42.068359   72964 main.go:141] libmachine: (embed-certs-725022) Calling .GetIP
	I0603 12:07:42.071337   72964 main.go:141] libmachine: (embed-certs-725022) DBG | domain embed-certs-725022 has defined MAC address 52:54:00:ba:41:8c in network mk-embed-certs-725022
	I0603 12:07:42.071783   72964 main.go:141] libmachine: (embed-certs-725022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:41:8c", ip: ""} in network mk-embed-certs-725022: {Iface:virbr3 ExpiryTime:2024-06-03 12:58:10 +0000 UTC Type:0 Mac:52:54:00:ba:41:8c Iaid: IPaddr:192.168.72.245 Prefix:24 Hostname:embed-certs-725022 Clientid:01:52:54:00:ba:41:8c}
	I0603 12:07:42.071813   72964 main.go:141] libmachine: (embed-certs-725022) DBG | domain embed-certs-725022 has defined IP address 192.168.72.245 and MAC address 52:54:00:ba:41:8c in network mk-embed-certs-725022
	I0603 12:07:42.071980   72964 main.go:141] libmachine: (embed-certs-725022) Calling .DriverName
	I0603 12:07:42.072618   72964 main.go:141] libmachine: (embed-certs-725022) Calling .DriverName
	I0603 12:07:42.072806   72964 main.go:141] libmachine: (embed-certs-725022) Calling .DriverName
	I0603 12:07:42.072890   72964 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0603 12:07:42.072943   72964 main.go:141] libmachine: (embed-certs-725022) Calling .GetSSHHostname
	I0603 12:07:42.073038   72964 ssh_runner.go:195] Run: cat /version.json
	I0603 12:07:42.073079   72964 main.go:141] libmachine: (embed-certs-725022) Calling .GetSSHHostname
	I0603 12:07:42.075688   72964 main.go:141] libmachine: (embed-certs-725022) DBG | domain embed-certs-725022 has defined MAC address 52:54:00:ba:41:8c in network mk-embed-certs-725022
	I0603 12:07:42.075970   72964 main.go:141] libmachine: (embed-certs-725022) DBG | domain embed-certs-725022 has defined MAC address 52:54:00:ba:41:8c in network mk-embed-certs-725022
	I0603 12:07:42.076186   72964 main.go:141] libmachine: (embed-certs-725022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:41:8c", ip: ""} in network mk-embed-certs-725022: {Iface:virbr3 ExpiryTime:2024-06-03 12:58:10 +0000 UTC Type:0 Mac:52:54:00:ba:41:8c Iaid: IPaddr:192.168.72.245 Prefix:24 Hostname:embed-certs-725022 Clientid:01:52:54:00:ba:41:8c}
	I0603 12:07:42.076212   72964 main.go:141] libmachine: (embed-certs-725022) DBG | domain embed-certs-725022 has defined IP address 192.168.72.245 and MAC address 52:54:00:ba:41:8c in network mk-embed-certs-725022
	I0603 12:07:42.076458   72964 main.go:141] libmachine: (embed-certs-725022) Calling .GetSSHPort
	I0603 12:07:42.076465   72964 main.go:141] libmachine: (embed-certs-725022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:41:8c", ip: ""} in network mk-embed-certs-725022: {Iface:virbr3 ExpiryTime:2024-06-03 12:58:10 +0000 UTC Type:0 Mac:52:54:00:ba:41:8c Iaid: IPaddr:192.168.72.245 Prefix:24 Hostname:embed-certs-725022 Clientid:01:52:54:00:ba:41:8c}
	I0603 12:07:42.076501   72964 main.go:141] libmachine: (embed-certs-725022) DBG | domain embed-certs-725022 has defined IP address 192.168.72.245 and MAC address 52:54:00:ba:41:8c in network mk-embed-certs-725022
	I0603 12:07:42.076625   72964 main.go:141] libmachine: (embed-certs-725022) Calling .GetSSHKeyPath
	I0603 12:07:42.076694   72964 main.go:141] libmachine: (embed-certs-725022) Calling .GetSSHPort
	I0603 12:07:42.076815   72964 main.go:141] libmachine: (embed-certs-725022) Calling .GetSSHUsername
	I0603 12:07:42.076900   72964 main.go:141] libmachine: (embed-certs-725022) Calling .GetSSHKeyPath
	I0603 12:07:42.076993   72964 sshutil.go:53] new ssh client: &{IP:192.168.72.245 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19008-7755/.minikube/machines/embed-certs-725022/id_rsa Username:docker}
	I0603 12:07:42.077071   72964 main.go:141] libmachine: (embed-certs-725022) Calling .GetSSHUsername
	I0603 12:07:42.077227   72964 sshutil.go:53] new ssh client: &{IP:192.168.72.245 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19008-7755/.minikube/machines/embed-certs-725022/id_rsa Username:docker}
	I0603 12:07:42.178869   72964 ssh_runner.go:195] Run: systemctl --version
	I0603 12:07:42.184948   72964 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0603 12:07:42.333045   72964 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0603 12:07:42.339178   72964 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0603 12:07:42.339249   72964 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0603 12:07:42.356377   72964 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0603 12:07:42.356399   72964 start.go:494] detecting cgroup driver to use...
	I0603 12:07:42.356453   72964 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0603 12:07:42.374098   72964 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0603 12:07:42.387377   72964 docker.go:217] disabling cri-docker service (if available) ...
	I0603 12:07:42.387429   72964 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0603 12:07:42.400193   72964 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0603 12:07:42.413009   72964 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0603 12:07:42.524443   72964 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0603 12:07:42.670114   72964 docker.go:233] disabling docker service ...
	I0603 12:07:42.670194   72964 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0603 12:07:42.686085   72964 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0603 12:07:42.699222   72964 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0603 12:07:42.849018   72964 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0603 12:07:42.987143   72964 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0603 12:07:43.001493   72964 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0603 12:07:43.020011   72964 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0603 12:07:43.020077   72964 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 12:07:43.030835   72964 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0603 12:07:43.030903   72964 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 12:07:43.041325   72964 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 12:07:43.051229   72964 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 12:07:43.061184   72964 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0603 12:07:43.071245   72964 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 12:07:43.082466   72964 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 12:07:43.100381   72964 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 12:07:43.112802   72964 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0603 12:07:43.123404   72964 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0603 12:07:43.123452   72964 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0603 12:07:43.136935   72964 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0603 12:07:43.145996   72964 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0603 12:07:43.269844   72964 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0603 12:07:43.404166   72964 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0603 12:07:43.404238   72964 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0603 12:07:43.411376   72964 start.go:562] Will wait 60s for crictl version
	I0603 12:07:43.411419   72964 ssh_runner.go:195] Run: which crictl
	I0603 12:07:43.415081   72964 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0603 12:07:43.455429   72964 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0603 12:07:43.455514   72964 ssh_runner.go:195] Run: crio --version
	I0603 12:07:43.483743   72964 ssh_runner.go:195] Run: crio --version
	I0603 12:07:43.516513   72964 out.go:177] * Preparing Kubernetes v1.30.1 on CRI-O 1.29.1 ...
	I0603 12:07:41.613036   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:07:43.613398   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:07:43.517710   72964 main.go:141] libmachine: (embed-certs-725022) Calling .GetIP
	I0603 12:07:43.520057   72964 main.go:141] libmachine: (embed-certs-725022) DBG | domain embed-certs-725022 has defined MAC address 52:54:00:ba:41:8c in network mk-embed-certs-725022
	I0603 12:07:43.520336   72964 main.go:141] libmachine: (embed-certs-725022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:41:8c", ip: ""} in network mk-embed-certs-725022: {Iface:virbr3 ExpiryTime:2024-06-03 12:58:10 +0000 UTC Type:0 Mac:52:54:00:ba:41:8c Iaid: IPaddr:192.168.72.245 Prefix:24 Hostname:embed-certs-725022 Clientid:01:52:54:00:ba:41:8c}
	I0603 12:07:43.520365   72964 main.go:141] libmachine: (embed-certs-725022) DBG | domain embed-certs-725022 has defined IP address 192.168.72.245 and MAC address 52:54:00:ba:41:8c in network mk-embed-certs-725022
	I0603 12:07:43.520579   72964 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0603 12:07:43.524653   72964 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0603 12:07:43.537864   72964 kubeadm.go:877] updating cluster {Name:embed-certs-725022 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:embed-certs-725022 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.245 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0603 12:07:43.537984   72964 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime crio
	I0603 12:07:43.538045   72964 ssh_runner.go:195] Run: sudo crictl images --output json
	I0603 12:07:43.574677   72964 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.1". assuming images are not preloaded.
	I0603 12:07:43.574738   72964 ssh_runner.go:195] Run: which lz4
	I0603 12:07:43.579297   72964 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0603 12:07:43.583831   72964 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0603 12:07:43.583865   72964 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (394537501 bytes)
	I0603 12:07:40.438270   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:07:40.938253   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:07:41.438610   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:07:41.938408   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:07:42.438825   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:07:42.938492   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:07:43.439013   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:07:43.938232   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:07:44.438816   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:07:44.938476   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:07:41.581827   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:07:44.084271   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:07:46.113319   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:07:48.117970   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:07:45.006860   72964 crio.go:462] duration metric: took 1.427589912s to copy over tarball
	I0603 12:07:45.006945   72964 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0603 12:07:47.289942   72964 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.282964729s)
	I0603 12:07:47.289966   72964 crio.go:469] duration metric: took 2.283075477s to extract the tarball
	I0603 12:07:47.289973   72964 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0603 12:07:47.330106   72964 ssh_runner.go:195] Run: sudo crictl images --output json
	I0603 12:07:47.377154   72964 crio.go:514] all images are preloaded for cri-o runtime.
	I0603 12:07:47.377180   72964 cache_images.go:84] Images are preloaded, skipping loading
	I0603 12:07:47.377189   72964 kubeadm.go:928] updating node { 192.168.72.245 8443 v1.30.1 crio true true} ...
	I0603 12:07:47.377334   72964 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-725022 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.245
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.1 ClusterName:embed-certs-725022 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0603 12:07:47.377416   72964 ssh_runner.go:195] Run: crio config
	I0603 12:07:47.436104   72964 cni.go:84] Creating CNI manager for ""
	I0603 12:07:47.436125   72964 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0603 12:07:47.436137   72964 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0603 12:07:47.436165   72964 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.245 APIServerPort:8443 KubernetesVersion:v1.30.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-725022 NodeName:embed-certs-725022 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.245"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.245 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0603 12:07:47.436330   72964 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.245
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-725022"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.245
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.245"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0603 12:07:47.436402   72964 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.1
	I0603 12:07:47.447427   72964 binaries.go:44] Found k8s binaries, skipping transfer
	I0603 12:07:47.447498   72964 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0603 12:07:47.459332   72964 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I0603 12:07:47.477962   72964 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0603 12:07:47.495897   72964 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2162 bytes)
	I0603 12:07:47.513033   72964 ssh_runner.go:195] Run: grep 192.168.72.245	control-plane.minikube.internal$ /etc/hosts
	I0603 12:07:47.517042   72964 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.245	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0603 12:07:47.529663   72964 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0603 12:07:47.649313   72964 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0603 12:07:47.666234   72964 certs.go:68] Setting up /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/embed-certs-725022 for IP: 192.168.72.245
	I0603 12:07:47.666258   72964 certs.go:194] generating shared ca certs ...
	I0603 12:07:47.666279   72964 certs.go:226] acquiring lock for ca certs: {Name:mk50092df3bdecbf959926adc377ee06b75aacd9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 12:07:47.666440   72964 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19008-7755/.minikube/ca.key
	I0603 12:07:47.666477   72964 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19008-7755/.minikube/proxy-client-ca.key
	I0603 12:07:47.666487   72964 certs.go:256] generating profile certs ...
	I0603 12:07:47.666567   72964 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/embed-certs-725022/client.key
	I0603 12:07:47.666623   72964 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/embed-certs-725022/apiserver.key.8c3ea0d5
	I0603 12:07:47.666712   72964 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/embed-certs-725022/proxy-client.key
	I0603 12:07:47.666874   72964 certs.go:484] found cert: /home/jenkins/minikube-integration/19008-7755/.minikube/certs/15028.pem (1338 bytes)
	W0603 12:07:47.666916   72964 certs.go:480] ignoring /home/jenkins/minikube-integration/19008-7755/.minikube/certs/15028_empty.pem, impossibly tiny 0 bytes
	I0603 12:07:47.666926   72964 certs.go:484] found cert: /home/jenkins/minikube-integration/19008-7755/.minikube/certs/ca-key.pem (1679 bytes)
	I0603 12:07:47.666947   72964 certs.go:484] found cert: /home/jenkins/minikube-integration/19008-7755/.minikube/certs/ca.pem (1082 bytes)
	I0603 12:07:47.666968   72964 certs.go:484] found cert: /home/jenkins/minikube-integration/19008-7755/.minikube/certs/cert.pem (1123 bytes)
	I0603 12:07:47.666988   72964 certs.go:484] found cert: /home/jenkins/minikube-integration/19008-7755/.minikube/certs/key.pem (1679 bytes)
	I0603 12:07:47.667026   72964 certs.go:484] found cert: /home/jenkins/minikube-integration/19008-7755/.minikube/files/etc/ssl/certs/150282.pem (1708 bytes)
	I0603 12:07:47.667721   72964 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0603 12:07:47.705180   72964 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0603 12:07:47.748552   72964 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0603 12:07:47.780173   72964 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0603 12:07:47.812902   72964 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/embed-certs-725022/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0603 12:07:47.844793   72964 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/embed-certs-725022/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0603 12:07:47.875181   72964 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/embed-certs-725022/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0603 12:07:47.899905   72964 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/embed-certs-725022/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0603 12:07:47.925039   72964 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0603 12:07:47.950701   72964 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/certs/15028.pem --> /usr/share/ca-certificates/15028.pem (1338 bytes)
	I0603 12:07:47.975798   72964 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19008-7755/.minikube/files/etc/ssl/certs/150282.pem --> /usr/share/ca-certificates/150282.pem (1708 bytes)
	I0603 12:07:48.002827   72964 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0603 12:07:48.021050   72964 ssh_runner.go:195] Run: openssl version
	I0603 12:07:48.027977   72964 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0603 12:07:48.043764   72964 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0603 12:07:48.050265   72964 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jun  3 10:39 /usr/share/ca-certificates/minikubeCA.pem
	I0603 12:07:48.050315   72964 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0603 12:07:48.056387   72964 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0603 12:07:48.067816   72964 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15028.pem && ln -fs /usr/share/ca-certificates/15028.pem /etc/ssl/certs/15028.pem"
	I0603 12:07:48.083715   72964 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15028.pem
	I0603 12:07:48.088813   72964 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jun  3 10:51 /usr/share/ca-certificates/15028.pem
	I0603 12:07:48.088870   72964 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15028.pem
	I0603 12:07:48.094833   72964 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/15028.pem /etc/ssl/certs/51391683.0"
	I0603 12:07:48.108005   72964 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/150282.pem && ln -fs /usr/share/ca-certificates/150282.pem /etc/ssl/certs/150282.pem"
	I0603 12:07:48.120434   72964 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/150282.pem
	I0603 12:07:48.125542   72964 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jun  3 10:51 /usr/share/ca-certificates/150282.pem
	I0603 12:07:48.125603   72964 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/150282.pem
	I0603 12:07:48.132060   72964 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/150282.pem /etc/ssl/certs/3ec20f2e.0"
	I0603 12:07:48.143594   72964 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0603 12:07:48.148392   72964 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0603 12:07:48.154571   72964 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0603 12:07:48.160573   72964 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0603 12:07:48.167146   72964 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0603 12:07:48.175232   72964 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0603 12:07:48.182197   72964 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0603 12:07:48.188588   72964 kubeadm.go:391] StartCluster: {Name:embed-certs-725022 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:embed-certs-725022 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.245 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0603 12:07:48.188680   72964 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0603 12:07:48.188733   72964 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0603 12:07:48.229134   72964 cri.go:89] found id: ""
	I0603 12:07:48.229215   72964 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0603 12:07:48.241663   72964 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0603 12:07:48.241687   72964 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0603 12:07:48.241692   72964 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0603 12:07:48.241756   72964 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0603 12:07:48.252641   72964 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0603 12:07:48.253644   72964 kubeconfig.go:125] found "embed-certs-725022" server: "https://192.168.72.245:8443"
	I0603 12:07:48.255726   72964 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0603 12:07:48.265816   72964 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.72.245
	I0603 12:07:48.265849   72964 kubeadm.go:1154] stopping kube-system containers ...
	I0603 12:07:48.265862   72964 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0603 12:07:48.265956   72964 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0603 12:07:48.306408   72964 cri.go:89] found id: ""
	I0603 12:07:48.306471   72964 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0603 12:07:48.324859   72964 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0603 12:07:48.336076   72964 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0603 12:07:48.336098   72964 kubeadm.go:156] found existing configuration files:
	
	I0603 12:07:48.336159   72964 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0603 12:07:48.347274   72964 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0603 12:07:48.347328   72964 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0603 12:07:48.358447   72964 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0603 12:07:48.369460   72964 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0603 12:07:48.369509   72964 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0603 12:07:48.379714   72964 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0603 12:07:48.390460   72964 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0603 12:07:48.390506   72964 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0603 12:07:48.401178   72964 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0603 12:07:48.411383   72964 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0603 12:07:48.411423   72964 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0603 12:07:48.421813   72964 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0603 12:07:48.434585   72964 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0603 12:07:48.561075   72964 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0603 12:07:49.278187   72964 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0603 12:07:49.504897   72964 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0603 12:07:49.559494   72964 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
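The restartPrimaryControlPlane path above replays the kubeadm init phases one at a time: certs, kubeconfig, kubelet-start, control-plane, then etcd. A minimal Go sketch of that sequence, shelling out the same way the ssh_runner lines do; the runKubeadmPhases helper is hypothetical and only illustrates the ordering shown in the log:

	// Sketch only: replays the kubeadm init phases logged above in order,
	// invoking "sudo env PATH=<binDir>:$PATH kubeadm init phase ... --config <config>".
	package main

	import (
		"fmt"
		"os"
		"os/exec"
	)

	func runKubeadmPhases(binDir, config string) error {
		phases := [][]string{
			{"init", "phase", "certs", "all", "--config", config},
			{"init", "phase", "kubeconfig", "all", "--config", config},
			{"init", "phase", "kubelet-start", "--config", config},
			{"init", "phase", "control-plane", "all", "--config", config},
			{"init", "phase", "etcd", "local", "--config", config},
		}
		for _, args := range phases {
			full := append([]string{"env", "PATH=" + binDir + ":" + os.Getenv("PATH"), "kubeadm"}, args...)
			cmd := exec.Command("sudo", full...)
			cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
			if err := cmd.Run(); err != nil {
				return fmt.Errorf("kubeadm %v: %w", args, err)
			}
		}
		return nil
	}

	func main() {
		if err := runKubeadmPhases("/var/lib/minikube/binaries/v1.30.1", "/var/tmp/minikube/kubeadm.yaml"); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
	}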
	I0603 12:07:49.634949   72964 api_server.go:52] waiting for apiserver process to appear ...
	I0603 12:07:49.635051   72964 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
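The long runs of "sudo pgrep -xnf kube-apiserver.*minikube.*" lines that follow are a poll loop: pgrep exits non-zero until a matching apiserver process exists, so the check is simply retried on a short interval. A hypothetical equivalent of that wait, as a sketch:

	// Sketch only: poll until pgrep finds a kube-apiserver process, mirroring
	// the repeated ssh_runner pgrep calls in the log.
	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"time"
	)

	func waitForAPIServerProcess(timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			if err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run(); err == nil {
				return nil // at least one matching process found
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("kube-apiserver process did not appear within %v", timeout)
	}

	func main() {
		if err := waitForAPIServerProcess(90 * time.Second); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
	}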
	I0603 12:07:45.438738   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:07:45.939144   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:07:46.438431   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:07:46.938360   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:07:47.438811   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:07:47.938857   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:07:48.438849   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:07:48.938531   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:07:49.438876   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:07:49.938908   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:07:46.581939   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:07:48.584466   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:07:50.635461   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:07:53.112719   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:07:50.135411   72964 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:07:50.635951   72964 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:07:51.136119   72964 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:07:51.158722   72964 api_server.go:72] duration metric: took 1.52377732s to wait for apiserver process to appear ...
	I0603 12:07:51.158747   72964 api_server.go:88] waiting for apiserver healthz status ...
	I0603 12:07:51.158767   72964 api_server.go:253] Checking apiserver healthz at https://192.168.72.245:8443/healthz ...
	I0603 12:07:54.082978   72964 api_server.go:279] https://192.168.72.245:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0603 12:07:54.083005   72964 api_server.go:103] status: https://192.168.72.245:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0603 12:07:54.083017   72964 api_server.go:253] Checking apiserver healthz at https://192.168.72.245:8443/healthz ...
	I0603 12:07:54.092290   72964 api_server.go:279] https://192.168.72.245:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0603 12:07:54.092311   72964 api_server.go:103] status: https://192.168.72.245:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0603 12:07:54.159522   72964 api_server.go:253] Checking apiserver healthz at https://192.168.72.245:8443/healthz ...
	I0603 12:07:54.173284   72964 api_server.go:279] https://192.168.72.245:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0603 12:07:54.173308   72964 api_server.go:103] status: https://192.168.72.245:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0603 12:07:54.658949   72964 api_server.go:253] Checking apiserver healthz at https://192.168.72.245:8443/healthz ...
	I0603 12:07:54.663966   72964 api_server.go:279] https://192.168.72.245:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0603 12:07:54.663991   72964 api_server.go:103] status: https://192.168.72.245:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
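The healthz probes above pass through three stages: 403 while the request is treated as anonymous, 500 while individual poststarthooks are still failing, and finally 200 "ok". A minimal sketch of that poll loop, assuming TLS verification is skipped for brevity (minikube's real client authenticates with the cluster certificates rather than anonymously):

	// Sketch only: poll the apiserver /healthz endpoint until it returns 200,
	// treating 403 and 500 responses as "not ready yet" as seen in the log.
	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"os"
		"time"
	)

	func waitForHealthz(url string, timeout time.Duration) error {
		client := &http.Client{
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
			Timeout:   5 * time.Second,
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil
				}
				// 403/500 bodies carry the reason (anonymous user, failed poststarthooks)
				fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("apiserver not healthy after %v", timeout)
	}

	func main() {
		if err := waitForHealthz("https://192.168.72.245:8443/healthz", 4*time.Minute); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
	}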
	I0603 12:07:50.438966   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:07:50.938952   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:07:51.439179   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:07:51.938804   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:07:52.438327   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:07:52.938677   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:07:53.438995   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:07:53.938976   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:07:54.438174   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:07:54.938412   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:07:50.641189   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:07:53.081531   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:07:55.081845   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:07:55.159125   72964 api_server.go:253] Checking apiserver healthz at https://192.168.72.245:8443/healthz ...
	I0603 12:07:55.168267   72964 api_server.go:279] https://192.168.72.245:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0603 12:07:55.168307   72964 api_server.go:103] status: https://192.168.72.245:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0603 12:07:55.658824   72964 api_server.go:253] Checking apiserver healthz at https://192.168.72.245:8443/healthz ...
	I0603 12:07:55.663523   72964 api_server.go:279] https://192.168.72.245:8443/healthz returned 200:
	ok
	I0603 12:07:55.670352   72964 api_server.go:141] control plane version: v1.30.1
	I0603 12:07:55.670383   72964 api_server.go:131] duration metric: took 4.511629799s to wait for apiserver health ...
	I0603 12:07:55.670391   72964 cni.go:84] Creating CNI manager for ""
	I0603 12:07:55.670397   72964 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0603 12:07:55.672360   72964 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0603 12:07:55.113539   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:07:57.613236   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:07:55.673720   72964 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0603 12:07:55.686773   72964 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
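The 496-byte conflist copied to /etc/cni/net.d/1-k8s.conflist is not reproduced in the log. As a rough illustration only, a generic bridge CNI config in the format the upstream bridge/host-local plugins accept looks like the literal below; the subnet, bridge name, and field choices are illustrative assumptions, not minikube's exact output:

	// Sketch only: write an illustrative bridge CNI conflist of the kind the
	// "Configuring bridge CNI" step installs (contents are assumed, not taken
	// from the log).
	package main

	import "os"

	const bridgeConflist = `{
	  "cniVersion": "0.3.1",
	  "name": "bridge",
	  "plugins": [
	    {
	      "type": "bridge",
	      "bridge": "bridge",
	      "isDefaultGateway": true,
	      "ipMasq": true,
	      "ipam": {
	        "type": "host-local",
	        "subnet": "10.244.0.0/16"
	      }
	    }
	  ]
	}`

	func main() {
		if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(bridgeConflist), 0o644); err != nil {
			panic(err)
		}
	}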
	I0603 12:07:55.716937   72964 system_pods.go:43] waiting for kube-system pods to appear ...
	I0603 12:07:55.729237   72964 system_pods.go:59] 8 kube-system pods found
	I0603 12:07:55.729267   72964 system_pods.go:61] "coredns-7db6d8ff4d-thrfl" [efc31931-5040-4bb9-92e0-cdda477b38b2] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0603 12:07:55.729274   72964 system_pods.go:61] "etcd-embed-certs-725022" [47be7787-e8ae-4a63-9209-943edeec91b6] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0603 12:07:55.729281   72964 system_pods.go:61] "kube-apiserver-embed-certs-725022" [2812f362-ddb8-4f45-bdfe-ba5d90f3b33f] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0603 12:07:55.729287   72964 system_pods.go:61] "kube-controller-manager-embed-certs-725022" [97666e49-31ac-41c0-a49c-0db51d6c07b6] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0603 12:07:55.729294   72964 system_pods.go:61] "kube-proxy-d5ztj" [854c88f3-f0ab-4885-95a0-8134db48fc84] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0603 12:07:55.729300   72964 system_pods.go:61] "kube-scheduler-embed-certs-725022" [df602caf-2ca4-4963-b724-5a6e8de65c78] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0603 12:07:55.729306   72964 system_pods.go:61] "metrics-server-569cc877fc-8jrnd" [3087c05b-9a8e-4bf7-bbe7-79f3c5540bf7] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0603 12:07:55.729313   72964 system_pods.go:61] "storage-provisioner" [68eeb37a-7098-4e87-8384-3399c2bbc583] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0603 12:07:55.729319   72964 system_pods.go:74] duration metric: took 12.368001ms to wait for pod list to return data ...
	I0603 12:07:55.729329   72964 node_conditions.go:102] verifying NodePressure condition ...
	I0603 12:07:55.733006   72964 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0603 12:07:55.733024   72964 node_conditions.go:123] node cpu capacity is 2
	I0603 12:07:55.733033   72964 node_conditions.go:105] duration metric: took 3.699303ms to run NodePressure ...
	I0603 12:07:55.733047   72964 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0603 12:07:56.040149   72964 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0603 12:07:56.050355   72964 kubeadm.go:733] kubelet initialised
	I0603 12:07:56.050376   72964 kubeadm.go:734] duration metric: took 10.199837ms waiting for restarted kubelet to initialise ...
	I0603 12:07:56.050383   72964 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0603 12:07:56.055536   72964 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-thrfl" in "kube-system" namespace to be "Ready" ...
	I0603 12:07:58.062682   72964 pod_ready.go:102] pod "coredns-7db6d8ff4d-thrfl" in "kube-system" namespace has status "Ready":"False"
	I0603 12:07:55.438798   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:07:55.938263   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:07:56.438870   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:07:56.938915   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:07:57.438799   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:07:57.938972   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:07:58.438367   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:07:58.939045   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:07:59.439020   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:07:59.938716   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:07:57.581813   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:08:00.080226   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:08:00.113886   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:08:02.613795   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:08:00.062724   72964 pod_ready.go:102] pod "coredns-7db6d8ff4d-thrfl" in "kube-system" namespace has status "Ready":"False"
	I0603 12:08:02.062937   72964 pod_ready.go:102] pod "coredns-7db6d8ff4d-thrfl" in "kube-system" namespace has status "Ready":"False"
	I0603 12:08:04.565302   72964 pod_ready.go:102] pod "coredns-7db6d8ff4d-thrfl" in "kube-system" namespace has status "Ready":"False"
	I0603 12:08:00.438789   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:00.938973   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:01.439098   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:01.938892   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:02.438978   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:02.938317   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:03.438969   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:03.938274   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:04.438255   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:04.938545   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:02.081713   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:08:04.082219   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:08:05.112940   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:08:07.113191   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:08:07.075333   72964 pod_ready.go:92] pod "coredns-7db6d8ff4d-thrfl" in "kube-system" namespace has status "Ready":"True"
	I0603 12:08:07.075361   72964 pod_ready.go:81] duration metric: took 11.019801293s for pod "coredns-7db6d8ff4d-thrfl" in "kube-system" namespace to be "Ready" ...
	I0603 12:08:07.075375   72964 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-725022" in "kube-system" namespace to be "Ready" ...
	I0603 12:08:08.583435   72964 pod_ready.go:92] pod "etcd-embed-certs-725022" in "kube-system" namespace has status "Ready":"True"
	I0603 12:08:08.583459   72964 pod_ready.go:81] duration metric: took 1.508076213s for pod "etcd-embed-certs-725022" in "kube-system" namespace to be "Ready" ...
	I0603 12:08:08.583468   72964 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-725022" in "kube-system" namespace to be "Ready" ...
	I0603 12:08:08.588791   72964 pod_ready.go:92] pod "kube-apiserver-embed-certs-725022" in "kube-system" namespace has status "Ready":"True"
	I0603 12:08:08.588817   72964 pod_ready.go:81] duration metric: took 5.342068ms for pod "kube-apiserver-embed-certs-725022" in "kube-system" namespace to be "Ready" ...
	I0603 12:08:08.588836   72964 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-725022" in "kube-system" namespace to be "Ready" ...
	I0603 12:08:08.593258   72964 pod_ready.go:92] pod "kube-controller-manager-embed-certs-725022" in "kube-system" namespace has status "Ready":"True"
	I0603 12:08:08.593279   72964 pod_ready.go:81] duration metric: took 4.43483ms for pod "kube-controller-manager-embed-certs-725022" in "kube-system" namespace to be "Ready" ...
	I0603 12:08:08.593292   72964 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-d5ztj" in "kube-system" namespace to be "Ready" ...
	I0603 12:08:08.601106   72964 pod_ready.go:92] pod "kube-proxy-d5ztj" in "kube-system" namespace has status "Ready":"True"
	I0603 12:08:08.601125   72964 pod_ready.go:81] duration metric: took 7.826962ms for pod "kube-proxy-d5ztj" in "kube-system" namespace to be "Ready" ...
	I0603 12:08:08.601133   72964 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-725022" in "kube-system" namespace to be "Ready" ...
	I0603 12:08:08.660242   72964 pod_ready.go:92] pod "kube-scheduler-embed-certs-725022" in "kube-system" namespace has status "Ready":"True"
	I0603 12:08:08.660275   72964 pod_ready.go:81] duration metric: took 59.134528ms for pod "kube-scheduler-embed-certs-725022" in "kube-system" namespace to be "Ready" ...
	I0603 12:08:08.660297   72964 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace to be "Ready" ...
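The pod_ready.go lines above poll each system-critical pod until its PodReady condition reports True, then move on to the next one (metrics-server never gets there in this run). A roughly equivalent client-go sketch; the waitPodReady helper is hypothetical, and the namespace/pod name come from the log:

	// Sketch only: wait for a pod's Ready condition to become True, the way
	// the pod_ready checks in the log do.
	package main

	import (
		"context"
		"fmt"
		"os"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func waitPodReady(cs *kubernetes.Clientset, ns, name string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
			if err == nil {
				for _, c := range pod.Status.Conditions {
					if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
						return nil
					}
				}
			}
			time.Sleep(2 * time.Second)
		}
		return fmt.Errorf("pod %s/%s not Ready within %v", ns, name, timeout)
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		if err := waitPodReady(cs, "kube-system", "etcd-embed-certs-725022", 4*time.Minute); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
	}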
	I0603 12:08:05.438368   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:05.938174   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:06.438995   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:06.939167   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:07.438451   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:07.938651   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:08.438892   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:08.938182   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:09.438548   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:09.938352   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:06.580980   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:08:08.583476   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:08:09.612231   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:08:11.613131   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:08:14.115179   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:08:10.667171   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:08:13.166284   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:08:10.438932   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:10.938156   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:11.438911   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:11.939064   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:12.438578   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:12.938389   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:13.438469   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:13.939000   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:14.438219   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:14.938949   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:11.081492   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:08:13.581052   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:08:16.612649   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:08:19.112795   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:08:15.166468   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:08:17.166591   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:08:19.666737   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:08:15.438709   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:15.938471   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:16.438909   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:16.939131   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:17.438995   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:17.938810   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:18.438615   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:18.938920   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:19.438966   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:19.938696   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:15.581276   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:08:17.581764   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:08:19.582048   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:08:21.116274   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:08:23.613288   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:08:21.667736   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:08:23.667798   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:08:20.438818   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:20.938625   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:21.439129   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:21.938488   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:22.438452   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:22.938328   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:23.438557   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:23.938427   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:24.438391   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:24.939088   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:22.080444   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:08:24.081387   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:08:26.113843   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:08:28.612076   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:08:26.165833   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:08:28.169171   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:08:25.439153   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:25.939073   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:26.438157   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:26.938755   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:27.438244   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:27.938149   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:28.439131   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:28.938855   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:29.439027   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:29.938159   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:26.081716   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:08:28.582162   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:08:30.613632   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:08:33.111746   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:08:30.667602   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:08:33.168233   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:08:30.438727   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:30.938281   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:31.438203   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:31.938903   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:32.438731   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:32.938479   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:33.438133   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 12:08:33.438202   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 12:08:33.480006   73662 cri.go:89] found id: ""
	I0603 12:08:33.480044   73662 logs.go:276] 0 containers: []
	W0603 12:08:33.480056   73662 logs.go:278] No container was found matching "kube-apiserver"
	I0603 12:08:33.480066   73662 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 12:08:33.480126   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 12:08:33.519446   73662 cri.go:89] found id: ""
	I0603 12:08:33.519469   73662 logs.go:276] 0 containers: []
	W0603 12:08:33.519476   73662 logs.go:278] No container was found matching "etcd"
	I0603 12:08:33.519480   73662 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 12:08:33.519536   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 12:08:33.553602   73662 cri.go:89] found id: ""
	I0603 12:08:33.553624   73662 logs.go:276] 0 containers: []
	W0603 12:08:33.553631   73662 logs.go:278] No container was found matching "coredns"
	I0603 12:08:33.553637   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 12:08:33.553692   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 12:08:33.588061   73662 cri.go:89] found id: ""
	I0603 12:08:33.588085   73662 logs.go:276] 0 containers: []
	W0603 12:08:33.588094   73662 logs.go:278] No container was found matching "kube-scheduler"
	I0603 12:08:33.588103   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 12:08:33.588155   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 12:08:33.623960   73662 cri.go:89] found id: ""
	I0603 12:08:33.623983   73662 logs.go:276] 0 containers: []
	W0603 12:08:33.623993   73662 logs.go:278] No container was found matching "kube-proxy"
	I0603 12:08:33.624000   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 12:08:33.624071   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 12:08:33.658829   73662 cri.go:89] found id: ""
	I0603 12:08:33.658873   73662 logs.go:276] 0 containers: []
	W0603 12:08:33.658885   73662 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 12:08:33.658893   73662 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 12:08:33.658956   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 12:08:33.699501   73662 cri.go:89] found id: ""
	I0603 12:08:33.699526   73662 logs.go:276] 0 containers: []
	W0603 12:08:33.699536   73662 logs.go:278] No container was found matching "kindnet"
	I0603 12:08:33.699544   73662 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 12:08:33.699601   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 12:08:33.732293   73662 cri.go:89] found id: ""
	I0603 12:08:33.732327   73662 logs.go:276] 0 containers: []
	W0603 12:08:33.732338   73662 logs.go:278] No container was found matching "kubernetes-dashboard"
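The cri.go/logs.go block above probes each expected component with "crictl ps -a --quiet --name=<component>"; an empty result (logged as found id: "" and "0 containers") means that component has no container yet, which is why every lookup falls through to log gathering. A hypothetical helper performing the same check:

	// Sketch only: list container IDs for a component via crictl and report
	// components with no containers, matching the "No container was found
	// matching ..." warnings in the log.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func listCRIContainers(name string) []string {
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
		if err != nil {
			return nil
		}
		var ids []string
		for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
			if line != "" {
				ids = append(ids, line)
			}
		}
		return ids
	}

	func main() {
		for _, c := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler", "kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard"} {
			ids := listCRIContainers(c)
			if len(ids) == 0 {
				fmt.Printf("no container found matching %q\n", c)
				continue
			}
			fmt.Printf("%s: %v\n", c, ids)
		}
	}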
	I0603 12:08:33.732348   73662 logs.go:123] Gathering logs for kubelet ...
	I0603 12:08:33.732361   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 12:08:33.783990   73662 logs.go:123] Gathering logs for dmesg ...
	I0603 12:08:33.784027   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 12:08:33.800684   73662 logs.go:123] Gathering logs for describe nodes ...
	I0603 12:08:33.800711   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 12:08:33.939661   73662 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 12:08:33.939685   73662 logs.go:123] Gathering logs for CRI-O ...
	I0603 12:08:33.939699   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 12:08:34.006442   73662 logs.go:123] Gathering logs for container status ...
	I0603 12:08:34.006473   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 12:08:31.081400   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:08:33.582139   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:08:35.112488   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:08:37.113080   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:08:35.666988   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:08:38.166862   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:08:36.549129   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:36.562476   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 12:08:36.562536   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 12:08:36.600035   73662 cri.go:89] found id: ""
	I0603 12:08:36.600074   73662 logs.go:276] 0 containers: []
	W0603 12:08:36.600084   73662 logs.go:278] No container was found matching "kube-apiserver"
	I0603 12:08:36.600091   73662 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 12:08:36.600147   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 12:08:36.661954   73662 cri.go:89] found id: ""
	I0603 12:08:36.661981   73662 logs.go:276] 0 containers: []
	W0603 12:08:36.661989   73662 logs.go:278] No container was found matching "etcd"
	I0603 12:08:36.661996   73662 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 12:08:36.662082   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 12:08:36.699538   73662 cri.go:89] found id: ""
	I0603 12:08:36.699561   73662 logs.go:276] 0 containers: []
	W0603 12:08:36.699569   73662 logs.go:278] No container was found matching "coredns"
	I0603 12:08:36.699574   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 12:08:36.699619   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 12:08:36.735256   73662 cri.go:89] found id: ""
	I0603 12:08:36.735283   73662 logs.go:276] 0 containers: []
	W0603 12:08:36.735291   73662 logs.go:278] No container was found matching "kube-scheduler"
	I0603 12:08:36.735296   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 12:08:36.735356   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 12:08:36.779862   73662 cri.go:89] found id: ""
	I0603 12:08:36.779888   73662 logs.go:276] 0 containers: []
	W0603 12:08:36.779895   73662 logs.go:278] No container was found matching "kube-proxy"
	I0603 12:08:36.779900   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 12:08:36.779946   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 12:08:36.818146   73662 cri.go:89] found id: ""
	I0603 12:08:36.818180   73662 logs.go:276] 0 containers: []
	W0603 12:08:36.818190   73662 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 12:08:36.818198   73662 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 12:08:36.818256   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 12:08:36.855408   73662 cri.go:89] found id: ""
	I0603 12:08:36.855436   73662 logs.go:276] 0 containers: []
	W0603 12:08:36.855447   73662 logs.go:278] No container was found matching "kindnet"
	I0603 12:08:36.855455   73662 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 12:08:36.855521   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 12:08:36.891656   73662 cri.go:89] found id: ""
	I0603 12:08:36.891686   73662 logs.go:276] 0 containers: []
	W0603 12:08:36.891697   73662 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 12:08:36.891709   73662 logs.go:123] Gathering logs for container status ...
	I0603 12:08:36.891725   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 12:08:36.937992   73662 logs.go:123] Gathering logs for kubelet ...
	I0603 12:08:36.938025   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 12:08:36.992422   73662 logs.go:123] Gathering logs for dmesg ...
	I0603 12:08:36.992456   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 12:08:37.007064   73662 logs.go:123] Gathering logs for describe nodes ...
	I0603 12:08:37.007093   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 12:08:37.088103   73662 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 12:08:37.088124   73662 logs.go:123] Gathering logs for CRI-O ...
	I0603 12:08:37.088136   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 12:08:39.660794   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:39.674617   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 12:08:39.674694   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 12:08:39.711446   73662 cri.go:89] found id: ""
	I0603 12:08:39.711482   73662 logs.go:276] 0 containers: []
	W0603 12:08:39.711493   73662 logs.go:278] No container was found matching "kube-apiserver"
	I0603 12:08:39.711501   73662 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 12:08:39.711565   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 12:08:39.745918   73662 cri.go:89] found id: ""
	I0603 12:08:39.745947   73662 logs.go:276] 0 containers: []
	W0603 12:08:39.745957   73662 logs.go:278] No container was found matching "etcd"
	I0603 12:08:39.745964   73662 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 12:08:39.746013   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 12:08:39.780713   73662 cri.go:89] found id: ""
	I0603 12:08:39.780739   73662 logs.go:276] 0 containers: []
	W0603 12:08:39.780760   73662 logs.go:278] No container was found matching "coredns"
	I0603 12:08:39.780777   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 12:08:39.780839   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 12:08:39.815657   73662 cri.go:89] found id: ""
	I0603 12:08:39.815685   73662 logs.go:276] 0 containers: []
	W0603 12:08:39.815696   73662 logs.go:278] No container was found matching "kube-scheduler"
	I0603 12:08:39.815703   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 12:08:39.815769   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 12:08:39.849403   73662 cri.go:89] found id: ""
	I0603 12:08:39.849439   73662 logs.go:276] 0 containers: []
	W0603 12:08:39.849449   73662 logs.go:278] No container was found matching "kube-proxy"
	I0603 12:08:39.849456   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 12:08:39.849524   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 12:08:39.884830   73662 cri.go:89] found id: ""
	I0603 12:08:39.884876   73662 logs.go:276] 0 containers: []
	W0603 12:08:39.884887   73662 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 12:08:39.884894   73662 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 12:08:39.884954   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 12:08:39.917820   73662 cri.go:89] found id: ""
	I0603 12:08:39.917853   73662 logs.go:276] 0 containers: []
	W0603 12:08:39.917863   73662 logs.go:278] No container was found matching "kindnet"
	I0603 12:08:39.917871   73662 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 12:08:39.917928   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 12:08:39.955294   73662 cri.go:89] found id: ""
	I0603 12:08:39.955330   73662 logs.go:276] 0 containers: []
	W0603 12:08:39.955340   73662 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 12:08:39.955350   73662 logs.go:123] Gathering logs for container status ...
	I0603 12:08:39.955364   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 12:08:39.997553   73662 logs.go:123] Gathering logs for kubelet ...
	I0603 12:08:39.997577   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 12:08:40.052216   73662 logs.go:123] Gathering logs for dmesg ...
	I0603 12:08:40.052251   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 12:08:40.066377   73662 logs.go:123] Gathering logs for describe nodes ...
	I0603 12:08:40.066405   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0603 12:08:36.080739   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:08:38.580681   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:08:39.611998   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:08:41.613058   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:08:44.112634   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:08:40.168134   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:08:42.666329   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:08:44.666738   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	W0603 12:08:40.145631   73662 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 12:08:40.145653   73662 logs.go:123] Gathering logs for CRI-O ...
	I0603 12:08:40.145668   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 12:08:42.718782   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:42.732121   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 12:08:42.732197   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 12:08:42.766418   73662 cri.go:89] found id: ""
	I0603 12:08:42.766443   73662 logs.go:276] 0 containers: []
	W0603 12:08:42.766451   73662 logs.go:278] No container was found matching "kube-apiserver"
	I0603 12:08:42.766456   73662 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 12:08:42.766503   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 12:08:42.809790   73662 cri.go:89] found id: ""
	I0603 12:08:42.809821   73662 logs.go:276] 0 containers: []
	W0603 12:08:42.809830   73662 logs.go:278] No container was found matching "etcd"
	I0603 12:08:42.809836   73662 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 12:08:42.809893   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 12:08:42.843410   73662 cri.go:89] found id: ""
	I0603 12:08:42.843439   73662 logs.go:276] 0 containers: []
	W0603 12:08:42.843446   73662 logs.go:278] No container was found matching "coredns"
	I0603 12:08:42.843456   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 12:08:42.843510   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 12:08:42.879150   73662 cri.go:89] found id: ""
	I0603 12:08:42.879177   73662 logs.go:276] 0 containers: []
	W0603 12:08:42.879186   73662 logs.go:278] No container was found matching "kube-scheduler"
	I0603 12:08:42.879193   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 12:08:42.879256   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 12:08:42.914565   73662 cri.go:89] found id: ""
	I0603 12:08:42.914598   73662 logs.go:276] 0 containers: []
	W0603 12:08:42.914609   73662 logs.go:278] No container was found matching "kube-proxy"
	I0603 12:08:42.914616   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 12:08:42.914680   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 12:08:42.949467   73662 cri.go:89] found id: ""
	I0603 12:08:42.949496   73662 logs.go:276] 0 containers: []
	W0603 12:08:42.949506   73662 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 12:08:42.949513   73662 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 12:08:42.949563   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 12:08:42.984235   73662 cri.go:89] found id: ""
	I0603 12:08:42.984257   73662 logs.go:276] 0 containers: []
	W0603 12:08:42.984264   73662 logs.go:278] No container was found matching "kindnet"
	I0603 12:08:42.984269   73662 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 12:08:42.984314   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 12:08:43.027786   73662 cri.go:89] found id: ""
	I0603 12:08:43.027816   73662 logs.go:276] 0 containers: []
	W0603 12:08:43.027827   73662 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 12:08:43.027838   73662 logs.go:123] Gathering logs for kubelet ...
	I0603 12:08:43.027852   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 12:08:43.099184   73662 logs.go:123] Gathering logs for dmesg ...
	I0603 12:08:43.099212   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 12:08:43.124733   73662 logs.go:123] Gathering logs for describe nodes ...
	I0603 12:08:43.124755   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 12:08:43.194716   73662 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 12:08:43.194741   73662 logs.go:123] Gathering logs for CRI-O ...
	I0603 12:08:43.194759   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 12:08:43.275948   73662 logs.go:123] Gathering logs for container status ...
	I0603 12:08:43.275982   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 12:08:41.080968   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:08:43.081892   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:08:45.082261   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:08:46.113795   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:08:48.612577   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:08:47.166497   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:08:49.167122   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:08:45.819178   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:45.832301   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 12:08:45.832391   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 12:08:45.867947   73662 cri.go:89] found id: ""
	I0603 12:08:45.867979   73662 logs.go:276] 0 containers: []
	W0603 12:08:45.867990   73662 logs.go:278] No container was found matching "kube-apiserver"
	I0603 12:08:45.867998   73662 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 12:08:45.868050   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 12:08:45.909498   73662 cri.go:89] found id: ""
	I0603 12:08:45.909529   73662 logs.go:276] 0 containers: []
	W0603 12:08:45.909541   73662 logs.go:278] No container was found matching "etcd"
	I0603 12:08:45.909552   73662 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 12:08:45.909614   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 12:08:45.942313   73662 cri.go:89] found id: ""
	I0603 12:08:45.942343   73662 logs.go:276] 0 containers: []
	W0603 12:08:45.942353   73662 logs.go:278] No container was found matching "coredns"
	I0603 12:08:45.942361   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 12:08:45.942425   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 12:08:45.976217   73662 cri.go:89] found id: ""
	I0603 12:08:45.976246   73662 logs.go:276] 0 containers: []
	W0603 12:08:45.976254   73662 logs.go:278] No container was found matching "kube-scheduler"
	I0603 12:08:45.976260   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 12:08:45.976306   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 12:08:46.010553   73662 cri.go:89] found id: ""
	I0603 12:08:46.010583   73662 logs.go:276] 0 containers: []
	W0603 12:08:46.010593   73662 logs.go:278] No container was found matching "kube-proxy"
	I0603 12:08:46.010599   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 12:08:46.010675   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 12:08:46.048459   73662 cri.go:89] found id: ""
	I0603 12:08:46.048481   73662 logs.go:276] 0 containers: []
	W0603 12:08:46.048489   73662 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 12:08:46.048495   73662 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 12:08:46.048540   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 12:08:46.084823   73662 cri.go:89] found id: ""
	I0603 12:08:46.084852   73662 logs.go:276] 0 containers: []
	W0603 12:08:46.084862   73662 logs.go:278] No container was found matching "kindnet"
	I0603 12:08:46.084869   73662 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 12:08:46.084920   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 12:08:46.129011   73662 cri.go:89] found id: ""
	I0603 12:08:46.129036   73662 logs.go:276] 0 containers: []
	W0603 12:08:46.129046   73662 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 12:08:46.129055   73662 logs.go:123] Gathering logs for dmesg ...
	I0603 12:08:46.129069   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 12:08:46.144145   73662 logs.go:123] Gathering logs for describe nodes ...
	I0603 12:08:46.144179   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 12:08:46.213800   73662 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 12:08:46.213826   73662 logs.go:123] Gathering logs for CRI-O ...
	I0603 12:08:46.213841   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 12:08:46.294423   73662 logs.go:123] Gathering logs for container status ...
	I0603 12:08:46.294453   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 12:08:46.334408   73662 logs.go:123] Gathering logs for kubelet ...
	I0603 12:08:46.334436   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 12:08:48.888798   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:48.901815   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 12:08:48.901876   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 12:08:48.935266   73662 cri.go:89] found id: ""
	I0603 12:08:48.935290   73662 logs.go:276] 0 containers: []
	W0603 12:08:48.935301   73662 logs.go:278] No container was found matching "kube-apiserver"
	I0603 12:08:48.935308   73662 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 12:08:48.935375   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 12:08:48.969640   73662 cri.go:89] found id: ""
	I0603 12:08:48.969666   73662 logs.go:276] 0 containers: []
	W0603 12:08:48.969673   73662 logs.go:278] No container was found matching "etcd"
	I0603 12:08:48.969678   73662 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 12:08:48.969739   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 12:08:49.003697   73662 cri.go:89] found id: ""
	I0603 12:08:49.003725   73662 logs.go:276] 0 containers: []
	W0603 12:08:49.003736   73662 logs.go:278] No container was found matching "coredns"
	I0603 12:08:49.003743   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 12:08:49.003800   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 12:08:49.037808   73662 cri.go:89] found id: ""
	I0603 12:08:49.037837   73662 logs.go:276] 0 containers: []
	W0603 12:08:49.037847   73662 logs.go:278] No container was found matching "kube-scheduler"
	I0603 12:08:49.037879   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 12:08:49.037947   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 12:08:49.071844   73662 cri.go:89] found id: ""
	I0603 12:08:49.071875   73662 logs.go:276] 0 containers: []
	W0603 12:08:49.071885   73662 logs.go:278] No container was found matching "kube-proxy"
	I0603 12:08:49.071892   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 12:08:49.071952   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 12:08:49.107907   73662 cri.go:89] found id: ""
	I0603 12:08:49.107934   73662 logs.go:276] 0 containers: []
	W0603 12:08:49.107945   73662 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 12:08:49.107952   73662 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 12:08:49.108012   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 12:08:49.144847   73662 cri.go:89] found id: ""
	I0603 12:08:49.144869   73662 logs.go:276] 0 containers: []
	W0603 12:08:49.144876   73662 logs.go:278] No container was found matching "kindnet"
	I0603 12:08:49.144882   73662 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 12:08:49.144944   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 12:08:49.183910   73662 cri.go:89] found id: ""
	I0603 12:08:49.183931   73662 logs.go:276] 0 containers: []
	W0603 12:08:49.183940   73662 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 12:08:49.183951   73662 logs.go:123] Gathering logs for kubelet ...
	I0603 12:08:49.183964   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 12:08:49.237344   73662 logs.go:123] Gathering logs for dmesg ...
	I0603 12:08:49.237376   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 12:08:49.251612   73662 logs.go:123] Gathering logs for describe nodes ...
	I0603 12:08:49.251636   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 12:08:49.317211   73662 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 12:08:49.317236   73662 logs.go:123] Gathering logs for CRI-O ...
	I0603 12:08:49.317251   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 12:08:49.394414   73662 logs.go:123] Gathering logs for container status ...
	I0603 12:08:49.394455   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 12:08:47.581577   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:08:50.080726   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:08:51.112151   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:08:53.112224   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:08:51.666596   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:08:54.166060   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:08:51.937686   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:51.950390   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 12:08:51.950466   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 12:08:51.984341   73662 cri.go:89] found id: ""
	I0603 12:08:51.984365   73662 logs.go:276] 0 containers: []
	W0603 12:08:51.984372   73662 logs.go:278] No container was found matching "kube-apiserver"
	I0603 12:08:51.984378   73662 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 12:08:51.984426   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 12:08:52.017828   73662 cri.go:89] found id: ""
	I0603 12:08:52.017857   73662 logs.go:276] 0 containers: []
	W0603 12:08:52.017866   73662 logs.go:278] No container was found matching "etcd"
	I0603 12:08:52.017872   73662 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 12:08:52.017918   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 12:08:52.057283   73662 cri.go:89] found id: ""
	I0603 12:08:52.057314   73662 logs.go:276] 0 containers: []
	W0603 12:08:52.057324   73662 logs.go:278] No container was found matching "coredns"
	I0603 12:08:52.057331   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 12:08:52.057391   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 12:08:52.102270   73662 cri.go:89] found id: ""
	I0603 12:08:52.102303   73662 logs.go:276] 0 containers: []
	W0603 12:08:52.102313   73662 logs.go:278] No container was found matching "kube-scheduler"
	I0603 12:08:52.102321   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 12:08:52.102383   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 12:08:52.137361   73662 cri.go:89] found id: ""
	I0603 12:08:52.137386   73662 logs.go:276] 0 containers: []
	W0603 12:08:52.137393   73662 logs.go:278] No container was found matching "kube-proxy"
	I0603 12:08:52.137399   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 12:08:52.137463   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 12:08:52.171765   73662 cri.go:89] found id: ""
	I0603 12:08:52.171791   73662 logs.go:276] 0 containers: []
	W0603 12:08:52.171800   73662 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 12:08:52.171807   73662 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 12:08:52.171854   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 12:08:52.204688   73662 cri.go:89] found id: ""
	I0603 12:08:52.204715   73662 logs.go:276] 0 containers: []
	W0603 12:08:52.204722   73662 logs.go:278] No container was found matching "kindnet"
	I0603 12:08:52.204728   73662 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 12:08:52.204780   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 12:08:52.242547   73662 cri.go:89] found id: ""
	I0603 12:08:52.242571   73662 logs.go:276] 0 containers: []
	W0603 12:08:52.242579   73662 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 12:08:52.242586   73662 logs.go:123] Gathering logs for CRI-O ...
	I0603 12:08:52.242599   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 12:08:52.319089   73662 logs.go:123] Gathering logs for container status ...
	I0603 12:08:52.319122   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 12:08:52.360879   73662 logs.go:123] Gathering logs for kubelet ...
	I0603 12:08:52.360910   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 12:08:52.413601   73662 logs.go:123] Gathering logs for dmesg ...
	I0603 12:08:52.413641   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 12:08:52.428336   73662 logs.go:123] Gathering logs for describe nodes ...
	I0603 12:08:52.428370   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 12:08:52.500089   73662 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 12:08:55.001244   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:55.015217   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 12:08:55.015286   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 12:08:55.055825   73662 cri.go:89] found id: ""
	I0603 12:08:55.055906   73662 logs.go:276] 0 containers: []
	W0603 12:08:55.055922   73662 logs.go:278] No container was found matching "kube-apiserver"
	I0603 12:08:55.055930   73662 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 12:08:55.055993   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 12:08:52.080957   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:08:54.081055   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:08:55.113083   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:08:57.612727   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:08:56.166588   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:08:58.167503   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:08:55.092456   73662 cri.go:89] found id: ""
	I0603 12:08:55.093688   73662 logs.go:276] 0 containers: []
	W0603 12:08:55.093711   73662 logs.go:278] No container was found matching "etcd"
	I0603 12:08:55.093723   73662 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 12:08:55.093787   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 12:08:55.131165   73662 cri.go:89] found id: ""
	I0603 12:08:55.131193   73662 logs.go:276] 0 containers: []
	W0603 12:08:55.131203   73662 logs.go:278] No container was found matching "coredns"
	I0603 12:08:55.131210   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 12:08:55.131260   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 12:08:55.168170   73662 cri.go:89] found id: ""
	I0603 12:08:55.168188   73662 logs.go:276] 0 containers: []
	W0603 12:08:55.168194   73662 logs.go:278] No container was found matching "kube-scheduler"
	I0603 12:08:55.168200   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 12:08:55.168247   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 12:08:55.203409   73662 cri.go:89] found id: ""
	I0603 12:08:55.203434   73662 logs.go:276] 0 containers: []
	W0603 12:08:55.203441   73662 logs.go:278] No container was found matching "kube-proxy"
	I0603 12:08:55.203446   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 12:08:55.203491   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 12:08:55.239971   73662 cri.go:89] found id: ""
	I0603 12:08:55.239997   73662 logs.go:276] 0 containers: []
	W0603 12:08:55.240009   73662 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 12:08:55.240016   73662 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 12:08:55.240077   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 12:08:55.275115   73662 cri.go:89] found id: ""
	I0603 12:08:55.275144   73662 logs.go:276] 0 containers: []
	W0603 12:08:55.275154   73662 logs.go:278] No container was found matching "kindnet"
	I0603 12:08:55.275162   73662 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 12:08:55.275221   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 12:08:55.309384   73662 cri.go:89] found id: ""
	I0603 12:08:55.309414   73662 logs.go:276] 0 containers: []
	W0603 12:08:55.309425   73662 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 12:08:55.309435   73662 logs.go:123] Gathering logs for dmesg ...
	I0603 12:08:55.309451   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 12:08:55.323455   73662 logs.go:123] Gathering logs for describe nodes ...
	I0603 12:08:55.323485   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 12:08:55.397581   73662 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 12:08:55.397606   73662 logs.go:123] Gathering logs for CRI-O ...
	I0603 12:08:55.397617   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 12:08:55.473046   73662 logs.go:123] Gathering logs for container status ...
	I0603 12:08:55.473079   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 12:08:55.515248   73662 logs.go:123] Gathering logs for kubelet ...
	I0603 12:08:55.515282   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 12:08:58.067416   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:08:58.081175   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 12:08:58.081241   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 12:08:58.121654   73662 cri.go:89] found id: ""
	I0603 12:08:58.121680   73662 logs.go:276] 0 containers: []
	W0603 12:08:58.121691   73662 logs.go:278] No container was found matching "kube-apiserver"
	I0603 12:08:58.121698   73662 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 12:08:58.121774   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 12:08:58.159599   73662 cri.go:89] found id: ""
	I0603 12:08:58.159623   73662 logs.go:276] 0 containers: []
	W0603 12:08:58.159631   73662 logs.go:278] No container was found matching "etcd"
	I0603 12:08:58.159636   73662 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 12:08:58.159689   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 12:08:58.197518   73662 cri.go:89] found id: ""
	I0603 12:08:58.197545   73662 logs.go:276] 0 containers: []
	W0603 12:08:58.197553   73662 logs.go:278] No container was found matching "coredns"
	I0603 12:08:58.197558   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 12:08:58.197603   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 12:08:58.232433   73662 cri.go:89] found id: ""
	I0603 12:08:58.232463   73662 logs.go:276] 0 containers: []
	W0603 12:08:58.232474   73662 logs.go:278] No container was found matching "kube-scheduler"
	I0603 12:08:58.232479   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 12:08:58.232529   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 12:08:58.268209   73662 cri.go:89] found id: ""
	I0603 12:08:58.268234   73662 logs.go:276] 0 containers: []
	W0603 12:08:58.268242   73662 logs.go:278] No container was found matching "kube-proxy"
	I0603 12:08:58.268248   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 12:08:58.268307   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 12:08:58.302091   73662 cri.go:89] found id: ""
	I0603 12:08:58.302118   73662 logs.go:276] 0 containers: []
	W0603 12:08:58.302129   73662 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 12:08:58.302136   73662 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 12:08:58.302195   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 12:08:58.336539   73662 cri.go:89] found id: ""
	I0603 12:08:58.336567   73662 logs.go:276] 0 containers: []
	W0603 12:08:58.336574   73662 logs.go:278] No container was found matching "kindnet"
	I0603 12:08:58.336579   73662 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 12:08:58.336627   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 12:08:58.369263   73662 cri.go:89] found id: ""
	I0603 12:08:58.369294   73662 logs.go:276] 0 containers: []
	W0603 12:08:58.369305   73662 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 12:08:58.369316   73662 logs.go:123] Gathering logs for container status ...
	I0603 12:08:58.369329   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 12:08:58.408651   73662 logs.go:123] Gathering logs for kubelet ...
	I0603 12:08:58.408683   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 12:08:58.463551   73662 logs.go:123] Gathering logs for dmesg ...
	I0603 12:08:58.463578   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 12:08:58.478781   73662 logs.go:123] Gathering logs for describe nodes ...
	I0603 12:08:58.478808   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 12:08:58.556604   73662 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 12:08:58.556631   73662 logs.go:123] Gathering logs for CRI-O ...
	I0603 12:08:58.556646   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 12:08:56.580284   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:08:58.582526   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:09:00.112533   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:09:02.113462   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:09:00.666282   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:09:02.666684   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:09:04.666822   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:09:01.135368   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:09:01.148448   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 12:09:01.148517   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 12:09:01.184913   73662 cri.go:89] found id: ""
	I0603 12:09:01.184936   73662 logs.go:276] 0 containers: []
	W0603 12:09:01.184947   73662 logs.go:278] No container was found matching "kube-apiserver"
	I0603 12:09:01.184955   73662 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 12:09:01.185017   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 12:09:01.221508   73662 cri.go:89] found id: ""
	I0603 12:09:01.221538   73662 logs.go:276] 0 containers: []
	W0603 12:09:01.221547   73662 logs.go:278] No container was found matching "etcd"
	I0603 12:09:01.221552   73662 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 12:09:01.221613   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 12:09:01.256588   73662 cri.go:89] found id: ""
	I0603 12:09:01.256617   73662 logs.go:276] 0 containers: []
	W0603 12:09:01.256627   73662 logs.go:278] No container was found matching "coredns"
	I0603 12:09:01.256634   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 12:09:01.256696   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 12:09:01.292874   73662 cri.go:89] found id: ""
	I0603 12:09:01.292898   73662 logs.go:276] 0 containers: []
	W0603 12:09:01.292906   73662 logs.go:278] No container was found matching "kube-scheduler"
	I0603 12:09:01.292913   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 12:09:01.292957   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 12:09:01.330607   73662 cri.go:89] found id: ""
	I0603 12:09:01.330636   73662 logs.go:276] 0 containers: []
	W0603 12:09:01.330646   73662 logs.go:278] No container was found matching "kube-proxy"
	I0603 12:09:01.330652   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 12:09:01.330698   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 12:09:01.366053   73662 cri.go:89] found id: ""
	I0603 12:09:01.366090   73662 logs.go:276] 0 containers: []
	W0603 12:09:01.366102   73662 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 12:09:01.366110   73662 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 12:09:01.366168   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 12:09:01.403446   73662 cri.go:89] found id: ""
	I0603 12:09:01.403476   73662 logs.go:276] 0 containers: []
	W0603 12:09:01.403489   73662 logs.go:278] No container was found matching "kindnet"
	I0603 12:09:01.403495   73662 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 12:09:01.403558   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 12:09:01.445413   73662 cri.go:89] found id: ""
	I0603 12:09:01.445444   73662 logs.go:276] 0 containers: []
	W0603 12:09:01.445456   73662 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 12:09:01.445467   73662 logs.go:123] Gathering logs for describe nodes ...
	I0603 12:09:01.445485   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 12:09:01.521804   73662 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 12:09:01.521831   73662 logs.go:123] Gathering logs for CRI-O ...
	I0603 12:09:01.521846   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 12:09:01.601841   73662 logs.go:123] Gathering logs for container status ...
	I0603 12:09:01.601869   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 12:09:01.642642   73662 logs.go:123] Gathering logs for kubelet ...
	I0603 12:09:01.642685   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 12:09:01.700512   73662 logs.go:123] Gathering logs for dmesg ...
	I0603 12:09:01.700547   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 12:09:04.216853   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:09:04.229827   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 12:09:04.229910   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 12:09:04.265194   73662 cri.go:89] found id: ""
	I0603 12:09:04.265223   73662 logs.go:276] 0 containers: []
	W0603 12:09:04.265230   73662 logs.go:278] No container was found matching "kube-apiserver"
	I0603 12:09:04.265235   73662 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 12:09:04.265294   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 12:09:04.301157   73662 cri.go:89] found id: ""
	I0603 12:09:04.301186   73662 logs.go:276] 0 containers: []
	W0603 12:09:04.301193   73662 logs.go:278] No container was found matching "etcd"
	I0603 12:09:04.301199   73662 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 12:09:04.301249   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 12:09:04.335992   73662 cri.go:89] found id: ""
	I0603 12:09:04.336014   73662 logs.go:276] 0 containers: []
	W0603 12:09:04.336024   73662 logs.go:278] No container was found matching "coredns"
	I0603 12:09:04.336031   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 12:09:04.336090   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 12:09:04.371342   73662 cri.go:89] found id: ""
	I0603 12:09:04.371375   73662 logs.go:276] 0 containers: []
	W0603 12:09:04.371386   73662 logs.go:278] No container was found matching "kube-scheduler"
	I0603 12:09:04.371393   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 12:09:04.371452   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 12:09:04.406439   73662 cri.go:89] found id: ""
	I0603 12:09:04.406466   73662 logs.go:276] 0 containers: []
	W0603 12:09:04.406476   73662 logs.go:278] No container was found matching "kube-proxy"
	I0603 12:09:04.406483   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 12:09:04.406540   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 12:09:04.438426   73662 cri.go:89] found id: ""
	I0603 12:09:04.438448   73662 logs.go:276] 0 containers: []
	W0603 12:09:04.438458   73662 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 12:09:04.438467   73662 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 12:09:04.438525   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 12:09:04.471465   73662 cri.go:89] found id: ""
	I0603 12:09:04.471494   73662 logs.go:276] 0 containers: []
	W0603 12:09:04.471504   73662 logs.go:278] No container was found matching "kindnet"
	I0603 12:09:04.471512   73662 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 12:09:04.471576   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 12:09:04.507994   73662 cri.go:89] found id: ""
	I0603 12:09:04.508016   73662 logs.go:276] 0 containers: []
	W0603 12:09:04.508023   73662 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 12:09:04.508031   73662 logs.go:123] Gathering logs for kubelet ...
	I0603 12:09:04.508042   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 12:09:04.558973   73662 logs.go:123] Gathering logs for dmesg ...
	I0603 12:09:04.559007   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 12:09:04.576157   73662 logs.go:123] Gathering logs for describe nodes ...
	I0603 12:09:04.576190   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 12:09:04.653262   73662 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 12:09:04.653282   73662 logs.go:123] Gathering logs for CRI-O ...
	I0603 12:09:04.653293   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 12:09:04.732195   73662 logs.go:123] Gathering logs for container status ...
	I0603 12:09:04.732228   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 12:09:01.081232   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:09:03.083123   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:09:05.083243   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:09:04.612842   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:09:07.113160   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:09:06.667720   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:09:09.167160   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:09:07.282253   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:09:07.296478   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 12:09:07.296549   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 12:09:07.331591   73662 cri.go:89] found id: ""
	I0603 12:09:07.331614   73662 logs.go:276] 0 containers: []
	W0603 12:09:07.331621   73662 logs.go:278] No container was found matching "kube-apiserver"
	I0603 12:09:07.331626   73662 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 12:09:07.331676   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 12:09:07.367333   73662 cri.go:89] found id: ""
	I0603 12:09:07.367356   73662 logs.go:276] 0 containers: []
	W0603 12:09:07.367363   73662 logs.go:278] No container was found matching "etcd"
	I0603 12:09:07.367369   73662 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 12:09:07.367426   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 12:09:07.406446   73662 cri.go:89] found id: ""
	I0603 12:09:07.406471   73662 logs.go:276] 0 containers: []
	W0603 12:09:07.406479   73662 logs.go:278] No container was found matching "coredns"
	I0603 12:09:07.406485   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 12:09:07.406544   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 12:09:07.441610   73662 cri.go:89] found id: ""
	I0603 12:09:07.441632   73662 logs.go:276] 0 containers: []
	W0603 12:09:07.441640   73662 logs.go:278] No container was found matching "kube-scheduler"
	I0603 12:09:07.441646   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 12:09:07.441699   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 12:09:07.476479   73662 cri.go:89] found id: ""
	I0603 12:09:07.476501   73662 logs.go:276] 0 containers: []
	W0603 12:09:07.476508   73662 logs.go:278] No container was found matching "kube-proxy"
	I0603 12:09:07.476513   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 12:09:07.476586   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 12:09:07.513712   73662 cri.go:89] found id: ""
	I0603 12:09:07.513740   73662 logs.go:276] 0 containers: []
	W0603 12:09:07.513750   73662 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 12:09:07.513758   73662 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 12:09:07.513816   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 12:09:07.552169   73662 cri.go:89] found id: ""
	I0603 12:09:07.552195   73662 logs.go:276] 0 containers: []
	W0603 12:09:07.552206   73662 logs.go:278] No container was found matching "kindnet"
	I0603 12:09:07.552213   73662 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 12:09:07.552274   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 12:09:07.591926   73662 cri.go:89] found id: ""
	I0603 12:09:07.591950   73662 logs.go:276] 0 containers: []
	W0603 12:09:07.591956   73662 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 12:09:07.591963   73662 logs.go:123] Gathering logs for describe nodes ...
	I0603 12:09:07.591974   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 12:09:07.672408   73662 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 12:09:07.672429   73662 logs.go:123] Gathering logs for CRI-O ...
	I0603 12:09:07.672440   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 12:09:07.752948   73662 logs.go:123] Gathering logs for container status ...
	I0603 12:09:07.752980   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 12:09:07.791942   73662 logs.go:123] Gathering logs for kubelet ...
	I0603 12:09:07.791975   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 12:09:07.849187   73662 logs.go:123] Gathering logs for dmesg ...
	I0603 12:09:07.849222   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 12:09:07.586314   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:09:10.082310   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:09:09.612757   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:09:11.612893   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:09:13.613395   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:09:11.669965   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:09:14.165493   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:09:10.364466   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:09:10.377895   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 12:09:10.377967   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 12:09:10.412039   73662 cri.go:89] found id: ""
	I0603 12:09:10.412062   73662 logs.go:276] 0 containers: []
	W0603 12:09:10.412070   73662 logs.go:278] No container was found matching "kube-apiserver"
	I0603 12:09:10.412082   73662 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 12:09:10.412137   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 12:09:10.444562   73662 cri.go:89] found id: ""
	I0603 12:09:10.444585   73662 logs.go:276] 0 containers: []
	W0603 12:09:10.444594   73662 logs.go:278] No container was found matching "etcd"
	I0603 12:09:10.444602   73662 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 12:09:10.444657   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 12:09:10.479651   73662 cri.go:89] found id: ""
	I0603 12:09:10.479674   73662 logs.go:276] 0 containers: []
	W0603 12:09:10.479681   73662 logs.go:278] No container was found matching "coredns"
	I0603 12:09:10.479687   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 12:09:10.479742   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 12:09:10.518978   73662 cri.go:89] found id: ""
	I0603 12:09:10.519000   73662 logs.go:276] 0 containers: []
	W0603 12:09:10.519011   73662 logs.go:278] No container was found matching "kube-scheduler"
	I0603 12:09:10.519019   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 12:09:10.519100   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 12:09:10.553848   73662 cri.go:89] found id: ""
	I0603 12:09:10.553873   73662 logs.go:276] 0 containers: []
	W0603 12:09:10.553880   73662 logs.go:278] No container was found matching "kube-proxy"
	I0603 12:09:10.553885   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 12:09:10.553933   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 12:09:10.592081   73662 cri.go:89] found id: ""
	I0603 12:09:10.592107   73662 logs.go:276] 0 containers: []
	W0603 12:09:10.592116   73662 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 12:09:10.592124   73662 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 12:09:10.592176   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 12:09:10.629138   73662 cri.go:89] found id: ""
	I0603 12:09:10.629164   73662 logs.go:276] 0 containers: []
	W0603 12:09:10.629175   73662 logs.go:278] No container was found matching "kindnet"
	I0603 12:09:10.629181   73662 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 12:09:10.629233   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 12:09:10.666660   73662 cri.go:89] found id: ""
	I0603 12:09:10.666686   73662 logs.go:276] 0 containers: []
	W0603 12:09:10.666695   73662 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 12:09:10.666705   73662 logs.go:123] Gathering logs for CRI-O ...
	I0603 12:09:10.666723   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 12:09:10.747856   73662 logs.go:123] Gathering logs for container status ...
	I0603 12:09:10.747892   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 12:09:10.792403   73662 logs.go:123] Gathering logs for kubelet ...
	I0603 12:09:10.792442   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 12:09:10.844484   73662 logs.go:123] Gathering logs for dmesg ...
	I0603 12:09:10.844520   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 12:09:10.857822   73662 logs.go:123] Gathering logs for describe nodes ...
	I0603 12:09:10.857848   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 12:09:10.927434   73662 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 12:09:13.428260   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:09:13.442354   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 12:09:13.442418   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 12:09:13.480908   73662 cri.go:89] found id: ""
	I0603 12:09:13.480938   73662 logs.go:276] 0 containers: []
	W0603 12:09:13.480948   73662 logs.go:278] No container was found matching "kube-apiserver"
	I0603 12:09:13.480953   73662 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 12:09:13.481002   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 12:09:13.513942   73662 cri.go:89] found id: ""
	I0603 12:09:13.513966   73662 logs.go:276] 0 containers: []
	W0603 12:09:13.513979   73662 logs.go:278] No container was found matching "etcd"
	I0603 12:09:13.513985   73662 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 12:09:13.514042   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 12:09:13.548849   73662 cri.go:89] found id: ""
	I0603 12:09:13.548881   73662 logs.go:276] 0 containers: []
	W0603 12:09:13.548892   73662 logs.go:278] No container was found matching "coredns"
	I0603 12:09:13.548900   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 12:09:13.548961   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 12:09:13.587857   73662 cri.go:89] found id: ""
	I0603 12:09:13.587880   73662 logs.go:276] 0 containers: []
	W0603 12:09:13.587887   73662 logs.go:278] No container was found matching "kube-scheduler"
	I0603 12:09:13.587893   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 12:09:13.587941   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 12:09:13.623386   73662 cri.go:89] found id: ""
	I0603 12:09:13.623408   73662 logs.go:276] 0 containers: []
	W0603 12:09:13.623415   73662 logs.go:278] No container was found matching "kube-proxy"
	I0603 12:09:13.623421   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 12:09:13.623473   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 12:09:13.662721   73662 cri.go:89] found id: ""
	I0603 12:09:13.662755   73662 logs.go:276] 0 containers: []
	W0603 12:09:13.662774   73662 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 12:09:13.662782   73662 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 12:09:13.662847   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 12:09:13.697244   73662 cri.go:89] found id: ""
	I0603 12:09:13.697272   73662 logs.go:276] 0 containers: []
	W0603 12:09:13.697279   73662 logs.go:278] No container was found matching "kindnet"
	I0603 12:09:13.697284   73662 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 12:09:13.697342   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 12:09:13.734987   73662 cri.go:89] found id: ""
	I0603 12:09:13.735014   73662 logs.go:276] 0 containers: []
	W0603 12:09:13.735020   73662 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 12:09:13.735030   73662 logs.go:123] Gathering logs for kubelet ...
	I0603 12:09:13.735055   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 12:09:13.792422   73662 logs.go:123] Gathering logs for dmesg ...
	I0603 12:09:13.792463   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 12:09:13.807174   73662 logs.go:123] Gathering logs for describe nodes ...
	I0603 12:09:13.807220   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 12:09:13.880940   73662 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 12:09:13.880962   73662 logs.go:123] Gathering logs for CRI-O ...
	I0603 12:09:13.880976   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 12:09:13.970760   73662 logs.go:123] Gathering logs for container status ...
	I0603 12:09:13.970800   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 12:09:12.581261   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:09:14.581335   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:09:16.113403   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:09:18.113699   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:09:16.166578   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:09:18.167436   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:09:16.519306   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:09:16.534161   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 12:09:16.534213   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 12:09:16.571503   73662 cri.go:89] found id: ""
	I0603 12:09:16.571533   73662 logs.go:276] 0 containers: []
	W0603 12:09:16.571544   73662 logs.go:278] No container was found matching "kube-apiserver"
	I0603 12:09:16.571553   73662 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 12:09:16.571603   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 12:09:16.610388   73662 cri.go:89] found id: ""
	I0603 12:09:16.610425   73662 logs.go:276] 0 containers: []
	W0603 12:09:16.610434   73662 logs.go:278] No container was found matching "etcd"
	I0603 12:09:16.610442   73662 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 12:09:16.610501   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 12:09:16.654132   73662 cri.go:89] found id: ""
	I0603 12:09:16.654173   73662 logs.go:276] 0 containers: []
	W0603 12:09:16.654184   73662 logs.go:278] No container was found matching "coredns"
	I0603 12:09:16.654196   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 12:09:16.654288   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 12:09:16.695091   73662 cri.go:89] found id: ""
	I0603 12:09:16.695120   73662 logs.go:276] 0 containers: []
	W0603 12:09:16.695130   73662 logs.go:278] No container was found matching "kube-scheduler"
	I0603 12:09:16.695137   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 12:09:16.695198   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 12:09:16.729916   73662 cri.go:89] found id: ""
	I0603 12:09:16.729941   73662 logs.go:276] 0 containers: []
	W0603 12:09:16.729950   73662 logs.go:278] No container was found matching "kube-proxy"
	I0603 12:09:16.729958   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 12:09:16.730019   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 12:09:16.763653   73662 cri.go:89] found id: ""
	I0603 12:09:16.763675   73662 logs.go:276] 0 containers: []
	W0603 12:09:16.763683   73662 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 12:09:16.763688   73662 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 12:09:16.763734   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 12:09:16.801834   73662 cri.go:89] found id: ""
	I0603 12:09:16.801867   73662 logs.go:276] 0 containers: []
	W0603 12:09:16.801877   73662 logs.go:278] No container was found matching "kindnet"
	I0603 12:09:16.801885   73662 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 12:09:16.801946   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 12:09:16.836959   73662 cri.go:89] found id: ""
	I0603 12:09:16.836983   73662 logs.go:276] 0 containers: []
	W0603 12:09:16.836995   73662 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 12:09:16.837006   73662 logs.go:123] Gathering logs for dmesg ...
	I0603 12:09:16.837023   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 12:09:16.850264   73662 logs.go:123] Gathering logs for describe nodes ...
	I0603 12:09:16.850294   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 12:09:16.943870   73662 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 12:09:16.943897   73662 logs.go:123] Gathering logs for CRI-O ...
	I0603 12:09:16.943914   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 12:09:17.028230   73662 logs.go:123] Gathering logs for container status ...
	I0603 12:09:17.028269   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 12:09:17.071944   73662 logs.go:123] Gathering logs for kubelet ...
	I0603 12:09:17.071975   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 12:09:19.627246   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:09:19.641441   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 12:09:19.641513   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 12:09:19.680111   73662 cri.go:89] found id: ""
	I0603 12:09:19.680135   73662 logs.go:276] 0 containers: []
	W0603 12:09:19.680144   73662 logs.go:278] No container was found matching "kube-apiserver"
	I0603 12:09:19.680152   73662 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 12:09:19.680210   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 12:09:19.717357   73662 cri.go:89] found id: ""
	I0603 12:09:19.717386   73662 logs.go:276] 0 containers: []
	W0603 12:09:19.717396   73662 logs.go:278] No container was found matching "etcd"
	I0603 12:09:19.717403   73662 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 12:09:19.717467   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 12:09:19.753540   73662 cri.go:89] found id: ""
	I0603 12:09:19.753567   73662 logs.go:276] 0 containers: []
	W0603 12:09:19.753575   73662 logs.go:278] No container was found matching "coredns"
	I0603 12:09:19.753581   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 12:09:19.753627   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 12:09:19.790421   73662 cri.go:89] found id: ""
	I0603 12:09:19.790454   73662 logs.go:276] 0 containers: []
	W0603 12:09:19.790466   73662 logs.go:278] No container was found matching "kube-scheduler"
	I0603 12:09:19.790474   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 12:09:19.790532   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 12:09:19.828908   73662 cri.go:89] found id: ""
	I0603 12:09:19.828932   73662 logs.go:276] 0 containers: []
	W0603 12:09:19.828940   73662 logs.go:278] No container was found matching "kube-proxy"
	I0603 12:09:19.828946   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 12:09:19.829007   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 12:09:19.864576   73662 cri.go:89] found id: ""
	I0603 12:09:19.864609   73662 logs.go:276] 0 containers: []
	W0603 12:09:19.864618   73662 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 12:09:19.864624   73662 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 12:09:19.864679   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 12:09:19.899294   73662 cri.go:89] found id: ""
	I0603 12:09:19.899317   73662 logs.go:276] 0 containers: []
	W0603 12:09:19.899324   73662 logs.go:278] No container was found matching "kindnet"
	I0603 12:09:19.899330   73662 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 12:09:19.899397   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 12:09:19.933855   73662 cri.go:89] found id: ""
	I0603 12:09:19.933883   73662 logs.go:276] 0 containers: []
	W0603 12:09:19.933894   73662 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 12:09:19.933905   73662 logs.go:123] Gathering logs for container status ...
	I0603 12:09:19.933920   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 12:09:19.972676   73662 logs.go:123] Gathering logs for kubelet ...
	I0603 12:09:19.972703   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 12:09:20.025882   73662 logs.go:123] Gathering logs for dmesg ...
	I0603 12:09:20.025913   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 12:09:20.040706   73662 logs.go:123] Gathering logs for describe nodes ...
	I0603 12:09:20.040733   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0603 12:09:17.080807   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:09:19.581996   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:09:20.612561   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:09:23.112691   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:09:20.667356   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:09:23.167076   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	W0603 12:09:20.115483   73662 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 12:09:20.115506   73662 logs.go:123] Gathering logs for CRI-O ...
	I0603 12:09:20.115521   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 12:09:22.692138   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:09:22.706079   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 12:09:22.706155   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 12:09:22.742755   73662 cri.go:89] found id: ""
	I0603 12:09:22.742776   73662 logs.go:276] 0 containers: []
	W0603 12:09:22.742784   73662 logs.go:278] No container was found matching "kube-apiserver"
	I0603 12:09:22.742789   73662 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 12:09:22.742845   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 12:09:22.779522   73662 cri.go:89] found id: ""
	I0603 12:09:22.779549   73662 logs.go:276] 0 containers: []
	W0603 12:09:22.779557   73662 logs.go:278] No container was found matching "etcd"
	I0603 12:09:22.779563   73662 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 12:09:22.779615   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 12:09:22.813864   73662 cri.go:89] found id: ""
	I0603 12:09:22.813892   73662 logs.go:276] 0 containers: []
	W0603 12:09:22.813902   73662 logs.go:278] No container was found matching "coredns"
	I0603 12:09:22.813909   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 12:09:22.813967   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 12:09:22.848111   73662 cri.go:89] found id: ""
	I0603 12:09:22.848138   73662 logs.go:276] 0 containers: []
	W0603 12:09:22.848149   73662 logs.go:278] No container was found matching "kube-scheduler"
	I0603 12:09:22.848157   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 12:09:22.848213   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 12:09:22.899733   73662 cri.go:89] found id: ""
	I0603 12:09:22.899765   73662 logs.go:276] 0 containers: []
	W0603 12:09:22.899775   73662 logs.go:278] No container was found matching "kube-proxy"
	I0603 12:09:22.899781   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 12:09:22.899846   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 12:09:22.941237   73662 cri.go:89] found id: ""
	I0603 12:09:22.941266   73662 logs.go:276] 0 containers: []
	W0603 12:09:22.941276   73662 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 12:09:22.941282   73662 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 12:09:22.941330   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 12:09:22.981500   73662 cri.go:89] found id: ""
	I0603 12:09:22.981523   73662 logs.go:276] 0 containers: []
	W0603 12:09:22.981531   73662 logs.go:278] No container was found matching "kindnet"
	I0603 12:09:22.981536   73662 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 12:09:22.981580   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 12:09:23.016893   73662 cri.go:89] found id: ""
	I0603 12:09:23.016921   73662 logs.go:276] 0 containers: []
	W0603 12:09:23.016933   73662 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 12:09:23.016943   73662 logs.go:123] Gathering logs for container status ...
	I0603 12:09:23.016958   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 12:09:23.056019   73662 logs.go:123] Gathering logs for kubelet ...
	I0603 12:09:23.056052   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 12:09:23.112565   73662 logs.go:123] Gathering logs for dmesg ...
	I0603 12:09:23.112594   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 12:09:23.127475   73662 logs.go:123] Gathering logs for describe nodes ...
	I0603 12:09:23.127504   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 12:09:23.204939   73662 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 12:09:23.204959   73662 logs.go:123] Gathering logs for CRI-O ...
	I0603 12:09:23.204971   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 12:09:21.584829   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:09:24.081361   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:09:25.112860   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:09:27.113465   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:09:29.114788   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:09:25.167597   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:09:27.666395   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:09:29.668658   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:09:25.781506   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:09:25.794896   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 12:09:25.794971   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 12:09:25.831669   73662 cri.go:89] found id: ""
	I0603 12:09:25.831699   73662 logs.go:276] 0 containers: []
	W0603 12:09:25.831710   73662 logs.go:278] No container was found matching "kube-apiserver"
	I0603 12:09:25.831718   73662 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 12:09:25.831775   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 12:09:25.865198   73662 cri.go:89] found id: ""
	I0603 12:09:25.865224   73662 logs.go:276] 0 containers: []
	W0603 12:09:25.865233   73662 logs.go:278] No container was found matching "etcd"
	I0603 12:09:25.865241   73662 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 12:09:25.865296   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 12:09:25.900280   73662 cri.go:89] found id: ""
	I0603 12:09:25.900316   73662 logs.go:276] 0 containers: []
	W0603 12:09:25.900339   73662 logs.go:278] No container was found matching "coredns"
	I0603 12:09:25.900347   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 12:09:25.900409   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 12:09:25.934727   73662 cri.go:89] found id: ""
	I0603 12:09:25.934759   73662 logs.go:276] 0 containers: []
	W0603 12:09:25.934770   73662 logs.go:278] No container was found matching "kube-scheduler"
	I0603 12:09:25.934778   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 12:09:25.934837   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 12:09:25.970760   73662 cri.go:89] found id: ""
	I0603 12:09:25.970785   73662 logs.go:276] 0 containers: []
	W0603 12:09:25.970795   73662 logs.go:278] No container was found matching "kube-proxy"
	I0603 12:09:25.970800   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 12:09:25.970846   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 12:09:26.005580   73662 cri.go:89] found id: ""
	I0603 12:09:26.005608   73662 logs.go:276] 0 containers: []
	W0603 12:09:26.005617   73662 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 12:09:26.005622   73662 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 12:09:26.005670   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 12:09:26.042168   73662 cri.go:89] found id: ""
	I0603 12:09:26.042192   73662 logs.go:276] 0 containers: []
	W0603 12:09:26.042200   73662 logs.go:278] No container was found matching "kindnet"
	I0603 12:09:26.042206   73662 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 12:09:26.042256   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 12:09:26.081180   73662 cri.go:89] found id: ""
	I0603 12:09:26.081211   73662 logs.go:276] 0 containers: []
	W0603 12:09:26.081226   73662 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 12:09:26.081237   73662 logs.go:123] Gathering logs for describe nodes ...
	I0603 12:09:26.081252   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 12:09:26.156298   73662 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 12:09:26.156320   73662 logs.go:123] Gathering logs for CRI-O ...
	I0603 12:09:26.156333   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 12:09:26.241945   73662 logs.go:123] Gathering logs for container status ...
	I0603 12:09:26.241976   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 12:09:26.282363   73662 logs.go:123] Gathering logs for kubelet ...
	I0603 12:09:26.282391   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 12:09:26.336717   73662 logs.go:123] Gathering logs for dmesg ...
	I0603 12:09:26.336747   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 12:09:28.851601   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:09:28.865866   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 12:09:28.865930   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 12:09:28.901850   73662 cri.go:89] found id: ""
	I0603 12:09:28.901877   73662 logs.go:276] 0 containers: []
	W0603 12:09:28.901884   73662 logs.go:278] No container was found matching "kube-apiserver"
	I0603 12:09:28.901890   73662 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 12:09:28.901953   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 12:09:28.939384   73662 cri.go:89] found id: ""
	I0603 12:09:28.939414   73662 logs.go:276] 0 containers: []
	W0603 12:09:28.939431   73662 logs.go:278] No container was found matching "etcd"
	I0603 12:09:28.939438   73662 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 12:09:28.939501   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 12:09:28.974836   73662 cri.go:89] found id: ""
	I0603 12:09:28.974859   73662 logs.go:276] 0 containers: []
	W0603 12:09:28.974866   73662 logs.go:278] No container was found matching "coredns"
	I0603 12:09:28.974872   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 12:09:28.974929   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 12:09:29.020057   73662 cri.go:89] found id: ""
	I0603 12:09:29.020082   73662 logs.go:276] 0 containers: []
	W0603 12:09:29.020090   73662 logs.go:278] No container was found matching "kube-scheduler"
	I0603 12:09:29.020095   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 12:09:29.020154   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 12:09:29.065836   73662 cri.go:89] found id: ""
	I0603 12:09:29.065868   73662 logs.go:276] 0 containers: []
	W0603 12:09:29.065880   73662 logs.go:278] No container was found matching "kube-proxy"
	I0603 12:09:29.065887   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 12:09:29.065948   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 12:09:29.103326   73662 cri.go:89] found id: ""
	I0603 12:09:29.103352   73662 logs.go:276] 0 containers: []
	W0603 12:09:29.103362   73662 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 12:09:29.103369   73662 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 12:09:29.103432   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 12:09:29.141516   73662 cri.go:89] found id: ""
	I0603 12:09:29.141543   73662 logs.go:276] 0 containers: []
	W0603 12:09:29.141554   73662 logs.go:278] No container was found matching "kindnet"
	I0603 12:09:29.141561   73662 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 12:09:29.141615   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 12:09:29.177881   73662 cri.go:89] found id: ""
	I0603 12:09:29.177906   73662 logs.go:276] 0 containers: []
	W0603 12:09:29.177916   73662 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 12:09:29.177923   73662 logs.go:123] Gathering logs for kubelet ...
	I0603 12:09:29.177934   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 12:09:29.231307   73662 logs.go:123] Gathering logs for dmesg ...
	I0603 12:09:29.231338   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 12:09:29.248629   73662 logs.go:123] Gathering logs for describe nodes ...
	I0603 12:09:29.248676   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 12:09:29.348230   73662 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 12:09:29.348255   73662 logs.go:123] Gathering logs for CRI-O ...
	I0603 12:09:29.348272   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 12:09:29.433016   73662 logs.go:123] Gathering logs for container status ...
	I0603 12:09:29.433049   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 12:09:26.082319   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:09:28.581095   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:09:31.615220   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:09:34.112437   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:09:32.166628   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:09:34.167092   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:09:31.973677   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:09:31.988457   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 12:09:31.988518   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 12:09:32.028424   73662 cri.go:89] found id: ""
	I0603 12:09:32.028450   73662 logs.go:276] 0 containers: []
	W0603 12:09:32.028458   73662 logs.go:278] No container was found matching "kube-apiserver"
	I0603 12:09:32.028464   73662 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 12:09:32.028518   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 12:09:32.069388   73662 cri.go:89] found id: ""
	I0603 12:09:32.069413   73662 logs.go:276] 0 containers: []
	W0603 12:09:32.069421   73662 logs.go:278] No container was found matching "etcd"
	I0603 12:09:32.069427   73662 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 12:09:32.069480   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 12:09:32.106557   73662 cri.go:89] found id: ""
	I0603 12:09:32.106590   73662 logs.go:276] 0 containers: []
	W0603 12:09:32.106601   73662 logs.go:278] No container was found matching "coredns"
	I0603 12:09:32.106608   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 12:09:32.106677   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 12:09:32.142460   73662 cri.go:89] found id: ""
	I0603 12:09:32.142488   73662 logs.go:276] 0 containers: []
	W0603 12:09:32.142499   73662 logs.go:278] No container was found matching "kube-scheduler"
	I0603 12:09:32.142507   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 12:09:32.142560   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 12:09:32.177513   73662 cri.go:89] found id: ""
	I0603 12:09:32.177540   73662 logs.go:276] 0 containers: []
	W0603 12:09:32.177553   73662 logs.go:278] No container was found matching "kube-proxy"
	I0603 12:09:32.177559   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 12:09:32.177620   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 12:09:32.212011   73662 cri.go:89] found id: ""
	I0603 12:09:32.212038   73662 logs.go:276] 0 containers: []
	W0603 12:09:32.212048   73662 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 12:09:32.212055   73662 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 12:09:32.212121   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 12:09:32.247928   73662 cri.go:89] found id: ""
	I0603 12:09:32.247953   73662 logs.go:276] 0 containers: []
	W0603 12:09:32.247960   73662 logs.go:278] No container was found matching "kindnet"
	I0603 12:09:32.247965   73662 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 12:09:32.248020   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 12:09:32.287818   73662 cri.go:89] found id: ""
	I0603 12:09:32.287845   73662 logs.go:276] 0 containers: []
	W0603 12:09:32.287852   73662 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 12:09:32.287859   73662 logs.go:123] Gathering logs for kubelet ...
	I0603 12:09:32.287874   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 12:09:32.340406   73662 logs.go:123] Gathering logs for dmesg ...
	I0603 12:09:32.340439   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 12:09:32.355148   73662 logs.go:123] Gathering logs for describe nodes ...
	I0603 12:09:32.355178   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 12:09:32.429270   73662 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 12:09:32.429299   73662 logs.go:123] Gathering logs for CRI-O ...
	I0603 12:09:32.429314   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 12:09:32.505607   73662 logs.go:123] Gathering logs for container status ...
	I0603 12:09:32.505635   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 12:09:35.044751   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:09:35.067197   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 12:09:35.067273   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 12:09:30.581123   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:09:32.581201   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:09:34.581895   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:09:36.612660   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:09:38.614151   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:09:36.666568   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:09:38.666678   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:09:35.130828   73662 cri.go:89] found id: ""
	I0603 12:09:35.130853   73662 logs.go:276] 0 containers: []
	W0603 12:09:35.130911   73662 logs.go:278] No container was found matching "kube-apiserver"
	I0603 12:09:35.130929   73662 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 12:09:35.130987   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 12:09:35.168321   73662 cri.go:89] found id: ""
	I0603 12:09:35.168348   73662 logs.go:276] 0 containers: []
	W0603 12:09:35.168355   73662 logs.go:278] No container was found matching "etcd"
	I0603 12:09:35.168360   73662 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 12:09:35.168403   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 12:09:35.200918   73662 cri.go:89] found id: ""
	I0603 12:09:35.200942   73662 logs.go:276] 0 containers: []
	W0603 12:09:35.200952   73662 logs.go:278] No container was found matching "coredns"
	I0603 12:09:35.200960   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 12:09:35.201020   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 12:09:35.235667   73662 cri.go:89] found id: ""
	I0603 12:09:35.235694   73662 logs.go:276] 0 containers: []
	W0603 12:09:35.235705   73662 logs.go:278] No container was found matching "kube-scheduler"
	I0603 12:09:35.235713   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 12:09:35.235773   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 12:09:35.269565   73662 cri.go:89] found id: ""
	I0603 12:09:35.269600   73662 logs.go:276] 0 containers: []
	W0603 12:09:35.269608   73662 logs.go:278] No container was found matching "kube-proxy"
	I0603 12:09:35.269613   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 12:09:35.269670   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 12:09:35.304452   73662 cri.go:89] found id: ""
	I0603 12:09:35.304480   73662 logs.go:276] 0 containers: []
	W0603 12:09:35.304488   73662 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 12:09:35.304495   73662 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 12:09:35.304560   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 12:09:35.337756   73662 cri.go:89] found id: ""
	I0603 12:09:35.337782   73662 logs.go:276] 0 containers: []
	W0603 12:09:35.337789   73662 logs.go:278] No container was found matching "kindnet"
	I0603 12:09:35.337794   73662 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 12:09:35.337844   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 12:09:35.374738   73662 cri.go:89] found id: ""
	I0603 12:09:35.374762   73662 logs.go:276] 0 containers: []
	W0603 12:09:35.374773   73662 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 12:09:35.374804   73662 logs.go:123] Gathering logs for dmesg ...
	I0603 12:09:35.374831   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 12:09:35.389588   73662 logs.go:123] Gathering logs for describe nodes ...
	I0603 12:09:35.389618   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 12:09:35.470162   73662 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 12:09:35.470184   73662 logs.go:123] Gathering logs for CRI-O ...
	I0603 12:09:35.470200   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 12:09:35.554518   73662 logs.go:123] Gathering logs for container status ...
	I0603 12:09:35.554560   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 12:09:35.594727   73662 logs.go:123] Gathering logs for kubelet ...
	I0603 12:09:35.594763   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 12:09:38.154151   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:09:38.169099   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 12:09:38.169165   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 12:09:38.205410   73662 cri.go:89] found id: ""
	I0603 12:09:38.205437   73662 logs.go:276] 0 containers: []
	W0603 12:09:38.205444   73662 logs.go:278] No container was found matching "kube-apiserver"
	I0603 12:09:38.205450   73662 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 12:09:38.205502   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 12:09:38.238950   73662 cri.go:89] found id: ""
	I0603 12:09:38.238979   73662 logs.go:276] 0 containers: []
	W0603 12:09:38.238990   73662 logs.go:278] No container was found matching "etcd"
	I0603 12:09:38.238997   73662 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 12:09:38.239072   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 12:09:38.272117   73662 cri.go:89] found id: ""
	I0603 12:09:38.272146   73662 logs.go:276] 0 containers: []
	W0603 12:09:38.272157   73662 logs.go:278] No container was found matching "coredns"
	I0603 12:09:38.272164   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 12:09:38.272232   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 12:09:38.306778   73662 cri.go:89] found id: ""
	I0603 12:09:38.306815   73662 logs.go:276] 0 containers: []
	W0603 12:09:38.306826   73662 logs.go:278] No container was found matching "kube-scheduler"
	I0603 12:09:38.306834   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 12:09:38.306894   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 12:09:38.344438   73662 cri.go:89] found id: ""
	I0603 12:09:38.344464   73662 logs.go:276] 0 containers: []
	W0603 12:09:38.344471   73662 logs.go:278] No container was found matching "kube-proxy"
	I0603 12:09:38.344476   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 12:09:38.344528   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 12:09:38.384347   73662 cri.go:89] found id: ""
	I0603 12:09:38.384373   73662 logs.go:276] 0 containers: []
	W0603 12:09:38.384384   73662 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 12:09:38.384392   73662 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 12:09:38.384440   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 12:09:38.424500   73662 cri.go:89] found id: ""
	I0603 12:09:38.424526   73662 logs.go:276] 0 containers: []
	W0603 12:09:38.424536   73662 logs.go:278] No container was found matching "kindnet"
	I0603 12:09:38.424543   73662 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 12:09:38.424601   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 12:09:38.459649   73662 cri.go:89] found id: ""
	I0603 12:09:38.459678   73662 logs.go:276] 0 containers: []
	W0603 12:09:38.459685   73662 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 12:09:38.459693   73662 logs.go:123] Gathering logs for kubelet ...
	I0603 12:09:38.459705   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 12:09:38.511193   73662 logs.go:123] Gathering logs for dmesg ...
	I0603 12:09:38.511226   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 12:09:38.525367   73662 logs.go:123] Gathering logs for describe nodes ...
	I0603 12:09:38.525394   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 12:09:38.596534   73662 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 12:09:38.596555   73662 logs.go:123] Gathering logs for CRI-O ...
	I0603 12:09:38.596568   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 12:09:38.675204   73662 logs.go:123] Gathering logs for container status ...
	I0603 12:09:38.675233   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 12:09:37.082229   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:09:39.083400   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:09:41.113187   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:09:43.612824   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:09:41.165676   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:09:43.166246   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:09:41.217825   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:09:41.232019   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 12:09:41.232077   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 12:09:41.267920   73662 cri.go:89] found id: ""
	I0603 12:09:41.267944   73662 logs.go:276] 0 containers: []
	W0603 12:09:41.267951   73662 logs.go:278] No container was found matching "kube-apiserver"
	I0603 12:09:41.267956   73662 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 12:09:41.268002   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 12:09:41.306326   73662 cri.go:89] found id: ""
	I0603 12:09:41.306353   73662 logs.go:276] 0 containers: []
	W0603 12:09:41.306364   73662 logs.go:278] No container was found matching "etcd"
	I0603 12:09:41.306371   73662 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 12:09:41.306439   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 12:09:41.339922   73662 cri.go:89] found id: ""
	I0603 12:09:41.339950   73662 logs.go:276] 0 containers: []
	W0603 12:09:41.339960   73662 logs.go:278] No container was found matching "coredns"
	I0603 12:09:41.339968   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 12:09:41.340030   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 12:09:41.374394   73662 cri.go:89] found id: ""
	I0603 12:09:41.374424   73662 logs.go:276] 0 containers: []
	W0603 12:09:41.374432   73662 logs.go:278] No container was found matching "kube-scheduler"
	I0603 12:09:41.374438   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 12:09:41.374490   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 12:09:41.412699   73662 cri.go:89] found id: ""
	I0603 12:09:41.412725   73662 logs.go:276] 0 containers: []
	W0603 12:09:41.412733   73662 logs.go:278] No container was found matching "kube-proxy"
	I0603 12:09:41.412738   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 12:09:41.412792   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 12:09:41.455158   73662 cri.go:89] found id: ""
	I0603 12:09:41.455186   73662 logs.go:276] 0 containers: []
	W0603 12:09:41.455195   73662 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 12:09:41.455201   73662 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 12:09:41.455250   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 12:09:41.493873   73662 cri.go:89] found id: ""
	I0603 12:09:41.493899   73662 logs.go:276] 0 containers: []
	W0603 12:09:41.493907   73662 logs.go:278] No container was found matching "kindnet"
	I0603 12:09:41.493912   73662 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 12:09:41.493961   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 12:09:41.533128   73662 cri.go:89] found id: ""
	I0603 12:09:41.533157   73662 logs.go:276] 0 containers: []
	W0603 12:09:41.533168   73662 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 12:09:41.533179   73662 logs.go:123] Gathering logs for container status ...
	I0603 12:09:41.533192   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 12:09:41.569504   73662 logs.go:123] Gathering logs for kubelet ...
	I0603 12:09:41.569532   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 12:09:41.623155   73662 logs.go:123] Gathering logs for dmesg ...
	I0603 12:09:41.623182   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 12:09:41.637320   73662 logs.go:123] Gathering logs for describe nodes ...
	I0603 12:09:41.637344   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 12:09:41.717063   73662 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 12:09:41.717080   73662 logs.go:123] Gathering logs for CRI-O ...
	I0603 12:09:41.717091   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 12:09:44.301694   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:09:44.317073   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 12:09:44.317128   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 12:09:44.359170   73662 cri.go:89] found id: ""
	I0603 12:09:44.359220   73662 logs.go:276] 0 containers: []
	W0603 12:09:44.359230   73662 logs.go:278] No container was found matching "kube-apiserver"
	I0603 12:09:44.359239   73662 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 12:09:44.359294   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 12:09:44.399820   73662 cri.go:89] found id: ""
	I0603 12:09:44.399844   73662 logs.go:276] 0 containers: []
	W0603 12:09:44.399854   73662 logs.go:278] No container was found matching "etcd"
	I0603 12:09:44.399862   73662 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 12:09:44.399928   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 12:09:44.439447   73662 cri.go:89] found id: ""
	I0603 12:09:44.439474   73662 logs.go:276] 0 containers: []
	W0603 12:09:44.439481   73662 logs.go:278] No container was found matching "coredns"
	I0603 12:09:44.439487   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 12:09:44.439540   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 12:09:44.475880   73662 cri.go:89] found id: ""
	I0603 12:09:44.475906   73662 logs.go:276] 0 containers: []
	W0603 12:09:44.475917   73662 logs.go:278] No container was found matching "kube-scheduler"
	I0603 12:09:44.475922   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 12:09:44.475980   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 12:09:44.511294   73662 cri.go:89] found id: ""
	I0603 12:09:44.511330   73662 logs.go:276] 0 containers: []
	W0603 12:09:44.511341   73662 logs.go:278] No container was found matching "kube-proxy"
	I0603 12:09:44.511348   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 12:09:44.511401   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 12:09:44.547348   73662 cri.go:89] found id: ""
	I0603 12:09:44.547373   73662 logs.go:276] 0 containers: []
	W0603 12:09:44.547380   73662 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 12:09:44.547385   73662 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 12:09:44.547430   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 12:09:44.586452   73662 cri.go:89] found id: ""
	I0603 12:09:44.586476   73662 logs.go:276] 0 containers: []
	W0603 12:09:44.586483   73662 logs.go:278] No container was found matching "kindnet"
	I0603 12:09:44.586488   73662 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 12:09:44.586543   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 12:09:44.625804   73662 cri.go:89] found id: ""
	I0603 12:09:44.625824   73662 logs.go:276] 0 containers: []
	W0603 12:09:44.625831   73662 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 12:09:44.625839   73662 logs.go:123] Gathering logs for kubelet ...
	I0603 12:09:44.625848   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 12:09:44.680963   73662 logs.go:123] Gathering logs for dmesg ...
	I0603 12:09:44.680996   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 12:09:44.695920   73662 logs.go:123] Gathering logs for describe nodes ...
	I0603 12:09:44.695945   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 12:09:44.766704   73662 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 12:09:44.766735   73662 logs.go:123] Gathering logs for CRI-O ...
	I0603 12:09:44.766750   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 12:09:44.849452   73662 logs.go:123] Gathering logs for container status ...
	I0603 12:09:44.849484   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 12:09:41.581194   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:09:44.081266   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:09:45.613719   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:09:47.613834   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:09:45.166682   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:09:47.667916   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:09:47.391851   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:09:47.406886   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 12:09:47.406941   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 12:09:47.441654   73662 cri.go:89] found id: ""
	I0603 12:09:47.441676   73662 logs.go:276] 0 containers: []
	W0603 12:09:47.441686   73662 logs.go:278] No container was found matching "kube-apiserver"
	I0603 12:09:47.441692   73662 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 12:09:47.441739   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 12:09:47.475605   73662 cri.go:89] found id: ""
	I0603 12:09:47.475634   73662 logs.go:276] 0 containers: []
	W0603 12:09:47.475644   73662 logs.go:278] No container was found matching "etcd"
	I0603 12:09:47.475651   73662 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 12:09:47.475707   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 12:09:47.511558   73662 cri.go:89] found id: ""
	I0603 12:09:47.511582   73662 logs.go:276] 0 containers: []
	W0603 12:09:47.511590   73662 logs.go:278] No container was found matching "coredns"
	I0603 12:09:47.511595   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 12:09:47.511653   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 12:09:47.545327   73662 cri.go:89] found id: ""
	I0603 12:09:47.545359   73662 logs.go:276] 0 containers: []
	W0603 12:09:47.545370   73662 logs.go:278] No container was found matching "kube-scheduler"
	I0603 12:09:47.545378   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 12:09:47.545442   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 12:09:47.581846   73662 cri.go:89] found id: ""
	I0603 12:09:47.581875   73662 logs.go:276] 0 containers: []
	W0603 12:09:47.581884   73662 logs.go:278] No container was found matching "kube-proxy"
	I0603 12:09:47.581892   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 12:09:47.581953   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 12:09:47.618872   73662 cri.go:89] found id: ""
	I0603 12:09:47.618893   73662 logs.go:276] 0 containers: []
	W0603 12:09:47.618901   73662 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 12:09:47.618908   73662 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 12:09:47.618964   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 12:09:47.663659   73662 cri.go:89] found id: ""
	I0603 12:09:47.663689   73662 logs.go:276] 0 containers: []
	W0603 12:09:47.663700   73662 logs.go:278] No container was found matching "kindnet"
	I0603 12:09:47.663708   73662 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 12:09:47.663766   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 12:09:47.697189   73662 cri.go:89] found id: ""
	I0603 12:09:47.697217   73662 logs.go:276] 0 containers: []
	W0603 12:09:47.697228   73662 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 12:09:47.697238   73662 logs.go:123] Gathering logs for dmesg ...
	I0603 12:09:47.697254   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 12:09:47.711787   73662 logs.go:123] Gathering logs for describe nodes ...
	I0603 12:09:47.711812   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 12:09:47.784073   73662 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 12:09:47.784095   73662 logs.go:123] Gathering logs for CRI-O ...
	I0603 12:09:47.784106   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 12:09:47.866792   73662 logs.go:123] Gathering logs for container status ...
	I0603 12:09:47.866824   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 12:09:47.907650   73662 logs.go:123] Gathering logs for kubelet ...
	I0603 12:09:47.907701   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 12:09:46.081705   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:09:48.581286   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:09:50.115365   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:09:52.612108   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:09:50.166286   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:09:52.166751   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:09:54.171218   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:09:50.458815   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:09:50.473498   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 12:09:50.473561   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 12:09:50.514762   73662 cri.go:89] found id: ""
	I0603 12:09:50.514788   73662 logs.go:276] 0 containers: []
	W0603 12:09:50.514796   73662 logs.go:278] No container was found matching "kube-apiserver"
	I0603 12:09:50.514801   73662 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 12:09:50.514877   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 12:09:50.548449   73662 cri.go:89] found id: ""
	I0603 12:09:50.548481   73662 logs.go:276] 0 containers: []
	W0603 12:09:50.548492   73662 logs.go:278] No container was found matching "etcd"
	I0603 12:09:50.548498   73662 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 12:09:50.548560   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 12:09:50.584636   73662 cri.go:89] found id: ""
	I0603 12:09:50.584658   73662 logs.go:276] 0 containers: []
	W0603 12:09:50.584665   73662 logs.go:278] No container was found matching "coredns"
	I0603 12:09:50.584671   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 12:09:50.584718   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 12:09:50.619934   73662 cri.go:89] found id: ""
	I0603 12:09:50.619964   73662 logs.go:276] 0 containers: []
	W0603 12:09:50.619974   73662 logs.go:278] No container was found matching "kube-scheduler"
	I0603 12:09:50.619983   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 12:09:50.620041   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 12:09:50.656062   73662 cri.go:89] found id: ""
	I0603 12:09:50.656093   73662 logs.go:276] 0 containers: []
	W0603 12:09:50.656105   73662 logs.go:278] No container was found matching "kube-proxy"
	I0603 12:09:50.656117   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 12:09:50.656166   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 12:09:50.693539   73662 cri.go:89] found id: ""
	I0603 12:09:50.693566   73662 logs.go:276] 0 containers: []
	W0603 12:09:50.693573   73662 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 12:09:50.693582   73662 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 12:09:50.693637   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 12:09:50.727999   73662 cri.go:89] found id: ""
	I0603 12:09:50.728029   73662 logs.go:276] 0 containers: []
	W0603 12:09:50.728049   73662 logs.go:278] No container was found matching "kindnet"
	I0603 12:09:50.728057   73662 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 12:09:50.728118   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 12:09:50.767370   73662 cri.go:89] found id: ""
	I0603 12:09:50.767417   73662 logs.go:276] 0 containers: []
	W0603 12:09:50.767434   73662 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 12:09:50.767444   73662 logs.go:123] Gathering logs for describe nodes ...
	I0603 12:09:50.767460   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 12:09:50.844078   73662 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 12:09:50.844098   73662 logs.go:123] Gathering logs for CRI-O ...
	I0603 12:09:50.844111   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 12:09:50.922082   73662 logs.go:123] Gathering logs for container status ...
	I0603 12:09:50.922119   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 12:09:50.964841   73662 logs.go:123] Gathering logs for kubelet ...
	I0603 12:09:50.964878   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 12:09:51.016783   73662 logs.go:123] Gathering logs for dmesg ...
	I0603 12:09:51.016823   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 12:09:53.533274   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:09:53.547218   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 12:09:53.547272   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 12:09:53.584537   73662 cri.go:89] found id: ""
	I0603 12:09:53.584561   73662 logs.go:276] 0 containers: []
	W0603 12:09:53.584571   73662 logs.go:278] No container was found matching "kube-apiserver"
	I0603 12:09:53.584578   73662 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 12:09:53.584634   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 12:09:53.618652   73662 cri.go:89] found id: ""
	I0603 12:09:53.618678   73662 logs.go:276] 0 containers: []
	W0603 12:09:53.618688   73662 logs.go:278] No container was found matching "etcd"
	I0603 12:09:53.618695   73662 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 12:09:53.618749   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 12:09:53.654094   73662 cri.go:89] found id: ""
	I0603 12:09:53.654120   73662 logs.go:276] 0 containers: []
	W0603 12:09:53.654127   73662 logs.go:278] No container was found matching "coredns"
	I0603 12:09:53.654140   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 12:09:53.654196   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 12:09:53.691381   73662 cri.go:89] found id: ""
	I0603 12:09:53.691409   73662 logs.go:276] 0 containers: []
	W0603 12:09:53.691420   73662 logs.go:278] No container was found matching "kube-scheduler"
	I0603 12:09:53.691428   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 12:09:53.691493   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 12:09:53.728294   73662 cri.go:89] found id: ""
	I0603 12:09:53.728331   73662 logs.go:276] 0 containers: []
	W0603 12:09:53.728341   73662 logs.go:278] No container was found matching "kube-proxy"
	I0603 12:09:53.728349   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 12:09:53.728426   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 12:09:53.764973   73662 cri.go:89] found id: ""
	I0603 12:09:53.765005   73662 logs.go:276] 0 containers: []
	W0603 12:09:53.765016   73662 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 12:09:53.765023   73662 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 12:09:53.765087   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 12:09:53.803694   73662 cri.go:89] found id: ""
	I0603 12:09:53.803717   73662 logs.go:276] 0 containers: []
	W0603 12:09:53.803724   73662 logs.go:278] No container was found matching "kindnet"
	I0603 12:09:53.803729   73662 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 12:09:53.803776   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 12:09:53.841924   73662 cri.go:89] found id: ""
	I0603 12:09:53.841949   73662 logs.go:276] 0 containers: []
	W0603 12:09:53.841957   73662 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 12:09:53.841964   73662 logs.go:123] Gathering logs for kubelet ...
	I0603 12:09:53.841982   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 12:09:53.895701   73662 logs.go:123] Gathering logs for dmesg ...
	I0603 12:09:53.895738   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 12:09:53.909498   73662 logs.go:123] Gathering logs for describe nodes ...
	I0603 12:09:53.909524   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 12:09:53.985195   73662 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 12:09:53.985218   73662 logs.go:123] Gathering logs for CRI-O ...
	I0603 12:09:53.985234   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 12:09:54.065799   73662 logs.go:123] Gathering logs for container status ...
	I0603 12:09:54.065831   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 12:09:50.581958   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:09:53.081289   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:09:55.081589   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:09:54.612358   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:09:56.616081   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:09:59.112698   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:09:56.667243   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:09:59.167672   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:09:56.606887   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:09:56.621376   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 12:09:56.621437   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 12:09:56.660334   73662 cri.go:89] found id: ""
	I0603 12:09:56.660358   73662 logs.go:276] 0 containers: []
	W0603 12:09:56.660368   73662 logs.go:278] No container was found matching "kube-apiserver"
	I0603 12:09:56.660375   73662 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 12:09:56.660434   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 12:09:56.695706   73662 cri.go:89] found id: ""
	I0603 12:09:56.695734   73662 logs.go:276] 0 containers: []
	W0603 12:09:56.695742   73662 logs.go:278] No container was found matching "etcd"
	I0603 12:09:56.695747   73662 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 12:09:56.695791   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 12:09:56.730634   73662 cri.go:89] found id: ""
	I0603 12:09:56.730656   73662 logs.go:276] 0 containers: []
	W0603 12:09:56.730664   73662 logs.go:278] No container was found matching "coredns"
	I0603 12:09:56.730670   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 12:09:56.730715   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 12:09:56.765374   73662 cri.go:89] found id: ""
	I0603 12:09:56.765407   73662 logs.go:276] 0 containers: []
	W0603 12:09:56.765414   73662 logs.go:278] No container was found matching "kube-scheduler"
	I0603 12:09:56.765420   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 12:09:56.765467   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 12:09:56.801230   73662 cri.go:89] found id: ""
	I0603 12:09:56.801254   73662 logs.go:276] 0 containers: []
	W0603 12:09:56.801262   73662 logs.go:278] No container was found matching "kube-proxy"
	I0603 12:09:56.801267   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 12:09:56.801335   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 12:09:56.835988   73662 cri.go:89] found id: ""
	I0603 12:09:56.836015   73662 logs.go:276] 0 containers: []
	W0603 12:09:56.836026   73662 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 12:09:56.836034   73662 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 12:09:56.836093   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 12:09:56.870099   73662 cri.go:89] found id: ""
	I0603 12:09:56.870124   73662 logs.go:276] 0 containers: []
	W0603 12:09:56.870131   73662 logs.go:278] No container was found matching "kindnet"
	I0603 12:09:56.870136   73662 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 12:09:56.870183   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 12:09:56.904755   73662 cri.go:89] found id: ""
	I0603 12:09:56.904780   73662 logs.go:276] 0 containers: []
	W0603 12:09:56.904790   73662 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 12:09:56.904801   73662 logs.go:123] Gathering logs for kubelet ...
	I0603 12:09:56.904812   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 12:09:56.956824   73662 logs.go:123] Gathering logs for dmesg ...
	I0603 12:09:56.956854   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 12:09:56.971675   73662 logs.go:123] Gathering logs for describe nodes ...
	I0603 12:09:56.971702   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 12:09:57.042337   73662 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 12:09:57.042359   73662 logs.go:123] Gathering logs for CRI-O ...
	I0603 12:09:57.042375   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 12:09:57.129450   73662 logs.go:123] Gathering logs for container status ...
	I0603 12:09:57.129480   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 12:09:59.669256   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:09:59.683392   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 12:09:59.683452   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 12:09:59.718035   73662 cri.go:89] found id: ""
	I0603 12:09:59.718062   73662 logs.go:276] 0 containers: []
	W0603 12:09:59.718073   73662 logs.go:278] No container was found matching "kube-apiserver"
	I0603 12:09:59.718081   73662 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 12:09:59.718141   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 12:09:59.756638   73662 cri.go:89] found id: ""
	I0603 12:09:59.756666   73662 logs.go:276] 0 containers: []
	W0603 12:09:59.756678   73662 logs.go:278] No container was found matching "etcd"
	I0603 12:09:59.756686   73662 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 12:09:59.756751   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 12:09:59.794710   73662 cri.go:89] found id: ""
	I0603 12:09:59.794753   73662 logs.go:276] 0 containers: []
	W0603 12:09:59.794764   73662 logs.go:278] No container was found matching "coredns"
	I0603 12:09:59.794771   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 12:09:59.794835   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 12:09:59.829717   73662 cri.go:89] found id: ""
	I0603 12:09:59.829745   73662 logs.go:276] 0 containers: []
	W0603 12:09:59.829755   73662 logs.go:278] No container was found matching "kube-scheduler"
	I0603 12:09:59.829763   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 12:09:59.829819   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 12:09:59.863959   73662 cri.go:89] found id: ""
	I0603 12:09:59.863996   73662 logs.go:276] 0 containers: []
	W0603 12:09:59.864005   73662 logs.go:278] No container was found matching "kube-proxy"
	I0603 12:09:59.864010   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 12:09:59.864070   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 12:09:59.900553   73662 cri.go:89] found id: ""
	I0603 12:09:59.900577   73662 logs.go:276] 0 containers: []
	W0603 12:09:59.900585   73662 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 12:09:59.900590   73662 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 12:09:59.900664   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 12:09:59.935702   73662 cri.go:89] found id: ""
	I0603 12:09:59.935727   73662 logs.go:276] 0 containers: []
	W0603 12:09:59.935735   73662 logs.go:278] No container was found matching "kindnet"
	I0603 12:09:59.935741   73662 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 12:09:59.935800   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 12:09:59.971017   73662 cri.go:89] found id: ""
	I0603 12:09:59.971064   73662 logs.go:276] 0 containers: []
	W0603 12:09:59.971076   73662 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 12:09:59.971086   73662 logs.go:123] Gathering logs for dmesg ...
	I0603 12:09:59.971102   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 12:09:59.985406   73662 logs.go:123] Gathering logs for describe nodes ...
	I0603 12:09:59.985431   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 12:10:00.064341   73662 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 12:10:00.064372   73662 logs.go:123] Gathering logs for CRI-O ...
	I0603 12:10:00.064388   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 12:09:57.081724   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:09:59.581454   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:10:01.113236   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:10:03.116142   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:10:01.667557   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:10:04.166825   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:10:00.152803   73662 logs.go:123] Gathering logs for container status ...
	I0603 12:10:00.152850   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 12:10:00.198301   73662 logs.go:123] Gathering logs for kubelet ...
	I0603 12:10:00.198341   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 12:10:02.749662   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:10:02.762938   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 12:10:02.762999   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 12:10:02.800269   73662 cri.go:89] found id: ""
	I0603 12:10:02.800296   73662 logs.go:276] 0 containers: []
	W0603 12:10:02.800305   73662 logs.go:278] No container was found matching "kube-apiserver"
	I0603 12:10:02.800311   73662 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 12:10:02.800373   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 12:10:02.841326   73662 cri.go:89] found id: ""
	I0603 12:10:02.841350   73662 logs.go:276] 0 containers: []
	W0603 12:10:02.841357   73662 logs.go:278] No container was found matching "etcd"
	I0603 12:10:02.841363   73662 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 12:10:02.841409   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 12:10:02.879309   73662 cri.go:89] found id: ""
	I0603 12:10:02.879343   73662 logs.go:276] 0 containers: []
	W0603 12:10:02.879353   73662 logs.go:278] No container was found matching "coredns"
	I0603 12:10:02.879361   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 12:10:02.879423   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 12:10:02.919666   73662 cri.go:89] found id: ""
	I0603 12:10:02.919695   73662 logs.go:276] 0 containers: []
	W0603 12:10:02.919707   73662 logs.go:278] No container was found matching "kube-scheduler"
	I0603 12:10:02.919714   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 12:10:02.919761   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 12:10:02.954790   73662 cri.go:89] found id: ""
	I0603 12:10:02.954814   73662 logs.go:276] 0 containers: []
	W0603 12:10:02.954822   73662 logs.go:278] No container was found matching "kube-proxy"
	I0603 12:10:02.954827   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 12:10:02.954884   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 12:10:02.994472   73662 cri.go:89] found id: ""
	I0603 12:10:02.994515   73662 logs.go:276] 0 containers: []
	W0603 12:10:02.994527   73662 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 12:10:02.994535   73662 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 12:10:02.994598   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 12:10:03.034482   73662 cri.go:89] found id: ""
	I0603 12:10:03.034509   73662 logs.go:276] 0 containers: []
	W0603 12:10:03.034520   73662 logs.go:278] No container was found matching "kindnet"
	I0603 12:10:03.034526   73662 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 12:10:03.034591   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 12:10:03.072971   73662 cri.go:89] found id: ""
	I0603 12:10:03.073002   73662 logs.go:276] 0 containers: []
	W0603 12:10:03.073011   73662 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 12:10:03.073025   73662 logs.go:123] Gathering logs for dmesg ...
	I0603 12:10:03.073043   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 12:10:03.088043   73662 logs.go:123] Gathering logs for describe nodes ...
	I0603 12:10:03.088074   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 12:10:03.186799   73662 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 12:10:03.186829   73662 logs.go:123] Gathering logs for CRI-O ...
	I0603 12:10:03.186842   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 12:10:03.266685   73662 logs.go:123] Gathering logs for container status ...
	I0603 12:10:03.266724   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 12:10:03.317400   73662 logs.go:123] Gathering logs for kubelet ...
	I0603 12:10:03.317433   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 12:10:01.582398   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:10:04.082658   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:10:05.613678   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:10:08.112518   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:10:06.167099   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:10:08.167502   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:10:05.870335   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:10:05.884377   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 12:10:05.884469   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 12:10:05.924617   73662 cri.go:89] found id: ""
	I0603 12:10:05.924647   73662 logs.go:276] 0 containers: []
	W0603 12:10:05.924659   73662 logs.go:278] No container was found matching "kube-apiserver"
	I0603 12:10:05.924667   73662 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 12:10:05.924724   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 12:10:05.971569   73662 cri.go:89] found id: ""
	I0603 12:10:05.971605   73662 logs.go:276] 0 containers: []
	W0603 12:10:05.971615   73662 logs.go:278] No container was found matching "etcd"
	I0603 12:10:05.971623   73662 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 12:10:05.971683   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 12:10:06.010190   73662 cri.go:89] found id: ""
	I0603 12:10:06.010211   73662 logs.go:276] 0 containers: []
	W0603 12:10:06.010218   73662 logs.go:278] No container was found matching "coredns"
	I0603 12:10:06.010223   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 12:10:06.010270   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 12:10:06.056228   73662 cri.go:89] found id: ""
	I0603 12:10:06.056258   73662 logs.go:276] 0 containers: []
	W0603 12:10:06.056269   73662 logs.go:278] No container was found matching "kube-scheduler"
	I0603 12:10:06.056276   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 12:10:06.056338   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 12:10:06.096139   73662 cri.go:89] found id: ""
	I0603 12:10:06.096171   73662 logs.go:276] 0 containers: []
	W0603 12:10:06.096182   73662 logs.go:278] No container was found matching "kube-proxy"
	I0603 12:10:06.096192   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 12:10:06.096261   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 12:10:06.135290   73662 cri.go:89] found id: ""
	I0603 12:10:06.135327   73662 logs.go:276] 0 containers: []
	W0603 12:10:06.135338   73662 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 12:10:06.135346   73662 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 12:10:06.135412   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 12:10:06.177281   73662 cri.go:89] found id: ""
	I0603 12:10:06.177311   73662 logs.go:276] 0 containers: []
	W0603 12:10:06.177328   73662 logs.go:278] No container was found matching "kindnet"
	I0603 12:10:06.177335   73662 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 12:10:06.177395   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 12:10:06.216791   73662 cri.go:89] found id: ""
	I0603 12:10:06.216823   73662 logs.go:276] 0 containers: []
	W0603 12:10:06.216835   73662 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 12:10:06.216845   73662 logs.go:123] Gathering logs for kubelet ...
	I0603 12:10:06.216874   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 12:10:06.272731   73662 logs.go:123] Gathering logs for dmesg ...
	I0603 12:10:06.272772   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 12:10:06.289080   73662 logs.go:123] Gathering logs for describe nodes ...
	I0603 12:10:06.289118   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 12:10:06.358105   73662 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 12:10:06.358134   73662 logs.go:123] Gathering logs for CRI-O ...
	I0603 12:10:06.358148   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 12:10:06.433071   73662 logs.go:123] Gathering logs for container status ...
	I0603 12:10:06.433107   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 12:10:08.974934   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:10:08.988808   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 12:10:08.988883   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 12:10:09.023595   73662 cri.go:89] found id: ""
	I0603 12:10:09.023620   73662 logs.go:276] 0 containers: []
	W0603 12:10:09.023627   73662 logs.go:278] No container was found matching "kube-apiserver"
	I0603 12:10:09.023633   73662 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 12:10:09.023683   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 12:10:09.060962   73662 cri.go:89] found id: ""
	I0603 12:10:09.060990   73662 logs.go:276] 0 containers: []
	W0603 12:10:09.061000   73662 logs.go:278] No container was found matching "etcd"
	I0603 12:10:09.061006   73662 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 12:10:09.061080   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 12:10:09.099923   73662 cri.go:89] found id: ""
	I0603 12:10:09.099952   73662 logs.go:276] 0 containers: []
	W0603 12:10:09.099961   73662 logs.go:278] No container was found matching "coredns"
	I0603 12:10:09.099970   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 12:10:09.100030   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 12:10:09.138521   73662 cri.go:89] found id: ""
	I0603 12:10:09.138547   73662 logs.go:276] 0 containers: []
	W0603 12:10:09.138555   73662 logs.go:278] No container was found matching "kube-scheduler"
	I0603 12:10:09.138561   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 12:10:09.138614   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 12:10:09.178492   73662 cri.go:89] found id: ""
	I0603 12:10:09.178519   73662 logs.go:276] 0 containers: []
	W0603 12:10:09.178529   73662 logs.go:278] No container was found matching "kube-proxy"
	I0603 12:10:09.178537   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 12:10:09.178603   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 12:10:09.215779   73662 cri.go:89] found id: ""
	I0603 12:10:09.215812   73662 logs.go:276] 0 containers: []
	W0603 12:10:09.215819   73662 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 12:10:09.215832   73662 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 12:10:09.215894   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 12:10:09.250800   73662 cri.go:89] found id: ""
	I0603 12:10:09.250835   73662 logs.go:276] 0 containers: []
	W0603 12:10:09.250845   73662 logs.go:278] No container was found matching "kindnet"
	I0603 12:10:09.250852   73662 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 12:10:09.250913   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 12:10:09.286742   73662 cri.go:89] found id: ""
	I0603 12:10:09.286773   73662 logs.go:276] 0 containers: []
	W0603 12:10:09.286784   73662 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 12:10:09.286794   73662 logs.go:123] Gathering logs for kubelet ...
	I0603 12:10:09.286808   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 12:10:09.341156   73662 logs.go:123] Gathering logs for dmesg ...
	I0603 12:10:09.341189   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 12:10:09.356237   73662 logs.go:123] Gathering logs for describe nodes ...
	I0603 12:10:09.356273   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 12:10:09.436633   73662 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 12:10:09.436654   73662 logs.go:123] Gathering logs for CRI-O ...
	I0603 12:10:09.436666   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 12:10:09.519296   73662 logs.go:123] Gathering logs for container status ...
	I0603 12:10:09.519336   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 12:10:06.581573   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:10:09.081354   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:10:10.113408   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:10:12.113838   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:10:10.168197   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:10:12.667631   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:10:14.667886   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:10:12.090458   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:10:12.105250   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 12:10:12.105324   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 12:10:12.143229   73662 cri.go:89] found id: ""
	I0603 12:10:12.143257   73662 logs.go:276] 0 containers: []
	W0603 12:10:12.143268   73662 logs.go:278] No container was found matching "kube-apiserver"
	I0603 12:10:12.143276   73662 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 12:10:12.143345   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 12:10:12.183319   73662 cri.go:89] found id: ""
	I0603 12:10:12.183343   73662 logs.go:276] 0 containers: []
	W0603 12:10:12.183353   73662 logs.go:278] No container was found matching "etcd"
	I0603 12:10:12.183361   73662 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 12:10:12.183421   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 12:10:12.221154   73662 cri.go:89] found id: ""
	I0603 12:10:12.221178   73662 logs.go:276] 0 containers: []
	W0603 12:10:12.221186   73662 logs.go:278] No container was found matching "coredns"
	I0603 12:10:12.221191   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 12:10:12.221252   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 12:10:12.256387   73662 cri.go:89] found id: ""
	I0603 12:10:12.256417   73662 logs.go:276] 0 containers: []
	W0603 12:10:12.256428   73662 logs.go:278] No container was found matching "kube-scheduler"
	I0603 12:10:12.256436   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 12:10:12.256492   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 12:10:12.298777   73662 cri.go:89] found id: ""
	I0603 12:10:12.298807   73662 logs.go:276] 0 containers: []
	W0603 12:10:12.298817   73662 logs.go:278] No container was found matching "kube-proxy"
	I0603 12:10:12.298825   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 12:10:12.298883   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 12:10:12.337031   73662 cri.go:89] found id: ""
	I0603 12:10:12.337060   73662 logs.go:276] 0 containers: []
	W0603 12:10:12.337070   73662 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 12:10:12.337077   73662 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 12:10:12.337136   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 12:10:12.373729   73662 cri.go:89] found id: ""
	I0603 12:10:12.373759   73662 logs.go:276] 0 containers: []
	W0603 12:10:12.373766   73662 logs.go:278] No container was found matching "kindnet"
	I0603 12:10:12.373772   73662 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 12:10:12.373823   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 12:10:12.408295   73662 cri.go:89] found id: ""
	I0603 12:10:12.408337   73662 logs.go:276] 0 containers: []
	W0603 12:10:12.408346   73662 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 12:10:12.408357   73662 logs.go:123] Gathering logs for kubelet ...
	I0603 12:10:12.408371   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 12:10:12.458814   73662 logs.go:123] Gathering logs for dmesg ...
	I0603 12:10:12.458844   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 12:10:12.471995   73662 logs.go:123] Gathering logs for describe nodes ...
	I0603 12:10:12.472020   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 12:10:12.542342   73662 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 12:10:12.542364   73662 logs.go:123] Gathering logs for CRI-O ...
	I0603 12:10:12.542379   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 12:10:12.620295   73662 logs.go:123] Gathering logs for container status ...
	I0603 12:10:12.620328   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 12:10:11.081820   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:10:13.580873   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:10:14.613837   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:10:16.613987   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:10:18.614774   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:10:17.166332   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:10:19.167726   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:10:15.162145   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:10:15.178057   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 12:10:15.178110   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 12:10:15.217189   73662 cri.go:89] found id: ""
	I0603 12:10:15.217218   73662 logs.go:276] 0 containers: []
	W0603 12:10:15.217228   73662 logs.go:278] No container was found matching "kube-apiserver"
	I0603 12:10:15.217235   73662 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 12:10:15.217291   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 12:10:15.265380   73662 cri.go:89] found id: ""
	I0603 12:10:15.265419   73662 logs.go:276] 0 containers: []
	W0603 12:10:15.265430   73662 logs.go:278] No container was found matching "etcd"
	I0603 12:10:15.265438   73662 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 12:10:15.265500   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 12:10:15.310671   73662 cri.go:89] found id: ""
	I0603 12:10:15.310736   73662 logs.go:276] 0 containers: []
	W0603 12:10:15.310772   73662 logs.go:278] No container was found matching "coredns"
	I0603 12:10:15.310787   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 12:10:15.310884   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 12:10:15.377888   73662 cri.go:89] found id: ""
	I0603 12:10:15.377914   73662 logs.go:276] 0 containers: []
	W0603 12:10:15.377921   73662 logs.go:278] No container was found matching "kube-scheduler"
	I0603 12:10:15.377928   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 12:10:15.377972   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 12:10:15.415472   73662 cri.go:89] found id: ""
	I0603 12:10:15.415502   73662 logs.go:276] 0 containers: []
	W0603 12:10:15.415510   73662 logs.go:278] No container was found matching "kube-proxy"
	I0603 12:10:15.415516   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 12:10:15.415563   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 12:10:15.450721   73662 cri.go:89] found id: ""
	I0603 12:10:15.450748   73662 logs.go:276] 0 containers: []
	W0603 12:10:15.450755   73662 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 12:10:15.450760   73662 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 12:10:15.450814   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 12:10:15.484329   73662 cri.go:89] found id: ""
	I0603 12:10:15.484356   73662 logs.go:276] 0 containers: []
	W0603 12:10:15.484363   73662 logs.go:278] No container was found matching "kindnet"
	I0603 12:10:15.484368   73662 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 12:10:15.484426   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 12:10:15.516976   73662 cri.go:89] found id: ""
	I0603 12:10:15.517005   73662 logs.go:276] 0 containers: []
	W0603 12:10:15.517015   73662 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 12:10:15.517025   73662 logs.go:123] Gathering logs for kubelet ...
	I0603 12:10:15.517038   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 12:10:15.569023   73662 logs.go:123] Gathering logs for dmesg ...
	I0603 12:10:15.569053   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 12:10:15.583710   73662 logs.go:123] Gathering logs for describe nodes ...
	I0603 12:10:15.583737   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 12:10:15.656403   73662 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 12:10:15.656426   73662 logs.go:123] Gathering logs for CRI-O ...
	I0603 12:10:15.656438   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 12:10:15.745585   73662 logs.go:123] Gathering logs for container status ...
	I0603 12:10:15.745619   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 12:10:18.290608   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:10:18.305165   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 12:10:18.305238   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 12:10:18.341905   73662 cri.go:89] found id: ""
	I0603 12:10:18.341929   73662 logs.go:276] 0 containers: []
	W0603 12:10:18.341939   73662 logs.go:278] No container was found matching "kube-apiserver"
	I0603 12:10:18.341945   73662 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 12:10:18.342001   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 12:10:18.378313   73662 cri.go:89] found id: ""
	I0603 12:10:18.378341   73662 logs.go:276] 0 containers: []
	W0603 12:10:18.378348   73662 logs.go:278] No container was found matching "etcd"
	I0603 12:10:18.378354   73662 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 12:10:18.378401   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 12:10:18.413366   73662 cri.go:89] found id: ""
	I0603 12:10:18.413414   73662 logs.go:276] 0 containers: []
	W0603 12:10:18.413424   73662 logs.go:278] No container was found matching "coredns"
	I0603 12:10:18.413432   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 12:10:18.413492   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 12:10:18.448694   73662 cri.go:89] found id: ""
	I0603 12:10:18.448727   73662 logs.go:276] 0 containers: []
	W0603 12:10:18.448738   73662 logs.go:278] No container was found matching "kube-scheduler"
	I0603 12:10:18.448745   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 12:10:18.448802   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 12:10:18.482640   73662 cri.go:89] found id: ""
	I0603 12:10:18.482678   73662 logs.go:276] 0 containers: []
	W0603 12:10:18.482689   73662 logs.go:278] No container was found matching "kube-proxy"
	I0603 12:10:18.482696   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 12:10:18.482757   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 12:10:18.520929   73662 cri.go:89] found id: ""
	I0603 12:10:18.520962   73662 logs.go:276] 0 containers: []
	W0603 12:10:18.520975   73662 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 12:10:18.520983   73662 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 12:10:18.521045   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 12:10:18.558678   73662 cri.go:89] found id: ""
	I0603 12:10:18.558712   73662 logs.go:276] 0 containers: []
	W0603 12:10:18.558723   73662 logs.go:278] No container was found matching "kindnet"
	I0603 12:10:18.558730   73662 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 12:10:18.558788   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 12:10:18.597574   73662 cri.go:89] found id: ""
	I0603 12:10:18.597599   73662 logs.go:276] 0 containers: []
	W0603 12:10:18.597609   73662 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 12:10:18.597619   73662 logs.go:123] Gathering logs for kubelet ...
	I0603 12:10:18.597633   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 12:10:18.652569   73662 logs.go:123] Gathering logs for dmesg ...
	I0603 12:10:18.652596   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 12:10:18.667829   73662 logs.go:123] Gathering logs for describe nodes ...
	I0603 12:10:18.667861   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 12:10:18.740869   73662 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 12:10:18.740888   73662 logs.go:123] Gathering logs for CRI-O ...
	I0603 12:10:18.740899   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 12:10:18.822108   73662 logs.go:123] Gathering logs for container status ...
	I0603 12:10:18.822143   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 12:10:15.581618   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:10:18.081181   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:10:21.113841   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:10:23.612530   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:10:21.667682   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:10:24.167351   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:10:21.363741   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:10:21.377941   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 12:10:21.378011   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 12:10:21.414406   73662 cri.go:89] found id: ""
	I0603 12:10:21.414434   73662 logs.go:276] 0 containers: []
	W0603 12:10:21.414446   73662 logs.go:278] No container was found matching "kube-apiserver"
	I0603 12:10:21.414454   73662 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 12:10:21.414513   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 12:10:21.449028   73662 cri.go:89] found id: ""
	I0603 12:10:21.449065   73662 logs.go:276] 0 containers: []
	W0603 12:10:21.449074   73662 logs.go:278] No container was found matching "etcd"
	I0603 12:10:21.449080   73662 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 12:10:21.449126   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 12:10:21.483017   73662 cri.go:89] found id: ""
	I0603 12:10:21.483052   73662 logs.go:276] 0 containers: []
	W0603 12:10:21.483064   73662 logs.go:278] No container was found matching "coredns"
	I0603 12:10:21.483071   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 12:10:21.483120   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 12:10:21.519195   73662 cri.go:89] found id: ""
	I0603 12:10:21.519227   73662 logs.go:276] 0 containers: []
	W0603 12:10:21.519237   73662 logs.go:278] No container was found matching "kube-scheduler"
	I0603 12:10:21.519245   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 12:10:21.519304   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 12:10:21.556228   73662 cri.go:89] found id: ""
	I0603 12:10:21.556257   73662 logs.go:276] 0 containers: []
	W0603 12:10:21.556270   73662 logs.go:278] No container was found matching "kube-proxy"
	I0603 12:10:21.556276   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 12:10:21.556337   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 12:10:21.594772   73662 cri.go:89] found id: ""
	I0603 12:10:21.594798   73662 logs.go:276] 0 containers: []
	W0603 12:10:21.594808   73662 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 12:10:21.594817   73662 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 12:10:21.594875   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 12:10:21.629808   73662 cri.go:89] found id: ""
	I0603 12:10:21.629830   73662 logs.go:276] 0 containers: []
	W0603 12:10:21.629837   73662 logs.go:278] No container was found matching "kindnet"
	I0603 12:10:21.629843   73662 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 12:10:21.629891   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 12:10:21.675237   73662 cri.go:89] found id: ""
	I0603 12:10:21.675263   73662 logs.go:276] 0 containers: []
	W0603 12:10:21.675272   73662 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 12:10:21.675282   73662 logs.go:123] Gathering logs for kubelet ...
	I0603 12:10:21.675295   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 12:10:21.730416   73662 logs.go:123] Gathering logs for dmesg ...
	I0603 12:10:21.730445   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 12:10:21.744442   73662 logs.go:123] Gathering logs for describe nodes ...
	I0603 12:10:21.744467   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 12:10:21.826282   73662 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 12:10:21.826308   73662 logs.go:123] Gathering logs for CRI-O ...
	I0603 12:10:21.826324   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 12:10:21.911387   73662 logs.go:123] Gathering logs for container status ...
	I0603 12:10:21.911422   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 12:10:24.454912   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:10:24.469992   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 12:10:24.470069   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 12:10:24.509462   73662 cri.go:89] found id: ""
	I0603 12:10:24.509501   73662 logs.go:276] 0 containers: []
	W0603 12:10:24.509516   73662 logs.go:278] No container was found matching "kube-apiserver"
	I0603 12:10:24.509523   73662 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 12:10:24.509588   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 12:10:24.543878   73662 cri.go:89] found id: ""
	I0603 12:10:24.543902   73662 logs.go:276] 0 containers: []
	W0603 12:10:24.543910   73662 logs.go:278] No container was found matching "etcd"
	I0603 12:10:24.543916   73662 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 12:10:24.543969   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 12:10:24.582712   73662 cri.go:89] found id: ""
	I0603 12:10:24.582741   73662 logs.go:276] 0 containers: []
	W0603 12:10:24.582752   73662 logs.go:278] No container was found matching "coredns"
	I0603 12:10:24.582759   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 12:10:24.582824   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 12:10:24.620533   73662 cri.go:89] found id: ""
	I0603 12:10:24.620560   73662 logs.go:276] 0 containers: []
	W0603 12:10:24.620571   73662 logs.go:278] No container was found matching "kube-scheduler"
	I0603 12:10:24.620577   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 12:10:24.620629   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 12:10:24.658750   73662 cri.go:89] found id: ""
	I0603 12:10:24.658774   73662 logs.go:276] 0 containers: []
	W0603 12:10:24.658781   73662 logs.go:278] No container was found matching "kube-proxy"
	I0603 12:10:24.658787   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 12:10:24.658830   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 12:10:24.697870   73662 cri.go:89] found id: ""
	I0603 12:10:24.697898   73662 logs.go:276] 0 containers: []
	W0603 12:10:24.697914   73662 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 12:10:24.697922   73662 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 12:10:24.697982   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 12:10:24.733557   73662 cri.go:89] found id: ""
	I0603 12:10:24.733583   73662 logs.go:276] 0 containers: []
	W0603 12:10:24.733593   73662 logs.go:278] No container was found matching "kindnet"
	I0603 12:10:24.733601   73662 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 12:10:24.733658   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 12:10:24.767874   73662 cri.go:89] found id: ""
	I0603 12:10:24.767901   73662 logs.go:276] 0 containers: []
	W0603 12:10:24.767910   73662 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 12:10:24.767920   73662 logs.go:123] Gathering logs for kubelet ...
	I0603 12:10:24.767934   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 12:10:24.821155   73662 logs.go:123] Gathering logs for dmesg ...
	I0603 12:10:24.821188   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 12:10:24.835506   73662 logs.go:123] Gathering logs for describe nodes ...
	I0603 12:10:24.835533   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 12:10:24.911295   73662 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 12:10:24.911317   73662 logs.go:123] Gathering logs for CRI-O ...
	I0603 12:10:24.911331   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 12:10:24.998831   73662 logs.go:123] Gathering logs for container status ...
	I0603 12:10:24.998870   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 12:10:20.581174   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:10:22.582071   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:10:25.081112   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:10:26.113580   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:10:28.113842   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:10:26.167517   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:10:28.666601   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:10:27.547553   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:10:27.562219   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 12:10:27.562283   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 12:10:27.604320   73662 cri.go:89] found id: ""
	I0603 12:10:27.604354   73662 logs.go:276] 0 containers: []
	W0603 12:10:27.604362   73662 logs.go:278] No container was found matching "kube-apiserver"
	I0603 12:10:27.604368   73662 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 12:10:27.604431   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 12:10:27.645069   73662 cri.go:89] found id: ""
	I0603 12:10:27.645093   73662 logs.go:276] 0 containers: []
	W0603 12:10:27.645100   73662 logs.go:278] No container was found matching "etcd"
	I0603 12:10:27.645105   73662 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 12:10:27.645208   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 12:10:27.682961   73662 cri.go:89] found id: ""
	I0603 12:10:27.682984   73662 logs.go:276] 0 containers: []
	W0603 12:10:27.682992   73662 logs.go:278] No container was found matching "coredns"
	I0603 12:10:27.682997   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 12:10:27.683065   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 12:10:27.716279   73662 cri.go:89] found id: ""
	I0603 12:10:27.716310   73662 logs.go:276] 0 containers: []
	W0603 12:10:27.716321   73662 logs.go:278] No container was found matching "kube-scheduler"
	I0603 12:10:27.716330   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 12:10:27.716405   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 12:10:27.758347   73662 cri.go:89] found id: ""
	I0603 12:10:27.758380   73662 logs.go:276] 0 containers: []
	W0603 12:10:27.758390   73662 logs.go:278] No container was found matching "kube-proxy"
	I0603 12:10:27.758397   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 12:10:27.758446   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 12:10:27.798212   73662 cri.go:89] found id: ""
	I0603 12:10:27.798240   73662 logs.go:276] 0 containers: []
	W0603 12:10:27.798249   73662 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 12:10:27.798258   73662 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 12:10:27.798318   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 12:10:27.831688   73662 cri.go:89] found id: ""
	I0603 12:10:27.831709   73662 logs.go:276] 0 containers: []
	W0603 12:10:27.831716   73662 logs.go:278] No container was found matching "kindnet"
	I0603 12:10:27.831722   73662 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 12:10:27.831776   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 12:10:27.864395   73662 cri.go:89] found id: ""
	I0603 12:10:27.864423   73662 logs.go:276] 0 containers: []
	W0603 12:10:27.864433   73662 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 12:10:27.864444   73662 logs.go:123] Gathering logs for kubelet ...
	I0603 12:10:27.864463   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 12:10:27.915528   73662 logs.go:123] Gathering logs for dmesg ...
	I0603 12:10:27.915556   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 12:10:27.929783   73662 logs.go:123] Gathering logs for describe nodes ...
	I0603 12:10:27.929806   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 12:10:28.005168   73662 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 12:10:28.005245   73662 logs.go:123] Gathering logs for CRI-O ...
	I0603 12:10:28.005267   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 12:10:28.090748   73662 logs.go:123] Gathering logs for container status ...
	I0603 12:10:28.090779   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 12:10:27.582855   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:10:30.081021   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:10:30.615472   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:10:33.112833   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:10:30.668051   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:10:33.167211   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:10:30.631148   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:10:30.645518   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 12:10:30.645590   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 12:10:30.684016   73662 cri.go:89] found id: ""
	I0603 12:10:30.684044   73662 logs.go:276] 0 containers: []
	W0603 12:10:30.684054   73662 logs.go:278] No container was found matching "kube-apiserver"
	I0603 12:10:30.684062   73662 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 12:10:30.684129   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 12:10:30.720344   73662 cri.go:89] found id: ""
	I0603 12:10:30.720371   73662 logs.go:276] 0 containers: []
	W0603 12:10:30.720379   73662 logs.go:278] No container was found matching "etcd"
	I0603 12:10:30.720384   73662 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 12:10:30.720437   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 12:10:30.754123   73662 cri.go:89] found id: ""
	I0603 12:10:30.754156   73662 logs.go:276] 0 containers: []
	W0603 12:10:30.754167   73662 logs.go:278] No container was found matching "coredns"
	I0603 12:10:30.754175   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 12:10:30.754228   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 12:10:30.788398   73662 cri.go:89] found id: ""
	I0603 12:10:30.788425   73662 logs.go:276] 0 containers: []
	W0603 12:10:30.788436   73662 logs.go:278] No container was found matching "kube-scheduler"
	I0603 12:10:30.788455   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 12:10:30.788523   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 12:10:30.826122   73662 cri.go:89] found id: ""
	I0603 12:10:30.826149   73662 logs.go:276] 0 containers: []
	W0603 12:10:30.826157   73662 logs.go:278] No container was found matching "kube-proxy"
	I0603 12:10:30.826163   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 12:10:30.826221   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 12:10:30.862886   73662 cri.go:89] found id: ""
	I0603 12:10:30.862917   73662 logs.go:276] 0 containers: []
	W0603 12:10:30.862930   73662 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 12:10:30.862938   73662 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 12:10:30.862995   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 12:10:30.897587   73662 cri.go:89] found id: ""
	I0603 12:10:30.897616   73662 logs.go:276] 0 containers: []
	W0603 12:10:30.897628   73662 logs.go:278] No container was found matching "kindnet"
	I0603 12:10:30.897635   73662 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 12:10:30.897692   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 12:10:30.936463   73662 cri.go:89] found id: ""
	I0603 12:10:30.936493   73662 logs.go:276] 0 containers: []
	W0603 12:10:30.936510   73662 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 12:10:30.936521   73662 logs.go:123] Gathering logs for kubelet ...
	I0603 12:10:30.936535   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 12:10:30.987304   73662 logs.go:123] Gathering logs for dmesg ...
	I0603 12:10:30.987341   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 12:10:31.001608   73662 logs.go:123] Gathering logs for describe nodes ...
	I0603 12:10:31.001636   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 12:10:31.079366   73662 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 12:10:31.079385   73662 logs.go:123] Gathering logs for CRI-O ...
	I0603 12:10:31.079398   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 12:10:31.158814   73662 logs.go:123] Gathering logs for container status ...
	I0603 12:10:31.158851   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 12:10:33.699524   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:10:33.713194   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 12:10:33.713256   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 12:10:33.747030   73662 cri.go:89] found id: ""
	I0603 12:10:33.747073   73662 logs.go:276] 0 containers: []
	W0603 12:10:33.747084   73662 logs.go:278] No container was found matching "kube-apiserver"
	I0603 12:10:33.747092   73662 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 12:10:33.747151   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 12:10:33.781873   73662 cri.go:89] found id: ""
	I0603 12:10:33.781909   73662 logs.go:276] 0 containers: []
	W0603 12:10:33.781920   73662 logs.go:278] No container was found matching "etcd"
	I0603 12:10:33.781927   73662 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 12:10:33.781992   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 12:10:33.828337   73662 cri.go:89] found id: ""
	I0603 12:10:33.828366   73662 logs.go:276] 0 containers: []
	W0603 12:10:33.828374   73662 logs.go:278] No container was found matching "coredns"
	I0603 12:10:33.828380   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 12:10:33.828433   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 12:10:33.868051   73662 cri.go:89] found id: ""
	I0603 12:10:33.868089   73662 logs.go:276] 0 containers: []
	W0603 12:10:33.868101   73662 logs.go:278] No container was found matching "kube-scheduler"
	I0603 12:10:33.868109   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 12:10:33.868168   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 12:10:33.913693   73662 cri.go:89] found id: ""
	I0603 12:10:33.913725   73662 logs.go:276] 0 containers: []
	W0603 12:10:33.913736   73662 logs.go:278] No container was found matching "kube-proxy"
	I0603 12:10:33.913743   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 12:10:33.913824   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 12:10:33.952082   73662 cri.go:89] found id: ""
	I0603 12:10:33.952111   73662 logs.go:276] 0 containers: []
	W0603 12:10:33.952122   73662 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 12:10:33.952129   73662 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 12:10:33.952183   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 12:10:33.994921   73662 cri.go:89] found id: ""
	I0603 12:10:33.994944   73662 logs.go:276] 0 containers: []
	W0603 12:10:33.994952   73662 logs.go:278] No container was found matching "kindnet"
	I0603 12:10:33.994959   73662 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 12:10:33.995008   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 12:10:34.033315   73662 cri.go:89] found id: ""
	I0603 12:10:34.033346   73662 logs.go:276] 0 containers: []
	W0603 12:10:34.033357   73662 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 12:10:34.033368   73662 logs.go:123] Gathering logs for kubelet ...
	I0603 12:10:34.033381   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 12:10:34.087719   73662 logs.go:123] Gathering logs for dmesg ...
	I0603 12:10:34.087746   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 12:10:34.101109   73662 logs.go:123] Gathering logs for describe nodes ...
	I0603 12:10:34.101134   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 12:10:34.180100   73662 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 12:10:34.180121   73662 logs.go:123] Gathering logs for CRI-O ...
	I0603 12:10:34.180135   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 12:10:34.255838   73662 logs.go:123] Gathering logs for container status ...
	I0603 12:10:34.255870   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 12:10:32.583080   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:10:35.081454   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:10:35.113238   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:10:37.611978   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:10:35.668549   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:10:38.166687   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:10:36.800845   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:10:36.815775   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 12:10:36.815834   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 12:10:36.849970   73662 cri.go:89] found id: ""
	I0603 12:10:36.849999   73662 logs.go:276] 0 containers: []
	W0603 12:10:36.850009   73662 logs.go:278] No container was found matching "kube-apiserver"
	I0603 12:10:36.850015   73662 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 12:10:36.850063   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 12:10:36.886418   73662 cri.go:89] found id: ""
	I0603 12:10:36.886448   73662 logs.go:276] 0 containers: []
	W0603 12:10:36.886456   73662 logs.go:278] No container was found matching "etcd"
	I0603 12:10:36.886461   73662 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 12:10:36.886506   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 12:10:36.919671   73662 cri.go:89] found id: ""
	I0603 12:10:36.919696   73662 logs.go:276] 0 containers: []
	W0603 12:10:36.919703   73662 logs.go:278] No container was found matching "coredns"
	I0603 12:10:36.919710   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 12:10:36.919766   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 12:10:36.954412   73662 cri.go:89] found id: ""
	I0603 12:10:36.954436   73662 logs.go:276] 0 containers: []
	W0603 12:10:36.954446   73662 logs.go:278] No container was found matching "kube-scheduler"
	I0603 12:10:36.954453   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 12:10:36.954513   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 12:10:36.989805   73662 cri.go:89] found id: ""
	I0603 12:10:36.989836   73662 logs.go:276] 0 containers: []
	W0603 12:10:36.989848   73662 logs.go:278] No container was found matching "kube-proxy"
	I0603 12:10:36.989856   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 12:10:36.989930   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 12:10:37.023883   73662 cri.go:89] found id: ""
	I0603 12:10:37.023913   73662 logs.go:276] 0 containers: []
	W0603 12:10:37.023922   73662 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 12:10:37.023930   73662 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 12:10:37.023995   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 12:10:37.058617   73662 cri.go:89] found id: ""
	I0603 12:10:37.058646   73662 logs.go:276] 0 containers: []
	W0603 12:10:37.058654   73662 logs.go:278] No container was found matching "kindnet"
	I0603 12:10:37.058661   73662 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 12:10:37.058719   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 12:10:37.093143   73662 cri.go:89] found id: ""
	I0603 12:10:37.093167   73662 logs.go:276] 0 containers: []
	W0603 12:10:37.093177   73662 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 12:10:37.093192   73662 logs.go:123] Gathering logs for container status ...
	I0603 12:10:37.093208   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 12:10:37.133117   73662 logs.go:123] Gathering logs for kubelet ...
	I0603 12:10:37.133147   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 12:10:37.188143   73662 logs.go:123] Gathering logs for dmesg ...
	I0603 12:10:37.188174   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 12:10:37.202654   73662 logs.go:123] Gathering logs for describe nodes ...
	I0603 12:10:37.202687   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 12:10:37.276401   73662 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 12:10:37.276429   73662 logs.go:123] Gathering logs for CRI-O ...
	I0603 12:10:37.276443   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 12:10:39.855590   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:10:39.870119   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 12:10:39.870189   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 12:10:39.907496   73662 cri.go:89] found id: ""
	I0603 12:10:39.907527   73662 logs.go:276] 0 containers: []
	W0603 12:10:39.907537   73662 logs.go:278] No container was found matching "kube-apiserver"
	I0603 12:10:39.907545   73662 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 12:10:39.907607   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 12:10:39.942745   73662 cri.go:89] found id: ""
	I0603 12:10:39.942774   73662 logs.go:276] 0 containers: []
	W0603 12:10:39.942784   73662 logs.go:278] No container was found matching "etcd"
	I0603 12:10:39.942791   73662 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 12:10:39.942853   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 12:10:39.981620   73662 cri.go:89] found id: ""
	I0603 12:10:39.981649   73662 logs.go:276] 0 containers: []
	W0603 12:10:39.981660   73662 logs.go:278] No container was found matching "coredns"
	I0603 12:10:39.981667   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 12:10:39.981718   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 12:10:40.020121   73662 cri.go:89] found id: ""
	I0603 12:10:40.020155   73662 logs.go:276] 0 containers: []
	W0603 12:10:40.020167   73662 logs.go:278] No container was found matching "kube-scheduler"
	I0603 12:10:40.020175   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 12:10:40.020240   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 12:10:40.059547   73662 cri.go:89] found id: ""
	I0603 12:10:40.059580   73662 logs.go:276] 0 containers: []
	W0603 12:10:40.059591   73662 logs.go:278] No container was found matching "kube-proxy"
	I0603 12:10:40.059598   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 12:10:40.059659   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 12:10:37.082294   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:10:39.581774   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:10:39.614702   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:10:42.112933   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:10:44.113960   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:10:40.167350   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:10:42.667457   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:10:40.097365   73662 cri.go:89] found id: ""
	I0603 12:10:40.097386   73662 logs.go:276] 0 containers: []
	W0603 12:10:40.097393   73662 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 12:10:40.097400   73662 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 12:10:40.097441   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 12:10:40.132635   73662 cri.go:89] found id: ""
	I0603 12:10:40.132657   73662 logs.go:276] 0 containers: []
	W0603 12:10:40.132664   73662 logs.go:278] No container was found matching "kindnet"
	I0603 12:10:40.132670   73662 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 12:10:40.132725   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 12:10:40.165849   73662 cri.go:89] found id: ""
	I0603 12:10:40.165875   73662 logs.go:276] 0 containers: []
	W0603 12:10:40.165885   73662 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 12:10:40.165895   73662 logs.go:123] Gathering logs for kubelet ...
	I0603 12:10:40.165910   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 12:10:40.218842   73662 logs.go:123] Gathering logs for dmesg ...
	I0603 12:10:40.218871   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 12:10:40.232800   73662 logs.go:123] Gathering logs for describe nodes ...
	I0603 12:10:40.232825   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 12:10:40.300026   73662 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 12:10:40.300050   73662 logs.go:123] Gathering logs for CRI-O ...
	I0603 12:10:40.300065   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 12:10:40.376985   73662 logs.go:123] Gathering logs for container status ...
	I0603 12:10:40.377017   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 12:10:42.916093   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:10:42.930099   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 12:10:42.930157   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 12:10:42.965541   73662 cri.go:89] found id: ""
	I0603 12:10:42.965565   73662 logs.go:276] 0 containers: []
	W0603 12:10:42.965575   73662 logs.go:278] No container was found matching "kube-apiserver"
	I0603 12:10:42.965582   73662 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 12:10:42.965639   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 12:10:43.000837   73662 cri.go:89] found id: ""
	I0603 12:10:43.000863   73662 logs.go:276] 0 containers: []
	W0603 12:10:43.000871   73662 logs.go:278] No container was found matching "etcd"
	I0603 12:10:43.000877   73662 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 12:10:43.000930   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 12:10:43.036557   73662 cri.go:89] found id: ""
	I0603 12:10:43.036593   73662 logs.go:276] 0 containers: []
	W0603 12:10:43.036605   73662 logs.go:278] No container was found matching "coredns"
	I0603 12:10:43.036626   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 12:10:43.036695   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 12:10:43.076479   73662 cri.go:89] found id: ""
	I0603 12:10:43.076507   73662 logs.go:276] 0 containers: []
	W0603 12:10:43.076515   73662 logs.go:278] No container was found matching "kube-scheduler"
	I0603 12:10:43.076521   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 12:10:43.076571   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 12:10:43.116301   73662 cri.go:89] found id: ""
	I0603 12:10:43.116328   73662 logs.go:276] 0 containers: []
	W0603 12:10:43.116338   73662 logs.go:278] No container was found matching "kube-proxy"
	I0603 12:10:43.116345   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 12:10:43.116393   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 12:10:43.150538   73662 cri.go:89] found id: ""
	I0603 12:10:43.150576   73662 logs.go:276] 0 containers: []
	W0603 12:10:43.150587   73662 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 12:10:43.150594   73662 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 12:10:43.150662   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 12:10:43.183948   73662 cri.go:89] found id: ""
	I0603 12:10:43.183976   73662 logs.go:276] 0 containers: []
	W0603 12:10:43.183987   73662 logs.go:278] No container was found matching "kindnet"
	I0603 12:10:43.183996   73662 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 12:10:43.184048   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 12:10:43.217610   73662 cri.go:89] found id: ""
	I0603 12:10:43.217636   73662 logs.go:276] 0 containers: []
	W0603 12:10:43.217643   73662 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 12:10:43.217651   73662 logs.go:123] Gathering logs for dmesg ...
	I0603 12:10:43.217669   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 12:10:43.231630   73662 logs.go:123] Gathering logs for describe nodes ...
	I0603 12:10:43.231655   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 12:10:43.298061   73662 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 12:10:43.298079   73662 logs.go:123] Gathering logs for CRI-O ...
	I0603 12:10:43.298092   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 12:10:43.388176   73662 logs.go:123] Gathering logs for container status ...
	I0603 12:10:43.388212   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 12:10:43.426277   73662 logs.go:123] Gathering logs for kubelet ...
	I0603 12:10:43.426303   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 12:10:42.081320   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:10:44.083275   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:10:46.612864   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:10:48.613666   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:10:45.166933   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:10:47.666784   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:10:45.977882   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:10:45.991655   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 12:10:45.991716   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 12:10:46.030455   73662 cri.go:89] found id: ""
	I0603 12:10:46.030483   73662 logs.go:276] 0 containers: []
	W0603 12:10:46.030492   73662 logs.go:278] No container was found matching "kube-apiserver"
	I0603 12:10:46.030497   73662 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 12:10:46.030542   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 12:10:46.065983   73662 cri.go:89] found id: ""
	I0603 12:10:46.066019   73662 logs.go:276] 0 containers: []
	W0603 12:10:46.066028   73662 logs.go:278] No container was found matching "etcd"
	I0603 12:10:46.066037   73662 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 12:10:46.066089   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 12:10:46.102788   73662 cri.go:89] found id: ""
	I0603 12:10:46.102816   73662 logs.go:276] 0 containers: []
	W0603 12:10:46.102824   73662 logs.go:278] No container was found matching "coredns"
	I0603 12:10:46.102830   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 12:10:46.102878   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 12:10:46.141588   73662 cri.go:89] found id: ""
	I0603 12:10:46.141615   73662 logs.go:276] 0 containers: []
	W0603 12:10:46.141625   73662 logs.go:278] No container was found matching "kube-scheduler"
	I0603 12:10:46.141634   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 12:10:46.141686   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 12:10:46.176109   73662 cri.go:89] found id: ""
	I0603 12:10:46.176133   73662 logs.go:276] 0 containers: []
	W0603 12:10:46.176140   73662 logs.go:278] No container was found matching "kube-proxy"
	I0603 12:10:46.176146   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 12:10:46.176199   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 12:10:46.211660   73662 cri.go:89] found id: ""
	I0603 12:10:46.211687   73662 logs.go:276] 0 containers: []
	W0603 12:10:46.211699   73662 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 12:10:46.211706   73662 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 12:10:46.211766   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 12:10:46.247703   73662 cri.go:89] found id: ""
	I0603 12:10:46.247724   73662 logs.go:276] 0 containers: []
	W0603 12:10:46.247731   73662 logs.go:278] No container was found matching "kindnet"
	I0603 12:10:46.247737   73662 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 12:10:46.247780   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 12:10:46.280647   73662 cri.go:89] found id: ""
	I0603 12:10:46.280666   73662 logs.go:276] 0 containers: []
	W0603 12:10:46.280673   73662 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 12:10:46.280681   73662 logs.go:123] Gathering logs for CRI-O ...
	I0603 12:10:46.280692   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 12:10:46.358965   73662 logs.go:123] Gathering logs for container status ...
	I0603 12:10:46.359007   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 12:10:46.402361   73662 logs.go:123] Gathering logs for kubelet ...
	I0603 12:10:46.402393   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 12:10:46.455346   73662 logs.go:123] Gathering logs for dmesg ...
	I0603 12:10:46.455378   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 12:10:46.468953   73662 logs.go:123] Gathering logs for describe nodes ...
	I0603 12:10:46.468979   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 12:10:46.543642   73662 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 12:10:49.044028   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:10:49.059160   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 12:10:49.059237   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 12:10:49.094538   73662 cri.go:89] found id: ""
	I0603 12:10:49.094562   73662 logs.go:276] 0 containers: []
	W0603 12:10:49.094572   73662 logs.go:278] No container was found matching "kube-apiserver"
	I0603 12:10:49.094579   73662 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 12:10:49.094639   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 12:10:49.152691   73662 cri.go:89] found id: ""
	I0603 12:10:49.152718   73662 logs.go:276] 0 containers: []
	W0603 12:10:49.152729   73662 logs.go:278] No container was found matching "etcd"
	I0603 12:10:49.152736   73662 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 12:10:49.152794   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 12:10:49.190598   73662 cri.go:89] found id: ""
	I0603 12:10:49.190624   73662 logs.go:276] 0 containers: []
	W0603 12:10:49.190632   73662 logs.go:278] No container was found matching "coredns"
	I0603 12:10:49.190637   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 12:10:49.190696   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 12:10:49.224713   73662 cri.go:89] found id: ""
	I0603 12:10:49.224735   73662 logs.go:276] 0 containers: []
	W0603 12:10:49.224746   73662 logs.go:278] No container was found matching "kube-scheduler"
	I0603 12:10:49.224752   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 12:10:49.224814   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 12:10:49.261124   73662 cri.go:89] found id: ""
	I0603 12:10:49.261151   73662 logs.go:276] 0 containers: []
	W0603 12:10:49.261159   73662 logs.go:278] No container was found matching "kube-proxy"
	I0603 12:10:49.261164   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 12:10:49.261218   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 12:10:49.297702   73662 cri.go:89] found id: ""
	I0603 12:10:49.297727   73662 logs.go:276] 0 containers: []
	W0603 12:10:49.297734   73662 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 12:10:49.297739   73662 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 12:10:49.297788   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 12:10:49.337168   73662 cri.go:89] found id: ""
	I0603 12:10:49.337194   73662 logs.go:276] 0 containers: []
	W0603 12:10:49.337202   73662 logs.go:278] No container was found matching "kindnet"
	I0603 12:10:49.337208   73662 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 12:10:49.337273   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 12:10:49.378570   73662 cri.go:89] found id: ""
	I0603 12:10:49.378594   73662 logs.go:276] 0 containers: []
	W0603 12:10:49.378602   73662 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 12:10:49.378611   73662 logs.go:123] Gathering logs for kubelet ...
	I0603 12:10:49.378623   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 12:10:49.431727   73662 logs.go:123] Gathering logs for dmesg ...
	I0603 12:10:49.431761   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 12:10:49.446359   73662 logs.go:123] Gathering logs for describe nodes ...
	I0603 12:10:49.446383   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 12:10:49.515520   73662 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 12:10:49.515539   73662 logs.go:123] Gathering logs for CRI-O ...
	I0603 12:10:49.515551   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 12:10:49.600658   73662 logs.go:123] Gathering logs for container status ...
	I0603 12:10:49.600697   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 12:10:46.580695   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:10:48.581909   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:10:51.111776   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:10:53.613132   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:10:50.171016   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:10:52.667473   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:10:52.146131   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:10:52.159370   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 12:10:52.159441   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 12:10:52.200541   73662 cri.go:89] found id: ""
	I0603 12:10:52.200571   73662 logs.go:276] 0 containers: []
	W0603 12:10:52.200578   73662 logs.go:278] No container was found matching "kube-apiserver"
	I0603 12:10:52.200583   73662 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 12:10:52.200643   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 12:10:52.243779   73662 cri.go:89] found id: ""
	I0603 12:10:52.243808   73662 logs.go:276] 0 containers: []
	W0603 12:10:52.243819   73662 logs.go:278] No container was found matching "etcd"
	I0603 12:10:52.243827   73662 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 12:10:52.243885   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 12:10:52.278098   73662 cri.go:89] found id: ""
	I0603 12:10:52.278133   73662 logs.go:276] 0 containers: []
	W0603 12:10:52.278142   73662 logs.go:278] No container was found matching "coredns"
	I0603 12:10:52.278148   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 12:10:52.278201   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 12:10:52.310844   73662 cri.go:89] found id: ""
	I0603 12:10:52.310873   73662 logs.go:276] 0 containers: []
	W0603 12:10:52.310884   73662 logs.go:278] No container was found matching "kube-scheduler"
	I0603 12:10:52.310892   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 12:10:52.310947   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 12:10:52.346131   73662 cri.go:89] found id: ""
	I0603 12:10:52.346160   73662 logs.go:276] 0 containers: []
	W0603 12:10:52.346170   73662 logs.go:278] No container was found matching "kube-proxy"
	I0603 12:10:52.346186   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 12:10:52.346252   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 12:10:52.383384   73662 cri.go:89] found id: ""
	I0603 12:10:52.383412   73662 logs.go:276] 0 containers: []
	W0603 12:10:52.383420   73662 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 12:10:52.383426   73662 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 12:10:52.383472   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 12:10:52.415110   73662 cri.go:89] found id: ""
	I0603 12:10:52.415141   73662 logs.go:276] 0 containers: []
	W0603 12:10:52.415152   73662 logs.go:278] No container was found matching "kindnet"
	I0603 12:10:52.415159   73662 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 12:10:52.415228   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 12:10:52.449473   73662 cri.go:89] found id: ""
	I0603 12:10:52.449503   73662 logs.go:276] 0 containers: []
	W0603 12:10:52.449511   73662 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 12:10:52.449520   73662 logs.go:123] Gathering logs for kubelet ...
	I0603 12:10:52.449535   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 12:10:52.501303   73662 logs.go:123] Gathering logs for dmesg ...
	I0603 12:10:52.501331   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 12:10:52.515125   73662 logs.go:123] Gathering logs for describe nodes ...
	I0603 12:10:52.515155   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 12:10:52.587250   73662 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 12:10:52.587273   73662 logs.go:123] Gathering logs for CRI-O ...
	I0603 12:10:52.587289   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 12:10:52.677387   73662 logs.go:123] Gathering logs for container status ...
	I0603 12:10:52.677417   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 12:10:51.081196   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:10:53.081389   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:10:55.082150   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:10:55.618759   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:10:58.112642   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:10:55.166477   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:10:57.666759   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:10:59.667117   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:10:55.216868   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:10:55.231081   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 12:10:55.231148   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 12:10:55.269023   73662 cri.go:89] found id: ""
	I0603 12:10:55.269060   73662 logs.go:276] 0 containers: []
	W0603 12:10:55.269071   73662 logs.go:278] No container was found matching "kube-apiserver"
	I0603 12:10:55.269078   73662 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 12:10:55.269140   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 12:10:55.304553   73662 cri.go:89] found id: ""
	I0603 12:10:55.304584   73662 logs.go:276] 0 containers: []
	W0603 12:10:55.304594   73662 logs.go:278] No container was found matching "etcd"
	I0603 12:10:55.304602   73662 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 12:10:55.304653   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 12:10:55.337397   73662 cri.go:89] found id: ""
	I0603 12:10:55.337417   73662 logs.go:276] 0 containers: []
	W0603 12:10:55.337426   73662 logs.go:278] No container was found matching "coredns"
	I0603 12:10:55.337431   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 12:10:55.337477   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 12:10:55.378338   73662 cri.go:89] found id: ""
	I0603 12:10:55.378360   73662 logs.go:276] 0 containers: []
	W0603 12:10:55.378369   73662 logs.go:278] No container was found matching "kube-scheduler"
	I0603 12:10:55.378376   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 12:10:55.378434   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 12:10:55.419463   73662 cri.go:89] found id: ""
	I0603 12:10:55.419488   73662 logs.go:276] 0 containers: []
	W0603 12:10:55.419506   73662 logs.go:278] No container was found matching "kube-proxy"
	I0603 12:10:55.419513   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 12:10:55.419570   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 12:10:55.459581   73662 cri.go:89] found id: ""
	I0603 12:10:55.459609   73662 logs.go:276] 0 containers: []
	W0603 12:10:55.459616   73662 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 12:10:55.459622   73662 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 12:10:55.459686   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 12:10:55.496314   73662 cri.go:89] found id: ""
	I0603 12:10:55.496345   73662 logs.go:276] 0 containers: []
	W0603 12:10:55.496355   73662 logs.go:278] No container was found matching "kindnet"
	I0603 12:10:55.496362   73662 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 12:10:55.496412   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 12:10:55.539728   73662 cri.go:89] found id: ""
	I0603 12:10:55.539756   73662 logs.go:276] 0 containers: []
	W0603 12:10:55.539768   73662 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 12:10:55.539779   73662 logs.go:123] Gathering logs for container status ...
	I0603 12:10:55.539794   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 12:10:55.603474   73662 logs.go:123] Gathering logs for kubelet ...
	I0603 12:10:55.603502   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 12:10:55.668368   73662 logs.go:123] Gathering logs for dmesg ...
	I0603 12:10:55.668405   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 12:10:55.683121   73662 logs.go:123] Gathering logs for describe nodes ...
	I0603 12:10:55.683151   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 12:10:55.751059   73662 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 12:10:55.751096   73662 logs.go:123] Gathering logs for CRI-O ...
	I0603 12:10:55.751113   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 12:10:58.325699   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:10:58.340070   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 12:10:58.340142   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 12:10:58.376205   73662 cri.go:89] found id: ""
	I0603 12:10:58.376240   73662 logs.go:276] 0 containers: []
	W0603 12:10:58.376251   73662 logs.go:278] No container was found matching "kube-apiserver"
	I0603 12:10:58.376258   73662 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 12:10:58.376325   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 12:10:58.409491   73662 cri.go:89] found id: ""
	I0603 12:10:58.409521   73662 logs.go:276] 0 containers: []
	W0603 12:10:58.409533   73662 logs.go:278] No container was found matching "etcd"
	I0603 12:10:58.409540   73662 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 12:10:58.409601   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 12:10:58.442738   73662 cri.go:89] found id: ""
	I0603 12:10:58.442768   73662 logs.go:276] 0 containers: []
	W0603 12:10:58.442779   73662 logs.go:278] No container was found matching "coredns"
	I0603 12:10:58.442787   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 12:10:58.442849   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 12:10:58.478390   73662 cri.go:89] found id: ""
	I0603 12:10:58.478417   73662 logs.go:276] 0 containers: []
	W0603 12:10:58.478425   73662 logs.go:278] No container was found matching "kube-scheduler"
	I0603 12:10:58.478430   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 12:10:58.478477   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 12:10:58.513652   73662 cri.go:89] found id: ""
	I0603 12:10:58.513683   73662 logs.go:276] 0 containers: []
	W0603 12:10:58.513694   73662 logs.go:278] No container was found matching "kube-proxy"
	I0603 12:10:58.513702   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 12:10:58.513762   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 12:10:58.546490   73662 cri.go:89] found id: ""
	I0603 12:10:58.546513   73662 logs.go:276] 0 containers: []
	W0603 12:10:58.546526   73662 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 12:10:58.546532   73662 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 12:10:58.546578   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 12:10:58.585772   73662 cri.go:89] found id: ""
	I0603 12:10:58.585796   73662 logs.go:276] 0 containers: []
	W0603 12:10:58.585803   73662 logs.go:278] No container was found matching "kindnet"
	I0603 12:10:58.585809   73662 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 12:10:58.585852   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 12:10:58.623108   73662 cri.go:89] found id: ""
	I0603 12:10:58.623126   73662 logs.go:276] 0 containers: []
	W0603 12:10:58.623133   73662 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 12:10:58.623140   73662 logs.go:123] Gathering logs for dmesg ...
	I0603 12:10:58.623150   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 12:10:58.636866   73662 logs.go:123] Gathering logs for describe nodes ...
	I0603 12:10:58.636892   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 12:10:58.709496   73662 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 12:10:58.709537   73662 logs.go:123] Gathering logs for CRI-O ...
	I0603 12:10:58.709549   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 12:10:58.785370   73662 logs.go:123] Gathering logs for container status ...
	I0603 12:10:58.785401   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 12:10:58.826456   73662 logs.go:123] Gathering logs for kubelet ...
	I0603 12:10:58.826482   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 12:10:57.581002   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:10:59.582082   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:11:00.114280   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:11:02.114479   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:11:01.668216   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:11:04.165821   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:11:01.379144   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:11:01.396357   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 12:11:01.396423   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 12:11:01.459762   73662 cri.go:89] found id: ""
	I0603 12:11:01.459798   73662 logs.go:276] 0 containers: []
	W0603 12:11:01.459809   73662 logs.go:278] No container was found matching "kube-apiserver"
	I0603 12:11:01.459817   73662 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 12:11:01.459877   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 12:11:01.517986   73662 cri.go:89] found id: ""
	I0603 12:11:01.518019   73662 logs.go:276] 0 containers: []
	W0603 12:11:01.518034   73662 logs.go:278] No container was found matching "etcd"
	I0603 12:11:01.518048   73662 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 12:11:01.518111   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 12:11:01.550571   73662 cri.go:89] found id: ""
	I0603 12:11:01.550599   73662 logs.go:276] 0 containers: []
	W0603 12:11:01.550611   73662 logs.go:278] No container was found matching "coredns"
	I0603 12:11:01.550618   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 12:11:01.550670   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 12:11:01.585185   73662 cri.go:89] found id: ""
	I0603 12:11:01.585210   73662 logs.go:276] 0 containers: []
	W0603 12:11:01.585221   73662 logs.go:278] No container was found matching "kube-scheduler"
	I0603 12:11:01.585230   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 12:11:01.585288   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 12:11:01.629706   73662 cri.go:89] found id: ""
	I0603 12:11:01.629734   73662 logs.go:276] 0 containers: []
	W0603 12:11:01.629744   73662 logs.go:278] No container was found matching "kube-proxy"
	I0603 12:11:01.629751   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 12:11:01.629815   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 12:11:01.667272   73662 cri.go:89] found id: ""
	I0603 12:11:01.667310   73662 logs.go:276] 0 containers: []
	W0603 12:11:01.667321   73662 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 12:11:01.667332   73662 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 12:11:01.667390   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 12:11:01.703379   73662 cri.go:89] found id: ""
	I0603 12:11:01.703409   73662 logs.go:276] 0 containers: []
	W0603 12:11:01.703419   73662 logs.go:278] No container was found matching "kindnet"
	I0603 12:11:01.703426   73662 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 12:11:01.703480   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 12:11:01.737944   73662 cri.go:89] found id: ""
	I0603 12:11:01.737972   73662 logs.go:276] 0 containers: []
	W0603 12:11:01.737979   73662 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 12:11:01.737987   73662 logs.go:123] Gathering logs for kubelet ...
	I0603 12:11:01.737997   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 12:11:01.786485   73662 logs.go:123] Gathering logs for dmesg ...
	I0603 12:11:01.786513   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 12:11:01.799760   73662 logs.go:123] Gathering logs for describe nodes ...
	I0603 12:11:01.799783   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 12:11:01.875617   73662 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 12:11:01.875639   73662 logs.go:123] Gathering logs for CRI-O ...
	I0603 12:11:01.875651   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 12:11:01.963485   73662 logs.go:123] Gathering logs for container status ...
	I0603 12:11:01.963529   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 12:11:04.507299   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:11:04.522138   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 12:11:04.522190   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 12:11:04.558117   73662 cri.go:89] found id: ""
	I0603 12:11:04.558145   73662 logs.go:276] 0 containers: []
	W0603 12:11:04.558155   73662 logs.go:278] No container was found matching "kube-apiserver"
	I0603 12:11:04.558162   73662 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 12:11:04.558222   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 12:11:04.595700   73662 cri.go:89] found id: ""
	I0603 12:11:04.595726   73662 logs.go:276] 0 containers: []
	W0603 12:11:04.595737   73662 logs.go:278] No container was found matching "etcd"
	I0603 12:11:04.595748   73662 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 12:11:04.595806   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 12:11:04.631793   73662 cri.go:89] found id: ""
	I0603 12:11:04.631823   73662 logs.go:276] 0 containers: []
	W0603 12:11:04.631832   73662 logs.go:278] No container was found matching "coredns"
	I0603 12:11:04.631839   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 12:11:04.631897   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 12:11:04.666362   73662 cri.go:89] found id: ""
	I0603 12:11:04.666392   73662 logs.go:276] 0 containers: []
	W0603 12:11:04.666401   73662 logs.go:278] No container was found matching "kube-scheduler"
	I0603 12:11:04.666408   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 12:11:04.666471   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 12:11:04.701446   73662 cri.go:89] found id: ""
	I0603 12:11:04.701476   73662 logs.go:276] 0 containers: []
	W0603 12:11:04.701487   73662 logs.go:278] No container was found matching "kube-proxy"
	I0603 12:11:04.701495   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 12:11:04.701555   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 12:11:04.736290   73662 cri.go:89] found id: ""
	I0603 12:11:04.736311   73662 logs.go:276] 0 containers: []
	W0603 12:11:04.736322   73662 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 12:11:04.736330   73662 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 12:11:04.736389   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 12:11:04.769705   73662 cri.go:89] found id: ""
	I0603 12:11:04.769725   73662 logs.go:276] 0 containers: []
	W0603 12:11:04.769732   73662 logs.go:278] No container was found matching "kindnet"
	I0603 12:11:04.769737   73662 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 12:11:04.769779   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 12:11:04.804875   73662 cri.go:89] found id: ""
	I0603 12:11:04.804898   73662 logs.go:276] 0 containers: []
	W0603 12:11:04.804909   73662 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 12:11:04.804927   73662 logs.go:123] Gathering logs for dmesg ...
	I0603 12:11:04.804941   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 12:11:04.818083   73662 logs.go:123] Gathering logs for describe nodes ...
	I0603 12:11:04.818112   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 12:11:04.890971   73662 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 12:11:04.891002   73662 logs.go:123] Gathering logs for CRI-O ...
	I0603 12:11:04.891017   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 12:11:04.970710   73662 logs.go:123] Gathering logs for container status ...
	I0603 12:11:04.970755   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 12:11:05.012247   73662 logs.go:123] Gathering logs for kubelet ...
	I0603 12:11:05.012282   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 12:11:01.582124   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:11:03.586504   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:11:04.612589   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:11:07.114578   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:11:06.166693   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:11:08.166916   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:11:07.567462   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:11:07.583533   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 12:11:07.583628   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 12:11:07.621078   73662 cri.go:89] found id: ""
	I0603 12:11:07.621102   73662 logs.go:276] 0 containers: []
	W0603 12:11:07.621110   73662 logs.go:278] No container was found matching "kube-apiserver"
	I0603 12:11:07.621119   73662 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 12:11:07.621178   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 12:11:07.656011   73662 cri.go:89] found id: ""
	I0603 12:11:07.656040   73662 logs.go:276] 0 containers: []
	W0603 12:11:07.656049   73662 logs.go:278] No container was found matching "etcd"
	I0603 12:11:07.656056   73662 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 12:11:07.656117   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 12:11:07.694711   73662 cri.go:89] found id: ""
	I0603 12:11:07.694741   73662 logs.go:276] 0 containers: []
	W0603 12:11:07.694751   73662 logs.go:278] No container was found matching "coredns"
	I0603 12:11:07.694759   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 12:11:07.694816   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 12:11:07.731139   73662 cri.go:89] found id: ""
	I0603 12:11:07.731168   73662 logs.go:276] 0 containers: []
	W0603 12:11:07.731178   73662 logs.go:278] No container was found matching "kube-scheduler"
	I0603 12:11:07.731185   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 12:11:07.731242   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 12:11:07.769734   73662 cri.go:89] found id: ""
	I0603 12:11:07.769763   73662 logs.go:276] 0 containers: []
	W0603 12:11:07.769772   73662 logs.go:278] No container was found matching "kube-proxy"
	I0603 12:11:07.769780   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 12:11:07.769838   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 12:11:07.804874   73662 cri.go:89] found id: ""
	I0603 12:11:07.804905   73662 logs.go:276] 0 containers: []
	W0603 12:11:07.804917   73662 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 12:11:07.804925   73662 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 12:11:07.804984   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 12:11:07.843901   73662 cri.go:89] found id: ""
	I0603 12:11:07.843931   73662 logs.go:276] 0 containers: []
	W0603 12:11:07.843941   73662 logs.go:278] No container was found matching "kindnet"
	I0603 12:11:07.843949   73662 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 12:11:07.844001   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 12:11:07.878763   73662 cri.go:89] found id: ""
	I0603 12:11:07.878792   73662 logs.go:276] 0 containers: []
	W0603 12:11:07.878803   73662 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 12:11:07.878814   73662 logs.go:123] Gathering logs for CRI-O ...
	I0603 12:11:07.878829   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 12:11:07.958064   73662 logs.go:123] Gathering logs for container status ...
	I0603 12:11:07.958095   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 12:11:08.000115   73662 logs.go:123] Gathering logs for kubelet ...
	I0603 12:11:08.000144   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 12:11:08.057652   73662 logs.go:123] Gathering logs for dmesg ...
	I0603 12:11:08.057685   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 12:11:08.071731   73662 logs.go:123] Gathering logs for describe nodes ...
	I0603 12:11:08.071759   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 12:11:08.148184   73662 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 12:11:06.080555   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:11:08.080661   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:11:10.081918   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:11:09.613756   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:11:12.112723   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:11:14.114236   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:11:10.167662   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:11:12.666872   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:11:10.649338   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:11:10.662870   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 12:11:10.662945   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 12:11:10.698461   73662 cri.go:89] found id: ""
	I0603 12:11:10.698492   73662 logs.go:276] 0 containers: []
	W0603 12:11:10.698500   73662 logs.go:278] No container was found matching "kube-apiserver"
	I0603 12:11:10.698507   73662 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 12:11:10.698560   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 12:11:10.733955   73662 cri.go:89] found id: ""
	I0603 12:11:10.733987   73662 logs.go:276] 0 containers: []
	W0603 12:11:10.733999   73662 logs.go:278] No container was found matching "etcd"
	I0603 12:11:10.734006   73662 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 12:11:10.734064   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 12:11:10.769578   73662 cri.go:89] found id: ""
	I0603 12:11:10.769605   73662 logs.go:276] 0 containers: []
	W0603 12:11:10.769615   73662 logs.go:278] No container was found matching "coredns"
	I0603 12:11:10.769622   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 12:11:10.769682   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 12:11:10.803353   73662 cri.go:89] found id: ""
	I0603 12:11:10.803382   73662 logs.go:276] 0 containers: []
	W0603 12:11:10.803393   73662 logs.go:278] No container was found matching "kube-scheduler"
	I0603 12:11:10.803401   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 12:11:10.803459   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 12:11:10.839791   73662 cri.go:89] found id: ""
	I0603 12:11:10.839819   73662 logs.go:276] 0 containers: []
	W0603 12:11:10.839828   73662 logs.go:278] No container was found matching "kube-proxy"
	I0603 12:11:10.839835   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 12:11:10.839894   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 12:11:10.878216   73662 cri.go:89] found id: ""
	I0603 12:11:10.878249   73662 logs.go:276] 0 containers: []
	W0603 12:11:10.878259   73662 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 12:11:10.878265   73662 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 12:11:10.878333   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 12:11:10.912606   73662 cri.go:89] found id: ""
	I0603 12:11:10.912637   73662 logs.go:276] 0 containers: []
	W0603 12:11:10.912645   73662 logs.go:278] No container was found matching "kindnet"
	I0603 12:11:10.912650   73662 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 12:11:10.912709   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 12:11:10.946669   73662 cri.go:89] found id: ""
	I0603 12:11:10.946699   73662 logs.go:276] 0 containers: []
	W0603 12:11:10.946708   73662 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 12:11:10.946718   73662 logs.go:123] Gathering logs for kubelet ...
	I0603 12:11:10.946733   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 12:11:10.996044   73662 logs.go:123] Gathering logs for dmesg ...
	I0603 12:11:10.996077   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 12:11:11.009522   73662 logs.go:123] Gathering logs for describe nodes ...
	I0603 12:11:11.009573   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 12:11:11.081623   73662 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 12:11:11.081642   73662 logs.go:123] Gathering logs for CRI-O ...
	I0603 12:11:11.081652   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 12:11:11.162795   73662 logs.go:123] Gathering logs for container status ...
	I0603 12:11:11.162826   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 12:11:13.704492   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:11:13.718870   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 12:11:13.718939   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 12:11:13.757818   73662 cri.go:89] found id: ""
	I0603 12:11:13.757842   73662 logs.go:276] 0 containers: []
	W0603 12:11:13.757850   73662 logs.go:278] No container was found matching "kube-apiserver"
	I0603 12:11:13.757859   73662 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 12:11:13.757904   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 12:11:13.791959   73662 cri.go:89] found id: ""
	I0603 12:11:13.791989   73662 logs.go:276] 0 containers: []
	W0603 12:11:13.792003   73662 logs.go:278] No container was found matching "etcd"
	I0603 12:11:13.792010   73662 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 12:11:13.792072   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 12:11:13.827443   73662 cri.go:89] found id: ""
	I0603 12:11:13.827471   73662 logs.go:276] 0 containers: []
	W0603 12:11:13.827478   73662 logs.go:278] No container was found matching "coredns"
	I0603 12:11:13.827484   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 12:11:13.827538   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 12:11:13.862237   73662 cri.go:89] found id: ""
	I0603 12:11:13.862267   73662 logs.go:276] 0 containers: []
	W0603 12:11:13.862277   73662 logs.go:278] No container was found matching "kube-scheduler"
	I0603 12:11:13.862284   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 12:11:13.862375   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 12:11:13.898873   73662 cri.go:89] found id: ""
	I0603 12:11:13.898906   73662 logs.go:276] 0 containers: []
	W0603 12:11:13.898917   73662 logs.go:278] No container was found matching "kube-proxy"
	I0603 12:11:13.898924   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 12:11:13.898981   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 12:11:13.932870   73662 cri.go:89] found id: ""
	I0603 12:11:13.932899   73662 logs.go:276] 0 containers: []
	W0603 12:11:13.932908   73662 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 12:11:13.932913   73662 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 12:11:13.932960   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 12:11:13.968575   73662 cri.go:89] found id: ""
	I0603 12:11:13.968597   73662 logs.go:276] 0 containers: []
	W0603 12:11:13.968605   73662 logs.go:278] No container was found matching "kindnet"
	I0603 12:11:13.968610   73662 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 12:11:13.968663   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 12:11:14.007252   73662 cri.go:89] found id: ""
	I0603 12:11:14.007281   73662 logs.go:276] 0 containers: []
	W0603 12:11:14.007291   73662 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 12:11:14.007302   73662 logs.go:123] Gathering logs for describe nodes ...
	I0603 12:11:14.007317   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 12:11:14.080572   73662 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 12:11:14.080595   73662 logs.go:123] Gathering logs for CRI-O ...
	I0603 12:11:14.080607   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 12:11:14.171851   73662 logs.go:123] Gathering logs for container status ...
	I0603 12:11:14.171886   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 12:11:14.212697   73662 logs.go:123] Gathering logs for kubelet ...
	I0603 12:11:14.212726   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 12:11:14.264925   73662 logs.go:123] Gathering logs for dmesg ...
	I0603 12:11:14.264958   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 12:11:12.580430   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:11:14.581407   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:11:16.615592   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:11:19.111956   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:11:15.166724   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:11:17.667851   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:11:16.780783   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:11:16.795029   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 12:11:16.795127   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 12:11:16.833178   73662 cri.go:89] found id: ""
	I0603 12:11:16.833208   73662 logs.go:276] 0 containers: []
	W0603 12:11:16.833218   73662 logs.go:278] No container was found matching "kube-apiserver"
	I0603 12:11:16.833226   73662 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 12:11:16.833287   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 12:11:16.869318   73662 cri.go:89] found id: ""
	I0603 12:11:16.869349   73662 logs.go:276] 0 containers: []
	W0603 12:11:16.869359   73662 logs.go:278] No container was found matching "etcd"
	I0603 12:11:16.869366   73662 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 12:11:16.869429   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 12:11:16.902810   73662 cri.go:89] found id: ""
	I0603 12:11:16.902836   73662 logs.go:276] 0 containers: []
	W0603 12:11:16.902843   73662 logs.go:278] No container was found matching "coredns"
	I0603 12:11:16.902849   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 12:11:16.902893   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 12:11:16.936404   73662 cri.go:89] found id: ""
	I0603 12:11:16.936432   73662 logs.go:276] 0 containers: []
	W0603 12:11:16.936442   73662 logs.go:278] No container was found matching "kube-scheduler"
	I0603 12:11:16.936449   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 12:11:16.936505   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 12:11:16.971056   73662 cri.go:89] found id: ""
	I0603 12:11:16.971083   73662 logs.go:276] 0 containers: []
	W0603 12:11:16.971092   73662 logs.go:278] No container was found matching "kube-proxy"
	I0603 12:11:16.971097   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 12:11:16.971147   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 12:11:17.005389   73662 cri.go:89] found id: ""
	I0603 12:11:17.005416   73662 logs.go:276] 0 containers: []
	W0603 12:11:17.005427   73662 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 12:11:17.005435   73662 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 12:11:17.005491   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 12:11:17.047093   73662 cri.go:89] found id: ""
	I0603 12:11:17.047118   73662 logs.go:276] 0 containers: []
	W0603 12:11:17.047126   73662 logs.go:278] No container was found matching "kindnet"
	I0603 12:11:17.047131   73662 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 12:11:17.047187   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 12:11:17.093020   73662 cri.go:89] found id: ""
	I0603 12:11:17.093049   73662 logs.go:276] 0 containers: []
	W0603 12:11:17.093057   73662 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 12:11:17.093068   73662 logs.go:123] Gathering logs for CRI-O ...
	I0603 12:11:17.093081   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 12:11:17.177970   73662 logs.go:123] Gathering logs for container status ...
	I0603 12:11:17.178001   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 12:11:17.219530   73662 logs.go:123] Gathering logs for kubelet ...
	I0603 12:11:17.219563   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 12:11:17.272776   73662 logs.go:123] Gathering logs for dmesg ...
	I0603 12:11:17.272808   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 12:11:17.287573   73662 logs.go:123] Gathering logs for describe nodes ...
	I0603 12:11:17.287610   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 12:11:17.361020   73662 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
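(Note: the recurring "connection to the server localhost:8443 was refused" in these describe-nodes attempts is consistent with no kube-apiserver container running yet. A quick manual check of the same endpoint, where the kubectl invocation is verbatim from the log and the curl probe is only an illustrative addition, might look like:)

	# fails with "connection refused" while the apiserver static pod is absent
	sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig
	curl -sk https://localhost:8443/healthz || echo "apiserver not reachable yet"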
	I0603 12:11:19.861599   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:11:19.874988   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 12:11:19.875075   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 12:11:19.910641   73662 cri.go:89] found id: ""
	I0603 12:11:19.910664   73662 logs.go:276] 0 containers: []
	W0603 12:11:19.910672   73662 logs.go:278] No container was found matching "kube-apiserver"
	I0603 12:11:19.910678   73662 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 12:11:19.910738   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 12:11:19.947432   73662 cri.go:89] found id: ""
	I0603 12:11:19.947457   73662 logs.go:276] 0 containers: []
	W0603 12:11:19.947465   73662 logs.go:278] No container was found matching "etcd"
	I0603 12:11:19.947475   73662 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 12:11:19.947528   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 12:11:19.986254   73662 cri.go:89] found id: ""
	I0603 12:11:19.986284   73662 logs.go:276] 0 containers: []
	W0603 12:11:19.986296   73662 logs.go:278] No container was found matching "coredns"
	I0603 12:11:19.986303   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 12:11:19.986370   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 12:11:20.022447   73662 cri.go:89] found id: ""
	I0603 12:11:20.022477   73662 logs.go:276] 0 containers: []
	W0603 12:11:20.022488   73662 logs.go:278] No container was found matching "kube-scheduler"
	I0603 12:11:20.022496   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 12:11:20.022555   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 12:11:20.056731   73662 cri.go:89] found id: ""
	I0603 12:11:20.056755   73662 logs.go:276] 0 containers: []
	W0603 12:11:20.056763   73662 logs.go:278] No container was found matching "kube-proxy"
	I0603 12:11:20.056769   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 12:11:20.056819   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 12:11:17.081290   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:11:19.581301   73179 pod_ready.go:102] pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace has status "Ready":"False"
	I0603 12:11:21.113769   73294 pod_ready.go:102] pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace has status "Ready":"False"
	I0603 12:11:23.106545   73294 pod_ready.go:81] duration metric: took 4m0.000411778s for pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace to be "Ready" ...
	E0603 12:11:23.106575   73294 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-569cc877fc-tnhbj" in "kube-system" namespace to be "Ready" (will not retry!)
	I0603 12:11:23.106597   73294 pod_ready.go:38] duration metric: took 4m5.898372288s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0603 12:11:23.106627   73294 kubeadm.go:591] duration metric: took 4m13.660386139s to restartPrimaryControlPlane
	W0603 12:11:23.106692   73294 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0603 12:11:23.106750   73294 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0603 12:11:20.168291   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:11:22.667983   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:11:24.668130   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:11:20.095511   73662 cri.go:89] found id: ""
	I0603 12:11:20.095537   73662 logs.go:276] 0 containers: []
	W0603 12:11:20.095547   73662 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 12:11:20.095552   73662 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 12:11:20.095595   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 12:11:20.130562   73662 cri.go:89] found id: ""
	I0603 12:11:20.130581   73662 logs.go:276] 0 containers: []
	W0603 12:11:20.130589   73662 logs.go:278] No container was found matching "kindnet"
	I0603 12:11:20.130594   73662 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 12:11:20.130648   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 12:11:20.165231   73662 cri.go:89] found id: ""
	I0603 12:11:20.165257   73662 logs.go:276] 0 containers: []
	W0603 12:11:20.165267   73662 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 12:11:20.165276   73662 logs.go:123] Gathering logs for kubelet ...
	I0603 12:11:20.165290   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 12:11:20.221790   73662 logs.go:123] Gathering logs for dmesg ...
	I0603 12:11:20.221826   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 12:11:20.237415   73662 logs.go:123] Gathering logs for describe nodes ...
	I0603 12:11:20.237440   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 12:11:20.310615   73662 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 12:11:20.310641   73662 logs.go:123] Gathering logs for CRI-O ...
	I0603 12:11:20.310657   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 12:11:20.385667   73662 logs.go:123] Gathering logs for container status ...
	I0603 12:11:20.385701   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
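(Note: each "Gathering logs for ..." pass collects the same node-side diagnostics; to pull them by hand on the VM, the commands, copied from the Run: lines above, are roughly:)

	sudo journalctl -u kubelet -n 400
	sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
	sudo journalctl -u crio -n 400
	sudo crictl ps -a || sudo docker ps -a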
	I0603 12:11:22.925911   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:11:22.938958   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 12:11:22.939047   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 12:11:22.981898   73662 cri.go:89] found id: ""
	I0603 12:11:22.981928   73662 logs.go:276] 0 containers: []
	W0603 12:11:22.981939   73662 logs.go:278] No container was found matching "kube-apiserver"
	I0603 12:11:22.981954   73662 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 12:11:22.982026   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 12:11:23.025590   73662 cri.go:89] found id: ""
	I0603 12:11:23.025624   73662 logs.go:276] 0 containers: []
	W0603 12:11:23.025632   73662 logs.go:278] No container was found matching "etcd"
	I0603 12:11:23.025638   73662 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 12:11:23.025691   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 12:11:23.072938   73662 cri.go:89] found id: ""
	I0603 12:11:23.072968   73662 logs.go:276] 0 containers: []
	W0603 12:11:23.072980   73662 logs.go:278] No container was found matching "coredns"
	I0603 12:11:23.072988   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 12:11:23.073057   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 12:11:23.114546   73662 cri.go:89] found id: ""
	I0603 12:11:23.114573   73662 logs.go:276] 0 containers: []
	W0603 12:11:23.114582   73662 logs.go:278] No container was found matching "kube-scheduler"
	I0603 12:11:23.114589   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 12:11:23.114654   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 12:11:23.152203   73662 cri.go:89] found id: ""
	I0603 12:11:23.152229   73662 logs.go:276] 0 containers: []
	W0603 12:11:23.152236   73662 logs.go:278] No container was found matching "kube-proxy"
	I0603 12:11:23.152242   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 12:11:23.152289   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 12:11:23.204179   73662 cri.go:89] found id: ""
	I0603 12:11:23.204228   73662 logs.go:276] 0 containers: []
	W0603 12:11:23.204240   73662 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 12:11:23.204247   73662 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 12:11:23.204308   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 12:11:23.244217   73662 cri.go:89] found id: ""
	I0603 12:11:23.244246   73662 logs.go:276] 0 containers: []
	W0603 12:11:23.244256   73662 logs.go:278] No container was found matching "kindnet"
	I0603 12:11:23.244264   73662 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 12:11:23.244326   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 12:11:23.286094   73662 cri.go:89] found id: ""
	I0603 12:11:23.286173   73662 logs.go:276] 0 containers: []
	W0603 12:11:23.286190   73662 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 12:11:23.286201   73662 logs.go:123] Gathering logs for kubelet ...
	I0603 12:11:23.286215   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 12:11:23.357802   73662 logs.go:123] Gathering logs for dmesg ...
	I0603 12:11:23.357850   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 12:11:23.376808   73662 logs.go:123] Gathering logs for describe nodes ...
	I0603 12:11:23.376839   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 12:11:23.470658   73662 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 12:11:23.470691   73662 logs.go:123] Gathering logs for CRI-O ...
	I0603 12:11:23.470705   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 12:11:23.584192   73662 logs.go:123] Gathering logs for container status ...
	I0603 12:11:23.584241   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 12:11:22.075519   73179 pod_ready.go:81] duration metric: took 4m0.000796038s for pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace to be "Ready" ...
	E0603 12:11:22.075561   73179 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-569cc877fc-jgjzt" in "kube-system" namespace to be "Ready" (will not retry!)
	I0603 12:11:22.075598   73179 pod_ready.go:38] duration metric: took 4m12.795532428s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0603 12:11:22.075626   73179 kubeadm.go:591] duration metric: took 4m22.69078868s to restartPrimaryControlPlane
	W0603 12:11:22.075677   73179 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0603 12:11:22.075720   73179 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0603 12:11:27.170198   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:11:29.667670   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
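(Note: the interleaved pod_ready lines are a 4m0s readiness poll on the metrics-server pod; the timeout is visible where the 73294 and 73179 runs give up above. An approximate kubectl equivalent, using the pod name and namespace from the log, would be:)

	kubectl -n kube-system wait pod/metrics-server-569cc877fc-8jrnd --for=condition=Ready --timeout=4m0s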
	I0603 12:11:26.132511   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:11:26.150549   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 12:11:26.150619   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 12:11:26.196791   73662 cri.go:89] found id: ""
	I0603 12:11:26.196817   73662 logs.go:276] 0 containers: []
	W0603 12:11:26.196827   73662 logs.go:278] No container was found matching "kube-apiserver"
	I0603 12:11:26.196834   73662 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 12:11:26.196912   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 12:11:26.233584   73662 cri.go:89] found id: ""
	I0603 12:11:26.233614   73662 logs.go:276] 0 containers: []
	W0603 12:11:26.233624   73662 logs.go:278] No container was found matching "etcd"
	I0603 12:11:26.233631   73662 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 12:11:26.233692   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 12:11:26.272648   73662 cri.go:89] found id: ""
	I0603 12:11:26.272677   73662 logs.go:276] 0 containers: []
	W0603 12:11:26.272688   73662 logs.go:278] No container was found matching "coredns"
	I0603 12:11:26.272696   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 12:11:26.272758   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 12:11:26.313775   73662 cri.go:89] found id: ""
	I0603 12:11:26.313806   73662 logs.go:276] 0 containers: []
	W0603 12:11:26.313817   73662 logs.go:278] No container was found matching "kube-scheduler"
	I0603 12:11:26.313824   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 12:11:26.313883   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 12:11:26.355591   73662 cri.go:89] found id: ""
	I0603 12:11:26.355626   73662 logs.go:276] 0 containers: []
	W0603 12:11:26.355638   73662 logs.go:278] No container was found matching "kube-proxy"
	I0603 12:11:26.355646   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 12:11:26.355711   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 12:11:26.406265   73662 cri.go:89] found id: ""
	I0603 12:11:26.406299   73662 logs.go:276] 0 containers: []
	W0603 12:11:26.406306   73662 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 12:11:26.406318   73662 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 12:11:26.406378   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 12:11:26.443279   73662 cri.go:89] found id: ""
	I0603 12:11:26.443321   73662 logs.go:276] 0 containers: []
	W0603 12:11:26.443333   73662 logs.go:278] No container was found matching "kindnet"
	I0603 12:11:26.443340   73662 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 12:11:26.443403   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 12:11:26.479300   73662 cri.go:89] found id: ""
	I0603 12:11:26.479334   73662 logs.go:276] 0 containers: []
	W0603 12:11:26.479346   73662 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 12:11:26.479358   73662 logs.go:123] Gathering logs for kubelet ...
	I0603 12:11:26.479371   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 12:11:26.531360   73662 logs.go:123] Gathering logs for dmesg ...
	I0603 12:11:26.531394   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 12:11:26.547939   73662 logs.go:123] Gathering logs for describe nodes ...
	I0603 12:11:26.547973   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 12:11:26.625987   73662 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 12:11:26.626016   73662 logs.go:123] Gathering logs for CRI-O ...
	I0603 12:11:26.626032   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 12:11:26.714014   73662 logs.go:123] Gathering logs for container status ...
	I0603 12:11:26.714054   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 12:11:29.267203   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:11:29.281448   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 12:11:29.281522   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 12:11:29.315484   73662 cri.go:89] found id: ""
	I0603 12:11:29.315512   73662 logs.go:276] 0 containers: []
	W0603 12:11:29.315519   73662 logs.go:278] No container was found matching "kube-apiserver"
	I0603 12:11:29.315530   73662 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 12:11:29.315586   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 12:11:29.357054   73662 cri.go:89] found id: ""
	I0603 12:11:29.357084   73662 logs.go:276] 0 containers: []
	W0603 12:11:29.357095   73662 logs.go:278] No container was found matching "etcd"
	I0603 12:11:29.357103   73662 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 12:11:29.357163   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 12:11:29.402434   73662 cri.go:89] found id: ""
	I0603 12:11:29.402461   73662 logs.go:276] 0 containers: []
	W0603 12:11:29.402471   73662 logs.go:278] No container was found matching "coredns"
	I0603 12:11:29.402478   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 12:11:29.402520   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 12:11:29.437822   73662 cri.go:89] found id: ""
	I0603 12:11:29.437854   73662 logs.go:276] 0 containers: []
	W0603 12:11:29.437865   73662 logs.go:278] No container was found matching "kube-scheduler"
	I0603 12:11:29.437871   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 12:11:29.437917   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 12:11:29.474637   73662 cri.go:89] found id: ""
	I0603 12:11:29.474658   73662 logs.go:276] 0 containers: []
	W0603 12:11:29.474665   73662 logs.go:278] No container was found matching "kube-proxy"
	I0603 12:11:29.474671   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 12:11:29.474725   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 12:11:29.508547   73662 cri.go:89] found id: ""
	I0603 12:11:29.508573   73662 logs.go:276] 0 containers: []
	W0603 12:11:29.508580   73662 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 12:11:29.508586   73662 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 12:11:29.508630   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 12:11:29.544524   73662 cri.go:89] found id: ""
	I0603 12:11:29.544553   73662 logs.go:276] 0 containers: []
	W0603 12:11:29.544561   73662 logs.go:278] No container was found matching "kindnet"
	I0603 12:11:29.544567   73662 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 12:11:29.544621   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 12:11:29.582549   73662 cri.go:89] found id: ""
	I0603 12:11:29.582582   73662 logs.go:276] 0 containers: []
	W0603 12:11:29.582593   73662 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 12:11:29.582604   73662 logs.go:123] Gathering logs for kubelet ...
	I0603 12:11:29.582618   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 12:11:29.641931   73662 logs.go:123] Gathering logs for dmesg ...
	I0603 12:11:29.641977   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 12:11:29.664918   73662 logs.go:123] Gathering logs for describe nodes ...
	I0603 12:11:29.664948   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 12:11:29.740591   73662 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 12:11:29.740615   73662 logs.go:123] Gathering logs for CRI-O ...
	I0603 12:11:29.740629   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 12:11:29.814456   73662 logs.go:123] Gathering logs for container status ...
	I0603 12:11:29.814489   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 12:11:32.166042   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:11:34.166283   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:11:32.359122   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:11:32.373552   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 12:11:32.373623   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 12:11:32.408431   73662 cri.go:89] found id: ""
	I0603 12:11:32.408461   73662 logs.go:276] 0 containers: []
	W0603 12:11:32.408471   73662 logs.go:278] No container was found matching "kube-apiserver"
	I0603 12:11:32.408479   73662 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 12:11:32.408533   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 12:11:32.444242   73662 cri.go:89] found id: ""
	I0603 12:11:32.444266   73662 logs.go:276] 0 containers: []
	W0603 12:11:32.444273   73662 logs.go:278] No container was found matching "etcd"
	I0603 12:11:32.444279   73662 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 12:11:32.444323   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 12:11:32.477205   73662 cri.go:89] found id: ""
	I0603 12:11:32.477230   73662 logs.go:276] 0 containers: []
	W0603 12:11:32.477237   73662 logs.go:278] No container was found matching "coredns"
	I0603 12:11:32.477243   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 12:11:32.477298   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 12:11:32.512434   73662 cri.go:89] found id: ""
	I0603 12:11:32.512482   73662 logs.go:276] 0 containers: []
	W0603 12:11:32.512494   73662 logs.go:278] No container was found matching "kube-scheduler"
	I0603 12:11:32.512501   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 12:11:32.512559   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 12:11:32.545619   73662 cri.go:89] found id: ""
	I0603 12:11:32.545645   73662 logs.go:276] 0 containers: []
	W0603 12:11:32.545655   73662 logs.go:278] No container was found matching "kube-proxy"
	I0603 12:11:32.545662   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 12:11:32.545715   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 12:11:32.579093   73662 cri.go:89] found id: ""
	I0603 12:11:32.579121   73662 logs.go:276] 0 containers: []
	W0603 12:11:32.579131   73662 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 12:11:32.579138   73662 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 12:11:32.579196   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 12:11:32.616826   73662 cri.go:89] found id: ""
	I0603 12:11:32.616851   73662 logs.go:276] 0 containers: []
	W0603 12:11:32.616858   73662 logs.go:278] No container was found matching "kindnet"
	I0603 12:11:32.616864   73662 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 12:11:32.616917   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 12:11:32.660083   73662 cri.go:89] found id: ""
	I0603 12:11:32.660113   73662 logs.go:276] 0 containers: []
	W0603 12:11:32.660124   73662 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 12:11:32.660132   73662 logs.go:123] Gathering logs for container status ...
	I0603 12:11:32.660143   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 12:11:32.697974   73662 logs.go:123] Gathering logs for kubelet ...
	I0603 12:11:32.698002   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 12:11:32.748797   73662 logs.go:123] Gathering logs for dmesg ...
	I0603 12:11:32.748835   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 12:11:32.762517   73662 logs.go:123] Gathering logs for describe nodes ...
	I0603 12:11:32.762580   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 12:11:32.838358   73662 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 12:11:32.838383   73662 logs.go:123] Gathering logs for CRI-O ...
	I0603 12:11:32.838397   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 12:11:35.419197   73662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:11:35.432481   73662 kubeadm.go:591] duration metric: took 4m4.317900598s to restartPrimaryControlPlane
	W0603 12:11:35.432560   73662 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0603 12:11:35.432591   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0603 12:11:35.895615   73662 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0603 12:11:35.910673   73662 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0603 12:11:35.921333   73662 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0603 12:11:35.931736   73662 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0603 12:11:35.931750   73662 kubeadm.go:156] found existing configuration files:
	
	I0603 12:11:35.931783   73662 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0603 12:11:35.940883   73662 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0603 12:11:35.940924   73662 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0603 12:11:35.950780   73662 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0603 12:11:35.959947   73662 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0603 12:11:35.959999   73662 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0603 12:11:35.969824   73662 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0603 12:11:35.979347   73662 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0603 12:11:35.979393   73662 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0603 12:11:35.988704   73662 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0603 12:11:35.997726   73662 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0603 12:11:35.997785   73662 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
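(Note: the ls/grep/rm sequence above is the stale-kubeconfig check: each /etc/kubernetes/*.conf is kept only if it already points at the expected control-plane endpoint and is otherwise removed before kubeadm init. A condensed sketch of that logic, with the endpoint and file names as they appear in the log, is:)

	endpoint="https://control-plane.minikube.internal:8443"
	for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
	  # a missing file makes grep exit non-zero, so it is simply removed again
	  sudo grep -q "$endpoint" "/etc/kubernetes/$f" || sudo rm -f "/etc/kubernetes/$f"
	done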
	I0603 12:11:36.007165   73662 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0603 12:11:36.080667   73662 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0603 12:11:36.080794   73662 kubeadm.go:309] [preflight] Running pre-flight checks
	I0603 12:11:36.220642   73662 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0603 12:11:36.220814   73662 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0603 12:11:36.220967   73662 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0603 12:11:36.421569   73662 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0603 12:11:36.423141   73662 out.go:204]   - Generating certificates and keys ...
	I0603 12:11:36.423237   73662 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0603 12:11:36.423328   73662 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0603 12:11:36.423428   73662 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0603 12:11:36.423535   73662 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0603 12:11:36.423630   73662 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0603 12:11:36.423713   73662 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0603 12:11:36.423795   73662 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0603 12:11:36.423880   73662 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0603 12:11:36.423985   73662 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0603 12:11:36.424079   73662 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0603 12:11:36.424140   73662 kubeadm.go:309] [certs] Using the existing "sa" key
	I0603 12:11:36.424218   73662 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0603 12:11:36.576702   73662 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0603 12:11:36.704239   73662 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0603 12:11:36.981759   73662 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0603 12:11:37.031992   73662 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0603 12:11:37.052994   73662 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0603 12:11:37.054403   73662 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0603 12:11:37.054471   73662 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0603 12:11:37.196201   73662 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0603 12:11:36.168314   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:11:38.667358   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:11:37.198112   73662 out.go:204]   - Booting up control plane ...
	I0603 12:11:37.198252   73662 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0603 12:11:37.202872   73662 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0603 12:11:37.203965   73662 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0603 12:11:37.204734   73662 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0603 12:11:37.207204   73662 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
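(Note: at this point kubeadm init has written the static Pod manifests and waits up to 4m0s for the kubelet to start them. An illustrative way to watch that progress on the node, with paths and component names taken from the log above, is:)

	ls /etc/kubernetes/manifests/
	sudo crictl ps -a --name kube-apiserver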
	I0603 12:11:41.166509   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:11:43.168695   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:11:45.667381   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:11:48.167362   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:11:50.167570   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:11:52.668348   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:11:54.671004   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:11:54.178477   73179 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (32.102731378s)
	I0603 12:11:54.178554   73179 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0603 12:11:54.194599   73179 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0603 12:11:54.204770   73179 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0603 12:11:54.215290   73179 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0603 12:11:54.215315   73179 kubeadm.go:156] found existing configuration files:
	
	I0603 12:11:54.215355   73179 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0603 12:11:54.224420   73179 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0603 12:11:54.224478   73179 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0603 12:11:54.233706   73179 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0603 12:11:54.242358   73179 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0603 12:11:54.242399   73179 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0603 12:11:54.251531   73179 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0603 12:11:54.260911   73179 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0603 12:11:54.260950   73179 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0603 12:11:54.270219   73179 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0603 12:11:54.279141   73179 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0603 12:11:54.279194   73179 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0603 12:11:54.288343   73179 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0603 12:11:54.477591   73179 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0603 12:11:55.081260   73294 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (31.974475191s)
	I0603 12:11:55.081350   73294 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0603 12:11:55.098545   73294 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0603 12:11:55.109266   73294 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0603 12:11:55.118891   73294 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0603 12:11:55.118917   73294 kubeadm.go:156] found existing configuration files:
	
	I0603 12:11:55.118964   73294 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0603 12:11:55.128412   73294 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0603 12:11:55.128466   73294 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0603 12:11:55.137942   73294 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0603 12:11:55.146937   73294 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0603 12:11:55.146986   73294 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0603 12:11:55.156388   73294 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0603 12:11:55.167156   73294 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0603 12:11:55.167206   73294 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0603 12:11:55.176591   73294 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0603 12:11:55.185483   73294 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0603 12:11:55.185530   73294 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0603 12:11:55.195271   73294 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0603 12:11:55.251253   73294 kubeadm.go:309] [init] Using Kubernetes version: v1.30.1
	I0603 12:11:55.251344   73294 kubeadm.go:309] [preflight] Running pre-flight checks
	I0603 12:11:55.396358   73294 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0603 12:11:55.396519   73294 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0603 12:11:55.396681   73294 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0603 12:11:55.603493   73294 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0603 12:11:55.605797   73294 out.go:204]   - Generating certificates and keys ...
	I0603 12:11:55.605901   73294 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0603 12:11:55.605995   73294 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0603 12:11:55.606143   73294 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0603 12:11:55.606253   73294 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0603 12:11:55.606357   73294 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0603 12:11:55.606440   73294 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0603 12:11:55.606539   73294 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0603 12:11:55.606623   73294 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0603 12:11:55.606738   73294 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0603 12:11:55.606844   73294 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0603 12:11:55.606907   73294 kubeadm.go:309] [certs] Using the existing "sa" key
	I0603 12:11:55.606990   73294 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0603 12:11:55.749342   73294 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0603 12:11:55.918787   73294 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0603 12:11:56.058383   73294 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0603 12:11:56.306167   73294 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0603 12:11:56.365029   73294 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0603 12:11:56.365722   73294 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0603 12:11:56.368197   73294 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0603 12:11:56.369833   73294 out.go:204]   - Booting up control plane ...
	I0603 12:11:56.369950   73294 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0603 12:11:56.370081   73294 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0603 12:11:56.370175   73294 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0603 12:11:56.388879   73294 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0603 12:11:56.391420   73294 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0603 12:11:56.391490   73294 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0603 12:11:56.528206   73294 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0603 12:11:56.528341   73294 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0603 12:11:57.029861   73294 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 501.458956ms
	I0603 12:11:57.029944   73294 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0603 12:11:57.165921   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:11:59.168287   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:12:02.031156   73294 kubeadm.go:309] [api-check] The API server is healthy after 5.001477077s
	I0603 12:12:02.053326   73294 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0603 12:12:02.086541   73294 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0603 12:12:02.127446   73294 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0603 12:12:02.127715   73294 kubeadm.go:309] [mark-control-plane] Marking the node default-k8s-diff-port-196710 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0603 12:12:02.138683   73294 kubeadm.go:309] [bootstrap-token] Using token: 20dsgk.zbmo4be5tg5i1a9b
	I0603 12:12:02.140047   73294 out.go:204]   - Configuring RBAC rules ...
	I0603 12:12:02.140170   73294 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0603 12:12:02.149933   73294 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0603 12:12:02.160136   73294 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0603 12:12:02.168638   73294 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0603 12:12:02.173242   73294 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0603 12:12:02.177001   73294 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0603 12:12:02.438936   73294 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0603 12:12:02.892616   73294 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0603 12:12:03.438400   73294 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0603 12:12:03.440008   73294 kubeadm.go:309] 
	I0603 12:12:03.440093   73294 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0603 12:12:03.440101   73294 kubeadm.go:309] 
	I0603 12:12:03.440183   73294 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0603 12:12:03.440191   73294 kubeadm.go:309] 
	I0603 12:12:03.440217   73294 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0603 12:12:03.440308   73294 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0603 12:12:03.440416   73294 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0603 12:12:03.440438   73294 kubeadm.go:309] 
	I0603 12:12:03.440537   73294 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0603 12:12:03.440559   73294 kubeadm.go:309] 
	I0603 12:12:03.440649   73294 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0603 12:12:03.440659   73294 kubeadm.go:309] 
	I0603 12:12:03.440739   73294 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0603 12:12:03.440813   73294 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0603 12:12:03.440884   73294 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0603 12:12:03.440891   73294 kubeadm.go:309] 
	I0603 12:12:03.440959   73294 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0603 12:12:03.441059   73294 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0603 12:12:03.441077   73294 kubeadm.go:309] 
	I0603 12:12:03.441195   73294 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8444 --token 20dsgk.zbmo4be5tg5i1a9b \
	I0603 12:12:03.441383   73294 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:efe6cbdd58a590fb6c3b56a05a1648145bfb13b8a8cd2383ea34b710fa987e0b \
	I0603 12:12:03.441413   73294 kubeadm.go:309] 	--control-plane 
	I0603 12:12:03.441422   73294 kubeadm.go:309] 
	I0603 12:12:03.441561   73294 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0603 12:12:03.441580   73294 kubeadm.go:309] 
	I0603 12:12:03.441699   73294 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8444 --token 20dsgk.zbmo4be5tg5i1a9b \
	I0603 12:12:03.441848   73294 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:efe6cbdd58a590fb6c3b56a05a1648145bfb13b8a8cd2383ea34b710fa987e0b 
	I0603 12:12:03.442240   73294 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0603 12:12:03.442374   73294 cni.go:84] Creating CNI manager for ""
	I0603 12:12:03.442392   73294 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0603 12:12:03.444302   73294 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0603 12:12:03.644388   73179 kubeadm.go:309] [init] Using Kubernetes version: v1.30.1
	I0603 12:12:03.644489   73179 kubeadm.go:309] [preflight] Running pre-flight checks
	I0603 12:12:03.644596   73179 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0603 12:12:03.644742   73179 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0603 12:12:03.644874   73179 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0603 12:12:03.644953   73179 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0603 12:12:03.646392   73179 out.go:204]   - Generating certificates and keys ...
	I0603 12:12:03.646520   73179 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0603 12:12:03.646605   73179 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0603 12:12:03.646715   73179 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0603 12:12:03.646801   73179 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0603 12:12:03.646896   73179 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0603 12:12:03.646980   73179 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0603 12:12:03.647082   73179 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0603 12:12:03.647168   73179 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0603 12:12:03.647266   73179 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0603 12:12:03.647383   73179 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0603 12:12:03.647448   73179 kubeadm.go:309] [certs] Using the existing "sa" key
	I0603 12:12:03.647527   73179 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0603 12:12:03.647596   73179 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0603 12:12:03.647678   73179 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0603 12:12:03.647753   73179 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0603 12:12:03.647850   73179 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0603 12:12:03.647939   73179 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0603 12:12:03.648064   73179 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0603 12:12:03.648163   73179 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0603 12:12:03.649552   73179 out.go:204]   - Booting up control plane ...
	I0603 12:12:03.649660   73179 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0603 12:12:03.649772   73179 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0603 12:12:03.649884   73179 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0603 12:12:03.650017   73179 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0603 12:12:03.650139   73179 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0603 12:12:03.650211   73179 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0603 12:12:03.650408   73179 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0603 12:12:03.650515   73179 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0603 12:12:03.650591   73179 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 1.002065022s
	I0603 12:12:03.650698   73179 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0603 12:12:03.650789   73179 kubeadm.go:309] [api-check] The API server is healthy after 5.002076943s
	I0603 12:12:03.650915   73179 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0603 12:12:03.651093   73179 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0603 12:12:03.651168   73179 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0603 12:12:03.651414   73179 kubeadm.go:309] [mark-control-plane] Marking the node no-preload-602118 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0603 12:12:03.651488   73179 kubeadm.go:309] [bootstrap-token] Using token: shx5vv.etzadsstlalifeo7
	I0603 12:12:03.652942   73179 out.go:204]   - Configuring RBAC rules ...
	I0603 12:12:03.653061   73179 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0603 12:12:03.653174   73179 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0603 12:12:03.653347   73179 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0603 12:12:03.653531   73179 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0603 12:12:03.653674   73179 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0603 12:12:03.653781   73179 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0603 12:12:03.653925   73179 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0603 12:12:03.653965   73179 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0603 12:12:03.654004   73179 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0603 12:12:03.654010   73179 kubeadm.go:309] 
	I0603 12:12:03.654057   73179 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0603 12:12:03.654063   73179 kubeadm.go:309] 
	I0603 12:12:03.654125   73179 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0603 12:12:03.654131   73179 kubeadm.go:309] 
	I0603 12:12:03.654151   73179 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0603 12:12:03.654199   73179 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0603 12:12:03.654242   73179 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0603 12:12:03.654250   73179 kubeadm.go:309] 
	I0603 12:12:03.654300   73179 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0603 12:12:03.654306   73179 kubeadm.go:309] 
	I0603 12:12:03.654350   73179 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0603 12:12:03.654356   73179 kubeadm.go:309] 
	I0603 12:12:03.654397   73179 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0603 12:12:03.654467   73179 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0603 12:12:03.654524   73179 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0603 12:12:03.654530   73179 kubeadm.go:309] 
	I0603 12:12:03.654595   73179 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0603 12:12:03.654658   73179 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0603 12:12:03.654664   73179 kubeadm.go:309] 
	I0603 12:12:03.654729   73179 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token shx5vv.etzadsstlalifeo7 \
	I0603 12:12:03.654845   73179 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:efe6cbdd58a590fb6c3b56a05a1648145bfb13b8a8cd2383ea34b710fa987e0b \
	I0603 12:12:03.654880   73179 kubeadm.go:309] 	--control-plane 
	I0603 12:12:03.654886   73179 kubeadm.go:309] 
	I0603 12:12:03.655004   73179 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0603 12:12:03.655019   73179 kubeadm.go:309] 
	I0603 12:12:03.655117   73179 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token shx5vv.etzadsstlalifeo7 \
	I0603 12:12:03.655267   73179 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:efe6cbdd58a590fb6c3b56a05a1648145bfb13b8a8cd2383ea34b710fa987e0b 
	I0603 12:12:03.655306   73179 cni.go:84] Creating CNI manager for ""
	I0603 12:12:03.655316   73179 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0603 12:12:03.656746   73179 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0603 12:12:03.445612   73294 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0603 12:12:03.459114   73294 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0603 12:12:03.479003   73294 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0603 12:12:03.479128   73294 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:12:03.479139   73294 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-196710 minikube.k8s.io/updated_at=2024_06_03T12_12_03_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=599070631c2216ebc936292d491e4fe10e15b9d8 minikube.k8s.io/name=default-k8s-diff-port-196710 minikube.k8s.io/primary=true
	I0603 12:12:03.506970   73294 ops.go:34] apiserver oom_adj: -16
	I0603 12:12:03.684097   73294 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:12:04.185124   73294 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:12:01.667542   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:12:03.669066   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:12:03.657886   73179 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0603 12:12:03.672430   73179 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0603 12:12:03.693536   73179 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0603 12:12:03.693627   73179 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:12:03.693658   73179 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-602118 minikube.k8s.io/updated_at=2024_06_03T12_12_03_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=599070631c2216ebc936292d491e4fe10e15b9d8 minikube.k8s.io/name=no-preload-602118 minikube.k8s.io/primary=true
	I0603 12:12:03.730215   73179 ops.go:34] apiserver oom_adj: -16
	I0603 12:12:03.897726   73179 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:12:04.398585   73179 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:12:04.898543   73179 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:12:04.684589   73294 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:12:05.184999   73294 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:12:05.685081   73294 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:12:06.185212   73294 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:12:06.684565   73294 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:12:07.184862   73294 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:12:07.684542   73294 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:12:08.184516   73294 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:12:08.684333   73294 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:12:09.184426   73294 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:12:06.166490   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:12:08.167169   72964 pod_ready.go:102] pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace has status "Ready":"False"
	I0603 12:12:08.661107   72964 pod_ready.go:81] duration metric: took 4m0.000791246s for pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace to be "Ready" ...
	E0603 12:12:08.661143   72964 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-569cc877fc-8jrnd" in "kube-system" namespace to be "Ready" (will not retry!)
	I0603 12:12:08.661161   72964 pod_ready.go:38] duration metric: took 4m12.610770004s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0603 12:12:08.661187   72964 kubeadm.go:591] duration metric: took 4m20.419490743s to restartPrimaryControlPlane
	W0603 12:12:08.661235   72964 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0603 12:12:08.661255   72964 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0603 12:12:05.398640   73179 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:12:05.898522   73179 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:12:06.397948   73179 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:12:06.897958   73179 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:12:07.397912   73179 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:12:07.898059   73179 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:12:08.398372   73179 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:12:08.897877   73179 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:12:09.397861   73179 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:12:09.898541   73179 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:12:09.684787   73294 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:12:10.184277   73294 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:12:10.684146   73294 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:12:11.184402   73294 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:12:11.684199   73294 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:12:12.184770   73294 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:12:12.684964   73294 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:12:13.184228   73294 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:12:13.684160   73294 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:12:14.184443   73294 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:12:10.398126   73179 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:12:10.898790   73179 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:12:11.398275   73179 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:12:11.897874   73179 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:12:12.398040   73179 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:12:12.898813   73179 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:12:13.398175   73179 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:12:13.897789   73179 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:12:14.398202   73179 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:12:14.898444   73179 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:12:15.398430   73179 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:12:15.897913   73179 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:12:15.999563   73179 kubeadm.go:1107] duration metric: took 12.305979901s to wait for elevateKubeSystemPrivileges
	W0603 12:12:15.999608   73179 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0603 12:12:15.999618   73179 kubeadm.go:393] duration metric: took 5m16.666049314s to StartCluster
	I0603 12:12:15.999646   73179 settings.go:142] acquiring lock: {Name:mkda1bdbbfe91266270f1d999e6d56fc2830d6f8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 12:12:15.999745   73179 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19008-7755/kubeconfig
	I0603 12:12:16.002178   73179 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19008-7755/kubeconfig: {Name:mk2f45bf5da44c5570e14124ae482cac309d97b6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 12:12:16.002496   73179 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.50.245 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0603 12:12:16.003826   73179 out.go:177] * Verifying Kubernetes components...
	I0603 12:12:16.002629   73179 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0603 12:12:16.002754   73179 config.go:182] Loaded profile config "no-preload-602118": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0603 12:12:16.005034   73179 addons.go:69] Setting storage-provisioner=true in profile "no-preload-602118"
	I0603 12:12:16.005049   73179 addons.go:69] Setting metrics-server=true in profile "no-preload-602118"
	I0603 12:12:16.005048   73179 addons.go:69] Setting default-storageclass=true in profile "no-preload-602118"
	I0603 12:12:16.005080   73179 addons.go:234] Setting addon metrics-server=true in "no-preload-602118"
	W0603 12:12:16.005095   73179 addons.go:243] addon metrics-server should already be in state true
	I0603 12:12:16.005095   73179 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-602118"
	I0603 12:12:16.005121   73179 host.go:66] Checking if "no-preload-602118" exists ...
	I0603 12:12:16.005082   73179 addons.go:234] Setting addon storage-provisioner=true in "no-preload-602118"
	W0603 12:12:16.005147   73179 addons.go:243] addon storage-provisioner should already be in state true
	I0603 12:12:16.005184   73179 host.go:66] Checking if "no-preload-602118" exists ...
	I0603 12:12:16.005039   73179 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0603 12:12:16.005558   73179 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 12:12:16.005568   73179 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 12:12:16.005562   73179 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 12:12:16.005594   73179 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 12:12:16.005613   73179 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 12:12:16.005592   73179 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 12:12:16.025576   73179 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37907
	I0603 12:12:16.025614   73179 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33735
	I0603 12:12:16.025580   73179 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34403
	I0603 12:12:16.026031   73179 main.go:141] libmachine: () Calling .GetVersion
	I0603 12:12:16.026071   73179 main.go:141] libmachine: () Calling .GetVersion
	I0603 12:12:16.026136   73179 main.go:141] libmachine: () Calling .GetVersion
	I0603 12:12:16.026534   73179 main.go:141] libmachine: Using API Version  1
	I0603 12:12:16.026549   73179 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 12:12:16.026534   73179 main.go:141] libmachine: Using API Version  1
	I0603 12:12:16.026662   73179 main.go:141] libmachine: Using API Version  1
	I0603 12:12:16.026762   73179 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 12:12:16.026781   73179 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 12:12:16.026868   73179 main.go:141] libmachine: () Calling .GetMachineName
	I0603 12:12:16.027104   73179 main.go:141] libmachine: () Calling .GetMachineName
	I0603 12:12:16.027174   73179 main.go:141] libmachine: () Calling .GetMachineName
	I0603 12:12:16.027270   73179 main.go:141] libmachine: (no-preload-602118) Calling .GetState
	I0603 12:12:16.027448   73179 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 12:12:16.027481   73179 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 12:12:16.027667   73179 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 12:12:16.027693   73179 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 12:12:16.031436   73179 addons.go:234] Setting addon default-storageclass=true in "no-preload-602118"
	W0603 12:12:16.031458   73179 addons.go:243] addon default-storageclass should already be in state true
	I0603 12:12:16.031487   73179 host.go:66] Checking if "no-preload-602118" exists ...
	I0603 12:12:16.031838   73179 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 12:12:16.031870   73179 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 12:12:16.043477   73179 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43369
	I0603 12:12:16.043659   73179 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38809
	I0603 12:12:16.044102   73179 main.go:141] libmachine: () Calling .GetVersion
	I0603 12:12:16.044124   73179 main.go:141] libmachine: () Calling .GetVersion
	I0603 12:12:16.044746   73179 main.go:141] libmachine: Using API Version  1
	I0603 12:12:16.044763   73179 main.go:141] libmachine: Using API Version  1
	I0603 12:12:16.044767   73179 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 12:12:16.044779   73179 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 12:12:16.045175   73179 main.go:141] libmachine: () Calling .GetMachineName
	I0603 12:12:16.045364   73179 main.go:141] libmachine: (no-preload-602118) Calling .GetState
	I0603 12:12:16.045406   73179 main.go:141] libmachine: () Calling .GetMachineName
	I0603 12:12:16.045571   73179 main.go:141] libmachine: (no-preload-602118) Calling .GetState
	I0603 12:12:16.047312   73179 main.go:141] libmachine: (no-preload-602118) Calling .DriverName
	I0603 12:12:16.047741   73179 main.go:141] libmachine: (no-preload-602118) Calling .DriverName
	I0603 12:12:16.049538   73179 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0603 12:12:16.048146   73179 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35375
	I0603 12:12:16.050862   73179 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0603 12:12:16.050892   73179 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0603 12:12:16.050897   73179 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0603 12:12:16.050908   73179 main.go:141] libmachine: (no-preload-602118) Calling .GetSSHHostname
	I0603 12:12:14.684713   73294 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:12:15.184206   73294 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:12:15.684798   73294 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:12:16.184405   73294 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:12:16.684720   73294 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:12:16.818407   73294 kubeadm.go:1107] duration metric: took 13.339334124s to wait for elevateKubeSystemPrivileges
	W0603 12:12:16.818450   73294 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0603 12:12:16.818460   73294 kubeadm.go:393] duration metric: took 5m7.432855804s to StartCluster
	I0603 12:12:16.818480   73294 settings.go:142] acquiring lock: {Name:mkda1bdbbfe91266270f1d999e6d56fc2830d6f8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 12:12:16.818573   73294 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19008-7755/kubeconfig
	I0603 12:12:16.821192   73294 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19008-7755/kubeconfig: {Name:mk2f45bf5da44c5570e14124ae482cac309d97b6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 12:12:16.821483   73294 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.61.60 Port:8444 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0603 12:12:16.823082   73294 out.go:177] * Verifying Kubernetes components...
	I0603 12:12:16.821572   73294 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0603 12:12:16.821670   73294 config.go:182] Loaded profile config "default-k8s-diff-port-196710": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0603 12:12:16.824703   73294 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0603 12:12:16.824719   73294 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-196710"
	I0603 12:12:16.824760   73294 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-196710"
	I0603 12:12:16.824710   73294 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-196710"
	W0603 12:12:16.824772   73294 addons.go:243] addon metrics-server should already be in state true
	I0603 12:12:16.824795   73294 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-196710"
	I0603 12:12:16.824802   73294 host.go:66] Checking if "default-k8s-diff-port-196710" exists ...
	W0603 12:12:16.824808   73294 addons.go:243] addon storage-provisioner should already be in state true
	I0603 12:12:16.824723   73294 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-196710"
	I0603 12:12:16.824843   73294 host.go:66] Checking if "default-k8s-diff-port-196710" exists ...
	I0603 12:12:16.824851   73294 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-196710"
	I0603 12:12:16.825222   73294 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 12:12:16.825241   73294 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 12:12:16.825250   73294 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 12:12:16.825264   73294 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 12:12:16.825228   73294 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 12:12:16.825354   73294 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 12:12:16.843187   73294 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41289
	I0603 12:12:16.843659   73294 main.go:141] libmachine: () Calling .GetVersion
	I0603 12:12:16.844379   73294 main.go:141] libmachine: Using API Version  1
	I0603 12:12:16.844407   73294 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 12:12:16.844784   73294 main.go:141] libmachine: () Calling .GetMachineName
	I0603 12:12:16.845314   73294 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 12:12:16.845353   73294 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 12:12:16.845975   73294 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46095
	I0603 12:12:16.846379   73294 main.go:141] libmachine: () Calling .GetVersion
	I0603 12:12:16.846856   73294 main.go:141] libmachine: Using API Version  1
	I0603 12:12:16.846875   73294 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 12:12:16.847307   73294 main.go:141] libmachine: () Calling .GetMachineName
	I0603 12:12:16.847921   73294 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 12:12:16.847944   73294 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 12:12:16.848622   73294 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45613
	I0603 12:12:16.849007   73294 main.go:141] libmachine: () Calling .GetVersion
	I0603 12:12:16.849505   73294 main.go:141] libmachine: Using API Version  1
	I0603 12:12:16.849527   73294 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 12:12:16.849888   73294 main.go:141] libmachine: () Calling .GetMachineName
	I0603 12:12:16.850120   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .GetState
	I0603 12:12:16.853711   73294 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-196710"
	W0603 12:12:16.853732   73294 addons.go:243] addon default-storageclass should already be in state true
	I0603 12:12:16.853758   73294 host.go:66] Checking if "default-k8s-diff-port-196710" exists ...
	I0603 12:12:16.854106   73294 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 12:12:16.854143   73294 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 12:12:16.874485   73294 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41485
	I0603 12:12:16.874543   73294 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40823
	I0603 12:12:16.875013   73294 main.go:141] libmachine: () Calling .GetVersion
	I0603 12:12:16.875431   73294 main.go:141] libmachine: () Calling .GetVersion
	I0603 12:12:16.875601   73294 main.go:141] libmachine: Using API Version  1
	I0603 12:12:16.875619   73294 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 12:12:16.875983   73294 main.go:141] libmachine: () Calling .GetMachineName
	I0603 12:12:16.875970   73294 main.go:141] libmachine: Using API Version  1
	I0603 12:12:16.876141   73294 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 12:12:16.876153   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .GetState
	I0603 12:12:16.876623   73294 main.go:141] libmachine: () Calling .GetMachineName
	I0603 12:12:16.877005   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .GetState
	I0603 12:12:16.878149   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .DriverName
	I0603 12:12:16.879857   73294 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0603 12:12:16.881339   73294 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0603 12:12:16.881357   73294 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0603 12:12:16.881384   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .GetSSHHostname
	I0603 12:12:16.883128   73294 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42307
	I0603 12:12:16.883690   73294 main.go:141] libmachine: () Calling .GetVersion
	I0603 12:12:16.883973   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .DriverName
	I0603 12:12:16.884247   73294 main.go:141] libmachine: Using API Version  1
	I0603 12:12:16.884263   73294 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 12:12:16.885697   73294 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0603 12:12:16.052190   73179 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0603 12:12:16.052208   73179 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0603 12:12:16.052226   73179 main.go:141] libmachine: (no-preload-602118) Calling .GetSSHHostname
	I0603 12:12:16.051450   73179 main.go:141] libmachine: () Calling .GetVersion
	I0603 12:12:16.053253   73179 main.go:141] libmachine: Using API Version  1
	I0603 12:12:16.053274   73179 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 12:12:16.053684   73179 main.go:141] libmachine: () Calling .GetMachineName
	I0603 12:12:16.054284   73179 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 12:12:16.054309   73179 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 12:12:16.054504   73179 main.go:141] libmachine: (no-preload-602118) DBG | domain no-preload-602118 has defined MAC address 52:54:00:ac:6c:91 in network mk-no-preload-602118
	I0603 12:12:16.054885   73179 main.go:141] libmachine: (no-preload-602118) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:6c:91", ip: ""} in network mk-no-preload-602118: {Iface:virbr2 ExpiryTime:2024-06-03 13:06:33 +0000 UTC Type:0 Mac:52:54:00:ac:6c:91 Iaid: IPaddr:192.168.50.245 Prefix:24 Hostname:no-preload-602118 Clientid:01:52:54:00:ac:6c:91}
	I0603 12:12:16.054916   73179 main.go:141] libmachine: (no-preload-602118) DBG | domain no-preload-602118 has defined IP address 192.168.50.245 and MAC address 52:54:00:ac:6c:91 in network mk-no-preload-602118
	I0603 12:12:16.055640   73179 main.go:141] libmachine: (no-preload-602118) Calling .GetSSHPort
	I0603 12:12:16.055804   73179 main.go:141] libmachine: (no-preload-602118) Calling .GetSSHKeyPath
	I0603 12:12:16.055873   73179 main.go:141] libmachine: (no-preload-602118) DBG | domain no-preload-602118 has defined MAC address 52:54:00:ac:6c:91 in network mk-no-preload-602118
	I0603 12:12:16.055952   73179 main.go:141] libmachine: (no-preload-602118) Calling .GetSSHUsername
	I0603 12:12:16.056079   73179 sshutil.go:53] new ssh client: &{IP:192.168.50.245 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19008-7755/.minikube/machines/no-preload-602118/id_rsa Username:docker}
	I0603 12:12:16.056405   73179 main.go:141] libmachine: (no-preload-602118) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:6c:91", ip: ""} in network mk-no-preload-602118: {Iface:virbr2 ExpiryTime:2024-06-03 13:06:33 +0000 UTC Type:0 Mac:52:54:00:ac:6c:91 Iaid: IPaddr:192.168.50.245 Prefix:24 Hostname:no-preload-602118 Clientid:01:52:54:00:ac:6c:91}
	I0603 12:12:16.056431   73179 main.go:141] libmachine: (no-preload-602118) DBG | domain no-preload-602118 has defined IP address 192.168.50.245 and MAC address 52:54:00:ac:6c:91 in network mk-no-preload-602118
	I0603 12:12:16.056465   73179 main.go:141] libmachine: (no-preload-602118) Calling .GetSSHPort
	I0603 12:12:16.056633   73179 main.go:141] libmachine: (no-preload-602118) Calling .GetSSHKeyPath
	I0603 12:12:16.056879   73179 main.go:141] libmachine: (no-preload-602118) Calling .GetSSHUsername
	I0603 12:12:16.057006   73179 sshutil.go:53] new ssh client: &{IP:192.168.50.245 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19008-7755/.minikube/machines/no-preload-602118/id_rsa Username:docker}
	I0603 12:12:16.072215   73179 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39537
	I0603 12:12:16.072581   73179 main.go:141] libmachine: () Calling .GetVersion
	I0603 12:12:16.072913   73179 main.go:141] libmachine: Using API Version  1
	I0603 12:12:16.072924   73179 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 12:12:16.073189   73179 main.go:141] libmachine: () Calling .GetMachineName
	I0603 12:12:16.073304   73179 main.go:141] libmachine: (no-preload-602118) Calling .GetState
	I0603 12:12:16.074771   73179 main.go:141] libmachine: (no-preload-602118) Calling .DriverName
	I0603 12:12:16.074941   73179 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0603 12:12:16.074953   73179 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0603 12:12:16.074964   73179 main.go:141] libmachine: (no-preload-602118) Calling .GetSSHHostname
	I0603 12:12:16.077122   73179 main.go:141] libmachine: (no-preload-602118) DBG | domain no-preload-602118 has defined MAC address 52:54:00:ac:6c:91 in network mk-no-preload-602118
	I0603 12:12:16.077439   73179 main.go:141] libmachine: (no-preload-602118) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:6c:91", ip: ""} in network mk-no-preload-602118: {Iface:virbr2 ExpiryTime:2024-06-03 13:06:33 +0000 UTC Type:0 Mac:52:54:00:ac:6c:91 Iaid: IPaddr:192.168.50.245 Prefix:24 Hostname:no-preload-602118 Clientid:01:52:54:00:ac:6c:91}
	I0603 12:12:16.077456   73179 main.go:141] libmachine: (no-preload-602118) DBG | domain no-preload-602118 has defined IP address 192.168.50.245 and MAC address 52:54:00:ac:6c:91 in network mk-no-preload-602118
	I0603 12:12:16.077666   73179 main.go:141] libmachine: (no-preload-602118) Calling .GetSSHPort
	I0603 12:12:16.077790   73179 main.go:141] libmachine: (no-preload-602118) Calling .GetSSHKeyPath
	I0603 12:12:16.077893   73179 main.go:141] libmachine: (no-preload-602118) Calling .GetSSHUsername
	I0603 12:12:16.078025   73179 sshutil.go:53] new ssh client: &{IP:192.168.50.245 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19008-7755/.minikube/machines/no-preload-602118/id_rsa Username:docker}
	I0603 12:12:16.204391   73179 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0603 12:12:16.224077   73179 node_ready.go:35] waiting up to 6m0s for node "no-preload-602118" to be "Ready" ...
	I0603 12:12:16.234147   73179 node_ready.go:49] node "no-preload-602118" has status "Ready":"True"
	I0603 12:12:16.234165   73179 node_ready.go:38] duration metric: took 10.052016ms for node "no-preload-602118" to be "Ready" ...
	I0603 12:12:16.234174   73179 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0603 12:12:16.239106   73179 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-602118" in "kube-system" namespace to be "Ready" ...
	I0603 12:12:16.245931   73179 pod_ready.go:92] pod "etcd-no-preload-602118" in "kube-system" namespace has status "Ready":"True"
	I0603 12:12:16.245951   73179 pod_ready.go:81] duration metric: took 6.818123ms for pod "etcd-no-preload-602118" in "kube-system" namespace to be "Ready" ...
	I0603 12:12:16.245959   73179 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-602118" in "kube-system" namespace to be "Ready" ...
	I0603 12:12:16.251349   73179 pod_ready.go:92] pod "kube-apiserver-no-preload-602118" in "kube-system" namespace has status "Ready":"True"
	I0603 12:12:16.251368   73179 pod_ready.go:81] duration metric: took 5.403445ms for pod "kube-apiserver-no-preload-602118" in "kube-system" namespace to be "Ready" ...
	I0603 12:12:16.251379   73179 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-602118" in "kube-system" namespace to be "Ready" ...
	I0603 12:12:16.259769   73179 pod_ready.go:92] pod "kube-controller-manager-no-preload-602118" in "kube-system" namespace has status "Ready":"True"
	I0603 12:12:16.259787   73179 pod_ready.go:81] duration metric: took 8.400968ms for pod "kube-controller-manager-no-preload-602118" in "kube-system" namespace to be "Ready" ...
	I0603 12:12:16.259797   73179 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-602118" in "kube-system" namespace to be "Ready" ...
	I0603 12:12:16.271311   73179 pod_ready.go:92] pod "kube-scheduler-no-preload-602118" in "kube-system" namespace has status "Ready":"True"
	I0603 12:12:16.271335   73179 pod_ready.go:81] duration metric: took 11.529418ms for pod "kube-scheduler-no-preload-602118" in "kube-system" namespace to be "Ready" ...
	I0603 12:12:16.271344   73179 pod_ready.go:38] duration metric: took 37.160711ms for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0603 12:12:16.271361   73179 api_server.go:52] waiting for apiserver process to appear ...
	I0603 12:12:16.271414   73179 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:12:16.299864   73179 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0603 12:12:16.312742   73179 api_server.go:72] duration metric: took 310.202333ms to wait for apiserver process to appear ...
	I0603 12:12:16.312769   73179 api_server.go:88] waiting for apiserver healthz status ...
	I0603 12:12:16.312789   73179 api_server.go:253] Checking apiserver healthz at https://192.168.50.245:8443/healthz ...
	I0603 12:12:16.332856   73179 api_server.go:279] https://192.168.50.245:8443/healthz returned 200:
	ok
	I0603 12:12:16.334897   73179 api_server.go:141] control plane version: v1.30.1
	I0603 12:12:16.334922   73179 api_server.go:131] duration metric: took 22.144726ms to wait for apiserver health ...
	I0603 12:12:16.334932   73179 system_pods.go:43] waiting for kube-system pods to appear ...
	I0603 12:12:16.354509   73179 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0603 12:12:16.377512   73179 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0603 12:12:16.377540   73179 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0603 12:12:16.428770   73179 system_pods.go:59] 4 kube-system pods found
	I0603 12:12:16.428807   73179 system_pods.go:61] "etcd-no-preload-602118" [d66d136a-b6c8-411c-b820-3e80f773accf] Running
	I0603 12:12:16.428815   73179 system_pods.go:61] "kube-apiserver-no-preload-602118" [299b92ab-ea99-45f5-b55d-b66a06daeaeb] Running
	I0603 12:12:16.428820   73179 system_pods.go:61] "kube-controller-manager-no-preload-602118" [16fd06ad-ff2d-4392-b453-9a8ed782b581] Running
	I0603 12:12:16.428825   73179 system_pods.go:61] "kube-scheduler-no-preload-602118" [71ebde21-9840-43f5-b0a3-424e1fd2000e] Running
	I0603 12:12:16.428833   73179 system_pods.go:74] duration metric: took 93.893548ms to wait for pod list to return data ...
	I0603 12:12:16.428841   73179 default_sa.go:34] waiting for default service account to be created ...
	I0603 12:12:16.438619   73179 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0603 12:12:16.438645   73179 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0603 12:12:16.495189   73179 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0603 12:12:16.495218   73179 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0603 12:12:16.543072   73179 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0603 12:12:16.666123   73179 default_sa.go:45] found service account: "default"
	I0603 12:12:16.666154   73179 default_sa.go:55] duration metric: took 237.305488ms for default service account to be created ...
	I0603 12:12:16.666163   73179 system_pods.go:116] waiting for k8s-apps to be running ...
	I0603 12:12:16.860342   73179 system_pods.go:86] 7 kube-system pods found
	I0603 12:12:16.860387   73179 system_pods.go:89] "coredns-7db6d8ff4d-5gmj5" [474da426-9414-4a30-8b19-14e555e192de] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0603 12:12:16.860401   73179 system_pods.go:89] "coredns-7db6d8ff4d-dwptw" [7a0437fe-8e83-4acc-a92a-af29bf06db93] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0603 12:12:16.860410   73179 system_pods.go:89] "etcd-no-preload-602118" [d66d136a-b6c8-411c-b820-3e80f773accf] Running
	I0603 12:12:16.860419   73179 system_pods.go:89] "kube-apiserver-no-preload-602118" [299b92ab-ea99-45f5-b55d-b66a06daeaeb] Running
	I0603 12:12:16.860427   73179 system_pods.go:89] "kube-controller-manager-no-preload-602118" [16fd06ad-ff2d-4392-b453-9a8ed782b581] Running
	I0603 12:12:16.860436   73179 system_pods.go:89] "kube-proxy-tfxkl" [d6502635-478f-443c-8186-ab0616fcf4ac] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0603 12:12:16.860443   73179 system_pods.go:89] "kube-scheduler-no-preload-602118" [71ebde21-9840-43f5-b0a3-424e1fd2000e] Running
	I0603 12:12:16.860466   73179 retry.go:31] will retry after 306.693518ms: missing components: kube-dns, kube-proxy
	I0603 12:12:17.184783   73179 system_pods.go:86] 7 kube-system pods found
	I0603 12:12:17.184828   73179 system_pods.go:89] "coredns-7db6d8ff4d-5gmj5" [474da426-9414-4a30-8b19-14e555e192de] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0603 12:12:17.184840   73179 system_pods.go:89] "coredns-7db6d8ff4d-dwptw" [7a0437fe-8e83-4acc-a92a-af29bf06db93] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0603 12:12:17.184852   73179 system_pods.go:89] "etcd-no-preload-602118" [d66d136a-b6c8-411c-b820-3e80f773accf] Running
	I0603 12:12:17.184860   73179 system_pods.go:89] "kube-apiserver-no-preload-602118" [299b92ab-ea99-45f5-b55d-b66a06daeaeb] Running
	I0603 12:12:17.184868   73179 system_pods.go:89] "kube-controller-manager-no-preload-602118" [16fd06ad-ff2d-4392-b453-9a8ed782b581] Running
	I0603 12:12:17.184880   73179 system_pods.go:89] "kube-proxy-tfxkl" [d6502635-478f-443c-8186-ab0616fcf4ac] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0603 12:12:17.184891   73179 system_pods.go:89] "kube-scheduler-no-preload-602118" [71ebde21-9840-43f5-b0a3-424e1fd2000e] Running
	I0603 12:12:17.184916   73179 retry.go:31] will retry after 329.094905ms: missing components: kube-dns, kube-proxy
	I0603 12:12:17.415182   73179 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.060631588s)
	I0603 12:12:17.415242   73179 main.go:141] libmachine: Making call to close driver server
	I0603 12:12:17.415255   73179 main.go:141] libmachine: (no-preload-602118) Calling .Close
	I0603 12:12:17.415284   73179 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.115379891s)
	I0603 12:12:17.415326   73179 main.go:141] libmachine: Making call to close driver server
	I0603 12:12:17.415336   73179 main.go:141] libmachine: (no-preload-602118) Calling .Close
	I0603 12:12:17.415714   73179 main.go:141] libmachine: (no-preload-602118) DBG | Closing plugin on server side
	I0603 12:12:17.415719   73179 main.go:141] libmachine: (no-preload-602118) DBG | Closing plugin on server side
	I0603 12:12:17.415725   73179 main.go:141] libmachine: Successfully made call to close driver server
	I0603 12:12:17.415745   73179 main.go:141] libmachine: Making call to close connection to plugin binary
	I0603 12:12:17.415751   73179 main.go:141] libmachine: Successfully made call to close driver server
	I0603 12:12:17.415779   73179 main.go:141] libmachine: Making call to close connection to plugin binary
	I0603 12:12:17.415793   73179 main.go:141] libmachine: Making call to close driver server
	I0603 12:12:17.415804   73179 main.go:141] libmachine: (no-preload-602118) Calling .Close
	I0603 12:12:17.415753   73179 main.go:141] libmachine: Making call to close driver server
	I0603 12:12:17.415859   73179 main.go:141] libmachine: (no-preload-602118) Calling .Close
	I0603 12:12:17.416049   73179 main.go:141] libmachine: Successfully made call to close driver server
	I0603 12:12:17.416063   73179 main.go:141] libmachine: Making call to close connection to plugin binary
	I0603 12:12:17.417320   73179 main.go:141] libmachine: (no-preload-602118) DBG | Closing plugin on server side
	I0603 12:12:17.417366   73179 main.go:141] libmachine: Successfully made call to close driver server
	I0603 12:12:17.417391   73179 main.go:141] libmachine: Making call to close connection to plugin binary
	I0603 12:12:17.434040   73179 main.go:141] libmachine: Making call to close driver server
	I0603 12:12:17.434072   73179 main.go:141] libmachine: (no-preload-602118) Calling .Close
	I0603 12:12:17.434410   73179 main.go:141] libmachine: (no-preload-602118) DBG | Closing plugin on server side
	I0603 12:12:17.434434   73179 main.go:141] libmachine: Successfully made call to close driver server
	I0603 12:12:17.434445   73179 main.go:141] libmachine: Making call to close connection to plugin binary
	I0603 12:12:17.527445   73179 system_pods.go:86] 8 kube-system pods found
	I0603 12:12:17.527486   73179 system_pods.go:89] "coredns-7db6d8ff4d-5gmj5" [474da426-9414-4a30-8b19-14e555e192de] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0603 12:12:17.527499   73179 system_pods.go:89] "coredns-7db6d8ff4d-dwptw" [7a0437fe-8e83-4acc-a92a-af29bf06db93] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0603 12:12:17.527508   73179 system_pods.go:89] "etcd-no-preload-602118" [d66d136a-b6c8-411c-b820-3e80f773accf] Running
	I0603 12:12:17.527516   73179 system_pods.go:89] "kube-apiserver-no-preload-602118" [299b92ab-ea99-45f5-b55d-b66a06daeaeb] Running
	I0603 12:12:17.527524   73179 system_pods.go:89] "kube-controller-manager-no-preload-602118" [16fd06ad-ff2d-4392-b453-9a8ed782b581] Running
	I0603 12:12:17.527533   73179 system_pods.go:89] "kube-proxy-tfxkl" [d6502635-478f-443c-8186-ab0616fcf4ac] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0603 12:12:17.527540   73179 system_pods.go:89] "kube-scheduler-no-preload-602118" [71ebde21-9840-43f5-b0a3-424e1fd2000e] Running
	I0603 12:12:17.527551   73179 system_pods.go:89] "storage-provisioner" [9d9e7c2b-91a9-4394-8a08-a2c076d4b42d] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0603 12:12:17.527591   73179 retry.go:31] will retry after 346.068859ms: missing components: kube-dns, kube-proxy
	I0603 12:12:17.908653   73179 system_pods.go:86] 9 kube-system pods found
	I0603 12:12:17.908695   73179 system_pods.go:89] "coredns-7db6d8ff4d-5gmj5" [474da426-9414-4a30-8b19-14e555e192de] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0603 12:12:17.908706   73179 system_pods.go:89] "coredns-7db6d8ff4d-dwptw" [7a0437fe-8e83-4acc-a92a-af29bf06db93] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0603 12:12:17.908713   73179 system_pods.go:89] "etcd-no-preload-602118" [d66d136a-b6c8-411c-b820-3e80f773accf] Running
	I0603 12:12:17.908721   73179 system_pods.go:89] "kube-apiserver-no-preload-602118" [299b92ab-ea99-45f5-b55d-b66a06daeaeb] Running
	I0603 12:12:17.908728   73179 system_pods.go:89] "kube-controller-manager-no-preload-602118" [16fd06ad-ff2d-4392-b453-9a8ed782b581] Running
	I0603 12:12:17.908736   73179 system_pods.go:89] "kube-proxy-tfxkl" [d6502635-478f-443c-8186-ab0616fcf4ac] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0603 12:12:17.908743   73179 system_pods.go:89] "kube-scheduler-no-preload-602118" [71ebde21-9840-43f5-b0a3-424e1fd2000e] Running
	I0603 12:12:17.908753   73179 system_pods.go:89] "metrics-server-569cc877fc-zpzbw" [b28cb265-532b-41ea-a242-001a85174a35] Pending
	I0603 12:12:17.908761   73179 system_pods.go:89] "storage-provisioner" [9d9e7c2b-91a9-4394-8a08-a2c076d4b42d] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0603 12:12:17.908779   73179 retry.go:31] will retry after 517.651766ms: missing components: kube-dns, kube-proxy
	I0603 12:12:18.135778   73179 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.592660253s)
	I0603 12:12:18.135904   73179 main.go:141] libmachine: Making call to close driver server
	I0603 12:12:18.135945   73179 main.go:141] libmachine: (no-preload-602118) Calling .Close
	I0603 12:12:18.137972   73179 main.go:141] libmachine: (no-preload-602118) DBG | Closing plugin on server side
	I0603 12:12:18.138016   73179 main.go:141] libmachine: Successfully made call to close driver server
	I0603 12:12:18.138040   73179 main.go:141] libmachine: Making call to close connection to plugin binary
	I0603 12:12:18.138060   73179 main.go:141] libmachine: Making call to close driver server
	I0603 12:12:18.138071   73179 main.go:141] libmachine: (no-preload-602118) Calling .Close
	I0603 12:12:18.138394   73179 main.go:141] libmachine: (no-preload-602118) DBG | Closing plugin on server side
	I0603 12:12:18.138435   73179 main.go:141] libmachine: Successfully made call to close driver server
	I0603 12:12:18.138452   73179 main.go:141] libmachine: Making call to close connection to plugin binary
	I0603 12:12:18.138467   73179 addons.go:475] Verifying addon metrics-server=true in "no-preload-602118"
	I0603 12:12:18.139950   73179 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0603 12:12:16.887014   73294 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0603 12:12:16.887031   73294 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0603 12:12:16.887059   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .GetSSHHostname
	I0603 12:12:16.884952   73294 main.go:141] libmachine: () Calling .GetMachineName
	I0603 12:12:16.885388   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | domain default-k8s-diff-port-196710 has defined MAC address 52:54:00:9c:61:49 in network mk-default-k8s-diff-port-196710
	I0603 12:12:16.887151   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:61:49", ip: ""} in network mk-default-k8s-diff-port-196710: {Iface:virbr4 ExpiryTime:2024-06-03 12:58:47 +0000 UTC Type:0 Mac:52:54:00:9c:61:49 Iaid: IPaddr:192.168.61.60 Prefix:24 Hostname:default-k8s-diff-port-196710 Clientid:01:52:54:00:9c:61:49}
	I0603 12:12:16.887173   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | domain default-k8s-diff-port-196710 has defined IP address 192.168.61.60 and MAC address 52:54:00:9c:61:49 in network mk-default-k8s-diff-port-196710
	I0603 12:12:16.887719   73294 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 12:12:16.887741   73294 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 12:12:16.887932   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .GetSSHPort
	I0603 12:12:16.888207   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .GetSSHKeyPath
	I0603 12:12:16.888429   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .GetSSHUsername
	I0603 12:12:16.889197   73294 sshutil.go:53] new ssh client: &{IP:192.168.61.60 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19008-7755/.minikube/machines/default-k8s-diff-port-196710/id_rsa Username:docker}
	I0603 12:12:16.891158   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | domain default-k8s-diff-port-196710 has defined MAC address 52:54:00:9c:61:49 in network mk-default-k8s-diff-port-196710
	I0603 12:12:16.891613   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:61:49", ip: ""} in network mk-default-k8s-diff-port-196710: {Iface:virbr4 ExpiryTime:2024-06-03 12:58:47 +0000 UTC Type:0 Mac:52:54:00:9c:61:49 Iaid: IPaddr:192.168.61.60 Prefix:24 Hostname:default-k8s-diff-port-196710 Clientid:01:52:54:00:9c:61:49}
	I0603 12:12:16.891639   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | domain default-k8s-diff-port-196710 has defined IP address 192.168.61.60 and MAC address 52:54:00:9c:61:49 in network mk-default-k8s-diff-port-196710
	I0603 12:12:16.891801   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .GetSSHPort
	I0603 12:12:16.891979   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .GetSSHKeyPath
	I0603 12:12:16.892107   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .GetSSHUsername
	I0603 12:12:16.892220   73294 sshutil.go:53] new ssh client: &{IP:192.168.61.60 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19008-7755/.minikube/machines/default-k8s-diff-port-196710/id_rsa Username:docker}
	I0603 12:12:16.909637   73294 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35155
	I0603 12:12:16.910191   73294 main.go:141] libmachine: () Calling .GetVersion
	I0603 12:12:16.910809   73294 main.go:141] libmachine: Using API Version  1
	I0603 12:12:16.910836   73294 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 12:12:16.911344   73294 main.go:141] libmachine: () Calling .GetMachineName
	I0603 12:12:16.911542   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .GetState
	I0603 12:12:16.913489   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .DriverName
	I0603 12:12:16.913704   73294 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0603 12:12:16.913718   73294 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0603 12:12:16.913735   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .GetSSHHostname
	I0603 12:12:16.917538   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | domain default-k8s-diff-port-196710 has defined MAC address 52:54:00:9c:61:49 in network mk-default-k8s-diff-port-196710
	I0603 12:12:16.917994   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:61:49", ip: ""} in network mk-default-k8s-diff-port-196710: {Iface:virbr4 ExpiryTime:2024-06-03 12:58:47 +0000 UTC Type:0 Mac:52:54:00:9c:61:49 Iaid: IPaddr:192.168.61.60 Prefix:24 Hostname:default-k8s-diff-port-196710 Clientid:01:52:54:00:9c:61:49}
	I0603 12:12:16.918020   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | domain default-k8s-diff-port-196710 has defined IP address 192.168.61.60 and MAC address 52:54:00:9c:61:49 in network mk-default-k8s-diff-port-196710
	I0603 12:12:16.918116   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .GetSSHPort
	I0603 12:12:16.918243   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .GetSSHKeyPath
	I0603 12:12:16.918349   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .GetSSHUsername
	I0603 12:12:16.918445   73294 sshutil.go:53] new ssh client: &{IP:192.168.61.60 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19008-7755/.minikube/machines/default-k8s-diff-port-196710/id_rsa Username:docker}
	I0603 12:12:17.046824   73294 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0603 12:12:17.064066   73294 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-196710" to be "Ready" ...
	I0603 12:12:17.084082   73294 node_ready.go:49] node "default-k8s-diff-port-196710" has status "Ready":"True"
	I0603 12:12:17.084108   73294 node_ready.go:38] duration metric: took 19.978467ms for node "default-k8s-diff-port-196710" to be "Ready" ...
	I0603 12:12:17.084116   73294 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0603 12:12:17.095774   73294 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-fvgqr" in "kube-system" namespace to be "Ready" ...
	I0603 12:12:17.168174   73294 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0603 12:12:17.168200   73294 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0603 12:12:17.200793   73294 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0603 12:12:17.203132   73294 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0603 12:12:17.245827   73294 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0603 12:12:17.245855   73294 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0603 12:12:17.310865   73294 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0603 12:12:17.310894   73294 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0603 12:12:17.449447   73294 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0603 12:12:18.385411   73294 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.184578024s)
	I0603 12:12:18.385465   73294 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.182295951s)
	I0603 12:12:18.385505   73294 main.go:141] libmachine: Making call to close driver server
	I0603 12:12:18.385520   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .Close
	I0603 12:12:18.385470   73294 main.go:141] libmachine: Making call to close driver server
	I0603 12:12:18.385562   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .Close
	I0603 12:12:18.385878   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | Closing plugin on server side
	I0603 12:12:18.385905   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | Closing plugin on server side
	I0603 12:12:18.385954   73294 main.go:141] libmachine: Successfully made call to close driver server
	I0603 12:12:18.385971   73294 main.go:141] libmachine: Making call to close connection to plugin binary
	I0603 12:12:18.385980   73294 main.go:141] libmachine: Making call to close driver server
	I0603 12:12:18.386009   73294 main.go:141] libmachine: Successfully made call to close driver server
	I0603 12:12:18.386026   73294 main.go:141] libmachine: Making call to close connection to plugin binary
	I0603 12:12:18.386035   73294 main.go:141] libmachine: Making call to close driver server
	I0603 12:12:18.386043   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .Close
	I0603 12:12:18.386094   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .Close
	I0603 12:12:18.386336   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | Closing plugin on server side
	I0603 12:12:18.386374   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | Closing plugin on server side
	I0603 12:12:18.386425   73294 main.go:141] libmachine: Successfully made call to close driver server
	I0603 12:12:18.386460   73294 main.go:141] libmachine: Making call to close connection to plugin binary
	I0603 12:12:18.387994   73294 main.go:141] libmachine: Successfully made call to close driver server
	I0603 12:12:18.388012   73294 main.go:141] libmachine: Making call to close connection to plugin binary
	I0603 12:12:18.423011   73294 main.go:141] libmachine: Making call to close driver server
	I0603 12:12:18.423058   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .Close
	I0603 12:12:18.423412   73294 main.go:141] libmachine: Successfully made call to close driver server
	I0603 12:12:18.423433   73294 main.go:141] libmachine: Making call to close connection to plugin binary
	I0603 12:12:18.423473   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | Closing plugin on server side
	I0603 12:12:18.697521   73294 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.24802602s)
	I0603 12:12:18.697564   73294 main.go:141] libmachine: Making call to close driver server
	I0603 12:12:18.697575   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .Close
	I0603 12:12:18.697960   73294 main.go:141] libmachine: Successfully made call to close driver server
	I0603 12:12:18.697982   73294 main.go:141] libmachine: Making call to close connection to plugin binary
	I0603 12:12:18.698043   73294 main.go:141] libmachine: Making call to close driver server
	I0603 12:12:18.698061   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) Calling .Close
	I0603 12:12:18.698312   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | Closing plugin on server side
	I0603 12:12:18.698391   73294 main.go:141] libmachine: Successfully made call to close driver server
	I0603 12:12:18.698408   73294 main.go:141] libmachine: Making call to close connection to plugin binary
	I0603 12:12:18.698425   73294 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-196710"
	I0603 12:12:18.700421   73294 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0603 12:12:18.698680   73294 main.go:141] libmachine: (default-k8s-diff-port-196710) DBG | Closing plugin on server side
	I0603 12:12:18.701834   73294 addons.go:510] duration metric: took 1.880261237s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0603 12:12:19.125961   73294 pod_ready.go:92] pod "coredns-7db6d8ff4d-fvgqr" in "kube-system" namespace has status "Ready":"True"
	I0603 12:12:19.125993   73294 pod_ready.go:81] duration metric: took 2.03019096s for pod "coredns-7db6d8ff4d-fvgqr" in "kube-system" namespace to be "Ready" ...
	I0603 12:12:19.126008   73294 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-196710" in "kube-system" namespace to be "Ready" ...
	I0603 12:12:19.142691   73294 pod_ready.go:92] pod "etcd-default-k8s-diff-port-196710" in "kube-system" namespace has status "Ready":"True"
	I0603 12:12:19.142711   73294 pod_ready.go:81] duration metric: took 16.694827ms for pod "etcd-default-k8s-diff-port-196710" in "kube-system" namespace to be "Ready" ...
	I0603 12:12:19.142721   73294 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-196710" in "kube-system" namespace to be "Ready" ...
	I0603 12:12:19.166768   73294 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-196710" in "kube-system" namespace has status "Ready":"True"
	I0603 12:12:19.166793   73294 pod_ready.go:81] duration metric: took 24.064572ms for pod "kube-apiserver-default-k8s-diff-port-196710" in "kube-system" namespace to be "Ready" ...
	I0603 12:12:19.166806   73294 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-196710" in "kube-system" namespace to be "Ready" ...
	I0603 12:12:19.177902   73294 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-196710" in "kube-system" namespace has status "Ready":"True"
	I0603 12:12:19.177917   73294 pod_ready.go:81] duration metric: took 11.103943ms for pod "kube-controller-manager-default-k8s-diff-port-196710" in "kube-system" namespace to be "Ready" ...
	I0603 12:12:19.177926   73294 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-j4gzg" in "kube-system" namespace to be "Ready" ...
	I0603 12:12:19.191217   73294 pod_ready.go:92] pod "kube-proxy-j4gzg" in "kube-system" namespace has status "Ready":"True"
	I0603 12:12:19.191242   73294 pod_ready.go:81] duration metric: took 13.306857ms for pod "kube-proxy-j4gzg" in "kube-system" namespace to be "Ready" ...
	I0603 12:12:19.191255   73294 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-196710" in "kube-system" namespace to be "Ready" ...
	I0603 12:12:19.499792   73294 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-196710" in "kube-system" namespace has status "Ready":"True"
	I0603 12:12:19.499815   73294 pod_ready.go:81] duration metric: took 308.552918ms for pod "kube-scheduler-default-k8s-diff-port-196710" in "kube-system" namespace to be "Ready" ...
	I0603 12:12:19.499823   73294 pod_ready.go:38] duration metric: took 2.415698619s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0603 12:12:19.499837   73294 api_server.go:52] waiting for apiserver process to appear ...
	I0603 12:12:19.499881   73294 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:12:19.516655   73294 api_server.go:72] duration metric: took 2.695130179s to wait for apiserver process to appear ...
	I0603 12:12:19.516686   73294 api_server.go:88] waiting for apiserver healthz status ...
	I0603 12:12:19.516707   73294 api_server.go:253] Checking apiserver healthz at https://192.168.61.60:8444/healthz ...
	I0603 12:12:19.521037   73294 api_server.go:279] https://192.168.61.60:8444/healthz returned 200:
	ok
	I0603 12:12:19.521988   73294 api_server.go:141] control plane version: v1.30.1
	I0603 12:12:19.522006   73294 api_server.go:131] duration metric: took 5.313149ms to wait for apiserver health ...
	I0603 12:12:19.522015   73294 system_pods.go:43] waiting for kube-system pods to appear ...
	I0603 12:12:18.141333   73179 addons.go:510] duration metric: took 2.138708426s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0603 12:12:18.445201   73179 system_pods.go:86] 9 kube-system pods found
	I0603 12:12:18.445243   73179 system_pods.go:89] "coredns-7db6d8ff4d-5gmj5" [474da426-9414-4a30-8b19-14e555e192de] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0603 12:12:18.445255   73179 system_pods.go:89] "coredns-7db6d8ff4d-dwptw" [7a0437fe-8e83-4acc-a92a-af29bf06db93] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0603 12:12:18.445266   73179 system_pods.go:89] "etcd-no-preload-602118" [d66d136a-b6c8-411c-b820-3e80f773accf] Running
	I0603 12:12:18.445275   73179 system_pods.go:89] "kube-apiserver-no-preload-602118" [299b92ab-ea99-45f5-b55d-b66a06daeaeb] Running
	I0603 12:12:18.445282   73179 system_pods.go:89] "kube-controller-manager-no-preload-602118" [16fd06ad-ff2d-4392-b453-9a8ed782b581] Running
	I0603 12:12:18.445289   73179 system_pods.go:89] "kube-proxy-tfxkl" [d6502635-478f-443c-8186-ab0616fcf4ac] Running
	I0603 12:12:18.445296   73179 system_pods.go:89] "kube-scheduler-no-preload-602118" [71ebde21-9840-43f5-b0a3-424e1fd2000e] Running
	I0603 12:12:18.445309   73179 system_pods.go:89] "metrics-server-569cc877fc-zpzbw" [b28cb265-532b-41ea-a242-001a85174a35] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0603 12:12:18.445318   73179 system_pods.go:89] "storage-provisioner" [9d9e7c2b-91a9-4394-8a08-a2c076d4b42d] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0603 12:12:18.445347   73179 retry.go:31] will retry after 493.36636ms: missing components: kube-dns
	I0603 12:12:18.950981   73179 system_pods.go:86] 9 kube-system pods found
	I0603 12:12:18.951013   73179 system_pods.go:89] "coredns-7db6d8ff4d-5gmj5" [474da426-9414-4a30-8b19-14e555e192de] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0603 12:12:18.951022   73179 system_pods.go:89] "coredns-7db6d8ff4d-dwptw" [7a0437fe-8e83-4acc-a92a-af29bf06db93] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0603 12:12:18.951028   73179 system_pods.go:89] "etcd-no-preload-602118" [d66d136a-b6c8-411c-b820-3e80f773accf] Running
	I0603 12:12:18.951033   73179 system_pods.go:89] "kube-apiserver-no-preload-602118" [299b92ab-ea99-45f5-b55d-b66a06daeaeb] Running
	I0603 12:12:18.951071   73179 system_pods.go:89] "kube-controller-manager-no-preload-602118" [16fd06ad-ff2d-4392-b453-9a8ed782b581] Running
	I0603 12:12:18.951079   73179 system_pods.go:89] "kube-proxy-tfxkl" [d6502635-478f-443c-8186-ab0616fcf4ac] Running
	I0603 12:12:18.951085   73179 system_pods.go:89] "kube-scheduler-no-preload-602118" [71ebde21-9840-43f5-b0a3-424e1fd2000e] Running
	I0603 12:12:18.951093   73179 system_pods.go:89] "metrics-server-569cc877fc-zpzbw" [b28cb265-532b-41ea-a242-001a85174a35] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0603 12:12:18.951106   73179 system_pods.go:89] "storage-provisioner" [9d9e7c2b-91a9-4394-8a08-a2c076d4b42d] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0603 12:12:18.951123   73179 retry.go:31] will retry after 784.878622ms: missing components: kube-dns
	I0603 12:12:19.743268   73179 system_pods.go:86] 9 kube-system pods found
	I0603 12:12:19.743302   73179 system_pods.go:89] "coredns-7db6d8ff4d-5gmj5" [474da426-9414-4a30-8b19-14e555e192de] Running
	I0603 12:12:19.743310   73179 system_pods.go:89] "coredns-7db6d8ff4d-dwptw" [7a0437fe-8e83-4acc-a92a-af29bf06db93] Running
	I0603 12:12:19.743323   73179 system_pods.go:89] "etcd-no-preload-602118" [d66d136a-b6c8-411c-b820-3e80f773accf] Running
	I0603 12:12:19.743330   73179 system_pods.go:89] "kube-apiserver-no-preload-602118" [299b92ab-ea99-45f5-b55d-b66a06daeaeb] Running
	I0603 12:12:19.743337   73179 system_pods.go:89] "kube-controller-manager-no-preload-602118" [16fd06ad-ff2d-4392-b453-9a8ed782b581] Running
	I0603 12:12:19.743343   73179 system_pods.go:89] "kube-proxy-tfxkl" [d6502635-478f-443c-8186-ab0616fcf4ac] Running
	I0603 12:12:19.743349   73179 system_pods.go:89] "kube-scheduler-no-preload-602118" [71ebde21-9840-43f5-b0a3-424e1fd2000e] Running
	I0603 12:12:19.743365   73179 system_pods.go:89] "metrics-server-569cc877fc-zpzbw" [b28cb265-532b-41ea-a242-001a85174a35] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0603 12:12:19.743376   73179 system_pods.go:89] "storage-provisioner" [9d9e7c2b-91a9-4394-8a08-a2c076d4b42d] Running
	I0603 12:12:19.743388   73179 system_pods.go:126] duration metric: took 3.077217613s to wait for k8s-apps to be running ...
	I0603 12:12:19.743399   73179 system_svc.go:44] waiting for kubelet service to be running ....
	I0603 12:12:19.743440   73179 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0603 12:12:19.759127   73179 system_svc.go:56] duration metric: took 15.720008ms WaitForService to wait for kubelet
	I0603 12:12:19.759152   73179 kubeadm.go:576] duration metric: took 3.756617312s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0603 12:12:19.759177   73179 node_conditions.go:102] verifying NodePressure condition ...
	I0603 12:12:19.761858   73179 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0603 12:12:19.761876   73179 node_conditions.go:123] node cpu capacity is 2
	I0603 12:12:19.761885   73179 node_conditions.go:105] duration metric: took 2.703518ms to run NodePressure ...
	I0603 12:12:19.761894   73179 start.go:240] waiting for startup goroutines ...
	I0603 12:12:19.761901   73179 start.go:245] waiting for cluster config update ...
	I0603 12:12:19.761910   73179 start.go:254] writing updated cluster config ...
	I0603 12:12:19.762150   73179 ssh_runner.go:195] Run: rm -f paused
	I0603 12:12:19.808158   73179 start.go:600] kubectl: 1.30.1, cluster: 1.30.1 (minor skew: 0)
	I0603 12:12:19.810271   73179 out.go:177] * Done! kubectl is now configured to use "no-preload-602118" cluster and "default" namespace by default
	I0603 12:12:17.205144   73662 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0603 12:12:17.215420   73662 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0603 12:12:17.215687   73662 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0603 12:12:19.703391   73294 system_pods.go:59] 9 kube-system pods found
	I0603 12:12:19.703422   73294 system_pods.go:61] "coredns-7db6d8ff4d-fvgqr" [c908a302-8c40-46aa-9e98-92baa297a7ed] Running
	I0603 12:12:19.703428   73294 system_pods.go:61] "coredns-7db6d8ff4d-pbndv" [91d83622-9883-407e-b0f4-eb2d18cd2483] Running
	I0603 12:12:19.703434   73294 system_pods.go:61] "etcd-default-k8s-diff-port-196710" [29eaf8a6-0759-4f27-9b6e-55beeba8f955] Running
	I0603 12:12:19.703439   73294 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-196710" [7bfa3724-0917-40be-89fe-fe5c67f4fd45] Running
	I0603 12:12:19.703444   73294 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-196710" [50e0af3b-d47c-4113-be78-9cf18060b505] Running
	I0603 12:12:19.703448   73294 system_pods.go:61] "kube-proxy-j4gzg" [2e603f37-93e0-429d-97b8-e9b997c26101] Running
	I0603 12:12:19.703453   73294 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-196710" [e50842a0-71ed-4c9e-811e-9b6bda31dfd0] Running
	I0603 12:12:19.703461   73294 system_pods.go:61] "metrics-server-569cc877fc-lxvbp" [36c7a3c5-6b64-42d2-93ed-2f6cf8234a7f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0603 12:12:19.703469   73294 system_pods.go:61] "storage-provisioner" [8bc80b69-d8f9-4d6a-9bf4-4a41d875a735] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0603 12:12:19.703483   73294 system_pods.go:74] duration metric: took 181.460766ms to wait for pod list to return data ...
	I0603 12:12:19.703494   73294 default_sa.go:34] waiting for default service account to be created ...
	I0603 12:12:19.899579   73294 default_sa.go:45] found service account: "default"
	I0603 12:12:19.899607   73294 default_sa.go:55] duration metric: took 196.097132ms for default service account to be created ...
	I0603 12:12:19.899617   73294 system_pods.go:116] waiting for k8s-apps to be running ...
	I0603 12:12:20.104618   73294 system_pods.go:86] 9 kube-system pods found
	I0603 12:12:20.104648   73294 system_pods.go:89] "coredns-7db6d8ff4d-fvgqr" [c908a302-8c40-46aa-9e98-92baa297a7ed] Running
	I0603 12:12:20.104656   73294 system_pods.go:89] "coredns-7db6d8ff4d-pbndv" [91d83622-9883-407e-b0f4-eb2d18cd2483] Running
	I0603 12:12:20.104662   73294 system_pods.go:89] "etcd-default-k8s-diff-port-196710" [29eaf8a6-0759-4f27-9b6e-55beeba8f955] Running
	I0603 12:12:20.104669   73294 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-196710" [7bfa3724-0917-40be-89fe-fe5c67f4fd45] Running
	I0603 12:12:20.104676   73294 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-196710" [50e0af3b-d47c-4113-be78-9cf18060b505] Running
	I0603 12:12:20.104682   73294 system_pods.go:89] "kube-proxy-j4gzg" [2e603f37-93e0-429d-97b8-e9b997c26101] Running
	I0603 12:12:20.104690   73294 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-196710" [e50842a0-71ed-4c9e-811e-9b6bda31dfd0] Running
	I0603 12:12:20.104704   73294 system_pods.go:89] "metrics-server-569cc877fc-lxvbp" [36c7a3c5-6b64-42d2-93ed-2f6cf8234a7f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0603 12:12:20.104716   73294 system_pods.go:89] "storage-provisioner" [8bc80b69-d8f9-4d6a-9bf4-4a41d875a735] Running
	I0603 12:12:20.104733   73294 system_pods.go:126] duration metric: took 205.107424ms to wait for k8s-apps to be running ...
	I0603 12:12:20.104746   73294 system_svc.go:44] waiting for kubelet service to be running ....
	I0603 12:12:20.104794   73294 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0603 12:12:20.120345   73294 system_svc.go:56] duration metric: took 15.592236ms WaitForService to wait for kubelet
	I0603 12:12:20.120374   73294 kubeadm.go:576] duration metric: took 3.298854629s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0603 12:12:20.120398   73294 node_conditions.go:102] verifying NodePressure condition ...
	I0603 12:12:20.299539   73294 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0603 12:12:20.299565   73294 node_conditions.go:123] node cpu capacity is 2
	I0603 12:12:20.299579   73294 node_conditions.go:105] duration metric: took 179.17433ms to run NodePressure ...
	I0603 12:12:20.299593   73294 start.go:240] waiting for startup goroutines ...
	I0603 12:12:20.299602   73294 start.go:245] waiting for cluster config update ...
	I0603 12:12:20.299613   73294 start.go:254] writing updated cluster config ...
	I0603 12:12:20.299896   73294 ssh_runner.go:195] Run: rm -f paused
	I0603 12:12:20.351961   73294 start.go:600] kubectl: 1.30.1, cluster: 1.30.1 (minor skew: 0)
	I0603 12:12:20.354040   73294 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-196710" cluster and "default" namespace by default
	I0603 12:12:22.215864   73662 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0603 12:12:22.216210   73662 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0603 12:12:32.215921   73662 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0603 12:12:32.216130   73662 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0603 12:12:40.270116   72964 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (31.60882832s)
	I0603 12:12:40.270214   72964 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0603 12:12:40.288350   72964 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0603 12:12:40.298477   72964 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0603 12:12:40.308047   72964 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0603 12:12:40.308063   72964 kubeadm.go:156] found existing configuration files:
	
	I0603 12:12:40.308095   72964 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0603 12:12:40.317173   72964 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0603 12:12:40.317221   72964 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0603 12:12:40.326431   72964 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0603 12:12:40.335372   72964 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0603 12:12:40.335421   72964 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0603 12:12:40.345520   72964 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0603 12:12:40.354836   72964 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0603 12:12:40.354881   72964 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0603 12:12:40.364667   72964 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0603 12:12:40.375714   72964 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0603 12:12:40.375768   72964 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0603 12:12:40.387249   72964 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0603 12:12:40.587569   72964 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0603 12:12:49.228482   72964 kubeadm.go:309] [init] Using Kubernetes version: v1.30.1
	I0603 12:12:49.228556   72964 kubeadm.go:309] [preflight] Running pre-flight checks
	I0603 12:12:49.228654   72964 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0603 12:12:49.228817   72964 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0603 12:12:49.228965   72964 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0603 12:12:49.229056   72964 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0603 12:12:49.230616   72964 out.go:204]   - Generating certificates and keys ...
	I0603 12:12:49.230705   72964 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0603 12:12:49.230778   72964 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0603 12:12:49.230884   72964 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0603 12:12:49.230943   72964 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0603 12:12:49.231001   72964 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0603 12:12:49.231071   72964 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0603 12:12:49.231302   72964 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0603 12:12:49.231400   72964 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0603 12:12:49.231487   72964 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0603 12:12:49.231595   72964 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0603 12:12:49.231645   72964 kubeadm.go:309] [certs] Using the existing "sa" key
	I0603 12:12:49.231731   72964 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0603 12:12:49.231842   72964 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0603 12:12:49.231930   72964 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0603 12:12:49.232009   72964 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0603 12:12:49.232105   72964 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0603 12:12:49.232188   72964 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0603 12:12:49.232305   72964 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0603 12:12:49.232392   72964 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0603 12:12:49.234435   72964 out.go:204]   - Booting up control plane ...
	I0603 12:12:49.234513   72964 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0603 12:12:49.234592   72964 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0603 12:12:49.234680   72964 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0603 12:12:49.234803   72964 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0603 12:12:49.234936   72964 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0603 12:12:49.235006   72964 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0603 12:12:49.235182   72964 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0603 12:12:49.235283   72964 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0603 12:12:49.235361   72964 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 502.484209ms
	I0603 12:12:49.235428   72964 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0603 12:12:49.235507   72964 kubeadm.go:309] [api-check] The API server is healthy after 5.001411221s
	I0603 12:12:49.235621   72964 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0603 12:12:49.235730   72964 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0603 12:12:49.235778   72964 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0603 12:12:49.235941   72964 kubeadm.go:309] [mark-control-plane] Marking the node embed-certs-725022 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0603 12:12:49.236026   72964 kubeadm.go:309] [bootstrap-token] Using token: 0tfgxu.iied44jkidnxw3ef
	I0603 12:12:49.237200   72964 out.go:204]   - Configuring RBAC rules ...
	I0603 12:12:49.237290   72964 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0603 12:12:49.237369   72964 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0603 12:12:49.237497   72964 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0603 12:12:49.237671   72964 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0603 12:12:49.237782   72964 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0603 12:12:49.237879   72964 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0603 12:12:49.238007   72964 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0603 12:12:49.238092   72964 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0603 12:12:49.238156   72964 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0603 12:12:49.238166   72964 kubeadm.go:309] 
	I0603 12:12:49.238242   72964 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0603 12:12:49.238250   72964 kubeadm.go:309] 
	I0603 12:12:49.238351   72964 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0603 12:12:49.238359   72964 kubeadm.go:309] 
	I0603 12:12:49.238392   72964 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0603 12:12:49.238472   72964 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0603 12:12:49.238549   72964 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0603 12:12:49.238558   72964 kubeadm.go:309] 
	I0603 12:12:49.238641   72964 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0603 12:12:49.238649   72964 kubeadm.go:309] 
	I0603 12:12:49.238722   72964 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0603 12:12:49.238737   72964 kubeadm.go:309] 
	I0603 12:12:49.238810   72964 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0603 12:12:49.238874   72964 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0603 12:12:49.238931   72964 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0603 12:12:49.238937   72964 kubeadm.go:309] 
	I0603 12:12:49.239007   72964 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0603 12:12:49.239103   72964 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0603 12:12:49.239112   72964 kubeadm.go:309] 
	I0603 12:12:49.239179   72964 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token 0tfgxu.iied44jkidnxw3ef \
	I0603 12:12:49.239305   72964 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:efe6cbdd58a590fb6c3b56a05a1648145bfb13b8a8cd2383ea34b710fa987e0b \
	I0603 12:12:49.239341   72964 kubeadm.go:309] 	--control-plane 
	I0603 12:12:49.239355   72964 kubeadm.go:309] 
	I0603 12:12:49.239457   72964 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0603 12:12:49.239466   72964 kubeadm.go:309] 
	I0603 12:12:49.239574   72964 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token 0tfgxu.iied44jkidnxw3ef \
	I0603 12:12:49.239677   72964 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:efe6cbdd58a590fb6c3b56a05a1648145bfb13b8a8cd2383ea34b710fa987e0b 
	I0603 12:12:49.239688   72964 cni.go:84] Creating CNI manager for ""
	I0603 12:12:49.239694   72964 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0603 12:12:49.241096   72964 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0603 12:12:49.242158   72964 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0603 12:12:49.253535   72964 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
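The 496-byte conflist written above is not reproduced in the log. A minimal bridge CNI configuration of the kind this step installs might look roughly like the following; the network name, bridge name, and pod subnet are illustrative assumptions, not values recovered from this run:

# Illustrative only: write a minimal bridge CNI config like the one scp'd above.
# The actual 1-k8s.conflist contents are not shown in the log.
sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isDefaultGateway": true,
      "ipMasq": true,
      "ipam": {
        "type": "host-local",
        "subnet": "10.244.0.0/16"
      }
    },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}
EOF
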
	I0603 12:12:49.272592   72964 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0603 12:12:49.272655   72964 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:12:49.272699   72964 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-725022 minikube.k8s.io/updated_at=2024_06_03T12_12_49_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=599070631c2216ebc936292d491e4fe10e15b9d8 minikube.k8s.io/name=embed-certs-725022 minikube.k8s.io/primary=true
	I0603 12:12:49.301181   72964 ops.go:34] apiserver oom_adj: -16
	I0603 12:12:49.473931   72964 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:12:49.974552   72964 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:12:50.474107   72964 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:12:50.974508   72964 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:12:51.474202   72964 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:12:51.974903   72964 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:12:52.474722   72964 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:12:52.973981   72964 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:12:53.473979   72964 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:12:53.974372   72964 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:12:54.474057   72964 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:12:52.215684   73662 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0603 12:12:52.215951   73662 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0603 12:12:54.974299   72964 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:12:55.474704   72964 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:12:55.973998   72964 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:12:56.474351   72964 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:12:56.974942   72964 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:12:57.474651   72964 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:12:57.974575   72964 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:12:58.474054   72964 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:12:58.974928   72964 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:12:59.474724   72964 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:12:59.974538   72964 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:13:00.474341   72964 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:13:00.974134   72964 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:13:01.474970   72964 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:13:01.974549   72964 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:13:02.071778   72964 kubeadm.go:1107] duration metric: took 12.799179684s to wait for elevateKubeSystemPrivileges
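The run of identical `kubectl get sa default` calls above is a polling loop: minikube re-issues the check roughly twice a second until the `default` service account exists, and that wait is what the 12.8s `elevateKubeSystemPrivileges` metric measures. A rough shell equivalent of the wait (binary path and kubeconfig taken from the log; the loop itself is a sketch, not minikube's code):

KUBECTL=/var/lib/minikube/binaries/v1.30.1/kubectl
KCFG=/var/lib/minikube/kubeconfig

# Poll until the controller manager has created the "default" service account.
until sudo "$KUBECTL" get sa default --kubeconfig="$KCFG" >/dev/null 2>&1; do
  sleep 0.5
done
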
	W0603 12:13:02.071819   72964 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0603 12:13:02.071826   72964 kubeadm.go:393] duration metric: took 5m13.883244188s to StartCluster
	I0603 12:13:02.071847   72964 settings.go:142] acquiring lock: {Name:mkda1bdbbfe91266270f1d999e6d56fc2830d6f8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 12:13:02.071926   72964 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19008-7755/kubeconfig
	I0603 12:13:02.073849   72964 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19008-7755/kubeconfig: {Name:mk2f45bf5da44c5570e14124ae482cac309d97b6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 12:13:02.074094   72964 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.72.245 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0603 12:13:02.075473   72964 out.go:177] * Verifying Kubernetes components...
	I0603 12:13:02.074201   72964 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0603 12:13:02.074273   72964 config.go:182] Loaded profile config "embed-certs-725022": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0603 12:13:02.076687   72964 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0603 12:13:02.076702   72964 addons.go:69] Setting default-storageclass=true in profile "embed-certs-725022"
	I0603 12:13:02.076709   72964 addons.go:69] Setting metrics-server=true in profile "embed-certs-725022"
	I0603 12:13:02.076735   72964 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-725022"
	I0603 12:13:02.076739   72964 addons.go:234] Setting addon metrics-server=true in "embed-certs-725022"
	W0603 12:13:02.076747   72964 addons.go:243] addon metrics-server should already be in state true
	I0603 12:13:02.076779   72964 host.go:66] Checking if "embed-certs-725022" exists ...
	I0603 12:13:02.077065   72964 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 12:13:02.077105   72964 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 12:13:02.077123   72964 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 12:13:02.077144   72964 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 12:13:02.076690   72964 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-725022"
	I0603 12:13:02.077321   72964 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-725022"
	W0603 12:13:02.077330   72964 addons.go:243] addon storage-provisioner should already be in state true
	I0603 12:13:02.077353   72964 host.go:66] Checking if "embed-certs-725022" exists ...
	I0603 12:13:02.077701   72964 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 12:13:02.077727   72964 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 12:13:02.093285   72964 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38087
	I0603 12:13:02.093594   72964 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41067
	I0603 12:13:02.093714   72964 main.go:141] libmachine: () Calling .GetVersion
	I0603 12:13:02.094085   72964 main.go:141] libmachine: () Calling .GetVersion
	I0603 12:13:02.094294   72964 main.go:141] libmachine: Using API Version  1
	I0603 12:13:02.094315   72964 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 12:13:02.094587   72964 main.go:141] libmachine: Using API Version  1
	I0603 12:13:02.094609   72964 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 12:13:02.094689   72964 main.go:141] libmachine: () Calling .GetMachineName
	I0603 12:13:02.094950   72964 main.go:141] libmachine: () Calling .GetMachineName
	I0603 12:13:02.095244   72964 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 12:13:02.095268   72964 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 12:13:02.095454   72964 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 12:13:02.095491   72964 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 12:13:02.096441   72964 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39221
	I0603 12:13:02.097030   72964 main.go:141] libmachine: () Calling .GetVersion
	I0603 12:13:02.097568   72964 main.go:141] libmachine: Using API Version  1
	I0603 12:13:02.097590   72964 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 12:13:02.097931   72964 main.go:141] libmachine: () Calling .GetMachineName
	I0603 12:13:02.098114   72964 main.go:141] libmachine: (embed-certs-725022) Calling .GetState
	I0603 12:13:02.101980   72964 addons.go:234] Setting addon default-storageclass=true in "embed-certs-725022"
	W0603 12:13:02.102004   72964 addons.go:243] addon default-storageclass should already be in state true
	I0603 12:13:02.102030   72964 host.go:66] Checking if "embed-certs-725022" exists ...
	I0603 12:13:02.102405   72964 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 12:13:02.102443   72964 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 12:13:02.110825   72964 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44273
	I0603 12:13:02.111295   72964 main.go:141] libmachine: () Calling .GetVersion
	I0603 12:13:02.111721   72964 main.go:141] libmachine: Using API Version  1
	I0603 12:13:02.111743   72964 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 12:13:02.112109   72964 main.go:141] libmachine: () Calling .GetMachineName
	I0603 12:13:02.112287   72964 main.go:141] libmachine: (embed-certs-725022) Calling .GetState
	I0603 12:13:02.112969   72964 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46567
	I0603 12:13:02.113391   72964 main.go:141] libmachine: () Calling .GetVersion
	I0603 12:13:02.113883   72964 main.go:141] libmachine: Using API Version  1
	I0603 12:13:02.113898   72964 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 12:13:02.113960   72964 main.go:141] libmachine: (embed-certs-725022) Calling .DriverName
	I0603 12:13:02.115733   72964 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0603 12:13:02.114328   72964 main.go:141] libmachine: () Calling .GetMachineName
	I0603 12:13:02.116913   72964 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0603 12:13:02.116925   72964 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0603 12:13:02.116937   72964 main.go:141] libmachine: (embed-certs-725022) Calling .GetSSHHostname
	I0603 12:13:02.117042   72964 main.go:141] libmachine: (embed-certs-725022) Calling .GetState
	I0603 12:13:02.119310   72964 main.go:141] libmachine: (embed-certs-725022) Calling .DriverName
	I0603 12:13:02.119549   72964 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45585
	I0603 12:13:02.120720   72964 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0603 12:13:02.119998   72964 main.go:141] libmachine: () Calling .GetVersion
	I0603 12:13:02.120276   72964 main.go:141] libmachine: (embed-certs-725022) DBG | domain embed-certs-725022 has defined MAC address 52:54:00:ba:41:8c in network mk-embed-certs-725022
	I0603 12:13:02.122038   72964 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0603 12:13:02.122054   72964 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0603 12:13:02.122072   72964 main.go:141] libmachine: (embed-certs-725022) Calling .GetSSHHostname
	I0603 12:13:02.120815   72964 main.go:141] libmachine: (embed-certs-725022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:41:8c", ip: ""} in network mk-embed-certs-725022: {Iface:virbr3 ExpiryTime:2024-06-03 12:58:10 +0000 UTC Type:0 Mac:52:54:00:ba:41:8c Iaid: IPaddr:192.168.72.245 Prefix:24 Hostname:embed-certs-725022 Clientid:01:52:54:00:ba:41:8c}
	I0603 12:13:02.122134   72964 main.go:141] libmachine: (embed-certs-725022) DBG | domain embed-certs-725022 has defined IP address 192.168.72.245 and MAC address 52:54:00:ba:41:8c in network mk-embed-certs-725022
	I0603 12:13:02.120873   72964 main.go:141] libmachine: (embed-certs-725022) Calling .GetSSHPort
	I0603 12:13:02.121231   72964 main.go:141] libmachine: Using API Version  1
	I0603 12:13:02.122186   72964 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 12:13:02.122623   72964 main.go:141] libmachine: () Calling .GetMachineName
	I0603 12:13:02.122637   72964 main.go:141] libmachine: (embed-certs-725022) Calling .GetSSHKeyPath
	I0603 12:13:02.122823   72964 main.go:141] libmachine: (embed-certs-725022) Calling .GetSSHUsername
	I0603 12:13:02.123306   72964 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 12:13:02.123365   72964 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 12:13:02.123751   72964 sshutil.go:53] new ssh client: &{IP:192.168.72.245 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19008-7755/.minikube/machines/embed-certs-725022/id_rsa Username:docker}
	I0603 12:13:02.125086   72964 main.go:141] libmachine: (embed-certs-725022) DBG | domain embed-certs-725022 has defined MAC address 52:54:00:ba:41:8c in network mk-embed-certs-725022
	I0603 12:13:02.125450   72964 main.go:141] libmachine: (embed-certs-725022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:41:8c", ip: ""} in network mk-embed-certs-725022: {Iface:virbr3 ExpiryTime:2024-06-03 12:58:10 +0000 UTC Type:0 Mac:52:54:00:ba:41:8c Iaid: IPaddr:192.168.72.245 Prefix:24 Hostname:embed-certs-725022 Clientid:01:52:54:00:ba:41:8c}
	I0603 12:13:02.125474   72964 main.go:141] libmachine: (embed-certs-725022) DBG | domain embed-certs-725022 has defined IP address 192.168.72.245 and MAC address 52:54:00:ba:41:8c in network mk-embed-certs-725022
	I0603 12:13:02.125627   72964 main.go:141] libmachine: (embed-certs-725022) Calling .GetSSHPort
	I0603 12:13:02.125863   72964 main.go:141] libmachine: (embed-certs-725022) Calling .GetSSHKeyPath
	I0603 12:13:02.126050   72964 main.go:141] libmachine: (embed-certs-725022) Calling .GetSSHUsername
	I0603 12:13:02.126199   72964 sshutil.go:53] new ssh client: &{IP:192.168.72.245 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19008-7755/.minikube/machines/embed-certs-725022/id_rsa Username:docker}
	I0603 12:13:02.140680   72964 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38775
	I0603 12:13:02.141121   72964 main.go:141] libmachine: () Calling .GetVersion
	I0603 12:13:02.141624   72964 main.go:141] libmachine: Using API Version  1
	I0603 12:13:02.141649   72964 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 12:13:02.142002   72964 main.go:141] libmachine: () Calling .GetMachineName
	I0603 12:13:02.142377   72964 main.go:141] libmachine: (embed-certs-725022) Calling .GetState
	I0603 12:13:02.144249   72964 main.go:141] libmachine: (embed-certs-725022) Calling .DriverName
	I0603 12:13:02.144453   72964 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0603 12:13:02.144469   72964 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0603 12:13:02.144486   72964 main.go:141] libmachine: (embed-certs-725022) Calling .GetSSHHostname
	I0603 12:13:02.147627   72964 main.go:141] libmachine: (embed-certs-725022) DBG | domain embed-certs-725022 has defined MAC address 52:54:00:ba:41:8c in network mk-embed-certs-725022
	I0603 12:13:02.148109   72964 main.go:141] libmachine: (embed-certs-725022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:41:8c", ip: ""} in network mk-embed-certs-725022: {Iface:virbr3 ExpiryTime:2024-06-03 12:58:10 +0000 UTC Type:0 Mac:52:54:00:ba:41:8c Iaid: IPaddr:192.168.72.245 Prefix:24 Hostname:embed-certs-725022 Clientid:01:52:54:00:ba:41:8c}
	I0603 12:13:02.148129   72964 main.go:141] libmachine: (embed-certs-725022) DBG | domain embed-certs-725022 has defined IP address 192.168.72.245 and MAC address 52:54:00:ba:41:8c in network mk-embed-certs-725022
	I0603 12:13:02.148304   72964 main.go:141] libmachine: (embed-certs-725022) Calling .GetSSHPort
	I0603 12:13:02.148486   72964 main.go:141] libmachine: (embed-certs-725022) Calling .GetSSHKeyPath
	I0603 12:13:02.148604   72964 main.go:141] libmachine: (embed-certs-725022) Calling .GetSSHUsername
	I0603 12:13:02.148741   72964 sshutil.go:53] new ssh client: &{IP:192.168.72.245 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19008-7755/.minikube/machines/embed-certs-725022/id_rsa Username:docker}
	I0603 12:13:02.304095   72964 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0603 12:13:02.338638   72964 node_ready.go:35] waiting up to 6m0s for node "embed-certs-725022" to be "Ready" ...
	I0603 12:13:02.347843   72964 node_ready.go:49] node "embed-certs-725022" has status "Ready":"True"
	I0603 12:13:02.347872   72964 node_ready.go:38] duration metric: took 9.197667ms for node "embed-certs-725022" to be "Ready" ...
	I0603 12:13:02.347885   72964 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0603 12:13:02.353074   72964 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-4gbj2" in "kube-system" namespace to be "Ready" ...
	I0603 12:13:02.437841   72964 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0603 12:13:02.477856   72964 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0603 12:13:02.477876   72964 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0603 12:13:02.487138   72964 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0603 12:13:02.530568   72964 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0603 12:13:02.530591   72964 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0603 12:13:02.606906   72964 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0603 12:13:02.606933   72964 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0603 12:13:02.708268   72964 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0603 12:13:03.372809   72964 main.go:141] libmachine: Making call to close driver server
	I0603 12:13:03.372886   72964 main.go:141] libmachine: (embed-certs-725022) Calling .Close
	I0603 12:13:03.372924   72964 main.go:141] libmachine: Making call to close driver server
	I0603 12:13:03.372982   72964 main.go:141] libmachine: (embed-certs-725022) Calling .Close
	I0603 12:13:03.373369   72964 main.go:141] libmachine: Successfully made call to close driver server
	I0603 12:13:03.373457   72964 main.go:141] libmachine: Making call to close connection to plugin binary
	I0603 12:13:03.373472   72964 main.go:141] libmachine: Making call to close driver server
	I0603 12:13:03.373480   72964 main.go:141] libmachine: (embed-certs-725022) Calling .Close
	I0603 12:13:03.373412   72964 main.go:141] libmachine: Successfully made call to close driver server
	I0603 12:13:03.373510   72964 main.go:141] libmachine: Making call to close connection to plugin binary
	I0603 12:13:03.373522   72964 main.go:141] libmachine: Making call to close driver server
	I0603 12:13:03.373533   72964 main.go:141] libmachine: (embed-certs-725022) Calling .Close
	I0603 12:13:03.373417   72964 main.go:141] libmachine: (embed-certs-725022) DBG | Closing plugin on server side
	I0603 12:13:03.373431   72964 main.go:141] libmachine: (embed-certs-725022) DBG | Closing plugin on server side
	I0603 12:13:03.373858   72964 main.go:141] libmachine: Successfully made call to close driver server
	I0603 12:13:03.373873   72964 main.go:141] libmachine: Making call to close connection to plugin binary
	I0603 12:13:03.374065   72964 main.go:141] libmachine: (embed-certs-725022) DBG | Closing plugin on server side
	I0603 12:13:03.374087   72964 main.go:141] libmachine: Successfully made call to close driver server
	I0603 12:13:03.374093   72964 main.go:141] libmachine: Making call to close connection to plugin binary
	I0603 12:13:03.374168   72964 main.go:141] libmachine: (embed-certs-725022) DBG | Closing plugin on server side
	I0603 12:13:03.404799   72964 main.go:141] libmachine: Making call to close driver server
	I0603 12:13:03.404825   72964 main.go:141] libmachine: (embed-certs-725022) Calling .Close
	I0603 12:13:03.405101   72964 main.go:141] libmachine: (embed-certs-725022) DBG | Closing plugin on server side
	I0603 12:13:03.405101   72964 main.go:141] libmachine: Successfully made call to close driver server
	I0603 12:13:03.405125   72964 main.go:141] libmachine: Making call to close connection to plugin binary
	I0603 12:13:03.855630   72964 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.147319188s)
	I0603 12:13:03.855683   72964 main.go:141] libmachine: Making call to close driver server
	I0603 12:13:03.855700   72964 main.go:141] libmachine: (embed-certs-725022) Calling .Close
	I0603 12:13:03.856046   72964 main.go:141] libmachine: (embed-certs-725022) DBG | Closing plugin on server side
	I0603 12:13:03.856085   72964 main.go:141] libmachine: Successfully made call to close driver server
	I0603 12:13:03.856099   72964 main.go:141] libmachine: Making call to close connection to plugin binary
	I0603 12:13:03.856108   72964 main.go:141] libmachine: Making call to close driver server
	I0603 12:13:03.856119   72964 main.go:141] libmachine: (embed-certs-725022) Calling .Close
	I0603 12:13:03.856408   72964 main.go:141] libmachine: Successfully made call to close driver server
	I0603 12:13:03.856426   72964 main.go:141] libmachine: Making call to close connection to plugin binary
	I0603 12:13:03.856436   72964 addons.go:475] Verifying addon metrics-server=true in "embed-certs-725022"
	I0603 12:13:03.858229   72964 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0603 12:13:03.859384   72964 addons.go:510] duration metric: took 1.785186744s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
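With storage-provisioner, default-storageclass and metrics-server reported as enabled, the objects they create can be checked directly in the cluster; the pod listing a few lines below shows metrics-server still Pending at this point. An illustrative manual check (not part of the test run; context name follows minikube's profile-name convention):

# Illustrative verification of the three enabled addons on this profile.
kubectl --context embed-certs-725022 -n kube-system get deploy metrics-server
kubectl --context embed-certs-725022 -n kube-system get pods | grep -E 'metrics-server|storage-provisioner'
kubectl --context embed-certs-725022 get storageclass
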
	I0603 12:13:04.360708   72964 pod_ready.go:102] pod "coredns-7db6d8ff4d-4gbj2" in "kube-system" namespace has status "Ready":"False"
	I0603 12:13:04.860041   72964 pod_ready.go:92] pod "coredns-7db6d8ff4d-4gbj2" in "kube-system" namespace has status "Ready":"True"
	I0603 12:13:04.860064   72964 pod_ready.go:81] duration metric: took 2.506957346s for pod "coredns-7db6d8ff4d-4gbj2" in "kube-system" namespace to be "Ready" ...
	I0603 12:13:04.860077   72964 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-x9fw5" in "kube-system" namespace to be "Ready" ...
	I0603 12:13:04.864947   72964 pod_ready.go:92] pod "coredns-7db6d8ff4d-x9fw5" in "kube-system" namespace has status "Ready":"True"
	I0603 12:13:04.864967   72964 pod_ready.go:81] duration metric: took 4.883476ms for pod "coredns-7db6d8ff4d-x9fw5" in "kube-system" namespace to be "Ready" ...
	I0603 12:13:04.864975   72964 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-725022" in "kube-system" namespace to be "Ready" ...
	I0603 12:13:04.869979   72964 pod_ready.go:92] pod "etcd-embed-certs-725022" in "kube-system" namespace has status "Ready":"True"
	I0603 12:13:04.870000   72964 pod_ready.go:81] duration metric: took 5.018776ms for pod "etcd-embed-certs-725022" in "kube-system" namespace to be "Ready" ...
	I0603 12:13:04.870012   72964 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-725022" in "kube-system" namespace to be "Ready" ...
	I0603 12:13:04.875292   72964 pod_ready.go:92] pod "kube-apiserver-embed-certs-725022" in "kube-system" namespace has status "Ready":"True"
	I0603 12:13:04.875309   72964 pod_ready.go:81] duration metric: took 5.289101ms for pod "kube-apiserver-embed-certs-725022" in "kube-system" namespace to be "Ready" ...
	I0603 12:13:04.875317   72964 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-725022" in "kube-system" namespace to be "Ready" ...
	I0603 12:13:04.883604   72964 pod_ready.go:92] pod "kube-controller-manager-embed-certs-725022" in "kube-system" namespace has status "Ready":"True"
	I0603 12:13:04.883619   72964 pod_ready.go:81] duration metric: took 8.297056ms for pod "kube-controller-manager-embed-certs-725022" in "kube-system" namespace to be "Ready" ...
	I0603 12:13:04.883627   72964 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-7qp6h" in "kube-system" namespace to be "Ready" ...
	I0603 12:13:05.257971   72964 pod_ready.go:92] pod "kube-proxy-7qp6h" in "kube-system" namespace has status "Ready":"True"
	I0603 12:13:05.257994   72964 pod_ready.go:81] duration metric: took 374.360354ms for pod "kube-proxy-7qp6h" in "kube-system" namespace to be "Ready" ...
	I0603 12:13:05.258003   72964 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-725022" in "kube-system" namespace to be "Ready" ...
	I0603 12:13:05.657811   72964 pod_ready.go:92] pod "kube-scheduler-embed-certs-725022" in "kube-system" namespace has status "Ready":"True"
	I0603 12:13:05.657838   72964 pod_ready.go:81] duration metric: took 399.828323ms for pod "kube-scheduler-embed-certs-725022" in "kube-system" namespace to be "Ready" ...
	I0603 12:13:05.657849   72964 pod_ready.go:38] duration metric: took 3.309954137s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0603 12:13:05.657866   72964 api_server.go:52] waiting for apiserver process to appear ...
	I0603 12:13:05.657920   72964 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:13:05.673837   72964 api_server.go:72] duration metric: took 3.599705436s to wait for apiserver process to appear ...
	I0603 12:13:05.673858   72964 api_server.go:88] waiting for apiserver healthz status ...
	I0603 12:13:05.673876   72964 api_server.go:253] Checking apiserver healthz at https://192.168.72.245:8443/healthz ...
	I0603 12:13:05.679549   72964 api_server.go:279] https://192.168.72.245:8443/healthz returned 200:
	ok
	I0603 12:13:05.680688   72964 api_server.go:141] control plane version: v1.30.1
	I0603 12:13:05.680709   72964 api_server.go:131] duration metric: took 6.844232ms to wait for apiserver health ...
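The healthz probe above is an HTTPS GET against the apiserver's /healthz endpoint on the node IP; a 200 response with the plain-text body "ok" is what the check looks for. A manual equivalent (certificate verification is skipped here for brevity, since the apiserver cert is only trusted via the cluster CA):

# Manual equivalent of the apiserver healthz probe in the log.
curl -sk https://192.168.72.245:8443/healthz
# Expected output on a healthy control plane: ok
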
	I0603 12:13:05.680717   72964 system_pods.go:43] waiting for kube-system pods to appear ...
	I0603 12:13:05.861416   72964 system_pods.go:59] 9 kube-system pods found
	I0603 12:13:05.861452   72964 system_pods.go:61] "coredns-7db6d8ff4d-4gbj2" [0e46c731-84e4-4cb2-8125-2b61c10916a3] Running
	I0603 12:13:05.861459   72964 system_pods.go:61] "coredns-7db6d8ff4d-x9fw5" [1ed6c0e0-2d13-410f-bdf1-6620fb2503ed] Running
	I0603 12:13:05.861469   72964 system_pods.go:61] "etcd-embed-certs-725022" [7c8767c0-ca82-495c-92fa-759b698ebd0f] Running
	I0603 12:13:05.861475   72964 system_pods.go:61] "kube-apiserver-embed-certs-725022" [fe019ffc-5b0c-4271-a9dd-830262d1edd9] Running
	I0603 12:13:05.861479   72964 system_pods.go:61] "kube-controller-manager-embed-certs-725022" [8bde2240-7021-4ab7-9e51-2a7b921c4bf1] Running
	I0603 12:13:05.861483   72964 system_pods.go:61] "kube-proxy-7qp6h" [7869cd1d-785d-401d-aceb-854cffd63d73] Running
	I0603 12:13:05.861489   72964 system_pods.go:61] "kube-scheduler-embed-certs-725022" [ff93e1d0-8bb2-4026-b9d2-1710dd9f18b7] Running
	I0603 12:13:05.861497   72964 system_pods.go:61] "metrics-server-569cc877fc-jgmbs" [148d8ece-e094-4df9-989a-1bc59a33b7ca] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0603 12:13:05.861504   72964 system_pods.go:61] "storage-provisioner" [cde9aa2d-6a26-4f83-b5df-ae24b22df27a] Running
	I0603 12:13:05.861515   72964 system_pods.go:74] duration metric: took 180.791789ms to wait for pod list to return data ...
	I0603 12:13:05.861526   72964 default_sa.go:34] waiting for default service account to be created ...
	I0603 12:13:06.058059   72964 default_sa.go:45] found service account: "default"
	I0603 12:13:06.058088   72964 default_sa.go:55] duration metric: took 196.551592ms for default service account to be created ...
	I0603 12:13:06.058100   72964 system_pods.go:116] waiting for k8s-apps to be running ...
	I0603 12:13:06.261793   72964 system_pods.go:86] 9 kube-system pods found
	I0603 12:13:06.261828   72964 system_pods.go:89] "coredns-7db6d8ff4d-4gbj2" [0e46c731-84e4-4cb2-8125-2b61c10916a3] Running
	I0603 12:13:06.261835   72964 system_pods.go:89] "coredns-7db6d8ff4d-x9fw5" [1ed6c0e0-2d13-410f-bdf1-6620fb2503ed] Running
	I0603 12:13:06.261840   72964 system_pods.go:89] "etcd-embed-certs-725022" [7c8767c0-ca82-495c-92fa-759b698ebd0f] Running
	I0603 12:13:06.261846   72964 system_pods.go:89] "kube-apiserver-embed-certs-725022" [fe019ffc-5b0c-4271-a9dd-830262d1edd9] Running
	I0603 12:13:06.261853   72964 system_pods.go:89] "kube-controller-manager-embed-certs-725022" [8bde2240-7021-4ab7-9e51-2a7b921c4bf1] Running
	I0603 12:13:06.261860   72964 system_pods.go:89] "kube-proxy-7qp6h" [7869cd1d-785d-401d-aceb-854cffd63d73] Running
	I0603 12:13:06.261866   72964 system_pods.go:89] "kube-scheduler-embed-certs-725022" [ff93e1d0-8bb2-4026-b9d2-1710dd9f18b7] Running
	I0603 12:13:06.261877   72964 system_pods.go:89] "metrics-server-569cc877fc-jgmbs" [148d8ece-e094-4df9-989a-1bc59a33b7ca] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0603 12:13:06.261888   72964 system_pods.go:89] "storage-provisioner" [cde9aa2d-6a26-4f83-b5df-ae24b22df27a] Running
	I0603 12:13:06.261898   72964 system_pods.go:126] duration metric: took 203.791167ms to wait for k8s-apps to be running ...
	I0603 12:13:06.261910   72964 system_svc.go:44] waiting for kubelet service to be running ....
	I0603 12:13:06.261965   72964 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0603 12:13:06.277270   72964 system_svc.go:56] duration metric: took 15.351048ms WaitForService to wait for kubelet
	I0603 12:13:06.277313   72964 kubeadm.go:576] duration metric: took 4.203172406s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0603 12:13:06.277333   72964 node_conditions.go:102] verifying NodePressure condition ...
	I0603 12:13:06.458480   72964 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0603 12:13:06.458508   72964 node_conditions.go:123] node cpu capacity is 2
	I0603 12:13:06.458519   72964 node_conditions.go:105] duration metric: took 181.181522ms to run NodePressure ...
	I0603 12:13:06.458530   72964 start.go:240] waiting for startup goroutines ...
	I0603 12:13:06.458536   72964 start.go:245] waiting for cluster config update ...
	I0603 12:13:06.458546   72964 start.go:254] writing updated cluster config ...
	I0603 12:13:06.458796   72964 ssh_runner.go:195] Run: rm -f paused
	I0603 12:13:06.511692   72964 start.go:600] kubectl: 1.30.1, cluster: 1.30.1 (minor skew: 0)
	I0603 12:13:06.513617   72964 out.go:177] * Done! kubectl is now configured to use "embed-certs-725022" cluster and "default" namespace by default
	I0603 12:13:32.215819   73662 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0603 12:13:32.216031   73662 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0603 12:13:32.216075   73662 kubeadm.go:309] 
	I0603 12:13:32.216149   73662 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0603 12:13:32.216254   73662 kubeadm.go:309] 		timed out waiting for the condition
	I0603 12:13:32.216284   73662 kubeadm.go:309] 
	I0603 12:13:32.216349   73662 kubeadm.go:309] 	This error is likely caused by:
	I0603 12:13:32.216394   73662 kubeadm.go:309] 		- The kubelet is not running
	I0603 12:13:32.216554   73662 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0603 12:13:32.216577   73662 kubeadm.go:309] 
	I0603 12:13:32.216688   73662 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0603 12:13:32.216722   73662 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0603 12:13:32.216764   73662 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0603 12:13:32.216773   73662 kubeadm.go:309] 
	I0603 12:13:32.216888   73662 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0603 12:13:32.217006   73662 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0603 12:13:32.217031   73662 kubeadm.go:309] 
	I0603 12:13:32.217165   73662 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0603 12:13:32.217278   73662 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0603 12:13:32.217412   73662 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0603 12:13:32.217594   73662 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0603 12:13:32.217618   73662 kubeadm.go:309] 
	I0603 12:13:32.218376   73662 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0603 12:13:32.218449   73662 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0603 12:13:32.218578   73662 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	W0603 12:13:32.218719   73662 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0603 12:13:32.218776   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0603 12:13:32.678357   73662 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0603 12:13:32.693276   73662 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0603 12:13:32.702964   73662 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0603 12:13:32.702986   73662 kubeadm.go:156] found existing configuration files:
	
	I0603 12:13:32.703025   73662 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0603 12:13:32.712508   73662 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0603 12:13:32.712555   73662 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0603 12:13:32.722219   73662 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0603 12:13:32.731648   73662 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0603 12:13:32.731702   73662 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0603 12:13:32.741195   73662 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0603 12:13:32.750711   73662 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0603 12:13:32.750764   73662 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0603 12:13:32.760654   73662 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0603 12:13:32.769838   73662 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0603 12:13:32.769881   73662 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
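The grep/rm pairs above are minikube's stale-config cleanup: for each kubeconfig under /etc/kubernetes it checks whether the file already points at control-plane.minikube.internal:8443 and, if not (here the files simply do not exist, so grep exits with status 2), removes it so the retried kubeadm init can regenerate it. Roughly, as a shell sketch of the same pattern:

# Sketch of the stale-config cleanup performed above.
for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
  if ! sudo grep -q "https://control-plane.minikube.internal:8443" "/etc/kubernetes/$f"; then
    sudo rm -f "/etc/kubernetes/$f"
  fi
done
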
	I0603 12:13:32.780973   73662 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0603 12:13:32.850830   73662 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0603 12:13:32.850883   73662 kubeadm.go:309] [preflight] Running pre-flight checks
	I0603 12:13:32.999201   73662 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0603 12:13:32.999328   73662 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0603 12:13:32.999428   73662 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0603 12:13:33.184771   73662 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0603 12:13:33.187327   73662 out.go:204]   - Generating certificates and keys ...
	I0603 12:13:33.187398   73662 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0603 12:13:33.187487   73662 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0603 12:13:33.187586   73662 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0603 12:13:33.187682   73662 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0603 12:13:33.187788   73662 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0603 12:13:33.187887   73662 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0603 12:13:33.187981   73662 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0603 12:13:33.188107   73662 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0603 12:13:33.188522   73662 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0603 12:13:33.188801   73662 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0603 12:13:33.188880   73662 kubeadm.go:309] [certs] Using the existing "sa" key
	I0603 12:13:33.188991   73662 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0603 12:13:33.334289   73662 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0603 12:13:33.523806   73662 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0603 12:13:33.699531   73662 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0603 12:13:33.750555   73662 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0603 12:13:33.769976   73662 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0603 12:13:33.770924   73662 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0603 12:13:33.770986   73662 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0603 12:13:33.921095   73662 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0603 12:13:33.923915   73662 out.go:204]   - Booting up control plane ...
	I0603 12:13:33.924071   73662 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0603 12:13:33.930998   73662 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0603 12:13:33.934088   73662 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0603 12:13:33.935783   73662 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0603 12:13:33.939727   73662 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0603 12:14:13.940542   73662 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0603 12:14:13.940993   73662 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0603 12:14:13.941324   73662 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0603 12:14:18.941485   73662 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0603 12:14:18.941730   73662 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0603 12:14:28.942021   73662 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0603 12:14:28.942229   73662 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0603 12:14:48.942823   73662 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0603 12:14:48.943115   73662 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0603 12:15:28.944455   73662 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0603 12:15:28.944758   73662 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0603 12:15:28.944781   73662 kubeadm.go:309] 
	I0603 12:15:28.944835   73662 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0603 12:15:28.944914   73662 kubeadm.go:309] 		timed out waiting for the condition
	I0603 12:15:28.944925   73662 kubeadm.go:309] 
	I0603 12:15:28.944965   73662 kubeadm.go:309] 	This error is likely caused by:
	I0603 12:15:28.945008   73662 kubeadm.go:309] 		- The kubelet is not running
	I0603 12:15:28.945152   73662 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0603 12:15:28.945168   73662 kubeadm.go:309] 
	I0603 12:15:28.945322   73662 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0603 12:15:28.945378   73662 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0603 12:15:28.945423   73662 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0603 12:15:28.945433   73662 kubeadm.go:309] 
	I0603 12:15:28.945568   73662 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0603 12:15:28.945695   73662 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0603 12:15:28.945717   73662 kubeadm.go:309] 
	I0603 12:15:28.945883   73662 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0603 12:15:28.946014   73662 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0603 12:15:28.946123   73662 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0603 12:15:28.946234   73662 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0603 12:15:28.946263   73662 kubeadm.go:309] 
	I0603 12:15:28.947236   73662 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0603 12:15:28.947323   73662 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0603 12:15:28.947455   73662 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	I0603 12:15:28.947531   73662 kubeadm.go:393] duration metric: took 7m57.88734097s to StartCluster
	I0603 12:15:28.947585   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 12:15:28.947638   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 12:15:28.993664   73662 cri.go:89] found id: ""
	I0603 12:15:28.993694   73662 logs.go:276] 0 containers: []
	W0603 12:15:28.993705   73662 logs.go:278] No container was found matching "kube-apiserver"
	I0603 12:15:28.993712   73662 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 12:15:28.993774   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 12:15:29.030686   73662 cri.go:89] found id: ""
	I0603 12:15:29.030720   73662 logs.go:276] 0 containers: []
	W0603 12:15:29.030730   73662 logs.go:278] No container was found matching "etcd"
	I0603 12:15:29.030738   73662 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 12:15:29.030803   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 12:15:29.067047   73662 cri.go:89] found id: ""
	I0603 12:15:29.067076   73662 logs.go:276] 0 containers: []
	W0603 12:15:29.067086   73662 logs.go:278] No container was found matching "coredns"
	I0603 12:15:29.067092   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 12:15:29.067154   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 12:15:29.107392   73662 cri.go:89] found id: ""
	I0603 12:15:29.107416   73662 logs.go:276] 0 containers: []
	W0603 12:15:29.107424   73662 logs.go:278] No container was found matching "kube-scheduler"
	I0603 12:15:29.107430   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 12:15:29.107483   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 12:15:29.159886   73662 cri.go:89] found id: ""
	I0603 12:15:29.159916   73662 logs.go:276] 0 containers: []
	W0603 12:15:29.159925   73662 logs.go:278] No container was found matching "kube-proxy"
	I0603 12:15:29.159934   73662 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 12:15:29.159994   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 12:15:29.195187   73662 cri.go:89] found id: ""
	I0603 12:15:29.195218   73662 logs.go:276] 0 containers: []
	W0603 12:15:29.195229   73662 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 12:15:29.195236   73662 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 12:15:29.195295   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 12:15:29.233622   73662 cri.go:89] found id: ""
	I0603 12:15:29.233648   73662 logs.go:276] 0 containers: []
	W0603 12:15:29.233656   73662 logs.go:278] No container was found matching "kindnet"
	I0603 12:15:29.233662   73662 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 12:15:29.233717   73662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 12:15:29.272849   73662 cri.go:89] found id: ""
	I0603 12:15:29.272874   73662 logs.go:276] 0 containers: []
	W0603 12:15:29.272882   73662 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 12:15:29.272891   73662 logs.go:123] Gathering logs for CRI-O ...
	I0603 12:15:29.272901   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 12:15:29.383220   73662 logs.go:123] Gathering logs for container status ...
	I0603 12:15:29.383256   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 12:15:29.424045   73662 logs.go:123] Gathering logs for kubelet ...
	I0603 12:15:29.424076   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 12:15:29.475712   73662 logs.go:123] Gathering logs for dmesg ...
	I0603 12:15:29.475743   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 12:15:29.489841   73662 logs.go:123] Gathering logs for describe nodes ...
	I0603 12:15:29.489868   73662 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 12:15:29.572988   73662 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	W0603 12:15:29.573030   73662 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0603 12:15:29.573068   73662 out.go:239] * 
	W0603 12:15:29.573117   73662 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0603 12:15:29.573138   73662 out.go:239] * 
	W0603 12:15:29.573869   73662 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0603 12:15:29.577458   73662 out.go:177] 
	W0603 12:15:29.578659   73662 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0603 12:15:29.578700   73662 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0603 12:15:29.578716   73662 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0603 12:15:29.580176   73662 out.go:177] 
	
	
	==> CRI-O <==
	Jun 03 12:27:01 old-k8s-version-905554 crio[644]: time="2024-06-03 12:27:01.575562537Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1717417621575532970,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=6cf2fce4-eb8a-4c5f-bb2e-783dbb0e0721 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 03 12:27:01 old-k8s-version-905554 crio[644]: time="2024-06-03 12:27:01.576068405Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=bb1de3a5-be1d-4b1f-8b1c-ab734c8bf71b name=/runtime.v1.RuntimeService/ListContainers
	Jun 03 12:27:01 old-k8s-version-905554 crio[644]: time="2024-06-03 12:27:01.576153849Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=bb1de3a5-be1d-4b1f-8b1c-ab734c8bf71b name=/runtime.v1.RuntimeService/ListContainers
	Jun 03 12:27:01 old-k8s-version-905554 crio[644]: time="2024-06-03 12:27:01.576235907Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=bb1de3a5-be1d-4b1f-8b1c-ab734c8bf71b name=/runtime.v1.RuntimeService/ListContainers
	Jun 03 12:27:01 old-k8s-version-905554 crio[644]: time="2024-06-03 12:27:01.607810950Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=cd78fc0f-def9-4641-88d1-11c767e1d226 name=/runtime.v1.RuntimeService/Version
	Jun 03 12:27:01 old-k8s-version-905554 crio[644]: time="2024-06-03 12:27:01.607880427Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=cd78fc0f-def9-4641-88d1-11c767e1d226 name=/runtime.v1.RuntimeService/Version
	Jun 03 12:27:01 old-k8s-version-905554 crio[644]: time="2024-06-03 12:27:01.608996097Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=eee0c328-1058-4e88-918e-eba69178d026 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 03 12:27:01 old-k8s-version-905554 crio[644]: time="2024-06-03 12:27:01.609476530Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1717417621609452075,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=eee0c328-1058-4e88-918e-eba69178d026 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 03 12:27:01 old-k8s-version-905554 crio[644]: time="2024-06-03 12:27:01.609978935Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f9d0a46e-5bc6-40fa-8f11-60c1e4b4c374 name=/runtime.v1.RuntimeService/ListContainers
	Jun 03 12:27:01 old-k8s-version-905554 crio[644]: time="2024-06-03 12:27:01.610028134Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f9d0a46e-5bc6-40fa-8f11-60c1e4b4c374 name=/runtime.v1.RuntimeService/ListContainers
	Jun 03 12:27:01 old-k8s-version-905554 crio[644]: time="2024-06-03 12:27:01.610056703Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=f9d0a46e-5bc6-40fa-8f11-60c1e4b4c374 name=/runtime.v1.RuntimeService/ListContainers
	Jun 03 12:27:01 old-k8s-version-905554 crio[644]: time="2024-06-03 12:27:01.642903317Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=2a3ec445-e871-4b38-8180-183c3139def8 name=/runtime.v1.RuntimeService/Version
	Jun 03 12:27:01 old-k8s-version-905554 crio[644]: time="2024-06-03 12:27:01.642975903Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=2a3ec445-e871-4b38-8180-183c3139def8 name=/runtime.v1.RuntimeService/Version
	Jun 03 12:27:01 old-k8s-version-905554 crio[644]: time="2024-06-03 12:27:01.644024831Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=64de76c5-3221-4ab2-9536-c9aacf3c0f87 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 03 12:27:01 old-k8s-version-905554 crio[644]: time="2024-06-03 12:27:01.644491821Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1717417621644469905,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=64de76c5-3221-4ab2-9536-c9aacf3c0f87 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 03 12:27:01 old-k8s-version-905554 crio[644]: time="2024-06-03 12:27:01.645032731Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=5a2c9e17-0599-4efd-b0a0-a22d0489aa1a name=/runtime.v1.RuntimeService/ListContainers
	Jun 03 12:27:01 old-k8s-version-905554 crio[644]: time="2024-06-03 12:27:01.645109923Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=5a2c9e17-0599-4efd-b0a0-a22d0489aa1a name=/runtime.v1.RuntimeService/ListContainers
	Jun 03 12:27:01 old-k8s-version-905554 crio[644]: time="2024-06-03 12:27:01.645146925Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=5a2c9e17-0599-4efd-b0a0-a22d0489aa1a name=/runtime.v1.RuntimeService/ListContainers
	Jun 03 12:27:01 old-k8s-version-905554 crio[644]: time="2024-06-03 12:27:01.677926917Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=6b4937c7-d4e3-4330-9fdd-a6b42806d068 name=/runtime.v1.RuntimeService/Version
	Jun 03 12:27:01 old-k8s-version-905554 crio[644]: time="2024-06-03 12:27:01.678018709Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=6b4937c7-d4e3-4330-9fdd-a6b42806d068 name=/runtime.v1.RuntimeService/Version
	Jun 03 12:27:01 old-k8s-version-905554 crio[644]: time="2024-06-03 12:27:01.679283369Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=941f0187-b9c2-457e-aa06-4bbe7626c490 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 03 12:27:01 old-k8s-version-905554 crio[644]: time="2024-06-03 12:27:01.679696794Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1717417621679674210,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=941f0187-b9c2-457e-aa06-4bbe7626c490 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 03 12:27:01 old-k8s-version-905554 crio[644]: time="2024-06-03 12:27:01.680316626Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=0b3f4277-88d5-4546-b300-4b1ca0626e72 name=/runtime.v1.RuntimeService/ListContainers
	Jun 03 12:27:01 old-k8s-version-905554 crio[644]: time="2024-06-03 12:27:01.680384868Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=0b3f4277-88d5-4546-b300-4b1ca0626e72 name=/runtime.v1.RuntimeService/ListContainers
	Jun 03 12:27:01 old-k8s-version-905554 crio[644]: time="2024-06-03 12:27:01.680431386Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=0b3f4277-88d5-4546-b300-4b1ca0626e72 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Jun 3 12:07] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.067618] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.055262] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.836862] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.470521] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.722215] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000008] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.941743] systemd-fstab-generator[567]: Ignoring "noauto" option for root device
	[  +0.062404] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.063439] systemd-fstab-generator[579]: Ignoring "noauto" option for root device
	[  +0.196803] systemd-fstab-generator[593]: Ignoring "noauto" option for root device
	[  +0.150293] systemd-fstab-generator[605]: Ignoring "noauto" option for root device
	[  +0.306355] systemd-fstab-generator[630]: Ignoring "noauto" option for root device
	[  +6.730916] systemd-fstab-generator[834]: Ignoring "noauto" option for root device
	[  +0.064798] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.734692] systemd-fstab-generator[960]: Ignoring "noauto" option for root device
	[ +12.126331] kauditd_printk_skb: 46 callbacks suppressed
	[Jun 3 12:11] systemd-fstab-generator[5043]: Ignoring "noauto" option for root device
	[Jun 3 12:13] systemd-fstab-generator[5319]: Ignoring "noauto" option for root device
	[  +0.071630] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> kernel <==
	 12:27:01 up 19 min,  0 users,  load average: 0.00, 0.01, 0.04
	Linux old-k8s-version-905554 5.10.207 #1 SMP Wed May 22 22:17:16 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Jun 03 12:27:00 old-k8s-version-905554 kubelet[6828]:         /usr/local/go/src/net/ipsock.go:280 +0x4d4
	Jun 03 12:27:00 old-k8s-version-905554 kubelet[6828]: net.(*Resolver).resolveAddrList(0x70c5740, 0x4f7fe40, 0xc000159200, 0x48abf6d, 0x4, 0x48ab5d6, 0x3, 0xc0007a79b0, 0x24, 0x0, ...)
	Jun 03 12:27:00 old-k8s-version-905554 kubelet[6828]:         /usr/local/go/src/net/dial.go:221 +0x47d
	Jun 03 12:27:00 old-k8s-version-905554 kubelet[6828]: net.(*Dialer).DialContext(0xc000bd2840, 0x4f7fe00, 0xc000052030, 0x48ab5d6, 0x3, 0xc0007a79b0, 0x24, 0x0, 0x0, 0x0, ...)
	Jun 03 12:27:00 old-k8s-version-905554 kubelet[6828]:         /usr/local/go/src/net/dial.go:403 +0x22b
	Jun 03 12:27:00 old-k8s-version-905554 kubelet[6828]: k8s.io/kubernetes/vendor/k8s.io/client-go/util/connrotation.(*Dialer).DialContext(0xc000bdb140, 0x4f7fe00, 0xc000052030, 0x48ab5d6, 0x3, 0xc0007a79b0, 0x24, 0x60, 0x7f9e9be7b358, 0x118, ...)
	Jun 03 12:27:00 old-k8s-version-905554 kubelet[6828]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/util/connrotation/connrotation.go:73 +0x7e
	Jun 03 12:27:00 old-k8s-version-905554 kubelet[6828]: net/http.(*Transport).dial(0xc000474000, 0x4f7fe00, 0xc000052030, 0x48ab5d6, 0x3, 0xc0007a79b0, 0x24, 0x0, 0x2, 0x2, ...)
	Jun 03 12:27:00 old-k8s-version-905554 kubelet[6828]:         /usr/local/go/src/net/http/transport.go:1141 +0x1fd
	Jun 03 12:27:00 old-k8s-version-905554 kubelet[6828]: net/http.(*Transport).dialConn(0xc000474000, 0x4f7fe00, 0xc000052030, 0x0, 0xc0003ac480, 0x5, 0xc0007a79b0, 0x24, 0x0, 0xc0006be5a0, ...)
	Jun 03 12:27:00 old-k8s-version-905554 kubelet[6828]:         /usr/local/go/src/net/http/transport.go:1575 +0x1abb
	Jun 03 12:27:00 old-k8s-version-905554 kubelet[6828]: net/http.(*Transport).dialConnFor(0xc000474000, 0xc00094e0b0)
	Jun 03 12:27:00 old-k8s-version-905554 kubelet[6828]:         /usr/local/go/src/net/http/transport.go:1421 +0xc6
	Jun 03 12:27:00 old-k8s-version-905554 kubelet[6828]: created by net/http.(*Transport).queueForDial
	Jun 03 12:27:00 old-k8s-version-905554 kubelet[6828]:         /usr/local/go/src/net/http/transport.go:1390 +0x40f
	Jun 03 12:27:00 old-k8s-version-905554 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Jun 03 12:27:00 old-k8s-version-905554 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Jun 03 12:27:00 old-k8s-version-905554 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 140.
	Jun 03 12:27:00 old-k8s-version-905554 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Jun 03 12:27:00 old-k8s-version-905554 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Jun 03 12:27:00 old-k8s-version-905554 kubelet[6855]: I0603 12:27:00.862741    6855 server.go:416] Version: v1.20.0
	Jun 03 12:27:00 old-k8s-version-905554 kubelet[6855]: I0603 12:27:00.863845    6855 server.go:837] Client rotation is on, will bootstrap in background
	Jun 03 12:27:00 old-k8s-version-905554 kubelet[6855]: I0603 12:27:00.871957    6855 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Jun 03 12:27:00 old-k8s-version-905554 kubelet[6855]: I0603 12:27:00.873489    6855 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
	Jun 03 12:27:00 old-k8s-version-905554 kubelet[6855]: W0603 12:27:00.873539    6855 manager.go:159] Cannot detect current cgroup on cgroup v2
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-905554 -n old-k8s-version-905554
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-905554 -n old-k8s-version-905554: exit status 2 (223.122015ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-905554" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (146.72s)
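Editor's note: the failure mode captured above (kubelet crash-looping with status=255 and "Cannot detect current cgroup on cgroup v2") lines up with the suggestion minikube itself prints. As a hedged sketch only, assuming shell access to the old-k8s-version-905554 VM, a manual follow-up would be roughly the commands already suggested in the output:

	# inspect the kubelet unit and its recent logs (as suggested by kubeadm above)
	systemctl status kubelet
	journalctl -xeu kubelet
	# list any Kubernetes containers CRI-O did manage to start
	crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause
	# retry with the cgroup-driver override minikube suggests; other flags from the
	# original test invocation may also be required
	out/minikube-linux-amd64 start -p old-k8s-version-905554 --extra-config=kubelet.cgroup-driver=systemd

This is a triage sketch, not part of the automated test run recorded in this report.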

                                                
                                    

Test pass (251/318)

Order passed test Duration
3 TestDownloadOnly/v1.20.0/json-events 26.01
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.06
9 TestDownloadOnly/v1.20.0/DeleteAll 0.12
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.12
12 TestDownloadOnly/v1.30.1/json-events 13.1
13 TestDownloadOnly/v1.30.1/preload-exists 0
17 TestDownloadOnly/v1.30.1/LogsDuration 0.06
18 TestDownloadOnly/v1.30.1/DeleteAll 0.12
19 TestDownloadOnly/v1.30.1/DeleteAlwaysSucceeds 0.11
21 TestBinaryMirror 0.54
22 TestOffline 100.5
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.05
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.05
27 TestAddons/Setup 191.39
29 TestAddons/parallel/Registry 16.76
31 TestAddons/parallel/InspektorGadget 11.83
33 TestAddons/parallel/HelmTiller 13.69
35 TestAddons/parallel/CSI 57.69
36 TestAddons/parallel/Headlamp 13.88
37 TestAddons/parallel/CloudSpanner 5.55
38 TestAddons/parallel/LocalPath 13.04
39 TestAddons/parallel/NvidiaDevicePlugin 6.52
40 TestAddons/parallel/Yakd 6.01
44 TestAddons/serial/GCPAuth/Namespaces 0.11
46 TestCertOptions 73.22
47 TestCertExpiration 276.85
49 TestForceSystemdFlag 56.81
50 TestForceSystemdEnv 70.74
52 TestKVMDriverInstallOrUpdate 5.05
56 TestErrorSpam/setup 40.98
57 TestErrorSpam/start 0.32
58 TestErrorSpam/status 0.71
59 TestErrorSpam/pause 1.53
60 TestErrorSpam/unpause 1.53
61 TestErrorSpam/stop 5.24
64 TestFunctional/serial/CopySyncFile 0
65 TestFunctional/serial/StartWithProxy 97.98
66 TestFunctional/serial/AuditLog 0
67 TestFunctional/serial/SoftStart 61
68 TestFunctional/serial/KubeContext 0.04
69 TestFunctional/serial/KubectlGetPods 0.06
72 TestFunctional/serial/CacheCmd/cache/add_remote 3.09
73 TestFunctional/serial/CacheCmd/cache/add_local 2.2
74 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.04
75 TestFunctional/serial/CacheCmd/cache/list 0.04
76 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.22
77 TestFunctional/serial/CacheCmd/cache/cache_reload 1.57
78 TestFunctional/serial/CacheCmd/cache/delete 0.09
79 TestFunctional/serial/MinikubeKubectlCmd 0.1
80 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.09
81 TestFunctional/serial/ExtraConfig 33.33
82 TestFunctional/serial/ComponentHealth 0.06
83 TestFunctional/serial/LogsCmd 1.33
84 TestFunctional/serial/LogsFileCmd 1.4
85 TestFunctional/serial/InvalidService 4.82
87 TestFunctional/parallel/ConfigCmd 0.31
88 TestFunctional/parallel/DashboardCmd 19.94
89 TestFunctional/parallel/DryRun 0.44
90 TestFunctional/parallel/InternationalLanguage 0.13
91 TestFunctional/parallel/StatusCmd 1.33
95 TestFunctional/parallel/ServiceCmdConnect 11.59
96 TestFunctional/parallel/AddonsCmd 0.12
97 TestFunctional/parallel/PersistentVolumeClaim 51.33
99 TestFunctional/parallel/SSHCmd 0.43
100 TestFunctional/parallel/CpCmd 1.29
101 TestFunctional/parallel/MySQL 28.9
102 TestFunctional/parallel/FileSync 0.39
103 TestFunctional/parallel/CertSync 1.27
107 TestFunctional/parallel/NodeLabels 0.07
109 TestFunctional/parallel/NonActiveRuntimeDisabled 0.45
111 TestFunctional/parallel/License 0.68
112 TestFunctional/parallel/Version/short 0.05
113 TestFunctional/parallel/Version/components 0.69
114 TestFunctional/parallel/ServiceCmd/DeployApp 11.2
124 TestFunctional/parallel/ImageCommands/ImageListShort 0.29
125 TestFunctional/parallel/ImageCommands/ImageListTable 0.56
126 TestFunctional/parallel/ImageCommands/ImageListJson 0.47
127 TestFunctional/parallel/ImageCommands/ImageListYaml 0.31
128 TestFunctional/parallel/ImageCommands/ImageBuild 5.72
129 TestFunctional/parallel/ImageCommands/Setup 2.13
130 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 4.02
131 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 2.64
132 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 9.2
133 TestFunctional/parallel/ServiceCmd/List 0.33
134 TestFunctional/parallel/ServiceCmd/JSONOutput 0.38
135 TestFunctional/parallel/ServiceCmd/HTTPS 0.41
136 TestFunctional/parallel/ServiceCmd/Format 0.39
137 TestFunctional/parallel/ServiceCmd/URL 0.41
138 TestFunctional/parallel/ProfileCmd/profile_not_create 0.39
139 TestFunctional/parallel/ProfileCmd/profile_list 0.36
140 TestFunctional/parallel/MountCmd/any-port 10.74
141 TestFunctional/parallel/ProfileCmd/profile_json_output 0.45
142 TestFunctional/parallel/ImageCommands/ImageSaveToFile 1.13
143 TestFunctional/parallel/ImageCommands/ImageRemove 0.6
144 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 1.33
145 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 1.56
146 TestFunctional/parallel/MountCmd/specific-port 1.68
147 TestFunctional/parallel/UpdateContextCmd/no_changes 0.09
148 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.09
149 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.1
150 TestFunctional/parallel/MountCmd/VerifyCleanup 1.32
151 TestFunctional/delete_addon-resizer_images 0.07
152 TestFunctional/delete_my-image_image 0.01
153 TestFunctional/delete_minikube_cached_images 0.01
157 TestMultiControlPlane/serial/StartCluster 274.83
158 TestMultiControlPlane/serial/DeployApp 6.31
159 TestMultiControlPlane/serial/PingHostFromPods 1.19
160 TestMultiControlPlane/serial/AddWorkerNode 45.35
161 TestMultiControlPlane/serial/NodeLabels 0.07
162 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.53
163 TestMultiControlPlane/serial/CopyFile 12.41
165 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 3.48
167 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.39
170 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.37
172 TestMultiControlPlane/serial/RestartCluster 314.83
173 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.36
174 TestMultiControlPlane/serial/AddSecondaryNode 73.28
175 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.53
179 TestJSONOutput/start/Command 55.83
180 TestJSONOutput/start/Audit 0
182 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
183 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
185 TestJSONOutput/pause/Command 0.75
186 TestJSONOutput/pause/Audit 0
188 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
189 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
191 TestJSONOutput/unpause/Command 0.62
192 TestJSONOutput/unpause/Audit 0
194 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
195 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
197 TestJSONOutput/stop/Command 7.36
198 TestJSONOutput/stop/Audit 0
200 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
201 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
202 TestErrorJSONOutput 0.18
207 TestMainNoArgs 0.04
208 TestMinikubeProfile 86.24
211 TestMountStart/serial/StartWithMountFirst 27.27
212 TestMountStart/serial/VerifyMountFirst 0.35
213 TestMountStart/serial/StartWithMountSecond 23.78
214 TestMountStart/serial/VerifyMountSecond 0.36
215 TestMountStart/serial/DeleteFirst 0.68
216 TestMountStart/serial/VerifyMountPostDelete 0.36
217 TestMountStart/serial/Stop 1.27
218 TestMountStart/serial/RestartStopped 22.58
219 TestMountStart/serial/VerifyMountPostStop 0.35
222 TestMultiNode/serial/FreshStart2Nodes 99.95
223 TestMultiNode/serial/DeployApp2Nodes 6.08
224 TestMultiNode/serial/PingHostFrom2Pods 0.77
225 TestMultiNode/serial/AddNode 41.08
226 TestMultiNode/serial/MultiNodeLabels 0.06
227 TestMultiNode/serial/ProfileList 0.2
228 TestMultiNode/serial/CopyFile 6.83
229 TestMultiNode/serial/StopNode 2.34
230 TestMultiNode/serial/StartAfterStop 29.09
232 TestMultiNode/serial/DeleteNode 2.17
234 TestMultiNode/serial/RestartMultiNode 202.3
235 TestMultiNode/serial/ValidateNameConflict 45.45
242 TestScheduledStopUnix 116.6
246 TestRunningBinaryUpgrade 223.71
251 TestNoKubernetes/serial/StartNoK8sWithVersion 0.08
252 TestNoKubernetes/serial/StartWithK8s 93.26
260 TestNetworkPlugins/group/false 3
265 TestPause/serial/Start 70.95
266 TestNoKubernetes/serial/StartWithStopK8s 41.97
274 TestNoKubernetes/serial/Start 52.74
275 TestPause/serial/SecondStartNoReconfiguration 70.21
276 TestNoKubernetes/serial/VerifyK8sNotRunning 0.19
277 TestNoKubernetes/serial/ProfileList 27.27
278 TestNoKubernetes/serial/Stop 1.37
279 TestNoKubernetes/serial/StartNoArgs 21.34
280 TestPause/serial/Pause 0.76
281 TestPause/serial/VerifyStatus 0.27
282 TestPause/serial/Unpause 0.68
283 TestPause/serial/PauseAgain 0.86
284 TestPause/serial/DeletePaused 0.78
285 TestPause/serial/VerifyDeletedResources 1.41
286 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.2
287 TestStoppedBinaryUpgrade/Setup 3.1
288 TestStoppedBinaryUpgrade/Upgrade 125.11
289 TestNetworkPlugins/group/auto/Start 124.62
290 TestNetworkPlugins/group/custom-flannel/Start 87.46
291 TestStoppedBinaryUpgrade/MinikubeLogs 0.9
292 TestNetworkPlugins/group/kindnet/Start 64.3
293 TestNetworkPlugins/group/auto/KubeletFlags 0.2
294 TestNetworkPlugins/group/auto/NetCatPod 10.23
295 TestNetworkPlugins/group/auto/DNS 0.27
296 TestNetworkPlugins/group/auto/Localhost 0.19
297 TestNetworkPlugins/group/auto/HairPin 0.19
298 TestNetworkPlugins/group/flannel/Start 83.42
299 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.24
300 TestNetworkPlugins/group/custom-flannel/NetCatPod 10.3
301 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
302 TestNetworkPlugins/group/custom-flannel/DNS 0.16
303 TestNetworkPlugins/group/custom-flannel/Localhost 0.12
304 TestNetworkPlugins/group/custom-flannel/HairPin 0.12
305 TestNetworkPlugins/group/kindnet/KubeletFlags 0.22
306 TestNetworkPlugins/group/kindnet/NetCatPod 11.24
307 TestNetworkPlugins/group/kindnet/DNS 0.19
308 TestNetworkPlugins/group/enable-default-cni/Start 99.8
309 TestNetworkPlugins/group/kindnet/Localhost 0.15
310 TestNetworkPlugins/group/kindnet/HairPin 0.15
311 TestNetworkPlugins/group/bridge/Start 110.83
312 TestNetworkPlugins/group/flannel/ControllerPod 6.01
313 TestNetworkPlugins/group/flannel/KubeletFlags 0.21
314 TestNetworkPlugins/group/flannel/NetCatPod 11.23
315 TestNetworkPlugins/group/flannel/DNS 0.2
316 TestNetworkPlugins/group/flannel/Localhost 0.17
317 TestNetworkPlugins/group/flannel/HairPin 0.16
318 TestNetworkPlugins/group/calico/Start 102.39
319 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.22
320 TestNetworkPlugins/group/enable-default-cni/NetCatPod 10.27
321 TestNetworkPlugins/group/enable-default-cni/DNS 0.16
322 TestNetworkPlugins/group/enable-default-cni/Localhost 0.13
323 TestNetworkPlugins/group/enable-default-cni/HairPin 0.12
324 TestNetworkPlugins/group/bridge/KubeletFlags 0.26
325 TestNetworkPlugins/group/bridge/NetCatPod 11.28
329 TestStartStop/group/no-preload/serial/FirstStart 134.4
330 TestNetworkPlugins/group/bridge/DNS 0.2
331 TestNetworkPlugins/group/bridge/Localhost 0.19
332 TestNetworkPlugins/group/bridge/HairPin 0.16
334 TestStartStop/group/embed-certs/serial/FirstStart 95.52
335 TestNetworkPlugins/group/calico/ControllerPod 6.01
336 TestNetworkPlugins/group/calico/KubeletFlags 0.23
337 TestNetworkPlugins/group/calico/NetCatPod 12.46
338 TestNetworkPlugins/group/calico/DNS 0.15
339 TestNetworkPlugins/group/calico/Localhost 0.13
340 TestNetworkPlugins/group/calico/HairPin 0.13
342 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 59.64
343 TestStartStop/group/embed-certs/serial/DeployApp 11.31
344 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.97
346 TestStartStop/group/no-preload/serial/DeployApp 10.28
347 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 11.26
348 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1
350 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.92
355 TestStartStop/group/embed-certs/serial/SecondStart 681.92
358 TestStartStop/group/no-preload/serial/SecondStart 614.75
359 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 606.06
360 TestStartStop/group/old-k8s-version/serial/Stop 2.45
361 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.17
372 TestStartStop/group/newest-cni/serial/FirstStart 56.83
373 TestStartStop/group/newest-cni/serial/DeployApp 0
374 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.26
375 TestStartStop/group/newest-cni/serial/Stop 7.33
376 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.17
377 TestStartStop/group/newest-cni/serial/SecondStart 34.3
378 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
379 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
380 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.28
381 TestStartStop/group/newest-cni/serial/Pause 2.38
x
+
TestDownloadOnly/v1.20.0/json-events (26.01s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-730853 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-730853 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (26.014709998s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (26.01s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/preload-exists
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/LogsDuration (0.06s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-730853
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-730853: exit status 85 (56.339707ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-730853 | jenkins | v1.33.1 | 03 Jun 24 10:38 UTC |          |
	|         | -p download-only-730853        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=kvm2                  |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/06/03 10:38:20
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0603 10:38:20.165358   15040 out.go:291] Setting OutFile to fd 1 ...
	I0603 10:38:20.165454   15040 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0603 10:38:20.165464   15040 out.go:304] Setting ErrFile to fd 2...
	I0603 10:38:20.165468   15040 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0603 10:38:20.165651   15040 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19008-7755/.minikube/bin
	W0603 10:38:20.165771   15040 root.go:314] Error reading config file at /home/jenkins/minikube-integration/19008-7755/.minikube/config/config.json: open /home/jenkins/minikube-integration/19008-7755/.minikube/config/config.json: no such file or directory
	I0603 10:38:20.166317   15040 out.go:298] Setting JSON to true
	I0603 10:38:20.167186   15040 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":1245,"bootTime":1717409855,"procs":174,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1060-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0603 10:38:20.167238   15040 start.go:139] virtualization: kvm guest
	I0603 10:38:20.169690   15040 out.go:97] [download-only-730853] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0603 10:38:20.171029   15040 out.go:169] MINIKUBE_LOCATION=19008
	W0603 10:38:20.169787   15040 preload.go:294] Failed to list preload files: open /home/jenkins/minikube-integration/19008-7755/.minikube/cache/preloaded-tarball: no such file or directory
	I0603 10:38:20.169830   15040 notify.go:220] Checking for updates...
	I0603 10:38:20.173552   15040 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0603 10:38:20.174826   15040 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19008-7755/kubeconfig
	I0603 10:38:20.176129   15040 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19008-7755/.minikube
	I0603 10:38:20.177334   15040 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0603 10:38:20.179725   15040 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0603 10:38:20.179934   15040 driver.go:392] Setting default libvirt URI to qemu:///system
	I0603 10:38:20.274955   15040 out.go:97] Using the kvm2 driver based on user configuration
	I0603 10:38:20.274982   15040 start.go:297] selected driver: kvm2
	I0603 10:38:20.274993   15040 start.go:901] validating driver "kvm2" against <nil>
	I0603 10:38:20.275309   15040 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0603 10:38:20.275422   15040 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19008-7755/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0603 10:38:20.289702   15040 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0603 10:38:20.289759   15040 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0603 10:38:20.290204   15040 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0603 10:38:20.290353   15040 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0603 10:38:20.290408   15040 cni.go:84] Creating CNI manager for ""
	I0603 10:38:20.290420   15040 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0603 10:38:20.290428   15040 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0603 10:38:20.290477   15040 start.go:340] cluster config:
	{Name:download-only-730853 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-730853 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0603 10:38:20.291821   15040 iso.go:125] acquiring lock: {Name:mkdc8e745fc6a0fd8e502f6ad2510510ae9abf27 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0603 10:38:20.293615   15040 out.go:97] Downloading VM boot image ...
	I0603 10:38:20.293646   15040 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso.sha256 -> /home/jenkins/minikube-integration/19008-7755/.minikube/cache/iso/amd64/minikube-v1.33.1-1716398070-18934-amd64.iso
	I0603 10:38:30.201077   15040 out.go:97] Starting "download-only-730853" primary control-plane node in "download-only-730853" cluster
	I0603 10:38:30.201103   15040 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0603 10:38:30.310452   15040 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0603 10:38:30.310482   15040 cache.go:56] Caching tarball of preloaded images
	I0603 10:38:30.310621   15040 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0603 10:38:30.312474   15040 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0603 10:38:30.312497   15040 preload.go:237] getting checksum for preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 ...
	I0603 10:38:30.821372   15040 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:f93b07cde9c3289306cbaeb7a1803c19 -> /home/jenkins/minikube-integration/19008-7755/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0603 10:38:43.910973   15040 preload.go:248] saving checksum for preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 ...
	I0603 10:38:43.911166   15040 preload.go:255] verifying checksum of /home/jenkins/minikube-integration/19008-7755/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 ...
	I0603 10:38:44.808452   15040 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0603 10:38:44.808781   15040 profile.go:143] Saving config to /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/download-only-730853/config.json ...
	I0603 10:38:44.808811   15040 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/download-only-730853/config.json: {Name:mk11e9bf9dc00c7ce2b38da13927ffd9371a2f42 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 10:38:44.808955   15040 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0603 10:38:44.809111   15040 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/linux/amd64/kubectl.sha256 -> /home/jenkins/minikube-integration/19008-7755/.minikube/cache/linux/amd64/v1.20.0/kubectl
	
	
	* The control-plane node download-only-730853 host does not exist
	  To start a cluster, run: "minikube start -p download-only-730853"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.06s)

TestDownloadOnly/v1.20.0/DeleteAll (0.12s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.12s)

TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.12s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-730853
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.12s)

TestDownloadOnly/v1.30.1/json-events (13.1s)

=== RUN   TestDownloadOnly/v1.30.1/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-238243 --force --alsologtostderr --kubernetes-version=v1.30.1 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-238243 --force --alsologtostderr --kubernetes-version=v1.30.1 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (13.09650124s)
--- PASS: TestDownloadOnly/v1.30.1/json-events (13.10s)

TestDownloadOnly/v1.30.1/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.30.1/preload-exists
--- PASS: TestDownloadOnly/v1.30.1/preload-exists (0.00s)

TestDownloadOnly/v1.30.1/LogsDuration (0.06s)

=== RUN   TestDownloadOnly/v1.30.1/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-238243
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-238243: exit status 85 (54.585713ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-730853 | jenkins | v1.33.1 | 03 Jun 24 10:38 UTC |                     |
	|         | -p download-only-730853        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.33.1 | 03 Jun 24 10:38 UTC | 03 Jun 24 10:38 UTC |
	| delete  | -p download-only-730853        | download-only-730853 | jenkins | v1.33.1 | 03 Jun 24 10:38 UTC | 03 Jun 24 10:38 UTC |
	| start   | -o=json --download-only        | download-only-238243 | jenkins | v1.33.1 | 03 Jun 24 10:38 UTC |                     |
	|         | -p download-only-238243        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.1   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/06/03 10:38:46
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0603 10:38:46.475573   15282 out.go:291] Setting OutFile to fd 1 ...
	I0603 10:38:46.475818   15282 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0603 10:38:46.475827   15282 out.go:304] Setting ErrFile to fd 2...
	I0603 10:38:46.475831   15282 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0603 10:38:46.475990   15282 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19008-7755/.minikube/bin
	I0603 10:38:46.476529   15282 out.go:298] Setting JSON to true
	I0603 10:38:46.477353   15282 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":1271,"bootTime":1717409855,"procs":172,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1060-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0603 10:38:46.477405   15282 start.go:139] virtualization: kvm guest
	I0603 10:38:46.479737   15282 out.go:97] [download-only-238243] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0603 10:38:46.481274   15282 out.go:169] MINIKUBE_LOCATION=19008
	I0603 10:38:46.479857   15282 notify.go:220] Checking for updates...
	I0603 10:38:46.483923   15282 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0603 10:38:46.485275   15282 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19008-7755/kubeconfig
	I0603 10:38:46.486499   15282 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19008-7755/.minikube
	I0603 10:38:46.487619   15282 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0603 10:38:46.489852   15282 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0603 10:38:46.490031   15282 driver.go:392] Setting default libvirt URI to qemu:///system
	I0603 10:38:46.520534   15282 out.go:97] Using the kvm2 driver based on user configuration
	I0603 10:38:46.520570   15282 start.go:297] selected driver: kvm2
	I0603 10:38:46.520580   15282 start.go:901] validating driver "kvm2" against <nil>
	I0603 10:38:46.520932   15282 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0603 10:38:46.521025   15282 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19008-7755/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0603 10:38:46.535664   15282 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0603 10:38:46.535706   15282 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0603 10:38:46.536288   15282 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0603 10:38:46.536522   15282 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0603 10:38:46.536588   15282 cni.go:84] Creating CNI manager for ""
	I0603 10:38:46.536606   15282 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0603 10:38:46.536617   15282 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0603 10:38:46.536681   15282 start.go:340] cluster config:
	{Name:download-only-238243 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:download-only-238243 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0603 10:38:46.536788   15282 iso.go:125] acquiring lock: {Name:mkdc8e745fc6a0fd8e502f6ad2510510ae9abf27 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0603 10:38:46.538426   15282 out.go:97] Starting "download-only-238243" primary control-plane node in "download-only-238243" cluster
	I0603 10:38:46.538440   15282 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime crio
	I0603 10:38:46.646352   15282 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.30.1/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4
	I0603 10:38:46.646387   15282 cache.go:56] Caching tarball of preloaded images
	I0603 10:38:46.646549   15282 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime crio
	I0603 10:38:46.648469   15282 out.go:97] Downloading Kubernetes v1.30.1 preload ...
	I0603 10:38:46.648493   15282 preload.go:237] getting checksum for preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4 ...
	I0603 10:38:46.756491   15282 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.30.1/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4?checksum=md5:a8c8ea593b2bc93a46ce7b040a44f86d -> /home/jenkins/minikube-integration/19008-7755/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4
	
	
	* The control-plane node download-only-238243 host does not exist
	  To start a cluster, run: "minikube start -p download-only-238243"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.30.1/LogsDuration (0.06s)

TestDownloadOnly/v1.30.1/DeleteAll (0.12s)

=== RUN   TestDownloadOnly/v1.30.1/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.30.1/DeleteAll (0.12s)

TestDownloadOnly/v1.30.1/DeleteAlwaysSucceeds (0.11s)

=== RUN   TestDownloadOnly/v1.30.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-238243
--- PASS: TestDownloadOnly/v1.30.1/DeleteAlwaysSucceeds (0.11s)

TestBinaryMirror (0.54s)

=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-373654 --alsologtostderr --binary-mirror http://127.0.0.1:46559 --driver=kvm2  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-373654" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-373654
--- PASS: TestBinaryMirror (0.54s)

TestOffline (100.5s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-crio-125275 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=crio
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-crio-125275 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=crio: (1m39.147195181s)
helpers_test.go:175: Cleaning up "offline-crio-125275" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-crio-125275
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-crio-125275: (1.354097365s)
--- PASS: TestOffline (100.50s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1029: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-926744
addons_test.go:1029: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-926744: exit status 85 (50.707446ms)

-- stdout --
	* Profile "addons-926744" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-926744"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1040: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-926744
addons_test.go:1040: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-926744: exit status 85 (51.289066ms)

-- stdout --
	* Profile "addons-926744" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-926744"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

TestAddons/Setup (191.39s)

=== RUN   TestAddons/Setup
addons_test.go:110: (dbg) Run:  out/minikube-linux-amd64 start -p addons-926744 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:110: (dbg) Done: out/minikube-linux-amd64 start -p addons-926744 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=helm-tiller: (3m11.388512743s)
--- PASS: TestAddons/Setup (191.39s)

TestAddons/parallel/Registry (16.76s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:332: registry stabilized in 22.407065ms
addons_test.go:334: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-v8sfs" [ae4c2ffe-ab57-4327-a6c0-25504bcd327b] Running
addons_test.go:334: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.005173343s
addons_test.go:337: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-mhm9h" [28fbb401-9bee-4e8b-98e2-67e9fbcc54d4] Running
addons_test.go:337: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.00734065s
addons_test.go:342: (dbg) Run:  kubectl --context addons-926744 delete po -l run=registry-test --now
addons_test.go:347: (dbg) Run:  kubectl --context addons-926744 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:347: (dbg) Done: kubectl --context addons-926744 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (6.011236151s)
addons_test.go:361: (dbg) Run:  out/minikube-linux-amd64 -p addons-926744 ip
2024/06/03 10:42:28 [DEBUG] GET http://192.168.39.188:5000
addons_test.go:390: (dbg) Run:  out/minikube-linux-amd64 -p addons-926744 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (16.76s)

TestAddons/parallel/InspektorGadget (11.83s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:840: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-spqh9" [a5fee458-a39c-488b-af0b-9ea2a6cf30cf] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:840: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.004888122s
addons_test.go:843: (dbg) Run:  out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-926744
addons_test.go:843: (dbg) Done: out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-926744: (5.826002715s)
--- PASS: TestAddons/parallel/InspektorGadget (11.83s)

TestAddons/parallel/HelmTiller (13.69s)

=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:458: tiller-deploy stabilized in 22.755585ms
addons_test.go:460: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:344: "tiller-deploy-6677d64bcd-9kcxj" [fc636068-af58-4546-9600-7cee9712ca32] Running
addons_test.go:460: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 5.005942054s
addons_test.go:475: (dbg) Run:  kubectl --context addons-926744 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:475: (dbg) Done: kubectl --context addons-926744 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: (8.036368249s)
addons_test.go:492: (dbg) Run:  out/minikube-linux-amd64 -p addons-926744 addons disable helm-tiller --alsologtostderr -v=1
--- PASS: TestAddons/parallel/HelmTiller (13.69s)

TestAddons/parallel/CSI (57.69s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
addons_test.go:563: csi-hostpath-driver pods stabilized in 6.141373ms
addons_test.go:566: (dbg) Run:  kubectl --context addons-926744 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:571: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-926744 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-926744 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-926744 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-926744 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-926744 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-926744 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-926744 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-926744 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-926744 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-926744 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-926744 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-926744 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-926744 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-926744 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-926744 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-926744 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-926744 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:576: (dbg) Run:  kubectl --context addons-926744 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:581: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [c60856aa-5c27-42ed-bda9-1772b41f5592] Pending
helpers_test.go:344: "task-pv-pod" [c60856aa-5c27-42ed-bda9-1772b41f5592] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [c60856aa-5c27-42ed-bda9-1772b41f5592] Running
addons_test.go:581: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 15.004329587s
addons_test.go:586: (dbg) Run:  kubectl --context addons-926744 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:591: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-926744 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-926744 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:596: (dbg) Run:  kubectl --context addons-926744 delete pod task-pv-pod
addons_test.go:596: (dbg) Done: kubectl --context addons-926744 delete pod task-pv-pod: (1.363728693s)
addons_test.go:602: (dbg) Run:  kubectl --context addons-926744 delete pvc hpvc
addons_test.go:608: (dbg) Run:  kubectl --context addons-926744 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:613: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-926744 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-926744 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-926744 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-926744 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-926744 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-926744 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-926744 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:618: (dbg) Run:  kubectl --context addons-926744 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:623: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [c7e8ebfd-6ec0-46ce-9c28-04b41a1fb4be] Pending
helpers_test.go:344: "task-pv-pod-restore" [c7e8ebfd-6ec0-46ce-9c28-04b41a1fb4be] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [c7e8ebfd-6ec0-46ce-9c28-04b41a1fb4be] Running
addons_test.go:623: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.005674419s
addons_test.go:628: (dbg) Run:  kubectl --context addons-926744 delete pod task-pv-pod-restore
addons_test.go:628: (dbg) Done: kubectl --context addons-926744 delete pod task-pv-pod-restore: (1.689978385s)
addons_test.go:632: (dbg) Run:  kubectl --context addons-926744 delete pvc hpvc-restore
addons_test.go:636: (dbg) Run:  kubectl --context addons-926744 delete volumesnapshot new-snapshot-demo
addons_test.go:640: (dbg) Run:  out/minikube-linux-amd64 -p addons-926744 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:640: (dbg) Done: out/minikube-linux-amd64 -p addons-926744 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.722456341s)
addons_test.go:644: (dbg) Run:  out/minikube-linux-amd64 -p addons-926744 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (57.69s)

TestAddons/parallel/Headlamp (13.88s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:826: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-926744 --alsologtostderr -v=1
addons_test.go:831: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-68456f997b-7jxcw" [61e5ce61-19bd-4190-a787-83d69ca4a957] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-68456f997b-7jxcw" [61e5ce61-19bd-4190-a787-83d69ca4a957] Running
addons_test.go:831: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 13.003692865s
--- PASS: TestAddons/parallel/Headlamp (13.88s)

TestAddons/parallel/CloudSpanner (5.55s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:859: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-6fcd4f6f98-xxlsx" [b6cdda5b-66b9-4e1b-8875-80b815fbf958] Running
addons_test.go:859: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.004386361s
addons_test.go:862: (dbg) Run:  out/minikube-linux-amd64 addons disable cloud-spanner -p addons-926744
--- PASS: TestAddons/parallel/CloudSpanner (5.55s)

TestAddons/parallel/LocalPath (13.04s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:974: (dbg) Run:  kubectl --context addons-926744 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:980: (dbg) Run:  kubectl --context addons-926744 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:984: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-926744 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-926744 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-926744 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-926744 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-926744 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-926744 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-926744 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-926744 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:987: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [3d71f8c8-433a-422d-92f1-2e581196eb68] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [3d71f8c8-433a-422d-92f1-2e581196eb68] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [3d71f8c8-433a-422d-92f1-2e581196eb68] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:987: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 5.004534159s
addons_test.go:992: (dbg) Run:  kubectl --context addons-926744 get pvc test-pvc -o=json
addons_test.go:1001: (dbg) Run:  out/minikube-linux-amd64 -p addons-926744 ssh "cat /opt/local-path-provisioner/pvc-c91d9397-ba00-4758-81d9-86e4e7e60cde_default_test-pvc/file1"
addons_test.go:1013: (dbg) Run:  kubectl --context addons-926744 delete pod test-local-path
addons_test.go:1017: (dbg) Run:  kubectl --context addons-926744 delete pvc test-pvc
addons_test.go:1021: (dbg) Run:  out/minikube-linux-amd64 -p addons-926744 addons disable storage-provisioner-rancher --alsologtostderr -v=1
--- PASS: TestAddons/parallel/LocalPath (13.04s)

TestAddons/parallel/NvidiaDevicePlugin (6.52s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1053: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-xsjk2" [6e714474-e47d-438a-8c5f-6f4fc07169af] Running
addons_test.go:1053: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.005306271s
addons_test.go:1056: (dbg) Run:  out/minikube-linux-amd64 addons disable nvidia-device-plugin -p addons-926744
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.52s)

TestAddons/parallel/Yakd (6.01s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

=== CONT  TestAddons/parallel/Yakd
addons_test.go:1064: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-5ddbf7d777-ljsqm" [4efec2ba-9b7e-4693-984d-3f075be141e3] Running
addons_test.go:1064: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.003991323s
--- PASS: TestAddons/parallel/Yakd (6.01s)

TestAddons/serial/GCPAuth/Namespaces (0.11s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:652: (dbg) Run:  kubectl --context addons-926744 create ns new-namespace
addons_test.go:666: (dbg) Run:  kubectl --context addons-926744 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.11s)

TestCertOptions (73.22s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-430151 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-430151 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio: (1m12.027116543s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-430151 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-430151 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-430151 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-430151" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-430151
--- PASS: TestCertOptions (73.22s)

TestCertExpiration (276.85s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-949809 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-949809 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=crio: (1m16.166711629s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-949809 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-949809 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio: (19.921697301s)
helpers_test.go:175: Cleaning up "cert-expiration-949809" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-949809
--- PASS: TestCertExpiration (276.85s)

TestForceSystemdFlag (56.81s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-339689 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-339689 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (55.831947128s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-339689 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-339689" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-339689
--- PASS: TestForceSystemdFlag (56.81s)

TestForceSystemdEnv (70.74s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-164387 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-164387 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (1m9.961829185s)
helpers_test.go:175: Cleaning up "force-systemd-env-164387" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-164387
--- PASS: TestForceSystemdEnv (70.74s)

TestKVMDriverInstallOrUpdate (5.05s)

=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate

=== CONT  TestKVMDriverInstallOrUpdate
--- PASS: TestKVMDriverInstallOrUpdate (5.05s)

TestErrorSpam/setup (40.98s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-018396 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-018396 --driver=kvm2  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-018396 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-018396 --driver=kvm2  --container-runtime=crio: (40.978652515s)
--- PASS: TestErrorSpam/setup (40.98s)

TestErrorSpam/start (0.32s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-018396 --log_dir /tmp/nospam-018396 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-018396 --log_dir /tmp/nospam-018396 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-018396 --log_dir /tmp/nospam-018396 start --dry-run
--- PASS: TestErrorSpam/start (0.32s)

TestErrorSpam/status (0.71s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-018396 --log_dir /tmp/nospam-018396 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-018396 --log_dir /tmp/nospam-018396 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-018396 --log_dir /tmp/nospam-018396 status
--- PASS: TestErrorSpam/status (0.71s)

TestErrorSpam/pause (1.53s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-018396 --log_dir /tmp/nospam-018396 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-018396 --log_dir /tmp/nospam-018396 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-018396 --log_dir /tmp/nospam-018396 pause
--- PASS: TestErrorSpam/pause (1.53s)

                                                
                                    
TestErrorSpam/unpause (1.53s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-018396 --log_dir /tmp/nospam-018396 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-018396 --log_dir /tmp/nospam-018396 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-018396 --log_dir /tmp/nospam-018396 unpause
--- PASS: TestErrorSpam/unpause (1.53s)

                                                
                                    
TestErrorSpam/stop (5.24s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-018396 --log_dir /tmp/nospam-018396 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-018396 --log_dir /tmp/nospam-018396 stop: (2.279825311s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-018396 --log_dir /tmp/nospam-018396 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-018396 --log_dir /tmp/nospam-018396 stop: (1.11659987s)
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-018396 --log_dir /tmp/nospam-018396 stop
error_spam_test.go:182: (dbg) Done: out/minikube-linux-amd64 -p nospam-018396 --log_dir /tmp/nospam-018396 stop: (1.840805637s)
--- PASS: TestErrorSpam/stop (5.24s)

                                                
                                    
TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1851: local sync path: /home/jenkins/minikube-integration/19008-7755/.minikube/files/etc/test/nested/copy/15028/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
TestFunctional/serial/StartWithProxy (97.98s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2230: (dbg) Run:  out/minikube-linux-amd64 start -p functional-835483 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio
E0603 10:52:12.037849   15028 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/addons-926744/client.crt: no such file or directory
E0603 10:52:12.044536   15028 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/addons-926744/client.crt: no such file or directory
E0603 10:52:12.054750   15028 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/addons-926744/client.crt: no such file or directory
E0603 10:52:12.074971   15028 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/addons-926744/client.crt: no such file or directory
E0603 10:52:12.115238   15028 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/addons-926744/client.crt: no such file or directory
E0603 10:52:12.195525   15028 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/addons-926744/client.crt: no such file or directory
E0603 10:52:12.355903   15028 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/addons-926744/client.crt: no such file or directory
E0603 10:52:12.676459   15028 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/addons-926744/client.crt: no such file or directory
E0603 10:52:13.317347   15028 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/addons-926744/client.crt: no such file or directory
E0603 10:52:14.597821   15028 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/addons-926744/client.crt: no such file or directory
E0603 10:52:17.158632   15028 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/addons-926744/client.crt: no such file or directory
E0603 10:52:22.279546   15028 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/addons-926744/client.crt: no such file or directory
E0603 10:52:32.519841   15028 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/addons-926744/client.crt: no such file or directory
E0603 10:52:53.000311   15028 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/addons-926744/client.crt: no such file or directory
functional_test.go:2230: (dbg) Done: out/minikube-linux-amd64 start -p functional-835483 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio: (1m37.983855265s)
--- PASS: TestFunctional/serial/StartWithProxy (97.98s)

                                                
                                    
TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
TestFunctional/serial/SoftStart (61s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-linux-amd64 start -p functional-835483 --alsologtostderr -v=8
E0603 10:53:33.962390   15028 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/addons-926744/client.crt: no such file or directory
functional_test.go:655: (dbg) Done: out/minikube-linux-amd64 start -p functional-835483 --alsologtostderr -v=8: (1m0.997185237s)
functional_test.go:659: soft start took 1m0.997864933s for "functional-835483" cluster.
--- PASS: TestFunctional/serial/SoftStart (61.00s)

                                                
                                    
TestFunctional/serial/KubeContext (0.04s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

                                                
                                    
TestFunctional/serial/KubectlGetPods (0.06s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-835483 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.06s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_remote (3.09s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-835483 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-835483 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Done: out/minikube-linux-amd64 -p functional-835483 cache add registry.k8s.io/pause:3.3: (1.138260375s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-835483 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.09s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_local (2.2s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-835483 /tmp/TestFunctionalserialCacheCmdcacheadd_local3256738013/001
functional_test.go:1085: (dbg) Run:  out/minikube-linux-amd64 -p functional-835483 cache add minikube-local-cache-test:functional-835483
functional_test.go:1085: (dbg) Done: out/minikube-linux-amd64 -p functional-835483 cache add minikube-local-cache-test:functional-835483: (1.856119861s)
functional_test.go:1090: (dbg) Run:  out/minikube-linux-amd64 -p functional-835483 cache delete minikube-local-cache-test:functional-835483
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-835483
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (2.20s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/list (0.04s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.04s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.22s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-linux-amd64 -p functional-835483 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.22s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/cache_reload (1.57s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-linux-amd64 -p functional-835483 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Run:  out/minikube-linux-amd64 -p functional-835483 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-835483 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (200.560798ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1154: (dbg) Run:  out/minikube-linux-amd64 -p functional-835483 cache reload
functional_test.go:1159: (dbg) Run:  out/minikube-linux-amd64 -p functional-835483 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.57s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/delete (0.09s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.09s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmd (0.1s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-linux-amd64 -p functional-835483 kubectl -- --context functional-835483 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.10s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.09s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context functional-835483 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.09s)

                                                
                                    
TestFunctional/serial/ExtraConfig (33.33s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-linux-amd64 start -p functional-835483 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E0603 10:54:55.883015   15028 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/addons-926744/client.crt: no such file or directory
functional_test.go:753: (dbg) Done: out/minikube-linux-amd64 start -p functional-835483 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (33.326709044s)
functional_test.go:757: restart took 33.326826685s for "functional-835483" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (33.33s)

                                                
                                    
TestFunctional/serial/ComponentHealth (0.06s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-835483 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:821: etcd phase: Running
functional_test.go:831: etcd status: Ready
functional_test.go:821: kube-apiserver phase: Running
functional_test.go:831: kube-apiserver status: Ready
functional_test.go:821: kube-controller-manager phase: Running
functional_test.go:831: kube-controller-manager status: Ready
functional_test.go:821: kube-scheduler phase: Running
functional_test.go:831: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.06s)

                                                
                                    
TestFunctional/serial/LogsCmd (1.33s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-linux-amd64 -p functional-835483 logs
functional_test.go:1232: (dbg) Done: out/minikube-linux-amd64 -p functional-835483 logs: (1.332704293s)
--- PASS: TestFunctional/serial/LogsCmd (1.33s)

                                                
                                    
TestFunctional/serial/LogsFileCmd (1.4s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-linux-amd64 -p functional-835483 logs --file /tmp/TestFunctionalserialLogsFileCmd2741353836/001/logs.txt
functional_test.go:1246: (dbg) Done: out/minikube-linux-amd64 -p functional-835483 logs --file /tmp/TestFunctionalserialLogsFileCmd2741353836/001/logs.txt: (1.396108959s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.40s)

                                                
                                    
TestFunctional/serial/InvalidService (4.82s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2317: (dbg) Run:  kubectl --context functional-835483 apply -f testdata/invalidsvc.yaml
functional_test.go:2331: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-835483
functional_test.go:2331: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-835483: exit status 115 (263.244894ms)

                                                
                                                
-- stdout --
	|-----------|-------------|-------------|-----------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |             URL             |
	|-----------|-------------|-------------|-----------------------------|
	| default   | invalid-svc |          80 | http://192.168.39.127:32187 |
	|-----------|-------------|-------------|-----------------------------|
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2323: (dbg) Run:  kubectl --context functional-835483 delete -f testdata/invalidsvc.yaml
functional_test.go:2323: (dbg) Done: kubectl --context functional-835483 delete -f testdata/invalidsvc.yaml: (1.35309342s)
--- PASS: TestFunctional/serial/InvalidService (4.82s)

                                                
                                    
TestFunctional/parallel/ConfigCmd (0.31s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-835483 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-835483 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-835483 config get cpus: exit status 14 (52.415487ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-835483 config set cpus 2
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-835483 config get cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-835483 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-835483 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-835483 config get cpus: exit status 14 (45.859384ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.31s)

                                                
                                    
TestFunctional/parallel/DashboardCmd (19.94s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-835483 --alsologtostderr -v=1]
functional_test.go:906: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-835483 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 24109: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (19.94s)

                                                
                                    
TestFunctional/parallel/DryRun (0.44s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-linux-amd64 start -p functional-835483 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:970: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-835483 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (292.302734ms)

                                                
                                                
-- stdout --
	* [functional-835483] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19008
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19008-7755/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19008-7755/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0603 10:55:33.403274   23842 out.go:291] Setting OutFile to fd 1 ...
	I0603 10:55:33.404720   23842 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0603 10:55:33.404735   23842 out.go:304] Setting ErrFile to fd 2...
	I0603 10:55:33.406671   23842 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0603 10:55:33.407107   23842 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19008-7755/.minikube/bin
	I0603 10:55:33.407819   23842 out.go:298] Setting JSON to false
	I0603 10:55:33.408984   23842 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":2278,"bootTime":1717409855,"procs":220,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1060-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0603 10:55:33.409060   23842 start.go:139] virtualization: kvm guest
	I0603 10:55:33.411323   23842 out.go:177] * [functional-835483] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0603 10:55:33.413161   23842 out.go:177]   - MINIKUBE_LOCATION=19008
	I0603 10:55:33.413769   23842 notify.go:220] Checking for updates...
	I0603 10:55:33.416235   23842 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0603 10:55:33.417824   23842 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19008-7755/kubeconfig
	I0603 10:55:33.420432   23842 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19008-7755/.minikube
	I0603 10:55:33.421896   23842 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0603 10:55:33.432212   23842 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0603 10:55:33.434176   23842 config.go:182] Loaded profile config "functional-835483": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0603 10:55:33.434976   23842 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 10:55:33.435089   23842 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 10:55:33.462745   23842 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37889
	I0603 10:55:33.463079   23842 main.go:141] libmachine: () Calling .GetVersion
	I0603 10:55:33.463729   23842 main.go:141] libmachine: Using API Version  1
	I0603 10:55:33.463752   23842 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 10:55:33.465059   23842 main.go:141] libmachine: () Calling .GetMachineName
	I0603 10:55:33.465350   23842 main.go:141] libmachine: (functional-835483) Calling .DriverName
	I0603 10:55:33.465725   23842 driver.go:392] Setting default libvirt URI to qemu:///system
	I0603 10:55:33.466184   23842 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 10:55:33.466233   23842 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 10:55:33.481892   23842 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46477
	I0603 10:55:33.482454   23842 main.go:141] libmachine: () Calling .GetVersion
	I0603 10:55:33.482899   23842 main.go:141] libmachine: Using API Version  1
	I0603 10:55:33.482916   23842 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 10:55:33.483324   23842 main.go:141] libmachine: () Calling .GetMachineName
	I0603 10:55:33.483514   23842 main.go:141] libmachine: (functional-835483) Calling .DriverName
	I0603 10:55:33.605028   23842 out.go:177] * Using the kvm2 driver based on existing profile
	I0603 10:55:33.606862   23842 start.go:297] selected driver: kvm2
	I0603 10:55:33.606887   23842 start.go:901] validating driver "kvm2" against &{Name:functional-835483 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:functional-835483 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.127 Port:8441 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0603 10:55:33.607023   23842 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0603 10:55:33.609179   23842 out.go:177] 
	W0603 10:55:33.610642   23842 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0603 10:55:33.612138   23842 out.go:177] 

                                                
                                                
** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-linux-amd64 start -p functional-835483 --dry-run --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.44s)

                                                
                                    
TestFunctional/parallel/InternationalLanguage (0.13s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-linux-amd64 start -p functional-835483 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-835483 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (132.406726ms)

                                                
                                                
-- stdout --
	* [functional-835483] minikube v1.33.1 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19008
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19008-7755/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19008-7755/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote kvm2 basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0603 10:55:33.823493   23971 out.go:291] Setting OutFile to fd 1 ...
	I0603 10:55:33.824003   23971 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0603 10:55:33.824017   23971 out.go:304] Setting ErrFile to fd 2...
	I0603 10:55:33.824023   23971 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0603 10:55:33.824608   23971 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19008-7755/.minikube/bin
	I0603 10:55:33.825542   23971 out.go:298] Setting JSON to false
	I0603 10:55:33.826429   23971 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":2279,"bootTime":1717409855,"procs":232,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1060-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0603 10:55:33.826490   23971 start.go:139] virtualization: kvm guest
	I0603 10:55:33.828468   23971 out.go:177] * [functional-835483] minikube v1.33.1 sur Ubuntu 20.04 (kvm/amd64)
	I0603 10:55:33.830550   23971 out.go:177]   - MINIKUBE_LOCATION=19008
	I0603 10:55:33.830559   23971 notify.go:220] Checking for updates...
	I0603 10:55:33.832016   23971 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0603 10:55:33.833695   23971 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19008-7755/kubeconfig
	I0603 10:55:33.835126   23971 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19008-7755/.minikube
	I0603 10:55:33.836536   23971 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0603 10:55:33.838021   23971 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0603 10:55:33.839795   23971 config.go:182] Loaded profile config "functional-835483": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0603 10:55:33.840242   23971 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 10:55:33.840283   23971 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 10:55:33.855390   23971 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43749
	I0603 10:55:33.855780   23971 main.go:141] libmachine: () Calling .GetVersion
	I0603 10:55:33.856289   23971 main.go:141] libmachine: Using API Version  1
	I0603 10:55:33.856308   23971 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 10:55:33.856611   23971 main.go:141] libmachine: () Calling .GetMachineName
	I0603 10:55:33.856786   23971 main.go:141] libmachine: (functional-835483) Calling .DriverName
	I0603 10:55:33.857004   23971 driver.go:392] Setting default libvirt URI to qemu:///system
	I0603 10:55:33.857286   23971 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 10:55:33.857317   23971 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 10:55:33.871462   23971 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43617
	I0603 10:55:33.871952   23971 main.go:141] libmachine: () Calling .GetVersion
	I0603 10:55:33.872500   23971 main.go:141] libmachine: Using API Version  1
	I0603 10:55:33.872520   23971 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 10:55:33.872799   23971 main.go:141] libmachine: () Calling .GetMachineName
	I0603 10:55:33.873021   23971 main.go:141] libmachine: (functional-835483) Calling .DriverName
	I0603 10:55:33.905602   23971 out.go:177] * Utilisation du pilote kvm2 basé sur le profil existant
	I0603 10:55:33.906991   23971 start.go:297] selected driver: kvm2
	I0603 10:55:33.907005   23971 start.go:901] validating driver "kvm2" against &{Name:functional-835483 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:functional-835483 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.127 Port:8441 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0603 10:55:33.907234   23971 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0603 10:55:33.909833   23971 out.go:177] 
	W0603 10:55:33.911138   23971 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0603 10:55:33.912577   23971 out.go:177] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.13s)

                                                
                                    
TestFunctional/parallel/StatusCmd (1.33s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-linux-amd64 -p functional-835483 status
functional_test.go:856: (dbg) Run:  out/minikube-linux-amd64 -p functional-835483 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:868: (dbg) Run:  out/minikube-linux-amd64 -p functional-835483 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.33s)

                                                
                                    
TestFunctional/parallel/ServiceCmdConnect (11.59s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1625: (dbg) Run:  kubectl --context functional-835483 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1631: (dbg) Run:  kubectl --context functional-835483 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-57b4589c47-nzz7n" [60682510-d307-4e4f-a484-25c46432035a] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-57b4589c47-nzz7n" [60682510-d307-4e4f-a484-25c46432035a] Running
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 11.004206974s
functional_test.go:1645: (dbg) Run:  out/minikube-linux-amd64 -p functional-835483 service hello-node-connect --url
functional_test.go:1651: found endpoint for hello-node-connect: http://192.168.39.127:32151
functional_test.go:1671: http://192.168.39.127:32151: success! body:

                                                
                                                

                                                
                                                
Hostname: hello-node-connect-57b4589c47-nzz7n

                                                
                                                
Pod Information:
	-no pod information available-

                                                
                                                
Server values:
	server_version=nginx: 1.13.3 - lua: 10008

                                                
                                                
Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.39.127:8080/

                                                
                                                
Request Headers:
	accept-encoding=gzip
	host=192.168.39.127:32151
	user-agent=Go-http-client/1.1

                                                
                                                
Request Body:
	-no body in request-

                                                
                                                
--- PASS: TestFunctional/parallel/ServiceCmdConnect (11.59s)

                                                
                                    
TestFunctional/parallel/AddonsCmd (0.12s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1686: (dbg) Run:  out/minikube-linux-amd64 -p functional-835483 addons list
functional_test.go:1698: (dbg) Run:  out/minikube-linux-amd64 -p functional-835483 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.12s)

                                                
                                    
TestFunctional/parallel/PersistentVolumeClaim (51.33s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [6406c848-6575-4265-801d-c3cf7455f8b6] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.004305566s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-835483 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-835483 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-835483 get pvc myclaim -o=json
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-835483 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-835483 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [c7fe493a-687c-4a93-ad93-b0d332f2fff8] Pending
helpers_test.go:344: "sp-pod" [c7fe493a-687c-4a93-ad93-b0d332f2fff8] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [c7fe493a-687c-4a93-ad93-b0d332f2fff8] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 17.004688409s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-835483 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-835483 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-835483 delete -f testdata/storage-provisioner/pod.yaml: (1.439368188s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-835483 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [e37a2e69-67fb-4673-bcfa-83f713d82408] Pending
helpers_test.go:344: "sp-pod" [e37a2e69-67fb-4673-bcfa-83f713d82408] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [e37a2e69-67fb-4673-bcfa-83f713d82408] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 25.004375206s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-835483 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (51.33s)

                                                
                                    
TestFunctional/parallel/SSHCmd (0.43s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1721: (dbg) Run:  out/minikube-linux-amd64 -p functional-835483 ssh "echo hello"
functional_test.go:1738: (dbg) Run:  out/minikube-linux-amd64 -p functional-835483 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.43s)

                                                
                                    
TestFunctional/parallel/CpCmd (1.29s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-835483 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-835483 ssh -n functional-835483 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-835483 cp functional-835483:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd3859814513/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-835483 ssh -n functional-835483 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-835483 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-835483 ssh -n functional-835483 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.29s)

                                                
                                    
TestFunctional/parallel/MySQL (28.9s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1789: (dbg) Run:  kubectl --context functional-835483 replace --force -f testdata/mysql.yaml
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-64454c8b5c-8fbzz" [6e7b296e-0ab0-4d03-98ef-5c3d06ec6b30] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-64454c8b5c-8fbzz" [6e7b296e-0ab0-4d03-98ef-5c3d06ec6b30] Running
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 26.004203662s
functional_test.go:1803: (dbg) Run:  kubectl --context functional-835483 exec mysql-64454c8b5c-8fbzz -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-835483 exec mysql-64454c8b5c-8fbzz -- mysql -ppassword -e "show databases;": exit status 1 (143.227412ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-835483 exec mysql-64454c8b5c-8fbzz -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-835483 exec mysql-64454c8b5c-8fbzz -- mysql -ppassword -e "show databases;": exit status 1 (148.116895ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-835483 exec mysql-64454c8b5c-8fbzz -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (28.90s)

                                                
                                    
TestFunctional/parallel/FileSync (0.39s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1925: Checking for existence of /etc/test/nested/copy/15028/hosts within VM
functional_test.go:1927: (dbg) Run:  out/minikube-linux-amd64 -p functional-835483 ssh "sudo cat /etc/test/nested/copy/15028/hosts"
functional_test.go:1932: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.39s)

                                                
                                    
TestFunctional/parallel/CertSync (1.27s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1968: Checking for existence of /etc/ssl/certs/15028.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-835483 ssh "sudo cat /etc/ssl/certs/15028.pem"
functional_test.go:1968: Checking for existence of /usr/share/ca-certificates/15028.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-835483 ssh "sudo cat /usr/share/ca-certificates/15028.pem"
functional_test.go:1968: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-835483 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/150282.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-835483 ssh "sudo cat /etc/ssl/certs/150282.pem"
functional_test.go:1995: Checking for existence of /usr/share/ca-certificates/150282.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-835483 ssh "sudo cat /usr/share/ca-certificates/150282.pem"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-835483 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.27s)

                                                
                                    
TestFunctional/parallel/NodeLabels (0.07s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-835483 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.07s)
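
kubectl's --output=go-template uses Go's text/template syntax; the template above walks the first node's label map and prints only the keys. A trimmed-down version of that template, run against a stand-in label map (the label values here are hypothetical), shows the expansion:

    package main

    import (
        "fmt"
        "os"
        "text/template"
    )

    func main() {
        // The real test indexes into .items[0].metadata.labels; this version
        // applies the same range/key expression to a local map.
        const tmpl = `{{range $k, $v := .labels}}{{$k}} {{end}}`
        data := map[string]any{"labels": map[string]string{
            "kubernetes.io/hostname": "functional-835483",
            "kubernetes.io/os":       "linux",
        }}
        if err := template.Must(template.New("keys").Parse(tmpl)).Execute(os.Stdout, data); err != nil {
            panic(err)
        }
        fmt.Println()
    }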

                                                
                                    
x
+
TestFunctional/parallel/NonActiveRuntimeDisabled (0.45s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2023: (dbg) Run:  out/minikube-linux-amd64 -p functional-835483 ssh "sudo systemctl is-active docker"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-835483 ssh "sudo systemctl is-active docker": exit status 1 (225.355707ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2023: (dbg) Run:  out/minikube-linux-amd64 -p functional-835483 ssh "sudo systemctl is-active containerd"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-835483 ssh "sudo systemctl is-active containerd": exit status 1 (225.947005ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.45s)
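
On this cri-o profile, `systemctl is-active docker` and `systemctl is-active containerd` both print "inactive" and exit non-zero (status 3 usually means the unit is not active), which surfaces as the `ssh: Process exited with status 3` lines; the test passes precisely because the exit code is non-zero. A sketch of interpreting that exit code, using the same command shape as the log:

    package main

    import (
        "errors"
        "fmt"
        "os/exec"
    )

    // isActive mirrors the check above: `systemctl is-active` exits 0 only when
    // the unit is active; any non-zero exit is treated as "not active".
    func isActive(unit string) (bool, error) {
        cmd := exec.Command("out/minikube-linux-amd64", "-p", "functional-835483",
            "ssh", "sudo systemctl is-active "+unit)
        err := cmd.Run()
        if err == nil {
            return true, nil
        }
        var exitErr *exec.ExitError
        if errors.As(err, &exitErr) {
            return false, nil // unit inactive, the expected answer for docker/containerd here
        }
        return false, err // ssh or plumbing failure, not an answer from systemd
    }

    func main() {
        for _, unit := range []string{"docker", "containerd"} {
            active, err := isActive(unit)
            if err != nil {
                panic(err)
            }
            fmt.Printf("%s active: %v\n", unit, active)
        }
    }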

                                                
                                    
x
+
TestFunctional/parallel/License (0.68s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2284: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.68s)

                                                
                                    
x
+
TestFunctional/parallel/Version/short (0.05s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2252: (dbg) Run:  out/minikube-linux-amd64 -p functional-835483 version --short
--- PASS: TestFunctional/parallel/Version/short (0.05s)

                                                
                                    
x
+
TestFunctional/parallel/Version/components (0.69s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2266: (dbg) Run:  out/minikube-linux-amd64 -p functional-835483 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.69s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/DeployApp (11.20s)


                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1435: (dbg) Run:  kubectl --context functional-835483 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1441: (dbg) Run:  kubectl --context functional-835483 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-6d85cfcfd8-mr66j" [4cde68ee-9a20-4db3-a01b-c6e7458104d9] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-6d85cfcfd8-mr66j" [4cde68ee-9a20-4db3-a01b-c6e7458104d9] Running
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 11.003929669s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (11.20s)
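
The subtest creates a Deployment from the echoserver image, exposes it as a NodePort Service on port 8080, and waits for the pod to report Ready. A compressed sketch of the same sequence; the context and names are taken from the log, and `kubectl wait` stands in for the test's own pod polling:

    package main

    import (
        "fmt"
        "os/exec"
    )

    // run shells out to kubectl against the profile from the log.
    func run(args ...string) {
        full := append([]string{"--context", "functional-835483"}, args...)
        out, err := exec.Command("kubectl", full...).CombinedOutput()
        if err != nil {
            panic(fmt.Sprintf("kubectl %v: %v\n%s", args, err, out))
        }
    }

    func main() {
        run("create", "deployment", "hello-node", "--image=registry.k8s.io/echoserver:1.8")
        run("expose", "deployment", "hello-node", "--type=NodePort", "--port=8080")
        // Equivalent shortcut for the readiness polling the test performs itself.
        run("wait", "--for=condition=ready", "pod", "-l", "app=hello-node", "--timeout=600s")
        fmt.Println("hello-node is ready behind its NodePort")
    }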

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListShort (0.29s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-835483 image ls --format short --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-835483 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.30.1
registry.k8s.io/kube-proxy:v1.30.1
registry.k8s.io/kube-controller-manager:v1.30.1
registry.k8s.io/kube-apiserver:v1.30.1
registry.k8s.io/etcd:3.5.12-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.11.1
localhost/minikube-local-cache-test:functional-835483
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
gcr.io/google-containers/addon-resizer:functional-835483
docker.io/library/nginx:latest
docker.io/kindest/kindnetd:v20240202-8f1494ea
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-835483 image ls --format short --alsologtostderr:
I0603 10:55:47.211950   25058 out.go:291] Setting OutFile to fd 1 ...
I0603 10:55:47.212244   25058 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0603 10:55:47.212257   25058 out.go:304] Setting ErrFile to fd 2...
I0603 10:55:47.212264   25058 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0603 10:55:47.212522   25058 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19008-7755/.minikube/bin
I0603 10:55:47.213319   25058 config.go:182] Loaded profile config "functional-835483": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
I0603 10:55:47.213456   25058 config.go:182] Loaded profile config "functional-835483": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
I0603 10:55:47.214032   25058 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0603 10:55:47.214088   25058 main.go:141] libmachine: Launching plugin server for driver kvm2
I0603 10:55:47.229517   25058 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41243
I0603 10:55:47.230013   25058 main.go:141] libmachine: () Calling .GetVersion
I0603 10:55:47.230599   25058 main.go:141] libmachine: Using API Version  1
I0603 10:55:47.230624   25058 main.go:141] libmachine: () Calling .SetConfigRaw
I0603 10:55:47.230978   25058 main.go:141] libmachine: () Calling .GetMachineName
I0603 10:55:47.231189   25058 main.go:141] libmachine: (functional-835483) Calling .GetState
I0603 10:55:47.233163   25058 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0603 10:55:47.233207   25058 main.go:141] libmachine: Launching plugin server for driver kvm2
I0603 10:55:47.247970   25058 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38649
I0603 10:55:47.248377   25058 main.go:141] libmachine: () Calling .GetVersion
I0603 10:55:47.249020   25058 main.go:141] libmachine: Using API Version  1
I0603 10:55:47.249041   25058 main.go:141] libmachine: () Calling .SetConfigRaw
I0603 10:55:47.249333   25058 main.go:141] libmachine: () Calling .GetMachineName
I0603 10:55:47.249520   25058 main.go:141] libmachine: (functional-835483) Calling .DriverName
I0603 10:55:47.249729   25058 ssh_runner.go:195] Run: systemctl --version
I0603 10:55:47.249751   25058 main.go:141] libmachine: (functional-835483) Calling .GetSSHHostname
I0603 10:55:47.252558   25058 main.go:141] libmachine: (functional-835483) DBG | domain functional-835483 has defined MAC address 52:54:00:e6:c7:14 in network mk-functional-835483
I0603 10:55:47.252960   25058 main.go:141] libmachine: (functional-835483) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:c7:14", ip: ""} in network mk-functional-835483: {Iface:virbr1 ExpiryTime:2024-06-03 11:52:05 +0000 UTC Type:0 Mac:52:54:00:e6:c7:14 Iaid: IPaddr:192.168.39.127 Prefix:24 Hostname:functional-835483 Clientid:01:52:54:00:e6:c7:14}
I0603 10:55:47.252983   25058 main.go:141] libmachine: (functional-835483) DBG | domain functional-835483 has defined IP address 192.168.39.127 and MAC address 52:54:00:e6:c7:14 in network mk-functional-835483
I0603 10:55:47.253216   25058 main.go:141] libmachine: (functional-835483) Calling .GetSSHPort
I0603 10:55:47.253414   25058 main.go:141] libmachine: (functional-835483) Calling .GetSSHKeyPath
I0603 10:55:47.253595   25058 main.go:141] libmachine: (functional-835483) Calling .GetSSHUsername
I0603 10:55:47.253736   25058 sshutil.go:53] new ssh client: &{IP:192.168.39.127 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19008-7755/.minikube/machines/functional-835483/id_rsa Username:docker}
I0603 10:55:47.370769   25058 ssh_runner.go:195] Run: sudo crictl images --output json
I0603 10:55:47.451794   25058 main.go:141] libmachine: Making call to close driver server
I0603 10:55:47.451812   25058 main.go:141] libmachine: (functional-835483) Calling .Close
I0603 10:55:47.452084   25058 main.go:141] libmachine: Successfully made call to close driver server
I0603 10:55:47.452106   25058 main.go:141] libmachine: (functional-835483) DBG | Closing plugin on server side
I0603 10:55:47.452114   25058 main.go:141] libmachine: Making call to close connection to plugin binary
I0603 10:55:47.452122   25058 main.go:141] libmachine: Making call to close driver server
I0603 10:55:47.452130   25058 main.go:141] libmachine: (functional-835483) Calling .Close
I0603 10:55:47.452343   25058 main.go:141] libmachine: Successfully made call to close driver server
I0603 10:55:47.452355   25058 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.29s)
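
Behind `image ls`, the stderr above shows minikube opening an SSH session to the VM and running `sudo crictl images --output json`, then formatting the result. A sketch of consuming that JSON directly, assuming crictl's usual layout of an `images` array with `repoTags` and `size` fields (only the fields used here are modelled):

    package main

    import (
        "encoding/json"
        "fmt"
        "os/exec"
    )

    type crictlImages struct {
        Images []struct {
            ID       string   `json:"id"`
            RepoTags []string `json:"repoTags"`
            Size     string   `json:"size"`
        } `json:"images"`
    }

    func main() {
        // Run locally for illustration; minikube runs the same command over ssh.
        out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
        if err != nil {
            panic(err)
        }
        var list crictlImages
        if err := json.Unmarshal(out, &list); err != nil {
            panic(err)
        }
        for _, img := range list.Images {
            for _, tag := range img.RepoTags {
                fmt.Printf("%-60s %s bytes\n", tag, img.Size)
            }
        }
    }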

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListTable (0.56s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-835483 image ls --format table --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-835483 image ls --format table --alsologtostderr:
|-----------------------------------------|--------------------|---------------|--------|
|                  Image                  |        Tag         |   Image ID    |  Size  |
|-----------------------------------------|--------------------|---------------|--------|
| localhost/minikube-local-cache-test     | functional-835483  | e7da2ef86502f | 3.33kB |
| registry.k8s.io/pause                   | 3.1                | da86e6ba6ca19 | 747kB  |
| registry.k8s.io/pause                   | 3.9                | e6f1816883972 | 750kB  |
| registry.k8s.io/pause                   | latest             | 350b164e7ae1d | 247kB  |
| docker.io/kindest/kindnetd              | v20240202-8f1494ea | 4950bb10b3f87 | 65.3MB |
| gcr.io/google-containers/addon-resizer  | functional-835483  | ffd4cfbbe753e | 34.1MB |
| gcr.io/k8s-minikube/busybox             | latest             | beae173ccac6a | 1.46MB |
| gcr.io/k8s-minikube/storage-provisioner | v5                 | 6e38f40d628db | 31.5MB |
| localhost/my-image                      | functional-835483  | f2e557f768643 | 1.47MB |
| registry.k8s.io/etcd                    | 3.5.12-0           | 3861cfcd7c04c | 151MB  |
| registry.k8s.io/kube-proxy              | v1.30.1            | 747097150317f | 85.9MB |
| docker.io/library/nginx                 | latest             | 4f67c83422ec7 | 192MB  |
| registry.k8s.io/coredns/coredns         | v1.11.1            | cbb01a7bd410d | 61.2MB |
| registry.k8s.io/echoserver              | 1.8                | 82e4c8a736a4f | 97.8MB |
| registry.k8s.io/kube-apiserver          | v1.30.1            | 91be940803172 | 118MB  |
| registry.k8s.io/kube-controller-manager | v1.30.1            | 25a1387cdab82 | 112MB  |
| gcr.io/k8s-minikube/busybox             | 1.28.4-glibc       | 56cc512116c8f | 4.63MB |
| registry.k8s.io/kube-scheduler          | v1.30.1            | a52dc94f0a912 | 63MB   |
| registry.k8s.io/pause                   | 3.3                | 0184c1613d929 | 686kB  |
|-----------------------------------------|--------------------|---------------|--------|
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-835483 image ls --format table --alsologtostderr:
I0603 10:55:53.890574   25224 out.go:291] Setting OutFile to fd 1 ...
I0603 10:55:53.890799   25224 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0603 10:55:53.890809   25224 out.go:304] Setting ErrFile to fd 2...
I0603 10:55:53.890814   25224 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0603 10:55:53.890965   25224 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19008-7755/.minikube/bin
I0603 10:55:53.891500   25224 config.go:182] Loaded profile config "functional-835483": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
I0603 10:55:53.891599   25224 config.go:182] Loaded profile config "functional-835483": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
I0603 10:55:53.891933   25224 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0603 10:55:53.891980   25224 main.go:141] libmachine: Launching plugin server for driver kvm2
I0603 10:55:53.907342   25224 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42481
I0603 10:55:53.907808   25224 main.go:141] libmachine: () Calling .GetVersion
I0603 10:55:53.908408   25224 main.go:141] libmachine: Using API Version  1
I0603 10:55:53.908436   25224 main.go:141] libmachine: () Calling .SetConfigRaw
I0603 10:55:53.908762   25224 main.go:141] libmachine: () Calling .GetMachineName
I0603 10:55:53.908950   25224 main.go:141] libmachine: (functional-835483) Calling .GetState
I0603 10:55:53.910807   25224 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0603 10:55:53.910842   25224 main.go:141] libmachine: Launching plugin server for driver kvm2
I0603 10:55:53.925988   25224 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33627
I0603 10:55:53.926332   25224 main.go:141] libmachine: () Calling .GetVersion
I0603 10:55:53.926728   25224 main.go:141] libmachine: Using API Version  1
I0603 10:55:53.926748   25224 main.go:141] libmachine: () Calling .SetConfigRaw
I0603 10:55:53.927006   25224 main.go:141] libmachine: () Calling .GetMachineName
I0603 10:55:53.927243   25224 main.go:141] libmachine: (functional-835483) Calling .DriverName
I0603 10:55:53.927482   25224 ssh_runner.go:195] Run: systemctl --version
I0603 10:55:53.927509   25224 main.go:141] libmachine: (functional-835483) Calling .GetSSHHostname
I0603 10:55:53.930331   25224 main.go:141] libmachine: (functional-835483) DBG | domain functional-835483 has defined MAC address 52:54:00:e6:c7:14 in network mk-functional-835483
I0603 10:55:53.930689   25224 main.go:141] libmachine: (functional-835483) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:c7:14", ip: ""} in network mk-functional-835483: {Iface:virbr1 ExpiryTime:2024-06-03 11:52:05 +0000 UTC Type:0 Mac:52:54:00:e6:c7:14 Iaid: IPaddr:192.168.39.127 Prefix:24 Hostname:functional-835483 Clientid:01:52:54:00:e6:c7:14}
I0603 10:55:53.930730   25224 main.go:141] libmachine: (functional-835483) DBG | domain functional-835483 has defined IP address 192.168.39.127 and MAC address 52:54:00:e6:c7:14 in network mk-functional-835483
I0603 10:55:53.930870   25224 main.go:141] libmachine: (functional-835483) Calling .GetSSHPort
I0603 10:55:53.931031   25224 main.go:141] libmachine: (functional-835483) Calling .GetSSHKeyPath
I0603 10:55:53.931201   25224 main.go:141] libmachine: (functional-835483) Calling .GetSSHUsername
I0603 10:55:53.931349   25224 sshutil.go:53] new ssh client: &{IP:192.168.39.127 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19008-7755/.minikube/machines/functional-835483/id_rsa Username:docker}
I0603 10:55:54.098036   25224 ssh_runner.go:195] Run: sudo crictl images --output json
I0603 10:55:54.407747   25224 main.go:141] libmachine: Making call to close driver server
I0603 10:55:54.407768   25224 main.go:141] libmachine: (functional-835483) Calling .Close
I0603 10:55:54.408005   25224 main.go:141] libmachine: Successfully made call to close driver server
I0603 10:55:54.408020   25224 main.go:141] libmachine: Making call to close connection to plugin binary
I0603 10:55:54.408027   25224 main.go:141] libmachine: Making call to close driver server
I0603 10:55:54.408034   25224 main.go:141] libmachine: (functional-835483) Calling .Close
I0603 10:55:54.408241   25224 main.go:141] libmachine: Successfully made call to close driver server
I0603 10:55:54.408252   25224 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.56s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListJson (0.47s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-835483 image ls --format json --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-835483 image ls --format json --alsologtostderr:
[{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247077"},{"id":"e7da2ef86502fd4531a7bf93b1daa9c561af128145a5fb193e8c36880368e03d","repoDigests":["localhost/minikube-local-cache-test@sha256:13c8bd6ea9bdade3a9ce7af00747947581c1f0237177e06101f2e80aa603a7e3"],"repoTags":["localhost/minikube-local-cache-test:functional-835483"],"size":"3330"},{"id":"f2e557f768643486735c2fffd8aa777731fedcc3fd81d91afcbfbea42623c191","repoDigests":["localhost/my-image@sha256:565e136d44500aaea0c7c7a7abf0a46700dd56b0c6dcd4fde5bacd867c2a7a1d"],"repoTags":["localhost/my-image:functional-835483"],"size":"1
468600"},{"id":"25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:0c34190fbf807746f6584104811ed5cda72fb30ce30a036c132dea692d55ec52","registry.k8s.io/kube-controller-manager@sha256:110a010162e119e768e13bb104c0883fb4aceb894659787744abf115fcc56027"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.30.1"],"size":"112170310"},{"id":"a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035","repoDigests":["registry.k8s.io/kube-scheduler@sha256:74d02f6debc5ff3d3bc03f96ae029fb9c72ec1ea94c14e2cdf279939d8e0e036","registry.k8s.io/kube-scheduler@sha256:8ebcbcb8ecc9fc76029ac1dc12f3f15e33e6d26f018d49d5db4437f3d4b34973"],"repoTags":["registry.k8s.io/kube-scheduler:v1.30.1"],"size":"63026504"},{"id":"ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91","repoDigests":["gcr.io/google-containers/addon-resizer@sha256:0ce7cf4876524f069adf654e4dd3c95fe4bfc889c8bbc03cd6ecd061d9392126"],"repoTags":["gcr.io/google-containers/add
on-resizer:functional-835483"],"size":"34114467"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4631262"},{"id":"cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4","repoDigests":["registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1","registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870"],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.1"],"size":"61245718"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":["registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969"],"repoTags":["registry.k8s.io/echoserver:1.
8"],"size":"97846543"},{"id":"4f67c83422ec747235357c04556616234e66fc3fa39cb4f40b2d4441ddd8f100","repoDigests":["docker.io/library/nginx@sha256:0f04e4f646a3f14bf31d8bc8d885b6c951fdcf42589d06845f64d18aec6a3c4d","docker.io/library/nginx@sha256:1445eb9c6dc5e9619346c836ef6fbd6a95092e4663f27dcfce116f051cdbd232"],"repoTags":["docker.io/library/nginx:latest"],"size":"191814165"},{"id":"beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:62ffc2ed7554e4c6d360bce40bbcf196573dd27c4ce080641a2c59867e732dee","gcr.io/k8s-minikube/busybox@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b"],"repoTags":["gcr.io/k8s-minikube/busybox:latest"],"size":"1462480"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0
d84f5b36e4511d4ebf583f38f"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31470524"},{"id":"3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899","repoDigests":["registry.k8s.io/etcd@sha256:2e6b9c67730f1f1dce4c6e16d60135e00608728567f537e8ff70c244756cbb62","registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b"],"repoTags":["registry.k8s.io/etcd:3.5.12-0"],"size":"150779692"},{"id":"4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5","repoDigests":["docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988","docker.io/kindest/kindnetd@sha256:bdddbe20c61d325166b48dd517059f5b93c21526eb74c5c80d86cd6d37236bac"],"repoTags":["docker.io/kindest/kindnetd:v20240202-8f1494ea"],"size":"65291810"},{"id":"07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93","docker.i
o/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029"],"repoTags":[],"size":"249229937"},{"id":"115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a","docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"43824855"},{"id":"d6947d4d1b66057e05c36be79efce6690985dd59673a8d2081d727f0ccd50c64","repoDigests":["docker.io/library/38076cc8f669745e72854b0080f397f395376dde931e2392f0ec0d067df4f4c4-tmp@sha256:5d046e81b62b6708257391c9a5b5006aafa101343d5c7c707674f2ebdef45aef"],"repoTags":[],"size":"1466018"},{"id":"91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a","repoDigests":["registry.k8s.io/kube-apiserver@sha256:0d4a3051234387b78affbcde283dcde5df21e0d6d740c80c363db1cbb973b4ea","registry.k8s.io/kube-apiserver@sha256:a9cf4f4eb92ef02b0a8ba4148f50b
4a1b2bd3e9b28a8f9913ea8c3bcc08e610c"],"repoTags":["registry.k8s.io/kube-apiserver:v1.30.1"],"size":"117601759"},{"id":"747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd","repoDigests":["registry.k8s.io/kube-proxy@sha256:2eec8116ed9b8f46b6a90a46434711354d2222575ab50a4aca42bb6ab19989fa","registry.k8s.io/kube-proxy@sha256:a1754e5a33878878e78dd0141167e7c529d91eb9b36ffbbf91a6052257b3179c"],"repoTags":["registry.k8s.io/kube-proxy:v1.30.1"],"size":"85933465"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"},{"id":"e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","repoDigests":["registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097","registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"],"repoTags":["registry.k8s.io
/pause:3.9"],"size":"750414"}]
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-835483 image ls --format json --alsologtostderr:
I0603 10:55:53.534656   25201 out.go:291] Setting OutFile to fd 1 ...
I0603 10:55:53.534781   25201 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0603 10:55:53.534792   25201 out.go:304] Setting ErrFile to fd 2...
I0603 10:55:53.534798   25201 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0603 10:55:53.534960   25201 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19008-7755/.minikube/bin
I0603 10:55:53.535543   25201 config.go:182] Loaded profile config "functional-835483": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
I0603 10:55:53.535637   25201 config.go:182] Loaded profile config "functional-835483": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
I0603 10:55:53.535951   25201 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0603 10:55:53.536000   25201 main.go:141] libmachine: Launching plugin server for driver kvm2
I0603 10:55:53.551057   25201 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41755
I0603 10:55:53.551524   25201 main.go:141] libmachine: () Calling .GetVersion
I0603 10:55:53.552135   25201 main.go:141] libmachine: Using API Version  1
I0603 10:55:53.552154   25201 main.go:141] libmachine: () Calling .SetConfigRaw
I0603 10:55:53.552531   25201 main.go:141] libmachine: () Calling .GetMachineName
I0603 10:55:53.552721   25201 main.go:141] libmachine: (functional-835483) Calling .GetState
I0603 10:55:53.554400   25201 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0603 10:55:53.554450   25201 main.go:141] libmachine: Launching plugin server for driver kvm2
I0603 10:55:53.569510   25201 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34117
I0603 10:55:53.569902   25201 main.go:141] libmachine: () Calling .GetVersion
I0603 10:55:53.570416   25201 main.go:141] libmachine: Using API Version  1
I0603 10:55:53.570434   25201 main.go:141] libmachine: () Calling .SetConfigRaw
I0603 10:55:53.570733   25201 main.go:141] libmachine: () Calling .GetMachineName
I0603 10:55:53.570907   25201 main.go:141] libmachine: (functional-835483) Calling .DriverName
I0603 10:55:53.571115   25201 ssh_runner.go:195] Run: systemctl --version
I0603 10:55:53.571136   25201 main.go:141] libmachine: (functional-835483) Calling .GetSSHHostname
I0603 10:55:53.573782   25201 main.go:141] libmachine: (functional-835483) DBG | domain functional-835483 has defined MAC address 52:54:00:e6:c7:14 in network mk-functional-835483
I0603 10:55:53.574348   25201 main.go:141] libmachine: (functional-835483) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:c7:14", ip: ""} in network mk-functional-835483: {Iface:virbr1 ExpiryTime:2024-06-03 11:52:05 +0000 UTC Type:0 Mac:52:54:00:e6:c7:14 Iaid: IPaddr:192.168.39.127 Prefix:24 Hostname:functional-835483 Clientid:01:52:54:00:e6:c7:14}
I0603 10:55:53.574376   25201 main.go:141] libmachine: (functional-835483) DBG | domain functional-835483 has defined IP address 192.168.39.127 and MAC address 52:54:00:e6:c7:14 in network mk-functional-835483
I0603 10:55:53.574512   25201 main.go:141] libmachine: (functional-835483) Calling .GetSSHPort
I0603 10:55:53.574685   25201 main.go:141] libmachine: (functional-835483) Calling .GetSSHKeyPath
I0603 10:55:53.574840   25201 main.go:141] libmachine: (functional-835483) Calling .GetSSHUsername
I0603 10:55:53.574996   25201 sshutil.go:53] new ssh client: &{IP:192.168.39.127 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19008-7755/.minikube/machines/functional-835483/id_rsa Username:docker}
I0603 10:55:53.682003   25201 ssh_runner.go:195] Run: sudo crictl images --output json
I0603 10:55:53.952133   25201 main.go:141] libmachine: Making call to close driver server
I0603 10:55:53.952158   25201 main.go:141] libmachine: (functional-835483) Calling .Close
I0603 10:55:53.952437   25201 main.go:141] libmachine: Successfully made call to close driver server
I0603 10:55:53.952453   25201 main.go:141] libmachine: Making call to close connection to plugin binary
I0603 10:55:53.952468   25201 main.go:141] libmachine: Making call to close driver server
I0603 10:55:53.952477   25201 main.go:141] libmachine: (functional-835483) Calling .Close
I0603 10:55:53.952721   25201 main.go:141] libmachine: (functional-835483) DBG | Closing plugin on server side
I0603 10:55:53.952725   25201 main.go:141] libmachine: Successfully made call to close driver server
I0603 10:55:53.952745   25201 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.47s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListYaml (0.31s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-835483 image ls --format yaml --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-835483 image ls --format yaml --alsologtostderr:
- id: 747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd
repoDigests:
- registry.k8s.io/kube-proxy@sha256:2eec8116ed9b8f46b6a90a46434711354d2222575ab50a4aca42bb6ab19989fa
- registry.k8s.io/kube-proxy@sha256:a1754e5a33878878e78dd0141167e7c529d91eb9b36ffbbf91a6052257b3179c
repoTags:
- registry.k8s.io/kube-proxy:v1.30.1
size: "85933465"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"
- id: cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1
- registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.1
size: "61245718"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests:
- registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969
repoTags:
- registry.k8s.io/echoserver:1.8
size: "97846543"
- id: a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:74d02f6debc5ff3d3bc03f96ae029fb9c72ec1ea94c14e2cdf279939d8e0e036
- registry.k8s.io/kube-scheduler@sha256:8ebcbcb8ecc9fc76029ac1dc12f3f15e33e6d26f018d49d5db4437f3d4b34973
repoTags:
- registry.k8s.io/kube-scheduler:v1.30.1
size: "63026504"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"
- id: 115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
repoTags: []
size: "43824855"
- id: 3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899
repoDigests:
- registry.k8s.io/etcd@sha256:2e6b9c67730f1f1dce4c6e16d60135e00608728567f537e8ff70c244756cbb62
- registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b
repoTags:
- registry.k8s.io/etcd:3.5.12-0
size: "150779692"
- id: 25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:0c34190fbf807746f6584104811ed5cda72fb30ce30a036c132dea692d55ec52
- registry.k8s.io/kube-controller-manager@sha256:110a010162e119e768e13bb104c0883fb4aceb894659787744abf115fcc56027
repoTags:
- registry.k8s.io/kube-controller-manager:v1.30.1
size: "112170310"
- id: e7da2ef86502fd4531a7bf93b1daa9c561af128145a5fb193e8c36880368e03d
repoDigests:
- localhost/minikube-local-cache-test@sha256:13c8bd6ea9bdade3a9ce7af00747947581c1f0237177e06101f2e80aa603a7e3
repoTags:
- localhost/minikube-local-cache-test:functional-835483
size: "3330"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"
- id: 4f67c83422ec747235357c04556616234e66fc3fa39cb4f40b2d4441ddd8f100
repoDigests:
- docker.io/library/nginx@sha256:0f04e4f646a3f14bf31d8bc8d885b6c951fdcf42589d06845f64d18aec6a3c4d
- docker.io/library/nginx@sha256:1445eb9c6dc5e9619346c836ef6fbd6a95092e4663f27dcfce116f051cdbd232
repoTags:
- docker.io/library/nginx:latest
size: "191814165"
- id: ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91
repoDigests:
- gcr.io/google-containers/addon-resizer@sha256:0ce7cf4876524f069adf654e4dd3c95fe4bfc889c8bbc03cd6ecd061d9392126
repoTags:
- gcr.io/google-containers/addon-resizer:functional-835483
size: "34114467"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4631262"
- id: 4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5
repoDigests:
- docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988
- docker.io/kindest/kindnetd@sha256:bdddbe20c61d325166b48dd517059f5b93c21526eb74c5c80d86cd6d37236bac
repoTags:
- docker.io/kindest/kindnetd:v20240202-8f1494ea
size: "65291810"
- id: 91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:0d4a3051234387b78affbcde283dcde5df21e0d6d740c80c363db1cbb973b4ea
- registry.k8s.io/kube-apiserver@sha256:a9cf4f4eb92ef02b0a8ba4148f50b4a1b2bd3e9b28a8f9913ea8c3bcc08e610c
repoTags:
- registry.k8s.io/kube-apiserver:v1.30.1
size: "117601759"
- id: e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c
repoDigests:
- registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097
- registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10
repoTags:
- registry.k8s.io/pause:3.9
size: "750414"

                                                
                                                
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-835483 image ls --format yaml --alsologtostderr:
I0603 10:55:47.501797   25081 out.go:291] Setting OutFile to fd 1 ...
I0603 10:55:47.501919   25081 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0603 10:55:47.501930   25081 out.go:304] Setting ErrFile to fd 2...
I0603 10:55:47.501934   25081 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0603 10:55:47.502092   25081 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19008-7755/.minikube/bin
I0603 10:55:47.502633   25081 config.go:182] Loaded profile config "functional-835483": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
I0603 10:55:47.502758   25081 config.go:182] Loaded profile config "functional-835483": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
I0603 10:55:47.503161   25081 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0603 10:55:47.503207   25081 main.go:141] libmachine: Launching plugin server for driver kvm2
I0603 10:55:47.518494   25081 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38929
I0603 10:55:47.519355   25081 main.go:141] libmachine: () Calling .GetVersion
I0603 10:55:47.520608   25081 main.go:141] libmachine: Using API Version  1
I0603 10:55:47.520638   25081 main.go:141] libmachine: () Calling .SetConfigRaw
I0603 10:55:47.520996   25081 main.go:141] libmachine: () Calling .GetMachineName
I0603 10:55:47.521221   25081 main.go:141] libmachine: (functional-835483) Calling .GetState
I0603 10:55:47.523205   25081 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0603 10:55:47.523249   25081 main.go:141] libmachine: Launching plugin server for driver kvm2
I0603 10:55:47.537962   25081 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37191
I0603 10:55:47.538415   25081 main.go:141] libmachine: () Calling .GetVersion
I0603 10:55:47.538981   25081 main.go:141] libmachine: Using API Version  1
I0603 10:55:47.539000   25081 main.go:141] libmachine: () Calling .SetConfigRaw
I0603 10:55:47.539383   25081 main.go:141] libmachine: () Calling .GetMachineName
I0603 10:55:47.539595   25081 main.go:141] libmachine: (functional-835483) Calling .DriverName
I0603 10:55:47.539827   25081 ssh_runner.go:195] Run: systemctl --version
I0603 10:55:47.539854   25081 main.go:141] libmachine: (functional-835483) Calling .GetSSHHostname
I0603 10:55:47.542612   25081 main.go:141] libmachine: (functional-835483) DBG | domain functional-835483 has defined MAC address 52:54:00:e6:c7:14 in network mk-functional-835483
I0603 10:55:47.543025   25081 main.go:141] libmachine: (functional-835483) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:c7:14", ip: ""} in network mk-functional-835483: {Iface:virbr1 ExpiryTime:2024-06-03 11:52:05 +0000 UTC Type:0 Mac:52:54:00:e6:c7:14 Iaid: IPaddr:192.168.39.127 Prefix:24 Hostname:functional-835483 Clientid:01:52:54:00:e6:c7:14}
I0603 10:55:47.543070   25081 main.go:141] libmachine: (functional-835483) DBG | domain functional-835483 has defined IP address 192.168.39.127 and MAC address 52:54:00:e6:c7:14 in network mk-functional-835483
I0603 10:55:47.543227   25081 main.go:141] libmachine: (functional-835483) Calling .GetSSHPort
I0603 10:55:47.543411   25081 main.go:141] libmachine: (functional-835483) Calling .GetSSHKeyPath
I0603 10:55:47.543567   25081 main.go:141] libmachine: (functional-835483) Calling .GetSSHUsername
I0603 10:55:47.543707   25081 sshutil.go:53] new ssh client: &{IP:192.168.39.127 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19008-7755/.minikube/machines/functional-835483/id_rsa Username:docker}
I0603 10:55:47.669998   25081 ssh_runner.go:195] Run: sudo crictl images --output json
I0603 10:55:47.761622   25081 main.go:141] libmachine: Making call to close driver server
I0603 10:55:47.761639   25081 main.go:141] libmachine: (functional-835483) Calling .Close
I0603 10:55:47.761927   25081 main.go:141] libmachine: Successfully made call to close driver server
I0603 10:55:47.761946   25081 main.go:141] libmachine: Making call to close connection to plugin binary
I0603 10:55:47.761977   25081 main.go:141] libmachine: (functional-835483) DBG | Closing plugin on server side
I0603 10:55:47.761980   25081 main.go:141] libmachine: Making call to close driver server
I0603 10:55:47.762012   25081 main.go:141] libmachine: (functional-835483) Calling .Close
I0603 10:55:47.762227   25081 main.go:141] libmachine: Successfully made call to close driver server
I0603 10:55:47.762245   25081 main.go:141] libmachine: Making call to close connection to plugin binary
I0603 10:55:47.762261   25081 main.go:141] libmachine: (functional-835483) DBG | Closing plugin on server side
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.31s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageBuild (5.72s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-linux-amd64 -p functional-835483 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-835483 ssh pgrep buildkitd: exit status 1 (248.985627ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:314: (dbg) Run:  out/minikube-linux-amd64 -p functional-835483 image build -t localhost/my-image:functional-835483 testdata/build --alsologtostderr
functional_test.go:314: (dbg) Done: out/minikube-linux-amd64 -p functional-835483 image build -t localhost/my-image:functional-835483 testdata/build --alsologtostderr: (4.854054048s)
functional_test.go:319: (dbg) Stdout: out/minikube-linux-amd64 -p functional-835483 image build -t localhost/my-image:functional-835483 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> d6947d4d1b6
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-835483
--> f2e557f7686
Successfully tagged localhost/my-image:functional-835483
f2e557f768643486735c2fffd8aa777731fedcc3fd81d91afcbfbea42623c191
functional_test.go:322: (dbg) Stderr: out/minikube-linux-amd64 -p functional-835483 image build -t localhost/my-image:functional-835483 testdata/build --alsologtostderr:
I0603 10:55:48.058622   25135 out.go:291] Setting OutFile to fd 1 ...
I0603 10:55:48.058964   25135 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0603 10:55:48.058976   25135 out.go:304] Setting ErrFile to fd 2...
I0603 10:55:48.058983   25135 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0603 10:55:48.059246   25135 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19008-7755/.minikube/bin
I0603 10:55:48.059992   25135 config.go:182] Loaded profile config "functional-835483": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
I0603 10:55:48.060646   25135 config.go:182] Loaded profile config "functional-835483": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
I0603 10:55:48.061160   25135 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0603 10:55:48.061225   25135 main.go:141] libmachine: Launching plugin server for driver kvm2
I0603 10:55:48.076298   25135 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37467
I0603 10:55:48.076699   25135 main.go:141] libmachine: () Calling .GetVersion
I0603 10:55:48.077204   25135 main.go:141] libmachine: Using API Version  1
I0603 10:55:48.077224   25135 main.go:141] libmachine: () Calling .SetConfigRaw
I0603 10:55:48.077566   25135 main.go:141] libmachine: () Calling .GetMachineName
I0603 10:55:48.077759   25135 main.go:141] libmachine: (functional-835483) Calling .GetState
I0603 10:55:48.079678   25135 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0603 10:55:48.079730   25135 main.go:141] libmachine: Launching plugin server for driver kvm2
I0603 10:55:48.093676   25135 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35681
I0603 10:55:48.094045   25135 main.go:141] libmachine: () Calling .GetVersion
I0603 10:55:48.094509   25135 main.go:141] libmachine: Using API Version  1
I0603 10:55:48.094530   25135 main.go:141] libmachine: () Calling .SetConfigRaw
I0603 10:55:48.094858   25135 main.go:141] libmachine: () Calling .GetMachineName
I0603 10:55:48.095018   25135 main.go:141] libmachine: (functional-835483) Calling .DriverName
I0603 10:55:48.095204   25135 ssh_runner.go:195] Run: systemctl --version
I0603 10:55:48.095228   25135 main.go:141] libmachine: (functional-835483) Calling .GetSSHHostname
I0603 10:55:48.097756   25135 main.go:141] libmachine: (functional-835483) DBG | domain functional-835483 has defined MAC address 52:54:00:e6:c7:14 in network mk-functional-835483
I0603 10:55:48.098121   25135 main.go:141] libmachine: (functional-835483) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:c7:14", ip: ""} in network mk-functional-835483: {Iface:virbr1 ExpiryTime:2024-06-03 11:52:05 +0000 UTC Type:0 Mac:52:54:00:e6:c7:14 Iaid: IPaddr:192.168.39.127 Prefix:24 Hostname:functional-835483 Clientid:01:52:54:00:e6:c7:14}
I0603 10:55:48.098155   25135 main.go:141] libmachine: (functional-835483) DBG | domain functional-835483 has defined IP address 192.168.39.127 and MAC address 52:54:00:e6:c7:14 in network mk-functional-835483
I0603 10:55:48.098306   25135 main.go:141] libmachine: (functional-835483) Calling .GetSSHPort
I0603 10:55:48.098457   25135 main.go:141] libmachine: (functional-835483) Calling .GetSSHKeyPath
I0603 10:55:48.098624   25135 main.go:141] libmachine: (functional-835483) Calling .GetSSHUsername
I0603 10:55:48.098776   25135 sshutil.go:53] new ssh client: &{IP:192.168.39.127 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19008-7755/.minikube/machines/functional-835483/id_rsa Username:docker}
I0603 10:55:48.194943   25135 build_images.go:161] Building image from path: /tmp/build.3936784225.tar
I0603 10:55:48.194998   25135 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0603 10:55:48.215225   25135 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.3936784225.tar
I0603 10:55:48.224740   25135 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.3936784225.tar: stat -c "%s %y" /var/lib/minikube/build/build.3936784225.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.3936784225.tar': No such file or directory
I0603 10:55:48.224767   25135 ssh_runner.go:362] scp /tmp/build.3936784225.tar --> /var/lib/minikube/build/build.3936784225.tar (3072 bytes)
I0603 10:55:48.257347   25135 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.3936784225
I0603 10:55:48.283565   25135 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.3936784225 -xf /var/lib/minikube/build/build.3936784225.tar
I0603 10:55:48.321299   25135 crio.go:315] Building image: /var/lib/minikube/build/build.3936784225
I0603 10:55:48.321370   25135 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-835483 /var/lib/minikube/build/build.3936784225 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I0603 10:55:52.807998   25135 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-835483 /var/lib/minikube/build/build.3936784225 --cgroup-manager=cgroupfs: (4.486602741s)
I0603 10:55:52.808081   25135 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.3936784225
I0603 10:55:52.837836   25135 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.3936784225.tar
I0603 10:55:52.864244   25135 build_images.go:217] Built localhost/my-image:functional-835483 from /tmp/build.3936784225.tar
I0603 10:55:52.864279   25135 build_images.go:133] succeeded building to: functional-835483
I0603 10:55:52.864286   25135 build_images.go:134] failed building to: 
I0603 10:55:52.864310   25135 main.go:141] libmachine: Making call to close driver server
I0603 10:55:52.864321   25135 main.go:141] libmachine: (functional-835483) Calling .Close
I0603 10:55:52.864655   25135 main.go:141] libmachine: (functional-835483) DBG | Closing plugin on server side
I0603 10:55:52.864658   25135 main.go:141] libmachine: Successfully made call to close driver server
I0603 10:55:52.864722   25135 main.go:141] libmachine: Making call to close connection to plugin binary
I0603 10:55:52.864741   25135 main.go:141] libmachine: Making call to close driver server
I0603 10:55:52.864754   25135 main.go:141] libmachine: (functional-835483) Calling .Close
I0603 10:55:52.864980   25135 main.go:141] libmachine: Successfully made call to close driver server
I0603 10:55:52.864994   25135 main.go:141] libmachine: Making call to close connection to plugin binary
I0603 10:55:52.865024   25135 main.go:141] libmachine: (functional-835483) DBG | Closing plugin on server side
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-835483 image ls
2024/06/03 10:55:53 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (5.72s)
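
On a cri-o runtime the build goes through podman inside the VM: the local build context is tarred up, copied to /var/lib/minikube/build, unpacked, built with `sudo podman build ... --cgroup-manager=cgroupfs`, and then cleaned up, exactly as the ssh_runner lines above show. A sketch of the in-VM steps driven over minikube ssh (the context directory name is a placeholder; the real one, build.3936784225 above, is generated per build):

    package main

    import (
        "fmt"
        "os/exec"
    )

    // sshRun executes one command inside the VM, the way ssh_runner does above.
    func sshRun(script string) {
        out, err := exec.Command("out/minikube-linux-amd64", "-p", "functional-835483",
            "ssh", script).CombinedOutput()
        if err != nil {
            panic(fmt.Sprintf("%q: %v\n%s", script, err, out))
        }
        fmt.Print(string(out))
    }

    func main() {
        // Placeholder path; assumes the context tarball was already copied in.
        const ctxDir = "/var/lib/minikube/build/build.example"
        sshRun("sudo mkdir -p " + ctxDir)
        sshRun("sudo tar -C " + ctxDir + " -xf " + ctxDir + ".tar")
        sshRun("sudo podman build -t localhost/my-image:functional-835483 " + ctxDir + " --cgroup-manager=cgroupfs")
        sshRun("sudo rm -rf " + ctxDir + " " + ctxDir + ".tar") // clean up the context afterwards
    }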

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/Setup (2.13s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:341: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.8: (2.114976254s)
functional_test.go:346: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-835483
--- PASS: TestFunctional/parallel/ImageCommands/Setup (2.13s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (4.02s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-linux-amd64 -p functional-835483 image load --daemon gcr.io/google-containers/addon-resizer:functional-835483 --alsologtostderr
functional_test.go:354: (dbg) Done: out/minikube-linux-amd64 -p functional-835483 image load --daemon gcr.io/google-containers/addon-resizer:functional-835483 --alsologtostderr: (3.810425326s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-835483 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (4.02s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (2.64s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-linux-amd64 -p functional-835483 image load --daemon gcr.io/google-containers/addon-resizer:functional-835483 --alsologtostderr
functional_test.go:364: (dbg) Done: out/minikube-linux-amd64 -p functional-835483 image load --daemon gcr.io/google-containers/addon-resizer:functional-835483 --alsologtostderr: (2.439378849s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-835483 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (2.64s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (9.2s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
functional_test.go:234: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.9: (2.137228665s)
functional_test.go:239: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-835483
functional_test.go:244: (dbg) Run:  out/minikube-linux-amd64 -p functional-835483 image load --daemon gcr.io/google-containers/addon-resizer:functional-835483 --alsologtostderr
functional_test.go:244: (dbg) Done: out/minikube-linux-amd64 -p functional-835483 image load --daemon gcr.io/google-containers/addon-resizer:functional-835483 --alsologtostderr: (6.812206446s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-835483 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (9.20s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/List (0.33s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1455: (dbg) Run:  out/minikube-linux-amd64 -p functional-835483 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.33s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/JSONOutput (0.38s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1485: (dbg) Run:  out/minikube-linux-amd64 -p functional-835483 service list -o json
functional_test.go:1490: Took "377.47339ms" to run "out/minikube-linux-amd64 -p functional-835483 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.38s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/HTTPS (0.41s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1505: (dbg) Run:  out/minikube-linux-amd64 -p functional-835483 service --namespace=default --https --url hello-node
functional_test.go:1518: found endpoint: https://192.168.39.127:31376
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.41s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/Format (0.39s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1536: (dbg) Run:  out/minikube-linux-amd64 -p functional-835483 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.39s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/URL (0.41s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1555: (dbg) Run:  out/minikube-linux-amd64 -p functional-835483 service hello-node --url
functional_test.go:1561: found endpoint for hello-node: http://192.168.39.127:31376
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.41s)
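Note: the ServiceCmd cases exercise the different ways of resolving a NodePort service from the host. Condensed, assuming a minikube binary on PATH:

    minikube -p functional-835483 service list
    minikube -p functional-835483 service list -o json
    minikube -p functional-835483 service hello-node --url
    minikube -p functional-835483 service --namespace=default --https --url hello-node

Each --url variant prints an endpoint such as http://192.168.39.127:31376 (the VM IP plus the service's NodePort), which is what the assertions above look for.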

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_not_create (0.39s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1266: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1271: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.39s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_list (0.36s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1306: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1311: Took "315.843683ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1320: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1325: Took "47.792792ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.36s)
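Note: the two timings above show the difference between the full and the light listings. The light form (profile list -l, or -o json --light in the profile_json_output case below) skips probing each cluster's live status, which is why it returns in roughly 50ms instead of 300-400ms:

    minikube profile list
    minikube profile list -l
    minikube profile list -o json --light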

                                                
                                    
TestFunctional/parallel/MountCmd/any-port (10.74s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-835483 /tmp/TestFunctionalparallelMountCmdany-port2580031629/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1717412132692577933" to /tmp/TestFunctionalparallelMountCmdany-port2580031629/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1717412132692577933" to /tmp/TestFunctionalparallelMountCmdany-port2580031629/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1717412132692577933" to /tmp/TestFunctionalparallelMountCmdany-port2580031629/001/test-1717412132692577933
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-835483 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-835483 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (288.749855ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-835483 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-835483 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Jun  3 10:55 created-by-test
-rw-r--r-- 1 docker docker 24 Jun  3 10:55 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Jun  3 10:55 test-1717412132692577933
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-835483 ssh cat /mount-9p/test-1717412132692577933
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-835483 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [ff547e68-21d3-4e43-8c16-951a78b5593c] Pending
helpers_test.go:344: "busybox-mount" [ff547e68-21d3-4e43-8c16-951a78b5593c] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [ff547e68-21d3-4e43-8c16-951a78b5593c] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [ff547e68-21d3-4e43-8c16-951a78b5593c] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 8.004066483s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-835483 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-835483 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-835483 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-835483 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-835483 /tmp/TestFunctionalparallelMountCmdany-port2580031629/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (10.74s)
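Note: the any-port case above is essentially the standard 9p mount workflow. A condensed sketch, with /some/host/dir standing in for the per-run temp directory the test generates:

    minikube mount -p functional-835483 /some/host/dir:/mount-9p &
    minikube -p functional-835483 ssh "findmnt -T /mount-9p | grep 9p"
    minikube -p functional-835483 ssh -- ls -la /mount-9p
    minikube -p functional-835483 ssh "sudo umount -f /mount-9p"

The first findmnt probe exiting non-zero (as in the log) is expected while the mount is still coming up; the helper simply retries.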

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_json_output (0.45s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1357: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1362: Took "393.787578ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1370: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1375: Took "50.700386ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.45s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.13s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-linux-amd64 -p functional-835483 image save gcr.io/google-containers/addon-resizer:functional-835483 /home/jenkins/workspace/KVM_Linux_crio_integration/addon-resizer-save.tar --alsologtostderr
functional_test.go:379: (dbg) Done: out/minikube-linux-amd64 -p functional-835483 image save gcr.io/google-containers/addon-resizer:functional-835483 /home/jenkins/workspace/KVM_Linux_crio_integration/addon-resizer-save.tar --alsologtostderr: (1.130746508s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.13s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageRemove (0.6s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-linux-amd64 -p functional-835483 image rm gcr.io/google-containers/addon-resizer:functional-835483 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-835483 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.60s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.33s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-linux-amd64 -p functional-835483 image load /home/jenkins/workspace/KVM_Linux_crio_integration/addon-resizer-save.tar --alsologtostderr
functional_test.go:408: (dbg) Done: out/minikube-linux-amd64 -p functional-835483 image load /home/jenkins/workspace/KVM_Linux_crio_integration/addon-resizer-save.tar --alsologtostderr: (1.114877127s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-835483 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.33s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (1.56s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-835483
functional_test.go:423: (dbg) Run:  out/minikube-linux-amd64 -p functional-835483 image save --daemon gcr.io/google-containers/addon-resizer:functional-835483 --alsologtostderr
functional_test.go:423: (dbg) Done: out/minikube-linux-amd64 -p functional-835483 image save --daemon gcr.io/google-containers/addon-resizer:functional-835483 --alsologtostderr: (1.522434335s)
functional_test.go:428: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-835483
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (1.56s)
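Note: taken together, ImageSaveToFile, ImageRemove, ImageLoadFromFile, and ImageSaveDaemon above form a round trip between the cluster runtime, a tarball on disk, and the host Docker daemon. Condensed, with ./addon-resizer-save.tar standing in for the full workspace path used in the log:

    minikube -p functional-835483 image save gcr.io/google-containers/addon-resizer:functional-835483 ./addon-resizer-save.tar
    minikube -p functional-835483 image rm gcr.io/google-containers/addon-resizer:functional-835483
    minikube -p functional-835483 image load ./addon-resizer-save.tar
    minikube -p functional-835483 image save --daemon gcr.io/google-containers/addon-resizer:functional-835483
    docker image inspect gcr.io/google-containers/addon-resizer:functional-835483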

                                                
                                    
TestFunctional/parallel/MountCmd/specific-port (1.68s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-835483 /tmp/TestFunctionalparallelMountCmdspecific-port3071232345/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-835483 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-835483 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (210.178491ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-835483 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-835483 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-835483 /tmp/TestFunctionalparallelMountCmdspecific-port3071232345/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-835483 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-835483 ssh "sudo umount -f /mount-9p": exit status 1 (206.745916ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-835483 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-835483 /tmp/TestFunctionalparallelMountCmdspecific-port3071232345/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.68s)
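Note: compared with any-port above, the only new ingredient here is --port 46464, which pins the host side of the 9p server to a fixed port (again with /some/host/dir as a stand-in):

    minikube mount -p functional-835483 /some/host/dir:/mount-9p --port 46464

The "umount: /mount-9p: not mounted" failure during teardown is tolerated by the test, and the case still passes; it just means the mount had already been torn down by the time the deferred cleanup ran.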

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_changes (0.09s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-835483 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.09s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.09s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-835483 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.09s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.1s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-835483 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.10s)
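Note: all three UpdateContextCmd cases run the same command and only differ in the kubeconfig state they start from. update-context re-checks the API server IP and port recorded in kubeconfig for the profile and rewrites the entry if it has drifted:

    minikube -p functional-835483 update-context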

                                                
                                    
TestFunctional/parallel/MountCmd/VerifyCleanup (1.32s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-835483 /tmp/TestFunctionalparallelMountCmdVerifyCleanup4228226641/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-835483 /tmp/TestFunctionalparallelMountCmdVerifyCleanup4228226641/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-835483 /tmp/TestFunctionalparallelMountCmdVerifyCleanup4228226641/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-835483 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-835483 ssh "findmnt -T" /mount1: exit status 1 (197.437244ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-835483 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-835483 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-835483 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-835483 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-835483 /tmp/TestFunctionalparallelMountCmdVerifyCleanup4228226641/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-835483 /tmp/TestFunctionalparallelMountCmdVerifyCleanup4228226641/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-835483 /tmp/TestFunctionalparallelMountCmdVerifyCleanup4228226641/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.32s)
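Note: the cleanup path being verified here is the --kill flag, which terminates every lingering mount helper process for the profile in one shot:

    minikube mount -p functional-835483 --kill=true

The "unable to find parent, assuming dead" lines afterwards are the test helpers noticing that those processes are already gone.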

                                                
                                    
TestFunctional/delete_addon-resizer_images (0.07s)

                                                
                                                
=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-835483
--- PASS: TestFunctional/delete_addon-resizer_images (0.07s)

                                                
                                    
TestFunctional/delete_my-image_image (0.01s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-835483
--- PASS: TestFunctional/delete_my-image_image (0.01s)

                                                
                                    
TestFunctional/delete_minikube_cached_images (0.01s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-835483
--- PASS: TestFunctional/delete_minikube_cached_images (0.01s)

                                                
                                    
TestMultiControlPlane/serial/StartCluster (274.83s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 start -p ha-683480 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0603 10:57:12.037985   15028 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/addons-926744/client.crt: no such file or directory
E0603 10:57:39.723836   15028 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/addons-926744/client.crt: no such file or directory
E0603 11:00:19.215225   15028 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/functional-835483/client.crt: no such file or directory
E0603 11:00:19.220596   15028 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/functional-835483/client.crt: no such file or directory
E0603 11:00:19.230913   15028 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/functional-835483/client.crt: no such file or directory
E0603 11:00:19.251304   15028 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/functional-835483/client.crt: no such file or directory
E0603 11:00:19.291658   15028 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/functional-835483/client.crt: no such file or directory
E0603 11:00:19.372030   15028 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/functional-835483/client.crt: no such file or directory
E0603 11:00:19.532387   15028 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/functional-835483/client.crt: no such file or directory
E0603 11:00:19.852978   15028 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/functional-835483/client.crt: no such file or directory
E0603 11:00:20.493891   15028 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/functional-835483/client.crt: no such file or directory
E0603 11:00:21.774402   15028 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/functional-835483/client.crt: no such file or directory
E0603 11:00:24.334635   15028 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/functional-835483/client.crt: no such file or directory
E0603 11:00:29.455440   15028 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/functional-835483/client.crt: no such file or directory
E0603 11:00:39.695822   15028 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/functional-835483/client.crt: no such file or directory
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 start -p ha-683480 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio: (4m34.122829219s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-683480 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (274.83s)
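Note: the --ha flag is what turns this into a multi-control-plane cluster; this initial start brings up three control-plane machines (ha-683480, -m02, -m03), and the AddWorkerNode and AddSecondaryNode cases later in the group grow it further. In condensed form, assuming a minikube binary on PATH:

    minikube start -p ha-683480 --ha --wait=true --memory=2200 --driver=kvm2 --container-runtime=crio
    minikube -p ha-683480 status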

                                                
                                    
TestMultiControlPlane/serial/DeployApp (6.31s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-683480 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-683480 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 kubectl -p ha-683480 -- rollout status deployment/busybox: (4.159146814s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-683480 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-683480 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-683480 -- exec busybox-fc5497c4f-ldtcf -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-683480 -- exec busybox-fc5497c4f-mvpcm -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-683480 -- exec busybox-fc5497c4f-ngf6n -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-683480 -- exec busybox-fc5497c4f-ldtcf -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-683480 -- exec busybox-fc5497c4f-mvpcm -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-683480 -- exec busybox-fc5497c4f-ngf6n -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-683480 -- exec busybox-fc5497c4f-ldtcf -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-683480 -- exec busybox-fc5497c4f-mvpcm -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-683480 -- exec busybox-fc5497c4f-ngf6n -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (6.31s)

                                                
                                    
TestMultiControlPlane/serial/PingHostFromPods (1.19s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-683480 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-683480 -- exec busybox-fc5497c4f-ldtcf -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-683480 -- exec busybox-fc5497c4f-ldtcf -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-683480 -- exec busybox-fc5497c4f-mvpcm -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-683480 -- exec busybox-fc5497c4f-mvpcm -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-683480 -- exec busybox-fc5497c4f-ngf6n -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-683480 -- exec busybox-fc5497c4f-ngf6n -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.19s)
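Note: host.minikube.internal is the in-cluster DNS name for the host side of the VM network (192.168.39.1 on this run), so reachability of the host can be checked from any pod:

    kubectl --context ha-683480 exec <busybox-pod> -- nslookup host.minikube.internal
    kubectl --context ha-683480 exec <busybox-pod> -- ping -c 1 192.168.39.1

<busybox-pod> stands in for one of the busybox-fc5497c4f-* pod names picked up in the step above.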

                                                
                                    
TestMultiControlPlane/serial/AddWorkerNode (45.35s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-683480 -v=7 --alsologtostderr
E0603 11:01:00.176545   15028 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/functional-835483/client.crt: no such file or directory
E0603 11:01:41.137089   15028 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/functional-835483/client.crt: no such file or directory
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 node add -p ha-683480 -v=7 --alsologtostderr: (44.539294061s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-683480 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (45.35s)
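Note: node add without extra flags joins a worker; the AddSecondaryNode case near the end of this group uses --control-plane to join another control-plane member instead:

    minikube node add -p ha-683480
    minikube node add -p ha-683480 --control-plane
    minikube -p ha-683480 status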

                                                
                                    
TestMultiControlPlane/serial/NodeLabels (0.07s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-683480 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.07s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterClusterStart (0.53s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.53s)

                                                
                                    
TestMultiControlPlane/serial/CopyFile (12.41s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-linux-amd64 -p ha-683480 status --output json -v=7 --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-683480 cp testdata/cp-test.txt ha-683480:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-683480 ssh -n ha-683480 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-683480 cp ha-683480:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1985816295/001/cp-test_ha-683480.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-683480 ssh -n ha-683480 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-683480 cp ha-683480:/home/docker/cp-test.txt ha-683480-m02:/home/docker/cp-test_ha-683480_ha-683480-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-683480 ssh -n ha-683480 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-683480 ssh -n ha-683480-m02 "sudo cat /home/docker/cp-test_ha-683480_ha-683480-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-683480 cp ha-683480:/home/docker/cp-test.txt ha-683480-m03:/home/docker/cp-test_ha-683480_ha-683480-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-683480 ssh -n ha-683480 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-683480 ssh -n ha-683480-m03 "sudo cat /home/docker/cp-test_ha-683480_ha-683480-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-683480 cp ha-683480:/home/docker/cp-test.txt ha-683480-m04:/home/docker/cp-test_ha-683480_ha-683480-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-683480 ssh -n ha-683480 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-683480 ssh -n ha-683480-m04 "sudo cat /home/docker/cp-test_ha-683480_ha-683480-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-683480 cp testdata/cp-test.txt ha-683480-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-683480 ssh -n ha-683480-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-683480 cp ha-683480-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1985816295/001/cp-test_ha-683480-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-683480 ssh -n ha-683480-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-683480 cp ha-683480-m02:/home/docker/cp-test.txt ha-683480:/home/docker/cp-test_ha-683480-m02_ha-683480.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-683480 ssh -n ha-683480-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-683480 ssh -n ha-683480 "sudo cat /home/docker/cp-test_ha-683480-m02_ha-683480.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-683480 cp ha-683480-m02:/home/docker/cp-test.txt ha-683480-m03:/home/docker/cp-test_ha-683480-m02_ha-683480-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-683480 ssh -n ha-683480-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-683480 ssh -n ha-683480-m03 "sudo cat /home/docker/cp-test_ha-683480-m02_ha-683480-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-683480 cp ha-683480-m02:/home/docker/cp-test.txt ha-683480-m04:/home/docker/cp-test_ha-683480-m02_ha-683480-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-683480 ssh -n ha-683480-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-683480 ssh -n ha-683480-m04 "sudo cat /home/docker/cp-test_ha-683480-m02_ha-683480-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-683480 cp testdata/cp-test.txt ha-683480-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-683480 ssh -n ha-683480-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-683480 cp ha-683480-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1985816295/001/cp-test_ha-683480-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-683480 ssh -n ha-683480-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-683480 cp ha-683480-m03:/home/docker/cp-test.txt ha-683480:/home/docker/cp-test_ha-683480-m03_ha-683480.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-683480 ssh -n ha-683480-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-683480 ssh -n ha-683480 "sudo cat /home/docker/cp-test_ha-683480-m03_ha-683480.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-683480 cp ha-683480-m03:/home/docker/cp-test.txt ha-683480-m02:/home/docker/cp-test_ha-683480-m03_ha-683480-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-683480 ssh -n ha-683480-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-683480 ssh -n ha-683480-m02 "sudo cat /home/docker/cp-test_ha-683480-m03_ha-683480-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-683480 cp ha-683480-m03:/home/docker/cp-test.txt ha-683480-m04:/home/docker/cp-test_ha-683480-m03_ha-683480-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-683480 ssh -n ha-683480-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-683480 ssh -n ha-683480-m04 "sudo cat /home/docker/cp-test_ha-683480-m03_ha-683480-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-683480 cp testdata/cp-test.txt ha-683480-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-683480 ssh -n ha-683480-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-683480 cp ha-683480-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1985816295/001/cp-test_ha-683480-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-683480 ssh -n ha-683480-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-683480 cp ha-683480-m04:/home/docker/cp-test.txt ha-683480:/home/docker/cp-test_ha-683480-m04_ha-683480.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-683480 ssh -n ha-683480-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-683480 ssh -n ha-683480 "sudo cat /home/docker/cp-test_ha-683480-m04_ha-683480.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-683480 cp ha-683480-m04:/home/docker/cp-test.txt ha-683480-m02:/home/docker/cp-test_ha-683480-m04_ha-683480-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-683480 ssh -n ha-683480-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-683480 ssh -n ha-683480-m02 "sudo cat /home/docker/cp-test_ha-683480-m04_ha-683480-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-683480 cp ha-683480-m04:/home/docker/cp-test.txt ha-683480-m03:/home/docker/cp-test_ha-683480-m04_ha-683480-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-683480 ssh -n ha-683480-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-683480 ssh -n ha-683480-m03 "sudo cat /home/docker/cp-test_ha-683480-m04_ha-683480-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (12.41s)
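Note: minikube cp accepts host paths, profile-local paths, and node-qualified paths on either side, which is what the long matrix above exercises: host to node, node to host, and node to node. For example:

    minikube -p ha-683480 cp testdata/cp-test.txt ha-683480-m02:/home/docker/cp-test.txt
    minikube -p ha-683480 cp ha-683480-m02:/home/docker/cp-test.txt /tmp/cp-test_ha-683480-m02.txt
    minikube -p ha-683480 cp ha-683480-m02:/home/docker/cp-test.txt ha-683480-m03:/home/docker/cp-test.txt
    minikube -p ha-683480 ssh -n ha-683480-m03 "sudo cat /home/docker/cp-test.txt"

/tmp/cp-test_ha-683480-m02.txt is only an illustrative destination; the test itself writes into a per-run temp directory.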

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (3.48s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
ha_test.go:390: (dbg) Done: out/minikube-linux-amd64 profile list --output json: (3.482806689s)
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (3.48s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.39s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.39s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.37s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.37s)

                                                
                                    
TestMultiControlPlane/serial/RestartCluster (314.83s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:560: (dbg) Run:  out/minikube-linux-amd64 start -p ha-683480 --wait=true -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0603 11:15:19.215731   15028 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/functional-835483/client.crt: no such file or directory
E0603 11:16:42.259569   15028 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/functional-835483/client.crt: no such file or directory
E0603 11:17:12.037778   15028 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/addons-926744/client.crt: no such file or directory
ha_test.go:560: (dbg) Done: out/minikube-linux-amd64 start -p ha-683480 --wait=true -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio: (5m14.112587837s)
ha_test.go:566: (dbg) Run:  out/minikube-linux-amd64 -p ha-683480 status -v=7 --alsologtostderr
ha_test.go:584: (dbg) Run:  kubectl get nodes
ha_test.go:592: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (314.83s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.36s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.36s)

                                                
                                    
TestMultiControlPlane/serial/AddSecondaryNode (73.28s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:605: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-683480 --control-plane -v=7 --alsologtostderr
E0603 11:20:19.213700   15028 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/functional-835483/client.crt: no such file or directory
ha_test.go:605: (dbg) Done: out/minikube-linux-amd64 node add -p ha-683480 --control-plane -v=7 --alsologtostderr: (1m12.485252958s)
ha_test.go:611: (dbg) Run:  out/minikube-linux-amd64 -p ha-683480 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (73.28s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.53s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.53s)

                                                
                                    
TestJSONOutput/start/Command (55.83s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-077573 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=crio
E0603 11:22:12.038462   15028 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/addons-926744/client.crt: no such file or directory
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-077573 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=crio: (55.830305946s)
--- PASS: TestJSONOutput/start/Command (55.83s)
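Note: with --output=json, every progress step, info message, and error is emitted as one JSON object per line (CloudEvents-style, with types io.k8s.sigs.minikube.step / .info / .error), which is what the DistinctCurrentSteps and IncreasingCurrentSteps checks below parse:

    minikube start -p json-output-077573 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2 --container-runtime=crio

An example of the event shape appears verbatim in the TestErrorJSONOutput stdout further down.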

                                                
                                    
TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/Command (0.75s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-077573 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.75s)

                                                
                                    
TestJSONOutput/pause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/Command (0.62s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-077573 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.62s)

                                                
                                    
TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/Command (7.36s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-077573 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-077573 --output=json --user=testUser: (7.361363423s)
--- PASS: TestJSONOutput/stop/Command (7.36s)

                                                
                                    
TestJSONOutput/stop/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestErrorJSONOutput (0.18s)

                                                
                                                
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-902456 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-902456 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (57.725642ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"831cb05f-b795-408b-b0f9-8a908401da91","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-902456] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"f986ac5a-86a2-4571-ac64-cc30cbbeaac6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19008"}}
	{"specversion":"1.0","id":"8bcd07fd-c543-4e76-a7c7-0c747c4aa8ff","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"eb5bbf47-ee30-497b-9463-8b57e361f52b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/19008-7755/kubeconfig"}}
	{"specversion":"1.0","id":"c7f87587-71af-4da1-a630-701df2c8ac83","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/19008-7755/.minikube"}}
	{"specversion":"1.0","id":"ae29e17c-f79e-4ad7-9aa0-7016cfebc598","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"420be255-5582-4062-b40e-c71349f09d13","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"6a440e06-9616-46ef-90dc-83c39b7ca041","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-902456" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-902456
--- PASS: TestErrorJSONOutput (0.18s)
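Each line of the --output=json stream above is a CloudEvents-style JSON object (specversion, id, source, type, data). A minimal sketch for pulling only the error events out of such a run, assuming jq is installed and using an illustrative profile name:

    out/minikube-linux-amd64 start -p json-output-demo --output=json --driver=fail 2>/dev/null \
      | jq -r 'select(.type == "io.k8s.sigs.minikube.error") | "\(.data.exitcode): \(.data.message)"'

Against the output captured above this would print the single DRV_UNSUPPORTED_OS message with exit code 56.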

                                                
                                    
TestMainNoArgs (0.04s)

                                                
                                                
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.04s)

                                                
                                    
TestMinikubeProfile (86.24s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-581773 --driver=kvm2  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-581773 --driver=kvm2  --container-runtime=crio: (43.656245521s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-584138 --driver=kvm2  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-584138 --driver=kvm2  --container-runtime=crio: (40.241480919s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-581773
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-584138
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-584138" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-584138
helpers_test.go:175: Cleaning up "first-581773" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-581773
--- PASS: TestMinikubeProfile (86.24s)
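The profile test above switches the active profile with "minikube profile <name>" and inspects "profile list -ojson". A minimal sketch for reading the profile names back out of that JSON, assuming jq is available and that the output keeps the valid/invalid grouping used by current minikube releases:

    out/minikube-linux-amd64 profile list -o json | jq -r '.valid[].Name'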

                                                
                                    
TestMountStart/serial/StartWithMountFirst (27.27s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-786412 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-786412 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (26.26768053s)
--- PASS: TestMountStart/serial/StartWithMountFirst (27.27s)

                                                
                                    
TestMountStart/serial/VerifyMountFirst (0.35s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-786412 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-786412 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountFirst (0.35s)

                                                
                                    
TestMountStart/serial/StartWithMountSecond (23.78s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-798878 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-798878 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (22.78204947s)
--- PASS: TestMountStart/serial/StartWithMountSecond (23.78s)

                                                
                                    
TestMountStart/serial/VerifyMountSecond (0.36s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-798878 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-798878 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountSecond (0.36s)

                                                
                                    
TestMountStart/serial/DeleteFirst (0.68s)

                                                
                                                
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-786412 --alsologtostderr -v=5
--- PASS: TestMountStart/serial/DeleteFirst (0.68s)

                                                
                                    
TestMountStart/serial/VerifyMountPostDelete (0.36s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-798878 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-798878 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.36s)

                                                
                                    
TestMountStart/serial/Stop (1.27s)

                                                
                                                
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-798878
mount_start_test.go:155: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-798878: (1.267862244s)
--- PASS: TestMountStart/serial/Stop (1.27s)

                                                
                                    
TestMountStart/serial/RestartStopped (22.58s)

                                                
                                                
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-798878
E0603 11:25:15.086793   15028 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/addons-926744/client.crt: no such file or directory
E0603 11:25:19.212981   15028 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/functional-835483/client.crt: no such file or directory
mount_start_test.go:166: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-798878: (21.58435368s)
--- PASS: TestMountStart/serial/RestartStopped (22.58s)

                                                
                                    
TestMountStart/serial/VerifyMountPostStop (0.35s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-798878 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-798878 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.35s)

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (99.95s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-505550 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-505550 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio: (1m39.560649198s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-505550 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (99.95s)

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (6.08s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-505550 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-505550 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-505550 -- rollout status deployment/busybox: (4.652215153s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-505550 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-505550 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-505550 -- exec busybox-fc5497c4f-mjlbn -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-505550 -- exec busybox-fc5497c4f-nrpnb -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-505550 -- exec busybox-fc5497c4f-mjlbn -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-505550 -- exec busybox-fc5497c4f-nrpnb -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-505550 -- exec busybox-fc5497c4f-mjlbn -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-505550 -- exec busybox-fc5497c4f-nrpnb -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (6.08s)
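The deploy step above checks DNS from a busybox pod on each node. A quick way to confirm that the two replicas really landed on different machines is to print each pod's node with plain kubectl (context name taken from the run above):

    kubectl --context multinode-505550 get pods \
      -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.nodeName}{"\n"}{end}'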

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (0.77s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-505550 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-505550 -- exec busybox-fc5497c4f-mjlbn -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-505550 -- exec busybox-fc5497c4f-mjlbn -- sh -c "ping -c 1 192.168.39.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-505550 -- exec busybox-fc5497c4f-nrpnb -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-505550 -- exec busybox-fc5497c4f-nrpnb -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.77s)

                                                
                                    
TestMultiNode/serial/AddNode (41.08s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-505550 -v 3 --alsologtostderr
E0603 11:27:12.038008   15028 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/addons-926744/client.crt: no such file or directory
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-505550 -v 3 --alsologtostderr: (40.539393849s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-505550 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (41.08s)

                                                
                                    
TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                                
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-505550 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                    
TestMultiNode/serial/ProfileList (0.2s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.20s)

                                                
                                    
TestMultiNode/serial/CopyFile (6.83s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-505550 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-505550 cp testdata/cp-test.txt multinode-505550:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-505550 ssh -n multinode-505550 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-505550 cp multinode-505550:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3202875871/001/cp-test_multinode-505550.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-505550 ssh -n multinode-505550 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-505550 cp multinode-505550:/home/docker/cp-test.txt multinode-505550-m02:/home/docker/cp-test_multinode-505550_multinode-505550-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-505550 ssh -n multinode-505550 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-505550 ssh -n multinode-505550-m02 "sudo cat /home/docker/cp-test_multinode-505550_multinode-505550-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-505550 cp multinode-505550:/home/docker/cp-test.txt multinode-505550-m03:/home/docker/cp-test_multinode-505550_multinode-505550-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-505550 ssh -n multinode-505550 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-505550 ssh -n multinode-505550-m03 "sudo cat /home/docker/cp-test_multinode-505550_multinode-505550-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-505550 cp testdata/cp-test.txt multinode-505550-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-505550 ssh -n multinode-505550-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-505550 cp multinode-505550-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3202875871/001/cp-test_multinode-505550-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-505550 ssh -n multinode-505550-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-505550 cp multinode-505550-m02:/home/docker/cp-test.txt multinode-505550:/home/docker/cp-test_multinode-505550-m02_multinode-505550.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-505550 ssh -n multinode-505550-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-505550 ssh -n multinode-505550 "sudo cat /home/docker/cp-test_multinode-505550-m02_multinode-505550.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-505550 cp multinode-505550-m02:/home/docker/cp-test.txt multinode-505550-m03:/home/docker/cp-test_multinode-505550-m02_multinode-505550-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-505550 ssh -n multinode-505550-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-505550 ssh -n multinode-505550-m03 "sudo cat /home/docker/cp-test_multinode-505550-m02_multinode-505550-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-505550 cp testdata/cp-test.txt multinode-505550-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-505550 ssh -n multinode-505550-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-505550 cp multinode-505550-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3202875871/001/cp-test_multinode-505550-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-505550 ssh -n multinode-505550-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-505550 cp multinode-505550-m03:/home/docker/cp-test.txt multinode-505550:/home/docker/cp-test_multinode-505550-m03_multinode-505550.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-505550 ssh -n multinode-505550-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-505550 ssh -n multinode-505550 "sudo cat /home/docker/cp-test_multinode-505550-m03_multinode-505550.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-505550 cp multinode-505550-m03:/home/docker/cp-test.txt multinode-505550-m02:/home/docker/cp-test_multinode-505550-m03_multinode-505550-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-505550 ssh -n multinode-505550-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-505550 ssh -n multinode-505550-m02 "sudo cat /home/docker/cp-test_multinode-505550-m03_multinode-505550-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (6.83s)
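As the copy matrix above shows, both ends of the cp subcommand may name a node, so a file can be moved directly between machines in the cluster; for example (paths and node names as in the run above):

    out/minikube-linux-amd64 -p multinode-505550 cp multinode-505550-m02:/home/docker/cp-test.txt multinode-505550-m03:/home/docker/cp-test.txt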

                                                
                                    
TestMultiNode/serial/StopNode (2.34s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-505550 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-505550 node stop m03: (1.546943044s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-505550 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-505550 status: exit status 7 (397.10717ms)

                                                
                                                
-- stdout --
	multinode-505550
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-505550-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-505550-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-505550 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-505550 status --alsologtostderr: exit status 7 (396.972854ms)

                                                
                                                
-- stdout --
	multinode-505550
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-505550-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-505550-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0603 11:28:00.266562   43297 out.go:291] Setting OutFile to fd 1 ...
	I0603 11:28:00.266676   43297 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0603 11:28:00.266685   43297 out.go:304] Setting ErrFile to fd 2...
	I0603 11:28:00.266689   43297 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0603 11:28:00.266859   43297 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19008-7755/.minikube/bin
	I0603 11:28:00.267083   43297 out.go:298] Setting JSON to false
	I0603 11:28:00.267111   43297 mustload.go:65] Loading cluster: multinode-505550
	I0603 11:28:00.267206   43297 notify.go:220] Checking for updates...
	I0603 11:28:00.267527   43297 config.go:182] Loaded profile config "multinode-505550": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0603 11:28:00.267543   43297 status.go:255] checking status of multinode-505550 ...
	I0603 11:28:00.267902   43297 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 11:28:00.267985   43297 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 11:28:00.286381   43297 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40423
	I0603 11:28:00.286727   43297 main.go:141] libmachine: () Calling .GetVersion
	I0603 11:28:00.287250   43297 main.go:141] libmachine: Using API Version  1
	I0603 11:28:00.287278   43297 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 11:28:00.287669   43297 main.go:141] libmachine: () Calling .GetMachineName
	I0603 11:28:00.287871   43297 main.go:141] libmachine: (multinode-505550) Calling .GetState
	I0603 11:28:00.289378   43297 status.go:330] multinode-505550 host status = "Running" (err=<nil>)
	I0603 11:28:00.289393   43297 host.go:66] Checking if "multinode-505550" exists ...
	I0603 11:28:00.289667   43297 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 11:28:00.289702   43297 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 11:28:00.304071   43297 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39355
	I0603 11:28:00.304443   43297 main.go:141] libmachine: () Calling .GetVersion
	I0603 11:28:00.304889   43297 main.go:141] libmachine: Using API Version  1
	I0603 11:28:00.304909   43297 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 11:28:00.305172   43297 main.go:141] libmachine: () Calling .GetMachineName
	I0603 11:28:00.305314   43297 main.go:141] libmachine: (multinode-505550) Calling .GetIP
	I0603 11:28:00.307861   43297 main.go:141] libmachine: (multinode-505550) DBG | domain multinode-505550 has defined MAC address 52:54:00:d9:78:ff in network mk-multinode-505550
	I0603 11:28:00.308255   43297 main.go:141] libmachine: (multinode-505550) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:78:ff", ip: ""} in network mk-multinode-505550: {Iface:virbr1 ExpiryTime:2024-06-03 12:25:37 +0000 UTC Type:0 Mac:52:54:00:d9:78:ff Iaid: IPaddr:192.168.39.232 Prefix:24 Hostname:multinode-505550 Clientid:01:52:54:00:d9:78:ff}
	I0603 11:28:00.308272   43297 main.go:141] libmachine: (multinode-505550) DBG | domain multinode-505550 has defined IP address 192.168.39.232 and MAC address 52:54:00:d9:78:ff in network mk-multinode-505550
	I0603 11:28:00.308429   43297 host.go:66] Checking if "multinode-505550" exists ...
	I0603 11:28:00.308690   43297 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 11:28:00.308726   43297 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 11:28:00.324861   43297 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41981
	I0603 11:28:00.325175   43297 main.go:141] libmachine: () Calling .GetVersion
	I0603 11:28:00.325636   43297 main.go:141] libmachine: Using API Version  1
	I0603 11:28:00.325657   43297 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 11:28:00.325935   43297 main.go:141] libmachine: () Calling .GetMachineName
	I0603 11:28:00.326113   43297 main.go:141] libmachine: (multinode-505550) Calling .DriverName
	I0603 11:28:00.326306   43297 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0603 11:28:00.326339   43297 main.go:141] libmachine: (multinode-505550) Calling .GetSSHHostname
	I0603 11:28:00.328647   43297 main.go:141] libmachine: (multinode-505550) DBG | domain multinode-505550 has defined MAC address 52:54:00:d9:78:ff in network mk-multinode-505550
	I0603 11:28:00.329006   43297 main.go:141] libmachine: (multinode-505550) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:78:ff", ip: ""} in network mk-multinode-505550: {Iface:virbr1 ExpiryTime:2024-06-03 12:25:37 +0000 UTC Type:0 Mac:52:54:00:d9:78:ff Iaid: IPaddr:192.168.39.232 Prefix:24 Hostname:multinode-505550 Clientid:01:52:54:00:d9:78:ff}
	I0603 11:28:00.329031   43297 main.go:141] libmachine: (multinode-505550) DBG | domain multinode-505550 has defined IP address 192.168.39.232 and MAC address 52:54:00:d9:78:ff in network mk-multinode-505550
	I0603 11:28:00.329159   43297 main.go:141] libmachine: (multinode-505550) Calling .GetSSHPort
	I0603 11:28:00.329330   43297 main.go:141] libmachine: (multinode-505550) Calling .GetSSHKeyPath
	I0603 11:28:00.329493   43297 main.go:141] libmachine: (multinode-505550) Calling .GetSSHUsername
	I0603 11:28:00.329633   43297 sshutil.go:53] new ssh client: &{IP:192.168.39.232 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19008-7755/.minikube/machines/multinode-505550/id_rsa Username:docker}
	I0603 11:28:00.406473   43297 ssh_runner.go:195] Run: systemctl --version
	I0603 11:28:00.412592   43297 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0603 11:28:00.426913   43297 kubeconfig.go:125] found "multinode-505550" server: "https://192.168.39.232:8443"
	I0603 11:28:00.426940   43297 api_server.go:166] Checking apiserver status ...
	I0603 11:28:00.426966   43297 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 11:28:00.439601   43297 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1171/cgroup
	W0603 11:28:00.448076   43297 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1171/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0603 11:28:00.448108   43297 ssh_runner.go:195] Run: ls
	I0603 11:28:00.452345   43297 api_server.go:253] Checking apiserver healthz at https://192.168.39.232:8443/healthz ...
	I0603 11:28:00.456438   43297 api_server.go:279] https://192.168.39.232:8443/healthz returned 200:
	ok
	I0603 11:28:00.456456   43297 status.go:422] multinode-505550 apiserver status = Running (err=<nil>)
	I0603 11:28:00.456465   43297 status.go:257] multinode-505550 status: &{Name:multinode-505550 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0603 11:28:00.456479   43297 status.go:255] checking status of multinode-505550-m02 ...
	I0603 11:28:00.456749   43297 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 11:28:00.456784   43297 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 11:28:00.471953   43297 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40515
	I0603 11:28:00.472330   43297 main.go:141] libmachine: () Calling .GetVersion
	I0603 11:28:00.472722   43297 main.go:141] libmachine: Using API Version  1
	I0603 11:28:00.472740   43297 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 11:28:00.473031   43297 main.go:141] libmachine: () Calling .GetMachineName
	I0603 11:28:00.473207   43297 main.go:141] libmachine: (multinode-505550-m02) Calling .GetState
	I0603 11:28:00.474609   43297 status.go:330] multinode-505550-m02 host status = "Running" (err=<nil>)
	I0603 11:28:00.474624   43297 host.go:66] Checking if "multinode-505550-m02" exists ...
	I0603 11:28:00.474998   43297 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 11:28:00.475064   43297 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 11:28:00.489362   43297 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43907
	I0603 11:28:00.489690   43297 main.go:141] libmachine: () Calling .GetVersion
	I0603 11:28:00.490108   43297 main.go:141] libmachine: Using API Version  1
	I0603 11:28:00.490137   43297 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 11:28:00.490404   43297 main.go:141] libmachine: () Calling .GetMachineName
	I0603 11:28:00.490575   43297 main.go:141] libmachine: (multinode-505550-m02) Calling .GetIP
	I0603 11:28:00.493104   43297 main.go:141] libmachine: (multinode-505550-m02) DBG | domain multinode-505550-m02 has defined MAC address 52:54:00:4b:a3:ff in network mk-multinode-505550
	I0603 11:28:00.493496   43297 main.go:141] libmachine: (multinode-505550-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:a3:ff", ip: ""} in network mk-multinode-505550: {Iface:virbr1 ExpiryTime:2024-06-03 12:26:38 +0000 UTC Type:0 Mac:52:54:00:4b:a3:ff Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:multinode-505550-m02 Clientid:01:52:54:00:4b:a3:ff}
	I0603 11:28:00.493525   43297 main.go:141] libmachine: (multinode-505550-m02) DBG | domain multinode-505550-m02 has defined IP address 192.168.39.227 and MAC address 52:54:00:4b:a3:ff in network mk-multinode-505550
	I0603 11:28:00.493674   43297 host.go:66] Checking if "multinode-505550-m02" exists ...
	I0603 11:28:00.494048   43297 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 11:28:00.494080   43297 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 11:28:00.507819   43297 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43753
	I0603 11:28:00.508221   43297 main.go:141] libmachine: () Calling .GetVersion
	I0603 11:28:00.508626   43297 main.go:141] libmachine: Using API Version  1
	I0603 11:28:00.508645   43297 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 11:28:00.508931   43297 main.go:141] libmachine: () Calling .GetMachineName
	I0603 11:28:00.509083   43297 main.go:141] libmachine: (multinode-505550-m02) Calling .DriverName
	I0603 11:28:00.509263   43297 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0603 11:28:00.509284   43297 main.go:141] libmachine: (multinode-505550-m02) Calling .GetSSHHostname
	I0603 11:28:00.511991   43297 main.go:141] libmachine: (multinode-505550-m02) DBG | domain multinode-505550-m02 has defined MAC address 52:54:00:4b:a3:ff in network mk-multinode-505550
	I0603 11:28:00.512402   43297 main.go:141] libmachine: (multinode-505550-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:a3:ff", ip: ""} in network mk-multinode-505550: {Iface:virbr1 ExpiryTime:2024-06-03 12:26:38 +0000 UTC Type:0 Mac:52:54:00:4b:a3:ff Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:multinode-505550-m02 Clientid:01:52:54:00:4b:a3:ff}
	I0603 11:28:00.512422   43297 main.go:141] libmachine: (multinode-505550-m02) DBG | domain multinode-505550-m02 has defined IP address 192.168.39.227 and MAC address 52:54:00:4b:a3:ff in network mk-multinode-505550
	I0603 11:28:00.512557   43297 main.go:141] libmachine: (multinode-505550-m02) Calling .GetSSHPort
	I0603 11:28:00.512714   43297 main.go:141] libmachine: (multinode-505550-m02) Calling .GetSSHKeyPath
	I0603 11:28:00.512834   43297 main.go:141] libmachine: (multinode-505550-m02) Calling .GetSSHUsername
	I0603 11:28:00.512971   43297 sshutil.go:53] new ssh client: &{IP:192.168.39.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19008-7755/.minikube/machines/multinode-505550-m02/id_rsa Username:docker}
	I0603 11:28:00.590557   43297 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0603 11:28:00.604174   43297 status.go:257] multinode-505550-m02 status: &{Name:multinode-505550-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0603 11:28:00.604212   43297 status.go:255] checking status of multinode-505550-m03 ...
	I0603 11:28:00.604531   43297 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 11:28:00.604573   43297 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 11:28:00.619460   43297 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37501
	I0603 11:28:00.619880   43297 main.go:141] libmachine: () Calling .GetVersion
	I0603 11:28:00.620322   43297 main.go:141] libmachine: Using API Version  1
	I0603 11:28:00.620342   43297 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 11:28:00.620689   43297 main.go:141] libmachine: () Calling .GetMachineName
	I0603 11:28:00.620824   43297 main.go:141] libmachine: (multinode-505550-m03) Calling .GetState
	I0603 11:28:00.622277   43297 status.go:330] multinode-505550-m03 host status = "Stopped" (err=<nil>)
	I0603 11:28:00.622288   43297 status.go:343] host is not running, skipping remaining checks
	I0603 11:28:00.622294   43297 status.go:257] multinode-505550-m03 status: &{Name:multinode-505550-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.34s)
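Note that "minikube status" exits nonzero (exit status 7 in the run above) once any host in the profile is stopped, so the command itself can act as a simple health gate in scripts:

    out/minikube-linux-amd64 -p multinode-505550 status || echo "at least one node is not running"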

                                                
                                    
TestMultiNode/serial/StartAfterStop (29.09s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-505550 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-505550 node start m03 -v=7 --alsologtostderr: (28.493468962s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-505550 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (29.09s)

                                                
                                    
TestMultiNode/serial/DeleteNode (2.17s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-505550 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-505550 node delete m03: (1.665579536s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-505550 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (2.17s)

                                                
                                    
TestMultiNode/serial/RestartMultiNode (202.3s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-505550 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0603 11:37:12.037782   15028 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/addons-926744/client.crt: no such file or directory
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-505550 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio: (3m21.784142285s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-505550 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (202.30s)

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (45.45s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-505550
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-505550-m02 --driver=kvm2  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-505550-m02 --driver=kvm2  --container-runtime=crio: exit status 14 (60.907639ms)

                                                
                                                
-- stdout --
	* [multinode-505550-m02] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19008
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19008-7755/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19008-7755/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-505550-m02' is duplicated with machine name 'multinode-505550-m02' in profile 'multinode-505550'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-505550-m03 --driver=kvm2  --container-runtime=crio
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-505550-m03 --driver=kvm2  --container-runtime=crio: (44.442249919s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-505550
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-505550: exit status 80 (197.26755ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-505550 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-505550-m03 already exists in multinode-505550-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-505550-m03
--- PASS: TestMultiNode/serial/ValidateNameConflict (45.45s)

                                                
                                    
TestScheduledStopUnix (116.6s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-435255 --memory=2048 --driver=kvm2  --container-runtime=crio
E0603 11:45:19.215352   15028 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/functional-835483/client.crt: no such file or directory
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-435255 --memory=2048 --driver=kvm2  --container-runtime=crio: (45.07715123s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-435255 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-435255 -n scheduled-stop-435255
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-435255 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-435255 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-435255 -n scheduled-stop-435255
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-435255
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-435255 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-435255
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-435255: exit status 7 (60.191219ms)

                                                
                                                
-- stdout --
	scheduled-stop-435255
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-435255 -n scheduled-stop-435255
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-435255 -n scheduled-stop-435255: exit status 7 (64.023779ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-435255" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-435255
--- PASS: TestScheduledStopUnix (116.60s)
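The scheduled-stop flow exercised above comes down to two flags plus a status check; a minimal sketch against an illustrative profile name:

    out/minikube-linux-amd64 stop -p scheduled-stop-demo --schedule 5m        # arm a stop five minutes out
    out/minikube-linux-amd64 stop -p scheduled-stop-demo --cancel-scheduled   # disarm it before it fires
    out/minikube-linux-amd64 status -p scheduled-stop-demo --format={{.TimeToStop}}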

                                                
                                    
TestRunningBinaryUpgrade (223.71s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.752473974 start -p running-upgrade-165058 --memory=2200 --vm-driver=kvm2  --container-runtime=crio
E0603 11:47:12.037550   15028 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/addons-926744/client.crt: no such file or directory
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.752473974 start -p running-upgrade-165058 --memory=2200 --vm-driver=kvm2  --container-runtime=crio: (2m2.284572167s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-165058 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-165058 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m37.582180475s)
helpers_test.go:175: Cleaning up "running-upgrade-165058" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-165058
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-165058: (1.196762076s)
--- PASS: TestRunningBinaryUpgrade (223.71s)

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-154116 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-154116 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=crio: exit status 14 (78.046209ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-154116] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19008
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19008-7755/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19008-7755/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)
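As the MK_USAGE error above indicates, --kubernetes-version cannot be combined with --no-kubernetes. If a version is also pinned in the global config, clearing it first lets a no-Kubernetes start go through; a sketch with an illustrative profile name:

    out/minikube-linux-amd64 config unset kubernetes-version
    out/minikube-linux-amd64 start -p NoKubernetes-demo --no-kubernetes --driver=kvm2 --container-runtime=crio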

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (93.26s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-154116 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-154116 --driver=kvm2  --container-runtime=crio: (1m33.026660499s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-154116 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (93.26s)

                                                
                                    
TestNetworkPlugins/group/false (3s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-034991 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-034991 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio: exit status 14 (98.6535ms)

                                                
                                                
-- stdout --
	* [false-034991] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19008
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19008-7755/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19008-7755/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0603 11:47:52.453082   51948 out.go:291] Setting OutFile to fd 1 ...
	I0603 11:47:52.453346   51948 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0603 11:47:52.453355   51948 out.go:304] Setting ErrFile to fd 2...
	I0603 11:47:52.453359   51948 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0603 11:47:52.453592   51948 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19008-7755/.minikube/bin
	I0603 11:47:52.454156   51948 out.go:298] Setting JSON to false
	I0603 11:47:52.455141   51948 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":5417,"bootTime":1717409855,"procs":203,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1060-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0603 11:47:52.455196   51948 start.go:139] virtualization: kvm guest
	I0603 11:47:52.457388   51948 out.go:177] * [false-034991] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0603 11:47:52.458627   51948 out.go:177]   - MINIKUBE_LOCATION=19008
	I0603 11:47:52.458629   51948 notify.go:220] Checking for updates...
	I0603 11:47:52.459799   51948 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0603 11:47:52.461071   51948 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19008-7755/kubeconfig
	I0603 11:47:52.462271   51948 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19008-7755/.minikube
	I0603 11:47:52.463759   51948 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0603 11:47:52.464986   51948 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0603 11:47:52.466626   51948 config.go:182] Loaded profile config "NoKubernetes-154116": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0603 11:47:52.466739   51948 config.go:182] Loaded profile config "offline-crio-125275": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0603 11:47:52.466834   51948 config.go:182] Loaded profile config "running-upgrade-165058": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.1
	I0603 11:47:52.466934   51948 driver.go:392] Setting default libvirt URI to qemu:///system
	I0603 11:47:52.503356   51948 out.go:177] * Using the kvm2 driver based on user configuration
	I0603 11:47:52.504674   51948 start.go:297] selected driver: kvm2
	I0603 11:47:52.504693   51948 start.go:901] validating driver "kvm2" against <nil>
	I0603 11:47:52.504709   51948 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0603 11:47:52.506844   51948 out.go:177] 
	W0603 11:47:52.508091   51948 out.go:239] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I0603 11:47:52.509231   51948 out.go:177] 

                                                
                                                
** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-034991 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-034991

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-034991

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-034991

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-034991

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-034991

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-034991

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-034991

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-034991

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-034991

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-034991

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-034991" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-034991"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-034991" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-034991"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-034991" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-034991"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-034991

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-034991" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-034991"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-034991" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-034991"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-034991" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-034991" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-034991" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-034991" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-034991" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-034991" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-034991" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-034991" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-034991" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-034991"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-034991" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-034991"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-034991" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-034991"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-034991" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-034991"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-034991" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-034991"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-034991" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-034991" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-034991" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-034991" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-034991"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-034991" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-034991"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-034991" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-034991"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-034991" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-034991"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-034991" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-034991"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/19008-7755/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 03 Jun 2024 11:47:35 UTC
        provider: minikube.sigs.k8s.io
        version: v1.33.1
      name: cluster_info
    server: https://192.168.39.33:8443
  name: offline-crio-125275
contexts:
- context:
    cluster: offline-crio-125275
    extensions:
    - extension:
        last-update: Mon, 03 Jun 2024 11:47:35 UTC
        provider: minikube.sigs.k8s.io
        version: v1.33.1
      name: context_info
    namespace: default
    user: offline-crio-125275
  name: offline-crio-125275
current-context: ""
kind: Config
preferences: {}
users:
- name: offline-crio-125275
  user:
    client-certificate: /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/offline-crio-125275/client.crt
    client-key: /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/offline-crio-125275/client.key
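Every false-034991 probe above fails for the same reason this kubeconfig makes visible: the only cluster, context, and user written so far belong to offline-crio-125275, and current-context is empty, so kubectl has nothing to resolve "false-034991" against. A quick manual check against the same kubeconfig (a sketch; it assumes KUBECONFIG points at the file dumped above):

	# false-034991 does not appear in the context list
	kubectl config get-contexts
	# reproduces the "context was not found" errors from the probes above
	kubectl --context false-034991 get pods
	# succeeds, since this context exists in the kubeconfig
	kubectl --context offline-crio-125275 get nodes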

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-034991

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "false-034991" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-034991"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "false-034991" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-034991"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "false-034991" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-034991"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "false-034991" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-034991"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "false-034991" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-034991"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "false-034991" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-034991"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-034991" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-034991"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-034991" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-034991"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "false-034991" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-034991"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "false-034991" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-034991"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "false-034991" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-034991"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "false-034991" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-034991"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "false-034991" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-034991"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "false-034991" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-034991"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "false-034991" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-034991"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "false-034991" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-034991"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "false-034991" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-034991"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "false-034991" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-034991"

                                                
                                                
----------------------- debugLogs end: false-034991 [took: 2.731865788s] --------------------------------
helpers_test.go:175: Cleaning up "false-034991" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-034991
--- PASS: TestNetworkPlugins/group/false (3.00s)
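The false variant is expected to bail out almost immediately: with CRI-O as the container runtime, minikube refuses to start when CNI is disabled, which is the MK_USAGE error captured in the stderr at the top of this block, and the test treats that refusal as a pass. A minimal reproduction of the guard, assuming current minikube flags (the profile name here is illustrative):

	# exits with: Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	minikube start -p cni-false-demo --cni=false --container-runtime=crio --driver=kvm2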

                                                
                                    
x
+
TestPause/serial/Start (70.95s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-588037 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-588037 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio: (1m10.952570852s)
--- PASS: TestPause/serial/Start (70.95s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithStopK8s (41.97s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-154116 --no-kubernetes --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-154116 --no-kubernetes --driver=kvm2  --container-runtime=crio: (40.84282363s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-154116 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-154116 status -o json: exit status 2 (262.050624ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-154116","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
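The non-zero exit is the expected signal here: the host is Running while kubelet and the API server are Stopped, and minikube status reports that degraded state through its exit code (2 in this run) as well as the JSON. A small sketch for pulling the relevant fields out of that JSON, assuming jq is available on the runner:

	# expect Running / Stopped / Stopped in --no-kubernetes mode
	minikube -p NoKubernetes-154116 status -o json | jq -r '.Host, .Kubelet, .APIServer'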
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-154116
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (41.97s)

                                                
                                    
x
+
TestNoKubernetes/serial/Start (52.74s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-154116 --no-kubernetes --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-154116 --no-kubernetes --driver=kvm2  --container-runtime=crio: (52.738806423s)
--- PASS: TestNoKubernetes/serial/Start (52.74s)

                                                
                                    
x
+
TestPause/serial/SecondStartNoReconfiguration (70.21s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-588037 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-588037 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m10.178402217s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (70.21s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunning (0.19s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-154116 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-154116 "sudo systemctl is-active --quiet service kubelet": exit status 1 (190.40483ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.19s)
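The verification leans on systemctl's exit status rather than its output: with --quiet, is-active prints nothing and exits 0 only when the unit is active, so the "Process exited with status 3" relayed over ssh is what proves kubelet is not running. A hand-run equivalent, slightly simplified from the test's command:

	# non-zero exit (3 in this run) means kubelet is inactive inside the VM
	minikube ssh -p NoKubernetes-154116 "sudo systemctl is-active --quiet kubelet" \
		&& echo "kubelet active" || echo "kubelet not running"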

                                                
                                    
x
+
TestNoKubernetes/serial/ProfileList (27.27s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:169: (dbg) Done: out/minikube-linux-amd64 profile list: (13.923784708s)
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
E0603 11:50:02.260668   15028 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/functional-835483/client.crt: no such file or directory
no_kubernetes_test.go:179: (dbg) Done: out/minikube-linux-amd64 profile list --output=json: (13.345353371s)
--- PASS: TestNoKubernetes/serial/ProfileList (27.27s)

                                                
                                    
x
+
TestNoKubernetes/serial/Stop (1.37s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-154116
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-154116: (1.370556752s)
--- PASS: TestNoKubernetes/serial/Stop (1.37s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoArgs (21.34s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-154116 --driver=kvm2  --container-runtime=crio
E0603 11:50:19.213404   15028 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/functional-835483/client.crt: no such file or directory
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-154116 --driver=kvm2  --container-runtime=crio: (21.343968188s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (21.34s)

                                                
                                    
x
+
TestPause/serial/Pause (0.76s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-588037 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.76s)

                                                
                                    
x
+
TestPause/serial/VerifyStatus (0.27s)

                                                
                                                
=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p pause-588037 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p pause-588037 --output=json --layout=cluster: exit status 2 (267.509665ms)

                                                
                                                
-- stdout --
	{"Name":"pause-588037","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 6 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.33.1","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-588037","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
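With --layout=cluster, minikube reports per-component status codes styled after HTTP, so a paused cluster shows StatusCode 418 / StatusName "Paused" for the apiserver while the kubeconfig component stays 200/OK, and the command exits 2 to flag the non-running state. A sketch for asserting that in a script, assuming jq is available:

	# passes only while the cluster is paused
	test "$(minikube status -p pause-588037 --output=json --layout=cluster | jq -r .StatusName)" = "Paused"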
--- PASS: TestPause/serial/VerifyStatus (0.27s)

                                                
                                    
x
+
TestPause/serial/Unpause (0.68s)

                                                
                                                
=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-amd64 unpause -p pause-588037 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.68s)

                                                
                                    
x
+
TestPause/serial/PauseAgain (0.86s)

                                                
                                                
=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-588037 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.86s)

                                                
                                    
x
+
TestPause/serial/DeletePaused (0.78s)

                                                
                                                
=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p pause-588037 --alsologtostderr -v=5
--- PASS: TestPause/serial/DeletePaused (0.78s)

                                                
                                    
x
+
TestPause/serial/VerifyDeletedResources (1.41s)

                                                
                                                
=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
pause_test.go:142: (dbg) Done: out/minikube-linux-amd64 profile list --output json: (1.411203612s)
--- PASS: TestPause/serial/VerifyDeletedResources (1.41s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.2s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-154116 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-154116 "sudo systemctl is-active --quiet service kubelet": exit status 1 (196.506624ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.20s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Setup (3.1s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (3.10s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Upgrade (125.11s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.967798703 start -p stopped-upgrade-258172 --memory=2200 --vm-driver=kvm2  --container-runtime=crio
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.967798703 start -p stopped-upgrade-258172 --memory=2200 --vm-driver=kvm2  --container-runtime=crio: (1m16.369048828s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.967798703 -p stopped-upgrade-258172 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.967798703 -p stopped-upgrade-258172 stop: (2.610929366s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-258172 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-258172 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (46.126822172s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (125.11s)
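The whole upgrade scenario is those three commands in sequence: provision a cluster with a pinned v1.26.0 release, stop it with that same legacy binary, then restart the stopped cluster with the binary under test and require the start to succeed. Condensed, with the paths and profile name copied from the log above:

	# 1. create the cluster with the legacy release
	/tmp/minikube-v1.26.0.967798703 start -p stopped-upgrade-258172 --memory=2200 --vm-driver=kvm2 --container-runtime=crio
	# 2. stop it with the same legacy binary
	/tmp/minikube-v1.26.0.967798703 -p stopped-upgrade-258172 stop
	# 3. restart the stopped cluster with the freshly built binary
	out/minikube-linux-amd64 start -p stopped-upgrade-258172 --memory=2200 --driver=kvm2 --container-runtime=crio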

                                                
                                    
x
+
TestNetworkPlugins/group/auto/Start (124.62s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-034991 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio
E0603 11:52:12.037643   15028 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/addons-926744/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-034991 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio: (2m4.622413717s)
--- PASS: TestNetworkPlugins/group/auto/Start (124.62s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Start (87.46s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-034991 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-034991 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio: (1m27.464597274s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (87.46s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/MinikubeLogs (0.9s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-258172
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.90s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Start (64.3s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-034991 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-034991 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio: (1m4.298183051s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (64.30s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/KubeletFlags (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-034991 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.20s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/NetCatPod (10.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-034991 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-5srdz" [f2a6d393-d0c3-4933-aabb-53de490b5742] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-5srdz" [f2a6d393-d0c3-4933-aabb-53de490b5742] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 10.007333532s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (10.23s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/DNS (0.27s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-034991 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.27s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/Localhost (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-034991 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.19s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/HairPin (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-034991 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.19s)
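The Localhost and HairPin probes both use netcat in scan mode: -z opens the connection without sending data, -w 5 caps the wait at five seconds, and -i 5 spaces out the attempts; HairPin passes when the pod can reach its own Service name ("netcat", port 8080) back through the service VIP. The same probe run by hand, assuming the netcat deployment and service from testdata/netcat-deployment.yaml are still deployed:

	# exit 0 means the hairpin path through the Service works
	kubectl --context auto-034991 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"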

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Start (83.42s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-034991 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-034991 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio: (1m23.422691304s)
--- PASS: TestNetworkPlugins/group/flannel/Start (83.42s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-034991 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.24s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/NetCatPod (10.3s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-034991 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-b2cld" [4f48b4bf-2b26-4c58-b189-0167f55a5df9] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-b2cld" [4f48b4bf-2b26-4c58-b189-0167f55a5df9] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 10.004882588s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (10.30s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-sqt6k" [617109da-62a2-4034-a58d-82439c37a8be] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.004459054s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)
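The ControllerPod step only waits for a Running pod carrying the app=kindnet label in kube-system; a quick manual equivalent of that wait, assuming the same label selector the test uses:

	# the kindnet DaemonSet pod(s) should be Running on every node
	kubectl --context kindnet-034991 -n kube-system get pods -l app=kindnet -o wide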

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/DNS (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-034991 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Localhost (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-034991 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.12s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/HairPin (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-034991 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.12s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/KubeletFlags (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-034991 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.22s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/NetCatPod (11.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-034991 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-slh6f" [a6b3a0b8-abf1-4b30-8192-c1685ffe11df] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-slh6f" [a6b3a0b8-abf1-4b30-8192-c1685ffe11df] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 11.004543348s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (11.24s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/DNS (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-034991 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.19s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Start (99.8s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-034991 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-034991 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio: (1m39.800231467s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (99.80s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Localhost (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-034991 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/HairPin (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-034991 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Start (110.83s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-034991 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio
E0603 11:55:19.213395   15028 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/functional-835483/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-034991 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio: (1m50.826894389s)
--- PASS: TestNetworkPlugins/group/bridge/Start (110.83s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-dl58b" [5b822105-2915-4b1b-8125-86831f4fa6c7] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.004812142s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/KubeletFlags (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-034991 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.21s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/NetCatPod (11.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-034991 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-snwzd" [048a0b5e-51ff-498b-81b5-4997f987bf20] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-snwzd" [048a0b5e-51ff-498b-81b5-4997f987bf20] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 11.004711195s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (11.23s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/DNS (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-034991 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.20s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Localhost (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-034991 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/HairPin (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-034991 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Start (102.39s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-034991 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-034991 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio: (1m42.389495981s)
--- PASS: TestNetworkPlugins/group/calico/Start (102.39s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-034991 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.22s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.27s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-034991 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-22xfv" [1e239657-ce82-4b7b-828f-e84dd67b4de5] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-22xfv" [1e239657-ce82-4b7b-828f-e84dd67b4de5] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 10.003827027s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.27s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/DNS (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-034991 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Localhost (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-034991 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/HairPin (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-034991 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.12s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/KubeletFlags (0.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-034991 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.26s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/NetCatPod (11.28s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-034991 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-5lwv6" [003a64ed-d2c3-4f0e-9177-3c4598c492b8] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-5lwv6" [003a64ed-d2c3-4f0e-9177-3c4598c492b8] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 11.004927752s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (11.28s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/FirstStart (134.4s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-602118 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.1
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-602118 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.1: (2m14.404223299s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (134.40s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/DNS (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-034991 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.20s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Localhost (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-034991 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.19s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/HairPin (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-034991 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.16s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/FirstStart (95.52s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-725022 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.1
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-725022 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.1: (1m35.516701285s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (95.52s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-rmwg6" [8a0c0513-5f15-4737-a51f-4b96841f4342] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.006444189s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/KubeletFlags (0.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-034991 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.23s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/NetCatPod (12.46s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-034991 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-qhf7r" [f1dd8c6e-52f0-4353-ada3-6de5cd86e2f9] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-qhf7r" [f1dd8c6e-52f0-4353-ada3-6de5cd86e2f9] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 12.003864278s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (12.46s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/DNS (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-034991 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Localhost (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-034991 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/HairPin (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-034991 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.13s)
E0603 12:27:12.037949   15028 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/addons-926744/client.crt: no such file or directory

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (59.64s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-196710 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.1
E0603 11:58:35.089941   15028 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/addons-926744/client.crt: no such file or directory
E0603 11:58:48.589736   15028 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/auto-034991/client.crt: no such file or directory
E0603 11:58:48.595659   15028 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/auto-034991/client.crt: no such file or directory
E0603 11:58:48.605938   15028 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/auto-034991/client.crt: no such file or directory
E0603 11:58:48.626189   15028 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/auto-034991/client.crt: no such file or directory
E0603 11:58:48.666446   15028 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/auto-034991/client.crt: no such file or directory
E0603 11:58:48.746761   15028 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/auto-034991/client.crt: no such file or directory
E0603 11:58:48.907150   15028 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/auto-034991/client.crt: no such file or directory
E0603 11:58:49.227291   15028 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/auto-034991/client.crt: no such file or directory
E0603 11:58:49.867815   15028 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/auto-034991/client.crt: no such file or directory
E0603 11:58:51.148921   15028 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/auto-034991/client.crt: no such file or directory
E0603 11:58:53.709688   15028 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/auto-034991/client.crt: no such file or directory
E0603 11:58:58.830035   15028 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/auto-034991/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-196710 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.1: (59.644752525s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (59.64s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/DeployApp (11.31s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-725022 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [b8eb2eb1-621e-4c43-ab5d-4c13e4471b3e] Pending
helpers_test.go:344: "busybox" [b8eb2eb1-621e-4c43-ab5d-4c13e4471b3e] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [b8eb2eb1-621e-4c43-ab5d-4c13e4471b3e] Running
E0603 11:59:09.071184   15028 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/auto-034991/client.crt: no such file or directory
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 11.004264623s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-725022 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (11.31s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.97s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-725022 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-725022 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.97s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/DeployApp (10.28s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-602118 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [75eb7e29-e000-4893-a095-c2e1a4c10117] Pending
helpers_test.go:344: "busybox" [75eb7e29-e000-4893-a095-c2e1a4c10117] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E0603 11:59:24.404721   15028 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/custom-flannel-034991/client.crt: no such file or directory
E0603 11:59:24.409952   15028 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/custom-flannel-034991/client.crt: no such file or directory
E0603 11:59:24.420164   15028 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/custom-flannel-034991/client.crt: no such file or directory
E0603 11:59:24.440423   15028 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/custom-flannel-034991/client.crt: no such file or directory
E0603 11:59:24.480695   15028 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/custom-flannel-034991/client.crt: no such file or directory
E0603 11:59:24.561085   15028 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/custom-flannel-034991/client.crt: no such file or directory
E0603 11:59:24.721780   15028 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/custom-flannel-034991/client.crt: no such file or directory
E0603 11:59:25.042688   15028 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/custom-flannel-034991/client.crt: no such file or directory
E0603 11:59:25.683702   15028 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/custom-flannel-034991/client.crt: no such file or directory
helpers_test.go:344: "busybox" [75eb7e29-e000-4893-a095-c2e1a4c10117] Running
E0603 11:59:26.964630   15028 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/custom-flannel-034991/client.crt: no such file or directory
E0603 11:59:29.525288   15028 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/custom-flannel-034991/client.crt: no such file or directory
E0603 11:59:29.551496   15028 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/auto-034991/client.crt: no such file or directory
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 10.003752589s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-602118 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (10.28s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (11.26s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-196710 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [8689f908-a916-44dd-94b5-69cf74730b86] Pending
helpers_test.go:344: "busybox" [8689f908-a916-44dd-94b5-69cf74730b86] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [8689f908-a916-44dd-94b5-69cf74730b86] Running
E0603 11:59:36.736827   15028 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/kindnet-034991/client.crt: no such file or directory
E0603 11:59:39.297144   15028 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/kindnet-034991/client.crt: no such file or directory
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 11.004024529s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-196710 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (11.26s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-602118 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-602118 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.00s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.92s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-196710 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-196710 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.92s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/SecondStart (681.92s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-725022 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.1
E0603 12:01:52.496619   15028 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/enable-default-cni-034991/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-725022 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.1: (11m21.678702333s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-725022 -n embed-certs-725022
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (681.92s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/SecondStart (614.75s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-602118 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.1
E0603 12:02:08.249506   15028 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/custom-flannel-034991/client.crt: no such file or directory
E0603 12:02:09.369635   15028 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/bridge-034991/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-602118 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.1: (10m14.497030619s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-602118 -n no-preload-602118
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (614.75s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (606.06s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-196710 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.1
E0603 12:02:18.022005   15028 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/kindnet-034991/client.crt: no such file or directory
E0603 12:02:19.610349   15028 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/bridge-034991/client.crt: no such file or directory
E0603 12:02:40.090803   15028 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/bridge-034991/client.crt: no such file or directory
E0603 12:02:53.938453   15028 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/enable-default-cni-034991/client.crt: no such file or directory
E0603 12:02:54.814923   15028 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/calico-034991/client.crt: no such file or directory
E0603 12:02:54.820209   15028 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/calico-034991/client.crt: no such file or directory
E0603 12:02:54.830445   15028 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/calico-034991/client.crt: no such file or directory
E0603 12:02:54.850706   15028 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/calico-034991/client.crt: no such file or directory
E0603 12:02:54.891112   15028 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/calico-034991/client.crt: no such file or directory
E0603 12:02:54.971438   15028 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/calico-034991/client.crt: no such file or directory
E0603 12:02:55.131566   15028 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/calico-034991/client.crt: no such file or directory
E0603 12:02:55.452159   15028 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/calico-034991/client.crt: no such file or directory
E0603 12:02:56.093132   15028 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/calico-034991/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-196710 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.1: (10m5.824637943s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-196710 -n default-k8s-diff-port-196710
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (606.06s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/Stop (2.45s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-905554 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-905554 --alsologtostderr -v=3: (2.447717005s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (2.45s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.17s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-905554 -n old-k8s-version-905554
E0603 12:02:59.934685   15028 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/calico-034991/client.crt: no such file or directory
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-905554 -n old-k8s-version-905554: exit status 7 (61.930989ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-905554 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.17s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/FirstStart (56.83s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-756935 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.1
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-756935 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.1: (56.825151869s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (56.83s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/DeployApp (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.26s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-756935 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-756935 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.255279367s)
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.26s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Stop (7.33s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-756935 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-756935 --alsologtostderr -v=3: (7.334213188s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (7.33s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.17s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-756935 -n newest-cni-756935
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-756935 -n newest-cni-756935: exit status 7 (58.918012ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-756935 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.17s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/SecondStart (34.3s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-756935 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.1
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-756935 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.1: (34.018487521s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-756935 -n newest-cni-756935
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (34.30s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.28s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-756935 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240202-8f1494ea
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.28s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Pause (2.38s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-756935 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-756935 -n newest-cni-756935
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-756935 -n newest-cni-756935: exit status 2 (233.656081ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-756935 -n newest-cni-756935
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-756935 -n newest-cni-756935: exit status 2 (227.794503ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-756935 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-756935 -n newest-cni-756935
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-756935 -n newest-cni-756935
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.38s)

                                                
                                    

Test skip (37/318)

Order skipped test Duration
5 TestDownloadOnly/v1.20.0/cached-images 0
6 TestDownloadOnly/v1.20.0/binaries 0
7 TestDownloadOnly/v1.20.0/kubectl 0
14 TestDownloadOnly/v1.30.1/cached-images 0
15 TestDownloadOnly/v1.30.1/binaries 0
16 TestDownloadOnly/v1.30.1/kubectl 0
20 TestDownloadOnlyKic 0
34 TestAddons/parallel/Olm 0
41 TestAddons/parallel/Volcano 0
48 TestDockerFlags 0
51 TestDockerEnvContainerd 0
53 TestHyperKitDriverInstallOrUpdate 0
54 TestHyperkitDriverSkipUpgrade 0
105 TestFunctional/parallel/DockerEnv 0
106 TestFunctional/parallel/PodmanEnv 0
116 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.01
117 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
118 TestFunctional/parallel/TunnelCmd/serial/WaitService 0.01
119 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
120 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.01
121 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.01
122 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
123 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.01
154 TestGvisorAddon 0
176 TestImageBuild 0
203 TestKicCustomNetwork 0
204 TestKicExistingNetwork 0
205 TestKicCustomSubnet 0
206 TestKicStaticIP 0
238 TestChangeNoneUser 0
241 TestScheduledStopWindows 0
243 TestSkaffold 0
245 TestInsufficientStorage 0
249 TestMissingContainerUpgrade 0
255 TestNetworkPlugins/group/kubenet 2.61
263 TestNetworkPlugins/group/cilium 3.82
272 TestStartStop/group/disable-driver-mounts 0.14
x
+
TestDownloadOnly/v1.20.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.30.1/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.1/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.30.1/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.30.1/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.1/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.30.1/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.30.1/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.1/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.30.1/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnlyKic (0s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

                                                
                                    
x
+
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:500: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
x
+
TestAddons/parallel/Volcano (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Volcano
=== PAUSE TestAddons/parallel/Volcano

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Volcano
addons_test.go:871: skipping: crio not supported
--- SKIP: TestAddons/parallel/Volcano (0.00s)

                                                
                                    
x
+
TestDockerFlags (0s)

                                                
                                                
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
x
+
TestDockerEnvContainerd (0s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio false linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
x
+
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
x
+
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:459: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                    
x
+
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
x
+
TestImageBuild (0s)

                                                
                                                
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
x
+
TestKicCustomNetwork (0s)

                                                
                                                
=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

                                                
                                    
x
+
TestKicExistingNetwork (0s)

                                                
                                                
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

                                                
                                    
x
+
TestKicCustomSubnet (0s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

                                                
                                    
x
+
TestKicStaticIP (0s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

                                                
                                    
x
+
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
x
+
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
x
+
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
x
+
TestInsufficientStorage (0s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

                                                
                                    
x
+
TestMissingContainerUpgrade (0s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

                                                
                                    
x
+
TestNetworkPlugins/group/kubenet (2.61s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as crio container runtimes requires CNI
panic.go:626: 
----------------------- debugLogs start: kubenet-034991 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-034991

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-034991

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-034991

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-034991

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-034991

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-034991

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-034991

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-034991

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-034991

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-034991

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-034991" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-034991"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "kubenet-034991" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-034991"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "kubenet-034991" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-034991"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-034991

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "kubenet-034991" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-034991"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "kubenet-034991" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-034991"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "kubenet-034991" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "kubenet-034991" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "kubenet-034991" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "kubenet-034991" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "kubenet-034991" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "kubenet-034991" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "kubenet-034991" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "kubenet-034991" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "kubenet-034991" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-034991"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "kubenet-034991" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-034991"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "kubenet-034991" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-034991"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "kubenet-034991" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-034991"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "kubenet-034991" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-034991"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-034991" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-034991" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "kubenet-034991" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "kubenet-034991" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-034991"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "kubenet-034991" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-034991"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "kubenet-034991" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-034991"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-034991" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-034991"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-034991" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-034991"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/19008-7755/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 03 Jun 2024 11:47:35 UTC
        provider: minikube.sigs.k8s.io
        version: v1.33.1
      name: cluster_info
    server: https://192.168.39.33:8443
  name: offline-crio-125275
contexts:
- context:
    cluster: offline-crio-125275
    extensions:
    - extension:
        last-update: Mon, 03 Jun 2024 11:47:35 UTC
        provider: minikube.sigs.k8s.io
        version: v1.33.1
      name: context_info
    namespace: default
    user: offline-crio-125275
  name: offline-crio-125275
current-context: ""
kind: Config
preferences: {}
users:
- name: offline-crio-125275
  user:
    client-certificate: /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/offline-crio-125275/client.crt
    client-key: /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/offline-crio-125275/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-034991

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "kubenet-034991" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-034991"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "kubenet-034991" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-034991"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "kubenet-034991" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-034991"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "kubenet-034991" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-034991"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "kubenet-034991" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-034991"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "kubenet-034991" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-034991"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-034991" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-034991"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-034991" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-034991"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "kubenet-034991" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-034991"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "kubenet-034991" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-034991"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "kubenet-034991" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-034991"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-034991" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-034991"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "kubenet-034991" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-034991"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "kubenet-034991" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-034991"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "kubenet-034991" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-034991"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "kubenet-034991" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-034991"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "kubenet-034991" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-034991"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "kubenet-034991" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-034991"

                                                
                                                
----------------------- debugLogs end: kubenet-034991 [took: 2.476066613s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-034991" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-034991
--- SKIP: TestNetworkPlugins/group/kubenet (2.61s)

                                                
                                    
x
+
TestNetworkPlugins/group/cilium (3.82s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:626: 
----------------------- debugLogs start: cilium-034991 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-034991

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-034991

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-034991

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-034991

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-034991

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-034991

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-034991

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-034991

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-034991

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-034991

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-034991" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-034991"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-034991" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-034991"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-034991" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-034991"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-034991

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-034991" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-034991"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-034991" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-034991"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-034991" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-034991" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-034991" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-034991" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-034991" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-034991" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-034991" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-034991" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-034991" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-034991"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-034991" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-034991"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-034991" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-034991"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-034991" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-034991"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-034991" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-034991"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-034991

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-034991

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-034991" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-034991" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-034991

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-034991

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-034991" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-034991" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-034991" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-034991" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-034991" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-034991" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-034991"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-034991" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-034991"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-034991" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-034991"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-034991" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-034991"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-034991" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-034991"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/19008-7755/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 03 Jun 2024 11:47:35 UTC
        provider: minikube.sigs.k8s.io
        version: v1.33.1
      name: cluster_info
    server: https://192.168.39.33:8443
  name: offline-crio-125275
contexts:
- context:
    cluster: offline-crio-125275
    extensions:
    - extension:
        last-update: Mon, 03 Jun 2024 11:47:35 UTC
        provider: minikube.sigs.k8s.io
        version: v1.33.1
      name: context_info
    namespace: default
    user: offline-crio-125275
  name: offline-crio-125275
current-context: ""
kind: Config
preferences: {}
users:
- name: offline-crio-125275
  user:
    client-certificate: /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/offline-crio-125275/client.crt
    client-key: /home/jenkins/minikube-integration/19008-7755/.minikube/profiles/offline-crio-125275/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-034991

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "cilium-034991" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-034991"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "cilium-034991" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-034991"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "cilium-034991" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-034991"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "cilium-034991" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-034991"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "cilium-034991" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-034991"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "cilium-034991" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-034991"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-034991" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-034991"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-034991" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-034991"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "cilium-034991" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-034991"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "cilium-034991" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-034991"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "cilium-034991" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-034991"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-034991" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-034991"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "cilium-034991" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-034991"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "cilium-034991" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-034991"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "cilium-034991" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-034991"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "cilium-034991" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-034991"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "cilium-034991" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-034991"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "cilium-034991" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-034991"

                                                
                                                
----------------------- debugLogs end: cilium-034991 [took: 3.655766476s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-034991" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-034991
--- SKIP: TestNetworkPlugins/group/cilium (3.82s)

                                                
                                    
x
+
TestStartStop/group/disable-driver-mounts (0.14s)

                                                
                                                
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

                                                
                                                

                                                
                                                
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-231568" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-231568
--- SKIP: TestStartStop/group/disable-driver-mounts (0.14s)

                                                
                                    